INDIVIDUALIZING ELEMENTARY GENERAL MUSIC INSTRUCTION: CASE STUDIES OF ASSESSMENT AND DIFFERENTIATION

By

Karen Salvador

A DISSERTATION

Submitted to
Michigan State University
in partial fulfillment of the requirements
for the degree of

DOCTOR OF PHILOSOPHY

Music Education

2011

ABSTRACT

INDIVIDUALIZING ELEMENTARY GENERAL MUSIC INSTRUCTION: CASE STUDIES OF ASSESSMENT AND DIFFERENTIATION

By

Karen Salvador

Elementary general music teachers typically teach hundreds of students every week. Each child has individual learning needs due to a variety of factors, such as prior experiences with music, music aptitude, learning style, and personality. The purpose of this qualitative study was to explore ways that experienced teachers used assessments to differentiate instruction so they could meet the music learning needs of individual students. The guiding questions were as follows: (1) When and how did the participants assess musical skills and behaviors? (2) How did participants score or keep track of what students knew and could do in music? And (3) What was the impact of assessment on differentiation of instruction?

I selected three elementary music teachers who had been teaching for at least eight years and were known to use assessments regularly. I observed the first participant as she taught a kindergarten and a fourth grade class every time they met for seven weeks. With the second participant, I observed a third grade, a fourth grade, and a self-contained class for children with cognitive impairments each time they met for four weeks. I observed the final participant each time she taught one first grade and one third grade for seven weeks. In addition to my field notes of these observations, data collection included interviews, teacher journals, videotape review forms, and verbal protocol analysis (think-alouds). Data were analyzed on an ongoing basis using the constant comparative method of data analysis, guided by my initial research questions and also seeking emergent themes. The results are presented in the form of case studies of each teacher's practices, followed by cross-case analysis.

All participants used a variety of assessment methods, including rating scales, checklists, report cards, observation, and aptitude testing. Two participants included self-assessments, and one compiled all written work into a portfolio for each student. Although each teacher occasionally assessed specifically for report card grades, most assessment was consistent and ongoing throughout the school year, and its primary purpose was to inform instruction. Participants reported that the number of students they taught, lack of time and support, and preparation for performances were major hindrances to assessment, yet each of them continued to integrate assessment consistently. They disagreed about the role of large-group performance (i.e., after-school "programs" or concerts) as an assessment activity. Although some assessments were directly applied to personalize instruction in a linear or spiraling fashion, assessment practices and differentiation of instruction were typically interwoven in a complex relationship that varied among participants. Group work—including praxial group work, creative group work, and centers-based instruction—was one way that teachers individualized instruction and also assessed the music learning of individual students.
Participants utilized a variety of presentation styles and offered a range of musical activities in order to personalize whole-group instruction, and they provided opportunities for individual responses to open-ended, high-challenge, and self-challenge activities in whole-group contexts. Furthermore, each participant was expected to differentiate music instruction for students with a variety of special needs. This study concludes with a discussion of the implications of these results for practice and suggestions for future research.

DEDICATION

This document is dedicated to my family. To Ele and Zoe: I hope someday each of you will find a task as engrossing and fulfilling as I found this one and that you enjoy the journey on the way there as much as I did. The two of you certainly enriched this project. I love each of you best. And to Jim: Thanks for all the love, humor, work, help, support, guidance, distraction, empathy… I'm glad you did this first… I think you were more understanding than I was. Thanks for all you do for me, our relationship, and our family. You are my favorite.

ACKNOWLEDGEMENTS

To Betty-Anne, Bridget, Caroyln, Clint, Gina, Julie, Julie (yes, two of them), Stephen, Nancy, Nate, Tami, and all the other DMA choral folks and PhD music education students in my graduate cohort: I have been profoundly influenced and inspired by each of you. Thank you for the stimulating conversations, laughter, and support. Thanks also to Peter, Jason, and Holly, who let me use their house as an office so I could sort through data and write in solitude, and to Aimee, Julie, Heather, and others who provided peer feedback in writing or discussion. Thanks to Carol, the staff at Eastminster Child Development Center, my mom, and my mother-in-law, who helped with child care during this process. My children are happy and healthy, and I could relax and concentrate because I knew that they were being cared for by such wonderful people. I am grateful for the support of my parents, Paula and Sam Hudnutt, Ken and Suzi Huber. They have always supported my dreams.

Thanks to my committee: To Dr. Sandra Snow, who picked me out of a choir, believed I could do this, and started me down this path. To Dr. Judy Palac, who has been a consistent model of supportive mentoring and perseverance under pressure. To Dr. John Kratus, whose insights are invaluable to my development as a music teacher educator, and who is a master of seeing the "big picture." To Dr. Mitchell Robinson, for encouraging me to see myself as a leader in music education and helping me to think critically about how I choose to create myself in that role. And to my chair, Dr. Cynthia Taggart, who balanced allowing me the freedom to be self-directed in my scholarship while offering critical feedback and prodding me to achieve. Thank you for your warmth, understanding, support, and the occasional well-deserved kick in the pants.

Finally, thank you to my participants, without whom this project would not have been possible. Thank you for your time, your openness, and the risks you took by allowing me to observe and analyze your teaching. I hope what I have written honors you--you are all incredible teachers and people.
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
Introduction
Chapter 1: Review of Literature
    Assessment and measurement in music education
        A brief history
        Criticisms of assessment in music education
        Optimal role of assessment in elementary general music
            Purposes and types of assessment
            Criticisms of testing
            Summative and formative assessments
            Assessment and individualization of instruction
            Individual response in music instruction
            Differentiation of instruction
    Reported uses of assessment in elementary general music
    Challenges to assessment in elementary general music
        Philosophical barriers to assessment
        Institutional barriers to assessment
    Proposed role of assessment in elementary general music education
    Need for this study
    Purpose of this study
    Delimitations
    Definitions of terms
Chapter 2: Review of Related Research
    Assessment and differentiation of instruction in elementary education
    Implicit applications of assessment to learning and instruction in general music
        Summary
    Assessment applied to differentiation of instruction in the elementary music classroom
    Summary
Chapter 3: Methodology
    Researcher lens
    Design
    Participants
        Danielle Wheeler
        Carrie Davis
        Hailey Stevens
    Data collection
        Observation
        Videotaping
        Verbal Protocol Analysis
        Journals
        Interviews
    Trustworthiness/Credibility
    Limitations
    Analysis
Chapter 4: Danielle Wheeler: Curiosity and Curriculum
    When and how did Ms. Wheeler assess
        Types of assessment
            Portfolios
            Self-assessment
            Report cards
            Formative assessments
            Other assessments
            Aptitude testing
            Performances
        When music learning was assessed
    Scoring assessments and tracking results
        Checklists and rating scales
        Observational assessments
        Written tests
        Methods for eliciting response
        Challenges to scoring and tracking results
    Differentiation and assessment
        Differentiation in kindergarten
        Differentiation in fourth grade
        Differentiation based on the assessments of others
        Summary
    Emergent themes
        Inquisitive disposition
        Linkage of curriculum to assessment
        Teacher behaviors conducive to differentiation
    Chapter Summary
Chapter 5: Carrie Davis: Chaos and Creativity
    Self-Reports of Assessment
        Aptitude testing
        Report cards
        Observational assessments
        Other formal assessments
        Importance of individual responses
        Challenges to assessments
        Summary of self-reported assessments
    Assessment and differentiation of instruction in small-group composition
        Flexible grouping
        Student-centered learning
        Peer coaching
        Informal, emergent assessment methods
        Summative assessments
        Summary
    Differentiation of music instruction for students with cognitive impairments
        Early Childhood approach
        Paraprofessionals
        Social mainstreaming vs. inclusion
        Summary
    Constructivism and differentiation
        Teacher as facilitator
        Differentiation inherent in Ms. Davis's practice of constructivism
        Collaborative, cooperative learning atmosphere
        Summary
    Chapter Summary
Chapter 6: Hailey Stevens: Assessment and Differentiation Intertwined
    When and how was music learning assessed
        Report cards
        Aptitude testing
        Written assessments
        Learning Sequence Activities
        Embedded assessments
        Summary of when and how music learning was assessed
    Scoring and Tracking the Results of Assessments
        Scoring Learning Sequence Activities
        Embedded assessments
        Necessity of individual response
        Challenges to assessment
        Summary of scoring and tracking the results of assessments
    Impact of assessment on differentiation of instruction
        Differentiation inextricably intertwined with assessment practices
            Differentiation as a natural consequent of assessment
            Assessment as a form of differentiation
        Separating musical abilities from academic or behavioral abilities
        Data-driven student-centered learning
        Summary of the impact of assessment on differentiation of instruction
    Emergent Themes
        Environment conducive to assessment and differentiation
            Purpose of music class
            Normalizing musicking
            Structuring activities with multiple response levels
            Summary
        Overarching impact of teacher beliefs
    Chapter summary
Chapter 7: Cross-case Analysis
    When and how did participants assess music learning
        When participants assessed
        How did participants assess music learning
    How did participants score and track students' music learning
    What was the impact of assessment on differentiation of instruction
        Tactics for differentiation of whole-group music instruction
        Group work strategies for differentiation in music class
            Use of centers
            Praxial group work
            Creative group work
            Analysis of grouping strategies
        Approaches to differentiation for students with special needs
            Differentiation of instruction for mainstreamed students
            Strategies for teaching music to self-contained classes
        Summary of the impact of assessment on differentiated instruction
    Emergent themes
        Factors facilitating assessment and differentiation
            Organizational factors
            Personal factors
        The influence of instructional philosophy on assessment and differentiation
            Continuum between direct instruction and teacher facilitation
            Effect of directness of instruction on assessment and differentiation
    Summary of Cross-Case Analysis
Chapter 8: Conclusions and Implications
    Implications for practice
        Implications for the practice of assessment
            Aptitude testing
            Role of performances in assessment
            Logistical considerations
            Summary
        Implications for differentiated instruction
            Whole-group differentiation
            Groupings-based differentiation
            Differentiation for students with special needs
        Implications at the secondary level
        Summary of implications
    Suggestions for future research
        Assessment practices
        Performances
        Differentiation practices
        Grouping practices
        Group work
        Learning Sequence Activities
        Students with special needs
        Philosophy/teacher beliefs
        Applications to other music learning settings
    Conclusion
APPENDIX A: Videotape Analysis Summary Form
APPENDIX B: Initial Interview
APPENDIX C: Exit Interview
REFERENCES

LIST OF TABLES

Table 1.1 Think-tac-toe, adapted from Roberts and Inman (2007)
Table 7.1 Summary of Findings, Danielle Wheeler
Table 7.2 Summary of Findings, Carrie Davis
Table 7.3 Summary of Findings, Hailey Stevens

LIST OF FIGURES

Figure 1.1 Venn Diagram, adapted from Roberts and Inman (2007)
Figure 5.1 Tom Izzo Jingle
Figure 6.1 Common "Improvised" Response
Figure 6.2 Easier rhythm
Figure 6.3 More difficult rhythm
Figure 6.4 "Safe" answer
Figure 6.5 Megan's response
Figure 6.6 Jada's response
Figure 8.1 Metaphor for a balanced approach to elementary music instruction

Introduction

It is a crisp sunny day in late February, and snow blankets the garden in the school entryway just outside the music room window. First grade students sit in a circle on the gray carpet, legs crossed and hands behind their backs. They are unusually still and quiet, hoping to be chosen to participate in a singing game. Eager eyes watch as the teacher "hides" small stuffed animals in the hands of children who are "ready." The teacher sings: "Who has the penguin?" and the child with the penguin sings an echoed reply, "I have the penguin," in an accurate, slightly husky voice. The teacher sings "Who has the bear?" Another student echoes "I have the bear" in a mostly speaking voice. The teacher sings "Who has the snowman?" A boy sings "I have the snowman" in a sweet, crystal-clear head voice. He stands, and the three children with stuffed animals run them to the teacher as she sings "Hide them somewhere." The class echoes "Hide them somewhere" and then sings "bum-bum-bum" on the resting tone as the teacher quickly redistributes the animals for another round of turns.
I asked the teacher how she chose which student to give each animal, as the sung phrases seemed to be at different levels of difficulty:

I was originally planning on letting the students choose the next students [to give the stuffed animals], but based on the wide range of singing abilities in this class, I decided to choose which student would sing which echo. This enabled me to give the students who had showed consistent, accurate use of singing voice the challenging (high) phrase and those that hadn't shown as much consistent, accurate use of singing voice one of the easier (lower) phrases to sing (HS Journal 2/23, p. 2).

Several rounds of the game proceed in much the same fashion. As the children become familiar with the song, they begin to sing the prompts with the teacher. When individual children respond, the teacher neither praises nor corrects, but simply moves to the next phrase of the song in a continuous rhythm. The game continues until the teacher sings "Who has the snowman?" to a little boy. Troy[1] speak-sings the echoed reply. Without any interruption in the rhythm, the teacher repeats the prompt, with a clear implication that she thinks he can do better. Troy smiles and sings the response in an accurate head voice. The teacher smiles and winks at Troy as she continues to sing, "Hide them somewhere," and the game goes on.

[1] All names in this document are pseudonyms.

When I asked the teacher how she decided to ask that particular student for a better response, she replied:

Many of the students who sang inaccurately I did not press because they hadn't shown higher singing achievement in the past. If they haven't shown that they CAN sing in tune at this point, I don't want to risk embarrassing them by pointing it out. They may simply need more opportunities for solo singing and more opportunities to develop their tonal audiation and skill (just as a young child who speaks a word incorrectly needs more time to develop their speech and vocabulary without being told they're wrong). Generally, when they're ready to sing, they'll sing! I chose to press Troy because he was a student who didn't use singing voice for most of kindergarten and then one day showed he could sing IN tune IN head voice. At that point, it appeared that he had been CHOOSING not to sing. If that is the case, I will encourage those students to use head voice and/or give them the "come on, I know you can do it!" look. I've also tried to flatter him a lot in the past (praising his use of head voice when he did choose to do it and/or commenting on how I couldn't "trick him") so that he would WANT to use his singing voice (HS Journal 2/23, pp. 3-4).

Chapter One: Review of Literature

In the preceding anecdote, the teacher differentiated instruction. That is, she personalized her teaching to meet the music learning needs of individual students. In a class of 22 students, she varied the difficulty levels of material the children sang based on their previous achievements. Furthermore, she chose in the moment to "press" one child to achieve at a higher level while allowing other children simply to experiment. How could she differentiate her instruction when she, like many elementary general music teachers, taught about 400 students each week? How did she provide chances for students to demonstrate their music abilities, skills, and knowledge so that she could understand what different students needed to learn?
How did she apply the results of these assessments to help each child progress at his own rate, from his own starting place, toward his musical potential?

Through this study, I sought to answer these questions by examining the impact of assessment practices on differentiation of instruction in three elementary general music classrooms. I wondered how full-time elementary general music teachers in public school settings learned what individual students knew and could do musically. I wanted to know how they kept track of the information they gleaned from these assessments, both in the moment and over time. Most important, I wanted to see how assessment affected instructional practices and facilitated the musical progress of individual students.

In this paper, I investigated assessments within the classroom context. These assessment practices included formal and informal measures that teachers often designed themselves and that were primarily used to learn about students' abilities and thus inform instruction. I wanted to learn about assessment as a natural component of teaching and learning. Although I focused on how such assessment practices could lead to differentiated instruction, a review of the literature and conversations with the participants revealed a lack of agreement regarding the nature, value, and purpose of assessment in music teaching and learning. Therefore, I will begin this paper with a brief description of the history of assessment in music education. I will then discuss the role that assessment could play in elementary general music education and discuss the concept of differentiated instruction. Finally, this introduction will summarize recent studies that describe current assessment practices in elementary general music instruction and the challenges teachers report as they strive to integrate assessments.

Assessment and Measurement in Music Education

A brief history. Researchers and music educators have shown increasing interest in methods of measuring and assessing music aptitude, achievement, preferences, and ability since the turn of the twentieth century. Seashore's Measures of Musical Talent (1919); Gordon's Musical Aptitude Profile (1965) and Iowa Test of Music Literacy (1970); and Colwell's Music Achievement Tests (1968) and Silver Burdett Music Competency Tests (1979) represent only a handful of the published music aptitude and achievement tests that have been developed since that time (Colwell and Barlow, 1986). Methodological articles and conference presentations related to assessment, publication of The Measurement and Evaluation of Musical Experience (Boyle and Radocy, 1987), and the formation of the MENC Tests and Measurements Special Research Interest Group (SRIG; later re-named the Assessment SRIG) signaled a rise in interest in assessment among researchers throughout the 1980s. The Handbook of Research on Music Teaching and Learning (Colwell, 1992) indicated and stimulated widespread interest in assessment by including five chapters describing research regarding the measurement and evaluation of: music ability, creative thinking in music, music curricula/programs, music teachers and teaching, and attitudes and preferences in music education. Music education researchers' increased interest in the measurement and assessment of music learning coincided with a national trend toward standards-based educational reform.
Responding to this trend as well as to calls from music educators, researchers, and policy groups, MENC: the National Association for Music Education (MENC) commissioned and adopted the National Standards for Music Education in 1994. In the aftermath of the adoption of the National Standards, the value of assessment in the music classroom received increased attention, this time from teachers in addition to researchers. Perhaps in an effort to help teachers implement the Standards, MENC published several monographs related to assessment of the National Standards. Performance Standards for Music Grades Pre-k -12: Strategies and Benchmarks for Assessing Progress Toward the National Standards (Music Educator’s National Conference, 1996b) suggested specific assessment methods and described examples of different levels of achievement on specific standards. That same year, MENC also published Aiming for Excellence: The Impact of the Standards Movement on Music Education (Music Educator’s National Conference, 1996a), which included three papers (Boyle, 1996; Colwell, 1996; Shuler, 1996) regarding the effects of the National Standards on assessment practices. Interest in assessment and its relationship to standards-based music education in public schools was not limited to the United States. In 1998, the international journal Research Studies in Music Education published a special issue on assessment. In it, Swanwick discussed the “perils and possibilities of assessment” (1998, p. 1) with regard to the National Curriculum for music in England and Wales. Swanwick articulated concerns regarding assessment that seemed to transcend any differences in curricula or standards between the United States and England: 6 Formal assessment is but a very small part of any classroom or studio transaction, but it is important to get the process as right as we can, otherwise it can badly skew the educational enterprise and divert our focus from the centre to the periphery; from musical to unmusical criteria or toward summative concerns about range or complexity rather than the formative here-and-now of musical quality and integrity. There are many benefits from having a valid assessment model that is true to the rich layers of musical experience and, at the same time, is reasonably reliable. One of these possibilities is a richer way of evaluating teaching and learning. . . (p. 7). In 2001, MENC published Spotlight on Assessment in Music Education (2001), a compilation of articles originally published in magazines of MENC state affiliates (e.g., Connecticut’s CMEA News, Texas’s TMEC Connections, Ohio’s Triad, New Jersey’s Tempo, and Florida Music Director) and in General Music Today. Of the thirty-one articles, most presented specific ideas regarding how music teachers could assess a particular musical skill, such as “Assessing Elementary Improvisation” (Lopez, 2001). About half of the articles were specific to secondary performance ensembles. Several articles argued for more authentic methods of assessing and reporting musical skills, such as using performances in addition to paper and pencil tests (Burbridge, 2001), or reporting how students were progressing toward stated goals rather than simply giving a letter grade for music class as a whole (Bouton, 2001). Other articles described or advocated for the use of alternative methods of assessment, such as portfolios or process-folios (e.g., Kelly, 2001; Nierman, 2001). 
However, none of the articles in this monograph discussed how these assessment methods would impact instruction or how the results of the assessments could be used to help individual students learn. 7 In The New Handbook of Research on Music Teaching and Learning, Colwell contributed a chapter entitled “Assessment’s Potential in Music Education” (Colwell & Richardson, 2002). He asserted: “…assessment is one of the more important issues in education” (Colwell, 2002, p. 194). Colwell described the educational political climate of the time, in which assessments served not only their traditional role in facilitating teaching and learning but were also used to “portray the success of society in enabling all students to attain high standards in multiple areas, with the additional role of determining the value of funding for administration, programs, and facilities” (p. 194). Colwell admonished researchers and teachers to remember that assessments must directly correlate with the curriculum taught, and that assessments must attempt to record progress toward important musical outcomes, not just those that are easy to measure. In 2007 and in 2009, the University of Florida hosted symposia on assessment in music education. The proceedings of the 2007 meeting (Brophy, 2008) contain sections on the relationship of curriculum and assessment, large-scale music assessment (program evaluation), and specific assessment methods for various types of music classes. The symposium featured keynote addresses from Richard Colwell and Paul Lehman. In this venue, Lehman argued for an effective integration of assessment and instruction: Too often, assessment is thought of as a separate process that’s added on at the end of instruction. And it simply can’t be done properly that way. Assessment has to be planned along with instruction from the very beginning because the relationship between the two is intimate and inherent. If you plan the assessment along with the instruction, not only will you have better assessment, but the instruction itself will be better because 8 the very act of planning the assessment will force you to think about how you want the student to behave differently as a result of the instruction (Lehman, 2008, p. 198). The symposium also featured “think-tank” style work sessions that brought together presenters and symposium participants to discuss key questions regarding assessment in music education, including “in what ways can assessment data be most effectively used to improve music teaching and learning?” (Brophy, 2008, p. 45). The 2009 Symposium (Brophy, 2010) began with an address by Richard Colwell (2010) in which he concluded that the current standards for music education have “outlived their usefulness” (p. 15), and that arts policy makers and state departments of education “…seem to be panting and drooling to become involved in music assessment” (p. 15). He argued that music is not amenable to such large-scale testing: The enduring outcomes of music education are not judged by performance errors or by amateur efforts of composing ala rules from freshman theory. Those individuals interested in assessment must start thinking about the true and unique contributions of music to our culture, and though many outcomes may be hard to capture on a test, that does not mean that the teacher ignores teaching for them (p. 16). 
The remainder of the symposium explored a number of assessment strategies and problems related to assessment and music education in settings from early childhood to college-aged students and beyond. Although a number of sessions directly pertained to assessment in elementary general music, they primarily investigated what teachers should assess, assessment design, and assessment implementation rather than the focus of the current paper, which is the relationship of assessment and differentiated instruction.

Criticisms of assessment in music education. So far, this history of assessment practices has mentioned some concerns various researchers voiced, including: that assessments must authentically describe the richness of musical experience and learning (Swanwick, 1998); that assessment cannot be an afterthought tacked on after a lesson is complete (Lehman, 2008); and that constructing large-scale standardized assessments of music is plagued with problems (Colwell, 2010). In this vein, some educational researchers have criticized the standards movement in general and have questioned the assessment practices that accompany a standards-based approach. According to Eisner (2005, p. 5), a standards-based educational approach constitutes a superficially attractive, rational approach to education, in which standards guide curriculum and facilitate assessment. However, standards-based education requires "…youngsters to arrive at the same place at the same time. I would argue that really good schools increase variance in student performance. Really good schools increase that variance and raise the mean" (Eisner, 2005, p. 191). In Eisner's view, standards-based education and standards-based (i.e., large-scale) assessment practices are incompatible with good teaching and optimal learning.

Optimal role of assessment in elementary general music. The following guidelines were suggested by the MENC committee on Standards, chaired by Paul Lehman, in a monograph entitled "Strategies and benchmarks for assessing progress toward the National Standards, Grades pre-K-12." These guidelines frequently have been cited as a foundation upon which an optimal role for assessment in elementary general music could be built:

1. Assessment should be standards-based and should reflect the music skills and knowledge that are most important for students to learn.
2. Assessment should support, enhance, and reinforce learning.
3. Assessment should be reliable.
4. Assessment should be valid.
5. Assessment should be authentic.
6. The process should be open to review by interested parties. (MENC, 1996b, pp. 7-9)

Guidelines 3 to 5—assessment must be reliable, valid, and authentic—raise difficulty for many elementary general music teachers, who may not have encountered these words (often taught in graduate courses) as a part of their undergraduate education (Hepworth-Osiowy, 2004). It is difficult to design and administer reliable, valid, authentic assessment without knowing the meanings of these terms or the ways that one might pursue reliability, validity, or authenticity. However, if assessment is to be a meaningful part of the instructional process, it must possess these characteristics (Brophy, 2000).[2]

[2] Although MENC specifies that assessments must be both reliable and valid, reliability is a necessary precursor to validity. That is, if an assessment tool is not reliable (does not yield the same or similar results in varied trials), it cannot be valid (measure what it purports to measure). Therefore, the remainder of this study will refer to validity rather than both reliability and validity.

MENC also recommended that the assessment process be transparent and open to review by interested parties. Transparency refers to a teacher's willingness to share information about the material being assessed, how it was measured, and how the assessments were scored.
For example, if a parent wanted to know how a teacher determined that their child was a "limited range singer," the teacher could share the rubric used in evaluating the child's performance or even a video or audio recording of the child singing. If teachers design valid, authentic measures that reflect the standards and benchmarks that were taught, transparency becomes less difficult.

A thorough discussion of the first guideline, that assessment should be standards-based, is beyond the scope of this paper. Not all teachers and researchers agree that a standards-based assessment model is necessarily the right road for music education to choose. I have already cited Eisner's trepidation about the short-term appeal of standards-based education, and his suggestion that excellent education should result in increased diversity of outcomes as well as raising the mean performance level of students. Suffice it to say that this paper is focused on how assessments allow individual teachers to personalize teaching in the moment to meet the unique music learning needs of each student.

MENC's second recommendation—assessment should support, enhance, and reinforce learning—defines the optimal role of assessment in the elementary general music classroom as a natural outgrowth of instruction. Brummett and Haywood (1997) proposed conceptualizing teaching, learning, and evaluating as interrelated rather than separate. That is, although all of these activities occur in each music class, on some days the balance shifts more toward one or another. The game of chess may be a useful metaphor: the player (teacher) routinely checks in with each piece (student) to ascertain needs and create strategies for moving forward. Similar to chess pieces, our students come with different needs and abilities, but, when guided by an expert, each contributes his or her own strengths. Using a variety of assessments to check in with each of the "chess pieces" allows each child to move forward in the way that is best. The problem with this analogy is that the chess pieces have no self-determination and individual pieces are sacrificed in order to win the game. Unlike a chess player, a teacher values what individual children bring to the teaching/learning transaction and hopes that each child will learn and grow.

Purposes and types of assessment. Miller, Linn, and Gronlund (2009) described the following purposes and types of assessment when discussing general education classrooms:

In any classroom, there are substantial individual differences in aptitude and achievement. Thus, it is necessary to study the strengths and weaknesses of each student in a class so that instruction can be adapted as much as possible to individual learning needs.
For this purpose (a) aptitude tests provide clues concerning learning ability, (b) reading tests indicate the difficulty of the material the student can read and understand, (c) norm-referenced achievement tests point out general areas of strength and weakness, (d) criterion-referenced achievement tests describe how well specific tasks are being performed, and (e) diagnostic tests aid in detecting and overcoming specific learning errors (Italics added, p. 454). While this source may be considered biased toward a positivist or behaviorist model of assessment, the quote nevertheless obliquely identifies another of the many difficulties surrounding assessment in elementary general music. Teachers in general education settings have access to a variety of standardized assessment tools that have been developed and validated for each of the above specific purposes. Elementary music teachers do not have access to comparable testing resources. Although a few quality tests of elementary students’ music aptitude are available (e.g., Primary Measures of Music Audiation, Gordon, 1986), curricular expectations across various music classrooms and at different grade levels render achievement tests nearly impossible to standardize. The measurement of achievement must be based on what students were actually taught (Ravitch, 2010). Furthermore, the lack of standardized achievement tests may be a blessing in disguise, as it prevents comparison of music achievement among schools, districts and states and the inevitable “teaching to the test” that accompanies such comparison (Eisner, 2005). Miller, Linn and Gronlund (2009) also advocated for the use of more authentic assessment strategies, such as portfolios and performance assessments, which 13 music teachers could certainly design. Ongoing use of a variety of assessments, including aptitude tests and authentic measurements of music achievement, could facilitate teaching and learning in a way that increases variance in student performance levels and also raises the mean level of achievement (Eisner, 2005). Criticisms of testing. Many teachers, parents and other stakeholders prefer that music educators refrain from adding more testing to the educational experiences of children (Shih, 1997). They express concerns that students are tested too often and for the wrong reasons. Eisner, an outspoken proponent of this viewpoint, stated: Most efforts at school reform operate on the assumption that the important outcomes of schooling, indeed the primary indices of academic success, are high levels of academic achievement as measured by standardized achievement tests. But what do scores on academic achievement tests predict? They predict scores on other academic achievement tests. But schools, I would argue, do not exist for the sake of high levels of performance in the context of schools, but in the contexts of life outside of the school. The significant dependent variables in education are located in the kinds of interests, voluntary activities, levels of thinking and problem solving, that students engage in when they are not in school. In other words, the real test of successful schooling is not what students do in school, but what they do outside of it (2005, p. 147). According to Eisner, the optimal result of a unit of study would be that students would be able to ask questions and think critically about the subject at hand. 
To extrapolate, the real measure of the success of a music program would be evident in students' musicking, in the questions they posed (musically and verbally), and the degree to which students sought out musical opportunities outside of school and/or applied what they learned in school music to their musicking outside of school. Eisner indicated that a culture of standards-based assessment enslaves teachers merely to enact the will of government and requires students to memorize decontextualized information in order to perform well on a test that has little meaning to the child as an individual (Robinson, 2002). However, Eisner does not argue that individual teachers should not find ways to track the progress of students so that learning can be individualized and optimized. In fact, Eisner argued persuasively for a model of "personalized teaching" (p. 4) in which heterogeneity and diversity of outcome are valued.

At the time of this study, few topics in education are as inflammatory as "high-stakes testing," which is currently used to make decisions regarding school funding, staffing, and even teacher pay (Ravitch, 2010). Assessments also function as a determinant in such "high stakes" decisions as whether a student passes a grade level, graduates from high school, is certified as a nurse, or is granted a variety of other credentials. For the purposes of this paper, I propose that "testing" and "assessment" may serve separate functions. Testing seems intended to track group progress on specific curricular goals, to allow comparisons between classrooms, across demographic groups, and among regions. This testing is imposed from outside individual classrooms, and may or may not accurately reflect an individual student's progress on the material he was taught (Ravitch, 2010). The current political and social climate sees testing as the way to prove what a child has learned, and as the way to hold schools and teachers accountable for that learning (Eisner, 2005; Ravitch, 2010). Economic factors also intrude: failure to raise test scores results in cuts to funding, school closures, and/or teacher firings. However, there are few, if any, high-stakes assessments in school music programs (Colwell, 2002, p. 195). Although the National Assessment of Educational Progress (NAEP) includes a music test, this measure is administered sporadically (every 8 years or so) and does not disaggregate data at the district, building, classroom, or individual student level. Because music programs do not typically test in the same manner as other subject areas, some policymakers propose that budget-conscious leaders might then view them as expendable, due to a lack of proof that learning is taking place. This could place music programs in jeopardy of policy decisions such as reduced funding, reduced staffing, and program elimination (Philip, 2001). Some teachers and researchers suggest that music educators must incorporate more testing as a way to increase funding and improve policies (Brophy, 2000; Campbell & Scott-Kassner, 2002; Holster, 2005; Niebur, 2001; Peppers, 2010; Talley, 2005). Ravitch (2010) was critical of this stance:

Tests can be designed and used well or badly. The problem [is] the misuse of testing for high-stakes purposes, the belief that tests could identify with certainty which students should be held back, which teachers and principals should be fired or rewarded, and which schools should be closed—and the idea that these changes would inevitably produce better education.
Policy decisions that were momentous for students and educators came down from elected officials who did not understand the limitations of testing (p. 150).

Although the current educational political climate might encourage school music programs to move toward a more standardized and decontextualized testing model that would allow comparisons among schools and districts and communicate testing gains to parents, administrators, policymakers, and the community, high-stakes testing is not the kind of assessment discussed and promoted in the current study. Instead, this study investigates assessment as a necessary and natural component of curriculum and instruction (Lehman, 2008; Ravitch, 2010).

Even outside the controversial arena of high-stakes testing, any assessment endeavor includes the caveat that "the map is not the territory." The results of an assessment are not the same as the thing itself: any test or assessment is only a representation of the trait, ability, aptitude, or cognitive skill being measured. Not only are measurement tools inherently subject to numerous possible errors, but also each measurement is only a single snapshot on one day. Thus, the implementation of assessments and their use in personalizing instruction requires a certain humility, which was described with regard to IQ tests in the Handbook of Psychological Assessment:

Despite the many relevant areas measured by IQ tests… Many persons with quite high IQs achieve little or nothing. Having a high IQ is in no way a guarantee of success but merely means that one important prerequisite has been met… Although 50-75% of the variance of children's academic success is dependant on nonintellectual factors (persistence, personal adjustment, family support), most of a typical assessment is spent evaluating IQ. Some of these nonintellectual areas might be quite difficult to assess, and others might be impossible to account for (Groth-Marnat, 2009, pp. 134-135).

Perhaps we as music educators and music education researchers should observe similar humility with regard to assessments of music aptitude and achievement.

Summative and formative assessments. Assessments can have summative and formative purposes. Summative assessments "…generally [take] place after a period of instruction and [require] making a judgment about the learning that has occurred" (Boston, 2003, p. 1). Assessments given at the end of a unit of study to determine a final level of achievement are summative. Summative assessments have been criticized for being acontextual or atomistic rather than authentic and holistic (Brummett & Haywood, 1997). However, a summative assessment does not need to be a paper and pencil "sit still and write" experience. In the elementary music setting, a summative assessment could be a composition, a performance, or another more holistic measure of musical progress. Many elementary music teachers also believe that summative assessment is and/or should be inextricably linked to grading (Hepworth-Osiowy, 2004; Peppers, 2010; Schuler, 1996), but summative assessments do not yield an evaluative result, such as a percentage or letter grade, unless a teacher assigns one. A summative assessment could help a teacher who is required to grade, but does not need to be used in this fashion. It could give the teacher information about the skills and concepts the students have mastered or that will need to be revisited at a later time. Moreover, a well-designed summative assessment could contribute to learning even as it measures progress.
For example, a capstone composition project that demonstrates a final level of achievement on specific objectives would simultaneously allow summative assessment and continued learning. Formative assessments entail “the diagnostic use of assessments to provide feedback to students and teachers over the course of instruction” (Boston, 2003, p. 1). Formative assessment tracks individual progress toward instructional goals as a natural part of the instructional process, whereas summative assessment represents more of an endpoint to a unit of study. Many teachers seem to equate “formative” with informal assessments that do not result in recorded data, such as observation of the class or “checking the group,” and “summative” with tests that result in record-keeping of individual data (Peppers, 2010; Talley, 2005). However, formative is not necessarily only informal, as formative assessment may also include keeping records of individual student performance. While informal assessments of group performance can help a teacher to target whole-group instruction to the needs of the majority of the class, the current study is focused on assessment practices that result in data about individual students. Data from 18 individual formative assessments help teachers choose pedagogical techniques to suit the needs of individual learners, determine if individual students need challenge or remediation, and decide when to move on to new material. “Formative assessment does not occur unless some learning action follows the testing [or data collection]. . . Assessments are formative only if something is contingent on their outcomes and the information is used to alter what would have happened in the absence of the information” (Colwell, 2008, p. 13). By this definition, there could be some blurring of the line between formative and summative assessment. Even a test given at the end of a unit of study should inform instructional decisions made about individual learners. Assessment and individualization of instruction. In “Meeting the Musical Needs of all Students in Elementary General Music,” Taggart (2005) related music teaching to math instruction. She described a first grade classroom in which one student worked on number identification while another completed two-digit multiplication problems. In the context of a math lesson, assigning students to work at different levels would be seen as differentiation of instruction to meet the needs of individual students—as excellent teaching. However, elementary music teachers have often taught the same material in the same manner to entire classes of students. Taggart asserted: “If music educators do not know the musical aptitude and achievement of each child, they will never be able to facilitate optimal achievement from their students” (2005, p. 128). It stands to reason that discovering this detailed information about each student would require frequent, ongoing opportunities for each child to demonstrate what he knows or can do individually in music. In other subject areas, when students are excelling, they are given additional challenges, including, but not limited to, assisting their peers (Tomlinson, 2000). There is evidence that lower-performing students learn tasks (e.g., singing) more efficiently from their peers than they 19 do from teachers (Gordon, 1986). When students struggle with reading or with math, teachers quickly intervene to determine what is causing the problem. 
This intervention often includes testing to determine aptitude for the subject so that teachers can be as certain as possible that expectations are appropriate. Jordan (1989) discussed how aptitude assessment could be used in addition to measuring singing voice development when teaching singing:

Most who are classified as "non-singers" are high- or average-aptitude students who have severe vocal technique problems. These students, unaided by a knowledgeable teacher of vocal technique, continually compound their problems because they have the aptitude to know that they are not matching pitch. They often resort to improper vocal technique in an attempt to administer music "first aid" to themselves. If the teacher were armed with aptitude scores, he could tailor vocal instruction to focus upon a balance between the technical needs and the musical needs of the student, rather than confounding problems of technique with problems of hearing (audiation) (p. 171).

By combining aptitude and achievement measures, teachers can intervene with the appropriate assistance so that children who have average (or even high) aptitude for the subject but are low performing are identified and helped to rise toward their potential, while students who have low aptitude for the subject are offered additional assistance, strategies, and support. An aptitude test score should never be used to label a child as musical or unmusical, and should never limit a child in any way (Gordon, 2010). When used appropriately, aptitude tests merely provide one lens through which to view achievement and one way to assist teachers who wish to individualize instruction. This differentiated model of instruction, in which individual students are taught according to their aptitude and achievement in each subject, is common in elementary classroom teaching (e.g., Adams & Pierce, 2006; Roberts & Inman, 2007; Tiseo, 2005; Tomlinson, 2000). Frequent and varied opportunities for individual students to demonstrate what they know and can do are an integral component of differentiated instruction.

Individual response in music instruction. Although few research studies have explored the importance of individual response in successful music instruction, several researchers have found that individual or small group instruction and response opportunities resulted in increased achievement (Rutkowski & Snell Miller, 2003; Levinowitz & Scheetz, 1998; Rutkowski, 1996; Rutkowski, 1994). Further, Rutkowski's research indicated that individual and small group instruction were particularly beneficial to those with low or high (as opposed to average) music aptitudes. Although she did not suggest this, it is possible that students who needed remediation and children who needed challenges achieved more in small group and individual settings because the teacher could engage more easily in formative assessment of their individual performances and adjust instruction to meet each child's specific needs. It is also possible that students in small group settings were able to learn from one another in addition to the teacher. Although Shih (1996) reported that most teachers "checked group performance" when assessing singing voice, Hoffer (2008) found that meaningful assessment requires the assessment of individual students. Informal group assessment is not sufficient.
Assessing singing by having students sing in a group is the equivalent of a classroom teacher having groups of students read a passage in unison and using that information to decide that all students in the class read on grade level. Assessment of the group at best gives a vague idea of what some students can do and at worst allows others to fall behind without intervention. Differentiation of instruction. Differentiating instruction is an approach to instruction that music teachers could implement in order to meet the individual needs of each learner. As classrooms grow increasingly diverse and inclusive, teachers must adapt their teaching to meet 21 the needs of the students in their classrooms, despite the added diversity (Adams & Pierce, 2006, p. 1). Teachers who engage in differentiated instruction believe that a one-size-fits-all method of teaching is not the best way for most students to learn. The goal of differentiation is to tailor instruction to meet the needs of individual learners, not only in terms of achievement, but also by providing a variety of different venues for learning or practicing a skill. Eisner’s “personalized teaching,” which “increase[s] that variance [in student performance] and raise[s] the mean” (Eisner, 2005; p. 191) is one way to conceptualize differentiation. Tomlinson (2000) observed multiple classrooms to examine the ways that teachers differentiated instruction. She found that differentiated instruction varied in different settings and with different teachers. However, her research indicated that three main threads were consistently present in well-functioning differentiated classrooms: (1) Assessment was ongoing and tightly linked to instruction, (2) Teachers designed “respectful,” diverse activities for all students, and (3) Groupings were flexible (p. 2). The concept of ongoing assessment that reflects the objectives of instruction is self-explanatory. I will elaborate on the other two threads. Tomlinson defined “respectful” activities as follows: Each student’s work should be equally interesting, equally appealing, and equally focused on essential understandings and skills. There should not be a group of students that frequently does ‘dull drill’ and another that generally does ‘fluff.’ Rather, everyone is continually working with tasks that students and teachers perceive to be worthwhile and valuable (2000, p. 2). In this model, high-performing students would not simply teach lower performers. In addition to peer tutoring and group leadership, high achieving students would be challenged with tasks individualized to their aptitude and level of preparation. 22 Tomlinson’s final “thread” of differentiated instruction, flexible grouping, refers to the variety of different ways that students could be grouped as they interacted with one another and with concepts in the classroom. Groups could be homogeneous by ability, mixed-ability, grouped homogeneously or heterogeneously by learning styles or expressive styles, cooperative learning groups, teacher-assigned, student-chosen, or random. “Flexible grouping allows students to see themselves in a variety of contexts and aids the teacher in ‘auditioning’ students in different settings and with different kinds of work” (Tomlinson, 2000, p. 2). Although the books I cite regarding differentiated instruction focused on general classroom instruction (i.e., Adams and Pierce, 2006; Roberts and Inman, 2007; Tomlinson, 2000), many of the strategies they suggested could be implemented in elementary music. 
Roberts and Inman (2007) suggested using Bloom's taxonomy to offer a variety of learning experiences based on the same concept by varying the process (learning action undertaken by children), content (basic or complex), and/or product choices (p. 49). In a music classroom, this could be achieved by using centers. One center could allow children to practice terminology or notation by creating a word wall on a dry erase board or playing music bingo (Knowledge/Comprehension). At another center, students could play material related to the topic at hand on instruments (Application). At another center, children could arrange preselected phrases of music into songs, improvise, or compose songs for one another in a way that demonstrated the concept being taught (Synthesis). Finally, students at a listening (or viewing) center could evaluate (in writing or in discussion) audio or video-recorded examples of music in terms of the topic being studied. Any of these response modes could be at more basic or complex levels depending on the needs of the student. Use of centers and these particular sample activities are some of many possible applications of Bloom's taxonomy in elementary general music, based on the suggestions in Roberts and Inman (2007). Roberts and Inman (2007) also proposed using Venn diagrams to illustrate that certain activities must be undertaken by all students, but that others can be selected or assigned to various students. In general music, a central area in a Venn diagram could indicate that all students would be expected to sing independently for the teacher. Alone or within cooperative work groups, students would design performance details. Overlapping circles on the Venn diagram might suggest various performance possibilities (see Figure 1.1). A variety of ways to work with desired topical material could also be presented to students in a "Think-Tac-Toe," which "provides multiple options in a tic-tac-toe format for student projects, products, or lessons" (p. 89) (see Table 1.1). Figure 1.1 Venn Diagram, adapted from Roberts and Inman (2007). For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation. Table 1.1 Think-tac-toe, adapted from Roberts and Inman (2007). The center square states the topic and directions: Harmonic Structure: I, IV, V. Choose 2 of these. A list of three-chord songs is on the board. The eight surrounding squares offer the following options: (1) Sing chord roots do, fa, and sol to a song of your choice while another student or the teacher sings melody. (2) Play I, IV, and V chords on an instrument with labeled chords (qChord, keyboard in harmony mode) while you and/or a friend sing a song of your choice. (3) Play I, IV, V to accompany a song of your choice on a keyboard (no harmony enabled) or a ukulele while the teacher or a friend sings. (4) Find a song (iPod, YouTube) that uses I, IV, and V for its refrain; write out the lyrics with chord symbols in the correct places (send me a link so I can hear it when I check). (5) Play a I, IV, V crossover bordun while a friend or the teacher sings a song of your choice. (6) Compose a melody that needs I, IV, and V and show in your notation where those chords would change. (7) Sing or play an improvised melody while I play a 12-bar blues progression. (8) Find another way to show me what you know, but check with me first. Adams and Pierce (2006) developed a model of differentiation (Creating an Integrated Response for Challenging Learners Equitably: A Model by Adams and Pierce: CIRCLE MAP), which was intended as appropriate for all grade levels and content areas.
By incorporating a system of tiered lessons that provided a variety of avenues toward understanding a particular concept, Adams and Pierce asserted that this model moved away from simply using high-ability students as "teachers" and lower-ability students as "learners." Instead, they proposed that their model would engage all students "in meaningful work at a level that provides a moderate challenge for them" (p. 5). In this model, flexible grouping that varied from day to day allowed lessons to be tiered by readiness, interest, and/or learning style. While the above literature indicated that a variety of teaching strategies and student groupings could be implemented to differentiate instruction, assessment was a key component in each of the proposed models. Assessment allowed teachers to know each child's current achievement level, each child's aptitude or potential for performance, each child's interests, and even each child's preferred modes of expression (Roberts & Inman, 2007). According to Roberts and Inman (2007), assessment is "the only real communication that lets children know if they are making progress" (p. 131). More important, assessment allows teachers to ask the key questions that lead to differentiation: What does this child already know? What can this child already do? How can I facilitate progress for this individual child? Reported Uses of Assessment in Elementary General Music As in other curricular areas, music educators have noted increased pressure to test students as a measure of teacher accountability and to evaluate program effectiveness (Hepworth-Osiowy, 2004; Colwell, 2002; Shih, 1997). Assessments can and have been used in a number of settings in order to evaluate music teachers (e.g., Robinson, 2005) and programs (e.g., DeNardo, 2001; Duling & Cadegan, 2001; Masear, 1999). Although, as discussed earlier, monitoring teacher accountability and evaluating program effectiveness are controversial uses of assessment, the current study will focus on the role of assessment in student learning. According to Hamann (2001), systematic assessment as a method of improving instruction may be underutilized in the majority of elementary music classrooms. Instead of systematic assessment, Hamann believed that many teachers relied on informal methods, such as observation of group progress, and asserted that formal assessment of individual progress toward specific music learning goals was rare. She stated that, although informal observations may allow a teacher to adjust instruction to address the broad needs of a group, "…it is only through formal assessment techniques that teachers are able to gather and report, detailed, objective information regarding individual musical achievement" (Hamann, 2001, p. 23). Shuler (1996) agreed: good music teachers have always informally monitored student learning, but few music teachers have systematically tracked the music learning of all individuals in their classrooms. However, neither author cited research with evidence of these assertions. Fortunately, several researchers have undertaken surveys that have attempted to describe assessment practices in elementary general music classrooms across the United States and in Canada. Shih (1997) surveyed 136 fifth-grade general music teachers in Texas regarding standards-based teaching and assessment practices and received 59 valid responses. The survey included a list of 82 teaching objectives from the Texas state curriculum.
For each objective, the teachers marked how often they assessed and what method they used to assess. By adding up all the objectives that teachers reported assessing in any way, Shih found that 77.9% of targeted objectives were assessed. More specifically, teachers reported assessing 92.52% of singing objectives, 83.44% of listening objectives, 78.09% of movement objectives, and 65.39% of notation objectives. These percentages were based on any type of assessment: teachers could choose "written tests," "checking individual performance," "checking group performance," or "other ways." "I don't check this objective" was also included as an option (p. 177). Therefore, percentages of assessment did not reflect frequency of assessment, and any type of assessment was counted toward the final percentage. Stated another way, 22.1% of objectives were not checked in any way. By far the most popular way to assess was "checking group performance" (65.17% singing, 41.14% listening, 45.66% movement, and 31.32% notation), followed by "checking individual performance" (26.5% singing, 19.49% listening, 35.51% movement, and 24.91% notation). So, even among teachers who reported assessing in these areas, 64.5% to 80.5% (depending on the area) did not check individual performance. These data are troubling, because without measuring the achievement of individuals, these assessments are rendered useless as a tool for differentiation of instruction. Furthermore, this form of observation may not be sufficiently valid to be meaningful. Hedden and Johnson (2008) reported that reliabilities of teachers' ratings of in-tune and out-of-tune singing based solely on observation ranged from .25 to .84, with a combined average reliability of .63. Although .63 could be viewed as a low but perhaps acceptable reliability, the wide variability of these reliability coefficients indicated that observation alone may be insufficient to judge singing ability accurately. Clearly, teachers must find ways other than checking group performance or observation to assess students' musical performances if the results are to be sufficiently valid for use in guiding instruction. Hepworth-Osiowy (2004) surveyed 190 elementary music teachers in Winnipeg, Canada, regarding assessment in their classrooms. Her 88 respondents (46% return rate) indicated that they used a variety of assessment tools and stated that assessment was most valuable when it informed instruction. Hepworth-Osiowy drew the following three conclusions based on quantitative and qualitative data in her survey (p. 105): (1) Some teachers used ongoing assessment (time spent assessing during each class), but the majority of respondents assessed on a less consistent basis (mostly prior to reporting times). (2) Teachers who did not engage in ongoing assessment reported that they had difficulty obtaining adequate amounts of assessment data, and they felt that assessment was stressful and difficult to schedule. (3) Teachers who used ongoing assessment reported less stress related to assessment and greater success in obtaining and reporting data. The impact of these findings in relation to student learning was not reported. In addition, Hepworth-Osiowy asked teachers to rank different assessment practices by the frequency with which they were used. "Systematic observation and roaming" was by far the most frequently used method of assessment, followed by performances and exhibits, written tests, and checklists and rubrics.
These results and the associated qualitative responses seemed to indicate that many teachers were using systematic observation of entire classes and whole-group performances as the two main assessments of individual music learning. It is doubtful that these methods allowed for accurate assessment of each individual student's skills and abilities. Livingston (2000) surveyed the 414 members of the Organization of American Kodaly Educators Midwestern Division (IL, IN, KT, KN, MI, MN, MO, NE, ND, OH, SD, and WI) regarding assessment and grading practices. One hundred ninety-six surveys were returned for a response rate of 47%. Respondents reported grading 0 to 1200 students; some teachers were not required to grade. The average number of students graded by each teacher was 396. In terms of assessment frequency, 44 teachers (31% of respondents) reported assessing 0-9 times per year, and an additional 44 teachers said they assessed 10-19 times a year. Seven teachers (about 3% of respondents) assessed 20-29 times, 12 (about 6%) reported assessing 30-39 times, and 28 respondents (about 20% of the total) said they assessed "constantly." The survey did not ask what was assessed or if the rates of assessment reflected assessing every student for every assessment reported. Further, Livingston did not inquire about how the assessments were linked to the grading practices described in the study. The survey did ask what kinds of assessment were used, and the most frequent responses were: teacher observation (n=137), live performances (n=118), quizzes/tests (n=100), checklists (n=64), rubrics (n=61), and presentations (n=60). This survey also investigated how elementary music teachers graded learners with special needs and whether they used different assessment tools with these populations. Results indicated that many respondents graded special learners using the same assessments as other students (n=40, 28%) or used the same assessments with modifications, such as additional assistance, Braille, or alternate response styles (n=31, 22%). Seven of the respondents indicated that they graded students with special needs according to their Individual Education Plan (IEP), while others indicated that they graded based on participation/effort/behavior (n=19), observation (n=11), or other social factors (n=4). Eleven teachers left this question blank, seven marked "N/A," and 10 respondents indicated that they were not required to grade students with special needs. One respondent stated, "[I have] too many students with a wide range of needs to even attempt assessing individually" (Livingston, 2000). In 2005, Talley surveyed 200 elementary general music teachers in Michigan. Of the 35 respondents (18% response rate), many did not frequently assess their students, and some did not assess at all. The survey asked what skills were assessed at which grade levels and how they were assessed. Talley's results indicated that elementary music teachers did not use published achievement tests, and few used aptitude tests. Nearly 16% of respondents indicated that they did not formally assess students or did not believe in assessment. Teachers who did assess used self-designed measures including rating scales or rubrics, checklists, written tests, and worksheets. Each of these methods seemed to require individual response, but this was not stated explicitly in the research.
Respondents to Talley’s survey assessed subjects such as beat competency, singing voice, matching pitch, rhythm, recorders, music reading, and instrument identification. However, there was not broad agreement regarding the topics assessed: the highest level of agreement among the respondents on any single area of assessment was 50% for beat competency in kindergarten (p. 49). In addition, due to the low response rate, Talley’s results cannot be interpreted to represent all elementary general music teachers or even those teaching in Michigan. Talley incorporated questions regarding respondents’ reasons for assessing and how they applied the results of assessments. The most frequently cited reasons for assessing included: (1) 31 to allow the teacher to adapt instruction, (2) to assist in assigning student grades, (3) to establish if students understood a concept, and (4) to monitor student progress (p. 60). Respondents also indicated that they “ . . . were motivated to assess their students for accountability purposes. Assessment [also] motivated students and assisted teachers in evaluating their pedagogical techniques” (p. 61). In addition, Talley’s respondents indicated that assessments provided documentation of student achievement in music that could validate the role of music education in the general education curriculum. Peppers (2010) surveyed all the elementary music teachers in Michigan regarding attitudes toward formal assessment. Specifically, she investigated why teachers used formal assessment, what challenges they encountered related to assessment, and what teachers believed would improve their ability to assess their students’ learning. Overall, Peppers’ 100 respondents (43% return rate) indicated that they strongly agreed that assessment was a valuable tool in their classrooms. Respondents’ beliefs varied regarding the purpose of assessment but were similar to results found in other studies. Most teachers reported using assessments to improve instruction, including measuring student progress over time, identifying students’ needs, and modifying curriculum. Respondents reported that assessments were used to communicate music learning to parents and to inform report card grades. However, respondents did not view formal assessment as a way to communicate with or motivate students: “Perhaps… because they believe that it may negatively affect their development or because they do not use formal assessment in their classrooms” (Peppers, 2010, p. 71). Some respondents reported negative attitudes toward formal assessment and seemed to equate assessment with grading (Peppers, 2010, p. 72). Most respondents indicated that assessment should be used to validate music education in the 32 curriculum and that music assessments could communicate music learning to policy makers who controlled resources. Several of Peppers’ findings disagreed with those of other studies. In contrast to Hepworth-Osiowy’s participants (2004), most respondents in Peppers’ study felt that their undergraduate studies adequately prepared them to assess music learning, although they did indicate that their ability could be improved with more study, reading, observation, and inservices. Unlike participants in Niebur (2001), respondents in Peppers’ study believed that music skills and learning could be measured and that formal assessment could be undertaken without dampening musical creativity. 
When analyzed as a group, these five surveys regarding the assessment practices of elementary music teachers had several findings that were salient to the topic of this study. Although some teachers did not assess or reported philosophical opposition to testing in music education, many teachers reported engaging in a variety of assessment practices. Some of this assessment was related to grading, and some was ongoing. Assessment was undertaken for a number of reasons, including improving music instruction. Despite a variety of challenges to assessment, many teachers persisted in attempting to discern the musical skills and abilities of their students. Challenges to Assessment in Elementary General Music When I began teaching elementary general music, the course of study I used had a column for evaluation. Many objectives for each grade level indicated that evaluation should occur through teacher observation of student performance. I began to wonder if I had the ability to validly observe hundreds of students. Remembering many things about many students was possible. However, I could not recall enough about every child to answer questions about their musical progress. Actually, at times I had trouble recalling a grade level or mental picture of a particular student… (Snell Miller, 2001, p. 37). I have already described some of the difficulties related to assessment in elementary music, including many teachers' lack of training in the design and use of assessment materials and the relative lack of testing resources compared to general educators. The above quote described one teacher's recollections of her early experiences with assessment. She tried to use the materials available to her, but found them unhelpful, and was concerned that she did not know every individual student (among hundreds of children) well enough to picture faces, let alone describe musicianship. Researchers have discussed a number of challenges to the implementation of individual assessment practices in elementary music classrooms. Philosophical barriers to assessment. In an editorial article, Shuler (1996) identified two main barriers to assessment among practicing teachers: (1) the misconception that assessments must be designed and/or administered by people with PhDs and/or are only of interest to people with PhDs, and (2) philosophical reservations about assessment, in that many music teachers did not believe in traditional grades and equated assessment with grading. Talley (2005) reported this response from a teacher: "I do not believe in formal assessment for music. The only assessment is whether students try the given task" (p. 61). Shuler suggested that music teachers would benefit from "practical training in assessment as a natural and necessary part of the teaching/learning process" (p. 89). For those teachers who had philosophical reservations, Shuler suggested it might be helpful to differentiate between measurement and evaluation: measurement involves a determination of achievement level on a particular task, while evaluation assigns a grade (Shuler, 1996). Those with philosophical opposition to assessment also may benefit from separating high-stakes testing from curriculum and instruction, which necessarily include components of assessment (Lehman, 2008; Ravitch, 2010).
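Shuler's distinction between measurement and evaluation can be made concrete. The following is a minimal illustrative sketch, entirely my own construction rather than anything proposed by Shuler (1996) or Brophy (2000), of how a teacher's records might keep the two separate: scores on specific tasks are stored as measurements that inform instruction, and a grade is produced only when a measurement is compared against pre-established criteria.

```python
# Illustrative sketch only: measurement (recording achievement on a task)
# kept separate from evaluation (judging that record against criteria).
from dataclasses import dataclass
from typing import Dict

@dataclass
class Measurement:
    student: str   # hypothetical identifier
    task: str      # e.g., "sing the resting tone"
    score: int     # e.g., a 1-5 rating on a teacher-made rating scale

def evaluate(measurement: Measurement, criteria: Dict[int, str]) -> str:
    """Evaluation compares a measurement to pre-established criteria."""
    return criteria[measurement.score]

# A teacher could keep every Measurement to inform instruction, and apply
# evaluate() only when a report-card judgment is actually required.
rubric = {1: "beginning", 2: "developing", 3: "developing", 4: "secure", 5: "secure"}
m = Measurement(student="hypothetical student", task="sing the resting tone", score=4)
print(evaluate(m, rubric))  # -> "secure"
```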
In This, Too, is Music, Upitis (1990) stated that she "never graded children in a summative fashion" because she believed that "marks [grades] almost never have meaning, no matter how 'objective.' At best, they confirm what the student already has judged about his or her performance. At worst, they leave children with the impression that they are dumb or stupid in comparison to their peers" (p. 125). Her use of the word "summative" may be confusing, as it brings to mind summative assessment, which is not necessarily linked with grading. While Upitis viewed grading and formalized summative assessments as interfering with learning, in the next paragraph, she went on to describe an atmosphere of continuous formative assessment, in which she and her students engaged in "constant evaluation, observation, examination, judgment, reflection, change, reevaluation. . ." (p. 125). This evaluation, observation, and examination focused on pieces of music that children or groups of children were in the process of creating (composing or improvising), and also on musical performances by individual children or groups of children. As Upitis described them, these activities are among the types of classroom-based assessments I was curious about when I designed this study. Certainly, the types of assessment she described contribute more to an atmosphere of learning than grading does. Upitis described her ability to avoid "get[ting] caught up in the giving and receiving of grades" as "one of the luxuries often associated with teaching an arts subject" (p. 126). In the current educational climate, this luxury is no longer afforded to many elementary general music teachers. However, even teachers who are required to grade could choose to create a learning atmosphere of "constant evaluation," in which individual musical progress is the focus and grading is secondary. Institutional barriers to assessment. Teachers have reported a number of challenges associated with systematically assessing the musical progress of individual music students. Teaching loads, including overall number of students and large class sizes, were viewed as a major obstacle. Teachers reported a lack of time, both in class to administer assessments and outside of class to maintain records (Brummett & Haywood, 1997; Hepworth-Osiowy, 2004; Peppers, 2010). Administrative, community, and parent expectations that students would perform in front of audiences also complicated routine assessment (Hepworth-Osiowy, 2004). One teacher commented: "Time is so short, curriculum is so big, and performances are always around the corner. I'm lucky if I can assess three times in one term" (p. 97). Comments like these indicate that some teachers do not view curriculum and assessment as mutually dependent components of instruction. Instead, when time is a factor, it seems that many teachers opt to deliver as much curriculum as possible and forgo assessment of what has been learned. Music teachers also struggled with discipline problems, accommodating individual education plans, and attendance issues (Hepworth-Osiowy, 2004). Teachers in Niebur's (1997) study viewed population migration as a hindrance to assessment, but Peppers' (2010) participants disagreed, perhaps due to regional differences: Niebur's study took place in Arizona, and Peppers' participants taught in Michigan.
Given the variety of obstacles to assessment that elementary music teachers frequently encounter, the task of integrating assessment-based differentiation of instruction seems daunting. However, if teachers are reporting these obstacles, it is clear that they must be trying to assess in some form. The literature seems to indicate that many teachers are interested in tracking the progress of their students and are willing to be accountable for what they are teaching. However, these teachers face considerable administrative, curricular, and logistical challenges. Proposed role of assessment in elementary music education. Given the number of voices in the debate surrounding assessment, testing, and accountability, it is difficult to arrive at a middle ground, even without considering the special difficulties music teachers face. Perhaps a moderate approach to assessment in elementary music education would combine a variety of measurement tools, including standardized aptitude tests and teacher-designed measures of achievement that include authentic assessments, such as portfolios and performance measures, in order to provide systematic, objective evidence upon which to base instructional decisions. Optimally, numerous snapshots of student functioning on an assortment of tasks, recorded and tracked in a variety of ways, would result in a well-rounded picture of each child's aptitude, achievement, learning style, and response style, which would allow the teacher to differentiate instruction: to meet each student where he or she is and offer scaffolding, remediation, and challenges as needed. Need for this Study Along with widespread disagreements regarding the importance of various curricular goals and the value of different methodological and philosophical approaches (Boston, 1996; Colwell, 2002), considerable debate continues among elementary music teachers regarding the meaning and value of assessment (Hepworth-Osiowy, 2004; Peppers, 2010; Talley, 2005; Upitis, 1990). Some elementary music teachers do not appear to want to know about students' individual differences in music aptitude or ability (Peppers, 2010; Niebur, 2001, p. 148), and researchers have proposed that it may be impossible to truly evaluate music learning (Arostegui, 2003). However, if teachers do not have a clear picture of their students' musical aptitude and achievement levels, they may fail to challenge a child who has high aptitude, which could result in boredom and a lost opportunity for advanced musicianship, or fail to recognize a child who is struggling and adjust instruction accordingly, which could result in a student feeling frustrated or incompetent (Gfeller, 1992; Gordon, 1986, 2010; Taggart, 2005). Students experiencing either of these situations might seem poorly motivated or badly behaved, but the interventions required differ, and only individual assessment would allow a teacher to know the underlying cause behind the behavior. Despite a wealth of research studies, methodological articles, and books pertaining to assessment techniques for the elementary music classroom, little published work has explored what is perhaps the most crucial question regarding assessment and measurement in this setting: How can information gained through assessment be used to differentiate instruction for individual music students in real-life elementary general music teaching?
Few studies describe the progress of individual students in elementary general music classes, which entails both assessment and subsequent use of assessment information to differentiate instruction. Edmund, Birkner, Burcham, and Heffner (2008) identified several research priorities regarding assessment in music education, including a need for qualitative research investigating the success of various assessment tactics. Lehman shared this viewpoint: “We need to create a ‘best practices’ culture in education, which means finding ways to share what we do that works, so we can all benefit from the experiences of our colleagues” (2008, p. 23). The current study sought to present a qualitative picture of promising practices in elementary general music classrooms, specifically pertaining to the application of assessment in order to differentiate instruction. Purpose of this Study The purpose of this qualitative study was to explore how three exemplary teachers used assessment to individualize music instruction. Specifically: (1) When and how did the participants assess musical skills and behaviors? (2) How did participants score or keep track of what students knew and could do in music? And (3) What was the impact of assessment on differentiation of instruction? Delimitations 38 When speaking of measurement and assessment in elementary music, it is nearly impossible to avoid debates about curriculum and methodology. Assessment should be linked closely to curriculum, and, in this study, educational, philosophical, and methodological background certainly influenced participants’ decisions about curriculum and assessment. However, a discussion of the merits of various methodologies or the relative importance of diverse curricular goals was beyond the scope of this study. This study sought to find how the information gleaned from assessment was used to improve music learning and differentiation of instruction in the practices of exemplary teachers, regardless of methodological grounding. Therefore, methodology and curriculum were discussed only as they impacted assessment and instruction in the individual classrooms. Definitions of Terms Definitions of many of these terms vary greatly from author to author. This study adhered to the following definitions: Assessment: the gathering of information about a student’s status relevant to one’s academic and/or musical expectations (Brophy, 2000, p. 455). Authentic Assessment: planned assessment procedures and tasks that simulate the context in which the original learning occurred (Brophy, 2000, p. 456). Differentiation: teaching with student differences in mind. Instruction stems from assessment, meets students where they are, and features a strong link between assessment and instruction, an emphasis on individual growth, high standards and clear expectations for all students, and flexible grouping strategies (Cox, 2008). Evaluation: the comparison of assessment data in relation to a standard or set of pre- 39 established criteria, with the purpose of determining whether that data represents the achievement of the standard or criteria (Brophy, 2000, p. 457). Formative Assessment: assessment used to monitor learning progress during instruction (Miller, Linn, & Gronlund, 2009, p. 38). Grading: any of a variety of systems designed to summarize and communicate a student’s performance on assessments of stated instructional objectives. 
These systems include but are not limited to letter grades, verbal labels such as “proficient or above average,” and whether or not performance meets a proficiency standard—pass/fail (Miller, Linn, & Gronlund, 2009, p. 367-368). Measurement: the use of a systematic methodology to observe musical behaviors in order to represent the magnitude of performance capability, task completion, and/or concept attainment (Brophy, 2000, p. 458). Reliability: the extent to which an assessment task yields consistent results (Brophy, 2000, p. 459). Summative Assessment: assessment used to assess achievement at the end of instruction (Miller, Linn, & Gronlund, 2009, p. 38). Although many teachers associate summative assessment with grading, the two are not necessarily related. Validity: the degree to which a task measures what it is supposed to measure; for general music, this is related primarily to the content of the task and its relationship to the purpose of the task (Brophy, 2000, p. 460). Reliability is a necessary condition for validity. 40 Chapter Two: Review of Related Research Assessment is about more than children and teachers, although it must always be for them. It is about more than sending home papers, giving performances, or generating data for reports, as important as all of these things can be. Assessment is more than a scoreboard that dispassionately displays how closely an educational endeavor approximates compliance with a given set of criteria, regardless of how sophisticated and humane the criteria may be… [Assessment] demands the dignity of submitting only reports that are likely to be useful and then having the information used as wisely as possible. Assessment is not only about asking and answering questions, but is also about the reciprocal responsibility of listening respectfully to the answers. In short, educational assessment of any kind is an inescapably human endeavor, and should, above all, edify (Niebur, 2001, p 158-159). The current study examined how teachers in elementary general music settings applied the results of assessments in order to individualize their instruction and meet the needs of the diverse learners in their classrooms. This model of differentiated instruction is common in elementary classrooms. Therefore, the following review of related research begins with a discussion of selected studies from the elementary classroom research literature in which educational outcomes for students with a variety of learning needs were improved through the use of assessments. I then describe studies from elementary general music classroom settings in which the authors indicated that an assessment could be used to adapt instruction to meet 41 individual needs. However, in these studies, the act of differentiation was not the focus of the study but was a theme in the research, implicit in the method, and/or mentioned in the discussion section. Finally, this review presents the few studies that specifically addressed the use of assessment results to increase student learning or to improve instruction in elementary music settings. Due to the large amount of material available on assessment in music education, this review of literature was delimited in several ways. Although numerous research studies used a variety of criterion measures pertaining to music achievement, aptitude, preferences, and behavior, these studies often were unrelated to classroom instruction. 
Because this study focused on assessment as it relates to improving music teaching and learning and individualizing instruction, this review is limited to studies in which assessment(s) were part of instruction in a classroom setting and/or could be used by practicing teachers. Furthermore, if the report of research did not include any information about how the results of an assessment contributed to or could be applied to individualization of music teaching and learning, the study was excluded. This review was limited to studies with elementary-aged subjects or participants (k-6). Elementary students have different developmental abilities and response styles than older learners, and elementary general music curricula are different from music curricula in more advanced grade levels. As a result, information from studies with older children or adults has limited application to elementary general music settings. In addition, this review is limited to assessments or measurements of musical aptitudes, skills, and abilities, as these are the primary focal points of music learning in elementary general music. Studies that measured children’s music preferences, social behaviors, or attitudes about music and/or music class, which are secondary instructional goals, were not included. 42 Assessment and Differentiation of Instruction in Elementary Education Many elementary educators have implemented models of differentiated instruction for academic subjects such as reading and math (Hallam, Ireson, & Lister, 2003). According to Cox (2008), differentiation of instruction requires a strong link between assessment and instruction, an emphasis on individual growth, high standards, and clear expectations for all students, as well as flexible grouping strategies. Perhaps because of the implicit link between assessment and instruction in differentiated instruction, many studies pertained more directly to other facets of assessment or differentiation, such as the relative merits of homogenous and heterogeneous grouping practices. Or, perhaps the elementary education literature has the same weakness as the music education literature: too much emphasis on how to measure achievement and not enough focus on how then to use that information to improve instruction. I included the following studies because they demonstrated clearly how assessment-based differentiation practices improved learning for students, even if that was not their implicit focus. Tieso (2005) investigated the effects of various instructional practices on the math achievement of 645 elementary students. Over the course of a 3-week math unit, students in the control group were taught in intact classrooms using lessons taken in order from a math textbook. The remaining groups were assigned to one of several treatment conditions, including differentiation of instruction through use of flexible groupings in intact classrooms. In differentiated instruction, learning centers and journal prompts were used to capitalize on students’ prior knowledge and to allow different students to work at a different pace. That is, while all students worked on the same concept, less-ready students completed fewer problems at a more basic level, and higher-performing students worked with more complex problems. Students’ readiness/performance levels had been determined by previous math assessments, 43 including tests and daily work. 
Results indicated that average- and high-performing students performed significantly better in the differentiated classroom than did those in the control group. Results from low-performing students were not significantly different between the two groups but were confounded by higher pretest scores in the control group. Most students learned more when their instruction was differentiated based on their previous achievements than when all students were taught the same material at the same time in the same way. In 1996, Lou et al. undertook a meta-analysis of more than 3,000 quantitative research studies regarding within-class grouping practices. Among their many analyses, they found that, on average, students performed better when taught in small groups than as a whole class. In addition, students' attitudes toward the subject being taught were better in small-group conditions, as were the students' general self-concepts. Findings related to whether groups should be homogeneous or heterogeneous by ability were mixed. After examining the approximately 3,000 studies, the authors concluded: "Overall, it appears that the positive effects of within-class groupings are maximized when the physical placement of students into groups for learning is accompanied by modifications to teaching methods and instructional materials. Merely placing students together is not sufficient for promoting substantive gains in achievement" (p. 448). Smaller groups of students did not necessarily result in improved learning; differentiation of instruction (changing teaching methods and instructional materials based on the needs of children in the group) is what resulted in increased learning. Much of the research literature on differentiation of instruction in elementary education focused on gifted or special education populations and discussed the relative merits of self-contained classrooms for these students. Many of these studies investigated applications of various models of differentiation but did not comment on their impacts on achievement. Further, there was little qualitative research on this topic. One qualitative study described differentiation practices in two self-contained gifted classrooms, but not as they related to assessment (Linn-Cohen & Herzog, 2007). Perhaps assessment to determine individual needs is necessary to an even greater extent in a heterogeneous classroom, such as most elementary general music classes. Based on her study of nine kindergarten to third-grade classrooms in Title I elementary schools in the Fairfax, VA, area, Howard (2007) concluded that utilizing "ongoing assessment and [a] data driven style of teaching" was one method to help at-risk students succeed in heterogeneous classrooms. Howard's study described the classroom environments, teaching strategies, and personal beliefs of nine teachers who taught underperforming children with low school readiness, but who typically did not use special education referrals in order to help the children perform at grade level. Among her findings, Howard reported that these teachers each used a variety of formal assessments (e.g., Developmental Reading Assessment, various math inventories) and informal assessments (e.g., observations, portfolios, and running records) in order to differentiate instruction to meet the needs of individual students.
All of Howard’s participants used flexible grouping strategies, opting for homogenous groupings for cohesive instruction of those with like needs, and heterogenous groupings when children were likely to benefit from peer modeling. Differentiation of instruction in these classrooms was also achieved through a democratic, discovery-learning model that emphasized integration of previous school learning and prior outside knowledge. According to Howard’s research, the attitudes and philosophies teachers needed in order to help all children succeed included: collaborating with others (parents, other teachers, etc.), providing background 45 knowledge (scaffolding), child-centered teaching, high expectations, and perceiving children as having assets in addition to any difficulties they exhibited. Howard’s study was particularly pertinent to the current study, as it was similar in design and sought to provide teachers with models of success that could be appropriated or emulated. Many of the teaching strategies she described could be adapted for the music room, such as use of a variety of formal and informal assessments to diagnose learning needs, flexible groupings to meet those learning needs, and a more democratic, discovery-based learning environment. Implicit Applications of Assessment to Learning and Instruction in General Music The current study investigated assessments that were used by teachers in general music classroom settings in order to improve music instruction and/or music learning. A number of teachers and researchers have investigated a variety of assessment methods in elementary general music settings. However, after extensive review of the literature, I concluded that little of the research regarding assessment in the elementary music classroom was applicable to the current study. Many of the studies were acontextual to instruction, such as when assessments took place outside of the classroom setting or the material assessed was unrelated to the subjects’ music curriculum. Other studies were not relevant to the current study because the authors did not offer information on how the results of the assessment would contribute to better teaching or increased learning. The following section presents studies that were related to the current study because the application of assessment results to instruction was embedded in the method or discussed in the closing material, and was therefore implicitly a part of the study, even if it was not the focus of the research. Perhaps because “singing, alone and with others, a varied repertoire of music” is the first standard in the National Standards for Music Education (1994), assessments of singing voice 46 development and pitch accuracy were investigated frequently. Teachers and students interested in improving singing performance may have a great deal to gain from applications of assessment to the instructional process. Guerrini (2006) measured singing voice achievement and theorized about ways that assessment scores could be used to differentiate instruction. In her study, 174 fourth and fifth grade students sang melodic patterns and two songs into a tape recorder, controlled by randomization for order effect. Three judges assessed the performances using the Singing Voice Development Measure (SVDM, Rutkowski, 1990). Guerrini found that students were able to sing patterns significantly more accurately than familiar or newly learned songs. 
Guerrini advocated using the SVDM to identify children whose pattern-singing scores indicated that, with extra time and attention, they would be ready to sing songs accurately. She concluded: If I merely note the ratings of students singing either a familiar or unfamiliar song, I will find many students scoring a 2 or 3, indicating they have some mobility to their range but are clearly not accurate singers. However, in many cases, if I also look at the pattern score, I may find that the same child has a 4 or even a 5 with that singing task. This indicates to me that the child has the ability to sing accurately and above the lift under certain circumstances, and will most likely transfer that developing skill into singing complete songs accurately (p. 29). Guerrini implied that results from the SVDM could be used to modify instruction for individual students to increase their singing achievement. Rutkowski developed the SVDM to identify the steps children go through on the path to achieving singing accuracy, because she viewed singing to be a developmental skill that required time, context, and maturity (Rutkowski, 1990). This viewpoint has been supported by additional research since 1990 that has indicated that singing accurately may be as much or more a matter of physical skill related to vocal production than a result of tonal aptitude (Hornbach & Taggart, 2005; Pfordresher & Brown, 2007; Phillips & Aitchison, 1997a, 1997b; Levinowitz & Scheetz, 1998). Therefore, a teacher must have reliable evidence of both a child's music aptitude (from a test such as PMMA) and singing voice development (from a measure such as SVDM) in order to intervene correctly to assist that child's vocal progress. Several researchers explored classroom uses of composition as a way to discover and track progress in music conceptualization. Although Strand (2005) did not specifically mention assessment as a keyword in her qualitative study of the relationship between instruction and transfer in 9- to 12-year-old students, assessment was an important component of her work. Strand used a summer enrichment class of eight students from an urban elementary school in Chicago as participants in this action research project. She wanted to know how best to facilitate transfer of knowledge from music instruction to compositional tasks. The abstract stated: "[r]eflective analysis with expert observers at the end of each unit yielded tentative findings and new queries, which in turn allowed for instructional improvement and expand (sic) upon knowledge gained from prior research" (p. 17). That is, the teacher-researcher used students' compositional processes and performances to identify their needs and used that information to find ways to help them become better composers. In her model, she referred to "develop[ing] efficient teaching protocols… coach[ing] students through problems… direct instruction on revision… encourag[ing] peer mentoring…" and "value of public concert" (p. 31). Each of these activities could be considered an assessment component embedded in instruction to allow each young composer to grow. Although the body of her study described the process of using individuals' compositions to differentiate instruction, Strand was studying the transfer of conceptual learning from task to task, so her conclusions were related to concept transfer rather than how her ongoing assessments affected her teaching.
In a similar project, Miller (2004), investigated whether learning through composition could meet the needs of students with widely diverse ability levels in one of her elementary general music classes. Although, again, assessment was not one of the keywords associated with her research, Miller stated: The wonderful thing about using composition is that I am able to assess what they know so much better than I could before. It was easy to fool myself into thinking that the entire class understood a musical concept when, actually, only a few students were doing all the answering. Now, each child is not only personally engaged in the music, but is personally accountable for showing what he knows (p. 64). Miller’s findings reinforce the importance of both individual response and ongoing assessment to differentiation of instruction for students with a variety of needs. Christensen (1992) undertook an action research project involving small-group composition projects. Her dissertation proposed an “artistry-based” model, in which the music classroom became more of a studio or workshop. In this model, composing, notating (using invented notation), performing, and continuously reflecting on a project would increase students’ learning and provide a window into students’ musical metacognition. Each class began with a brief, whole-class discussion of the progress of each group and introduction of the next task in the compositional process. For the remainder of class time, students worked independently in their small groups. Christensen circulated within the classroom and functioned as a facilitator: …guiding students rather than directing them; suggesting they explore their own ideas rather than supplying them with solutions worked out by others; making teaching more of 49 a process of asking questions rather than answering them; giving students the opportunity to take responsibility for their own learning rather than being told what and how to learn; and providing time and place for students to reflect on their learning while it was going on rather than wait until it was completed (Christensen, 1992, pp. 236-237). Christensen kept daily logs and analyzed videotapes of each meeting (twice a week for 40 minutes) of one class of fourth grade students for the course of a single compositional process (seven weeks). All students described compositional and notational activities by responding to open-ended questions both in writing and in class discussions. Students’ written reflections and notation were collected in portfolios that provided the researcher further means of assessing musical metacognition. In addition, 12 students were each interviewed three times. Data on these 12 students included brief descriptions of appearance, personality, and family; a summary of musical experiences outside of school (such as piano lessons); IQ scores; scores on the CTBT (a school achievement test that resulted in a percentile ranking); and scores on the tonal and rhythm subtests of the Primary Measures of Music Audiation (Gordon, 1986). Christensen proposed an assessment protocol that consisted of a number of formal and informal assessment tasks. Students completed two written reflection worksheets, one at the beginning of the project, and one at the end. Students presented their works-in-progress, both during the compositional and notational processes, for class review and discussion, including answering questions from the teacher and students as well as listening to suggestions for improvement. 
As a capstone, not only did the students perform their composition, but they also presented their notation to the class and explained what they did and how they did it. Finally, the students were required to explain this project to their parent(s) and reflect together in writing about the value of the project for the student's music learning. These formal assessments were supplemented by the teacher's informal interactions with students as they worked in their groups: "Questions as simple as: 'What did you do?' 'How did you do it?' and 'What did you find out?' elicited diverse and revealing responses about student understanding of music and their own artistic processes. They were essential to the assessment of student learning" (Christensen, 1992, p. 238). Unfortunately, this project did not address how the information Christensen gained about her students' musical cognition was then used to differentiate instruction. "The composition project in this study was a first-time experience for the fourth-graders. It is not known what would happen during the second, third, or fourth time students were asked to participate in similar composition projects. A longitudinal study… could be expected to show increased sophistication in student learning" (p. 245). According to Christensen, the conceptual framework of this study (Vygotsky's zones of proximal development) assumes that such individualization will occur naturally as a result of students' interactions with music, with the teacher, and with each other (p. 250). While I am intrigued by this notion, Christensen's project did not include information on any further experiences of the participants, and I cannot find evidence in the literature that she continued this promising thread of research. Summary of implicit applications. Studies in the above section have demonstrated that assessments can be used to individualize instruction for elementary students. However, these studies were not designed for this purpose and, therefore, these demonstrations were extrapolated. Furthermore, Guerrini (2006) used tape-recorded examples of individual singers rated by judges, an assessment practice that does not typically occur in elementary general music classrooms. Strand (2005) had a class of only eight students, which raised similar problems with relevance to the current study, in that most classes in elementary schools have many more than eight students. Christensen (1992) differentiated instruction during her study by basing her mini-lessons on the emergent needs of her students. Furthermore, the notion that interacting with other students and the teacher, combined with rigorous reflection on group progress, performance, and presentation, could result naturally in differentiation is tantalizing. However, she did not elaborate on how her numerous assessment components resulted in individualized instruction or how her use of student-directed mini-lessons increased musical achievement. Examination of parts of these studies, including their method and discussion sections, illustrates that assessments can be embedded in instruction in a variety of ways and that these assessments can be applied to the learning of individual students. Assessment Applied to Differentiation of Instruction in the Music Classroom Few studies have examined the role and function of assessment specifically as it contributes to teachers' abilities to adapt instruction to increase individual student learning in the elementary general music classroom.
Froseth (1971) administered the Musical Aptitude Profile (MAP; Gordon, 1965) to 190 fifth- and sixth-grade beginning band students. Subjects were grouped by their aptitude scores into four music ability groups: high, above average, below average, and low. Students from each group were assigned randomly to either a treatment or a control group, with assignment balanced as much as possible for instrument, gender, and age to control for the known effects of maturation, gender, and instrument choice on achievement. All students received curricular instrumental music instruction from one of seven public school music teachers for 30 minutes once a week for one school year with others who played the same or similar instruments. Class size, materials taught, teaching methods, supplementary materials, and other factors were comparable for all the classes. The only difference between treatment and control groups was that teachers were aware of experimental students' MAP scores and subscores and were blind to those scores for the control subjects. "…[T]eaching suggestions, supplementary exercises, flash cards, and work sheets that were provided were used by teachers in both their experimental and control group classes in addition to the traditional published materials" (p. 99). At the end of the year, each student was audio-recorded playing (1) an etude learned with teacher help, (2) an etude learned without teacher help, and (3) a sight-read etude. Each student recorded his or her performance a second time a week later (as a measure of stability of response). Two trained judges rated each performance, blind to both subject identity and treatment condition. Froseth's results indicated that mean scores for students in each of the four aptitude levels consistently favored the experimental group. The largest mean differences were found in the highest and lowest aptitude levels. Test-retest reliabilities of the same student from week to week ranged from .82 to .89, and interjudge reliabilities ranged from .90 to .97. Treatments-by-levels ANOVA revealed no significant interactions, so Froseth concluded that his study did not indicate that teacher awareness of MAP results was more beneficial to students depending upon aptitude level (p. 104). However, there was a significant main effect for teacher awareness of student aptitudes. That is, students whose teachers were aware of their aptitude scores performed significantly better than those whose teachers did not know their scores, regardless of aptitude level. Froseth's findings that instruction should be adapted to meet the needs of students with differing aptitude levels were supported by more recent research. For example, Henry (2002) studied the effects of pattern instruction and music aptitude on the compositional processes and products of fourth-grade children. He suggested "…that aptitude, in conjunction with instruction, does affect what children compose. Therefore, teachers should consider the aptitude levels of students when planning compositional instruction for children" (p. 26). However, he did not propose any method or approach regarding how a teacher should modify instruction based on differing levels of aptitude. Similarly, Gromko and Walters (1998) found that, despite a likeness in overt musical behaviors, children with differences in music aptitude developed differently in terms of music pattern perception.
These studies provided evidence that children with different levels of music aptitude may learn music differently from one another, and that students with all levels of aptitude may benefit from differentiated instruction. Freed-Garrod (1999) took a qualitative, action-research approach to investigating third-grade students' abilities to assess themselves and each other. She was interested in how composition projects would allow students to operate in four "fields of understanding: making, presenting, responding, and evaluating" (p. 51). In this context, she proposed that evaluation was a necessary part of the learning process, because it required students to communicate their perceptions and assign meaning to their musicking (p. 51). In Freed-Garrod's study, small groups of students worked together to create a song, with parameters of their choosing. The timeframe for the study was determined by the 23 students in the class—each of the six groups had as much time to plan and rehearse as they wished, with a final "sharing" for comments by peers before they recorded their final version. Groups' times to completion ranged from eight to twelve 40-minute music classes. The elements of teacher guidance and embedded assessment combined with a final summative assessment are of most interest to the current study. Freed-Garrod stressed that evaluation was "ongoing, integral and concurrent to the rest of the compositional process" (p. 53). Each class session started with a period of whole-group instruction, during which Freed-Garrod taught based on themes that had emerged in the previous day's formative assessment. In this project, "… assessment was ongoing—formative evaluation occurred between [Freed-Garrod] as a teacher and student composers and between peers as listeners and composer/performers, and summative assessment occurred at the end of the unit, focusing on the composition in its final form" (p. 54). So, Freed-Garrod was able to structure her teaching to meet the needs of students based on a compositional process that allowed her to see the students' music cognition in action. Summative data for this study included a written self-evaluation and a rating sheet for the videotaped performances. Freed-Garrod concluded that, through this project, students developed both aesthetic awareness and artistic judgment, along with considerable conceptual knowledge and vocabulary. Among her questions for future research, Freed-Garrod saw the need for studies that investigate students' level of improvement and mastery of skills as it relates to the amount of time and effort required, and she also proposed the need for studies that focus on individual musical growth (p. 59). Brummett (1992) explored how two teachers applied a holistic, process-oriented student evaluation framework in intact music classrooms. Brummett created an interactive evaluation framework, purposefully selected two teachers to study, trained them in the use of the framework, and provided a detailed teacher handbook. During this training phase, Brummett also observed the sixth grade classes in which the framework was to be implemented, conducted interviews, and collected demographic information regarding the community. The study concluded after 4 months of data collection, except for the final questionnaires from the teachers. The results of Brummett's study were written as a narrative that wove together data from all these sources.
She told the story of the teachers, their schools, their classrooms, and how they were able to integrate more authentic and individualized assessment in their day-to-day teaching. She then analyzed the story she had told in light of literature on learning and assessment. Brummett concluded that her evaluation framework allowed students to have musical independence in a cooperative environment and to reflect on their learning, and that the framework was flexible enough for use in the real world of elementary music instruction. However, Brummett's study examined teachers' use of the framework and did not delve into the music learning experiences or achievements of the students (p. 229). That is, while the processfolios contained detailed records of individual student progress based on a variety of measures, Brummett's research report instead described the students' and teachers' perceptions regarding the assessment framework. Therefore, data that may have indicated precisely how the processfolios contributed to individual learning were not included. However, Brummett did mention that students believed that the elements of group work, reflection, and self/peer evaluation contributed to learning in the classroom. She also stated that teachers agreed with her concept of teaching-learning-evaluating as a continuum and embraced the process-oriented framework (p. 248). Niebur (2001) based the book Incorporating Assessment and the National Standards for Music Education into Everyday Teaching on her dissertation from 1997. In it, she provided a narrative (including vignettes and thick description) of the standards-based teaching and assessment of four teachers in Arizona. Rather than taking a quantitative approach, Niebur chose to explore the experiences of her four participants in depth, looking for themes that resonated with her experiences as a music teacher and that might seem true to others practicing in the field (p. 8-9). Niebur presented a holistic picture of these teachers and their teaching, so standards and assessment were presented as they interacted in real teaching rather than discussed in isolation. In a design similar to the current study, Niebur did not attempt to propose optimal definitions or uses of standards or assessments, or to evaluate the relative success of any particular approach to them. She simply described how "reflective teachers" integrated standards and assessment into their teaching as a way to inform other teachers and researchers and allow them to draw their own conclusions regarding the meaning and usefulness of the methods or approaches described (p. 9-10). Niebur's participants were four practicing teachers who had just completed a graduate course on measurement in music education at Arizona State University. As part of the class requirements, each participant implemented a new assessment plan to track students' individual progress toward a musical goal of their choice in a single classroom of students. The professor recommended these participants based on the quality of their assessment assignments and reflective ability. Niebur shadowed each informant for five full school days (rarely consecutive), and also attended selected classes, performances, and special events that related directly to the classes she had observed. During this time period, the study participants also met seven times for group discussion led by the measurement professor. Niebur acted as a participant-observer during these meetings.
Finally, Niebur conducted formal and informal interviews with each participant and invited the participants' feedback in the form of member checks. Following is a summary of Niebur's portrayal of one of the teachers as an example. Niebur described Stephanie Martin in the midst of teaching a recorder unit to two third grade classes. For her graduate school project, Ms. Martin was examining the effect of alternative assessment practices, such as journal writing, on her students' recorder achievement. While both classes learned the same material (the notes B, A, and G) over the course of the 4-week study period, only one of the classes wrote in daily journals and received written feedback from Ms. Martin. After a week of instruction, each student played individually for Ms. Martin so that she could check if each student was blowing correctly, covering the holes sufficiently, and holding the recorder with the left hand on top. She told one student, "I just want to hear what you're going to do for me and if I need to help you some more" (p. 71). Niebur reported that Ms. Martin coached several students to help them achieve a better performance in this brief assessment. Clearly, this assessment allowed Ms. Martin to individualize instruction, not only because she could hear individual progress, but also because she had a few minutes to interact individually with each child. At the end of 4 weeks, when students played their final patterns for a video camera, Niebur wrote: …in this classroom where learning is a living, social experience and where students regularly risk performance, freely discuss their triumphs and mistakes, then immediately incorporate their insights into a new performance, today's stilted silence feels unnatural and unproductive, even unfair. For a few moments, the demands of a test that is specifically designed to generate statistical information is at odds with the ongoing culture of assessment that nourishes the students and the teacher inside this classroom (p. 82-83). Niebur and Ms. Martin were both concerned about the effect that the video camera had on the students' responses, although the recordings were intended to contribute to the validity of the assessment measure. The quantitative study revealed no significant differences in performance ability between children who journaled and those who did not. However, Ms. Martin stated, "the time for journal writing was well-spent, because it reinforced and preserved a written record of what the children learned" (p. 87). In addition, she reported that, based on her observations, the class that kept journals was more likely to think from one class to the next, to listen, and to follow directions. As the observation period had ended, Niebur did not report whether the third grade students continued to work on recorders or moved on to another unit. Therefore, it was difficult to evaluate whether results from the summative video assessment were applied to instruction. However, it was clear that results from ongoing individual and group assessments were routinely applied by Ms. Martin while teaching recorder to her third grade students. Based on data collected from four participant teachers, Niebur drew several conclusions that directly inform the current study.
She stated: "…conditions that are favorable to group music making are not always conducive to individual assessment, so teachers who choose to track the learning of individual students often must adjust their teaching styles to accommodate assessing and recording individual student progress" (p. 145). In Chapter One, I detailed the myriad difficulties teachers have reported regarding assessment of students' progress in elementary general music, and nevertheless proposed that optimal instruction of elementary music would include tracking individual music learning progress. Niebur reported, "[the participants] have taken on the challenge of seeking out, and often inventing, assessment tools with which they can attempt to create and share images of individual students' musical growth" (p. 145). As a result of their course in measurement and their participation in this study, participants reported increased comfort with assessment tools that allowed them to track individual music learning progress without compromising the instruction and musicking the teachers desired in an elementary general music setting (p. 148). Participants voiced concerns that assessment might stifle creativity or result in children with low achievement or aptitude giving up on music. However, they also mentioned benefits of their increased use of assessment, such as increased ability to share information with other teachers, administrators, and parents, increased evidence of accountability for the music curriculum, and advocacy for general music education. Illustrative comments included: "…other teachers think I'm more of a teacher…" (p. 148), "…when I mention to the kids that I'm checking for a certain skill, they sit up taller and try harder… [music class] is not just a place to relax for forty minutes. It's a class. We're going to learn something" (p. 148) and "…I think some of my staff have changed their minds, too. It's not just a planning period anymore" (p. 148). However, according to Niebur's analysis, "…most often, assessment functioned as a means of illuminating for teacher and students the progress that they had worked so hard to achieve" (p. 152). Summary. Research literature in music education has frequently investigated methods intended to assess achievement in elementary music education classrooms. However, few studies have examined how these assessments could be used to differentiate instruction for individual learners. Extensive research in non-music elementary classrooms indicated that assessment-based differentiated instruction delivered in flexible groupings led to increased achievement. This review revealed only one quantitative study in music education that investigated assessment-based differentiation of instruction, and this study had promising results (Froseth, 1971). A handful of qualitative studies have approached this topic, but they focused on teacher and student attitudes regarding implementation of the assessment rather than on student achievement (Brummett, 1992; Niebur, 2001). Freed-Garrod (1999) described using formative assessments of small group work in combination with student self-assessments to guide instruction. In light of the research available, the current study seeks to describe how practicing teachers use the results of assessments to differentiate instruction in their elementary general music classrooms.
The current study uses a qualitative design similar to that of Niebur (1997) and Howard (2007), in which examples of assessment and differentiation of instruction will be 60 described in narrative form and analyzed for themes that might be informative to practicing elementary general music teachers. 61 Chapter Three: Methodology Researcher Lens As an undergraduate, I pursued a double major in vocal performance and music therapy. Although I ultimately decided not to complete board certification in music therapy, leading individual and small-group musical interactions in music therapy practica enriched my teaching when I eventually went back to school and became an educator. These early experiences also contributed directly to my interest in assessment. In music therapy, interventions are structured by a treatment plan that defines the therapeutic goals of each individual client, describes how music will be used to help the client reach each objective, and includes an assessment method to determine when the stated goal has been achieved. When I started teaching elementary general music, my early training in planning therapy sessions influenced my teaching, and I wanted to understand the needs of individual students and document their progress. I taught in a typical school music setting: about 550 kindergarten through 4th grade students spread over two buildings, whom I saw twice a week for 30 minutes. In subsequent years, my teaching load became somewhat smaller (about 400 students per week), but in my four years of teaching elementary general music I did not engage in anything approximating systematic assessment of music learning as a natural part of instruction. My curriculum included assessments: I did “voice checks” twice a year, I gave some written tests regarding music theory and composers, and my recorder unit in 4th grade had strong assessment components. I also used my Orff background to help children improvise and compose, which allowed me to see individual response and informally assess musicality. Although I was trying to gather information about student achievement, I did not know much about the individual musical abilities of my students or how well they were learning what I was teaching. I did know quite a 62 bit about many of my students—especially after 4 years in a town with a stable population. However, most of what I knew about students was behavioral information, such as which students were typically easy (or difficult) to direct. I knew the children who were very strong or very weak rhythmically, or very strong or weak singers. However, there were a number of quiet, reserved children about whom I knew nothing, musically or otherwise. I also did not know if my low-performing students were struggling with music concepts, not trying in music for personal reasons, or bored students with high music aptitude who needed more engaging challenges. Most important, I did not understand how to develop assessments that could inform my instruction. The “voice checks” that I did twice a year were the only time that every child I taught had an opportunity to give an individual musical response. Even when I had this opportunity to hear them sing individually, I simply marked U (Unsatisfactory), S (Satisfactory, could also have a – or a +), or O (outstanding). 
I did not have operational definitions or rubrics to define what those marks meant for any grade level, and they did not inform my teaching because they did not give me any information about the singing voice development of the student. All my records told me was whether or not a child could sing “Happy Birthday” on a given day. I struggled with the pressures enumerated by many elementary general music teachers (high number of students, lack of time, big class sizes, performance pressure, etc.). However, as I have pursued graduate work in music education, I have become convinced that we, as music teachers, do our students a disservice when we do not ascertain aptitude and achievement levels and use that information to modify our instruction to meet individual students’ needs. I think we can benefit from using multiple types of assessments to create a holistic portrait of a child’s 63 musicianship. Assessments should not only reveal a child’s current abilities, but should also indicate what needs to happen next to build musical skills and knowledge. Assessments should also provide meaningful feedback to students regarding their progress, and not necessarily in the form of a grade. My interest in assessment has little to do with evaluation or grading. In fact, I regard assigning an “A,” a “U,” a percentage, or a numerical value to a child’s musical achievements only as a peripheral use of assessment information. I am more interested in the role that assessment could play in optimizing music learning for individual students. I am not sure that the dry words “assessment” and “differentiation” really capture the spirit of my interest, which is the dynamic intersection of knowing enough about a student (abilities, personality, achievement) to be able to respond to student needs, both in lesson planning and in the moment. Design The purpose of this study was to explore the role of assessment in individualizing instruction in elementary general music classrooms. In order to illuminate this issue, I observed three exemplary teachers every time they taught two or three selected classes for five to eight weeks. I observed how these teachers differentiated instruction for the variety of students they taught each day. For several reasons, the participant teachers selected which classes I observed. The research questions in this study targeted promising practices, so I wanted to allow the participants to show me what they considered to be their best teaching. Participants knew that I was interested in seeing how they differentiated instruction, so they seemed to choose classes in which students demonstrated a variety of needs and abilities. Also, because this study targeted assessment, and research literature indicated that some teachers assess more or less in different grade levels (Talley, 2005), I wanted each teacher to select grade levels in which I would see assessment activities during the observation period. From a logistics perspective, the 64 participating teachers knew which classes would be missing music (because of holidays, teacher work days, conferences, etc.) or what classes were preparing for performances rather than engaging in typical curricular music learning. Finally, I wanted to honor the contributions of the participants by increasing their comfort level and the ease associated with their participation in whatever ways that I could. 
Therefore, allowing participant teachers who were familiar with what I was studying to select the classes I observed seemed to be the best course of action. This study followed a qualitative case study design. Specifically, it was an instrumental, collective case study: instrumental because the cases were examined to provide insight into the specific issue of how teachers used assessment to individualize instruction (Stake, 2000, p. 437), and collective because I described more than one case (Creswell, 1998, p. 62). The informants were purposefully selected (Miles and Huberman, 1984) because they provided exemplary teacher perspectives concerning an area of music education about which many teachers are inexperienced or uncertain. I gathered multiple types of data, which allowed for triangulation of sources, including observation field notes, teacher journals, video, and interviews. Transcriptions of interviews were returned to the participants for "member checks," in which participants ensured that their thoughts were accurately portrayed by editing or adding to the transcript (Janesick, 2000, p. 393). Once transcribed and member checked, these multiple sources were analyzed for themes, using the constant comparative method of data analysis (Glaser and Strauss, 1967). Participants Similar to Niebur's (2001) study, participants for this study were selected purposefully based on recommendations of the faculty at Michigan State University and the University of Michigan. I contacted faculty members and asked them for names of practicing "exemplary teachers" who (1) were known for their ability to individualize instruction, (2) could be reflective about their teaching practice, and (3) could articulate thoughts and ideas regarding assessment and differentiation. I intentionally chose teachers who had varied philosophies, teaching methodologies, and curricular goals. The criteria for selection included a master's degree in music education (or a related field), at least eight years of full-time elementary general music teaching in a public school, and state certification to teach music. I experienced some difficulty in recruiting participants. I visited several classrooms of teachers who had been recommended, and, based on my observations and discussions with these teachers, I concluded that they did not use ongoing assessment or, if they did, it was not used to differentiate instruction. I excluded one master teacher who wanted to participate, because she taught part-time in a private school for gifted students, so the results from her classroom would have been less transferrable to public school settings. Some teachers whom I contacted were understandably uncomfortable with the idea that their practices would be examined and/or stated that they did not feel that their teaching practices were exemplary with regard to assessment and differentiation. Other teachers were uncomfortable with the time commitment—6 weeks of observations and biweekly journaling, two interviews, and two think-alouds, in addition to member checks of transcripts—which was not something they were willing to take on given their already busy schedules. Participant Hailey Stevens told me after our last observation session that she had initially been reluctant to participate because of the demands on her time, but that she found the experience of reflecting on her teaching in writing and in conversations with me to be rewarding and was glad that she had decided to take part. Danielle Wheeler.
All names of participants and their schools are pseudonyms. I met Danielle Wheeler through the local Orff chapter when I first began to teach. We later reconnected when I observed a student teacher in her classroom. Danielle has taught for 26 years in a variety of placements, including k-8 general music, first grade classroom instruction, k-2 general music, and middle school general music and chorus. At the time of this study, she had taught in her current placement, Developmental Kindergarten through 5th grade general music, for 13 years. Danielle is certified to teach all subjects k-8 and music k-12 in her state. She holds a Master of Arts in Teaching, with an additional 40 credit hours of master's level courses in music, including certifications in Orff (Level I) and Music Learning Theory (Early Childhood and Elementary). She has served as Secretary and Vice President of the local Orff chapter and was its President at the time of the study. In addition, Danielle has presented on several occasions at state-level conferences and workshops, and has published articles in the state music educators' journal. She also served as the music director at a local church and was an instructor at a nearby college in its Master of Arts in Teaching program. At the time of data collection, Ms. Wheeler taught 498 students each week in a medium-sized suburban district (about 5,000 total students) in the Midwest. Elementary students in this district received general music instruction twice each week for 30 minutes. The district was nearly 90% white and was a low-poverty district (fewer than 15% of students qualified for free/reduced lunch). Danielle described the elementary school in which she taught: [It was] a neighborhood school when I first began. But in the past 5 years, many apartments have been built and the school is getting a more transient population, and is transitioning to a lower economic population—more students are beginning to get free or reduced lunch. We have added an ESL teacher in the past 3 years due to a significant rise in students with no English skills or English as a second language. This school also houses the Autistic room. We currently have 13 autistic students who are all mainstreamed (Interview, January 15, 2010). Ms. Wheeler is an advocate for assessment in her district and has encountered resistance from other music teachers who prefer not to integrate assessment into their teaching. Carrie Davis. At the time of data collection, Carrie Davis was just completing her eighth year teaching k-4 general music. She also taught music to the Young Fives, Early Childhood Special Education, and Cognitive Impairment (CI) programs housed in her buildings. Carrie completed the final two credits of her master's degree in music education the summer directly following her participation in this study, and holds a BM in music education. She is certified to teach k-12 Music, 6-8 Spanish, k-5 All subjects, and k-8 self-contained classroom in her state. Ms. Davis is certified in Music Learning Theory (MLT) at both the Early Childhood and Elementary Levels, although she said "I'm not an MLT die-hard...more like a dabbler due to a different philosophical perspective than Gordon" (Email communication, April 14, 2010). Ms. Davis has been trained as an Odyssey of the Mind facilitator and described herself as a frequent "meeting attendee and/or workshop participant" who had not yet taken on any leadership roles due to conflicts with her performance schedule and master's degree program.
At the time of this study, Ms. Davis served as the Youth Personnel Director in charge of the middle school and high school ensembles for her regional flute association. Within the same organization, she was a member of the flute orchestra and played in the chamber ensemble, which was the auditioned group that played the "meatier" music. Ms. Davis played in pit orchestras “here and there” (most recently for a Gilbert and Sullivan operetta), played regularly with a local wind ensemble, and subbed regularly for the volunteer orchestra in her community. 68 In the past, Ms. Davis accompanied children's choirs at various churches on keyboard and performed in handbell ensembles. Until the year of the study, Ms. Davis was the flute instructor for middle school and high school flute lessons for two districts in which lessons were provided by the schools rather than the families, but, due to budget cuts, those programs were cancelled. Until she started a master's degree program during the summer of 2008, she taught at two or three marching band camps each summer. Ms. Davis taught in a large suburban district that served a mostly upper middle-class SES area. The district enrollment was approximately 10,000 students and was growing by approximately 200 students each year. Ms. Davis described a community-wide desire to "push for success" in all areas. Many students were in multiple extra-curricular activities from elementary through high school. According to Ms. Davis, the community (including the upper administration) was supportive of the arts; concerts, plays, and student art shows were often just as well-attended as athletic events. The specific elementary school in which Ms. Davis taught served about 500 students. Each grade level received 35 to 40 minutes of general music instruction twice each week, except kindergarten, which met once a week. The additional classes such as young fives and early childhood special education attended music once each week for 20 to 30 minutes, and the two CI classes each came twice a week for 25 minutes. Ms. Davis described the climate of her building as: …generally one of open acceptance of all diversity—one of the many goals of our staff being to create a climate in which students are first-inclined to think of another student NOT as "special needs," "from Kosovo," or "Muslim," but rather as "my friend George," "my friend Marik," or "my friend Asar." Yet at the same time, there is still a 69 strong sense—even [among] the students—of "keeping up with the Joneses" as far as possessions, name brands, etc. A large percentage of our families consist of two parents who are working-professionals who demonstrate great concern for their children's education (namely, wanting them to get good grades). (email communication, April 14, 2010) Hailey Stevens. Hailey Stevens had also taught for 8 years and was recommended as a rising star by faculty at both her undergraduate and graduate institutions. At the time of the study, she had recently completed her Master’s Degree in Music Education and had presented her teaching practices and research at conferences and workshops in Michigan, Indiana, Illinois, Wisconsin, and South Carolina. She holds several certificates in Music Learning Theory (MLT): Elementary General Music, Levels 1 & 2 and Instrumental Music Level 1. She is certified to teach music k-12 in her state. 
Hailey has served as President, Vice President, and Newsletter Editor for a state music educators' organization and was serving as Education Commission Chair for its national organization at the time of the study. Hailey is one of fewer than 25 people nationwide who are accredited MLT certification faculty. Ms. Stevens taught k-5 general music in a large suburban school district (more than 12,000 students) in the Midwest. Hailey traveled between two elementary buildings and saw about 350 students per week. In her district, kindergarten through fifth grade students attended general music twice per week for 40 minutes. Hailey also taught two self-contained classes of students with Autism Spectrum Disorders three times per week for about 25 minutes per class. Ms. Stevens taught only those fifth grade students who did not participate in instrumental music, and she directed an optional choir of fourth and fifth grade students once per week for 40 minutes. She described her students as: …very diverse, both socioeconomically and racially... My building qualifies as a Title One school but is also situated in a very nice subdivision where the homes are valued at probably $300-500,000 and up. We have many nationalities/races represented in our student population, including many different languages spoken in the homes of our families. (Email communication, February 4, 2010). (The Title One designation indicates that about 40% or more of the families served in a school building qualify as "low-income" as described by the US Census [Elementary and Secondary Education Act, 2002]. To put the home values in perspective, in the fourth quarter of 2009, the median home value in Ms. Stevens's economically diverse, mostly suburban county was about $130,000, and the urban county about 3 miles south of her school had median home values of about $92,000.) All three participants taught in school districts that consistently ranked at the top of their state by many metrics. Each district achieved high ratings for its academic programs on the state report card, with strong test scores and much emphasis on college preparation, including offering numerous Advanced Placement (AP) courses. These districts had high graduation rates and high percentages of graduates who continued on for post-secondary education. By state law, students from other districts could choose to attend these schools if there was room; there was a waiting list for those slots each year in all three of these districts. Data Collection Methods of data collection for this study comprised (1) field notes of observations, (2) videotape observation forms, (3) verbal protocol analysis of selected video excerpts, (4) teacher journals, and (5) interviews. I received human subjects approval from the Michigan State University Institutional Review Board. Although I videotaped each class that I observed, the tapes were of the teachers, and the tapes were not viewed by anyone other than the teacher in the video and me. My only known impact in the classroom was as an observer of typical general music instruction. Observation. Naturalistic observation of elementary music classes was the primary data collection instrument in this study. My experience as a general music teacher and the knowledge of assessment and instruction that I have developed over the course of graduate study provided the lens through which I viewed each class. This is a typical practice in descriptive research (Creswell, 1998).
I attempted to be as unobtrusive as possible in order to have the least impact, but I recognized that my presence in the classroom had the potential to change the classroom climate (Angrosino & Mays de Perez, 2000). Occasionally, students would check for my reaction to some event, or they would talk, sing, dance or play to me or to the camera. In general, students appeared accustomed to various adults coming in and out of the room and seemed to adjust quickly to my presence. In addition to videorecording each class, I took field notes on my computer as I observed; this is how I write most efficiently, and it facilitated data storage. When I designed the study, it was my goal to spend 6 weeks observing every meeting of two classes taught by each participant teacher, for a total of 12 observations of each class. Optimally, one class would be upper elementary and one lower. However, this goal was flexible to accommodate the needs of participant teachers as well as emergent issues. The observation period for Ms. Wheeler was from Jan 15-March 1, which resulted in a total of 11 30-minute observations of both a kindergarten and a fourth grade class. The classes that Ms. Wheeler selected for me to observe happened to fall on a Monday and a Friday, which resulted in several days with no school during the observation period: Martin Luther King, Jr. Day, and President’s Day. The students also had a Monday cancelled due to inclement weather, so the observation period was extended to 7 weeks and still did not reach the goal of 12 observations. Ms. Wheeler and I opted not to do a final make-up because the students were scheduled to miss the next two 72 music days due to a mid-winter break. I observed Ms. Stevens from February 4 to March 25. Ms. Stevens was ill for one observation day, her students had mid-winter break (resulting in one missed observation day), and school was cancelled on one observation day due to inclement weather. We persisted, and, over the course of 7 weeks, we were able to meet our goal of 12 40-minute observations of a first grade and a third grade class. Observation for Ms. Davis was different, because we were nearing the end of the school year and because there was a unique opportunity in her setting to observe fourth grade students with cognitive impairments receive music instruction in both mainstreamed and self-contained settings. Therefore, I observed three classes—one third grade, one fourth grade, and one selfcontained class of upper elementary students with cognitive impairments each time they met from April 19 to May 26. Observations of individual classes were cancelled on several occasions due to field trips or assemblies, Ms. Davis was ill on one observation day, and I attended dress rehearsal for both the third and fourth grade end-of-the-year programs when these fell on observation days. This resulted in ten observations of each class as it met normally, plus observations of entire grade levels at dress rehearsals. Videotaping. Each class I observed was also videotaped with a camera sitting on the desk near where I was taking notes. I would occasionally reposition the camera so that it was capturing the teacher as she moved around the room. The videotapes served two purposes. First, one week after each observation, I watched the videorecording and filled out a video response sheet (adapted from a sheet designed by Dr. Mitchell Robinson, based on examples in Miles and Huberman, 1984, pp. 53-55, see Appendix A). 
The video response sheet and the teacher’s journal for that class (see below) provided triangulation for my field notes. Videotapes were not 73 transcribed for data analysis. Because the video data included singing, moments of classroom noise, unidentifiable voices, group work, and other extraneous or unintelligible information, I found it unlikely that transcription of every video would result in data that were more meaningful than field notes, teacher journals, video response sheets, and verbal protocol analysis. Verbal protocol analysis (think alouds). The videotapes also provided brief excerpts to watch with the participant teacher for verbal protocol analysis (VPA). VPA, in which the participant is invited to pause a video and to describe what they were thinking as they were teaching or to reflect on what they are seeing in the video, is a method borrowed from psychological research traditions (Flinders and Richardson, 2002). This technique, also referred to as a “think-aloud,” can provide valuable information on the practices of teachers “in the moment.” Video excerpts for VPA were selected by the researcher and comprised segments of teaching when the participant seemed to be delivering instruction based on the needs of individual students. Each session of VPA was audio-recorded and transcribed for inclusion in data analysis. Ms. Wheeler and Ms. Stevens both participated in two sessions, one about four weeks into observations, and the other after observations were completed. These sessions lasted 35 to 50 minutes. Due to her shorter observation period, Ms. Davis had a single session of VPA that lasted nearly two hours. Journals. After each meeting of the targeted classes, the teacher completed a journal entry and emailed it to me. Each journal entry was based on the following questions: (1) What opportunities for individual or small group response did you give, and what interested you in the students’ responses? (2) How did you keep track of what individual students know and can do? (3) How and when did you deviate from your plans in order to individualize instruction? 74 (4) How will what you learned today about what your students can do affect your instructional planning? Teachers chose to answer the questions that were most applicable to the class they were describing and could also add comments unrelated to the questions if they wished. In addition, I occasionally asked them to comment on specific behavior I had witnessed or to comment on something we had discussed in the moment. Journal entries had several functions as data in this study. First, they were a source of triangulation—the teachers could present their thoughts about their teaching to enrich what I observed. Second, the journals offered the teachers a chance to reflect on their practice. Finally, the journals informed what video clips were chosen for verbal protocol analysis and suggested questions to be asked during final interviews. Interviews. In addition to soliciting the teachers’ thoughts through verbal protocol analysis and teacher journals, I also interviewed the teachers prior to beginning classroom observations and after data collection was complete. Initial interviews followed a semistructured interview protocol, guided by a list of questions (see Appendix B) and supplemented by additional questions to clarify responses or to investigate interesting statements. The initial interview informed my observations and my interpretations of the teachers’ journaling. 
General interview topics included: (1) the school setting (demographics, other topics of interest) (2) the teacher’s views on assessment and individualization of instruction, (3) what kinds of assessment had already taken place in the classes I was about to observe, and (4) the music learning goals the teacher was working on while I was observing. In this interview, participants also were given the opportunity to ask me any questions that they may have had about this study. The exit interview took place several weeks after the completion of the observations for all three participants. By this time, each teacher had performed a member check on the transcript 75 of their initial interviews and both think-alouds. In addition, I had already performed preliminary analysis for themes within and across cases, based on all the information collected so far (preliminary interviews, my field notes, teacher journals, verbal protocol analysis, and video response sheets). The exit interview questions were derived from this preliminary analysis and were intended to allow the teacher to share her opinions of the credibility of my findings (Appendix C). This was an important part of the research design, because the results should “ring true” to the participants and, if they did not, it was important to know why. I also asked how the act of being studied (including journaling and verbal protocol analysis) affected the teacher’s pedagogy and/or thoughts about assessment. The exit interview allowed me to refine my initial themes in conference with the teacher(s) to whom they applied. Trustworthiness/Credibility This study attempted to reveal experiences of public school elementary music teachers as they used information gleaned from assessments to help individual students progress musically. The study was only successful to the degree that it described these interactions in a manner that seemed meaningful and authentic to the reader. In order to ensure the trustworthiness or credibility of my data, I used several techniques. First, I used multiple sources of data, including observation field notes, teacher journals, video, and interviews. These various forms of information and the viewpoints they represented allowed for triangulation of data. That is, the sources were checked against one another to bolster credibility (Miles and Huberman, 1984). In addition, transcriptions of interviews were returned to the participant for “member checks,” in which a participant could ensure that her thoughts were accurately portrayed by editing or adding to the transcript (Janesick, 2000, p. 393). Each participant also was asked to comment on the credibility of my initial data analysis as a further member check. Finally, preliminary findings 76 and entire case studies were submitted for peer review by faculty members and fellow doctoral students at Michigan State University. The combination of triangulation, member checks, and peer review should enhance the trustworthiness of findings. Limitations This qualitative case study took place in three specific settings taught by three individual teachers. While the settings differed from one another, they each were associated with mediumto-large, suburban school districts in the Midwest. Due to the qualitative nature of this project, I did not attempt to find any kind of “sample” that might be construed to be widely representative of any group. 
Instead, I purposefully chose the participant teachers and settings based on recommendations by leaders in music education in an attempt to study promising practices. Because other elementary music settings and teachers differ from those described in this study, it would be inappropriate to expect that the results of this study could necessarily be generalized to other settings. However, perhaps teachers could adapt or modify ideas illuminated by this research for use in their own classrooms. Information from qualitative studies may be transferrable to similar situations (Creswell, 1998), and the results that resonate with particular teachers may be appropriated. I did not describe or evaluate the curricula being assessed, except as this information directly informed this investigation into assessment and differentiation of instruction. While some of this information may be apparent to the reader as I describe instructional practice and how the results of an assessment were used, a discussion of the relative merits of various curricular goals was beyond the scope of this paper. Participant teachers in this study came from different educational backgrounds and used a variety of methodologies. Similar to curriculum and assessment methods, the methodologies being used in these classrooms may become 77 apparent to the reader, but I did not set out to discuss the relative merits of methodological approaches. Analysis I transcribed all the data gathered in the course of this study and coded it by hand for themes. Although there are transcriptionists for hire and computer programs available to code data, I thought I would learn more about the data by transcribing and manipulating the data by hand. This forced immersion in the data allowed me to see emergent themes and gain a better understanding of how the data interacted. I developed a system of color-coding data with highlighters, used different colored paper for each participant, and built database-style workbooks of material on my computer for each theme and each chapter to assist in the management of the large amount of data. Once transcribed and member checked, the coded data from multiple sources were analyzed for themes using the constant comparative method of data analysis (Glaser & Strauss, 1967). First, I undertook within-case analysis (Creswell, 1998). In this analysis, I looked for themes that recurred within the data for a single case. I identified representative examples of each theme, and I also looked for unusual or exceptional occurrences related to the topic of this study. As Stake described, “Case researchers seek both what is common and what is particular about a case” (2000, p. 438). After I internally analyzed each case, I analyzed the data across cases. This was not a comparative analysis, but instead looked for themes that transcended setting to emerge in all cases, or, conversely, for codes that were specific to a particular setting in order to illuminate the topic of interest: how teachers were using assessment information to individualize music instruction. Finally, I made assertions “… [that made] sense of the data and provide[d] an 78 interpretation of the lessons learned” (Creswell, 1998, p. 249). The final interview was a form of analysis, as I discussed these assertions with the participant teachers to be sure that the assertions seemed trustworthy and to allow informants to comment on my findings. I sent follow-up questions to participants by email as needed until the study was complete. 
Chapter Four: Results Danielle Wheeler: Curiosity and Curriculum "Quiet, quiet, nice and sweet, I'll go in and take my seat." I hear Ms. Wheeler outside the music room, chanting to kindergarten students lined up along the blue lockers in the spotless hallway of Riverview Elementary. A chorus of voices rhythmically echoes back her chant, overdoing the contrast of high and low inflections. Fifth graders silently file out of the music room, headed back to their classroom. As soon as the last fifth grader leaves, kindergarteners enter, tiny and wide-eyed in comparison to the nearly adolescent students who have just left. They continue to echo Ms. Wheeler as she chants them into the room: "Walking, walking to my chair." "Walking, walking to my chair." Somehow, as the fifth graders were lining up, Ms. Wheeler had placed papers on each of the 28 chairs that ring three sides of the carpeted room. She continues her improvised chant: "Putting my paper under there." Little voices dutifully respond, "Putting my paper under there." The children take the paper off their chairs, place it underneath on the floor, and sit down, their feet swinging in the air. They look expectantly toward the front of the classroom, where they can see a white board, easel, piano, and shelves overflowing with tubs of pitched and unpitched percussion instruments, books, scarves, beanbags, ribbons, stretchy bands, and other props. Orff instruments are stored on shelves and on the floor behind the students, and recorders stand ready in boxes by the sink. The music room is packed to the ceiling with the detritus of over 25 years of teaching… masks, puppets, posters, homemade instruments. The instant the last child enters the room, Ms. Wheeler begins her greeting song, and the children join her without being prompted. "Hello everybody, yes indeed… Let's make music, yes indeed, yes indeed my friends." The song has barely ended when Ms. Wheeler sings, using Curwen hand signs, "Sol-mi-sol-sol-mi," and the class echoes her singing and mirrors her hand signs. Ms. Wheeler sings a few more patterns of sol, mi, and la, echoed by the whole group, smaller groups, and also a few individual students. Then, as she starts to sing a new song, she motions for the class to stand and join her for the associated movement activity. Ms. Wheeler does not allow any transitional moments during which students might talk or misbehave, but segues immediately from one activity to the next, mixing singing, chanting, movement, and playing instruments in a total of nine activities over the course of the 30-minute music class. She is strict about off-task behavior and talking out of turn. It is January, and the children seem familiar with the rules, comfortable with the routine, and excited to begin another day of singing and moving in the music room. Perhaps due to her strict management and established pattern of activities, it is not immediately apparent that any child has any behavioral, intellectual, or musical differences from any other child in the room. (DW Field Notes, 1/15, condensed). I was pleased when Danielle Wheeler agreed to participate in this study. As I inquired with university faculty and area teachers about music educators who were interested in assessment and regularly implemented it in their classrooms, her name came up repeatedly. I knew Ms. Wheeler from my time as a beginning teacher nearly 10 years ago, when attending Orff meetings for activity ideas helped me survive my first year of teaching. At that time, Ms.
Wheeler was the secretary of the local Orff chapter. More recently, I had observed and evaluated a student teacher in her classroom. Danielle was excited to participate as well, because she had worked intensively on integrating assessment components into her teaching about 3 years prior to this study but was afraid she had lapsed in the intervening years (DW Initial Interview, p. 13). This chapter presents findings regarding my guiding research questions as well as new themes that emerged out of the data, including Danielle's inquisitive disposition, her linkage of curriculum to assessment, and teacher behaviors conducive to differentiation. When and How Did Ms. Wheeler Assess? Types of assessment. Ms. Wheeler used a variety of assessments to ascertain information about her students' musical achievement and abilities. In the past, she had used aptitude testing to determine different levels of ability. At the time of the study, she used multiple-choice and short-answer written tests to examine students' awareness of concepts about music. Danielle collected written work, including tests as well as notated compositions and self-assessments, in portfolios. She measured music performance skills, such as singing and playing instruments, with criterion-based assessments like checklists, rating scales, and rubrics. Ms. Wheeler also used observational assessments when she circulated around the classroom checking for participation or demonstration of specific skills. Portfolios. In the initial interview, Ms. Wheeler indicated that she kept portfolios of all written work, including compositions, written assessments, student checklists, and self-assessments for students in grades 1 through 5. Kindergarten students did not have portfolios, because they did not do written work. Written assessments in the portfolio included multiple-choice and short-answer tests regarding music theory, composers, genres, and similar topics. In general, written assessments gathered information regarding what students knew about music concepts and related information. One example of a written test administered during the observation period was "Rocket Notes," a note-reading exercise the fourth grade class completed once a week. These one-minute timed tests were modeled after "Rocket Math" tests, which the fourth grade students took daily in their classroom. Notes were presented on a treble staff, and students wrote note names in blanks below.
Wheeler] might write something like ‘Well, I’ve seen you play E, but practice that more’” (DW Initial Interview, p. 5). Ms. Wheeler reported that many children were tough on themselves when they self-assessed. The self-assessments, including Ms. Wheeler’s comments, were sent home with the music report card twice a year. At the end of the middle trimester, Ms. Wheeler did not send home self-assessments or report cards because there were conferences, and she distributed the music curriculum instead. Report cards. Ms. Wheeler was required to grade students in first through fifth grades twice a year on report cards. However, Ms. Wheeler discounted the district’s music report card as a form of assessment of music learning. “Our report card is behavioral… You only get a report card with your name on it if there is a behavioral issue” (DW Initial Interview, p. 3). The 83 report card did not include curricular goals for the trimester. Instead, a blanket statement regarding behavior in each “special” (gym, art, computers, and music) was photocopied for all children whose behavior was acceptable. A child whose behavior needed improvement would receive a personalized card with information regarding the problems teachers had experienced. Danielle reported that, since the district music faculty had completed its curriculum about 3 years prior to this study, she had been arguing for a report card that reflected music learning. “After we wrote our curriculum, I thought it was really important. I felt that we need to now do a good report card and start putting some good assessments in place, because we had our curriculum piece” (DW Initial Interview, p. 3). On several occasions, Ms. Wheeler stressed that her interest in assessment was not shared by all of the music teachers in the district, because she believed that others did not want to discuss evaluation of the new curriculum, primarily out of fear that children would view themselves as unmusical if they did not receive top marks (e.g., DW Initial Interview, p. 3). Ms. Wheeler was in the process of developing her own report card for kindergarten, which did have an assessment function. It was adapted from an MENC publication and used pictographs to provide information on curricular expectations, such as ability to sing in a small group, to distinguish same and different tonal and rhythm patterns, fast and slow tempi, loud and soft dynamics, and to identify and play percussion instruments (DW Journal 1/29, p. 2). As stated previously, Ms. Wheeler did not keep portfolios of kindergarten work. The other grades completed written work that lent itself to inclusion in a portfolio as a way to demonstrate progress. Kindergarten students did not do any written work in music. Most of the kindergarten year in music was viewed as introductory: a time to expose children to the elements of music 84 (beat, rhythm, melody, tonality, harmony, form) and ways to interact with music (singing, playing instruments, and moving) (DW Artifact 1, District K-5 Music Curriculum). Formative assessments. In addition to the assessment measures in the student portfolios, Ms. Wheeler designed and used assessments for her own information. 
These formative assessments measured individual performance skills, such as: singing voice development, sung tonal patterns (echoed and improvised), vocalized rhythm patterns (echoed and improvised), instrument skills such as playing patterns or playing on the beat, and movement skills such as fluid movement or moving to selected features of the music, including the beat. According to Ms. Wheeler, these assessments typically took the form of checking yes or no on the class list if a child demonstrated a particular skill, although sometimes she simply checked who was or was not participating (e.g., DW Think Aloud 2/15, p. 4). Sometimes she used rating scales on the class list as well, recording information such as T for talking, S for singing, and S+ for singing on pitch. Ms. Wheeler also used rubrics to evaluate more complex tasks, such as compositions or playing songs on the recorder. However, she preferred checklists to rubrics, because she was concerned about the reliability of rubrics. She related the story of a professional development day when all the teachers in her school “…[got] a paper, read it together, and then we all ha[d] to grade it according to [a] rubric, and [despite all having the same paper and the same rubric] we still d[id]n’t agree [on the score]!” (DW Initial Interview, p. 5). Other assessments. A few more complex assessments also took place during the observation period. The fourth grade students wrote a song for their recorders, which they handed in along with a checklist of the elements of the composition. Kindergarten students had a centers day that included individual assessments of singing voice development and of ability to play a bordun and glissando on Orff instruments. Fourth graders played songs of their choice in 85 duets and trios for Ms. Wheeler, the student teacher, and a visiting teacher. Those who played acceptably well (pass/fail, with verbal feedback to encourage improvements) got to sign their names on a chart in the hall to indicate they had achieved a certain level of challenge. Final tests for the recorder unit were video-recorded in the hall, one child at a time, so that Ms. Wheeler could grade them using a checklist at home. Of all these assessment activities, only the final recorder-playing test was graded. Aptitude testing. In the past, Ms. Wheeler had administered the Primary Measures of Music Audiation (Gordon, 1986), a test of developmental music aptitude. She stated that it yielded useful information, helping her to identify those children who were high aptitude but low achieving so that she could push those students to reach their potential. However, Ms. Wheeler stated that administering and scoring the test to 90 students in one grade level was too time consuming to be justified by the one or two underperforming students she felt she might discover. She has offered to allow her student teachers to administer it for the experience and the data, but none of them have taken her up on the offer (DW Initial Interview, p. 14). Performances. Ms. Wheeler considered group musical performances for an audience to be a form of assessment--a chance to show a completed product (DW Initial Interview, p. 6-7). However, Ms. Wheeler has moved away from formal performances for her younger grades, instead offering informances--chances for parents of children in grades 1 and 2 to come see a music class. Despite some misgivings, Ms. 
Wheeler continued to prepare her kindergarten students for a performance as part of a “family day” celebration that was a longstanding school tradition. Grades 3 and 4 staged “performance level” (DW Initial Interview, p. 6) programs with singing, Orff instruments, and movement, and fifth graders produced a musical (this year, it was 86 Oliver!). Ms. Wheeler expressed concerns about the rehearsal time required to achieve “performance level:” I’d rather not do the programs, because it is taking a break in the middle of what I’m trying to teach, basically… It wasn’t just teaching the song for the song’s sake, which I did within the curriculum and [while still] teaching [music] skills, but we brought it to a performance level and then performed it… so that spiraling [of curriculum] can’t continue, because you have to take that one part to a certain level… [now the students are] lacking some skills, so I’m having to go back (DW Initial Interview, p. 6-7). Although Ms. Wheeler considered performances to be a form of assessment, they did not result in records of individual musical skills or abilities, except perhaps the video-recording of solo singing or instrument playing, which was not collected for assessment purposes or evaluated in any way. When music learning was assessed. Most assessments were embedded as a part of normal music instruction. During an activity, Ms. Wheeler would build in an opportunity for students to demonstrate some musical skill and record their participation or a score to rate their achievement. For example, one day in recorders, students composed eight-beat B sections to a song they were working on playing. Then, the whole class played the A sections, and individual students took turns performing their B sections (DW Journal 2/1, p.1). Ms. Wheeler marked on a class list which students chose to play their B sections for the class, but she did not evaluate playing ability or the student’s composition itself. Another example was a game played a few times in kindergarten during which the children were “messengers” who delivered different colored hearts (letters) to each other as part of a song (e.g., DW Field Notes 2/5, p. 3). Then, Ms. Wheeler would sing, “Who has the purple heart?” and the child (or children) with purple would 87 sing back, “I have the purple heart” as a way to practice for when Ms. Wheeler assessed their singing voice development at a future date. However, Ms. Wheeler did not record their participation or rate their singing achievement. Composition activities offered rich opportunities for informal assessment, as students experimented by playing their ideas on recorders (self assessment), talked to one another about questions they had (peer coaching/assessment), and asked Ms. Wheeler for feedback. Ms. Wheeler also assessed the compositions formally using a checklist. Although many assessments were embedded in instructional activities, this was not always the case. Some assessments were whole-class activities in and of themselves, such as self-assessments, the “Rocket Notes” note-reading quizzes, and other written assessments about music concepts or information. Rarely, students would be pulled aside for assessments, such as in kindergarten during centers time or in fourth grade when students went individually to another room to perform a playing test for a video camera, or when they played in duets and trios for Ms. Wheeler while everyone else practiced. In 7 weeks of observations, I saw repeated use of assessments. 
According to my field notes and corroborated by Ms. Wheeler’s journals, each class meeting featured multiple activities that offered the opportunity to assess music knowledge and skills. Many of these activities were whole-group, and Ms. Wheeler circulated around the classroom about once a week with a class list to mark children who had not yet achieved a targeted skill. The fourth grade class I observed did written work, such as a composition, self evaluation, or work in their recorder notebooks, two to three times a month. At least one (and particularly in kindergarten, usually more) activity per class would allow smaller groups or individuals to demonstrate what they knew and could 88 do. These included times that children sang or played alone or in small groups, or that they worked individually on dry-erase boards, playing instruments, or with manipulatives. In Ms. Wheeler’s journals, there were consistent references to assessment activities that I had not identified as assessments when I coded my field notes. For example, there were a number of times that Ms. Wheeler checked the whole group, halves of the class, small groups, or even individuals on a particular musical skill or behavior but did not record what she saw. I did not code this activity (informally checking for participation and/or comprehension) as an assessment, because it did not result in any kind of descriptive information about an individual that could be used later to adapt instruction to individual differences. Another example of an activity that Ms. Wheeler called an assessment in her journal that I did not code as an assessment in my field notes was composing a song as a whole class and having individual students contribute portions (e.g., treble clef, time signature, rhythm or tonal patterns). While allowing individual students to contribute such information would offer a chance to check the class’s understanding, Ms. Wheeler did not record which student volunteered information or what information was contributed by whom. Therefore, I viewed this activity and others like it as examples of well-delivered whole-group instruction, rather than as assessments of individual skills, knowledge, and abilities. In summary, some type of assessment activity was present in nearly every class I observed. More complex assessments like compositions, formal written assessments, recorder playing tests, and tests of singing voice development were undertaken less frequently—only once each in the seven-week course of this study. Self-assessments and portfolios were cumulative and presented to the students at the end of each trimester, and the observation period included times during which students worked on self-assessments and completed written work 89 that was placed in their portfolios. Rating scales and/or checklists were completed once or twice a week regarding specific demonstrations of musical skills, although Ms. Wheeler often chose simply to mark who participated. Performances for audiences did not take place in the observation period, but Ms. Wheeler indicated that grades k, 3, 4, and 5 performed once a year, and the kindergarten class I observed was starting to prepare music for their performance. Scoring Assessments and Tracking Results Checklists and rating scales. Ms. Wheeler’s assessments typically were some form of checklist or rating scale. For many assessments, Ms. Wheeler simply checked “yes” or “no” on class lists to record if a child was participating or demonstrating a particular skill. 
Danielle designed her own rating scales. The following scale was used to evaluate kindergarten singing:

S+ if singing on pitch
S if singing but not on pitch
T if talking (DW Journal 1/22, p. 1).

Kindergarten students also were rated on their abilities to make up a rhythm pattern in the context of a triple meter chant:

P+ for a pattern with correct solfege and meter
P for a pattern in triple meter on a neutral syllable or with incorrect solfege
P- for a response that was not in the rhythmic context (DW Journal 1/22, p. 1).

Ms. Wheeler designed checklists to evaluate summative assessments, such as the final recorder-playing test. Fourth grade students went into the hallway one at a time and played for a video camera. Ms. Wheeler took the video home, watched each example, and rated it with a yes/no checklist of the following:

Posture (left on top)?
Correct notes?
Correct rhythms?
Good tone? (DW Journal 1/15, p. 2).

The checklist also included a space for comments. Grades on the recorder unit were determined exclusively by this summative recorder-playing test and were based on whether the child successfully performed a song in the grade category they wanted. That is, if they wanted an "A," they had to play a more difficult song than if they were playing for a "B" (DW Journal 1/15, p. 2). A chart of which songs could be played for what grade was posted in the classroom for a few weeks prior to testing (DW Journal 3/1, p. 5).

Ms. Wheeler also used formal criterion-based assessment of written compositions in fourth grade. The students wrote a song for their recorders, and Danielle evaluated it by using a yes/no checklist of the following:

Treble clef?
Time signature?
Measures with four counts?
Begin on a tonic note?
End on the resting tone?
Writing in the key of C?
Notes properly placed on the staff? (DW Journal 3/1, p. 4).

This checklist was on the board for the students as they were composing. Providing the checklist assisted students as they composed, but it also resulted in this activity encompassing only the lower levels of thinking on Bloom's taxonomy. Bloom's taxonomy stratifies levels of thought, beginning with knowledge, comprehension, and application, and then progressing to analysis, synthesis, and evaluation. When students follow a checklist step-by-step, they are at most applying what they know to a prescribed task.

Observational assessments. Ms. Wheeler described one of her assessment methods as "observational notes" (DW Initial Interview, p. 16). For example, she would circulate during recorders, notice students who were not demonstrating a particular skill (e.g., wrong hand on top), and jot their names down. Fourth grade students also played assessments in duets and trios. For this activity, Ms. Wheeler hung a large chart of different possible songs to play in the hall. The chart was organized by difficulty level. Student duets or trios that played a certain song correctly (pass/fail) were allowed to sign their names under that song in the hall. Ms. Wheeler, her student teacher, and a guest teacher took advantage of these opportunities to give constructive feedback and individual assistance. Some children responded well to the idea of trying for higher levels of challenge, including one duet team who chose to play melody and improvised harmony based on chord tones (DW Think Aloud 2/15, p. 15-16). These observational assessments were formative and interactional and did not result in any data other than the pass/fail list.

Written tests. Although Ms.
Wheeler administered other written tests and stored them in portfolios for grades 1 through 5, the only examples I saw were the one-minute "Rocket Notes" note-reading tests. "Rocket Notes" were scored as the total number of correct responses out of 40 possible. Each child selected a personal goal for the next test, which was administered once a week for six weeks. Tests on different days had the same content (notes on the treble staff), but the information was presented in a different order so that students were not just memorizing. Ms. Wheeler graphed responses to track whether each student was improving, and this led to conversations with individual students who were not improving or who had consistently low scores.

Methods for eliciting response. Ms. Wheeler indicated that she spent time in kindergarten teaching behaviors that allowed her to assess musical skills. In the observation period, I observed the following methods of teaching children to respond: letter/heart messenger game (echo singing), scan across room (individual response, but very fast), boys respond then girls respond, responses by section of chairs, small group singing into microphones, small groups playing instruments, movement responses (including fluid movement, beat movement, and thumbs up/thumbs down), response cards (each child has own card, points to pictures or holds up card), and popsicle sticks laid on the floor representing rhythm notation. Although many of these methods were still used in fourth grade, written responses on paper and individual white boards were added, and the most prevalent mode of assessable response was individuals playing instruments. Ms. Wheeler stated that routine was crucial to the success of assessment activities, especially in kindergarten. For example, she attributed reduced participation in small-group singing to an interruption in routine (several snow days on music days) (DW Think Aloud 2/15, p. 5).

One day in kindergarten, Ms. Wheeler used centers time to facilitate individual assessments of singing voice development (in the hall) and instrument skills (in the classroom) (DW Field Notes 2/12, pp. 3-4). Her journals did not mention the centers until the week after they occurred, when I specifically asked her about them. She reported, "I only do them on a 'party week' [this day was the kindergarten Valentine's Day class party]. So they play centers six times a year" (DW Journal 2/19, p. 1). In a conversation between classes, Ms. Wheeler told me that she planned centers for these "crazy days" because she felt the students would have difficulty with whole-group direct instruction (DW Field Notes 2/12, p. 3). She also stated, "Most of the time, I am just observing [the centers], and I like to do that, because it lets me know the personality of the child… There's [also] that observational piece of when four people sit down at that drum, and all of them are playing the macro beat—Oh, Cool! And I'll make comments to them. Or they'll pull me up to see the pattern that they wrote on the board…" (DW Think Aloud 2/15, p. 13-14). Centers provided not only opportunities for individual assessment, but also a chance to indulge Danielle's curiosity about children's abilities, preferred activities, and modes of expression when given the chance to self-select.

Challenges to scoring assessments and tracking results. Ms. Wheeler indicated some challenges to keeping records of students' music achievement. Attendance presented a problem: one student missed four of the seven Rocket Notes tests.
It was also difficult when a new student joined the class and lacked prerequisite skills; two new students started in fourth grade while I was observing. However, Danielle stated that her main challenge was finding a way to record assessment data immediately. "If you had a class at nine [o'clock] and then you have how many classes [in a row without a break]… you don't have time in between classes to write notes… so when do you have time to write those notes? Because an hour and a half later, do you remember what happened in the first class? Sometimes you do, and sometimes you don't" (DW Initial Interview, p. 19). This was corroborated by her journals. On one occasion she waited two days to write her journal entry and stated, "Uh Oh… Waited two days after lesson and having trouble remembering what happened" (DW Journal 2/8, p. 1). Her typical journal entry comprised three to six pages, and this one barely filled one page, indicating that the richness and detail she was able to recall diminished greatly over time.

On several occasions, Ms. Wheeler recorded whether students participated rather than their levels of achievement. For example, as described earlier, when fourth grade students composed B sections and played them individually for the class, she simply recorded which students played (DW Think Aloud 2/15, p. 11). In another activity, Ms. Wheeler had kindergarten students sing in small groups into microphones but did not record her assessments at the time. During a think-aloud, while watching a videotape of that class, I asked, "Do you know which children are leading?" Ms. Wheeler replied, "At that particular time I would know. If I listened to it again I would know" (DW Think Aloud 2/15, p. 4). In light of her comments regarding how hard it was to remember specifics of what had happened in a class over the course of the day, recording more information than simply who participated might have painted a more detailed picture of student strengths and needs, since in most music classes she did not have a video recording to review in order to make those assessments.

Ms. Wheeler felt that it was important to pick one specific musical behavior when she assessed. "That was, I think, the hardest piece of all assessment for me… I think I had to identify specifically… It has to be one thing. I can't seem to do more than that" (DW Initial Interview, p. 20). She expressed a wish to be more holistic but indicated that collecting holistic portraits of all 500 students did not seem achievable. Yet Ms. Wheeler also stated that selecting specific musical behaviors to assess made it difficult to keep track of all the different ways she scored everything, because each activity had its own scoring system (DW Journal 2/15, p. 1). In addition, she felt that it was difficult to know whether the fifth person in a row to demonstrate a particular skill was actually demonstrating his own ability or imitating the response of another child (p. 2). Danielle also was concerned that rating everyone on a particular skill took too much time, preventing her from spending as much time as possible musicking (DW Think Aloud 2/15, p. 2).

Ms. Wheeler stated that she was looking for some specific musical behavior in nearly every activity she taught. At first, she needed to be deliberate about what exactly she was looking for, and it was difficult and very time-consuming (DW Final Interview, p. 9). However, she was determined to integrate assessment components into her teaching.
If she was not deliberately checking for something, it “…would not have any meaning. It would just be an activity” (DW Final Interview, p. 9). Danielle found that, with practice, the assessment mindset became more automatic. “I think at some point, you just generally do them [assessments]. I think you just have to implement that assessment piece and try it” (DW Final Interview, p. 9). She knew what she was looking for in each activity. Now, she was working to find the time and the best method to record that information. Differentiation and Assessment Ms. Wheeler’s assessments sometimes resulted in individualization of instruction, and differentiation also resulted from her instructional frameworks and strategies. The extent to which Danielle differentiated varied based on the age of the students. In kindergarten, when instruction nearly always was whole-group and experiential, differentiation was rare. In fourth grade, group work and self-paced individual work allowed Ms. Wheeler to use information gleaned from assessments to assist individuals. Ms. Wheeler frequently used the assessments of other teachers as a way to differentiate instruction, although this differentiation was primarily focused on social and academic skills rather than on music learning. Differentiation in kindergarten. At the kindergarten level, I observed little differentiation of instruction as a result of musical assessments. Individualization was limited to behavioral and social intervention for students with special needs, such as autism spectrum disorder (ASD) and English as a second language (ESL), rather than to musical skill 96 development. The curriculum in kindergarten primarily focused on exposure to music and music activities (e.g., singing, moving, and playing instruments) and teaching children the routines of the music classroom (procedures and social expectations). Perhaps because of this, Ms. Wheeler used an early childhood approach to teaching kindergarten, which did not usually require response or have an expectation of correctness. It may be that differentiation of instruction occurred naturally as children were allowed to acclimate to their new environment at their own pace. However, elements of the differentiated classroom as described by Tomlinson (2000), such as flexible groupings, or varying material or response styles for students with different levels of preparation and ability, were not present. The day Ms. Wheeler used centers constituted a notable exception in her approach to kindergarten differentiation. Ms. Wheeler began class by demonstrating each center. The students were accustomed to centers in their classroom and understood that some centers were required and some were free choice. The optional music centers included: 1) Large Taos drum with mallets. Students could play macrobeats, microbeats, or very little microbeats. They could also play My Mother, Your Mother (a chant echo game) with patterns notated on paper plates. Only four people could play. 2) Singing center with microphones. Students were to sing specific songs cued by picture cards. These songs were all part of their upcoming program. 3) Drawing on the white board with markers. The drawing must be a music picture. Students could draw an instrument or write music notes. 4) Instrument area with unpitched percussion instruments. Kids could play patterns or sing songs with the percussion. 5) A puppet stage made from a sheet over some chairs and a variety of puppets. 
97 6) A group of chairs and colored paper hearts for “Messenger, Messenger” 7) Sitting in the teacher’s chair to read: A, You’re Adorable (a song they will sing in their program) (DW Field Notes, 2/12, p. 3). Compulsory centers were individual singing voice development testing in the hall with the student teacher and xylophone play (bordun and glissando) in the classroom with Ms. Wheeler. Centers offered a chance for free play in groups flexibly chosen by the children. According to my field notes: The drum center is popular. I hear several examples of steady beat. I see kids trying out the drum with their hands rather than the mallet and preferring that timbre. Some kids rush from center to center, others stay in the same place for a long time—especially those who started off at the drawing center. A group of girls play the “Messenger, Messenger” heart game for a few minutes. They move to the microphone-singing center, where they appear to sing a few songs, but I can’t hear them. Then, they come closer to sit in the teacher’s chair and accurately sing A, You’re Adorable—while one student holds the book (and turns the pages at the correct time). The drawing center (white board) is covered with fairly correct music notes. Students are now trying to draw treble clefs. I learned later that these students had never written as a part of music instruction. 98 I see one little girl reading the rhythm pattern cards for My Mother, Your Mother at the drum center with the correct solfege syllables. Six students organize a group to play the Messenger, Messenger heart game, and one child is the teacher. I am amazed to hear how accurately she leads the echo singing; the responses vary in accuracy. It is funny to watch the little “teacher” as she imitates Ms. Wheeler’s response style when one child forgets to echo. One girl sits by herself and sings all of “Hush, little baby” with accurate pitch and good tone while keeping macrobeat on a triangle. (DW Field Notes, 2/12 p. 4-5). Centers time resulted in student-directed learning of preferred topics in student-chosen groups, which is one way to differentiate instruction. This differentiation was the result of assessment, but not in the way I had anticipated when I designed this study. Rather than to provide learning activities based on a need for remediation or challenge discovered by assessment or using the centers to assess the musical skills used at each center, centers filled the need to have something for students to do while the teacher engaged in formal assessment. Differentiation in fourth grade. In fourth grade, there were several examples of differentiated instruction based on the results of assessments. Ms. Wheeler frequently circulated and wrote the names of students who were not demonstrating particular skill (e.g., fingering for low D) on a clipboard while the whole class played a song together. If the list was longer than five or so students, Ms. Wheeler would work with the whole class on that skill. If not, she would pull those specific students aside for additional instruction. 99 “Rocket Notes” offered an illustration of how assessment results could be used to individualize instruction. As stated above, this one-minute timed test of note reading ability was administered once a week for six weeks. Tests were not graded; they were marked with the number of correct answers. Individual children set a goal based on their previous score. One day, I overheard a girl talking about how she got 26 and her goal was 27. 
She seemed excited to try again. Ms. Wheeler reported that one child had gotten 40 out of 40 twice in a row, and that seemed to motivate students as well (DW Field Notes 2/2, p. 1). Although goal-setting was self-paced, the tests were identical for all students. In my field notes, I wondered if there was a way that these tests could be sequentially differentiated so that students could work on different skills or levels. When I administered Rocket Math as a long-term substitute fourth grade teacher, some students were testing on single-digit subtraction, and others were reducing improper fractions (DW Field Notes 2/12, p. 1).

Ms. Wheeler charted Rocket Notes scores for each student. As a result, she learned that one student did not understand note reading at all. During whole-group instruction, she started to help him track notes on the paper. The student was new that year and very quiet. Prior to Rocket Notes, Ms. Wheeler had not discerned that he was struggling based on observation and checking the group (DW Journal 2/5, p. 1). After Rocket Notes results showed he was struggling with note reading, Ms. Wheeler noticed he was also having difficulty with composition. As a result, she checked with his classroom teacher regarding possible learning problems and ideas for ways to help (DW Journal 2/26, p. 3). She also started checking in with him more often and offering additional instruction in music. Another student was writing the same pattern of four letters (EGBD) over and over for two quizzes. Danielle reviewed ways to remember the names of lines and spaces with him, and he improved on the next test. Based on his rate of improvement, she thought it was likely that he had simply not been trying rather than being confused about how to answer (DW Journal 2/5, p. 2).

When students were working in duets and trios, Ms. Wheeler provided for differentiation by allowing them to select different levels of challenge. Students were spread out over several rooms, and Ms. Wheeler, her student teacher, and a guest teacher (visiting as part of a professional development day) worked with children who needed assistance based on observations made while circulating around the practice areas as well as prior informal and formal assessments. I asked what would happen during the pass/fail assessment if one child's playing was unsatisfactory, but her partner or the rest of her group passed. Ms. Wheeler replied that she had taken such students aside for diagnostics and coaching right after the assessment or had set up a time at recess (DW Think Aloud 2/15, p. 16). Duets and trios also fostered peer coaching and self-pacing. Groupings were by student choice, and students who were more advanced helped friends who were less advanced. These groupings, and specifically the need for the groups to spread into other rooms and the hallway, also seemed to have another benefit: students could hear themselves more accurately in smaller groups. Being able to hear their own playing produced some immediate gains, not only in tone quality but also in accuracy for some students. Similar improvements might be possible if students could hear themselves better in other activities, such as composition, when it was very hard for students to hear themselves as they played their ideas on recorders, or singing, when students may not be aware of how they sound outside of the group. Although Ms.
Wheeler did not group students in a way that she could specifically challenge those who had demonstrated high levels of achievement, offering options ranked by difficulty level for duets and trios to play resulted in some individuals challenging themselves. 101 “The kids by the sink… asked me ‘What’s this challenge with the chords?’ I explained it real quick, and I wrote the chords in for him [like guitar chords above the melody line], left, came, back, and he was able to play… harmony while his friend played the song. And I was like ‘Oh, how sweet is that?’” (DW Think Aloud 2/15, p. 18). In this case, one child opted for the more basic option of playing the melody of a song that they had learned in class, while his partner played an improvised harmonic accompaniment based on chord tones. Ms. Wheeler used a combination of information gained from observational assessments (lists), duets and trios, and rocket notes to seek out individuals for additional instruction during free warm-up. Warm-up typically constituted about 5 minutes at the beginning of recorder days. Ms. Wheeler checked in and worked with students whom she noticed struggling (informal assessment) or who were having trouble as discovered by means of more formal assessment practices (playing in duets/trios, checklist of correct fingering while circulating, rocket notes), and also based on IEP diagnoses. Differentiation based on assessments of others. Individual Education Plans (IEPs) served as a guide to Ms. Wheeler in differentiating instruction. She frequently relied on the assessments of other educators, such as classroom teachers, special educators, and psychologists, when she decided how to help children with special needs learn in music class. However, perhaps because these professionals do not consider music in their assessments, use of the IEP goals resulted in social and academic differentiation rather than in differentiation of music learning. Children with an IEP had been assessed to determine physical, occupational, cognitive and social learning strengths and needs. Ms. Wheeler used this information to alter her instruction so that instruction for students with an IEP was consistent across settings. For example, one child’s goal was to learn how to ask for a break when he needed it, and Ms. 102 Wheeler made this a goal in music as well. In the past, Ms. Wheeler had also used picture schedules or lists of tasks that needed to be accomplished as indicated by a child’s IEP (DW Initial Interview, p. 10). Ms. Wheeler also took student differences into account when she managed behavior. For example, she was struggling to help a kindergarten student who spoke very little English: …it seems like I am on him quite a bit, so sometimes I might let things slide. I don’t want to be on him all the time… I’ve talked to the ESL teacher about him, and she says he’s a little stinker sometimes. She hasn’t really offered me anything to do with that… I want him to learn the information, I’m helping him with his English skills, and yet he is being naughty… or does he just plain not understand? There is a lot going on with him (DW Think Aloud 2/15, p. 7). In such a case, Ms. Wheeler would rely on the assessments, judgment, and advice of other teachers based on conversations and on the IEP document in order to structure interventions that would help the student in question succeed in the music room. Use of IEPs and the assessments of other educators resulted in a variety of methods to differentiate instruction. Ms. 
Wheeler used students in the LINKS program, a building-wide buddy program, to provide peer assistance for students with Autism Spectrum Disorder (ASD) (DW Think Aloud 2/15, p. 8-9). Most students with ASD had paraprofessional aides, but those aides rotated every month. Ms. Wheeler stated that many of these aides lacked the music skills (such as reading notation, willingness to sing) to assist the students with ASD as well as the peer buddies could (p. 9). In addition to the students with ASD or ESL, Ms. Wheeler was also familiar with the IEPs of students with learning disabilities (LD) who were served in a pull-out resource room. In the context of a conversation about composition projects, she commented: You've got to check those resource [room] kids first to make sure they understand what you are doing and that they have started… [then] if you have those special needs kids like those with ASD, you've gotta go check them, or make sure they've got somebody to help them, and THEN you check the others (DW Final Interview 5/31, p. 3). Familiarity with IEPs also helped Ms. Wheeler select modifications that might be helpful for individual students with learning difficulties: "…don't write the pattern as long, don't write as many patterns, here is a pattern for you to copy" (DW Initial Interview, p. 12).

Ms. Wheeler worked tirelessly to integrate students with special needs into her music classes, primarily by helping with social skills, academic skills, and logistics. One day, the kindergarten class was playing a game that required children to choose another child and hand her a paper heart. Knowing that the student in the class with ASD would need help with this, Ms. Wheeler anticipated his needs and seamlessly helped him without other students noticing (DW Think Aloud 2/15, p. 4). Ms. Wheeler stated that she did not differentiate as much for these students in terms of music learning, because, in her experience, students who were ESL or had ASD or LD did not need help musically--just with written work, vocabulary, and/or social skills: "One kid might come in and have a broken leg and maybe he can't do fluid movement that day. That's OK 'cause we will do something else for you. Or maybe somebody else will come in with another disability, but we're going to modify no matter what. We do modification for that particular student. I have found that, basically, [children with special needs]'ve been able to do everything just like any other kid. As long as I am taking care of that social piece for them and making sure they are on task" (DW Initial Interview, p. 10).

Summary. Ms. Wheeler differentiated instruction based on her own assessments of musical skills and abilities as well as the behavioral and academic assessments of others. Differentiation was present to varying degrees in different grade levels; in kindergarten it was relatively rare, whereas in fourth grade it was more frequent. Differentiated instruction ran the gamut from using prior knowledge to provide social or academic scaffolding for a child with special needs, to using assessments to decide whom to assist during free warm-up time or group work. Group work itself functioned to differentiate instruction, particularly on centers day in kindergarten, and when the fourth grade students worked together in duets and trios to prepare songs to perform on their recorders.
Emergent Themes Data analysis revealed additional themes that were not encompassed by my initial research questions but were still pertinent to the relationship of assessment and differentiation in Ms. Wheeler’s teaching. These themes included Ms. Wheeler’s inquisitive disposition, her linkage of curriculum to assessment, and teacher behaviors conducive to differentiation. Inquisitive disposition. Ms. Wheeler demonstrated an inquisitive disposition that contributed to the quality and frequency of assessment activities as well as to differentiation of instruction in her classroom. Her inquisitive disposition was characterized by self-motivation to integrate assessment components into her teaching, curiosity about the results of assessments, ongoing learning regarding music teaching, and reflective thinking about her teaching practices and the progress of her students. These qualities seemed interdependent and interrelated. Ms. Wheeler’s journals and interviews made it clear that she was the one motivating herself to integrate assessment components into her teaching, and that her assessments had little to do with grading, per se. 105 After we got done with the curriculum and I become interested in assessment, then I wanted to identify all the different kinds of assessment that were possible in the music classroom, and I tried to plug all that into my curriculum. Which didn’t match my report card… that was just something I became interested in (DW Initial Interview, p. 4). [My principal] has no expectations for assessment in my classroom. When I try to tell him about assessments, he is surprised that I am doing them, but that conversation is very short. Being a tenured teacher, I get observed once every three years. When he is filling out my evaluation form, he always asks me what types of assessments I am using. He is always surprised that I use a variety of assessments. He expects a report card but he never looks at them… (DW Journal 1/15, page 3). In addition to a disinterested administrator, many other music teachers in Ms. Wheeler’s district were resistant to assessment of music learning. Teachers in our meeting did not want to look at assessments and tried to change the subject several times… Several teachers commented that our assessment is our performances. I made the comment—Yes, that is our MEAP [Michigan Educational Assessment Program, a yearly achievement test]. But other teachers teach MEAP and still teach a variety of curriculum (science, etc.)[that is not tested] and still assess for each subject area. They don’t just test one time (DW Journal 1/22, p. 3-4). “The negative is they don’t want kids to walk out of the room thinking that they are not a good singer… and that’s the kind of conversations we have about why we shouldn’t assess” (DW Initial Interview, p. 2). 106 Ms. Wheeler compensated for a lack of support from her administration and other teachers in her district by being independent and resourceful. For example, she found a book on assessment by MENC and adapted a form to use as kindergarten report card (DW Journal 1/29, p. 2). She talked to the kindergarten teacher about kindergarten classroom assessment practices to get ideas (DW Journal 2/1, p. 1), and she discussed assessment practices in general education with her students in the Master’s of Arts in Teaching program at a local college (DW Journal 3/1, p.2). Ms. 
Wheeler sometimes struggled with frustration regarding how to fit in all of the assessment components, track individual progress, and still teach and enjoy music (DW Journal 1/22, p. 3; DW Journal 3/1, p.2 and p. 6). Even after 26 years in the classroom, Ms. Wheeler demonstrated unflagging interest in her students’ progress and curiosity about their abilities. She commented frequently on how she was interested to see how students had performed on “Rocket Notes” (e.g., DW Journal 2/19, p. 3) or what their compositions would be like (e.g., DW Journal 3/1, p. 4). She also made statements like “I am curious to see what he did” while watching video (DW Think Aloud 2/15, p. 2). Ms. Wheeler’s curiosity extended to designing a mini-study. I don’t like doing drill [of note reading]…. I thought I’d rather [the students] write songs and learn through modeling of songs... But now I’m thinking I’ll go back to drilling a little more… There’s been a lot of conversation in the faculty meetings about [how] we’ve gone to this higher level thinking in math, and you can think this way and you can think that way and we’ll all come to the same answer. But now they realize that they are not drilling facts enough, so that piece is missing. So they need to get back to drilling. So I’ve been thinking, well, maybe I need to drill, so I’m going to try it. So in the one 107 class you are observing we are doing the little drilling test [Rocket Notes]. I’m going to compare that in like a month or two and see who seems to be achieving getter in writing songs. We’ll see if that makes a difference. I am thinking that will be a[n] interesting piece to see (DW Initial Interview, p. 17). The fourth grade I observed was the lowest achieving academically of her three sections. She commented in her journal, “I am interested to know the growth of my students in taking the “Rocket Notes test. I think it will show growth—and I think I will use this with everyone next year at the beginning of the recorder unit” (DW Journal 2/19, p. 3). 7 Perhaps curiosity also contributed to Ms. Wheeler’s belief in the need for ongoing and diverse training in music education. I think the more training you have, the more you have to choose from, the more variety of things that you can bring to your students so you can meet all the needs of your students, I think that helps… … As I have gotten more training I have more things for the students (DW Final Interview, p. 1). Ms. Wheeler felt that post-baccalaureate study had allowed her to become a better teacher, particularly regarding her ability to assess. “The assessment piece, for me, I wasn’t trained very well. There was not training, or there was no assessment when I started teaching. There is still not a whole lot [of training regarding assessment at the undergraduate level] at this point” (DW Final Interview, p. 4). Based on our informal conversations, Ms. Wheeler was not simply referring to her master’s degree study, but also to Orff and MLT certification, numerous workshops, and conference attendance. 7 In case the reader is curious, DW detailed “Rocket Notes” results in her Journal 3/1 p. 5. Although she described the class being tested as her “low” class, only one student had trouble with notation on the songwriting project, compared to four in each of the other classes. DW concluded that “Rocket Notes” did help. 108 Finally, Ms. Wheeler’s inquisitive disposition was marked by her reflective practice. 
Not very many teachers would be willing to take on a time-consuming project like participation in a dissertation study that required interviews, think-alouds, and seven weeks of observations and journals. Ms. Wheeler was a mother of two, a college teacher, and a church music director, and she filled many other roles in addition to her elementary school teaching. However, she contributed thoughtful journal entries regarding nearly every observation and made herself available for several hours of interviews. She seemed to enjoy having a "music person" observe her and discuss her teaching with her. Ms. Wheeler's reflective nature showed in comments like: "It would be interesting to see my response if I had walked by and not seen anything. She was just sitting there, I wonder what I would have said?" (DW Think Aloud 3/22, p. 2) and "I'm not sure without that video there… I mean, I know they are all working, but seeing the process is very interesting, because I am not sure that I had picked up on everybody's different process" (p. 3). In the final interview, she stated: When you came, I thought I had dropped some things, and this [participating in the dissertation] would be good, to make me go back there. I think I was just… naturally doing it [assessment]… I thought I had gotten lax, but I think [assessment] was just a natural thing that I was just normally doing. I had written it into the curriculum, or I'd written it in with particular activities—that this was what I was looking for or checking for (DW Final Interview, p. 9). Ms. Wheeler's reflective nature resulted in her continually striving to find out more about her students, to the point that she did not realize how much assessment she was doing.

Ms. Wheeler integrated assessment components into her teaching essentially in isolation. Without her inquisitive disposition, she could easily have decided not to engage in any form of assessment. She had to be self-motivated; no one was requiring her to assess music learning. It took curiosity about what her students could do, coupled with interest in how assessments could improve learning, for Ms. Wheeler to be motivated to assess students' musical abilities and achievements. She needed the assistance of additional training and sought out venues for learning more. Danielle was reflective about her assessment practices, and they became semi-automatic.

Linkage of curriculum to assessment. Nearly every time Ms. Wheeler discussed assessment, she mentioned curriculum. Three years prior to the time of this study, Ms. Wheeler and the other elementary teachers in the district had written a cohesive, sequential curriculum for K through 5 music. Ms. Wheeler was stymied by other teachers' resistance to continuing on from writing the curriculum to creating assessments and ways to report progress, which she viewed as interrelated parts of instruction. Even before that assessment piece, I am looking at the curriculum… then, when I am writing my lesson plans, I am looking for a variety of activities that can meet the needs of all the different students. Then, I can do the assessment while I am [teaching]… and then I see what the outcome is (DW Final Interview 5/31, p. 1). Although she viewed curriculum, planning, and assessment as interrelated, Ms. Wheeler consistently indicated curriculum as the root of instruction.
When I asked her about the most important factor in her ability to meaningfully assess learning, she replied, I think the more training you have, the more you have to choose from, and pick from, the more variety of things you can bring to your students so you can meet all the needs of your students, I think that helps. I think that... I still go back to… that curriculum, I think, needs to be in place (DW Final Interview 5/31, p.1). 110 In discussing assessment with practicing teachers who were her students in a masterslevel college course, Ms. Wheeler discovered that this linkage of assessment to curriculum was also problematic in some general classroom settings. “[Their] main concern was that they have all this information from assessments but don’t know how to use it, and more important, they don’t have enough activities/skills/or curriculum to meet the needs of all the students once they have the test results” (DW Journal 3/1, p. 2). While Ms. Wheeler felt that her district had a strong music curriculum that would benefit from embedded assessments, these other teachers in the M. A. program felt that they had too many assessments, at least in part because they were not linked to a strong curriculum. Ms. Wheeler was a proponent of a spiral curriculum, in which young students learned a variety of music skills and information at basic levels, and then circled back to review and add context, depth, and theory in continuing spirals as they matured. “When you have the whole building over 6 years, you can spiral curriculum and they [the students] really learn something” (DW Think Aloud 2/15, p. 19). In Ms. Wheeler’s model, the fifth grade year was a sort of capstone year. In fifth grade, …I let them go a little more. We are doing more things… Individual creative-type activities or more creative group activities where I am not teaching them concepts as much any more. I am still spiraling concepts, but… now what can you do with [all the material you have learned]? (DW Think Aloud 2/15, p. 8). Since her curriculum is cumulative, Ms. Wheeler’s fifth grade year became the time for synthesis activities that were summative assessments of the k-5 music learning experience. All of the activities and assessments of previous years have spiraled to that point: 111 It’s so hard—that building piece… Because we don’t see [the students] very much. So how can you… You’ve got to build year after year. I think that’s the only reason I can now get to things like figuring out the chords [improvised accompaniments based on chord symbols] because they [the students] have previous information. But that’s taken years (DW Think Aloud 2/15, p 19). According to Ms. Wheeler’s experience, a cumulative spiraling curriculum requires the same teacher to see the students at every grade level. “I think you have to have the kids from kindergarten all the way up to fifth. When you have 500-something children… [talks at length about social issues like divorce, behavior issues, skills, abilities, preferences]. I think you need to be there the whole time to understand” (DW Think Aloud 3/22, p. 7). In the past, Ms. Wheeler had shared a building with another teacher, and the grade level assignments had varied from year to year. Even though she and the other teacher “…would do the exact same thing, then the next year, the next teacher would get [the students], and they would say ‘She didn’t teach me that…’ How frustrating!” (DW Think Aloud, 2/15, p. 19). Due to budget issues, Ms. 
Wheeler learned near the end of this study that she would be transferred to teaching first grade. In the final interview, I asked her what she would like to see from her replacement (another music teacher from the district). She replied: We are a lot the same, we've taken classes together, we've had many conversations… But I can't really make a comment on that for her. She is starting over. She doesn't know the kids. That spiraling piece I think is so important and even though we both have Orff background, MLT training, and she has experience, she still doesn't know the kids. She is starting over again (DW Final Interview, p. 6). Even though Ms. Wheeler admired her replacement as a teacher and considered her to be a close personal friend, she thought that her replacement would need five years to become an optimally effective teacher in her new building because of the cumulative nature of the spiral music curriculum.

Teacher behaviors conducive to differentiation. Ms. Wheeler had a variety of frameworks in her classroom that were conducive to differentiated instruction but that did not necessarily constitute assessment-based differentiation of music learning. For example, students in Ms. Wheeler's class each had an assigned seat. Learning often occurred while moving around the room, playing instruments, or sitting on the carpet. However, at the beginning of each class, for some whole-group instruction, and during written work, students sat in their assigned seats. I saw students in this setting helping one another with behavior and work, so I asked Ms. Wheeler about how she assigned the seats. In kindergarten, I have no clue, I just do boy/girl/boy/girl, or try to… In my next grades up, I consider behavior first with them. So it's not alphabetical, it's boy/girl/boy/girl and behavior. If it's somebody I feel needs to be by me, I put them close to me. Or I might, like some of them that are right next to me this year and I'm on them and on them, next year they might be away from me… Next year, when I go to do the seating chart, I make sure that the child [who] was a helper is not a helper next year, she needs a break… I do try sometimes to put a boy that's a really good strong singer next to a boy that maybe is not. Especially if it may be a behavior sort of thing, maybe he can get him on task… I try to think of all that stuff. Behavior, singing… and I look back in my past records for that (DW Think Aloud 2/15, p. 8). The seating chart facilitated peer assistance, which is one way to differentiate instruction. It was based on previous assessments of students' strengths and weaknesses. However, I did not consider it to be an assessment-based form of differentiated instruction because it was not explicitly applied—it was simply a framework.

Some activities had built-in differentiation that was not linked to assessment. For example, when the fourth grade students composed eight-beat B sections to play on their recorders along with a refrain they already knew, Ms. Wheeler sprinkled hearts with notes on them around the students. They were allowed to draw eight notes semi-randomly (the first note had to be tonic and the final note had to be the resting tone). However, students also could play different combinations on their recorders to decide what they liked (DW Think Aloud 2/15, p. 10). Students could choose to challenge themselves or to take the easier route. Because Ms.
Wheeler only recorded who chose to play their B section for the class, this activity provided a framework for students to operate on different levels but not for Ms. Wheeler to assess either composition or playing, or to differentiate her instruction based on the known musical needs of individual students. In a similar example, Ms. Wheeler sang a melody line that the students were learning on recorder. Based on provided chord symbols, the class played triads by having individual students choose to play different chord tones on their recorders. Again, this provided a framework for various levels of challenge. Some students simply played chord roots, which was a skill that they had practiced as a class and was the easiest option, because the chord symbol named the chord root. However, some students chose other options, such as playing the same note the whole time (sol), playing the same series of notes each time the song was sung (i.e., memorizing rather than improvising), or playing something different each time (DW Field Notes 2/8, p. 1). Because there was no way for Ms. Wheeler to track which students were choosing the various options or to require certain students to choose certain options, this activity was a framework for differentiation but not an example of differentiation of instruction.

Over the course of the seven-week observation period, Ms. Wheeler planned activities that allowed a variety of work and response styles on a number of levels of achievement. These activities may not have featured differentiated instruction when taken independently, particularly when differentiated instruction is conceived of as teaching different things in different ways to different groups of people. However, one of the ways Ms. Wheeler tried to differentiate instruction for her 500 students was to vary the difficulty of activities as well as the methods of information delivery (aural, visual, kinesthetic) and response style in an attempt to reach different children on different days. Ms. Wheeler did not assess students' responses to these various modes of information delivery, which might have allowed her to track student progress and tailor her instruction more specifically.

Ms. Wheeler's teaching was marked by a reliance on established routines and strict enforcement of expectations, which she believed was conducive to differentiated instruction. You'll see my room is set up in a certain way, where each person has their own space… One principal says I'm like an army sergeant. I'm very distinct in everything that I want them to do. I'm constantly saying rules… I start the same language in kindergarten is the exact same language I use in 5th grade. When I am saying I want this done, do this or that. So I think they feel comfortable but they also realize my routine. Same routines in kindergarten, same routines in 5th grade (DW Initial Interview, p. 11). Ms. Wheeler felt that her routines helped those with ASD, learning disabilities, and ESL to participate and learn because expectations were clear and the need for verbal direction was reduced. In combination with the spiral curriculum, Danielle believed that her strict classroom management allowed for more exploration and creativity in older grades: "I think that is why I can be a little more free, especially with 5th grade, when we do a lot of creative project activit[ies]" (DW Think Aloud 2/15, p. 3). Knowing that her students were aware of her rules allowed Ms.
Wheeler to release direct control and allow the individualized activities necessary for differentiation of instruction. At the same time, these rules gave some students with special needs the predictability that they needed in order to participate. I did not observe any aspect of this strict classroom management that was directly detrimental to music learning, although I sometimes wondered if a little more freedom to explore or respond might have allowed different response styles for children who preferred them. While my observations certainly noted the strong routines and consistently enforced high expectations, I also noticed a number of occasions when Ms. Wheeler was flexible, and this flexibility allowed for differentiated instruction. One day, a student brought in his own book of songs to play on the recorder. He had written in note names for ten songs and played one of them for the whole class after warm-up and before the beginning of whole-group instruction. When the class was writing songs for their recorders, he chose to compose in 5/4 (DW Journal 3/1, p. 1). Another day, the kindergarten class was standing to sing a song (DW Field Notes 3/1, p. 3). Two boys joined hands and began rocking in time to the macrobeats. Abandoning whatever she had been planning, Ms. Wheeler had the whole class join. When I asked her about this, Ms. Wheeler replied that she thought it was important that student contributions be valued. I had a class last year… They were just… in two different classes, the boys wanted to dance. And you can't do that with every song, and it was a little out of my comfort zone, but OK, you're dancing, OK, go ahead and dance. And so I let them do that all year. And they still want to do that this year. And I think, if the boys want to dance, why would I stop that? …Why would I stop that creativity? (DW Think Aloud 3/22, p. 6). Ms. Wheeler's willingness to allow students to contribute their ideas and strengths to the way they learn in the music room created an atmosphere conducive to differentiation.

Chapter Summary

Danielle Wheeler used a variety of assessments, including observations, checklists, rating scales, multiple-choice and short-answer written work, and video-taped individual performances. Her assessments and the assessments of others resulted in differentiation of instruction primarily for students who were in need of some form of remediation: musical, academic, or social. Despite a lack of support from coworkers and administrators, Ms. Wheeler worked to embed a variety of assessments in her teaching that allowed her to track individual student progress on curricular goals. Ms. Wheeler viewed the music curriculum as cumulative from kindergarten to fifth grade. In her teaching, differentiation did not arise simply from assessments of musical skills but from a more multifaceted picture of student needs. Danielle based much of her instruction on personal knowledge of students' social, academic, and musical growth accumulated over the course of teaching them for their entire elementary career. In addition, she organized frameworks and provided flexibility that allowed for further differentiation. Despite the inherent difficulties of teaching roughly 500 students whom she saw only twice a week, and despite the lack of any requirement that she track student progress, Danielle Wheeler worked to implement assessment-based differentiated instruction. Ms.
Wheeler had little formal training regarding assessment practices, and the preparation she had came from fragmented sources, sometimes with conflicting viewpoints. Sometimes, Ms. Wheeler opted not to record data about her students' musical progress, instead simply recording whether they participated in the activity. Ms. Wheeler strove to improve music instruction for her students, and I admire her and appreciate her courage in allowing me to observe her teaching and to write about both her considerable successes and the areas where she continues to refine her practice. Perhaps the model she provided in this chapter will help other teachers by motivating them to try out new assessment methods, helping them see ways to improve their practice, and encouraging them to seek out additional training, mentors, and collaborators as they, too, strive to implement assessment-based differentiation of music learning.

Chapter Five: Results

Carrie Davis: Chaos and Creativity

Ms. Wright's class… left feeling very accomplished. We'd been putting everything together for the orange belt song—Au Clair de la Lune. We had been working on the rhythms, singing it (first just as a song with no words, then with pitch-letter names), moving to it, etc. The group sat down and we "singered" [i.e., sang note names while holding a recorder to the chin and fingering] through the song. After singering, we attempted to play. It was definitely the first time most students had attempted playing it: there was a complete lack of cognizance of the beat. The students put down their recorders and said, "That was really bad!" So I asked what they thought was wrong, [and they said], "We weren't together." [I asked] "What might help us?" [and they replied] "Maybe trying again?" I had the students problem-solve for a bit and then had them put their recorders down and attempt just patsching the macrobeat on their laps while singing the song. Wow. Even the rhythmically achieving students weren't really keeping a steady beat. So I had them patsch again, this time starting the macrobeat before singing. They were a bit better. We started rocking the macrobeat while patsching the microbeat. "Hey! We're closer," Christian said, "But we're still not right!" The class problem-solved by having one half keep the macrobeat/microbeat while the other half sang the song on a neutral syllable. Both sides achieved perfection. "I think it's because we were trying harder," said Zach. (Definitely the case for himself—he'd been following a stray ant with his eyes and could not be pulled into the lesson prior to that final attempt.) "Maybe we were listening better?" suggested Ashley. And that was when the "light bulb" went off over 50% of the students' heads. It was fun to watch. I briefly described the concept of ensemble listening—concentrating on your own music making while at the same time listening—and how easy it is to ignore that part when you're trying something new. I gave an example from a recent rehearsal where we [an orchestra she was playing in] sight-read Gould's Jericho and the brass section was just glued to their parts and totally ignored the director and the rest of the ensemble… At this point, the light bulb seemed to click with a few more students, and they decided they wanted to try half of the class playing the song and the other half keeping a macro/microbeat. It went very well. They traded. That went well. They asked to put it together.
That was a little less solid, but still 600% better than their initial attempt. "We're almost there!" shouted Jace. Ashley beamed, Nicole buckled down for another attempt, and we tried three more times in succession, having the greatest success with students rocking the macrobeat until they began playing. Not only were the students together, but, because they were so focused on their "ensemble," they were less apprehensive over the details of fingerings. This circumvented their lack of confidence in recorder skills and allowed for higher achievement. (They can really get in their own way sometimes. My biggest challenge is keeping them thinking positively.) By the end of the class every student—even Grayson—was beaming with pride, and someone said they were good enough to cut a CD. I told them maybe we should all hit the Vegas Strip and be the opening act for a big show. A student actually suggested Celine Dion! (Gotta love it!) (CD Journal, p. 1-2). I met Carrie Davis in a class we were both taking at a local university—I was just completing my doctorate, and she was finishing her master's degree. Based on conversations in the class, I could see that she was a strong teacher and that her teaching style would provide a contrast to the other participants in this dissertation. I asked her to participate, and she was willing but concerned. In our first interview, she asked, "Are you sure that you are going to see what you need to see in this? I am thinking of the third graders creating their own performance from scratch. Is that going to let you see enough of the assessment process?" (CD Initial Interview, p. 7). Because she was preparing for performances, she would not be using as many assessments as she viewed as typical for her teaching. Earlier in the interview, she had described her performance pressures: each grade level (K-4) was expected to stage a "mini-Broadway-like show" (p. 5). Therefore, she was preparing for after-school programs on April 26 (second grade), May 17 (fourth grade), May 20 (kindergarten), May 24 (first grade), and May 25 (third grade). Carrie was also playing professional flute gigs (she had one the night before our initial interview). Fortunately, our three-credit graduate seminar ended the first week of May, giving her more time. The observation period was from April 19 until May 26. Even knowing how much pressure she was under and aware that I would not see her typical teaching in terms of assessment, I wanted Carrie to participate for several reasons. Based on conversations in class, I knew that she had a different classroom persona from either of my other participants, who were very direct in their teaching style. Carrie was more of a facilitator: through questioning and experimentation, she tried to help students discover musical concepts. She also frequently abandoned her plans in light of social cues or to pursue teachable moments. I wanted to know about the role of assessment in this type of classroom environment. Ms. Davis was allowing her third grade students to write their own mini-musicals in small groups for their program, and she had never done this before. I wanted to see how she grouped students, how she kept track of what the groups were doing, and what the students learned as a result of the group composition activity. I also knew that Carrie taught students with moderate to severe cognitive impairments in both mainstreamed and self-contained settings, and I wanted to see how she differentiated music instruction for them.
Danielle (see Chapter 4) and Hailey (see Chapter 6) agreed to participate before the winter holiday break, and their observations took place much earlier in the semester. They each provided journals for nearly every class, answering most if not all of the prompt questions, and often adding additional comments. These journal entries were typically bulleted lists, sentence fragments, and a few longer comments, emailed to me within 24 hours of each observation. With Carrie, I would send journal prompts and not hear from her. Then, one day, I would come into her room for an observation and she would hand me an ethnography of a class that met the previous week, with detailed biographical notes on some children. Carrie submitted a total of three journal entries, each about six pages, single-spaced, and constructed as a story, like the excerpt at the beginning of this chapter. Although her entries were harder to triangulate, Carrie's journal writing style paralleled the differences in her classroom manner from the other participants. Like Carrie, Danielle and Hailey offered their students opportunities to create and be creative, and their classrooms were filled with joyful, musical experiences. However, their reporting and teaching styles were more linear than Carrie's and produced data that were easier to make sense of by applying my three research questions and then describing emergent themes. Data from Carrie's interviews, think-aloud, field notes, and journal seemed to demand that I analyze differently. Therefore, this chapter is organized into four main sections: (1) Self-reports of assessment, (2) Assessment and differentiation of instruction in small-group composition, (3) Differentiation of music instruction for students with cognitive impairments, and (4) Constructivism and differentiation.

Self-Reports of Assessment

The initial interview questions for Carrie were the same as those for Danielle and Hailey and included questions regarding assessment (see Appendix B). Carrie also described some of her views on assessment as part of her think aloud. Therefore, I amassed a considerable amount of information on the assessment components of Carrie's teaching from her point of view. However, as she feared when deciding whether to participate in the study, I only observed limited evidence of these assessment activities as she worked to prepare students for upcoming performances. During the observation period, third grade students were working on composing mini-musicals in small groups, and, in fourth grade, Carrie was "aim[ing] for the 'band rehearsal' mentality from which I usually stay as far away as possible. This seems to suit these kids and will help them to feel confident enough that they don't freeze up at their program next month" (CD Journal, p. 6). I cannot triangulate some of the assessment techniques or testing Carrie described in interviews with evidence from my field notes or other sources. However, in the interest of cross-case analysis, I will include information on self-reported use of assessment with the caveat that I did not observe most of these activities. Ms. Davis reported using a variety of assessment practices, including aptitude testing, report cards, observational assessments, and other formal assessments. She also emphasized the importance of individual assessment and discussed challenges to assessment. Aptitude testing. Ms.
Davis administered the Primary Measures of Music Audiation (PMMA; Gordon, 1986) once a year in first grade, and twice a year in second and third grades (CD Initial Interview, p. 2). "[PMMA] gives me a picture into those… really high aptitude [students who] haven't shown much achievement in class. Because [those students] are thinking of other things in their heads, they are going beyond. That gives me enough of a window to know... to gauge where I need to make changes" (CD Initial Interview, p. 2). Ms. Davis stated that she wished to continue aptitude testing with her fourth grade students, but she had not yet found the money to purchase the Intermediate Measures of Music Audiation (IMMA; Gordon, 1986), and her older students needed the more challenging test for the results to be useful (CD Initial Interview, p. 2). Report cards. Ms. Davis's district required grading music students on report cards twice a year, once in January and once in June (CD Initial Interview, p. 4). She described the grading system as "…kind of generic. 'Making Progress' or 'Needs more Time to Develop' for the lower el. For the upper el, it's outstanding [face, hands and voice express awe], good, satisfactory, or needs improvement" (CD Initial Interview, p. 4). Students were graded on "Do they show development in use of singing voice… rhythm skills, and listening skills? And we have behavioral components as well. And then in third and fourth grade they add another area… combining those skills [singing, rhythm, and listening]" (CD Initial Interview, p. 4; CD Artifacts 1 and 2, lower and upper elementary report cards). Although Ms. Davis worked conscientiously to ensure that the report cards accurately reflected student performance levels, she did not feel that grading students was a necessary use of assessment data. She preferred to use the results of assessments to monitor students' progress (CD Think Aloud, p. 6). Observational assessments. Ms. Davis's journal entries included notes on her observations of students' individual achievements in class, which Ms. Davis stated she tried to record regularly. Here is an excerpt regarding fourth grade students I observed (all names in this dissertation are pseudonyms; Abigail is also discussed below): Jason is still using his head voice when he thinks the other boys aren't listening (hooray!). When he was part of the group of 5 or so who were singing the "special" part, he backed off a ton, but good for him he didn't back down completely! Allie was able to imitate the correct pitches for do and sol when the friends near her were singing. When she sang without them on the same part, she was not very accurate. This shows me she's becoming a better split-second imitator when she knows she's not matching. Her melody was almost pitch-accurate, so she is gaining some independence. Abigail continues to be oblivious that the pitches she sings are not matching others. She is, however, becoming aware of the surprised/frustrated looks of her classmates when she is overly exuberant. Go, Tabby! Not only maintaining the chord root accompaniment on her own, but one of the few who ventured into rhythmic improvisation beyond macro/microbeat on a neutral syllable. (CD Journal, p. 3-4). Ms. Davis provided entries like this regarding ten of the twenty-two students in class that day, and her observations were triangulated by the video for that day and my field notes (CD Field Notes, 4/21, p. 1).
I observed Carrie jotting down handwritten notes, which I think she turned into a more polished, storytelling format as she typed her journal. Other formal assessments. When I asked about how she and other teachers in the district valued assessment, Carrie replied, "We are all constantly assessing… We all wiggle it in, in different ways, but we all find it very important. We all agree on the fact that you have to know where your students are [in their musical development]" (CD Initial Interview, p. 3). I can't imagine not assessing my students as I go. Assessment… plays a huge role in music education. You have the assessment with different skills that lets you know if your students have it or not, whether you can move on. That would be kind of a summative; see if they've gotten it. Then you've got the formative assessment, where you test the waters to see where they are in the first place. And then, along the way, you've got to stop and see where your students are, see what they are understanding, so you know how to proceed… If you don't ever take stock of that—you are just singing with them all the time, or just doing games… but not really focusing on their learning, then… it would kind of be an empty experience for all (CD Initial Interview, p. 2). Interviews revealed this philosophical valuing of assessment as well as descriptions of some specific formal assessment strategies such as aptitude testing and grading on report cards. I also saw sign-up lists in the hall for recorder playing tests during lunch or recess to earn different colored belts (à la "Recorder Karate"). These playing tests ("auditions") were also mentioned in passing during an interview (CD Think Aloud, p. 16). In her journal, Ms. Davis referred to note recognition quizzes taken by the fourth grade class (CD Journal, p. 10). She also used self-assessments of the compositional process and product with the third graders (CD Field Notes 5/26, p. 2). During my final observation, third grade students started to learn "Summer's Coming," a song that included solo responses from three different singers each time the song was sung (CD Field Notes 5/26, p. 3). Carrie told me this would be used as a final test of singing voice development and aural skills for the year. "Aurally, I am hearing if they can accurately do tonic, dominant, and subdominant patterns, and I am also hearing singing voice [development] at the same time. They each get a turn with all three of [the responses]" (CD Think Aloud, p. 19). Although Ms. Davis was preparing students for performances and did not formally assess music learning during our observation period, I saw evidence of a variety of formal assessment techniques. Importance of individual responses. When I asked about "checking the group" (Hepworth-Osiowy, 2004) as a method of assessment, Carrie replied: "I would ask, what about the kids who are faking? [laughs] The child whose mouth is moving across the room and looks like they are singing, but isn't singing at all? If you just listen to the group, it can sound great, because your strong ones [students] are carrying the group" (CD Initial Interview, p. 4). In conversation, Ms. Davis repeatedly emphasized that assessments must be of individual responses (e.g., CD Initial Interview, p. 2). When she taught the class for students with cognitive impairments (CI), most activities had some component of individual response. In third grade, composition activities allowed Ms.
Davis to circulate among groups and interact with individuals as they worked with musical material, and she must have learned about individual skills and abilities during this process. I did not observe evidence of record-keeping regarding students’ progress and learning needs in this context. 127 In fourth grade, I observed only one class period in which instruction provided obvious opportunities for assessment of individual musical responses (CD Field Notes 4/21, p. 1). That day included chances to gather data on individual students while they: sang melody in small groups, sang chord roots in small groups, played chord roots on boomwhackers, played chord roots on recorders, and played melody on recorders. However, Carrie chose not to track individual progress that day because the activity was “just for fun” (CD Field notes 4/21, p.1). I did not see Ms. Davis’s typical instruction, particularly in fourth grade, as they prepared recorder music for their concert. Challenges to assessment. Ms. Davis mentioned several challenges to assessment of music learning. She stated that assessment was difficult because of how many students she taught and only seeing them twice a week. “I would love to hear them all, multiple times per class, solo, and that’s not possible” (CD Initial Interview, p. 2). Ms. Davis also struggled with record-keeping: “…sometimes, even with the pencil and paper in my hand to mark assessment information, a class leaves and I see I’ve missed half of them that did perform, or that I can’t seem to remember my own [rating] system” (CD Journal, p. 13). Furthermore, Carrie was concerned that her students were “hung up on grades” (CD Think Aloud, p. 6). Usually I have my seating chart out, and I am marking [assessment data]… A lot of them have figured out that even though I am telling them that I am [marking] turns, they know there is more going on. They know, because they know that everyone assesses. And they are asking, “What grade did I get?” I tell them “I am not giving you a grade, what are you talking about?” “Well, you were marking on the seating chart, so it must be a grade.” “You had a turn.” “Well, how did I do?” “Did you do your job?” “Yeah” “Well, 128 then you did great. Then you know you did what you were supposed to” (CD Think Aloud, p. 6). When I asked her about the relationship of assessment and grading, she replied, Assessment lets you know where your students are at in the learning that you are hoping to be imparting to them. That is very poorly worded. But it also lets you know what you need to re-teach. If an idea hasn’t gotten across. And it lets you know their background knowledge. Assessment should inform what you are going to do. OK, so this is how I need to approach this concept, because this is where they are at. And then it is summative also. But there is that piece in the middle, where you are constantly taking those snapshots to see. Assessment really is more for the teacher, to inform their teaching (CD Think Aloud, p. 6). Ms. Davis reported problems assessing due to the number of students she taught, how little she saw them, and how difficult it was to keep adequate records. She was concerned about how grade-conscious her students seemed, and felt that assigning a grade may actually detract from the more important role assessment could play in guiding her instruction. Summary of self-reports of assessment. Ms. Davis expressed a philosophy strongly supportive of assessment as a way to improve music teaching and learning. 
She reported regular use of aptitude tests (specifically PMMA), her district’s report cards, observational assessments and other formal assessments, some of which were triangulated in my field notes (specifically, self assessments of group composition activities, recorder playing tests, and a singing voice assessment). Ms. Davis believed that individual responses were a necessary prerequisite for an assessment to be valid. She said that the number of students she taught and the infrequency of music classes made formal assessment difficult to fit in. Ms. Davis used informal assessment 129 and emergent assessments during the third grade composition activities, and those will be discussed in the following section. Assessment and Differentiation of Instruction in Small-group Composition Ms. Davis’s third grade students spent the observation period composing musical “commercials” for their performance. Ms. Davis had never undertaken a project like this before and was unsure about having me observe and about what the performance outcomes would be (CD Initial Interview, p. 7). As the composition project unfolded, it displayed aspects of differentiated instruction, including flexible grouping, student-centered learning, and peer coaching. Ms. Davis employed a variety of informal, emergent assessment methods to track student progress and learning, and used both the performance in front of an audience and a written self-assessment as summative assessments. Flexible grouping. Work groups were often student-chosen and were flexible. For example, one set of groups wrote scripts (CD Field Notes 4/21, p. 1), and students experimented to find musical material with a different group (CD Field Notes 4/19, p. 2). However, Ms. Davis used student requests and a lottery system to assign parts in the commercials, so the performance groups were somewhat random (CD Field Notes, 5/3, p. 3). Flexible grouping strategies are one of the hallmarks of differentiated instruction (Tomlinson, 2000). Ms. Davis did not assign groupings so that different groups or individuals were learning different material, progressing at different paces, or asked to achieve more or less sophisticated outcomes. Nevertheless, different groups created responses that included various musical material and evidence of differing levels of musical sophistication. Student-centered learning. Ms. Davis viewed the student-centered nature of the composition activities to be a form of differentiation (CD Think Aloud, p. 5). Because the 130 students were writing their own commercials based on their interests, this composition activity inherently was student-centered. Topics for the four “commercials” were student-chosen and included a beauty product (“Glam-in-a-can”) that ruined the user’s appearance, a grocery store that sold everything (“Meglanita”), a commercial recruiting new students to their school, and an excerpt from a sports talk show featuring Tom Izzo, a university basketball coach (CD Field Notes 5/5, p. 3). However, in this case, student selection of topics may have functioned more as a motivator than a method of differentiation. If students select a specific topic to learn about as part of a unit of study and present their findings to the class, this would be an example of differentiation of instruction based on student interests. In the elementary general music room, an example of this approach might be if the class was studying form, and groups or individuals investigated the form of their favorite three songs to report back to the class. 
Clearly, Ms. Davis did not intend for her students to study Tom Izzo and report their findings as a part of music class. Using student choice as a motivator is an appropriate, student-centered approach but may not constitute differentiation of instruction. The composition project allowed an assortment of student-centered learning and response styles. Ms. Davis presented a variety of composition styles, methods of composition, and possibilities for performance, and she tried to balance the need for structured directions with the freedom to create. Allowing multiple pathways to learning about a particular topic and designing a variety of possible methods to express what has been learned are integral to differentiation of instruction (Adams and Pierce, 2006). One day, each group composed a melodic idea on xylophones for possible incorporation in one of the commercials. Then, each group played its idea for the class, and the class could decide to use it for a specific commercial or bank it for possible use later (CD Field Notes 4/19, p. 2). After about 20 minutes of group 131 exploration and practice, five different groups presented ideas, and the class “banked” all of them. Another day, the class worked in different groups on a rap for a commercial about their school (CD Field Notes 4/28, p. 4). Perhaps because the students had more background listening to rap, speaking, and writing poetry, this project went more smoothly and resulted not only in more harmonious group work but also in more apparent enjoyment of the performances (by performer and listener alike) and more participation in adapting and combining the group raps into something the whole class liked (pp. 5-6). In each case (melodic material and rap composition), I saw a variety of levels of musical sophistication and achievement, more as a result of individual students utilizing different levels of background knowledge or challenging themselves than as a result of Ms. Davis differentiating instruction. Ms. Davis circulated the classroom listening to works in progress and reflecting with students about their progress. As Christensen (1992) posited, it seemed that the interactions of students with one another, with the teacher, and with the music were the primary vehicles of differentiated instruction in this case. Although the students chose topics and there were a variety of different ways of composing and performing, Ms. Davis was unsure about the efficacy of this project in terms of music learning. I feel like musically… I saw at the beginning a lot of [learning] potential, when they [the students] were asking if they could do this and they had brainstormed [musical ideas] and wanted to come up with their own [musical material]… [I thought:] We can pull in styles and genres, and we can, you know…what makes it sound more bouncy, if you have a ball commercial? And what would sound... and we could do that, talk about historical time periods, and all these amazing things we can pull in. And then it just DIDN’T. OK, 132 that’s not going to work, and that’s not going to work… And the class, the way they were working together, it just… [became about trying to get something ready to perform in time] (CD Think Aloud, p. 18). Ms. Davis stated that she saw great potential for music learning from group composition activities. 
However, the pressure of trying to get the performance ready on time and the fact that students were distracted by the scriptwriting derailed some of the music learning potential of this particular project (CD Think Aloud, p. 19). Peer coaching. One “tried and true” method of differentiation involves high achieving students teaching lower achieving students (Tomlinson, 2000). Ms. Davis established a variety of settings for students to teach other students. Some students took musical or behavioral leadership roles within groups without being assigned, which seemed to be a natural outcome of work in groups that were heterogeneous by ability (e.g., CD Field Notes 4/28, p. 4). One day, Ms. Davis assigned “directors” to design blocking and oversee rehearsals of commercials in the hall while she worked in the classroom with other groups (CD Field Notes 5/19, p. 2). Another day, a boy in the class who was taking private drum lessons acted as a resident expert by bringing in his drum set and suggesting various rhythmic possibilities for the “Glam-in-a-can” commercial (CD Field Notes 5/17, p. 1). These opportunities for leadership by those with more prior experience or aptitude seemed to be a way to value what those students brought to the classroom. Using students as teachers may also have served to build in remediation for students with less music background knowledge or lower music aptitude, because they received personalized instruction and attention from their peers. Informal, emergent assessment methods. Attempting to track individual students’ music learning was one of the many challenges involved in allowing students to design their own 133 performance material. Ms. Davis relied on a combination of roaming the classroom as a facilitator/observer and setting up mini-performances to check the various groups on an assortment of projects. For example, Ms. Davis used performances of the melodic material (CD Field Notes 4/19, p. 3) and raps (CD Field Notes 4/28, p. 2) to track student progress. However, because she did not keep records of these assessments, it seemed that their primary purpose was to be sure the class would be ready for their performance rather than to ascertain information about individual students’ music learning. Furthermore, these assessments were “checking the group,” which Carrie herself characterized as poor assessment (CD Initial Interview, p. 4). None of the data from any source reflects any evidence of Ms. Davis tracking individual or group music learning progress at any point in the third grade composition projects, as she warned me in advance might be the case (CD Initial Interview, p. 7). Summative assessments. Ms. Davis considered both the performance in front of an audience and a written self-assessment as summative assessments of the group composition project (CD Field Notes 5/26, p. 2). The performance was video-recorded, but this functioned as a record of the performance rather than a formal measurement of any type of music achievement (CD Think Aloud, p. 19). For the self-assessment, each student got a pencil, a book to write on, and as much notebook paper as they needed. Carrie prompted them to write about: the process (creating, voting on ideas…), the performance (How did you do? Voices strong enough? Did you remember where to be?), composing (the rap, the jingles, the sound bank?) and if they would recommend this experience to future students (why or why not?). These prompts were presented verbally and written on the board (CD Field Notes, 5/26, p. 2). Ms. 
Davis removed the names from the responses and allowed me to read them. The students primarily reflected on social facets of group work and especially issues of fairness relating to whose ideas made it into the performance and who got what part. (Student content has been edited by adding punctuation and correcting spelling.) "I only got one [part] that is what I didn't like. It was unfair. One person had three but she gave one away… Still she had more than me" (Student 7 Self Evaluation, p. 1). Students were astute and sometimes harsh critics of the performances of themselves and others. "Some people like Jason and Ally needed to work on projecting their voice[s] but you could still kinda hear them" (Student 4 Self Evaluation, p. 1). "I thought I didn't do a very good job in the musical. I thought everybody else did a good job but me… I really wish that I could be as good as everyone else" (Student 10 Self Evaluation, p. 1). Most students recommended composing their own mini-musicals again, although some did not. "That took like six music times… it was really, really, really boring" (Student 7 Self Evaluation, p. 2). "When we were just started planning this I did not want to, but now I think this was the best idea. It was hard to plan but really worth it" (Student 17 Self Evaluation, p. 1). One student's summary seemed to encapsulate the general feelings of the entire class: Another thing I liked was that we could make it up with our imagination. I wish the musical was longer and each person got more parts because a lot of us got only one part in class. Making the whole thing up was hard because we all had different ideas. It was sometimes frustrating because sometimes you would feel like nobody was liking your ideas. Sometimes I felt like going off and doing it all on my own. In the end it all turned out all right. I thought this was more fun than our musicals with the whole school. Sometimes it was hard to work with others but some people were easier to work with than others (Student 4 Self Evaluation, p. 1-2). Ms. Davis's prompts were directed to musical issues—she asked students to comment on the process of composition and for their thoughts on writing the rap, the sound bank ideas, and the jingles, as well as for their review of the performance (CD Field Notes 5/26, p. 2). However, few students commented on musical performance issues, and discussion of the compositional process was limited to social-interactional and decision-making issues. Summary of assessment and differentiation of instruction in small-group composition. Although the group work projects resulted in scripted "plays" that incorporated some musical material (a sung "jingle" using the tune from the can-can accompanied by percussion, improvised atmospheric barred-instrument background music, and a rap), Ms. Davis expressed dissatisfaction with the amount of music learning engendered by this project. The students seemed focused primarily on writing the script, learning to share ideas, compromise, and cooperate. I don't feel that in the amount of time that it took that there was enough musical learning that took place to justify the whole experience. I think, as far as learning what it is like to put on a production, it was very helpful to them [the students], and it let them see it from my perspective… The class that, from my perspective as a music teacher, did the best was NOT that class [the class I observed].
They [the other class] got through their script rather quickly, and really focused on [composing] the 'right' music... (CD Think Aloud, p. 18-19). Ms. Davis indicated that her typical preparation for performance would include facilitation of a number of creative musical activities over the course of the year. Students would select which activities to polish for the program, and Carrie would write a script linking them together (CD Think Aloud, p. 19). According to the student self-evaluations, students enjoyed being allowed to create their own performance pieces, but most of what they learned or struggled with pertained to social skills and script writing. I saw evidence of differentiated instruction, including use of flexible groupings, student-centered learning, and peer coaching. Furthermore, I think this model of small-group composition showed promise for differentiated instruction of music learning, but fell short for a variety of reasons. Ms. Davis seemed to agree: I felt like it did not go anywhere like the way that I wanted it to go. Looking back on it, I would have started the process… with the music, if I were to do it again. I would start with: think of a product, now think of a catchy song, and let's write the script after. Because we did the scripts first, I feel they were so focused on that, that the music became an afterthought... [later] If I saw them four times a week, for an hour at a time, I would do this project again in a heartbeat… But, because I see them only when I see them, [it is hard] to justify it musically. They did learn a lot, and I learned a lot (CD Think Aloud, p. 18-19). Group composition projects such as these are described as one way elementary general music teachers strive to teach the National Standards for Music (1994) (e.g., Phelps, 2008; Strand, 2006), and I appreciate being allowed to write critically about the lessons learned as Carrie incorporated them. It is my hope that examining and publishing Ms. Davis's experiences will help other music teachers as they strive to incorporate these new ideas.

Differentiation of Music Instruction for Students with Cognitive Impairments

Carrie's school housed the district's elementary program for students with moderate to severe cognitive impairments (CI). These students were served in two self-contained classrooms, divided into lower and upper elementary based on the students' chronological ages. The approximate mental ages of the students served in the upper elementary CI class ranged from about 6 months to about 3 years. Like many music teachers, Carrie felt underprepared to work with this population (Hourigan, 2007; Linsenmeier, 2004; Salvador, 2010): When I started here, my principal said, "Oh, don't worry—just do music therapy with them." And I said, "I don't know music therapy!" "Oh, sure you do." And I said, "Oh, no, no, no…" She [the principal] said, "Just try something. You'll be fine. Just sing about wiping your nose or something." [Later] From the beginning, I was just trying something new every time they came. My first year, I had no idea what they were capable of doing, because I hadn't been given any more information than "Don't worry about it right now, you've got to get to know the whole rest of the school" from their teacher. And I'm like "No, please, give me a little bit… The expectations, at least?" (CD Think Aloud, pp. 14-15).
Students in the CI program came to music for 40 minutes twice a week mainstreamed with their age peers, as well as 25 minutes twice a week with their self-contained class. For this study, I observed a fourth grade class that included several students with cognitive impairments. Zack and Katie both had Down syndrome and were served in the upper elementary CI classroom. I did not ask for any additional diagnostic information, because Ms. Davis was the subject of my study, and my agreement with the school district indicated that the students' information was to remain confidential. Another student in the class, Abigail, was not in the CI program but had severe learning disabilities: "She is still at a beginning level wordwise… [reading] 'the ball is red' is hard for her" (CD Think Aloud, pp. 7-8). This section will describe Carrie's differentiation of music instruction for Zack, Katie, and Abigail. Early childhood approach. When the self-contained CI class attended music, Ms. Davis used an early childhood model that she learned through a Music Learning Theory (MLT) certification course (CD Think Aloud, p. 17). In the MLT early childhood model, instruction is informal (Gordon, 2003). There is no particular expectation for response, and the early childhood music teacher varies the musical content and props according to student responsiveness in order to foster optimal musical development. That is, although the teacher may have a lesson plan in the form of a list of possible activities (songs, chants, movement) and related props (drums, egg shakers, scarves, puppets, etc.), this plan is used as a menu of possibilities to meet the emergent musical needs of individual students when they become apparent. This informal mode of instruction is not typically used in elementary music education. Elementary teachers often have plans that are structured in a certain order, in which activities have a prescribed amount of time, and which include specific learning goals for the day or specific expected responses. In an MLT early childhood music class, children are given constant opportunities to respond through vocalization/singing, movement, chanting, or improvisation, and their ideas are sought for how to structure activities (e.g., How can we move? What color should we paint? What animal can we pretend to be?). Teachers engage individual students in improvised sung, chanted, movement, or percussion conversational exchanges that are structured to foster individual musical development at the level the child demonstrates. Any response is incorporated into improvised musical conversations or even adapted into the musical activity material by the teacher, but there is no praise or other form of evaluation. It is acceptable and expected that some children may simply absorb the musical environment, and there is no "right" response. This child-centered, play-based musical exploration is different from typical elementary music instruction. In most elementary music classrooms, there is an expectation of participation and correct or appropriate response. Many of these differences in instructional style simply reflect the cognitive development of older children, larger class sizes in elementary settings, and the specific curricular goals that are a part of formal schooling.
According to the MLT early childhood model, music instruction optimally would start at birth, so expected responses vary from involuntary vocalizations or movements to purposeful physical or vocal responses, and could even include accurate, recognizable musicking such as moving to a beat or singing (Gordon, 2003). The following fictionalized anecdote synthesizes moments from several observations to allow the reader to “experience” Ms. Davis’s method of informal instruction, with particular focus on Zack and Katie in this setting. Ms. Davis starts singing: “Look who’s here, it’s a friend of mine.” This song incorporates each student’s name, and the student who is named accompanies the song on an instrument. Today, it is bongo drums, and Zack plays first. The instrumentalist also gets to choose how the other students move. Zack wants them to move like a pirate (swinging a bent arm, squinting an eye, and saying “argh” after each phrase of the song). He shows this movement rather than verbalizing; he rarely speaks. His playing on the drum seems random, unrelated to the song. The students are seated in a circle, and the three paraprofessionals are dispersed around the circle, seated on the floor next to students who need the most physical and social assistance. At the end of the song, Zack chooses Maya to have the next turn, and takes the bongo drums over to her. Maya asks the students to wiggle their eyebrows as their movement, and she plays the bongos on the beat. Eyebrow wiggling looks funny and it’s hard to sing and wiggle your eyebrows. The adults giggle along with the students. 140 This song is familiar, and many students sing. Singing abilities vary widely: Anna sings loudly and accurately. Katie drones the words in a speaking-voice monotone. Other students (Austin and Claire) sometimes respond with grunting vocalizations, and still others, such as Zack, are silent. All three paraprofessionals sing and model the movements, and seem enthusiastic even on the eleventh time through the song. Chuck is not singing, and says he is not having fun. He says he wants to go to gym and he is worried he missed it. Ms. Davis continues the activity, and one of the paraprofessionals talks to Chuck about how he has to participate. By now, everyone has taken a turn with the bongo drum, except Austin gives up his turn because he won’t take his hands out of his mouth. In the course of worrying about gym class, Chuck mentioned that he would like to play with a ball. Ms. Davis puts the bongos away and starts “Roll the ball like this,” a song in minor that incorporates a ball as the prop. She did not plan to sing this song today, but it succeeds in pulling Chuck back into participation. Ms. Davis used informal music instruction based upon MLT for her self-contained CI classes. The above anecdote demonstrated her use of student ideas (how to move, whose turn would be next, etc) and incorporating unplanned activities to draw a student back into the group (Roll the ball like this). In her informal teaching, Ms. Davis allowed for a variety of group musical responses, such as singing and movement, as well as individual responses such as playing the drums, improvised sung or chanted “conversations,” and singing when cleaning up. Ms. Davis told me over lunch that she is expected to teach social skills during CI music classes. Therefore, she incorporated an emphasis on socialization goals—learning each other’s names, taking turns, passing things nicely to each other, participation, and following directions. 
This presented some difficulties, because Ms. Davis preferred to adhere to the informal music-making ideal of voluntary participation, but the CI room goals included encouraging maximal participation for each student (CD Think Aloud, p. 17). Ms. Davis had ample reason to incorporate informal music instruction based upon MLT for her self-contained CI classes. MLT's early childhood teaching methods were not simply intended for children under a certain age, but were designed for students in music babble—those who could not yet audiate—regardless of chronological age (Gordon, 2003, pp. 108-111). The immersion activities may be used with students of any age who struggle with matching pitch or finding beat, so they are age-appropriate for this specific group of upper elementary CI students. Furthermore, MLT early childhood music instruction provides a framework for music learning at the musical and cognitive functioning level of these students. The CI students were not only lacking audiation skills, but they were also between the ages of 6 months and 3 years in terms of their cognitive functioning. Several students in the CI class did not speak, and this early childhood approach incorporates and values the responses of nonverbal participants. Although MLT early childhood instructional methods are not specifically intended for elementary-aged special education populations, Ms. Davis is not the only teacher to apply them in this way; there is support for this approach in the literature (e.g., Gruber, 2007; Griffith, 2008; Stringer, 2004). Paraprofessionals. I was struck by the musicality and professionalism of the three paraprofessionals who accompanied the eleven CI students when they attended music. During my first observation of the CI class I wrote, "The paraprofessionals sing well and they are skilled with facilitating good behavior. They have a good sense of humor" (CD Field Notes 4/19, p. 2). I asked Carrie about how she facilitated this: Let's see… I have just pulled [Joan] aside a few times and said, "In music time, thank you so much for keeping behavior [under control], can you model the singing for them, too?" And some days [when she is starting a conversation with another adult], I will just stop and say, "Are we ready?" and she goes—"Oh, sorry." …I've thought, oh my gosh, sometimes… They talk SO MUCH, when it is really just a little here and there. But then, I go in their room, [and] they are CONSTANTLY talking. Like, that is just the atmosphere in there. In some ways, I can tell they are trying really hard to keep that under wraps… Janine, just one day said, "Now, some teachers don't like us to do anything and sit in the corner, some like us to sit with the kids, and some like us to do what the kids are doing, and some like us to help them but not sing, what do you want?" And I said, "Well, here is what I would love." "OK." And she is just so natural… And then, Sharon came in, and the first time she came to my room, I was ready to say, "Hi, this is what we do…" And she said, "Now, you tell me exactly what to do, and if I am doing it wrong, I don't care if I have been here for three months, you bust me on it!" She's been GREAT (CD Think Aloud, p. 13-14). (Clarification based on observation and conversation with Carrie and the paraprofessionals: most of the CI students are nonverbal, so the adults in the CI classroom talk and laugh with one another as they deliver physical therapy, occupational therapy, and speech-language programming or teach life skills such as toileting, self-feeding, manipulating objects, etc.) It seems that Ms. Davis's success working with CI students is due in part to excellent paraprofessionals with whom she has negotiated for a positive classroom environment. The paraprofessionals are trusted partners who are valued for their knowledge of the individual students' physical, behavioral, social, and academic needs.
They are expected to facilitate appropriate musical and social behavior by providing an excellent model. Carrie fosters a collaborative professional environment in which she invites participation from the paraprofessionals and communicates with them about any concerns she might have. Social mainstreaming vs. inclusion. In social mainstreaming, "Students with severe disabilities are included during regular education… with the goal of providing social interaction with nondisabled peers rather than mastering academic concepts" (Adamek & Darrow, 2005, p. 50). That is, material presented during music instruction might not be accessible to the student with special needs, but music learning is not the goal of social mainstreaming. In contrast, inclusion entails "the [music] teacher collaborat[ing] with special education experts for adaptation ideas and support" (p. 50). In an inclusive model, music activities and curriculum are adapted so that students with special needs also progress musically. When Zack and Katie attended music with their fourth grade class, whole-class singing and some other whole-group instructional activities, such as playing boomwhackers on chord roots, proceeded without observable differentiation other than the paraprofessional who accompanied them (e.g., CD Field Notes 4/21, p. 2). However, during recorder instruction, Zack used bells tuned to chord roots or melody (depending on the song being played) and harmonized with or played the song the rest of the class was playing (e.g., CD Field Notes 4/19, p. 1). He played the bells by striking a button on the top of the bell with the palm of his hand, resulting in a pleasant, mellow sound, and accurate playing was facilitated by color-coded notation and the assistance of a paraprofessional. Ms. Davis stated that neither Zack nor Katie had the fine motor skills or academic capability for the recorder playing or music reading expected from the rest of the class, which was why Zack was playing bells and using alternative color-coded notation (CD Think Aloud, p. 9). Despite this acknowledged lack of prerequisite skills, Katie played recorder. I noted that she was often off-task, and when she "played," she was clearly not accurate in her fingerings, or even covering any holes (CD Field Notes 4/19, p. 1). When I asked why Ms. Davis had adapted her instruction for Zack and not Katie, she told me that Katie's parents did not want her to do anything different from her peers in music (CD Think Aloud, p. 10). Zack's parents responded differently to the suggestion of an alternate music curriculum: They were thrilled. I think partially they didn't want to have a recorder at home, being hooted on… They asked… "Is there anything else [other than recorder] we can do?" …I said, "Well, actually, yeah. I was thinking of a couple of different things that we can do...
I am not just going to have him wave a stick and pretend he is conducting, if that is what you are worried about…” And they [said] “Yeah, do whatever he can do to be successful.” And then at the beginning of the year, they voiced some concerns to his [CI] teacher… “We are a little worried about the program [performance] when it comes up, is he going to stick out like a sore thumb?” …And [the teacher] said, “Well, Ms. D. will make sure that it seems like a natural blend.” And then our handbells were on back order, and all we had was his set… And I had called them and said, “I don’t know what to do… And they said “No! You know what… He is loving the bells, and that’s his special thing…” …I think there are so many battles that they [parents of children with special needs] have to fight. It seems like there is a spectrum of acceptance. Sometimes… I think that with Katie’s parents, [they feel like] she can hold a recorder—she can look like everyone else… (CD Think Aloud, p. 10-11). 145 Often, the parents of a student with special needs control what services their child receives, including whether an adapted curriculum is provided, regardless of the teacher’s opinion regarding the educational soundness of this decision. In this case, Ms. Davis must and did abide by parents’ wishes. I mentioned another fourth grade student, Abigail, in the course of our think aloud, because I noticed that she rarely played her recorder in class (e.g., CD Field Notes 5/3 p. 3). When I asked about it, Carrie reported that Abigail struggled with fine motor coordination and a learning disability that affected her music reading skills (CD Think Aloud, p. 11). Abigail qualified for special education services, but her parents did not want her to be labeled: [Since kindergarten] they’ve refused… specific like pull-out things. It was just this year that she started to be able to go to the resource room. Originally [her teachers] wanted her in the resource room for a half day when she was younger, to try to [help her]… They are still trying to crack the code as to what [is going on]... But, [her parents] have refused the extra help. So, she just has had a little bit of pull out help and a lot of adapting by the classroom teacher. [Her parents] still want her tested at the level of the other fourth graders (p. 9). When Ms. Davis suggested a possible alternative to playing the recorder, Abigail’s parents stated they wanted Abigail to do the same as everyone else in music class. I didn’t want to put recorders in her hands. I wanted to have her play handbells with Zack. I had this all planned out. She was going to be his special helper, so it wouldn’t look like she was not able to do recorders… but that she could follow. I want her to feel successful so she’ll keep trying. But her parents [said] “She needs to play recorder, and I don’t care if... [she is not ready]” Well, OK. That is not what I feel is best for your child, 146 but… Even with a ton of extra [help]… she’d come down at lunch, “I want help on my recorder.” We… this is about as far as she has progressed. [CD plays example—fingers moving, but not on the holes, no tonguing, just puffing air with some squeaks] (CD Think Aloud, p. 7) Ms. Davis wanted to differentiate instruction for her students with special needs and even had ideas for how that could be accomplished, but she was not allowed to implement her ideas with all students. Zack seemed to be thriving musically while learning to play melody or chord roots on bells with alternate notation. 
“[We started] with the first note of every phrase… and then with “Hot Cross Buns” he started to fill in some of the other pitches himself, and they were correct” (CD Think Aloud, p. 17). He played “his part” on bells for his CI class and was glowing with apparent pride (CD Field Notes 5/17, p. 6). A few times, Ms. Davis gave Zack the bells for chord roots to a new song without his color-coded notation and asked his paraprofessional not to intercede, to see if he could hear where he needed to change pitches. He was inconsistent—sometimes it seemed that he heard, and other times, his playing seemed random (e.g., CD Field Notes 4/19, p. 1). Observing Zack and Katie in music with their fourth grade class and in their self-contained CI class invited comparisons of their musicking in these settings. “[Katie]’s not singing as much with this class as she does with the CI class. This is musically beyond her readiness, but her lack of vocal effort might be evidence that she is definitely aware that the sounds she produces are not the same as those around her” (CD Journal, p. 5). With Katie especially… [in fourth grade music] she has had a lot of shut down behaviors before… Where she just… she seems to just need to shut down, but she is still watching. She just absorbs for… depending on the activity two classes to two whole months. And then she jumps right in as if she has been doing it all along, which is fine. But, with the CI class, she now has the role of mama hen. She is one of the older ones, and especially at the beginning of the year, it was so fun. “Now, you sit here, and do this…” (CD Think Aloud, p. 16). This difference in Katie’s behavior in the two settings is corroborated by my field notes. For example: “…in this class [CI], rather than being disengaged, Katie participates and smiles. It seems fun for her to operate at this level” (CD Field Notes 5/17, p. 5). Katie’s social and musical behavior was withdrawn and off-task in fourth grade music, where she often engaged in behavior such as playing with her recorder, asking for tissues, and going to the bathroom (e.g., CD Field Notes 4/28, p. 2). Zack was essentially nonverbal, and his participation levels in his self-contained and mainstreamed settings did not differ according to my observations. Ms. Davis commented, His participation is more group-oriented during CI. More…. Almost oblivious of what else is going on half the time with the fourth graders. Yet, at the same time the other half of the time, he knows he has a captive audience with them [the fourth grade class] and they are so loving and encouraging… He will do something again and again to hear that applause, or to get that “Good job, Zack!” (CD Think Aloud, p. 16). Inclusion in music class may be more beneficial to students with moderate to severe cognitive impairments than social mainstreaming. Katie took a leadership role in CI and withdrew in fourth grade music, while Zack’s behavior was similar in both settings. These differences could have been the result of personality or other factors. However, Katie was physically and academically incapable of many of the tasks she was asked to achieve in fourth grade music, and her off-task or withdrawn behavior may have been a response to that. Because Zack’s curriculum was modified in fourth grade music, the musical challenge was appropriate in both mainstreamed and self-contained settings, and he seemed to be learning, achieving, and comfortable both with his age peers and in his self-contained class.

Summary.
In today’s diverse school environments, music teachers are expected to serve students with an increasingly broad spectrum of learning needs (Adamek & Darrow, 2005). Ms. Davis, like many elementary music teachers, did not feel prepared by her undergraduate coursework to meet the music learning needs of students with moderate to severe cognitive impairments (Hourigan, 2007; Linsenmeier, 2004; Salvador, 2010). After struggling and experimenting, she adopted an informal approach based on MLT techniques for teaching self-contained CI classes, which seemed to offer appropriate musical challenges and to elicit musical responses and behaviors. The musical modeling, teaching skill, and expertise of the CI paraprofessionals contributed to Ms. Davis’s success in working with these students. Differentiation of instruction for students with cognitive impairments might involve modification of curriculum when they are mainstreamed for music with their age peers. Ms. Davis’s experience seems to indicate that, if parents allow this differentiation, it may be beneficial to individual students’ music learning.

Constructivism and Differentiation

Much of the differentiated instruction that occurred in Carrie’s classroom may have been due to a constructivist approach, in which she functioned as a facilitator. Because constructivism is a continually evolving, broad, and diverse philosophical construct, an extended discussion of it is outside the scope of the current study. However, I will briefly describe a few of the main theorists and tenets of constructivism. Jean Piaget, Lev Vygotsky, John Dewey, and Jerome Bruner laid the philosophical foundations for current constructivist thinking, and different subgroups within constructivism (i.e., radical constructivists, social constructivists, etc.) emphasize each of these theorists more or less depending on their assertions. The essential underlying principle of constructivism is that new information cannot simply be handed to learners; learners must actively take it in, “constructing” new knowledge within themselves. Because of this philosophical principle, classroom applications of the various iterations of constructivism tend to focus on the following: the learner as an active participant, purposive and interactive with the environment; concepts as best learned whole, rather than in isolated parts; a teacher who facilitates learning by listening to students and offering meaningful problems to solve; and a cooperative, Socratic, interactive teaching and learning style which values multiple perspectives (list condensed from Chen, 2000). Although she did not explicitly use the term “constructivism,” Eunice Boardman’s Generative Approach to music learning (Boardman, 1988a, 1988b, 1988c, 1988d) is one application of constructivism in music education. Ms. Davis also never used the word constructivism, but this approach was evident in how she taught and how she described her teaching. You could sum up my philosophy, what they need to get out of music… I want my students to leave feeling like they can make music. They can worry about all the technicalities and the labels and things later on in life. At this stage, I want them to feel like they are musicians. If they are not feeling that, they are going to close off… They won’t be as open to further musical experiences (CD Initial Interview, p. 6). In her classroom applications of this “philosophy,” Ms.
Davis utilized constructivist approaches such as presenting music as holistic activities to be facilitated, rather than as sequential lessons to be taught. Perhaps as a part of her philosophical stance, Carrie also viewed the degree to which a student chose to participate in music education as elective: I know a lot of people who appreciate music, but who have no clue about anything musical... Because I have students who don’t get to choose to come to my class, that have to come whether they like it or not… I am happy if they leave feeling they have enough tools to make music. Yet, you know, I don’t necessarily need them being able to identify subdominant function in music by the time they leave fourth grade (CD Initial Interview, p. 1). Carrie allowed students’ individual interest to motivate not only their participation but also the amount and trajectory of music learning that occurred. Ms. Davis’s constructivism seemed to have a direct effect on differentiation of instruction in her classroom.

Teacher as facilitator. Carrie acted as a facilitator in her classroom. She relied on questioning, offering strategies, fostering student leadership, and enabling a problem-solving approach to encourage students to think and figure things out for themselves. One of the boys takes private drum lessons and has brought in a drumset to help figure out how to accompany the “Glam-in-a-can” jingle. He demonstrates his “punk rock” drumbeat and “jazz” drumbeat. Students in the class try singing the can-can theme with each option. A vote between the two is a virtual tie, and the drummer suggests they use both and trade back and forth between the patterns. They try this and the class agrees this is a good solution. Ms. Davis’s role in this process was minimal—she asked the boy to bring in his drums, allowed time for the process, suggested an introduction on drums to set tempo before the singing started, and oversaw the voting process (CD Field Notes 5/17, p. 1). Rather than simply telling students what she wanted them to know, Ms. Davis taught indirectly, using questions and requiring the students to problem-solve. She rarely asked right/wrong questions, instead asking for pros/cons, strategies, input, ideas or other conceptual feedback. I found as a student, when I was growing up… and the teacher said, “No, this is wrong,” or, “oh that’s right,” that was it. But there is always more. So for the kids who are ready to go on to more, you can challenge them to create more with whatever they are doing. And, for those who need more help, you can phrase it in a way that empowers them to keep trying, as opposed [to] shutting down and saying “well, I can’t do it, so why try?” (CD Initial Interview, p. 1). Ms. Davis’s teaching style incorporated offering strategies but did not require that students use them. Students were told to use the ideas that worked for them. For example, in fourth grade recorders, Carrie told the students to “look for one thing and fix it” (CD Field Notes 4/28, p. 3). They played the song, she chanted: “Plan your fix, do not speak, ready again” and they played again. In between “planning fixes” Ms. Davis had students rocking to macrobeats, using solfege, singing and fingering, and singing note names. Different students chose to just play or to try the suggested strategy. The accuracy of the whole group’s performance improved markedly as a result of this exercise.
However, just as when teachers use a direct instructional model, the impact of this approach on individual learning seemed to vary according to the learner. I could see some students who did not try any of the suggested “fixes” (p. 4). This may have been because they were already playing accurately or because they did not want to improve their performance. I saw some students who tried all of the suggested strategies, even though it looked as though they were already able to play accurately. Other students, despite trying some or all of the strategies, still did not appear to play accurately. A few students (including Abigail and Katie as well as one or two others) simply did not play. Without hearing individuals, it was impossible to ascertain the relative progress of individual students, although some progress must have been made, since the sound of the group improved. Ms. Davis used a similar approach when teaching students to read notation. She presented notation to students and asked them to listen to the song while looking at the music to find patterns. As a class, they found places where the notation was the same as other places. Carrie asked them to think about patterns they knew from playing other songs, echoing patterns, and rhythm pattern reading. They then used this previous knowledge and the patterns in the notation to “figure it out” (CD Field Notes 5/3, p. 2). I noticed that the students who took private piano lessons or had other music instruction outside of school could typically be relied upon to provide information, if needed, and students seemed accustomed to working with each other—asking one another questions and offering each other help. As an observer, I was continually impressed by Carrie’s ability to gently rebuff students who were seeking more direction, her polite refusal to coach students toward an answer, and how she declined to give ideas to students who were struggling. Eventually, most students either worked out a solution on their own or took advantage of the expertise of another student, although a few students seemed to disengage. I have a lot of bite marks in my tongue! [we both laugh] I guess I kind of stumbled across that maybe 5, 6, 7 years ago at some point… We were doing a class composition, and at the end of class, I stopped and said [to myself], “Wait a minute, those were all my ideas!” …I was letting them come up to the piano and play a few notes, and try to find a melodic up or down that they liked. And they would get something that wouldn’t quite fit into the tonality. So then I’d modify it, and the kids would go, “Oh, yeah, that!!!” And I was thinking, “Oh, I’m using their ideas, I’m just making it better.” But, it was MY composition and not the kids’. They were still proud of it and felt like it was their own, but it wasn’t really theirs. So, I’ve tried to step back and observe, and see… It is kind of fun to watch, too. To see if they really do arrive at an answer that fits the description of what their goal is supposed to be. It tells me a lot more about their learning, when they completely get there, and on their own. If they can do that, then “aha!” If they are totally far away… Everything inside of me is going… “No no no that’s not right, see if you can make it sound better.” But, to let them go through that process themselves, it seems to be a stronger reinforcement of learning, and a stronger picture for me… A snapshot of their thought process, which is more important than the product (CD Initial Interview, p. 2).
One of the goals in a constructivist music classroom would be to foster self-motivation and learning independence so that students might be more likely to undertake projects or find things out by themselves (Chen, 2000). When I asked if some students might prefer more specific guidance, Carrie replied, Definitely. And I try to incorporate some of that… there are days where I will sit down and say “OK, kids. I am going to give you a lot of rules to follow in this activity, because we know some friends really like to have a lot of rules to follow.” So… You didn’t see any of that while you were here (CD Think Aloud, p. 2). Essentially, Carrie facilitated music learning by providing strategies and allowing students to use them (or not) and by assigning musical problems to solve and staying out of the way as students struggled through them. Because I did not hear responses from individual students, the effect of this facilitation on the music learning of individual students was difficult to ascertain. Positioning herself as a facilitator who allowed students to work through problems resulted in a classroom atmosphere that might seem chaotic to some teachers. When I asked her about this, Carrie replied: It seems chaotic to me, too. It is totally against everything that I am comfortable with. Basically, I am a very type-A kind of personality. I want everything lined up and in order. But my first couple of years teaching, I noticed that kids just weren’t being very creative. Then, I went to some workshop… We were working in small groups, pretending we were kids. And there was a group that I remember they—on purpose—started to do something a little off-task. But, it was still musically related. I think they were curious to see what she [the workshop leader] would do. And she somehow just gave them enough guided prompts here and there, that she was able to incorporate that into what they were doing. And we were just all going “whoah!” And then, because she hadn’t said: “No, stop it, do what I asked you to do.” Theirs ended up being the strongest [project], because they could incorporate… Sometimes with kids they seem to need that, “Let me work around it,” or “Let me see what others are doing,” because their ideas don’t flow as readily… (CD Think Aloud, p. 1). Student needs such as those she described resulted in students floating from group to group to see what others were doing and a high amount of classroom noise as students experimented and talked. Ms. Davis frequently did not redirect behavior that seemed off-task (e.g., CD Field Notes 4/19, p. 3) because she believed that different students’ learning processes required a variety of behavior. Carrie’s role as facilitator transferred most of the responsibility for classroom management onto the students. In my field notes, I made frequent reference to students solving their own problems within a group (e.g., CD Field Notes 4/19, p. 2) and calming down extraneous noise from other students so class could continue (e.g., CD Field Notes 4/21, p. 2). When Ms. Davis did intervene, it was typically brief and (1) subtle and individual, or (2) whole-group and logistical.
For example: (1) Think ADHD children who did not take their meds, in class at 2:30 PM and completely distracted by noise… I bite my lip, fight for the right words… sidle up to that child inconspicuously, ask them quietly if there’s a spot they see where they’d be able to concentrate better, and give them more space to wiggle beyond the boundaries because that’s what they need at the moment (CD Journal, p. 13). Two boys have been pretty poorly behaved—talking to each other and poking each other with their recorders during a class discussion. The whole class is moving from the circle to sit near the board. During the brief period of chatting, walking and settling back in, Carrie walks over to the two boys and simply says, “Take this opportunity to solve the problem.” They do not sit together up at the board (CD Field Notes 5/17, p. 3). (2) Ms. Davis stops the group discussion of which ideas should be incorporated into the commercials as jingles. She says, “Ideas are not bad or good… Ideas can be good in different ways.” She asks the students to be careful of how they give comments and feedback. “How would that feel if someone said that to me? I have to think about what I am saying” (CD Field Notes 5/12, p. 2). The third grade class has chosen dancing the Virginia Reel as a for-fun break activity instead of working on their scripts. As the dance progresses, some students start to be picky about touching other students, primarily based on gender differences, but also some personal issues. (That is, some people of the opposite gender seem to be OK but not others). As the reel continues this behavior escalates, with some boys refusing turns to reel. I was surprised that Carrie did not intervene, although I think she hoped the students would take charge and correct the problem. After a few minutes of dysfunction, she turns off the music and tells the students they need to respect one another and the reel. She starts the music again and the problem is resolved (CD Field Notes 4/21, p. 4). I often commented in my notes about how well students managed challenges without Carrie’s intervention, such as getting the needed materials for a project (e.g., CD Field Notes 5/26, p. 2), putting materials in their binders (CD Field Notes 5/3, p. 1), and deciding who would get a turn with an instrument (e.g., CD Field Notes 4/19, p. 2). However, just as some students preferred more direction on projects, I wondered if some students would prefer that Ms. Davis were quicker to intervene or more direct in her classroom management style. For example, I noticed some exasperated, frustrated faces, voices, and words in third grade as some students continually strove to keep the class on track, particularly as the performance date loomed closer and closer (e.g., CD Field Notes 4/28, p. 4).

As a part of her role as facilitator, Ms. Davis strove to be sensitive to the psychological/sociological needs of her students. This sensitivity sometimes resulted in changing her lesson plans. The third grade students came straight to music from their gym class, and one hot, humid day they ran a mile as part of their fitness evaluation for the year (CD Field Notes 4/21, p. 3). I basically threw everything out the window plan-wise for the day when the kids came in from running that mile. Wow. Even Riley, who is 100% athletic and full of energy… came in and sat down silently, red-faced, and seemed down-right lethargic. On went the fan, out went the lights, and I had them listen silently to the Libera CD.
My intent was to listen for a minute or two, let them relax, and then go on to [the planned activity]. However, by the end, they remained quiet (a huge sign for this class that something is not right) and content to lie on the floor. To give them more time to re-charge, I decided on the spot to have them listen to the piece again, this time listening for timbral things—one voice vs. many… solo vs. unison—this wound up requiring further listens. Musically, I learned very little about the class today. (CD Journal, p. 7). Only 17 students were in the classroom—several were in the office with twisted ankles or cramps. The students trickled into music class, and some had clearly been crying (CD Field Notes 4/21, p. 3). By the time they seemed to be perking up and most of the class was finally in the room and over their “injuries” there were only a few precious minutes remaining. Given their mental state, I decided to ditch everything and just improvise what to do until the end. I had no real motive other than filling time. Poor instruction, yes, but sometimes you just have to cut your losses. I came, I tried, they weren’t at a point to receive new learning, I adjusted, they remained unready, I adjusted more, they started to recoup, time to go. Win some, lose some (CD Journal, p. 8). The students had work to do on their scripts and the music for their mini-commercials. However, Carrie noticed their exhaustion and stress and changed her plan to something that would soothe them and allow them to rest. This excerpt also illustrates how critical Ms. Davis could be of her own instructional choices; she might not have given herself enough credit for the music learning that may have occurred from students listening and evaluating aspects of an unfamiliar recording. Ms. Davis’s role as facilitator also led to changes in lesson planning based on music learning needs. I had to do reactive teaching at that point in time… As rhythms were the biggest challenge for this, we hopped up to keep the macrobeat and microbeat while hearing the patterns in the song, repeating the patterns in the song, and reading the patterns in the song with no help from me. That was the instant plan (CD Journal, p. 11). On another occasion, Ms. Davis told me she felt like the kids “just need to play,” and she was not going to work on their recorder performance material that day (CD Field Notes 4/21, p. 1). The lesson that followed included learning a new song (“Sandy Land”), singing melody and chord roots in small groups, playing tonic and dominant chords on boomwhackers, learning “Sandy Land” on recorders, and playing chord roots on recorders, including learning a new fingering (low D). This was a strong music lesson that Carrie improvised because she felt the students were tired of working on their performance material. Carrie was reflective about her practice and often critical of herself and the choices she made during improvisational teaching based upon the immediate needs of her students: I switched gears with the rhythmic work a little bit, but, reflecting back, I should have entirely scrapped the piece, gone on to another activity, and then come back to it. Instead, I foolishly decided to plow ahead. Never mind the signals of obvious “I’m done” from so many of the students (Hear that buzz? See those restless movements? See the dueling recorder rods? I did, but I chose to ignore them)… In retrospect, this class session was two thumbs down. I gathered very little to no musical insight into my students.
My students became disengaged probably about 20 seconds into [the activity], yet I kept going because I couldn’t think of what else to do…. I know I missed a lot of behavioral clues that normally would have tipped me off to switch gears, take a different route, ditch things altogether… (CD Journal, p. 12). Ms. Davis’s skill as a facilitator and her improvisational teaching were evident in the third grade group composition projects, which were organic, evolving, and student-driven. With third grade right now, because they are working on kind of composing their own thing, my goals are quite open at the moment, seeing where we need to go. Like after today, I want to steer them toward hearing a sense of finality in a piece. Bringing them back to what they already know about form… A lot of their experimentation today was just kind of random, with no pitch center. I want to try to steer them toward getting some sense of fixed tonality in their piece… A lot of it was, they were just so excited with those instruments that they rarely get to use. So that’s going to be one of my right-now goals (CD Initial Interview, p. 7). Carrie’s improvisational teaching was sensitive to students’ psychological and sociological states, as well as their music learning needs. It also sometimes resulted in stops and starts and incompletions of the music learning it was intended to facilitate. For example, the coaching toward a sense of pitch center and finality mentioned above never occurred, and the experimental melodic material from that day’s work was never revisited (CD Field Notes), perhaps because of the time pressures of the upcoming performance. In another example, the final version of the “Tom Izzo Show” skit did not have any music (CD Field Notes 5/24, p. 1). However, I recorded a jingle that a child composed in group work and performed for the class (CD Field Notes 5/12, p. 2) that would have functioned well in the performances (Figure 5.1).

Figure 5.1. Tom Izzo Jingle

Ms. Davis’s role as facilitator allowed her students to take the lead in classroom management, group dynamics, and music learning. The extent to which individual students benefited from this approach in terms of their music learning may have varied based on their personality and prior music experience. Teaching through questioning and problem solving might encourage higher-order thinking about music. Most of the effects of teacher-as-facilitator seemed social in nature—encouraging self-monitoring, self-control, leadership, and self-motivation.

Differentiation inherent in Ms. Davis’s practice of constructivism. In the classes I observed, some differentiation of instruction was inherent in the way that Carrie applied constructivism. Ms. Davis’s use of choice allowed students considerable freedom to determine what they worked on and how they approached their goals. Flexible groups utilized students with dissimilar social and musical backgrounds as teachers and leaders and differentiated by learning style. Use of centers was one way Ms. Davis allowed students a variety of pathways to interact with music information and demonstrate what they had learned. Ms. Davis allowed her students considerable latitude to choose how they would work and what they would work on. For example, during one group composition project, I noticed a student circulating around all the groups (CD Field Notes 4/19, p. 1). He did not appear to be working on the project but seemed off-task and social.
However, after a few minutes, he returned to his group, sat down, and offered some ideas. Ms. Davis chose not to intervene: That’s just kind of what I have observed just become part of his style. He just needs to make sure that everyone else is doing it, so… I think in some ways, it has proven to be more effective to let him wander first. At times, I have tried: “All right, go back and get to work,” and he can’t focus then. I think he has to get out all of his people issues before he can get to work (CD Think Aloud, p. 1). I asked Ms. Davis about allowing individuals and groups of students to choose how they approached a task, and she replied: “As far as individual learning styles, that gives kids the freedom to use their best method of learning, and problem-solving to get to the solution. So that, to me, is a huge piece of individualization” (CD Think Aloud, p. 5). Carrie also allowed choice regarding participation, not only whether students wanted to participate but also the form participation would take. One day, a group of third grade students demonstrated melodic ideas on xylophones, but only two of the three boys played the instruments. When another student said they should all play, the boy said he did not want to “…because I su—stink” (CD Field Notes 4/19, p. 3). Ms. Davis said she did not agree (that he stinks) but that she would not require him to play. In a fourth grade lesson, students were allowed to choose whether to sing with Ms. Davis as she taught a new song, to sing in small groups, to sing melody or harmony, or to play boomwhackers, sing, and/or play recorders on melody or harmony (CD Field Notes 4/21, p. 1). I did not notice any students who opted out of any of these activities other than singing in small groups. The students also made sure that the recorder, boomwhacker, and singing parts were each represented without Carrie’s intervention. Students also could choose their own level of challenge in recorder belt testing (CD Field Notes 5/3, p. 2). Testing to earn different colored “belts” tied on their recorders to reflect increasing levels of skill took place during lunch and recess and was voluntary. Student choice also was reflected in how long activities lasted. For example, Ms. Davis told students they would have 15 minutes to write their self-evaluations, but the class actually wrote for 35 minutes because nearly all the students were quietly working that whole time. Even after that amount of time, six students chose to continue writing in the hall while Carrie taught a new activity. Sometimes the choices made by students were not as positive. During one whole-class compositional process, some kids were lying down and seemed disengaged (CD Field Notes 5/5, p. 4). Other students were braiding one another’s hair. These students occasionally contributed an idea, but mostly the composition proceeded without them. Students working in a variety of groups resulted in differentiation of instruction by learning/work style and sophistication of response. Ms. Davis nearly always allowed students to choose their own groups, “[b]ecause they know whom they work well with, and that class, in particular, seems to migrate toward the people who think the same musically” (CD Think Aloud, p. 5). Student choice in groups seemed to result in different amounts of music learning for different students.
In one composition activity, I saw responses ranging from exploration (i.e., just pounding the bars or glissandos), to two children who worked together to create something replicable but quasi-improvised, to another group who negotiated a formal composition: a C major scale with a rhythmic motif in parallel and contrary motion, with an ending coda (CD Field Notes 4/19, p. 3). This difference in the sophistication of responses may have been a result of differing levels of music readiness and also might have reflected varying levels of effort or attention. Although I was not present on a day Ms. Davis used centers, she described centers in a think aloud. It sounded as though centers offered a variety of pathways to music learning and also a number of ways that students could express their music achievement. Ms. Davis (Ms. D) stated that various centers were available different days throughout the year and described a sample set of centers from a day when she was assessing recorder achievement: …we had a warm up and play with Ms. D. center, we had a center where they were practicing their recorder piece that they would be being assessed on, together, but they each had a different job to do, they had to rotate. There was someone, they were practicing their conducting, so one person had to bring the other players in. They were checking for fingerings and just doing little brush-ups and things. There was a center where they were given a new piece of [notated] music to decode, together, to figure out on their recorder, to see if they could figure out what song it was… It was fun to watch—to get to that point “Oh, that’s this song!” For them to figure that out. We had--I call it our games station, I have a couple of musical games where it’s like memory with pitches… reinforcing that notation. We had a power point game going over here with recorder fingerings—it was skill day. And over there they were inventing their own games with rhythms (CD Think Aloud, pp. 4-5). Centers-based learning allowed students to work in small groups on a number of tasks with a variety of music learning requirements, modes of expression, and levels of difficulty. Ms. Davis’s practice of constructivism included several embedded methods of differentiating instruction. Students were allowed to choose their degree and method of participation. Students chose different groups on different days and were therefore exposed to diverse work styles and levels of background knowledge. Ms. Davis designed centers to encourage students to interact with various ways of learning about music and expressing their music achievement. Differentiation of instruction by learning style was a thread that united these subthemes. Addressing different learning styles was mentioned in each of these three contexts and also was evident in Carrie’s varied approaches to whole-group instruction.

Cooperative, collaborative learning atmosphere. The main effect of Carrie’s philosophy and teaching style was a cooperative, collaborative learning atmosphere. Students got their own band-aids without asking (CD Field Notes 5/26, p. 1), policed their own level of talking (e.g., CD Field Notes 5/12, p. 1), helped one another (e.g., CD Field Notes 5/24, p. 1), and shared ideas and critiques (e.g., CD Field Notes 5/19, p. 1; 5/17, p. 3), all typically in a harmonious, happy manner. Some of it may be the school climate.
Teachers ALWAYS greet me in the hall here, and students who don’t know me often say “hi” in passing… There is more sense of comfort in sharing ideas and also more self-regulation than I have seen in other settings. Perhaps it is Carrie that causes this. She allows the students to talk more, with her and with each other. She encourages them to take leadership and to solve their own problems. The students exhibit caring interactions with one another, and Carrie models caring interactions with them. Today as the third graders were entering the classroom, a girl was taking off her sweatshirt and threw it to the side. The zipper hit another child in the face. She apologized to him, and went to pick it up to put it where she had been trying to throw it. The situation was resolved on its own. I have been in other settings where this would have led to an altercation that required teacher intervention (CD Field Notes 4/28, p. 1). Ms. Davis’s laissez-faire management approach appeared to result in different levels of music learning for different students. It also fostered a sense of collaboration. For example, students would perform their group work for one another and offer feedback without being asked to do so (CD Field Notes 4/19, p. 2). Students also were subtle in assisting one another. In fourth grade, a girl asked “Are we on Zippy Toad Slide [one of the recorder songs]?” and Ms. Davis replied, “No, Big Boing Theory” (CD Field Notes 5/5, p. 1). I could see the questioner’s music, and she was on the correct song. However, the two boys next to her had been on the wrong page and were looking very puzzled. She seemed to ask the question for their benefit. Collaboration and cooperation were apparent especially in students’ treatment of those with special needs. Abigail often withdrew or played with her recorder when her level of frustration with learning recorder got too high (CD Think Aloud, p. 7). However, one day another student noticed and helped her with a new fingering by physically placing her fingers on the recorder (CD Field Notes 4/21, p. 2). I do not know if this resulted in Abigail mastering a new skill, but it appeared to increase her level of participation and apparent enjoyment. In a similar situation, the student sitting next to Katie pointed with her finger for Katie to follow in the music (CD Field Notes 5/3, p. 2). I do not think that simply pointing in the music made it possible for Katie to read it, but her level of participation and positive affect increased. I also observed students helping a substitute paraprofessional find the correct bells for Zack to play (CD Field Notes 4/28, p. 2). On another occasion, Zack’s paraprofessional was talking to Ms. Davis about coding his music (CD Field Notes 5/3, p. 1). A girl from the class walked Zack to the circle, and he tried to hug her. She firmly said, “Zack, no hug” and touched his outstretched arms in a way that held him back (both voice and touch were gentle and appropriate). He hugged her anyway by ducking under her arms. She patted him on the head and stepped back. Another girl helped her disengage from him and get him seated between the two of them. Allowing students to handle their own problems seemed to result in some excellent solutions in these and other scenarios. Some of this collaboration and cooperation seemed to result from Ms. Davis’s acceptance of behavior that might seem off-task and from her persistent solicitation and apparent appreciation of student ideas. One day, a student arrived 15 minutes late.
Other students greeted her verbally, and one girl got up, gave the late student a hug, had her join her group and started to tell her what they were working on (CD Field Notes 4/28, p. 4). If Ms. Davis had intervened to stop the greetings and given directions herself, this opportunity for cooperation would have been lost. In another example, one of the raps a group composed contained a loud raspberry sound (blowing with the tongue protruded). At the time of composition, Ms. Davis simply accepted this and allowed the students to teach their rap to the rest of the class with that sound (CD Field Notes 4/28, p. 5). The following week, some students initiated a discussion about the raspberry sound (CD Field Notes 5/3, p. 4). They did not want to make that sound in a performance. They asserted that their parents would not like it, that the raspberry is not respectful and is a sound that some kids get in trouble for making. The students negotiated a satisfactory compromise, and all of this occurred with very little guidance from Ms. Davis. However, this negotiation and consensus building came at a cost. The whole class discussed nearly every decision that was made as the third graders designed their performance, from who got what part (CD Field Notes 5/3, p. 4), to which music went with what (CD Field Notes 5/5, pp. 3-4), to what everyone should wear and what props would be used (CD Field Notes 5/24, p. 1). This ensured that the performance truly was “theirs” and also resulted in less time for making music. Also, the cooperation and collaboration were not without conflict, particularly as the performance deadline approached. One altercation (a student making a face while another one was singing) derailed the entire class for nearly ten minutes (CD Field Notes 5/12, p. 2). Ineffective leadership from a student “director” led to considerable frustration, some name-calling, and even some pushing (CD Field Notes 5/19, p. 2). The group asked for Ms. Davis’s intervention several times, and she did assist, but only briefly as she was trying to facilitate five groups that day. In fourth grade, a discussion regarding experiences with a substitute teacher took nearly 25 of the class’s 40 minutes of music time (CD Field Notes 4/28, pp. 1-2). My plan today was to get students’ reactions to our substitute teacher, [lists four other items on plan]. What resulted was a longer than anticipated review of the sub’s job… Followed by an unplanned review of recorder technique [because] part of the students’ beef with the sub was that she wanted them to play with correct hand placement (CD Journal, p. 9). Ms. Davis seemed to want the students to feel their opinions and wishes were valued and to foster a sense of ownership of the class climate and curriculum, sometimes resulting in diminished music learning because of the time devoted to discussion of non-music topics.

Summary of constructivism and differentiation. A constructivist educational philosophy seemed responsible for much of the differentiation in Ms. Davis’s classroom. She primarily acted as a facilitator, teaching through questions and by setting up problems to solve, and transferred much of the responsibility for classroom management and learning onto her students. Carrie improvised new lessons or instructional material when she felt that students needed scaffolding or they were socially unprepared for music learning. Ms.
Davis offered students choices regarding their level and type of participation, how they approached assigned tasks, and what kind of classroom climate they created. Some of these choices fostered differentiation of instruction by learning style or response mode, and these types of differentiation were also provided through use of centers and flexible grouping. The main result of Ms. Davis’s role as facilitator and her constructivist philosophy seemed to be a cooperative and collaborative learning atmosphere, which fostered some differentiation of instruction/music learning. Students interacted with each other and with musical material in a generally kind, cheerful, and thoughtful manner. However, sometimes working toward the learning environment she sought may have paradoxically resulted in less music learning, as discussions and consensus building meant a considerable amount of time in music class was spent talking about non-musical topics rather than musicking.

Chapter Summary

Carrie Davis took considerable risks in participating in this project during a busy time of the school year. I observed teaching that she felt was not her best, such as when she drilled recorders. I also saw her try out something unfamiliar—facilitating small-group compositions for performance. I am grateful that she shared these teaching moments with me, and allowed me to analyze them and report my findings. I wanted to see real-life teaching, and I have worked diligently to honor her participation with an honest portrayal. Ms. Davis reported use of a variety of assessment methods in her typical teaching style. She used PMMA twice a year in most of the grades she taught, graded students on report cards twice a year, and used other formal assessments such as note recognition tests. She advocated a model in which assessments were used to inform instruction rather than as a way to grade students. Carrie felt that assessments needed to be of individual performance in order to be useful, and felt that the number of students she taught and how infrequently she saw them made this difficult. Information on Ms. Davis’s assessment practices was difficult for me to triangulate, because I did not observe her typical teaching. During the observation period, Carrie’s third grade students wrote their own mini-musicals for their end of year performance. These projects were undertaken in flexible groups and resulted in various work styles, combinations of background knowledge, student leadership roles, and levels of response sophistication. Sometimes, student leaders emerged, and other times they were assigned. The projects were student-centered in that students chose topics, wrote scripts, and directed the mini-musicals. Ms. Davis assessed for performance readiness but did not appear to track music learning as a result of these activities. The composition activity culminated in performance for an audience and an extended self-evaluation, in which students primarily commented on social aspects of the project. Ms. Davis taught students with cognitive impairments (CI) in both self-contained and mainstreamed settings. In the self-contained CI class, she used an early childhood approach that she learned in a Music Learning Theory certification course. This approach seemed appropriate both in terms of cognitive and musical readiness for the CI population. Carrie negotiated a positive relationship with the CI paraprofessionals, who were valued experts on the students’ needs, and who participated as active, enthusiastic musical models.
I was able to observe two students in both self-contained and mainstreamed settings. At their parents’ request, Katie was socially mainstreamed, while curricular material was adapted to meet Zack’s music learning needs. It appeared that the inclusion model used for Zack may have been more beneficial than social mainstreaming to students with special needs. Based on observation, interviews, and think alouds, Ms. Davis’s teaching philosophy seemed constructivist. She functioned as a facilitator in her classroom, with a persona that involved questioning and required students to think of their own solutions. Her role as a facilitator extended to classroom management and transferred much of the responsibility of management onto the students. As a facilitator, she improvised new lessons when students demonstrated a need for scaffolding or when the material she had planned seemed ill-suited to the social needs of the students that day. Ms. Davis’s constructivism also was apparent in her use of choice and centers, as well as flexible grouping practices. The primary result of Carrie’s constructivism and facilitation seemed to be a cooperative and collaborative learning atmosphere. Balancing musicking with the discussion and consensus building required to create that atmosphere was sometimes difficult, and some students may have benefited from more guidance, both in terms of their behavior and their music learning. I did not observe much evidence of assessment practices, just as Ms. Davis had feared in our initial interview. I think music learning was difficult to assess because there were no specific goals for any of the projects. Good assessment flows naturally from a solid curriculum reflected in planned learning. However, such a direct instruction model may be prone to a lack of student-centered features such as student-chosen topics, valuing student background (musical and otherwise), or allowing differentiation of learning style, response style, or level of musical sophistication. Ms. Davis’s teaching did allow this differentiation. Perhaps optimal music teaching and learning would occur somewhere in the middle of this continuum. While I did not see the connection I was looking for between assessment practices and differentiation of instruction in Ms. Davis’s teaching, I did see differentiated instruction. In Ms. Davis’s classes, differentiation was more often social in nature than related to sequential music learning. This may be where assessment has a role to play: ensuring that individual students progress musically.

Chapter Six: Results

Hailey Stevens: Assessment and Differentiation Intertwined

Hailey Stevens’ eyes twinkle and her nose wrinkles as she laughs “Oh, no! I didn’t trick any of you! Let’s see if you can get this one!” First grade students sit cross-legged on the floor in three rows, hands in their laps and eyes intent on their teacher. They all know that the next challenge might be for the whole group or any individual in the group. An egg timer buzzes, and the kids groan. “Well, I guess I’ll have to wait and get you next time, vegetables are done for today.” Ms. Stevens starts to sing a folk song, and continues to sing as she puts the class binder down on her music stand and uses sign language to tell the students to stand. They follow her nonverbal instruction and she leads them in a movement activity related to levels of beat in a song with paired triple meter. Movement is easy in this classroom, where the only chairs are behind Ms. Stevens’ desk and at the six computer stations.
One wall features a whiteboard, and the remaining walls are filled with shelving and cabinets where instruments and props are stored. Orff instruments on and off stands fill the corner of the room across from Ms. Stevens’ desk. The movement activity has ended, and the students are standing on the blue circle ornamenting the otherwise drab grey carpeting. Ms. Stevens picks up a small stool, and the children grin and wiggle in anticipation of continuing the game they started last week. In the context of the game, each child will get at least one turn to stand on the stool and make up a sung pattern for the rest of the class to echo. The game includes a song, which the students sing without Ms. Stevens’ assistance. She comments on each student’s performance and records that they have had a turn in her Palm Pilot. I know that she is actually rating their performance in terms of singing voice development and adherence to the tonality of the game song. The game continues for about fifteen turns, maybe four minutes, and then they move on to a new activity. When my initial email asking Hailey Stevens to participate in this dissertation went unanswered, I was disappointed. From the beginning of my doctoral study, I had been urged to go see Hailey teach, because my advisor thought so highly of her. When I solicited advice on possible participants from faculty at other universities in the area, Ms. Stevens’ name was at the top of several lists. I decided to email her again and was pleased when she responded with questions about what participation would entail and then ultimately agreed to participate. The following chapter explores each of my guiding questions and the themes that emerged from data analysis, including the impact of Ms. Stevens’ beliefs on her teaching and how the environment she created in her classroom impacted assessment and differentiation of instruction.

When and How was Music Learning Assessed?

In Hailey Stevens’ teaching, assessment was a part of nearly every activity, and several activities in each class were designed to allow formal tracking of individual student progress on specific musical skills. Therefore, the discussion of when and how Ms. Stevens assessed students will be combined. Learning Sequence Activities (LSAs) and embedded assessments took place in every class, but some assessments—report cards, aptitude testing, and written assessments—took place less frequently, so I will discuss those first.

Report cards. Ms. Stevens was required to grade her students once a year using report cards supplied by her school district. Hailey did not like grading only once, “…because it only gives that one snapshot, it doesn’t show any progress over time… I would like to do [report cards] first trimester and last trimester, so there is some time to show growth” (HS Initial Interview, p. 3). As a district, the music teachers decided not to grade kindergarten students, which Ms. Stevens liked, “[b]ecause they are all so young, and developmentally they are all in different places” (HS Initial Interview, p. 2). In grades 1 through 5, the district mandated grading on two grade-level specific benchmarks. The report cards also provided two blank slots where individual teachers could fill in benchmarks they wished to assess. “Some teachers just do the two required. I like to put in the additional two, so the parents have that information” (HS Initial Interview, p. 2). Ms.
Stevens described the report card grading system: The grading system… aligns with what the classroom teachers do. N is novice, D is developing, so they are progressing towards grade level, P is proficient, so they are at grade level, and it used to be H for high achievement. Which they’ve just recently gotten rid of, and now you can give a P+, which is really the same thing, right? [chuckles] So the student is consistently achieving, going above and beyond grade level expectations (HS Initial Interview, pp. 2-3). I asked what she thought of this system, and she replied: I like the system, that it’s not ABCDE traditional letter grades, because it kind of takes away from that label. Like, an A is a good student and a D is a bad student. It kind of takes us away from that mentality, to really focusing on: are they achieving the benchmark? Are they progressing towards the benchmark? (HS Initial Interview, p. 3) I asked if she thought that the expectation of grading on a report card affected her instruction or student learning, and she said, I try not to let it. I’m not the kind of person who says to the kids: “You’re gonna get a grade on this” or “When I do your report card…[shakes her finger and scowls]” You know what I mean? I don’t hang grades over their heads…. (HS Initial Interview, p. 4). Ms. Stevens indicated in our first interview that report cards were not the main reason to assess musical skills and abilities (HS Initial Interview, p. 1). In the course of our conversations and my observations, it became clear that Ms. Stevens collected more assessment information than was reflected on the report cards: I would say, depending on the grade level, maybe half of it is on the report card. The other half is just things for me that inform my teaching and help me keep track of [students’] progress for my own sake (HS Final Interview, p. 5). Ms. Stevens seemed to separate grading for report cards from everyday assessment in her classroom. I just feel like those are two different purposes for assessment. There is the one side that informs your own teaching, and helps you adjust your instruction to the students. And then there is the assessment that you use when you actually have to make those grades. The one that is just for me… and it is going to help me teach them better. And the other one, everybody else sees… (HS Final Interview, p. 5). Most of the assessments Ms. Stevens implemented were integrated into regular instructional activities. However, sometimes Hailey would assess specifically for report cards: And the things where I feel like we are just focusing on the assessment [rather than instruction]… [I]t’s usually something for the report card that I don’t really care about. Like in 2nd grade… we have to assess if [students] can identify step, skip, leap and repeat in notation. I don’t really care about that for my second graders, I want them hearing it, and being able to do it…. So I teach it with one or two activities, we go over it, I do like a token little written assessment, and that’s it (HS Final Interview, p. 8). Ms. Stevens seldom assessed acontextually, but when she did, it was because the report cards included mandated benchmarks that she did not view as valuable to the students’ music learning.

Aptitude testing. Ms.
Stevens administered the Primary Measures of Music Audiation (PMMA; Gordon, 1986) to lower elementary students and the Intermediate Measures of Music Audiation (IMMA; Gordon, 1986) to upper elementary students every fall and every spring, “…so I can see where each student’s potential is, tonally and rhythmically” (HS Initial Interview, p. 5). PMMA and IMMA both consist of tonal and rhythm subtests, which each take about thirty minutes to administer. Testing materials are aural, and students do not need to be musically literate or literate in written English to answer. Scoring students’ responses results in a percentile ranking of music aptitude, which is normed (Gordon, 1986).

Written assessments. Ms. Stevens administered a few written assessments of students’ musical comprehension, each directly related to measuring benchmarks required by the report cards (HS Final Interview, p. 8). During the observation period for this study, I observed two written quizzes in first grade. One tested students’ ability to tell same from different when Ms. Stevens sang brief tonal examples (HS Field Notes 2/25, p. 3), and the other assessed their ability to label form (e.g., ABA) in aural examples of familiar and unfamiliar songs without words (HS Field Notes 3/18, p. 2). Each of these tests took about 20 minutes of the 40-minute music class. I did not observe similar written tests in third grade. Ms. Stevens did not like to assign written work: I don’t tend to do much written work/assessment, especially with younger grades. You happened to see two written assessments in 1st grade recently because I have to assess those benchmarks for their report cards. The elementary music department has decided that identifying same/different musical ideas is one of the four benchmarks that should be reported on the card. This is something I do teach, but I’ve had to create written tools to formally assess it. Typically, rather than written work, I prefer to assess students’ skills in a musical way, such as through singing, moving, and playing. I tend to value (and thus focus on) the skills and knowledge that can be measured in those musical ways over the skills and knowledge that are measured in writing (HS Journal 3/18, p. 2). Ms. Stevens described written assessments as a quick way to gauge students’ understanding of a concept (HS Journal 3/18, p. 2). However, she was concerned that written assessments were “not effective for measuring musical skill development,” and that they “[m]ay not truly indicate students’ understanding of concepts being measured (if directions are not understood, if the student has special needs that hinder their ability to complete written tasks, etc.)” (HS Journal 3/18, p. 2). Ms. Stevens used written assessments only when she felt she needed a more formal summative record of students’ abilities to corroborate report card grades.

Learning Sequence Activities. Ms. Stevens began every class with “Learning Sequence Activities” (LSAs; HS Think Aloud 1, p. 1). LSAs are sequential teaching and assessment activities designed to help individual students progress musically (see Gordon, 2007). LSAs typically lasted about 5 to 7 minutes, and this was the only time that students sat in assigned places, in three rows on the carpet. Ms. Stevens set an egg timer and stood next to a stand that held her binder containing seating charts and instructions for the current LSA for each class. LSAs could be tonal or rhythmic and involved a variety of response modes, including echoing chanted material or tonal phrases, responding with an improvised “answer,” responding with resting tone, labeling musical features with words, and associating solfege. Ms. Stevens would sing or chant cues and either gesture to the group or an individual to cue a response. Sometimes she would respond with the student (“teaching mode”) or allow the student to respond alone (“evaluation mode”). When an individual responded correctly, Ms. Stevens marked this on her chart.
LSAs could be tonal or rhythmic and involved a variety of response modes, including echoing chanted material or tonal phrases, responding with an improvised “answer,” responding with resting tone, labeling musical features with words, and associating solfege. Ms. Stevens would sing or chant cues and either gesture to the group or an individual to cue a response. Sometimes she would respond with the student (“teaching mode”) or allow the student to respond alone 178 (“evaluation mode”). When an individual responded correctly, Ms. Stevens marked this on her chart. Each LSA had easy, moderately difficult, and difficult prompt levels for the skill being taught (Gordon, 2007). All students were presented with easy pattern, and when they accomplished it in teaching and evaluation modes (as described above), they were presented with the medium pattern in teaching mode, and so on. The class would move to the next LSA according to the following guidelines: …the general guideline [is to] mov[e] on when 80% of the class reaches the achievement level that matches their aptitude (low aptitude students achieve at least the easy level; high aptitude students achieve easy, moderately difficult, & difficult). Usually this happens in 2-4 class periods. If 80% of the class is not achieving at their appropriate level in 2-4 class periods, then I assume that they may not be quite ready for that skill and need some more experiences to develop that readiness before we go back to it (HS Journal 3/16, p. 2). Not every student would get an individual turn during LSAs every day, but every student would participate individually in each LSA before moving on to a new one. Students in Ms. Stevens’ classes seemed to enjoy LSAs. Ms. Stevens called LSAs “vegetables,” and described them as the work the students needed to do before they could get on to “dessert:” the fun activities she had planned for the rest of their music class that day. In first grade, Hailey made “vegetables” into a game, in which she tried to “catch” individual students or “trick” them. The students giggled, and Ms. Stevens growled, groaned, and laughed as she “sneakily” tried to “catch” students unaware and marked “their turn” in her LSA binder (e.g., HS Field Notes 2/23, p. 3). In third grade, Ms. Stevens remained playful, but in ways appropriate for 179 the older students. At this level, she also talked about how she was not capable of these tasks until she was in college (e.g., associating solfege to tonal patterns) and “challenged them” to show what they could do (e.g., HS Field Notes 3/2, p. 1). I asked about students’ responses to LSAs, and Hailey responded, They are all different. You have some kids that are just always lazy, no matter what it is you are doing… always some kids that you are going to have to pull along. There [are] also a lot of kids, who [think LSAs are] a lot of fun. Like Mike... the other day we had to do aptitude testing, and I said “Oh, we don’t have time for vegetables today” and he said “Oh, man!” because he loves it. He has a lot of fun doing it (HS Final Interview, p. 11). Perhaps because of how Hailey presented LSAs, most students I observed seemed to anticipate eagerly the opportunity to respond—sitting tall with sparkling eyes focused on their teacher. When I asked about the strengths and weaknesses of LSAs, Ms. 
Stevens replied: Well, I think one of the weaknesses is, some people think that you have to do it a certain way, follow all the rules exactly, that you have to toe the line in that respect, rather than playing with it, finding what works… for you, what works for your kids. So I think that can be a weakness. If you are too rigid with it, that’s definitely a weakness. Strengths… I think it makes ME accountable. It forces ME to give each student my attention and individualize instruction for where they are. It forces ME to look at each student’s potential. And to see if they are achieving at a level that matches what their potential is… and it forces me to keep track of their progress… It forces me to HEAR students individually in the first place, so that they can build skills (HS Think Aloud 1, pp. 7-8). 180 LSAs offered a daily opportunity to teach and assess sequential music learning. Ms. Stevens used encouragement and humor to make LSAs an enjoyable part of her classroom routine. Embedded assessments. Ms. Stevens embedded assessments into her music instruction, so that she was constantly informally and formally tracking the music learning progress of individual students as well as the class as a whole. Hailey frequently checked group comprehension of musical concepts. For example she asked classes to: identify musical features (e.g., form, HS Field Notes 3/9, p. 3); demonstrate movement responses (e.g., in response to changes in instrument timbres; HS Field Notes 3/2, p. 3); and read notation as a group (e.g., HS Field Notes 3/9, p. 1). However, these informal observations seemed to function as teaching tools or as a way to allow students to practice content, rather than as assessments. In addition, Ms. Stevens monitored such whole-group musicking activities as folk dances (e.g., HS Field Notes 3/9, p. 2), singing in three-part chords under a melody (e.g., HS Field Notes 3/4, p. 1), and accompanying singing with body percussion (e.g., HS Field Notes 3/2, p. 2). Hailey never reported these types of activities when she described assessments in her journal, instead focusing on activities that allowed her to collect formal data regarding individual student responses. In a typical class period, Ms. Stevens began with LSAs. The remainder of music class time would be spent on a variety of instructional activities, including singing, movement, playing instruments, listening to music, and a few rare instances of written work or brief lecture-style instruction. Assessments were embedded in instructional activities in the form of frequent opportunities for individual children to sing, play, or move independently. Ms. Stevens rated these solo performances using four-point rating scales specific to each activity. I find [rating scales] to be really helpful because it is an easy way of having a 181 standard, a high and a low, and then you can compare students with your standard. So, I think rating scales are really effective… And an effective way of [assessing] quickly, and in a manageable way (HS Final Interview, p. 7). To illustrate the nature of Ms. Stevens’ embedded assessments, I will describe one activity from each grade level I observed. In first grade, Ms. Stevens gestured to individual students and chanted in triple meter, “Hickety pickety bumblebee, will you chant a pattern for me?” (Field Notes 3/23, p. 3). In response, the student chanted a four-macrobeat rhythm on neutral syllables, and then the remainder of the class echoed the rhythm. 
Using a palm pilot, Hailey recorded which students had a turn by rating their improvised rhythm performance using a four-point scale. About eight students had turns for this activity, and responses included one child who used a pickup, several responses of the same rhythm (Figure 6.1), and two students who used prolonged elongations. The students who did not get turns knew that they would have a turn for this activity another day, because Ms. Stevens rarely stayed with one activity long enough for every student in a class to take a turn on the same day. Figure 6.1 Common “Improvised” Response In third grade, students reviewed “Sarasponda,” a song that they had learned in second grade (HS Field Notes 2/23, p. 2). Students sang the melody while Ms. Stevens sang chord roots (do, fa, and sol), and then students sang the chord roots while she sang melody. Some students seemed confused by fa, and Hailey confirmed in her journal that this was the first time students had added IV (fa) to their externalized harmonic vocabulary, which previously consisted of I (do), V (sol), i (la), and v (mi) (HS Journal 2/23, p. 1). With little further instruction, groups of 182 four students played the chord roots on barred instruments to accompany the class as they sang. Ms. Stevens marked turns in her grade book by rating each student’s performance using a fourpoint scale. Perhaps because four students performed at the same time, each student had at least one turn in this activity. I asked Ms. Stevens if all these assessment activities interfered with instruction. She replied, “Mm mm [shakes head “no”]. It could. But I try to integrate it as much as possible and just make it part of the process. I do my assessments on things we would be doing anyway. So I don’t feel it interrupts” (HS Initial Interview, p. 4). She elaborated further in our final interview: Most of the time when I plan an assessment it is not just for the purpose of assessment. The assessment is just an outgrowth of—this is something that is important for [students] to experience and learn, so we are going to do this, and I’m going to keep track of it just so that I know where to go next… [There is m]ore a focus on the learning, and the sequential learning than the assessment itself… I don’t feel like [assessment] ever intrudes on what we are doing. I try to just make [assessment] a natural part of [music class] (HS Final Interview, p. 8). A simple tally of my field notes revealed that, in addition to daily LSAs, Ms. Stevens rated individual musical responses one to three times per class. Typically, about a third or a half of the students in a class gave individual responses as part of an activity before the class moved on to something else, and Ms. Stevens returned to the activity in subsequent classes to hear the remaining students. Ms. Stevens viewed assessment as a natural, embedded part of sequential music learning, which allowed her to track individual progress and adjust her instruction accordingly. 183 Summary of when and how music learning was assessed. Ms. Stevens assessed music learning in a variety of ways. She graded on report cards once a year and administered aptitude tests in the fall and spring. Hailey infrequently administered written quizzes and expressed concerns that the written format was not the best way to measure music learning. Every music class, Ms. Stevens’ students participated in LSAs, which were both a teaching tool and an assessment activity. 
Hailey observed group musicking and checked for group understanding of conceptual information but did not characterize these activities as assessments in her journal. Most assessments were embedded in instructional activities, and Ms. Stevens viewed them as a natural component of instruction. Scoring and Tracking the Results of Assessments Ms. Stevens’ assessment methods resulted in a variety of types of data. Aptitude tests produced percentile rankings of tonal aptitude and rhythm aptitude, which Ms. Stevens recorded in her grade book and on the seating charts in her LSA binder. Written quizzes were scored as a number of correct answers out of the number of possible answers, and this information was recorded on an assessment spreadsheet in Ms. Stevens’ computer (e.g., HS Journal 3/18, p. 2). Scoring procedures for LSAs and embedded assessments were more complicated. Scoring LSAs. Ms. Stevens kept a binder on a music stand by her keyboard in the front of the classroom, where she also kept an egg timer and pencil. The binder contained sheets for recording class progress on LSAs that were photocopied from a workbook (e.g., Gordon, 1990). Each sheet included directions for the LSA including easy, moderately difficult, and difficult prompts when applicable, and space for a seating chart. As described above, LSA prompts were directed to individual students by using hand gestures, and then the student would respond, first in teaching mode (with Ms. Stevens) and then in evaluation mode (alone). Each student must 184 first correctly respond in teaching mode before progressing to evaluation mode at any level, and must correctly respond at the easy level before progressing to moderately difficult and then to difficult (see Gordon, 2007; Hailey would sometimes skip teaching mode or skip the easy level for some students, HS Journal 3/16, p. 2). Usually, Ms. Stevens marked a tally next to each child’s name when he or she correctly responded at each level--one tally for easy teaching mode, another for easy evaluation mode, and so on. Five tally marks would indicate a child who had completed teaching mode at the difficult level. When an LSA required an improvised response, there was no teaching mode, and students’ responses could have a variety of levels of correctness. In such cases, Ms. Stevens designed a different rating system. For example, in third grade, Hailey sang an improvised Major tonic or Major dominant pattern as a prompt (HS Field Notes 3/9, p. 1). The students decided if the prompt was tonic or dominant and responded with a different pattern of the same variety as an answer. Hailey rated their responses as follows: If a student was able to improvise a tonal pattern with correct solfege and pitches, I marked it with a “+”. If a student improvised a tonal pattern in tune and function but with incorrect solfege applied, I marked it with a “(+).” If a student improvised a pattern that used correct solfege (e.g., “DO-MI-DO” for a major tonic) but did not sing correct pitches (or didn’t use a singing voice), I marked it with a “(-)”. If a student gave a response that was not sung and did not use correct solfege, I marked it with a “-” (HS Journal 3/9. p. 1). Embedded assessments. Ms. Stevens used four-point rating scales to score embedded assessment activities. She designed her own scales to measure exactly the musical behavior she wanted to track: 185 HS: Let’s say… it’s first grade and we are improvising rhythm patterns, just with neutral syllables. 
If they can do it consistently in my tempo and meter, it’s a 4. A 3 would be mostly there, but maybe there’s a little bobble where they change the meter or something like that. A 2 would be they came up with something different [from my prompt], but not quite rhythmically… you know… all there. And then, a 1 would be not at all. Well... I kind of do that differently with that one, maybe it’s not a good example. A one would be… no rhythm at all. Usually for that I’ll make a note… if they just [echo my prompt], I’ll make a note of that, because they weren’t able to discriminate that what they were doing was the same. KS: But doing the same as you might show a metric context, though. HS: Right… but I’m assessing if they can create something different… So if it’s just echoing the rhythm patterns, then 4 would be they can do the rhythm consistently in my tempo and my meter. 3 would be mostly there, but maybe one mistake. 2 would be they did a pattern in my tempo and meter, but maybe they changed a beat or two and 1 would be totally not in tempo or meter (HS Initial Interview, p. 6). As the nuances between the above “creating” and “echoing” scales demonstrate, designing rating scales explicitly for each activity allowed Ms. Stevens to track specific musical behaviors at particular performance levels. Moreover, her consistent use of a four-point scale meant that she was not reinventing the wheel with each new rating system. “I tend to stick with that. It’s easier for me to keep track of in my mind, when I’m having to write them all down quickly” (HS Initial Interview, p. 6). Hailey used four-point rating scales at least once per class to track musical progress on a variety of musical tasks. After a child’s solo response, Ms. Stevens would simply record “their 186 turn” as a numeral 1, 2, 3, or 4 in her grade book or palm pilot. This data was then transferred to her assessment spreadsheet in her computer. More samples of rating scales used during the observation period included the following: Melodic improvisation over chord roots: 4= melody stayed within tonality/meter and fit over the chord roots, 3=melody within tonality/meter and fit over chord roots most of the time, 2=singing voice but not in the context of tonality/meter given, 1=able to create something but not in singing voice I also make a note of students who simply sing the familiar [prompt] song (HS Journal 2/25, p. 1) Rhythm conversation: 4= created four-beat rhythm pattern in my tempo/meter, 3=created one or two-beat rhythm pattern in my tempo/meter, 2= created a rhythm pattern but not in my tempo/meter, 1= created something different but not in a tempo/meter (HS Journal 3/4, p. 2). Playing ostinato: 4=played the macrobeat ostinato correctly during entire song, 3=played the ostinato correctly during most of the song, 2=played a steady beat that didn’t correspond with the song, 1=did not play a steady beat (HS Journal 3/9, p. 2) Singing v-i: 4=sang MI-LA in tune, 3=sang MI-LA with minor intonation issue, 2=used singing voice but not accurate pitch, 1=did not use singing voice (HS Journal 3/11, p. 1). Creating tonal patterns: 4=created a pattern that was clearly in the given tonality, 187 3=created a pattern that was somewhat in the given tonality, 2=created a pattern in singing voice but not within the given tonality, 1=used speaking voice (HS Journal 3/11, p. 2). 
Associating solfege: 4=associated correct solfege and sang in tune, 3=associated some correct solfege and sang in tune, (3)= associated correct solfege but did not sing in tune, 2=associated incorrect solfege but sang in tune, 1=did not associate correct solfege or sing in tune (HS Journal 3/16, p. 1). Singing game: 4=sang response in tune, 3=sang response with minor intonation issues, 2=sang response using singing voice but not accurate pitches, 1=did not use singing voice (HS Journal 3/16, p. 2) Playing ostinato: 4=played the ostinato perfectly, 3=played the ostinato correctly most of the time, 2=played the correct bars but not always at the correct time/not to the beat, 1=did not play the correct bars (HS Journal 3/18, p. 1). Tracking individual responses this frequently and with this level of detail facilitated Ms. Stevens’ quest to know her students as musicians and people. I find record-keeping of students’ achievement to be EXTREMELY helpful... If I didn’t keep records of assessments, I would have no tangible information on which to base my expectations of students, measure their progress, or gauge where we need to go next in the learning process. [For example] I was surprised that Hiroyuki was able to play the chord roots perfectly, based on his singing achievement, but it was not surprising based 188 on his tonal aptitude score as indicated by IMMA. The other students who achieved at a level “4” did not surprise me, as they have shown high achievement in previous assessments. I was not surprised that Mario struggled, as he does with many skills in music (which is not surprising given the issues we talked about- new to the school, to the country, probably fairly new to English). I was surprised that Shanelle achieved at a level “1” because she typically does much better than that. I would be curious to see how she did with the activity on a future day, as we all have our “off” days! (HS Journal 2/23, p. 3) The quality and quantity of data Hailey amassed also allowed her to monitor the success of her teaching, tailor her instruction to meet students’ needs, and plan future lessons. Designing her own four-point rating scales meant not only that the scale was convenient to use, but also that it measured what she needed it to. Necessity of individual response. “The most important factor in the ability to assess…. You have to hear [students] alone. If you don’t hear them alone, you don’t know what they can do (HS Final Interview, p. 2). Although she used observation of the class as a whole and informal group assessments to guide her teaching, Hailey’s journal entries mention only those assessments based on individual responses. “I don’t feel I can accurately assess things if [students] are doing it together, because they could be imitating each other” (HS Initial Interview, p. 7). Ms. Stevens designed at least one embedded assessment activity and used LSAs every day as ways to elicit individual responses. “You can’t really individualize instruction if [students] don’t have opportunities to do things alone, and you have no idea what they CAN do, because you have never heard them alone…” (HS Think Aloud 2, p. 1). Individual response was integral to Ms. Stevens’ practice of assessment. 189 Challenges to assessment. Ms. Stevens faced considerable challenges as she worked to score and track students’ progress in music. [Elementary general music teachers] see so many students, and often we don’t get the same amount of planning time in our school day as a classroom teacher. 
It’s really hard to get back and look at all the assessments that you’re doing. That’s my main challenge. So, I have three hundred to four hundred students in a week. When do I sit down and really examine that assessment data? That’s my main challenge (HS Initial Interview, p. 2). Due to these challenges, Hailey had to be thorough, accurate, and organized with her record keeping. She talked about how her assessment practices required considerable multi-tasking: You’ve gotta be able to have your eyes on the kids, make sure they are all behaving… You have to be able to keep your own teaching plans in your head so that you can keep rolling while you are monitoring [the students]. AND you’ve got to be able to keep track of what each child is doing [musically]. And you have to keep track, written or in your mind, [of] exactly how each student did. I think you have to have a huge ability to multitask… (HS Final Interview, p. 3). During a think-aloud, we watched a clip of third-grade students singing improvised melodies over chord roots. While she was watching children sing, Ms. Stevens commented: “My memory is so bad… I remember, wow, Selina that day did something that was really cool. But remembering what it was, is gone. Seeing so many students, teaching so many classes, it’s like everything just kind of filters through” (HS Think Aloud 1, p. 3). On another occasion, Hailey facilitated whole-class songwriting as practice for a future small-group composition activity. Individual students suggested chunks of melody, and Hailey notated the song and provided a 190 harmonic framework. In the moment I made mental notes on who created what kinds of “chunks,” BUT now I cannot remember who created what! I remember being impressed that the second student created such a clear dominant pattern for the second measure, but I can’t remember who it was! This is why I like to take notes and/or document assessments... (HS Journal 3/23, p. 1, italics added). Hailey also needed to be self-motivated to track her students’ progress with music learning. Elementary music teachers in her district were philosophically divided regarding assessment (HS Initial Interview, pp. 2-3). “[P]eople like me… believe we can teach specific skills--we can break down these things that we can teach and assess. And other people that think [music] needs to just be a conceptual, holistic, experiential thing” (HS Initial Interview, p. 3). Furthermore, there was little administrative oversight of elementary music grading practices: [Y]ou could just make up the grades that go on the report card. You could be doing NO assessment, truly, whatsoever, of your students. It would be really easy… You could just say that everybody is grade level. In fact, I have heard that there are a couple of teachers in this district that do that. [Grade] everyone as proficient (HS Final Interview, p. 6). In order to integrate assessment practices into her teaching, Ms. Stevens had to be selfmotivated, keep detailed records, multi-task while teaching, and find the time to review the results of assessments so they could inform her instruction. Summary of scoring and tracking the results of assessments. To score and track music learning, Ms. Stevens typically designed her own four-point rating scales so they would be easy to use and valid for her purposes. These rating scales were utilized to evaluate the embedded assessments that constituted the majority of the assessment activities in Hailey’s 191 classroom. Ms. 
Stevens infrequently used written quizzes, which were scored as the number of correct answers out of the number of possible answers, and aptitude tests, which resulted in percentile rankings. Daily LSAs were scored by using tally marks or an adapted rating system that described the nuances possible in students’ responses. Data from aptitude tests, four-point rating scales, and quizzes were entered into a grading spreadsheet, and LSA progress was tracked in the LSA binder. Hailey believed that individual response was necessary for an assessment to be accurate. She faced challenges to her assessment practices, including a large number of students, limited contact time, lack of support from colleagues and administration, and the need to multi-task as she collected data. Impact of Assessment on Differentiation of Instruction. Ms. Stevens used the results of her assessments to track individual progress in music learning and to guide her instruction of each student. I think it’s important to go back and study the results of the assessment to see who is achieving with that particular skill. And the kids who achieved it need to be pushed on to something that is going to keep them more challenged. The kids who didn’t quite achieve that skill obviously need some remediation, they need some re-teaching and reinforcement, maybe they need to backtrack… So I [use] assessments to then decide what each individual child needs from that point on, whether it’s to advance or to have more experiences with the content they hadn’t yet mastered (HS Initial Interview, p. 8). Differentiation inextricably intertwined with assessment practices. The tapestry of Ms. Stevens’ music teaching included nearly omnipresent threads of assessment and differentiated instruction. To me as an observer, these threads were often so intertwined as to be somewhat indistinguishable. Hailey described her view of the role of assessment in differentiating instruction: I think [assessment] forces you to hear individual students, to see where they are achieving, it forces you to keep track of [achievement] so you know where they all are, and hopefully [assessment] is informing the decisions that you are making as you are proceeding with what the kids need (HS Final Interview, p. 7). Differentiated instruction as a natural consequence of assessment. Ms. Stevens’ assessments of student abilities resulted in differentiation of instruction. She differentiated her instruction both while teaching in the moment and also as she planned new learning opportunities for the future. The metaphor of a tapestry again seems apt, as Hailey rarely differentiated simply based on one assessment experience, but seemed to maintain multiple assessment threads for each student—aptitude, singing voice development, rhythmic and tonal achievement, to name a few. These threads were woven together in the moment and in planning both for individual students and for whole classes. Ms. Stevens’ journal entries and my field notes are replete with descriptions of instructional decisions made in the moment based on either past or present assessments to differentiate instruction. One day, the first grade students played a singing game that featured three phrases echoed by individual student singers (HS Field Notes 2/23, p. 4); this is the activity described in the opening vignette of this dissertation document. The echoed responses, sung on words as part of the song, offered different difficulty levels [1. Do re mi do, 2. Mi mi fa sol, 3. So la ti do sol].
I was originally planning on letting the students choose [who sang next], but based on the wide range of singing abilities in this class, I decided to choose which student would sing which echo. This enabled me to give the students who had showed consistent, accurate use of singing voice the challenging phrase and those that hadn’t shown as much consistent, accurate use of singing voice one of the easier phrases to sing… Ms. Stevens used past assessments of students’ singing abilities to determine their level of challenge in this activity. She also weighed personality factors when she decided on level of challenge: I definitely considered the high phrase to be hardest and assigned that phrase to students who showed higher singing achievement in previous assessments. I would agree that phrase one was easy and phrase two was medium; however, I sometimes assigned some unsure/inaccurate singers the second phrase so that they wouldn’t have to sing first (HS Journal 2/23, p. 3). Hailey intentionally challenged one student whose abilities she did not know as much about: “I decided to give one of the newer students (Lyra, who moved to the school in December) a chance to sing the high phrase. She was not successful but was later able to sing one of the lower phrases accurately” (HS Journal 2/23, p. 2). While this differentiated instruction was taking place, Ms. Stevens was simultaneously using a rating scale to evaluate students’ performances. Another example of adaptations to teaching based on assessments in the moment occurred in third grade. Students were reading tonal patterns from flash cards using solfege (HS Field Notes 3/4, p. 1). This was one of the students’ first exposures to notation. Ms. Stevens showed the card and prompted students to “figure out” the solfege indicated by the notation, one note at a time. Finally, Hailey sang the pattern and the class echoed. I noticed that some students were generalizing [taking information learned in another context and applying it to a new task] and singing the pitches of the tonal patterns before I had finished giving the answer. Evan was one of the students that I noticed doing this. So I decided rather than just giving them the answer for the patterns by rote and simply having them echo that I would have the group generalize the pitches before reading the whole pattern. This was confusing for many students in the class, but those students who were ready to generalize were able to do it (HS Journal 3/4, p. 1). Based on Ms. Stevens’ assessments of responses in the moment, some students indicated a need for a greater challenge, and she changed her instruction accordingly. Students’ achievement levels on assessment activities also led to adaptation of future lesson plans both for individuals and also for the group. In third grade, students played an alternating i-v ostinato on barred instruments, which Ms. Stevens assessed using a four-point rating scale (HS Field Notes 3/18, p. 3). Hailey revealed the results of her assessment and her plans for the future in her journal: This was WAY too easy for them! Almost everyone played it perfectly (“4”) or mostly correct (“3”). Only one student achieved a “2”, and no one scored a “1.” …They are definitely ready for a more complicated ostinato—maybe a crossover bordun or melodic ostinato? (HS Journal 3/18, p. 1). Usually, activities were closer to the challenge level of the majority of the students.
It was more typical to read journal entries such as this one: Singing V-i: Clearly, the two students who still achieved at a “1” level need some remedial experiences in developing singing voice, and the 14 students who achieved at a “4” level need more challenges! (HS Journal 2/23, p. 2). Sometimes, Hailey’s journals would simply reflect upon the need for more challenge or remediation, and other entries were more specific about exactly how she planned to offer these opportunities. For example: Playing I-IV-V chord roots: Since only seven students were able to play correctly during the whole song, we might either review/reinforce these chord roots in the future OR try a song with an easier progression, possibly using only I & V (HS Journal 2/23, p. 2). Hearing that students were able to improvise a melody over chord roots tells me they are ready for more sophisticated/restrictive improvising, such as improvising [over] tonic/dominant chord tones. Improvising tonic/dominant patterns and singing tonic/dominant harmonies in three parts also serves as readiness for improvising a melody on tonic/dominant. Hearing that students were able to improvise a melody lets me know they are ready for the composition project we will begin soon, where students create and revise melodies by ear (HS Journal 3/4, pp. 1-2). Ms. Stevens’ lesson planning was guided not only by her impressions of the group’s performance but also by her formal assessments of individual student progress. Differentiated instruction was a natural outgrowth of Ms. Stevens’ assessment practices, both as she adapted instruction in the moment and as she planned future lessons. Assessment as a form of differentiation. Just as differentiated instruction constituted a natural consequence of assessment in Ms. Stevens’ teaching, some assessment activities also provided opportunities for differentiated instruction. If an assessment only allowed for two possible outcomes—each student successfully did or did not demonstrate a target skill—the assessment activity was not a form of differentiation. However, Ms. Stevens often utilized assessment methods in which the assessment itself constituted differentiated instruction. Ms. Stevens differentiated instruction during assessment activities by varying the difficulty level of the material being assessed based on the previously demonstrated abilities of the student responding. For example, LSAs provided easy, moderately difficult, and difficult tonal or rhythm prompts. Students who succeeded at the easy level would be advanced to the moderate and then difficult levels. Embedded assessments also allowed Ms. Stevens to offer appropriate challenges for each student. For example, first grade students played a game in which they echoed a rhythmic phrase that Hailey improvised (HS Field Notes 2/23, p. 4). Based on students’ previous performance, Ms. Stevens improvised rhythms appropriate for the child’s achievement level. One student echoed an easier rhythm (Figure 6.2) and another student echoed a more difficult one (Figure 6.3). Figure 6.2 Easier rhythm Figure 6.3 More difficult rhythm When I asked about the most important factors in a music teacher’s ability to assess music learning, Hailey responded, …I think knowing each student’s abilities individually, so that you know what is a success for which student. So, let’s say we are singing chord roots alone in second grade.
For a really high achieving student, or a high aptitude student, that’s like no problem. For another student who is still struggling with singing voice, if I know they are still struggling with singing voice, even if they can’t sing the chord roots accurately, but they 197 are using their singing voice in some way, I know that is still a success for that child, even if they even if it didn’t meet my specific expectation for the assessment (HS Final Interview, p. 2). Ms. Stevens’ use of rating scales that described a variety of response levels allowed her to track students’ individual progress, even if they were not meeting the standard she was checking. Moreover, because Ms. Stevens assiduously tracked individual students’ progress, she knew what achievements constituted success for each child. Success at each individual’s level was also facilitated by open-ended assessment activities, in which students created their own answers rather than echoing or other more structured responses. Students in first grade played a game in which students provided melodic material for the rest of the class to echo (HS Field Notes 3/23, p. 2). One child responded with inaccurate singing for the tonality, and I asked about his response: Even if we are just talking about echoing and not creating, he’s an inconsistent singer. Sometimes he’ll use his head voice, sometimes he’ll just sing in a speaking voice. Already, I was kind of expecting something on the fence. I was really happy with that response in terms of creating. Because he did get into his head voice, even if it was really high and squeaky. But I could hear when he had that kind of (demonstrated his pattern) in there… You hear the high resting tone in there. I was happy with that, knowing what he was capable of (HS Think Aloud 2, pp. 5-6). Thus, the assessment activity differentiated instruction by allowing the child to musick individually at his own level of achievement. In Ms. Stevens’ teaching, differentiation of instruction and assessment practices were inextricably intertwined. Differentiated instruction occurred as a natural consequence of 198 assessment, because Ms. Stevens used the results of assessments to individualize instruction both as she was teaching in the moment and also as she planned future lessons. Furthermore, many of Hailey’s assessment activities provided chances to differentiate instruction even as she was tracking students’ progress. Based on prior achievement and/or aptitude, Ms. Stevens could structure assessments to offer different levels of challenge to different students. She also used open-ended assessments to allow students to demonstrate success at their own level. Separating musical abilities from academic or behavioral abilities. Ms. Stevens’ assessment practices seemed to allow her to separate a child’s music achievement and aptitude from his academic or behavioral abilities, and to differentiate music instruction based on music learning needs rather than (or perhaps in addition to) other gifts or deficits. For example, after an assessment in which first grade students circled icons to indicate if tonal patterns were the same or different, Ms. Stevens wrote: Some students such as Molly struggle with pencil-and-paper tasks and/or the focus necessary to complete them. They may have the musical ability to tell if the patterns are same/different but may not be cognitively able to complete the task of circling the correct answers. 
If I can clearly tell from looking at their paper (lots of wrong answers, weird marking, pattern circling, etc.) that the student was not able to complete the task accurately, I do not count the assessment for that student because it’s not telling me what I want to know. I might try to find a time to pull that student from their class and verbally ask them to identify same/different (HS Journal 2/25, p. 2, italics added). Molly’s problems with academic skills such as reading and writing prevented her from demonstrating her musical abilities on a pencil/paper assessment. Ms. Stevens’ frequent assessment of musicking behaviors informed her that Molly’s performance on this particular measure did not seem indicative of her typical musical achievement, and Hailey therefore differentiated by adapting this assessment for Molly, allowing her to demonstrate music learning orally rather than in written form. In addition to the possible impact of a lack of academic skills on music assessments, behavioral issues such as compliance could also affect a student’s performance. It is clear… some students are quite high musically but struggle with appropriate behavior. Sometimes I think that such a student needs to be kept more engaged by being given a more challenging task to “chew on…” However, there are [also] some students who need to learn that there are behavior expectations at school and that they need to follow them. With students like Mike, who are high musically but struggle with behavior, I try to reinforce appropriate behavior but give consequences when necessary, after which I try to recognize their behavioral AND musical success as quickly as I can… so that they know that I recognize that they are still capable and skilled regardless of poor behavior choices (HS Journal 3/9, pp. 2-3). Ms. Stevens found ways to ascertain the musical abilities of students even when they were not compliant with directions or they were acting out. The frequency of assessment activities combined with a variety of response styles allowed Ms. Stevens to isolate students’ musical abilities from their academic capacities or behavior and differentiate music instruction accordingly. In addition to providing numerous opportunities for students to demonstrate musicking skills using a variety of response styles, Ms. Stevens’ use of aptitude testing may have contributed to her ability to separate musical from academic or behavioral capabilities. …low scores to me, on the aptitude test could just be they had a bad day… they didn’t eat breakfast, they were in a bad mood… So I don’t always go by low scores if [students] are showing high achievement. But, a student who scores low, I know is going to need more time and more reinforcement to build their skills. Not that they can’t do it, but they just need MORE [emphasized] to get them there. Versus the students who are scoring off the charts high, I don’t want them sitting there bored out of their gourd. I want to keep them engaged. So, I want to know that they are high [so] I can keep them challenged… And also aptitude-wise, I do believe that there is a difference between aptitude and achievement. I’ve had numerous kids who score off the charts high, on their aptitude tests, and there is no singing voice. One in particular I can think of, kindergarten no singing voice—scored 99th percentile tonally. First grade no singing voice—99th percentile tonally. Second grade… finally in third grade, halfway through the year, he found his singing voice, [snaps] and boom.
He was ready to roll. He was rockin’ from that point on. But, had I not known that his aptitude was high tonally it might have been really easy for me to say, “Well, that kid’s not musical. He’s never going to be able to do it.” And just ignore him and not make him feel uncomfortable. But because I knew it was all in his… he had that potential. Then I knew to keep chuggin’ along and trying to bring that out (HS Initial Interview, pp. 8-9). As Ms. Stevens described, a child with a high music aptitude who was acting out may need more musical challenges. A child with low music aptitude who acted out may need remediation so he could feel successful. Ms. Stevens did not limit children because of their aptitude scores—if a student’s achievement outstripped his measured aptitude, Ms. Stevens increased his musical challenges accordingly (HS Initial Interview, p. 8). However, information from aptitude testing 201 gave additional insight into students’ musical abilities that allowed Ms. Stevens to differentiate music instruction apart from academic skills and behavior. Some children in the classes I observed were labeled as having “special needs,” specifically learning disabilities (LD), English as a second language (ESL), or “giftedness.” Ms. Stevens felt that her approach to teaching music separated musical abilities from students’ other challenges or gifts: [Regarding ESL students] I do not find a significant difference in their [music] performance compared to other students, especially at early grade levels. I think this is because I tend to teach music by experiencing and DOING music (by ear) rather than trying to explain it. Even when I do explain it, I don't see language issues as a barrier. A good example is one of the third graders in Ms. Lea's class (Hiroyuki) who came to us in the fall from Japan with little or no English. When we started playing an elimination game where students had to jump only on major tonic patterns or they were out, Hiroyuki was winning the game only a month or two into the school year!” (HS Initial Interview, p. 1) [Regarding students with LD] Maybe it’s my philosophy or my beliefs… maybe it’s the way I go about teaching music… I don’t teach music in a traditional way. I don’t start with notation, I don’t teach letter names of the lines of spaces on the staff. So I can see that a student with a learning disability would struggle with that in music. But especially with younger students, I tend to teach [music] by ear. We just sing and chant and move, and I don’t find that things like learning disabilities really impact [students’] ability to participate in music class in that way (HS Initial Interview, p. 5) 202 [Regarding “gifted” students] I do not find that students who qualify for gifted/talented are necessarily gifted in music, and I do not believe that intelligence, IQ, or academic achievement are related to musical potential. Rather than the term "gifted", I would prefer to use "high aptitude" in a music setting because I don't like to see it as a "gift" or talent that some people are born with and others not. I try to keep [musically] highaptitude students challenged by giving them more difficult material, giving them more difficult tasks, having them make generalizations/inferences, or being an example for the class (HS Initial Interview, p. 1). Ms. Stevens believed that music was a separate intelligence that could be developed, regardless of academic skill level or behavioral challenges (HS Initial Interview, p. 10). 
By teaching and assessing music orally and aurally, she tried to access musical intelligence in a way that bypassed the need for the reading, writing, or spatial skills that could cause problems for many students with learning disabilities. Similarly, children who spoke English as a second language could respond musically by moving, playing instruments, and singing songs without words (a common activity in Ms. Stevens’ classroom). Use of aptitude testing (specifically PMMA and IMMA, which do not require music or English literacy, Gordon, 1986a; 1986b) as well as frequent, varied assessments of aural, oral, and movement-related musical achievement assisted Hailey as she worked to separate musical from other areas of intelligence for students who carried special needs labels as well as those who did not. Because Ms. Stevens taught and assessed musical skills primarily through musicking (moving, singing, chanting, playing instruments), she was able to disentangle a child’s musical achievement and aptitude from his or her academic or behavioral gifts or deficits. Therefore, she could differentiate instruction based on music achievement and aptitude rather than behavior or academic skills. 203 Data-driven, student-centered learning. Hailey’s assessment practices also contributed to differentiation of instruction by creating a climate of data-driven, student-centered learning. This atmosphere was characterized by flexible grouping practices, teaching for a variety of learning styles, and using assessments and assessment data as motivation for learning. Having the teacher step back and allow the students to work in groups (that have often been purposefully chosen so that each group contains strong AND weak students) and teach each other is something that happens frequently in the general education classroom, but I’m not sure it happens enough in music classrooms. So often in music classrooms (in mine, too!) students are always in a large group and/or are always being led by the teacher/conductor [and they] never have an opportunity to develop independence and ownership of their own learning/music making (HS Journal 3/23, p. 2-3). Hailey valued group work as a way to allow students to take ownership of their learning, to build musical independence, and to allow students to teach one another. Ms. Stevens used a variety of grouping practices in her teaching. For activities such as play parties and folk dancing, she often let students choose their partners or groups (e.g., HS Field Notes 2/11, p. 4). If a few students were not behaving well, she would assign partners only to those students, while the rest of the students still chose on their own (e.g., HS Field Notes 2/25 p. 3). When students choreographed a song in small groups and sang it in a round, Ms. Stevens initially allowed the students to choose their own groups. However, when some groups were not able to sustain their part of the round, she reassigned a few strong singers to help lead each part of the round (HS Field Notes 3/11, p. 1). In many classroom activities, students were allowed to choose their own groups unless Hailey needed to intervene for behavioral or musical reasons. Sometimes, Ms. Stevens assigned groups. In third grade when students were writing 204 compositions in groups of two to three students, Hailey assigned groups based on behavior and musical achievement (HS Final Interview, p. 2). 
I usually try to mix up abilities… …socially and behaviorally I can get them with who they need to be… But also if we are in a group of 2 or 3 kids, I want to be sure that there is at least one kid in there who is [musically] pretty strong, who can be a leader. I always try to include a kid who is maybe completely clueless, so they can have someone to go along with. So I do set it up based on ability, rather than having all the high kids in a group and the low kids in a group (HS Think Aloud 2, p. 10). In this case, Ms. Stevens grouped students with other students with whom they would behave, and tried to ensure that a variety of ability levels were represented. Data from her previous assessments influenced her view of which students could provide leadership on this task. Ms. Stevens also differentiated instruction according to the variety of learning styles in her classes. For example, some students learn best in teacher-led, whole group instruction, others prefer group work with other students, and some children prefer to work alone. In Hailey’s classroom, students often received instruction as a whole group (HS Journal 3/23, p. 2), but they also worked cooperatively in smaller groups on compositions (HS Final Interview, p. 2), or with partners, such as in first grade when students improvised rhythmic conversations with each other (HS Field Notes 3/4, p.3). Occasionally, students worked independently, playing instruments (e.g., HS Field Notes 2/23, p. 2), working on white boards (e.g., HS Field Notes 3/2, p. 2), or completing written assessments (e.g., HS Field Notes 2/25, p. 3). Differentiation by learning style also was reflected in the variety of response styles available to students. Students sang, chanted, and moved their whole bodies and parts of their bodies in formal choreographed dances, movement and singing games, and activities involving 205 improvised or creative movement. Students also played instruments, interacted with props such as scarves, balls, and stretchy bands, wrote on paper or white boards, and occasionally described music or musical features in words. Correspondingly, Ms. Stevens used a variety of methods to convey information, such as through demonstrations of singing, chanting, playing, or moving (by teacher or students); visual information such as body language (use of sign language, facial expressions) and written information (a large white board, bulletin boards, flash cards); and auditory stimulus, including recorded music, and verbal directions. Hailey displayed sensitivity to students’ responses and adjusted her teaching accordingly. One day, third grade students were working on associating solfege syllables to tonal patterns that Hailey was singing on neutral syllables (HS Field Notes 3/11, p. 1). Many students struggled with this activity, singing the pattern accurately but with incorrect solfege. Ms. Stevens changed her strategy by speaking the solfege and asking the students to sing what that solfege would sound like. I didn’t want to encourage the problem [by] singing incorrect solfege with the pattern. But I chose to speak it to see if they could make that transfer. Because some kids think of it that way. They think “Ok , I want it to be re ti,” and how does that sound? Some kids think the pattern first, like bum bum (sings do mi on neutral syllable) in their head and then apply, “ok, that’s do mi.” But I was realizing, for some kids it’s really the other way around. 
The solfege is informing the choice that they are making… so a kid might be picking mi do so, and not being able to figure out how it would sound. So I wanted to give them examples of that. For those kids that were thinking in that way (HS Think Aloud 1, p. 8). Ms. Stevens saw that some students thought of solfege syllables and then associated music, while others “heard” their musical answer and then added solfege. She changed her teaching to accommodate those students whose learning style was the reverse of the way she had been teaching. In addition to allowing students a variety of response styles and teaching through a variety of media, Ms. Stevens analyzed students’ responses in light of assessment data to determine how best to proceed in their instruction. Hailey used assessments and assessment data to motivate students’ music learning. Well, [the students] benefit… if I am giving them appropriate instruction based on what they have accomplished so far, that is going to benefit them in their learning. Versus if I didn’t assess, and didn’t realize that those five kids had no idea what that concept was, and I move right along, they are going to fall farther behind. Then, also I think it’s important… you know, sometimes when I’m assessing something, I’ll tell them what I am looking for. SO they can know that x, y, and z are the focus in the assessment, and that kind of helps them focus in their learning, too (HS Initial Interview, p. 2). Ms. Stevens felt students would be more motivated to learn if they were operating on their own level of appropriate challenge, and that some students learned more when they knew what they were supposed to be working on. As such, assessments and differentiation of instruction in Hailey’s classroom exemplified Vygotsky’s “zone of proximal development,” which is a way to describe learning activities that are perfectly positioned between what a child can already do independently and what is beyond his reach—the zone in which optimal learning would occur. Vygotsky believed that by giving children experiences that were within their zones of proximal development, teachers could encourage and advance individual learning (Chen, 2000). Although Hailey did not mention Vygotsky, her teaching was in line with his theories; she told me that, as a teacher, it was her job to “…provid[e] experiences and activities that are going to give each child what they need in a progression that is going to take them farther in their musical development” (HS Think Aloud 2, p. 2). Hailey felt that students wanted to show her what they could do, and assessments allowed that opportunity. “When I do say ‘this is what I’m listening for’ oftentimes I find it makes them all try a little bit harder, you know… sit up a little bit taller… really make sure they are doing their best…” (HS Initial Interview, p. 2). In addition to a chance to show what they could do, individual assessment also allowed students to reflect on their own learning, because they could hear their own responses and the responses of other students. [Assessment] gives [students] time to process and reflect. Hopefully they are reflecting in a way that accurately reflects what they have been doing. And, a lot of kids CAN do that… Some kids you think, really??? Did you and I just experience the same thing? But the reflection piece I do think is really valuable (HS Final Interview, pp. 7-8). Assessment activities can contribute to motivation by allowing students to show what they know and can do, helping students understand what they are working toward, and allowing them to reflect on their learning. Summary of the impact of assessment on differentiation of instruction. Assessment and differentiation of instruction were inextricably intertwined in Ms. Stevens’ teaching.
Differentiated instruction resulted from her assessment practices, both when teaching in the moment and also in lesson planning. Assessment activities also provided opportunities for differentiation of instruction. Because of Ms. Stevens’ frequent and varied assessments, she was able to separate musical abilities from academic or behavioral abilities for all students, including those with special needs. Assessments facilitated data-driven, student-centered learning, in which various grouping strategies and sensitivity to varied learning and response styles were used to motivate and direct learning.
Emergent Themes
In addition to information regarding the initial research questions for this study, a number of themes related to assessment and differentiated instruction emerged as a result of data analysis. Several facets of Hailey’s classroom climate facilitated her practice of assessment and differentiation, including her normalization of independent musicking and use of activities with multiple levels of response. Ms. Stevens’ beliefs regarding the nature of musicality and the process of music learning were also crucial to her classroom climate, use of assessments, and differentiation of instruction. Environment conducive to assessment and differentiation. Ms. Stevens’ classroom environment included multiple features that facilitated assessment and differentiation of instruction. The clear goal of music class was to help individual students progress musically. In order to meet this goal, Hailey used a combination of classroom management strategies and building readiness in order to normalize independent musicking. Most importantly, assessment and differentiation were achieved by structuring activities with multiple response levels, including self-challenge activities and high-challenge activities. Purpose of music class. Hailey was unequivocal about her purpose as an elementary music teacher: I view my job as to help [students] learn [music] by setting up an appropriate environment, by guiding them and providing experiences and activities that are going to give each child what they need in a progression that is going to take them farther in their musical development. I don’t see myself as someone who is just imparting knowledge onto the kids. I don’t see myself as just being there to entertain them or babysit. So we are making progress towards goals. And I help them do that by guiding and providing appropriate experiences (HS Think Aloud 2, p. 2). This purpose was reflected in Ms. Stevens’ expectations for participation. On one occasion, a class seemed disengaged and lethargic and she compared their participation in music with participation in spelling. “If the class is taking spelling tests, that’s what you do, ‘cause that’s your job. And when you are in music class, we do music. That’s what you do, because that’s your job” (HS Field Notes 3/16, p. 1). According to my observations, conversations like this were unusual, because students typically appeared alert and interested during class, and Hailey could usually engage students in musicking by playing, teasing, laughing, and encouraging. However, this discussion was one example of Ms.
Stevens’ communication with her students regarding her ideas about the purpose of music education; namely that everyone would try, learn and progress. I asked Ms. Stevens about requiring music participation from students for whom music was not a preferred subject, and she answered, “Not everybody wants to do spelling, not everyone wants to be there for math! It’s something that everyone can and should learn, so why shouldn’t they?” (HS Final Interview, p. 4). I responded, “So, you would push the kids who don’t want to be [in music class]?” And Hailey replied: I would, but also… I am not saying you would do it in a forceful way. There are ways that you can bring those students on board with you and make them want to learn—by building relationships, and making connections, maybe connecting to music they listen 210 to, or something that they do in the home. I am not saying that it has to be a forceful “you are GOING to do this.” It can be more of a drawing them in sort of approach, and meeting them where their needs and interests are. (HS Final Interview, p. 4). This exchange accurately depicts my observations of Ms. Steven’s teaching regarding the purpose of music instruction. Every child was expected to participate and progress but was never coerced or demeaned. Instead, Hailey encouraged participation through fun activities, a playful attitude, and constant reminders that learning music, like any other subject, is something that requires perseverance. Ms. Stevens balanced fun and work within a classroom atmosphere that was clearly focused on each student’s music learning progress. In our final interview, I described my overall impression of Hailey’s teaching persona (HS Final Interview, p. 10). Hailey was warm and playful toward students: smiling, energetic, genuine, and encouraging. Music classes involved a large amount of play and were conducted in a generally joyful atmosphere. At the same time, it was crystal clear that the kids were there to learn music—not to just enjoy it, or to be passive consumers, but to actively engage as musicians. Ms. Stevens responded: Good… because that’s my focus… Maybe this is bad of me—I never plan anything just thinking what’s going to be fun. It is always what should they be learning next, what COULD they be learning next… and THEN how could I make it fun (HS Final Interview, p. 10). All of the playful, exciting activities the children in Ms. Stevens’ room had come to expect were planned with their music learning needs as the primary goal and fun as an intended, but secondary quality. “[M]y first purpose is to help them learn, and learn something of substance. And if I can make it fun, cool!” (Final Interview, p. 10). This approach to balancing musical 211 progress with fun appeared similar to the teaching style one would expect from an excellent elementary classroom teacher. Ms. Stevens’ thoughts on the purpose of school music education contributed to an environment conducive to assessment and differentiation. Hailey frequently articulated her thoughts regarding the purpose of music education in class, and her students knew that they were expected to learn and progress in music (e.g., HS Field Notes 2/25, p. 3). Students were reminded that, just as in other subjects, some would have to work harder than others, and some might be more advanced than others, but that every student was expected to participate, put forth effort, learn, and grow (e.g., HS Field Notes 3/25, p. 1). Perhaps because of this, Ms. 
Stevens’ keeping track of her students’ progress [assessment] seemed as natural for her students as a classroom teacher keeping track of their progress in math. The fact that some students would offer more or less sophisticated responses, or that Ms. Stevens would offer challenges or remediation to individual students [differentiation], were also natural outgrowths of her stance on the purpose of music education. Normalizing independent musicking. Ms. Stevens’ teaching was characterized by normalizing independent musicking behaviors, such as singing, chanting, movement, and instrument play. The most obvious examples of independent musicking were the myriad opportunities for individual sung, chanted, or played responses already described in this chapter. In addition, Ms. Stevens rarely sang with students, so they demonstrated independent musicking as they assumed leadership of singing in unison and in parts. It is when I STOP singing that the students truly accept the responsibility for the singing. This is where some students really step up and become leaders for their classmates, and they can all take ownership of the singing, as well as modeling appropriate behavior. If I 212 sing with them, they typically back off in their singing for whatever reason (HS Journal 3/23, p. 2). Independent musicking also was evident when students responded in chorus with their own musical answers. For example, Hailey sang a Major pattern that was either tonic or dominant (HS Field Notes 3/2, p. 1). Individual third grade students decided if her pattern was dominant or tonic and created a different pattern of the same variety in their heads. If Hailey gestured to an individual, he would sing his pattern alone. If Hailey gestured to the class, they all sang their response at the same time, resulting in a harmonic pastiche of tonic or dominant. Opportunities such as individuals responding with their own answers in chorus might be called “individual musicking alongside other students.” Independent musicking alongside others allowed students to try out their own ideas within the group and strengthened independent musicking skills by requiring students to “hold their own.” My observations indicated that during every class Ms. Stevens taught, individual students responded alone, singing, chanting, playing, and moving. In addition, in every class I observed, the students led unison singing and sometimes part-singing, and students often would musick individually alongside one another. Hailey stated that she normalized independent musicking with two main approaches: Classroom management and building readiness. One is just creating that culture of: “We are all supportive and we are all respectful, and everyone is going to take turns, and it’s not a big deal…” so that you can get to individual responses. And I think also, building that expectation that everyone CAN do this. So that all students feel empowered and they feel like they CAN achieve. It just might take some students longer than others, some students might succeed at a different level. But everyone CAN do it. I think those are two important things (HS Final Interview, p. 3). 213 Ms. Stevens used classroom management strategies to create a culture in which individual musicking was safe, expected, and normal. I observed a class of third graders singing solo improvisations on neutral syllables while the rest of the class quietly hummed chord roots (HS Field Notes 2/25, p. 2; 3/4 p. 2; 3/9 p. 1). 
All of the students took at least one turn, and I was surprised to note that the majority wanted additional chances to improvise. I asked Ms. Stevens how she accomplished this level of personal risk-taking. She replied: Well… I think it goes back to, you set that environment from the very first days that you have them, that we all participate, we all take turns, you don’t have to be afraid to make mistakes, if we do make mistakes, no one is going to laugh… there’s not going to be teasing… it’s ok to just give it a try… and then over time, I think a lot of them feel empowered to be able to do the stuff like improvising (HS Think Aloud 1, p. 5). From the first days of kindergarten, Hailey established expectations that (1) everyone would participate, (2) everyone would be supportive of one another’s efforts, and (3) that you don’t always have to do it “correctly” (HS Think Aloud 1, p. 2; HS Think Aloud 1, p. 7). If a kid does mess up, we just go, “Oh, no big deal. Let’s give it another try. I mess up all the time. We make mistakes, that’s how we learn.” I hope that I can establish an environment like that, where we don’t have to worry so much about putting kids on the spot (HS Think Aloud 1, p. 1). Ms. Stevens coached students toward waiting and listening quietly to other students’ performances, and celebrating one another’s successes (e.g., HS Field Notes 3/2, p. 4). Any behavior that was not supportive and respectful was dealt with immediately through redirection or a time-out (e.g., HS Field Notes 3/11, p. 2). When intervening to manage behavior, Ms. Stevens was not punitive. Instead, she was likely to remind students that their job at school was to learn, and part of that job included creating an environment in which other students also could learn. I think that goes back to the empathy idea like with Love and Logic® [a popular approach to parenting and classroom discipline]. Expressing to the kids it is not me against you, I am trying to help you learn by putting you here [move to sit by another student]. Or, by asking you to go here [time out], that is going to help you learn, and that is the important thing (HS Think Aloud 2, p. 2). Before students were expected to sing, chant, play, or move alone, they had the opportunity to try out similar material as a group (HS Think Aloud 2, p. 4). When she wanted individual responses in a new activity, Ms. Stevens would often start with students she knew felt confident and who she thought would be successful (HS Think Aloud 1, p. 1), and she often provided examples that students could use to guide their musicking (HS Think Aloud 1, p. 4). Moreover, most activities in which students musicked alone were structured as games, individual answers were brief, and fun was the focus. Ms. Stevens also worked to build musical readiness as a way to reduce the risk of independent musicking. For example, in third grade, Hailey was preparing students for an activity in which pairs of students would compose a song using tonic and dominant harmonies in minor (HS Final Interview, p. 2). They practiced the compositional process as a whole group: children tried out phrases of music by singing independently alongside one another, sang their ideas to another child, and then raised their hand to volunteer to sing an idea to the group (HS Field Notes 3/23, p. 1). Ms. Stevens then demonstrated her process of turning those sounds into notation, so that students would have a model for their work in pairs and small groups. I asked Ms.
Stevens if there were assessment activities that influenced her thinking regarding whether the students were ready to engage in this kind of compositional activity. She replied: I think so… the majority of the tonal things we have been doing up until this point, listening to them sing alone… Are they singing in tune? Can they sing a tonic and a dominant pattern in minor in tune? Can they sing chord roots alone in minor? Which, you know, is building their harmonic sense… So I think all of those things go into knowing if they can do this. Little things like improvising tonal patterns… (HS Think Aloud 2, p. 7). Before Ms. Stevens asked a student to take the risk of independent musicking, she used assessments to be sure that the skill set required for the activity was in place. She also supported students’ various levels of readiness by building scaffolding into some activities. For example, when the children were songwriting in pairs, Ms. Stevens planned to provide the poem so the children would have a rhythm and prosody to inspire their musicking and guide their collaboration (HS Think Aloud 2, p. 8). Audiation, which Gordon (2007, p. 399) defined as “[h]earing and comprehending in one’s mind the sound of music that is not, or may never have been physically present,” was important in Ms. Stevens’ concept of readiness. She wanted her students to internalize the music in a cognizant way, so that they could manipulate musical material in their heads and so that they could sing their own ideas alongside others: Even that I try to build in from a young age… in first grade, we’ll do tonal patterns where I’ll sing a pattern, they will audiate theirs, and then as a group they all sing their different pattern together. So, that is a readiness for this, even though it is on a smaller scale… What makes it hard to all respond at the same time is, you really have to be hearing in your head what YOU want to come out of YOUR mouth. And not be distracted by everything else around you. I think that’s built in when they are building audiation… (HS Think Aloud 2, pp. 7-8). Developing skills in audiation was one way that Ms. Stevens built her students’ readiness for independent musicking. Ms. Stevens used classroom management strategies and built readiness in order to normalize independent musicking. She required students to be supportive of one another and cultivated an atmosphere in which mistakes were welcomed as a chance to learn. She also mitigated the risk of independent musicking by presenting the opportunity to experiment with new activities as a whole group and by demonstrating sample responses before students were required to musick alone. Normalizing independent musicking helped create an environment conducive to assessment and differentiation. Individual students were accustomed to singing, chanting, moving, and playing by themselves as well as alongside other students. Therefore, Ms. Stevens was able to plan multiple assessments of individual musicking on various tasks and levels of difficulty as a normal part of music class. Structuring activities with multiple response levels. Ms. Stevens designed opportunities for individual musicking that were open-ended to allow multiple levels of appropriate response. Some of these activities involved responses that were comfortably within the abilities of most, if not all, of the students.
However, differentiation of instruction was evident in activities that I categorized as “self-challenge” and “high-challenge.” In self-challenge activities, opportunities for individual musicking were structured to allow a myriad of “correct” responses that varied in level of difficulty or musical sophistication. For example, when improvising over chord roots, a child could choose a “safe” answer, like 217 singing the chord roots with a rhythmic variation, or a child could choose to sing a sophisticated improvised answer. Either response was “correct,” but each child responded at a different level based on factors such as personality and musical readiness. I observed one example of a self-challenge activity when first grade students improvised rhythm patterns using neutral syllables on chord roots in mixolydian (HS Field Notes, 3/11 p. 3). Hailey and I watched a video excerpt of this activity, in which several children responded by singing rhythm patterns on chord roots or making up non-patterned rhythms on the chord roots. Some children simply sang the chord roots without any added rhythm, while other children chanted rhythms in a speaking voice during their turn. I asked Ms. Stevens what she thought about me labeling these as “self-challenge activities.” I think it is appropriate because [the students] are choosing what they are doing in response to what I am asking them to do. So they could just be doing the chord roots plain like we learned the first time… you mentioned how [some students] kind of stop using their singing voice? Even that to me is saying, “I am not ready to use my singing voice and make up rhythms at the same time… so I am just going to make up some rhythms in my chanting voice.” But that tells me that they are giving themselves what they need, because they can’t handle doing both at the same time (HS Think Aloud 2, p. 3). Ms. Stevens believed that, by offering activities with a variety of levels of correct response, not only could high achieving students challenge themselves, but students who needed remediation also could scaffold for themselves. In essence, these self-challenge activities constituted both assessment and differentiated instruction, allowing Ms. Stevens to simultaneously assess what her students knew and could do and challenge her students to work at their own level. 218 I asked about the strengths and weaknesses of self-challenge activities. Ms. Stevens thought for a minute and replied: Well, the pros, is that the kids who need to be pushed for some harder things can do that. And most of the kids who are ready to be challenged, do it, because otherwise they are bored. Those are the kids that are sitting there kind of plotting what they are going to do to throw you off. To add that weird rhythm at the end… like if we are making up rhythms, they might end not on just a macrobeat. They might end with microbeats or divisions or something unusual. So, the pro is those kids can challenge themselves. Also I think the kids that aren’t ready for the harder things can regulate and take a step back and give themselves something easier to do. Cons… I guess you may have kids who maybe are a little bit lazy… you know there are some kids that are high [aptitude] but lazy, just in general, who might not push themselves. They might just take the easy way out, if they are not being asked to do something more difficult... I guess that’s a potential con… (HS Think Aloud 2, p. 
4) I asked if she had ever asked a child to change a response when she thought they were capable of more. Ms. Stevens said, I might not do it immediately. I might just address to the class—we could also do… like if we were making up a melody like third grade, if there was a kid I thought could do a melody but just did chord roots and some rhythms, I might say to the class “we can make up totally different songs, like this, or this or this!” [Demonstrating different responses]. Then I might go back to that child and say “Would you like to do another one and try to make it different or totally different?” I might do something like that… where I kind of come back to them (HS Think Aloud 2, p. 4). As a part of normalizing individual musicking, Ms. Stevens would not directly criticize any serious response, and I observed her make whole-class suggestions as she described (e.g., HS Field Notes 2/25, p. 2). I also watched her on several occasions ask an individual child for a better response if he was being silly or goofing around (e.g., HS Field Notes 3/11, p. 2), or if she thought he was capable of more (e.g., HS Field Notes 2/23, p. 1). In another clip, the first grade students chanted rhythm “conversations” in triple meter with some “peepers” (a mini-puppet). Ms. Stevens described the students’ responses: The default pattern for some students was [Figure 6.4]. So you can tell the kids who kind of fell back on this [safe answer], versus the kids, I think it was Megan that we watched, who came up with [Figure 6.5] which is a pattern that we have done in LSAs. She’s obviously retained that. And I think it was Jada that did something like [Figure 6.6], with an elongation… I just think that’s a cool indication of them individualizing their own performances. Like, they were all performing at levels that were, you know, where they were at (HS Think Aloud 1, p. 10).
Figure 6.4. “Safe” answer (notated rhythm pattern not reproduced)
Figure 6.5. Megan’s response (notated rhythm pattern not reproduced)
Figure 6.6. Jada’s response (notated rhythm pattern not reproduced)
Ms. Stevens was able to assess the various levels of her students’ achievement in triple meter because they could differentiate their level of response: from those who wanted to be “safe” and use a known pattern (but were able to perform accurately), to those who appropriated a pattern from another context, to those who created their own unique response. “It was interesting to me that the sophistication of each child’s rhythm seemed indicative of their abilities. It was as if the students were individualizing their own instruction by creating something that was at their own level!” (HS Journal 3/4, p. 2). It seems that self-challenge activities combine assessment, opportunities for students to work on their own musicking, and differentiation as equal collaborators in a single activity. In addition to structuring self-challenge activities and planning whole-group activities in which most students were likely to succeed, Ms. Stevens also provided “high-challenge activities.” In a high-challenge activity, the expected responses were difficult enough that only 10 to 20% of the students could approach “correctness,” and the remainder of the students simply absorbed the new information or were exposed to trying a new skill. If I am doing average things most of the time, I am hitting that middle percentage of kids, but what about that 10-20% that really have high aptitude, [who] need a challenge? I can’t just let them be bored, and never have anything pushing them and helping them grow.
So, I do intentionally choose those things [high-challenge activities], hoping that it will engage that high [aptitude] percentage of students. And everyone else just kind of comes along for the ride. And sometimes they surprise you. Sometimes when you pick those really challenging things, you’ll have students that you didn’t think could do it, but 221 they do, and you think: “Wow I never realized that that kid had that potential, and I wouldn’t have, had I not done this activity” (HS Think Aloud 2, p. 4). By offering her students high-challenge activities, Ms. Stevens not only provided for the music learning of students she knew to have high aptitude and/or high achievement, but also for other students who surprised her by showing that they were ready. I observed one high challenge activity in which third-grade students associated solfege to patterns Ms. Stevens sang on neutral syllables and they sang them back to her (HS Field Notes, 2/11, p. 2-3). The students had been given a few chances to try out this new skill as a group, and then Ms. Stevens asked for individual responses. According to my estimate, about a quarter of the responses were correct. However, perhaps due to Ms. Stevens’ established definition of mistakes as learning opportunities, which she reiterated in the course of this activity, or because of her playful demeanor (she said she was trying to “trick” them), I did not observe signs of anxiety or withdrawal. In fact, many students seemed to enjoy the challenge, greeting it with twinkling eyes focused on Ms. Stevens in anticipation of a turn. I asked Ms. Stevens if she worried that students would be turned off by this type of challenge: HS: I just can’t understand that. They LOVE it. When you have them engaged, and they are motivated to learn, they love to have those challenges thrown in… KS: Even if they are not necessarily successful right away? HS: Right! And I think it goes back to establishing that environment of exploration, everybody participates, we all do it alone, if you mess up who cares, and I think that’s another piece of that. If they are not afraid to get it wrong, or to not know the answer, it’s a lot more fun to figure out what the answer is (HS Think Aloud 1, p. 11-12). 222 In Ms. Stevens’ teaching, the classroom environment in which individual musicking is normalized through management and readiness also allows students the freedom to try challenging material. Ms. Stevens also seemed to enjoy keeping students “on their toes” by creating cognitive dissonance—occasionally tossing in an example they had not yet encountered or that could not be described by their vocabulary. One day in LSAs, third grade students were identifying whether a pattern was duple or triple (HS Field Notes 3/18, p. 1). Perhaps because this was an either/or choice, or because it was not difficult for most students, some were not as engaged as they were on other occasions. Noticing this, Hailey improvised a pattern in 5/8. I saw a wave of backs straightening as the students registered something different, regained their eye twinkles, and said “huh?” KS: You also threw them a curve ball and gave them an unusual paired pattern… and they respond really well to your curve balls, I think… HS: They are used to it now. [laughs] They know me. They know if they are getting it, I am not just going to give them the same thing. They know I’m gonna find some way to surprise them (HS Think Aloud 1, p. 11). In Ms. 
Stevens’ classroom, high-challenge activities seemed to function as a motivator for the students, rather than creating anxiety or withdrawal. Between the end of our observation period and the final interview, Ms. Stevens was pinkslipped due to budget problems in her district. Because she was facing the possibility of not returning to her students, she decided to try new things—to really push her kids, and they surprised her with their abilities (personal email communication, 4/2/2010). I asked her to describe this experience: 223 [The students surprised me] in various ways, for example in fourth grade, we had sung in three-part harmony… with each group singing different chord tones. So I thought, why not have some fun, a lot of my fourth grade girls love Justin Bieber so we took that Justin Bieber song, “Baby,” and we… I forget the progression… it’s like I, vi, IV, V maybe? And we did that same kind of thing, but we totally extended it to these new types of chords and we learned about submediant, and they could do it. If I had never pushed them to do that, I wouldn’t have known they could do it. And then we went to another song that’s out and popular right now… It’s got another weird funky progression that uses I, IV, vi and V. And they could totally do it, in three parts, by themselves. Another example is just… with little kids, pushing them more to do more creating and improvising. I added some more tonal pattern conversation stuff in mixolydian with the first graders, and they could totally do it. Improvise patterns in mixolydian, who knew? [In another activity, w]e were doing this little chant that had a part in duple and a part in triple and then moving around the room in two different ways to the two different parts. So I said, OK, let’s see if they can generalize can we change it to buhs, and do the chant with buh buhs and see if they know when to move, and they did. And I thought, OK, this time I am going to improvise in duple or in triple on neutral syllables and see if they can tell whether they should move in the duple or triple way. We didn’t talk about duple or triple, but they could sense it, and they could do it. That’s something I wouldn’t have done until third grade, and here, all along, first graders could have been doing it (HS Final Interview, p. 1). I asked about why she had never tried these types of activities at these levels before, and Ms. Stevens replied: 224 Part of it is I like things to be sequential. And I like to really build in step by step the process… So, part of it was not wanting to short-circuit that process—to spread it out over time. I also think part of it, too, was just thinking: “they can’t do that, it’s too hard for them” (HS Final Interview, p. 1). Possible drawbacks of highly teacher-directed and sequential music instruction include the assumptions that learning sequences discerned in research on groups of children would necessarily apply to each individual child, and that the teacher knows exactly what her students need and in what order. High challenge activities not only allowed students with high aptitude to be challenged, and students with the required readiness to expand their abilities, but they also allowed Ms. Stevens to be amazed by the capabilities of her students. Strict adherence to sequential presentation of musical material may actually have held some students back. According to my field notes, most of the time in music class, Ms. Stevens’ students were engaged in active musicking. 
Non-musicking moments I observed included some direct instruction, some discussion of appropriate behavior, and two written assessments. At the third grade level, about 50% of musicking activities targeted a medium difficulty level at which most students could successfully respond. Many of these activities were whole-group (e.g., folk dances or singing), and some included solo responses with answers that were right/wrong or an echo of the prompt. Perhaps 30 to 40% of activities involved some element of self-challenge: individual response within the group or alone, with innumerable possibilities for “rightness.” The remaining 10 to 20% of activities were high-challenge. In first grade, perhaps due to the developmental and musical readiness of students, more of the activities were medium difficulty—perhaps 70%, with about 15 to 20% self-challenge and 5 to 10% high-challenge. In self-challenge activities, students could practice at their own level and simultaneously allow Ms. Stevens to assess their performance. Structuring an activity with innumerable “correct” responses allowed each student to respond at his or her level and be “right.” Ms. Stevens also differentiated instruction during self-challenge activities by asking for more from students she knew were capable, and by praising progress at each student’s level. High-challenge activities allowed assessment of more advanced skills, differentiation of instruction for those students in need of challenges, and opportunities for learning and experimenting with new skills. Use of these open-response activities was integral to Ms. Stevens’ practice of assessment and differentiation. Summary of environment conducive to assessment and differentiation. The environment Ms. Stevens created through her teaching practices fostered assessment and differentiated instruction. Hailey consistently reiterated her view that all her students could progress musically and that the purpose of music class was for all students to learn music. To that end, Ms. Stevens made independent musicking normal, both through classroom management strategies that reduced personal risk and also by building readiness before students were required to respond individually. Structuring activities with multiple response levels, including self-challenge activities and high-challenge activities, both facilitated assessment and constituted differentiation of instruction. Overarching impact of teacher beliefs. Although I did not intend to discuss methodology and philosophy in this dissertation (see “Delimitations,” Chapter 1), Ms. Stevens’ frequent discussion of her strong methodological and philosophical stances and their direct impact on her practice of assessment and differentiated instruction seem to demand that I do so. [I] belie[ve] that anyone can learn music, anyone can be good at music. I don’t really think it is something that is a talent where some of us may be able to be good at music and some of us not. Some might have more success, or easier success, in music depending on aptitude, but I believe everyone can do it, and everyone is there to learn it, so everyone should be trying. And I think knowing that everyone can do it makes it… the kids understand that everyone is expected to do it and participate. So when I am trying to differentiate instruction and give each student individual attention based on what they need, I think it is just understood that they give that response, individually (HS Think Aloud 2, p. 1).
In our final interview, I asked about what factors contributed to Ms. Stevens’ self-motivation to track individual progress. She replied: Well, it’s funny, before you said “not necessarily philosophically,” but I think that [philosophy] is a big piece that goes into it. Because, for me, having an MLT [Music Learning Theory] background, and looking at students’ individual needs, and not looking at music as a talent, but as something that everyone can do, and everyone can succeed at... I think then enables me, or makes me want to track all of their individual progress, and to help them all achieve to the level [of their] potential… (HS Final Interview, pp. 34). Many of Ms. Stevens’ instructional decisions, including required participation, structured activities with a variety of response styles and levels of difficulty, constant assessment, and differentiated instruction, resulted from her belief that all children could (and should) learn music. 227 If we just say, “I don’t need to assess you, because it’s OK, honey…you can’t really get it anyway.” I guess that stems from my belief that… I don’t believe that music is a talent that some people have and some people don’t. I truly believe it’s an intelligence, and it’s a skill that anyone can achieve at. Maybe not all at the same level or with the same amount of work. Some of us might really have to work at it. But everyone can achieve (HS Initial Interview, p. 10-11). Although she acknowledged different innate capacities for learning music (which she called “aptitudes,” HS Initial Interview, p. 1), Ms. Stevens did not believe that music is a talent given only to the few. When I asked about concerns that a child would give up on music because of being required to participate in singing, Hailey replied: Anyone can learn to sing. Anyone can be musical. So I guess, I look at it as: everyone can do it. And I try to convey that to my students. You can do this. Some of us might need more time and more help. So, I find that my students don’t feel that way [like giving up]. Because they know that everyone can achieve the things that I am teaching… (HS Initial Interview, p. 2) She responded similarly to my question about grading a student’s musical achievement and the possibility of a child giving up on music as a result of a poor grade, adding: I think a lot of parents think, “Can my kid really do this? Are all kids really going to be expected to do this?” So I think when you grade them all the same [i.e., give all “proficient” or “satisfactory” grades] it perpetuates that view… You know, whereas, if it comes out in your assessment that you do use those different categories, then yes, every kid truly is achieving at different levels, but yet everyone can achieve (HS Final Interview, p. 6). 228 Ms. Stevens felt that her constant reassurances that music was something that everyone could learn and her requirement that students participate in music class led to better music learning from her students rather than to students withdrawing from music (HS Think Aloud 2, p. 1). Perhaps the strength and frequency of Ms. Stevens’ methodological and philosophical discourse was influenced by her recent thesis research regarding the impact of teacher beliefs on instructional practices (HS Final Interview, p. 12). Hailey described her view that some teachers seem to believe that music is NOT for everybody and that to be a musician requires talent. 
She even supplied an excerpt from her thesis to supplement one of her journal entries: …[T]here are cultural influences that might make children think that you have to be “professional” to be good at music or have to be perfect to good at music. A good example of this is American Idol, where performers are criticized and it is “cool” to make fun of the people who are “bad.” Also, I think it is true that our culture defines “musician” as someone who is a professional or is extremely talented, so it would come as no surprise that eight-year-olds don’t think they are musicians or are good at music. As an example of this, [here] is an excerpt from my thesis. This is a middle school band teacher answering the question “What is a musician”: In Scott’s view the term “musician” refers to someone who devotes considerable time and effort to music and practicing. “Being a musician, I think, takes a lot of training and exercise and work.” Scott’s definition of the word musician implies what is thought of as a professional musician. “I think a musician is more of a person that kind of, that’s what they do for their life. . . . They do it for a living. They’re good at it.” This belief is also evident when Scott describes his own students. “I don’t really consider them musicians. . . . I think that most of them are too young to be considered ‘a musician’.” 229 Scott also considers a “musician” to be someone who has a special talent for music. “I think musician kind of already says you’re good at music. You’re musical.” Scott’s words suggest that he believes that music is a talent which some people have and others don’t. When discussing his students in terms of being musicians, Scott states, “the students are here to learn how to be a musician, and eventually that will come if they have that innate talent.” Scott believes that the potential to be a musician is not something that everyone possesses. “Some people can’t be quote-unquote ‘a musician’ because they might not have that talent.” Sad, huh???? (HS Journal 3/16, p. 2-3, underlining added, italics excerpt from her thesis). Ms. Stevens’ practice of assessment and differentiated instruction stemmed directly from her philosophical beliefs regarding universal musicality, which were in part influenced by her methodological background in Music Learning Theory. Because of my hesitance to discuss philosophy and methodology in this document, I did not introduce these topics. However, as they emerged in interviews and journals, I did ask clarifying questions. Hailey’s fervent belief that all of her students could and should learn music was evident in each of our interviews, in her journals, and in my field notes. Furthermore, the influence of her philosophical and methodological stances could not be separated from her teaching or her participation in this research without compromising the veracity of my report. Discussion of Ms. Stevens’ instructional practices, and specifically of assessment and differentiation in her teaching, would not be complete without at least this brief description of her philosophical and methodological perspectives. Chapter Summary Hailey Stevens believed that every person is musical, and that it was her job as a public 230 school music teacher to help each student learn music. Each student brought different aptitudes and experiences into the classroom, and Ms. Stevens saw herself as a facilitator who provided appropriate activities to guide the sequential music learning of each student. 
Because of this, Hailey required students to participate in music and engaged in frequent assessment and differentiated instruction. One possible weakness to Ms. Stevens’ primarily teacher-directed approach to music teaching and learning was that it was predicated on the assumption that Hailey knows what is best for students to learn in music and in what sequence they should proceed. Her use of high challenge and self challenge activities may have mitigated this weakness. Ms. Stevens used report cards, aptitude tests, Learning Sequence Activities, embedded assessments, and occasional written quizzes to track individual music learning. Of these, LSAs and embedded assessments were the most frequently used, occurring in every lesson. These assessments required individual musicking responses and functioned both as a way to measure progress and as a way to differentiate instruction. They were typically rated either using tally marks (LSAs) or using four-point rating scales (embedded assessments). It was difficult to describe the impact of assessment practices on differentiation of instruction, because they seemed inextricably intertwined. Any time a student responded individually in class (and opportunities were frequent) it seemed to constitute both an assessment (since Ms. Stevens rated responses in her records) and also differentiated instruction (either because Ms. Stevens varied the difficulty level according to the child’s prior achievement or because the child could select his own level of challenge). The frequency of assessment activities, nature of Ms. Stevens’ instruction, and use of aptitude tests seemed to allow her to separate musical abilities from academic capabilities and behavioral challenges. Ms. Stevens’ 231 teaching artfully balanced nearly omnipresent musicking, assessment, and differentiated instruction in a fun, supportive environment. 232 Chapter Seven: Cross-Case Analysis The purpose of this dissertation was to explore the relationship between assessment and differentiated instruction in elementary general music. To that end, I have presented case studies detailing the assessment and differentiation practices of three public school elementary general music teachers: Danielle Wheeler (Chapter 4); Carrie Davis (Chapter 5) and Hailey Stevens (Chapter 6). In each case study I first answered each of my guiding research questions: (1) When and how did the participants assess musical skills and behaviors? (2) How did participants score or keep track of what students knew and could do in music? And (3) What was the impact of assessment on differentiation of instruction? Then, I described themes related to assessment and differentiation that emerged from my data analysis. Carrie Davis’s data required a different analytical approach (see Chapter 5). The current chapter presents a cross-case analysis of data from all three cases. This analysis is not intended to compare practices but to illuminate my focus: how teachers applied the results of assessments to individualize music instruction. To do this, I will identify themes that emerged across cases and also describe divergent practices (Stake, 1998). When I designed this study, I did not know that my participants’ practices would be so diverse that one participant’s data was not amenable to the same analysis as the others. I also did not know that participants would seem to debate one another, presenting strongly divergent viewpoints, as I asked them each the same (or similar) interview questions. 
It seems that one of the most salient findings of my cross-case analysis is that individual teachers’ practices vary widely, even when they were chosen specifically because they shared certain characteristics, namely that they valued the role of assessment in elementary general music instruction. With only three participants, there was no “average” response. Either the participants were all similar, one disagreed with the other two, or all three had differing views or practices, and I believe it would be disingenuous to present some sort of conglomerate compromise as representative of all three participants when their approaches differed. Consequently, some comparison seems inevitable in the course of this chapter. Rather than evaluating the participants, I invite the reader not only to hear what their three voices say in concert but also to learn from their divergent practices. Therefore, I have structured this cross-case analysis as follows. First, I will present a summary of common practices and any significant variation among participants related to my three guiding research questions. I will also analyze emergent data from each case study across all three cases. Then, I will discuss the themes that emerged from cross-case analysis. When appropriate, I provided vignettes of teacher practices to illustrate themes. This chapter synthesizes information already presented with reference to source material in chapters 4, 5, and 6. This synthesis would be difficult to read if I cited three sources anytime I compared, contrasted, or summarized. Therefore, I cited only direct quotes and material that was not previously mentioned in individual case discussion. To assist the reader, I have included tables summarizing the results from each case study (see Table 7.1, Table 7.2, and Table 7.3).

Table 7.1 Summary of Findings, Danielle Wheeler

When and How was Music Learning Assessed?
Assessments were ongoing, including: checklists, rubrics, rating scales, report cards, observational assessments, portfolios, self-assessments, and aptitude tests.

Scoring and Tracking the Results of Assessments
Checklists, rubrics, rating scales, and observations recorded in gradebook or on class list. Portfolios (contained self-assessments and written work like compositions and quizzes).

Assessment and Differentiation
Kindergarten: centers, early childhood approach. Fourth grade: praxial group work, creative group work, independent warm-ups and written work. Differentiation based on the assessments of others (IEPs, etc.).

Emergent Themes
Inquisitive disposition. Linkage of curriculum to assessment. Teacher behaviors conducive to differentiation.

Table 7.2 Summary of Findings, Carrie Davis

Self-Reports of Assessment
Ongoing assessments were reported, including: aptitude testing, report cards, and observational assessments. Importance of individual responses.

Assessment and Differentiation of Instruction in Small-group Composition
Flexible groupings. Student-centered learning. Peer coaching. Informal, emergent assessment methods. Summative assessments.

Differentiation of Music Instruction for Students with Cognitive Impairments
Early childhood approach. Role of paraprofessionals. Social mainstreaming vs. inclusion.

Constructivism and Differentiation
Teacher as facilitator. Differentiation inherent in Ms. Davis’s practice of constructivism. Collaborative, cooperative learning atmosphere.

Table 7.3 Summary of Findings, Hailey Stevens

When and How was Music Learning Assessed?
Ongoing assessment: report cards, aptitude testing, written assessments, Learning Sequence Activities (LSAs), and embedded assessments.

Scoring and Tracking the Results of Assessments
LSAs: hash marks or 4-point rating scale in binder. Embedded assessments: 4-point rating scale in palm pilot or on paper, transferred to spreadsheet. Aptitude tests: percentile rank. Necessity of individual response.

Impact of Assessment on Differentiation of Instruction
Inextricably intertwined. Differentiation as a natural consequent of assessment. Assessment as a form of differentiation. Separating musical abilities from academic or behavioral abilities. Data-driven student-centered learning.

Emergent Themes
Environment conducive to assessment and differentiation. Purpose of music class. Normalizing musicking. Structuring activities with multiple response levels. Overarching impact of teacher beliefs.
When and How did Participants Assess Music Learning? Ms. Wheeler, Ms. Davis, and Ms. Stevens predominantly had similar practices in terms of when, how, and how often they assessed students’ music learning. All three teachers integrated assessments into their teaching on an ongoing basis, using a variety of assessment strategies, including performance measures, such as rating scales, as well as written assessments, such as self-evaluations and quizzes. When participants assessed. Although the literature suggested that most elementary general music teachers primarily engaged in assessments prior to grading for report cards (Hepworth-Osiowy, 2004), all three participants in this study consistently assessed music learning on an ongoing basis throughout the school year. Each teacher mentioned summative assessments or other assessments that were directly related to grading for report cards, but grading for report cards was not the primary reason any participant utilized assessments. Ms. Wheeler and Ms. Davis reported that preparing for performances hindered or even extinguished their usual assessment practices, similar to those teachers who reported pressures to perform as one challenge to their assessment practices in previous studies (e.g., Hepworth-Osiowy, 2004). Ms. Stevens did not prepare students for performances, opting instead to invite family members and other caregivers to come see a music class “informance.” Ms. Stevens taught the informance music class mostly like a normal music class, except she gave brief explanatory comments and invited the visitors to participate in musicking alongside the children (HS Field Notes 3/25, p. 1). The ongoing nature of the participants’ assessment practices made separation between “when” teachers assessed and “how often” they assessed difficult. In effect, the “when” of these participants’ assessment practices was “regularly throughout the school year.” How often they assessed ranged from Ms. Stevens, who formally assessed two to three musical skills or abilities in every class, to Ms. Wheeler, who informally assessed in nearly every class with her “observational lists” and formally assessed some skill or ability about once a week, to Ms. Davis, who only used informal assessment during the observation period. Ms. Davis was in the midst of performance preparation, so I did not see her normal assessment behavior, and she reported that ongoing formal and informal assessment was more typical of her practice. Extrapolated over the school year, overall rates of assessment in the current study seem higher than those reported by Talley (2005) and Livingston (2000). However, higher rates should be expected.
Participants in the current study were purposefully selected because they valued the role of assessment in music 237 education, whereas the random samples in other studies included teachers who did not assess at all (Livingston, 2000; Shih, 1997; Talley, 2005). How did teachers assess music learning? Consistent with the literature, participants in this study used a variety of methods to gather assessment data. Previous research indicated that the most commonly used methods of assessment in elementary general music classrooms were systematic observation/roaming and checking the group (Hepworth-Osiowy, 2004; Livingston, 2000; Shih, 1997). Ms. Wheeler reported using these strategies, and often supplemented her observations by jotting down the names of students who needed additional assistance. Ms. Davis and Ms. Stevens also roamed and checked, but seemed to view this as part of instruction/facilitation of music learning, and not necessarily as an assessment strategy. Performances (i.e. formal performances in front of an audience) were reported as a frequently used method of assessment in previous studies (Hepworth-Osiowy, 2004; Livingston, 2000). Ms. Wheeler and Ms. Davis characterized performances as an assessment of student learning, although neither of them evaluated individual or group participation in the concert. In addition, they both indicated that preparing for performances interrupted typical music teaching and learning, including ongoing assessments. Ms. Stevens strongly disagreed with using performances as an assessment tool. Because her view is unusual among my participants and also in the literature, I include the following extended explanation from her journal: Doing a true informance [presenting a typical music lesson to an audience in the regular classroom setting with no additional preparation] enables us to spend ALL of our class time on the students’ learning and developing their musical skills. That is the purpose of music class. The purpose of music class is not to entertain parents. Thus, I do not believe that music class time should be spent preparing cute programs or musicals that take time 238 and energy away from student learning [in order] to entertain. Typically, traditional programs take up a lot of class time to memorize songs, learn lines, add choreography, plan costumes, etc. But what is the educational value of that for the students? They may have fun and may have a nice memory of that, but is that the purpose of music education? I believe the purpose of music education is to develop musical skills and understanding so that students can become independent musicians and musical thinkers. Traditional programs [performances] take time and energy away from achieving that goal. I choose to do true informances with my classes because, rather than taking away from that goal, it allows the focus to stay on that goal AND for us to share it with the parents. I believe that the parents leave the informances with a greater understanding of what the students are learning and doing in music class, and I have only heard appreciative things from the parents… I also think the informance allows students to feel ownership of and pride in their musical learning. …I did not spend any time prepping the students for [their] informance [because] I want the informances to be a true picture of what music class looks like and what the students know and can do. By NOT spending time prepping it allows the parents to get an authentic portrayal of what the students are learning and doing. 
It had never even occurred to me to “prepare” the students for the informance because I just don’t feel the pressure to “perform” in an informance. Also, I try to always encourage quality in our music-making each and every day, so I hope that the “product” we share in the informances is highquality without the need for rehearsing and the “drill-and-kill” that typically happens 239 before a performance (HS Journal 3/25, p. 1). Hailey questioned the value of performances as an assessment tool and also as a part of the elementary general music curriculum. Danielle and Carrie both teach in districts in which traditional performances/programs are expected by parents, students, and administrators, and in which these stakeholders are accustomed to a high-quality product on stage. Ms. Wheeler and Ms. Davis both expressed reservations about the impact of preparing a polished performance on their students’ music learning but felt that these performances were required. In fact, due to her concerns about performance preparation interfering with music learning, Ms. Wheeler recently switched to informances with her first and second grade students rather than performances. All three participant teachers reported using rating scales, checklists, and written assessments, similar to teachers in other studies (Hepworth-Osiowy, 2004; Livingston, 2000; Shih, 1997; Talley, 2005). Like the respondents to Talley’s (2005) survey, participants in the current study did not use published achievement tests and, instead, designed their own measures of student achievement. However, each participant used (or has used) published tests of music aptitude. Shih (1997), Hepworth-Osiowy (2004) and Livingston (2000) did not inquire about use of music aptitude tests. However, Talley (2005) asked if her respondents used aptitude tests and found that most did not. Gordon (2010) asserted that many music teacher preparation programs have not sufficiently informed their students about the purpose, utility, and availability of music aptitude tests. The fact that, despite their considerable differences in philosophy, methodology, and background, all three participants used (or had used) aptitude testing is noteworthy. The Learning Sequence Activities (LSAs) used on a consistent basis by Hailey were not mentioned in the results of any of the above studies, but none of the surveys asked specifically about LSAs as 240 an assessment tool. If respondent teachers were using LSAs like Ms. Stevens, it is possible they marked the box for “checklist” on the survey. It was difficult to ascertain the role of creative activities (e.g., composition projects) or various forms of group work as assessments in elementary general music classes by using the available large-group surveys (Hepworth-Osiowy, 2004; Livingston, 2000; Shih, 1997; Talley, 2005). These studies did not ask if “performances” or “presentations” being used as assessments were student-created. Furthermore, if student-created work was used as an assessment, respondents may have recorded this information by indicating that they used a rubric, rating scale, checklist, observation, presentation, or performance as the assessment for that project. Participants in the current study used rating scales, checklists, and observation to assess students’ creative work in music class. 
However, as Christensen (1992) proposed, teachers in the current study indicated (to varying degrees) that the process of interacting with music and/or other students in the process of composition was as important as the product. Perhaps because of this, both Ms. Wheeler and Ms. Davis used self-assessments of student learning, which have been studied in individual settings (e.g., Brummet, 1992; Niebur, 1997) but were not specifically mentioned in any of the surveys of music assessment practices. Ms. Wheeler was unusual among the participants in the current study in that she used portfolios, although Brummet (1992) and Brummet and Haywood (1997) suggested use of portfolios (or “process-folios”) as a holistic, authentic way to track individual students’ progress in music class. Perhaps I either asked or answered the question of “how” teachers assessed slightly differently than these surveys. I described the types of activities teachers used to elicit responses in addition to their assessment methods. That is, how were they able to gather the data they 241 needed? Mostly, teachers in the current study embedded their assessments in classroom activities, using games and other activities to elicit individual and small-group singing, chanting, movement, and instrument play. Typical assessment activities were not separate from instructional activities, which meant they were contextual, authentic, and flowed naturally from normal classroom musicking. Each teacher also occasionally used more acontextual, atomistic, less musical assessments, such as pencil and paper quizzes. Ms. Stevens assessed several times in each class using games with built-in individual responses or other activities such as improvisation or instrument play, and Ms. Wheeler used similar assessment activities, but less often. Although I did not see them, Ms. Davis also reported using similar activities outside the observation period. Ms. Stevens assessed as she taught and taught as she assessed to the degree that her practice of differentiated instruction and her assessments of students’ capabilities were virtually indistinguishable. Ms. Wheeler and Ms. Davis used centers as a way to build in the opportunity to assess. While students were exploring music through a variety of tasks at centers around the music room, the teachers stayed at one center and assessed a skill. How did Participants Score and Track Students’ Music Learning? Each participant in the current study graded students on report cards as required by her district (once a year for Ms. Wheeler and Ms. Stevens, twice a year for Ms. Davis). The grading systems were similar, reflecting the progress of each student in terms comparable in meaning to “developing,” “progressing at grade level” and “exceeds grade level expectations.” However, each teacher discounted her report card as a valuable assessment tool for a variety of reasons, including the report cards’ focus on assessment of behavior rather than musical skills and disagreements with the report cards regarding what facets of music learning were important 242 enough to grade. These problems with report cards were similar to those reported by HepworthOsiowy’s (2004) participants. Ms. Davis even suspected that her students were so concerned with what grade they would receive that they were distracted from their individual progress musicking. 
Each teacher reported that assessments needed to be of individual students’ performance, and each teacher therefore built in a variety of opportunities for obtaining individual responses. Individual musical responses were typically evaluated using a rating scale designed by the teacher, or the teacher simply checked yes or no if a skill was adequately demonstrated or if the student participated. The participants all reported that it was necessary to keep records in the moment, because it was nearly impossible to remember how each child performed and then record that information later. Participants also reported using class lists and/or grade books as a convenient place to jot down assessment data. In addition, Ms. Stevens used her Palm Pilot to record some assessment data and kept a dedicated binder to track each student’s progress on LSAs. Both Ms. Wheeler and Ms. Stevens reported charting student data so they could see which students were progressing with specific skills and tailor their instruction accordingly. All three participants created rating scales and checklists to evaluate various musical tasks, although Ms. Wheeler and Ms. Davis mentioned difficulties with remembering which scale they were using and/or what the ratings meant for various activities once the ratings were recorded in their grade books. Danielle and Carrie also both mentioned feeling that assessing every student on a particular task took too much class time. This concern was echoed in the literature (Brummett & Haywood, 1997; Hepworth-Osiowy, 2004; Peppers, 2010). Ms. Stevens had a system of four-point rating scales that were specific to each activity and yet similar enough across activities that she was able to remember what each rating indicated. In addition, Hailey rarely evaluated all students in a class on the same skill on the same day, instead opting to check perhaps a third of the students and then move on to another activity. She would then return to the assessment activity on subsequent music days to evaluate the remainder of the class. Furthermore, in Ms. Stevens’ teaching, the intertwining of assessment and instruction resulted in active student engagement in musicking, even as individual students had brief turns to demonstrate their abilities. Each participant mentioned significant challenges to her practice of assessment. Similar to teachers in prior studies (Brummett & Haywood, 1997; Hepworth-Osiowy, 2004; Peppers, 2010), Danielle, Carrie, and Hailey reported that high class sizes, high numbers of students overall, and lack of time (both in-class to administer assessments and also outside of class to maintain records) interfered with their abilities to assess music learning. In addition, Ms. Wheeler and Ms. Stevens experienced resistance from other teachers in their districts who did not agree with their assessment practices. All three participants had to be self-motivated regarding ongoing assessment of individual students’ music learning, because there was little oversight or administrative support of their assessment or grading practices. What was the Impact of Assessment on Differentiation of Instruction? On the whole, participants in this study demonstrated similar assessment practices, even if rates of assessment differed somewhat and a few practices, such as the use of LSAs, portfolios, and self-assessments, were not universal.
Analysis of the impact of assessment on differentiation revealed both areas of similarity and also some important divergences in the instructional practices among participants in this study. Participants used a variety of tactics for differentiation of whole-group instruction as well as a number of group work strategies. They also each differentiated for students with special needs. Tactics for differentiation of whole-group music instruction. According to my observations, all three participants primarily taught the whole class at the same time. Ms. Wheeler generally used whole-group instruction with her fourth grade students, although they also played recorders independently during warm-ups, played recorders in duets and trios, and worked alone on written assignments, such as Rocket Notes and compositions. In kindergarten, instruction was always whole-group, with the notable exception of centers day. Ms. Davis exclusively used whole-group instruction with her fourth grade and CI students. In third grade, brief periods of whole-group instruction supplemented the cooperative group work that constituted the majority of my observations. Because Carrie was in the midst of preparation for a performance, my observations of both third and fourth grades may not represent her typical practice. Ms. Stevens primarily taught through whole-group instruction--in fact, I observed only two examples of other types of instruction (independent written work on quizzes). Different students have different learning needs, and therefore it seems logical that reliance on whole-group instruction would complicate differentiation. However, each participant indicated she differentiated whole-group instruction by varying activities over time. For example, Ms. Stevens varied the difficulty levels of whole-class activities and planned easier, “fun” activities to follow high-challenge activities (HS Think Aloud 2, p. 5). Ms. Wheeler worked to integrate aural, visual, and kinesthetic elements into her teaching, and used technology and popular music (e.g., video karaoke of “Fireflies,” DW Field Notes 1/15, p. 2; YouTube of the choir at PS 22 singing “Eye of the Tiger” and “Just Dance” with some rapping, DW Field Notes 2/19, p. 2) in addition to “school music” like folk songs, patriotic songs, and children’s songs. Hailey also integrated popular music (e.g., the Justin Bieber song “Baby,” HS Final Interview, p. 1), as did Carrie when her third graders composed raps. All three teachers consistently varied meter, tonality, and other musical elements and regularly included singing, chanting, moving, playing instruments, and listening to music. Ms. Wheeler and Ms. Stevens rarely talked about music or taught by talking. In contrast, Ms. Davis used Socratic-style questioning/discussions as a teaching tool with her students. Varying music class material over time in terms of presentation mode, difficulty level, types of music, and types of activities was one way that each teacher differentiated whole-group instruction. By presenting different material in different ways, participants hoped to meet the varying needs of each individual student at least some of the time. Hailey Stevens was particularly adroit in her differentiation of whole-class instruction. In addition to varying musical materials, presentation modes, types of activities, and difficulty levels, she also used open-response activities that included both self-challenge and high levels of challenge to differentiate whole-group instruction.
By designing opportunities for all children to respond individually at different levels of achievement and musical sophistication several times in every class, Hailey found a way to teach different lessons to individual children in the whole-class context. In addition to the variety of open-ended high-challenge and self-challenge activities, the most important features of Hailey’s differentiated whole-class instruction were the number of individual responses she elicited and the amount of data she was able to collect and track regarding each child’s various abilities. When I observed Ms. Stevens’ teaching, I could gauge each student’s musical achievement in a variety of areas (singing, rhythm/beat skills, playing instruments, improvisation), because there were so many opportunities for each child to musick alone. The following fictionalized vignette synthesizes data found in my field notes and in journals from all three teachers. It is intended to illustrate ways that an elementary general music teacher could differentiate instruction while teaching a whole class. Third grade students file into the music room and take their assigned seats on the carpet for “vegetables” (LSAs). Several children smile or wave at me; I was not present at their last class meeting. As they settle in, the teacher takes a drink of water, puts away her iPod from the last class, grabs her Palm Pilot, smiles and says good morning to the students as she heads over to the music stand where she keeps her LSA binder. Today’s LSA is high-challenge and open-response. The teacher improvises a tonic or dominant tonal pattern in major, using solfege. The students each decide if it is a tonic or dominant pattern, and, during a wait time, create a different pattern with the same harmonic function to sing back. This is the first time the students have tried this particular LSA, so the teacher starts with a warm-up in which students echo tonic and dominant tonal patterns using solfege and label them as tonic or dominant. They also review which syllables constitute tonic and dominant chords by singing a jingle the teacher created. The teacher offers some suggestions for ways students could create their “answers,” such as using pieces of her “question,” or giving back her “question” in reverse. Then, students practice creating a different answer from the teacher’s prompt by musicking alongside one another. The teacher sings do-sol-mi, and the whole group listens and silently creates a response pattern. Then, she breathes and cues with a gesture, and they all respond with their own answer at the same time, resulting in a three-chord pastiche of tonic harmony. In this manner, they practice a few times while the teacher reiterates strategies for creating replies, and then she starts soliciting individual responses. To start, the teacher sings a prompt, and students who feel ready to sing their response alone put their finger on their chin. The teacher takes an individual response, and then sings a new prompt. At any moment, she could alter her gesture to ask the whole group to sing their responses together, so the children each prepare an answer every time. Although today the teacher is only asking for individual responses from those who volunteer, most students are volunteering to try. The students seem excited to show the teacher what they can do, and they also know from experience that she will eventually get responses from everyone.
Perhaps because this is a new activity and students need to experiment and practice, the teacher seems to be asking for whole-group responses more often than she usually does. Several students sing correct responses--a different pattern of the same harmonic function with the correctly applied solfege. Two students sing back the same notes as the prompt, but with different [incorrect] solfege syllables. A few other students formulate an answer that consists of solfege from the correct harmonic function, but they do not sing the pitches that match their solfege syllables. When either of these happens, the teacher sings “Did you mean …” and sings the pitches that correspond to the solfege the student provided. After not quite 10 minutes of this high-challenge, open-ended activity (including the opening warm-up and teaching students how to respond), the teacher has rated responses from nine students and closes her LSA binder. She asks the students to grab their recorders on the way to sit in a circle on the carpet, and to warm up by practicing their “my-level song” 16 for about three minutes. After circulating for a few minutes to provide assistance to students with questions, the teacher reintroduces an eight-measure song that the class composed as a group by projecting a notated version onto the screen at the front of the room. Last week, the teacher used this “class song” as an A section, while individual students composed B sections. After explaining that each B section would be 8 beats long (the “class song” is in duple meter), the teacher sprinkled hearts with notation on them around the students. They drew eight of the hearts and tried the notes out in different orders until they were pleased with how they sounded. The teacher selected 8 volunteers to share their B sections last week, and today the remaining 15 students will have their turns. The students seem excited to share their compositions, and I notice that they also listen attentively to the compositions of others. The teacher rates each student’s performance on recorder playing skill 17 using a scale she designed:
4: Student plays accurately, in tempo, with good tone.
3: Student plays accurately, but not in tempo or with squeaks/cracks (circle applicable problem or both).
2: Student’s fingerings do not match her notation on two or three notes.
1: Student’s fingerings do not match her notation on four or more notes.
Because the song is short and the students are accustomed to this type of activity, allowing the remainder of the class to have a turn takes about 8 minutes. [Footnote 16: In a system similar to Recorder Karate (Philipak, 1997), the teacher has ranked a set of songs by difficulty. Students work through these songs independently and test onto the next level by playing for the teacher. All students are expected to complete at least the first four levels.] [Footnote 17: Please note that this same activity could also rate the composition. Results from this study indicate that it is best to choose only one specific behavior to rate, rather than trying to use two rating scales at the same time or trying to rate two different dimensions on one scale.] Noticing that the class seems a little antsy from concentrating this long, the teacher sings A Ram Sam Sam as students are putting their recorders away. The students learned this song earlier in the year along with a body percussion partner game. This body percussion is quite challenging for some children, while others are already fluent in their performances.
The teacher has students take over singing the song while she sings chord roots and they play the body percussion game with a few different partners. The tempo creeps up as the students laugh, move, and sing. As the students return to their spots sitting in a circle on the floor, the teacher establishes tonality in Aeolian using solfege, and one of the students hears that singing sol instead of si is different and asks about it. The teacher says she will call it minor tonality for now, but that soon they will learn more about it. She praises the student’s discriminating ears, and the student glows. Based on a song the children know well, the teacher demonstrates several options for melodic improvisations over chord roots, and then asks if any students feel ready to give it a try. Ellen volunteers to go first, and as the rest of the class hums chord roots, she improvises a melody that fits the chord changes and is different from the prompt song. Several other students take turns to improvise, and the teacher rates their performances in her PDA using a four-point rating scale: 4 = stayed within tonality/meter and fit over the chord roots; 3 = stayed within tonality/meter and fit over chord roots most of the time; 2 = in singing voice but not in the context of tonality/meter; 1 = able to create something but not in singing voice. She also makes a note of students who simply sing the familiar song. For her turn, Rachel sings the chord roots with a rhythmic pattern. Another girl says, “Hers sounds like [sings] ‘one bottle of pop, two bottle of pop, three bottle of pop, four bottle of pop!’” This is a pattern the students previously learned to accompany the song Don’t chuck your muck in my dustbin. Without missing a beat, the teacher has half the class sing that song in minor while the other half sings the chord roots they had been using for the improvisation. When I asked her later, she confirmed that the class had learned that song in major, so I found it interesting that they could sing it in minor with ease. The 40-minute music time is over, and the teacher asks the students to line up at the door. The classroom teacher is late to pick up her class. Some kids in line practice the body percussion to A Ram Sam Sam, another student asks about tonal patterns that are notated on the board, and someone else asks about a new instrument on the shelf (a gankogui). The teacher picks it up and plays a rhythm on it so the students can hear what it sounds like. Without “missing a beat,” one of the students chants the pattern back on rhythmic solfege. This leads to a game in which the teacher plays rhythms and the students associate solfege--ending with a rhythm that was difficult enough that I am not sure I associated the correct syllables. As usual when the teacher “tricks” them, the students laugh and create a jumbled mash of made-up solfege to attempt the complicated rhythm. During this class period, the teacher’s third grade students sang, moved, and played instruments. Students musicked individually alongside one another in the warm-up, whole-class responses for LSAs, and recorder warm-up, and they responded individually during LSAs, playing B sections on their recorders, and while improvising over chord roots. The teacher gathered data on all three of those performances. Students were musicking for nearly every moment of class time, and took part in a variety of activities, 18 including composition and improvisation as well as high-challenge and self-challenge. [Footnote 18: In this fictionalized vignette, I assumed students had previous experience with these types of activities. Each participant built routines and expectations to facilitate use of activities like these.]
Group work strategies for differentiation in music class. Although each participant in this study primarily employed a whole-group approach to teaching, they also each utilized some group work during the observation period or spoke at length about an upcoming group project. Ms. Wheeler used centers with her kindergarten students, and her fourth-grade recorder students practiced and performed music in duets and trios. Ms. Davis’s third graders primarily worked in cooperative groups during the observation period, and she described centers she had used in the past with fourth grade students. Ms. Stevens did not use group work in the classes I observed, but she mentioned ongoing group work in other grades, and I saw her preparing her third grade students for an upcoming composition project they would undertake in groups of two or three students. Across cases, participants’ group work consisted of use of centers, praxial group work, 19 and creative group work, and they used various grouping practices. [Footnote 19: Elliot’s praxial philosophy of music education advocates that “music making--of all kinds--should be at the center of the music curriculum” (Elliot, 1995) and that the praxis of making music (“combined with the rich kind of music listening required to make music well”) is the best way to learn music. Therefore, I am using the term “praxial group work” to describe group work in which students work together to prepare (and improve through listening, discussion, and practice) an existing piece of music.] Use of centers. Ms. Wheeler and Ms. Davis both used centers as a way to facilitate assessment of music learning and also to differentiate instruction. Ms. Stevens did not use centers in the classes I observed and also did not mention using centers, but I did not specifically ask whether she incorporated them into her instruction. Ms. Wheeler used centers in her kindergarten class as a way to assess individual students, and differentiation of instruction was a secondary benefit to this classroom structure. She called small groups of students to the required assessment centers, but other centers were free choice. Students visited some or all of the centers, for varying lengths of time, alone or in groups. Some children stayed with their friends for all of centers time, moving together from station to station, and others freely joined in ad hoc partnerships and groupings with other students who happened to be at the same center. Children interacted with one another in ways that seemed to foster music learning, including acting as teachers and students, singing and reading together, and having rhythmic conversations on instruments. A few children chose to interact with the materials at the centers by themselves. Ms. Davis reported using centers as a way to facilitate assessment of music learning with her fourth grade students. Student-chosen groups of three or four students rotated to each of the centers in order, including the center where Carrie assessed recorder playing. These examples indicate that use of centers may be an efficient way to assess music learning and differentiate instruction for students in both upper and lower elementary grades.
The choice of free-form groupings at optional centers or more formal rotations through centers with student- (or teacher-) selected groups could depend on the age level of the students, the students’ familiarity with centers-based instruction, and the goals of the music teacher (e.g., exploration of musical materials, student choice based on personal interests, specific learning goals at each center). A search of the literature revealed several studies and articles related to centers-based music instruction. Howard Gardner’s theory of multiple intelligences provided the theoretical framework for Bernard’s (2005) implementation of centers in her elementary general music room. Walsh (1995) utilized centers-based instrumental music instruction for elementary music students. Both of these studies were action research master’s degree theses, and they described the centers themselves, implementation procedures, and student reactions. They did not discuss assessment of student learning or the effects of centers on individual music learning. Differentiation of instruction was inherent in the design of each study, but was not measured or analyzed. These studies focused on practical application/implementation rather than critical evaluation. In an editorial article, Pontiff (2004) advocated changing the format of the classroom by using centers as a way to successfully integrate students with special needs in elementary general music classrooms. Several other research studies have included use of centers, but not as the object of study. For example, Nelson (2007) used centers-based instruction in her investigation of the use of technology and composition to develop musicianship. Praxial group work. Ms. Wheeler’s fourth-grade students engaged in what I defined above as “praxial group work.” In groups of two or three, students selected pieces of music to prepare on their recorders (in unison or parts) and then performed them for an adult. The music that groups could choose was listed on the board, ranked in order of difficulty, and encompassed a wide swath of difficulty levels. Danielle allowed students to select their own partners or trios, with the exception of ensuring that the student with ASD worked with his “LINKS” partners. Praxial group work involved: group selection of a piece to work on; negotiation of playing in unison or parts (and if they were playing in parts, who would play which part and what kind of harmony they would use); rehearsal of the music, including discussion of how to improve performance and peer coaching; and then presentation of a completed performance product to an audience (teacher) with constructive feedback. Although the students were constrained by a list of songs and required to play recorders (rather than other instruments or combinations of instruments), this style of music learning is similar to the informal music-learning model that Lucy Greene has described in her research on non-school music groups such as garage bands (e.g., Greene, 2008). It is also similar to the ways that children teach one another music on the playground (Campbell, 2010). This kind of praxial group work has been studied in secondary instrumental settings, where it usually takes the form of chamber music ensembles (e.g., Allsup, 2003; Larson, 2010). In addition to working on instrumental material, perhaps praxial group work would also be effective if students worked on sung material or could use a combination of instruments and voices. Creative group work. Ms. Davis and Ms.
Stevens utilized creative group work projects, in which small groups of two to four students worked collaboratively to create music and other material such as dances and dramatic scripts. In Ms. Davis’s case, groups were student-chosen and varied for different tasks. That is, students had a scriptwriting group that met several times, but when they created melodic material for jingles it was in a different group, which was different from their performance group, and so on. The only assigned groupings were performance groups, which were chosen by lottery. Students ranked their first, second, and third choices for parts, and then Ms. Davis randomly drew names. When a child’s name was drawn, he was assigned to his first choice if it was still available, and, if it was not, he was assigned to his second choice if it was available, etc. Ms. Stevens assigned groups of two or three students for their composition project, and students worked in these groups for parts of several music classes. When assigning the groups, Hailey considered both behavior and musicality. She wanted to ensure that each pair or trio had a stronger musician who could provide leadership, and she paired particularly strong students with students who really struggled, to encourage peer coaching. Ms. Stevens also tried to ensure that the partnerships and trios consisted of children who would work well together without excessive socializing or other off-task behavior. Composition tasks undertaken in small groups varied. Ms. Davis’s students wrote scripts, choreographed, created sound banks, wrote raps, and composed jingles. The compositional products varied from exploratory improvisation to fairly polished, replicable pieces. Within this spectrum, levels of sophistication also varied, from some groups who produced clever, catchy materials to others who barely completed the task. Based on the variety of processes and products, it seems clear that these group projects necessarily included some differentiation and resulted in the opportunity for assessment. Ms. Stevens’ group composition project was more constrained, as the third grade class composed their own “Carnival of the Animals” (after Saint-Saëns). She provided each group a stimulus poem, which students set to music using q-chords as accompaniment. The final product songs were performed as movements of the class’s “Carnival of the Animals” and recorded on CDs for the students to take home. Use of teacher-assigned heterogeneous groupings ensured differentiation through peer tutelage, and the resultant songs were an assessable product. Researchers (e.g., Phelps, 2008; Strand, 2006) have undertaken surveys that indicate small-group composition projects are used in elementary school settings. Phelps (2008) found that some teachers used small-group composition projects to meet the national standard regarding composition, but that such activities happened infrequently. Other researchers have investigated small-group composition activities in elementary classroom contexts, but most of these studies focused on the compositional process (see Beegle, 2010), the notated product (e.g., invented notation: Ilari, 2002), or social processes and outcomes (e.g., Cornacchio, 2008) rather than how learning was assessed or how group work resulted in differentiated instruction.
Christensen (1992) concluded not only that group composition projects provided an excellent framework for assessment of musical thinking, but also that the nature of group work provided differentiation of instruction, an opinion shared by Freed-Garrod (1999). Analysis of grouping strategies. Flexible grouping strategies were described as one of the hallmarks of successful differentiation in elementary classroom instruction (e.g., Roberts & Inman, 2007; Tomlinson, 2000). According to Tomlinson, groups could be homogeneous by ability, mixed-ability, homogeneous or heterogeneous by learning styles or expressive styles, cooperative learning groups, teacher-assigned, student-chosen, or random. Some of these groupings overlap; for example, a cooperative learning group could be teacher-assigned and include homogeneous abilities, and student-chosen groupings are likely to be heterogeneous by learning styles and ability. Furthermore, each grouping strategy provides an opportunity for differentiation. If a teacher grouped homogeneously by ability, she could vary the difficulty level of assigned material accordingly. If a teacher assigned cooperative learning groups heterogeneously by learning styles and ability, she provides an opportunity for students to learn from one another (both regarding different ways to think about the topic, and also in terms of musical skill level). Student-chosen groupings may be more democratic, and they might more closely approximate how music learning occurs outside of the music classroom (Greene, 2008). Most of the group work described in the current study was undertaken in student-chosen groupings. Only Ms. Stevens considered musical ability in assigning groupings when she ensured that weaker students were paired with a stronger musician. Ms. Wheeler and Ms. Davis both allowed students to choose their own groups, which seemed to result in groups based on friendships. Some of these groups were widely heterogeneous in musical ability and others were somewhat homogeneous. Although group composition projects were described in research regarding elementary general music settings, researchers did not focus on grouping practices. Not all authors specified how students were grouped, and, when they did, the groupings were student-chosen (e.g., Christensen, 1992; Freed-Garrod, 1999). Assuming that students are able to focus on the learning task at hand, friendship-based groups could have a number of benefits, including peer coaching, an increased feeling of democracy in the classroom, and enjoyment of the social aspects of musicking and music learning. However, I observed some evidence of students feeling left out of these friendship-based groups (e.g., CD Field Notes 4/19, p. 2) and the appearance of some groups that consisted of “leftovers” (e.g., CD Field Notes 4/28, p. 4). A mixture of student-chosen, teacher-assigned, and random groupings may remind students that they are expected to work well with everyone and might ease the burden for students who are unpopular or unskilled (Cornacchio, 2008). Furthermore, not all high-achieving students enjoy peer coaching or leadership, which is typically their role in groups that are heterogeneous by ability (Adams & Pierce, 2006). Occasional use of teacher-assigned groupings, in which high-ability students work together, could relieve this obligation. Approaches to differentiation for students with special needs. Each participant in the current study taught children with a variety of special needs.
Students with special needs were not an intended focus of this study. However, when I asked about differentiating instruction, participants frequently brought up this topic. They mentioned specific strategies and struggles related to teaching students who had special needs, and their differentiation of instruction for these students was often readily apparent in observations. Perhaps the nature of students’ special needs demanded adaptations or modifications to music instruction, making differentiation essentially required. Participants used a variety of specific strategies to differentiate for mainstreamed students, and these strategies were different from the ways participants taught music to self-contained classes. The special education populations taught by participants in the current study varied based on district configurations. Ms. Wheeler’s building housed resource rooms for students with English as a Second Language (ESL) and milder forms of Autism Spectrum Disorder (ASD), as well as pull-out special education services for students with learning disabilities (LD). Students who had special needs were always mainstreamed when they came to music class. Ms. Davis’s building housed the programs for students with moderate to severe cognitive impairments (CI) and the Early Childhood Special Education (ECSE) program for her district. Her school also had pull-out programs for students who had LD or were “gifted and talented” (GATEways). Gifted students and those with LD attended music with their home classroom. The ECSE students came to music as self-contained classes. CI students attended music both with their home classroom and also as self-contained classes. In addition, one of Ms. Davis’s students, “Isaiah,” had quadriplegia and used a wheelchair/respirator that he operated with his mouth (CD Field Notes 4/19, p. 2). Ms. Stevens’ building housed two classrooms for students with moderate to severe ASD as well as resource programs for students with LD, ESL, and Giftedness. The students with ASD were occasionally mainstreamed for music. However, students with ASD typically attended music as a group, with the upper elementary and lower elementary self-contained classrooms combined. Students in the remaining populations (LD, ESL, Gifted) came to music mainstreamed with their home classrooms. To summarize, participants in this study reported teaching special education populations, 20 including LD, ESL, Gifted, CI, ECSE, ASD, and students with physical impairments. [Footnote 20: The special education populations taught by participants in this study included most of the diagnoses an elementary general music teacher might expect to teach (Adamek & Darrow, 2005), with the notable exception of students with Emotional Impairment (EI). None of the participants taught in a school that housed a categorical classroom or resource room for students with EI, and none of them mentioned mainstreamed students with EI.] Only Ms. Wheeler had any formal training in teaching students with special needs. This training was specific to ASD, and she also taught students with ESL and LD. This lack of formal preparation to teach children with special needs is prevalent among music teachers (Hourigan, 2007; Salvador, 2010). Nevertheless, participants in this study found ways to vary their music instruction to meet the music learning needs of children with a variety of special needs, whether they were mainstreamed with their age peers or they came to music with their self-contained class. Differentiation of instruction for mainstreamed students. Students with special needs likely benefited from differentiated instructional techniques targeted at all students, such as opportunities for individual response and flexible grouping strategies. Participants in this study also used specific strategies to differentiate music instruction for students with special needs when they were mainstreamed with their age peers.
Paradoxically, when I analyzed how participants differentiated specifically for students with special needs, I primarily found strategies for inclusion--it seemed that individualizing instruction for these students meant finding the ways they could best participate with the whole group. All three participants mentioned utilizing the assessments of other teachers in their differentiation of instruction for students with special needs. Participants learned about the results of these assessments by reading the students’ Individual Education Plans (IEPs) and/or IEP-at-a-glance forms, and through regular communication with special education and classroom teachers regarding specific children. Participants suggested familiarity with the IEP and talking with a child’s other teachers as ways to understand more about each child’s needs and learn ideas for successful inclusion in music, including incorporating behavior plans and any need for specific modifications. This approach is also recommended by Adamek and Darrow (2005) and in Atterbury’s (1990) seminal text on mainstreaming special education populations in general music. In addition, participation in IEP meetings may assist music teachers in differentiating music instruction and could also contribute information about the child’s behavior in a musical setting to assist the treatment team (Hammel, 2004; McCord & Watts, 2006). Ms. Wheeler and Ms. Davis reported using peers to help students with special needs participate in music. Peer instruction was also a tactic employed to differentiate instruction for students without identified special needs, but the type of assistance peers provided was different for students with special needs. The assistance was often logistical, social, or physical rather than (or in addition to) musical. Ms. Wheeler sometimes employed “LINKS” partners from the school-wide peer-buddy system for students with ASD as one way to differentiate instruction. She also considered students’ special needs when assigning seats, so students would have a helper available during seatwork. Ms. Davis also relied on peer support to help students with special needs, although she rarely specifically assigned a “buddy.” Instead, various students helped those with special needs (and their paraprofessionals) when they noticed that someone required assistance with a task or they foresaw a need for help. One notable exception was made for Isaiah, whose wheelchair was bulky and tall, so that anytime his classmates sat on the floor he was isolated from them. Ms. Davis’s students sat on the floor frequently, and anytime this occurred, a buddy stood up next to “Isaiah” and sometimes leaned on his chair. 21 [Footnote 21: Isaiah had quadriplegia as the result of an automobile accident at age 4 and used a bulky motorized wheelchair/respirator that he drove with his mouth.] Peer support and tutelage as a method of differentiating instruction are mentioned in the elementary education literature (e.g., Tomlinson, 2000), and their use is not limited to students with special needs. Music educators and researchers also have suggested use of peer assistance for mainstreamed students in music class (Adamek & Darrow, 2005; Hammel, 2004; Haywood, 2005). Ms. Wheeler and Ms.
Stevens both mentioned the need to modify written work for some students with special needs, particularly those with LD. When she assigned written work, Ms. Wheeler reported “checking in” with students she knew might need additional help to be sure they understood the directions and to get them started. She sometimes adapted or modified written assignments by shortening the amount of material required, reducing the number of items to answer, or changing the nature of the work to be done. For example, Ms. Wheeler might ask a paraprofessional to read the questions on a quiz aloud and write down the student’s oral responses. Ms. Stevens mentioned that, if she noticed that a child’s performance on a written assessment did not match his typical musicking abilities, she would modify the assessment and find a way to give it aurally to be sure the student’s performance reflected his musical skills and not his academic abilities. A review of the literature did not reveal specific research regarding adaptations and modifications to written assignments for elementary music students with special needs. However, in Music and Special Education, Adamek and Darrow (2005) suggest modifications and adaptations to written work similar to those described above, including shortening assignments, offering oral alternatives to written work, and changing the nature of the written task (e.g., from composing to copying). All three participants asserted that a student’s musicality was not necessarily affected by his special educational needs. Particularly for students who were developmentally typical (i.e., those with ESL, giftedness, or LD), participants reported that musical development seemed unrelated to the special education diagnosis. When these students sang (especially without words), moved, and/or played instruments, their musical development often seemed within the typical range of other children their age. Gfeller stated:
From a review of the aptitude and achievement research of students with disabilities, one thing is clear: musical potential and ability vary greatly from one disability to another, but also within each category of exceptionality, depending on the severity of the condition as well as the particular musical task (1992, p. 630).
The music aptitude of an individual student and the etiology of his specific disability label may be unrelated. Therefore, in addition to reading the IEP and speaking to special educators, assessments of music aptitude and achievement may help music teachers differentiate instruction for mainstreamed students with special needs. Strategies for teaching music to self-contained classes of students with special needs. Ms. Davis and Ms. Stevens both taught self-contained classes of students with special needs. Carrie’s CI students had functional ages between 6 months and 3 years, and Hailey’s ASD students ranged from ages 2 to 5 developmentally (HS Initial Interview, p. 4). In these classes, many students were nonverbal. Neither Carrie nor Hailey had any formal training in how to teach music to students with such needs. However, they both arrived at the same solution: to use an early childhood approach influenced by Music Learning Theory (MLT; HS Initial Interview, pp. 9-10). The MLT-influenced approach consisted mainly of immersing students in musical experiences and not requiring any particular response.
Musical experiences in this context comprised singing songs and chants to and with the students (with and without words), movement activities, and use of manipulatives and percussion instruments. Teachers encouraged participation through the use of engaging activities and props and by incorporating student ideas. Teachers interacted on an individual musical level with students who chose to respond through movement, chanting, and singing. There is some support in the literature for teaching students with acute special needs by using an approach based on the MLT early childhood instructional model (e.g., Griffith, 2008; Gruber, 2007; Stringer, 2004). Summary of impact of assessment data on differentiated instruction. In retrospect, my research question, “What was the impact of assessment on differentiation of instruction?” presupposed that the relationship of assessment data and differentiation would be straightforward and unidirectional. This relationship might be found in a quantitative design like Froseth’s (1971). His use of a research design with a pretest, a posttest, and treatment and control groups allowed him to isolate the effects of using assessment data on music achievement. However, perhaps due to the heuristic nature of this study, its results imply that the relationship of assessment data to differentiation of instruction is not as direct and simple as I had first imagined. My guiding question assumed I would find examples of differentiated instruction resulting directly from specific assessment practices in a linear fashion. While I did see some examples of such practices, the results of this study were much more complicated. Assessment and differentiation were interwoven richly, informing one another in reciprocal as well as linear and spiraling ways. Emergent Themes. This cross-case analysis has revealed a number of similar and a few divergent practices among the participants with regard to my guiding questions about assessment and differentiated instruction. One emergent theme across cases indicated that participants also shared a number of personal and institutional factors that facilitated assessment and differentiation. Finding additional emergent themes across cases was challenging, mostly due to the nature of the differences among participants. While striving to remain tightly within the scope of this research topic, I have nevertheless concluded that philosophical differences among the participants had a direct influence on their practice of assessment and differentiation, specifically with regard to the amount of structure in their classrooms. Although discussion of the underlying philosophical beliefs that led to these differences is outside the scope of this paper, I will briefly discuss the impact of instructional style on assessment and differentiation. Factors facilitating assessment and differentiation. Participants in the current study came from different generations, attended different undergraduate and graduate degree programs (representing a total of five colleges/universities), and taught in similar settings but in dissimilar parts of Michigan. Despite the differences among their school districts in terms of political, religious, socioeconomic, and other factors, several organizational factors that facilitated assessment and differentiation emerged during data analysis. The participants also exhibited diverse personalities and communication styles.
However, they shared personal characteristics that facilitated assessment and differentiation. Organizational factors that facilitated assessment and differentiation. The schools in which participant teachers worked shared several organizational factors that facilitated their practice of assessment and differentiated instruction. Each school served students from kindergarten to fourth or fifth grade, which allowed an accumulation of data over time. The participants were resident music teachers with their own rooms who were nearly always in one building. The nature of special education provision affected assessment and differentiation of instruction. Finally, each participant had considerable independence to make teaching decisions. Ms. Wheeler felt strongly that a music curriculum should be cumulative from kindergarten through fifth grade. One of the ways she overcame the challenges to assessing and differentiating for nearly 500 students was by coming to know students as individuals over the course of six years. Furthermore, Danielle intentionally used this time to spiral content from introductory to more sophisticated levels. In this model, fifth grade constituted a sort of capstone year, in which larger-form activities, including improvisations and compositions, and a full-length musical production allowed her to assess summative progress from the kindergarten baseline. Although neither Ms. Davis nor Ms. Stevens specifically mentioned a belief in a K-5 cumulative curriculum until I asked about it, they also benefited from seeing students for five or six years. Music teachers who see 400 or 500 students a week cannot track music learning as closely as a classroom teacher with 25 students. However, participants in this study could track individual progress across five or six years of development. Furthermore, they could get to know this large number of students quite well over the years, which facilitated differentiation. Frequently, when I asked about particular students (because of behavior, musicality, etc.), our discussions would reveal an amazing depth of knowledge about the child, from what age he found his singing voice, to his struggles through his parents’ divorce, to how protective he is of his first-grade cousin, to how he competes in motorbike races outside of school. Such treasure troves of rich information, readily accessible in the teachers’ minds, must certainly contribute to these teachers’ ability to differentiate instruction for their students. All three participants had their own music rooms and were in the same building nearly all the time, although Ms. Stevens traveled to another building for two half-days each week. Having their own rooms facilitated assessment, because it allowed them to keep materials and information organized and accessible. Staying mostly in one building contributed to assessment and differentiation, because teachers were better able to participate as members of the staff--from more formal activities, such as participating in IEP planning, to talking with other teachers about students and their needs, to more informal but still important tasks for building community, such as participation in school festivals and events or leadership of extracurricular activities. The manner of music education provision for students with special needs affected teachers’ practice of assessment and differentiation.
Students with special needs, such as ESL and LD, who were mainstreamed sometimes needed a different assessment format and required differentiation, such as modified or adapted written work or a peer buddy. Seeing self-contained classes of students with more acute needs, such as moderate to severe ASD or CI, changed the method of delivery of music instruction. Ms. Davis was able to see students with acute special needs in their self-contained classes and mainstreamed with their age peers. Each of these different classroom dynamics as well as each child’s specific needs affected both assessment practices and instructional decisions. Each of the participants in this study had considerable freedom in how she taught music. Although each district provided a curriculum, it was typically a flexible set of benchmarks to be taught and assessed in the manner chosen by the individual teacher. Furthermore, there was little oversight at the building or district levels regarding teaching practices or any sort of accountability measures to ensure curriculum delivery. Participants in this study capitalized on this freedom by teaching and assessing in ways that complemented their teaching styles and personalities. Their independence also allowed them to experiment with new ideas (such as Ms. Davis’s use of small-group compositions) and to integrate emergent student interests into their teaching (like Ms. Stevens’ use of a Justin Bieber song). Personal characteristics that facilitated assessment and differentiation. Although the participants embodied a range of personalities, attitudes, and behaviors, they shared a number of personal characteristics that seemed to facilitate their practice of assessment and differentiation. Each teacher was a fabulous musician. I saw them accompany on piano and other instruments, make up songs on the spot, and improvise rhythms, chants, melodies, and movement. Furthermore, each participant utilized her own specific teaching style and set of routines, and knew what she wanted to accomplish on any given day and where that fit in her curriculum as a whole. Mastery of curricular content, comfort with teaching style, use of routines, and secure musicianship resulted in a sort of teaching automaticity, which in turn allowed participants to observe learning progress and differentiate instruction in the moment as well as while planning. I noticed that participants were organized, driven, and intelligent. Also, each was modest and self-critical to the degree that I am certain that they would each cite multiple examples to refute my assertions that they exhibited those qualities. Their modesty and self-criticism seemed to foster a sense that they were always learning more and striving to be better teachers. Furthermore, they exhibited clarity about what they thought was important for students to know. This was not necessarily based on district curricula and was sometimes in direct conflict with the views of other music teachers in the participants’ districts. Participants in this study imposed on themselves the assessment of criteria they viewed as important to their students’ learning. They each seemed to view curriculum, planning, assessment, and differentiation as interrelated facets of teaching. These elements were not implemented only in a linear fashion (use the curriculum, write a plan, assess the learning), but each piece informed the others--embedded, spiraling, reciprocal, interweaving. The most striking personal characteristic participants shared was self-motivation.
Each teacher in this study noticed a lack of accountability measures and oversight of her teaching, and nevertheless felt a need to design assessments and differentiated instruction to meet the needs of her students. Assessment and differentiated instruction were time-consuming and difficult, yet participants in this study were motivated to implement them. This motivation seemed to stem in part from the participants’ reflective teaching practices. They seemed to consistently ask themselves how they could improve their teaching and increase students’ learning. However, the motivation primarily seemed to stem from how much each participant cared about individual children as people and as musicians. Furthermore, I wonder if the lack of specific accountability measures and oversight may actually have fostered a more meaningful and personalized sense of responsibility and pride in teaching. Impact of instructional style on assessment and differentiation. When I chose participants in this study, I did not expect their philosophies and instructional styles to be so different. Although the variety of beliefs and practices described in this study made cross-case analysis more difficult, I also think it strengthened the study. I was able to see three positions on the continuum from primarily teacher-led instruction to more student-led, teacher-facilitated learning. I will not compare or evaluate participants’ positions on that continuum. However, I will briefly describe a continuum between direct instruction and teacher facilitation/student autonomy. Then, I will discuss how the participants’ positions on this continuum seemed to affect assessment practices and differentiation of instruction in the data from this study. Continuum between direct instruction and teacher facilitation. For the following discussion, it will be helpful to imagine teaching style as falling on a continuum from direct instruction to teacher facilitation/student autonomy. At one rhetorical extreme of this continuum, every student is taught the same material in the same way at the same time, in a sequence determined by the instructor and using instructor-chosen materials [direct instruction]. At the opposite rhetorical extreme, children are invited to a free-for-all exploration of musicking based on their interests (including lack of interest as an option). The teacher is available as a guide or to assist individuals or groups, but does not design lessons; she does not have particular goals for any learning experience. No participant in this study represented either of these radical positions, and it is unlikely that any practicing teacher would embody such rhetorical extremes. However, this rhetoric serves to illustrate the difficulties with differentiation and assessment at each end of the spectrum. Limiting instruction to whole-class activities using teacher-dictated materials might actually facilitate assessment, because the teacher sets out to teach something specific to the whole group and can then find a way to test the whole group on what she means for them to learn. However, the direct instruction extreme could suffer from an inherent lack of differentiation, which might reduce students’ investment in the learning process because it ignores their opinions, interests, backgrounds, learning styles, and ways of interacting with each other and the world.
Conversely, at the facilitation extreme of the continuum, tracking student progress is rendered nearly impossible by a lack of structure: no goals for the class or individual students, no interest in assessing what students know and can do, no opportunities for skill building or attention to readiness, and a multitude of competing learning styles, musical interests, and levels of participation. Participants in the current study did not occupy extreme positions on this continuum. Ms. Wheeler was the closest to a direct instruction model, using primarily whole-class instruction with whole-class responses and materials that she selected. Nevertheless, she also occasionally used centers, praxial group work, and some popular music. Ms. Davis was the closest to the facilitation end of the spectrum, particularly in her third grade classes, which engaged in collaborative group work for the entire observation period. She refrained from giving ideas or directly solving problems, but instead posed questions and let students wrestle with the issues and discover their own solutions. Even in this context, Ms. Davis sometimes took a more direct instructional role, such as when she helped students select music from their sound bank to put with specific mini-musicals. Ms. Stevens’ teaching was closer to the middle of the spectrum than Ms. Wheeler’s, but like Ms. Wheeler, she was also closer to the direct instructional pole than the facilitation pole on the continuum. The influence of directness of instruction on assessment and differentiation. Each participant’s position on the continuum from direct instruction to facilitation directly affected her practice of assessment and differentiation. Therefore, I will present a brief analysis of the effects of directness of instruction on assessment and differentiation, but I will focus tightly on observed effects rather than the surrounding philosophical issues, which are not within the scope of this paper. Ms. Wheeler was the closest participant to the direct instructional pole on the continuum. In some ways this facilitated assessment--she taught a particular objective to the whole group and then assessed the group. However, at times her approach transformed an assessment that was ostensibly a way to track music learning into an assessment of which students could follow directions (DW Field Notes 3/1, p. 2). For example, she repeatedly drilled material that was going to be on an upcoming test so that every student who was paying attention would ace the test (e.g., a chant, “B is on the middle line, A is on the second space…,” DW Field Notes 1/22, p. 2). Not only did this render the assessment less meaningful, but it also assessed academic ability more than music learning. Ms. Wheeler also used less direct instructional methods, such as centers and praxial group work, and more natural, embedded assessments of musicking behaviors, such as singing and movement. However, analysis of her instruction seemed to indicate that the more direct the instructional model, the more unmusical and atomistic the assessment. When the whole group learned the same material in the same way at the same time, this also had the potential to stifle differentiation, which at its core is teaching different things to different students based on their individual needs. In contrast, Ms. Davis used facilitation with her third grade students for most of the observation period.
It became clear that assessing the musical progress of individual students was extremely challenging in this context. It was hard for Ms. Davis to predict what students would be working on from day to day and, thus, there were not specific goals for any of the activities. This lack of objectives meant there was little that could be measured in terms of individual student music learning. However, a great deal of differentiation resulted from the inherent sensitivity of this approach to individuals' prior knowledge, interests, and learning styles. A lack of goals did not mean that students were not learning, but it made it difficult to ascertain exactly what they were learning. Ms. Stevens' approach to instruction was primarily teacher-directed and whole-class. However, she did not see herself as "imparting knowledge" but rather as a guide who "provided appropriate experiences" (HS Think Aloud 2, p. 2). Within her whole-class instruction, she provided consistent opportunities for independent musicking, when students responded individually, musicked alongside one another, moved independently, and played instruments. Like Ms. Wheeler's, Hailey's direct instructional approach had clear goals that facilitated her practice of assessment, although she rarely used acontextual assessments. The amount of open-ended individual response allowed for considerable differentiation of music learning according to ability and prior knowledge. In summary, swinging toward the facilitation end of the continuum made it almost impossible to assess, because it was unclear what students were and/or should be learning. However, the nature of teacher-facilitated rather than teacher-directed instruction allowed for musical and social differentiation and for exploration of student interests and student ownership of learning. Swinging closer to the direct instructional side of the continuum facilitated assessment practice but impeded differentiation and may have resulted in more atomistic assessments. Without the benefit of assessment combined with high-challenge activities, both sides of the continuum seemed prone to underestimating the abilities of students, teaching beneath their abilities, and not allowing students to surprise the teacher with their musicking. Direct instruction, as implemented in Ms. Stevens' teaching, did seem to allow for differentiation based on music aptitude, prior musical knowledge, and musical achievement. Danielle and Hailey both believed they were teaching measurable music skills as building blocks to provide readiness so that students would be better prepared to succeed as independent musicians, both when they were given small group activities to work on in class and also later in life. They each believed that all students were capable of learning these readiness skills, and that they should be required to participate. They presented teacher-selected materials in the teacher-directed sequence they each felt would best facilitate music learning. In contrast, Carrie felt that, since students did not choose to be in her class and some did not care for music, she should not force participation or focus on sequential skill building. Instead, she wanted to help students view themselves as musicians and enjoy interacting with music. Perhaps this disagreement regarding the nature and purpose of elementary general music education was the root of differences in instructional style and thus the practice of assessment and differentiation.
Summary of Cross-Case Analysis.
Participants in the current study demonstrated analogous assessment practices in terms of when, how, and how often they tracked student learning. Each participant used a variety of assessment methods, including aptitude testing, report cards, checklists, rating scales, and observation on an ongoing basis throughout the school year. Participants disagreed about whether observation/checking the group constituted an assessment or was simply an instructional strategy. They also disagreed regarding the value of whole-class (or whole-school) after-school performances as an assessment. The frequency of formal assessment of individual musicking ranged from two to three times per class to two to three times per month. Analysis of the participants' instructional practices revealed both areas of similarity and some divergences with regard to the impact of assessment on differentiation. In order to differentiate whole-group instruction, the participants varied their method of presentation, planned lessons for a variety of receptive learning styles, provided multiple ways to interact with music (singing, chanting, moving, playing, listening), and sought to integrate a variety of musical styles. Ms. Stevens, in particular, varied the levels of difficulty across activities and offered many open-ended opportunities for individual responses, both at self-challenge and at high-challenge levels. All three participants taught mainstreamed students with a variety of special needs, and Ms. Davis and Ms. Stevens both taught self-contained classes as well. Participants noted that students with giftedness, ESL, and LD did not seem outside of the normal range of musical ability expected for students their age, although they did differentiate by adapting or modifying written work and/or teaching by modeling or through demonstration rather than with words. The teachers who taught students with more profound special needs in self-contained classes both adapted an approach based on the MLT model of early childhood music instruction. This cross-case analysis revealed that the relationship of assessments to differentiation was complex, interwoven, and context-dependent. Sometimes, specific assessment data led directly in a linear fashion to differentiation of music instruction. This was particularly true of IEP data that resulted in modification or adaptation of written work. Assessment data were also directly applied to instruction when an aptitude test demonstrated that a low-performing child had high aptitude and needed additional challenges or motivation. Other times, an accumulation of data would result in differentiation. For example, Ms. Stevens chose to give easier prompts to students who had lower levels of singing voice development based on a variety of previous assessments. All three teachers used personal and musical information accumulated over the course of years to determine "success" for individual students in addition to how each student performed on the particular task. However, sometimes differentiation stemmed from factors that were not assessed, such as interest-based learning when Ms. Wheeler allowed students to freely choose the centers that interested them. Differentiation also occurred without the direct influence of specific assessment data as a result of praxial group work, creative group work, and self-challenge activities. Emergent themes included a number of shared organizational and personal factors that seemed to facilitate participants' practice of assessment and differentiation.
Each participant was primarily a resident teacher with her own room in a K-4 or K-5 building. This increased the ease with which assessment materials and records could be assembled and stored. It also facilitated differentiation, as teachers could get to know individual students as musicians and people over the course of five or six years. Furthermore, being resident in one building allowed conversations among teachers regarding students' needs. Participants in this study also had considerable independence to make teaching decisions. On a personal level, the participants shared a teaching automaticity that resulted from comfort with a personal teaching style, excellent musicianship, and mastery of curriculum, content, and routines. Participants motivated themselves to assess and differentiate despite, or perhaps because of, a lack of guidance and support. The degree of directness of instruction was the primary emergent factor that seemed to directly affect participants' practice of assessment and differentiated instruction. I proposed a rhetorical continuum from direct instruction to facilitation. Based on analysis of the practices of participants in this study, it seemed that a more teacher-directed approach facilitated assessment but complicated some types of differentiation, whereas teaching on the other end of the continuum was likely to result in highly differentiated instruction that was nearly impossible to assess. A more middle-ground approach seemed to offer the strengths of both.
Chapter Eight: Conclusions and Implications
In this study, I investigated assessment practices and differentiation of instruction in elementary general music settings. I wanted to find out more about how teachers discerned individual students' musical skills and abilities and how they then used that information to individualize instruction both in terms of planning and also "in the moment." My initial guiding research questions were: 1) When and how did the participants assess musical skills and behaviors? 2) How did participants score or keep track of what students knew and could do in music? and 3) What was the impact of assessment on differentiation of instruction? Three elementary general music teachers allowed me to observe their typical teaching practices. I observed Danielle Wheeler each time she taught a kindergarten and a fourth grade for seven weeks. Over the course of four weeks, I watched Carrie Davis each time she taught three classes: a third grade, a fourth grade, and a self-contained class of students with cognitive impairments. Finally, I saw Hailey Stevens each time she taught a first grade and a third grade for seven weeks. Data collection consisted of field notes, videotapes and video review forms, interviews, teacher journals, and think-alouds. Using the constant comparative method of data analysis, I wrote case studies that described each teacher's practices of assessment and differentiation with regard to my guiding research questions as well as themes that emerged from data analysis (Chapters 4, 5, and 6). Chapter 7 consisted of a cross-case analysis, in which I sought overarching themes related to my guiding questions. I also looked for themes that emerged from analysis of all three cases. All participants used a variety of assessment methods, including rating scales, checklists, report cards, observation, and aptitude testing. Two participants included self-assessments, and one compiled all written work into a portfolio for each student.
Although each teacher occasionally assessed specifically for report card grades, most assessment was consistent and ongoing throughout the school year, and its primary purpose was to inform instruction. Participants reported that the number of students they taught, lack of time and support, and preparation for performances were the major hindrances to assessment. They disagreed about the role of large-group performance as an assessment activity. Although some assessments were directly applied to differentiate instruction in a linear or spiraling fashion, assessment practices and differentiation of instruction were typically interwoven in a complex relationship that varied among participants. Group work—including praxial group work, creative group work, and centers-based instruction—was one way that teachers differentiated instruction and also assessed the music learning of individual students. Utilizing a variety of presentation styles and offering a range of musical activities provided differentiation in whole-group instruction, as did individual responses to open-ended high-challenge and self-challenge activities. Furthermore, each participant was expected to differentiate music instruction for students with a variety of special needs. In this final chapter, I will discuss implications for practice based on the results of this study, make suggestions for future research, and conclude with a proposal of a middle-ground approach to elementary general music education.
Implications for Practice
[Music educators] should… challenge our children within their lessons and class sessions, and in their individual practices at home. Do we? Or do we expect too little of them, lowering our standards and reducing the degree of their accomplishments? Even worse, do we sometimes teach them what they already know? For example, it is common knowledge that most first graders understand the concept of soft-loud (and have since the age of three), yet some teachers "teach" it and then teach it again. If we teach children what they already know, or if we expect less from them than what they can do, we may well miss our chance to seize the energy and momentum toward their becoming more fully musically thinking and feeling beings. As we strive to know our students, their strengths, their capabilities, their dreams, and goals, we can be there for them—even those independent, self-motivated children—as references, troubleshooters, and guides. We can also occasionally push the envelope, offering them greater skill development, so as not to lose the best and the brightest from our programs. We can vary the complexity of what we teach: some may be hungry for a quicker pace and a greater challenge (Campbell, 2010, p. 260, emphasis added).
In this excerpt, Campbell described the need for elementary general music teachers to know individual students' abilities and interests in order to capitalize on the short time available in music class. Results from the current study support the notion that music teachers face a variety of challenges in knowing their students, including teaching large groups of children, infrequently, for too short an amount of time. However, the results of this study also refute the notion that differentiated instruction is impossible in elementary general music. Campbell noted, "We can grieve and gripe about the minimal music time, but with our best foot forward, we may be better off taking steps to determine how better to use the allotted time we have" (2010, p. 271).
In that spirit, this study revealed several implications for the practice of assessment and differentiated instruction in the elementary general music room. Implications for the practice of assessment. Elementary music teachers have a variety of assessment tools at their disposal, which can be naturally interwoven in the process of teaching and learning. Participants in this study demonstrated that it is possible to assess music learning on an ongoing basis. They agreed that meaningful assessment of music learning must be of individual student responses, although each teacher still used observations and "checking the group" to informally monitor her instruction. Teacher-designed rating scales were the most successful and expedient method to assess individual musicking skills such as singing, chanting, moving, and playing instruments. Each teacher also used aptitude tests, written assessments, and report cards to assess music learning. In addition, creative projects (compositions, improvisation) offered insights into students' musical cognition. When done well, assessment was embedded as a consistent, organic thread in music teaching and learning. Whole-group singing, chanting, moving, playing instruments, and improvisation were structured to offer opportunities for brief individual responses (often in the guise of a "game"), and teachers quickly rated individual responses using rating scales. In this way, participants in the current study consistently gathered data on the musical progress of individual students while students nevertheless engaged in musicking for nearly all of each music class. Aptitude testing. Practicing music teachers may consider adopting aptitude testing into their assessment repertoire and applying the results to differentiate instruction. Participants in the current study used music aptitude testing once or twice a year as a diagnostic tool to help students learn. Students with low aptitude who were low achieving could be identified and given additional scaffolding. Those with low aptitude who were still achieving would be challenged accordingly. Students who were low achieving but identified as having high aptitude could be given challenges, leadership opportunities, or a "kick in the pants" to increase their achievement to more accurately reflect that high aptitude. Furthermore, research indicates that music aptitude is developmental (i.e., it can be increased through instruction and/or an enriching environment up until about age 9; Gordon, 2007), so ongoing measurement of students' music aptitudes can also reveal increases in aptitude as a result of instruction. Role of performances in assessment. Music teachers may need to evaluate the role of large-scale performances (programs) in their curriculum and the impact of performance preparation on music teaching and learning. Participants in the current study disagreed about the role of large-group performances as a form of assessment. Ms. Wheeler characterized performances as the equivalent of the MEAP, a state-mandated yearly achievement test in math and reading, because the performances occurred once a year. However, tests such as the MEAP result in standardized achievement data for individual students, whereas group performances do not. All of the participants in the current study stressed the importance of individual response for meaningful assessment, and this study also revealed the importance of record-keeping to build a holistic picture of each student's musicking.
A concert with nearly 100 children per grade level performing at the same time does not seem to meet those criteria. In many districts, large-scale performances are expected and/or required, and it was outside the scope of the current study to examine their overall value. However, this study does indicate that group performances should not be viewed as assessment tools for tracking individual music learning. This study supports extant literature indicating that performance preparation as currently practiced interferes with typical music learning in elementary general music. To ameliorate this problem, teachers might incorporate informances, as Ms. Stevens suggested, or look for other ways that performance preparation (and/or the performances themselves) could be modified to reflect and augment rather than derail learning. Logistical considerations. Practicing teachers should establish reliable methods to track individual students' data over time. The participants' practice of a variety of ongoing embedded assessments resulted in a more comprehensive picture of each student's performance upon which to base instructional decisions. From a logistical standpoint, this required synthesis of a great deal of information, so teachers used grade books, Palm Pilots, spreadsheet programs, and other methods to track data. Participant teachers agreed that they could not accurately recall how all children performed on a given task when they did not record some form of data in the moment. If the data are inaccurate or inaccessible, they are also useless for their primary purpose: to inform instruction. Furthermore, teachers must synthesize data to create a holistic portrait of performance so they are able to recall and apply information about students' abilities and needs as they teach. Despite the difficulties of rating individual performances on multiple tasks and then recording and tracking all that data, participants demonstrated that it is possible to gather a variety of data on individual students' abilities and nevertheless spend the bulk of music class time engaged in active musicking. Summary of implications for the practice of assessment. While acknowledging the challenges elementary general music teachers face, the current study indicated that teachers are able to track individual music learning progress for each of their students. Although the results of this study are not generalizable due to its qualitative nature, practicing teachers are encouraged to explore ways to naturally and consistently weave assessments of individual musicking behaviors, including singing, chanting, moving, playing instruments, improvising, and composing, into their teaching. Teacher-designed rating scales may be an efficient way to do this, although some written assessments such as rubrics, aptitude testing, quizzes, compositions, and self-assessments could also contribute to a well-rounded picture of achievement. Praxial preparation of existing music and creative projects, such as compositions and improvisation, offer rich, authentic opportunities to assess individual music learning. Preparing whole-class, grade-level, and whole-school performances did not lead to data regarding individual student progress, and was seen as distracting from normal music learning. Teachers may wish to evaluate the impact of performances on music teaching and learning and to reconsider their use as an assessment.
Thoughtful integration of ongoing assessment activities will lead to a well-rounded picture of each student's music achievement and aptitude, and allow music teachers to differentiate music instruction to meet individual music learning needs. Implications for differentiated instruction. In this study, participant teachers used well-documented assessment methods and encountered challenges similar to those reported in the literature, although it seems they assessed more frequently than the literature indicated was typical, and their use of aptitude testing was unusual. Little research has investigated how assessment data are applied to individualize instruction or described differentiated instruction in the elementary general music classroom. The current study resulted in several implications for elementary general music teachers' practice of differentiated instruction. Whole-group differentiation. Teachers can differentiate whole-group instruction both by varying activities over time and by providing opportunities for individual musicking within the context of whole-group instruction. Planning activities that provide a variety of ways to interact with music (i.e., singing, moving, playing instruments, listening, improvising, composing) is one way to reach a variety of learners. Teachers can also vary the presentational mode (aural, visual, kinesthetic), perhaps by using technology, and/or integrate different musics (popular, "school," folk, etc.). Allowing individual response within the context of whole-group instruction can build in differentiation of music instruction based on music aptitude and ability. Modes of individual response included musicking independently alongside other students in chorus as well as solo responses. Furthermore, opportunities for solo response can be varied (1) by the teacher according to an individual's previously demonstrated achievement, (2) to present a high level of challenge, or (3) to be open-ended, allowing each student to challenge himself. Within whole-group instruction, these opportunities could be designed specifically to demonstrate certain levels of achievement from different students, or they could allow students to choose their own level of challenge. Groupings-based differentiation. Teachers can also differentiate instruction by using various forms of group work and a variety of grouping strategies. Teachers could use free or structured centers-based instruction, praxial group work (in which students prepare a performance of an existing piece of music), or creative group work (in which students compose, choreograph, improvise, etc.). Varying grouping practices within each of these group work models could further facilitate differentiation. For example, occasional homogeneous groupings by ability in praxial group work would allow teachers to challenge high-achieving students with new or advanced material and would also permit teachers to work intensively with low-performing students. Assigned heterogeneous groupings could facilitate peer instruction during creative group work, such as a composition project, while student-chosen cooperative learning groups might mitigate social anxiety as students choreograph a dance that demonstrates selected musical features of a piece. The potential of various group work models and grouping strategies to increase individual music learning suggests a number of implementation models from which teachers could select based on their needs. Differentiation for students with special needs.
Participants in this study implemented a variety of practices to differentiate instruction for students with special needs. When students were mainstreamed, helpful strategies included use of the assessments of other teachers (IEPs), use of peer support, modification/adaptation of written work, and separating musicality from other abilities. Moreover, recognizing that significant modifications of curriculum should be discussed with parents and special educators, the results of this study indicated that students who were mainstreamed in music for primarily social reasons could still progress musically and participate meaningfully in music class with the help of thoughtful adaptations and modifications. Music teachers should consider ways that socially mainstreamed students could musick alongside their peers at their own level. With regard to self-contained classes of students with more severe developmental delays, ASD, and/or CI, results from the current study supported findings in the literature that a Music Learning Theory-based early childhood approach may be appropriate to nurture individual musical development. All three participants noticed students with LD and ESL who may have struggled academically but were in the normal range of musical ability expected for students their age. Based on personal experience as well as IEPs, various participants identified use of verbal or written instructions, pencil/paper assignments and assessments, and/or notation as particularly problematic for students with LD or ESL. Because participants reported that the musicality of students with ESL and LD seemed unrelated to their label, it seems logical to assert that limiting the use of notation, pencil/paper tasks, and verbal "talking about music" may reduce the need for further modifications based on these particular special needs. Music teachers could implement teaching methods such as modeling/demonstration and design aural/oral assessments for students with these labels. Gardner (1993) proposed that musicality constitutes its own way of thinking, a separate intelligence from other modes of cognition such as interpersonal, verbal/linguistic, or logical/mathematical. Although the literature indicated that students with moderate to severe special needs might have corresponding deficits in music aptitude, it also indicated that these deficits were not present for all disorders, and that within specific disability populations these deficits could vary. Participants in the current study indicated that giftedness and milder forms of disability did not seem necessarily related to musical abilities and that students with more profound disabilities nevertheless sometimes demonstrated surprising musicking abilities. Therefore, music teachers should find ways to foster individual musicking for all students so that musical intelligence can be separated from other deficits or gifts and nurtured. Implications at the secondary level. Although this study took place at the elementary level (and is not generalizable due to its qualitative nature), applicable findings may be adapted to other settings. Assessment strategies suggested by the current study—including aptitude testing, use of rating scales, self-assessments, and creative projects—are all possible at the secondary level. The methods participants in this study used to elicit individual responses and to track the assessment data they accumulated may be of particular interest to secondary instructors.
Use of centers, praxial and creative group work, and high-challenge and self-challenge activities could be adapted to suit the learning needs of older students. Furthermore, using a variety of grouping strategies to differentiate instruction might be especially beneficial and appropriate with adolescent learners, who are highly motivated by peer interaction. Summary of implications for practice. Music teachers face a number of challenges as they seek to know each of their students as individual people and musicians. Elementary general music teachers must be prepared to individualize instruction for "typical" students, whose musical skills and abilities can be widely divergent, as well as teach students with a variety of special needs. Assessments of individual musicking can be integrated into music instruction on an ongoing basis in such a way that they do not significantly interfere with students' immersion in musicking. Use of a variety of assessment strategies to track a number of musicking skills over time can result in a well-rounded picture of each student's musicianship that can then be used to differentiate instruction. Differentiation of instruction in elementary general music settings can be accomplished by consistently varying the musical materials, presentation modes, and ways of interacting with music in whole-class instruction. Furthermore, opportunities for individuals to musick independently alongside one another and respond alone can be integrated into whole-class instruction at a variety of levels of difficulty and self-challenge. Differentiation could also be facilitated through use of various grouping strategies within centers-based instruction, praxial group work, and creative group work.
Suggestions for Future Research
The results of the current study suggest a number of possible topics for future qualitative and quantitative studies. This study indicated that curriculum, assessment, differentiation, and planning are interwoven in an intricate web of reciprocal, linear, and spiral relationships. Fleshing out a more precise description of the nature of this complex interaction would be an interesting topic for future research. Perhaps because of the interplay of instructional components, questions arising from the current study encompass not only issues related to assessment and differentiation, but also curriculum and instructional philosophy. Assessment practices. The current study described the assessment practices of three teachers and situated their practice within the literature, which included several broad surveys as well as studies of individual assessment methods. The results indicated that teacher-designed rating scales were an efficient way to evaluate individual student performances. How comfortable are practicing teachers with designing and using such scales? Do these scales reliably measure musical performance rather than behavior or other "halo" effects? How are teacher preparation programs addressing assessment topics, such as what should be assessed or how to design assessments so they are embedded in musicking? How often are teachers providing chances for individual musical responses, and do they have sufficient methods to elicit such responses to show a variety of musicking behaviors at a number of levels of difficulty and sophistication? Performances. Further research is needed regarding the role and impact of formal performances on the music learning of students in public school elementary general music classes.
Participants in the current study were troubled by the time that preparing a polished large-group performance took away from their normal instructional activities. Future studies could explore a number of facets regarding the preparation of musical performances as a part of elementary music classes, including: What is the role and value of large-group performance in an elementary general music curriculum? What do these performances contribute to individual music learning? Are they (or could they be) an effective assessment technique? Are there ways to modify or adapt the nature or practice of these performances to balance community expectations with individual music learning needs? Inquiries designed to answer these questions could shed light on the widespread but little-studied practice of producing large-scale performances as part of elementary general music curricula. Differentiation practices. Teachers in the current study used aptitude testing as a way to differentiate instruction. Froseth (1971) found that teaching with aptitudes in mind may increase achievement for elementary band students at all levels of aptitude, but little other research has explored this. Does knowledge of students' aptitudes lead to increased differentiation of instruction in elementary general music settings? Does this kind of differentiation result in higher levels of achievement, and if so, for which students? How does the use of high-challenge and self-challenge activities affect the achievement levels of students at differing levels of aptitude? Grouping practices. Participants in the current study usually allowed students to choose their own groups when they assigned group work. Other research regarding group work in music education did not explore grouping practices, but instead described compositional processes, social dynamics, or the products of group work, such as written work or performances. Research from outside music education indicated a variety of possible grouping practices. How could teachers use a variety of grouping strategies (assigned, student-chosen, heterogeneous or homogeneous by musical ability or aptitude, etc.)? What are the effects of each grouping strategy on individual music achievement? What are the effects of using a variety of grouping strategies over time on individual music achievement? Group work. In addition to raising questions regarding grouping practices, results from the current study encourage further research into group work in general. For example, how (and how often) are elementary general music teachers currently implementing centers-based instruction, and what are they teaching when they do so? Does centers-based instruction increase individual music achievement (as a stand-alone question or in comparison to other methods such as whole-group instruction)? How and how often are music teachers using praxial group work or creative group work, and what are they teaching when they do? How do they rate the resultant performances or products, choose the groups, and determine how individuals are faring within the group? Learning sequence activities (LSAs). Ms. Stevens used LSAs at the beginning of every class for about five minutes. Not only did they allow individual responses, provide assessment data, and differentiate instruction, but they also seemed to signal to students that music class had begun and to reinforce Hailey's views regarding the purpose of music class. How many teachers use LSAs? Are they typically implemented in the playful, fun, safe way I noted in Ms.
Stevens' practice? What is the effect of the addition of LSAs on music achievement, even if other teaching elements remain the same? Students with special needs. Participants in the current study taught students with a variety of special needs in mainstreamed and self-contained settings. Although some research has explored this topic (Hourigan, 2007; Linsenmeier, 2004; Salvador, 2010), further research is needed regarding how to better prepare music teachers to differentiate instruction for students with special needs. Few studies have examined music learning and instruction for students with special needs. What are the specific benefits or possible drawbacks of implementing an MLT-inspired early childhood approach for self-contained classes of students with special needs in public school music settings? Are there modifications that should be made to this approach, and do they vary based on disability grouping (e.g., would students with ASD benefit from a different approach than those with CI)? What are the effects on music learning for students with average music aptitude and LD or ESL when verbal, written, and notational materials are kept to a minimum? Can (and should) music class be taught without relying on verbal, written, or notated information? Might this result in more "musicking" for the class in general (Campbell, 2010)? Philosophy/Teacher beliefs. Even among three participants teaching in suburban schools within 150 miles of one another, there was considerable variation in instructional philosophy as well as beliefs regarding the purpose of public school music education, how children learn music, and other topics. These participants were chosen because they valued assessment in music education, but two of them mentioned regularly occurring disagreements with other elementary music teachers in their districts about this topic. Furthermore, even among the three participants, varying philosophies led to different approaches to classroom structure. How cognizant are music teachers of their philosophies, and how intentional are they in terms of how these philosophies play out in their teaching? Does their instructional style match their stated philosophy? Do teachers think about their views of the nature of music learning and the purpose of music education and then plan lessons based on these views, or do they simply teach the way they were taught to teach? If their instructional decisions are rooted in personal philosophical ideas about the nature of music learning and the purpose of public school music education, are these philosophies/beliefs learned in teacher preparation programs, or were they already formed before students began their undergraduate study? Applications to other music learning settings. The findings of this study indicate that it is possible to create well-rounded pictures of student achievement and then apply this information to individualize instruction in the elementary general music setting. What is the current state of assessment and differentiation practices in other music learning settings, such as secondary ensembles and secondary general music? Teachers in these settings face similar challenges in terms of the high numbers of students they teach and the wide variety of ability and aptitude levels they are likely to encounter. How do secondary music teachers assess music learning and apply the results of those assessments to individualize music instruction?
Are any of the strategies for differentiation identified in this document (such as different types of group work, high-challenge activities, self-challenge activities, and so on) transferable to secondary settings? What is the impact of their use on student learning?
Conclusion
School music programs are typically geared toward instruction en masse… Even as individualized and small-group instruction is common to math and language arts classes, there is a tendency for children to be musically educated at school in traditional ensembles and in their large-class group. While mass instruction may moderately benefit children, individual and small-group projects are important means of developing children's musical knowledge and skills (Campbell, 2010, pp. 270-271).
Given the variety of practices observed in the current study, the overall impact of assessment data on differentiated instruction in the elementary general music classroom was difficult to determine. When I framed this study, my questions implied a linear relationship between assessment and differentiation. This vision was shaped by instruction I witnessed in my non-music colleagues' elementary classrooms during my tenure as "the music teacher" and also by instruction I administered as a long-term substitute teacher in third- and fourth-grade elementary classrooms. In my experience, grade-level teachers had access to IQ scores and/or math and reading aptitude test scores for each of their students. Teachers administered ongoing assessments regarding classroom activities as well as standardized achievement tests in math and reading. Based on this assessment information, teachers could ascertain a student's current achievement levels, ensure that they were commensurate with his aptitude and/or IQ, and structure assignments to help him proceed. This model seems to assume that learning in math and reading is sequential, and also to imply substantial agreement among teachers, publishers (of tests and educational materials), and other educational leaders regarding not only the sequential nature of learning but also the sequence itself. However, music educators do not agree on a model for musical development, nor do they agree that music learning is sequential (although models for musical development and music learning sequences have been proposed, evaluated, and substantiated, e.g., Gordon, 2003; Gordon, 2007). This large-scale discussion is outside the scope of the current study. What is important to the current study is that, over the course of more than a year of work on this project, I have determined that the guiding questions of this study were based on a model that assumed a direct, unidirectional relationship of assessment and differentiation: that data gathered from assessments of individual students' abilities would then be applied to differentiate instruction for each student, as I had observed and experienced in grade-level classrooms. Even among three teachers who valued assessment in their elementary general music classrooms, however, the relationship between assessment and differentiation was not so direct. Differentiation stemmed not only directly and indirectly from assessments, but also resulted from other information, such as the music teacher's relationship to an individual student over the course of years. Instructional strategies such as group work provided differentiated instruction as students interacted with one another, with the teacher, and with music.
Centers provided opportunities for students to explore areas of musical interest or interact with specific music learning goals in a variety of modalities. Disparate classroom organizational features along a continuum from direct instruction to teacher facilitation contributed to differentiated instruction in different ways. In one classroom, highly structured routines and assigned seats fostered participation for students with special needs and encouraged students to help one another, while in another classroom, student-led classroom management and conflict resolution led to the same behaviors. Most unexpected based on my original questions was the possibility that strategies used to differentiate instruction would illuminate information about students' musical abilities (the precise opposite of the relationship I had imagined). I agree with those who state that much of what students gain from immersion in musicking is immeasurable and invaluable (e.g., Campbell, 2010). However, if we as public school music teachers argue for universal music instruction, will we then need to cite some measurable benchmarks or expressive objectives (Eisner, 2005) toward which students would strive? Could rejection of the viability of assessment also suggest that music is irrelevant as a subject to be included in a public school curriculum? Are the intangible benefits of musicking unintelligible to those who would gauge the importance of what music is and can do for students? The current study indicates that students' skills and abilities on a variety of musical materials and tasks can be measured and tracked with little disruption of their immersion in musicking. This finding supports the possibility that music teachers could balance measurement of individual musical progress with immersion in musicking. Furthermore, information gathered from ongoing assessment of individual musicking abilities could be used to individualize instruction, increasing not only mean achievement levels but also the diversity of demonstrated abilities. A balanced approach (see Figure 8.1) could weave together nearly constant musicking with consistent, ongoing assessments of individual musical skill development alongside regular, brief periods of whole-class instruction and a variety of group work activities in which students explore and create. Whole-class instruction differentiated by open-ended high-challenge and self-challenge activities could facilitate sequential progress on musical skills and provide frequent opportunities for assessable individual responses. These periods of differentiated whole-group instruction could provide skills and readiness for creative and praxial group work and individual musicking projects as well as data to inform grouping practices. Praxial and creative projects undertaken in various groupings, assigned and student-chosen, would support differentiation by interactional style, preferences and interests, ability, and so on. A variety of centers-based instruction, ranging from free choice of centers with ad hoc groups to student-chosen or assigned groups rotating through specific centers, could also facilitate assessment and differentiation by allowing the teacher to instruct or assess small groups of students while others are learning at centers. Implementation of this balanced approach may seem daunting, but it could be gradually phased into a teacher's normal practices.
A teacher could design and implement music learning centers one month, try a creative group composition activity another month, and add a few assessment games with individual response to her normal classroom activities. Over the course of several years, the opportunities to learn about students' individual skills and abilities would be built in and become automatic (as Ms. Wheeler experienced).
Figure 8.1. Metaphor for a balanced approach to elementary music instruction.
Despite the challenges that elementary general music teachers face, the benefits of this balanced model for encouraging individual students' music learning may be well worth the effort. Based on the current study, it seems that relying only on whole-group instruction, however efficient, may not be the most effective way to reach individual learners. Some discussion of philosophy seemed unavoidable in individual chapters, because each teacher's beliefs about the purpose of public school music, how children learn music, and the nature of musical ability directly influenced her practices of assessment and differentiation. I limited this discussion to teaching behaviors that resulted from different philosophies by proposing a continuum of classroom structure, with teacher-led, whole-group instruction at one extreme and teacher-facilitated group and independent work at the other. At one end of the continuum, a lack of defined goals for any learner made meaningful assessment difficult. However, at the other end of the spectrum, a teacher might assess only material she has directly taught and therefore remain unaware of students' abilities and interests outside of this narrow scope. To varying degrees, I watched all three participants struggle to reconcile the sequential nature of school music curricula with their post-modern views of who children are and how they learn. Indeed, both blind reliance on facilitation of learning and rigid insistence on direct instruction could lead to difficulties in differentiation and assessment. Regardless of the philosophical leanings of individual teachers, increased attention to assessment-based differentiation of instruction could ensure that individual students progress in music learning. As Jang, Reeve, and Deci (2010) suggest, a structured approach to learning does not have to be opposed to teacher facilitation. Structure and facilitation could complement one another rather than being viewed as antagonistic, and both may need to be present for optimal learning. Imagine an approach to elementary general music education in which brief periods of teacher-directed whole-group instruction are interspersed among times for group, individual, and whole-class musicking—sometimes exploratory, and other times with specific learning goals. In this middle-ground model, the teacher designs opportunities for cooperative learning, differentiated instruction, and free exploration for students to engage in alone, with others of like ability, and in friendship-based mixed-ability groups. Groupings are varied for different projects, not only in terms of homogeneity or heterogeneity of musical abilities but also in terms of interests, learning style, and expressive styles. Thus, groupings are sometimes teacher-assigned and sometimes student-chosen. The teacher allows students input and control and often functions as a facilitator, but also plans times of teacher-directed learning based not only on the interests of students but also on her assessments of students' music learning needs.
Within this approach, consistent, ongoing assessments of each student's musical skills and abilities function both as yardsticks for musical achievement and as a springboard to new music learning. The assessments would inform instruction, even as differentiation might inform assessment practices by illuminating different levels of ability, learning styles, interests, expressive styles, and musical ideas. Applying this model could help elementary general music teachers work toward Eisner's (2005) lofty goal of instruction that increases the variability of student achievement while simultaneously raising the mean performance level.
Appendices
APPENDIX A: Video-tape Analysis Summary Form
Date of Class: Grade Level: Today's Date: Instructor:
1. What assessment activities were used in this class?
2. When and how were the music learning needs of individual students or groups of students addressed?
3. Pick out the most salient interactions on the video. Number them in order on the sheet, and note the time in the video. Assign a theme to each interaction in CAPITALS. Invent new themes where none exist and indicate with asterisks ***.
Time | Salient Interaction | Theme(s)
4. What else was interesting or unexpected in this video?
APPENDIX B: Initial Interview
This semi-structured interview will be guided by the following questions, and supplemented by additional questions for follow-up or clarification.
(1) How many students do you teach each week, and how often do you see them?
(2) What are the main populations you serve? (Ethnic, socioeconomic, other)
(3) Are you required to grade students? How often, and in what format? What other expectations affect your instruction (i.e., performance expectations)?
(4) How are students with special needs accommodated in music? (i.e., are they seen as a self-contained group, mainstreamed, or both? What kinds of special needs are represented in the classes I will observe?) How do you individualize instruction for these and other students?
(5) What kinds of formal testing have you already done this year for the classes I will see? How about informal assessment?
(6) What is the purpose of assessment in your classroom?
(7) What music learning goals will you be working on with the classes I am observing over the next six weeks?
(8) Do you have any questions about this study?
APPENDIX C: Exit Interview
I will also ask additional follow-up and clarification questions, and ask questions specific to individual participants.
1) What is the most important factor in a music teacher's ability to meaningfully assess the music learning of her students?
a) What conditions must she establish in the classroom?
b) Are there certain personal qualities that are necessary?
c) What kind of training might be needed at the undergraduate level?
2) Is it possible for a music teacher to differentiate instruction based on assessment with all the challenges that we face?
a) What types/modes of assessment (i.e., self-assessment, performance assessment, pencil and paper) seem most helpful in differentiating instruction?
3) What would you like to see your replacement do in terms of assessment practices? How about in terms of differentiating instruction?
4) What advice would you give to a first-year music teacher regarding assessment? What would you say to her about differentiation of instruction?
5) Is there anything that you would like to add? (While you were participating, or in the time since then, are there thoughts you have had about my project and its focus and purpose?)
302 References 303 References Adamek, M. S., & Darrow, A. A. (2005). Music in special education. Silver Spring, Maryland: The American Music Therapy Association. Adams, C. M., & Pierce, R. L. (2006). Differentiating instruction: A practical guide to tiered lessons in the elementary grades. Waco, TX: Prufrock Press. Allsup, R. E. (2003). Mutual learning and democratic action in instrumental music education. Journal of Research in Music Education, 51(1), 24-37. Angrosino, M. V., & Mays de Perez, K. A. (2000). Rethinking observation: From method to context. In Denzin, N. K. & Lincoln, Y. S., Eds. Handbook of qualitative research, second edition. Thousand Oaks, CA: Sage Publications, Inc. Arostegui, J. L. (2003). On the nature of knowledge: What we want and what we get with measurement in music education. International Journal of Music Education, 40, 100-115. Atterbury, B. W. (1990). Mainstreaming exceptional learners in music. New York: Prentice Hall College Division. Barrett, M. (1997). Invented notations: A view of young children’s musical thinking. Research Studies in Music Education, 8, 2-14. Beegle, A. C. (2010). A classroom-based study of small-group planned improvisation with fifth-grade children. Journal of Research in Music Education, 58 (3), 219-231. Bernard, B. I. (2005). The application of multiple intelligences theory in the elementary music classroom: More than just music. Unpublished master’s thesis, University of Prince Edward Island. AAT MR10357. Boardman, E. (1988a). The generative theory of musical learning, Part I: Introduction. General Music Today. Boardman, E. (1988b). The generative theory of musical learning, Part I. General Music Today. Boardman, E. (1988c). The generative theory of musical learning, Part II. General Music Today. Boardman, E. (1988d). The generative theory of musical learning, Part III. General Music Today. Boston, C. (2003). The concept of formative assessment. UTS Newsletter: The 304 University of Manitoba, 11(3), 1-3. As cited in Hepworth-Osiowy, K. (2004). Assessment in elementary music education: Perspectives and practices of teachers in Winnipeg public schools. Unpublished masters thesis: University of Manitoba, Canada. Bouton, K. (2001). What does a grade of S, N, and U mean to parents? In Spotlight on assessment in music education. (pp. 5-6). Reston, VA: MENC: The National Association for Music Education. Boyle, J. D. (1996). The national standards: Some implications for assessment. In: Aiming for excellence: the impact of the standards movement on music education. (pp. 109-116). Reston, VA: Music Educators National Conference. Boyle, J. D., & Radocy, R. E. (1987). Measurement and evaluation of musical experience. New York: Schirmer Books. Brophy, T. S. (1997). Authentic assessment of vocal pitch accuracy in first through third grade children. Contributions to Music Education, 24(1), 57-70. Brophy, T. S. (2000). Assessing the developing child musician: A guide for general music teachers. Chicago: GIA Publications. Brophy, T., S. Ed. (2008). Assessment in music education: Integrating curriculum, theory, and practice. Proceedings of the 2007 Florida Symposium on Assessment in Music Education, University of Florida. Chicago: GIA Publications. Brophy, T. S. (Ed). (2010). The practice of assessment in music education: Frameworks, models and designs. Chicago: GIA Publications. Brummett, V. M . (1993). The development, application, and critique of an interactive student evaluation framework for elementary general music. 
Unpublished doctoral dissertation, University of Illinois at Urbana. AAT 9314847. Brummett, V.M., & Haywood, J. (1997). Authentic assessment in school music. General Music Today, 11(1), 4-10. Burbridge, A. A. (2001). Assessment: pencil, paper… & performance, too! In Spotlight on assessment in music education. (pp. 7-9). Reston, VA: MENC: The National Association for Music Education. Campbell, P. S. & Scott-Kassner, C. (1995). Music in Childhood. New York: Schirmer Books. Campbell, P. S. (2010). Songs in their heads: Music and its meaning in children’s lives. Second Edition. New York: Oxford University Press. Chen, C-D. (2000). Constructivism in general music education: A music teacher’s lived experience. Unpublished doctoral dissertation: University of Illinois, Urbana. 305 Christensen, C. B. (1992). Music composition, invented notation, and reflection: Tools for music learning and assessment. Unpublished doctoral dissertation, Rutgers: The State University of New Jersey. AAT 9231370. Colwell, R. (2010). Many voices, one goal: Practices of large-scale music assessment. In Brophy, T. S. (Ed). The practice of assessment in music education: Frameworks, models and designs. Chicago: GIA Publications. pp. 3-22. Colwell, R. (2008). Music assessment in an increasingly politicized, accountabilitydriven educational environment. In Brophy, T., Ed. (2008). Assessment in music education: Integrating curriculum, theory, and practice. Proceedings of the 2007 Florida Symposium on Assessment in Music Education, University of Florida. Chicago: GIA Publications. Colwell, R. (2002). Assessment’s potential in music education. In Colwell, R., & Richardson, C. Eds. (2002). New handbook of research on music teaching and learning. New York: Schirmer Books. Colwell, R. (1996). Why we shouldn’t change the standards. In: Aiming for excellence: The impact of the standards movement on music education. (pp. 117-124.) Reston, VA: Music Educators National Conference. Colwell, R., Ed. (1992). Handbook of research on music teaching and learning: A project of the Music Educators National Conference. New York: Schirmer Books. Colwell, R., & Richardson, C. Eds. (2002). New handbook of research on music teaching and learning. New York: Schirmer Books. Colwell, R. & Barlow, G., Eds. (1986). 1986-Tests in Print. Tests and Measurements Newsletter-MENC, 1(1). Downloaded from http://assessment.webhop.org/ on November 12, 2009. Cornacchio, R. A. (2008). Effect of cooperative learning on music composition, interactions, and acceptance in elementary school music classrooms. Unpublished dissertation, University of Oregon. Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, California: Sage publications. Cox, S. G. (2008). Differentiated instruction in the elementary classroom. The Education Digest, 73(9), 52-54. DeNardo, G. (2001). An assessment of student learning in the Milwaukee Symphony 306 Orchestra’s ACE partnership: 1991-2000. Bulletin of the Council for Research in Music Education, 148, 37-47. Duling, E., & Cadegan, J. B. (2001). A critical evaluation of “Arts for Understanding,” and integrated music and arts project in a chartered nonpublic school. Contributions to Music Education, 28(1), 81-102. Edmund, D. C., Burcham, R., Birkner, M., & Heffner, C. (2008). Identifying key issues for assessment in music education. In Brophy, T., Ed. (2008). Assessment in music education: Integrating curriculum, theory, and practice. 
Eisner, E. W. (2005). Reimagining schools: The selected works of Elliot W. Eisner. New York: Routledge.
Elementary and Secondary Education Act (2002). Downloaded from http://www2.ed.gov/policy/elsec/leg/esea02/index.html on March 1, 2010.
Elliott, D. J. (1995). Music matters: A new philosophy of music education. New York: Oxford University Press.
Flinders, D. J., & Richardson, C. P. (2002). Contemporary issues in qualitative research and music education. In Colwell, R., & Richardson, C. (Eds.), New handbook of research on music teaching and learning. New York: Schirmer Books.
Freed-Garrod, J. (1999). Assessment in the arts: Elementary-aged students as qualitative assessors of their own and peers' musical compositions. Bulletin of the Council for Research in Music Education, 139, 50-63.
Froseth, J. O. (1971). Using MAP scores in the instruction of beginning students in instrumental music. Journal of Research in Music Education, 19(1), 98-105.
Gardner, H. (1993). Frames of mind: The theory of multiple intelligences. New York: Basic Books.
Gfeller, K. E. (1992). Research regarding students with disabilities. In Colwell, R. (Ed.), Handbook of research on music teaching and learning: A project of the Music Educators National Conference. New York: Schirmer Books.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Publishing Company.
Gordon, E. E. (1986). Primary measures of music audiation and the intermediate measures of music audiation. Chicago: GIA Publications.
Gordon, E. E. (1990). Jump right in: Rhythm register book 1. Chicago: GIA Publications.
Gordon, E. E. (2003). A music learning theory for newborn and young children. Chicago: GIA Publications.
Gordon, E. E. (2007). Learning sequences in music: A contemporary music learning theory. Chicago: GIA Publications.
Gordon, E. E. (2010). The crucial role of music aptitudes in music instruction: Keynote address. In Brophy, T. S. (Ed.), The practice of assessment in music education: Frameworks, models and designs (pp. 211-215). Chicago: GIA Publications.
Green, L. (2008). Music, informal learning and the school: A new classroom pedagogy. Burlington, VT: Ashgate Publishing.
Griffith, C. E. (2008). Examining experiences of teaching music to a child with autism while using a music learning-theory-based intervention during informal music sessions infused with DIR/floortime strategies. Unpublished master's thesis, University of South Carolina.
Gromko, J. E., & Walters, K. (1998). The development of musical pattern perception in school-aged children. Research Studies in Music Education, 12, 24-29.
Groth-Marnat, G. (2009). The handbook of psychological assessment (5th ed.). Hoboken, NJ: Wiley.
Gruber, H. (2007). Musical responses and collateral benefits of a music-learning-theory based intervention for children with autism. Unpublished master's thesis, University of South Carolina, Columbia, SC.
Guerrini, S. C. (2006). The developing singer: Comparing the singing accuracy of elementary students on three selected vocal tasks. Bulletin of the Council for Research in Music Education, 167, 21-31.
Hallam, S., Ireson, J., & Lister, V. (2003). Ability grouping practices in the primary school: A survey. Educational Studies, 29(1), 69-83.
Hamann, K. L. (2001). Assessment tools for the music classroom. In Spotlight on assessment in music education (pp. 23-25). Reston, VA: MENC: The National Association for Music Education.
Hammel, A. M. (2004). Inclusion strategies that work. Music Educators Journal, 90(5), 33-37.
Hammel, A. M. (2001). Special learners in elementary music classrooms: A study of essential teacher competencies. UPDATE: Applications of Research in Music Education, 20(1), 9-14.
Haywood, J. S. (2005). Including individuals with special needs in choirs: Implications for creating inclusive environments. Unpublished doctoral dissertation, University of Toronto. Downloaded from ProQuest (UMI) 6/6/2009.
Hedden, D. G., & Johnson, C. (2008). The effect of teaching experience on time and accuracy of assessing young singers' pitch accuracy. Bulletin of the Council for Research in Music Education, 178, 63-72.
Henry, W. (2002). The effects of pattern instruction, repeated composing opportunities, and musical aptitude on the compositional process and product of fourth-grade students. Contributions to Music Education, 29(1), 9-28.
Hepworth-Osiowy, K. (2004). Assessment in elementary music education: Perspectives and practices of teachers in Winnipeg public schools. Unpublished master's thesis, University of Manitoba, Canada.
Holster, K. (2005). Why assess music? In Spotlight on general music (pp. 120-122). Reston, VA: MENC.
Hooper, J., Wigram, T., Carson, D., & Lindsay, B. (2008). A review of the music and intellectual disability literature (1943-2006): Part two: Experimental writing. Music Therapy Perspectives, 26(2), 80-97.
Hornbach, C., & Taggart, C. (2005). The relationship of developmental tonal aptitude and singing achievement among kindergarten, first-, second-, and third-grade students. Journal of Research in Music Education, 53(4), 322-331.
Hourigan, R. M. (2007). Teaching music to students with special needs: A phenomenological examination of participants in a fieldwork experience.
Howard, L. Y. (2007). How exemplary teachers educate children of poverty, having low school readiness skills, without referrals to special education. Unpublished doctoral dissertation, George Mason University.
Ilari, B. (2002). Invented representations of a song as measures of music cognition. Update: Applications of Research in Music Education, Spring/Summer, 12-15.
Janesick, V. J. (2000). The choreography of qualitative research design: Minuets, improvisations and crystallization. In Denzin, N. K., & Lincoln, Y. S. (Eds.), Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Jang, H., Reeve, J., & Deci, E. L. (2010). Engaging students in learning activities: It is not autonomy support or structure but autonomy support and structure. Journal of Educational Psychology, 102(3), 588-600.
Jordan, J. M. (1989). Music learning theory applied to choral music performing groups. In Walters, D. L., & Taggart, C. C. (Eds.), Readings in music learning theory. Chicago: GIA Publications.
Kelly, S. N. (2001). Using portfolios for performing ensemble. In Spotlight on assessment in music education (pp. 26-28). Reston, VA: MENC: The National Association for Music Education.
Larson, D. D. (2010). The effects of chamber music experience on music performance achievement, motivation, and attitudes among high school band students. Unpublished D.M.A. document, Arizona State University. AAT 3410633.
Lehman, P. R. (2008). Getting down to basics. In Brophy, T. S. (Ed.), Assessment in music education: Integrating curriculum, theory, and practice. Proceedings of the 2007 Florida Symposium on Assessment in Music Education, University of Florida. Chicago: GIA Publications.
Levinowitz, L. M., & Scheetz, J. (1998). The effects of group and individual echoing of rhythm patterns on third-grade students' rhythmic skills. Update: Applications of Research in Music Education, 16(2), 8-11.
Lind, V. (2001). Adapting choral rehearsals for students with learning disabilities. Choral Journal, 41(7), 27-30.
Linn-Cohen, R. B., & Hertzog, N. B. (2007). Unlocking the GATE to differentiation: A qualitative study of two self-contained gifted classes. Journal for the Education of the Gifted, 31(2), 227-259.
Linsenmeier, C. V. (2004). The impact of music teacher training on the rate and level of involvement of special education students in high school band and choir. Unpublished doctoral dissertation, Kent State University. AAT 3159804.
Livingston, J. J. (2000). Assessment practices used in Kodály-based elementary music classrooms. Unpublished master's thesis, Silver Lake College, Manitowoc, WI.
Lopez, C. (2001). Assessing elementary improvisation. In Spotlight on assessment in music education (pp. 32-34). Reston, VA: MENC: The National Association for Music Education.
Lou, Y., Abrami, P. C., Spence, J. C., Poulsen, C., Chambers, B., & d'Apollonia, S. (1996). Within-class grouping: A meta-analysis. Review of Educational Research, 66(1), 423-458.
Masear, C. (1999). The development and field test of a model for evaluating elementary string programs. Unpublished doctoral dissertation, Teachers College, Columbia University, New York City.
McCord, K., & Watts, E. H. (2006). Collaboration and access for our children: Music educators and special educators together. Music Educators Journal, 92(4), 26-33.
MENC: The National Association for Music Education. (2001). Spotlight on assessment in music education. Reston, VA: MENC: The National Association for Music Education.
Miles, M. B., & Huberman, A. M. (1984). Qualitative data analysis. Beverly Hills, CA: Sage.
Miller, B. A. (2004). Designing compositional tasks for elementary music classrooms. Research Studies in Music Education, 22, 59-71.
Miller, M. D., Linn, R. L., & Gronlund, N. E. (2009). Measurement and assessment in teaching. Upper Saddle River, NJ: Pearson.
Monzingo, J. M. (1997). The relationship between vocal pitch-matching and learning disabilities. Unpublished doctoral dissertation, Florida Atlantic University. AAT 1387326. Abstract on ProQuest.
Music Educators National Conference. (1996a). Aiming for excellence: The impact of the standards movement on music education. Reston, VA: Music Educators National Conference.
Music Educators National Conference. (1996b). Performance standards for music: Strategies and benchmarks for assessing progress toward the National Standards, grades pre-K-12. Reston, VA: Music Educators National Conference.
National Standards for Arts Education. (1994). Reston, VA: MENC.
Nelson, S. L. (2007). The complex interplay of composing, developing musicianship and technology: A multiple case study. Unpublished PhD dissertation, University of Colorado at Boulder. AAT 3256395.
Niebur, L. (2001). Incorporating assessment and the National Standards for Music Education into everyday teaching. Lewiston, NY: Edwin Mellen Press.
Nierman, G. E. (2001). Criteria for evaluating performance assessment. In Spotlight on assessment in music education (pp. 52-53). Reston, VA: MENC: The National Association for Music Education.
Paquette, K. R., & Rieg, S. A. (2008). Using music to support the literacy development of young English language learners. Early Childhood Education Journal, 36(3), 227-236.
Peppers, M. R. (2010). An examination of teachers' attitudes toward assessment and their relationship to demographic factors in Michigan elementary general music classrooms. Unpublished master's thesis, Michigan State University.
Pfordresher, P. Q., & Brown, S. (2007). Poor-pitch singing in the absence of "Tone Deafness." Music Perception, 25, 95-115.
Phelps, K. B. (2008). The status of instruction in composition in elementary general music classrooms of MENC members in the state of Maryland. Unpublished doctoral dissertation, University of Maryland, College Park.
Philipak, B. (1997). Recorder karate: A highly motivational method for young players. Brookfield, WI: Plank Road Publishing.
Phillip, F. (2001). Arts education assessment: The journey and the destination. In Spotlight on assessment in music education (pp. 54-59). Reston, VA: MENC: The National Association for Music Education.
Phillips, K. H., & Aitchison, R. E. (1997a). Effects of psychomotor instruction on elementary general music students' singing performance. Journal of Research in Music Education, 45(2), 185-196.
Phillips, K. H., & Aitchison, R. E. (1997b). The relationship of singing accuracy to pitch discrimination and tonal aptitude among third-grade students. Contributions to Music Education, 24(1), 7-22.
Pontiff, E. (2004). Teaching special learners: Ideas from veteran teachers in the music classroom. Teaching Music, 12(3), 52-56.
Ravitch, D. (2010). The death and life of the great American school system: How testing and choice are undermining education. New York: Basic Books.
Roberts, J. L., & Inman, T. F. (2007). Strategies for differentiating instruction: Best practices for the classroom. Waco, TX: Prufrock Press.
Robinson, M. (2005). The theory of tensegrity and school/college collaboration in music education. Arts Education Policy Review, 106(3), 9-20.
Robinson, P. (2002). Educating by numbers: Standards, testing, and accountability in education. Uncommon Knowledge with Peter Robinson (Transcript). Recorded Jan. 9, 2002. Downloaded from http://www.hoover.org/multimedia/uk/3004411.html
Rutkowski, J., & Snell Miller, M. (2003). The effectiveness of frequency of instruction and individual/small group singing activities on first graders' use of singing voice and developmental music aptitude. Bulletin of the Council for Research in Music Education, 30(1), 23-38.
Rutkowski, J. (1996). The effectiveness of individual/small group singing activities on kindergartners' use of singing voice and developmental music aptitude. Journal of Research in Music Education, 44(4), 353-368.
Rutkowski, J. (1994). The longitudinal effectiveness of individual/small group singing activities on children's use of singing voice and developmental music aptitude. Journal of Research in Music Education, 20(1), 31-43.
Rutkowski, J. (1990). The measurement and evaluation of children's singing voice development. The Quarterly: Center for Research in Music Learning and Teaching, 1(1-2), 81-95.
Salvador, K. K. (2010). Who isn't a special learner? A survey of how music teacher education programs prepare future educators to work with exceptional populations. Journal of Music Teacher Education, 20(1), 27-38.
Schoepp, K. (2001). Reasons for using songs in the ESL/EFL classroom. The Internet TESL Journal, 7(2).
Shih, T.-T. (1997). Curriculum alignment of general music in central Texas: An investigation of the relationship between the essential elements, classroom instruction, and student assessment. Unpublished doctoral dissertation, The University of Texas at Austin.
Shuler, S. C. (1996). The effects of the National Standards on assessment (and vice versa). In Aiming for excellence: The impact of the standards movement on music education (pp. 81-108). Reston, VA: MENC.
Snell Miller, M. (2001). Assessment tools for kindergarten and first-grade general music students. In Spotlight on assessment in music education (pp. 37-39). Reston, VA: MENC: The National Association for Music Education.
Stake, R. E. (2000). Case studies. In Denzin, N. K., & Lincoln, Y. S. (Eds.), Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Strand, K. (2006). Survey of Indiana music teachers on using composition in the classroom. Journal of Research in Music Education, 54(2), 154-168.
Strand, K. (2005). Nurturing young composers: Exploring the relationship between instruction and transfer in 9-12 year-old students. Bulletin of the Council for Research in Music Education, 165, 17-36.
Stringer, L. S. (2004). The effects of Music Play instruction on language behaviors of children with developmental disabilities, ages three to six. (Doctoral dissertation, University of Southern Mississippi, 2005). Dissertation Abstracts International, 6510A, 3622.
Swanwick, K. (1998). The perils and possibilities of assessment. Research Studies in Music Education, 10, 1-11.
Taggart, C. C. (2005). Meeting the musical needs of all students in elementary general music. In The development and practical application of music learning theory. Chicago: GIA Publications.
Talley, K. E. (2005). An investigation of the frequency, methods, objectives, and applications of assessment in Michigan elementary general music classrooms. Unpublished master's thesis, Michigan State University.
Tieso, C. (2005). The effects of grouping practices and curricular adjustments on achievement. Journal for the Education of the Gifted, 29(1), 60-89.
Tomlinson, C. A. (2000). Differentiation of instruction in the elementary grades. ERIC Digest. Downloaded February 18, 2010 from http://scholar.google.com/scholar?hl=en&q=eric+digest+tomlinson&btnG=Search&as_sdt=80000000&as_ylo=&as_vis=0
Tomlinson, C. A. (1999). The differentiated classroom: Responding to the needs of all learners. Alexandria, VA: Association for Supervision and Curriculum Development.
Upitis, R. B. (1990). This, too, is music. Portsmouth, NH: Heinemann.
Walsh, B. C. (1995). The effects of an alternative instrumental music program on elementary school children. Unpublished master's thesis, McGill University. AAT MM12099.