SUPPORTING LOW-TRACKED ALGEBRA STUDENTS' WRITTEN ARGUMENTS

By

Jerilynn Lepak

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Mathematics Education--Doctor of Philosophy

2013

ABSTRACT

SUPPORTING LOW-TRACKED ALGEBRA STUDENTS' WRITTEN ARGUMENTS

By Jerilynn Lepak

Students who engage in mathematical arguments in classrooms take part in an important practice involving reasoning and justification. As a mathematical practice, argumentation involves using both conceptual and procedural reasoning to justify a claim. Even though argumentation can occur in discussions, where one asserts and defends a claim orally, or through a written format, the majority of the literature regarding argumentation in school mathematics has considered arguments only as they develop orally through whole-class discussions. This study sought to describe the development of 8th grade students' written arguments and the teaching moves that supported the development of students' convincing arguments within two algebra units. To do so, a case study of teaching arguments was conducted in a low-tracked 8th grade classroom using a curriculum in which students were expected to support their claims in both oral and written work.

Identifying promising teaching moves that supported students' argumentative writing required a scheme for categorizing written work according to its level of persuasiveness. Accordingly, I created a scheme by taking into consideration several relevant sources: Toulmin's model of argumentation (1978), Morselli and Boero's aspects of proof (2009), and Driscoll's (1999) definition of and criteria for arguments. The first has been used extensively when analyzing arguments as they occur in discussions in mathematics classrooms. The second considers three aspects of proof that students must understand in order to successfully write an argument: epistemological, teleological, and communicational. These three aspects aligned with Driscoll's criteria that arguments be tied to the original context, convincing, and inference-free. Combined, the resulting framework offers a tool to categorize students' written arguments and provides an important resource to teachers and researchers interested in promoting students' written arguments.

Once students' written arguments were categorized with this new framework, I considered instructional moves that explicitly promoted students' written arguments. Specifically, I analyzed features of the writing tasks, how students responded to these features, and ways in which writing assignments were introduced to consider what factors may have influenced students' written work. Findings suggest that although students attended to the features of the task, the features did not have a clear impact on the persuasiveness of their arguments. Conversely, as more time was devoted to setting up the writing tasks, students' written arguments became more convincing. The most significant impact on the persuasiveness of students' arguments occurred when students were involved in peer-review activities with the aid of rubrics created by the cooperating teacher. These findings suggest that students can be successful at writing arguments when significant time is devoted to instruction that is specific to writing convincing arguments. This is important because engaging in arguments is a complex task that requires ongoing attention and support from teachers.
Through the consideration of written arguments, this study sought to begin to fill a gap in the literature by describing ways in which one teacher helped students articulate written justifications to claims, an essential part in "creating viable arguments" (CCSSI, 2010).

Dedicated to my boys: Devante, Dominick, and Zion

ACKNOWLEDGEMENTS

Completing this program would likely not have been possible without the support of many people. I especially recognize my husband, Marty, who has encouraged me to 'stick it out' each time I thought I had reached the end of my intellectual rope, and for helping to keep me grounded in the process. He has offered unconditional support, love, and patience throughout my graduate studies. Likewise, I thank all my parents, Diana and Lyle Smith and Gerald and Alice Gordon, for all their encouragement, praise, and humor throughout this adventure, and my sisters, Nikki Gordon and Teresa Recknor, for always being willing to celebrate each milestone.

Guiding me through this final journey of graduate school, I want to specifically thank my dissertation committee: Kristen Bieda (chair), Bob Floden, Glenda Lappan, and Sandra Crespo. In particular, I thank my advisor, Kristen Bieda, for the many hours she has spent face to face or via Skype and email to help me think through and write this study. I am ever grateful for her thoughtful comments, advice, encouragement, and counsel. Also, I want to extend appreciation to Bob Floden. The amount I have learned from working with him over the last four years would fill volumes; knowledge that can only come from the experience of working closely with such an esteemed scholar, dedicated mentor, and quality individual. Thank you for your patience, encouragement, wisdom, and willingness to always read and comment. Sandra Crespo and Glenda Lappan, thank you for all the encouragement and work you have done in helping me reach this point.

I thank all the close friends and colleagues I have developed through this journey. In particular, to my writing group, Jamie Wernet, Funda Gonuates, Alex Theakston Musselman, and Kate Johnson: I cannot express how grateful I am for the time you spent reading the hundreds of pages of text that I have submitted over the years, your comments that both challenged me and pushed my work forward, and the conversations that have inspired and kept me moving to reach this goal. I thank CSMC for their ongoing support throughout my time at MSU and the College of Education for awarding me with a Dissertation Continuation Fellowship to help offset the costs involved with this study. Finally, I want to thank the teacher and students who made this study possible. I could not have found a better cooperating teacher with whom to conduct this study.
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
KEY TO ABBREVIATIONS

CHAPTER 1: INTRODUCTION
    Conceptualization of Argument
    Situating this Study in the Literature
    Overview of the Study

CHAPTER 2: A FRAMEWORK FOR CONSIDERING STUDENTS' WRITTEN ARGUMENTS
    Background Literature
        Argumentation and Proof
            Differences between argumentation and proof
            Levels of Proof Production
            Contributing aspects of proof
        Argumentation in the Literature
            Toulmin's Framework
            Differences between written and oral arguments
        Summary
    Framework Development
        Stage 1: Data Collection
            Participants
            Prompts for written arguments
        Stage 2: Initial Analysis using Toulmin's Model
        Stage 3: Analyzing Arguments Holistically
            Epistemic
                Mathematical resources
                Assessing accuracy
            Teleological
                Elaborating mathematical resources
            Communicative
            Tying it together
    Framework for Written Arguments
        Non-Arguments
        Weak arguments
            Example; Epistemic; Teleological; Communicational
        Moderately-strong arguments
            Example; Epistemic; Teleological; Communicational
        Strong arguments with minor errors
            Example; Epistemic
        Strong arguments
            Example; Epistemic; Teleological; Communicational
        Summary
    Inter-rater Reliability
    Discussion
        Capturing Variations in Students' Written Arguments
        Diagnosing Difficulties with Arguments
        The Utility of the Framework
        Limitations and Future Research
    Conclusion

CHAPTER 3: SUPPORTING STUDENTS' WRITTEN ARGUMENTS THROUGH TASK FEATURES AND SET UP
    Conceptual Framework
        Task Framework and Argumentation
            Mathematical tasks and ambiguity
            Task set up
        Argumentation and Toulmin's model
            Argumentation in classrooms
        Research Question
    Method
        Participants and Setting
            Students
            Ms. Hill
            Role of Researcher
        Mathematical Content
            Unit 1: Equivalent Expressions
                Opportunities for argumentation
            Unit 2: Systems of Equations
                Opportunities for argumentation
        Data Collection
            Students' written work
            Classroom video and fieldnotes
        Data Analysis
            Task features
            Students' written arguments
            Task set up
                Capturing time of task set up
                Coding task set up
            Summary
    Results
        Task Features
        Task Set up
            Time spent on task set up
            Instructional moves during set up
                Telling
                Modeling
                Peer evaluation
                    Rubrics from EE
                    Peer-evaluation using rubrics
        Summary
    Discussion
        Influence of Task Features on the Persuasiveness of Arguments
            Written scaffolds
            Situating students in an argument
        Influences of Time Spent Setting Up Tasks on Arguments
            Decreasing ambiguity
            Instructional moves during task set up
        Sustaining Convincing Arguments
            Incorporating new content
        Limitations and Future Research
    Conclusion

CHAPTER 4: AN ELABORATION OF THE RELATIONSHIP BETWEEN WRITING PROMPTS AND TASK SET UP
    Method
    Results
        Equivalent Expressions
            Authoring writing prompts
            Setting up writing prompts
            Summary of EE
        Systems of Equations
            Authoring writing prompts
            Setting up writing prompts
            Summary of SoE
    Discussion and Conclusion

CHAPTER 5: USING RUBRICS FOR WRITTEN MATHEMATICAL JUSTIFICATIONS
    The Classroom and Mathematical Context
    Introducing and Using Rubrics
    Results from Using Rubrics
    Benefits of Using Rubrics
        Communicating Mathematical Resources to use in Justifications
        Helping Students Attend to Audience
    Conclusion

APPENDIX

REFERENCES
LIST OF TABLES

Table 1: Writing prompts in Unit 1 (Equivalent Expressions)
Table 2: Writing prompts in Unit 2 (Systems of Equations)
Table 3: Codes for Mathematical Resources
Table 4: Codes for Explanation of Resources
Table 5: Coherence Rubric (adapted from NAEP)
Table 6: Summary of Argument Categorization
Table 7: Results of Argument Categorization developed for EE and SoE
Table 8: Writing prompts in Unit 1 (equivalent expressions)
Table 9: Writing prompts in Unit 2 (systems of equations)
Table 10: List of codes for written scaffolds
Table 11: Examples of coding instructional moves
Table 12: Summary of all codes used for tasks and task set up
Table 13: Results of task features analysis with student results
Table 14: Student responses to task features and average student results
Table 15: Comparison of written prompt, task set up, and student results
Table 16: Strength of Argument Rubric (adapted from McCann, 1989)

LIST OF FIGURES

Figure 1: Example of released test item from Smarter Balanced Assessment for CCSS-M test (http://www.smarterbalanced.org)
Figure 2: Sample item from the released Smarter Balanced Assessment Consortium (http://www.smarterbalanced.org)
Figure 3: Toulmin's Model of Argumentation (1978)
Figure 4: Argument categories with varying degrees of convincingness
Figure 5: Example of a non-argument (student's work recreated)
Figure 6: Example of a weak argument because of mathematical errors (student work recreated)
Figure 7: Example of a moderate argument (student's work recreated in part)
Figure 8: Example of an argument coded as strong with minor errors
Figure 9: Example of a strong argument (student work recreated)
Figure 10: The mathematical tasks framework (adapted from Henningsen & Stein, 1997, p. 528)
Figure 11: Toulmin's model of argumentation (1978)
Figure 12: Introducing equivalent expressions with the pool problem
Figure 13: Example of a writing task that situated students in an argument and provided a process scaffold in the written task
Figure 14: Comparison of percent of time given to task set up and averages of students' written arguments
Figure 15: Example of a weak argument on a day with little scaffolding (student work recreated)
Figure 16: Example of a moderately strong argument on a day with modest scaffolding
Figure 17: Rubric given in Unit 1 (equivalent expressions) as a task set up to a writing activity
Figure 18: Example of student work that provides appropriate resources that are unpacked and cogently support the claim on a day where extensive scaffolding was provided
Figure 19: Mathematics Task Framework (Henningsen & Stein, 1997) adapted for tasks involving written arguments
Figure 20: Examples of similarities of form and writing scaffolds between prompts in EE with different authors
Figure 21: Example of a student who did unpack the mathematical resources
Figure 22: Writing assignments seven and eight, written by me and Ms. Hill respectively
Figure 23: Experimenting with a different type of writing prompt with writing assignment nine
Figure 24: Example of a student who used context to support their claim (student work recreated)
Figure 25: Sample item from the released Smarter Balanced Assessment Consortium (www.smarterbalanced.org)
Figure 26: The Pool Problem
Figure 27: Illustration of how words-symbols-pictures should link together in arguments
Figure 28: Student's attempt at an argument before using rubrics
Figure 29: Pool problem rubric
Figure 30: Student's successful attempt at an argument after using rubrics
Figure 31: Illustration of how students represented 2(x – 5) in factored and expanded form

KEY TO ABBREVIATIONS

ATS: Algebra Teaching Study
CCSS-M: Common Core State Standards-Mathematics
CMP: Connected Math Project (Lappan et al., 2006b)
EE: Equivalent Expressions
IEP: Individualized Education Plan
NCTM: National Council of Teachers of Mathematics
SoE: Systems of Equations

CHAPTER 1

INTRODUCTION

Reasoning mathematically is crucial to students' understanding (e.g., Ball & Bass, 2003). As contended in Adding It Up (NRC, 2001), a core element of mathematical proficiency—the goal of mathematics instruction—is students' adaptive reasoning, defined as the capacity to justify claims by logically identifying relationships between concepts. Adaptive reasoning is the "glue that holds everything together, the lodestar that guides learning" (p. 129). Mathematical reasoning has been highlighted in recent, important mathematics education documents as a core mathematical practice that should be promoted throughout students' K-12 school experiences (National Council of Teachers of Mathematics, 2009; Common Core State Standards Initiative, 2010). Even so, literature on reasoning in school mathematics highlights the troubles students have making justifications that go beyond providing empirical evidence for the validity of a claim, especially in their written work (e.g., Balacheff, 1991; Harel & Sowder, 2007; Yackel & Hanna, 2003). Yet, students are primarily assessed through their written work (Morgan, 1998), including how they communicate their mathematical reasoning and justifications. Further, with the widespread adoption of the Common Core State Standards Initiative ([CCSS-M], CCSSI, 2010), students will soon be asked to construct viable arguments in which they will need to justify a claim as a demonstration of their mathematical proficiency. For example, the question shown in Figure 1, a sample item taken from the Smarter Balanced Assessments website, is an example of the types of arguments students may be asked to engage in during testing that will begin in 2014. (The Smarter Balanced Assessment Consortium is a state-led consortium that is developing items, aligned with the CCSS-M, for the statewide assessments that will be given during the 2014-15 school year.)

The noise level at a music concert must be no more than 80 decibels (dB) at the edge of the property on which the concert is held. Melissa uses a decibel meter to test whether the noise level at the edge of the property is no more than 80 dB.
- Melissa is standing 10 feet away from the speakers and the noise level is 100 dB.
- The edge of the property is 70 feet away from the speakers.
- Every time the distance between the speakers and Melissa doubles, the noise level decreases by about 6 dB.
Rafael claims that the noise level at the edge of the property is no more than 80 dB since the edge of the property is over 4 times the distance from where Melissa is standing. Explain whether Rafael is or is not correct.

Figure 1. Example of released test item from Smarter Balanced Assessment for CCSS-M test (http://www.smarterbalanced.org/).
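One way a reader can check the arithmetic in this item (a sketch of a possible solution path, not part of the released materials) is to track the noise level across successive doublings of the distance from the speakers:

\[
100\ \text{dB at 10 ft} \;\rightarrow\; 94\ \text{dB at 20 ft} \;\rightarrow\; 88\ \text{dB at 40 ft} \;\rightarrow\; 82\ \text{dB at 80 ft}.
\]

Even at 80 feet, which is farther than the 70-foot property edge, the estimated level is about 82 dB; the level at the edge itself is therefore higher still and exceeds the 80 dB limit, so under this reasoning Rafael's claim would not hold.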
Problems such as this ask students to make a claim (regarding whether the fictitious student is correct) and to provide a justification for their claim in their explanation. In other words, problems like this ask students to construct an argument for their solution in response to the contextual situation. Incorporating the recommendations of the CCSS-M into school classrooms depends on uniform criteria for what an argument should include as well as a better understanding of teaching and learning written arguments. Thus, the purpose of this study was to contribute to the understanding of arguments by examining how one teacher supported her students in the development of their argumentative writing skills. In doing so, I created a taxonomy of written arguments that categorized arguments according to their level of persuasiveness and used this categorization to investigate which pedagogical moves were most supportive of students' convincing mathematical arguments.

Conceptualization of Argument

For this study, I draw from Driscoll's (1999) conceptualization of argument as "a coherent string of thoughts that convinces a fellow student of a mathematical result" (p. 104). Driscoll's definition implies three essential components of an argument: first, the result; second, the convincing justification of the result; and, third, an audience to be convinced. This definition suggests that the result is a solution to a problem, for example, whether Rafael's claim is correct in the problem shown in Figure 1. Obtaining the result is typically how students have been tested in the past on high-stakes exams, but pushing them to articulate why the result is valid introduces higher levels of reasoning. To provide the justification, students must draw on their mathematical content knowledge, but they also must know something about arguments themselves in order to meet the criteria of convincing someone else. Knowing what statements can be used to convince others is a complex endeavor that requires students to draw on mathematical resources that may need to be explicitly brought to students' attention.

The third essential component in Driscoll's criteria is the need to convince another. The body of literature describing arguments that unfold through whole-class discussions assumes the class, which includes the teacher and fellow students, is the audience being convinced. For oral arguments, immediate feedback from the classroom community informs students whether their argument is convincing through questioning or challenging the validity of the claim. With written arguments, however, the intended audience is less straightforward. Driscoll's (1999) definition specifies that the audience be a "fellow student," which is true for oral arguments. But for written arguments this is only true if students engage in peer-review activities. Further, students are not likely to read each other's written arguments on assessments or high-stakes tests. Therefore, the intended audience, while it may include peers, must extend to include teachers and researchers examining the development of written arguments in school classrooms.
Situating this Study in the Literature

One way that researchers have captured how teachers scaffold students' oral arguments is by using Toulmin's model of argumentation (1978) as a means of characterizing students' arguments. In particular, this corpus of literature describes how teachers create a context for argumentation (Stephan & Rasmussen, 2002; Wood, 1999), teachers' roles and knowledge in facilitating argumentation (Yackel, 2002), students' co-construction of arguments (Mueller, 2009), and teachers' refutation of students' invalid arguments (Giannakoulias, Mastorides, Potari, & Zachariades, 2010). This body of literature may have implications for how teachers provide instruction for written arguments. Although recent literature describes teaching moves that support oral arguments in whole-group settings, we know little about how constructing arguments in whole-class discussions influences individual students' reasoning in their written arguments, if at all (McCann, 1989), or how students learn to write convincing arguments.

To address the paucity in the literature regarding students' written arguments, I explored instructional moves that supported students' individual written arguments by using the mathematics task framework (Henningsen & Stein, 1997; Stein, Grover, & Henningsen, 1996). In particular, I analyzed features of tasks written to elicit student arguments, and how these features may have scaffolded students' written products. Additionally, I considered how instructional moves during the set up of these writing tasks may have influenced students' written work. The following question guided the development of this study and analysis of the results:

What is the nature of task features and task set up, specifically as it relates to process, product, and resources, and subsequent student performance on written arguments in an 8th grade algebra class?

Overview of the Study

In order to answer the question above, I first needed to establish an analytical framework that captured different levels of persuasiveness in students' arguments. This was a necessary first step in analyzing what task features and instructional moves during set up influenced students' written work. The development of this framework resulted in multiple attempts at classifying students' written data from this study. In keeping with the body of argumentation literature in mathematics education, this extension was informed by Toulmin's model of argumentation (1978) as well as the aspects of rational behavior that were adopted by Boero et al. (2010) for proof, and Driscoll's (1999) criteria for arguments. These aspects consider distinct types of knowledge that are necessary to proof production: epistemological, teleological, and communicational. These three aspects, which align with Driscoll's criteria and extend to arguments, were incorporated into the hierarchical categorization of arguments.

Once the framework was established and students' work categorized, I began the analysis of classroom data to determine what factors may have played a role in students' written arguments. Two factors, in particular, were the subject of this study: the tasks as written and the instructional moves employed during the task introduction. Accordingly, I drew on Doyle's (1983, 1984) work around academic tasks and the subsequent work done to establish the mathematical task framework (Stein et al., 1996).
This framework considers different stages of incorporating tasks in mathematics classrooms and how these stages support students through different phases of task implementation. Considerations of tasks are important as they inform students of what it means to engage in mathematical activity (Stein et al., 1996) and shape the content students learn and how they learn to process it (Doyle, 1983).

My analysis revealed that instructional moves during the task introduction were an influential factor. In fact, the factor that had the most impact on the persuasiveness of students' arguments was the amount of time spent setting up the task. Conversely, even though students attended to the features of the task as written, these features did not seem to have a noticeable role in the persuasiveness of students' arguments. Even so, students often complied with written scaffolds that instructed them to use a particular mathematical resource.

There were several instructional moves used during task set up that seemed to prompt students towards more convincing arguments. In particular, a combination of telling, modeling, and activities involving peer review, targeted specifically at students' written work, seemed to have the most direct impact on students' written arguments. Specifically, these teaching moves addressed arguments as both a process and a product. That is, the instruction identified particular things students should do as they are generating their arguments, such as make connections between multiple representations. Additionally, the instruction addressed the requirement that students' arguments provide evidence that convinces another of the validity of their claim. In doing so, students were given criteria for the finished product. These moves led students through a continuum of passively receiving instruction on what their written work should include, to taking a more active role as they provided feedback to their peers with the use of a rubric.

In the following chapters, I elaborate on the question posed above. In Chapter 2, I present the analytical framework I developed to analyze students' written arguments. This framework provides a means to categorize written arguments based on several factors, including how well students communicate their content knowledge and knowledge of arguments. The intended audience for this article is readers of the Journal of Mathematical Behavior (JMB). I chose JMB because of the volume of articles published in this journal on argumentation, and believe this article will contribute to the ongoing conversation by considering a different format of argumentation. This different focus includes a new platform for argumentation (written rather than oral arguments), as well as a close examination of the types of statements students used to create stronger arguments. In doing so, this article offers the mathematics education community a tool to categorize and diagnose arguments to identify aspects of argumentation that may be lacking as evidenced by students' written work.

In Chapter 3, I discuss the instructional moves that seemed to support the changes in students' written work. In particular, I found that as more time was devoted to setting up writing tasks, students produced more convincing arguments, as determined from using the argument taxonomy. This elaboration is written in article format with intended submission to the Journal of the Learning Sciences (JLS). This journal was chosen because of its commitment to understanding the processes and outcomes involved in learning.
Because this article focuses on teaching moves specific to helping students learn how to produce stronger written arguments, JLS is an appropriate destination for this article.

Chapter 4 was written to accompany Chapter 3. As such, it elaborates on the teaching side of the study, and attempts to account for influences the authorship of the writing prompts may have had on Ms. Hill's decisions to provide instruction during task set up or not. Results from this analysis suggest that Ms. Hill was more consistent in providing task set up on tasks that she wrote, or those that were written by me that closely resembled prompts she had written. This, in turn, had an impact on how successful students were with the writing assignment. Chapter 4 will likely be incorporated into Chapter 3 when preparing the manuscript for publication.

Chapter 5, like Chapters 3 and 4, focuses on the teaching side of the study. This article, intended to be submitted to Mathematics Teaching in the Middle School (MTMS), elaborates on one specific instructional move that was effective in supporting students' writing: peer-review activities with the use of rubrics. Rubrics, and the activities used to implement the rubrics, seemed to support students' development of cogent arguments through their explicit communication and demonstration of appropriate types of statements to justify claims. With the widespread adoption of the CCSS-M (2010), this article will inform teachers of ways to help students develop supporting justifications when engaging in argumentation. Together, these chapters answer the research questions that guided this study.

CHAPTER 2

A FRAMEWORK FOR CATEGORIZING STUDENTS' WRITTEN ARGUMENTS

The Common Core State Standards Initiative (CCSSI, 2010) outlines criteria for students in grades K-12 to be mathematically proficient. These criteria comprehensively address content students should master to be adequately prepared for entry-level courses in colleges and universities. Additionally, the criteria specify mathematical practices that give students opportunities to apply their mathematical content knowledge in ways that stretch beyond procedural applications. One such mathematical practice is "constructing and critiquing viable arguments". Viable arguments are constructed as students "make conjectures and build a logical progression of statements to explore the truth of their conjectures … they justify their conclusions, communicate them to others, and respond to the arguments of others" (pp. 6-7). The practice of constructing viable arguments can occur either collectively, through discussions, or individually, through students' written work. Whatever the format, this mathematical practice involves selectively linking content knowledge to claims with the explicit intent of justifying them (Boero et al., 2010). This process requires students to reason beyond procedures and draw on the conceptual underpinnings of the mathematical content.

Consistent with the CCSS-M's description of argument, Driscoll (1999) defined an argument as "a coherent string of thoughts that convinces a fellow student of a mathematical result" (p. 104). This implies three essential components of an argument: the result, the justification of the result, and the audience being convinced. Obtaining the result is typically how students have been tested in the past on high-stakes exams, but pushing them to articulate the validity of the result introduces a higher level of reasoning.
Further, what counts as convincing evidence in support of a claim depends on established classroom norms (Ball & Bass, 2003; A. J. Stylianides, 2007; Wood, 1999). This raises an issue of subjectivity: what may be acceptable in one classroom may not be in another. The third component of Driscoll's (1999) definition assumes there is an intended audience in need of convincing. Although Driscoll identifies the audience as "a fellow student", the CCSS-M is vague about who the intended audience is. For items on the statewide tests starting in 2014, the intended audience will likely not be fellow students. Thus, while it is appropriate that students be considered among the intended audience, especially if they are expected to critique the reasoning of others, the audience should extend to teachers and researchers as they are the likely audience monitoring the development and persuasiveness of students' written arguments. The sample item shown in Figure 2 is from the Smarter Balanced Assessment database and illustrates how students will need to determine and support a result to convince another of its validity.

Tony is buying a used car. He will choose between two cars. The table below shows information about each car.

Car     Cost     Miles Per Gallon (MPG)     Estimated Immediate Repairs
Car A   $3200    18                         $700
Car B   $4700    24                         $300

Tony wants to compare the total costs of buying and using these cars.
- Tony estimates he will drive at least 200 miles per month.
- The average cost of gasoline per gallon in his area is $3.70.
- Tony plans on owning the car for 4 years.
Calculate and explain which car will cost Tony the least to buy and use.

Figure 2. Sample item from the released Smarter Balanced Assessment Consortium (http://www.smarterbalanced.org). (The Smarter Balanced Assessment Consortium is a state-led consortium that is developing items, aligned with the CCSS, for the statewide assessments that will be given during the 2014-15 school year.)

Problems like the one shown in Figure 2 position students to argue the validity of their result by asking them to explain, or defend, their choice.
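For the reader, the arithmetic behind this item can be checked as follows (a sketch only, assuming exactly 48 months of ownership at the stated 200 miles per month, so 200 × 48 = 9600 miles):

\[
\text{Car A: } \$3200 + \$700 + \tfrac{9600}{18}\times\$3.70 \approx \$5873, \qquad
\text{Car B: } \$4700 + \$300 + \tfrac{9600}{24}\times\$3.70 = \$6480.
\]

Under these assumptions Car A costs less to buy and use; what the item asks of students, however, is not just this computation but an explanation that links it back to the claim.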
The intended audience is not explicit in the existing literature on argumentation in mathematics classrooms (e.g., Krummheuer, 1995; Whitenack & Knipping, 2002; Yackel, 2001). In fact, the majority of this literature has centered on arguments as they develop in whole-class discussions, with the classroom community as the assumed intended audience. Yet, with statewide testing of the CCSS-M, students will be required to provide written arguments to problems similar to that illustrated in Figure 2 for an unknown audience. As a result, preparing students to produce viable written arguments will require teachers to oversee the development of these skills so that a "critic" would be convinced. Thus, it is essential that teachers and researchers come to a uniform understanding of what a viable argument entails to be convincing.

One way to introduce uniformity is with a taxonomy developed from existing work on arguments and a common definition of argumentation. Such a tool offers teachers and researchers a means to uniformly gauge the persuasiveness of students' arguments and has the potential to determine how to help students improve less convincing arguments. Additionally, a taxonomy tool would benefit teachers by helping them ensure students include appropriate criteria when constructing convincing arguments and provide formative feedback when students' arguments do not meet the criteria. Further, this tool would likewise benefit researchers and teacher educators investigating different instructional moves that support students in producing convincing arguments and inform prospective teachers of these practices. To date, no such assessment tool exists for written arguments.

The purpose of this paper is to offer a categorization tool for students' written arguments along a continuum from most to least convincing based on the attributes put forth by the CCSS-M, Driscoll's (1999) definition, and the mathematical resources school students have at their disposal. This tool provides one way for teachers and researchers to compare growth in students' written arguments over time while also providing insight into the types of knowledge students incorporate into their arguments. As such, this analytical framework has the potential to serve as a diagnostic tool for teachers to provide feedback to students who are developing their argumentative writing skills in mathematics classrooms. With the presentation of the framework, I provide exemplars for each of the five levels of arguments, and highlight differences between them. Finally, I offer conjectures about what each level in the framework suggests about student understanding of both the content and the requirements for argumentation. But first, I will review the literature that was relevant to creating this framework.

Background Literature

Although there is substantial literature regarding student reasoning through argumentation, the mathematics education literature concerning arguments has been limited to the analysis of whole-class discussions, with little attention to the knowledge students need to generate these arguments. This body of literature has reported on creating a context for argumentation (Stephan & Rasmussen, 2002; Wood, 1999), teachers' roles and knowledge in facilitating argumentation (Yackel, 2002), and discourse moves that support whole-class arguments (Forman, Larreamendy-Joerns, Stein, & Brown, 1998). While these classroom-level analyses are important for considering how teachers effectively create a classroom climate that promotes arguments, little research has been conducted at the student level. For example, no research has examined students' interpretations of discussions containing arguments, or considered whether students understood that they had participated in the construction of an argument. One may wonder whether students understand what a mathematical argument is, or what criteria are necessary to construct one.

The body of literature related to proof sheds some light on students' knowledge for argumentation and the difficulties therein. In particular, students tend to draw on empirical evidence to validate general statements (e.g., Balacheff, 1988; Harel & Sowder, 1998). Yet, the nature of proof in school mathematics is to use a chain of deductive reasoning to verify the truth of a given statement. While arguments are less rigorous, the end goal is similar: to establish the truth of a claim. Because of their relationship, the consideration of proof literature is relevant to the development of a framework to categorize arguments, especially because multiple frameworks for categorizing students' proof production exist. In the following sections, I consider argumentation and proof as they relate to students' written work and then expound on persuasive writing in general.
Argumentation and Proof

All proofs are arguments, and scholars recognize the importance of constructing arguments as a stepping stone to proof (Pedemonte, 2007; A. J. Stylianides, 2007). As such, proof and arguments share many similarities, including the requirement to convince through a logical sequence of statements. In this paper, however, argument refers to non-proof arguments. In the sections that follow, I elaborate on differences between argument and proof and then describe existing frameworks for the evaluation of proof that were useful to the construction of the framework for arguments.

Differences between argumentation and proof. Although all proofs are arguments, not all arguments are proof, and the boundary that separates them is not always apparent. Stylianides (2009) distinguished between proof and argument by the types of statements used, qualifying proof as appropriately incorporating "key accepted truths". Accepted truths include "axioms, theorems, definitions, and representational tools that a particular community may take as shared at a given time" (p. 266). What makes accepted truths "key" depends on the classroom community, blurring the line between proof and argument further. For, what may be accepted as shared in one classroom may not be in another; consequently, what may count as proof in one classroom may only count as an argument in another.

One distinguishing criterion that is consistent in the literature on argumentation (e.g., Driscoll, 1999; Yackel, 2002) is the way students are set up to provide an argument. In large part, in the examples provided in the argumentation literature, arguments arise from a problem-solving activity in which students argue the validity of their solution. Accordingly, reported situations involving arguments remain local to the given context, like the problem shown in Figure 2. This is the case for the problems in this study: claims were contextually bound to a problem and were the result of a problem-solving activity. Statements that are proved, by contrast, are typically global mathematical statements such as, "Prove the sum of two odd integers is even." Once the statement is proved, it is understood as a mathematical truth and, as such, can be taken up in future proofs. In this sense, proof activities can be seen as being generative. For example, once it has been proven that the sum of two odd integers is even, this mathematical fact can be used in future proofs. Conversely, claims in arguments do not tend to be generative. Arguing that Car A in Figure 2 is the better financial choice does not provide a foundation for additional mathematical content in the same way as proving a general mathematical fact can.
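As a point of contrast with such contextual claims, the general statement above admits a short, standard justification (supplied here only for illustration): if a = 2m + 1 and b = 2n + 1 for integers m and n, then

\[
a + b = (2m + 1) + (2n + 1) = 2(m + n + 1),
\]

which is even for any choice of m and n. It is this generality that lets the proven statement be reused in later work, whereas a claim about Tony's cars cannot be.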
There are several ways that mathematics educators have characterized written proof in school mathematics (e.g., Bell, 1976; Harel & Sowder, 1998; A. J. Stylianides & Stylianides, 2009). These were an important consideration in the development of the taxonomy for written arguments because of the relationship between proof and arguments. In the following sections, I will discuss two that were influential to the development of the framework presented in this study.

Levels of Proof Production. Stylianides and Stylianides (2009) developed a hierarchical framework to categorize students' proof production in a study of prospective teachers in an elementary methods course in which mathematical proof was a focal point. The five levels of proof production are more general than other existing schemes (Harel & Sowder, 2007) and, therefore, more adaptable to arguments. The five levels of proof production, in order of decreasing sophistication, were:

1. Proof
2. Valid general argument, but not proof
3. Unsuccessful attempt for valid general argument
4. Empirical argument
5. Non-genuine argument

In this study, proof was defined to be "general, valid, and accessible to the members of the community" (p. 239), in keeping with how the second author defined proof (A. J. Stylianides, 2007). Criteria for the production of proof were outlined to include correctness, concisely addressing the problem posed, and being written in a way that was coherent, convincing, and logical; criteria that are similar to Driscoll's (1999) definition for argument. These criteria were sufficiently vague that they could apply to a multitude of situations, yet the expectation, apparent from an examination of the written work that was shared in this study, was that for an argument to be considered a proof, "key accepted truths" (G. J. Stylianides, 2009) be used. In all the proofs that were analyzed using this categorization scheme, however, students were asked to construct a proof for a general statement that was provided for them.

This hierarchy was appealing because it was devised to capture every instance of proof. Unlike other proof schemes that might have to consider proofs that are a hybrid of multiple categories, Stylianides and Stylianides' framework offers a way to categorize based on students' progression towards the goal of producing a proof. Categorizing in this manner would allow teachers and researchers to make comparisons between classes or units, or to track progress in individual students' proof production over time. The ensuing argument framework can be seen as an expansion of this framework for the levels that relate specifically to argument.

Contributing aspects of proof. Morselli and Boero (2009) present a framework based on the contributing aspects of proof. Instead of classifying proofs according to type, Morselli and Boero apply Habermas' theory of rationality (2003) to proof to consider the types of knowledge students must coordinate in order to construct an appropriate proof. In doing so, they focus on three aspects of arguments: 1) the epistemic aspect, 2) the teleological aspect, and 3) the communicational aspect. These three areas of knowledge work together to contribute to the development of a valid proof.

First, the epistemic aspect addresses the content knowledge brought to a proof in order to persuade others of its validity. The epistemic aspect regards the mathematical content that is applied to validating the statement being proved. In this respect, the epistemic aspect can be seen to influence the process of constructing a proof as well as the final product. As such, it deals with the selection of mathematical tools and resources used to verify the statement and reflects students' understanding of and flexibility with the mathematical content.

The teleological aspect of rational behavior is concerned with the knowledge of proof and argumentation in general. With this knowledge comes the "conscious choices to be made in order to obtain the desired product" (Boero et al., 2010, p. 19). This suggests a product-driven approach to proof and argumentation in which the author is aware of his/her obligation to convince another.
Closely linked to the epistemic aspect, the teleological aspect requires students to consciously make decisions about which of the mathematical resources at their disposal will be most convincing. This aspect requires students to use mathematical resources in a way that removes doubt from the validity of the statement being argued.

Finally, the communicational aspect requires the student to present their argument in a way that conforms to norms around argument and proof and is easily understood. In doing so, students need to keep in mind their audience and present their argument in a coherent, logical way. Again, this aspect overlaps with the other two aspects in the sense that a student may draw on a resource, but they also need to communicate the significance of that resource in justifying the claim. The list that Stylianides and Stylianides (2009) created addressed this by stating that the proof should be written in a coherent and logical manner. In this way, these three aspects are intertwined as they work together in producing the final product.

Considering the relationship between proof and arguments was useful to the development of this framework. The taxonomy presented in this article elaborates on the levels of arguments that Stylianides and Stylianides (2009) presented for proof by providing descriptions for levels of argument. This expansion takes into account the types of knowledge necessary for the construction of proof (Boero et al., 2010), but it also incorporates extant literature on argumentation that will be presented next.

Argumentation in the Literature

Driscoll (1999) asserted that students as young as middle school should regularly be engaging in mathematical argumentation, and that "students who make convincing arguments…show a higher level of algebraic thinking" (p. 88). He established three criteria for an argument to be convincing to the classroom community. First, convincing arguments should leave nothing to inference. That is, arguments require students to be clear, coherent, and complete. The reader should not have to guess at how a mathematical resource supports the claim. Instead, students should make links between resources and the claim explicit. This criterion aligns with the communicational aspect, providing an expectation that students articulate their mathematical thinking (Morgan, 1998). Secondly, students' arguments must be bound to the original context of the problem. This criterion provides insight into the resources students should use when providing their justification for their claim. Namely, students should use the given context, including diagrams, to justify their claims. This criterion extends to the representations used to mathematize the context and develop the claim, and it provides insight into students' content knowledge, supporting the extension of the epistemic aspect (Boero et al., 2010) to arguments. Finally, Driscoll states that a convincing argument should "stand up to any challenge" (p. 104). Removing any doubt regarding the mathematical truth of a claim requires students to reason about each step of the process and provide justifications. This parallels the teleological aspect, where students consciously select mathematical resources that support their claim. Yet, this is an area that has been difficult for students (Deatline-Buchman & Jitendra, 2006; Morgan, 1998). Driscoll's (1999) criteria mirrored the aspects of proof (Boero et al., 2010), demonstrating that these aspects extend to written arguments.
As such, Driscoll's (1999) criteria were also important to the development of the framework presented here because they directly addressed the criteria for arguments. Toulmin's model of argumentation (1978) was also useful, and has been used extensively in the literature on argumentation in general.

Toulmin's Framework. Toulmin's (1978) model is widely used in the mathematics education literature to identify parts of an argument. Four parts of the model—claims, grounds, warrants, and backings—are primary to argumentation and are well-represented in existing literature. Figure 3 provides a visual picture of these components of argumentation and how the elements of an argument work together to support a claim. The arrows indicate how one element supports another element in the argumentation process.

[Figure 3. Toulmin's Model of Argumentation (1978): grounds support the claim, warrants link the grounds to the claim, and backings support the warrants.]

Typically, the first line of defense in support of a claim is the provision of grounds. In the mathematics classroom, grounds are not "facts in the empirical sense, but statements that we take as given under the assumption that they will not be challenged" (Forman et al., 1998, p. 532). Grounds may include empirical statements such as procedures or evidential data. Thus, grounds provide some justification for the claim, but are not a strong enough rationale to validate the claim. Warrants are given in support of the grounds provided (Yackel, 2002), and are often expressed as a "second" explanation, providing argumentative support (Whitenack & Knipping, 2002). Warrants play a more indirect role in supporting the claim through their linking function. That is, they logically link the grounds to the claim, and provide a more generalized form of reasoning than grounds. Finally, backings are generally agreed-on facts that support the warrant. Backings lend credibility to the warrants, solidifying their role in the argument. Backings put to rest any lingering suspicion about the validity of the claim by giving authority to it (Yackel, 2002).

Toulmin's (1978) model has primarily been applied to argumentation occurring in whole-class discussions. But researchers in science education analyzing students' written explanations (McNeill & Krajcik, 2009, 2012) have adapted Toulmin's model to include only three components: claim, grounds, and reasoning. Reasoning is a combination of warrants and backings, and is described as articulating "the logic behind the link between the evidence and claim, which often includes appropriate scientific principles" (2009, p. 420). Thus, reasoning still requires students to draw from something other than empirical evidence to support their claim. This adaptation was incorporated into the resulting framework for written arguments. By itself, this framework has been used to identify the components of an argument as it unfolds. As such, Toulmin's model has been used in the mathematics literature as a framework to discuss primarily the process of argumentation developed in oral arguments rather than the finished product.

Differences between written and oral arguments. Constructing a written argument is different in many respects from participating in generating an argument in whole-class discussions. On one hand, whole-class arguments have the benefit of the teacher or other students pressing and supporting each other to provide more to their argument, either by way of providing more mathematical resources or explicit links to the claim. This can be provided
This can be provided 20 verbally or with non-verbal gestures, such as pointing to a representation or using gestures to describe a mathematical situation. Similarly, there are more people contributing to the argument, each drawing from their own knowledge and experiences. In these cases, the social inputs of others aid students in their construction of oral arguments (Knudson, 1992). Unlike oral arguments, writing explanations individually typically does not receive this type of ongoing feedback and call for clarification. Students are left to their knowledge of arguments to determine when they have applied sufficient mathematical resources and unpacked them clearly enough to support the claim so that it stands up to challenge (Driscoll, 1999). Accordingly, Albert (2000) drew attention to the solitary act of writing, and the fact that students typically write without an audience in mind other than their teacher (Morgan, 1998). As a result, written arguments tend to include only the slightest insight into connections made between student's thinking to generate a claim, and the reasoning that supports it (Deatline-Buchman & Jitendra, 2006). Summary. Several areas of literature were instrumental to the development of the framework presented shortly. For this study, I apply Driscoll’s (1999) definition of argumentation, and take into account the criteria he outlined. These criteria align with the aspects of proof (Boero et al., 2010), making it appropriate to extend the coordination of these aspects to constructing arguments. These aspects were instrumental to the development of the categorization tool that extends Stylianides and Stylianides taxonomy for proof by elaborating on different levels of arguments based on their persuasiveness. In particular, the analytical framework developed in this study provides a tool to categorize students’ written mathematical arguments according to their persuasiveness across classrooms, units, and individual students. This has implications for teachers, curriculum 21 developers, and mathematics education researchers interested in reasoning in general, and oral and written argumentation, possibly proof, in particular. The information provided by the framework can ultimately help teachers support students in making stronger arguments by identifying which aspect of argumentation seems to be weak. In light of this, my goal was to develop a framework that met two criteria. First, I wanted the framework to classify students’ written arguments according to the degree of convincingness that would allow a comparison of arguments across time. Secondly, I wanted the framework to identify strengths and deficits in students’ coordination of their epistemic, teleological, and communicational knowledge. In the following section I discuss the process of developing this framework. Framework Development The framework for categorizing students’ written arguments was developed with the purpose of achieving two goals. First, the framework emerged as a means to classify students’ written arguments collected in a case study (Yin, 2009) of the arguments students produced in th two units in a low-tracked, 8 grade CMP algebra class. Secondly, the design of the framework is intended to serve as a diagnostic tool for teachers and researchers. This involved an examination of how students were coordinating their knowledge of content, argument, and communication. In this section, I will describe the stages involved in the development of the framework. 
Following this, I describe the student work, providing examples of how the criteria for each category were met.

Stage 1: Data Collection

The data that informed the development of the framework consisted entirely of students' written work. The written work included worksheets, study guides, and quizzes. The worksheets were of two types: those that involved both skill-developing activities and writing prompts and those that consisted only of writing prompts. I developed the writing prompts with the goal of establishing a situation in which students provide and support a claim. In the following section, I elaborate on the participants of the study and provide more information on the writing prompts.

Participants. The participants for this study were located in a large junior high school (8th and 9th grade) in a suburban school district in the Midwest. The particular section of 8th grade algebra for this study fluctuated between 30 and 35 students who were placed in a section of the lowest-tracked mathematics class. Eighth grade is the first year that students in this district are tracked in mathematics, and they are tracked according to 7th grade test results. I chose to focus on this class because low-tracked classes are underrepresented in the mathematical reasoning literature.

The teacher, Ms. Hill, had taught mathematics and science in the district for 8 years at the time of this study and was in her fifth year of teaching CMP. She had been teaching the low-tracked 8th grade class for three of those years and had participated in a separate project the previous year, the Algebra Teaching Study ([ATS], http://ats.berkeley.edu/). The overall goal of this project is to develop a classroom observational scheme that links pedagogical moves to growth in students' robust understanding of algebra. Through multiple classroom observations for ATS, Ms. Hill stood out as consistently prompting students to provide justifications for their claims. I selected her to participate in this study because she showed commitment to improving students' mathematical practices, especially as they relate to mathematical arguments.

Prompts for written arguments. The writing prompts were designed with the purpose of eliciting arguments from students that would demonstrate their content knowledge and knowledge of arguments. As such, they were intended to provide situations in which students were required to state a claim and defend it with a mathematical justification. In many cases, students were required to use their solution to a problem as their claim and defend it by explaining it to a fictitious other. Other times, students were given a situation with the answer provided and asked whether they agreed or disagreed and why. Tables 1 and 2 provide the prompts given across this study.

This study spanned two algebra units that targeted identifying equivalent expressions (EE) and solving systems of equations (SoE). From EE, I collected six writing assignments, and from SoE I collected five. Ms. Hill administered the writing prompts once she and I sensed students had a foundational understanding of the content. This was gauged by students' work on problems during group or individual work time, and the discussions that ensued at the conclusion of their work time. Most (n = 9) of the writing assignments were given to students to work on during class time, with 10-15 minutes allotted specifically for students to work individually on their arguments.
The remaining two were also given during class, but they were part of a larger assignment for which students were allowed to work with their classmates.

Table 1
Writing Prompts in EE

Day 2: Someday you might be training new employees. Explain to a new employee how to calculate the number of tiles for ANY length pool.
Day 3: How many tiles are needed for a 100 x 100 pool? Explain how and why your equation works using words, symbols and pictures.
Day 4: Explain to workers at the "Sticker Cube Factory" how to find the number of stickers they will need for any length (x) cube. Tell them HOW to find it and WHY it works.
Day 5: Provide a justification for the sticker cube problem (from Day 4) that would earn full credit based on the rubric.
Day 6: Edit the instructions so the manual does a better job explaining why the number sentence works.
Day 9: Crystal says that 2(x-5) is equivalent to 2x - 5. Explain to Crystal why she is incorrect. Be sure to use words, a diagram, and the equation in your explanation.

Table 2
Writing Prompts in SoE

Day 5: Student council is selling sweatshirts (profit $8) and t-shirts (profit $5). The president says if they sell 20 sweatshirts and 10 t-shirts they will make a profit of $250. The treasurer says if they sell 20 sweatshirts they'd need to sell 18 t-shirts. Use an equation and graph to decide who is correct. Then, explain how each representation gives evidence to support your decision.
Day 8 (quiz): The band students are selling cookies (x) and candy (y) in an effort to raise $500. Cookies cost $5 per box and candy costs $2 per box. Ali tells the teacher they will need to sell 20 boxes of cookies and 100 boxes of candy. Jack says if they sell 20 boxes of cookies, they will need to sell 200 boxes of candy. Who is correct? Show and explain how to use the equation and graph as evidence.
Day 10: [Students are given a problem in which a hypothetical student, Casey, does all the work (shown) and finds the point of intersection.] Explain to Casey what the solution means and why it works.
Day 11: [Students are given a systems of equations problem that is worked out incorrectly.] Casey claims the answer is (15, 10). Is he correct? Explain why or why not.
Day 12 (quiz): Morgan wanted to raise money to buy food for families in need. She raised $600 by selling stickers for $3 each and t-shirts for $6 each. At the end of the day she counted that she had sold 150 items. How many stickers and shirts were sold? Explain what your solution means and use the context of the story to support your solution.

Results from the written assignments were used to gauge areas where student understanding may have been weak. The purpose of the prompts was to extend student thinking beyond the mechanics of the problem and require them to make sense of their solution and provide a justification for it. Consequently, for EE, the writing assignments began almost immediately and were given consecutively, because students seemed to have a firm understanding of the content of the writing prompts and Ms. Hill was attempting to develop norms around students’ written arguments. In SoE, however, the writing prompts were delayed because students were being introduced to new material that seemed more complex for them. At the conclusion of each class, Ms. Hill and I discussed whether students would be ready for a writing prompt the following day and who would write it. Ms. Hill and I each wrote about half of the prompts for each unit.
All the prompts were written the night before they were given so that they were consistent with what students were doing in class. In all cases, students wrote their responses to the writing prompts in class; the responses were collected by Ms. Hill and given to me at the conclusion of the class.

Stage 2: Initial Analysis Using Toulmin’s Model

As I received students’ written responses from Ms. Hill, I removed identifying information and replaced it with a pre-determined student identification number. Then, each day I coded responses according to my adaptation of a rubric based on Toulmin’s model (see Appendix A) that is used in Language Arts education (McCann, 1989). This rubric functioned in two ways. First, it identified each component (claim, grounds, warrants, backing). Once the components were identified, the rubric provided a method for establishing the strength of each one using a four-point scale (0-3) with respect to how well the component was communicated and how well it supported the claim or its counterpart. For example, the rubric for warrants rates the strength of the connection between the grounds and the claim. The adaptations were made to reflect mathematical arguments and provided quantitative measures of the components (claim, grounds, warrants, backings) of students’ arguments.

To use this rubric, I first identified the claim, grounds, warrants, and backings in each written argument. Once each component of the argument was identified, I used the rubric to assign a score to each, resulting in one to four component scores for each argument depending on how extensive students’ arguments were. In addition to the individual component scores, I also calculated a holistic score for each argument by totaling its component scores. The component rubric scores provided a means to track the development of grounds, warrants, and backings across each unit and to determine growth in components.

This analysis revealed some consistent trends in the data. For example, the scores for warrants showed modest improvement over the two units, and holistic scores dropped when a new writing prompt was introduced. Additionally, the total scores allowed a way of categorizing arguments into three groups: strongly, moderately, and weakly convincing arguments. Once the arguments were categorized, though, it became clear the rubric was insufficient for a number of reasons. First, there were many arguments that did not score well even though I considered them convincing. The reason for most of these miscategorizations was that students had not included a backing in their argument, which adversely affected their holistic score. Second, and more importantly, the scores did not provide qualitative information about what students were including in their arguments that resulted in those scores. Unless the holistic scores were very high or very low, they did not give much insight regarding how convincing students’ arguments were. For example, a score of 9 may have indicated that students created a strong argument that did not include a backing, or that they made significant errors in two of the components. Thus, the holistic scores obscured data that were relevant to constructing arguments. In particular, I was interested in the content of strongly convincing arguments, what was lacking in the moderately and weakly convincing arguments, and what the arguments as a whole suggested about students’ understanding of both the content and of arguments in general.
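To make the Stage 2 procedure concrete, the following sketch shows how the component and holistic scoring could be tabulated. It is a minimal illustration rather than the study’s actual instrument: the band cut-offs and the two sample score profiles are my own assumptions, chosen only to show how an argument missing a backing and an argument with errors spread across components can receive the same holistic total.

```python
# A minimal sketch (not the study's instrument) of the Stage 2 scoring:
# each identified Toulmin component is rated 0-3, the holistic score is the
# total of the component scores, and totals are banded into weakly,
# moderately, or strongly convincing. The cut-offs below are illustrative
# assumptions, not values taken from the study.

COMPONENTS = ("claim", "grounds", "warrants", "backing")

def holistic_score(component_scores):
    """Sum the 0-3 scores of whichever components the argument contained."""
    return sum(component_scores.get(c, 0) for c in COMPONENTS)

def band(total):
    """Assign a convincingness band from the holistic total (assumed cut-offs)."""
    if total >= 9:
        return "strongly convincing"
    if total >= 5:
        return "moderately convincing"
    return "weakly convincing"

# Two hypothetical arguments with the same holistic total of 9:
# one strong in every component it contains but missing a backing,
# the other complete but with weaker grounds and warrants.
no_backing = {"claim": 3, "grounds": 3, "warrants": 3}
with_errors = {"claim": 3, "grounds": 1, "warrants": 2, "backing": 3}

for scores in (no_backing, with_errors):
    total = holistic_score(scores)
    print(total, band(total))  # both arguments yield a total of 9
```

The identical totals for two very different profiles illustrate the limitation just described: the holistic number alone obscures which component of the argument was weak.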
In response to these issues, I turned to an adaptation of Toulmin’s model (1978) used in science education (McNeill & Krajcik, 2009) that combines warrants and backings into one heading: reasoning. This provided more flexibility in the holistic analysis of arguments.

Stage 3: Analyzing Arguments Holistically

The subsequent analysis focused on describing arguments as a finished product. This was done in an attempt to capture variations that resulted in more convincing arguments. Further, this analysis provided better descriptions of students’ written arguments as a whole and a finer-grained analysis of how the contents of the arguments supported the claim. Considering these elements, I used a constant comparative approach (Strauss & Corbin, 1990) to develop codes for the mathematical resources students chose to include in their arguments, and I considered how these resources were used to support their claim.

Morselli and Boero’s (2009) aspects of proof provided a means of structuring the analysis of arguments as a finished product. That is, the types of mathematical resources students chose to use gave insight into students’ epistemic aspect, while the way students elaborated these resources gave insight into students’ teleological aspect. Further, these aspects seemed to have the potential to serve as a diagnostic tool: considering them provided a qualitative description of how students coordinated their knowledge of content, of argumentation in general, and of communication in the construction of their arguments. Areas in which any of the aspects were lacking could serve as specific feedback to a teacher or researcher to help students construct convincing arguments. In the following sections, I share how each aspect was operationalized and what specific mathematical resources or statements were considered for each.

Epistemic. For the epistemic aspect, I considered two areas in particular: the mathematical resources that contributed to the argument and the accuracy of each statement. Statements included everything a student included in their argument, which primarily consisted of the words used to expand on the relevance of the resources. I defined mathematical resources as the different types of statements and/or representations students drew from in an attempt to validate their claim. Capturing mathematical resources provided a way to access student understanding of the content, and thus their epistemic aspect.

Mathematical resources. Drawing on multiple mathematical resources suggested that students had a multi-faceted understanding of the content and could approach the problem from multiple angles. Thus, keeping track of the different resources students used was important in developing the framework for describing students’ epistemic aspect. Table 3 provides a list of the codes used to describe the resources students used and how they were defined. Every instance of a student using a resource in their argument was coded; as a result, many arguments had multiple mathematical resource codes.
Table 3
Codes for Mathematical Resources

Definition: Student states a definition in support of their claim.
Mathematical truth: Student uses a mathematical fact that was learned in class, such as "points on lines are solutions to the equation that generated the line."
Representation: Student either generates or uses an existing representation to support their claim.
Procedure: Student states a procedure either in words or symbolically with no explanation.
Context: Student explicitly draws from the problem context to support their answer; e.g., given a square pool to tile around, students use the fact that a square has 4 equal sides in their argument.
Counter argument: Student provides an alternative as a counterargument to a fictitious student’s answer.

Assessing accuracy. Resources and statements considered correct required precise mathematical language that was true to the particular contextual situation of the problem and to mathematics in general. In other words, the mathematics had to be free from errors to be considered correct. This criterion accounted for students who argued along the lines of "I checked it and it works," because such a statement is neither appropriate to the context nor mathematically correct. Correctness was coded on a three-point scale: 0 was given if the statement was incorrect, 1 for statements that contained minor mathematical errors, and 2 for statements that were correct. Minor mathematical errors included arithmetic errors, such as ½ + ½ + ½ + ½ = 1, or switching variables. In most cases, these minor errors did not affect the validity of the claim or the convincingness of the argument because little inference was required to determine the student’s intention.

Correctness was considered both for the mathematical resource(s) used to support the claim and for how the student explained them. Therefore, it is feasible that a student might incorporate a mathematical resource that was correct (score of 2) but not give an accompanying explanation, or give an incomplete or incorrect explanation. For example, a student could correctly graph a system of equations but not reference it in their argument, only casually reference it ("because I graphed the lines"), or reference it incorrectly ("the graph shows that Marcello was right" when Marcello was not correct). There was no instance of a student using a resource that was inappropriate or unrelated to the task. For the collection of arguments in this study, assessing correctness was not a difficult task, as students tended to use the same representations they had used in collective argumentation during class.

Teleological. The second dimension I coded was teleological. The teleological aspect deals with the choice of resources and how they were elaborated to support the claim. Considering the teleological aspect provided some indication of how students understood arguments in general and the requirements for making mathematical justifications convincing. This is important, especially in light of the widespread adoption of the CCSS-M (2010). I hypothesized that students would need more than mathematical resources or content to write convincing arguments; they would also need to know how to use their knowledge in a way that provided sufficient evidence in support of their claim. In other words, they needed to provide reasoning for their claims.
One way to consider the extent of students’ reasoning was by incorporating the adapted version of Toulmin’s (1978) model used in science education (McNeill & Krajcik, 2009; 2012). Determining the presence, or absence, of grounds and reasoning offered insight into students’ knowledge of arguments and of the requirement that they support their claims. These statements generally involved drawing attention in some manner to the mathematical resources students used.

Elaborating mathematical resources. How students used resources to support their claim was also analyzed. To do so, I considered the mathematical statements that were included as evidence of students making a conscious choice and of their knowledge of arguments. There were times when a student would include a resource, for instance a diagram, but would not explain its significance in their argument. In these cases, the resource did not lend support to the claim but still provided some indication of how the student was making sense of the problem. Because I wanted to capture how students used resources to support their claim, and thereby their teleological aspect, I coded each instance in which a student included an explanation for a mathematical resource, using the codes listed in Table 4.

Table 4
Codes for Explanation of Resources

Linking: Links a resource to the claim, or links two or more resources together.
Unpacking: Explains the significance of a resource.
Connecting: Student uses a generalized statement that connects to a mathematical truth.

Linking statements either made connections between a resource and the claim or between multiple resources. Statements coded as unpacking provided an explanatory element in which the significance of the resource was elaborated; in these cases, only one resource was coded as having been unpacked. Unpacking provided additional evidence that the student understood the claim they generated and could defend it in multiple ways. Often, arguments were coded with multiple resource and explanatory codes. Note that these statements did not have to be mathematically correct in order to count towards the teleological aspect; they provided insight into how students understood the requirements of a convincing argument rather than into their content knowledge. In order to determine how to credit students with the teleological aspect, I coded each argument for the ways students explained their resources and tied them to their claim.

Communicative. The communicative aspect was the final aspect I considered when coding each category of student arguments. For this aspect, I considered the logical flow of the argument and its coherence. In order to do this, I used a rubric adapted from one used by NAEP in Language Arts education (see Table 5).

Table 5
Coherence Rubric (adapted from NAEP)

Score 0: No evidence of cohesion; information is provided but relates only randomly to the claim.
Score 1: There is evidence of statements supporting the claim, but statements are randomly ordered or there is not enough supporting evidence to assess the order.
Score 2: Most of the details are gathered and well-ordered to support the claim; rearranging the evidence would hurt the argument.
Score 3: Details are gathered and flow in a logical sequence and explicitly support the claim.

This rubric was useful in assessing the communicative aspect of students’ written arguments even when they did not write many sentences.
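Before turning to how the three sets of codes were combined, it may help to see the kind of record each coded argument produced. The sketch below is hypothetical: the field names and the sample values are my own, chosen only to show how the epistemic codes (resources paired with 0-2 correctness scores), the teleological codes (explanation types from Table 4), and the communicational code (the 0-3 coherence score) sit alongside one another for a single piece of written work.

```python
# A hypothetical record for one coded argument; field names are illustrative,
# not taken from the study's coding sheets.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CodedArgument:
    student_id: str
    prompt: str                                              # e.g., "EE Day 4"
    claim_correct: bool
    resources: Dict[str, int] = field(default_factory=dict)  # resource code -> correctness (0-2)
    explanations: List[str] = field(default_factory=list)    # "linking", "unpacking", "connecting"
    coherence: int = 0                                        # NAEP-adapted rubric score, 0-3
    labeled: bool = False                                     # did the student label representations?

# An illustrative entry, loosely modeled on the pool-problem prompts in Table 1.
example = CodedArgument(
    student_id="S07",
    prompt="EE Day 4",
    claim_correct=True,
    resources={"representation": 2, "procedure": 2, "context": 2},
    explanations=["linking", "unpacking", "unpacking"],
    coherence=3,
    labeled=True,
)
print(example)
```

Keeping the three aspects in separate fields is what later allows a category assignment to point back to the specific aspect of a student’s argument that was weak.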
In order to strengthen the flow of the argument, students sometimes included labels for their representations. This allowed me to determine easily, with little inference, what they were referring to.

Tying it together. When all the arguments had been coded along these three dimensions, I sorted them into three groups (weak, moderate, and strong) according to the number of correct statements and explanatory elements used. I considered each group in turn and determined a need for two additional groups. The first addition was for arguments that were sorted into the strong category but contained minor errors. The second addition was for arguments that were sorted into the weak category but did not contain any grounds or reasoning, that is, no resources or explanatory elements. I considered these non-arguments because there were no statements offered that would convince another of the validity of the claim. Once I added these two categories and rearranged the arguments, there were patterns in the epistemic, teleological, and communicative aspects of each category. Based on these patterns and extant literature (e.g., Driscoll, 1999; A. J. Stylianides & Stylianides, 2009), I developed a taxonomy of five levels of persuasiveness for written arguments, summarized in Table 6 and elaborated with exemplars in the following section.

Table 6
Summary of Argument Categorization

Strong
- Epistemic: All statements and resources are correct (received score of 2 for correctness)
- Teleological: Included grounds and reasoning
- Communicational: Scored 3 on coherence rubric and labels appropriately

Strong (w/minor errors)
- Epistemic: Statements have minor errors (received score of 1)
- Teleological: Included grounds and reasoning
- Communicational: Scored 3 on coherence rubric and labels appropriately

Moderate
- Epistemic: Statements are generally correct (received mostly 2’s for correctness), but there are not enough statements or resources to support the claim
- Teleological: Included grounds and reasoning that was either incomplete and/or contained errors
- Communicational: Scored 2 on coherence rubric and labels inconsistently

Weak
- Epistemic: At least one statement or resource contains a major error (score of 0)
- Teleological: Included grounds only
- Communicational: Scored 1 on coherence rubric

Non-Argument
- Epistemic: There are no supporting statements or resources
- Teleological: Did not include grounds and reasoning
- Communicational: Scored 0 on coherence rubric

Framework for Written Arguments

The categories are presented in Figure 4 in a hierarchical fashion according to how convincing they are, and each will be described in more detail below. The validity of the codes was tested through two phases of interrater reliability coding, with revisions to the code descriptions occurring between the two phases. A brief elaboration of the interrater reliability is given after the elaboration of the five argument categories.

[Figure 4 arranges the five categories along a continuum of persuasiveness, from non-arguments (least persuasive) through weak and moderate to strong with minor errors and strong (most persuasive).]
Figure 4. Argument categories with varying degrees of convincingness.

Non-Arguments. As Table 6 summarizes, non-arguments were those that contained no grounds or reasoning, and thus no resources or explanatory statements. Arguments in this category typically did not contain enough of an explanation to score higher than a 0 on the coherence rubric. Non-arguments were written frequently at the beginning of the study, before students had a firm understanding of what criteria should be included in their arguments, but they were scattered throughout the two units.
Because these written assignments did not contain grounds or reasoning, they did not provide enough evidence to make any claims about the epistemic, teleological, or communicational aspects of arguments. Non-arguments typically consisted of an elaboration of the claim, as the example in Figure 5 illustrates. Arguments coded as non-arguments demonstrated that the student either did not fully understand the mathematics, did not have the teleological knowledge of how to write a mathematical argument, or both.

Prompt: Someday you might be training new employees. Explain to a new employee how to calculate the number of tiles for ANY length pool.
Student response: You would add 1 to the length of the pool then multiply by 4. And you will get the number of tiles you need.
Figure 5. Example of a non-argument (student’s work recreated).

This example of student work contains only a claim, "you would add 1 to the length of the pool then multiply by 4," but no supporting evidence. In this non-argument, the student does not incorporate any mathematical resources or expand on their claim in any way. The lack of evidence in support of the claim does not provide any confirmation that the student possesses teleological knowledge. Yet the inclusion of a correct claim suggests this student has some content knowledge.

Weak arguments. Arguments coded as weak used minimal resources with no clear links to the claim, and every one of them included irrelevant or incorrect statements. Almost all of the arguments coded as weak contained one or more statements that received a score of 0 on correctness, although for the most part (77%) students’ claims were correct. Despite this, written assignments coded as weak were considered to make an attempt at an argument because they always included grounds and communicated them in a reasonably coherent manner. Weak arguments tended to receive a score of 1 on the coherence rubric because students simply did not provide enough confirming evidence for the claim. More detail follows.

Example. Arguments coded as weak demonstrated that the student did not fully understand the mathematics because of the errors present in the statements of the argument. The work presented in Figure 6 provides an illustration of a weak argument.

Prompt (recreated): The band students are selling cookies and candy in a fundraiser. Cookies cost $5 per box and candy costs $2 per box. (x: # boxes of cookies; y: # boxes of candy)
a. Write an equation to find the number of each type of treat they can sell to raise $500. [Student wrote: 5x + 2y = $500]
b. Use the x- and y-intercepts to graph the equation. [Student graphed the equation.]
c. Ali tells the teacher they will need to sell 20 boxes of cookies and 100 boxes of candy. Jack says that if they sell 20 boxes of cookies, they will need to sell 200 boxes of candy.
Claim: Who is correct? [Student wrote: Jack]
Evidence: Show AND explain how to use the equation and graph as evidence.
Student’s equation work: 20 + 100 = 120/500 = 4 and 20 + 200 = 180/500 = 2. "Because Ali’s number equals up to being $4 for candy and Jack’s equal up to being $2. Jack is right."
Figure 6. Example of a weak argument because of mathematical errors (student work recreated).

This argument was coded as weak because of the incorrect statements made in regard to the equations used. Although this student did not demonstrate an understanding of how to use the equations to make sense of the claim, she did graph the equation correctly.
Therefore, this argument was coded as including a correct representation, even though the student did not refer to it in her attempt to validate the claim. For these reasons, it would seem that although the student understood how to graph a linear equation, she did not provide evidence that she understood how the graph related to the claim. Because of these errors, the flow of her argument was halted, making it difficult to understand.

Arguments coded as weak could be described as an ineffective attempt at a strong argument. In these cases, there was little evidence that students understood the mathematics, and regardless of what they knew about argumentation, they simply did not provide enough evidence to produce a convincing argument. Their resources may have been appropriate, but they were unable to explain why a resource supported the claim because they did not unpack it sufficiently, or they attempted to unpack it with an incorrect statement, as the example in Figure 6 illustrates.

Epistemic. As mentioned, all weak arguments contained at least one incorrect statement. The incorrect statements tended to occur either with the representations students used or with the explanation of a representation. As with moderate arguments, few resources were used. Although most arguments categorized as weak had only one resource, there were a few arguments in this category that had two or three. Nevertheless, for those arguments that had more than one resource, at least one of them was incorrect. Students who wrote weak arguments demonstrated gaps in their understanding of the content because of the few, and incorrect, resources they used.

Teleological. Weak arguments differed from non-arguments because they included grounds. In this way, there was some conscious attempt at justifying the claim. This attempt, perhaps constrained by the students’ weak understanding of the content, did not support the claim in a convincing way. This was primarily due to the lack of explanatory statements students made. Typically, students attempted to explain a resource only once in weak arguments, if at all. As with moderate arguments, it is difficult to tell whether students who wrote insufficient arguments did so because they lacked insight into the requirements of arguments or because their content knowledge inhibited a stronger argument.

Communicational. All of the arguments categorized as weak received a score of 1 on the coherence rubric. As with moderate arguments, students did not use many words in weak arguments. Coupled with the mathematical errors, the lack of words inhibited the logical flow and readability of these arguments. This, along with the lack of supporting evidence in favor of the claim, made the communicational aspect of weak arguments low, forcing the reader to infer what the student was trying to communicate.

Moderate arguments. Arguments coded as moderate were considered a modest attempt at making a strong argument, but they fell short in some aspect. Arguments in this category either lacked appropriate or sufficient resources to support the claim, or their connections were incomplete or not clearly linked to the claim. These arguments were incomplete but on their way to becoming strong: they included elements of arguments coded as strong, but not enough of them to be considered convincing.
There were few instances (17%) of students making mathematical errors in their statements or resources, and no instances of incorrect claims.

Example. Arguments coded as moderate demonstrated that the student either did not fully understand the mathematics, did not have the teleological knowledge of how to write a mathematical argument, or both. The argument in Figure 7 provides an illustration of a moderate argument.

Prompt: Casey claimed that there were 15 true/false and 10 multiple choice questions on the test. Is Casey correct? Explain why or why not using evidence from the equations and from the graph.
Student response: Casey is incorrect. Casey did all of the work before finding the interception correct. The interception point is: 5 multiple choice, 20 true/false.
[The student’s response referred to the graph provided with the problem, with axes marked at 20 and 40.]
Figure 7. Example of a moderate argument (student’s work recreated in part).

This example shows the student drawing on two resources. First, he uses a counterargument by correcting the fictitious student, Casey. Second, the counterargument was determined by finding the point of intersection on the graph that was provided. As such, this student was coded as using two resources (counterargument, representation-graph). Even so, there is only one link to the claim, albeit a weak one, when the student writes, "the interception point is: 5 mc and 20 t/f." This statement is an attempt to support his claim that "Casey is incorrect," but the student does not provide explicit connections between his claim and his counterargument. Had the student provided more explanation regarding why his counterargument was correct, especially stating the significance of the point of intersection or using the equations to validate his counterargument, this argument would likely have been scored as strong. As it is, it is incomplete, but it suggests a developing understanding of solutions to systems of equations.

Epistemic. Students who wrote moderate arguments included mathematically correct statements, although there were errors in a few cases (n = 4 of 24). In general, arguments in this category simply did not include enough statements to be convincing. In fact, arguments falling into this category typically included two or three resources but explained only one of them, as the example in Figure 7 demonstrates. As such, students who wrote moderate arguments usually demonstrated a developing understanding of the content.

Teleological. All moderate arguments contained grounds, and in most cases there was an attempt at reasoning. Even though these statements were generally correct, the arguments as a whole were incomplete and unconvincing. In most cases, had the student explained an additional resource, these arguments might have been upgraded to strong. It is difficult to determine from moderate arguments whether or not students possessed the teleological aspect of arguments. The fact that students included reasoning along with their grounds would suggest that they had a general idea of what it takes to write a convincing argument. Perhaps due to their developing understanding of the content, evidenced by the epistemic aspect, the teleological aspect was constrained.

Communicational. Students who wrote moderate arguments generally structured their argument well and typically scored a 2 on the coherence rubric. Yet students who wrote moderate arguments used fewer words, and thus fewer linking statements, than students in the strong categories.
The lack of supporting evidence affected the coherence of these arguments, resulting in scores of 2. In general, students labeled some or all of their resources but did not make explicit connections between resources and the claim. This tended to disrupt the logical flow, requiring some inference about what the student meant when referring to contextual items such as a point or a corner, especially when a graph included multiple points. In summary, moderate arguments can colloquially be described as being on the right track. But the lack of an important resource, or of the ability to unpack a resource well enough to support the claim, places these arguments in a category lower than those considered strong. In short, these arguments feel incomplete due to the lack of either additional resources or an explanation regarding the significance of the resources.

Strong arguments with minor errors. Strong arguments with minor errors are essentially a subset of strong arguments, as reflected in Figure 4. The close placement of the two categories in Figure 4 was purposeful and reflects this relationship. The only difference between the two categories is inherent in the name: these arguments contained small mathematical errors. Therefore, the only difference in the three aspects of argumentation is in the epistemic aspect. The teleological and communicational aspects are identical to strong arguments and will be discussed shortly.

Example. Although there were relatively few strong arguments with minor errors, they provide evidence of students’ epistemological, teleological, and communicational knowledge of arguments. The following example (see Figure 8) was a response to the same writing prompt as the argument shown in Figure 9 and illustrates the similarities between arguments in this category and those categorized as strong.

Rectangular Pools with Triangle Corners (POOL ASSOCIATES)
EQUATION: T = 2W + 2L + 2, where T = Tiles, 2W = Width x 2, 2L = Length x 2, and +2 = Total of each corner.
[The student’s diagram shows the rectangular pool with the length and both widths labeled.]
SUMMARY: Add Width and Length. And then add Width and Length again. So basically multiply Length by 2 and Multiply Width by 2. Then add them together. So Length x 2 + Width x 2. Then add 0.5 four times which equal 1. Then that is how many tiles you will need.
Figure 8. Example of an argument coded as strong with minor errors (student’s work recreated).

This student draws on many resources, including procedure, representation, and context, that are all mathematically correct and appropriate to the task. In unpacking the procedure, however, the student makes a minor mathematical mistake when he says, "then add 0.5 four times which equals 1." This mistake was considered minor, especially in light of the fact that he provided the correct expression in other parts of the argument. Besides this minor error, the student presents a strong argument that fully supports the claim and is presented in a logical, coherent way.

Epistemic. As mentioned, the only difference between arguments that fell in this category and strong arguments was minor errors. Examples of minor errors include arithmetic errors, such as multiplying or adding incorrectly, or confusing variables with each other. Again, arguments receiving this code are very close to being coded as strong, but it was necessary to differentiate the two categories because these arguments lack the mathematical precision that strong arguments require.
Because these errors require some inference in interpreting what the student meant to say, they made up a separate category. Further, one potential benefit of students generating mathematical arguments is catching such errors and ensuring their reasoning is sound. Because students did not catch the errors, these arguments belong in a different category than those categorized as strong. Accordingly, although these students understood what is required to generate a sound mathematical argument from both a teleological and a mathematical perspective, the lack of precise mathematical statements places their arguments in a category apart from those categorized as strong.

Strong arguments. A strong argument is convincing and correct, and it can "stand alone." That is, no inference is needed to understand what the student is articulating. Strong arguments provide evidence that the student understood all aspects of argumentation. In the following sections, I first provide an example and then elaborate on these aspects.

Example. The following argument (see Figure 9) was given in response to a prompt that was an adaptation of the pool problem and is representative of strong arguments. The adaptation presented a rectangular pool with triangular corners.

Rectangular Pools with Triangle Corners
Equation: W + W + L + L + 2 (W: width; L: length; +2: four 1/2 corners added together)
[The student’s diagram shows the rectangular pool with each width and length side labeled and each corner labeled 0.5.]
Student’s explanation: You would add the width of the pool plus the width, because there are 2 sides that are widths. Then you add the length of the pool plus the length, because there are 2 sides that are lengths. Then add 2, because on each corner is ½ and if you add ½ plus ½ = 1 and you do that for both sides and that’s why you would add 2.
Figure 9. Example of a strong argument (student work recreated).

In this example, the student draws on many resources, including representation, procedure, and context, that are all mathematically correct and appropriate to the task. The first statement of the argument, "you would add the width of the pool plus the width, because there are 2 sides that are widths…," is an example of a linking statement. In this case, she is linking the procedure, what should be done, with the context, a rectangular pool. Through this linking, she provides evidence for her claim, but she also clearly articulates why the process of adding two widths is appropriate and correct. Accordingly, this linking statement provides insight into the connections she has made between the process and the context. Additionally, there are two instances of unpacking, both involving representations. In the first, the student unpacks the claim, w + w + l + l + 2, by labeling each part; likewise, the second unpacking code came from labeling the diagram. Further, the argument is presented in a logical sequence and explicitly supports the claim.

Epistemic. With regard to the epistemic aspect, all statements in strong arguments are correct. That is, students provide multiple, correct resources and accurately explain the significance of each resource. This is important and clearly separated strong arguments from the other categories. Students who produced strong arguments provided evidence that they could support their claim from multiple angles. The inclusion of multiple resources, with the elaboration of each, suggested a firm understanding of the content.

Teleological. Arguments in the strong and strong with minor errors categories included both grounds and reasoning.
The inclusion of both suggests the student possessed a teleological understanding of arguments and consciously chose multiple lines of supporting evidence to validate their claim, thus leaving nothing to inference. That is, the student provided evidence that they understood both what is required to be convincing and the mathematics. Further, their choice of mathematical resources was appropriate, and they unpacked the resources in a way that fully supported the claim through the inclusion of multiple explanatory statements, as illustrated in Figure 9. In this way, there is some overlap between the epistemic and teleological aspects that provides the appropriate support for the claim.

Communicational. Strong (and strong with minor errors) arguments communicated support for their claim in a logical way that was easy to read. All arguments categorized as strong scored a 3 on the coherence rubric. One notable characteristic of arguments in this category was that students labeled all of their resources to enhance the communicability of their argument. This aided the reader in understanding what the student was attempting to communicate and left little inference between the representations and the explanatory statements.

In summary, strong arguments were mathematically correct and contained appropriate resources that were explained, clearly linked, and directly supportive of the claim. Further, the argument was presented in a coherent and convincing manner, leaving nothing to inference and using explanatory elements that explicitly described how the resources validated the claim. Strong arguments indicated a teleological understanding as well as a firm understanding of the content.

Summary. The five categorizations of arguments for this study are summarized in Table 7. The table is organized according to Toulmin’s model (1978) and the three aspects of rational behavior (Boero et al., 2010), much like Table 6, but it provides more specific information regarding the number of resources and explanatory statements found in students’ written arguments in this study. This summary provides an at-a-glance tool to quickly categorize students’ arguments into one of the five levels, although future research is necessary to determine how generalizable it is. In general, each aspect starts off strong, providing ample evidence of students’ content knowledge and knowledge of how to provide a convincing argument. Going down the columns, however, the strength of each aspect tapers off until it is almost nonexistent.

Inter-rater Reliability. Once the criteria for the argument categories were generated, the codes were determined, and my first round of coding was completed, I tested for reliability by asking a colleague with a strong background in mathematics to code a sample of thirteen arguments. I decided on 13 arguments by taking 10% of each pre-determined category in each unit. The thirteen arguments were chosen across the study (7 from EE and 6 from SoE) so that a variety of arguments from each category were represented. At that time, I had separated empirical arguments from weak arguments and considered them a separate code, originally having five separate codes for the arguments. In the first round of reliability testing, I asked my colleague to code each of the 13 arguments and provide a rationale for her coding. Once her coding was complete, I compared her codes to mine and considered her rationale for each answer. In many cases, she wavered between two scores.
At this time, we agreed on less than half of the scores. After discussing our scores, I made revisions to both the criteria and the codes to better clarify the intent of each code. After these revisions, I asked a second colleague to code the same thirteen arguments. With the second round of coding, we reached agreement on the categorization of each sample for all but one argument, yielding 92% agreement. After discussion, we agreed on all arguments, reaching 100% agreement. At this point, I considered the criteria and codes stable, and no further revisions were made. With these revised criteria and codes, I went back and reconsidered all 90 arguments, making changes according to the revisions, and compared the categories of arguments between units.

Table 7
Results of Argument Categorization Developed for EE and SoE

Strong
- Epistemic: All statements are correct; at least 3 resources
- Teleological: Contains grounds and reasoning; 2-3 explanatory statements
- Communicational: Scored a 3 on coherence rubric; labels appropriately

Strong (but contains minor errors)
- Epistemic: Minor arithmetic errors in statements; at least 3 resources
- Teleological: Contains grounds and reasoning; 2-3 explanatory statements
- Communicational: Scored a 3 on coherence rubric; labels appropriately

Moderate
- Epistemic: Statements are generally correct; at least 2 resources
- Teleological: Contains grounds and usually weak reasoning; 1-2 explanatory statements
- Communicational: Scored a 2 on coherence rubric; inconsistent labeling

Weak
- Epistemic: Statements contain at least 1 conceptual error; typically only 1 correct resource
- Teleological: Contains grounds only; 0-1 explanatory statements
- Communicational: Scored a 1 on coherence rubric; no labeling

Non-Argument
- Epistemic: Statements contain major conceptual errors and/or no resources
- Teleological: Does not contain grounds or reasoning; no explanatory statements
- Communicational: Scored a 0 on coherence rubric; no labeling

Discussion

The goal of this study was to develop a framework that could be used to analyze students’ written arguments. In doing so, there were two objectives in the design of this hierarchical model. First, it needed to capture variations in written arguments in order to categorize them according to the convincingness of the argument as a finished product. Second, the categorization needed to communicate which aspects of argumentation students demonstrated, as evidenced by their use of the mathematical content, their understanding of arguments in general, and how they were able to communicate the validity of the claim in a clear and coherent manner. These two goals also had to comply with Driscoll’s (1999) definition of and criteria for arguments. In the following sections, I address each goal separately and then consider them together.

Capturing Variations in Students’ Written Arguments

The first objective in creating this framework was to sufficiently capture variations in students’ written arguments. These variations were a product of how convincing the argument was; in essence, how well students supported their claim. To capture them, the arguments had to be considered both holistically and at a finer-grained level. The holistic analysis aligned with Driscoll’s definition of argument by considering the argument as a collective "string of thoughts that convinces" (Driscoll, 1999, p. 88). In doing so, arguments are considered as products in much the same way as proofs are (Boero et al., 2010). Considering arguments as a product has received little attention in mathematics education thus far (Morselli & Boero, 2009).
More often, arguments have been examined as a process, as they are constructed in whole-class discussions, using an existing framework for argumentation (Toulmin et al., 1978). While this approach has been informative about the different ways teachers have elicited arguments from students (e.g., Forman et al., 1998; Yackel, 2002), it has focused less on examining written arguments as a whole. Additionally, Toulmin’s model does not provide a categorization of these arguments as finished products, although he does give some criteria for assessing the strength of each component.

Capturing variations in students’ written arguments is important, however, especially at a time when the mathematics education community has agreed that constructing viable arguments is an important skill through which students demonstrate their mathematical proficiency. A framework that categorizes students’ arguments provides a common standard for what is necessary for an argument to be viable, and it can support teachers who are assisting their students in developing the skills required to construct viable arguments. Further, the framework provides a tool to mathematics educators and researchers who are interested in the types of teaching moves used to help students develop those skills. In short, if we expect students to construct viable arguments, then support must be provided to teachers that communicates the requirements of viable, convincing arguments. The framework presented here is a first attempt at doing that.

Diagnosing Difficulties with Arguments

Extending the types of knowledge for proof to argument and incorporating them into the framework was an effort to recognize what students need to know in order to write convincing arguments and to diagnose what aspects may have been lacking. Writing is a complex process requiring the writer to draw from many areas (Deatline-Buchman & Jitendra, 2006; Doyle, 1983; Knudson, 1992; Morgan, 1998). In the case of writing mathematical arguments, students must coordinate their content knowledge with the goal of writing an argument in a way that will cogently address the claim and convince the intended audience.

The framework provided by Boero et al. (2010) was useful for this goal and was compatible with Toulmin’s (1978) framework and Driscoll’s (1999) criteria. Namely, the types of resources students draw from, and how those resources connect to the claim, provide some indication of students’ content knowledge. Further, the presence of grounds and reasoning is useful in establishing the teleological aspect of students’ understanding and in leaving nothing to inference. That is, their presence suggests students understood the requirement to provide mathematical reasoning in support of their claim. This reasoning also provided insight into students’ epistemic aspect through the accuracy of the different resources used and how they were explained. These are necessary aspects of the final product, and their presence in an argument provides useful information to teachers and researchers alike. As such, the framework I presented can serve as a diagnostic tool to determine what aspect of argumentation was lacking.

Yet there are limitations to this model. For example, there may be instances in which it is difficult to determine what aspect of a student’s knowledge is lacking in less convincing arguments.
For example, students who do not have strong content knowledge may be limited in providing evidence of possessing teleological or communicational knowledge. It may be the case that the teleological and communicational aspects will only be evidenced when students’ content knowledge is strong enough for them to select resources that provide the basis of grounds and reasoning. Because the possession of these aspects is essential to producing viable arguments, they are important to evaluate. Likewise, there is little communicational aspect to assess when students’ content knowledge cannot support their stated claim. As was seen in Figure 6, the lack of the epistemic aspect affected the communicational aspect because students simply did not include enough information to provide coherent support for the claim. This suggests that if students do not have a flexible understanding of the content, they will be unlikely to produce convincing arguments. It does not follow, however, that such students have weak teleological or communicational aspects. The taxonomy provided in this study may address this issue by tracking individual students’ arguments across time; this is an area where future research may be necessary. Even so, the use of this framework is one way that teachers can identify the sources from which limitations of students’ written arguments might stem.

The Utility of the Framework

The combination of Toulmin’s model for argumentation (1978) with the three aspects of proof (Boero et al., 2010) and Driscoll’s (1999) criteria provides a uniform means for teachers and researchers to assess students’ written arguments and diagnose areas of possible concern. This framework has the potential to track argumentative writing skills across students and classes and to provide an indication of growth over time. For example, one student’s arguments in EE were coded, in order: weak, weak, strong, strong, moderate. This might indicate that as the unit progressed, this student began to develop the teleological understanding necessary to write strong and moderate arguments. Had the study gone longer, a sudden decline in her argument codes might have indicated confusion with the content that prevented her from writing stronger arguments, because the presence of multiple strong codes suggests that this student understood the goal of arguments and could communicate effectively.

Further, because this framework aligns with the definition of argumentation Driscoll (1999) put forth, meets the criteria for argumentation he outlined, and incorporates Toulmin’s model (1978), it is consistent with existing literature on argumentation. The first criterion, that the argument leaves nothing to inference, is addressed by the epistemic and communicational aspects. With these aspects, students’ arguments must be coherent and possess a logical flow that draws from multiple resources so that the reader does not have to interpret what the writer ought to have said. Further, the criterion that the argument be tied to the context is captured in the way the argument was classified as being situated in a contextual problem, much like the sample provided in Figure 2. These are the types of problems students will likely face on high-stakes tests that ask them to demonstrate their mathematical proficiency through the construction of a viable argument. Finally, with the inclusion of the appropriate resources, students’ arguments will "stand up to any challenge" (p. 106).
Limitations and Future Research

With any study there are limitations, and this study is no exception. One limitation of this study concerns the amount and type of data that informed the framework. In particular, the data come from only one small group of students. Consequently, they likely do not capture the variety that would be introduced with larger numbers of students. Yet the types of descriptions provided for each category in the framework may be expanded with the addition of more students’ written arguments. Additionally, the data are concentrated in only one course, algebra. This may limit the types of classes to which this framework can extend, although I took care to make sure that the types of mathematical resources used to code students’ work were general enough to extend to other strands of mathematics, such as number sense, data, and probability. This is an area where future research could inform the framework for argumentation.

Finally, this class used the Connected Mathematics Project (Lappan et al., 2006b) curriculum, so students were more accustomed to using representations in their daily work, and to using these representations to make mathematical connections, especially in algebra. Students who are not afforded the opportunity to work from reform-oriented curricula may not share the same flexibility with the use of representations, and especially the flexibility in moving between representations. Future research might compare the types of arguments students produce in multiple classrooms using a variety of textbooks.

Conclusion

In 2014, students will begin to be tested on the CCSS-M and will be expected to construct viable arguments. As such, constructing arguments will become more of a focus in mathematics classrooms. Using a taxonomy like the one presented here provides a means for teachers and researchers to track students’ arguments over time. Further, on an individual level, the taxonomy allows teachers to see growth in students over time and to specifically address aspects of students’ arguments in need of attention. In this way, the taxonomy serves as a diagnostic tool for teachers as well. Through the construction of arguments, students are uniquely positioned to demonstrate their content knowledge while also showing their knowledge of arguments in general. This requires students to process and display their knowledge differently than they have been required to in the past. With the endorsement of arguments as a mathematical practice, the mathematical community is drawing attention to this important skill. As such, this framework is timely and useful to both teachers and researchers in uniformly assessing and diagnosing students’ written work.

CHAPTER 3
SUPPORTING STUDENTS’ WRITTEN ARGUMENTS THROUGH TASK FEATURES AND SET UP

Understanding mathematics requires the ability to reason about it (e.g., Ball & Bass, 2003; Lampert, 1989; 1990; A. J. Stylianides, 2007; Yackel & Hanna, 2003). As such, student reasoning, and how teachers facilitate that reasoning, has received significant attention from the mathematics education community and from policy makers. For example, the National Council of Teachers of Mathematics (NCTM) has promoted reasoning and proof as one of five process standards to be emphasized throughout students’ K-12 mathematical experience (NCTM, 2000).
Similarly, the National Research Council (NRC) has included "adaptive reasoning" as one of the five strands of mathematical proficiency (Kilpatrick, Swafford, & Findell, 2001). Even more recently, the Common Core State Standards ([CCSS], 2010) established argumentation, a specific form of reasoning with the goal of persuading another, as one of eight mathematical practices.

Asserting that "students who make convincing arguments…show a higher level of algebraic thinking" (1999, p. 88), Driscoll defined an argument as "a coherent string of thoughts that convinces a fellow student of a mathematical result" (p. 88). Inherent in this definition is that an argument includes two different types of statements: a claim and a justification for that claim. Yet the difficulty students have producing justifications for their claims is widely reported in the mathematics education literature (e.g., Balacheff, 1988; Harel & Sowder, 1998; A. J. Stylianides, 2007) and in the literature on argumentative writing in other disciplines, including language arts and science (e.g., Deatline-Buchman & Jitendra, 2006; McCann, 1989; McNeill & Krajcik, 2009).

Although argumentation has received increased attention in the mathematics education literature (e.g., Forman et al., 1998; Krummheuer, 1995; McClain, 2009; Mueller, 2009), this attention is almost entirely focused on students’ co-construction of verbal arguments in classroom settings under the guidance of a teacher and the scrutiny of the classroom community. Yet, as important as episodes of argumentation have been to students’ developing understanding of mathematics (Speiser, 2002), little consideration has been given to how students perform on written, non-proof arguments or to what teachers do to support their performance. This gap in the literature is an important consideration, as students’ written work is often a primary source of formative and summative evaluation of their learning (Morgan, 1998).

Constructing a written argument is different in many respects from participating in an oral one. On one hand, whole-class arguments provide immediate feedback to students regarding the persuasiveness of their argument. For example, questions or counterexamples may challenge a student’s argument, but these challenges allow the student the opportunity to provide more evidence in support of their claim. This type of feedback is usually not available during the construction of written arguments. Students are left to their own devices to determine whether they have provided enough justification to support their claim. Similarly, Albert (2000) drew attention to the solitary act of writing and the fact that students typically write without an audience in mind other than their teacher (Morgan, 1998). Driscoll (1999) specified the intended audience as a "fellow classmate." Yet this audience should certainly be expanded to include teachers and researchers, as they are likely candidates to provide appropriate feedback and instruction that support students in constructing convincing arguments.

The purpose of this article is to present findings of a study focused on middle school students’ production of written arguments and on how different characteristics of writing prompts and variations in the introduction of the writing tasks influenced students’ written arguments. In particular, I examined the task features and task set up one 8th grade teacher of a low-tracked algebra class employed to support her students in generating convincing, written arguments.
Results from this study suggest that while students attended to key features of the tasks, the attention to the features did not seem to contribute to the persuasiveness of their written arguments. Yet, increases in the amount of time devoted to task set up did appear to impact the persuasiveness of students' written arguments in positive ways. Specifically, through layers of telling, modeling, and peer-evaluation activities, one class of low-performing 8th grade students moved from explaining how to justifying why their mathematical claims were valid. This is noteworthy given the increased attention to arguments as an essential mathematical practice to be incorporated into the mathematics classroom.

Conceptual Framework

The conceptual underpinnings of this study rest on Doyle's (1983) work around academic tasks and a task framework that was developed as a result of this work (Henningsen & Stein, 1997; Stein et al., 1996). In the following sections, I discuss each as they relate to using writing prompts designed with the goal of eliciting students' convincing written arguments.

Task Framework and Argumentation

Doyle (1983; 1984) defined academic tasks as consisting of "three elements: (a) a goal or product; (b) the operations that are used to generate the product, such as memorizing a list of words…; and (c) the resources or 'givens' available to students while they generate the product" (1983, p. 161). This definition is all-encompassing, considering both the final product as well as what is involved in constructing the final product using the resources at students' disposal. Doyle clarified that resources can include anything from students' prior and developing content knowledge to various instructional moves teachers use to inform students about the process or product. In this analysis, the resources under consideration were the instructional moves that specifically addressed students' written arguments, and they will be referenced as such. For example, one instructional move teachers might offer students in mathematics classes is modeling the process of constructing an argument.

Stemming from Doyle's work on academic tasks, Stein et al. (1996) developed a task framework that considered different stages of tasks as they are implemented in the classroom. These phases capture the task from its written form through its completion, considering classroom processes that support students as they engage in tasks and factors that may influence the resulting product (see Figure 10). This work extends Doyle's conceptualization of task to consider how tasks may transform as teachers set up the tasks and students begin working on them in mathematics classrooms. Considerations of tasks are important as they inform students of what it means to engage in mathematical activity (Stein et al., 1996) and shape the content students learn and how they learn to process it (Doyle, 1983).

[Figure 10 depicts the progression Task (as written) → Task Set Up (by teacher) → Task Implementation (by students) → Student Learning. Factors influencing set up include the teacher's goals and the teacher's knowledge of content and students; factors influencing implementation include classroom goals, task conditions, and students' learning dispositions.]
Figure 10. The mathematical tasks framework (adapted from Henningsen & Stein, 1997, p. 528). (Note: the shaded areas, mathematical tasks and task set up, are the focus of this paper.)

Mathematical tasks and ambiguity. Mathematical tasks are the tasks as written, taken either from the curriculum or created by the teacher.
Tasks that engage students in writing mathematical arguments, a task considered to be doing mathematics, was the subject of this study. These tasks ask students to construct a response with no clearly defined process. Because of this, one feature of a high-level task is their level of ambiguity. Doyle (1984) defined ambiguity as “the extent to which a precise and predictable path for generating a product can be defined” (p. 131). Ambiguities can be inherent in the written prompt. For example, prompts that are written with vague terminology, like explain, only implicitly ask students to support their response with a justification; that is, a mathematical rationale for their response (Thompson, Senk, & Johnson, 2012). One contributing issue to students not providing an adequate justification may lie with their interpreting explain to mean recounting their steps rather than providing a conceptual rationale (Staples & Bartlo, 2010). Another ambiguity arises, when students do not know what 61 counts as a mathematical justification. As with proof, another example of a doing mathematics task, many ambiguities stem from students’ difficulty justifying their claims, and in particular from not knowing which 'rule' to draw from that "allows [them] to make a generalization" (McCann, 1989, p. 71). This requires that students understand what mathematical statements are appropriate to draw from and what mathematical truths can be used to convince a ‘critic’. Yet another ambiguity involves arguments as a finished product. Engaging in any form of an argument, whether oral or written, requires students to not only understand the content about which they argue, but also to be knowledgeable about arguments in general. Krajcik (2009) found this to be the case when working with middle school science students’ written explanations. When students are asked to articulate a scientific argument to explain a phenomenon, what they write is influenced by both their application of scientific principles to the phenomenon and what they think they need to include in their written explanation. (p. 422). Students need to understand the goal of an argument is to convince another that their claim is valid, and know how to do so. Together, this requires students understand both the process and product aspects of constructing an argument (Boero et al., 2010). The process side involves making one’s thinking evident to oneself in order to communicate it to others. This should include a mathematically acceptable rationale for why a claim is valid rather than only the process of arriving at a solution (Balacheff, 1991), and logically link these rationales together. Above all, the process of arguing must keep in mind the final product, the argument. The process and product aspect of writing mathematical arguments is well-aligned with Doyle’s (1983; 1984) conceptualization of 62 academic tasks, but for students who are unfamiliar with either, the task may seem vague. These uncertainties introduce ambiguity about the task (Doyle & Carter, 1984) that may require teacher intervention or the availability of more resources to assist students in completing the task. One opportunity for teachers to address ambiguities is during the task set up. Task set up. Task set up is initiated by the teacher, and encompasses the formal introduction of the task. 
Task introductions are as varied as teaching styles and can be as simple as distributing a task for students to complete to as elaborate as a lengthy discussions of the task that includes specific expectations regarding the final product (Stein et al., 1996, p. 460). The way in which teachers introduce a task has important implications for the final product. “When preceded with the appropriate set up, students were found to actually use multiple-solution strategies and multiple representations and to produce explanations and mathematical justifications in the majority of cases” (Stein & Smith, 1998, p. 483). In other words, with appropriate set up, students engaged in doing mathematics tasks. Thus, how teachers set up tasks has important implications for what students produce. Henningsen and Stein (1997) identified several factors that affected task set up that were included in Figure 10. Among these factors are the goals held for the activity and teacher knowledge of both the content and their students. The goals for an activity are important, as the accumulation of tasks that students engage in throughout their mathematical experiences informs them of what it means to know and do mathematics (Doyle & Carter, 1984; Henningsen & Stein, 1997; Stein et al., 1996). Therefore, if the teacher’s goals are for students to reason about claims and justify them mathematically, this can be communicated during the task set up. “Such an emphasis can be maintained if explicit connections between the mathematical ideas and the activities in which students engage are frequently drawn” (Henningsen & Stein, 1997, p. 527). 63 Thus, task set up has the potential to link students’ prior knowledge with the goals teachers have for their students’ future knowledge through the teacher’s instructional moves. Similarly, teacher’s knowledge of their students is important, especially when they are about to engage in a high-level task. Considering students’ prior knowledge and skills is important for choosing appropriate tasks that are within students’ ability level but that will push them to grow in their mathematical understanding. Research on performance differences has indicated that novices, young children, and low ability students often fail to develop the strategies and higher order executive routines that enable them to understand tasks or construct goal structures necessary to accomplish tasks without strong guidance. (Doyle, 1983, p. 175) Task set up provides the opportunity to include instructional moves that can address students’ prior knowledge and directly offer strategies that will assist students in completing a task without reducing the cognitive demand. This is an important consideration for this study, as the participants were drawn from a low-tracked algebra classroom. Yet, other studies caution that strong guidance, especially in the form of direct teaching, only has short-term effects (Deatline-Buchman & Jitendra, 2006; Doyle, 1983; McCann, 1989). Further, strong guidance during task set up only addresses the task at hand without equipping students with the necessary skills to successfully perform on future tasks. Students who are identified as being low performers may encounter trouble transferring the resources received during task set up to other tasks that may not resemble previously encountered tasks (Doyle, 1983). Thus, as always, knowing how much to elaborate on a task during the set up phase is a delicate balance for teachers. 
This study sought to identify what features of task set up seemed to influence the persuasiveness of students' written arguments.

Argumentation and Toulmin's model. As mentioned, in mathematics education, tasks that involve higher-level demands are referred to as doing mathematics tasks (Stein & Smith, 1998), and are identified through the cognitive demands they place on students. In the case of tasks classified as doing mathematics, students are required, among other things, "to engage in complex and nonalgorithmic thinking, explore the nature of mathematical concepts, processes, or relationships, and access relevant knowledge and experiences and make appropriate use of them working through the task" (p. 270). As such, the product of high-level tasks can vary, as is certainly the case with any form of writing. One such task is engaging students in argumentation.

Although there is extensive literature on argumentation in mathematics classrooms (e.g., Forman et al., 1998; Giannakoulias et al., 2010; Yackel, 2002), it is entirely focused on arguments that occur through whole-class discussions. This literature uses Toulmin's (1978) framework to identify components of arguments in mathematics education. Four parts of the model—claims, grounds, warrants, and backings—are primary to argumentation. Figure 11 provides an illustration of these components of argumentation; the arrows indicate how one element supports another element in the argumentation process.

[Figure 11 shows grounds supporting a claim, with warrants supporting that link and backings supporting the warrants.]
Figure 11. Toulmin's Model of Argumentation

Every argument must contain a claim, a statement put forth with which others can agree or disagree. Grounds are typically the first statement in support of a claim. In the mathematics classroom, grounds are not "facts in the empirical sense, but statements that we take as given under the assumption that they will not be challenged" (Forman et al., 1998, p. 532). Grounds can include procedures, examples, or other empirically-based evidence. Thus, grounds provide some support for the claim, but may not provide a strong enough rationale to convince others of the validity of the claim. To strengthen the connection between the grounds and claim, a warrant is provided. Warrants are given in support of the grounds (Yackel, 2002), and are often expressed as a "second" explanation, providing argumentative support (Whitenack & Knipping, 2002). Although warrants are also evidence in support of the claim, they play a more indirect role through their linking function. That is, they logically link the grounds to the claim. Finally, backings are generally agreed-upon facts that support the warrant. Backings lend credibility to the warrants, solidifying their role in the argument. Backings put to rest any lingering suspicion about the validity of the claim by giving authority to it (Yackel, 2002).

Argumentation in classrooms. The literature in mathematics education using this model provides multiple illustrations of arguments that unfold through whole-class discussions. Various studies in mathematics education demonstrate that through questioning and support moves (Forman et al., 1998; Mueller, 2009; Whitenack & Knipping, 2002; Yackel, 2002), teachers were able to guide students in the construction of arguments. Yet, one may wonder if participating students understood that they had constructed an argument by providing justifications to support a claim.
Further, each of the examples of whole-class argumentation shared in this body of literature consisted of all four components of Toulmin's (1978) model, and for the most part, these examples are included because a knowledgeable other, usually the teacher, is pressing students to supply missing components. Because of this, one might question whether an argument exists if one of the parts is missing. Toulmin asserted that backings are not always necessary when the warrants provided are strong enough to validate the claim. McNeill and Krajcik (2009; 2012), researchers who analyze students' written scientific explanations, have adapted Toulmin's model to include only three components: claim, grounds, and reasoning. Reasoning is a combination of warrants and backings, and works to "articulate the logic behind the link between the evidence and claim, which often includes appropriate scientific principles" (2009, p. 420). Thus, reasoning still requires students to draw from something other than empirical evidence to support their claim. The analysis of students' written work in this study uses McNeill and Krajcik's (2009) adaptation of Toulmin's model, defining reasoning to include justifications that draw on more general mathematical principles, procedures students explain, and/or representations and context, in keeping with Driscoll's (1999) definition of argument. With this adaptation, this study explored the set up and features of tasks that influenced students doing mathematics through their written mathematical arguments.

Research Question

In light of the extensive work done around academic tasks and their role in mathematics classrooms, the purpose of this study was to examine the features of tasks designed with the goal of eliciting students' written arguments, how these tasks were introduced, and how students responded to each. This research has implications for teachers and teacher educators interested in how teachers engage their students in tasks involving written arguments. Accordingly, the question guiding this study was:

What is the nature of task features and task set up, specifically as it relates to process, product, and resources, and subsequent student performance on written arguments in a low-tracked 8th grade algebra class?

Method

In order to answer the research question, I conducted a case study of the development of students' written arguments within two units of a low-tracked, 8th grade CMP algebra class. A case study was appropriate for this analysis because I was interested in an in-depth exploration of students' written arguments and the external supports offered by the teacher that seemed to influence their progress within the bounded context of everyday occurrences of one classroom (Cresswell, 2007; Yin, 2009). I chose to concentrate on only one classroom at this time in order to better understand the students, the instructional moves, and the classroom culture established around arguments. Because this was a low-tracked class, indicating that the students did not score well on their 7th grade exit exams, I anticipated the pace of instruction would be slower and that instructional scaffolds would be greater. The first unit coincided with the book Say it with Symbols (Lappan et al., 2006b), and focused on strategies for identifying equivalent expressions (EE), while the second focused on solving systems of equations (SoE), and loosely followed The Shapes of Algebra (Lappan, Fey, Fitzgerald, Friel, & Phillips, 2009).
Because this class was low-tracked, the curriculum was supplemented with tasks similar to those found in CMP in both content and in their contextual nature. These materials provided students additional opportunities to review previous topics, develop important algebraic skills, such as simplifying expressions using algebra tiles as a model, and receive additional practice on complex material. This extra time was essential, as students expressed confusion over topics that they had already covered in 6th and 7th grade, such as graphing points and representing monetary values using decimals. In particular, SoE was heavily supplemented, with students spending considerable time reviewing points as solutions to linear equations.

Participants and Setting

Students. The participants for this study were students in a large junior high school (8th and 9th grade) in a suburban school district in the Midwest. The school was not ethnically diverse: 93% of the population was White, and the remaining 7% of students were Hispanic, Asian, or Black. Additionally, only 11% of students were eligible for free and reduced lunch. This middle school had been using the CMP curriculum ([CMP], Lappan et al., 2006b) for nearly ten years, but eighth grade mathematics used only the specifically algebraic texts during the 8th grade year. The particular section of 8th grade algebra for this study fluctuated from 30 to 35 students who had been placed in a section of the lowest-tracked mathematics class. Consequently, roughly 67 percent of the students were on an individualized education plan (IEP). As such, what was typically covered in two trimesters for students at grade level was extended to three trimesters for this class; however, the same mathematical content was covered. Eighth grade was the first year that students in this district were tracked in mathematics.

Students were informed that participation in this study would include collecting written assignments and being captured in classroom video. Although the class consisted of more than thirty students, only ten students consented to participate. To honor the non-consenting students, who were the majority of the class, the video was primarily focused at the front of the room where board work was conducted. This allowed me to capture the cooperating teacher's instructional moves while including as few students as possible. According to the teacher, the ten students were a representative sample of the class based on their mathematical performance. In particular, of these ten students, seven had special needs identified through the special education department, and had IEPs. Four of the ten students performed well in class according to the cooperating teacher, and were recommended to join their regularly-tracked peers the following school year. These students regularly participated in class discussions, and consistently turned in completed assignments. Conversely, three students rarely participated in whole-class discussions and were more sporadic with completing their assignments. These three students were identified as being low performers based on homework completion and test grades. The remaining three students were less consistent in their participation in whole-class discussions and assignments, and did not fit neatly into either of the groups just described. That is, they participated in whole-class discussions, but not every day, and were moderately consistent in turning in their assignments.

Ms. Hill.
At the time of this study, I had already been working with the teacher of this class, Ms. Hill (all participants have been given pseudonyms), for over a year because she volunteered to work on a different project, the Algebra Teaching Study ([ATS], http://ats.berkeley.edu/), a developmental project with the goal of linking promising instructional practices to students' robust understanding of algebra. Because of that project, we observed her classroom multiple times throughout the previous school year, and had observed this particular classroom twice. During that time, Ms. Hill stood out as consistently pressing students to provide justifications of their claims. Because of this, I recruited her to participate in this study. Because of my history with the class, I also felt that my continuous presence for this study and for ATS would not be disruptive.

At the time of the study, Ms. Hill had taught mathematics and science in the district for 8 years, and was in her fifth year of teaching CMP. She had taught the low-tracked 8th grade class for three of those years. Ms. Hill regularly participated in professional development offered by the district and talked often about implementing what she had learned, especially as it related to student justifications. In particular, Ms. Hill recognized the importance of eliciting her students' thinking to create a student-centered learning environment. Ms. Hill used the information students offered in classroom discussions, and regularly engaged students in writing activities to help them clarify their thinking. These writing activities included journal writing and more structured writing that included explanations and justifications. Recently, Ms. Hill had begun to facilitate professional development in the district on implementing CMP and generating classroom discussions. Additionally, she was part of a committee working to incorporate the CCSS-M standards and mathematical practices for algebra. This required the committee to rearrange the curriculum significantly, which resulted in the decision to use only parts of the Shapes of Algebra (Lappan et al., 2009) book. Further, although the middle school teachers agreed to concentrate on the mathematical practice of perseverance in problem solving, Ms. Hill also included the mathematical practice of argumentation in her classroom.

Role of Researcher. For this study, Ms. Hill and I each wrote about half of the writing prompts. I analyzed all student arguments from the writing prompts immediately after collection so that I could report back to Ms. Hill before the next class, and again at the conclusion of data collection. After each analysis, I shared the results with Ms. Hill, usually via email, about how students did overall, areas they did not seem to understand, and my perceptions of what they understood about arguments. As a result, Ms. Hill would address areas of concern during class the following day. The following is an example of a typical memo that I sent (12/05/11):

Of the six students I looked at, three provided really strong explanations/justifications for how they came to 404 tiles. Most students included words and a diagram, some were even color coded like you had modeled. One student provided a reasonable explanation, but their explanation did not match their number sentence.
(They explained 4x + 4, but their number sentence was 101 x 4.) These 4 students included the two requirements that you had written on the board (number sentence + explanation that includes appropriate vocab). The other two students did not provide complete explanations. One was really off-base (because 100 x 100), the other had an incomplete explanation because they just recounted the steps, not explaining why. So, there are two things that were potentially instrumental in the overall success of these explanations:
1. the triangle and including 'because' (all but 1 explanation had at least 1 because)
2. the checklist (this is nice scaffolding that can hopefully be removed and students still provide the why's)

In these ways, my role was as a participant observer (Cresswell, 2007). But beyond these memos and suggestions, Ms. Hill conducted class based on her lessons, which she planned independently, and on in-the-moment analysis of how students understood the content and their knowledge of arguments.

Mathematical Content. In order to frame the results, I will provide an overview of the two units I observed. Although the units were sequential, they were separated by a little over a month due to time devoted to preparing for the semester exam, and winter break. The unit on EE lasted for 10 days, and the unit on SoE lasted 12 days. In the following sections, I will briefly describe the mathematical content and share the types of arguments in which students were asked to engage for each.

Unit 1: Equivalent Expressions. The overall objective of this unit was for students to develop skills to determine whether two or more linear expressions were equivalent. Say it with Symbols (Lappan et al., 2006a) begins by introducing the idea that there is more than one way to symbolically represent a contextual situation with equivalent expressions. One of the contextual situations used to investigate equivalent expressions is the Pool Problem ([see Figure 12], 2006a, p. 6), in which students are asked to provide multiple symbolic expressions that represent the number of tiles needed to surround a square pool. The task as written is recommended to last one class period. This class took multiple class periods to complete the task so that students had more time to engage with the ideas. For this reason, we decided to write our own writing prompts for the students that addressed the content planned for that day.

In order to calculate the number of tiles needed for a project, the Custom Pool manager wants an equation relating the number of border tiles to the size of the pool. Write an expression for the number of border tiles N based on the side length s of a square pool.
[The accompanying diagram shows a square pool with side length s surrounded by a border of 1 ft square tiles.]
Figure 12. Introducing equivalent expressions with the pool problem

As the unit proceeded, Ms. Hill introduced students to several strategies that can be used to establish expressions as candidates for equivalence. These strategies include substitution and the examination of tables and graphs. Students learn that these strategies only qualify expressions as candidates for equivalence, because they cannot possibly test every value with substitution or tables, and they cannot see every part of the graph. Finally, students develop symbolic manipulation skills that allow them to indisputably determine whether two expressions are equivalent. In the same vein, any of these strategies could be used to prove that two or more expressions are not equivalent, as long as students understand the concept of equivalent expressions.
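To make these strategies concrete, the following is a worked sketch of the kinds of equivalent expressions students could generate for the pool problem; it is my own illustration rather than a reproduction of the curriculum or of student work, and it assumes the side length s is measured in tile lengths:

\[
N \;=\; 4s + 4 \;=\; 4(s + 1) \;=\; 2s + 2(s + 2) \;=\; (s + 2)^2 - s^2
\]

Substitution or a table can show that these expressions agree for particular values of s, while expanding (s + 2)^2 - s^2 to 4s + 4 settles the equivalence for every value of s, mirroring the move from candidate equivalence to symbolic verification described above.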
Opportunities for argumentation. As students proceeded through the unit, there were many opportunities to engage in mathematical arguments. For example, with the problems that initially set up the unit, students could argue that the expressions they generate accurately model the contextual situation, and as the unit progressed, they could argue whether two expressions were equivalent or not. But the primary focus of the writing activities in EE centered on students arguing for the validity of their generated expression modeling a contextual situation, such as the pool problem. There were multiple reasons for this focus. First, Ms. Hill anticipated students would have a difficult time generating expressions to model situations, and wanted to provide multiple opportunities for them to do so. Secondly, Ms. Hill believed that with these opportunities, students would be exposed to multiple ways of representing the same situation. Arguing the validity of multiple expressions representing the same context would solidify the idea that there were many ways to represent the same situation (personal communication, December 2011). She likened this idea to synonyms in English class. Finally, we anticipated that students would have a difficult time writing convincing arguments. Therefore, we purposefully created writing prompts that were similar (arguing for the validity of a generated expression) and sequenced them close together to provide some continuity in the writing assignments. The writing prompts used in EE are summarized in Table 8.

Table 8
Writing prompts in EE

Day in Unit | Assignment Number | Writing Prompt
2 | 1 | Someday you might be training new employees. Explain to a new employee how to calculate the number of tiles for ANY length pool.
3 | 2 | How many tiles are needed for a 100 x 100 pool? Explain how and why your equation works using words, symbols and pictures.
4 | 3 | Explain to workers at the "Sticker Cube Factory" how to find the number of stickers they will need for any length (x) cube. Tell them HOW to find it and WHY it works.
5 | 4 | Provide a justification for the sticker cube problem (from Day 4) that would earn full credit based on the rubric.
6 | 5 | Edit the instructions so the manual does a better job explaining why the number sentence works.
9 | 6 | Crystal says that 2(x-5) is equivalent to 2x - 5. Explain to Crystal why she is incorrect. Be sure to use words, diagram, and the equation in your explanation.

As Table 8 shows, five of the six writing prompts involved students justifying their expression. Further, these writing assignments occurred daily for five days in a row.

Unit 2: Systems of Equations. This unit was loosely based on Shapes of Algebra (Lappan et al., 2009) and was extensively supplemented with contextual problems that aligned with the CCSS-M standard: "Analyze and solve linear equations and pairs of simultaneous linear equations" (CCSSI, 2010). As such, students continued to write equations and generate representations that model contextual situations as they had in EE. But they also spent considerable time working with solutions to a linear equation and what the solution meant graphically. Ms. Hill emphasized that a solution to a linear equation came as a coordinate pair, (x, y). This idea was emphasized almost daily throughout the twelve-day unit, and Ms. Hill tried to help students see this through the use of tables and graphs. From single equations, students moved on to systems of equations.
With the addition of an equation, students were also introduced to mathematical terms such as systems of equations, point of intersection, and x-intercepts.

Opportunities for argumentation. As with EE, there were many opportunities to engage students in written mathematical arguments. To push students to make sense of their solution and to draw on resources, the prompts typically consisted of presenting students with a solution given by a hypothetical student and asking them to agree or disagree, and explain why. Towards the end of the unit, students were given a SoE problem and asked to solve it and make an argument for their solution. As mentioned previously, students had a better understanding of what should be included in an argument from the activities in the previous unit, so less time was spent on discussing the qualities of a good written argument. Table 9 provides a summary of the writing prompts we wrote in SoE.

Table 9
Writing prompts in SoE

Day in Unit | Assignment Number | Writing Prompt
5 | 7 | Student council is selling sweatshirts (profit $8) and t-shirts (profit $5). The president says if they sell 20 sweatshirts and 10 t-shirts they will make a profit of $250. The treasurer says if they sell 20 sweatshirts they'd need to sell 18 t-shirts. Use an equation and graph to decide who is correct. Then, explain how each representation gives evidence to support your decision.
8 (quiz) | 8 | The band students are selling cookies (x) and candy (y) in an effort to raise $500. Cookies cost $5 per box and candy costs $2 per box. Ali tells the teacher they will need to sell 20 boxes of cookies and 100 boxes of candy. Jack says if they sell 20 boxes of cookies, they will need to sell 200 boxes of candy. Who is correct? Show and explain how to use the equation and graph as evidence.
10 | 9 | [Students are given a problem in which a hypothetical student, Casey, does all the work (shown) and finds the point of intersection.] Explain to Casey what the solution means and why it works.
11 | 10 | [Students given a systems of equations problem that is worked out incorrectly] Casey claims the answer is (15, 10). Is he correct? Explain why or why not.
12 (quiz) | 11 | Morgan wanted to raise money to buy food for families in need. She raised $600 by selling stickers for $3 each and t-shirts for $6 each. At the end of the day she counted that she had sold 150 items. How many stickers and shirts were sold? Explain what your solution means and use the context of the story to support your solution.

As Table 9 demonstrates, the writing prompts were not as consistent as those given at the beginning of the EE unit. Yet, many of the writing prompts in SoE are similar to each other, as in EE.

Data Collection

I collected two types of data: 1) classroom video and 2) written artifacts, in the form of students' written arguments and fieldnotes. I collected students' written arguments at the conclusion of the class in which they were assigned. In no instance were the writing assignments given as homework. These data allowed me to compare students' performance between the two units. Secondly, I used video to capture classroom interactions and mathematical work that was done at the front of the classroom. In addition to the classroom video, I took fieldnotes every day during my observations with the goal of accounting for classroom discussions. These fieldnotes were revised after each unit based on a review of supporting data.
Fieldnotes were primarily taken as rough transcriptions with particular attention given to instructional moves around written work and arguments that were co-authored in class discussions. At the end of each unit, these were compared to video in order to create more thorough transcripts, gaining a more compete account of daily objectives and classroom interactions, especially as they pertained to arguments. Each datum will be detailed below. Students’ written work. Students' written explanations and justifications were the primary data source used to assess students' developing understanding of arguments across the two units. Some of these writing assignments, however, were part of a larger classroom assignment or quiz, and provided further evidence of what content students seemed to understand, as well as their struggles. Writing prompts were given when Ms. Hill and I felt that students possessed enough knowledge to be able to write a meaningful response that would capture what they understood about both the content and arguments in general. My goal was to collect at least five writing samples from each unit. In EE, students responded to six different prompts for a total of 45 written arguments, and to 5 prompts in SoE, also for a total of 45 written arguments with two of the writing samples taken from quizzes students were given. Writing prompts were created with the goal of eliciting situations in which students were required to state and defend a claim, that is, cases in which justifications were required for their response as Table 8 and 9 illustrate. Ms. Hill administered and collected each of the writing assignments and gave them to me at the conclusion of the class period. Once I received students’ 78 written work, I removed identifying information and gave them a pre-determined student code. Each night I received a set of students’ written work; I analyzed them and wrote a memo that I shared with Ms. Hill. Classroom video and fieldnotes. Each class period during the two units was captured with video and fieldnotes. Fieldnotes were taken each day in an attempt to capture the classroom objectives, tasks, and discussions around both content and arguments. At the conclusion of each unit, the fieldnotes were compared to the video to fill in missing information and to provide annotations, especially as they pertained to anything said that would help students with their written work. Although I did not transcribe the entire video, I was careful to accurately capture instruction and discussions around arguments, as well as arguments the class may have engaged in collectively during whole class discussions. These instances were identified primarily through Ms. Hill’s prompting. For instance, during whole-class discussions, she would ask students “why is that true?” or “why did you do that?” to set them up to provide an argument. Similarly, instruction around written arguments primarily occurred just before she distributed a writing assignment. These instances were easily identified as she typically used terms like “written explanations” or “writing assignments”. I primarily used these transcripts to identify instructional support during the task set up regarding students’ written arguments. Data Analysis To determine the nature of task features and task set up and how that may have impacted students’ written arguments, I performed two parallel analyses. The first involved the analysis of the writing prompts and the subsequent written work students produced, and the second, how Ms. 
Hill set up the writing tasks. In the following sections, I will go into greater detail regarding how I performed each analysis.

Task features. Characteristics of the writing prompts were analyzed to determine whether certain features may have contributed to more convincing arguments. There were three features considered. First, I considered whether or not the prompt explicitly situated students to make an argument. Writing prompts in which students were situated to make an argument included those that forced students to agree or disagree with a fictitious student's claim, or those in which they were explicitly asked to support their answer using a particular resource, such as a graph or an equation. Less explicit instances of situating students were when they were asked to "explain" a claim. Whether a prompt situated students to make an argument was coded as either yes or no, depending on whether students were set up to provide a justification for their claim.

Secondly, I examined the writing prompts for any written scaffolds they may have provided. Written scaffolds were considered to be reminders to students to include different elements in their written argument. In all cases, written scaffolds informed students of what the final product should include. For example, some writing prompts urged students to explain why; others reminded students to use equations and graphs to support their answer. In these cases, the writing prompt was coded according to the type of scaffold provided (see Table 10 for the list of codes). These scaffolds were consistent with the types of scaffolding provided in task set up.

Table 10
List of codes for written scaffolds

Code: Mathematical Resource. Definition: Students pressed to include a resource, such as a representation, or to connect to context. Example: "Crystal says that 2(x-5) is equivalent to 2x - 5. Explain to Crystal why she is incorrect. Be sure to use words, diagram, and the equation in your explanation."
Code: Justification. Definition: Student encouraged to go beyond procedures to explain why their claim makes sense. Example: "[Students are given a systems of equations problem that is worked out incorrectly] Casey claims the answer is (15, 10). Is he correct? Explain why or why not."

Finally, I coded for the physical layout of the task, and called these process scaffolds, as they affected the process of students' final argument. In some instances, space was provided for students to support their claim using an equation and graph (see Figure 13). In other cases, a diagram, graph, or a checklist was provided.

The student council is selling Clarkston sweatshirts and t-shirts for a fundraiser. They make a profit of $8 for every sweatshirt, and $5 for every t-shirt they sell. The student council president says that if they sell 20 sweatshirts and 15 t-shirts they will make a profit of $250. The treasurer says that if they sell 20 sweatshirts they'd need to sell 18 t-shirts to make a profit of $250. Who is correct? Explain why this person is correct using an equation and a graph:
[Labeled space follows for Method 1: Equation and Method 2: Graph.]
Figure 13. Example of a writing task that situated students in an argument and provided a process scaffold in the written task.

Figure 13 provides an example of a process scaffold in which students were given allotted space in which to respond to each part of the task. In tasks like this, the allotted space or representations were a type of scaffold that served to remind students to include some kind of mathematical statement with regard to the given process scaffold.
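For reference, a worked check of the equation method for the Figure 13 task (my own sketch, not drawn from the study's student work, taking x as the number of sweatshirts and y as the number of t-shirts) shows how the profit equation distinguishes the two claims:

\[
8(20) + 5(15) = 235 \neq 250, \qquad 8(20) + 5(18) = 160 + 90 = 250.
\]

Only the treasurer's combination satisfies 8x + 5y = 250; on the graph, only the point (20, 18) lies on that line, which is the kind of evidence the allotted spaces prompted students to unpack.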
Students’ written arguments. Once all the data was collected, students’ written work was scored using the analytical framework described in Chapter 2. The level of persuasiveness was coded on a spreadsheet for each student by day. I sought to also understand how student performance evolved each day to track possible relationships between student performance and scaffold provided each day. In order to do this, I assigned a numerical score, from 0-4, to each category in the analytical framework, with non-arguments scoring the least, and strong arguments scoring the most. Once each argument received a numerical score, I averaged 81 students’ individual scores for a daily average. This average provided an overall approximation of how the students’ performed as a class. From these scores, I labeled each day according to the average score in accordance with the labels from the framework presented in Chapter 2. Therefore, scores between zero and two were described as weak, scores between two and three as moderate, and between three and four as strong. This provides a descriptive way to talk about average daily results for students’ written arguments. These labels provided both an adequate indicator of how students performed as a whole each day and a means for determining trends between students’ written work and the instructional support offered during task introductions. Additionally, I coded students’ written work according to the task features. For situating, students were coded as either yes or no depending on whether they attempted to write an argument. Attempts at an argument were considered when students included at least a statement that could be considered as grounds. Students’ response to situating was scored on a three-point scale: 0 for no argument, 1 for an attempt at an argument, and 2 for an argument. These scores were averaged for each writing assignment to provide an overall indication of how students responded to this task feature. Students’ responses to the written scaffolds in tasks were coded according to whether they provided what was indicated. For example, if the task stated that students should use words, pictures, and/or symbols, students’ work was coded for which of these they included. In order to determine whether students attempted to provide a reason “why”, signal words such as “because” or “since” were identified as an attempt to answer why. These will be presented in the results section as the percent of each that was included. That is, if half the students provided a diagram, then diagram will be listed as 50%. These percents do not account for the correctness of any 82 statements made; rather the percent only indicates the number of responses that included a particular representation relative to the total number of responses. Finally, students who used the process scaffolds were tallied and averaged. If more than one process scaffold was provided on a task, they will be named and each will be coded. For example, when space was allotted for students’ explanation of why a graph supported their claim, it was labeled “Graph Space” and included the percent of students who incorporated some sort of explanation of the graph, whether it was correct or not. Task set up. Task set ups were captured each day a writing task was distributed. They were analyzed in two ways. First, I captured the amount of time spent setting up the task because there seemed to be considerable variation in this area. For example, some days Ms. 
Hill distributed the writing assignments without any explanation, and other days, she spent extensive time explaining the task and discussing elements of a good argument. I did this because it seemed as though the time taken to introduce the task may have had an influence on students’ written work. Secondly, I coded each set up according to how the discussion informed students about the writing assignment. Again, this was because of the variation in information provided throughout the two units. In the following sections, I will elaborate on each. Capturing time of task set up. The classroom video provided a means to capture the amount of time devoted to the task set up of written arguments. With these times, I calculated the percentage of class time devoted to the set up for each day that students were given a writing task. Timing typically began with Ms. Hill giving a general introduction to the task, such as, “I’m going to give you another writing assignment that you’ll have ten minutes to work on” and typically concluded once students began working on the task. Percentages were an appropriate 83 measure for comparison because there were several days that an alternate schedule altered the class time of the days I observed. As with the daily averages of written work, I created a means to talk qualitatively about the average class time devoted to written arguments across three categories because the average times occurred in clusters. First, when 0 to 5% of class time was devoted to written arguments, I describe these days as little support. Similarly, modest support occurred on days when 6 to 30% of class time was given to supporting written arguments and intense support for days over 31%. Coding task set up. The activity structure of task set ups of students’ writing assignments were clearly defined, and generally involved feedback on a previous day’s assignment, or an elaboration just before distributing another writing assignment. Once these instances were identified in the transcripts, I returned to the video to double-check the transcripts to make sure they accurately captured the scaffolds and where necessary, edit them. This was to ensure that the fieldnotes captured these instances verbatim. After returning to the video, I used the transcripts to locate sections of dialogue that involved setting up students’ writing tasks. Once the sections of transcripts were identified, they were coded according to the type of instruction that was offered: telling, modeling or rubric activities. For example, telling included support about what should be involved when writing an argument, such as different tools that can be used or feedback regarding a previous writing assignment that drew attention to what students were doing well and on what they needed to improve. Additionally, task set up included time in which Ms. Hill modeled writing arguments, either by stating what she would write or actually writing the argument on the board. Finally, peer review activities preceded writing tasks in which students evaluated written arguments with the aid of a rubric created specifically for that writing assignment. Each task set up was coded as 84 telling, modeling, or peer evaluation, or a combination of the three codes. These moves were progressive in the sense that every instance of modeling included telling, and every instance of peer evaluation activities involved telling and modeling. This progressive nature of instruction provided instances of instructional layering. 
Additionally, the instructional types were coded according to how they informed students about the task in general. Instruction either addressed the final product, the process of generating the final product, or both. The process code included instruction such as telling students the types of resources they should be using in their arguments, or to explain why their claim is valid. This process aspect also included scaffolds that communicated that these resources needed to be unpacked and linked together, which worked as an emphasis on both the process of creating an argument as well as what the final product should include. Although there were instances when both process and product were coded in an instructional episode, there were instances when Ms. Hill only talked about one of the two aspects (see Table 11).

Table 11
Examples of Coding Instructional Moves

Transcript: "I want you to move away from what you are doing and start explaining why [you are doing it]. So, I say to the customer you do this because…" Instructional Move: Telling. Process/Product: Process.
Transcript: "I gave you a true statement, but my statement doesn't answer the question." Instructional Move: Modeling. Process/Product: Product.

Summary. An overview of all the codes with a brief description can be found in Table 12. This table summarizes the task features included in the task as written and whether students complied with the various scaffolds (in parentheses). The table also summarizes the codes used to capture elements of the task set up, including time and instructional moves.

Table 12
Summary of all Codes Used for Tasks and Task Set Up

Tasks (Task Features and Student Response, in parentheses):
- Situated: Captures whether the task as written situates students to provide an argument (and the resulting strength of argument on a scale from 0-2)
- Written Scaffolds: Captures any scaffolds provided in the task that remind students to include specific elements such as mathematical resources or a justification (and whether students included these elements in their argument)
- Process Scaffolds: Captures allotted space that reminds students to produce specific elements such as a graph, or space allotted to unpacking a representation (and whether students used the space in their argument)
Time Set Up:
- Little: 0-5% of class time devoted to setting up writing tasks
- Modest: 6-30% of class time devoted to setting up writing tasks
- Intensive: >31% of class time devoted to setting up writing tasks
Instructional Moves:
- Telling: Teacher tells students what to include or what to do when writing arguments
- Modeling: Teacher shows students what to include and how to write an argument
- Peer Review: Teacher sets up activities where students provide feedback to each other regarding their completed arguments

In the following section, I share how task features and task set up impacted the results of students' written work.

Results

The purpose of this study was to capture both the nature of task features and task set up of writing assignments designed to elicit arguments, as well as how convincing the resulting arguments were. Findings suggest that although the task features did not have a discernible impact on students' written results, task set up did. Students' written arguments were more convincing on the days Ms. Hill devoted the most time to providing tangible resources that explicitly addressed arguments. In the following sections, I describe the results of the analysis of the written tasks and task set up, and describe how these affected students' arguments.
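Before turning to those results, the following sketch summarizes the labeling conventions defined in the preceding sections. It is hypothetical Python written for illustration only (the function names are my own, and no such code was part of the study's analysis); it simply encodes the score bands used for daily argument averages and the time bands used for task set up.

def label_argument_average(daily_average):
    # Daily class average of argument scores on the 0-4 scale:
    # below 2 is weak, 2 up to 3 is moderate, 3 and above is strong.
    if daily_average < 2:
        return "weak"
    if daily_average < 3:
        return "moderate"
    return "strong"

def label_setup_time(percent_of_class_time):
    # Percent of class time devoted to setting up the writing task:
    # roughly 0-5% little, 6-30% modest, and more than that intensive.
    if percent_of_class_time <= 5:
        return "little"
    if percent_of_class_time <= 30:
        return "modest"
    return "intensive"

# Illustrative values only: a daily average of 3.6 with 40% of class
# time spent on set up would be labeled "strong" and "intensive".
print(label_argument_average(3.6), label_setup_time(40))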
Task Features

I analyzed three characteristics of tasks as written for how they scaffolded students to write an argument—situating, written scaffolds, and process scaffolds. Table 13 provides the results of this analysis along with the daily average of students' written arguments for each writing assignment.

Table 13
Results of task features analysis and average of student results

Writing Task # | Situating (yes/no) | Written Scaffold | Process Scaffold (yes/no) | Average Student Results
1 | No | None | No | 0.4
2 | No | Resources | No | 2.5
3 | No | Why? | No | 1
4 | No | Rubric | No | 3.6
5 | No | None | Yes | 3.5
6 | Yes | Resources | No | 1.2
7 | Yes | Resources | Yes | 1.4
8 | Yes | Resources | Yes | 2.4
9 | No | None | No | 0.9
10 | Yes | None | No | 1.5
11 | No | Checklist | No | 2.1

As Table 13 shows, there are a few observable trends in the data. In particular, there are some patterns among the written and process scaffolds and student results. For example, on days in which there were no written scaffolds, the written results were quite weak for the most part, receiving average scores of 0.4, 3.5, 0.9, and 1.5. This would indicate that without the written scaffolds, students were uncertain about what to include. There is one outlier for this case (writing task #5). On this day the average score of 3.5 can be explained by the peer-evaluation activity that occurred the previous day and the close similarity between this task and the task used in the peer-evaluation activity.

Similarly, on days in which there were specific written scaffolds informing students of what mathematical resources to include, the average scores were better than the days without: 2.5, 1.2, 1.4, 2.4, and 2.1. Even so, these average scores are, at most, only moderately convincing. The written scaffolds informing students of what to include in their argument still left students with the job of using these mathematical resources in a way that supports their claim. That is, students were still required to use and unpack the resources as grounds and reasoning to logically validate their claim. In this sense, including the written scaffolds did not interfere with the intention of the writing task.

Despite the fact that the majority of tasks as written did not situate students to provide an argument, in many cases students provided an argument anyway. In fact, on the days in which students provided the most convincing arguments, the writing prompts did not situate students to present an argument. This suggests that either students inherently knew they were supposed to provide an argument, or Ms. Hill indicated during the task set up that written responses should contain a justification that explains why their claim is true. In this study, the writing prompt itself did not seem to have much impact on the finished product.

Students' responses to the task features were also analyzed (see Table 14). The information provided in Table 14 only captures the presence of a resource or an attempt to explain why in the case of the written scaffolds, or whether students took advantage of the space provided to write an explanation regarding a resource.
Table 14
Student responses to task features and average of student results

Writing Task # | Number of Completed Tasks | Situate | Written Scaffold | Process Scaffold | Average Student Results
1 | 5 | 0.40 | None | None | 0.4
2 | 8 | 1.13 | Words-100%, Diagram-50%, Symbols-50% | None | 2.5
3 | 7 | 0.25 | Why-71% | None | 1
4 | 9 | 1.78 | Rubric-100% | None | 3.6
5 | 7 | 1.86 | None | Labeled Diagram-100% | 3.5
6 | 9 | 0.40 | Words-78%, Diagram-100%, Symbols-100% | None | 1.2
7 | 7 | 0.43 | Equation-100%, Graph-57% | Equation Space-100%, Graph Space-71% | 1.4
8 | 10 | 1.10 | Equation-90%, Graph-90% | Equation Space-100%, Graph Space-100% | 2.4
9 | 9 | 0.44 | None | None | 0.9
10 | 10 | 0.60 | None | None | 1.5
11 | 9 | 0.89 | Checklist-89% | None | 2.1

As was seen with the analysis of the task features, the situating aspect of the writing task did not seem to influence the strength of students' arguments, and the situating scores, coded on a three-point scale (0-2), reflect this. Students responded favorably in most instances to the presence of scaffolds. That is, students attended to the reminders to use resources, or to explain why their claim was valid. Similarly, when students were given space to provide an answer that addressed why a particular resource validated their claim, students generally included some statement to fill the space. Whether the resources or statements were correct or not was not part of this analysis. Rather, this analysis captured students' responses to the task features.

Task Set Up

Task set up varied in both the time allotted to it and the instructional moves used to prepare students for the writing assignment. In general, the more time that was spent on task set up, the more convincing students' written arguments were. The instructional moves during task set up varied from telling students about both the process and product of arguments, to modeling arguments, to using rubrics to analyze peers' arguments before constructing their own. Each of these will be described below.

Time spent on task set up. As Figure 14 shows, as more time was spent setting up the writing tasks, the written arguments became more convincing according to the five-point scale (0-4).

[Figure 14 is a chart titled "Comparison of Task Set Up Time and Average of Written Arguments"; for each level of time spent on task set up (Little, Modest, Intensive), it plots task set-up time alongside the average score of written arguments on the 0-4 scale.]
Figure 14. Comparison of percent of time given to task set up (multiplied by 4 to compare to the written results scale) for particular days and averages of students' written arguments. (The percent of class time was multiplied by 4 so that the scales were comparable. For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation.)

Figure 14 shows some indication that the convincingness of students' written work corresponded with the amount of class time spent on task set up. That is, days in which more time was spent on task set up resulted in more convincing arguments for the class. Likewise, days in which little time was spent on discussions related to setting up the task resulted in low scores on students' written work. Although this is not entirely surprising, it is noteworthy. If students are expected to show their mathematical proficiency through the construction of arguments, then attention will need to be given to the criteria for convincing arguments during class. The instructional moves that Ms. Hill used during these times aligned with little, modest, and intensive time spent on task set up.
In particular, telling was associated with little time, modeling with modest time, and peer-evaluation activities with intensive time spent setting up tasks. These instructional moves will be discussed in the following sections, particularly as they relate to informing students of either the process of writing arguments or the final product.

Instructional moves during set up. Three distinct types of instructional moves were used by Ms. Hill to set up the writing tasks: telling, modeling, and peer review. As more time was devoted to task set up, there was a layering effect of instruction. For example, when modest time was devoted, Ms. Hill tended to use modeling and telling, and when intensive time was devoted, she used peer-review activities as well as modeling and telling. Additionally, the instructional moves built on each other in the order presented next, each establishing a foundation for the next. That is, telling students about the process and product of an argument provided a foundation for Ms. Hill to model the development of an argument. This, in turn, provided a foundation for the peer-evaluation activity, creating a layering of instructional moves that helped students construct convincing arguments.

Telling. Telling involved informing students about either the process or the product of arguments, but in very general terms. For instance, during a task set up that occurred early in EE, as Ms. Hill was distributing the writing prompts she announced to the class, "It's about being better math students, and not just answering the question, but explaining why you think that it is true. I'm asking you to not just give answers, but to explain." By saying this, she drew students' attention to the final product; in particular, the final product must explain. Yet the way she communicated this was vague. In fact, explain might mean recount steps, or provide an example. The ambiguity of this instruction might have been decreased by asking students to provide evidence for the claim and/or directing their attention to different ways in which mathematical resources can be used to validate their claim.

Similarly, on the day of the ninth writing assignment, Ms. Hill spent time defining key terms of the SoE unit. That is, once the writing assignments were distributed, Ms. Hill spent about 3 minutes defining the terms solution and point of intersection. She then explained how the graph can be used to identify the point of intersection and related the solution to the fact that it must be a solution for each of the lines. These are terms that had been defined before, and students were acquainted with finding the point of intersection in their daily work to identify the solution to a system of equations. By defining these terms and reminding students of their relationship, Ms. Hill addressed the process of constructing an argument. However, students were left on their own to determine how to use these definitions in their justifications.

In each of these instances, the support given to students was vague, which was the case for most instances of telling. Although telling addressed both the process and product of arguments, it did not provide any instructional resources to which students could refer when constructing their arguments. Not all telling was as vague as "explain," however. There were episodes when Ms. Hill explicitly told students appropriate resources to use when writing their arguments.
For example, in EE, she stated that students should use diagrams to support their generated expressions and use words to make the connections between them. In the following quote, we see an example of how Ms. Hill used direct instruction to make clear to students what resources they should draw from in their arguments.

    Give me an explanation, showing the symbols you use to solve it. But also give an explanation - this is what you didn't do last Friday. Use the words and the pictures to explain exactly why you are using the numbers you are using.

In this passage, Ms. Hill explicitly calls for the finished arguments to include words, pictures, and symbols; she is directly telling students to use tangible mathematical resources that would increase the persuasiveness of their arguments. Yet, while less vague than the quotations above, telling students what mathematical resources they could draw on did not seem to have much impact on students' arguments. In fact, in every instance when telling was the only instructional move used to set up tasks, student arguments were categorized as weak, with average scores ranging from 0.4 to 1.5. The student work in Figure 15 provides an example of the arguments that fell into this range.

    Prompt: Tell the workers how to find the number of stickers they will need for ANY length (x) cube. Tell them HOW to find it and WHY it works.
    Student response: x × 4 + 2. This works because x × 4 will give you the first answer then you add two because of the top and bottom of the cube.

Figure 15. Example of a weak argument on a day with little scaffolding (student's work recreated).

This example of student work attempts to "explain their thinking," but falls short because there is little to guide the reader toward understanding what the x × 4 answer means with regard to the original context. Further, although the task provided a diagram of the cubes, this student did not use it to communicate their thinking. This argument was categorized as weakly convincing and was typical of the responses given on days when task set up involved only telling. When telling was coupled with other types of instructional moves, students' written arguments became more convincing.

Modeling. Ms. Hill also used modeling to scaffold written arguments. Although the modeling included instances of telling, she also showed students what she expected, doing so both orally and in writing. The following portion of transcript illustrates an occasion of Ms. Hill writing out an argument with students' assistance, modeling how to use resources during the process of generating an argument in the SoE unit (before writing task 8).

1   Ms. Hill: So, in the show part, you could do something like this [Writes Marcello: 500x + 300y = 18000;
2   500(10) + 300(40) = 17000] ten oil paintings and forty sketches only makes seventeen-thousand, not eighteen-thousand.
3   I think you did that, because that's how you knew the agent was correct, but you're not showing it. This is how you show it. [Writes: "Agent: 500(12) + 300(40) = 18000"] This is how you show it.
4   So, in words, explain what the equation is showing. Some of you put $17000; that's not an explanation, it's just a number. Explain why that number is not what you're looking for.
    [and later during the same discussion, addressing how the graph shows the solution]
9   So what?
What does it matter if the point is on the line? What does the line represent? So, what did Marcello do? This is his combination here, right? [Underlines the 10 and 40 from the problem].
10  That's his information, so I plot that point. This is the agent's combination, I plot that. Here it is. [Labels each point, "Marcello" and "Agent"] That's showing.
11  So what? What am I looking at? Why does this show me the agent is right? How does this graph give evidence?
12  Student: Because one of the points is on the line and the other isn't.
13  Ms. Hill: Ok, whose point? Tell me who specifically. [Writes: "the agent's point is ON THE LINE"]. This is a super important phrase. I'm going to put it in capital letters.
14  So what? What does it matter if the point is on the line? What does the line represent?
15  Student: The line has all the solutions that give you $18,000.

In this episode, Ms. Hill asked students to draw from their developing content knowledge and apply it to their arguments in line 2. Students consistently found correct solutions, but they did not support their answers in their arguments. Ms. Hill drew attention to the fact that they simply needed to include more of what they knew about solutions in their arguments in order to validate their claim. Then, to illustrate, she showed them what a convincing argument would look like for this particular problem. In doing so, she provided examples of what the finished product should contain. Statements like these helped students make links between their thinking and their writing. As a result, students generated written responses that included insight into the content knowledge that supported their claims, providing a useful means of assessment and valuable information regarding how students were progressing in the unit (see Figure 16).

    The band students are selling cookies and candy in a fundraiser. Cookies cost $5 per box and candy costs $2 per box.
    x: # boxes of cookies    y: # boxes of candy
    d. Write an equation to find the number of each type of treat they can sell to raise $500.
       Student response: 5x + 2y = $500
    e. Use the x- and y-intercepts to graph the equation.
       [Student's graph, with the "Jack" point labeled on the line]
    f. Ali tells the teacher they will need to sell 20 boxes of cookies and 100 boxes of candy. Jack says that if they sell 20 boxes of cookies, they will need to sell 200 boxes of candy.
       Claim: Who is correct? __Jack___
       Evidence: Show AND explain how to use the equation and graph as evidence.
       Equation: 5(20) + 2(200) = 500. Because 20 boxes x 5 + 200 candy's x 2 = $500. Ali equation = 5(20) + 2(100) = $300
       Graph: Ali point is on the $300 graph; Jack is on the $500 graph.

Figure 16. Student work showing insight into their content knowledge on a day with modest scaffolding (student work recreated; the student counted by tens on the x-axis, titling it "cookies," and by 25s on the y-axis, titling it "candy").

As the student work shown in Figure 16 illustrates, there is evidence of strong content knowledge regarding the process of showing whose solution is validated by the equation. This student was clearly able to substitute the values into the equation and determine that Jack was correct. Similarly, this example demonstrates that the student has some idea of how the graph shows solutions to the equation, although strong links are not made between the points and the line. However, there is little connection between these resources and the claim.
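For reference, the substitution shown in the student's "Equation" evidence in Figure 16 can be restated as the following check (my restatement of the arithmetic, not part of the recreated student work):

    5(20) + 2(200) = 100 + 400 = 500
    5(20) + 2(100) = 100 + 200 = 300 ≠ 500

That is, Jack's combination satisfies 5x + 2y = 500 and Ali's does not; spelling out this link between the equation and the claim is exactly what a fully convincing argument would add.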
This argument was categorized as only moderately convincing because, although there is some indication that the student understood the material, the student did not make a convincing argument. Elements were missing, which indicates the student did not have a strong understanding of arguments and of the requirement to convince.

Peer evaluation. The third instructional move used as a task set up was the peer-evaluation activity. There were two days, one in each unit, on which these activities preceded a writing task. In EE, this activity occurred in the middle of the unit and had an impact on future, similar writing tasks. In SoE, the peer-evaluation activity was left until the last day of the unit and had only a modest impact on students' writing assignment. In both instances of peer evaluation, Ms. Hill helped students evaluate their peers' arguments with the assistance of a rubric she developed in EE and a checklist she developed in SoE. Through these activities, students still received instruction through telling and modeling as described above, but the peer-evaluation activity provided another layer of instruction. That is, the activity was product-oriented in that it provided hands-on experience in evaluating and making sense of each other's arguments. I will first provide an example of the rubric provided in EE before writing assignment four.

Rubrics from EE. This rubric activity was introduced in the middle of the EE unit and was used to reinforce the telling and modeling that students had already been exposed to. In particular, the rubrics were used to reinforce how words, pictures, and symbols should be used to support students' claims, supporting students in both the process and product aspects of arguments. This activity and the corresponding rubric were aimed at critiquing responses to the Sticker problem. The rubric (shown in Figure 17) provided a tool for students to meet this goal.

    Words
      2: An explanation in words for HOW to find the number of stickers AND 'because' statements explaining WHY are provided for each step.
      1: An explanation in words for HOW to find the number of stickers, but it does not explain WHY it works.
      0: An explanation in words is NOT given.
    Symbols
      2: An expression showing HOW to find the number of stickers is given AND each part is labeled with what it represents (WHY).
      1: An expression showing HOW to find the number of stickers is given, but it is not labeled with what each part represents (WHY).
      0: There is no expression given.
    Picture
      2: A labeled picture is used to show how to find the number of stickers. The picture matches the description of words and/or symbols.
      1: A picture is given, but it is not labeled or does not match the description in words or symbols.
      0: There is no picture given.

Figure 17. Rubric given in EE as a task set up to a writing activity.

Peer-evaluation using rubrics. In the activity that involved this rubric, students were given six arguments taken from the previous day's writing assignment. Ms. Hill chose the examples from students' work to illustrate the variety of explanations that students wrote. The examples were chosen so that students could see explanations that included some combination of words, pictures, and symbols, at varying levels of completeness and at varying levels of connecting the resources to the claim. Students were asked to use the 3-point (0-2) rubric to assess each criterion: words, symbols, and pictures.
Once these six arguments were evaluated, the scores were discussed as a class, drawing out what types of statements would warrant a rating of 2 over a 1 or 0. In these discussions, both Ms. Hill and her students were able to differentiate arguments that clearly explained why the expressions worked from those that only articulated how they worked. The following portion of transcript illustrates the conversation that took place during this task set up activity.

1   Ms. Hill: These are the types of explanations I see a lot of. [writes 4x + 4]. If you tell me this is how to find the number of tiles, I'm not really sure what that means. So, I need something more than that [points to expression]. This is not enough. Does this explanation use words?
2   Students: No.
3   Ms. Hill: So, what score would they get?
4   Students: Zero.
5   Ms. Hill: Does this explanation, four x plus two, use symbols?
6   Students: Yes.
7   Ms. Hill: Good, so this one will not get a zero. Read through the rubric and decide if this should get a one or two. ..
    Ms. Hill: How many of you think this deserves a zero? [no students] How many think it deserves a one? [most raise their hand] Why?
8   Max: Because it doesn't explain what things [variables] are and how it connects to the picture.
9   Ms. Hill: Does anyone think it deserves a two… why wouldn't it deserve a two, what is it missing?
10  Clara: It's not showing why the expression works.
11  Ms. Hill: So, if I wanted to turn this into a two, what would I have to do?
12  Ivan: Label the parts of the equation, like, they should say x is side length.
13  Ms. Hill: [draws arrow to x and writes side length] So, that's one step closer to being a two, but it's not yet because everything isn't labeled. So, what about the picture?
14  Students: Zero.
15  Ms. Hill: So, if this was out of six points, even though the expression is correct, as an explanation it would get a one out of six points. And that's where you guys stop. It makes sense to you, but you're not doing a good enough job explaining why?

This portion of transcript shows how students were able to recognize how to improve areas of an argument. In particular, they saw that the expression did not communicate why it was appropriate for this situation, but they were able to suggest ways to improve it by saying why it works and "labeling parts of the equation." This was an important realization for students because their arguments before this activity rarely included words or pictures.

This set up reinforced what Ms. Hill had told students and modeled previously. In particular, she had repeatedly urged students to use mathematical resources that support the claim by stating both how and why each resource was relevant. In doing so, Ms. Hill stressed the process of generating an argument. But because students were given rubrics to assist them in evaluating their peers' arguments, she also provided a means for students to reflect on arguments as finished products. As Figure 18 indicates, students' arguments became much more convincing after this rubric activity, as they began to include more resources in their arguments that were unpacked and clearly supported the claim.

    Prompt: In the space below, provide a justification for the sticker cube problem that would earn full credit based on the rubric.
    Student response: Because you can only see four sides, we would have x groups of four (with x being the number of blocks). However at the ends you can see another sticker, so we need two more stickers for two sides.
    x × 4 + 2 = # of needed stickers
    [Labels the student attached to the expression: "top, bottom, front and back (x4)"; "2 sides / plus the two sides"; "side length × the four sides"; "# of blocks on each x (length, by the number of blocks)"]

Figure 18. Example of student work that provides appropriate resources that are unpacked and cogently support the claim.

As this example shows, students' arguments began to include many more words that unpacked the resources used and communicated their relevance. Unlike previous arguments, students labeled their diagrams and expressions, which was helpful in communicating the links between resources because the reader did not have to infer what the author meant by terms like 'sides' or 'corners'. Further, the arguments were clearly stated and followed a logical flow that more appropriately communicated the validity of the claim.

The peer-evaluation activity in EE provided the hands-on experience that telling and modeling did not. It positioned students to critique their peers' arguments and provided a tool that helped them do so in a way that may not have been as successful without it. In the process, students were exposed to arguments as both a process and a finished product. Determining how students used the rubrics and how the rubrics influenced their subsequent writing assignments would be an interesting exercise, but was not within the scope of this study.

Summary. Figure 19 provides a summary illustration of the features of the tasks as written and of the task set up that seemed to play a role in students' arguments in this study.

    Figure 19. Mathematics Task Framework (Henningsen & Stein, 1997) adapted for tasks involving written arguments. [Diagram: Task (as written) → Task Set Up (by teacher) → Task Implementation (by students) → Convincing Arguments. Factors reducing ambiguity: written scaffolds, process scaffolds. Factors influencing convincing arguments: time spent on set up, layering instruction.]

Although the task features did not have a noticeable impact on students' arguments, they were consistent with the instruction given during task set up. For example, Ms. Hill consistently instructed students during task set up to include words, pictures, and symbols in their arguments, and this instruction was included in the written tasks. Further, students attended to the task features by including the mathematical resources suggested, attempting to explain why, or taking advantage of the allotted space by writing something in it.

The task set up, however, did seem to contribute to the persuasiveness of students' arguments. The time devoted to task set up and the instructional moves seemed to have a direct impact on how convincing student arguments were. Ms. Hill set up high-level tasks aimed at eliciting written arguments using three types of instructional moves: telling, modeling, and peer-review activities. The content of these instructional moves aligned to inform students both of the types of mathematical resources they should include in their arguments and that the final product should provide evidence that validates their claims. In doing so, the task set up reduced the ambiguity of the doing-mathematics tasks and moved students toward constructing more convincing arguments.

Discussion

This study investigated mathematical tasks and the task set up designed with the intention of eliciting convincing, written arguments in a low-tracked, 8th grade mathematics class.
The results show that although the features of the tasks did not seem to contribute to the persuasiveness of students' arguments, the set up phase did. In fact, as Ms. Hill spent more time setting up the writing tasks, students' arguments became more convincing. Part of this can be attributed to the consistent message, carried by the three instructional moves used to set up the tasks, informing students of both the product and the process of generating arguments. This study provides evidence that with scaffolding, students can learn to write convincing arguments using appropriate reasoning to support their claim. Though it is not entirely surprising that the final product improved as set up time increased, it is worth noting at a time when students' mathematical proficiency will be assessed, in part, by their ability to construct viable arguments.

This study identified task features of writing prompts designed to elicit an argument from students and described instructional moves that seemed to have an impact on students' written work. Even so, the persuasiveness of students' arguments was not sustained when new content or a change in writing prompt was introduced. The following sections discuss this, along with possible explanations for why students' argumentative writing skills did not persist across writing prompts.

Influence of Task Features on the Persuasiveness of Arguments

Three features of the writing prompts—situating, written scaffolds, and process scaffolds—were analyzed in this study. Even though at least one type of scaffold was present in the majority of the tasks, the scaffolds did not seem to have any discernible impact on the persuasiveness of students' arguments the way the task set up did. Yet, even though the written and process scaffolds did not seem to have an impact on students' arguments, students nonetheless complied with the suggestions inherent in the scaffolds. That is, when the task allotted space to explain why the graph supported the solution, students typically wrote something in that space, and they included the recommended mathematical resources.

Written scaffolds. The inclusion of the written scaffolds served to reduce the level of ambiguity inherent in tasks classified as doing mathematics (Smith & Stein, 1998; Stein & Lane, 1996). That is, the presence of these scaffolds provided textual cues to students regarding what they should include to make a convincing argument. Although the inclusion of the written scaffolds may have reduced the cognitive demand of the task, the spirit of the task was not changed. That is, students still needed to logically connect the significance of the mathematical resources in a way that justified the claim. In this respect, the task of producing a final product that convinces was not altered; the written scaffolds simply served as cues to the process of justifying one's claim. Especially because the difficulties students have with providing justification are so well documented (e.g., Balacheff, 1988; McCann, 1989; McNeill & Krajcik, 2009), scaffolding the types of resources that students might draw from to make mathematical justifications provided some assistance to students. When this assistance was not offered, the arguments were classified as weak.
This suggests that including written scaffolds in the task is one pragmatic way to reduce some of the ambiguity of a high-level task for students who are beginning to develop the skills associated with writing mathematically convincing arguments, and to give students insight into appropriate mathematical resources that would convince a critic. While the written scaffolds did not seem to dramatically affect the written product in this study, they may play a role for students who have already developed the skills associated with writing arguments. That is, for students who understand the product aspect of arguments, written scaffolds may provide insight into the types of mathematical resources to draw on. It seems, especially for high-stakes testing, that if particular mathematical resources are expected in students' written work, these expectations should be made explicit. As this study suggests, despite explicit reminders to use mathematical resources, students were still left to determine how to use these resources in a way that directly linked to and supported their claim.

Situating students in an argument. In this study, the way students were situated to provide an argument did not seem to have much influence on whether or not they did so, likely because of the task set up. In high-stakes testing, however, it is unlikely that students will be given elaborate task set ups. Therefore, as students begin to be tested on their ability to construct viable arguments (CCSSI, 2010), the writing prompt may play a larger role. Even though in this study prompts that asked students to explain resulted in convincing arguments at times, Staples and Bartlo (2010) caution that students may interpret the word explain to mean 'recount your steps'. In other words, to students, explain is not synonymous with justify. The fact that students provided stronger arguments on tasks that did not explicitly situate them in an argument may speak more to the task set up than to the writing prompts themselves.

This suggests that teachers need to prepare students to interpret explain tasks on high-stakes tests (see Figure 2) differently than recount your steps—especially when they are asked to explain a choice or defend a result. As Ms. Hill demonstrated, teachers can help their students associate such tasks with the need to provide evidence for their claim. Because of the difficulty students have with providing evidence (e.g., Balacheff, 1988; Knuth, Choppin, & Bieda, 2009), this instruction may need to be explicit. This can occur when providing feedback on written arguments, but it can also be drawn out in oral arguments. Teachers can highlight justifications provided in support of a claim, such as "that was a really good justification that provides a good reason why this claim is true."

Influences of Time Spent Setting Up Tasks on Arguments

Decreasing ambiguity. The amount of time spent on task set up seemed to greatly influence the persuasiveness of students' arguments, likely because this time served to reduce the ambiguity surrounding these high-level tasks. Task set up allowed Ms. Hill to link students' content knowledge with the task at hand. In this respect, Ms. Hill informed students of appropriate mathematical resources at their disposal that could be elaborated and linked to their claim. She also gave students insight into what the final arguments should contain, making explicit that the final arguments should provide enough evidence to convince another.
In this respect, the task set up included instruction on both the process and product aspects of arguments. This is important, as it includes all the aspects of Doyle's (e.g., 1983; 1984) definition. Thus, for students to be successful at generating the product, they need to know something about the process of creating it.

Some of the factors influencing task set up include the teacher's goals and knowledge (Henningsen & Stein, 1997). In this study, Ms. Hill's goals and knowledge of arguments were compatible and worked together to provide comprehensive instruction to students that helped decrease the ambiguity of the writing tasks. The commitment Ms. Hill demonstrated through her professional development activities to incorporating arguments in her classroom is likely to have influenced how she set up the writing tasks. Because of the extensive professional development she had given and received, Ms. Hill seemed to have a firm understanding of argumentation as both a process and a product. Because of this, she was able to instruct students regarding both.

Instructional moves during task set up. The instruction that Ms. Hill gave around process and product fell into three types of instructional moves: telling, modeling, and peer-evaluation activities. Across these three instructional moves, there were variations in the explicitness of the instruction regarding process and product. In particular, instructional moves involving telling were the least explicit, while modeling and peer-evaluation activities were the most explicit. When task set up involved the more explicit instructional moves, students performed better. This is not surprising, as Doyle (1984) found similar results: students did better "when the teacher gradually did an increasing amount of work for the students by clarifying and specifying the features of an acceptable product" (p. 145).

The increasing amount of work that Doyle speaks of was apparent in the types of instructional moves Ms. Hill used in this study and correlated with the amount of explicitness. Telling, although informing students of the process and product sides of argumentation, tended to be very general. Statements like "explain why" or "provide evidence" told students what should be included in their arguments (an explanation of why and some evidence) but did not show students how to use mathematical resources to explain why or to offer evidence. Similarly, Ms. Hill encouraged students to use words, pictures, and symbols in their arguments, but did not tell students how to use these in a way that validated their claims. Bieda and Lepak (2012) found that students preferred proof explanations that "show and tell," but the type of telling that Ms. Hill used seemed to concentrate on only one at a time. Had she told students to provide evidence (product-oriented) by using words, pictures, and symbols (process-oriented), the outcomes may have been different. Yet when Ms. Hill's instructional moves during task set up informed students of both the process and product together, as in modeling or peer-evaluation activities, students' written arguments were strongly convincing. These types of instructional moves also provided more tangible resources. These resources included an example of a strong written argument constructed in class and written on the board for all students to see, and rubrics that informed students of how to evaluate their peers' arguments but also served as a tool for self-evaluation.
The explicitness of the instruction regarding the process and product correlated with the tangibility of the resources. One of the features of these instructional types was the consistency of the message. Although there were variations in the degree of explicitness, students were always told or shown to use available mathematical resources, such as context and representations (Driscoll, 1999), to provide evidence in support of their claim. The consistency of this instruction had a layering effect. Students were able to follow the rubrics and assess each other's use of words, pictures, and symbols because they had also been told to use these resources and had seen them modeled. Although it is impossible to know from these data, it is likely that this consistency and layering played a role in the effectiveness of each instructional move.

Although task set up played an important role in students' convincing arguments, it seemed to have only a short-term effect. This may be due to the fact that the instruction during set up was localized. That is, the instruction was only applicable to the problems, or types of problems, that were in students' immediate future. Although the advice to provide evidence and to use words, pictures, and symbols is appropriate and applicable to many types of arguments, students did not seem to transfer the use of these mathematical resources to new arguments. In fact, when the writing prompt changed because of the content, the convincingness of students' arguments declined.

Sustaining Convincing Arguments

Although students in this study had instances of writing convincing arguments, the level of convincingness was not stable. In fact, only two of the eleven writing assignments could, as a class, be described as strong arguments. Conversely, there were six writing assignments, over half, that were described as weak. Further, it was only when Ms. Hill provided the most explicit support of the longest duration that arguments were the most convincing. Yet, when the writing prompt changed in content, the convincingness of arguments declined.

One reason for the instability in the results of students' arguments could be Ms. Hill's direct guidance during task set up. Even though Ms. Hill provided appropriate instructional moves that supported students in their writing and reflected her knowledge of students and what they were producing, the instruction during task set ups generally addressed only the task at hand. Even so, this instruction had the potential to be generalizable to other tasks involving written arguments. But instructional moves that only apply to immediate tasks "result in superior performance on 'near transfer' tests, which require reproduction of information or solutions to problems similar to those used in instruction" (Doyle, 1983, p. 175). Despite this, across the eleven writing prompts there was a very slight increasing trend in students' average scores. This suggests that students were starting to understand the requirements of the finished product and attempting to provide more justifications in their arguments. It also confirms that writing is a complex process (Doyle, 1983; Morgan, 1998), one that will likely need ongoing attention in mathematics classrooms as new content is added.

Incorporating new content. Doyle (1983) defined a task to include the product itself as well as the course of generating the product and the resources that students have available to them to generate the product.
When the product is a mathematical process, like generating an argument, the process draws from students' content knowledge as well as their notion of what the end product should contain (Doyle & Carter, 1984; McNeill & Krajcik, 2012). Thus, as students are introduced to new content, they become accountable for using this knowledge in writing tasks in classrooms dedicated to developing argumentative writing skills.

In this study, one explanation for the decline in arguments is the introduction of new content. Every time students were expected to use new content in their arguments, or when the format of the writing prompt changed, the average results of students' arguments decreased. This is not entirely surprising and represents the "zigzag" nature of learning (Lampert, 1990). Once Ms. Hill provided instruction during writing task set up that involved the new content, the arguments improved. Nevertheless, this has implications for teachers preparing their students to write viable arguments. New content is a reality for mathematics classrooms. How to integrate new content in a way that allows students to access it to justify a claim is an important question with implications for classroom teachers and teacher educators.

Associated with this, another explanation for the instability in the convincingness of students' arguments could be the extent of students' existing content knowledge. That many of the participants in this study were on IEPs, and that all were placed in a low-tracked algebra class, suggests that they did not have strong prior knowledge, both conceptual and procedural. Coupled with this is the high cognitive demand that argumentative writing places on students, including determining a claim, providing evidence for the claim, and linking the two together in a coherent, logical way (Driscoll, 1999). Argumentative writing is a unique blending of content knowledge, logic, and communication skills, and these tasks may have been these students' first encounter with such demands in mathematics classrooms. It may be that, because of the high cognitive demand of these tasks and the low prior performance of these students, much more time is needed to establish a habit of mind (Driscoll, 1999) for incorporating new content into arguments. Future research might explore this in greater depth.

Limitations and Future Research

Part of the decline in the persuasiveness of students' arguments when new content or a change in the prompt was introduced could be attributed to the brevity of this study. Doyle (1983) noted that especially for high-level tasks, the time to develop skills will be greater than for lower-level skills. The duration of this study was less than three calendar months, which included a substantial winter break. Because argumentative writing is a highly cognitively demanding task, the length of this study was likely not long enough to capture significant changes in students' written arguments. Thus, future studies might consider how written argumentation skills develop over longer periods of time, such as a school year or longer.

Another limitation is that this study considered only one teacher's attempts to develop argumentative writing skills, and in only one classroom. Although Ms. Hill was an appropriate choice for this study, future studies might investigate how several teachers select and set up writing tasks in their classrooms and whether variations in these factors affect students' written results.
With a study of this nature, there will undoubtedly be more factors introduced, such as teachers' content knowledge and knowledge of arguments. These additional factors will likely contribute more insight to the field regarding influential aspects of tasks and task set up that promote convincing written arguments. Although the units in this study, EE and SoE, were rich with opportunities to explore students' arguments, examining the development of convincing arguments in earlier years and in more strands of mathematics would also be fruitful. For example, an examination of how students draw on mathematical resources in developing number sense may have implications for how they develop their arguments in algebra and beyond. At the same time, analysis could consider whether active participation in oral arguments throughout these formative years has an influence on students' written arguments, as many scholars believe (e.g., Deatline-Buchman & Jitendra, 2006; McNeill & Krajcik, 2009; G. J. Stylianides, 2009).

Future research might also question how much time is needed, and how explicit instruction needs to be, for students to write convincing arguments. Further long-term research should be conducted to identify whether this is a recurring trend. That is, is it the case that every time students are introduced to new content or a new writing prompt, they will need the kind of intensive set-up that Ms. Hill demonstrated on days when modeling or peer-review activities were part of task set-up?

Conclusion

Results from this study suggest that how writing tasks are set up has more influence on students' production of convincing written arguments than any of the task features themselves. Through three types of instructional moves, the task set up provided information regarding both the process and product aspects of argumentation. When sufficient time was devoted to addressing these aspects, students' written arguments were strong. Whether one of these moves alone would be enough to create the change, given enough time, is uncertain and could be the subject of future research. Nevertheless, three instructional moves—telling, modeling, and peer-review activities—were foundational to achieving the end goal of helping students generate more convincing written arguments. This finding is important because students' difficulty in knowing what statements should be used when supporting a claim is well documented in the persuasive writing literature (e.g., Deatline-Buchman & Jitendra, 2006; Knudson, 1992; McCann, 1989) as well as the literature on proof (Bell, 1976; McClain, 2009; Miyazaki, 2000).

Additionally, with the widespread adoption of the Common Core State Standards (2010), students will need to be able to produce valid, coherent arguments to demonstrate their mathematical proficiency. This implies that teachers must provide instruction on, and practice involving, arguments. This instruction needs to center on the process and product aspects of arguments, but it also needs to address how to interpret situations in which the necessity of an argument is disguised in a task that asks students to explain, like the one in Figure 2. Finally, results from this study suggest that working with students to develop convincing arguments is a process requiring sustained attention to help students demonstrate their mathematical proficiency. With the overwhelming adoption of the CCSS, teachers will need to support their students in the generation of arguments.
However, teaching and learning conceptually rich mathematics in which students are consistently justifying their claims is a deeply intricate and interwoven endeavor. Thus, the more that is understood about how teachers support arguments in their classrooms, the more likely students will be to succeed on future high-stakes tests.

CHAPTER 4

AN ELABORATION OF THE RELATIONSHIP BETWEEN WRITING PROMPTS AND TASK SET UP

Chapter 3 presented a version of the written and oral scaffolding provided for students to be successful with written arguments by considering each of the two types of scaffolds separately across the two units under observation. In untangling each type of scaffolding, however, interesting relationships between the prompts and scaffolds were obscured. The purpose of this chapter is to describe more fully what was happening when students engaged in writing activities in each unit and to describe possible relationships between the prompts, the task set up, and students' performance in writing arguments. In doing so, I offer a more comprehensive view of lessons in which writing tasks were assigned and account for the decisions made in writing the prompts and by Ms. Hill when providing instruction during task set up.

Method

This analysis draws from the same data set described in Chapter 3. Instead of looking at task features and task set up separately, as was done in Chapter 3, this analysis considered how task authorship may have influenced task set up and student results. As was stated in Chapter 3, Ms. Hill and I each wrote about half of the prompts across this study; Table 15 displays the author of each prompt. Table 15 also displays the presence of task features in the writing prompt, that is, whether one or more of the situating, written scaffolds, or process scaffolds described in Chapter 3 are present. These are displayed as either present (Yes) or not (None). The task set-up column describes the type of instructional moves made by Ms. Hill for each writing assignment, if any, during task set up. The instructional moves are consistent with those described in Chapter 3: telling, modeling, and peer review. These descriptive terms also provide some sense of the duration of task set-up. For instance, telling always corresponded to little task set up and peer review to intensive task set up.

Results

The purpose of this analysis was to capture the influence that the authorship of writing prompts had on task set up and student performance. Findings suggest that writing prompts that were authored by Ms. Hill received more elaborate introductions than those written by me. Further, students' written arguments were more convincing on the days Ms. Hill devoted the most time to providing tangible resources that explicitly addressed arguments. Table 15 is a compilation of writing assignments, written prompts, task set ups, and student results, organized in ascending order of student results. Coincidentally, by organizing Table 15 in this way, instances in which there was little task set-up rise to the first six rows of the table.

Table 15
Comparison of Written Prompt, Task Set Up and Student Results

Assignment #   Author     Task Features   Task Set-Up   Student Results
1              Ms. Hill   None            None          0.4
9              Me         None            Telling       0.9
3              Ms. Hill   Yes             Telling       1
6              Me         Yes             None          1.2
7              Me         Yes             None          1.4
10             Me         Yes             None          1.5
11             Ms. Hill   Yes             Peer-Review   1.9
8              Ms. Hill   Yes             Modeling      2.4
2              Ms. Hill   Yes             Modeling      2.5
5              Me         Yes             Telling       3.5
4              Ms. Hill   Yes             Peer-Review   3.6

There were two trends displayed in Table 15.
First was the poor performance on the writing tasks that I authored: four of the five prompts that I wrote resulted in weak overall scores (results ranging from 0 to 1.5). Second, and likely associated with this, was the lack of task set up for these prompts. These trends are discussed further in the following sections.

Equivalent Expressions

The first six writing assignments were used in EE. As Table 15 demonstrates, I authored the last two of the writing prompts in this unit. When writing these prompts, I tended to follow the way that Ms. Hill wrote her prompts, mimicking both their content and their form. Perhaps because of the similarity between prompts, Ms. Hill provided more instruction during task set up than in the next unit, and consequently students wrote the most convincing arguments for three of the prompts from this unit. The following sections provide more detail regarding decisions made when authoring the prompts and how these decisions may have influenced the amount of time Ms. Hill devoted to introducing the tasks.

Authoring writing prompts. In EE, I mimicked the way that Ms. Hill wrote her prompts or how she prompted students to provide an argument during whole-class discussions. In doing so, the form of the writing prompts was consistent and appeared to be a factor that influenced students' performance. For example, the following writing assignments, three and five, written by Ms. Hill and me respectively, illustrate the similarities between the form of the prompts and the types of written scaffolds offered (see Figure 20).

    Writing assignment 3 (Ms. Hill): A toy company produces connecting cubes with smiley face stickers on each side. An example is given below: [picture of a row of connecting cubes] They put a smiley face on each face that is exposed. (There are smiley stickers that you cannot see in this picture.) The company needs to find out how many smiley stickers they will need for ANY length of cubes (GENERAL). Tell the workers how to find the number of stickers they will need for ANY length (x) cube. Tell them HOW to find it and WHY it works.

    Writing assignment 5 (me): RECTANGULAR POOLS WITH TRIANGLE CORNERS. The following instructions are in the employee manual at Custom Pools. These instructions are supposed to explain how to find the number of tiles for any size pool. [Diagram of a rectangular pool with sides labeled W and L] "To find the number of tiles take W + L + 1 + W + L + 1. It works because the length and width of the pool are W and L and then add 1 + 1." Unfortunately, there are a lot of errors made by new employees. Edit the directions using the space below so the explanation does a better job explaining WHY the number sentence works (use a diagram in your explanation).

Figure 20. Examples of similarities of form and writing scaffolds between prompts in EE with different authors.

As Figure 20 illustrates, the content of the prompts was similar. For the majority of prompts in this unit, students were asked to explain both how to find an expression for various scenarios and why the expression is valid. Not only was the content similar, determining and justifying an expression, but the context in the prompts shown in Figure 20 was also similar, positioning students to explain to fictitious employees the validity of the expression.
One notable difference between these prompts as written, however, is that the prompt that I wrote provided a weak explanation that students were asked to revise. Yet, the previous day, students had engaged in the peer-review activity with rubrics described in Chapter 3, so critiquing and revising an argument should have been familiar. These similarities in content were consistent across the first five writing prompts, even though students had moved on to different material that consisted of developing strategies to determine whether two expressions were equivalent. The similarities were intentional: Ms. Hill made an effort to help students improve their writing by consistently providing feedback on what mathematical resources they should include in their written work to be convincing (described in Chapter 3), and she gave them opportunities to incorporate that feedback in prompts containing similar content but different scenarios. Consequently, the first five writing prompts in EE were similar in both form and content, and students did increasingly well on them.

Setting up writing prompts. Because of the similarity of writing prompts in EE, it is not surprising that Ms. Hill provided instruction during the task set up for prompts 2-5, regardless of the author. The similarity of the prompts provided a familiarity that allowed Ms. Hill to remind students of the mathematical resources they should include, especially in light of providing feedback on the previous day's assignment. For example, for Assignment 5, written by me, Ms. Hill reminded students of the previous day's peer-review activity and commended them on the good job they had done critiquing their classmates and using the critiques to write new arguments that were convincing. In particular, she expanded on students' use of words, symbols, and pictures to explain how and why their expressions were correct and used this to launch into an introduction of the task shown in Figure 20 that I wrote. The following excerpt shows how Ms. Hill stressed how words, symbols, and pictures should be included in students' written work for Assignment 5.

    The top part represents a part of a manual. This is the explanation you are given. Based on what you have learned, you need to change this so that people know what to do and why they do it. You need to edit this explanation so that it's good…get this better… you don't have to go in this order [words, pictures, symbols]. When I'm putting something together, I like to look at the diagram. So, your explanation should be written so that someone can figure out what to do: how and why. But I also look at the words if the diagram doesn't explain it all. That shouldn't happen for your explanation. You will need all three, words, pictures, and equations, and they will all need to say how and why.

The consistency of the prompts and of the instruction for students to use words, pictures, and symbols, as the quote above illustrates, led students to write strong arguments for prompts 2, 4, and 5. Yet, when the prompt for the last writing assignment (Assignment 6, written by me) changed, no task set up was provided and student performance dropped to 1.2. For this prompt, the content changed. Instead of expecting students to defend their generated expression, this prompt asked students to determine whether two expressions were equivalent, but the written scaffolds still instructed students to use words, pictures, and symbols to support their claim.
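The pair of expressions at issue in this last prompt, taken from the task recreated in Figure 21 below, was 2(x - 5) and 2x - 5. As a brief check of the mathematics (my restatement, not part of the prompt or of any student's work), the distributive property gives

    2(x - 5) = 2x - 10, which is not the same expression as 2x - 5;
    for example, at x = 0 the first gives -10 while the second gives -5.

So the expressions are not equivalent, and that non-equivalence is the claim students were expected to support with words, pictures, and symbols.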
This prompt was modeled after whole-class discussions in which students were asked the same type of question, but for different expressions, using algebra tiles as pictures. Despite the similarity between the written and oral prompts, no task set up was offered. This may be due to differences between the previous prompts and this prompt. Further, because I wrote the prompt and gave it to Ms. Hill just before class started, she had little time to become acquainted with it. This lack of familiarity made it difficult to provide an adequate introduction, likely causing her to provide only a cursory overview of the task.

Whatever the reason for not providing an introduction to the task, students' poor performance provided important information regarding students' ability to transfer what they had learned from the previous task introductions to this novel task. The fact that the written scaffolds instructed students to include words, pictures, and symbols did not ensure that they did so; students provided two of the three resources at best. For example, one student, whose work is shown in Figure 21, had performed quite well on writing assignments up to this point, especially when using words to link his mathematical resources. Yet, for this writing assignment, he only included symbols and pictures—he did not use words to explain their significance.

    [Prompt, recreated:] 2(x - 5) is equivalent to 2x - 5. Use an algebra tile model and the distributive property to explain to Crystal why she is incorrect. Be sure to use words, diagram, and equation in your explanation.
    [Recreated student work: the expressions 2(x - 5) and 2x - 5, x = -1, and algebra tile diagrams, with no accompanying words.]

Figure 21. Example of a student who did not unpack mathematical resources.

As Figure 21 illustrates, not only did this student not use words to explicitly link his diagram to the symbols, he did not even state a claim to which he could link the resources. This type of performance was characteristic of how students performed in the next unit, SoE.

Summary of EE. Consistency was the theme among the writing assignments and task set up in EE. The writing prompts were consistent in content and form, including the types of written and process scaffolds offered. For the majority of this unit's writing prompts, students should have known what to expect. Perhaps because of the consistency among the writing prompts, there was also consistency in the introduction of each task. Ms. Hill consistently communicated that students should include words, pictures, and symbols in their written arguments and that they should explain both how and why each of these mathematical resources supported their claim. Perhaps due to this consistency, students wrote the most convincing arguments in the EE unit.

Systems of Equations

Writing assignments seven through eleven were given in SoE and represent a slightly more diverse set of prompts than those given in EE. In SoE, I wrote three of the five writing assignments and took more risks with the form of these prompts than I did in EE. Although the intent of these prompts was the same as in EE, to situate students to generate an argument, students' performance was not strong for the majority of these writing tasks. This could be attributed to several factors, chief among them being the content. Additionally, however, students' poor performance may also be attributed to the diversity in the writing prompts for this unit and the little time spent introducing these tasks compared to those in EE. I discuss the influence of these factors in the sections that follow.

Authoring writing prompts.
Three of the five writing prompts in SoE were written by me (see Table 15), and those written by Ms. Hill were written for quizzes but were similar to those that I authored in content and form. Ms. Hill, however, tended to provide more written scaffolding in the tasks that she wrote. For example, Figure 22 shows writing assignments seven and eight, written by me and Ms. Hill respectively.

    Writing Assignment #7: The student council is selling Clarkston sweatshirts and t-shirts for a fundraiser. They make a profit of $8 for every sweatshirt, and $5 for every t-shirt they sell. The student council president says that if they sell 20 sweatshirts and 15 t-shirts they will make a profit of $250. The treasurer says that if they sell 20 sweatshirts they'd need to sell 18 t-shirts to make a profit of $250. Who is correct? Explain why this person is correct using an equation and a graph. Method 1: Equation. Method 2: Graph.

    Writing Assignment #8: The band students are selling cookies and candy in an attempt to raise money. Cookies cost $5 per box and candy costs $2 per box. x: # boxes of cookies; y: # boxes of candy. a. Write an equation to find the number of each type of treat they can sell to raise $500. b. Use the x- and y-intercepts to graph the equation. [Small table with x and y columns for recording the intercepts.] c. Ali tells the teacher they will need to sell 20 boxes of cookies and 100 boxes of candy. Jack says that if they sell 20 boxes of cookies, they will need to sell 200 boxes of candy. Claim: Who is correct? ________________ Evidence: Show AND Explain how to use the equation and graph as evidence. Equation: Show and explain below. Graph: Show on the graph above; explain in this section.

Figure 22. Writing assignments seven and eight, written by me and Ms. Hill respectively.

As Figure 22 illustrates, Ms. Hill used many more written scaffolds in her prompt than I used. For example, Ms. Hill identified the variables for students, named them (x and y), and prompted students to graph the equation they generated using the x- and y-intercepts, whereas in my prompt students were expected to perform these sub-tasks on their own in order to arrive at a solution. Despite these written scaffolds, the content and form of the tasks were similar. Both asked students to determine which fictitious student was correct and to justify their choice, and both provided similar process scaffolds by offering space for students to explain the significance of the equation and the graph separately. Whether the improvement in students' performance on the written assignment that Ms. Hill authored (2.4, compared to 1.4 on the assignment I wrote) can be attributed to the additional written scaffolds, to the amount of time she spent introducing the task, to a combination of both types of scaffolds, or to her familiarity with the written assignment is uncertain. Whatever the cause, the average results in this unit were the highest for the prompt shown in Figure 22 that Ms. Hill wrote.

Conversely, the lowest average results in SoE were from writing assignment nine, written by me. My intent for this writing prompt, shown in Figure 23, was to remove the problem-solving process from students by giving them the correct answer to see how they made sense of it. This was because I was starting to feel that students were becoming adept at the process of finding a solution to a system of equations, but their success was due to following a list of steps rather than making sense of the problem or the solution. For example, the writing prompt that Ms.
Conversely, the lowest average results in SoE were from writing assignment nine, written by me. My intent for this writing prompt, shown in Figure 23, was to remove the problem-solving process from students by giving them the correct answer to see how they made sense of it. This was because I was starting to feel that students were becoming adept at the process of finding a solution to a system of equations, but that their success was due to following a list of steps rather than making sense of the problem or the solution. For example, the writing prompt that Ms. Hill wrote, shown in Figure 22, gives some insight into the steps students were taught to solve systems of equations: name the relevant quantities, determine equations to model the scenario, graph using intercepts, find the point of intersection. At one point during SoE, these steps were written on the board as the class collectively worked through a problem. Despite being able to determine a correct answer, however, I was not certain that students understood why the point of intersection was the solution. Therefore, with this writing prompt, I wanted to get a better understanding of how students made sense of the solution and what mathematical resources they used to justify it, without having to go through the procedure of finding it. The prompt read: "The following problem was given to Casey: The admission fee at a basketball game is $2 for children and $4 for adults. At last Friday's game, 200 people bought tickets, and $500 was collected. How many children and how many adults attended? After thinking it through, he decided to let x = the number of children's tickets sold and y = the number of adult tickets sold. Then, he came up with the following equations to represent first the money made and then the number of tickets sold: 2x + 4y = $500 and x + y = 200. Next he graphed each equation using the x- and y-intercepts, and came up with the following graph [graph omitted]. After doing this, Casey found the point of intersection to be (150, 50), which his teacher said was correct, but asked him what it meant...Help Casey come up with an explanation of what this solution means and why it works."
Figure 23. Experimenting with a different type of writing prompt in writing assignment nine.
Students performed quite poorly on this writing prompt (0.9 out of 4), and their responses communicated their uncertainty about how to address the prompt itself. Shortly after the writing assignment was distributed, most students began raising their hands asking for clarification. In particular, they did not know what to do with a task in which the answer was already given. The overwhelming display of student confusion prompted Ms. Hill to provide instruction after the task was distributed—the only episode of task set up offered after students had begun working on a task. Despite this instruction from the written task and from the task set up provided by Ms. Hill, many students worked through the problem as if the solution had not been provided for them. In the end, they concluded that Casey was correct despite that information being given in the prompt. Whether students did not understand how to make sense of the given solution or did not understand the writing prompt itself was not revealed by this task. The next writing prompt in this unit was similarly structured, but instead of giving the correct response, the fictitious student, Casey, gave an incorrect response. Students were readily able to make sense of this prompt, concluding that Casey was wrong and using the equations to justify their response. Few students (n = 2 out of 9) concluded that he was wrong using the graph as well as the equations, and no student relied only on the graph. As with the previous prompt, no task set up was offered for this prompt.
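For reference, the interpretation that the prompt in Figure 23 was designed to elicit can be sketched as follows (my restatement of the intended reasoning, not a student response): the point of intersection is the one combination that satisfies both conditions at once,

\[
2(150) + 4(50) = 300 + 200 = 500 \quad\text{and}\quad 150 + 50 = 200,
\]

so 150 children's tickets and 50 adult tickets account for both the $500 collected and the 200 people who attended, whereas a point on only one of the two lines satisfies only one of the two conditions.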
Setting up writing prompts. As Table 15 shows, there were no instances of task set up in SoE for the tasks that I wrote. Although Ms. Hill and I had agreed that I would write these tasks, as in EE Ms. Hill received them just before class began, leaving little time for her to become familiar with them. The two task set ups that Ms. Hill conducted in this unit both immediately preceded a scheduled quiz. The first task set up was given for writing assignment eight, in which students were asked to determine which fictitious student had the correct solution for a single linear equation; the second, for writing assignment eleven, was for a system of linear equations. Different mathematical resources were stressed during each set up, resulting in inconsistencies between the two. For writing assignment eight, Ms. Hill set up the task by modeling how to write a convincing argument, reviewing a writing assignment students had received the previous day. In that assignment, students were given a situation in which they were asked to determine which character, Marcello or the Agent, had the correct solution to a linear equation. In doing so, she stressed that both the equation and the graph be used as evidence to support the claim by explaining both how and why they provide evidence—instruction that was similar to what students received in EE. While she demonstrated this, she also provided feedback to students regarding areas they were doing well with and areas they needed to improve by providing further explanations. The following portion of transcript demonstrates this (lines 5-8 of the transcript are omitted).
1-4 Ms. Hill: So, in the show part, you could do something like this [Writes: "Marcello: 500x + 300y = 18000, 500(10) + 300(40) = 17000"] ten oil paintings and forty sketches only makes seventeen-thousand, not eighteen-thousand. I think you did that, because that's how you knew the agent was correct, but you're not showing it. This is how you show it. [Writes: "Agent: 500(12) + 300(40) = 18000"] This is how you show it. So, in words, explain what the equation is showing. Some of you put $17000, that's not an explanation, it's just a number. Explain why that number is not what you're looking for.
[and later during the same discussion, addressing how the graph shows the solution]
9-11 Ms. Hill: So what? What does it matter if the point is on the line? What does the line represent? So, what did Marcello do? This is his combination here, right? [Underlines the 10 and 40 from the problem] That's his information, so I plot that point. This is the agent's combination, I plot that. Here it is. [Labels each point, "Marcello" and "Agent"] That's showing. So what? What am I looking at? Why does this show me the agent is right? How does this graph give evidence?
12 Student: Because one of the points is on the line and the other isn't.
13-14 Ms. Hill: Ok, whose point? Tell me who specifically. [Writes: "the agent's point is ON THE LINE"] This is a super important phrase. I'm going to put it in capital letters. So what? What does it matter if the point is on the line? What does the line represent?
15 Student: The line has all the solutions that give you $18,000.
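The arithmetic Ms. Hill modeled on the board in this excerpt, restated here for clarity (the $18,000 total and the two characters' combinations come from the problem she was reviewing):

\[
\text{Marcello: } 500(10) + 300(40) = 5000 + 12000 = 17000 \neq 18000; \qquad \text{Agent: } 500(12) + 300(40) = 6000 + 12000 = 18000.
\]

Only the agent's combination satisfies 500x + 300y = 18000, which is why his point, and not Marcello's, lies on the graphed line.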
As this transcript shows, Ms. Hill provided specific feedback to students on their study guide (which she wrote, but which was not collected as part of this study) regarding what elements of their arguments were already working and what was lacking. For example, in line 3, Ms. Hill points out that although students were using the equation to determine who in the problem scenario was correct, they were not providing enough information in their arguments to convey that reasoning. By modeling how she wanted students to respond, Ms. Hill set a standard of detail that she wanted her students to incorporate into their written work. This was seen again in line 11, when Ms. Hill labeled the points according to what each character in the scenario said the solution was. Finally, in line 14, she again demonstrated how to include more detail in the written response by stating whose point was on the line, rather than referring to an unspecified point, drawing on relevant information provided in the graph. Pointing out the weaknesses in students' arguments and showing students how to make their arguments stronger by including this level of detail had a positive effect on students' written work. Additionally, Ms. Hill modeled how to use this level of detail for multiple mathematical resources. During the first portion of the transcript, Ms. Hill showed how to use the equation to show that the Agent was correct, drawing on the problem context. In the second part of the transcript, she demonstrated how to use the graph to support that claim. By incorporating both mathematical resources into written work, students produced strong arguments. Additionally, encouraging students to use the graph and the equation in their arguments was reminiscent of the instruction students received in EE to use words, pictures, and symbols. By drawing on mathematical resources that students were successful with in EE, Ms. Hill helped students transfer their knowledge to new writing prompts for different content. In fact, modeling this problem led to the highest scores for the written work in SoE. Using these mathematical resources in scenarios with only one linear equation should have transferred easily to scenarios with multiple linear equations. Yet, that was not the case. After students had been working on systems of linear equations, Ms. Hill engaged students in a peer-review activity using a checklist to get them ready for the final writing assignment, also a quiz, of the unit. This type of activity had positive results when used in EE, where students' average score was 3.6, but did not have as great an impact on students' writing in SoE, where students' average score was 1.9. Part of these lower scores can be attributed to what was stressed in the checklist. Whereas in EE Ms. Hill consistently emphasized that students use words, pictures, and symbols in their arguments, during this particular instruction she only emphasized that students relate their solution back to the original context and to the equation, without mentioning how the graph can provide support for the claim. The following portion of transcript illustrates:
1-5 Ms. Hill: So, when I graded your study guides, I'm happy that you are finding solutions, but I want you to do better. What I want you to do is use evidence to support the answer you get. What that does is moves you from, "I think the answer is (60, 40)" to the thinking of "I think the answer is 60, 40 because...". If you support it with evidence, two things are going to happen. First you will be a better problem solver because you will always be looking for a reason why. And you will always be checking your answer and providing reasons for why it makes sense. So, what I think, for this particular explanation, well, I sat and thought about what does a good explanation do? I came up with three things that I think a good explanation does. I think it should connect the solution to the variables. Like the numbers should have a word attached. Like 60 hats, 20 kittens, etc. I also think it should connect the solution to the totals you are looking at. Like 100 items, or $400. Those totals you identify when you read the problem.
Then I also think it should explain why it works, so connecting it back to the equation. So, I came up with sample explanations. Some of these I got right from you guys, some I made up on my own. So, I want you to go through and make sure these explanations do all these things, because on tomorrow's quiz you should have all these [in your written work].
Although this activity started out promising, with Ms. Hill engaging her students in an activity that would help them better articulate evidence that supports their claim (line 1), it was surprising how much she emphasized context, especially given that she had laid the groundwork for using a graph to support claims in the set up involving modeling shared earlier. Additionally, in line 1, Ms. Hill mentioned supporting their claim with evidence, as she did in EE by stressing the use of words, pictures, and symbols. However, connecting solutions to the context did not provide confirming evidence in support of their claim. Instead, connecting to the context only clarified what the solution represented; it did not convince the reader of the validity of the solution, as Figure 24 illustrates.
With 100 stickers and 50 shirts sold it made 150 items sold and $600 made.
Figure 24. Example of student who used context to support their claim (student work recreated).
While it is certainly important that students connect their solution to the context (Driscoll, 1999), using only the context did not offer enough support for students' claims. As Figure 24 shows, it is unclear why 100 stickers and 50 shirts sold made $600 without using the equation. What was so surprising about emphasizing context was that it does not explain how or why the solution is correct, qualities that had been repeatedly stressed previously. Therefore, it is unlikely that Ms. Hill did not know the elements required to produce a good argument. Yet, possible reasons for stressing context over other mathematical resources escape me.
Summary of SoE. There was considerably less consistency among the writing assignments and task set up in SoE than in EE. Authoring three of the five prompts, I did not have any examples from Ms. Hill after which to model these writing prompts. With one prompt, I took a risk by stripping away the problem-solving procedures from students and only asking them to interpret the solution. This proved to be too much of a deviation from what students were accustomed to, and students were confused about how to proceed with the task. Also less consistent in SoE was the amount of task set up provided, with task introductions occurring only for the tasks that Ms. Hill wrote for the two quizzes in this unit.
Discussion and Conclusion
From Table 15, the thing that stands out most is the consistency with which Ms. Hill provided task set up for the writing prompts that she authored and the lack of set up for those that I wrote. In fact, only two of the tasks I wrote received set up, and for one of them the task set up was given after students had already begun working on the task. For those two tasks, only minimal set up was given, with telling as the mode of instruction, as described previously. Conversely, there was only one task, the first in this study, that Ms. Hill wrote that was not introduced. Because of these trends, it seems as though familiarity with the task plays an important role in the extent to which the task is introduced. This, of course, makes sense, as it would be difficult to set up an unfamiliar task. Even so, in both units, Ms.
Hill could have provided verbal reminders for students to include words, pictures, and symbols as evidence of both how and why their claim was valid, orienting students to the elements of convincing arguments. Likely associated with the task set up is how students performed on these tasks: the writing prompts that received the most set up were those on which students did best. Especially in EE, the message given in the task set up was consistent, orienting students to include multiple mathematical resources in their arguments. In SoE, however, this message was not as consistent. In one of the two task introductions, students were shown how to draw on the equations and graph to justify their claim. This instruction was consistent with the message of EE, where students were encouraged to use words, pictures, and symbols. However, in the other task set up, students were told to connect their solution to the context. Although students acted in accordance with this instruction, their arguments were not particularly strong, as Figure 24 demonstrated. Had Ms. Hill continued with the same type of instruction regarding the mathematical resources students should include, it is likely that the arguments would have been much stronger. This chapter attempted to provide a more integrated account of how aspects of the writing tasks influenced the instruction provided during task set up. By taking this more integrated view unit by unit, it is clear that who authored the writing prompt had an impact on the amount and depth of task set up provided, if any. This, in turn, impacted student performance. For this study, familiarity with complex tasks was crucial to students doing well and may have implications for teachers in general. In fact, familiarity with the task itself and the intended outcomes may be vital to teachers setting students up to be successful on such demanding tasks, especially in situations in which teachers rely on the written curriculum.
CHAPTER 5
USING RUBRICS FOR WRITTEN MATHEMATICAL ARGUMENTS
Influential documents such as Principles and Standards (NCTM, 2000) and the Common Core State Standards Initiative (CCSSI, 2010) stress the importance of "communicating to learn mathematics, and learning to communicate mathematically" (National Council of Teachers of Mathematics, 2000, p. 60). These documents endorse both verbal and written communication. In fact, the CCSSI (2010) promotes the generation of clear, concise arguments as a mathematical practice. Reform curricula reflect these standards, as students are frequently asked to write mathematical statements to support their case, using writing prompts like "explain why…", "convince…", and "justify…". Even so, writing convincing non-proof arguments in mathematics classrooms is currently underrepresented in the literature. Using students' writing is one way for mathematics teachers to informally assess students (McGatha & Darcy, 2010), and more importantly, it is a means for students to assess themselves. That is, writing provides an opportunity for students to justify mathematical claims and, in the process, convince themselves as they make connections to previously learned mathematics. Although writing in the classroom has many potential benefits, writing to persuade others of a mathematical truth can prove difficult for students.
In fact, literature on persuasive writing suggests that it is the most difficult genre for students to master (e.g., Deatline-Buchman & Jitendra, 2006), with many of the difficulties stemming from not knowing what facts to draw on when justifying claims. In addition, students tend to write with the teacher in mind as their only audience (Harel & Sowder, 1998), and find it awkward to write convincingly to someone who knows the material better than they do (Morgan, 1998). Yet, with the widespread adoption of the CCSS-M, students are expected to construct and critique "viable arguments". Viable arguments are constructed as students "make conjectures and build a logical progression of statements to explore the truth of their conjectures…they justify their conclusions, communicate them to others, and respond to the arguments of others" (CCSSI, 2010, pp. 6-7). Figure 25 provides a released item from the Smarter Balanced Assessment website that will assess students' ability to construct a viable argument.
The noise level at a music concert must be no more than 80 decibels (dB) at the edge of the property on which the concert is held. Melissa uses a decibel meter to test whether the noise level at the edge of the property is no more than 80 dB.
Melissa is standing 10 feet away from the speakers and the noise level is 100 dB.
The edge of the property is 70 feet away from the speakers.
Every time the distance between the speakers and Melissa doubles, the noise level decreases by about 6 dB.
Rafael claims that the noise level at the edge of the property is no more than 80 dB since the edge of the property is over 4 times the distance from where Melissa is standing. Explain whether Rafael is or is not correct.
Figure 25. Sample item released by the Smarter Balanced Assessment Consortium (http://www.smarterbalanced.org)
In the task shown in Figure 25, students are first asked to determine whether Rafael is correct. In doing so, students are making a mathematical claim. By asking students to "explain whether Rafael is or is not correct," this task asks students to construct an argument for why their claim is true. In order to do so, students will need to understand which mathematical resources are appropriate to draw from—a task that has historically been difficult for students in many disciplines (e.g., Balacheff, 1988; Deatline-Buchman & Jitendra, 2006; McNeill & Krajcik, 2009), but one that can be made explicit with instruction. Given that students will be expected to write a convincing argument in support of their claim, it is important to identify ways in which teachers can support their students' argumentative writing.
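One way the claim in Figure 25 might be evaluated, sketched here only to illustrate the kind of reasoning the item calls for (this is my sketch, not part of the released item or its scoring materials): starting at 100 dB at 10 feet and subtracting about 6 dB for each doubling of distance gives

\[
10 \to 20 \to 40 \to 80 \ \text{feet} \quad \text{corresponding to} \quad 100 \to 94 \to 88 \to 82 \ \text{dB (approximately)}.
\]

Because the edge of the property is 70 feet from the speakers, fewer than three full doublings separate it from where Melissa is standing, so the noise level there is still above roughly 82 dB and therefore above 80 dB; quadrupling the distance accounts for only about 12 dB of decrease, which suggests Rafael's claim does not hold.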
The purpose of this article is to share how one teacher, Ms. Hill, used activities involving rubrics, "an assessment tool that lists the criteria for a piece of work" (Andrade, 2005, p. 27), to explicitly communicate the mathematical resources students could draw from when providing written mathematical arguments. Through these activities, students' written arguments became more convincing, and they began to shift their attention to an audience other than the teacher. Additionally, using rubrics to evaluate each other's work positioned students as evaluators of mathematical arguments, an aspect of the CCSS-M Standards for Mathematical Practice embraced by Ms. Hill.
Using rubrics in the classroom can do more than just engage students in mathematical practices. Carroll (1998) found the use of rubrics was a simple way to gain insight into students' understanding and adjust teaching accordingly. Ms. Hill found writing to be beneficial to her assessment of students' understanding as well, and a tool for students to identify and connect their thinking about mathematical truths. As such, writing was an integral part of her students' daily work. She urged her students to identify and apply the same thinking to justify their mathematical claims as they used to generate their claims. Helping students understand which types of statements can be used to justify, however, required consistent feedback and adequate practice.
The Classroom and Mathematical Context
The need for consistent feedback and practice was especially true for the students represented in this article. The class in which I conducted this case study was the lowest-tracked 8th grade class, and many of the students struggled mathematically. In fact, nearly two-thirds of the students in this class received special services, and some had an individualized education plan (IEP) that addressed specific mathematical learning disabilities. Ms. Hill had been teaching the low-tracked class for three of the five years that she had taught 8th grade math at the time of this study.
Drawing from the Connected Mathematics Project (CMP) curriculum (Lappan et al., 2006), Ms. Hill began the unit Say It with Symbols, which explored equivalent expressions, by giving an overview of the unit. The unit included the "pool problem," which asks students to generate and explain an expression that would model the number of tiles needed to surround a square pool (see Figure 26): "In-ground pools are often surrounded by borders of tiles. The Custom Pool Company gets orders for square pools of different sizes. For example, the pool above has side length of s feet and is surrounded by square border tiles. All Custom Pool border tiles measure 1 foot on each side. How many border tiles do you need to surround a square pool?" [The accompanying diagram showed a square pool with side length s surrounded by 1-foot-square border tiles.]
Figure 26. The Pool Problem
As she introduced the problem, she also drew a triangle on the board and labeled the vertices words, symbols, and pictures (see Figure 27). Once she completed this diagram, she explained to students that their arguments needed to include each of these three elements. [The diagram was a triangle with vertices labeled Words, Pictures, and Symbols.]
Figure 27. Illustration of how words-symbols-pictures should link together in arguments.
First, their explanations needed to be in words; they could not be only a set of mathematical steps because, as Ms. Hill explained, "an algorithm or example alone does not justify," a practice middle school students often use to attempt to validate a claim (Bieda & Lepak, 2010). In particular, she stressed that students should explain the relationship between their pictures, or representations, and their symbolic expression. In doing so, students used the representation to justify their mathematical claim: in this case, their symbolic expression. This process of translating verbal situations into symbolic equations and expressions is commonly used to mathematize contextual problems in CMP and was familiar to the students in Ms. Hill's class. By highlighting these mathematical processes, Ms. Hill drew her students' attention to tools they were already using, namely pictures, representations, and symbolic expressions, to state a mathematical claim, and communicated how they could apply these tools to their justifications. Driscoll (1999) argued this practice leads to more coherent arguments because it helps students provide clear links between their justifications and their claim.
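As a concrete point of reference for the pool problem (my sketch of one correct claim and some equivalent forms, not an excerpt from the class), the number of border tiles N for a pool of side length s can be written as

\[
N = 4s + 4 = 2s + 2(s + 2) = (s + 2)^2 - s^2,
\]

where, for instance, 4s + 4 can be read as four sides of s tiles plus the 4 corner tiles. An argument of the kind Ms. Hill was after would pair one such expression with a labeled diagram and words explaining what each term counts.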
Ms. Hill initially incorporated this model to support students' arguments by explaining the model and then providing verbal reminders during the first week of the unit. However, the verbal reminders to use these tools (words, symbols, and pictures) resulted in only modest improvements in students' written justifications. For example, a typical response from students when asked to explain how to calculate the number of tiles for a pool of any length using words, pictures, and symbols can be seen in Figure 28. Although this student made a correct claim, there is little by way of words, symbols, and pictures to support it.
One way you could do it is take how long a side length is multiply by 4 and then add 4 because you can't forget the corners.
Figure 28. Student's attempt at an argument before using rubrics.
As this example demonstrates, "because you can't forget the corners" does not provide explicit links between the student's claim and their representation. That is, the justification does not clarify whether the four corners are represented by the coefficient or the constant term in the expression. Therefore, this argument is quite weak and does not justify the claim. Because this example was typical of most students' responses, Ms. Hill decided to amplify students' attention to the model in Figure 27 by creating rubrics that would clearly communicate her expectations and provide a standard for using words, pictures, and symbols in their explanations.
Introducing and Using Rubrics
After a week of verbally reminding students what to include in their justifications with only modest success, Ms. Hill created task-specific rubrics (Thompson & Senk, 1998) that would explicitly demonstrate how words, symbols, and pictures could be used to justify claims for the pool problem task (see Figure 27). Ms. Hill provided little explanation regarding the rubrics because students were accustomed to using them in other classes, like English and Social Studies. To reinforce and operationalize the rubrics, Ms. Hill compiled six explanations from the previous day's writing assignment that included a range of justifications, from moderately to poorly supported. With these explanations, she implemented an activity in which students used the rubrics to assess the six arguments' use of words, pictures, and symbols to support the claim. As these arguments were examined in small groups, students expressed difficulty understanding what their peers were trying to communicate, even though they understood the mathematics. Ms. Hill used this opportunity to address the issue of audience with respect to writing convincing explanations. She stressed that, as in the examples used in this exercise, mathematical writers cannot assume that the reader understands what the writer meant to say; instead, she instructed her students to consider their audience and to be very clear about how the words, pictures, and symbols are linked together to support their claim. In doing so, Ms. Hill was providing instruction regarding how to persuade others of the truth of a claim (Harel & Sowder, 1998).
Rating 2. Words: An explanation in words for HOW to find the number of tiles AND 'because' statements explaining WHY are provided for each step. Symbols: An expression showing HOW to find the number of tiles is given AND each part is labeled with what it represents (WHY). Picture: A labeled picture is used to show how to find the number of tiles; the picture matches the description in words and/or symbols.
Rating 1. Words: An explanation in words for HOW to find the number of tiles is given, but does not explain WHY it works. Symbols: An expression showing HOW to find the number of tiles is given, but it is not labeled with what each part represents (WHY). Picture: A picture is given, but it is not labeled or does not match the description in words or symbols.
Rating 0. Words: An explanation in words is NOT given. Symbols: There is no expression given. Picture: There is no picture given.
Figure 29. Pool problem rubric.
With this in mind, students were then asked to create an explanation that would receive full credit from the rubric. Before students began working, however, she reminded them to keep their audience in mind. To reinforce attention to their audience, she told students they would share their writing with a partner, and that each would use the rubrics to assess the other's writing. In this way, Ms. Hill made paying attention to audience vital: students were instructed to write in a way that convinced each other.
Results from Using Rubrics
As a result of the critiquing exercise with the rubric, students' written arguments became stronger. As intended, their arguments began to include all three aspects of the recommended words-symbols-pictures triangle that Ms. Hill had promoted from the beginning of the unit. Consequently, students were constructing viable arguments and drawing on the mathematical resources that had led them to state their claim in the first place. Through the use of diagrams that supported their claim and words that made explicit links between the symbols and diagrams, students' written arguments were much more convincing. For comparison, I used the rubrics to assess the writing from the day before and the day after students were given the rubrics; average scores increased from 1.7 (n = 9) to 5.6 (n = 8) out of 6 possible points. Even though students did not use the rubrics on either of these days, I used them to assess the impact that the rubrics had on students' argumentative writing. The following example shows how one student's argument, typical of the rest of the class, became stronger by linking words, symbols, and pictures to represent the thinking that led them to make their mathematical claim (see Figure 30). The prompt read: "Unfortunately, there are a lot of errors made by new employees. Edit the instructions using the space below so they do a better job explaining WHY the number sentence works (use a diagram to explain your new explanation)." The student responded: "You would add the width of the pool plus the width, because there are 2 sides that are width. Then you add the length of the pool plus the length, because there are 2 sides that are lengths. Then add 2 because on each corner is ½ and if you add ½ plus ½ = 1 and you would do that for both sides and that's why you would add 2." [The student's diagram showed a rectangular pool labeled with length (L) and width (W) on each side and a ½ marked at each corner.]
Figure 30. Student's successful attempt at an argument after using rubrics (student's work recreated).
From this sample of student work, it is clear that the rubric activities had the intended effect. Typical of most responses, this explanation included links between the representation and the symbolic expression.
The linking statements typically explained the relationship between the mathematical resources students drew on. An example of a linking statement in the work shown in Figure 30 is "because there are 2 sides that are width". This statement links the diagram to the symbolic expression, and is made more explicit because this student labeled both. Responses of this nature persuade others regarding the validity of the claim because clear, correct statements connected the mathematical resources to the claim. That is, by labeling both the representation and the symbolic expression, the connection between the two is strong, resulting in a coherent, convincing argument.
Benefits of Using Rubrics
Students' inclusion of words, pictures, and symbols in their arguments persisted throughout the unit. A week later, I used the rubrics again to assess whether they had a lasting effect on students' writing. For this writing assignment, students were given the following prompt: Crystal says that 2(x - 5) is equivalent to 2x - 5. Explain to Crystal why she is incorrect. Be sure to use words, picture, and the equation in your explanation. This task is different from the previous writing assignments in the unit in two important ways. First, students were given abstract expressions for which they needed to determine equivalence, whereas before, students were asked to generate expressions and defend them. The fact that students were given expressions rather than being required to generate them made assessing the symbols part of the rubric problematic; in order to receive full credit for this part, students had to link their picture directly to the symbols. Secondly, the given expressions were not linked to context, yet the rubric encouraged students to tie their explanation to a picture. To provide the link, students used pictures of algebra tiles, where a stick represented the variable, dots represented constants, and anything drawn in red represented negative values. To demonstrate the distributive property, all students (n = 9) drew circles around groups of sticks and dots and repeated that drawing to represent the constant outside the parentheses. For example, Figure 31 demonstrates how students represented 2(x - 5) in factored and expanded form.
Figure 31. Illustration of how students represented 2(x – 5) in factored and expanded form.
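For reference, the equivalence at issue in this prompt can be summarized as follows (my restatement of the mathematics, not drawn from the student work): applying the distributive property to 2(x − 5) multiplies both terms inside the parentheses by 2, so

\[
2(x - 5) = 2x - 10 \neq 2x - 5,
\]

and Crystal is therefore incorrect; the algebra tile pictures make the same point, since two circled groups of one x-tile and five negative unit tiles expand to two x-tiles and ten negative units rather than five.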
Despite important differences in the writing prompt, students still performed better than before the rubric intervention, when average scores were 1.7. On average, students scored 4 out of the possible 6 points on this writing assignment, suggesting that Ms. Hill's use of rubrics equipped students with tools they could use when providing justifications for their mathematical arguments. Incidentally, this writing assignment occurred the week after students had worked with rubrics. Further, Ms. Hill did not refer to the rubrics or suggest that students use them for this writing assignment. The activity involving rubrics helped students develop convincing justifications for their claims by linking words, pictures, and symbols. The rubrics were developed by recognizing tools that students were already using to develop their claims, and helped them apply these tools to their justifications. The rubrics needed little explanation and helped make a difficult task clearer, but there were additional benefits as well.
Communicating Mathematical Resources to Use in Justifications
Rubrics are used in classrooms for many reasons, including specifying learning objectives and communicating them to students (Andrade, 2005). Although Ms. Hill had verbally told students what she expected and had provided the visual cue with the triangle (see Figure 27), isolating words, symbols, and pictures in the rubric seemed to alert students to the differences between the three requirements. This allowed students to concentrate on one track of reasoning at a time, but also forced them to make each track work together for a stronger argument. That is, Ms. Hill required that students make links between the three vertices of the triangle, illustrated with the bi-directional arrows, in order to get full credit. For example, to emphasize how the pieces work together, she stated, "Your words, pictures, and symbols need to match. Your words need to describe the expression, and the expression should match the picture" (Ms. Hill, Day 5). In this way, Ms. Hill emphasized how words, pictures, and symbols worked together. By reinforcing this instruction with the rubrics, Ms. Hill also provided a standard with which students could evaluate each other's written arguments. Further, by asking students to generate an explanation that would receive full credit according to the rubric, this activity also functioned as a tool for students to evaluate their own arguments, another benefit cited in the literature on rubrics (Carroll, 1998).
Although this rubric was task-specific, its principles could apply to a wide range of tasks. For example, in order to answer the question posed in Figure 25, students could use words, symbols, and pictures (or representations) to support their claim. For instance, students could write an equation to model the relationship between distance from the speakers and the decibel level, or construct a picture or table to illustrate the relationship. Whatever mathematical resources students incorporate into their argument, they will need to use words to unpack the significance of these resources and link them to the claim. By correctly doing so, students will demonstrate their mathematical proficiency by drawing on multiple ways to support their claim.
Helping Students Attend to Audience
Additionally, the exercise that accompanied the rubrics had an unintended, added benefit: making students aware of their audience. Because the instructional activity positioned students to evaluate each other's arguments, the focus was taken off Ms. Hill as the only audience reading students' written work. Attending to audience may be a simple way to focus students on producing convincing arguments that would inform someone other than the teacher (Morgan, 1998). Often, keeping an audience in mind requires students to go into greater depth with their explanation and encourages them to make the links between the claim and justifications more explicit. Additionally, as readers and assessors of arguments, students became conscious of the difficulty of mathematical communication, because this exercise drew their attention to the complexity of communicating in written form. Directing the reader's attention to pictures and symbols made the meaning clearer by making specific links between mathematical resources. Although these pictorial representations can help the reader better understand the argument being made, the writer also benefits by being pushed to clearly link the justifications to the claim.
Through this activity, Ms. Hill positioned her students as evaluators of mathematical arguments, and she was able to provide constructive feedback to her students using the rubrics as a guide.
Conclusion
On the upcoming CCSS-M assessments, students will be asked to answer problems that require them to write viable arguments. Accordingly, teachers will need to attend to students' argumentative writing, and they will need tools for doing so, in order to help students understand what mathematical resources are appropriate to use in support of a claim. The rubrics used in Ms. Hill's writing activities are one such tool, one that yielded growth in her students' writing scores when compared to the use of less focused instructional strategies, like direct instruction. In this class, simply telling students what to include in their written arguments did not help them understand what statements were appropriate to use; convincing them that audience mattered and helping them to understand what a successful argument entailed seemed to make the difference. The activity with rubrics transformed students' arguments as they began to include words, pictures, and symbols as justification for their claims. In fact, using the rubrics, students' scores grew, and as a result their arguments became much more coherent and convincing. Without a doubt, these activities explicitly communicated to students what statements were appropriate to use when defending their mathematical claims, and helped students overcome a well-documented hurdle by identifying appropriate statements to use in their arguments.
APPENDIX
Table 16
Strength of Argument Rubric (adapted from McCann, 1989)
Claims. 0: No claim was provided or was not implied. 1: Ambiguous referral to the claim. 2: Claim is provided, but is incomplete/incorrect. 3: Claim is clear and complete.
Grounds (statements that we take as given; they provide some justification for the claim, but may not provide a strong enough rationale to validate the claim). 0: No grounds are provided, or the grounds that are provided do not relate to the claim. 1: Grounds are weak, inaccurate, or incomplete (e.g., use of examples). 2: Grounds are relevant, but not complete (e.g., draws arrow to representation, but does not explain its relevance; recounts steps). 3: Grounds are complete, accurate, and tied to the claim.
Warrants (support of the grounds provided; often expressed as a "second" explanation providing argumentative support; logically link the grounds to the claim). 0: No warrant is provided. 1: Attempts made to connect grounds to claim, but they are weak, inaccurate, or incomplete. 2: Grounds are explained, but not connected to the claim. 3: Grounds are explained clearly and support the claim.
Backing (facts that support the warrant). 0: No backings provided. 1: Backings support the warrant, but are weak, inaccurate, or incomplete. 2: Backings support the warrant, but still leave questions about the persuasiveness of the argument. 3: Backings support the warrant and provide closure to a convincing argument.
REFERENCES
Albert, L.R. (2000). Outside-in-inside-out: Seventh-grade students' mathematical thought processes. Educational Studies in Mathematics, 41(2), 109-141.
Andrade, H.G. (2005). Teaching with rubrics: The good, the bad, and the ugly. College Teaching, 53(1), 27-30.
Balacheff, N. (1988). Aspects of proof in pupils' practice in school mathematics. In D. Pimm (Ed.), Mathematics, teachers and children.
Balacheff, N. (1991). Benefits and limits of social interaction: The case of teaching mathematical proof. In A. Bishop, S. Mellin-Olsen, & J. Van Dormolen (Eds.), Mathematical knowledge: Its growth through teaching (pp. 175-192). Dordrecht: Kluwer Academic Publishers.
Ball, D.L., & Bass, H. (2003). Making mathematics reasonable in school. In J. Kilpatrick, W.G. Martin, & D. Schifter (Eds.), A research companion to principles and standards for school mathematics. Reston, VA: National Council of Teachers of Mathematics.
Bell, A.W. (1976). A study of pupils' proof-explanations in mathematical situations. Educational Studies in Mathematics, 7(1/2), 23-40.
Bieda, K.N., & Lepak, J. (2010). Students' use of givens when proving: Context matters. Paper presented at the Research Pre-Session of the National Council of Teachers of Mathematics Annual Meeting, San Diego, CA.
Bieda, K.N., & Lepak, J. (2012). Examples as tools for constructing justifications. Mathematics Teaching in the Middle School, 17(9), 520-523.
Boero, P., Douek, N., Morselli, F., & Pedemonte, B. (2010). Argumentation and proof: A contribution to theoretical perspectives and their classroom implementation. Paper presented at the International Group for the Psychology of Mathematics Education, Belo Horizonte, Brazil.
Carroll, W.M. (1998). Middle school students' reasoning about geometric situations. Mathematics Teaching in the Middle School, 3(6), 398-403.
CCSSI. (2010). Common Core State Standards. Washington, DC: National Governors Association Center for Best Practices & Council of Chief State School Officers. Retrieved from http://www.corestandards.org
Creswell, J.W. (2007). Qualitative inquiry and research design: Choosing among five approaches (2nd ed.). Thousand Oaks, CA: Sage Publications, Inc.
Deatline-Buchman, A., & Jitendra, A.K. (2006). Enhancing argumentative essay writing of fourth-grade students with learning disabilities. Learning Disability Quarterly, 29(1), 39-54.
Doyle, W. (1983). Academic work. Review of Educational Research, 53(2), 159-199.
Doyle, W., & Carter, K. (1984). Academic tasks in classrooms. Curriculum Inquiry, 14(2), 129-149.
Driscoll, M. (1999). Fostering algebraic thinking: A guide for teachers grades 6-10. Portsmouth, NH: Heinemann.
Forman, E.A., Larreamendy-Joerns, J., Stein, M.K., & Brown, C.A. (1998). "You're going to want to find out which and prove it": Collective argumentation in a mathematics classroom. Learning and Instruction, 8(6), 527-548.
Giannakoulias, E., Mastorides, E., Potari, D., & Zachariades, T. (2010). Studying teachers' mathematical argumentation in the context of refuting students' invalid claims. Journal of Mathematical Behavior, 29, 160-168.
Habermas, J. (2003). Truth and justification. Cambridge, MA: MIT Press.
Harel, G., & Sowder, L. (1998). Students' proof schemes. In E. Dubinsky, A. Schoenfeld, & J. Kaput (Eds.), Research in collegiate mathematics education (Vol. III, pp. 234-283). American Mathematical Society.
Harel, G., & Sowder, L. (2007). Toward comprehensive perspectives on the learning and teaching of proof. In F.K. Lester Jr. (Ed.), Second handbook of research on mathematics teaching and learning. National Council of Teachers of Mathematics.
Henningsen, M., & Stein, M.K. (1997). Mathematical tasks and student cognition: Classroom-based factors that support and inhibit high-level mathematical thinking and reasoning. Journal for Research in Mathematics Education, 28, 524-549.
Kilpatrick, J., Swafford, J., & Findell, B. (2001). Adding it up: Helping children learn mathematics. Washington, DC: National Academy Press.
Knudson, R.E. (1992). Analysis of argumentative writing at two grade levels. The Journal of Educational Research, 85(3), 169-179.
Knuth, E.J., Choppin, J.M., & Bieda, K. (2009). Middle school students' production of mathematical justifications. In D. Stylianou, E. Knuth, & M. Blanton (Eds.), Teaching and learning proof across the grades. Mahwah, NJ: Erlbaum.
Krummheuer, G. (1995). The ethnography of argumentation. In P. Cobb & H. Bauersfeld (Eds.), The emergence of mathematical meaning: Interactions in classroom cultures (pp. 229-269). Hillsdale, NJ: Erlbaum.
Lampert, M. (1990). When the problem is not the question and the solution is not the answer: Mathematical knowing and teaching. American Educational Research Journal, 27(1), 29-63.
Lappan, G., Fey, J., Fitzgerald, W., Friel, S., & Phillips, E. (2009). The Shapes of Algebra. Boston, MA: Pearson Prentice Hall.
Lappan, G., Fey, J., Fitzgerald, W., Friel, S., & Phillips, E. (Eds.). (2006a). Say It with Symbols. Boston, MA: Pearson Prentice Hall.
Lappan, G., Fey, J., Fitzgerald, W.M., Friel, S.N., & Phillips, E.D. (2006b). Connected Mathematics Project (2nd ed.). Boston, MA: Pearson.
McCann, T.M. (1989). Student argumentative writing knowledge and ability at three grade levels. Research in the Teaching of English, 23(1), 62-76.
McClain, K. (2009). When is an argument just an argument? The refinement of mathematical argumentation. In D. Stylianou, M. Blanton, & E. Knuth (Eds.), Teaching and learning proof across the grades: A K-16 perspective. New York, NY: Taylor and Francis.
McGatha, M.B., & Darcy, P. (2010). Rubrics at play: Reflect and discuss. Mathematics Teaching in the Middle School, 15(6), 328-336.
McNeill, K.L., & Krajcik, J. (2009). Synergy between teacher practices and curricular scaffolds to support students in using domain-specific and domain-general knowledge in writing arguments to explain phenomena. Journal of the Learning Sciences, 18, 416-460.
McNeill, K.L., & Krajcik, J.S. (2012). Supporting grade 5-8 students in constructing explanations in science. Boston, MA: Pearson.
Miyazaki, M. (2000). Levels of proof in lower secondary mathematics. Educational Studies in Mathematics, 41(1), 47-68.
Morgan, C. (1998). Writing mathematically: The discourse of investigation. London: Falmer Press.
Morselli, F., & Boero, P. (2009). Proving as a rational behaviour: Habermas' construct of rationality as a comprehensive frame for research on the teaching and learning of proof. Paper presented at CERME, Lyon, France.
Mueller, M.F. (2009). The co-construction of arguments by middle-school students. Journal of Mathematical Behavior, 28, 138-149.
National Council of Teachers of Mathematics. (2000). Principles and Standards for School Mathematics. Reston, VA: National Council of Teachers of Mathematics.
National Council of Teachers of Mathematics. (2009). Focus on high school mathematics: Reasoning and sense-making. Reston, VA: NCTM.
Pedemonte, B. (2007). How can the relationship between argumentation and proof be analyzed? Educational Studies in Mathematics, 66, 23-41.
Schoenfeld, A.H. (1989). Explorations of students' mathematical beliefs and behavior. Journal for Research in Mathematics Education, 20(4), 338-355.
Smith, M.S., & Stein, M.K. (1998). Reflections on practice: Selecting and creating mathematical tasks. Mathematics Teaching in the Middle School, 3(5), 344-350.
Speiser, B. (2002). How does building arguments relate to the development of understanding? A response to the last three papers. Journal of Mathematical Behavior, 21, 491-497.
Staples, M., & Bartlo, J. (2010). Justification as a learning practice: Its purposes in middle grades mathematics classrooms. CRME Publications.
Stein, M.K., Grover, B., & Henningsen, M. (1996). Building student capacity for mathematical thinking and reasoning: An analysis of mathematical tasks used in reform classrooms. American Educational Research Journal, 33(2), 455-488.
Stein, M.K., & Lane, S. (1996). Instructional tasks and the development of student capacity to think and reason: An analysis of the relationship between teaching and learning in a reform mathematics project. Educational Research and Evaluation, 2(1), 50-80.
Stein, M.K., & Smith, M.S. (1998). Mathematical tasks as a framework for reflection: From research to practice. Mathematics Teaching in the Middle School, 3(5), 268-275.
Stephan, M., & Rasmussen, C. (2002). Classroom mathematical practices in differential equations. Journal of Mathematical Behavior, 21, 459-490.
Strauss, A.L., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage Publications.
Stylianides, A.J. (2007). Proof and proving in school mathematics. Journal for Research in Mathematics Education, 38(3), 289-321.
Stylianides, A.J., & Stylianides, G.J. (2009). Proof constructions and evaluations. Educational Studies in Mathematics, 72, 237-253.
Stylianides, G.J. (2009). Reasoning-and-proving in school mathematics textbooks. Mathematical Thinking and Learning, 11(4), 258-288.
Thompson, D.R., & Senk, S.L. (1998). Using rubrics in high school mathematics courses. The Mathematics Teacher, 91(9), 786-793.
Thompson, D.R., Senk, S.L., & Johnson, G.J. (2012). Opportunities to learn reasoning and proof in high school mathematics textbooks. Journal for Research in Mathematics Education, 43(3), 253-295.
Toulmin, S., Rieke, R., & Janik, A. (1978). An introduction to reasoning. New York, NY: Macmillan Publishing Company.
Whitenack, J.W., & Knipping, N. (2002). Argumentation, instructional design theory and students' mathematical learning: A case for coordinating interpretive lenses. Journal of Mathematical Behavior, 21, 441-457.
Wood, T. (1999). Creating a context for argument in mathematics class. Journal for Research in Mathematics Education, 30(2), 171-191.
Yackel, E. (2001). Explanation, justification and argumentation in mathematics classrooms. Paper presented at the Psychology of Mathematics Education, Utrecht, Holland.
Yackel, E. (2002). What we can learn from analyzing the teacher's role in collective argumentation. Journal of Mathematical Behavior, 21.
Yackel, E., & Hanna, G. (2003). Reasoning and proof. In J. Kilpatrick, W.G. Martin, & D. Schifter (Eds.), A research companion to principles and standards for school mathematics (pp. 227-236). Reston, VA: NCTM.
Yin, R.K. (2009). Case study research: Design and methods (4th ed.). Thousand Oaks, CA: SAGE Publications, Inc.