This is to certify that the dissertation entitled Fostering Focused Online Discussions, presented by Fei Gao, has been accepted towards fulfillment of the requirements for the Ph.D. degree in Educational Psychology and Educational Technology, Department of Counseling, Educational Psychology, and Special Education, Michigan State University.

FOSTERING FOCUSED ONLINE DISCUSSIONS

By

Fei Gao

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Educational Psychology and Educational Technology

2009

ABSTRACT

Fostering Focused Online Discussions

By Fei Gao

This study compared the discussions and learning taking place in a question-embedded anchored discussion environment and a traditional threaded discussion forum. Drawing on research in asynchronous online discussion and learning from text, the anchored discussion environment was designed to encourage online discussion that is richly developed and focused on the text and the discussion questions. The anchored environment differs from the threaded forum in three ways: the discussion questions are embedded in the text, the discussion happens close to the text, and students make comments while they read the text. Discussions in the two environments differed in frequency of posts, focus, knowledge construction processes, and level of social presence. In the anchored environment, students posted more frequently, focused their discussions more on the texts and peer comments, raised more new topics, and extended more of the ideas in previous posts. In the threaded forum, in contrast, discussions focused more on the discussion questions and general issues, and included more self-reflection. Though discussions in the anchored environment were found to be more interactive, there were more affective units in the threaded forum. Students also had higher quiz scores and short essay scores in the anchored environment than when they had discussions in the threaded forum. A follow-up survey and interview were conducted to understand these differences. Implications for online instruction and future research are discussed.

Copyright by
FEI GAO
2009

ACKNOWLEDGMENTS

I owe my gratitude to the many people who have made this work possible. First, I wish to thank the members of my guidance committee. I would like to express my gratitude to my advisor, Dr. Ralph Putnam, for his excellent guidance and caring throughout my entire doctoral career. He has offered me insightful and constructive advice at different stages of my research, and taught me how to become a good thinker and writer. I would like to thank my dissertation director, Dr. Matthew J. Koehler, for carefully reading and commenting on every revision of this manuscript and for his consistent support over the past five years. He is always readily available whenever I need his help.
I am grateful to Dr. David Wong, who has high expectations of me and keeps motivating me to do more than I knew was possible. I would like to thank Dr. Dorothea Anagnostopoulos, who has provided valuable suggestions and insights for this project. I also want to take this opportunity to thank the many other professors with whom I have had the chance to work during the five years of my doctoral study, in particular Dr. Patrick Dickson and Dr. Rand Spiro. I have been very fortunate to know them and learn from them. Finally, I thank the instructors and participants in this research project. I appreciate their time and support throughout the project.

TABLE OF CONTENTS

LIST OF TABLES .......... ix
LIST OF FIGURES .......... xi

CHAPTER 1 Introduction .......... 1
Background .......... 1
Statement of the Problem .......... 2
Purpose of the Study .......... 5

CHAPTER 2 Literature Review .......... 7
Research to Inform Online Discussion .......... 7
Social Presence .......... 8
Social Knowledge Construction .......... 12
Cognitive Processes .......... 16
What's Missing? .......... 19
Learning from Texts .......... 21
Self-Explanation .......... 22
Adjunct Questions .......... 23
Summary .......... 25

CHAPTER 3 Method .......... 27
Question-Embedded Anchored Discussion Environment .......... 27
Principle 1: Close Proximity of Text and Discussion .......... 27
Principle 2: Encouraging Self-Explanation .......... 28
Principle 3: Using Adjunct Questions .......... 28
Principle 4: Opportunities for Interaction .......... 28
Research Questions .......... 30
Participants .......... 31
Setting .......... 31
Materials .......... 33
Readings .......... 33
Discussion Questions .......... 34
Quizzes .......... 34
Short Essay Questions .......... 35
Survey .......... 37
Interview .......... 38
Procedures and Design .......... 39
Measures .......... 42
Unit of Analysis .......... 43
Quantity of Discussions .......... 44
Quality of Discussions .......... 44
Learning Outcome Measurements .......... 49
Student Perceptions .......... 52

CHAPTER 4 Results .......... 56
Question 1: Quantity of Discussion .......... 56
Question 2: Depth of Discussion .......... 58
Question 3: Focus of Discussion .......... 59
Question 4: Knowledge Construction .......... 61
Question 5: Social Presence .......... 64
Question 6: Student Learning .......... 66
Quiz Scores .......... 66
Short Essay Scores .......... 68
Question 7: Student Perceptions .......... 70
Survey: Likert Questions .......... 70
Survey and Interview: Open-Ended Questions .......... 72
CHAPTER 5 Discussion .......... 76
How and Why the Focus of Discussion Differs .......... 76
Focus on Readings .......... 76
Focus on General Thoughts .......... 80
Focus on Instructor's Questions .......... 82
How and Why the Knowledge Construction Processes Differ .......... 83
Initiating New Topics .......... 83
Extending Peers' Comments .......... 83
Making Reflections .......... 84
How and Why the Level of Social Presence Differs .......... 86
Providing Affective Responses .......... 86
Providing Interactive Responses .......... 87
How and Why the Pattern of Participation Differs .......... 87
Why Students Had Higher Scores in the Anchored Environment .......... 88
Implications .......... 90
Implications for Teaching with Online Discussion .......... 90
Implications for Future Research .......... 94
Limitations .......... 98

APPENDICES .......... 100
APPENDIX A .......... 101
APPENDIX B .......... 105
APPENDIX C .......... 115
APPENDIX D .......... 118
APPENDIX E .......... 121
APPENDIX F .......... 124
APPENDIX G .......... 129
APPENDIX H .......... 136

REFERENCES .......... 146

LIST OF TABLES
Table 1 .......... 41
Table 2 .......... 51
Table 3 .......... 53
Table 4 .......... 54
Table 5 .......... 57
Table 6 .......... 59
Table 7 .......... 62
Table 8 .......... 65
Table 9 .......... 67
Table 10 .......... 68
Table 11 .......... 69
Table 12 .......... 70
Table 13 .......... 71
Table 14 .......... 73
Table 15 .......... 124
Table 16 .......... 125
Table 17 .......... 127
Table 18 .......... 129
Table 19 .......... 132
Table 20 .......... 133
Table 21 .......... 136
Table 22 .......... 137
Table 23 .......... 139
Table 24 .......... 140
Table 25 .......... 142
Table 26 .......... 144

LIST OF FIGURES
Figure 1 A question-embedded anchored discussion environment .......... 29
Figure 2 An example of true or false questions in the quizzes .......... 35
Figure 3 An example of multiple choice questions in the quizzes .......... 35
Figure 4 An example of short essay questions in the quiz .......... 37
Figure 5 Directions for participating in the anchored discussion environment .......... 40
Figure 6 Percentages of focus categories in the two environments .......... 60
Figure 7 Percentages of knowledge construction categories in the two environments .......... 63
Figure 8 Percentages of social presence categories in the two environments .......... 65
Figure 9 Examples of text-focused posts in the anchored environment .......... 80
Figure 10 An example of a general idea post in the threaded forum .......... 81

CHAPTER 1

Introduction

Background

I started to teach online in 2005, when I was a teaching assistant for a graduate-level online course. Since then, I have been a co-instructor for a variety of online courses. Online learning differs from traditional face-to-face learning in many ways. The most evident difference is that there are no direct face-to-face interactions among students or between students and instructor. I firmly believe that the quality and quantity of student-student and student-instructor interaction influence the quality of any course, online or face-to-face. One of the challenges of teaching online, therefore, is to foster meaningful online interactions among students in geographically distant locations. To achieve this goal, the asynchronous online discussion forum is one of the most effective tools, as it frees learners from time and space constraints (T. Anderson, 1996) and provides ample possibilities for communication.

In my online courses, discussion forums have been used for a variety of purposes. In some courses, discussion forums are places for students to discuss ideas in course readings; in some, they are channels for students to share and obtain resources and information; in some, they are communication centers for groups of students who work collaboratively on class projects; and in others, they are places where students get to know each other and build learning communities. Asynchronous online discussion forums play an important role in online courses and have many possible functions (Dennen, 2008).

Online discussion itself, however, does not ensure high-quality learning. My experience in using discussion forums in online courses has led me to many questions about the optimal ways of using online discussion to support learning: How should instructors structure online discussions to promote cooperative learning? What should instructors do to enhance reflective thinking, critical thinking, or problem solving in online discussions? How should instructors design online discussions differently for different pedagogical goals? For each question, there could be multiple answers. As an instructor and a scholar, I have become deeply interested in these questions, and they have become the focus of my research. This study focuses on how to achieve one particular pedagogical goal with online discussion: enhancing student comprehension and understanding of course readings and basic concepts.
In the next section, I draw on both my experience as an online instructor and current research on online discussion to describe existing problems in online discussions.

Statement of the Problem

Researchers believe that participating in asynchronous online discussion by sharing thoughts, asking questions, and providing feedback is one of the major means of supporting interaction and community building in online learning environments (DeWert, Babinski, & Jones, 2006; Jeong, 2003). Online discussion, some argue, has advantages over traditional classroom discussion. Online discussion forums potentially allow for more in-depth discussions and more thoughtful learning than is possible in traditional face-to-face settings (Hawkes, 2006; Newman, Johnson, Cochrane, & Webb, 1995). In face-to-face discussions, students may not have enough time to evaluate information thoroughly before they respond, because of the high psychological pressure to respond as soon as possible (M. G. Moore, 1993). In online discussion forums, in contrast, the entire discussion is available for perusal, providing learners with opportunities for identifying, examining, making connections among, and reflecting upon ideas (T. Anderson, 1996; Collison, Elbaum, Haavind, & Tinker, 2000).

The reality in online discussion forums, however, does not always live up to these expectations. When I use asynchronous discussion to support the comprehension and understanding of course readings and concepts, I have seen both successful and unsuccessful scenarios. There are times when enthusiastic discussions started with one student sharing an evocative experience, when discussions came alive with a thought-provoking question, and when a group of students argued passionately about ideas in a piece of reading. There are also times, however, when discussions failed to achieve the desired goal. One problem with online discussion of course readings, for example, is that it is hard for students to focus on the readings and sustain discussion about them in a threaded discussion forum. In online discussions, it is common to see the discussion digress from the central ideas, with students talking intensively about their own personal experiences and offering little interpretation of how those experiences relate to the ideas in the reading. It is not unusual to see students briefly responding to each other's posts with sentences like "I cannot agree more" or "What a great example" without connecting back to or further exploring the ideas in the readings. The lack of focus on course readings, the reluctance to explore and extend each other's ideas, and the infrequency of sustained discussions have constantly prompted me to consider how I could design the discussion environment, or moderate the discussion, in ways that make possible a focused discussion on course readings.

The same concern is widely shared by researchers who study learning in online discussion. They have reported similar problems with asynchronous online discussion forums. First is the frequent lack of learner-content interaction. The structure of a typical threaded discussion forum often fails to encourage sufficient focus on course content, despite the intent to promote learning through learner-content interaction.
Knowlton (2001), for example, found that it is easy for online discussions to digress into "isolated bits of small talk and random cyber-chatter," arguing that these digressions prevent students from focusing on course content and developing "a fresh and incisive understanding of course materials" (Knowlton, 2001, Introduction section, para. 1). A second problem is the lack of meaningful learner-learner interaction. In many threaded discussion forums, students post condensed expositions of their own ideas without attending or responding to the ideas of others (Hara, Bonk, & Angeli, 2000; Larson & Keiper, 2002). As a result, there is little building up or accumulating of ideas within a group. Third is a lack of depth in discussions. Although threaded discussion forums have the potential to foster deep-level processing of information, research suggests that students' online discussions often remain at a surface level, such as sharing or comparing information, and seldom delve to deeper levels that involve negotiating meaning, synthesizing, or applying newly acquired knowledge (Gunawardena, Lowe, & Anderson, 1997; Kanuka & Anderson, 1998; J. L. Moore & Marra, 2005).

I believe that having focused and sustained discussion on course readings helps students develop a solid understanding of important ideas and concepts in the readings. It is a prerequisite for students to talk abstractly about ideas and apply them in real-life situations. But the reality tells me that the threaded discussion forums commonly used in online courses sometimes fail to support this goal. My goal, therefore, became how to foster focused and sustained online discussion on course readings. For the first two years of my graduate study, I was interested in investigating how people learn from text and how to help them learn in a more effective way. Research on learning from text, therefore, greatly influenced my subsequent thinking about online discussion. By considering research on both learning from text and online learning, I realized that although much online learning involves reading and learning from text, the extensive body of research on how readers comprehend and learn from text has been largely untapped for its potential contributions to understanding students' engagement and learning in online environments.

Purpose of the Study

In this study, I consider research on online discussions and on how students comprehend and learn from text. I argue that the extensive body of research on learning from text, which heretofore has been largely absent in informing the design and study of online learning environments, should be considered more carefully in understanding and promoting productive online discussions. Building on this conceptual work, I developed a different online discussion environment to promote focused discussion about course readings. The present study was designed to compare the nature of discussions in the newly developed environment and a traditional threaded discussion environment, examine whether the new environment supports more focused discussion about course readings, and explain how specific features of each environment support different forms of discussion and learning. Specifically, the research questions for this study are:

1. How did the quantity of discussion in the new environment differ from that in the threaded forum?

2. How did the discussions in the new environment differ from those in the threaded forum in terms of depth of discussion?
3. How did the discussions in the new environment differ from those in the threaded forum in terms of focus of discussion?

4. How did the discussions in the new environment differ from those in the threaded forum in terms of knowledge construction processes?

5. How did the discussions in the new environment differ from those in the threaded forum in terms of level of social presence?

6. Did students learn better in one discussion environment than in the other?

7. How did students perceive the nature of discussions and learning in the two discussion environments?

CHAPTER 2

Literature Review

I turn first to research on asynchronous online discussion to explicate how learning occurs through online discussion and what approaches are commonly used to promote it. Later in this chapter, I consider research on learning from text, a body of work that precedes the recent explosion of web-based learning environments, and explain how it affects my thinking about online discussion.

Research to Inform Online Discussion

To study how learning occurs through asynchronous online discussion and how to improve its quality, researchers have focused on how people learn through participating and interacting in online discussions. It is difficult to find a clear and precise definition of participation and interaction in online learning environments. Participation could mean student behaviors that are directly visible and can be measured by quantitative methods, such as the number of posts read or written (Lipponen, Rahikainen, Lallimo, & Hakkarainen, 2003), but it has also been defined as "a complex process comprising doing, communicating, thinking, feeling and belonging, which occurs both online and offline" (Hrastinski, in press, p. 3). Interaction could refer to "only those activities when the student is in two-way contact with another person (or persons)" (Daniel & Marquis, p. 339, cited in Anderson, 2003), but it has also been defined as "a dialogue or discourse or event between two or more participants and objects which occurs synchronously and/or asynchronously mediated by response or feedback and interfaced by technology" (Muirhead & Juwah, 2004, p. 13).

Even though how online participation and interaction support learning has not been studied directly, the assumption that active participation and interaction are important for learning has been widely recognized and supported by empirical studies (Hiltz, Coppola, Rotter, Turoff, & Benbunan-Fich, 2000). Morris, Finnegan, and Wu (2005), for instance, documented the frequency and duration of student participation (with four frequency variables, such as the number of posts, and four duration variables, such as time spent viewing content pages) and conducted a multiple regression analysis to evaluate how well the participation measures predicted student learning, measured as final grade. Student participation measures accounted for approximately 31% of the variability in achievement. Despite the close relationship of participation, interaction, and learning, it is crucial to keep in mind that "increased learner interaction is not an inherent or self-evidently positive educational goal" (May, 1993, cited in Rovai, 2007, p. 81).
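To make concrete the kind of analysis Morris et al. (2005) conducted, the sketch below fits an ordinary least squares regression of final grades on a set of participation measures and computes R-squared, the proportion of variability in grades that the measures account for. All data, variable names, and weights here are synthetic and purely illustrative; they are not the authors' data or code.

import numpy as np

# Illustrative only: synthetic scores for 120 students, standing in for
# the four frequency and four duration participation variables.
rng = np.random.default_rng(0)
n_students = 120
participation = rng.normal(size=(n_students, 8))
assumed_weights = np.array([0.4, 0.2, 0.1, 0.0, 0.3, 0.1, 0.0, 0.0])
grades = participation @ assumed_weights + rng.normal(scale=1.0, size=n_students)

# Ordinary least squares with an intercept column.
design = np.column_stack([np.ones(n_students), participation])
beta, *_ = np.linalg.lstsq(design, grades, rcond=None)

# R-squared: the share of variability in final grades explained by the
# participation measures (Morris et al. report roughly 31%).
residuals = grades - design @ beta
r_squared = 1 - residuals.var() / grades.var()
print(f"R-squared = {r_squared:.2f}")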
To understand the relations among participation, interaction, and learning, researchers have investigated three major aspects of how learners participate and interact in online discussions: (a) social presence, (b) social knowledge construction, and (c) cognitive processes.

Social Presence

Defining social presence.

Social learning theorists believe that learning occurs when learners participate in social activities and interact with others (Lave & Wenger, 1991; Wenger, 1998). It is not surprising, therefore, that one sizable line of online learning research has focused on conceptualizing the nature of social interactions and the relationship between online interaction and learning (Wallace, 2003). This work has argued for the importance of a number of characteristics of online learning, such as social presence. Short et al. (1976), among the first scholars to study the concept of social presence, defined it as "the degree of salience of the other person in a mediated interaction and the consequent salience of the interpersonal interaction" (p. 65). Based on this definition, social presence was measured by four items: (a) personal/impersonal; (b) sensitive/insensitive; (c) warm/cold; and (d) sociable/unsociable. With the development of computer-mediated communication (CMC), social presence has developed from a conceptualization of a property of a communication medium into a psychological variable that reflects "the subjective experience of closeness and connectedness in mediated communication" (Bente, Rüggenberg, Krämer, & Eschenburg, 2008). For example, Tu and McIsaac defined social presence as the awareness of other people and their involvement in the communication process (Tu & McIsaac, 2002), and Garrison et al. defined it as "the ability of participants in the Community of Inquiry to project their personal characteristics into the community" (Garrison, Anderson, & Archer, 2000, p. 89). While definitions of social presence vary, a few common themes run across them: (a) co-presence, including a sense of sensory awareness of the embodied other and mutual awareness; (b) psychological involvement, meaning perceived access to another intelligence, salience of the other, and mutual understanding; and (c) behavioral engagement, implying the interdependent, multi-channel exchange of behaviors (Biocca, Harms, & Burgoon, 2001). Based on these key themes, social presence is commonly measured along the following dimensions: (a) perceived social richness of the medium; (b) involvement, immediacy, and intimacy in interpersonal communication; (c) social judgments of others' communication ability; and (d) behavioral measures, including verbal markers and non-verbal indicators (Biocca et al., 2001).

For measuring social presence in asynchronous text-based online discussion, Rourke and colleagues (2001a) developed a set of categories and indicators for analyzing the level of social presence in discussion transcripts. Their three categories of communicative responses contributing to social presence were (a) affective responses, which express the emotion, feelings, and mood of the participants; (b) interactive responses, which suggest a willingness to maintain a sustained relationship; and (c) cohesive responses, which indicate a willingness to build and sustain a sense of group commitment. Social presence has been documented as one of the most important factors in distance education (McIsaac & Gunawardena, 1996; Tu, 2000).
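To illustrate how a category-and-indicator scheme such as Rourke et al.'s (2001a) is applied to transcripts, the sketch below tags discussion units with candidate social presence categories. The indicator keywords are hypothetical stand-ins: in actual studies, trained human raters code units against much richer indicator definitions, and no simple keyword match would suffice.

# Hypothetical surface cues for each of the three categories; real coding
# uses trained raters, not keyword matching.
INDICATORS = {
    "affective": ["!", "feel", "glad", "enjoy"],
    "interactive": ["you said", "agree", "your post"],
    "cohesive": ["we", "our group", "everyone"],
}

def tag_unit(unit):
    """Return every social presence category whose cues appear in a unit."""
    lowered = unit.lower()
    matches = [category for category, cues in INDICATORS.items()
               if any(cue in lowered for cue in cues)]
    return matches or ["none"]

for unit in ["I agree with what you said about feedback!",
             "Our group could pull these ideas together."]:
    print(tag_unit(unit), "-", unit)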
Though there is little direct evidence on the influence of social presence on learning, research suggests that the strength of a learning community and the closeness of personal relationships within the community are positively correlated with the frequency and quality of interactions among participants (Rovai, 2003, 2007), which, in turn, may affect student learning performance (Lomicka & Lord, 2007). This is probably because social presence promotes student engagement in communicative learning (Polhemus, Shih, Swan, & Richardson, 2000; So & Brush, 2008). According to Gunawardena and Zittle (1997), social presence is an important predictor of student satisfaction with learning. A study conducted by Richardson and Swan (2003) showed that students with high perceptions of social presence scored high in terms of perceived learning and perceived satisfaction with the instructor. Rourke, Anderson, Garrison, and Archer (2001a) emphasized the importance of the level of social presence in an online community, arguing that social presence supports sustained learning and critical thinking in a community of learners.

Increasing social presence.

As discussed above, this perspective suggests that students will get more involved and engaged in online interactions if they feel a higher level of social presence. Some developers have therefore focused on constructing a social community that increases perceived social presence (Bielman, Putney, & Strudler, 2003; Gunawardena, 1995; Palloff & Pratt, 1999). Most research in this area has focused on identifying factors that impact the sense of online community, such as the level of social presence or the size of a community; little research at this point has systematically examined the effects of specific approaches on increasing the sense of community and engagement. In working toward the goal of a supportive community, developers have relied on providing facilitation and moderation as well as structuring appropriate online activities. Winograd (2000) examined the effects of a trained moderator in online discussion, concluding that the use of moderation techniques allowed the experimental group to form a community based on camaraderie, support, and warmth, and that the experimental group contributed far more posts than the control groups. Mäkitalo, Häkkinen, Leinonen, and Järvelä (2002) explored how students establish and maintain common ground in online discussions, arguing that showing evidence of understanding, by providing written feedback and showing support to peers in replies, is essential for establishing common ground in terms of shared mutual understanding, knowledge, beliefs, and assumptions. From a review of the literature, Rovai (2002a) suggested that instructors teaching at a distance may promote a sense of community by attending to seven factors: (a) transactional distance, (b) social presence, (c) social equality, (d) small group activities, (e) group facilitation, (f) teaching style and learning stage, and (g) community size. Other researchers have focused on the effects of media on people's perception of social presence. For example, Bente et al. (2008) compared four communication modes in terms of their effects on perceived social presence, (a) text chat, (b) audio, (c) audio and video, and (d) avatar, and found that the text chat group had significantly lower perceived social presence than all the other groups.

Social Knowledge Construction

Defining social knowledge construction.
As Cobb argues, the "sociocultural perspective informs theories of the conditions for the possibility of learning, whereas theories developed from the constructivist perspective focus on what students learn and the processes by which they do so" (Cobb, 1994, p. 13). Social constructivists believe knowledge exists not only in individual minds but also in discourse among individuals (Vygotsky, 1978). Researchers who take the social constructivist perspective have focused more explicitly on certain interactions in direct support of the collective construction of knowledge. An example of such interaction is what Moore defined as dialogue. As M. G. Moore (1993) pointed out, such dialogue is more than interaction, as it is "purposeful" and "constructive": each participant "builds on the contributions of the other party or parties," and each is a "contributor" towards improved understanding (p. 24). Scardamalia and Bereiter (2003a) define knowledge building as "... the production and continual improvement of ideas of value to a community, through means that increase the likelihood that what the community accomplishes will be greater than the sum of individual contributions and part of broader cultural efforts" (p. 1371). Based on the social constructivist framework of learning, online learning environments should provide opportunities for students to articulate and reflect on course content, to analyze, discuss, and negotiate meaning with others, to build upon each other's contributions, and to apply the knowledge to real-life situations.

To understand the nature of knowledge construction processes in asynchronous discussions, researchers go beyond simple analysis of rates of student participation and emphasize the quality and processes of learning demonstrated in student online posts. This line of work has focused on the interactions of groups as wholes and how those interactions support group knowledge construction. It also assumes a particular set of group activities or functions (e.g., discovery, sharing) believed to foster that joint knowledge construction. A number of researchers have observed and measured the patterns of knowledge construction in asynchronous online discussions (Arvaja, Salovaara, Häkkinen, & Järvelä, 2007; Gunawardena et al., 1997; Hara et al., 2000; Pena-Shaff & Nicholls, 2004; Schellens & Valcke, 2005; Veerman & Veldhuis-Diermanse, 2001; Zhu, 1996). For example, Gunawardena et al. (1997) identified student posts reflecting five stages of co-construction of knowledge: (a) "sharing/comparing of information"; (b) "discovery and exploration of dissonance or inconsistency among ideas, concepts or statements"; (c) "negotiation of meaning/co-construction of knowledge"; (d) "testing and modification of proposed synthesis or co-construction"; and (e) "agreement statement(s)/application of newly constructed meaning" (p. 414). Pena-Shaff and Nicholls (2004) developed an instrument with 11 categories, such as question, reply, clarification, and reflection, to capture knowledge construction processes. Zhu (1996) analyzed student posts by identifying types of posts, such as information sharing, reflecting, and scaffolding. These studies suggested unanimously that dialogical processes of meaning construction are not as common as expected in asynchronous online discussions, with elaboration and clarification dominating the majority of student posts.

Increasing interactions for knowledge construction.
Under this perspective, to promote learning through online discussion is to increase the amount and quality of interaction for knowledge construction, rather than simply to create a sense of community. One approach has been to teach students ways of interacting by providing explicit expectations and guidelines. For example, Gilbert and Dabbagh (2005) tried three types of structures in an online course: (a) offering explicit guidelines on how to facilitate the discussion, (b) offering rubrics on how the discussion would be evaluated, and (c) offering posting protocols, such as limiting the length of a post and mandating reading citations. They found that certain elements of structure, such as explicit facilitator guidelines and evaluation rubrics, had a positive impact on the online construction of knowledge.

A second approach has been to structure the discussion activities. Kanuka, Rourke, and Laflamme (2006) studied the relative influence of five discussion activities on the quality of students' online discussions: (a) nominal group technique, where students are asked to generate and prioritize their ideas about a solution to a well-formed problem; (b) debate; (c) invited expert; (d) WebQuest; and (e) reflective deliberation, where students are provided with opportunities to reflect on the abstract material presented in academic settings and to make it relevant to their own worlds. For each activity type, the researchers devised clear role definitions and responsibilities for the instructor and the students, rubrics for student assessment, and specific learning outcomes. They found that students posted a higher proportion and number of messages reflective of the highest levels of cognitive presence when they engaged in the WebQuest and debate activities. In another study, LeBaron and Miller (2005) reported the effect of role play in online discussion, where each participant of the role-playing team assumed a different role and participated in a synchronous conversation and an asynchronous threaded discussion about the issues and challenges associated with each role. They concluded that role play might be a discussion activity that helps to ensure interactions among students, to promote purposeful peer dialogue, and to encourage the construction of knowledge in online learning environments.

Another way to structure discussion activities has been to require students to take a more active role in discussions. Rourke and Anderson (2002) studied the effects of asking students to lead discussions. Students perceived the discussions led by their peers as more structured, more fluid, more responsive, and more interesting than those led by the instructor, even though there was little difference in the quality of discussion as assessed by the researchers. A study by Seo (2007) similarly found that when discussions were moderated by a peer, students responded to messages more actively and engaged in more in-depth discussions. Ertmer et al. (2007) examined the effect of using peer feedback on posts to increase their quality. The feedback was based on Bloom's taxonomy (Bloom, Englehart, Furst, Hill, & Krathwohl, 1956) and distinguished between lower-order (knowledge, comprehension, and application) and higher-order (analysis, synthesis, and evaluation) contributions. The goal was to increase the amount of higher-order thinking evident in students' posts.
Although the quality of students' posts did not improve during the course, students reported through surveys that the peer feedback enhanced their learning and the quality of the online discussion.

A third approach to increasing the amount and quality of online discussion has been for instructors to use a set of facilitative techniques. Beaudin (1999) looked at how instructors could interact with learners in ways that keep discussions on topic. By surveying 35 online instructors, Beaudin (1999) identified several techniques: (a) designing questions that specifically elicit on-topic discussion, (b) providing guidelines to help learners prepare on-topic responses, (c) rewording the original question when responses are going in the wrong direction, and (d) providing discussion summaries on a regular basis. Specific questions and guidelines provide the basis and procedures for knowledge construction, rewording the question helps redirect the knowledge construction process toward the targeted topic, and summaries are crucial for fruitful interaction.

Cognitive Processes

Defining cognitive processes.

In contrast, some researchers have viewed interactions with others as providing opportunities for individuals to engage in particular cognitive processes. From this perspective, the group is an important site for individuals to interact, but learning is assumed to take place because of the thought processes in which individual learners engage. Henri's (1992) multi-dimensional model, for example, specified cognitive skills (elementary clarification, in-depth clarification, inference, judgment, and strategies) as represented in online posts, taking the occurrence of such cognitive processes as evidence that learning was taking place. Newman, Johnson, Cochrane, and Webb (1995), building upon Henri (1992) and other researchers' work, identified particular kinds of critical thinking processes, such as linking ideas, justification, and critical assessment, and looked for evidence of these processes in the postings of individuals. Their analysis showed that, compared to face-to-face discussions, asynchronous online discussions had more thought-out comments and more linking between ideas, but fewer creative ideas. Järvelä and Häkkinen (2000) studied different levels of online discussions using perspective-taking theory. Discussions were coded into one of five stages: egocentric, subjective role taking, reciprocal perspective taking, mutual perspective taking, and societal-symbolic perspective taking. They found that the stage of perspective taking in online discussion was generally low, and none reached the highest stage, societal-symbolic perspective taking. A model that ideally represents critical thinking processes in computer conferencing is the one developed by Garrison, Anderson, and Archer (2000, 2001). In their model, critical inquiry is composed of four sequential stages: triggering event, exploration, integration, and resolution. The model captures how individual learners construct and confirm meaning through sustained reflection and discourse in a community of inquiry (Garrison et al., 2001).

Encouraging cognitive processes.

A third goal for improving online discussions emphasizes encouraging particular cognitive processes. This goal is realized through using specific discussion environments or teaching particular discussion strategies.
A variety of online discussion environments have been designed to scaffold the ways students participate, respond, and interact in the discussion. For example, in a constrained discussion environment, participants must label each of their posts using a predefined set of message types (Cho & Jonassen, 2002; J. L. Moore & Marra, 2005). In Guzdial (2000), students chose for each post a post type or classification, such as new theory or evidence. Knowledge Forum (previously called CSILE) (Scardamalia & Bereiter, 2003b) supports both the creation of notes and the ways they are displayed, linked, and made objects of further work. The rationale is that a prompt suggesting a specific type of post will support students' metacognitive thinking, helping them engage in certain cognitive processes (Scardamalia & Bereiter, 1994). Similarly, Nussbaum and colleagues (2004) encouraged counter-argument in online discussion by asking students to choose such note starters as "on the opposite side" or "I need to understand," which increased the frequency of disagreement and student willingness to consider other points of view. Another type of discussion environment supports graphical representations of different viewpoints and their relations, such as concept maps or tables (Ertl, Kopp, & Mandl, 2008; Suthers, Vatrapu, Medina, Joseph, & Dwyer, 2006; Suthers, Weiner, Connelly, & Paolucci, 1995). These studies concluded that students benefit from co-constructing graphical representations because the processes of construction may prompt the externalization of particular cognitive processes, such as linking new claims to an existing argument graph or filling in the cells of a table (Andriessen, Baker, & Suthers, 2003).

A different approach has been to teach students specific cognitive strategies or to provide students with specific goals. In online discussions studied by Choi, Land, and Turgeon (2005), the instructor provided guidelines for generating three types of questions to promote peer interaction and enhance the quality of online discussion: (a) clarification or elaboration questions, (b) counter-arguments, and (c) context- or perspective-oriented questions. This intervention resulted in an increase in the frequency of questioning but did not affect the quality of the discussion. Similarly, Yang, Newby, and Bill (2005) had the instructor teach and model Socratic questioning, which students then used in their online discussions. This approach resulted in more posts that demonstrated critical thinking. Nussbaum (2005) explored the effects of goal instructions on students' online discussion. Students were instructed to achieve one general goal (to persuade, to explore, or none) and one specific goal (to generate reasons, to provide counterarguments, or none) while participating in the discussion. Results showed that the goal of generating reasons resulted in deeper and more contingent arguments, and the goal of persuading led to more adversarial arguments.

Finally, instructors and moderators can encourage certain cognitive processes. Collison et al. (2000), for example, argued for the critical role of effective moderation in online discussion. They identified several moderating strategies to help learners engage in critical thinking, including (a) sharpening the focus of discussion by identifying the direction of a dialogue, sorting ideas for relevance, and focusing on key points; and (b) deepening the dialogue by questioning, making connections, and honoring multiple perspectives.
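As a concrete illustration of the constrained discussion environments described above, the sketch below accepts a post only when it is labeled with one of a predefined set of note starters, loosely following Nussbaum et al. (2004). The starter labels and the function are invented for illustration and do not describe any particular system's interface.

# Hypothetical note starters a constrained environment might enforce.
NOTE_STARTERS = (
    "My theory is",
    "On the opposite side",
    "I need to understand",
    "New evidence",
)

def accept_post(author, starter, body):
    """Accept a post only if it is labeled with a predefined note starter."""
    if starter not in NOTE_STARTERS:
        raise ValueError("Post must be labeled with one of: %s" % (NOTE_STARTERS,))
    return {"author": author, "starter": starter, "body": body}

post = accept_post("student_a", "On the opposite side",
                   "the same evidence could support the rival explanation.")
print(post["starter"] + ", " + post["body"])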
Berge and Muilenburg (2002) focused on the role of the instructor's questions in promoting online discussion, developing a framework for designing questions that start an online discussion and keep it going.

What's Missing?

In sum, based on the different conceptualizations of how learners learn from online discussion, the various approaches to improving its quality have been aimed primarily at one of the following goals: (a) increasing the sense of social community; (b) increasing the amount and quality of interaction for knowledge construction; and (c) engaging students in certain cognitive processes. To support these goals, researchers have focused on different aspects of the online learning environment: (a) the structure and features of online discussion tools, (b) the online activities in which learners engage, (c) teaching and modeling particular ways of interacting, and (d) the instructor's facilitation and moderation.

Across these studies, however, researchers have paid scant attention to how learners interact with the course materials, another important aspect of learning. Online discussion is commonly used as a way of increasing learners' understanding of text, and one often implicit and little-studied goal of such discussions is to facilitate students' interaction with the text. Moore (1989) defined three types of interactions that are crucial to the quality of distance learning: learner-content interaction, learner-instructor interaction, and learner-learner interaction. Learner-instructor and learner-learner interaction have been intensively investigated in online discussion (Swan, 2001). How learners interact with the content during discussion, such as the text read for the course, and how to engage learners in a focused discussion on the text to better understand its ideas, has remained largely unexplored in research on online discussion.

An exception is a group of researchers studying anchored discussion forums, who believe that sustained on-topic discussion is essential for learning (Guzdial & Turns, 2000). In an anchored discussion forum, the text and the discussion are displayed in a linked yet independent manner (van der Pol, Admiraal, & Simons, 2006). Users can select a portion of text and type in a comment while they are reading an online document. The comments are shown alongside the document in a separate frame, with a visual indication of the associated text, so that all the other users can read and respond. This allows discussion to be anchored to a specific piece of content. WebAnn (Brush, Bargeron, Grudin, Borning, & Gupta, 2002; Marshall & Brush, 2004) is one such system supporting anchored discussion of online documents. When comparing the discussion in WebAnn with that in EPost, a typical discussion board system, Brush and her colleagues found that there was more discussion in WebAnn, and students perceived the discussion in WebAnn as more focused on the text and more thoughtful. Van der Pol et al. (2006) compared an anchored discussion forum with threaded discussion forums in Blackboard, directly investigating the quality of discussion by analyzing students' posts. They found that discussion in the anchored forum referred more frequently to the text and was more focused and more communicatively efficient. Guzdial and Turns' (2000) CaMILE system uses a similar concept. When students create notes on a page in CaMILE, they can choose to link to files, web pages, or other media.
The selected file is uploaded to the CaMILE server and connected to the note, which serves as an anchor for subsequent discussion. Guzdial and Turns reported similar findings: discussion in CaMILE was more sustained, more focused on class learning topics, and involved broader participation. This is an important research direction, and this study is an effort to build upon and extend this line of work. Researchers should think about learning in online discussion not only from the perspective of person-to-person interaction, but also from the perspective of person-to-text interaction. To facilitate this line of thinking, I consider in the next section research on learning from text, which examines how we learn from text and how to improve that learning.

Learning from Texts

As mentioned previously, several problems in online discussion forums are (a) the frequent lack of learner-content interaction, (b) the lack of meaningful learner-learner interaction on the course content, and (c) the lack of depth in discussions. Interestingly, similar concerns have been shared by researchers studying how people learn from texts. Researchers studying reading comprehension have noticed that readers tend to interact with texts rather superficially. Readers often fail to relate new information to their prior knowledge and overlook inconsistencies (Schank, 1986). The consequence of this mindlessness is "less complete understanding, learning, and memory" (Pressley, Wood, Woloshyn, & Martin, 1992, p. 92). Researchers have developed many strategies to address these problems, which may inform discussion-based approaches as well. They have found that teaching readers to use certain strategies, such as associating new ideas with prior knowledge, making inferences, questioning, interpreting, or paraphrasing the text, can help readers comprehend better (Block & Pressley, 2001; Duke & Pearson, 2002; Palincsar & Brown, 1984; Pressley, 2000; Pressley & Afflerbach, 1995). The extensive body of research on how readers comprehend and learn from text, however, has been largely untapped for its potential contributions to understanding students' engagement and learning in online discussions.

Self-Explanation

One of these successful reading strategies is self-explanation, the process of explaining the text to oneself while reading. Research suggests that readers who explain the text, either spontaneously or when prompted to do so, understand more from the text and construct better mental models of the content than readers who do not engage in self-explanation (Chi, de Leeuw, Chiu, & LaVancher, 1994; Collins, Brown, & Larkin, 1980; Magliano, Trabasso, & Graesser, 1999; Schank, 1986; VanLehn, Jones, & Chi, 1992). Some readers, however, do not spontaneously self-explain while reading, or self-explain poorly when prompted to do so. Self-explanation is therefore usually supported by other reading strategies, such as questioning, interpreting, and relating ideas to prior knowledge (McNamara, 2004).

Several reasons may account for why self-explanation improves comprehension and learning. First, self-explanation prompts learners to process information more actively. While explaining, the learner actively engages in making sense of the text and constructing meaning (Block & Pressley, 2001; Duke & Pearson, 2002). Second, it encourages learners to self-monitor their comprehension and learning.
Some researchers have focused on the significant role of metacognition in productive reading and learning, arguing that in order to learn effectively, learners need to know how to check, control, and monitor their deliberate attempts to learn or solve problems (Baker & Brown, 1984; Brown, 1980). Third, self-explanation provokes learners to consciously make connections between what they are reading and their prior knowledge. According to schema theory, readers' prior knowledge governs their understanding of the text (Adams & Collins, 1979; R. C. Anderson, Spiro, & Anderson, 1978; Rumelhart, 1980), so strategies that help activate readers' prior knowledge will promote learning.

Adjunct Questions

Another way to promote active comprehension is to insert adjunct questions into the text. Readers are asked to read through the text and respond to the embedded questions. Research on adjunct questions flourished in the 1970s (R. C. Anderson & Biddle, 1975). The cognitive level of adjunct questions varies from factual questions, which ask the reader to "repeat or recognize some information exactly as it was presented in instruction" (Andre, 1979, p. 282), to higher-order questions, which ask the reader to "mentally manipulate bits of information previously learned to create an answer, or to support an answer with logically reasoned evidence" (Winne, 1979, p. 14).

In adjunct question studies, three types of posttest were employed to examine the effect of adjunct questions: factual recall of the passage, answering the same questions inserted in the text, and answering new questions involving the transfer of what was read. These studies revealed that higher-order adjunct questions affect both productive and reproductive knowledge when they are placed after, instead of before, the part of the text being questioned (R. C. Anderson & Biddle, 1975; Rickards & Divesta, 1974; Shavelson, Berliner, Ravitch, & Loeding, 1974; Watts & Anderson, 1971). Researchers suggest that higher-order adjunct questions have two possible functions. First, they may direct the learner's attention to more of the information; having attended to more, the learner can therefore recall more. Second, adjunct questions may prompt learners to process the information at a deeper level. It is argued that these questions lead learners to set up complex strategies or programs for processing the information in the text. As the strategies employed determine the nature of the representation of knowledge in the mind (Andre, 1979), questions that "not only cause memory search, but also cause some sort of reorganization of memory traces and associations" will surely foster deeper learning (Carroll, 1971).

Studies on adjunct questions suggest that making good use of higher-order questions in online learning environments may help direct students' attention to important information and prompt them to process it at a deeper level. In fact, both adjunct questions and reading strategies are aimed at promoting more active and deeper mental processing of the text. The difference between them is that reading strategies expect readers to prompt or regulate themselves to process the text actively, whereas adjunct questions offer outside prompts to help learners actively process the text.
One particularly interesting finding reported in adjunct question research is that, for both factual and higher-level questions, the closer a question is placed to the part of the text it asks about, the higher the learners' performance when answering related questions in the posttest (R. C. Anderson & Biddle, 1975; Rickards & Divesta, 1974). The explanation is that placing higher-level questions relatively far from the relevant part of the text may overtax the subjects' processing capacity, that is, produce excess cognitive strain (Bruner, Goodnow, & Austin, 1956; Rickards & Divesta, 1974), whereas partitioning information into a certain level of aggregation can reduce cognitive strain. In threaded discussion forums, however, questions are often posted for learners to discuss after they have finished reading the whole text. In such cases, learners may be confronted with much more information than they are able to process at one time.

Summary

The research and development efforts in online learning show that to improve the quality of online discussions, designers and instructors can manipulate a number of features. These fall into four clusters: (a) changing the structure of the online environment; (b) changing the activities in which learners engage; (c) modeling or teaching strategies, expectations, and ways of interacting; and (d) changing the way facilitators interact during instruction.

The learning-from-text research reminds us of other possible structural changes to help learners process texts in certain ways, which in turn may prove helpful in shaping discussions. For example, inserting adjunct questions into the text can direct students' attention to important issues in the text. The reading research also points to strategies important for learning, in particular, strategies for focusing on the text and critiquing the ideas in it. For example, asking students to self-explain the text may engage them in more text-focused, substantive discussions.

CHAPTER 3

Method

In Chapter 2, I reviewed research on online discussions. I also presented research on learning from text, including self-explanation and adjunct questions research. I argued that bringing in the research on learning from text could expand and extend our current thinking on online discussion research. In this chapter, I demonstrate how I drew on perspectives from research on online discussion and research on learning from text to develop an approach to improving online discussion. More specifically, I designed an online discussion environment to foster focused discussion on text, drawing especially on the self-explanation and adjunct questions literatures in reading research. I then designed and implemented a research study that investigated the effectiveness of this new approach.

Question-Embedded Anchored Discussion Environment

To promote more focused and deeper learning of course content than is typically afforded by traditional threaded discussion forums, I designed a new discussion environment by drawing on research on online discussion and research on learning from text. Four design principles guided the design of this new environment.

Principle 1: Close Proximity of Text and Discussion

Student posts should be entered and displayed close to the particular texts being studied. The close proximity of text and posts should help students focus attention on the text and facilitate the transition between reading the text and discussing it.
Principle 2: Encouraging Self-Explanation

Students should be expected to explain and comment on the text as they read. The process of recording and clarifying their thoughts as they read serves as a form of self-explanation intended to improve comprehension and learning (Block & Pressley, 2001; Duke & Pearson, 2002; Pressley & Afflerbach, 1995).

Principle 3: Using Adjunct Questions

Higher-order adjunct questions should be used to direct student attention to important issues in the text and to prompt students to process the information at a deeper level (R. C. Anderson & Biddle, 1975; Rickards & Divesta, 1974).

Principle 4: Opportunities for Interaction

Students should be expected to interact with each other by responding to and commenting on each other's posts. The importance of interaction has been widely supported by the literature on online discussions (Hrastinski, 2008; Tallent-Runnels et al., 2006).

Anchored discussion forums (Guzdial & Turns, 2000; van der Pol et al., 2006), reviewed in Chapter 2, are one kind of tool that meets the first two principles. In my study, I first developed an anchored discussion environment, like the systems reported by Brush et al. (2002) and van der Pol et al. (2006), satisfying the first two design principles. To meet Design Principle 3, I inserted discussion questions into the text. I developed the discussion environment within the online collaborative text-editing tool Google Docs (http://docs.google.com; see Figure 1).

Figure 1. A question-embedded anchored discussion environment.

In the anchored environment, the focal assigned text is presented in a column, with questions to promote student thinking and discussion embedded in the text. To the right of the text is a second column in which students write answers to the embedded questions and comments on the text. Comments might include (a) asking a question related to the text; (b) making connections to prior experiences or other readings; or (c) making interpretations and judgments about the text. Students can also see the comments and responses posted by other students and are encouraged to respond to them (Design Principle 4). The environment thus provides opportunities for students to reflect upon and discuss with others both the embedded questions and the specific text being read.

This newly designed anchored environment differs from a threaded forum in (a) where the discussion questions are placed, (b) when the discussion takes place, and (c) where the discussion takes place. First, in threaded forums, discussion questions are posed in the forums; in anchored environments, they are inserted as adjunct questions in the text. Second, in threaded forums, students typically start writing comments after they have read the complete focal text; in anchored environments, they start writing comments while reading the text. Third, in threaded forums, students typically contribute to the discussion without viewing the focal text at the same time; in anchored environments, students can see the focal text as they discuss it. I expect these differences between anchored environments and threaded forums to result in differences in the nature and quality of discussion. More specifically, I expect student discussion in the anchored environment to be centered around and led by the text and questions, and to be more focused on the text and course content.

Research Questions

Among studies on anchored discussion, few have provided comprehensive measures of the discussions occurring in anchored environments or measured learning outcomes. Therefore, research is needed to examine in detail the nature of the discussions and the learning taking place in the anchored environment as compared to the threaded forum. The current study builds upon the previous research on anchored discussion and further explores the following questions.

1. How did the quantity of discussion in the anchored environment differ from that in the threaded forum?
2. How did the discussions in the anchored environment differ from those in the threaded forum in terms of depth of discussion?
3. How did the discussions in the anchored environment differ from those in the threaded forum in terms of focus of discussion?
4. How did the discussions in the anchored environment differ from those in the threaded forum in terms of knowledge construction processes?
5. How did the discussions in the anchored environment differ from those in the threaded forum in terms of level of social presence?
6. Did students learn better in one discussion environment than in the other?
7. How did students perceive the nature of discussions and learning in the two discussion environments?

To address the research questions, I implemented a study in an undergraduate online course, where students participated in online discussions about course readings in both the anchored environment and the threaded forum.

Participants

Participants were 34 undergraduate students enrolled in two different sections of an online undergraduate course at a large Midwestern university. At the beginning of the course, the instructors provided a brief introduction to the research to all students in the two sections. Of the 36 students in the course, 34 agreed to participate in the study: 16 students from one section and 18 from the other. One was Asian, two were African American, and the rest were Caucasian; 5 were male and 29 were female. Regarding prior online course experience, 6 had taken no online course before this one, 16 had taken 1-3 online courses, and 12 had taken more than 3 online courses.

Setting

All participants were enrolled in two sections of an online undergraduate course on learning theories. The two sections were offered in a course management system, Moodle (http://moodle.org), were the same in terms of course content, and were taught by the same two instructors. The only difference between the two sections was the tool used for supporting threaded discussion: one section used the threaded forum in Moodle; the other used a threaded forum supported by Facebook. The instructors were two doctoral students specializing in educational technology. They had taught online courses before and were experienced in designing and delivering online courses. Two associate professors supervised the design and delivery of the course.

The online course comprised eight two-week modules, with the first seven introducing basic learning theories and the last serving as a final wrap-up. Within each of the first seven modules, students completed course readings, discussed them, participated in learning activities, and responded to short essay questions raised by the instructors. The short essay questions usually asked the students to use what they had learned to explain a real-life phenomenon.

This course was chosen for this study for two reasons. First, one of the goals of the course was to learn about learning theories through the course readings. In each module, a learning theory was introduced, and the students needed to complete the readings to understand that theory. Second, online discussion in threaded forums was one of the major activities for facilitating student learning and understanding of the course readings. Dennen (2008) argued for the importance of describing context and situation when examining a learning experience supported by asynchronous discussion, as asynchronous discussion can be used in a variety of activities for different purposes. The instructional goals and activities of this course made it an ideal site to explore the affordances and constraints of two different discussion environments in terms of supporting discussion and learning about course readings.

Materials

The study was conducted in Module 3 and Module 4 of the course.
In both modules, one of the learning activities was to read and discuss a set of course readings. Both modules were on the theme of learning as individual cognitive processing. Each contained two topics: attention and memory in Module 3, and schema and stereotype in Module 4. Under each topic, the students read two pieces explaining the basic concepts or theories. All the readings were online articles selected by the instructors.

Readings

The readings for Module 3 were (a) Attention ("Attention," 2008), which covers the history of studying attention and current research on attention (2091 words); (b) Selective Attention and Arousal (Beneli, 1997), which introduces various models of attention (2255 words); (c) Short-Term Memory ("Short-term memory," 2008), which explains what short-term memory is and its relation to working memory (2086 words); and (d) Long-Term Memory ("Long-term memory," 2008), which introduces long-term memory and related basic concepts, including capacity and duration (1366 words). The readings for Module 4 were (a) Schemas ("Schemas," 2008), which introduces the concept of schema, its history, and its application (2301 words); (b) Schema Theory of Learning ("Schema theory of learning," 1999), which describes the basic principles of schema theory and the characteristics of schemata (558 words); (c) Learning to Make Inferences (Buehl, 2001), which lists strategies teachers can use to help students learn how to make inferences (1016 words); and (d) Stereotype (Gibson, 1999; "What is a stereotype?," 2008), which introduces the concept of stereotype and how to deconstruct stereotypes (921 words).

Discussion Questions

As Berge and Muilenburg (2002) pointed out, in online learning environments it is more important for the instructor to "ask the 'right questions' than to give the 'right answer'" because online classrooms are often characterized as discussion-oriented and collaborative (p. 184). To engage the students with the important ideas in the texts, the instructors developed two discussion questions for each reading. These were higher-order questions (Winne, 1979) intended to promote thinking about and understanding of ideas in the readings. More specifically, these questions served two major purposes: (a) to encourage students to connect the concepts in the readings to their own experiences, and (b) to invite students to think about how to use or apply the concepts in their lives. An example of a question serving the first purpose was: From your own experience, what are the factors that affect your attention? An example of a question serving the second purpose was: What are some implications of the capacity of short-term memory and chunking for teaching and learning? See Appendix A for a full list of discussion questions.

Quizzes

There were altogether four quizzes in Modules 3 and 4. When a student clicked on the link to a quiz, the Moodle course management system automatically generated a quiz by selecting 5 questions out of a bank of 10-15 questions on the topic. To pass the quiz, the student had to answer four out of the five questions correctly. If not, they were instructed to re-read the readings and come back to take the quiz again. The quiz items took the form of (a) true-or-false questions, (b) short answer questions, and (c) multiple-choice questions. Eighty percent of the answers to the quiz questions could be found directly in the required course readings.
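To make the quiz mechanics concrete, the sketch below mirrors the behavior just described: a 5-item quiz is drawn at random from a 10-15 item bank, one point is awarded per correct answer, and 4 of 5 correct is required to pass. This is an illustrative reimplementation under those stated rules, not Moodle's actual code; all names are hypothetical.

```python
import random

QUIZ_LENGTH = 5     # items drawn per quiz attempt
PASS_THRESHOLD = 4  # students had to answer 4 of 5 correctly

def generate_quiz(question_bank):
    """Draw a 5-item quiz from a topic's bank of 10-15 questions."""
    return random.sample(question_bank, QUIZ_LENGTH)

def grade_quiz(quiz, student_answers, answer_key):
    """Score one point per correct answer and check the pass rule."""
    score = sum(1 for q in quiz if student_answers.get(q) == answer_key[q])
    passed = score >= PASS_THRESHOLD
    # Students who failed re-read the readings and retook the quiz,
    # receiving a freshly sampled set of items each time.
    return score, passed
```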
Figure 2 presents an example of a true-or-false question about the definition of attention:

Example A: Attention is the tendency of organisms to orient themselves toward, or process information from, only one part of the environment with the exclusion of other parts.
Answer: True __ False __

Figure 2. An example of true-or-false questions in the quizzes.

Twenty percent of the quiz questions tested the students' ability to use the concepts appropriately. An example of such questions is presented in Figure 3, a multiple-choice question asking students to identify examples of stereotypes. (Appendix B provides examples of quiz questions on each topic.)

Example B: Which of the following statements is not an example of a stereotype? Choose one answer:
a. Movies that feature Italian American themes tend to depict Italian Americans as gangsters.
b. Rice is a staple food in Japan.
c. Boys are better at mathematics than girls.

Figure 3. An example of multiple-choice questions in the quizzes.

Short Essay Questions

After working with each of the four topics, the students wrote a short essay in response to a central question asking them to apply what they had learned from the text and the discussion to explain a real-life phenomenon (see Appendix C for all the short essay questions). Figure 4 presents an example of the short essay question on the topic of memory. The students were asked to explain why they remembered and recalled some words more easily than others in a previous memory test, identify the patterns of recall in the memory test, and use the terms and concepts from the readings to explain the phenomena.

Short Essay Question on the Topic of Memory

Now it is time to analyze the results of your own memory study. You are not graded on how well you recalled words. Instead, the focus is on why you recalled some words and not others, using the memory concepts and terms you studied. Make a 500 word post with:
1) An informative title (a key insight you had, or a question you still have).
2) In the post, type exactly what you recalled for each of the four lists (including words you recalled that weren't there).
3) Ideas about why some words were easily recalled.
4) Ideas about why some words were not easily recalled.
5) Any trends or patterns you see in your own data (or across other students' data), and ideas that you have for why those patterns exist.
6) Use terms and concepts from the readings (i.e., "encoding" and "retrieval" when possible) to show that you can talk the talk.
The above six criteria are the metric by which you are graded as well, so stick to them and you'll do fine. Some questions to ask yourself if you're stuck: What kinds of words were in each group? In some of the experiments the words belonged to a group, and in some they didn't. Another question: Did I have anything else to do besides remember the words? Sometimes having something else to do makes it harder to remember the first thing. One more question for you: How familiar was I with the words? Do I use them every day, once a week, never?

Figure 4. An example of the short essay questions.

Survey

The students completed a survey at the end of Module 4 (see Appendix D). The survey assessed student perceptions of the two different environments. More specifically, it asked the students to compare the two environments on six dimensions: (a) learning, (b) focus of discussion, (c) depth of discussion, (d) knowledge construction processes, (e) social presence, and (f) engagement.
This survey was adapted from previous research conducted by Gao and Putnam (2007), which served as a pilot for the current study. Under each dimension, Likert items asked the students how well a particular environment supported certain behaviors. For example, one survey question regarding the focus of discussion was: "How well did Google Docs (the anchored environment) support you to pay attention to specific words or concepts in the readings?" Students chose from a 5-point scale ranging from "not supportive at all" to "extremely supportive." After completing the Likert items on each dimension, the students responded to an open-ended question, "Please explain what specific features of the two environments led to your responses," to explain their ratings. For each environment, there were fourteen 5-point Likert items; altogether, there were 28 Likert items and 6 open-ended questions.

Interview

The interview protocol focused on students' experiences of having discussions in the two environments (see Appendix E). The interview served as a complementary data source to the open-ended questions in the survey, both to gauge students' perceptions of the two environments and to find out why they behaved in certain ways, since students sometimes failed to provide enough information about this in the survey. Some example questions were: "You mentioned in the survey that you didn't feel like you were very connected to your classmates in the anchored environment. What made you think so?"; "I noticed you usually posted a long first post in the threaded forum but you seldom did that in the anchored environment. Why do you think that happened?"; and "You wrote in the survey that the anchored environment made it easier for you to have focused discussions on the text. Why?"

The selection of interviewees was based on maximum variation sampling (Patton, 1990), a special kind of purposive sampling that involves purposefully selecting a widely varied group of people. By creating a sample with maximum variation, researchers are able to study a wide range of aspects of a phenomenon. Based on student performance in the discussions and their responses to the survey, all students fell into one of the following categories: (a) students who favored the anchored environment, rating it as the more supportive environment on over 60% of the Likert items; (b) students who favored the threaded forum, rating it as the more supportive environment on over 60% of the Likert items; and (c) students who had no preference between the environments, rating the two environments as equally supportive on over 60% of the Likert items. From each category, 5 students participated in the interview.

Procedures and Design

The study focused on discussions of the required readings. Before the implementation of the study, the students practiced using the anchored environment in Module 2. The instructors put one of the required Module 2 readings, Caution - Praise Can Be Dangerous (Dweck, 1999), into the anchored environment and asked the students to read and discuss it there. Instructions were given on how to make a comment in the anchored environment, and an example of what discussions in the anchored environment should look like was also provided. When the students entered the anchored environment, they saw the directions in Figure 5.
Before you start reading and commenting on the article, please BE SURE to read the example, which shows you how collaborative commenting and discussion work out in Google Docs. Here are a few suggestions on what kind of comments you could make:
• Your Insights: Help others see the significance of a particular point or the relationships between ideas. Or, help others see the weaknesses in a statement.
• Your Judgment and Rationale: State your reactions/views to certain parts of the text. Support your points of view with your reasoning, experience, or other readings.
• Your Questions: Ask others to help you understand or clarify a point, or invite a discussion with a question.
• Your Connections: Make connections to your own experiences and other readings.
• Share anything else that could contribute to others' understanding of the reading.

Figure 5. Directions for participating in the anchored discussion environment.

Below the directions was a page with two columns. The article was shown in the left column, and the right column was left empty for students to write their comments. When students had difficulty using the anchored environment, the instructors emailed them explanations of how to use it properly. The purpose of this activity was to familiarize the students with the anchored environment before the implementation of the study, so as to ensure that the results would not be affected by the students' unfamiliarity with the new environment.

The study was implemented in Module 3 and Module 4 and lasted 4 weeks. As mentioned previously, there were altogether four topics in the two modules: attention, memory, schema, and stereotype. The students spent one week learning each topic by completing a set of learning tasks and activities. In learning the topic of attention, for example, the students first watched a card trick video and a gorilla video, both of which were attention experiments demonstrating the selective attention of the human mind. The students then spent a whole week reading and discussing the two readings, Attention and Selective Attention and Arousal. At the end of the week, they completed the quiz on attention and wrote a short essay using attention ideas and terms to explain how the card trick and gorilla videos work. The other three topics had similar activities and followed the same timeline.

The students in both sections had discussions in groups of eight to nine in either the threaded forum or the anchored environment. In Module 3, the students in Section A had discussions in the anchored environment, while the students in Section B used the threaded forum. In Module 4, the discussion environments were reversed, with the students in Section B using the anchored environment and the students in Section A using the threaded forum (see Table 1).

Table 1
Research Design

                        Module 3     Module 4
Anchored environment    Section A    Section B
Threaded forum          Section B    Section A

The students in the anchored environment made comments while they were reading the texts online. The discussion questions were embedded below the relevant paragraphs.
The students in the threaded forum had discussions after completing the required online readings; similar questions were suggested at the beginning of the discussion. In both environments, the students were told that the questions were meant to help them focus on and discuss important issues in the readings, but they were not required to respond to those questions. The students were required to participate in the discussions throughout the week, and there was no specific requirement on the number of posts a student should contribute. In both environments, the students received the same instructions on what kinds of posts they were expected to contribute as they had received when discussing Caution - Praise Can Be Dangerous (Dweck, 1999) in Module 2 (see Figure 5). After the week-long discussion, the students in both sections took the quiz. The students also wrote the short essay, using what they had learned to explain a real-life problem raised by the instructors. Students' grades on both the quizzes and the short essays were assessed as learning outcomes. At the end of Module 4, students took the survey on their discussion and learning experiences in the two discussion environments. The interview was conducted at the end of the course with the selected sample of students.

Measures

This study measured the effects of the discussion environments from multiple angles. All sources of data were reviewed, analyzed, and considered together. Students' online discussions were counted in terms of number of words and number of posts, and were coded with a set of coding schemes. Students' quiz scores, number of quiz attempts, and short essay scores were analyzed to determine the quality of learning. The survey responses and the interview transcripts were used to provide further information on the nature of the discussions and students' perceptions of their experiences.

Unit of Analysis

An important first step in analyzing online discussions was to determine the unit of analysis. Five units of analysis are most commonly used: (a) the sentence unit, (b) the paragraph unit, (c) the message unit, (d) the thematic unit, and (e) the illocutionary unit (L. Rourke, T. Anderson, D. R. Garrison, & W. Archer, 2001b). There is no agreement on which is best, because the choice of unit largely depends on the nature of the discussion and the purpose of the analysis. For this study, I used the thematic unit or "unit of meaning" (Henri, 1992), because I was interested in how students talked about or elaborated on each specific idea. As pointed out by Schrire (2006), syntactic units like the sentence or paragraph may be easier to identify than thematic units, but "they are not as meaningful if the purpose is to trace the progression of ideas, for example" (p. 56). A message may contain multiple ideas, and a sentence or a paragraph may cover only an incomplete idea. The advantage of segmenting posts by unit of meaning is that it allows researchers to focus on the meanings or ideas in discussion posts, instead of being limited by syntactic factors. Therefore, all the posts were first divided into meaning units.

In this study, two coders worked together to code the data after undergoing a period of training. Rourke et al. (2001b) emphasized the importance of inter-rater reliability in content analysis research, because if different coders come to different coding decisions while coding the same content, the findings based on the content analysis will be unreliable. The two coders worked independently to segment 200 randomly selected posts out of a total of 1029 posts. The agreement was 98.6%. The discrepancies were resolved by discussion.
The percent agreement calculated in this study is higher than the minimum accepted level of 80% (Riffe, Lacy, & Fico, 1998, p. 128). Therefore, one coder continued to segment the rest of the posts. Altogether, the segmentation resulted in 394 meaning units in the threaded forum and 770 in the anchored environment. In the anchored environment, most posts consisted of single meaning units, whereas in the threaded forum one post usually consisted of multiple units.

Quantity of Discussions

Each meaning unit contributed by the students was assigned a set of numbers to indicate (a) the author of the unit (1 to 34); (b) the group the author belonged to (1 to 4); (c) the section (1 or 2); (d) the reading (1 to 8); (e) the topic (1 to 4); and (f) the discussion environment the unit belonged to, 0 for the threaded forum and 1 for the anchored environment. Two pieces of information were derived from this initial coding: (a) the total number of words and the total number of units in the two environments, and (b) the total number of words and the total number of units posted by each student in each round of the discussions.

Quality of Discussions

To measure the quality of discussions, a set of coding schemes was developed along four dimensions: (1) depth, (2) focus, (3) knowledge construction processes, and (4) social presence. In online discussion research, coding schemes can be derived deductively from the hypotheses (a top-down method) or can arise from the actual data (a bottom-up method) (Schrire, 2006). The current study combined the two methods. Coding schemes developed and used by previous researchers served as the basis for developing new schemes. Then, to capture the nature of the discussions and address the research questions of this study, I modified those schemes and created new ones by taking into consideration both the characteristics of the data and the purpose of this study. The units were first coded as nontask-related or task-related; nontask-related units were excluded from further analysis because of the small number of units of this type.

Depth. To determine the depth of discussion, I developed a coding scheme based on Henri's (1992) analytical model of information processing (see Appendix F), which has been used by a number of researchers to analyze online discussions (Hara et al., 2000; Rose, 2004). When I looked at the discussions, I found that Henri's two-level analysis, which codes a unit as either surface processing or in-depth processing, failed to differentiate the many levels of cognitive processing in online discussions. For example, many posts fell between surface processing and in-depth processing, and it was hard to code them into either category. To solve this problem, I developed a new coding scheme that specifies five levels of cognitive processing. The lowest level, level 0, indicates that no idea, argument, judgment, inference, hypothesis, newly proposed question, or solution is involved; the highest level, level 4, indicates that the meaning unit shows the connectedness and interrelationships of multiple ideas, arguments, judgments, inferences, hypotheses, and solutions, based on analyses, triangulations, comparisons, interpretations, or refinements of data from multiple angles or perspectives in a sophisticated way. Definitions and examples of each level of cognitive processing are provided in Appendix G.
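As a concrete illustration of the coding record described above, each meaning unit can be thought of as one row of data carrying its identifiers and its codes on the four quality dimensions. The sketch below is purely illustrative; the field names and types are mine, not taken from the study's materials.

```python
from dataclasses import dataclass

@dataclass
class MeaningUnit:
    """One coded meaning unit from a student post (illustrative only)."""
    author: int        # 1-34, the student who wrote the unit
    group: int         # 1-4, discussion group (nested in section)
    section: int       # 1 or 2
    reading: int       # 1-8
    topic: int         # 1-4: attention, memory, schema, stereotype
    environment: int   # 0 = threaded forum, 1 = anchored environment
    word_count: int
    depth: int         # 0-4, level of cognitive processing
    focus: str         # "text" | "general" | "question" | "peer"
    construction: str  # "new_topic" | "supporting" | "extending"
                       # | "reflection" | "synthesis"
    presence: set      # subset of {"affective", "interactive", "cohesive"};
                       # empty set corresponds to "no presence"
```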
Focus. Few content analysis schemes have systematically examined the focus of a discussion, one of the more basic characteristics of a discussion. Measuring the focus of the discussions, however, was important for this study because I wanted to know whether the new environment could support more focused discussions on the texts. Each unit was coded on whether it focused on (a) texts: responses to specific ideas in the texts; (b) general ideas: responses not closely related to the specific texts, but related to the topics in general; (c) instructor's questions: responses to the instructor's questions; or (d) peer comments: responses to comments from peers. Each unit was coded into one of the four categories according to its focus (see Appendix G for the detailed coding scheme).

Knowledge construction. I chose Gunawardena et al.'s (1997) model as the basis for developing a new coding scheme for several reasons. First, this model has been widely used by researchers to examine the nature of online discussions (J. L. Moore & Marra, 2005; Schellens & Valcke, 2005; Yang et al., 2005). Second, the model possesses several good features, as pointed out by Lally (2001): (a) "it focuses on interaction as the vehicle for the co-construction of knowledge", (b) "it focuses on the overall pattern of knowledge construction emerging from a conference", (c) "it is most appropriate in social constructivist and collaborative (student-centered) learning contexts", (d) "it is a relatively straightforward schema", and (e) "it is adaptable to a range of teaching and learning contexts" (p. 402).

There was a need, however, to modify the model for this study. First, the Gunawardena et al. model was used to analyze a global online debate, and the nature of debate means that disagreements and the exploration of contradictory opinions occurred more frequently in those discussions than in other kinds of discussions. When looking at the data for this study, I found that few posts expressed disagreements directly; more often, the students built upon previous students' opinions or pointed out alternative ways of thinking. Second, I found that not all posts were interactive. Some posts were not interactive at all, as they were only self-reflections on the instructor's questions or ideas in the text. Other posts raised new topics or new questions, inviting comments from peers. I tried to distinguish these types of posts in my coding scheme because they served different functions in the knowledge construction processes.
In the modified scheme, each meaning unit was coded into one of five categories: (a) starting a new topic: defining, describing, or identifying a new problem, or asking a question to open up discussion on a new problem or issue; (b) supporting, clarifying, or elaborating existing ideas: showing support for or agreement with previously stated ideas, corroborating previously stated ideas with personal experience, or elaborating on previously stated ideas; (c) extending or deepening existing ideas: adding new interpretations, observations, or perspectives to existing facts, evidence, or ideas, contributing new ideas to an existing topic, or bringing additional issues on a topic into consideration; (d) self-reflection: reflecting on and answering discussion questions, associating the texts with personal experiences, expressing difficulty in understanding the text, or paraphrasing sentences in the text to make sense of them; and (e) synthesizing: combining multiple points previously made by more than one peer (see Appendix G for examples).

Social presence. Finally, the units were analyzed according to the social presence model developed by Rourke, Anderson, and Garrison (1999) (see Appendix F). This model fit well with the existing data and the purpose of the analysis, so it was used to analyze the discussions without modification. The three major categories of social presence are affective responses, the expression of emotion, feeling, and mood; interactive responses, evidence of attending to others; and cohesive responses, activities that sustain a sense of group commitment. Units that did not fit into any of the three categories of social presence were coded as no presence.

Inter-rater reliability. The two coders first coded the meaning units in 200 randomly selected posts as on-topic or off-topic. The agreement was 99.1%, so one coder coded all the units in the rest of the discussions as either on-topic or off-topic. Altogether, 12 off-topic units were identified, 6 in the threaded forum and 6 in the anchored environment; these were excluded from the subsequent analysis. The on-topic units were further coded on the four dimensions, (a) depth, (b) focus, (c) knowledge construction processes, and (d) social presence, using the coding schemes described previously (see Appendices F & G). The two coders independently coded 214 randomly selected units (18.4% of the total units). Two types of inter-rater reliability were calculated: Cohen's kappa and proportion agreement. Multiple reliability indices are reported here because "there is no general agreement on what indexes should be used" (Wever, Schellens, Valcke, & Keer, 2006, p. 10); providing multiple indices gives readers more information with which to judge the reliability.

The coding of depth reached a Cohen's kappa of .75, with 83.2% of the units coded exactly the same, 16.8% with a one-level difference, and no unit with a two-level or three-level difference. A Cohen's kappa between .40 and .75 indicates fair to good agreement beyond chance (Capozzoli, McSweeney, & Sinha, 1999). The proportion agreement and Cohen's kappa were 93.5% (200/214) and .91 for knowledge construction, and 95.3% (204/214) and .90 for focus. For the social presence dimension, one unit could be coded into more than one category, so only proportion agreement was calculated; it was 94.4% (202/214).
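For reference, the two reliability indices have standard definitions; these formulas are supplied here for clarity and do not appear in the original:

\[
p_o = \frac{\text{number of units coded identically}}{\text{total number of units coded}},
\qquad
\kappa = \frac{p_o - p_e}{1 - p_e},
\qquad
p_e = \sum_{k} p_{1k}\, p_{2k},
\]

where \(p_o\) is the proportion agreement, \(p_e\) is the agreement expected by chance, and \(p_{ik}\) is the proportion of units that coder \(i\) assigned to category \(k\). For the depth dimension, for example, \(p_o = .832\) and \(\kappa = .75\), so roughly a third of the raw agreement beyond chance survives the chance correction.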
Learning Outcome Measurements

Student performance on quizzes and short essay questions was measured as learning outcomes. According to Caspi and Blau (2008), evidence shows that learning measured by performance and perceived learning measured by reported feeling or perception are "independent, and may be uncorrelated" (p. 327). Therefore, a direct measure of student performance was necessary to understand what students had learned.

Quizzes. The quizzes were graded automatically by the Moodle course management system. Students got one point for each correct answer, so scores on a quiz ranged from 0 to 5. Students were allowed to take the quizzes multiple times, so one student might have multiple scores, depending on the number of attempts. Student performance on the quizzes was therefore measured by four different but related indices: (a) average score, (b) best score, (c) worst score, and (d) number of attempts.

Short essays. The second measure of learning aimed at understanding how well the students could use what they had learned to explain real-life problems. The instructors evaluated the short essays against a set of standards, such as how well students incorporated course readings in their writing, how well students introduced new ideas for thinking about the phenomena, and whether students appropriately used resources beyond those provided in the course. For each module, students received a score ranging from 1 to 13 based on the rubrics presented in Table 2. (The range for the second band from the bottom is reconstructed from the sequence of score bands; it was illegible in the source.)

Table 2
Instructor's Rubrics for Grading Short Essays

Score   Criteria
12-13   It is on-topic; it addresses the question asked, or it relates to others' ideas.
        It contains your own ideas and adds new content; it is not just a rehashing of what the reading said.
        It incorporates course readings and terminology.
        It uses a resource beyond those provided in the course, and cites it.
        It is written clearly, mainly free of errors; sentences and words are fully formed and not in text speak.
8-11    Typically follows four of the 12-13 point criteria, or follows more than four of them but not to the full extent (for example, only occasionally incorporates course readings).
6-7     Typically follows three of the 12-13 point criteria, or follows more than three of them but not to the full extent.
4-5     Typically follows two of the 12-13 point criteria, or follows more than two of them but not to the full extent.
1-3     Typically follows one of the 12-13 point criteria, or follows more than one of them but not to the full extent.

Student Perceptions

Each of the 28 Likert items was given a score ranging from -2 to 2, based on how supportive the student rated the environment: -2 = not at all, -1 = not very well, 0 = to some extent, 1 = quite a lot, and 2 = extremely supportive. The internal reliability of the survey, assessed using Cronbach's alpha, was 0.93.
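For reference, Cronbach's alpha for a K-item scale has the standard form below; the formula is supplied here for clarity and is not in the original:

\[
\alpha = \frac{K}{K-1}\left(1 - \frac{\sum_{i=1}^{K} \sigma^2_{Y_i}}{\sigma^2_X}\right),
\]

where \(K\) is the number of items (28 here), \(\sigma^2_{Y_i}\) is the variance of item \(i\), and \(\sigma^2_X\) is the variance of the total score. A value of 0.93 indicates high internal consistency across the survey items.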
Student responses to the open-ended questions in the survey and the interview transcripts were coded using a grounded theory approach (Glaser & Strauss, 1967). To generate initial categories of major themes, I followed a "detailed line-by-line analysis" (Strauss & Corbin, 1998), examining the data line by line, asking questions about what the data were about, giving a description that stands for the data, and then moving to the next bit of data and comparing it to the previous ones to decide whether it should be given the same description or a different one. As I assessed and categorized the data, I was open to unanticipated categories. I made reassessments and revisions until further analysis did not provide new information or insights.

Table 3
Major Themes from Survey and Interview Data for the Anchored Environment

Theme: The presence of the text made it easy to make immediate responses to the text and focus on it.
Example: "Discussion [in the anchored environment] was more focused on the reading than the (threaded) forum, because you had the document right there. You were looking at the reading, and typing your responses, rather than reading it through, and then going back and trying to think about it."

Theme: Reading while commenting provoked active processing of the text.
Example: "In Google Docs (the anchored environment) it was much easier to read the article in an 'active learning' style, because I could respond to questions that were posted, or write down any questions or comments that came to mind WHILE I read..."

Theme: Being able to focus on one particular portion of the text or one idea at a time made it easy to attend to details.
Example: "I think that Google Docs (the anchored environment) made you pay more attention to the specific words in such because you had to respond on the different sections of them, and made me pay attention to all the details."

Theme: Comments on one idea or portion of the text were clustered together, making it easier to read and build on each other's ideas and explore them in depth.
Example: "I think the way the discussions were grouped together in Google Docs (the anchored environment) by relevant sections in the reading led to more profound discussion and generated more ideas and questions than the format in the threaded forum."

Theme: Reading and rereading of the text was more likely to happen in the anchored environment than in the threaded forum.
Example: "I found it much easier to go back and reread things to try to better understand the topics the questions were asking about [in the anchored environment]."

Theme: Having to comment close to the text restricted the topics of discussion.
Example: "[In the anchored environment,] we're a bit more restricted to what we are supposed to discuss."

Theme: Responding to multiple things, including the text, peer comments, and inserted questions, while reading in the anchored environment made it hard to focus.
Example: "I found Google Docs (the anchored environment) to be more confusing. There were too many aspects of it and it was over-bearing... having to stop while reading Google Docs to type in my thoughts and questions threw me off."

Theme: Some technical problems made the anchored environment hard to use.
Example: "I liked the discussions on the threaded forum. I did not like the Google Docs (the anchored environment) because I still do not know how to use them and it confuses me a lot."

Table 4
Major Themes from Survey and Interview Data for the Threaded Forum

Theme: The long first posts in the threaded forum played an important role in learning the text and ideas from others.
Example: "[In the threaded forum,] I am doing a little bit more, I wouldn't say background research, but you kind of learn more about the topic to write the first post."

Theme: The open nature of the threaded forum made it easy to talk about general topics and to connect ideas across multiple texts.
Example: "It was easier to pay attention to the 'big' issues [in the threaded forum], and discussion usually revolved around them."

Theme: Having the discussion after finishing reading the texts made it hard to talk about ideas in the text.
Example: "The [threaded] discussion forums were more difficult to communicate with my classmates because I sometimes forgot what I wanted to question or debate within the article by the time I post in the [threaded] forum."

Theme: The threaded forum sent email reminders when there was a new post, so the students did not have to go to the forum to check whether there were any new posts.
Example: "[The threaded forum] made it easy to read other's responses since it emailed me when a new one was written. Google Docs (the anchored environment) did not."

Theme: Being able to see the pictures and profiles of peers in the threaded forum made the students feel more supported and more likely to share personal experiences.
Example: "I like the [threaded] discussion forum because you can see the picture of your classmates, and click to find out more about their background. People were more open to each other in the discussion forum!"

Theme: Familiarity with the threaded forum made it easy to use.
Example: "Most of us are pretty familiar with the threaded forum and it is very easy to have discussion on it."

Overall, 8 major themes were identified for the anchored environment (see Table 3) and 6 for the threaded forum (see Table 4). For reliability purposes, the second coder coded one-third of the data for these themes. The inter-rater agreement using the coding scheme was 81.8%. Differences were resolved through discussion.

CHAPTER 4

Results

This chapter reports the results of the analyses. Results from analyzing the discussions were used to answer Research Questions 1 to 5, student quiz and short essay scores were used to answer Research Question 6, and the survey and interview responses were analyzed to address Research Question 7.

Question 1: Quantity of Discussion

Both the number of meaning units and the total number of words posted by each student were analyzed using MANOVA to determine whether student participation differed in the two environments. The independent variables were (a) environment: anchored environment vs. threaded forum, (b) section (2 sections), (c) discussion group (nested in section, 4 groups), and (d) student (nested in group, 34 students). Finally, I counted the length of each meaning unit and conducted an ANOVA to find out how the length of units was related to the environments. The independent variables included (a) environment, (b) module, (c) topic (nested in module), (d) text (nested in topic), (e) section, (f) discussion group (nested in section), and (g) student (nested in group).

There were altogether 1164 meaning units, 394 in the threaded forum and 770 in the anchored environment; the number of units in the anchored environment was nearly double that in the threaded forum. Students also wrote more total words in the anchored environment: the total number of words contributed was 44,736 in the threaded forum and 55,822 in the anchored environment.
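The analyses that follow report partial eta squared (ηp²) as the effect-size measure. For reference (the formula is supplied here and is not in the original), it is defined as

\[
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}},
\]

where \(SS_{\text{effect}}\) is the sum of squares for the effect of interest and \(SS_{\text{error}}\) is the associated error sum of squares, so it expresses the proportion of variance attributable to an effect after partialling out the other effects in the model.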
To understand whether the total number of words and the total number of units contributed by individuals differed in the two environments, a MANOVA was conducted. The independent variables were (a) environment: anchored environment vs. threaded forum, (b) section, (c) discussion group (nested in section), and (d) student (nested in group).

Table 5
Estimated Marginal Means and Standard Deviations on Total Number of Words and Units Contributed by Individual Students

DV              Threaded Forum      Anchored Environment    F       p-value   ηp²
No. of Words    1355.13 (158.53)    1639.73 (158.53)        1.61    .21       .05
No. of Units*   11.98 (1.71)        22.64 (1.71)            19.39   .00       .38
* p < .05

The results are presented in Table 5. There was no significant difference in the number of words contributed across the environments [F(1, 32) = 1.61, p > .05, ηp² = .05], though the anchored environment had a higher estimated mean number of words (1639.73 in the anchored environment vs. 1355.13 in the threaded forum). There were significantly more units in the anchored environment [F(1, 32) = 19.39, p < .001, ηp² = .38]. Section, group, and individual student had no effects on the two dependent variables.

The discussion data were also analyzed at the unit level to see whether the length of the units differed across the two discussion environments. An ANOVA was used, with the independent variables being (a) environment, (b) module, (c) topic (nested in module), (d) text (nested in topic), (e) section, (f) discussion group (nested in section), and (g) student (nested in group). In the threaded forum, the students wrote longer units than in the anchored environment [F(1, 1118) = 266.94, p < .001, ηp² = .20]; the estimated marginal means (and standard deviations) of unit length in the threaded forum and the anchored environment were 121.62 (3.07) and 71.51 (2.39), respectively. The analysis also suggested that student [F(32, 1118) = 5.09, p < .001, ηp² = .13], group [F(2, 1118) = 3.24, p < .05, ηp² = .01], text [F(6, 1118) = 3.14, p < .01, ηp² = .02], and topic [F(2, 1118) = 11.31, p < .001, ηp² = .02] had effects on the length of the units.

Question 2: Depth of Discussion

Each unit was coded at a depth level of 0 to 4. An ANOVA was conducted to determine the depth of the discussions in the two environments. The dependent variable was the level of depth, and the independent variables were (a) environment: anchored environment vs. threaded forum, (b) section, (c) discussion group (nested in section), (d) student (nested in group), (e) module, (f) topic (nested in module), and (g) text (nested in topic). Among these independent variables, environment (the question-embedded anchored discussion environment, QEADE, vs. the threaded forum) was the primary concern.

Depth was not significantly different between the environments [F(1, 1118) = .04, p > .05, ηp² = .00]. The estimated marginal means of depth were 1.94 (.06) in the threaded forum and 1.93 (.04) in the anchored environment. The three factors that had an impact on depth were student [F(32, 1118) = 4.89, p < .001, ηp² = .12], group [F(2, 1118) = 3.63, p < .05, ηp² = .01], and text [F(6, 1118) = 3.72, p < .005, ηp² = .02].

Question 3: Focus of Discussion

To determine the focus of discussion, each unit was coded into one of the following categories: (a) texts, (b) general ideas, (c) instructor's questions, and (d) peer comments. I used MANOVA to analyze the focus of the discussions.
The independent variables were (a) environment, (b) module, (c) topic (nested in module), (d) text (nested in topic), (e) section, (f) discussion group (nested in section), and (g) student (nested in group). The dependent variables were (a) texts, (b) general ideas, (c) instructor's questions, and (d) peer comments. The dependent variables in this MANOVA were not continuous variables; they were dummy variables. The basic assumption of MANOVA was therefore not met, so the results were used as general estimations of the effects. The MANOVA indicated that the environment made a significant difference in the focus of discussion [F(4, 1115) = 30.37, p < .01, ηp² = .10].

Table 6

Estimated Proportions and Standard Deviations on the Focus Categories in the Two Environments

DV              Threaded Forum   Anchored Environment   F       p-value   ηp²
Text*           .003 (.021)      .158 (.017)            52.36   .00       .05
General Idea*   .071 (.011)      -.007 (.008)           55.34   .00       .05
Question*       .618 (.033)      .466 (.026)            21.07   .00       .02
Peer            .296 (.032)      .359 (.025)            3.75    .05       .00

* p < .05

Follow-up ANOVAs indicated that when having discussions in the anchored environment, the students focused significantly more closely on the texts [F(1, 1118) = 52.36, p < .001, ηp² = .05] than when having discussions in the threaded forum. On the other hand, the threaded forum discussions had a higher percentage of units focusing on general ideas [F(1, 1118) = 55.34, p < .001, ηp² = .05] and instructor's questions [F(1, 1118) = 21.07, p < .001, ηp² = .02] than the anchored environment discussions (see Table 6). It is worth noting that factors such as student [F(128, 4438) = 2.44, p < .001, ηp² = .07], section [F(4, 1115) = 4.34, p < .005, ηp² = .02], text [F(24, 3891) = 2.65, p < .001, ηp² = .01], topic [F(8, 2230) = 2.62, p < .005, ηp² = .01], and module [F(4, 1115) = 10.94, p < .001, ηp² = .04] also made a significant difference in the focus of discussion.

Figure 6. Percentages of focus categories in the two environments. [Bar chart; the plotted values were: Texts* — Threaded 2.03%, Anchored 16.88%; General Ideas* — Threaded 7.11%, Anchored 0.52%; Questions* — Threaded 59.90%, Anchored 43.64%; Peers — Threaded 30.96%, Anchored 37.66%. * p < .05]

The percentages of focus categories presented in Figure 6 show that in the threaded forum the majority of the units focused on the instructor's questions (59.90%) and peers' comments (30.96%). The students sometimes talked about general ideas related to the topic (7.11%), but they rarely focused on specific ideas in the texts (2.03%). In the anchored environment, in addition to responses to the instructor's questions and peers' comments, the students also attended to and talked about ideas in the texts (16.88%); the percentage of text-focused units was about 15 percentage points higher than in the threaded forum. In the anchored environment, however, the students seldom discussed general ideas that were not in the texts (.52%).

Question 4: Knowledge Construction

After all the units were coded into one of the following five knowledge construction categories: (a) new topic, (b) supporting, (c) extending, (d) synthesizing, and (e) reflection, I used MANOVA to analyze the knowledge construction dimension. The dependent variables were (a) new topic, (b) supporting and elaborating, (c) extending and deepening, (d) self-reflection, and (e) synthesizing. The independent variables were (a) environment: anchored environment vs.
threaded forum, (b) section, (c) discussion group (nested in section), (d) student (nested in group), (e) module, (f) topic (nested in module), and (g) text (nested in topic). Among all these independent variables, environment (QEADE vs. threaded forum) was the primary concern.

The MANOVA indicated an overall difference in knowledge construction processes with respect to environment [F(5, 1114) = 3.58, p < .01, ηp² = .02]. Student [F(160, 5521) = 2.55, p < .001, ηp² = .07] and text [F(30, 4458) = 1.61, p < .05, ηp² = .01] also made a difference in knowledge construction processes.

Table 7

Estimated Proportions and Standard Deviations on the Knowledge Construction Categories in the Two Environments

DV            Threaded Forum   Anchored Environment   F       p-value   ηp²
New Topic*    .014 (.014)      .044 (.011)            4.47    .04       .01
Supporting    .198 (.027)      .196 (.021)            .01     .92       .00
Extending*    .086 (.023)      .161 (.018)            10.54   .00       .01
Reflection*   .692 (.033)      .590 (.025)            9.82    .00       .01
Synthesis     .010 (.006)      .012 (.004)            .17     .67       .00

* p < .05

Follow-up ANOVAs suggested that, compared to the discussions in the threaded forum, the discussions in the anchored environment had significantly more units that raised a new topic [F(1, 1118) = 4.47, p < .05, ηp² = .01] or extended a previous discussion [F(1, 1118) = 10.54, p < .001, ηp² = .01], but fewer individual reflections [F(1, 1118) = 9.82, p < .001, ηp² = .01] (see Table 7).

Figure 7. Percentages of knowledge construction categories in the two environments. [Bar chart; the plotted values were: New Topic* — Threaded 2.03%, Anchored 6.10%; Supporting — Threaded 20.30%, Anchored 20.52%; Extending* — Threaded 8.12%, Anchored 16.36%; Synthesis — Threaded 0.51%, Anchored 0.78%; Reflection* — Threaded 69.04%, Anchored 56.36%. * p < .05]

Figure 7 shows the percentages of knowledge construction categories in the two environments. The percentages of individual reflection units were the highest in both environments (69.04% in the threaded forum and 56.36% in the anchored environment), suggesting that over half of the discussion in both environments consisted of individual reflections involving no peer interaction. The percentage of units that extended or deepened previously posted ideas in the anchored environment was double that in the threaded forum (8.12% in the threaded forum and 16.36% in the anchored environment), suggesting that the students were more likely to develop new ideas from and build upon the posts contributed by others. Finally, the percentages of synthesizing units and new topic units were extremely small in both environments (2.03% in the threaded forum and 5.97% in the anchored environment), though the percentage of new topic units was higher in the anchored environment than in the threaded forum.

Question 5: Social Presence

The discussions were analyzed according to the social presence model developed by Rourke, Anderson, and Garrison (1999). The three major categories of social presence are affective responses, the expression of emotion, feeling, and mood; interactive responses, evidence that others are attending; and cohesive responses, activities that sustain a sense of group commitment. Units that demonstrated none of the above characteristics were coded as no presence. I used MANOVA to analyze the social presence dimension. The dependent variables were (a) affective responses, (b) interactive responses, (c) cohesive responses, and (d) non-social. The independent variables were (a) environment: anchored environment vs.
threaded forum, (b) section, (c) discussion group (nested in section), (d) student (nested in group), (e) module, (f) topic (nested in module), and (g) text (nested in topic). Among all these independent variables, environment (QEADE vs. threaded forum) was the primary concern.

Table 8

Estimated Proportions and Standard Deviations on the Social Presence Categories in the Two Environments

DV             Threaded Forum   Anchored Environment   F      p-value   ηp²
Affective*     .044 (.013)      .005 (.010)            7.81   .01       .01
Interactive*   .367 (.054)      .054 (.042)            7.69   .02       .01
Cohesive       .033 (.013)      .023 (.010)            2.05   .15       .00
No Presence*   .698 (.033)      .612 (.026)            4.58   .03       .01

* p < .05

Figure 8. Percentages of social presence categories in the two environments. [Bar chart; the plotted values were: No Presence* — Threaded 63.45%, Anchored 52.47%; Affective* — Threaded 5.08%, Anchored 2.86%; Interactive* — Threaded 32.49%, Anchored 46.49%; Cohesive — Threaded 4.06%, Anchored 3.64%. * p < .05]

The analysis indicated that the factors of environment [F(5, 1114) = 5.10, p < .001, ηp² = .02], student [F(128, 4438) = 2.10, p < .001, ηp² = .06], and text [F(24, 3891) = 3.68, p < .001, ηp² = .02] were significant. The results from follow-up ANOVAs indicated that the discussions in the anchored environment were more interactive [F(1, 1118) = 7.69, p < .05, ηp² = .01], but the proportion of affective units was higher in the threaded forum [F(1, 1118) = 7.81, p < .05, ηp² = .01]. It is worth noting that there was also a higher proportion of no presence units in the threaded forum than in the anchored environment [F(1, 1118) = 4.58, p < .05, ηp² = .01] (see Table 8 and Figure 8).

Question 6: Student Learning

Quiz Scores

The students took a quiz on each topic. To discover the effect of the environments on student quiz scores, I used ANOVA with (a) environment, (b) section, (c) discussion group (nested in section), (d) student (nested in section), (e) module (nested in section), and (f) topic (nested in module) as independent variables. Because students were allowed to take the quiz multiple times, one student could have more than one score for a quiz. Therefore, I ran the tests with one of the following variables as the dependent variable: student best score, worst score, average score, and number of attempts.

Table 9

Estimated Marginal Means (Standard Deviations) on Quiz Scores in the Two Environments

DV             Threaded Forum   Anchored Environment   F      p-value   ηp²
Ave Score*     4.03 (.10)       4.39 (.10)             4.52   .04       .07
Worst Score*   3.66 (.14)       4.17 (.15)             6.21   .02       .09
Best Score     4.35 (.10)       4.53 (.10)             1.72   .20       .03
No. of Trials  1.61 (.09)       1.46 (.09)             1.48   .23       .02

* p < .05

The results suggested that the environment had an effect on students' average score [F(39, 63) = 4.52, p < .05, ηp² = .07] and worst score [F(39, 63) = 6.21, p < .05, ηp² = .09], but not on their best score or number of attempts. In general, students had higher average and worst quiz scores when participating in the anchored environment discussions than when participating in the threaded forum discussions. Table 9 shows the estimated marginal means of the worst score and average score.
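As an aside on the effect sizes reported throughout this chapter: partial eta squared (ηp²) can be recovered from any ANOVA table as SS_effect / (SS_effect + SS_error). A minimal sketch, continuing the illustrative statsmodels setup from earlier in the chapter (the helper function and table layout are assumptions, not part of the study's analysis):

```python
# Hedged sketch: computing partial eta squared from a statsmodels ANOVA table.
# anova_lm output has a "sum_sq" column and a "Residual" row.
import pandas as pd

def partial_eta_squared(anova_table: pd.DataFrame, effect: str) -> float:
    """Partial eta squared: SS_effect / (SS_effect + SS_residual)."""
    ss_effect = anova_table.loc[effect, "sum_sq"]
    ss_residual = anova_table.loc["Residual", "sum_sq"]
    return ss_effect / (ss_effect + ss_residual)

# Usage with the earlier sketch:
# table = sm.stats.anova_lm(model, typ=2)
# partial_eta_squared(table, "C(environment)")
```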
To understand the differences in the students' quiz scores, I ran correlations between individual students' average quiz scores and (a) the number of units, (b) the average number of words in each unit, (c) the total number of words contributed by individual students, (d) the average depth of the units, (e) the number of units under each knowledge construction category, (f) the number of units under each focus category, and (g) the number of units under each social presence category.

Table 10

Correlations between the Students' Average and Worst Quiz Scores and Their Behaviors in the Discussions

                                 Average Quiz Score          Worst Quiz Score
Variables                        Pearson Corr.   p-value     Pearson Corr.   p-value
Total Number of Words            .347            .007        .477            .000
Depth of the Units               .269            .038        .277            .032
Number of Text-Focused Units     .269            .038        .066            .616

* p < .05

The analysis suggested that three variables were positively correlated with the average quiz scores: (a) the total number of words contributed by individual students [r(58) = .35, p < .01], (b) the average depth of the units [r(58) = .27, p < .05], and (c) the number of text-focused units contributed by individual students [r(58) = .27, p < .05] (see Table 10). Worst scores were correlated with the total number of words [r(58) = .48, p < .001] and the average depth [r(58) = .28, p < .05], but not with the number of text-focused units [r(58) = .07, p > .05] or any other variables.

Short Essay Scores

To see whether participating in the two different discussion environments influenced the students' essay scores, I used ANOVA with (a) environment, (b) section, (c) discussion group (nested in section), (d) student (nested in group), and (e) module as independent variables. The dependent variable was the students' essay scores.

Table 11

Estimated Marginal Means (Standard Deviations) on Essay Scores in the Two Environments

Environment   Mean (SD)     F      p-value   ηp²
TTDE          10.92 (.18)   5.55   .027      .182
QEADE         11.49 (.16)

The results suggest that the environment had an effect on student essay scores [F(1, 25) = 5.55, p < .05, ηp² = .18]. When participating in the anchored environment discussions, the students had higher essay scores than when participating in the threaded forum discussions. Table 11 shows the estimated means of the essay scores in the two environments.

To understand the differences in the students' essay scores, I calculated the correlations between individual students' essay scores and (a) the number of units, (b) the average number of words in each unit, (c) the total number of words, (d) the average depth of the units, (e) the number of units under each knowledge construction category, (f) the number of units under each focus category, and (g) the number of units under each social presence category.

Table 12

Correlations between Students' Essay Scores and Their Behaviors in the Discussions

                                 Essay Scores
Variables                        Pearson Corr.   p-value
Number of Text-Focused Units*    .455            .000
Number of New Topic Units*       .268            .037

* p < .05

The analysis revealed that two variables were positively correlated with the essay scores: the number of text-focused units [r(58) = .46, p < .001] and the number of new topic units [r(58) = .27, p < .05] (see Table 12).

Question 7: Student Perceptions

Survey: Likert-Scale Questions

Thirty students took the survey, and their responses to the Likert-scale questions were coded and analyzed using MANOVA. The independent variables were environment, section, discussion group (nested in section), and student (nested in group).
When an omnibus difference was found, follow-up ANOVAs were conducted. I found a statistical difference in terms of environment [F(1, 29) = 3.81, p < .01, ηp² = .83]. Section and student also had significant effects on student perceptions [F(1, 29) = 4.72, p < .01, ηp² = .86; F(26, 29) = 1.49, p < .01, ηp² = .66].

Table 13

Follow-Up ANOVAs of Student Ratings for the Two Environments (n = 30)

DV                              Threaded Forum   Anchored Environment   F      p-value   ηp²
Learning the Text*              3.30 (.16)       3.93 (.16)             7.43   .01       .20
Focusing on Question            3.57 (.17)       3.90 (.17)             1.91   .18       .06
Focusing on Specific Text       3.30 (.16)       3.60 (.16)             1.69   .20       .06
Focusing on Specific Issue      3.30 (.18)       3.63 (.18)             1.71   .20       .06
Focusing on Overall Ideas       3.83 (.19)       3.40 (.19)             2.66   .11       .08
Thinking Critically             3.53 (.17)       3.63 (.17)             .17    .68       .01
Thinking Deeply                 3.43 (.20)       3.63 (.20)             .52    .48       .02
Developing New Ideas            3.67 (.21)       3.93 (.21)             .05    .82       .00
Sharing Ideas*                  3.27 (.20)       3.93 (.17)             7.63   .01       .21
Exploring Ideas*                3.27 (.20)       3.90 (.20)             4.92   .04       .15
Negotiating Meanings            3.23 (.20)       3.73 (.20)             3.22   .08       .10
Being Interactive               3.37 (.24)       3.80 (.24)             1.64   .21       .05
Being Supportive*               3.67 (.20)       2.77 (.20)             9.69   .00       .25
Being Emotionally Responsive*   3.70 (.20)       3.10 (.20)             4.40   .05       .13
Being Motivating                3.17 (.23)       3.30 (.23)             .16    .69       .01
Being Engaging                  3.43 (.23)       3.10 (.23)             1.04   .32       .04
Being Enjoyable                 3.40 (.24)       3.13 (.24)             .61    .44       .02

* p < .05

The follow-up ANOVAs suggested that students believed the anchored environment was better for learning the readings [F(1, 29) = 7.43, p < .05, ηp² = .20], sharing information [F(1, 29) = 7.63, p < .05, ηp² = .21], and exploring each other's ideas [F(1, 29) = 4.92, p < .05, ηp² = .15] when compared with the threaded forum. The students, however, felt that discussions were more supportive [F(1, 29) = 9.69, p < .01, ηp² = .25] and emotionally responsive [F(1, 29) = 4.40, p < .05, ηp² = .13] in the threaded forum than in the anchored environment (see Table 13).

Survey and Interview: Open-Ended Questions

Student responses to the open-ended questions in the survey and the face-to-face interviews were analyzed to identify the major themes. For each theme, I counted the number of students who mentioned it in either the survey or the interview; if a student mentioned a theme multiple times, it was still counted once. The findings were used to support the quantitative analyses. For example, when a significant difference was found in favor of one environment over the other, students' responses were used to understand why students preferred that environment. The numbers of students addressing the major themes that emerged from the survey and interview responses are presented in Table 14.

Table 14

Numbers and Percentages of Students Addressing the Major Themes (n = 30)

Major Themes on the Anchored Environment (Number, Percentage):
- The presence of the text made it easy to make immediate responses to the text and focus on the text. (22, 73.33%)
- Reading while commenting provoked active processing of the text. (8, 26.67%)
- Being able to focus on one particular portion of a text or one idea in the text at one time made it easy to comprehend the text. (12, 40.00%)
- Comments on an idea or a portion of the text were clustered together, making it easier to read and build on each other's ideas and explore them in depth. (13, 43.33%)
- Reading and rereading of the text was more likely to happen in the anchored environment than in the threaded forum. (6, 20.00%)
- Having to comment closely to the text restricted the topics of discussion. (7, 23.33%)
- Responding to multiple things (text, peer comments, and inserted questions) while reading in the anchored environment made it hard to focus. (5, 16.67%)
- Some technical problems made the anchored environment hard to use. (8, 26.67%)

Major Themes on the Threaded Forum (Number, Percentage):
- The long first posts in the threaded forum played an important role in learning the text and ideas from others. (14, 46.67%)
- The open nature of the threaded forum made it easy to talk about general topics and connect ideas across multiple texts. (12, 40.00%)
- Having discussion after finishing reading the texts made it hard to talk about ideas in the text. (6, 20.00%)
- The threaded forum sends email reminders when there is a new post, so students did not have to go to the forum to check whether there were any new posts. (16, 53.33%)
- Being able to see the pictures and profiles of peers in the threaded forum made the students feel more supported and more likely to share personal experiences. (23, 76.67%)
- Familiarity with the threaded forum made it easy to use. (9, 30.00%)

Anchored environment. The first three predominant themes on the anchored environment were related to the advantages of being able to comment on a specific portion of the texts during reading. The majority of the students (73.33%) commented on how easy it was to comment and focus on the text in the anchored environment. About 40% of the students felt that focusing on one portion of the text at a time allowed them to explore and comprehend the text and others' comments related to that portion much better. Some students (26.67%) thought that they were able to read actively in the anchored environment. Students also reported some problems with the anchored environment: for example, the discussion topics were restricted to the text (23.33%), and some students had technical problems with the environment (26.67%).

Threaded forum. The first two overarching themes on the threaded forum were related to its user-friendly interface. The students (76.67%) felt the discussions in the threaded forum were more supportive and more personal because they were able to see each other's pictures and profiles. They (53.33%) also thought having email reminders of new posts made discussions in the threaded forum easier. About half of the students (46.67%) commented on the importance of the first posts in the threaded forum, because they felt writing or reading the first posts helped them organize their thoughts about the text and learn about others' opinions. They (40.00%) also appreciated the open nature of the threaded forum, which allowed them to talk about general issues. In addition, 30.00% of the students said that the threaded forum was easy to use and that they had experience using it before.

CHAPTER 5

Discussion

This study was designed to foster focused online discussions on course readings. More specifically, the study compared a new discussion environment, a question-embedded anchored environment, with a commonly used threaded forum, and examined whether the anchored environment supported more focused discussion on the readings than the threaded forum, and why the two types of environments supported online discussion differently. The previous chapters have detailed the design principles for the anchored environment, described the study carried out to compare the anchored environment with threaded forums, and reported what was found from the study.
In this chapter, I focus on the findings that suggested a significant difference between the two environments, and I use the major themes identified from the survey and interview data to explain how the design of the environments is related to those differences. I also discuss the implications of the study, its limitations, and possible directions for future research.

How and Why the Focus of Discussion Differs

Focus on Readings

A big difference between the two environments was the percentage of text-focused units posted by the students. In the anchored environment, 16.88% of the units focused on text, while in the threaded forum, only 2.03% of the units did.

Students agreed that it was easier to focus on the text in the anchored environment discussions than in the threaded forum discussions. One reason is that, during discussions, the texts were always there, constantly reminding the participants of the texts. As one student put it, "Google Docs (the anchored environment) was more focused on the reading than the (threaded) forum, because you have the document right there. You are looking at the reading, and typing your responses, rather than reading it through, and then going back and trying to think about it." The students also found that rereading the texts was more likely to happen in the anchored environment than in the threaded forum because it was simply easier: "... I can go back and reread the paragraph above. It is easier because the text is right in front of you, with the information."

In the anchored environment, the students were able to respond to one particular section of the text at a time: "[In the anchored environment,] it was nice that the readings are split into sections and that allows you to focus on each part as you discuss them." The advantage is that the students could concentrate on one particular section at a time and did not need to worry about the rest of the text, which reduced the cognitive load involved. This could be beneficial to learning because, according to cognitive load theory (Sweller, 1988), it is difficult to assimilate multiple elements of information simultaneously due to our limited working memories.

Another reason why there was more focus on the texts in the anchored environment discussions is that, in the anchored environment, students could make comments while reading the text. Whenever they thought of something related to the text, such as a question or a personal story, they could write it down immediately and share it with others. In the threaded forum, the students were not able to do this. Usually, by the time they got to the forum, they would not remember the thoughts they had while reading the texts. Here is how the students put it: "it is just easier when you are reading it, and have an idea popping up in your head, and go 'Oh, I wonder about this!', and then type it down as you are sitting there reading through it." "The [threaded] forum discussions were much later in time. Sometimes I would even read the articles a few days before, like Monday for example, and then did not start the discussion in the [threaded] forum until Wednesday." The time lag, in this case, made focused discussions on the text difficult.

Example 1: This reminds me of something we recently learned in my physiology class. We learned that the thalamus is responsible for editing and blocking the large amount of information our brain receives every second of the day.
It then decides what is important enough to be passed on to the next area of the brain. This is comparable to the filter model of working memory. - Student 11

Example 2: The quote by William James states that, "It [attention] implies withdrawal from some things in order to deal effectively with others..." This statement really reminds me of how people deal with traumatic events in their lives. They can't deal with all aspects of the situation so they compartmentalize and focus on one portion while ignoring the others. - Student 30

Example 3: Is anyone else struggling to make sense of this section? Does anyone else know of any other good resources that break this information down in a different way? Thanks! - Student 12

It seems like this is very science heavy. What do you think this is trying to say? It seems like it is trying to state that your brain/nervous system actually shows increased activity/firing when you are doing an attention related activity. - Student 35

Student 21 - I was confused by this portion also, and I found a website that discusses the neural aspects of attention with working memory in a less confusing way. The website is: http://www.brainconnection.com/topics/?main=fa/working_memory3 Read the section under the title "The Elusive Central Executive" Let me know if you found it helpful! - Student 27

Sorry, I forgot to put a follow-up post on here to let you know what I thought about the Brain Connection website! I really appreciate you providing that additional resource. I did find the wording on that site easier to comprehend, so thank you! It's been too long since I've been in a biology, physiology or psychology class--sometimes the scientific terminology just goes over my head! - Student 21

Figure 9. Examples of text-focused posts in the anchored environment.

Figure 9 presents several examples of text-focused posts in the anchored environment. In example 1, the student was learning and assimilating the information on the concept of the "filter model" in the text by associating it with what she had learned in another class. In example 2, the student was making inferences based on a quote in the text. Sometimes, after one student raised a question about the text, other students would respond, offering their understandings of the question, as happened in example 3. These examples suggest that, in the anchored environment, having students comment on the text while reading promoted active processing of the text, which is crucial for understanding and learning from it. As one student put it, "[t]he good thing about Google Docs is that you have to read a lot more critically and a lot closer. You are asked to actually demonstrate understanding throughout the reading." In the threaded forum, such posts were far less common.

Focus on General Thoughts

Another evident difference between the two environments was that the threaded forum had a higher percentage of units talking about general ideas related to the topic than the anchored environment (7.11% in the threaded forum versus .52% in the anchored environment).

Students believed that this was probably because of the setup of the anchored environment: it was hard to write overall comments on the topics there, because they had to write their comments adjacent to a specific portion of the texts.
One student said it well in the interview: "The bad thing about Google Docs (the anchored environment), is that it is just one piece of information, one article, and then you respond to the article. When you want to have an overall discussion, or when you focus on more than one article, it does make it a little bit difficult." The threaded forum, on the other hand, was open to any type of post. One student said, "[In the threaded forum,] you are reading the whole thing, and that your mind automatically summarizes what you have read for you. So when you are posting, you think back to the big picture because it is easier for your mind to remember the big concepts rather than each individual example." Therefore, in the threaded forum, students were more likely to talk about their opinions on a topic in general, without referring to any specific readings (see example 4). This type of comment was extremely rare in the anchored environment.

Example 4: STEREOTYPING: Stereotyping plays a role in everyone's lives, whether they want to believe it or not. Though often, stereotyping is thought of as deliberately negative, close minded, and often racist, sexist, and etc., it is an automatic process that we have to learn to control. True, some people stereotype and don't care to change their views, because stereotyping often makes people feel superior to others. However, people must be enlightened to reduce their stereotypes. - Student 32

Figure 10. An example of a general idea post in the threaded forum.

In example 4 (see Figure 10), the student expressed her opinion about the need to control and reduce stereotypes. This type of comment is important to learning because it demonstrates that the student had gone beyond the stage of comprehending and understanding the texts and had started to form her own opinions or perspectives on a topic. Unfortunately, only a small percentage of comments were of this type, even in the threaded forum.

Focus on Instructor's Questions

As described in Chapter 3, to facilitate discussions, the instructors asked three questions related to each piece of reading, both in the threaded forum (listed at the top of the forum) and in the anchored environment (embedded between paragraphs). Students decided whether or not they would respond to the questions, and which questions they would like to respond to. In the threaded forum, the majority of units (59.90%) were responses to these questions raised by the instructors; in the anchored environment, 43.64% of units were responses to the questions.

The students did not explicitly explain why there was such a difference between the two environments. It is possible that when the students got to the threaded forum, the first thing they saw was the questions listed at the top of the forum. When the students came to the anchored environment, however, they saw the text first, and they would not see the embedded questions until they had read a few paragraphs. Therefore, the students were more likely to respond to the instructor's questions in the threaded forum, and to the text in the anchored environment.

How and Why the Knowledge Construction Processes Differ

Initiating New Topics

The percentage of new topic units in the threaded forum and the anchored environment was significantly different, 2.03% and 5.97% respectively. There were only 8 new topic units identified in the threaded forum. Among them, 4 focused on texts, 2 focused on instructor's questions, and 2 focused on peer comments.
In the anchored environment, in contrast, there were 49 new topic units: 16 on texts, 2 on general ideas, 2 on instructor's questions, and 29 on peer comments. This revealed that in the anchored environment there were many more new topic units related to texts and peer comments. As discussed in the section on focus, the students found it easier to add a new idea or a question about the texts in the anchored environment, because they could immediately write down what they had in mind. It is also not surprising that more new topics were raised based on peer comments in the anchored environment, because the large number of peer comments there gave the students plenty of opportunities to come up with something new based on previous comments.

Extending Peers' Comments

The percentage of units that built on and extended a previous comment in the anchored environment was double that in the threaded forum (16% versus 8%). This suggests that the students, when having discussions in the anchored environment, were more likely to pay attention to their peers' opinions and to develop each other's understandings.

The higher percentage of extending units in the anchored environment could be explained by student responses to the survey and interviews. Students felt that, in the anchored environment, comments on one particular idea or a specific portion of the texts were clustered together, making them easier to read, build upon, and explore in depth. It is interesting that, in the threaded forum, one post usually contained multiple meaning units, whereas in the anchored environment, each post was a single meaning unit. Reading relatively short posts on one specific idea at a time reduced the cognitive load, and made it easier to extend and build on that idea than reading a long post containing multiple ideas. As one student said in the interview, "I just liked the way the similar postings were in closer proximity to each other visually in Google Docs. It made it easier to focus on one idea at a time and explore it in greater depth."

This is an important finding. Research suggests that in online discussion, knowledge construction processes usually stay at the stage of sharing and comparing information, and seldom move to the stage of meaning negotiation (Gunawardena et al., 1997). The finding here indicates that the anchored environment has the potential to support more advanced levels of knowledge construction. It is worth noting, however, that a large percentage of the units (20.52%) in the anchored environment were still purely supporting or clarifying previously posted ideas, suggesting that students had a strong tendency to agree with each other's comments (Nussbaum et al., 2004) or offer similar points of view (Bullen, 1998) even in the anchored environment.

Making Reflections

A higher percentage of reflection units was found in the threaded forum (69.04%, versus 56.36% in the anchored environment). Student survey and interview responses revealed that reflecting on and responding to the instructor's questions was an important way for students to learn in the threaded forum. When talking about how they discussed in the threaded forum, 46.67% of the students mentioned the importance of their first posts. The first post in the threaded forum was usually a long and thorough response to the questions raised by the instructors. First posts served two functions in the discussions.
First, the students saw writing the first posts as an opportunity to collect and organize their thoughts about a topic. One student said, "[In the threaded forum,] usually what I do is I make my initial post... I wanted it to be just me, what I thought about the text." Second, reading through other people's first posts allowed students to get a sense of the key issues in the discussions. Therefore, "(the first posts) kind of make everybody get the ideas of what they will discuss next." Here is one student describing what he usually did when he came into the threaded forum: "I will read the discussion questions, then I will read through a couple people's initial posts, so I am kind of know what to look for a little bit, you know, kind of main ideas. That, personally, helps me." Therefore, although these first posts did not show a high level of interactivity, some students who posted later did benefit from reading and analyzing them (Pena-Shaff & Nicholls, 2004).

In the anchored environment, in contrast, the students did not see reflections on the instructor's questions as being as important as in the threaded forum, because their discussions were stimulated by many specific portions of the texts rather than mainly by the instructor's questions. For example, one student said during the interview, "Here (in the threaded forum) sometimes I feel bad at posting what some others have already posted, like telling the same stories, because we are responding to the same discussion questions. But while in Google Docs (the anchored environment), I can kind of say, 'Hey, here is a more specific life example that happened to me!' because I felt like other people were kind of giving more specific examples in Google Docs (the anchored environment) as compared to here (the threaded forum)."

How and Why the Level of Social Presence Differs

Providing Affective Responses

Affective responses include the use of emoticons, the use of humor, and self-disclosure in the conversation. A closer look at the data revealed that there were no emoticons or uses of humor in either environment; all affective responses were self-disclosure. The percentage of self-disclosure was much higher in the threaded forum than in the anchored environment (5.08% versus 2.86%), though the percentages were low in both environments.

The results from the survey also suggested that the students perceived the threaded forum as a more supportive and emotionally responsive environment than the anchored environment. Many students attributed this to the particular setup of the threaded forum: when posting there, they were able to see others' pictures and profiles, which was not possible in the anchored environment. "I like the (threaded) discussion forum because you can see the picture of your classmates, and click to find out more about their background. People were more open to each other in the discussion forum!" This suggests that, in the threaded forum, students felt they were able to get to know each other personally by checking out the pictures and profiles of their classmates, which created a more emotionally supportive environment for discussions.

Providing Interactive Responses

Interactive responses refer to meaning units that continue a thread, quote from others' messages, refer explicitly to others' messages, ask questions, compliment or express appreciation, or express agreement. The anchored environment had a higher percentage of interactive responses than the threaded forum (46.49% versus 32.49%).
The students also reported in the survey that they were more likely to explore each other's ideas (an average rating of 3.90 versus 3.27) and share information with their classmates (an average rating of 3.93 versus 3.27) in the anchored environment than in the threaded forum. As one student commented, "In the Google Docs, there was a lot more conversation back and forth between people. People read through the document and commented where they felt appropriate. In the forums there was not a lot of discussion." There is no clear answer as to why this happened. It is possible that the anchored environment made it easy for the students to continue or extend a conversation, as discussed in the previous sections.

How and Why the Pattern of Participation Differs

There was a pattern in which students posted more frequently (22.64 versus 11.98 units) but wrote shorter units in the anchored environment than in the threaded forum, while also contributing more words overall (1639.73 versus 1355.13 words). Students wrote more frequently probably for reasons similar to those discussed in the previous sections. First, it was simply easy to leave a comment while reading the texts in the anchored environment. As a student said, "it was much easier to read and respond to the articles in Google Docs (the anchored environment) because the article was located right next to the area where we were to respond." Second, students were more likely to add to an existing thread, because it was easy to read through a thread on a specific theme and add to it. It is unclear why the units were shorter in the anchored environment. Perhaps there were many things (text, peers' comments, instructor's questions) to respond to while reading the texts, so each response became short.

Why Students Had Higher Scores in the Anchored Environment

In the anchored environment, students had higher average quiz scores (4.39 as opposed to 4.03) and essay scores (11.49 as opposed to 10.92) than in the threaded forum. The average quiz scores were positively correlated with the total number of words contributed by individual students [r(58) = .35, p < .01], the depth of units [r(58) = .27, p < .05], and the number of text-focused units [r(58) = .27, p < .05]. The student short essay scores, which were graded by the course instructors, were positively correlated with the number of text-focused units [r(58) = .46, p < .001] and the number of new topic units [r(58) = .27, p < .05].

It is not surprising that the total number of words and the depth were positively correlated with the quiz scores. This suggests that the more students wrote, and the deeper the cognitive processes they went through, the higher their quiz scores. The total number of words and the depth, however, were not statistically different between the two environments. What differed was the number of text-focused units [F(24, 3891) = 2.65, p < .001, ηp² = .01]. Both learning outcomes (quiz scores and short essay scores) were positively correlated with the number of text-focused units, confirming the importance of having discussions focused on texts. Here are two students' comments, which reveal how easy it was to have text-focused discussions in the anchored environment and why that was important:

Student 1: When you use the (threaded) forum, you read it, when you don't understand it, you know, whatever. I don't really need to understand it. I won't necessarily go to the forum to ask those questions.
Whereas in Google Docs (the anchored environment), you are reading it, it is right there. If you have a question, I might just put it on here. It only takes me ten seconds to put it on. Google Docs (the anchored environment) gives you a good way of spending a short amount of time and to get the most information possible. And then somebody helps you and says "Hey, this is what it means." And you go back and read it. And like, oh, well, I know.

Student 2: Having the discussions in the Google Docs (anchored environment) made me look into the readings in a lot of detail, because in order to start a discussion or give your opinion on the specific articles, you need to have a really good understanding of them in order to start a discussion, so you can back up your opinion.

The short essay scores were also correlated with the number of new topic units, which was significantly different between the two environments [F(1, 1118) = 4.47, p < .05, ηp² = .01]. This suggests that the students who raised more new topics during the discussions were more likely to earn higher scores on their short essays. The underlying reason is largely unknown. Perhaps students who raised more new topics were more active learners, more willing to consider and explore various ideas, and were therefore able to provide more comprehensive views when writing the short essays.

Implications

Implications for Teaching with Online Discussion

How to foster focused discussions. Findings from the study provide implications for teaching with online discussion. First, results indicated that students were able to have discussions with a closer focus on the course readings in the anchored environment than in the threaded forum. This is consistent with findings from previous research on anchored discussion environments, where researchers found a greater number of text-focused posts in anchored environments than in traditional forums (Brush et al., 2002; van der Pol et al., 2006). Results also showed that students' quiz and short essay scores were positively correlated with the number of text-focused units posted by individual students. The implication is that the anchored environment can better facilitate learning from texts than the threaded forum.

To support student learning of texts through online discussion, instructors should design environments or activities in ways that allow students to sustain a focus on the texts during the discussion. The anchored environment achieved this goal mainly through the presence of the texts throughout the discussion, allowing students to write down whatever came to mind while reading. The anchored environment may not be the only way to do this. For example, online instructors could ask students to jot down their questions and thoughts while reading the texts and later post them to a threaded forum. Of course, this might not work as well as the anchored environment, because (a) students would have to quote the texts in their posts to let others know which portion they were referring to, and (b) posts addressing the same portion of the texts may not be grouped by relevant sections as they are in the anchored environment, making it hard for students to consider comments on the same theme together.

Not every student found the anchored environment helpful in focusing on or learning about the texts, though.
The analysis showed that students wrote significantly different numbers of text-focused units in the discussions [F(32, 1118) = 1.89, p < .005, ηp² = .05]. When I interviewed one student, she told me that when she got into the anchored environment, she sometimes only looked for and replied to the embedded questions: "[In the anchored environment,] usually the questions you guys post are pertaining to the section directly before it. So, you know you can, I did not do that every time, but a couple of times, like busy week, I am like, okay, this question is about focused attention. I go right to the above paragraphs and read all these about what focused attention is. Okay, I am just going to do my post on focused attention. You have to read the whole thing when you are in (threaded) discussion forum. Here (in the anchored environment), you can kind of take and choose." Therefore, this anchored environment might not work for all students.

How to encourage high-level knowledge construction. The data suggested that the patterns of knowledge construction processes were different in the two environments. There were more new topics raised and more units extending previous comments in the anchored environment. In the threaded forum, in contrast, there were more reflective monologues. According to the students, this was because, in the anchored environment, each short post usually focused on a single theme, and posts with similar themes were naturally grouped together (adjacent to the related portion of the text), making them easy to respond to. In the threaded forum, however, the first posts were long and contained multiple themes.

As Garrison and Cleveland-Innes (2005) argued, online discussion must be designed and structured in a way that facilitates clear discussion threads, avoids disjointed monologues, and moves the discussion to higher levels of thinking (p. 137). The current study suggests that one possible way of avoiding disjointed monologues and encouraging students to build upon each other's views could be asking students to write about only one idea rather than multiple ideas in each post. This might help reduce the cognitive load for readers, making it easier for them to consider the particular idea in the post carefully.

Though there were higher percentages of new topic units and extending units in the anchored environment than in the threaded forum, the actual percentages were still low, 5.97% and 16.36% respectively, and there was still a high percentage of disjointed reflection units in the anchored environment (56.36%). This means that the anchored environment may not be an ideal environment for stimulating high-level knowledge construction processes. Therefore, if online instructors aim to foster high-level knowledge construction processes, they should probably consider adopting other approaches.

How to support a friendly discussion environment. It is interesting that the students did not rate the anchored environment as a more engaging or enjoyable environment than the threaded forum, even though it was more effective in helping students focus on the course readings. One student said that having discussions in the anchored environment "seems more like school work, whereas the (threaded) discussion forum more like a bunch of kids learning about the same thing and talking about it". Also, students had more self-disclosure units in the threaded forum. It seems that the threaded forum was a friendlier environment for the students. There might be a couple of reasons for this.
One of the reasons, as stated by many students, was the presence of their peers' pictures and profiles. This indicates that, to foster a supportive environment in an online class, it might be important to provide easy ways for students to get to know each other at a more personal level.

Whether the discussion tool is user-friendly may also be an important factor that affects students' evaluations of their experiences in online discussion. Most of the students found the threaded forum easy to use, and many of them had experience using it in other online courses. In terms of technical support, the students were generally satisfied with the setup of the threaded forum. They believed that some of its functions were particularly helpful. One was being able to see each other's pictures and profiles in the discussion forum. The other was that the threaded forum automatically sends out an email reminder when there is a new post: "[In the threaded forum,] you could immediately see when someone responded to your post, whereas in Google Docs (the anchored environment), you would have to continuously check it."

In contrast, a few students encountered technical problems while using the anchored environment, even though the instructors had provided instructions and had the students practice with it before the implementation of the study. When I interviewed one student and asked her overall impression of the two discussion environments, she told me that she liked the threaded forum better simply because she thought the anchored environment was less user-friendly: "I am not so good with computer, and it took me a little bit to get used to Google Docs (the anchored environment). I also don't like the fact that you can go back and edit on Google Docs (the anchored environment). I never personally do this. But I feel like someone could change my post or you can go back and change your own. For me, part of the importance of discussion is when you say something, you cannot take it back. You know it is already been said. So I personally like the [threaded] discussion forum better." This suggests that having a user-friendly online discussion tool could also be important for creating a friendly discussion environment.

Implications for Future Research

Examining the characteristics of text. This study did not specifically examine how other factors could affect student performance in the two environments. For example, the analyses consistently suggested that the nature of the course reading had significant impacts on the length [F(6, 1118) = 3.14, p < .01, ηp² = .02] and the depth [F(6, 1118) = 3.72, p < .005, ηp² = .02] of student posts. It also affected the focus of discussion [F(24, 3891) = 2.65, p < .001, ηp² = .01], the knowledge construction processes [F(30, 4458) = 1.61, p < .05, ηp² = .01], and the level of social presence [F(24, 3891) = 3.68, p < .001, ηp² = .02]. A few students also wrote in the survey about how the nature of the readings affected their discussions, for example, "the shorter the reading is, the more it grabs my attention because it only requires me to read a little but think more in my own term", and "The longer the readings, the more I struggled focusing.
I began searching for the important information and skipping all the academic jargon..." Some of the comments suggested an interaction between readings and discussion environments: "If the reading was longer, I appreciated Google Docs (the anchored environment), because it kind of split it up in more manageable sections."

This opens up possibilities for future research on the influence of text. Future research should examine how the characteristics of texts, such as style or length, affect the nature of discussions. It is possible that certain types of texts could evoke more discussion and more knowledge construction processes. In addition, research should be conducted to understand whether there are interactions between the characteristics of texts and the types of discussion environments. That line of research may have implications for online instructors regarding what types of texts they should choose for online discussions.

Developing instructional or learning strategies. Findings from this study showed that not every student knew how to use the anchored environment. As reported in their survey and interview responses, some students found it hard to keep track of the multiple tasks in the anchored environment. Here is one student complaining about how the anchored environment made it hard for her to focus: "In Google Docs (the anchored environment), we were directed to make comments throughout the reading of the various texts. I found this extremely unhelpful and quite detrimental to my concentration. The (threaded) discussion forum worked much better, since it allowed me to comprehend the entire readings before having to make a point about them." Some other students, as mentioned previously, only responded to the embedded questions in the anchored environment and did not interact much with the texts or peers. It is necessary, therefore, to develop ways of facilitating discussions in the anchored environment, so that students are able to use the environment wisely.

One possible approach is to develop a set of discussion strategies for students to use in anchored environments. For example, some students were not sure how to respond to multiple things at the same time while reading the texts. One possible strategy would be to instruct students to read through a piece of reading twice in the anchored environment: the first time through, they would focus only on reading the text and writing down their own thoughts; the second time, they would concentrate on responding to peers' comments. Other strategies on how to comment on the texts and how to respond to peers' comments in the anchored environment should also be developed and taught, so that students do not come into the anchored environment feeling lost.

Solving other problems in online discussion. The study showed that the anchored environment has some limitations. One of them is that it was hard to make general comments on course readings or to connect ideas across multiple texts. A possible solution to this problem could be using anchored environments in combination with threaded forums: students can first write comments and have discussions in an anchored environment while they are reading the texts. After they finish reading, they can continue the discussions in the anchored environment, and they can also open up a new discussion in the threaded forum. In that way, students would not lose opportunities to talk about general issues in the course readings.
In addition, the percentages of new topic units, extending units, and synthesis units were relatively low in both environments. Units that were reflections or that showed support for previous comments still dominated the discussions (76.88% in the anchored environment and 89.34% in the threaded forum). This finding is consistent with previous research reporting that asynchronous online discussions usually stay at low-level activities, such as sharing and comparing knowledge (J. L. Moore & Marra, 2005). This suggests that the anchored environment developed in this study cannot solve many of the problems in online discussions. Future research is still needed to find other ways of improving the quality of online discussions and learning. Some promising lines of research are designing online activities that promote student-student interaction, such as role-playing activities (Lebaron & Miller, 2005) or debate (Kanuka et al., 2006), teaching students online discussion strategies (Choi et al., 2005; Yang et al., 2005), and developing interactive online discussion environments (Scardamalia, 2004; Suthers et al., 2006).

Achieving different learning goals. Finally, this study showed that each environment comes with its own affordances and constraints (Koehler & Mishra, 2008; Mishra & Koehler, 2006). The anchored environment affords a more text-focused conversation and constrains general discussions. The threaded forum, however, affords a more personal and emotionally supportive learning environment that allows students to open up the discussions to more general issues, but is constrained in supporting a sustained focus on readings. The challenge for every educator, then, is to maximally leverage these unique affordances in the service of student learning.

The anchored environment was designed to achieve a specific learning goal: helping students have focused discussions on texts. This environment may not work well for achieving other learning goals. When goals vary, online instructors should choose different technologies to support interaction and learning, because of the unique affordances and constraints of every technology. The tools used to support online discussions and the ways adopted to structure the interactions may differ, for example, when the primary goal is developing a sense of engagement and community rather than having learners focus carefully on a text (Gao & Putnam, submitted). Future research should design and test various types of online discussion environments to determine which approach should be used to achieve a specific pedagogical goal under particular circumstances.

Limitations

There were some limitations in this study. One limitation is that, though students were randomly assigned to two sections, there was a section difference in the focus of discussion categories and in student perceptions. A second limitation is that conditions in the two sections were not exactly the same: the threaded forum was hosted on the Moodle course management system in one section and on Facebook in the other. These limitations pose threats to both internal and external validity and introduce confounding factors into the study, significantly limiting the generalizability of the findings to a larger population.

Another limitation of the study was that the sample size was relatively small, and all participants were volunteers. It is unknown why the remaining students declined to participate in the study and how their participation would have affected the results.
In addition, the sample of students was predominantly Caucasian and female; thus, generalizations to larger and more diverse samples may also be limited. A third limitation is that the interviews were conducted two weeks after the completion of Module 4. The selection of students for the interviews was based on their survey responses, and some of the interview questions were pertinent to their performance in the two environments, so it was not possible to conduct the interviews immediately after the intervention. Because of the time delay, students might not clearly remember what they were doing, and why, when they participated in the discussions, and they might have provided less concrete or even incorrect information. Finally, during the survey and interviews, some students provided limited responses. For example, two students did not answer any of the open-ended questions in the survey. Some other students, when asked a question during the interview, provided only short answers; when I asked follow-up questions encouraging them to say more, they simply repeated what they had said previously. It was therefore hard to understand what exactly was happening when they participated in the discussions.

APPENDICES

APPENDIX A

Course Readings and Discussion Questions

Topic 1: Attention

Reading 1: Attention
URL: http://en.wikipedia.org/wiki/Attention
Word count: 2091 words
Discussion questions:
• From your own experience, what are the factors that affect your attention?
• From your experience, could you please explain why certain type(s) of attention are important for learning (focused attention, sustained attention, selective attention, alternating attention, divided attention)?

Reading 2: Selective Attention and Arousal
URL: http://www.csun.edu/~vcpsy00h/students/arousal.htm
Word count: 2255 words
Discussion questions:
• From your own experience, what would be an example of attention being affected by "prior experience and perception of the material being handled"?
• How do these models (e.g., memory model, bottleneck theory, filter model, attenuation model) help you understand attention during learning?

Topic 2: Memory

Reading 1: Short-Term Memory
URL: http://en.wikipedia.org/wiki/Short-term_memory
Word count: 2086 words
Summary:
Discussion questions:
• From your experience, what are some factors that affect short-term memory?
• What are some implications of the capacity of short-term memory and chunking for teaching and learning?

Reading 2: Long-Term Memory
URL: http://en.wikipedia.org/wiki/Long-term_memory
Word count: 1366 words
Summary:
Discussion questions:
• From your experience, what are some factors that affect long-term memory?
• What are some implications of the characteristics of long-term memory for teaching and learning?

Topic 3: Schema

Reading 1: Schemas
URL: http://wik.ed.uiuc.edu/index.php/Schemas
Word count: 2301 words
Summary: It introduces the concept of schema, its history, and its application.
Discussion questions:
• From your experience, what are the factors that affect your schema of certain things?
• What are other implications of schema theory for teaching and learning?
Reading 2: Schema Theory of Learning
URL: http://www.sil.org/lingualinks/literacy/ImplementALiteracyProgram/SchemaTheoryOfLearning.htm
Word count: 558 words
Summary: It describes the basic principles from schema theory and the characteristics of schemata.

Topic 4: Stereotype

Reading 1: Learning to Make Inferences
URL: http://www.weac.org/News_and_Publications/education_news/2000-2001/read_inferences.aspx
Word count: 1016 words
Summary: It lists the strategies teachers could use to help students learn how to make inferences.
Discussion questions:
• From your experience, could you explain why making inferences is important for comprehension, or learning in general?
• From your experience, could you explain what the relationship between schema and inference-making is?

Reading 2: Stereotype
URL: http://www.media-awareness.ca/english/special_initiatives/toolkit/stereotypes/what_are_stereotypes.cfm and http://www.research.uky.edu/odyssey/fall99/stereotypes.html
Word count: 921 words
Summary: It introduces the concept of stereotype and how to deconstruct stereotypes.
Discussion questions:
• From your experience, how do stereotypes affect your opinions on a particular group of people?
• What do you think an individual can do to help reduce bias and stereotyping?

APPENDIX B

Quiz Questions

Topic 1: Attention

1. Attention is part of memory, which is said to be able to hold a small amount of information for about 20 seconds.
Answer: True / False

2. Attention is what allows a person to selectively focus on one aspect of the environment while ignoring other things.
Answer: True / False

3. Attention is the tendency of organisms to orient themselves toward, or process information from, only one part of the environment to the exclusion of other parts.
Answer: True / False

4. Which of the following is an example of the psychological ability "attention"? Choose one answer.
a. Focusing on one conversation in a noisy room
b. Counting basketball passes from white-shirted players only, and ignoring passes from black-shirted players
c. Both of the above
d. None of the above

5. Which of the following is an example of psychological "attention"? Choose one answer.
a. Listening to two conversations in headphones (one in each ear), and being able to follow one of them completely
b. None of the above
c. Acting out so that others will focus more on you
d. Both of the above

6. When confronted with multiple stimuli, which answer best describes researchers' proposed means of describing how people choose what to attend to? Choose one answer.
a. A filtering system
b. Association to previous experience
c. Where you are already looking

Topic 2: Memory

1. What is chunking? Choose one answer.
a. Breaking a longer piece of information into smaller pieces
b. The process of recalling something from long-term memory
c. Grouping smaller pieces of information into a larger structure to make more efficient use of memory
d. None of the above

2. True or false: The following is an example of chunking.
When asked to recall the following numbers: 22434126530
you first break it down into: 2 2 4 3 4 12 6 5 30
and then try to remember it as: 2*2=4 3*4=12 6*5=30
Answer: True / False

3. True or false: The following is an example of chunking.
When asked to remember the following phone numbers:
353-9287
353-6393
353-7211
you decide it is just easier to remember the following string of numbers: 353928735363933537211
Answer: True / False

4. How long does long-term memory last? Choose one answer.
a. 3 seconds
b. permanent
c. none of the above

5. What can cause a memory not to be recalled from long-term memory? Choose one answer.
a. The memory fades
b. Brain injury destroys it
c. The memory is there, but there are not enough retrieval cues to recall it
d. All of the above

6. How much information can short-term memory hold? Choose one answer.
a. 7 +/- 2 "items"
b. The one thing you are currently focusing attention on
c. Up to 20 items
d. None of the above

7. How long can short-term memory hold information? Choose one answer.
a. 2 minutes
b. 20 seconds
c. 2 seconds
d. None of the above

8. Which of the following is NOT a part of working memory? Choose one answer.
a. Visuospatial Sketchpad
b. The Central Executive
c. The Phonological Loop
d. Short-term memory

9. Because working memory affects your ability to rehearse, when given a list of words to recall, which part of the list is most likely to be *forgotten*? Choose one answer.
a. The beginning of the list, because it's the oldest
b. The middle of the list, because rehearsal on the later part of the list interferes with recall on the middle part of the list
c. The end of the list, because you've had less time to rehearse it
d. All parts of the list are recalled the same

Topic 3: Schema

1. True or false: If you don't teach children how to form categories, they'll never form any categories on their own.
Answer: True / False

2. One metaphor for how concepts sit within categories might be a Russian nesting doll: a matryoshka. Can you think of another metaphor?
Answer:

3. True or false: In many world religions, the power of giving names to objects or living things is often a power of the deities.
Answer: True / False

4. A schema can be thought of as general knowledge about a typical object or event of a specific category.
Answer: True / False

5. Schemas influence our memory at these levels. Choose one answer.
a. Retention
b. Retrieval
c. All of the above
d. Encoding

6. What is the MOST important influence on the development of schemas? Choose one answer.
a. motives
b. heredity
c. experience
d. emotions

7. Diana has a "teacher schema" that includes the belief that, "Teachers like to make students feel stupid." Last week, her algebra teacher tried to help her during class while Diana was struggling with a difficult problem. Given what you know about the relationship between schemas and memory, which of the following is MOST likely to occur? Choose one answer.
a. Diana will remember incorrectly that she solved the problem with no trouble.
b. Diana will come to believe that teachers help students learn difficult material.
c. Diana will remember incorrectly that her teacher tried to make her feel stupid.
d. Diana will come to believe that she can't learn algebra without assistance.

8. We are more likely to selectively attend to behaviors or features that are more intense, novel, complex, or sudden.
Answer: True / False

9. In a classroom, schemas can be activated by providing multiple examples and giving hints.
Answer: True / False

10. Prior knowledge is essential for the comprehension of new information.
Answer: True / False

11. An individual will often prefer to live with inconsistencies rather than change a deeply held value or belief.
Answer: True / False

Topic 4: Stereotyping

1. Which of the following statements is not an example of a stereotype? Choose one answer.
a. Movies that feature Italian American themes tend to depict Italian Americans as gangsters.
b. Rice is a staple food in Japan.
c. Boys are better at mathematics than girls.
2. Which of the following concepts is closely associated with a stereotype? Choose one answer.
a. amnesia
b. meta-cognition
c. bias

3. A stereotype refers to: Choose one answer.
a. a fixed, commonly held notion or image of a person or group, based on an oversimplification of some observed or imagined trait of behavior or appearance
b. an attitude of a majority toward a minority
c. a positive attitude of a special kind

4. What is the most accurate inference you can make from the following sentence? Pat began studying friends' clothing at a young age and later became a top fashion designer. Choose one answer.
a. Pat did not do well in math classes
b. Pat is a girl
c. Pat likes to look at clothes
d. Pat likes bright colors

5. One of the main places that children and adults learn stereotypes is the mass media.
Answer: True / False

6. Stereotypes involve beliefs and expectations about a particular group.
Answer: True / False

7. Stereotyping is the same as categorizing.
Answer: True / False

APPENDIX C

Short Essay Questions

Topic 1: Attention

Now it is time to analyze the results of your own memory study. You are not graded on how well you recalled words. Instead, the focus is on why you recalled some words and not others, using the memory concepts and terms you studied. Make a 500-word post with:
1) An informative title (a key insight you had, or a question you still have).
2) In the post, type exactly what you recalled for each of the four lists (including words you recalled that weren't there).
3) Ideas about why some words were easily recalled.
4) Ideas about why some words were not easily recalled.
5) Any trends or patterns you see in your own data (or across other students' data), and ideas that you have for why those patterns exist.
6) Use terms and concepts from the readings (i.e., "encoding" and "retrieval" when possible) to show that you can talk the talk.
The above six criteria are the metric by which you are graded as well, so stick to them and you'll do fine. Some questions to ask yourself if you're stuck: What kinds of words were in each group?
In some of the experiments the words belonged to a group, and in some they didn't. Another question: Did I have anything else to do besides remember the words? Sometimes having something else to do makes it harder to remember the first thing. One more question for you: How familiar was I with the words? Do I use them every day, once a week, never?

Topic 2: Memory

Now it is time to analyze the results of your own memory study. You are not graded on how well you recalled words. Instead, the focus is on why you recalled some words and not others, using the memory concepts and terms you studied. Make a 500-word post with:
1) An informative title (a key insight you had, or a question you still have).
2) In the post, type exactly what you recalled for each of the four lists (including words you recalled that weren't there).
3) Ideas about why some words were easily recalled.
4) Ideas about why some words were not easily recalled.
5) Any trends or patterns you see in your own data (or across other students' data), and ideas that you have for why those patterns exist.
6) Use terms and concepts from the readings (i.e., "encoding" and "retrieval" when possible) to show that you can talk the talk.
The above six criteria are the metric by which you are graded as well, so stick to them and you'll do fine. Some questions to ask yourself if you're stuck: What kinds of words were in each group? In some of the experiments the words belonged to a group, and in some they didn't. Another question: Did I have anything else to do besides remember the words? Sometimes having something else to do makes it harder to remember the first thing. One more question for you: How familiar was I with the words? Do I use them every day, once a week, never?

Topic 3: Schema

For this posting, we would like you to do two things. First, in about 250 words, post your predictions about what you think people will remember when they read the passages. Use what you know of schema theory to justify your predictions. This part of the posting should be done as early as possible so that you will have time to conduct your study and report your results by Saturday night. After you have written up your predictions, conduct your study and tell us if what you found matched what you had predicted. Why or why not? This part of your posting should be about 250 words. In total your posting will be 500 words.

Topic 4: Stereotype

Put yourself in the shoes of the teacher of the children in the video. After the "nice newsman" leaves the day camp classroom, you must talk to these students about what they just said about the pictures he was showing them. In about 250 words, how would you begin a discussion about this topic with your students? How could you use what you have learned about making inferences and using schemas to explain stereotyping to these students?

APPENDIX D

Survey on Student Perceptions of the Two Discussion Environments

Dear class,

So far, you have participated in two forms of discussion. We are inviting you to share with us your experiences and thoughts. Please open two web pages with your browser: one discussion in the threaded discussion forum, and one discussion in Google Docs. Look at them, and reflect on your experience in the discussions. Then come back to this page to complete the survey. This survey will ask you to compare the nature of the discussions and the learning that occurred in the threaded discussion forum and in Google Docs. It will also ask you to provide reasons and explanations for your behaviors. Please be as detailed as possible when you give an explanation. This will give us a clear idea of what works for you and how it works. It will take you about 20 minutes to complete the survey, and you will be asked to type your name at the end of the survey to get credit for completing it. Thank you!

Learning

How well did Google Docs support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Learn the article

How well did the threaded forum support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Learn the article

Please explain what specific features of the two environments led to your responses.

Focus

How well did Google Docs support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Pay attention to specific words or concepts in the readings
- Pay attention to specific issues in the readings
- Pay attention to the overall idea of the readings

How well did the threaded forum support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Pay attention to specific words or concepts in the readings
- Pay attention to specific issues in the readings
- Pay attention to the overall idea of the readings

Please explain what specific features of the two environments led to your responses.

How well did Google Docs support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Share information with your classmates
- Explore various ideas or opinions with your classmates
- Negotiate meanings with your classmates
- Interact and communicate with your classmates fluidly

How well did the threaded forum support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Share information with your classmates
- Explore various ideas or opinions with your classmates
- Negotiate meanings with your classmates
- Interact and communicate with your classmates fluidly

Please explain what specific features of the two environments led to your responses.

How well did Google Docs support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Maintain a close and supportive relationship with your classmates
- Be responsive to your classmates' feelings

How well did the threaded forum support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Maintain a close and supportive relationship with your classmates
- Be responsive to your classmates' feelings

Please explain what specific features of the two environments led to your responses.

How well did Google Docs support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Motivate you to participate in the discussion actively
- Help you engage in the discussion
- Make participation in the discussion enjoyable

How well did the threaded forum support you to...
(Not at all / Not very well / To some extent / Quite a lot / Very well)
- Motivate you to participate in the discussion actively
- Help you engage in the discussion
- Make participation in the discussion enjoyable

Please explain what specific features of the two environments led to your responses.

APPENDIX E

Interview Protocols

Introduction

Thank you for taking time to help me with my research project. The purpose of this interview is to help me learn how different discussion environments supported discussion and learning. I have asked you to help me with this because I would like to understand more about it. The interview should take about half an hour. If it is all right with you, the interview will be audiotaped, and the tape will be used to help me reflect on the questions I asked. It will also be used to type up my interview notes. Before we get started, do you have any questions about this interview?

General Questions
• How do you feel about these two environments? What was your general experience in having discussions in the two environments?
• What do you think about the quality of discussion in the two environments? Why?
• How well do you think the discussions in the two environments supported you to understand the texts? Why?
• Did you enjoy the discussions in the two environments? Why?
• What do you think about these two discussion environments?

Focus
• How well do you think the discussions in the two environments supported you in focusing the discussion on the texts? Why?

Depth
• How well do you think the discussions in the two environments supported you in having in-depth discussion of the texts? Why?

Interaction
• How did the two environments make you respond to peers' comments differently? Why? What specific features of the two environments led to this difference?
• How did the two environments make you respond to the instructor's questions differently? What specific features of the two environments led to this difference?
• Did others' performance in your group affect your performance in the two environments?
Learning
• Do you think the environments affected your learning or your answers to the short essay question and quiz? Which one helped you remember and learn the information in the text better?

Other questions (These questions may vary depending on the student's performance in the discussion or responses to the survey)
• I notice that you posted more reflections on the texts in Google Docs. Why?
• I notice that you posted more frequently in Google Docs. Why? What specific features of the two environments led to this difference?
• I notice that your posts were shorter in Google Docs. Why? What specific features of the two environments led to this difference?
• I notice that although you posted more (or interacted with others more) in Google Docs, in your survey you indicated that Google Docs was not enjoyable to have discussions in. Why?

Overall Impression
• What are the good things and bad things about Google Docs?
• What are the good things and bad things about the threaded forum?

APPENDIX F

Content Analysis Models Used by Previous Researchers

Table 15
Henri's (1992) Analytical Model on Information Processing

Surface Processing:
- Repeating the information contained in the statement of the problem without making inferences or offering an interpretation
- Repeating what has been said without adding any new elements
- Stating that one shares the ideas or opinions stated, without taking these further or adding any personal comments
- Proposing solutions without offering explanation
- Making judgments without offering justification
- Asking questions which invite information not relevant to the problem or not adding to the understanding of it
- Offering several solutions without suggesting which is most appropriate
- Perceiving the situation in a fragmentary or short-term manner

In-Depth Processing:
- Linking facts, ideas and notions in order to interpret, infer, propose and judge
- Offering new elements of information
- Generating new data from information collected by the use of hypotheses and inferences
- Proposing one or more solutions with short-, medium-, or long-term justification
- Setting out advantages and disadvantages of a situation or solution
- Providing proof or supporting examples
- Making judgments supported by justification
- Perceiving the problem within a larger perspective
- Developing intervention strategies within a wider framework

Table 16
Gunawardena, Lowe, and Anderson's (1997) Interaction Analysis Model

Phase I: Sharing/Comparing of Information
A. A statement of observation or opinion
B. A statement of agreement from one or more other participants
C. Corroborating examples provided by one or more participants
D. Asking and answering questions to clarify details of statements
E. Definition, description, or identification of a problem

Phase II: The Discovery and Exploration of Dissonance or Inconsistency among Ideas, Concepts or Statements
(This is the operation at the group level of what Festinger calls cognitive dissonance, defined as an inconsistency between a new observation and the learner's existing framework of knowledge and thinking skills.)
Operations which occur at this stage include:
A. Identifying and stating areas of disagreement
B. Asking and answering questions to clarify the source and extent of disagreement
C. Restating the participant's position, and possibly advancing arguments or considerations in its support by references to the participant's experience, literature, formal data collected, or proposal of relevant metaphor or analogy to illustrate point of view

Phase III: Negotiation of Meaning/Co-construction of Knowledge
A. Negotiation or clarification of the meaning of terms
B. Negotiation of the relative weight to be assigned to types of argument
C. Identification of areas of agreement or overlap among conflicting concepts
D. Proposal and negotiation of new statements embodying compromise, co-construction
E. Proposal of integrating or accommodating metaphors or analogies

Phase IV: Testing and Modification of Proposed Synthesis or Co-construction
A. Testing the proposed synthesis against "received fact" as shared by the participants and/or their culture
B. Testing against existing cognitive schema
C. Testing against personal experience
D. Testing against formal data collected
E. Testing against contradictory testimony in the literature

Phase V: Agreement Statement(s)/Applications of Newly-Constructed Meaning
A. Summarization of agreement(s)
B. Applications of new knowledge
C. Metacognitive statements by the participants illustrating their understanding that their knowledge or ways of thinking (cognitive schema) have changed as a result of the conference interaction

Table 17
Rourke, Anderson, and Garrison's (1999) Social Presence Model

Category: Affective
- Expression of emotions. Definition: Conventional expressions of emotion, or unconventional expressions of emotion; includes repetitious punctuation, conspicuous capitalization, emoticons. Examples: "I just can't stand it when...!!" "ANYBODY OUT THERE!"
- Use of humor. Definition: Teasing, cajoling, irony, understatements, sarcasm. Example: "The banana crop in Edmonton is looking good this year."
- Self-disclosure. Definition: Presents details of life outside of class, or expresses vulnerability. Examples: "Where I work, this is what we do..." "I just don't understand this question"

Category: Interactive
- Continuing a thread. Definition: Using the reply feature of the software, rather than starting a new thread. Example: Software dependent, e.g., "Subject: Re" or "Branch from"
- Quoting from others' messages. Definition: Using software features to quote others' entire messages, or cutting and pasting selections of others' messages. Example: Software dependent, e.g., "Martha writes:" or text prefaced by the less-than symbol <.
- Asking questions. Definition: Students ask questions of other students or the moderator. Example: "Anyone else had experience with WEBCT?"
- Complimenting, expressing appreciation. Definition: Complimenting others or the contents of others' messages. Example: "I really like your interpretation of the reading"
- Expressing agreement. Definition: Expressing agreement with others or the content of others' messages. Examples: "I was thinking the same thing." "You really hit the nail on the head."

Category: Cohesive
- Vocatives. Definition: Addressing or referring to participants by name. Examples: "I think John made a good point." "John, what do you think?"
- Addresses or refers to the group using inclusive pronouns. Definition: Addressing the group as we, us, our, group. Examples: "Our textbook refers to..." "I think we veered off track..."
- Phatics, salutations. Definition: Communication that serves a purely social function; greetings, closures. Examples: "Hi all" "That's it for now" "We're having the most beautiful weather here"

APPENDIX G

Coding Schemes for Content Analysis

Table 18
Depth of Discussion

Level 0
Definition: No ideas, arguments, judgments, inferences, hypotheses, newly proposed questions, or solutions are present, and the example or evidence provided is irrelevant to the discussion.
Example: "I commend your ability to study, maintain a household, and tend to the needs of a baby. You're lucky though, my 'boyfriend' can only keep his attention on football and not much else. Well, I guess he can multitask between games, the remote, and a can of coke."

Level 1
Definition: Ideas, arguments, judgments, inferences, hypotheses, newly proposed questions, and solutions are present, but supporting evidence, analyses, triangulations, comparisons, interpretations, or refinements are not provided; or, analysis or evidence is present, but the point is implicit.
Example: "I believe that we do each have a certain capacity that we can attain attention at. I have a very similar situation and I do think that we also each have a certain environment in which we are able to have the best attention span."

Level 2
Definition: Ideas, arguments, judgments, inferences, hypotheses, newly proposed questions, and solutions are present, and supporting evidence, analyses, triangulations, comparisons, interpretations, or refinements are unelaborated and brief, but are logical, consistent, or helpful in proposing the ideas, arguments, judgments, inferences, hypotheses, questions, and solutions.
Example: "I really think that focused and sustained attention are the most important for learning because if you cannot keep focus on one specific objective you will struggle to get the message in its entirety and also I feel that sustained attention goes hand in hand with this because if you can only focus, like me, for a few minutes at a time, your pace of learning will be slowed unless you develop other tactics, like I have, to attack attention issues."

Level 3
Definition: Ideas, arguments, judgments, inferences, hypotheses, newly proposed questions, and solutions are present, and supporting evidence, analyses, triangulations, comparisons, interpretations, or refinements are elaborated and support the ideas, arguments, judgments, inferences, hypotheses, questions, and solutions in a clear, coherent way.
Example: "I am not so sure I agree that divided attention always is the best. Sometimes, especially when I need to meet a deadline, my complete focus is needed. That being said, if I have a lot of stuff to do, and plenty of time, then divided attention does work better for me. I think divided attention is something you develop as you grow. A scene from the movie Knocked Up comes into mind. When the children are blowing bubbles at the park, they are so focused, so happy on only the bubbles. Ben even comments that he wishes anything could make him that happy. Once we learn how to divide our attention, I think we lose the ability to undivide it, and focus on simple things like bubbles. I am not sure I did a great job of conveying my thoughts, but it's worth a try. -Alex"

Level 4
Definition: Presenting the connectedness and interrelationships of multiple ideas, arguments, judgments, inferences, hypotheses, and solutions based on analyses, triangulations, comparisons, interpretations, or refinements of data from multiple angles or perspectives in a sophisticated way.
Example: "The models described here are helpful in understanding the best way to present information to learners so that it can be discerned and retained. Knowing that information is best comprehended when associated with something familiar can guide us as educators as well as when learning new things ourselves. The filter model also shows the importance of breaking down information so that learners aren't bombarded with too many ideas at one time and find themselves unable to pay attention to everything at once. If educators don't 'pre-filter' the information, it sounds like the brain will do this on its own! The attenuation model is also good to keep in mind in terms of how to present information so it will stand out and demand a learner's attention when other, competing stimuli are also present."

Table 19
Focus of Discussion

Texts
Definition: A response to the content in the text.
Example: "There were two major statements that stood out to me from these readings. The first was from the 'schemas' reading, 'without some general setting or label as we have repeatedly seen, no material can be assimilated or remembered.' I found this idea very interesting and was curious about others' thoughts on it. When I first read it I thought it seemed that according to this statement we could never actually form any schemas or memories, etc. For if we always needed something to base it on, how would we form our very FIRST schema, label, setting, memory, etc.?"

General Ideas
Example: "Reading the post above, I was reminded of another thought I had while going through this module. I'm curious as to how these attention theories apply to an activity like playing music. When I play drums with a group of other musicians, my attention is on the rhythm I'm creating, the actual motions of my hands and feet, and the sounds of a guitar, bass and vocalist at one time. Split attention is necessary in order to hear sounds from multiple sources and to respond accordingly."

Instructor's Questions
Definition: A response to a question raised by the instructors.
Example: "One factor that affects my schema of certain things is if I have a strongly reinforced sense of the topic. Like it is stated in the article, it ties together with a strong visual component. Therefore, having the knowledge along with visuals such as charts, diagrams, etc. helps put schemas into my memory. Another aspect that helps schemas stick out is if they are original or stick out in my mind. Teaching suggestions for schemas are making sure assimilating ideas nicely fit into the place where they belong, and accommodating experiences to fit into our model so we modify what we already know."

Peer Comments
Definition: Responses to comments from peers.
Example: "That is a good question, it seems like inferences are sort of based on stereotypes. You think of something as stereotypical, so that when you need to make an inference that stereotype will help you make one, whether right or not."

Table 20
Knowledge Construction

Starting a New Topic
Definition: Defining, describing, or identifying a new problem, or asking a question to open up discussion on a new problem or issue.
Example: "With all these negative correlations between the quality of neurons and memory, do you believe that people with more (and higher quality) neurons remember more information? Or is memory more environmentally dependent, hinging on the ability to chunk and associate? Nature and nurture rearing its ugly head once again!"
Supporting, Clarifying or Elaborating Existing Ideas
Definition: Showing support of or agreement with previously stated ideas, corroborating previously stated ideas with personal experience and so on, or elaborating on previously stated ideas.
Example: "I agree- these models make us realize there is a filter system in our brains that helps us focus on what is important, and what we need for each particular instance. If I am looking for someone in an airport, I am going to be only looking for them. But if I am in a crowded airport and am just 'people watching', I won't be on the lookout for anything in particular. It makes me realize better how our brains work, and that these processes occur without our even noticing."

Extending or Deepening Existing Ideas
Definition: Adding new interpretations, observations, or perspectives to existing facts/evidence/ideas, contributing new ideas to an existing topic, or bringing additional issues on a topic into consideration.
Example: "I both agree and disagree. Memory might be mostly nature, but from our previous modules, I believe memory and learning can be improved. Working on memory is learning to remember. Working on learning is utilizing your memory. It is a cyclical process."

Self-Reflection
Definition: Reflecting on and answering discussion questions, making associations between texts and personal experiences, showing difficulty in understanding the text, or paraphrasing sentences in the text to make sense of them.
Example: "I'm having trouble believing this model! It's hard to comprehend that our memories may reach a maximum threshold of time or attention which causes them to transfer from short-term to long-term. However, the evidence contributed by patient HM is pretty convincing. Do you think that each person has their own set 'threshold' or is it more of a gray area (give or take time or attention depending on the situation)?"

Synthesizing
Definition: Combining multiple points previously made by more than one peer.
Example: "I think that it's interesting that the three of us that have posted thus far have all talked about repetition as a means for long-term memory, and we all had different examples of it. I wonder if this is because as we grow up, we have all learned that one of the best ways to remember information for the long-term is to repeat it over and over until it is almost like second nature and we can recall it whenever we need to."
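Once units have been hand-coded with the schemes above, the remaining work is simple bookkeeping: counting units per category and computing the percentages reported in the results (e.g., the finding that reflection and supporting units made up 76.88% of units in the anchored environment). The sketch below is a hypothetical illustration of that tally step, not part of the study's actual tooling; the category labels and the coded_units list are invented for the example.

```python
from collections import Counter

# Hypothetical hand-coded units: one knowledge-construction category
# (from Table 20) per unit. The labels and data are illustrative only.
coded_units = [
    "reflection", "supporting", "reflection", "new_topic",
    "supporting", "extending", "reflection", "synthesizing",
    "reflection", "supporting", "extending", "reflection",
]

counts = Counter(coded_units)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} units ({100 * n / total:.2f}% of all units)")

# Combined share of the two low-level categories, as reported in the results:
low_level = counts["reflection"] + counts["supporting"]
print(f"reflection + supporting: {100 * low_level / total:.2f}%")
```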
APPENDIX H

Results from ANOVAs and MANOVAs

Table 21
MANOVA of Numbers of Units in Focus Categories
(Each row reports: Value, F, Hypothesis df, Error df, Sig., partial η²)

Intercept
  Pillai's Trace: .965, 7.655E3, 4.000, 1.115E3, .000, .965
  Wilks' Lambda: .035, 7.655E3, 4.000, 1.115E3, .000, .965
  Hotelling's Trace: 27.460, 7.655E3, 4.000, 1.115E3, .000, .965
  Roy's Largest Root: 27.460, 7.655E3, 4.000, 1.115E3, .000, .965
Environment
  Pillai's Trace: .098, 30.370, 4.000, 1.115E3, .000, .098
  Wilks' Lambda: .902, 30.370, 4.000, 1.115E3, .000, .098
  Hotelling's Trace: .109, 30.370, 4.000, 1.115E3, .000, .098
  Roy's Largest Root: .109, 30.370, 4.000, 1.115E3, .000, .098
Student(Group(Section))
  Pillai's Trace: .261, 2.440, 128.000, 4.472E3, .000, .065
  Wilks' Lambda: .763, 2.440, 128.000, 4.438E3, .000, .065
  Hotelling's Trace: .281, 2.441, 128.000, 4.454E3, .000, .066
  Roy's Largest Root: .092, 3.225, 32.000, 1.118E3, .000, .085
Group(Section)
  Pillai's Trace: .020, 2.841, 8.000, 2.232E3, .004, .010
  Wilks' Lambda: .980, 2.839, 8.000, 2.230E3, .004, .010
  Hotelling's Trace: .020, 2.837, 8.000, 2.228E3, .004, .010
  Roy's Largest Root: .012, 3.296, 4.000, 1.116E3, .011, .012
Section
  Pillai's Trace: .015, 4.342, 4.000, 1.115E3, .002, .015
  Wilks' Lambda: .985, 4.342, 4.000, 1.115E3, .002, .015
  Hotelling's Trace: .016, 4.342, 4.000, 1.115E3, .002, .015
  Roy's Largest Root: .016, 4.342, 4.000, 1.115E3, .002, .015
Reading(Topic(Module))
  Pillai's Trace: .056, 2.647, 24.000, 4.472E3, .000, .014
  Wilks' Lambda: .945, 2.651, 24.000, 3.891E3, .000, .014
  Hotelling's Trace: .057, 2.651, 24.000, 4.454E3, .000, .014
  Roy's Largest Root: .027, 4.966, 6.000, 1.118E3, .000, .026
Topic(Module)
  Pillai's Trace: .021, 2.967, 8.000, 2.232E3, .003, .011
  Wilks' Lambda: .979, 2.973, 8.000, 2.230E3, .003, .011
  Hotelling's Trace: .021, 2.978, 8.000, 2.228E3, .003, .011
  Roy's Largest Root: .018, 5.119, 4.000, 1.116E3, .000, .018
Module
  Pillai's Trace: .038, 10.938, 4.000, 1.115E3, .000, .038
  Wilks' Lambda: .962, 10.938, 4.000, 1.115E3, .000, .038
  Hotelling's Trace: .039, 10.938, 4.000, 1.115E3, .000, .038
  Roy's Largest Root: .039, 10.938, 4.000, 1.115E3, .000, .038

Table 22
ANOVA of Numbers of Units in Focus Categories
(Each row reports: Type III Sum of Squares, df, Mean Square, F, Sig., partial η²; Error rows report Sum of Squares, df, Mean Square; Total rows report Sum of Squares, df)

Corrected Model
  Texts: 14.649, 45, .326, 3.402, .000, .120
  General Ideas: 5.532, 45, .123, 5.372, .000, .178
  Instructor's Questions: 37.056, 45, .823, 3.627, .000, .127
  Peer Comments: 22.357, 45, .497, 2.284, .000, .084
Intercept
  Texts: 3.309, 1, 3.309, 34.575, .000, .030
  General Ideas: .517, 1, .517, 22.571, .000, .020
  Instructor's Questions: 115.297, 1, 115.297, 507.773, .000, .312
  Peer Comments: 44.453, 1, 44.453, 204.331, .000, .155
Environment
  Texts: 5.011, 1, 5.011, 52.362, .000, .045
  General Ideas: 1.267, 1, 1.267, 55.337, .000, .047
  Instructor's Questions: 4.784, 1, 4.784, 21.071, .000, .018
  Peer Comments: .815, 1, .815, 3.746, .053, .003
Student(Group(Section))
  Texts: 5.782, 32, .181, 1.888, .002, .051
  General Ideas: 2.344, 32, .073, 3.201, .000, .084
  Instructor's Questions: 17.239, 32, .539, 2.373, .000, .064
  Peer Comments: 13.365, 32, .418, 1.920, .002, .052
Group(Section)
  Texts: .156, 2, .078, .817, .442, .001
  General Ideas: .229, 2, .115, 5.014, .007, .009
  Instructor's Questions: 2.248, 2, 1.124, 4.950, .007, .009
  Peer Comments: 1.508, 2, .754, 3.465, .032, .006
Section
  Texts: .064, 1, .064, .667, .414, .001
  General Ideas: .287, 1, .287, 12.519, .000, .011
  Instructor's Questions: .190, 1, .190, .836, .361, .001
  Peer Comments: .002, 1, .002, .011, .916, .000
Reading(Topic(Module))
  Texts: 1.514, 6, .252, 2.636, .015, .014
  General Ideas: .408, 6, .068, 2.972, .007, .016
  Instructor's Questions: 5.525, 6, .921, 4.055, .000, .021
  Peer Comments: 4.199, 6, .700, 3.217, .004, .017
Topic(Module)
  Texts: 1.021, 2, .510, 5.333, .005, .009
  General Ideas: .079, 2, .040, 1.736, .177, .003
  Instructor's Questions: 3.654, 2, 1.827, 8.046, .000, .014
  Peer Comments: .684, 2, .342, 1.573, .208, .003
Module
  Texts: .211, 1, .211, 2.202, .138, .002
  General Ideas: .892, 1, .892, 38.970, .000, .034
  Instructor's Questions: .061, 1, .061, .271, .603, .000
  Peer Comments: .982, 1, .982, 4.516, .034, .004
Error
  Texts: 106.990, 1118, .096
  General Ideas: 25.588, 1118, .023
  Instructor's Questions: 253.858, 1118, .227
  Peer Comments: 243.227, 1118, .218
Total
  Texts: 138.000, 1164
  General Ideas: 32.000, 1164
  Instructor's Questions: 572.000, 1164
  Peer Comments: 410.000, 1164
Corrected Total
  Texts: 121.639, 1163
  General Ideas: 31.120, 1163
  Instructor's Questions: 290.914, 1163
  Peer Comments: 265.584, 1163

Table 23
MANOVA of Numbers of Units in Knowledge Construction Categories
(Each row reports: Value, F, Hypothesis df, Error df, Sig., partial η²)

Intercept
  Pillai's Trace: .989, 1.993E4, 5.000, 1.114E3, .000, .989
  Wilks' Lambda: .011, 1.993E4, 5.000, 1.114E3, .000, .989
  Hotelling's Trace: 89.467, 1.993E4, 5.000, 1.114E3, .000, .989
  Roy's Largest Root: 89.467, 1.993E4, 5.000, 1.114E3, .000, .989
Environment
  Pillai's Trace: .016, 3.575, 5.000, 1.114E3, .003, .016
  Wilks' Lambda: .984, 3.575, 5.000, 1.114E3, .003, .016
  Hotelling's Trace: .016, 3.575, 5.000, 1.114E3, .003, .016
  Roy's Largest Root: .016, 3.575, 5.000, 1.114E3, .003, .016
Student(Group(Section))
  Pillai's Trace: .338, 2.533, 160.000, 5.590E3, .000, .068
  Wilks' Lambda: .703, 2.546, 160.000, 5.521E3, .000, .068
  Hotelling's Trace: .368, 2.557, 160.000, 5.562E3, .000, .069
  Roy's Largest Root: .114, 3.976, 32.000, 1.118E3, .000, .102
Group(Section)
  Pillai's Trace: .009, 1.037, 10.000, 2.230E3, .409, .005
  Wilks' Lambda: .991, 1.037, 10.000, 2.228E3, .409, .005
  Hotelling's Trace: .009, 1.038, 10.000, 2.226E3, .408, .005
  Roy's Largest Root: .008, 1.801, 5.000, 1.115E3, .110, .008
Section
  Pillai's Trace: .002, .499, 5.000, 1.114E3, .777, .002
  Wilks' Lambda: .998, .499, 5.000, 1.114E3, .777, .002
  Hotelling's Trace: .002, .499, 5.000, 1.114E3, .777, .002
  Roy's Largest Root: .002, .499, 5.000, 1.114E3, .777, .002
Reading(Topic(Module))
  Pillai's Trace: .043, 1.602, 30.000, 5.590E3, .020, .009
  Wilks' Lambda: .958, 1.606, 30.000, 4.458E3, .019, .009
  Hotelling's Trace: .043, 1.609, 30.000, 5.562E3, .019, .009
  Roy's Largest Root: .024, 4.426, 6.000, 1.118E3, .000, .023
Topic(Module)
  Pillai's Trace: .010, 1.096, 10.000, 2.230E3, .361, .005
  Wilks' Lambda: .990, 1.097, 10.000, 2.228E3, .361, .005
  Hotelling's Trace: .010, 1.097, 10.000, 2.226E3, .360, .005
  Roy's Largest Root: .008, 1.886, 5.000, 1.115E3, .094, .008
Module
  Pillai's Trace: .007, 1.485, 5.000, 1.114E3, .192, .007
  Wilks' Lambda: .993, 1.485, 5.000, 1.114E3, .192, .007
  Hotelling's Trace: .007, 1.485, 5.000, 1.114E3, .192, .007
  Roy's Largest Root: .007, 1.485, 5.000, 1.114E3, .192, .007

Table 24
Follow-up ANOVAs of Numbers of Units in Knowledge Construction Categories
(Each row reports: Type III Sum of Squares, df, Mean Square, F, Sig., partial η²; Error rows report Sum of Squares, df, Mean Square; Total rows report Sum of Squares, df)

Corrected Model
  NewTopic: 4.478, 45, .100, 2.322, .000, .085
  Supporting: 20.589, 45, .458, 3.031, .000, .109
  Extending: 11.756, 45, .261, 2.340, .000, .086
  Synthesizing: .854, 45, .019, 2.992, .000, .107
  Reflection: 32.285, 45, .717, 3.267, .000, .116
Intercept
  NewTopic: .303, 1, .303, 7.067, .008, .006
  Supporting: 16.562, 1, 16.562, 109.730, .000, .089
  Extending: 5.742, 1, 5.742, 51.436, .000, .044
  Synthesizing: .051, 1, .051, 8.022, .005, .007
  Reflection: 170.543, 1, 170.543, 776.633, .000, .410
Environment
  NewTopic: .191, 1, .191, 4.465, .035, .004
  Supporting: .001, 1, .001, .009, .924, .000
  Extending: 1.176, 1, 1.176, 10.540, .001, .009
  Synthesizing: .001, 1, .001, .173, .677, .000
  Reflection: 2.156, 1, 2.156, 9.818, .002, .009
Student(Group(Section))
  NewTopic: 3.157, 32, .099, 2.301, .000, .062
  Supporting: 16.805, 32, .525, 3.479, .000, .091
  Extending: 6.418, 32, .201, 1.797, .004, .049
  Synthesizing: .754, 32, .024, 3.713, .000, .096
  Reflection: 16.692, 32, .522, 2.375, .000, .064
Group(Section)
  NewTopic: .001, 2, .001, .013, .987, .000
  Supporting: .334, 2, .167, 1.106, .331, .002
  Extending: .317, 2, .159, 1.422, .242, .003
  Synthesizing: .018, 2, .009, 1.432, .239, .003
  Reflection: 1.223, 2, .612, 2.786, .062, .005
Section
  NewTopic: 2.692E-5, 1, 2.692E-5, .001, .980, .000
  Supporting: .050, 1, .050, .335, .563, .000
  Extending: .197, 1, .197, 1.769, .184, .002
  Synthesizing: .002, 1, .002, .322, .571, .000
  Reflection: .046, 1, .046, .210, .647, .000
Reading(Topic(Module))
  NewTopic: .145, 6, .024, .565, .759, .003
  Supporting: 2.189, 6, .365, 2.417, .025, .013
  Extending: 1.329, 6, .221, 1.984, .065, .011
  Synthesizing: .077, 6, .013, 2.024, .060, .011
  Reflection: 5.607, 6, .934, 4.255, .000, .022
Topic(Module)
  NewTopic: .211, 2, .105, 2.459, .086, .004
  Supporting: .659, 2, .330, 2.184, .113, .004
  Extending: .276, 2, .138, 1.236, .291, .002
  Synthesizing: .001, 2, .001, .107, .899, .000
  Reflection: .192, 2, .096, .436, .647, .001
Module
  NewTopic: .047, 1, .047, 1.087, .297, .001
  Supporting: .202, 1, .202, 1.339, .247, .001
  Extending: .321, 1, .321, 2.877, .090, .003
  Synthesizing: .009, 1, .009, 1.374, .241, .001
  Reflection: .852, 1, .852, 3.878, .049, .003
Error
  NewTopic: 47.923, 1118, .043
  Supporting: 168.748, 1118, .151
  Extending: 124.797, 1118, .112
  Synthesizing: 7.091, 1118, .006
  Reflection: 245.505, 1118, .220
Total
  NewTopic: 55.000, 1164
  Supporting: 238.000, 1164
  Extending: 158.000, 1164
  Synthesizing: 8.000, 1164
  Reflection: 706.000, 1164
Corrected Total
  NewTopic: 52.401, 1163
  Supporting: 189.337, 1163
  Extending: 136.553, 1163
  Synthesizing: 7.945, 1163
  Reflection: 277.790, 1163

Table 25
MANOVA of Numbers of Units in Social Presence Categories
(Each row reports: Value, F, Hypothesis df, Error df, Sig., partial η²)

Intercept
  Pillai's Trace: .832, 1.379E3, 4.000, 1.115E3, .000, .832
  Wilks' Lambda: .168, 1.379E3, 4.000, 1.115E3, .000, .832
  Hotelling's Trace: 4.948, 1.379E3, 4.000, 1.115E3, .000, .832
  Roy's Largest Root: 4.948, 1.379E3, 4.000, 1.115E3, .000, .832
Environment
  Pillai's Trace: .018, 5.099, 4.000, 1.115E3, .000, .018
  Wilks' Lambda: .982, 5.099, 4.000, 1.115E3, .000, .018
  Hotelling's Trace: .018, 5.099, 4.000, 1.115E3, .000, .018
  Roy's Largest Root: .018, 5.099, 4.000, 1.115E3, .000, .018
Student(Group(Section))
  Pillai's Trace: .226, 2.096, 128.000, 4.472E3, .000, .057
  Wilks' Lambda: .791, 2.101, 128.000, 4.438E3, .000, .057
  Hotelling's Trace: .242, 2.105, 128.000, 4.454E3, .000, .057
  Roy's Largest Root: .099, 3.457, 32.000, 1.118E3, .000, .090
Group(Section)
  Pillai's Trace: .012, 1.646, 8.000, 2.232E3, .107, .006
  Wilks' Lambda: .988, 1.647, 8.000, 2.230E3, .107, .006
  Hotelling's Trace: .012, 1.647, 8.000, 2.228E3, .107, .006
  Roy's Largest Root: .010, 2.689, 4.000, 1.116E3, .030, .010
Section
  Pillai's Trace: .001, .381, 4.000, 1.115E3, .822, .001
  Wilks' Lambda: .999, .381, 4.000, 1.115E3, .822, .001
  Hotelling's Trace: .001, .381, 4.000, 1.115E3, .822, .001
  Roy's Largest Root: .001, .381, 4.000, 1.115E3, .822, .001
Reading(Topic(Module))
  Pillai's Trace: .077, 3.648, 24.000, 4.472E3, .000, .019
  Wilks' Lambda: .925, 3.678, 24.000, 3.891E3, .000, .019
  Hotelling's Trace: .080, 3.701, 24.000, 4.454E3, .000, .020
  Roy's Largest Root: .052, 9.634, 6.000, 1.118E3, .000, .049
Topic(Module)
  Pillai's Trace: .007, .925, 8.000, 2.232E3, .494, .003
  Wilks' Lambda: .993, .925, 8.000, 2.230E3, .494, .003
  Hotelling's Trace: .007, .925, 8.000, 2.228E3, .494, .003
  Roy's Largest Root: .006, 1.558, 4.000, 1.116E3, .183, .006
Module
  Pillai's Trace: .004, 1.072, 4.000, 1.115E3, .369, .004
  Wilks' Lambda: .996, 1.072, 4.000, 1.115E3, .369, .004
  Hotelling's Trace: .004, 1.072, 4.000, 1.115E3, .369, .004
  Roy's Largest Root: .004, 1.072, 4.000, 1.115E3, .369, .004

Table 26
Follow-up ANOVAs of Numbers of Units in Social Presence Categories
(Each row reports: Type III Sum of Squares, df, Mean Square, F, Sig., partial η²; Error rows report Sum of Squares, df, Mean Square; Total rows report Sum of Squares, df)

Corrected Model
  Texts: 14.649, 45, .326, 3.402, .000, .120
  General Ideas: 5.532, 45, .123, 5.372, .000, .178
  Instructor's Questions: 37.056, 45, .823, 3.627, .000, .127
  Peer Comments: 22.357, 45, .497, 2.284, .000, .084
Intercept
  Texts: 3.309, 1, 3.309, 34.575, .000, .030
  General Ideas: .517, 1, .517, 22.571, .000, .020
  Instructor's Questions: 115.297, 1, 115.297, 507.773, .000, .312
  Peer Comments: 44.453, 1, 44.453, 204.331, .000, .155
Environment
  Texts: 5.011, 1, 5.011, 52.362, .000, .045
  General Ideas: 1.267, 1, 1.267, 55.337, .000, .047
  Instructor's Questions: 4.784, 1, 4.784, 21.071, .000, .018
  Peer Comments: .815, 1, .815, 3.746, .053, .003
Student(Group(Section))
  Texts: 5.782, 32, .181, 1.888, .002, .051
  General Ideas: 2.344, 32, .073, 3.201, .000, .084
  Instructor's Questions: 17.239, 32, .539, 2.373, .000, .064
  Peer Comments: 13.365, 32, .418, 1.920, .002, .052
Group(Section)
  Texts: .156, 2, .078, .817, .442, .001
  General Ideas: .229, 2, .115, 5.014, .007, .009
  Instructor's Questions: 2.248, 2, 1.124, 4.950, .007, .009
  Peer Comments: 1.508, 2, .754, 3.465, .032, .006
Section
  Texts: .064, 1, .064, .667, .414, .001
  General Ideas: .287, 1, .287, 12.519, .000, .011
  Instructor's Questions: .190, 1, .190, .836, .361, .001
  Peer Comments: .002, 1, .002, .011, .916, .000
Reading(Topic(Module))
  Texts: 1.514, 6, .252, 2.636, .015, .014
  General Ideas: .408, 6, .068, 2.972, .007, .016
  Instructor's Questions: 5.525, 6, .921, 4.055, .000, .021
  Peer Comments: 4.199, 6, .700, 3.217, .004, .017
Topic(Module)
  Texts: 1.021, 2, .510, 5.333, .005, .009
  General Ideas: .079, 2, .040, 1.736, .177, .003
  Instructor's Questions: 3.654, 2, 1.827, 8.046, .000, .014
  Peer Comments: .684, 2, .342, 1.573, .208, .003
Module
  Texts: .211, 1, .211, 2.202, .138, .002
  General Ideas: .892, 1, .892, 38.970, .000, .034
  Instructor's Questions: .061, 1, .061, .271, .603, .000
  Peer Comments: .982, 1, .982, 4.516, .034, .004
Error
  Texts: 106.990, 1118, .096
  General Ideas: 25.588, 1118, .023
  Instructor's Questions: 253.858, 1118, .227
  Peer Comments: 243.227, 1118, .218
Total
  Texts: 138.000, 1164
  General Ideas: 32.000, 1164
  Instructor's Questions: 572.000, 1164
  Peer Comments: 410.000, 1164
Corrected Total
  Texts: 121.639, 1163
  General Ideas: 31.120, 1163
  Instructor's Questions: 290.914, 1163
  Peer Comments: 265.584, 1163
REFERENCES

Adams, M. J., & Collins, A. C. (1979). A schema-theoretic view of reading. In R. Freedle (Ed.), New directions in discourse processing (pp. 1-22). Norwood, NJ: Ablex.

Anderson, R. C., & Biddle, W. B. (1975). On asking people questions about what they are reading. In G. H. Bower (Ed.), Psychology of learning and motivation (Vol. 9, pp. 89-132). New York: Academic Press.

Anderson, R. C., Spiro, R. J., & Anderson, M. C. (1978). Schemata as scaffolding for the representation of information in connected discourse. American Educational Research Journal, 15(3), 433-440.

Anderson, T. (1996). The virtual conference: Extending professional education in cyberspace. International Journal of Educational Telecommunications, 2, 121-135.

Andre, T. (1979). Does answering higher-level questions while reading facilitate productive learning? Review of Educational Research, 49, 280-318.

Andriessen, J., Baker, M., & Suthers, D. (2003). Arguing to learn: Confronting cognitions in computer-supported collaborative learning. Dordrecht: Kluwer Academic.

Arvaja, M., Salovaara, H., Häkkinen, P., & Järvelä, S. (2007). Combining individual and group-level perspectives for studying collaborative knowledge construction in context. Learning and Instruction, 17, 448-459.

Baker, L., & Brown, A. (1984). Metacognitive skills and reading. In P. D. Pearson (Ed.), The handbook of reading research (pp. 353-394). New York: Longman.

Beaudin, B. P. (1999). Keeping online asynchronous discussions on topic. Journal of Asynchronous Learning Networks, 3(2), 41-53.

Beneli, I. (1997). Selective attention and arousal. Retrieved September 1, 2008, from http://www.csun.edu/~vcpsy00h/students/arousal.htm

Bente, G., Rüggenberg, S., Krämer, N. C., & Eschenburg, F. (2008). Avatar-mediated networking: Increasing social presence and interpersonal trust in net-based collaborations. Human Communication Research, 34(2), 171-345.

Berge, Z., & Muilenburg, L. (2002). A framework for designing questions for online learning [Electronic Version], from http://www.emoderators.com/moderators/muilenburg.html

Berge, Z. L., & Muilenburg, L. (2002). Questions for online, adult learning. In A. Rossett (Ed.), The ASTD e-learning handbook: Best practices, strategies and case studies for an emerging field (pp. 183-190). Chicago: McGraw-Hill Professional.

Bielman, V. A., Putney, L. G., & Strudler, N. (2003). Constructing community in a postsecondary virtual classroom. Journal of Educational Computing Research, 29, 119-144.

Biocca, F., Harms, C., & Burgoon, J. (2001, May). Criteria and scope conditions for a theory and measure of social presence. Paper presented at Presence 2001, 4th Annual International Workshop, Philadelphia, PA.

Block, C. C., & Pressley, M. (2001). Comprehension instruction: Research-based best practices. New York: Guilford.

Bloom, B., Englehart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals, Handbook I: Cognitive domain. New York: Longmans Green.

Brown, A. L.
(1980). Metacognitive development and reading. In R. J. Spiro, B. C. Bruce & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 453-481). Hillsdale, NJ: Lawrence Erlbaum Associates.

Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1956). A study of thinking. New York: Wiley.

Brush, A., Bargeron, D., Grudin, J., Borning, A., & Gupta, A. (2002). Supporting interaction outside of class: Anchored discussions vs. discussion boards. Paper presented at CSCL, Boulder, CO.

Buehl, D. (2001). Learning to make inferences. Retrieved September 1, 2008, from http://www.weac.org/News_and_Publications/education_news/2000-2001/read_inferences.aspx

Bullen, M. (1998). Participation and critical thinking in online university distance education [Electronic Version]. Journal of Distance Education, 13, 1-32. Retrieved April 12, 2009, from http://www.jofde.ca/index.php/jde/article/view/140/394

Capozzoli, M., McSweeney, L., & Sinha, D. (1999). Beyond kappa: A review of interrater agreement measures. The Canadian Journal of Statistics, 27(1), 3-23.

Caspi, A., & Blau, I. (2008). Social presence in online discussion groups: Testing three concepts and their relations to perceived learning. Social Psychology of Education, 11, 323-346.

Chi, M. T., de Leeuw, N., Chiu, M., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.

Cho, K. L., & Jonassen, D. H. (2002). The effects of argumentation scaffolds on argumentation and problem solving. Educational Technology Research & Development, 50(3), 5-22.

Choi, I., Land, S. M., & Turgeon, A. J. (2005). Scaffolding peer-questioning strategies to facilitate metacognition during online small group discussion. Instructional Science, 33, 483-511.

Cobb, P. (1994). Where is the mind? Constructivist and sociocultural perspectives on mathematical development. Educational Researcher, 23(7), 13-20.

Collins, A., Brown, J. S., & Larkin, K. M. (1980). Inference in text understanding. In R. J. Spiro, B. C. Bruce & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 385-407). Hillsdale, NJ: Lawrence Erlbaum Associates.

Collison, G., Elbaum, B., Haavind, S., & Tinker, R. (2000). Facilitating online learning: Effective strategies for moderators. Madison: Atwood Publishing.

Dennen, V. P. (2008). Looking for evidence of learning: Assessment and analysis methods for online discourse. Computers in Human Behavior, 24(2), 205-219.

DeWert, M. H., Babinski, L. M., & Jones, B. D. (2006). Providing online support to beginning teachers. Journal of Teacher Education, 54(4), 311-320.

Duke, N. K., & Pearson, P. D. (2002). Effective practice for developing reading comprehension. In A. E. Farstrup & S. J. Samuels (Eds.), What research has to say about reading instruction (pp. 205-242). Newark, DE: The International Reading Association.

Dweck, C. S. (1999). Caution-Praise can be dangerous. American Educator, 23(1), 4-9.

Ertl, B., Kopp, B., & Mandl, H. (2008). Supporting learning using external representations. Computers and Education, 51(4), 1599-1608.

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., et al. (2007). Using peer feedback to enhance the quality of student online postings. Journal of Computer-Mediated Communication, 12(2), 412-433.

Gao, F., & Putnam, R. (2007, April). A content-focused online discussion environment: Effects on student engagement and learning. Paper presented at the annual meeting of the American Educational Research Association.
Gao, F., & Putnam, R. (conditionally accepted). Bringing literacy research and perspectives into online discussion research. Journal of Educational Computing Research.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15, 7-23.

Garrison, D. R., & Cleveland-Innes, M. (2005). Facilitating cognitive presence in online learning: Interaction is not enough. American Journal of Distance Education, 19, 133-148.

Gibson, D. (1999). Deconstructing stereotypes. Retrieved September 1, 2008, from http://www.research.uky.edu/odyssey/fall99/stereotypes.html