This is to certify that the dissertation entitled INSTRUCTION IN THE WWWDOT APPROACH TO IMPROVING STUDENTS' EVALUATION OF WEBSITES presented by SHENGLAN ZHANG has been accepted towards fulfillment of the requirements for the PhD degree in the Department of Counseling, Educational Psychology, and Special Education.

INSTRUCTION IN THE WWWDOT APPROACH TO IMPROVING STUDENTS' EVALUATION OF WEBSITES

By Shenglan Zhang

A DISSERTATION submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, Department of Counseling, Educational Psychology, and Special Education, 2007

ABSTRACT

INSTRUCTION IN THE WWWDOT APPROACH TO IMPROVING STUDENTS' EVALUATION OF WEBSITES

By Shenglan Zhang

This dissertation includes two manuscripts resulting from a study on efforts to improve fourth and fifth grade students' website critical evaluation skills. The study investigated the impact of two 30-minute lessons and two 30-minute practice sessions with an approach to website evaluation called WWWDOT on students' critical evaluation of websites. It also explored how a focal group of students who received instruction in the WWWDOT approach and a focal group who did not receive instruction evaluated websites. This study was guided by the new literacies perspective (Leu, Kinzer, Coiro, & Cammack, 2004), theories of metacognition (Flavell, 1979, 1987; Brown, 1987; Garner, 1987), the concept of critical literacy (Lankshear, 1997; Luke, 2000), and Burbules and Callister's (2000) views on Internet access and credibility. It considered critical evaluation of information on the Internet to be an important aspect of Internet reading. Twelve fourth and fifth grade classes participated in the study. Data collected included pre- and post-questionnaires, a pre- and post-assessment requiring students to evaluate a website and explain their evaluation, a pre- and post-assessment requiring students to rank four websites from the most trustworthy to the least trustworthy and explain their ranking, coding of instruction during classroom observation, screen and voice recordings of focal students' verbal protocols, and the researcher's field notes. For the first manuscript, statistical analyses including ANCOVA and Ordinal Regression were performed in order to test the impact of instruction in the WWWDOT approach. Results suggest that instruction in the WWWDOT approach improves fourth and fifth grade students' website evaluation skills with respect to evaluating websites on multiple dimensions, as measured by the questionnaire, the single website evaluation assessment, and the website ranking assessment. However, students' overall judgment of the trustworthiness of websites and their performance on ranking a set of websites by trustworthiness were not improved.
This study developed a preliminary approach to improving students' website evaluation skills. It also helped demonstrate the complexities of website evaluation and raise further questions for future research. In the second manuscript, comparative analyses were used to examine two focal groups of students' website evaluation processes. Results show that the students who had received instruction in WWWDOT had a greater understanding of the need for website evaluation, had a deeper understanding of the function of multimedia presentations, were more strategic in evaluation, and made better overall judgments about which website to trust than the students who did not receive instruction in WWWDOT. The findings suggest that the WWWDOT approach is beneficial to students, but that some dimensions of the approach should be further stressed during instruction. The findings from the group that did not receive instruction in WWWDOT suggest that there is an urgent need for teaching students how to critically evaluate websites. This study is an important first step in describing how students evaluate websites.

Copyright by SHENGLAN ZHANG 2007

ACKNOWLEDGEMENTS

I would like to express my deepest gratitude to my academic advisor and committee chair, Dr. Nell Duke, for her guidance and support in every phase of my dissertation study. She has devoted numerous hours to reading and revising every page of my dissertation and providing insightful comments. She is my role model in writing, academia, time management, and advising. I extend my gratitude to my committee members, Dr. Beth Dobler, Dr. Mary Lundeberg, Dr. Kim Maier, and Dr. Punya Mishra, for their encouragement, for taking time from their busy schedules to attend the committee meetings, and for providing their ideas and suggestions at different stages of the project. My sincere thanks also go to the parents for allowing their children to participate in this study and to the teachers and students involved. As an international student, I benefited greatly from several mentors. I would like to extend my thanks to Dr. Jack Schwille for his encouragement and mentoring throughout my graduate study. His accessibility for long discussions and his thoughtful advice on my study, work, and life greatly smoothed the progress of my graduate study and provided enormous emotional support. I also owe a debt of thanks to Dr. David Wong for walking me through the first two years of my study. Dr. Jack Smith, you are the organizer and leader of a community that warmly welcomes international students. I have felt comfortable and enjoyed being in such a community. I thank you all. I am also appreciative of help from my friends. Fang Yu, thank you for offering your talents and patience. Tianshu Pan, Tianli Li, and Yang Lu, thank you for providing your suggestions when I was in need. Kristen Perry, Chun Lai, Annie Moses, Juliet Halladay, Katie Hilden, Alison Billman, Yonghan Park, Gaoming Zhang, Dongping Zheng, and AM, thank you for your sincere friendship, your heart-warming companionship, and your encouragement. Finally, I would like to thank my family for their encouragement. I thank my dear husband and best friend, Tonglu Li. He has been there supporting, encouraging, and believing in me, even in the darkest moments. I am grateful to my parents for their unconditional love, understanding, and faith in me. My dear son, Mengze, you have been a tremendous source of happiness and energy. I thank and hug you.
TABLE OF CONTENTS

LIST OF TABLES ................................................................................ viii
LIST OF FIGURES ................................................................................ ix
INTRODUCTION ................................................................................... 1
    Overview of the Dissertation ............................................................... 1
    References ..................................................................................... 6
MANUSCRIPT 1: INSTRUCTION IN THE WWWDOT APPROACH TO IMPROVING STUDENTS' EVALUATION OF WEBSITES: AN EXPERIMENTAL STUDY WITH 4TH AND 5TH GRADE STUDENTS ........................................ 9
    Abstract ........................................................................................ 9
    Introduction .................................................................................. 11
    Rationale and Review of the Literature .................................................. 11
    Theoretical Framework ..................................................................... 17
    WWWDOT Approach ....................................................................... 18
    Methods ....................................................................................... 25
    Results ........................................................................................ 45
    Discussion .................................................................................... 52
    Strengths and Limitations of the Study .................................................. 56
    Conclusion and Future Research .......................................................... 57
    References .................................................................................... 60
MANUSCRIPT 2: A COMPARATIVE VERBAL PROTOCOL STUDY OF FOURTH AND FIFTH GRADE STUDENTS' WEBSITE EVALUATION STRATEGIES ..................... 66
    Abstract ....................................................................................... 66
    Introduction .................................................................................. 68
    Theoretical Framework ..................................................................... 69
    Rationale and Review of Literature ...................................................... 72
    WWWDOT Approach ....................................................................... 79
    Methods ....................................................................................... 79
    Results ........................................................................................ 87
    Discussion ................................................................................... 109
    Limitations and Further Research ....................................................... 114
    References .................................................................................. 117
APPENDICES
    A. Teacher Survey ......................................................................... 138
    B. Lesson Plans for WWWDOT Implementation ..................................... 140
    C. WWWDOT Worksheet ................................................................. 149
    D. WWWDOT Observation Protocol .................................................... 151
    E. Student Questionnaire ................................................................. 153
    F. Single Website Evaluation Assessment (Form A and Form B) .................. 157
    G. Scoring Guide for Single Website Evaluation Assessment ...................... 159
    H. Website Ranking Assessment (Form A and Form B) ............................. 160
    I. Scoring Guide for Website Ranking Assessment .................................. 162

LIST OF TABLES

Table 1. Demographic Statistics of the Participating Students ......................... 122
Table 2. Information About Participating Classes ........................................ 123
Table 3. Computer and Internet Use Statistics Reported by Participating Students .... 124
Table 4. Means, Standard Deviations, and Means Adjusted by Pretest Scores for Student Questionnaire (Total Score) ............................................. 126
Table 5. Coefficients for Instruction in the Approach on Students' Performance on the 18 Items of the Questionnaire ............................................. 127
Table 6. Means, Standard Deviations, and Means Adjusted by Pretest Scores for Single Website Evaluation Assessment (Judgment Score) ................... 128
Table 7. Means, Standard Deviations, and Means Adjusted by Pretest Scores for Single Website Evaluation Assessment (Reason Score) ...................... 129
Table 8. Means, Standard Deviations, and Means Adjusted by Pretest Scores for Website Ranking Assessment (Reason Score) ................................. 130
Table 9. Demographic Information About the Participants .............................. 131
Table 10. Frequency of Strategy Use by the Experimental Group and the Control Group .............................................................................. 132

LIST OF FIGURES

Figure 1. Comparison Lines by Condition: Post Questionnaire Item Means ........... 133
Figure 2. Comparison Lines of Pre and Post Reason Scores: Single Website Assessment Scores ................................................................. 134
Figure 3. Comparison Lines of Pre and Post Reason Scores: Website Ranking Assessment Scores ................................................................. 135
Figure 4. Judgment on the Website by Zoological Association of San Diego ......... 136
Figure 5. Judgment on the Website by Zoological Association of San Diego ......... 137

INTRODUCTION

How to help students effectively use the Internet has been one of my research interests for a long time. My research plan in graduate school has been to learn about the strategies good adult Internet readers adopt in their reading, and then to examine how to teach these strategies to upper elementary students so that they can read effectively on the Internet. My first research project relating to Internet reading was on good adult Internet readers' reading strategy use. While I collected data for this project, one of the things I noted was that when they were asked to locate certain information on the Internet, all the participating good adult Internet readers applied some criteria to evaluate the credibility of a website before they delved into the text to finish the task (Zhang & Duke, in press). This phenomenon is especially prominent when compared to school students' total neglect (Kafai & Bates, 1997; Large & Beheshti, 2000) or naive understanding of the credibility of information on the Internet (Wallace, Kupperman, Krajcik, & Soloway, 2000). I felt it was more important to find ways to teach students strategies for discerning what is believable and what is most useful on the Web than anything else on my research agenda. This is how I started the dissertation project reported here.

Overview of the Dissertation

This dissertation was written in an alternative format (Duke & Beck, 1999).
It consists of an introduction and two stand-alone manuscripts ready to be submitted for publication. In the Introduction, I provide a brief description of the overall dissertation study to contextualize the two manuscripts.

Overview of the Study

This study was designed mainly to refine and test the impact of an approach to improving students' critical evaluation skills. Another purpose of this study was to explore in depth the website evaluation processes of students from two groups: one group received instruction in this approach and the other did not. This approach is called the WWWDOT approach. It was designed by Nell Duke and Shenglan Zhang to improve students' critical evaluation of websites by teaching them to attend to at least six aspects of a website when evaluating it: Who wrote it? When did they write it? Why was it written? Does this help meet my needs? Organization of the website; and To do list for the future, that is, what future activities they would do, such as reading other materials, sharing what they learned with others, asking a librarian a question, and so on. Nowadays students have easy access to the Internet, which provides an almost limitless amount of information. However, information on the Internet is unscreened, and readers need to discern what is trustworthy and what is not. A review of the literature shows students rarely evaluate the credibility of websites (Baule, 1997; Coiro & Dobler, 2007; Fitzgerald, 2000; Hirsh, 1999; Hoffman et al., 2003; Kafai & Bates, 1997; Kuiper, Volman, & Terwel, 2005; Large & Beheshti, 2000; Lorenzen, 2001; New Literacies Research Team, 2006; Ng & Gunstone, 2002; Stapleton, 2005; Wallace et al., 2000; Watson, 1998). Many researchers and practitioners have proposed different ways of teaching students to critically evaluate websites (e.g., Burke, 2000; Eagleton & Dobler, 2007; Hawes, 1998; Henry, 2007; Schrock, 1996, 1999), but to my knowledge, no comprehensive approach has been developed to help students critically evaluate websites. This study is the first in the research literature to examine an approach to improving students' website evaluation skills. Built upon the new literacies perspective (Leu, Kinzer, Coiro, & Cammack, 2004), theories of metacognition (Flavell, 1979, 1987; Brown, 1987; Garner, 1987), the concept of critical literacy (Burbules, 1997; Lankshear, 1997; Luke, 2000), and Burbules and Callister's (2000) views on Internet access and credibility, this study adopted a mixed-method design combining quantitative and qualitative analyses (Chatterji, 2005; Johnson & Onwuegbuzie, 2004) to address the following research questions:

1. What is the effect of instruction in the WWWDOT approach, if any, on 4th and 5th grade students' critical evaluation of websites?
2. How do 4th and 5th grade students who have received instruction in the WWWDOT approach and those who have not evaluate websites?
3. Are there any patterns of differences in website evaluation within and between the two groups? If so, what are they?

The two manuscripts that make up the body of this dissertation address these research questions. The first manuscript addresses question 1. It is titled Instruction in the WWWDOT Approach to Improving Students' Evaluation of Websites: An Experimental Study With 4th and 5th Grade Students. This manuscript presents results of quantitative data analyses of an experimental study to examine the effects of instruction in the WWWDOT approach.
Results show there is a statistically significant effect of instruction in the WWWDOT approach on students' critical website evaluation with respect to evaluating websites on multiple dimensions, as measured by answering the questionnaire and completing an assessment that asks students to give reasons why they should or should not trust a website. However, students' overall judgment of the trustworthiness of websites and their ranking performance were not improved. The second manuscript addresses questions 2 and 3, and to some extent, question 1. It is titled A Comparative Verbal Protocol Study of Fourth and Fifth Grade Students' Website Evaluation Strategies. The second manuscript reports the results of comparative analyses of how two subsets of students from the control group and the experimental group evaluated websites. It takes a closer look at the two groups of students' website evaluation processes using a tutoring method (Garner et al., 1983). Results show that the students who had received instruction in WWWDOT had a greater understanding of the need for website evaluation, had a deeper understanding of the function of multimedia presentations, and were more strategic in evaluation, and that they made better overall judgments about which website to trust than the students who did not receive instruction in WWWDOT. The findings suggest that the WWWDOT approach is beneficial to students, but that some dimensions of the approach should be further stressed or reinforced during instruction. The students who did not receive instruction in WWWDOT did not have a clear idea about website evaluation and were not able to strategically evaluate the trustworthiness of websites. Only four of the 12 students made correct judgments about the trustworthiness of the websites. The findings from the group that did not receive instruction in WWWDOT suggest that there is an urgent need to teach students how to critically evaluate websites and that some misunderstandings holding them back from making sound judgments should be corrected. The current study is an important first step in testing an approach to improving students' website evaluation skills and examining students' website evaluation processes in depth. More research is needed to add to the sparse empirical base on website evaluation. Future studies should investigate how good Internet readers synthesize information to make sound overall judgments about website trustworthiness. Research should examine whether instruction in the WWWDOT approach over a longer period of time is more effective at improving the soundness of students' website trustworthiness judgments, and whether it is effective in improving the evaluation skills of students at other grade levels. It is also important to explore how students' website evaluation, and Internet reading in general, develop over time and in different instructional contexts.

References

Baule, S. (1997). Easy to find but not necessarily true. Book Reports, 16, 26.
Brown, A. L. (1987). Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 65-116). Hillsdale, NJ: Lawrence Erlbaum Associates.
Burbules, N. C. (1997). Rhetorics of the Web: Hyperreading and critical literacy. In I. Snyder (Ed.), Page to screen: Taking literacy into the electronic era (pp. 102-122). New York: Routledge.
Burbules, N. C., & Callister, T. A. (2000). Watch it: The risks and promises of information technologies for education. Boulder, CO: Westview Press.
Chatterji, M. (2005). Evidence on "what works": An argument for extended-term mixed method (ETMM) evaluation designs. Educational Researcher, 34(5), 14-24.
Coiro, J., & Dobler, E. (2007). Exploring the online reading comprehension strategies used by sixth-grade skilled readers to search for and locate information on the Internet. Reading Research Quarterly, 42(2), 214-257.
Duke, N. K., & Beck, S. W. (1999). Education should consider alternative formats for the dissertation. Educational Researcher, 28(3), 31-36.
Fitzgerald, M. A. (2000). Criticizing media: The cognitive process of information evaluation. Educational Media and Technology Yearbook, 25, 130-140.
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.
Flavell, J. H. (1987). Speculation about the nature and development of metacognition. In F. Weinert & R. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 21-29). Hillsdale, NJ: Lawrence Erlbaum.
Garner, R. (1987). Metacognition and reading comprehension. Norwood, NJ: Ablex Publishing Corporation.
Garner, R., Wagoner, S., & Smith, T. (1983). Externalizing question-answering strategies of good and poor comprehenders. Reading Research Quarterly, 18(4), 439-447.
Hirsh, S. G. (1999). Children's relevance criteria and information seeking on electronic resources. Journal of the American Society for Information Science, 50(14), 1265-1283.
Hoffman, J. L., Wu, H.-K., Krajcik, J. S., & Soloway, E. (2003). The nature of middle school learners' science content understandings with the use of on-line resources. Journal of Research in Science Teaching, 40(3), 323-346.
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14-26.
Kafai, Y., & Bates, M. J. (1997). Internet Web-searching instruction in the elementary classroom: Building a foundation for information literacy. School Library Media Quarterly, 25(2), 103-111.
Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K-12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75, 285-328.
Lankshear, C. (1997). Changing literacies. Milton Keynes, England: Open University Press.
Large, A., & Beheshti, J. (2000). The Web as a classroom resource: Reactions from the users. Journal of the American Society for Information Science, 51(12), 1069-1080.
Leu, D. J., Jr., Kinzer, C. K., Coiro, J., & Cammack, D. (2004). Toward a theory of new literacies emerging from the Internet and other information and communication technologies. In R. B. Ruddell & N. Unrau (Eds.), Theoretical models and processes of reading (5th ed., pp. 1568-1611). Newark, DE: International Reading Association.
Lorenzen, M. (2001). The land of confusion? High school students and their use of the World Wide Web for research. Research Strategies, 18, 151-163.
Luke, A. (2000). Critical literacy in Australia: A matter of context and standpoint. Journal of Adolescent & Adult Literacy, 43(5), 448-462.
Ng, W., & Gunstone, R. (2002). Students' perceptions of the effectiveness of the World Wide Web as a research and teaching tool in science learning. Research in Science Education, 32, 489-510.
Stapleton, P. (2005). Evaluating web-sources: Internet literacy and L2 academic writing. ELT Journal, 59(2), 135-143.
Wallace, R. M., Kupperman, J., Krajcik, J., & Soloway, E. (2000). Science on the Web: Students on-line in a sixth-grade classroom. Journal of the Learning Sciences, 9(1), 75-104.
Watson, J. S. (1998). "If you don't have it, you can't find it": A close look at students' perceptions of using technology. Journal of the American Society for Information Science, 49(11), 1024-1036.
Zhang, S., & Duke, N. K. (in press). Strategies of Internet reading with different reading purposes: A descriptive study of twenty good Internet readers. To appear in Journal of Literacy Research.

MANUSCRIPT ONE: INSTRUCTION IN THE WWWDOT APPROACH TO IMPROVING STUDENTS' EVALUATION OF WEBSITES: AN EXPERIMENTAL STUDY WITH 4TH AND 5TH GRADE STUDENTS

Abstract

This study tested an approach, called WWWDOT, to improving students' website critical evaluation skills. The WWWDOT approach was designed to improve students' critical evaluation of websites by teaching them to attend to at least six aspects of a website when evaluating it: Who wrote it? When did they write it? Why was it written? Does this help meet my needs? Organization of the website; and To do list for the future, that is, what future activities they would do, such as reading other materials, sharing what they learned with others, asking a librarian a question, and so on. A matched-pair randomized design was adopted. Twelve 4th and 5th grade classes participated in this study. Data were collected through three assessments administered before and after the intervention in both the experimental group and the control group: a student questionnaire, an assessment requiring students to evaluate a website and explain their evaluation, and another assessment requiring students to rank four websites from the most trustworthy to the least trustworthy and explain their ranking. ANCOVA and Ordinal Regression were run to examine the impact of instruction in the approach on students' evaluation skills. Results suggest that instruction in the WWWDOT approach improves fourth and fifth grade students' website evaluation skills with respect to evaluating websites on multiple dimensions, as measured by answering the questionnaire and giving reasons why they should or should not trust a website. However, students' overall judgment of the trustworthiness of websites and their ranking performance were not improved. This study developed a preliminary approach to improving students' website evaluation skills. It also helped raise further questions for future research.

Introduction

The Internet has become a part of many people's daily lives. Most public schools have access to the Internet (NCES, 2005), and students are being asked to use the Internet to conduct research for their school projects (Eagleton, Guinee, & Langlais, 2003). The Internet provides a great amount of information. However, unlike printed text that is published, most of the information on the Internet is unfiltered. This makes it even more important for students to know how to evaluate the quality of the information they encounter. Research shows that most students do not take a critical view when they read on the Internet (Baule, 1997; Hirsh, 1999; Hoffman et al., 2003; Kafai & Bates, 1997; Kuiper, Volman, & Terwel, 2005; Large & Beheshti, 2000; Lorenzen, 2001; New Literacies Research Team, 2006; Ng & Gunstone, 2002; Wallace et al., 2000; Watson, 1998). There is a need to teach students how to critically evaluate websites. The purpose of this study is to test an approach to improving students' website evaluation skills.
This approach was designed to help students learn to evaluate websites on at least six dimensions: authorship, currency, purpose of the website, organization of the website, whether the website meets the student's needs, and what to do after reading it (e.g., additional material to read).

Rationale and Review of the Literature

In this section I present the relevant literature that provides the rationale and foundation for this study. The section is divided into five sub-sections. The first two sub-sections argue that the Internet is widely used by students and that it is important to evaluate the trustworthiness of websites. This discussion leads into the third sub-section, which reviews students' neglect of trustworthiness evaluation of websites and examines the reasons for it. Then the importance of addressing readers' needs in website evaluation is discussed. Finally, various approaches to improving students' website evaluation skills are reviewed.

The Internet Is Widely Used in Schools

More and more students have access to the World Wide Web (WWW), and the WWW is increasingly becoming a rich resource for students' learning. U.S. statistics on Internet use in education show that nearly 100 percent of public schools in the United States had access to the Internet in fall 2003 (National Center for Education Statistics [NCES], 2005). The types of Internet connections used by public schools and the speed at which computers are connected to the Internet have improved. According to NCES (2005), 95 percent of public schools with Internet access used broadband connections to access the Internet in 2003. With these improved connections, students have easy and convenient access to the Web. The Internet has been used as an important resource in students' research and as "a natural place to conduct authentic inquiry" (Guinee, Eagleton, & Hall, 2003, p. 364). Teachers and students have started to use the Internet in addition to library catalogs to conduct research for school projects (Eagleton, Guinee, & Langlais, 2003). Indeed, the survey data collected in the present study also indicated that most students are using the Internet as an information source. Ninety-five percent of students in this study reported using the Internet, and 61% of them used the Internet to locate information (see Table 3). Teachers in this study reported asking their students to read on the Internet for an average of at least 52 minutes each week.

Critically Evaluating Websites Is Important

The Internet has provided people with easy access to all kinds of information. However, unlike printed texts, which have gone through various processes of screening or sanctioning by editors, publishers, librarians, and so on, information on the Web may be unscreened or unsanctioned. The Internet allows anyone to publish anything; thus people without appropriate credentials, or with specific biases and agendas, provide information on the Internet. Bruning, Schraw, and Ronning (1995) have pointed out the importance of developing metacognitive processes for judging, organizing, and acquiring new information. As students move from using printed texts to more uncontrolled and unfiltered electronic resources like the Internet as a source of information, there is an even greater need for them to be able to evaluate the quality of the information presented.
Teaching students to judge and critically evaluate information has become more important in the Internet age than ever before (Brouwer, 1997; Fitzgerald, 1997; Leu, 2002).

Students Tend Not to Critically Evaluate Websites

Much research has been done on students' reading behavior on the Internet. Results consistently show that students rarely evaluate the reliability and authority of the information on the Web (Baule, 1997; Hirsh, 1999; Hoffman et al., 2003; Kafai & Bates, 1997; Kuiper, Volman, & Terwel, 2005; Large & Beheshti, 2000; Lorenzen, 2001; New Literacies Research Team, 2006; Ng & Gunstone, 2002; Slone, 2002; Wallace et al., 2000; Watson, 1998). Three studies have been conducted with elementary students. Hirsh (1999) did an exploratory study with ten 5th graders to investigate children's relevance criteria and information seeking with electronic resources. She found that few students mentioned the authority of the information as a basis for making their relevance decisions. Wallace et al. (2000) studied the strategies that eight 6th grade students used for seeking, evaluating, and using information on the Web. This study also found that most students did not critically read the information they found on the Web, but accepted it at face value. Kafai and Bates (1997) included 1st through 6th grade students in their research and found that elementary grade students assume information found on the Web is truthful and correct. Moreover, research suggests that information found on the Web is often regarded by students as having higher value and authority than its print counterpart (Schacter, Chung, & Dorr, 1998; Small & Ferreira, 1994). A survey of Internet usage and online reading (New Literacies Research Team, 2006) shows that only 5% of students report looking at who created information on the Internet. Only 4% of students report checking the accuracy of information found on the Web at school; only 2% of students report doing so outside of school. Henry (2007) evaluated middle school students' as well as middle school teachers' online reading comprehension achievement and compared performance between students and teachers from economically privileged districts and those in economically disadvantaged districts. She found that critical reading tasks involving critical evaluation of the accuracy of an image on a website and critical evaluation of information for bias were especially challenging for both students and teachers in both economically privileged and disadvantaged districts. For example, the correct response rate was less than 15% for students from both types of districts when they responded to a survey item that "measured critical evaluation of the reliability of an information source (a phish message about a bank)" (p. 128). One of the reasons that students lack critical reading ability on the Web is that they may not be informed of appropriate criteria for their assessment (Lorenzen, 2001). One criterion that students have been observed to use is how attractively the information was presented (Agosto, 2002). In addition, some students have been observed to equate quantity of information with quality (Lorenzen, 2001). They looked at how relevant the content was to their subject (Hirsh, 1999), but that does not speak to the information's credibility. Not surprisingly, given the lack of appropriate criteria for evaluating the quality of information on the Web, Jones (2002) found that 9th and 10th grade students felt more comfortable using sites selected by teachers.
On the one hand, teachers are probably better equipped than students to select quality websites for students to use. On the other hand, by giving students a short list of sites to consult, teachers deny students an opportunity to explore the Web and may be less likely to teach them how to read critically on the Internet. There is a Chinese saying, "Giving someone a fish is not as good as teaching him/her how to fish." Learning to critically distinguish good information from poor is a valuable part of developing students' information literacy. With critical reading ability, students may make much better use of the Internet. Under these circumstances, the task of finding a way to teach students to become more critical readers is urgent.

Critically Evaluating Websites Also Includes Matching Information Resources With Needs

In addition to teaching students to evaluate the credibility of websites, it is also crucial to teach students to evaluate the relevancy of the information on a website, that is, whether the information on the Internet meets their needs (Choo, Detlor, & Turnbull, 2000; Henry, 2007). As noted above, readers are sometimes distracted by visually attractive websites (Agosto, 2002); this sometimes results in forgetting their original purpose or goal in reading them. Relevancy evaluation also requires readers to adjust their information resources to better suit their own reading level. Henry (2007) found that middle school students do not evaluate the reading level of websites and sometimes end up on websites that contain reading materials well above their own reading ability. Teaching students to evaluate information resources on the Internet should therefore also include teaching them to be aware of their needs before searching and throughout the searching and reading process, and most importantly, to be able to evaluate whether the information resources meet their needs.

Research-Tested Approaches Are Needed to Improve Students' Critical Evaluation of Websites

Teachers, technology specialists in schools, and many others have been calling for training students to critically read the information on the Internet and have proposed ways to do so (Burke, 2000; Eagleton & Dobler, 2007; Hawes, 1998; Henry, 2007; Schrock, 1996, 1999), but a systematic approach specifically for elementary students has not been developed and tested. Hawes (1998) suggested asking students a list of questions that could help them analyze the information they found on the Internet. These questions included: Who are the authors? Where do they work? What organization, business, or school do they represent? Who is the intended audience? How could this influence the author's point of view on the issue? What is the author's point of view on the issue? What proofs are offered for that point of view? What is the purpose of the author? Schrock (1999) pointed out that teachers should teach students to evaluate websites from three perspectives: authority of the author; content, bias, and the authenticity of information; and presentation. All of these suggestions for how to teach students to critically read on the Internet are reasonable and worth trying out, but none has been tested for its effects.

Theoretical Framework

This study is framed by the new literacies perspective (Leu, 2000, 2002; Leu, Kinzer, Coiro, & Cammack, 2004) and built upon the concept of critical literacy (Burbules, 1997; Lankshear, 1997; Luke, 2000).
According to the new literacies perspective, teaching students to read multiple text formats in multimodal reading environments should be included in classroom instruction (Lankshear & Knobel, 2003; Leu, Kinzer, Coiro, & Cammack, 2004). The new literacies of the Internet and other information and communication technologies (ICTs) include the skills, strategies, and dispositions necessary to identify important questions, to locate information, to critically evaluate that information, to synthesize information, and to communicate the answers to others. The focus of this study is on the critical evaluation component of new literacies. Critical literacy includes, among other things, critical thinking about the meaning of information in general, including information that comes from media, printed text, or the Internet (Burbules, 1997; Lankshear, 1997; Lankshear & Knobel, 2003; Luke, 2000). In this view, an important part of reading is critically assessing the information encountered in text. Readers should assess the appropriateness and validity of the information they encounter. This is especially important in Internet reading.

WWWDOT Approach

Evaluating website credibility is a highly complicated process, during which different factors carry different weight. For example, an outdated website written by a credible source may be more trustworthy than an up-to-date website written by a person without sufficient credentials. Credibility is not a black-and-white issue. Rather, it is a continuum, with the most trustworthy on one end and the least trustworthy on the other. On the Internet, there are few totally trustworthy or totally untrustworthy websites. Most of the time, websites fall somewhere along this continuum. The WWWDOT approach is an effort to make students aware of a few dimensions on which they can collect information to help them evaluate websites. In this section, I first introduce the WWWDOT approach briefly. Then I give details about each of its elements.

Detailed Description of the WWWDOT Approach

Many scholars, teachers, and educational specialists have suggested evaluating website trustworthiness on various dimensions (Hawes, 1998; Fitzgerald, 1997; Schrock, 1998; Stapleton, 2005; Street, 2005). The National Educational Technology Standards (NETS, 2007) and the American Library Association (ALA, 2000) have listed what students need to know with respect to electronic information source evaluation. Nell Duke and Shenglan Zhang designed a tool, called WWWDOT, consistent with many of these recommendations and standards, that captures several important dimensions of website evaluation. This tool was designed to support students' critical evaluation of websites by encouraging them to think about at least six things when considering using a website for information: Who wrote it, Why was it written, When was it written, Does it help meet my needs, Organization of the site, and To do list for the future. The WWW part is about the authorship of a website, the purpose for which it was written/created, and the timeliness of the information. The DOT part is about reading the content of a website and its presentation, and deciding what to do next. The use of an acronym was thought to make the tool easier for elementary students and teachers to remember. Acronyms have been used successfully in teaching other routines in which we want learners to engage (e.g., Graham & Harris, 2005). In the following sections, I explain and justify each element of WWWDOT.
Who Wrote This and What Credentials Do They Have?

Identifying authorship and the author's qualifications is critical for any type of reading, but it seems even more pertinent when reading in the environment of the Internet, where there are often no filtering or sanctioning bodies for publishing (Burbules & Callister, 2000; Burke, 2000; Eagleton & Dobler, 2007; Hawes, 1998; Schrock, 1999; ALA, 2000). It is also important to examine what perspective(s) the author holds and by what funding source he/she is supported (Burbules & Callister, 2000; Burke, 2000; Eagleton & Dobler, 2007; Hawes, 1998; Schrock, 1999). A website can be written by a person or an organization. The author's name may or may not be present on the website. If the author's name is present, readers should ask what credentials the author(s) has or have. The presence of the author's affiliation, occupation, title, and contact information can make the credential evaluation easier. If the author's name is not given, it is important to find out who is responsible for the website. There are occasions when no author or organization can be identified. In this case, the website content itself can signal whether the author or organization is qualified to write it. For example, self-contradictions, such as opposing facts and statistical inconsistencies, as well as spelling and grammatical errors on a website, usually indicate an unqualified author, or at least that the author was not serious about providing the information.

Why Did They Write It?

Regardless of who the author of a website is, it is important to judge whether he or she, or an organization, provides thorough and unbiased information (Burbules & Callister, 2000; Hawes, 1998; Schrock, 1999). Generally speaking, thoroughness and lack of bias depend to a large degree on the purposes of the writing. In order to avoid biased and distorted information, it is necessary to ask about the purposes for which the information was created. There are different purposes, such as to entertain, to share, to support, to inform, to educate, to sell, and to persuade (Burke, 2000). One topic can be written about differently for different purposes. Take the topic "Introduction to Michigan" as an example. If the purpose of writing is to educate, the piece can include both the advantages and disadvantages of Michigan. However, if the purpose is to advertise for the tourism business, only advantages are likely to be stressed.

When Was It Written and Updated?

Information and works on the Internet can be put into three main categories in terms of timeliness. The first category includes works that are timeless, such as classic literature. The second category includes works and information that have a limited life because of rapid advances in a field or discipline, such as psychology, biology, and so on. The third category includes information and works that become outdated very quickly, such as news and technology (Harris, 2007). For topics in the second category, there is no set interval of time within which the information remains timely; however, more recent work is generally preferable. For topics in the third category, the most current information is wanted. Therefore, when time-limited information is being sought, it is important to note when the information on the Web was written or updated (ALA, 2000; Eagleton & Dobler, 2007). If a date is given for the last time the site was updated, it is usually at the bottom of a Web page.
If this information is not provided, the reader should be cautious. In addition, the timeliness of a website reflects whether the author is still maintaining an interest in the page or has abandoned it. This can also be one of the criteria for assessing the usefulness of a website.

Does This Help Meet My Needs (And How)?

Readers need to evaluate a website to see whether and how it meets their needs (Choo, Detlor, & Turnbull, 2000; Henry, 2007). A question that readers can ask as they get an overview of the website, and before they dig deeply into specific parts of the site, is: Does it give the type of information that I need? For example, if I were seeking information on an issue of history, I would not give first priority to a student's essay, a novelist's literary work, or the advertisement on a tourism website, but would prefer to first read an article written by a historian. If I were trying to find the latest information about a recent natural disaster, I would certainly not turn to a site on which the information had not been updated recently. Another very important step in evaluating whether a website meets one's needs is to judge the reading level of the materials (Henry, 2007). A question that elementary students should ask as part of evaluating whether a website meets their needs is: Is this too difficult for me? Or, can I even read it? Many websites are beyond the reading level of most elementary school students (Kamil & Lane, 1998). Even if a website is trustworthy and provides information that a student needs, it may be too challenging for the student to use effectively. Students should avoid reading websites that are written at a readability level beyond their understanding, and should also be taught to find sites that are designed for their reading level.
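Neither the study nor the sources cited above prescribe a particular readability measure, but a standard formula such as the Flesch-Kincaid grade level illustrates how "Is this too difficult for me?" can be made concrete. The following is a minimal, hypothetical sketch (the study itself did not use this code, and the helper names are my own); the syllable counter is a rough vowel-group heuristic, so results are only approximate.

    import re

    def count_syllables(word):
        # Rough heuristic: count groups of consecutive vowels (including y).
        # Real syllabification is more complex; this suffices for a rough estimate.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        # Flesch-Kincaid grade level:
        # 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    sample = ("The Underground Railroad was a network of secret routes. "
              "Many people risked their lives to help.")
    # Prints an approximate grade-level estimate for the sample passage.
    print(round(flesch_kincaid_grade(sample), 1))

A tool of this kind could, in principle, flag pages written well above a fourth or fifth grader's level before the student invests time in reading them.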
Organization of Website

Having an idea of how a website is organized is also crucial (ALA, 2000; Schrock, 1999). First of all, it helps readers navigate the site and find useful information. Sometimes a website is poorly laid out; for example, after clicking on a few links, it is hard to tell where you are and where you can or should go next. If it takes a long time to read this kind of website, and an equivalent alternative exists, it is better to stop exploring it. The structure of a website plays an important role in helping readers navigate through the site and read the information on it (Calisir & Gurel, 2003; Dee-Lucas, 1996; McDonald & Stevenson, 1996, 1998; Nimwegen, Pouw, & Oostendorp, 1999; Rouet & Levonen, 1998; Waniek et al., 2003). Becoming familiar with the organization of a website helps readers understand the content (Coiro & Dobler, 2007). Coiro and Dobler (2007) explored how skilled readers located information on the Internet and found that the skilled readers "drew from their prior knowledge of informational website structures to guide their reading on the Internet" (p. 230). Furthermore, given that graphs and photos can enhance or supplement the content (Baskin, 1997; Card, Mackinlay, & Shneiderman, 1999; Larkin & Simon, 1987), by noticing where the graphs and photos are, readers can intentionally seek help from them to enhance their understanding of the other information presented on the website (Zhang, 2006).

To Do List for the Future

A key value of the Internet is that there is often so much information readily available on any given topic, but this also presents a challenge for readers. Readers can easily get disoriented, lose track of sites to which they could return or other resources they could use, or forget other activities that could enhance their learning of the topic (McDonald & Stevenson, 1996, 1998). Developing a plan for future activities while reading websites may help readers manage their learning. For example, some websites provide links to other websites on the topic, or references to books or other print materials that might supplement what is on the website. The plan, or to-do list, for the future, if developed while reading a website, can include additional texts to read. If readers find some interesting websites unrelated to what they are looking for, they can note those as well, indicating that they are off-topic but of interest. The plan or to-do list can also include activities that do not involve further reading, such as asking a librarian a question or sharing what they learn about the topic with someone. This can help them understand a topic from other perspectives or in other respects. For the first three dimensions, that is, the WWW, a reader may not always be able to identify all three things. However, it is important to look for and pay attention to them. The last three dimensions, that is, the DOT, should be answerable for any website.

Research Question

While the WWWDOT approach includes many elements suggested by respected researchers and practitioners, it had not yet been tested to see whether teaching it results in improvement of students' website evaluation skills. This study was designed to examine the impact of instruction in this approach by addressing the following research question: What is the effect of instruction in the WWWDOT approach, if any, on 4th and 5th grade students' critical evaluation of websites?

Methods

Design

This study assesses the impact of teaching 4th and 5th grade students the WWWDOT approach on their website critical evaluation skills using an experimental design. It involves two groups. In the experimental group, students learned the WWWDOT approach in two 30-minute lessons and spent another two 30-minute lessons practicing what they learned with guidance from their teacher. In the control group, students did not receive any instruction in the WWWDOT approach during the data collection phase of the study. They did what they normally do during the equivalent time of the day. (Details about what the control group did can be found later in this section.) Several measures of and related to website evaluation were administered to both groups before and after the intervention. Comparing the results between the two groups provides insight into the impact, if any, of teaching the WWWDOT approach on 4th and 5th grade students' website critical evaluation skills.
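As the abstract notes, group differences on these measures were examined with ANCOVA, adjusting posttest scores for pretest scores (see the "Means Adjusted by Pretest Scores" tables). The sketch below shows that style of analysis in Python with statsmodels; the scores are made up for illustration, this was not necessarily the software used in the study, and the model ignores the clustering of students within classes that a fuller analysis would address.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical pre/post scores for illustration only -- not the study's data.
    df = pd.DataFrame({
        "pretest":   [10, 12, 9, 14, 11, 13, 8, 15, 10, 12],
        "posttest":  [15, 16, 11, 19, 13, 15, 9, 17, 12, 13],
        "condition": ["experimental"] * 5 + ["control"] * 5,
    })

    # ANCOVA expressed as a linear model: posttest regressed on condition,
    # with pretest as a covariate. The coefficient on the condition term
    # estimates the treatment effect adjusted for pretest differences.
    model = smf.ols("posttest ~ C(condition) + pretest", data=df).fit()
    print(model.summary())

An analogous ordinal regression (e.g., an ordered-outcome model) would be used for the ranking assessment, whose scores are ordinal rather than continuous.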
Participants

Students

Two hundred forty-two fourth and fifth grade students in 12 classes from three schools in the mid-Michigan area participated in this study. Demographic statistics for the participating students are presented in Table 1. This information was provided in part by a student survey and in part by a teacher survey, both described later in this section. Fourth and fifth grade students were chosen for this research because there are substantial methodological challenges to conducting such a study with primary grade students, given the limitations of their reading level in relation to available websites. In addition, fourth and fifth graders are more likely to be expected to use the Internet as a source of information than students in earlier grades (Kafai & Bates, 1997). Therefore they are in greater need of a way to evaluate websites. The twelve participating classes came from three schools in three different school districts. Eight classes were from a suburban school, two were from a rural school, and two were from an urban school. Table 2 presents information about the participating classes and the districts. The twelve classes were taught by six teachers. Two were computer teachers; one of them taught four of the classes and the other taught two. Two were regular classroom teachers, each with one class. The other two were classroom teachers who switched students by subject matter; one taught Language Arts and Social Studies and the other taught Mathematics and Science, and each of them taught two of the classes. A matched-pairs randomization approach at the class level was adopted. The matching was done on the basis of demographic characteristics of the student population. With a relatively small sample, and given that the classes came from schools in three different school districts (rural, urban, and suburban) with varying demographic characteristics, it seemed important to ensure that equal numbers of experimental and control classes were in any given type of district. Thus, within a school, two classes were matched and one was randomly assigned to the experimental condition (with the other then in the control condition). In addition, within the same school, when more than one class was taught by the same teacher, those classes were designated as a matched pair to hold the impact of the teacher constant across experimental and control conditions. The participating teachers in these situations were taught how to randomly assign half of their classes to the control group and the other half to the experimental group. For example, the computer teacher with four classes randomly assigned two to the experimental group and two to the control group. In the case of the two classes that were taught by two different teachers in the same school, the classes were randomly assigned to different conditions. In total, six classes (three fourth grade classes and three fifth grade classes) were randomly assigned to the experimental condition and six classes (four fourth grade classes and two fifth grade classes) served as controls. One might be concerned that there was one more fourth grade class, and one fewer fifth grade class, in the control group than in the experimental group. However, data analyses suggest no difference between 4th and 5th grades on the three outcome measures used in the study (see the later description of measures). Information about the classes is presented in Table 2.
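To make the matched-pair assignment concrete, here is a minimal sketch of the coin-flip logic described above. The class labels are placeholders rather than the study's actual classes, and the pairing itself (by district and, where applicable, by teacher) is assumed to have been done beforehand.

    import random

    # Placeholder matched pairs; in the study, pairs shared a school and,
    # where one teacher taught multiple classes, a teacher.
    matched_pairs = [
        ("suburban_class_1", "suburban_class_2"),
        ("suburban_class_3", "suburban_class_4"),
        ("suburban_class_5", "suburban_class_6"),
        ("suburban_class_7", "suburban_class_8"),
        ("rural_class_1", "rural_class_2"),
        ("urban_class_1", "urban_class_2"),
    ]

    def assign_conditions(pairs, seed=None):
        # One coin flip per pair: each pair contributes exactly one
        # experimental class and one control class, so conditions stay
        # balanced within each district (and within each teacher).
        rng = random.Random(seed)
        assignment = {}
        for a, b in pairs:
            if rng.random() < 0.5:
                a, b = b, a
            assignment[a] = "experimental"
            assignment[b] = "control"
        return assignment

    for class_id, condition in sorted(assign_conditions(matched_pairs).items()):
        print(class_id, condition)

Flipping a coin within each pair, rather than across all twelve classes at once, is what guarantees the balance within district type that the design calls for.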
The survey asked for information about the students’ gender, age, and their use of computer and the lntemet, including the first time he/she used a computer and the lntemet, who taught him/her to use a computer and the Internet, whether there was a computer and lntemet connection at home, whether he/she used the lntemet after school or in the weekends, and if so, where did they use the computer, and their purposes for using computers and the lntemet. The use of computers and the lntemet statistics of the participating students are presented in Table 3. Teachers Teachers were also asked to complete a survey (Appendix A), which asked them to provide each child's data on the number of absences during the time when the intervention was implemented for the experimental groups and during the equivalent time for the control groups, whether the student was an ESL student, and whether the child had special education status. In addition to providing information about their students, the survey also asked the teachers to provide information about their education and teaching background and how they asked their students to use computers or the lntemet. The survey shows that the six participating teachers had 6.8 years of teaching experience on average at the grade levels they were teaching and 10.8 years of K-12 teaching experience on average. Five of them had a master’s degree and one had a bachelor’s 28 degree in education. Two of the six teachers had a degree in technology education. Based on the survey, the teachers had their students on the lntemet for 52 minutes each week on average. Only one of the teachers reported teaching students how to read on the lntemet and she reported spending 1.4 minutes on average teaching lntemet reading each week, specifically, as the teacher described it on her questionnaire, teaching students “how to scan to find specific information versus reading for in-depth content. Paired-randomization allows that the teachers for the two condition groups had the same education level and the students in the two groups had the same experience in using the lntemet in classroom. Treatment and Control Procedures The Experimental Group Before the project started, the researcher developed detailed lesson plans on teaching the WWWDOT approach to students. These lesson plans were read through and piloted by four experienced 4th grade teachers. The teaching backgrounds of the four teachers were similar with the six teachers in the study. For example, the teachers in the study had 10.8 years of K-12 teaching experience and 6.8 years of teaching experience at the grade level they were teaching on average; the pilot teachers had 11.4 years and 5.5 years of experiences respectively. Five out of six teachers in the study had Master’s degree; three of the four pilot teachers had at least Master’s Degree in Education. Teachers in the study had their students on the lntemet for 52 minutes each week; the pilot teachers had their students on the lntemet for 50 minutes each week on average. The 29 study included both computer teachers and regular classroom teachers in this study; the pilot study included both a computer teacher and regular classroom teachers. Based on pilot work, changes were made to ensure that the lesson plans were appropriate for upper elementary classes. Teachers who were assigned to teach the WWWDOT approach in the experimental classes participated in a 2-hour training workshop before the intervention. 
During the workshop, they learned about the rationale for the study, the importance of teaching students to evaluate websites, the WWWDOT approach, and how to teach students this approach. The researcher and the teachers went through the lesson plans together and reviewed how to use the websites featured in the lesson plans to teach the approach.

Instruction in the WWWDOT approach was conducted during a total of four 30-minute sessions in each experimental class. During the first two sessions, the approach was introduced. Students spent the last two sessions practicing evaluating three researcher-selected websites with the WWWDOT approach learned in the first two sessions. The students were told by their teacher to complete a WWWDOT worksheet after reading each website. After the students completed the worksheets, the teacher led a discussion or debate on which of the three websites was most trustworthy. For the lesson plans, see Appendix B. The worksheet (see Appendix C) was designed by the researcher to help students use the approach in their website reading.

The example websites selected for the first two sessions were on the topic of immigration, a subtopic of citizenship. The three websites that students used in their practice were on the topic of the Underground Railroad. Immigration and the Underground Railroad are two main topics that teachers in Michigan normally cover in their fourth and fifth grade curricula. The assumption was that topics suitable for the grade levels in the study would help students learn both the new approach and the new topics. Since the WWWDOT approach was designed to give students a tool for evaluating websites in real life, an effort was made to ensure the authenticity of the texts and assessments in the study. All the websites used in the intervention (and the assessments) were authentic rather than researcher-written websites, though admittedly it was difficult to find suitable websites at the students' grade levels, making the websites less authentic in that sense. As noted above, their topics fit within topics commonly addressed in fourth and fifth grade curricula.

Teachers chose when to conduct the four WWWDOT sessions. Because teachers in the study had different schedules for their subjects, the intervention time frame varied. Three completed the intervention during a two-week period; that is, they held two 30-minute WWWDOT sessions per week and finished in two consecutive weeks. Two finished the intervention within one month, holding one 30-minute session per week. In all classes the intervention was conducted between the end of February and the end of March of the fourth or fifth grade year.

The Control Group

The control group did what they normally do during the time the experimental group was having WWWDOT sessions. Three control classes had their regular computer class activities while their matched-pair experimental classes received instruction in the WWWDOT approach. During the two sessions when the researcher observed, students in these three control classes were working on class projects, which involved editing photos and writing reports on the computer most of the time and searching for images on the Internet for a short period. Another three control classes received content-area instruction as they normally would during the time their matched-pair classes received instruction in the WWWDOT approach in their subject-area instruction sessions.
One of the three classes had their science class, one had their social studies class, and another had their language arts class. The content that the teachers covered and the activities in these three classes during the two observed sessions did not directly involve use of the Internet. The teachers were asked to teach the control classes as they had originally planned. That is, they were asked not to add any content or practice on Internet use or website evaluation if they had not planned to teach it prior to involvement in the study, and not to purposefully avoid teaching anything related to Internet use or website evaluation if they had planned it in their curriculum prior to involvement in the study.

Monitoring Fidelity

Monitoring fidelity, that is, determining whether and to what extent the experimental group students did indeed receive instruction in the WWWDOT approach, and the control group did not, during the intervention period, is essential to addressing the research question. The researcher observed once during the WWWDOT lessons and once during the WWWDOT practice sessions in each experimental classroom, and twice during the equivalent times¹ in the control classrooms. The researcher observed for 30 minutes each time, coding any class activity during the 30 minutes that involved using the Internet. The resulting variables provide information about whether the students were involved in using the Internet and learning the WWWDOT approach. The observation protocol is presented in Appendix D.

¹ For example, if the counterpart class learned the WWWDOT approach during their computer class when I observed, I observed the matched-pair control class during their computer class.

The results of fidelity monitoring show that the experimental group had substantially more experience using the Internet and learning the WWWDOT approach than the control group. In the experimental classrooms, all teachers showed evidence of addressing at least one aspect of each of the six dimensions: Who wrote this and what credentials do they have? Why did they write it? When was it written and updated? Does this help meet my needs? Organization of the website, and To-do list for the future. The total number of issues (see Appendix D for details) about the six dimensions addressed by the experimental teachers during the two observations of each experimental classroom ranged from 11 to 21. In contrast, in the control classrooms, only two teachers addressed any of the issues listed on the protocol at any time; these two teachers taught their students how to identify their needs while searching on the Internet. The number of issues addressed in each class was calculated, and a t-test was run to compare the means. The result showed that the experimental group provided more instruction in website evaluation than the control group at a level of statistical significance (t = 6.168, df = 12.462, p < .001). Furthermore, during the instruction sessions, the researcher observed that teachers in the experimental group taught all the WWWDOT elements planned in the lesson plans. During the practice sessions, the researcher observed that students in the experimental group completed the WWWDOT sheet and that the teachers led a debate on the trustworthiness of the three websites. We can therefore be reasonably sure that the study does provide a test of the impact of teaching students the WWWDOT approach.
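The fractional degrees of freedom reported above (df = 12.462) suggest a t-test that does not assume equal variances (Welch's test). A minimal sketch of such a comparison, with hypothetical per-class issue counts standing in for the observational coding data:

```python
from scipy import stats

# Hypothetical per-class counts of website-evaluation issues addressed,
# consistent with the ranges reported above but not the actual data.
experimental = [11, 14, 17, 21, 13, 19]
control = [0, 0, 2, 0, 1, 0]

# Welch's t-test: variances are not assumed equal, hence fractional df.
result = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
```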
Assessments

No previous assessments measuring students' website evaluation skills were found, so the researcher designed all assessments used to measure students' concepts, knowledge, and skills related to website evaluation: how students evaluate websites, how they distinguish more trustworthy websites from less trustworthy websites, and on what basis they make that distinction. The assessments included a questionnaire, a single website evaluation assessment, and a website ranking assessment, designed to evaluate students' website evaluation ability in different respects. All of the assessments were piloted multiple times, for content, wording, and duration, with students in other schools in which no class participated in the project. Each assessment is discussed below in turn.

Questionnaire

Description. A questionnaire was designed mainly to measure students' general website evaluation ability. It includes three pre-determined factors: (a) students' awareness of website credibility issues; (b) students' website evaluation skills, including the six aspects of WWWDOT; and (c) students' basic skills in using a browser and seeking information on the Internet. It consists of 18 five-point, Likert-scale items (see Appendix E). In designing the questionnaire, the researcher used both positive and negative statements to avoid giving hints that choosing "strongly agree" or "strongly disagree" indicates being good at website evaluation. Internal consistency was calculated; the Cronbach alpha was 0.728.

Administration. Students in the experimental group were asked to complete the questionnaire twice, once before the intervention and once after. Students in the control group did the same, with approximately the same amount of time passing between pre- and post-administration as in their matched-pair class. Each time before they filled out the questionnaire, students were told, "In this questionnaire, you will be asked to answer questions about using the Internet. This is NOT a test. Please tell us what is most true for you." Teachers and the researcher made sure that students completed the questionnaire independently. No specific time limit was given, and students usually spent 12 to 15 minutes completing the questionnaire.

Scoring. As items were designed on a five-point Likert scale, a student's response to each item was given a number from 1 to 5. If the student checked the option indicating the best evaluation skills or strongest awareness of information evaluation on the Internet, he/she got 5; if the student checked the option indicating no idea about website evaluation at all, he/she got 1. Scoring took into account whether the statement was positive or negative. For example, item #6, "I always look on the website and see who created it," is a positive statement, and item #17, "All the website authors have the same purpose of writing/creating a website," is a negative statement. If a student chose "strongly agree" for item #6, he/she got 5 points; if a student chose "strongly agree" for item #17, he/she received 1 point.
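To make the scoring concrete, the following minimal sketch shows reverse-coding of negatively worded items and the computation of Cronbach's alpha. The simulated responses and the choice of reverse-coded items are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 200, 18

# Hypothetical item keying: +1 for positively worded items, -1 for
# negatively worded ones (e.g., item #17 -> index 16).
keys = np.ones(n_items)
keys[16] = -1

# Simulate Likert responses: a latent "evaluation awareness" trait drives
# all items (with noise), discretized to the 1-5 scale.
trait = rng.normal(0, 1, size=(n_students, 1))
raw = 3 + trait * keys + rng.normal(0, 0.8, size=(n_students, n_items))
responses = np.clip(np.rint(raw), 1, 5)

# Reverse-code negatively worded items so 5 always means the strongest
# awareness/skill: "strongly agree" on item #17 becomes 1.
responses[:, keys == -1] = 6 - responses[:, keys == -1]

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

print(f"alpha = {cronbach_alpha(responses):.3f}")
```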
Single Website Evaluation

Description. The single website evaluation assessment was designed to measure how students evaluate websites. This assessment is composed of two parts. First, the students were asked to browse a researcher-selected website and make a judgment about whether or not the information on it is trustworthy. Then, the students were asked to write one paragraph about why they should or should not trust the information on the site. Since the main focus of the study was on eliciting how students evaluate websites, not on their writing, students were instructed not to pay a lot of attention to their spelling, grammar, or handwriting for this task (see Appendix F).

Given that the ultimate purpose of teaching students to evaluate websites is to help them recognize and use credible information on the Internet in real life, in designing the assessments for this study an effort was made to make the tasks as authentic and natural as possible. First, authentic websites were used; second, students were given a scenario for evaluating the website that was similar to a common real-life situation. In addition, the researcher was mindful of the need to ensure that the topic of the websites being evaluated was within this age group's scope of understanding and was interesting as well, since a difficult or boring topic could lead to inattention to the assessment. The panda bear, an animal not seen in many places in the US, was chosen as the topic of the websites used in this assessment. The topic was chosen with much consideration of students' interest and their level of understanding: children of this age group know something about panda bears but are not very familiar with them, so they might be interested in learning more. The two websites were chosen on the following basis: (a) they would be representative of many websites, not perfect ones but not ones with various indicators of untrustworthiness; and (b) both are trustworthy in some respects and not very trustworthy in others. This would give students a good chance to demonstrate their evaluation skills fully. The two forms are presented in Appendix F.

Assessment forms. To avoid a familiarity effect, two equivalent forms were designed. To keep the two forms as equivalent as possible, the researcher kept the instruction/scenario and the topic of the website on each form the same. The single website evaluation assessment was administered before and after the intervention in both groups. For counterbalancing, half the students in each class were randomly selected to use one form and half were given the other before the intervention; after the intervention, students got the opposite form.

Time students spent on this assessment. Given that some students might spend a very long time browsing different links on the site, a specific time limit, decided on the basis of the pilot study, was announced before administering the single website evaluation assessment. Students were given 25 minutes for this assessment.

Scoring. Each student received two scores: one for their overall judgment about whether or not the information on the website was trustworthy, and one for the reasons they wrote down. The reason part of the assessment was scored by two raters blind to condition and to pre-/post-assessment time. The inter-rater reliability between these raters, based on a random sample of 42 responses across condition and assessment time, was 93.5%. A student could receive 0, 1, or 2 points for the overall judgment part of the single website evaluation assessment.
Using criteria including (a) the update year, (b) the author's credentials, (c) the purpose of the site, (d) the information source, (e) presence of spelling or grammatical mistakes, and (f) presence of non-working links, the researcher and four experts reviewed the two websites. They judged the website on form A to be trustworthy in some respects and not in others, and the website on form B to be more trustworthy than not. Therefore, for the website on form A, a student who said "yes or no" scored 2, a student who said "yes, I trust it" scored 1, and a student who said "no, I don't trust it" scored 0. For the website on form B, a student who said "yes, I trust it" scored 2, a student who said "yes or no" scored 1, and a student who said "no, I don't trust it" scored 0.

The reasons a student wrote down were scored based on whether each reason showed that (a) the student identified a good thing to look at for the purpose of credibility evaluation, for instance, the date when the website was updated; or (b) the student used a good strategy in evaluating credibility, for instance, checking information on the site against his/her background knowledge. If the student identified a good thing to look at or used a good strategy, the researcher examined further whether the student correctly located the thing or applied the strategy appropriately. If so, the researcher then examined whether the information led to an appropriate judgment about the trustworthiness of the website. For example, if a student wrote, "I trust this website because they tell when it was updated," the student identified a good thing to look at (the time when the website was updated) and received 1 point. If the student wrote, "I trust this website because they tell when it was updated and it was 1999," the student both identified a good thing to look at and located that information (the year the website was updated), and received 2 points. If the student wrote, "This website is somewhat not trustworthy because it was updated in 1999, 8 years ago. Some information might have changed in the past 8 years. It does not include any new information," the student received 3 points, because he/she not only identified a good thing to look at and got it correct, but also linked this information appropriately to the trustworthiness of the website. A student's total score on the reason part of this assessment was the cumulative score across the reasons the student gave. Scores ranged from 0 to 12. The scoring guide/rubric for the single website evaluation assessment can be found in Appendix G.
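The tiered rubric just described (1 point for identifying a relevant feature or strategy, 2 for also locating or applying it correctly, 3 for also linking it to a trustworthiness judgment) can be expressed as a simple cumulative scoring function. A minimal sketch, simplifying the full rubric in Appendix G:

```python
def score_reason(identified: bool, located: bool, linked: bool) -> int:
    """Score one stated reason on the tiered 0-3 rubric described above."""
    if not identified:
        return 0
    if not located:
        return 1
    return 3 if linked else 2

def score_response(reasons) -> int:
    """Total reason score: cumulative across a student's reasons
    (observed scores in the study ranged from 0 to 12)."""
    return sum(score_reason(*reason) for reason in reasons)

# One fully linked reason (3 points) plus one reason that was identified
# and located but not linked to a judgment (2 points):
print(score_response([(True, True, True), (True, True, False)]))  # prints 5
```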
Website Ranking

Description. The website ranking assessment was designed to test whether students were able to distinguish trustworthy websites from untrustworthy websites, and how they made the distinctions. There were two parts to this assessment. First, the students were asked to rank four websites from the most trustworthy to the least trustworthy. Second, the students were asked to write one paragraph about why they chose one as the most trustworthy, and another paragraph about why they chose another as the least trustworthy. For the same reason as stated in the discussion of the single website evaluation assessment, students were told not to pay a lot of attention to spelling, grammar, or handwriting. Likewise, efforts were made to make the assessment as authentic and naturalistic as possible. Authentic websites were used, and the directions for this assessment were written as an assignment that a teacher would normally give. Under the scenario, the students were looking for information about the respiratory system on the Internet, and they were instructed to visit the given websites. The respiratory system was chosen as the topic of the websites by the researcher in consultation with fourth and fifth grade teachers; it is a topic that both fourth and fifth graders have studied and are familiar with. Since it is one of the human body systems, the respiratory system often appears as one section of a larger website containing information about the human body, including other body systems. The links given in the ranking assessment therefore pointed directly to the page where the respiratory system information was, rather than to the main page of the website, when that information was not on the main page. In this way, students did not need to look for the respiratory system page, and their focus could remain on website evaluation rather than information-seeking (not the focus of this study). Both forms of the ranking assessment can be found in Appendix H.

Assessment forms. For the same reason as stated in the discussion of the single website evaluation assessment, two equivalent forms were designed. To keep the two forms as equivalent as possible, the researcher kept the instructions and the topic of the websites the same. Furthermore, in selecting websites for the two forms, two goals were considered: (a) among the four websites on each form, there should be a big difference in degree of trustworthiness between the most trustworthy and the least trustworthy ones; and (b) each of the four websites on one form should match one of the four websites on the other form in degree of trustworthiness. For counterbalancing, half the students in each class were given one form and half were given the other before the intervention; after the intervention, students received the opposite form.

Time students spent on this assessment. Given that some students might spend a very long time browsing different links on the sites, a specific time limit, decided on the basis of the pilot study, was announced before administering the website ranking assessment. Students were given 30 minutes for this assessment.

Scoring. Each student received three scores: one for their website ranking, one for the reasons they wrote for listing one of the four sites as the most trustworthy, and one for the reasons they gave for listing one as the least trustworthy. The reason parts of this assessment were scored by two raters blind to condition and to pre-/post-assessment time. Based on a random sample of 42 responses across condition and assessment time, the inter-rater reliabilities for scores on reasons for choosing one website as the most trustworthy and one as the least trustworthy were 89.5% and 91.5%, respectively. For reasons explained in the Results section, the inter-rater reliability for the most-trustworthy and least-trustworthy reason scores was also calculated together; it was 92.3%.
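The inter-rater reliabilities reported for this study appear to be simple percent agreement between the two raters. A minimal sketch of that computation, with hypothetical scores:

```python
# Hypothetical scores from two raters on the same random sample of responses.
rater_1 = [3, 2, 0, 5, 1, 4, 2, 3, 0, 6]
rater_2 = [3, 2, 1, 5, 1, 4, 2, 3, 0, 6]

# Percent agreement: proportion of responses scored identically by both raters.
agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(f"inter-rater agreement = {agreement:.1%}")  # prints 90.0%
```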
The focus of the website ranking measure is on students' ability to distinguish which website is the most trustworthy and which is the least trustworthy; moreover, there is not much distinction between the second and third most trustworthy websites. Therefore, how accurately a student ranked the most trustworthy and the least trustworthy websites was weighted more heavily in the total score than the student's ranking of the middle two websites. There were 24 possible rankings in total for each form, and possible scores ranged from zero to six. For detailed information about the scoring of rankings, please see Appendix I. The rubrics for scoring students' reasons as to why one website was chosen as the most trustworthy and another as the least trustworthy are the same as the rubrics used for scoring the second part of the single website evaluation assessment.

Administration of Assessments

In both groups, before and after the intervention, the questionnaire was administered first, followed by the single website evaluation assessment and then the website ranking assessment. All pre-assessments were completed within two to three weeks before the intervention, and all post-assessments within two to three weeks after it. Within each class, the researcher administered the assessments with assistance from the teacher. Protocols were used for each assessment administration, before and after the intervention in both groups, to ensure uniformity. The protocols included clarification about when and how students could use the computer after finishing the assessments, so that this was uniform across settings and did not give students undue reason to rush.

One feature of the single website evaluation assessment and the website ranking assessment is that they require computers and an Internet connection, so their administration depended heavily on each school's software and hardware. All assessments went as expected except the website ranking assessment in one control class. Seven minutes after the students started the assessment, the Internet connection went down; by that time only five students had handed in their answer sheets. The assessment was therefore rescheduled for another day the following week, with each student using the same form as during the interrupted session. The answers from the five students who had turned in their answer sheets the first time were used in data analysis; the rest of the data for this class came from students' answers completed the second time.

Analysis Procedures

The Statistical Package for the Social Sciences (SPSS, version 13.0) was used to analyze the data. For data collected through the questionnaire, the single website evaluation assessment, and the reason part of the website ranking assessment, ANCOVAs (analyses of covariance) were used to estimate the effect, if any, of the intervention on students' website evaluation performance while controlling for initial group differences in website evaluation skills. Data from the three assessments were analyzed and modeled individually.
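The study ran these analyses in SPSS; the following is a minimal sketch of one such ANCOVA in Python with statsmodels, offered only for illustration. The column names and data are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, condition coded 0/1 (1 = experimental).
data = pd.DataFrame({
    "post": [12, 9, 15, 7, 14, 8, 11, 6, 13, 5],
    "pre":  [8, 7, 10, 6, 9, 7, 8, 5, 9, 4],
    "condition": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
})

# ANCOVA: post-score regressed on condition, controlling for pre-score.
model = smf.ols("post ~ pre + C(condition)", data=data).fit()
print(model.summary())

# The homogeneous-slopes assumption (discussed next) can be checked by
# testing a pre-by-condition interaction term:
check = smf.ols("post ~ pre * C(condition)", data=data).fit()
print(check.pvalues["pre:C(condition)[T.1]"])
```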
For each model, pre-assessment scores were checked to determine whether the data met the ANCOVA assumption of homogeneous regression slopes; results show that they did. For the questionnaire data, an overall score was used in the ANCOVA. Then, to find out whether the intervention had any effect on different aspects of participants' evaluation skills, the data from each individual item were also analyzed. For the single website evaluation data, separate ANCOVAs were run for students' judgment scores and reason scores. For the reason parts of the website ranking assessment data, ANCOVA was also used to test the impact of the instruction.

Analyses of the ranking scores proceeded differently. An ordinal regression model with the logit link (proportional odds model) was used to test the effect of the intervention across all levels of the ranked categorical outcome (Bender & Benner, 2000). The assumption of parallel lines across all levels of the ranking outcome was met. A p < 0.05 level of statistical significance was used in all the models adopted.

ANCOVA and ordinal regression were also run comparing outcomes for classes taught by computer teachers with outcomes for classes taught by regular classroom teachers. No difference was found between classes taught by the two types of teachers, and results are reported for the two types combined.

Results

Recall that the research question for this study was: What is the effect of instruction in the WWWDOT approach, if any, on fourth and fifth grade students' critical evaluation of websites? The results show an impact of instruction in the WWWDOT approach on fourth and fifth grade students' attitudes toward website evaluation and on their skills in evaluating websites on various dimensions. That is, students in the experimental group regarded evaluating the credibility of websites as a more important step during Internet reading than did students in the control group, and they could list more well-founded reasons for their credibility judgments than those in the control group. However, the students in the experimental group did not perform better than those in the control group in overall judgment or website ranking performance. That is, their judgments about whether or not a website was trustworthy, and about the relative trustworthiness of websites, were not better at a level of statistical significance than those of the control group. In the following sections I report results for each outcome measure.

Questionnaire

The covariate, participants' pre-assessment score, was significantly related to participants' post-assessment score, F(1, 198) = 69.23, p < .01, r = .51. There was also a significant effect of instruction in the WWWDOT approach on participants' post-assessment scores after controlling for the effect of pre-assessment scores, F(1, 198) = 42.06, p < .01, r = .42. Table 4 presents the means, standard deviations, and means adjusted by pre-assessment scores, by assessment time and condition. To further explore the effects of instruction in the WWWDOT approach on students' concepts of website credibility and their evaluation skills, I used an ordinal regression model (PLUM) with the data for each item of the questionnaire.
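SPSS's PLUM procedure fits a proportional-odds model; a comparable model can be sketched with statsmodels' OrderedModel. This is an illustration under that assumption, not the study's actual analysis, and the data are simulated:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulated data: a post-intervention item response (ordered 1-5), the
# condition indicator (1 = experimental), and the pre-intervention
# response as covariate. All values are hypothetical.
rng = np.random.default_rng(1)
n = 200
condition = rng.integers(0, 2, size=n)
pre = rng.integers(1, 6, size=n)
post = np.clip(pre + condition + rng.integers(-1, 2, size=n), 1, 5)

endog = pd.Series(pd.Categorical(post, categories=[1, 2, 3, 4, 5], ordered=True))
exog = pd.DataFrame({"condition": condition, "pre": pre})

# Proportional-odds (logit link) model: one slope per predictor, plus
# ordered thresholds between adjacent response categories.
fit = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)
print(fit.summary())  # a positive 'condition' coefficient means higher
                      # odds of a higher response category
```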
Because some category cells were empty, the chi-square goodness-of-fit statistic was not valid; conclusions from the ordinal regression results are therefore merely suggestive and cannot be used to make inferences. After controlling for pre-assessment scores, the coefficients for instruction in the approach were positive and significant for all items except items #1, #2, #11, #13, and #15. Table 5 presents the coefficients for instruction in the approach on students' performance on the 18 items of the questionnaire. After the estimated response probabilities for each category by condition were checked, the results suggested that instruction in the WWWDOT approach increased the probability of getting a high score on the post-questionnaire items, except for the five items mentioned above. Of these items, item #13 deserves special attention. It asked students about their self-perception of their website evaluation skills. Although there was no statistically significant difference between groups in self-perceived evaluation skills, the means show a slight decrease in experimental group students' self-perceived website evaluation skills after receiving instruction in the WWWDOT approach. The negative coefficient (β = -0.103), which is much smaller in magnitude than the other coefficients, and the standard error (SE = 0.251), which is similar to the standard errors for the other items, indicate that this decrease was neither statistically nor substantively significant.

The findings from the analyses of individual questionnaire items show that: (a) instruction in the WWWDOT approach helps students become more aware of the existence of untrustworthy information on the Internet; (b) instruction in the WWWDOT approach also has a statistically significant impact on the experimental group's website evaluation skills in identifying the authorship of websites, noticing the currency of information, noting the existence of different purposes for creating a website, attending to the organization of websites, and having a plan about what to do next while browsing a website; (c) instruction in the WWWDOT approach does not help students take their own needs into consideration while browsing a website; (d) instruction in the WWWDOT approach does not improve students' confidence in their self-perceived evaluation skills (on the contrary, there was a statistically non-significant trend toward a slight decrease in experimental group students' self-perceived website evaluation skills); and (e) instruction in the WWWDOT approach does not have an impact on students' browsing skills. Figure 1 displays contrasts of the post-assessment mean scores between the experimental group and the control group for the 18 items.

Single Website Evaluation

There were two scores for the single website evaluation assessment: a judgment score for participants' judgments of the trustworthiness of a website, and a reason score for the reasons participants gave for why they should or should not trust the website. The two scores were not significantly correlated (r = .102, p = .127), so two ANCOVAs were run to examine whether the intervention had any effect on either score. The covariate, students' pre-assessment judgment score, was not significantly related to the post-assessment judgment score, F(1, 209) = .273, p = .602, η² = .001.
There was no significant effect of instruction in the WWWDOT approach on participants' post-assessment judgment scores after controlling for pre-assessment judgment scores, F(1, 209) = .755, p = .386, η² = .004. Table 6 shows the means, standard deviations, and means adjusted by pre-assessment scores, by assessment time and condition.

The covariate, students' pre-assessment reason score, had a significant effect on the post-assessment reason score, F(1, 209) = 27.864, p < .001, η² = .09. There was also a significant effect of instruction in the WWWDOT approach on participants' post-assessment reason scores after controlling for pre-assessment reason scores, F(1, 209) = 72.498, p < .001, η² = .23. Table 7 shows the means, standard deviations, and means adjusted by pre-assessment scores, by assessment time and condition. Figure 2 shows the comparison of growth in reason scores from pre- to post-assessment between the two groups.

Examples of students' stated reasons are given below. The mean post-assessment reason scores were 5.45 for the experimental group and 1.60 for the control group; I give one example answer from each group. Lauren, a student in an experimental class, scored 6 on the reasons she wrote. What she wrote² is as follows, with comments from the rater explaining why she received 6 points:

From the information I would think that it is true but the website was by Jian Mu, a graduate, but who knows who that person is. It is not someone you know like the History Channel [she received 3 points here because she identified a good thing to look at, that is, the author of the website; located the correct information, Jian Mu; and appropriately linked information about the author to the trustworthiness of the website]. But other than that all the information is probably true like the panda living in China. But the pictures look like paintings not real [she got another 3 points here because she identified a good thing, whether the picture is real or not; got the correct information, that is, the picture is not real; and appropriately linked the fake picture to the trustworthiness of the website]. And in one of the pictures the panda is in a comic made of veins.

² The researcher did some editing of the students' written examples presented here, in punctuation, spelling, and capitalization, to make the paragraphs more readable.

Amy, a student in a control class, scored 2 on her reasons. Here is what she wrote, with comments from the rater explaining why she received 2 points:

I can trust this information because I think it sounds true. Like it said Panda Bears are found in parts of South China, Tibet, Nepal and few other countries and I knew that it is true. [She received 2 points because she used a good strategy, that is, checking against background knowledge, and used it correctly. However, she did not apply it well to make a sound trustworthiness judgment.]

Website Ranking Assessment

PLUM (ordinal regression) was used to examine whether instruction in the WWWDOT approach and the pre-ranking scores had any effect on students' post-ranking scores. The coefficient for instruction in the approach, the independent variable in the model, is .259. The positive coefficient indicates that instruction in the WWWDOT approach increased the probability of making the correct judgment about which website was trustworthy and which was not.
However, this increased probability was not statistically significant (p = .272). The score on reasons why one website was chosen as the most trustworthy and the score on reasons why one website was chosen as the least trustworthy were significantly correlated (r = .486, p < .001); therefore, the two scores were summed into one variable for the ANCOVA. Results show that pre-assessment reason scores were significantly related to post-assessment reason scores for the ranking assessment, F(1, 204) = 19.362, p < .001, η² = .21. There was also a significant effect of the intervention on students' reason scores for the ranking assessment, F(1, 204) = 56.506, p < .001, η² = .07. Table 8 shows the means, standard deviations, and means adjusted by pre-assessment scores, by assessment time and condition. Figure 3 shows the comparison of growth in reason scores from pre- to post-assessment between the two groups.

Essay examples from one of the top scorers in each group at post-assessment, given below, may help illustrate differences between the two groups. Donna, one of the top scorers in the experimental group, stated her reasons for trusting one website the most as follows:

I graded this website the best for many reasons. One of them is because they provide who wrote it. Although I don't know who the man is, at least they provided some information about this person. The author wrote this to inform people. The page was last updated in 2006. It is a while back, but the facts can't really change a lot. It is very organized. I don't think someone would go through all that work for nothing.

She wrote the reason for trusting one website the least as follows:

The author provides almost no information on the main page. The author doesn't even provide their name or when they made it. The author does inform you, but not much. If I were writing a paper on health I would definitely go to a different website.

Trevor, one of the top scorers in the control group, stated his reason why he trusted one website the most as follows:

I think the information on this website is trustworthy because whoever wrote the website wrote many things about the respiratory system, so they must know a lot about it.

He stated his reason why he trusted one website the least as follows:

[This website that I trust the least] had few info, only links to other websites. The writer must know little about the respiratory system.

Discussion

The findings suggest that instruction in the WWWDOT approach changed fourth and fifth grade students' views about the credibility of information on the Internet. They came to realize that information on the Internet is not always accurate or true; they no longer assumed it to be true, as they had before. Instruction in the approach also improved students' website evaluation skills: after receiving the instruction, students could evaluate websites on multiple dimensions. However, students' overall judgment of the credibility of websites as trustworthy or not trustworthy, and their ability to rank websites by relative trustworthiness, were not improved at a level of statistical significance.

The present findings can be summarized as follows. First, participants who received instruction in the WWWDOT approach outperformed the participants who did not in critically evaluating websites on various dimensions.
This finding was confirmed with three assessments: the questionnaire, the single website evaluation assessment (scores on reasons), and the ranking assessment (scores on reasons). After receiving the instruction, students looked at websites with more depth. It was no longer simply that the fancier the website, or the more photos or words it had, the better it was. Instruction helped students realize the importance of evaluating websites on other dimensions, such as the authorship of a website, when it was created or updated, why it was created, and its organization, and they made extensive plans about what to do next. While reading websites, students who received instruction in the WWWDOT approach applied what they had learned and noticed the trustworthy as well as the untrustworthy dimensions of websites.

Second, although instruction in the approach enabled participants to point out trustworthy and untrustworthy aspects of a website, participants did not show improvement in their overall judgment of the trustworthiness of a website. Two possible causes might explain this result. First, students may not have been able to synthesize the information they collected about the various dimensions of website credibility to make a sound judgment. As the analysis of scores on the reason part of the single website evaluation assessment shows, students were able to gather information about many aspects of a website that would inform a judgment about whether it is trustworthy. That may be just the first step; the next step, and the ultimate goal of website evaluation, is synthesizing everything gathered to make a judgment about the credibility of a website. If this is the case, further research is needed to find out how to enable students to achieve this goal. Indeed, manuscript two of this dissertation examined in detail how students who received instruction in WWWDOT and students who did not evaluated two websites, and the findings of that study show a difference in their overall judgments of the trustworthiness of the two websites.

Another possible reason is that people weigh criteria differently, or have some criteria of their own for labeling a website trustworthy or untrustworthy. As discussed earlier, website evaluation is a highly complex process and credibility is a continuum. Some people may hold a higher standard for a trustworthy website, and some a lower one. With respect to ranking, some may believe that one criterion, such as authorship, is more important in making decisions about trustworthiness, while others might believe that another criterion, such as the purpose for which the website was created, is more important. Readers with different emphases may place websites in a different order, or put the same website at two different spots on the continuum. The two spots may be only slightly apart on the continuum, but when translated into rankings or judgments of overall trustworthiness, they can appear far apart. For example, the same website the researcher leaned toward trusting might be labeled by another reader as untrustworthy, even though their opinions about the credibility of the website are not as different as the labels suggest. If this is the case, scores on students' judgments may not be as reliable as scores on their reasons for trusting one website and not trusting another.
Third, students in the experimental group did not show improvement in their reports of whether they are inclined to, and can, evaluate a website with respect to whether it meets their needs. This finding came from the questionnaire data. The questionnaire items addressing this aspect of website evaluation were: (a) When browsing a website, I stop to think about whether it has what I am looking for; and (b) When browsing a website, I can tell quickly whether this website has what I need. In the questionnaire, no specific website example or scenario was given. Students' responses to the website evaluation and ranking tasks show that they did address their needs when given a scenario and the opportunity to explore actual websites. In addition, students' self-perceived website evaluation skills were not improved.

In conclusion, instruction in the WWWDOT approach had an impact on fourth and fifth grade students' attitudes toward website evaluation and their attention to different dimensions in website evaluation. Given that the instruction consisted of only two 30-minute teaching sessions and two 30-minute practice sessions, its impact is noteworthy and encouraging. However, it is also important to examine in future research whether there is any impact on students' overall judgment and ranking performance if the WWWDOT approach is taught in more than four 30-minute sessions or if a different instructional approach is used.

Strengths and Limitations of the Study

This study is the first in the literature to examine an approach to improving students' website evaluation skills. An important strength of the study is its use of different types of measures, including a questionnaire, a single website evaluation assessment, and a ranking assessment, to examine the effects of the intervention. Together, these measures provide a better understanding of the effect of the treatment than any single measure could. Another strength is the use of pre-test scores as a covariate, so that the effect of instruction in the approach is estimated after adjusting for any differences in pre-test scores.

Despite these strengths, the study also has several limitations. First, because measures of students' website evaluation skills did not previously exist, the researcher created all measures used in this study. Although the procedures for creating these measures were extensive, the measures proved reliable, and the statistical assumptions for ANCOVA and ordinal regression were met, there is nonetheless room for more elaborate assessment development and for the use of such further developed assessments in testing the impact of this intervention (and others). Second, students' ranking scores might have been affected by the readability level of each website, a factor that was not controlled. The researcher made an extensive effort to locate websites that were at the reading level of fourth and fifth graders and that fit the purpose of the assessment; few websites on the Internet met both criteria. The SMOG Reading Level Calculator (a formula that estimates the years of education needed to understand a piece of printed text) was used to check readability and ensure the websites were at an appropriate reading level.
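For reference, the standard SMOG formula (McLaughlin, 1969) estimates a U.S. grade level from the number of words of three or more syllables in a sample of sentences. A minimal sketch, assuming syllable counting is done elsewhere:

```python
import math

def smog_grade(polysyllable_count: int, sentence_count: int) -> float:
    """SMOG grade estimate from the count of words with 3+ syllables
    in a sample of sentences (McLaughlin, 1969)."""
    return 3.1291 + 1.0430 * math.sqrt(polysyllable_count * (30 / sentence_count))

# Example: 15 polysyllabic words found in a 30-sentence sample of a web page.
print(f"estimated grade level: {smog_grade(15, 30):.1f}")  # about 7.2
```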
However, it is possible that some parts of the websites, especially those used for the website ranking assessment, contain difficult linguistic structures and vocabulary that were not well captured by the SMOG and are beyond the students' levels. Moreover, the SMOG was not designed to measure the readability of websites, and website readability may be a different construct than printed-text readability. Students' ranking performance might thus have been affected. Third, the matched-pair design in this study operated only at the class level and favored district and school demographic characteristics over other factors. Matching at the student level, and matching on a greater range of factors, such as teachers' graduate degrees and teachers' Internet reading experience, would have made for a better design, but was not possible in this case due to constraints on the sample size and the sample pool. Fourth, we must be cautious in generalizing the findings to other settings. Even though the participants in this study came from different backgrounds (some from a rural district, some from a suburban district, and some from an urban district), the sample size of 12 classes is relatively small, and some groups, such as English Language Learners, were not well represented. Moreover, this study tested the approach only with fourth and fifth grade students; the results might not hold with other age groups.

Conclusions and Future Research

The current study was an important first step in testing an approach to improving students' website evaluation skills. Clearly, more research is needed to add to the sparse empirical base on this topic. First, the effect of instruction in the WWWDOT approach should be tested with students at other grade levels and in other settings and samples. The measures used in the study, including the questionnaire, the single website evaluation, and the website ranking, should also be tested at different grade levels, and expanded versions of WWWDOT or alternative approaches should be developed and tested. Second, it is important to examine more qualitatively how students apply, or fail to apply, what they learn in the WWWDOT approach to their everyday Internet reading. For students who showed improvement on the measures, what characteristics did they have? For students who did not, what prevented them from improving, and what could be changed to make instruction in this approach work for everyone? A study addressing some of these questions appears as manuscript two of this dissertation. Third, it is important to understand the synthesis process involved in critical evaluation of websites. Further studies should investigate how good website evaluators evaluate websites, how they synthesize information and make sound judgments, and how they developed their evaluation ability. Fourth, as noted in the discussion section, it is also important to examine how exactly students make an overall credibility judgment of websites and rank websites from the most trustworthy to the least trustworthy. What factors affect the overall judging and ranking process? With further understanding of students' evaluation processes, we would be in a better position to refine the WWWDOT approach. Finally, it would be important to examine the longer-term effects of instruction in the WWWDOT approach. If the tutoring sessions had been held two months, rather than two weeks, after instruction in the WWWDOT approach, or even longer, would the same results have been seen?
In conclusion, this study developed an approach that can be applied to help improve fourth and fifth grade students' website evaluation skills. The results of the study add to the body of work on students' reading on the Internet and how to improve it. As the Internet increasingly becomes an important information source for students, and information on the Internet is not always screened, the ability to critically evaluate websites is an important new literacies skill (Leu, 2000, 2002; Leu, Kinzer, Coiro, & Cammack, 2004), and approaches to improving this skill are urgently needed (Burke, 2000; Eagleton & Dobler, 2007; Hawes, 1998; Henry, 2007; Schrock, 1996, 1999). This study begins to serve this need.

References

Agosto, D. E. (2002). A model of young people's decision-making in using the Web. Library & Information Science Research, 24, 311-341.

American Library Association. (2002). Criteria for evaluating websites. Retrieved October 10, 2005 from http://www.ala.org/ICONN/rating.html

Baskin, B. H. (1997). The role of computer graphics in literacy attainment. In J. Flood, S. B. Heath, & D. Lapp (Eds.), Handbook of research on teaching literacy through the communicative and visual arts (pp. 872-874). New York: Simon & Schuster Macmillan.

Baule, S. (1997). Easy to find but not necessarily true. Book Reports, 16, 26.

Bender, R., & Benner, A. (2000). Calculating ordinal regression models in SAS and S-Plus. Biometrical Journal, 42(6), 677-699.

Brouwer, P. (1997). Hold on a minute here: What happened to critical thinking in the information age? Journal of Educational Technology Systems, 25, 189-197.

Burbules, N. C. (1997). Rhetorics of the Web: Hyperreading and critical literacy. In I. Snyder (Ed.), Page to screen: Taking literacy into the electronic era (pp. 102-122). New York: Routledge.

Burke, J. (2000). Caught in the Web: Reading the Internet. Voices From the Middle, 7(3), 15-23.

Calisir, F., & Gurel, Z. (2003). Influence of text structure and prior knowledge of the learner on reading comprehension, browsing and perceived control. Computers in Human Behavior, 19, 135-145.

Card, S., Mackinlay, J. D., & Shneiderman, B. (1999). Readings in information visualization: Using vision to think. San Diego, CA: Academic.

Chipman, S. F., Segal, J. W., & Glaser, R. (Eds.). (1985). Thinking and learning skills, Volume 2: Research and open questions. Hillsdale, NJ: Lawrence Erlbaum Associates.

Choo, C. W., Detlor, B., & Turnbull, D. (2000). Information seeking on the Web: An integrated model of browsing and searching. Proceedings of the Annual Meeting of the American Society for Information Science (ASIS), 36, 3-16.

Crane, B. E. (2000). Teaching with the Internet: Strategies and models for K-12 curricula. New York: Neal-Schuman.

Dee-Lucas, D. (1996). Effects of overview structure on study strategies and text representations for instructional hypertext. In J. F. Rouet, J. J. Levonen, A. Dillon, & R. J. Spiro (Eds.), Hypertext and cognition (pp. 73-108). Mahwah, NJ: Erlbaum.

Eagleton, M. B., Guinee, K., & Langlais, K. (2003). Teaching Internet literacy strategies: The hero inquiry project. Voices from the Middle, 10(3), 28-35.

Eagleton, M. B., & Dobler, E. (2007). Reading the Web: Strategies for Internet inquiry. New York: The Guilford Press.
Fitzgerald, M. A. (1997). Misinformation on the Internet: Applying evaluation skills to online information. Emergency Librarian, 24, 9-14.

Fitzgerald, M. A. (2000). Criticizing media: The cognitive process of information evaluation. Educational Media and Technology Yearbook, 25, 130-140.

Friedman, A. (2005). Using digital primary sources to teach world history and world geography: Practices, promises, and provisos. Journal of the Association for History and Computing, VIII(1). http://mcel.pacificu.edu/jahc/JAHCVIII1/index.html

Gilster, P. (1997). Digital literacy. New York: Wiley.

Graham, S., & Harris, K. (2005). Writing better: Effective strategies for teaching students with learning difficulties. Baltimore: Brookes Publishing Company.

Guinee, K., Eagleton, M. B., & Hall, T. E. (2003). Adolescents' Internet search strategies: Drawing upon familiar cognitive paradigms when accessing electronic information sources. Journal of Educational Computing Research, 29(3), 363-374.

Harris, R. (2007). Evaluating Internet research sources. Retrieved October 17, 2007 from http://www.virtualsalt.com/evalu8it.htm

Hawes, K. S. (1998). Reading the Internet: Conducting research for the virtual classroom. Journal of Adolescent & Adult Literacy, 41(7), 563-565.

Hirsh, S. G. (1999). Children's relevance criteria and information seeking on electronic resources. Journal of the American Society for Information Science, 50(14), 1265-1283.

Hoffman, J. L., Wu, H.-K., Krajcik, J. S., & Soloway, E. (2003). The nature of middle school learners' science content understandings with the use of on-line resources. Journal of Research in Science Teaching, 40(3), 323-346.

Hoffman, J. V. (1992). Critical reading/thinking across the curriculum: Using I-charts to support learning. Language Arts, 69, 121-127.

Jones, B. D. (2002). Recommendations for implementing Internet inquiry projects. Journal of Educational Technology Systems, 30(3), 271-291.

Kafai, Y., & Bates, M. J. (1997). Internet Web-searching instruction in the elementary classroom: Building a foundation for information literacy. School Library Media Quarterly, 103-111.

Kamil, M. L., & Lane, D. (1998). Researching the relationship between technology and literacy: An agenda for the 21st century. In D. Reinking, M. McKenna, L. Labbo, & R. Kieffer (Eds.), Handbook of literacy and technology: Transformations in a post-typographic world (pp. 323-341). Mahwah, NJ: Lawrence Erlbaum Associates.

Kirk, E. E. (2000). Evaluating information found on the Internet. Available online at http://milton.mse.jhu.edu:8001/research/education/net.html

Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K-12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75, 285-328.

Larkin, J. H., & Simon, H. A. (1987). Why a diagram is (sometimes) worth ten thousand words. Cognitive Science, 11(1), 65-99.

Lankshear, C. (1997). Changing literacies. Milton Keynes, England: Open University Press.

Lankshear, C., & Knobel, M. (2003). New literacies: Changing knowledge and classroom learning. Philadelphia: Open University Press.

Large, A., & Beheshti, J. (2000). The Web as a classroom resource: Reactions from the users. Journal of the American Society for Information Science, 51(12), 1069-1080.
Luke, A. (2000). Critical literacy in Australia: A matter of context and standpoint. Journal of Adolescent & Adult Literacy, 43(5), 448-462.

Leu, D. J., Jr. (2000). Our children's future: Changing the focus of literacy and literacy instruction. The Reading Teacher, 53(5), 424-429.

Leu, D. J., Jr. (2002). The new literacies: Research on reading instruction with the Internet and other digital technologies. In A. E. Farstrup & S. J. Samuels (Eds.), What research has to say about reading instruction (3rd ed., pp. 310-337). Newark, DE: International Reading Association.

Leu, D. J., Jr., Kinzer, C. K., Coiro, J., & Cammack, D. (2004). Toward a theory of new literacies emerging from the Internet and other information and communication technologies. In R. B. Ruddell & N. Unrau (Eds.), Theoretical models and processes of reading (5th ed., pp. 1568-1611). Newark, DE: International Reading Association.

Lorenzen, M. (2001). The land of confusion? High school students and their use of the World Wide Web for research. Research Strategies, 18, 151-163.

NETS Project & Brooks-Young, S. (2007). National educational technology standards for students. Retrieved December 1, 2007 from http://cnets.iste.org/

McDonald, S., & Stevenson, R. J. (1996). Disorientation in hypertext: The effects of three text structures on navigation performance. Applied Ergonomics, 27(1), 61-68.

McDonald, S., & Stevenson, R. J. (1998). Effects of text structure and prior knowledge of the learner on navigation in hypertext. Human Factors, 40(1), 18-27.

Nahl, D., & Tenopir, C. (1996). Affective and cognitive searching behavior of novice end-users of a full-text database. Journal of the American Society for Information Science, 47, 276-286.

National Center for Education Statistics. (2005). Internet access in U.S. public schools and classrooms: 1994-2003. Retrieved June 23, 2005 from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2005015

New Literacies Research Team & Internet Reading Research Group. (2006). Results summary report from the Survey of Internet Usage and Online Reading for School District 10-C (Research Report No. 1). Storrs: University of Connecticut, New Literacies Research Lab.

Ng, W., & Gunstone, R. (2002). Students' perceptions of the effectiveness of the World Wide Web as a research and teaching tool in science learning. Research in Science Education, 32, 489-510.

Nimwegen, C. V., Pouw, M., & Oostendorp, H. V. (1999). The influence of structure and reading-manipulation on usability of hypertexts. Interacting with Computers, 12, 7-21.

Rouet, J. F., & Levonen, J. J. (1998). Studying and learning with hypertext: Empirical studies and their implications. In J. F. Rouet, J. J. Levonen, A. Dillon, & R. J. Spiro (Eds.), Hypertext and cognition (pp. 9-24). Mahwah, NJ: Erlbaum.

Schacter, J., Chung, G., & Dorr, A. (1998). Children's Internet searching on complex problems: Performance and process analysis. Journal of the American Society for Information Science, 49, 840-849.

Schrock, K. (1996). It must be true: I found it on the Internet. Technology Connection, 3(5), 12-14.

Schrock, K. (1999). Producing information consumers: Critical evaluation and critical thinking. Book Report, 17(4), 47-48.

Slone, D. J. (2002). The influence of mental models and goals on search patterns during web interaction. Journal of the American Society for Information Science & Technology, 53(13), 1152-1169.

Small, R. V., & Ferreira, S. M. (1994). Multimedia technology and the changing nature of research in the school library. Reference Librarian, 44, 95-106.
Multimedia technology and the changing nature of research in the school library. Reference Librarian, 44, 95-106.

Stapleton, P. (2005). Evaluating web-sources: Internet literacy and L2 academic writing. ELT Journal, 59(2), 135-143.

Street, C. (2005). Tech talk for social studies teachers: Evaluating online resources: The importance of critical reading skills in online environments. Social Studies, 96(6), 271-273.

VanSledright, B. (2002). In search of America's past: Learning to read history in elementary school. New York: Teachers College Press.

Wallace, R. M., Kupperman, J., Krajcik, J., & Soloway, E. (2000). Science on the Web: Students on-line in a sixth-grade classroom. Journal of the Learning Sciences, 9(1), 75-104.

Waniek, J., Brunstein, A., Naumann, A., & Krems, J. F. (2003). Interaction between text structure representation and situation model in hypertext reading. Swiss Journal of Psychology, 62(2), 103-111.

Watson, J. S. (1998). "If you don't have it, you can't find it": A close look at students' perceptions of using technology. Journal of the American Society for Information Science, 49(11), 1024-1036.

Windschitl, M. (1998). The WWW and classroom research: What path should we take? Educational Researcher, 27, 28-33.

Zhang, S. (2006, November). Reading authentic websites in their native language and in Chinese: A descriptive study of four advanced Chinese-as-a-Foreign-Language (CFL) learners reading on the Internet. Paper presented at the annual meeting of the Chinese Language Teachers Association (CLTA), Nashville, TN.

Zhang, S., & Duke, N. K. (in press). Strategies of Internet reading with different reading purposes: A descriptive study of twenty good Internet readers. Journal of Literacy Research.

MANUSCRIPT TWO

A COMPARATIVE VERBAL PROTOCOL STUDY OF FOURTH AND FIFTH GRADE STUDENTS' WEBSITE EVALUATION STRATEGIES

Abstract

This study explored within- and between-group differences in two groups of students' evaluation of the trustworthiness of information on websites. One group of fourth and fifth grade students (N=12) received instruction in the WWWDOT approach, an approach to improving students' website evaluation skills, and the other group of students (N=12) did not. A comparative verbal protocol research method was adopted, with a tutoring method used in data collection: students in the study were asked to tutor younger students in evaluating two researcher-selected authentic websites. Data include Camtasia-recorded tutoring sessions (both screen captures and audio recordings) and the researcher's field notes. Results show that the students who had received instruction in WWWDOT had a greater understanding of the need for website evaluation and were more strategic in evaluation than the students who did not receive instruction in WWWDOT. The students who did not receive instruction in the WWWDOT approach did not have a clear idea about the need for website evaluation, were not able to strategically evaluate the trustworthiness of websites, and held incorrect understandings about what factors are related to the trustworthiness of a website. As a result, only four out of twelve students in the control group made correct judgments on the trustworthiness of the websites, whereas eleven out of twelve students in the experimental group made the same judgments as expert readers did.
Results also show that students who had received instruction in WWWDOT varied in their strategy use and that the second W (when was it written or updated) and the third W (why was it written) should be emphasized in teaching the WWWDOT approach. Furthermore, students who did not receive instruction in WWWDOT held a variety of misconceptions and misunderstandings, such as believing that websites with photos, maps, or links explaining words are trustworthy. A misunderstanding held by both groups is the equation of quantity with quality. There is a limited body of research on what strategies and skills readers adopt in the evaluation process, and this study adds to our knowledge about the evaluative strategies and skills a set of fourth and fifth grade students are and are not using. It is an important first step in describing how students evaluate websites and in providing information on the impacts of teaching students to evaluate the trustworthiness of websites.

Introduction

In recent years, more and more students have easy access to the World Wide Web in school and outside of school (NCES, 2005). The World Wide Web provides access to all sorts of information. However, some information on the Internet is very different from many printed texts. Printed texts have undergone various screening processes before they reach our hands. Take books in the library as an example. These books were once manuscripts submitted to editors. The editors passed judgment on their worthiness, perhaps with the assistance of reviewers. Authors were asked to make revisions requested by the editors. If the quality of the books was judged to be good and/or marketable, publishers published them. If librarians in relevant fields were convinced of the resulting books' significance, these books found their way to the library. Many printed texts have thus gone through screening by editors, publishers, librarians, and so on. In contrast, although some websites have also undergone similar screening processes, most are unscreened or unsanctioned. The Internet allows anyone to publish anything; thus people without the credentials we might want them to have, people with specific biases, people with various hidden agendas, and so on, can all provide information on the Internet. As students start to move from using printed texts as a source of information to using more uncontrolled and unfiltered electronic resources like the World Wide Web (Eagleton, Guinee, & Langlais, 2003), there is an even greater need for them to be able to evaluate the quality of the information presented.

Good adult readers go through a major evaluation stage during their Internet reading (Zhang & Duke, in press). They evaluate not only how relevant but also how credible the information on a website is. In contrast, many researchers and practitioners have found that school students rarely evaluate information on the Internet in their reading and have called for teaching students how to evaluate the quality and credibility of websites (Baule, 1997; Coiro & Dobler, 2007; Fitzgerald, 2000; Hirsh, 1999; Hoffman et al., 2003; Kafai & Bates, 1997; Kuiper, Volman, & Terwel, 2005; Large & Beheshti, 2000; Lorenzen, 2001; New Literacies Research Team, 2006; Ng & Gunstone, 2002; Stapleton, 2005; Wallace et al., 2000; Watson, 1998). Many different ways of teaching students how to evaluate websites have been proposed (Burke, 2000; Hawes, 1998; Schrock, 1996, 1999; Stapleton, 2005; Street, 2005).
However, little empirical research has been done to develop or test an approach for this purpose. No one has examined the evaluation processes of K-12 students when they are explicitly asked to evaluate websites. The purpose of this study is to provide an in-depth description of within- and between-group differences in two groups of fourth and fifth grade students as they evaluate websites. One group of students received instruction in an approach, called the WWWDOT approach, designed to improve their website evaluation skills, and the other group did not. The cases in this study are taken from a larger dataset collected during an experimental study on the impact of instruction in this approach to website evaluation.

Theoretical Frame

This study is built upon new literacies theory, Burbules and Callister's standpoint on Internet access and credibility, and the theory of metacognition. According to Leu et al. (2004), the new literacies of the Internet include "the skills, strategies, and dispositions necessary to successfully use and adapt to the rapidly changing information and communication technologies and contexts that continuously emerge in our world and influence all areas of our personal and professional lives" (p. 1572). One of these skills is critically evaluating the trustworthiness and usefulness of information on the Internet.

Along the same lines, Burbules and Callister (2000) argued for quality of access to information on the Internet. They connected issues of access and issues of credibility by asking the question, "What kind of access is worth having?" The volume of information and the variety of voices, viewpoints, and opinions on the Internet may be overwhelming. If those who have access to the Internet cannot discern what is useful, what is trustworthy, and what is important, they may squander a great deal of time wandering on the Internet and lose the patience to make such discriminations. Given that it is not possible for Internet users to change the form and content of the Internet, they should gain a critical capacity to "select, evaluate, and question what they encounter there" (Burbules & Callister, 2000, p. 32). Burbules and Callister (2000) viewed assessing the credibility of materials on the Internet as a two-dimensional activity, with an internal and an external dimension. The internal dimension involves evaluating elements of the material itself; the external dimension involves "evaluating external elements including associations with or references to others" (p. 33).

Reading on the Internet requires readers to search, to critically evaluate the credibility of websites (Burbules & Callister, 2000; Leu, 2002; Leu, Kinzer, Coiro, & Cammack, 2004; Zhang & Duke, in press), and to comprehend the contents of websites. All of these processes, including critically evaluating websites, involve skills that are metacognitive in nature. According to theories of metacognition, evaluation is composed of a few sets of elements (Flavell, 1981; Siegel & Carey, 1989). The first set of elements includes the reader's disposition toward critical thought, the reader's prior knowledge of the reading topic, and the reading purpose. These elements prompt the reader to start evaluation. Once the evaluation process starts, a second set of elements comes into play, in which the reader assesses the problem and applies evaluative skills and strategies. The evaluation is complete when the reader makes a decision or a value judgment.
There is a limited body of research on what strategies and skills readers adopt in the evaluation process, and this study adds to our knowledge about the evaluative strategies and skills a set of fourth and fifth grade students are and are not using. Guided by these three theories, the researcher sought to examine how students who received instruction in the WWWDOT approach and those who did not receive instruction evaluate websites. The purpose of the study is to find out whether and, if so, how the WWWDOT approach needs to be refined; what misconceptions or misunderstandings about website evaluation, if any, students hold; and what should be watched for in instruction. Informed by these findings, researchers could develop other approaches and teachers would be in a better position to help students read critically on the Internet.

Rationale and Review of the Literature

In this section I present literature that provides the rationale and foundation for this study. This section is divided into five sub-sections. In the first two sub-sections, I discuss how K-12 students rarely evaluate the trustworthiness of websites and are unaware of appropriate criteria to use in website evaluation. In the sub-section that follows, I explain that good adult Internet readers do evaluate websites and that they use various strategies in their evaluation. The next sub-section reveals that many students in the upper elementary grades use the Internet for information and argues for the importance of teaching students to critically evaluate the trustworthiness of websites. In the last sub-section I review various approaches to improving students' website evaluation skills, including the WWWDOT approach used in this study, and discuss the main purposes of this study.

Students Rarely Evaluate Websites

Several studies have investigated students' interactions and experiences with the World Wide Web. In general, these studies demonstrate that students, from elementary school to high school, do little evaluating of websites; they tend to assume that the information they find is true and valid. For example, Hirsh (1999) investigated the relevance criteria and search strategies that students applied when searching for information related to a class assignment. She interviewed ten motivated 5th grade students on two occasions at different stages of the research process on the Internet. Her purpose was to find out how these students used electronic resources and how they evaluated the information they located. Only two of the ten students mentioned the authority of textual materials when searching for text, and only one student considered the authority of graphics when searching for graphical materials.

Kafai and Bates (1997) conducted research on how elementary school students interacted with the Internet via the SNAPdragon project. The students were asked to build an annotated directory of websites for other children. The findings showed that "children were quick to assume everything they found about their topic on the Internet was correct just because it was there" (p. 109). When they were asked why they liked certain websites, the most common answer was, "It has a lot of information," or, "It has good information" (p. 109).

Large and Beheshti (2000) interviewed fifty 6th grade students about their experience using the Web to find information for a class project. They noted that the students did not question the accuracy or the validity of the retrieved information. Wallace et al.
(2000) examined 6th grade students' interaction with websites and reported that the students only evaluated the relevancy of the content, not the accuracy or credibility of the website. The most common way students evaluated sources was to search for the words they expected to find in an answer to their question. The students did not seem to display critical thinking about information on the Web, perhaps because they believed that "the Web is a giant book with a table of contents and an index" (Wallace et al., 2000, p. 94).

Several other studies report similar findings. Watson (1998) interviewed twelve 8th grade students to discuss their experiences with technology. None of the students mentioned evaluating the quality or credibility of websites. Ng and Gunstone (2002), in their research on 10th grade students' perceptions of the effectiveness of the World Wide Web as a research tool, noted that students need to learn how to judge the reliability of the source.

Students Use Oversimplified Criteria and Overlook Important Aspects When They Do Evaluate Websites

A few studies show that students do evaluate websites at times. However, students often overlook factors that are crucial to making credibility judgments. Sometimes they use oversimplified criteria or the wrong criteria. Lorenzen (2001) interviewed 19 high school students, using eight questions, to investigate whether or not they were able to use the World Wide Web as their primary source of information. One of the eight questions was: How do you know if the information on a website is good? The findings showed that the students did think about the qualifications of Web page authors, but they used search engines and domain extensions incorrectly to authenticate websites. They tended to trust institutional Web pages, and they examined websites for spelling and grammatical errors. Hoffman et al. (2003) investigated sixth-grade students' content understanding as well as their use of search strategies when they used online resources via Artemis, which was designed to provide a permanent workspace and allow students access to pre-selected online resources. This study shows that students evaluated websites solely on the basis of the domain name.

To my knowledge, no research has been done on what K-12 students do when they are explicitly asked to evaluate websites. All the previous studies of K-12 students examined their website evaluation while focusing on their broader use of the World Wide Web. Research on what students do when they are explicitly asked to evaluate websites would give in-depth detail on students' evaluation processes, which in turn would shed light on how to improve students' website evaluation.

Good Adult Internet Readers Evaluate Websites

Zhang and Duke (in press) investigated the reading strategies used by good adult Internet readers who had differing reading purposes. The findings showed that these readers would not make a final decision on whether or not to read information on a website they had found without first evaluating the credibility of the website. Good adult Internet readers evaluate websites with depth and from a comprehensive perspective. For example, in the study completed by Zhang and Duke (in press), good adult Internet readers paid a great deal of attention to who wrote a website and the author's credentials. Prior knowledge of official websites also played an important role in judging the credibility of websites.
These good readers also judged the credibility of websites by appearance, that is, the design/organization and the URL. Fitzgerald (2000) examined well-educated and motivated 2nd year doctoral students' cognitive processes of information evaluation. She found that these doctoral students used several strategies in evaluating the trustworthiness of websites, asking questions such as: (a) Does the website mention research? (b) Does the website contain all active links? (c) Is the site overly burdened with graphics? (d) Who is the sponsoring organization? and (e) What are their motives?

Students Could Learn Incorrect Information From the Internet

Students and teachers are being asked to use the Internet to conduct research for projects and papers in school (Eagleton, Guinee, & Langlais, 2003; Hird, 2000). If they are not discerning about the information on the Internet, they could potentially learn incorrect information. Henry (2007) conducted a study on middle school students' online reading comprehension. One of her findings indicated that critical evaluation of information was especially challenging for students from both economically advantaged and economically disadvantaged districts. In her study, one of the survey questions was designed to measure whether "students understood the importance of checking a website's authorship to evaluate information bias before using it as an information source" (p. 148). For this question, students were asked to choose which of four links on a website home page they should click for a report on the Martin Luther King Holiday: Truth About King; The King Holiday; Download flyers to pass out at your school; and Hosted by Stormfront. Results show that only 1% of the students understood that the author(s) or sponsor(s) may shape the presented information. This could mean that 99% of the students who use the Internet as an information source may learn biased information from the Internet if they do not critically evaluate websites.

Fortunately, researchers, teachers, and technology specialists in schools have been calling for the teaching of students to evaluate websites, and have proposed ways of doing so. For example, Street (2005) called for social studies teachers to teach secondary students to form questioning habits when reading on the Internet. She adopted Doyle's (1992) proposal as the goal of teaching students website evaluation skills: students should be able to recognize a need for information, identify and locate appropriate information sources, know how to gain access to the information, evaluate its quality, and use it effectively. Fitzgerald (1997) proposed that students should be able to evaluate the reliability of online sources in four distinct areas: format authority, writer authority, internal validity, and currency. Others have made similar proposals. For example, Hawes (1998) suggested asking students a list of questions such as:

-- Who are the authors?
-- Where do they work?
-- What organization, business or school do they represent?
-- Who is the intended audience?
-- How could this audience influence the author's point of view on the issue?
-- What is the author's point of view on the issue?
-- What proofs are offered for that point of view?
-- What is the author's purpose?

Schrock (1999) suggested that teachers should teach students to evaluate websites from three perspectives: authority of author; content, bias, and the authenticity of information; and presentation.
These proposals may be reasonable, but none of them has been tested for its effects. Researchers in ESL, however, have started researching the teaching of L2 graduate students to critically evaluate information on the Internet. Stapleton (2005) did a pilot study in which she taught seven L2 graduate students to ask six questions while they retrieved information from the Web: (a) Who is the author? (b) What authority does the site have? (c) How current is the information? (d) What is the intended audience? (e) What agenda (if any) does the author have? and (f) Is the content biased? These students were also introduced to several forms of weak reasoning associated with biases, and given terms used for some of the most common forms of fallacious reasoning from Ramage and Bean (1999). Results showed that the seven students identified a total of 75 distinct instances of weak reasoning and bias on their own, covering all of the areas outlined above. While these results with L2 graduate students are encouraging, they also raise the question of whether these website evaluation skills can be developed in K-12 students.

The findings listed above indicate a need to develop approaches to improving students' website evaluation abilities, especially those of elementary and secondary school students. This study was part of a larger experimental study on the impact of instruction in the WWWDOT approach on students' website evaluation skills. It was designed to describe how students who received instruction in the approach, and students who did not receive instruction, evaluate websites. The research questions for this study are:

1) How do 4th and 5th grade students who have received instruction in the WWWDOT approach, and those who have not, evaluate websites?

2) Are there any patterns of differences in website evaluation between and within the two groups? If so, what are they?

WWWDOT Approach

The WWWDOT approach is a tool, designed by Nell Duke and Shenglan Zhang, to support students' critical evaluation of websites. It was designed to encourage students to think about at least six issues when considering using a website for information: Who wrote the website? When was it written or updated? Why was it written? Does this help meet my needs? Organization of the website, and To do list for the future. The first part (WWW) concerns the authorship of a website, the timeliness of its information, and the purpose for creating it. The last part (DOT) concerns reading the content of a website, the presentation of the site, and deciding what to do next. Please see manuscript #1 in this dissertation for justification and further explanation of each of these components of the WWWDOT approach; a schematic sketch of the six elements follows.
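For readers who find a concrete representation helpful, the following is a minimal illustrative sketch, in Python, of the six WWWDOT prompts organized as a checklist. It is not part of the study materials; the class name, field names, and the example entries are hypothetical.

```python
from dataclasses import dataclass, field

# The six WWWDOT prompts, in the order students are taught to consider them.
WWWDOT_PROMPTS = [
    "Who wrote the website?",
    "When was it written or updated?",
    "Why was it written?",
    "Does this help meet my needs?",
    "Organization of the website",
    "To do list for the future",
]

@dataclass
class WWWDOTWorksheet:
    """One student's notes on one website, one entry per prompt (hypothetical)."""
    url: str
    notes: dict = field(default_factory=dict)

    def record(self, prompt: str, note: str) -> None:
        # Only the six WWWDOT prompts are valid entries.
        if prompt not in WWWDOT_PROMPTS:
            raise ValueError(f"Unknown prompt: {prompt}")
        self.notes[prompt] = note

    def unanswered(self) -> list:
        # Prompts the student has not yet addressed.
        return [p for p in WWWDOT_PROMPTS if p not in self.notes]

# Hypothetical usage for one of the two panda websites.
sheet = WWWDOTWorksheet(url="http://example.org/pandas")
sheet.record("Who wrote the website?", "Zoological Society of San Diego")
sheet.record("When was it written or updated?", "Updated in 2006")
print(sheet.unanswered())  # the four prompts still to consider
```

The point of the sketch is simply that WWWDOT treats evaluation as a fixed set of questions to be answered before trusting a site, rather than as open-ended reading.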
Methods

This study adopts a comparative verbal protocol methodology to look into differences, if any, in website evaluation performance within and between students who received instruction in the WWWDOT approach and those who did not. To bring out students' evaluating and reading processes, I adopted Garner, Wagoner, and Smith's (1983) peer tutoring method. The rationale behind this method is that in order to teach someone something, one needs to verbalize one's conscious thinking about how one does it. Compared with a traditional think-aloud, in which the participant is asked to verbalize his/her thinking process to the researcher while completing a task, this tactic reduces intimidation from the researcher and maximizes the possibility of talking out what one thinks. It may be especially suitable for young children, who might have a harder time thinking aloud than adults.

Participants

Twelve 4th and 5th grade students who received the instruction in the WWWDOT approach (i.e., the experimental group) and twelve 4th and 5th grade students who did not receive the instruction (i.e., the control group) were selected to be the tutors. The selection procedure went as follows. First, based on a survey conducted before the intervention and any assessments, the researcher picked, in each of the 12 classes, 4 students who had less experience in using the Internet (0-4 years) and 4 students who had more experience (5-8 years). Then the researcher gave the names of these students to their teachers and asked each teacher to choose, from each group (low Internet experience, high Internet experience), the one she thought was most articulate. For one computer teacher, who was not sure which students were most articulate, the regular classroom teacher of each class was consulted.

As a result, half of the 24 participants had less experience with the Internet and half had more experience, in each condition group. The more experienced group had an average of 5.83 years of experience and the less experienced group an average of 2.83 years. On average, students in the control group had 4.42 years of experience in using the Internet, and students in the experimental group had 4.25 years. The similar amounts of Internet experience helped ensure that any differences found between the two groups would not be caused by differences in their years of using the Internet. Table 9 shows demographic information about these students.

An equal number of students from 2nd and 3rd grades were selected as tutees to work with these student tutors from both groups. Teachers were asked to select students they believed would do well in the role of tutee (e.g., children who were not overly shy or overbearing). No very experienced Internet users from 2nd or 3rd grade (who might intimidate the tutors) were selected. The role of the 2nd and 3rd grade students was only to provide an audience and purpose for the 4th and 5th graders' tutoring. Data collection and analyses focus on the 4th and 5th grade students. Tutor and tutee matching was based on the ideal-match prescriptions from the tutorial learning-outcome literature (Allen & Feldman, 1976). Tutor and tutee were of the same sex, and tutees were two grades lower than tutors. An effort was made to avoid matching close friends or family members.

Intervention Procedures

Teachers for the experimental group were provided with detailed lesson plans and taught to implement the WWWDOT approach at a workshop held by the researcher. The teachers taught students in the experimental group the WWWDOT approach, and the students practiced what they learned, during a total of four 30-minute sessions. During the first two 30-minute sessions, the teachers taught the six elements of the approach to the students in the experimental classes and demonstrated why it was important to critically evaluate websites. The latter two sessions allowed students to practice what they learned with three different websites. For each of the three websites, the students were asked to fill out a WWWDOT worksheet.
After the three websites were evaluated, there was a teacher-led debate on the trustworthiness of the websites. The control group did what they normally do during the equivalent time. The researcher asked the teachers to teach the control classes as they had originally planned prior to involvement in the study. Of the six control classes, three had their computer class as usual. Students in these three classes were doing class projects, which involved editing photos and writing reports on the computer most of the time and searching for images on the Internet for a short period of time. The other three control classes received content area instruction as they normally do: one had science class, one had social studies class, and another had language arts class, none of which was directly related to the Internet.

Data Collection

Before and following the intervention, a number of pre- and/or post-test measures were employed. The focus of this paper is on an outcome measure administered to a subset of participating experimental and control group students: the discourse used by the students while tutoring younger students in evaluating websites. For this study, there are two data sources: Camtasia recordings of students' tutoring sessions (with both screen capture and voice recording) and the researcher's observation notes. The tutoring sessions happened two weeks after the experimental group received instruction in the WWWDOT approach. The tutoring method was adopted to help gain insight into the tutors' thinking processes (Garner et al., 1983; Lundeberg, 1987).

Data were gathered individually with each pair. A tutor and his/her tutee sat in front of a computer on which two researcher-selected (but not researcher-authored) websites were loaded. The tutor sat on the side of the computer where it was easy to use the mouse. The researcher instructed the tutor to teach the tutee how to evaluate each of these two websites and make decisions about whether or not to use each of them in a research project. Specifically, the researcher said:

Suppose you are doing a research project on the Giant Panda for your class and you found the following two websites. You would need to then decide if these websites are trustworthy to use. Here is what I want you to do. Using these websites, I want you to teach [name of tutee] about how to evaluate each of these two websites and make decisions about whether or not to use each of them in a research project. Remember that you need to teach [name of the tutee] about how to determine that either or both of these websites are trustworthy, so I expect that you will talk aloud about how to evaluate the sites and what you are thinking as you are looking at each of these websites.

The researcher sat some distance away and took notes. The researcher did not control the nature of the verbal exchanges between tutor and tutee except on two occasions: (a) when the tutor kept silent for over one minute and the tutee did not know what to do; and (b) when the tutor read word by word continuously for over one minute. On these occasions, the researcher would ask, "What are you thinking?" or "Can you tell [tutee's name] a little bit more?" The order in which the tutors/tutees in the two groups evaluated the two websites was counterbalanced. No limit was set on the length of time a tutor spent in teaching. In general, the tutor let the researcher know when he/she was done teaching the tutee to evaluate each of the websites.
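The order counterbalancing mentioned above can be expressed compactly in code. The following is a minimal illustrative sketch, assuming hypothetical pair identifiers and site labels (none of which come from the dissertation), in which successive tutor/tutee pairs alternate which website they evaluate first.

```python
# Illustrative sketch of order counterbalancing (labels are hypothetical):
# successive tutor/tutee pairs alternate which website they evaluate first,
# so that order effects are balanced within each condition group.
SITES = ("site_A", "site_B")

def assign_orders(pair_ids):
    """Return {pair_id: (first_site, second_site)} with alternating order."""
    orders = {}
    for i, pair_id in enumerate(pair_ids):
        first, second = SITES if i % 2 == 0 else SITES[::-1]
        orders[pair_id] = (first, second)
    return orders

# Hypothetical usage for twelve pairs in one condition group.
print(assign_orders([f"pair_{n:02d}" for n in range(1, 13)]))
```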
The average length of a tutoring session was 8 minutes and 37 seconds for the control group and 7 minutes and 49 seconds for the experimental group.

The websites used for tutoring had been purposefully selected by the researcher. Both were about giant panda bears, and they varied in their degree of trustworthiness. One was created by a fourth grade student named Daniel Danohoe and was written in January 2000 (Danohoe, 2000). The author's name and the date of writing were at the bottom of each page of the site. Besides the name of the author, no other information about the author, including the fact that the author was a fourth grade student, appeared on the website. The main page of the website had five links and a brightly colored drawing of a panda. On the pages that the five links led to, there were a few photos, section titles, and a few sentences and phrases. The other website was written by the Zoological Society of San Diego (ZSSD) and updated in 2006 (Zoological Society of San Diego Zoo, 2006). The author/sponsor information, which consisted of the name of the author/sponsor, its affiliation (the Association of Zoos and Aquariums), and the date of updating, was displayed at the very bottom of the page. At the top of the main page, there were a few tabs. On the left side of the page, there was brief information about panda bears and a video. The main section of the page contained a great deal of information about, and photos of, panda bears. At the bottom of the page, there was a slide show with a legend.

The whole tutoring process was recorded through Camtasia (TechSmith, 2007), computer software that captures the computer screen and sound (internal and external). Not only the students' voices but also the computer screen were recorded, to allow the researcher to have detailed information about students' website evaluation processes, such as where the students went to locate information about the website publication date, which part of the Web page the students were reading, and so on. The researcher took field notes while watching the tutors tutor the tutees.

Data Analysis

The tutoring data were transcribed. While transcribing the data, the researcher recorded pauses, speech fillers, and the movement and location of the cursor. An open coding method based on grounded theory (Strauss & Corbin, 1990) was adopted to allow the emergence of any non-preconceived ways of evaluating websites or reading on the Internet. The following steps were taken in developing an inventory of strategies used by students from both groups. Strategies in this study were defined as any general or specific approach that the readers used in an attempt to achieve their evaluation goals. First, two raters blind to condition independently examined each tutor's tutoring record and the researcher's observation notes to identify all possible strategies each tutor adopted in evaluating the two websites. Then the two raters listed all the different strategies used by the tutors within the same condition group, went through each strategy on the list, and compared each one to the others independently within each group, looking for those that were sufficiently similar that they should be collapsed into a single strategy description. A list of strategies tutors used was created by condition.
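As one way to picture the bookkeeping behind this step and the per-condition tallies reported later in Table 10, here is a minimal illustrative sketch in Python; the strategy labels and tutor identifiers are hypothetical, not the study's actual codes.

```python
from collections import defaultdict

# Illustrative bookkeeping for the strategy inventory. Each tutor's
# transcript has been reduced to the set of strategies the raters agreed
# the tutor used; the labels and tutor IDs here are hypothetical.
coded_transcripts = {
    ("experimental", "tutor_01"): {"who wrote it", "when updated", "meets needs"},
    ("experimental", "tutor_02"): {"who wrote it", "lots of information"},
    ("control", "tutor_13"): {"prior knowledge", "lots of information"},
    ("control", "tutor_14"): {"real photos", "prior knowledge"},
}

# Tally, within each condition, how many tutors used each strategy
# (number of tutors, not number of uses, as in Table 10).
tutor_counts = defaultdict(lambda: defaultdict(int))
for (condition, _tutor_id), strategies in coded_transcripts.items():
    for strategy in strategies:
        tutor_counts[condition][strategy] += 1

for condition, counts in sorted(tutor_counts.items()):
    print(condition, dict(counts))
```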
The strategies were then organized into three groups based on the literature: (a) the WWWDOT strategies, which were the target strategies of the WWWDOT instruction; (b) the recommended strategies, which consisted of those recommended in the literature as related to trustworthiness but not part of WWWDOT; and (c) the irrelevant and non-recommended strategies, which included those that were irrelevant or not recommended in the literature as related to trustworthiness. For each strategy, the number of tutors who used that strategy was recorded within each condition group. Two lists of strategies, one for each group of students, were created, with the number of participating tutors whose evaluation process involved each strategy. After that, the raters counted the number of strategies each tutor used, and the researcher examined each list individually to identify patterns of similarities and differences that students within each group demonstrated in the process of evaluating the websites. Finally, the researcher compared strategy use by condition to identify patterns of similarities and differences between the two groups.

Results

In this section, differences and similarities demonstrated by students within each group and between the two groups are described. There are three subsections. The first subsection describes how students who received instruction in WWWDOT evaluated websites and gives the within-group differences. The second subsection describes how students who did not receive instruction in WWWDOT evaluated websites and also presents the within-group differences. Within these two subsections, a general explanation of how students approached the tutoring task is provided before focusing on students' evaluation processes, across and within the group. The third subsection presents a comparative analysis of differences and similarities between the two groups.

How Students Who Received Instruction in WWWDOT Approached Evaluating Websites

The way the students who received instruction in WWWDOT approached the tutoring task showed, to some extent, that they had evaluation strategies in mind before starting the tutoring process. This was reflected in their use of general reading strategies and their use of specific tutoring techniques to fulfill the task. For example, 11 of them were observed using reading strategies such as skimming and scanning to find information on different dimensions of the websites related to trustworthiness and relevancy for the purpose of evaluation. In addition, 11 out of 12 students were observed using one or more techniques in teaching the tutees website evaluation. The tutoring techniques included (a) introducing an evaluation strategy or strategies and then using the websites as examples to illustrate how to use the strategy or strategies; (b) comparing the two websites on different dimensions and making judgments with their tutees about why one was more trustworthy than the other; and (c) guiding students step by step through an "if, then" procedure to make a judgment on the trustworthiness of websites. On one hand, using these tutoring techniques, especially (a) and (c), requires students to have clear a priori knowledge about how to evaluate a website. On the other hand, the processes of students' evaluating websites and their concepts of website evaluation were revealed through these tutoring techniques.
In general, the tutors in this group started reading a website by collecting information on various dimensions of website trustworthiness and taught their tutees how to do the same. If they considered a website to be trustworthy, they spent some time reading it together.

In this section, concepts of website evaluation as shown in students' tutoring processes are first discussed, followed by a description of the individual strategies adopted by these students. A discussion of individual differences in website evaluation is given at the end of this section.

Concepts of Website Evaluation

All students who received instruction in WWWDOT expressed, to some degree, caution about the trustworthiness of information on the Internet and told their tutees to be cautious too. To these students, evaluating the trustworthiness of a website is a necessary step when reading on the Internet. For example, before she found information about the authorship of a website, Tammy said to her tutee, "Do you think you want to trust this? No, this could be written by anyone if you think about it." Similarly, Caden taught his tutee not to just "go through the Web pages." He said:

you have to know when it was written and updated, you can't just go through it... So would you just go through and would you look at who wrote this and when it was written and updated and stuff? So you'd stop to think or would you just go on without knowing?

Evaluation Strategies

Participants who received instruction in the WWWDOT approach mainly used nine strategies in evaluating the trustworthiness of websites, of which five were a target of the WWWDOT approach, three were expert recommended, and one was not expert recommended. (Evaluation of the relevance of information to the participant's goals was included among the strategies because relevancy evaluation is the premise of any evaluation of trustworthiness; that is, if a website is not relevant to what one is looking for, it is not necessary to evaluate its trustworthiness.) These strategies are shown in Table 10, which presents a comparison of strategy use by the students who received instruction in WWWDOT and by those who did not, in order of the frequency with which the strategies occurred in the protocols and observation records of the different groups. Each value in the table represents the number of participants who used a specific strategy, not the number of times a strategy was used. The use of these strategies in the control group will be discussed in the next section. In this section, the experimental students' use of the WWWDOT strategies and their use of other strategies, recommended and non-recommended, are discussed separately.

Use of WWWDOT strategies. Five of the six WWWDOT strategies were adopted by the students who received instruction in the WWWDOT approach: (a) identifies Who wrote the website; (b) identifies When it was written or updated; (c) identifies Why it was written; (d) asks the question "Does this help meet my needs?"; and (e) checks the Organization of the website. Given that the context did not require students to do a real report, it makes sense that the participants did not use "To do list for the future" in their evaluation. Even though seven students paid attention to the organization of the websites and stressed its importance to their tutees, they did not directly relate the organization to the trustworthiness of the websites.
Therefore, the discussion that follows does not include students' use of the strategy of checking the Organization of the website.

As shown in Table 10, all of the participants in the experimental group evaluated websites by checking who wrote them and showed their tutees where to locate information about authorship. All but one of them mentioned the importance of knowing the credentials of the website author before using a website. Here is what Annie said when she taught Emily about website evaluation:

if National Geographic wrote one, that would be trustworthy, because it would, they have a lot of credentials. If it's written by a child, then they don't really have a lot of credentials. You really don't want to use that site... Here says who wrote it. It is the Zoological Society of San Diego, which they have real good credentials because they know a lot about animals.

Ten out of the twelve students evaluated the websites by checking when each website was created or updated, and they also showed the tutees where to locate information about creation/update time. Among the ten students, seven emphasized the importance of having timely information on a website. While looking at the website about pandas created by the Zoological Society of San Diego in 2006, Johnny said to his tutee Eric, "if you find another one updated this year, that would be a better one to use. It has new information on it."

Eight out of 12 students in the experimental group evaluated the websites by checking whether each website met their needs. They mentioned that if a website helped meet their needs, they could continue reading it; if not, they thought they should stop reading it and start looking for another one. In addressing the importance of matching the need to the content of the website, students paid attention to different dimensions of need. For example, Trevor focused on examining whether the website served different purposes of reading, such as writing a research paper, writing an essay, and so on. He said, "You have to look at the whole thing, like, would it be for an essay or would it be good for a research paper." On the other hand, Thomas told his tutee to focus on the topic he needed to know more about. In his opinion, before starting to read a website, one needs to take a look and see if the website contains the very topic one is looking for. He said:

If you want to learn, let's see, how big can pandas grow, say, if it didn't have it, you wouldn't really want to use this website. Say, if you want to learn, oh, what pandas eat, and say, you didn't know that, and this website did have it, you might be interested in this website.

Half of the students inferred why the websites were written based on the information they gathered from them. They believed that it was important for Internet readers to be aware of why a website was created. These students believed that if a website was created for a commercial purpose, readers should be cautious about its trustworthiness; if it was made to educate, the website was most probably trustworthy. For example, Kevin said to his tutee:

I would go to the top [the top part of the page where there are tabs] and see what they have, looks like they do this and want to educate. Because it doesn't seem like they have gift shop or anything. If they have gift shop somewhere, they probably did it to make money.

Similarly, Jay said to his tutee:

Why did they write it [the one done by Zoological Society of San Diego]?
In my opinion, with all the facts, I don't think they list all the facts just to sell you something because in my opinion they have no real links for anything to buy. There is nothing you CAN buy. For this one (the one done by a fourth grader), in my opinion, this kind of looks like a school project.

Use of recommended strategies. Three strategies recommended by experts, in addition to those in WWWDOT, were adopted by the students who received instruction in WWWDOT: (a) checks whether the photos on the website are real (recommended by, e.g., Harris, 2007); (b) uses background knowledge to evaluate whether the contents are true (recommended by, e.g., Burbules & Callister, 2000); and (c) checks whether the spelling and grammar are correct (recommended by, e.g., Harris, 2007).

Four out of twelve students checked whether the pictures on the websites were real photos or just drawings. They believed that if there were real photos on a website, the website was more trustworthy than one with drawings. Three students in this group pointed out that people should use their background knowledge to judge the trustworthiness of a website. For example, Brett said:

If you have some background knowledge and there were some things that were true, you might be able to trust it because you know those things are true.

Two students out of the twelve who received instruction in the WWWDOT approach checked whether the spelling and grammar in the writing were correct. These students believed that if the writing is sloppy and full of spelling or grammatical mistakes, the website is probably not trustworthy. For example, Tammy asked her tutee, "If you are looking through this, and you saw something that was spelled wrong and they used a wrong type of grammar, do you think you want to trust this?"

Use of a non-recommended strategy. The only strategy that the students in this group used but that I have not found recommended by experts in the literature was checking whether there was a large amount of information. Six out of twelve students used this strategy. Only one student mentioned a rationale for using it. Amy said, "In that one [the one created by a fourth grader], it is just 5 pages. This one [the comprehensive website created and maintained by the Zoological Society of San Diego Zoo] has a lot. They obviously have taken a lot of time getting into it than the person [who wrote the other website] did."

Using Multiple Strategies

Although the strategies were listed as separate items in Table 10 and were discussed one by one in this section, this does not mean that these students did not synthesize the information they obtained through using these strategies when making a judgment. All students who received instruction in the WWWDOT approach used more than one strategy in evaluating the two websites. Among these students, ten adopted at least four different strategies: one used eight, three used six, two used five, and four used four strategies. Only two students used fewer than four strategies: one used three, the other two. All students showed evidence of synthesizing the information obtained through using different strategies before making a final trustworthiness judgment. For instance, when he was not sure about the author of the website, Caden took other factors into consideration. With all the information gathered, he evaluated the trustworthiness of the website. He said:

You can already see this one might not be trustworthy.
Because the person who wrote this, Daniel Danohoe, is probably in first grade or second. Probably for a project. This was researched and written in 2000... It doesn't really have what you need. It shows you a little bit. It shows you who wrote it and when he updated it. But they have no, what you call it, people don't really know this person who wrote this. And he wrote it many years ago and it doesn't have new information on it and stuff.

In summary, the number of WWWDOT strategies used by an individual student ranged from two to five, with a mean of 3.83. The total number of strategies (WWWDOT and beyond) ranged from three to eight, with a mean of 4.33. Benjamin is a typical case. He used and/or taught five different strategies, only one of which was not recommended in the literature: (a) identifies who created the website and what credentials the author has; (b) checks whether the website helps meet my needs; (c) uses background knowledge to check whether information on the website is true; (d) checks whether there are grammatical mistakes in the writing; and (e) checks whether there is a lot of information (not recommended). Parker taught the largest number of strategies. He used eight different strategies, all the strategies listed for this group except checking whether the spelling or grammar in the writing is correct. In contrast, Henry used and/or taught the smallest number of strategies, with three different strategies, one of them not recommended in the literature: (a) checks who created the website; (b) checks when the website was created; and (c) checks whether there is a lot of information.

How Students Who Did Not Receive Instruction in WWWDOT Evaluated Websites

Students who did not receive instruction in WWWDOT approached the tutoring task differently from those who received instruction. A majority of the students (N=9) who did not receive instruction in the WWWDOT approach were observed reading the main section of the Web page word by word, or having the tutees read word by word, throughout their tutoring processes. As the tutor and the tutee read along, the tutors either took the lead in commenting on whether the information they were reading was true, primarily based on their prior or background knowledge, or summarized paragraph by paragraph. The way they approached the tutoring task may reflect that these students viewed website evaluation as a word-by-word (or sentence-by-sentence) reading and confirming process or as a summarization process. It also reveals that these students may not have had much a priori knowledge about website evaluation and that the strategies they taught their tutees were spontaneous and unplanned. One student read silently most of the time and occasionally told his tutee that he should trust the website because it had good information. Only one of the twelve students introduced some strategies first and then applied them in evaluation, as students who received instruction in WWWDOT often did, and only one used a comparison technique, that is, comparing the two websites before making a judgment, in her tutoring process (as compared to four in the WWWDOT group).

In this section, students' concepts of website evaluation as shown in their tutoring processes and students' use of evaluation strategies are described, followed by a depiction of how individual students in the control group evaluated websites.
Concepts of Website Evaluation

When asked to teach how to evaluate a website, only one student in the control group expressed concerns about the trustworthiness of information on websites in general. Eleven out of the twelve students in the control group did not express any concerns about the trustworthiness of websites, nor did they point out to their tutees the necessity of evaluating the trustworthiness of websites before using the information. However, in completing the task, the tutors in this group did adopt various strategies to evaluate the trustworthiness of websites.

Evaluation Strategies

Participants who did not receive instruction in the WWWDOT approach used 22 strategies in total in their website evaluation. Among these strategies, three were WWWDOT strategies, that is, a target of the WWWDOT approach; four were recommended strategies; and fifteen were irrelevant or non-recommended strategies. Table 10 provides the list of strategies with the number of students in this group who used each strategy. A more detailed explanation of these three categories of strategies as used by this group of students is given below.

Use of WWWDOT strategies. The students used three WWWDOT strategies: five students checked when the website was updated; three students checked who created the website; and two checked whether the website met their needs. Of the five students who checked when the website was updated, only three justified their use of this strategy. Elizabeth told her tutee about the importance of timeliness:

You always want to look and see if they have the data so that you know when they wrote and updated it. So you don't have information that is old.

Jim expressed a similar concern using one of the websites as an example. He said, "The last time it was updated was 2000. Some stuff could have been changed."

Three students out of 12 used the strategy of identifying who the author of the website is. Although these three students noticed the existence of a website author, two of them seemed not to care who this person was. To them, a website was trustworthy as long as the name of the author was written on the page. For example, when evaluating the website written by a fourth grade student, Scott seemed satisfied with the website simply because "they said the actual name of the person who wrote it." When evaluating the same website, Elizabeth told her tutee, "you also want to look for who wrote it. [Scrolled down and found the author's name] This has this on every page. It is good." The third of the three students was able to justify the importance of authorship to trustworthiness, but he was not able to find the author's name. Instead, he guessed that the website might have been written by a kid and that it was therefore not trustworthy.

Two students used the strategy of checking whether the website met their needs. One of them judged whether to use a website entirely according to her needs. Alice said:

I am thinking it [the one written by the fourth grader] looks just right, because some kids don't need a lot of information to do the research... If you are doing a big research project, you would have to probably use this one [sponsored by the Zoological Society of San Diego]. If you are just doing a little one, you will probably use this one [the one written by the fourth grader].

Use of recommended strategies.
In addition to the three WWWDOT strategies, the students who did not receive instruction in WWWDOT adopted four other strategies recommended by experts, one of which was not directly related to trustworthiness. These strategies were: (a) checking background/prior knowledge (e.g., Burbules & Callister, 2000); (b) checking whether the photos are real (e.g., Harris, 2007); (c) checking whether the spelling and grammar of the writing are correct (e.g., Harris, 2007); and (d) checking whether the content is written at their reading level (Eagleton & Dobler, 2007).

Six students adopted the strategy of checking their content background or prior knowledge, and they used this strategy frequently throughout their evaluation process. They frequently asked their tutees to answer these questions: "Do you think these facts are true?" "Does this sound right?" "Does it make sense?" Gary checked the content of the website against prior knowledge he had recently obtained from a book and said, "I'm thinking this is true, trustworthy because while I was reading in my head I had this book at home I read. It was like that." Jim also said:

I'm thinking that this is true because in late research before this I know this, they are about to go extinct... I believe this paragraph because they are black and white and they do stand out in the forest. If you walk into a forest you would see them because it is not like they are hard to see.

Three of the twelve students trusted a website if the photos on it were real. Elizabeth said, "They have pretty good picture in it and it looks like a real picture. So this site could be very trustworthy." On the other hand, if the pictures were not real, in their opinion, the site was not trustworthy. Scott expressed this view when he said, "I really don't think I would trust it because I can tell they just drew them [the paintings on the web page]."

One of the twelve students checked the readability of the content. Checking to see whether the content is written at the reader's reading level is recommended to caution students to read information that fits their reading ability (Eagleton & Dobler, 2007); it is not an indicator of trustworthiness. However, this student used the strategy not to judge whether the website should be used, but to make a judgment on the trustworthiness of the websites. Aden trusted the one written by the fourth grader because, "I agree with you because I can read this a lot better. The other one I was confused and I didn't really get it. It is more like for a college student, or maybe for high schoolers, if not for college students. I trust the other one [the one written by the fourth grader]."

One of the twelve students checked "the form of the writing," in her own words, that is, whether the spelling and grammar of the writing were correct.

Use of irrelevant or non-recommended strategies. The students who did not receive instruction in WWWDOT adopted 15 strategies that are not relevant to trustworthiness or not recommended by experts. The most frequently used strategy (n=6) in this category was checking whether the website contains a large amount of information. As in the experimental group, most students did not give reasons why they believed there was a positive correlation between the amount of information a website contains and its trustworthiness. Only two participants indirectly expressed why they thought so; that is, a trustworthy website should contain everything you are looking for.
Cathy concurred with her tutee that one of the websites was trustworthy because "it gives you information on probably everything you will be looking for." Daniel told his tutee that he did not trust one of the websites because "they aren't telling everything about it."

In addition, three of the twelve students who did not receive instruction in WWWDOT used the strategy of checking whether there are links for word explanation. In their opinion, if there were links that explained difficult words, the website was trustworthy. Regarding this matter, Jim said, "if you click on a word, it tells you about something and usually a trustworthy website has that." Three of the twelve students thought staying on topic was a reason to trust a website.

Other irrelevant or non-recommended strategies, each used by one or two participants, include checking whether there is "good" information,3 appropriate information, or detailed information, whether there are photos, whether the photos on the website are clear, whether there is a map, whether there is any contact information (so that the reader can report wrong information), whether there are interesting facts, and whether the website uses actual names of people or places. Some participants also told their tutees that they should read carefully and read more of the page to see whether the website was trustworthy or not.

3 Participants did not specify what kind of information was good information.

Summary of Individual Students' Use of Strategies

All twelve students in the control group used at least one strategy in their evaluation, and the number of strategies used by an individual student ranged from one to eight. The number of recommended strategies, including the WWWDOT strategies, ranged from zero to four, with a mean of 1.83. One of the twelve students did not use any recommended strategies; five used only one recommended strategy; two used two; two used three; and two used four. In addition, 10 of the 12 students used at least one strategy that is not recommended in the literature.

Brian is a typical case. He used and/or taught three different strategies, but one of them is not recommended in the literature as related to trustworthiness (or relevancy). These three strategies are: (a) checking background knowledge; (b) checking whether there are a lot of details; and (c) checking the date when the website was updated. In contrast, Kaleb used only one strategy, and it was not recommended: checking whether there is a map.

Johnny adopted the largest number of strategies. He used seven, three of which are not recommended in the literature. The seven strategies include: (a) checking whether it stays on topic; (b) checking who wrote the website; (c) checking the volume of the information (viewing the site as more trustworthy if there is a large amount of information); (d) checking when it was written; (e) checking whether the picture on the website is clear; (f) checking background knowledge; and (g) checking whether there is "good" information.

Comparison Between Two Groups

In this section, I discuss how differently or similarly the two groups evaluated websites. I begin with similarities and then turn to differences.

Similarities

There are two similarities in strategy use between the two groups. First, across both groups of students, a wide range of evaluation strategies was adopted, though any one individual student may not have adopted a wide range.
Second, some of the evaluation strategies they used were the same, although in most cases with different frequencies. These strategies are checking whether the website has a large amount of information, using background/prior knowledge to examine whether the contents are trustworthy, checking when a website was created and who created it, examining whether the photos on the website are real, checking whether the website meets one's needs, and checking whether there are spelling or grammatical mistakes in the writing. Of particular note is that six students in each group used checking whether there is a large amount of information on the site as a strategy. As will be discussed later in the paper, this strategy has some weaknesses.

Differences

Students who received instruction in the WWWDOT approach evaluated websites very differently from the students who did not receive instruction. There are five main differences.

First, the two groups had different concepts of website evaluation. Nearly all students who received instruction in the approach expressed how important it was to evaluate the trustworthiness of websites while reading on the Internet. Students who did not receive instruction did not point out the necessity of trustworthiness evaluation to their tutees. To them, evaluating the two websites seemed to be just an assignment.

Second, they approached website evaluation differently. Students who received instruction had planned strategies to use, while students who did not receive instruction mostly used impromptu tactics to evaluate websites. Students who received instruction taught their tutees a few strategies before examining the websites; the websites were used as examples to illustrate how to apply the strategies. Most of these students were able to synthesize the information obtained through using the strategies and make a final trustworthiness judgment. In contrast, most students who did not receive instruction started website evaluation by delving into the texts on the main page and reading closely. For these students, some evaluation strategies emerged during close reading, such as checking whether there are links that explain words, clicking on the links to see if they worked, clicking on a map to see if it worked and was useful, and so on.4 These strategies were not used by any of the experimental group students. Consequently, the two groups focused on different strategies. Strategies used by students who received instruction focused on evaluating website components, such as who created the site, when it was created, why it was created, the needs of readers, whether it was organized well, the information sources, and so on. Strategies adopted by students who did not receive instruction, however, focused on evaluating the Web content itself, such as whether they thought, based on their prior knowledge, that the information was true, whether it stayed on topic, whether there were actual names of places or people, whether the information was detailed, interesting, or appropriate, whether there was any link to explain words, and so on.

4 Students who received instruction in WWWDOT clicked on links, but did not use these as strategies or criteria in website evaluation as students in the control group did.

Third, as revealed in Table 10, they used different strategies.
For example, nine students who received instruction in the WWWDOT approach used two strategies that no one in the other group used: checking whether the website is organized well and identifying the purpose for which the website was created. Both strategies are recommended by researchers and practitioners (Fitzgerald, 1997; Hawes, 1998; Schrock, 1999). Students who did not receive instruction in the approach used fourteen strategies that no one in the other group used. These strategies include checking whether there are links to explain words, whether the website stays on topic, whether they thought, based on their prior knowledge, that the information was true, whether there are photos, whether the content is at their reading level, whether there is contact information they can use to contact the writer/author, whether the website uses actual names of people and places, whether there is detailed, appropriate, or interesting information, whether there is a map, whether the picture is clear, and so on. Among these strategies, 12 do not speak to the issues of trustworthiness and relevancy and are not recommended in the literature.

Fourth, although students in the two groups used two of the same strategies (checking who wrote the website and checking whether there is a large amount of information on the site), their rationales for using these strategies were different. For students who received instruction in the WWWDOT approach, a trustworthy website was created and written by someone or some organization with credentials, and more information on a website indicates that the author put a lot of time into writing it. For students who did not receive instruction in the WWWDOT approach, however, the presence of the author's name was sufficient proof that the website was trustworthy, and a good website should have everything about one topic.

Fifth, the average number of strategies recommended in the literature (including WWWDOT strategies) used by the two groups is different. Students who received instruction in the WWWDOT approach adopted 4.33 recommended strategies on average, whereas students who did not receive instruction adopted 1.66 on average.

Given all these differences in strategy use, it is not surprising that the results of students' actual evaluations of the trustworthiness of each site were different. Eleven of the 12 students who received instruction in the WWWDOT approach made a correct judgment on both websites. Only one student made an incorrect judgment, deeming both the website written by the fourth grader and the website by the Zoological Society of San Diego trustworthy. In the control group, however, eight of the twelve students who did not receive instruction in WWWDOT deemed the website written by the fourth grader trustworthy, and three judged the site by the Zoological Society of San Diego untrustworthy. Figure 4 compares the judgments made on the Zoological Society of San Diego website by the two groups of students, and Figure 5 illustrates the between-group differences in judgments on the website by the fourth grader.

A detailed description of how a tutor in the control group and a tutor in the experimental group taught their tutees to evaluate a website helps illustrate the differences between the two groups. Here is how Johnny, a typical control group student, taught his tutee, Justin. First, Johnny told Justin that if there was anything on the first page, he should read the first page before clicking on any links.
Second, Johnny asked Justin to read the first page from the very beginning. He helped Justin read some difficult words when needed. After Justin finished reading the first paragraph, Johnny asked him whether the information they had just read sounded right. This reading-and-confirming procedure was repeated throughout the evaluation process. In the end, Johnny told Justin whether he should trust the website or not.

Here is how Karen, a typical experimental group student, taught her tutee, Isabel, to evaluate a website. First, Karen told Isabel that in order to tell whether a website was trustworthy, she had to go to the bottom of the page and find information about the author and publication date. While telling Isabel about this strategy, Karen found the information and talked with Isabel about how to evaluate the credentials of the author and the timeliness of the information. Then Karen told Isabel about the WWWDOT approach they had learned, and they continued using the other strategies, such as checking the purpose for which the website was written and the organization of the website. After checking this information, Karen asked Isabel if she would trust the website. If Karen considered the website untrustworthy but Isabel thought it was trustworthy, Karen would spend more time going over the information they had collected and correcting Isabel's misunderstandings until they reached an agreement. If they were sure the website was trustworthy based on their evaluation, they would spend some time reading the content of the website.

It is worth paying special attention to the student who received instruction in WWWDOT but did not make a sound judgment on one of the websites, that is, the relatively untrustworthy website written by a fourth grader. This student's protocol indicated that she used three different strategies: (a) checking who wrote it; (b) checking when it was written; and (c) checking whether there is a lot of information. While using the first two strategies, she did not look beyond the "who" and "when" to examine the credentials of the author or to consider whether the date suggested the information was sufficiently current. She said, "I think it is quite trustworthy. It has 'who it was by' and 'when it was done'. It has a lot of information about pandas." Because the first two strategies did not function as intended in her application, the third strategy, which is not suggested in the literature as related to trustworthiness, played an important role in her judgment. A more detailed discussion of what this suggests for improvements that might be made to the WWWDOT approach is provided below.

Discussion

This study represents a qualitative investigation into the website evaluation processes of two groups of students, one that had received instruction in the WWWDOT approach and one that had not. The study explored how the students within each group evaluated websites and whether there were any patterns of difference between the groups. The findings suggested that the students' evaluation processes demonstrated the three sets of elements indicated by theories of metacognition (Flavell, 1981; Siegel & Carey, 1989): (a) the reader's disposition toward critical thought; (b) applying evaluative skills; and (c) making a judgment. There were differences in the three sets of elements both within and between the two groups.
In this section, I discuss some conclusions drawn from these findings and their implications.

The most evident conclusion drawn from the findings is that the two groups of students differed in their evaluation processes across all three sets of elements. Compared to the students who did not receive instruction in WWWDOT, students who received instruction had a greater understanding of the need for website evaluation, approached website evaluation with strategies in mind, and used more recommended strategies and more of the strategies that good adult readers use (Zhang & Duke, in press; Fitzgerald, 2000), such as checking who wrote the website, the purpose for writing it, and its appearance, that is, the organization of the website. Consequently, the students who received instruction in WWWDOT made much better judgments of the trustworthiness of the two websites in completing the tutoring task than the students who did not receive instruction.

It is important to note that these students did better in judging the trustworthiness of the two websites in the tutoring task than they did on the single website evaluation assessment (the overall judgment portion) and the website ranking assessment (the ranking portion) reported in the first manuscript. One thing that should be pointed out, and that might help account for the different evaluation results reported in the two manuscripts, is that the tutoring task in this study gave students more context and purpose for website evaluation (see the instructions in the Methods section of this manuscript on pages 125-126) than the instructions given to students in the first manuscript (see Appendices F and H). Nevertheless, this finding suggests that the WWWDOT approach is beneficial in teaching students to evaluate the trustworthiness of websites (see also the Findings section of the first manuscript).

However, there are aspects that need improvement, even for the children who received instruction in WWWDOT. First, in teaching the WWWDOT approach, more explanation should be given about the third W, that is, when the website was updated. Ten of the twelve students learned that it was important to find information about when the website was updated. However, three of these ten seemed not to understand the importance of timeliness; to them, the mere presence of an update date was an indicator of a trustworthy website. Teachers should emphasize that students should look beyond the date and examine whether the information is timely. In some cases, a relatively current website may not even be needed, as with websites on classic literature.

Second, in teaching the WWWDOT approach, more emphasis should be placed on the second W (i.e., Why: the purpose for which a website was written). It is very important to examine why a website was written, because doing so could reveal any hidden agenda the website has. However, among the 12 students who received instruction, only half used this strategy in their evaluation.

Another conclusion we can draw from this study is that the students who did not receive instruction in the WWWDOT approach did not have a clear idea about website evaluation and were not able to strategically evaluate the trustworthiness of websites.
Although some were able to apply recommended strategies in evaluation, they used few of them, and they also held incorrect understandings about which factors relate to the trustworthiness of a website. For example, some believed that a trustworthy website has photos and uses actual names of people and places. These misunderstandings would hold them back from making sound judgments.

Furthermore, some of the students who had not received instruction in WWWDOT depended entirely on one or two strategies throughout their evaluation processes. While these strategies might be good ones, they could result in incorrect judgments if not combined with other strategies. For instance, the most frequently used strategy, using background/prior knowledge to evaluate the website content, does not seem to be effective for 4th and 5th grade students if used alone. Students at this age might have only limited background or prior knowledge about a topic they are searching for on the Internet (actually, the same can be said of adults). In addition, students are well known to have misconceptions (Clement, 1982; Gardner, 1991; Nesher, 1987; Shaughnessy, 1985; Smith et al., 1993). If a child sees accurate information that contradicts their misconceptions, they may deem it untrustworthy and make incorrect judgments. As Burbules and Callister (2000) pointed out, if a person cannot independently judge whether certain claims are valid, he or she has to use other methods, such as examining the sources of the information, who wrote it, and so on. Finally, even though some students who had not been taught WWWDOT adopted strategies such as checking who wrote the website and when it was written, some of them did not know where to locate this information.

In summary, the findings suggest that it is very necessary to teach students how to evaluate the trustworthiness of websites. These findings add to the work of many researchers and practitioners, such as Lorenzen (2001), Large and Beheshti (2000), Hoffman et al. (2003), Hirsh (1999), and Kafai and Bates (1997), who have found that K-12 students rarely evaluated the trustworthiness of websites and, even when they did, used inappropriate criteria in their evaluations. Clearly there is an urgent need for teaching students website evaluation and critical thinking skills (Brouwer, 1997; Fitzgerald; Hoffman et al., 2003; Leu, 2002).

This study is the first in the literature to examine in depth how students evaluate websites when they are explicitly asked to do so. By taking a comparative verbal protocol approach to describe the evaluation processes of students from two groups, one that had received instruction in the WWWDOT approach and one that had not, this study provides some evidence of the positive impact of implementing the WWWDOT approach with 4th and 5th grade students. In addition, it uncovers some weaknesses and misunderstandings students still hold when they evaluate websites despite instruction in WWWDOT (which was, it should be reiterated, limited to only two 30-minute instruction sessions and two 30-minute practice sessions), as well as a variety of weaknesses and misunderstandings among students who did not receive instruction in the approach. As a result, it gives useful information about which specific aspects need teachers' special attention when teaching students to evaluate the trustworthiness of websites. Here I list a few that may need additional emphasis.
For students who have not received instruction in WWWDOT, in addition to learning the effective strategies targeted by the WWWDOT approach, the rationale for website evaluation in Internet reading should be emphasized. Students, especially elementary students, need to know why they cannot simply trust everything on the Internet. They also need to overcome some misunderstandings and weaknesses, including the belief that the presence of photos, maps, links that explain words, actual names of people and places, or interesting content indicates trustworthiness. For these students, as well as the few who received WWWDOT instruction, it is important to know that quantity is not equal to quality: the volume of information on a website does not indicate trustworthiness. By teaching the WWWDOT approach with emphasis on these aspects, teachers might help students overcome their weaknesses and change their misunderstandings about website evaluation.

Limitations and Further Research

The tutoring method was selected because it reduces intimidation from the researcher compared to the think-aloud method. More importantly, it provides an authentic context for the students to talk through the strategies they would use. While this context has strengths, it may also have some weaknesses. For example, students may have chosen not to teach everything they knew, perhaps because they did not want to overwhelm their tutees, or they may have been thinking things that they did not see as appropriate to teach, things that might have come out in a think-aloud.

A much more important issue is that the sample for this study consisted of students whom the teachers identified as articulate. While this serves the use of verbal protocol methodology well, it potentially undermines generalizability. Previous studies (e.g., Strasburger et al., 1999) show that students with high verbal proficiency are more likely to succeed in school. Indeed, analyses of data collected in the larger experimental study showed that the 24 students selected for this study significantly outperformed the other control group students in their post-questionnaire total scores after adjusting for pre-questionnaire scores, F(2, 195) = 3.608, p < .05; a sketch of this type of pretest-adjusted comparison appears below. There were no statistically significant differences on other measures between students in the larger experimental study and the 24 students in this study. However, the 24 articulate students' mean pre- and post-assessment scores were higher for all assessments except the single website evaluation overall judgment scores and the ranking scores. This may mean that the twenty-four students who were nominated by their teachers as articulate were different from other students on study measures, and therefore the results cannot be generalized to less articulate students.

Another limitation of this study is that there were unequal numbers of 4th grade and 5th grade participants in the two groups. In the group that received instruction in WWWDOT, there were six 4th grade students and six 5th grade students. In the group that did not receive instruction, there were eight 4th grade students and four 5th grade students. In selecting participants, the researcher made matching the amount of experience in using the Internet a higher priority than matching by grade level.
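The following is a minimal sketch, in Python, of the kind of pretest-adjusted (ANCOVA-style) group comparison referred to above. It is illustrative only: the data file name, the column names (pre_total, post_total, group), and the grouping scheme are assumptions made for this example, not the study's actual materials or analysis code.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data file with one row per student: pretest total,
# posttest total, and group membership (e.g., focal vs. non-focal).
df = pd.read_csv("questionnaire_scores.csv")

# Regress posttest totals on the pretest covariate and the group factor.
model = smf.ols("post_total ~ pre_total + C(group)", data=df).fit()

# Type II ANOVA table; the C(group) row gives the F statistic and p value
# for the group effect after adjusting for pretest scores.
print(sm.stats.anova_lm(model, typ=2))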
Although no significant differences were detected between the two condition groups' pre-assessment scores on the measures used in the larger study (a questionnaire of 18 five-point Likert-scale items designed to measure students' general website evaluation ability; a single website evaluation assessment, which required students to evaluate the trustworthiness of a website and give reasons for their judgment; and a website ranking assessment, which required students to rank four websites from the most trustworthy to the least trustworthy and explain why one was ranked the most trustworthy and another the least), such differences may still exist and simply not have been detected by the means used.

Despite the limitations, the current study is an important first step in describing how students evaluate websites. Clearly more research is called for to add to the sparse empirical base on this topic. First, it would be interesting to investigate whether there are any developmental differences in students' critical evaluation of websites. This study provides an in-depth description of how 4th and 5th grade students evaluate websites. Do middle school or high school students evaluate websites in the same way as fourth and fifth graders do? Second, it would be interesting to examine whether instruction in the WWWDOT approach has any effect in primary grade classes. The approach showed benefits with 4th and 5th grade students. Is it a good tool for lower grade students learning website evaluation? How does it work with middle and high school students? Third, it would also be interesting to investigate the effect of the tutoring method on tutors' and tutees' critical evaluation skills. In the current study, tutoring sessions were used as a data collection method. Does the tutoring method help the tutors improve their evaluation skills? Engaging students as tutors has certainly been shown to have positive effects in other domains (Elbaum et al., 2000; Hayes, 1996). And is it beneficial to the tutees when they are tutored by students who have learned how to evaluate websites? There are many other questions that are important next steps as we grapple with how to prepare students to be good Internet readers.

References

Baule, S. (1997). Easy to find but not necessarily true. Book Report, 16, 26.

Brouwer, P. (1997). Hold on a minute here: What happened to critical thinking in the information age? Journal of Educational Technology Systems, 25, 189-197.

Brown, A. L. (1987). Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 65-116). Hillsdale, NJ: Lawrence Erlbaum Associates.

Burbules, N. C., & Callister, T. A. (2000). Watch IT: The risks and promises of information technologies for education. Boulder, CO: Westview Press.

Burke, J. (2000). Caught in the Web: Reading the Internet. Voices From the Middle, 7(3), 15-23.

Clement, J. (1982). Students' preconceptions in introductory mechanics. American Journal of Physics, 50, 66-71.

Coiro, J., & Dobler, E. (2007). Exploring the online reading comprehension strategies used by sixth-grade skilled readers to search for and locate information on the Internet. Reading Research Quarterly, 42(2), 214-257.

Danohoe, D. (2000). Project page. Retrieved February 9, 2007, from http://www.edu.pe.ca/vrcs/grassroots/1999/grade4/animalmaggill/daniel/daniel.html

Doyle, C. S. (1992).
Outcome measures for information literacy within the national education goals of 1990. Final report to the National Forum on Information Literacy, June 24. Summary of findings, ED 351033. Washington, DC: National Forum on Information Literacy.

Eagleton, M. B., Guinee, K., & Langlais, K. (2003). Teaching Internet literacy strategies: The hero inquiry project. Voices from the Middle, 10(3), 28-35.

Elbaum, B., Vaughn, S., & Hughes, M. (2000). How effective are one-to-one tutoring programs for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605-619.

Fitzgerald, M. A. (1997). Misinformation on the Internet: Applying evaluation skills to online information. Emergency Librarian, 24, 9-14.

Fitzgerald, M. A. (2000). Criticizing media: The cognitive process of information evaluation. Educational Media and Technology Yearbook, 25, 130-140.

Flavell, J. H. (1976). Metacognitive aspects of problem solving. In L. B. Resnick (Ed.), The nature of intelligence (pp. 231-235). Hillsdale, NJ: Lawrence Erlbaum.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.

Flavell, J. H. (1981). Cognitive monitoring. In W. P. Dickson (Ed.), Children's oral communication skills (pp. 35-60). New York: Academic Press.

Flavell, J. H. (1987). Speculation about the nature and development of metacognition. In F. Weinert & R. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 21-29). Hillsdale, NJ: Lawrence Erlbaum.

Gardner, H. G. (1991). The unschooled mind. New York: Basic Books.

Garner, R. (1987). Metacognition and reading comprehension. Norwood, NJ: Ablex Publishing Corporation.

Garner, R., Wagoner, S., & Smith, T. (1983). Externalizing question-answering strategies of good and poor comprehenders. Reading Research Quarterly, 18(4), 439-447.

Harris, R. (2007). Evaluating Internet research sources. Retrieved October 17, 2007, from http://www.virtualsalt.com/evalu8it.htm

Hawes, K. S. (1998). Reading the Internet: Conducting research for the virtual classroom. Journal of Adolescent & Adult Literacy, 41(7), 563-565.

Hayes, E. (1996). Learning about adult literacy: A case study of a college tutor. Journal of Adolescent and Adult Literacy, 39(5), 386-395.

Hird, A. (2000). Learning from cyber-savvy students: How Internet-age kids impact classroom teaching. Sterling, VA: Stylus Publishing.

Hirsh, S. G. (1999). Children's relevance criteria and information seeking on electronic resources. Journal of the American Society for Information Science, 50(14), 1265-1283.

Hoffman, J. L., Wu, H.-K., Krajcik, J. S., & Soloway, E. (2003). The nature of middle school learners' science content understandings with the use of on-line resources. Journal of Research in Science Teaching, 40(3), 323-346.

Kafai, Y., & Bates, M. J. (1997). Internet Web-searching instruction in the elementary classroom: Building a foundation for information literacy. School Library Media Quarterly, 25(2), 103-111.

Leu, D. J. (2002). The new literacies: Research on reading instruction with the Internet and other digital technologies. In J. Samuels & A. E. Farstrup (Eds.), What research has to say about reading instruction (pp. 310-336). Newark, DE: International Reading Association.

Leu, D. J., Jr., Kinzer, C. K., Coiro, J., & Cammack, D. (2004). Toward a theory of new literacies emerging from the Internet and other information and communication technologies. In R. B.
Ruddell & N. Unrau (Eds.), Theoretical models and processes of reading (5th ed., pp. 1568-1611). Newark, DE: International Reading Association.

Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K-12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75, 285-328.

Large, A., & Beheshti, J. (2000). The Web as a classroom resource: Reactions from the users. Journal of the American Society for Information Science, 51(12), 1069-1080.

Lorenzen, M. (2001). The land of confusion? High school students and their use of the World Wide Web for research. Research Strategies, 18, 151-163.

Lundeberg, M. A. (1987). Metacognitive aspects of reading comprehension: Studying understanding in legal case analysis. Reading Research Quarterly, 22(4), 407-432.

National Center for Educational Statistics. (2005). Internet access in U.S. public schools and classrooms: 1994-2003. Retrieved June 23, 2005, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2005015

Nesher, P. (1987). Towards an instructional theory: The role of students' misconceptions. For the Learning of Mathematics, 7(3), 33-40.

Ng, W., & Gunstone, R. (2002). Students' perceptions of the effectiveness of the World Wide Web as a research and teaching tool in science learning. Research in Science Education, 32, 489-510.

Ramage, J., & Bean, J. C. (1999). Writing arguments. Boston: Allyn and Bacon.

Schrock, K. (1999). Producing information consumers: Critical evaluation and critical thinking. Book Report, 17(4), 47-48.

Shaughnessy, J. M. (1985). Problem-solving derailers: The influence of misconceptions on problem-solving performance. In E. A. Silver (Ed.), Teaching and learning mathematical problem solving (pp. 399-415). Hillsdale, NJ: Lawrence Erlbaum Associates.

Siegel, M. G., & Carey, R. F. (1989). Critical thinking: A semiotic perspective. Urbana, IL: NCTE.

Smith, J. P., diSessa, A. A., & Roschelle, J. (1993). Misconceptions reconceived: A constructivist analysis of knowledge in transition. Journal of the Learning Sciences, 3(2), 115-163.

Stapleton, P. (2005). Evaluating web sources: Internet literacy and L2 academic writing. ELT Journal, 59(2), 135-143.

Strasburger, R., Margaret, T., & Richard, W. T. (1999). Factors relating to the postsecondary success of students with learning disabilities. Journal of the First-Year Experience and Students in Transition, 11(1), 63-76.

Street, C. (2005). Tech talk for social studies teachers: Evaluating online resources: The importance of critical reading skills in online environments. Social Studies, 96(6), 271-273.

TechSmith. (2007). Camtasia Studio 5 [Computer software]. Okemos, MI: TechSmith Corporation.

Wallace, R. M., Kupperman, J., Krajcik, J., & Soloway, E. (2000). Science on the Web: Students on-line in a sixth-grade classroom. Journal of the Learning Sciences, 9(1), 75-104.

Watson, J. S. (1998). "If you don't have it, you can't find it": A close look at students' perceptions of using technology. Journal of the American Society for Information Science, 49(11), 1024-1036.

Yin, R. K. (1994). Case study research: Design and methods. Thousand Oaks, CA: Sage.

Zhang, S., & Duke, N. K. (in press). Strategies of Internet reading with different reading purposes: A descriptive study of twenty good Internet readers. Journal of Literacy Research.

Zoological Society of San Diego. (2006). Animal Bytes: Giant Panda.
Retrieved February 9, 2007, from http://www.sandiegozoo.org/animalbytes/t-giant_panda.html

Table 1
Demographic Statistics of the Participating Students

Variable         N     %
Gender
  Female         136   56.2
  Male           106   43.8
Grade
  4th            123   50.8
  5th            119   49.2
ESL Student
  Yes            3     1.2
  No             224   92.6
  Missing        15    6.2
Age
  8 years old    3     1.2
  9 years old    55    22.7
  10 years old   111   45.9
  11 years old   53    21.9
  12 years old   2     .8
  Missing        18    7.4
Special Ed.
  Yes            37    15.3
  No             190   78.5
  Missing        15    6.2

Table 2
Information About Participating Classes

Teacher type                           Classes
Computer teacher                       One 4th grade class in each of the six community-by-condition cells (suburban, rural, and urban; control and experimental)
Language Arts/Social Studies teacher   Two 5th grade classes
Math/Science teacher                   Two 5th grade classes
All subject matters teacher            One 4th grade class and one 5th grade class

Note. The source table marked each class by community (suburban, rural, urban) and condition (control, experimental); the column placement of the 5th grade classes is not legible in the scan.

Table 3
Computer and Internet Use Statistics Reported by Participating Students (N, %, valid %a)

Have you ever used computers? Yes: 228, 94.2, 99.6. No: 1, .4, .4. Missing: 13, 5.4.
Have you ever used the Internet? Yes: 226, 93.4, 99.1. No: 2, .8, .9. Missing: 14, 5.8.
When did you start using computers? 5th grade: 1, .4, .4. 4th grade: 0, 0, 0. 3rd grade: 7, 2.9, 3.2. 2nd grade: 16, 6.6, 7.0. 1st grade: 48, 19.8, 21.1. Kindergarten: 67, 27.7, 29.4. Pre-school: 50, 20.7, 21.8. Before pre-school: 39, 16.1, 17.1. Missing: 14, 5.8.
When did you start using the Internet? 5th grade: 6, 2.5, 2.7. 4th grade: 9, 3.7, 4.0. 3rd grade: 36, 14.9, 16.1. 2nd grade: 46, 19.0, 20.5. 1st grade: 68, 28.1, 30.4. Kindergarten: 41, 16.9, 18.3. Pre-school: 7, 2.9, 3.1. Before pre-school: 11, 4.5, 4.9. Missing: 18, 7.4.
Is there a computer at home? Yes: 220, 90.9, 97.8. No: 5, 2.1, 2.2. Missing: 17, 7.0.
Is the computer at home connected to the Internet? Yes: 206, 91.6, 93.2. No: 15, 6.7, 6.8. Missing: 4, 1.8.
Do you use the computer at home? Yes: 209, 92.9, 95.4. No: 10, 4.4, 4.6. Missing: 6, 2.7.
Do you use the Internet after school and/or on weekends? Yes: 213, 88.0, 94.7. No: 12, 5.0, 5.3. Missing: 17, 7.0.
Where do you use the Internet after school and/or on weekends? (Students could choose all that apply.)b In friend's house: 86. At the library: 63. In relative's house: 91. In community center: 8. At Internet cafe: 8. Other location: 153.
What are your purposes for using the computer/the Internet outside school? (Students could choose all that apply.)b To write homework: 95. To locate info or pictures: 135. To play games: 185. To watch CD/DVD: 66. To write email: 78. To IM (instant message): 38. Other: 43.

a A valid percentage was obtained when the missing number was not taken into account.
b Because participants could choose all that apply, the percentage of each choice was not calculated.

Table 4
Means, Standard Deviations, and Means Adjusted by Pretest Scores for Student Questionnaire (Total Score)

Condition      Pre M   Pre SD   Post SD   Adjusted Ma
Experimental   61.57   6.394    6.935     68.619
Control        61.58   6.199    6.662     63.319

a Adjusted mean is a mean adjusted by the pretest score.
Table 5
Coefficients for Instruction in the Approach on Students' Performance on the 18 Items of the Questionnaire

Item #   Coefficient   p value
1        .578          .36
2        .465          .119
3        1.028         0
4        .586          .019
5        .560          .0415
6        1.26          0
7        .827          .001
8        1.17          0
9        .843          .001
10       .243          .035
11       -2.17         .391
12       1.266         0
13       -.103         .680
14       .558          .029
15       .142          .589
16       .646          .013
17       .731          .007
18       .640          .026

Table 6
Means, Standard Deviations, and Means Adjusted by Pretest Scores for Single Website Evaluation Assessment (Overall Judgment Score)

Condition      Pre M   Pre SD   Post M   Post SD   Adjusted M
Experimental   .992    .680     1.002    .826      1.014
Control        .896    .573     .909     .788      .899

Table 7
Means, Standard Deviations, and Means Adjusted by Pretest Scores for Single Website Evaluation Assessment (Reason Score)

Condition      Pre M   Pre SD   Post M   Post SD   Adjusted M
Experimental   1.56    1.861    4.02     2.831     3.877
Control        1.22    1.718    1.30     1.506     1.373

Table 8
Means, Standard Deviations, and Means Adjusted by Pretest Scores for Website Ranking Assessment (Reason Scores)

Condition      Pre M   Pre SD   Post M   Post SD   Adjusted M
Experimental   1.07    2.08     5.48     5.00      5.50
Control        1.07    2.50     1.67     2.34      1.65

Table 9
Demographic Information About the Participants (Tutors)

                                           Control   Experimental
Gender
  Female                                   5         6
  Male                                     7         6
Grade
  4th grade                                8         6
  5th grade                                4         6
Self-reported experience in Internet use
  1-4 years                                6         6
  5-8 years                                6         6

Table 10
Frequency of Strategy Use by the Experimental Group, Compared With the Control Group

Strategy                                                        Experimental (n = 12)   Control (n = 12)
WWWDOT strategies
  Identifies who created the website                            12                      3
    Notes importance of author credentials                      (11)a                   (1)
  Identifies when the website was created                       10                      5
    Notes importance of timeliness                              (7)                     (3)
  Checks whether the website meets my needs                     8                       2
  Checks whether the website is organized well                  7                       0
  Identifies the purpose of creating the website                6                       0
Other recommended strategies
  Checks whether the photos are real (Harris, 2007)             -                       3
  Uses background or prior knowledge to evaluate the
    website contents (e.g., Burbules & Callister, 2000)         -                       6
  Examines whether the spelling and grammar in the writing
    is correct (e.g., Harris, 2007)                             2                       1
  Checks whether the content is at my reading level
    (e.g., Eagleton & Dobler, 2007)                             0                       1
Irrelevant or non-recommended strategies
  Checks whether there is a large amount of info on the site    6                       6
  Checks whether there are links to explain words               0                       3
  Checks whether the website stays on topic                     0                       3
  Checks whether there is good information                      0                       -
  Checks whether there are photos                               0                       -
  Checks whether the contents are interesting                   0                       -
  Checks whether there is a map                                 0                       -
  Reads carefully                                               0                       -
  Reads more about the page                                     0                       -
  Checks whether there is any contact information               0                       -
  Checks whether the website uses actual names                  0                       -
  Checks whether the information is detailed                    0                       -
  Checks whether there are interesting facts                    0                       -
  Checks whether the information is appropriate                 0                       -
  Checks whether the picture is clear                           0                       -

a Parentheses indicate subtotals within a larger category. A hyphen marks a count that is not legible in the source scan; the text indicates that each of the hyphen-marked control group strategies was used by one or two students.

Figure 1. Comparison lines by assessment time and condition: Pre and post questionnaire item means.
[Figure 1: line graph of questionnaire item mean scores (y-axis) by questionnaire item number, #1 through #18 (x-axis), for the experimental and control group post-assessments.]

Figure 2. Comparison lines of pre and post reason scores: Single website assessment scores.
[Figure 2: line graph of reason scores (y-axis) at the pre- and post-assessment time points (x-axis) for the experimental and control groups.]

Figure 3. Comparison lines of pre and post reason scores: Website ranking assessment scores.
[Figure 3: line graph of reason scores (y-axis) at the pre- and post-assessment time points (x-axis) for the experimental and control groups.]

Figure 4. Judgments of two groups about the trustworthiness of the website by the Zoological Society of San Diego, a relatively trustworthy website.
[Figure 4: bar graph of the number of students judging the website trustworthy versus untrustworthy, by condition (Instruction in WWWDOT vs. No Instruction in WWWDOT).]

Figure 5. Judgments of two groups about the trustworthiness of the website by Daniel Danohoe (a fourth grader), a relatively untrustworthy website.
[Figure 5: bar graph of the number of students judging the website trustworthy versus untrustworthy, by condition (Instruction in WWWDOT vs. No Instruction in WWWDOT).]

Appendix A
Teacher Survey

Dear Teacher:

Thank you in advance for taking the time to complete this questionnaire. The data you provide will be very helpful in our analyses. Please respond to the following questions:

1. Your name: ______
2. Today's date: ______
3. How many years of full-time K-12 teaching experience do you have? ______ years (excluding this year).
4. How many years of full-time teaching experience do you have at the grade level you are teaching? ______ years (excluding this year).
5. What is the highest degree you have obtained? ______
6. What was this degree in? ______
7. How many hours, if any, does your class usually spend in the computer lab each week? ______ hours each week.
8. How many hours, if any, do your students usually spend on computers in your classroom each week? ______ hours per student each week.
9. How many hours, if any, of homework involving computer use do you assign to students each week? ______ hours per student each week.
10. Including time in the computer lab, in the classroom, and for homework, how many hours do you usually have each student in your class spend on the Internet specifically each week? ______ hours each week.
11. Do you explicitly teach children how to read on the Internet? Yes ___ No ___ If yes, what do you teach?
12. As of today, approximately how many lessons have you given on how to read on the Internet? ______ lessons.
13. As of today, approximately how many minutes of teaching (not including time children are practicing or implementing what you have taught) have you spent teaching how to read on the Internet? ______ minutes total; ______ minutes per week on average.
14. If your class uses the Internet, how do they use it?

Appendix B
Lesson Plans For WWWDOT Implementation

Overview

In our experience, young learners tend to overlook the credibility and appropriateness of a website when they use the Internet as a source for information. The Michigan Educational Technology Standards and Expectations demand that "by the end of fifth grade each student will describe basic guidelines for determining the validity of information accessed from various sources (e.g., website, dictionary, on-line newspaper, CD-ROM) and identify appropriate technology tools and resources by evaluating the accuracy, appropriateness, and bias of the resource." WWWDOT is an approach designed to teach students to evaluate the credibility of websites as they search for information on the Internet, to help them be more aware of their reading purposes, and to make further plans for their reading. It includes six aspects:
1) Who wrote this?
2) Why did they write it?
3) When was it written and updated?
4) Does this help meet my needs?
5) Organization of site
6) To do list for the future

The WWWDOT approach will be introduced to students in two class sessions. After it is introduced, the students will practice the website evaluation skills they learned with various websites and the WWWDOT form in two other class sessions.

Teacher Background Knowledge About WWWDOT

(See the description of the WWWDOT approach in the paper.)

Student Objectives

Students will
- Read websites more critically, in particular evaluating websites for accuracy and appropriateness
- Be able to identify some next steps in their research process

Estimated Lesson Time

Four 30-minute sessions within 4 weeks.

Teacher Preparation

1. Please schedule time for all students to have access to a computer with a fast Internet connection, preferably in a computer lab, during the lessons.
2. Please make sure that an Internet browsing tool such as Internet Explorer, Netscape, or Safari is already installed on the computers to which students have access.
3. Please go to http://www.msu.edu/~zhangsh5/project_websites.htm to find the following links, which will be used in teaching.
a) http://www.historychannel.com/ellisisland/index2.html (good website)
b) http://www.aiisf.org/ (good website)
c) http://www.msu.edu/course/mc/112/1920S/Immigration/Jamiespage.html (example: author does not have enough credentials)
d) http://memory.loc.gov/learn/features/immig/introduction.html (very good website, but not recently updated)
e) http://www.ellisisland.com/ellis_home.html (good design, but with a commercial purpose)
f) http://www.bergen.org/AAST/Projects/Immigration/index.html (example of not updating and not being well organized)
g) http://immigration.about.com/ (This might not meet your needs.)
h) http://www.uscis.gov/graphics/index.htm (This might not meet your needs.)
i) http://www.worldalmanacforkids.com/explore/populations.html (immigration group example; does this meet your needs?)

Structure of Daily Lessons

Day 1, Instructions and Activities:
1. Explain to students the current limitations of information found on the Internet and the importance of critically reading Internet websites by comparing printed texts and information on the Internet.
- Printed texts have gone through different processes of screening or sanctioning by editors, publishers, librarians, and so on.
- Information on the Web may be unscreened or unsanctioned. The Internet allows anyone to publish anything; thus people without appropriate credentials, or with specific biases and agendas, and so on, provide information on the Internet. This is also true of printed texts, but perhaps not to the same extent.
2. Ask students what aspects of a website's trustworthiness or credibility they need to pay attention to when viewing a website.
3. Introduce to students the WWWDOT approach. Call students' attention to the name of this approach so that it is easy to recall. Explain to students the first three aspects of the WWWDOT approach with examples on the topic of immigration, as follows.
4. Who wrote this? This can be taught through the following steps.
i. Identify the authorship of a website. It could be a person or an organization that wrote a website. Sometimes a website has the author's name on it and sometimes it does not.
ii. If the website was written by a person, ask the question: "What credentials does the author have?" Generally speaking, when the author's affiliation, occupation, title, and contact information appear next to his or her name, it is easier to evaluate whether the author is fit to write about the topic on that website.
iii. If there is no author name, ask this question: "Who is responsible for the website?" For an organizational website, the domain name can help identify the possible authorship. For example, ".edu" is an educational website; ".org" is an institutional website; ".com" could be a commercial website or a news/media website.
iv. If no author or organization name can be identified, ask this question: "Does the website content show whether the author or organization is qualified to write this?" For example, self-contradictions, such as opposing facts and statistical inconsistencies, as well as spelling and grammatical errors on a website, usually indicate an unqualified author, or at least that the author was not serious in providing the information.
v. Show students good and bad examples: websites with qualified and unqualified writers (examples a and b are good; example c is bad).
5. Why did they write it?
i. Introduce to students different purposes, such as to entertain, to share, to support, to inform, to educate, to sell, and to persuade. Some websites were written with multiple purposes.
ii. Give students some examples. Example a is to inform and to educate. Example e is to sell and to inform. Example g is to inform (and, for some people, to entertain). Example h is to inform and to support.
6. When was it written and updated?
i. Introduce to students the importance of timeliness for news and technology. The importance of timeliness depends on the topic. For example, for some aspects of the topic of immigration, updated information is not as crucial as it is for news or information about technology.
ii. Teach students that the timeliness of a website also reflects whether the author is still maintaining an interest in the page or has abandoned it.
iii. Teach students that the update time is usually presented at the bottom of a page.
iv. Show students examples of websites that have been updated recently (examples h and g) and websites that have not been updated (examples d and f).
v. Call students' attention to the difference between the copyright year and the publication year.

Day 2, Instructions and Activities:
1. Teach students the other three aspects of WWWDOT, that is, being aware of their needs, the organization of the website, and how to plan for future reading or future activities on the reading topic.
2. Does this website meet my needs? The following three steps will lead students to evaluate the relevance and trustworthiness of websites.
i. Ask students to think about the information they want or need from a website. For this lesson you can use the example of immigration. Identify for students some information you want or need about immigration. Ask students if there is any information they would like to know about this topic. The key question that should help the students identify their needs is:
- What is it about [any research topic] that you want or need to know?
ii. Guide students to judge if the website has content that helps meet their needs. (Does the website have content that meets my needs?)
iii. Lead students to evaluate the website using the three features learned in the last session: Who, Why, and When.
(Are the contents on this website trustworthy or biased?)
3. Organization of website
i. How is it organized? Navigate one or two Web pages with students. Call students' attention to the structure or layout of the website. This could include such things as:
- What the tabs are;
- What sections it has;
- Where its internal and external links are;
- Which part carries the content and which part runs the advertisements (if any).
ii. The degree of usability of the organization: confusing, clear (well organized), boring, difficult/hard, easy, weird, neat. Examples c and f are poorly organized websites. The other websites are all well organized.
4. To do list for the future.
i. Teach students to think about what they want to do next based on their website reading. This could include:
a) Reading another part of this website.
b) Going to a link on this website.
c) Reading a book on ______.
d) Asking the librarian about ______.
e) Sharing what I learned with ______.
ii. Have students practice this by thinking with you about possible next steps in learning about immigration.

Day 3, Guided and Independent Practice:
1. Review WWWDOT.
2. Give students three websites on the Underground Railroad.
Website A: http://www.history.rochester.edu/class/ugrr/home.html
Website B: http://www.nationalgeographic.com/railroad/
Website C: http://www2.lhric.org/POCANTICO/TUBMAN/timeline2/timeline.htm
3. Students practice the WWWDOT approach with the above three websites. Students will fill out the WWWDOT form (Appendix C) with pencil and paper. If students don't have time to complete the three forms, please spend some time in the next session to finish them.

Day 4, Guided and Independent Practice:
Ask students to discuss (this may turn out to be a debate) which of the three websites is best and which is worst.

Appendix C
WWWDOT Worksheet

Name ______  Date ______  URL ______

WWWDOT: A Tool for Supporting Critical Reading of Internet Sites

Who wrote this (and what credentials do they have)?
Why did they write it?
When was it written and updated?
Does this help meet my needs (and how)?
Organization of site (you can write and/or draw).
To do list for the future

Appendix D
WWWDOT Observation Protocol

Date of Observation: ______  Total Time Observed: ______
Class ID#: ______  Lesson Time: ______
Session observed for experimental class: 1st  2nd  3rd  4th

Evidence of Teaching Evaluation of Websites in Class

Tells and/or demonstrates to students the importance of critically evaluating information on websites.

Who wrote this?
- Tells students how to identify the author(s) of a website.
- Tells students where to find the credentials of the author(s).
- Tells students possible signs of websites with a poorly qualified author.
- Gives students relevant example(s).
- Gives students practice.

Why did they write it?
- Tells students that different websites are written with different purposes.
- Tells students that some websites were written with multiple purposes.
- Identifies different purposes.
- Gives students relevant examples.
- Gives students practice.

When was it written/updated?
- Tells students the importance of timeliness for some news and technology sites.
- Tells students that a broken link or a missing image usually indicates that the author is not maintaining the site.
- Tells students where the updated/publication time is usually shown on a page.
- Tells students the difference between a publication year and a copyright year.
- Gives students relevant examples.
- Gives students practice.

Does this meet my needs?
- Helps students identify their needs.
- Guides students to judge if the website has what they need.
- Reminds students of evaluating the trustworthiness of sites.
- Gives students relevant examples.
- Gives students practice.

Appendix D (continued)

Organization of site
- Calls students' attention to the structure/layout of a website.
- Tells students where the tabs are.
- Tells students where the external or internal links are.
- Tells students where the advertisement goes.
- Calls students' attention to the degree of usability of the organization.
- Gives students relevant examples.
- Gives students practice.

To do list for the future
- Tells students to think about what to do in the future once they finish viewing a website.
- Gives students an exemplary to do list for the future.
- Gives students practice.

Is there any evidence from students' work, talk, etc. that they are or are not critically evaluating websites?

Observations / Comments

Other comments:

Appendix E
Student Questionnaire

In this questionnaire, you will be asked to answer questions about using the Internet. This is NOT a test. Please tell us what is most true for you. Thank you!

1. I can use an Internet browser, for example, Netscape Navigator, Internet Explorer, Safari, Firefox, etc.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

2. I can search for resources and information on the Internet.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

3. The information on the Internet is always accurate and true.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

4. You can tell if a website is good or not by how many graphics and pictures a website has.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

5. You can tell if a website is good or not by how many words a website has.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

6. I always look on the website and see who created it.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

7. I always look on the website and see when the information on the site was created or updated.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

8. While browsing a website, I usually can tell how the website is organized.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

9. As long as the website contains information I am looking for, I do not care who wrote the website.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

10. While I read things on the website, I am aware of the author's purpose in writing/creating it.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

11. When browsing a website, I can tell quickly whether this website has what I need.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

12. I know where information about the publish/update date is usually displayed on a website.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

13. When browsing a website, I can easily judge whether I should trust a website.
a) Strongly agree  b) Agree  c) Neither agree nor disagree  d) Disagree  e) Strongly disagree

14. When I'm reading a website, I concentrate on the information I am reading and I do not think about what to do next.
15. When browsing a website, I stop to think about whether it has what I'm looking for.
16. All websites are organized in basically the same way.
17. All website authors have the same purpose in writing/creating a website.
18. When I stop reading a website, I have a rough plan for what to do next.

Thank you for taking the time to complete the questionnaire!

Appendix F
Single Website Evaluation Assessment, Form A

Suppose that you are looking for information about pandas on the Internet and come across the following website. Please read the first page closely before clicking on any links. You can scroll all the way to the bottom of the page as you read. Please write one paragraph telling whether you should trust and use the information on the site and why. You have 25 minutes to finish this task.

While reading the website, you may browse the other pages of the website a little bit, but please focus mainly on the first page. While writing about whether you should trust the information on the website and your reasons, you do not need to pay a lot of attention to spelling, grammar, or handwriting. For this task, a rough draft is fine.

http://www.cnd.org/Contrib/pandas/

Single Website Evaluation Assessment, Form B

Suppose that you are looking for information about the giant panda on the Internet and come across the following website. Please read the first page closely and write one paragraph telling whether you should trust and use the information on the site and why.

While reading the website, you may browse the other pages of the website a little bit, but please focus on the first page. While writing about whether you should trust the information on the website and your reasons, you do not need to pay a lot of attention to spelling, grammar, or handwriting for this task. A rough draft is fine.

http://www.tigerhomes.org/animal/giant-panda.cfm

Appendix G
Scoring Guide for Single Website Evaluation Assessment

Criteria for scoring judgment

Form A:
- 0 points: "No, I don't trust the website." Or, "No, this is not a good website."
- 1 point: "Yes, I trust the site." Or, "Yes, this is a good website."
- 2 points: Yes and no (e.g., "I would not use it as my first choice", "I would clarify it on a trustworthy website, like National Geographic").

Form B:
- 0 points: "No, I don't trust the website." Or, "No, this is not a good website."
- 1 point: Yes and no (e.g., "I sort of/kind of trust the site.", "I would probably get another website to back up this information though").
- 2 points: "Yes, I trust the site." Or, "Yes, this is a good website."

Criteria for scoring reasons
- 0 points: Not a reason or not a good reason.
- 1 point: Notices or identifies something that is a good thing to look at BUT gets it wrong or does not provide further justification (e.g., identifies purpose but it's the wrong purpose, identifies date but it's the wrong date); or uses a good strategy BUT does not get it correct (e.g., checks background knowledge, but judges incorrectly).
- 2 points: Notices or identifies something that is a good thing to look at AND gets it correct (e.g., correct identification of credentials, correct identification of date, etc.); or uses a good strategy AND gets it correct (e.g., checks background knowledge, and judges correctly).
- +1 point: Links information gathered appropriately to the trustworthiness of the website.
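Read operationally, the guide assigns a judgment score of 0-2 (keyed differently for each form), a reason score of 0-2, and a 1-point bonus for linking the reasons to trustworthiness. The following is a minimal sketch of that logic, assuming the three judgment categories are labeled "no", "yes", and "mixed" and that judgment and reason points are summed into one total (the guide does not state how the parts combine); all names here are hypothetical, not taken from the original scoring materials.

    # Illustrative sketch of the Appendix G scoring logic. The category
    # labels ("no", "yes", "mixed"), all names, and the assumption that
    # judgment and reason points sum to one total are hypothetical, not
    # part of the original scoring guide.

    # Judgment points differ by form: outright trust scores highest on
    # Form B, while a hedged ("yes and no") answer scores highest on Form A.
    JUDGMENT_POINTS = {
        "A": {"no": 0, "yes": 1, "mixed": 2},
        "B": {"no": 0, "mixed": 1, "yes": 2},
    }

    def score_response(form: str, judgment: str, reason_points: int,
                       linked_to_trustworthiness: bool) -> int:
        """Judgment points (0-2) plus reason points (0-2), plus a 1-point
        bonus when the reasons are tied to the site's trustworthiness."""
        total = JUDGMENT_POINTS[form][judgment] + reason_points
        return total + 1 if linked_to_trustworthiness else total

    # A Form A response of "I would check it on another site" (mixed) that
    # correctly identifies the author's credentials (2 reason points) and
    # links that to trustworthiness would score 2 + 2 + 1 = 5.
    assert score_response("A", "mixed", 2, True) == 5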
Appendix H
Website Ranking Assessment, Form A

Suppose you are searching for information about the respiratory system and you find the following 4 websites. During the next 30 minutes, please:
1. Check out the websites, browsing them a little bit. Please read the first page for at least 5 minutes before clicking on any links. You can scroll all the way to the bottom of the page as you read.
2. Rank the websites based on how much you can trust the information in them.
3. Write down the reasons why you trust one the most and another the least.

While writing, you do not need to pay a lot of attention to spelling, grammar, or your handwriting. For this task, a rough draft is fine. Please write down all the reasons you have in mind.

A. http://www.lungusa.org/site/pp.asp?c=deUK9OOE&b=2_2576
B. http://library.thinkquest.org/L824/Respiratory.html
C. http://www.innerbody.com/anim/lungs.html
D. http://www.leeds.ac.uk/chb/lectures/anatomy7.html

Please circle one for each category.
Most trustworthy: A B C D
Second most trustworthy: A B C D
Third most trustworthy: A B C D
Fourth most trustworthy: A B C D

Take a look at the website that you ranked as the most trustworthy and write down your reasons.

Take a look at the website that you ranked as the least trustworthy and write down your reasons.

Website Ranking Assessment, Form B

Form B uses the same directions, ranking categories, and reason prompts as Form A, with the following four websites:

A. http://biology.clc.uc.edu/courses/bio105/respirat.htm
B. http://www.cdli.ca/~dpower/resp/main.htm
C. http://health.allrefer.com/health/lung-disease-respiratory-system.html
D. http://www.imcpl.org/kids/guides/health/respiratorysystem.html
Appendix I
Scoring Rubric for Website Ranking Assessment

Correct order, from most to least trustworthy:
Form A: A-D-B-C
Form B: A-D-C-B (same rules as for Form A)

The scoring rubric is based on where the most trustworthy website (Website A) and the least trustworthy website (Website C) were placed.

Situation 1: If A and C are in their correct slots, and
a) the 2nd most trustworthy and the 3rd most trustworthy websites are also in the right slots, the ranking gets 6 points (the maximum score);
b) the 2nd most trustworthy and the 3rd most trustworthy websites are not in the right slots, it gets 5 points.

Situation 2: If either A or C (but not both) is in its correct slot, and
a) the one that is not in its correct slot is only one step away from the right slot, it gets 4 points;
b) the one that is not in its correct slot is two steps away from the correct slot, it gets 3 points.

Situation 3: If neither A nor C is in its correct slot, and
a) A and C are only one step away from their correct slots, it gets 2 points;
b) A and C are two steps away from their correct slots, it gets 1 point.

Situation 4: If A is in the least trustworthy slot, or C is in the most trustworthy slot, it gets 0 points.

Examples (Form A):
ADBC - 6; ABDC - 5
ADCB - 4; ABCD - 4; DABC - 4; BADC - 4
ACBD - 3; ACDB - 3; DBAC - 3; BDAC - 3
BACD - 2; DACB - 2
BCAD - 1; DCAB - 1
BDCA - 0; BCDA - 0; DBCA - 0; DCBA - 0; CABD - 0; CADB - 0; CBAD - 0; CBDA - 0; CDAB - 0; CDBA - 0

Rubric for reasoning sections: same as for the Single Website Evaluation Assessment.
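Because the rubric depends only on the slots occupied by Websites A and C, it can be checked mechanically. Below is a minimal sketch of the Form A point assignments, verified against the example scores listed above; the function and variable names are hypothetical, and the code is illustrative rather than part of the original scoring materials.

    # Illustrative sketch of the Form A ranking rubric (names hypothetical,
    # not from the original materials). A ranking is a string ordering the
    # four sites from most to least trustworthy, e.g. "ADBC".
    CORRECT_ORDER = "ADBC"  # Form A key: A most trustworthy, C least

    def ranking_score(ranking: str) -> int:
        a, c = ranking.index("A"), ranking.index("C")
        if a == 3 or c == 0:         # Situation 4: A ranked last or C ranked first
            return 0
        if a == 0 and c == 3:        # Situation 1: both A and C in correct slots
            return 6 if ranking == CORRECT_ORDER else 5
        if a == 0 or c == 3:         # Situation 2: exactly one of A, C correct
            steps = (3 - c) if a == 0 else a
            return 4 if steps == 1 else 3
        # Situation 3: neither correct; A and C are then always the same
        # number of steps (1 or 2) away from their correct slots.
        return 2 if a == 1 else 1

    # Spot-check against the example scores listed in the rubric above.
    examples = {"ADBC": 6, "ABDC": 5, "ADCB": 4, "DABC": 4, "ACBD": 3,
                "DBAC": 3, "BACD": 2, "DACB": 2, "BCAD": 1, "BDCA": 0,
                "CABD": 0}
    assert all(ranking_score(r) == s for r, s in examples.items())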