UNDERSTANDING THE EPISTEMOLOGY-LEARNING CONNECTION WHEN EXPLORING AN ILL-STRUCTURED TASK USING THE INTERNET

By

Tianyi Zhang

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Educational Psychology and Educational Technology

2011

ABSTRACT

UNDERSTANDING THE EPISTEMOLOGY-LEARNING CONNECTION WHEN EXPLORING AN ILL-STRUCTURED TASK USING THE INTERNET

By

Tianyi Zhang

Within the context of exploring an ill-structured task using the Google search engine, this study examines (1) the connections between personal epistemology and the complexity of knowledge exploration (i.e., learning complexity), and (2) the role of activating learners' task-oriented epistemic beliefs in affecting their knowledge exploration processes. With covariates (i.e., prior content knowledge, verbal comprehension, effort investment, and learning time) controlled, hierarchical regression analyses were conducted to investigate (1) whether or not the complexity of participants' knowledge exploration was associated with their epistemic beliefs (including general and task-specific epistemic beliefs) and epistemic activation, and (2) whether or not epistemic activation could affect the relationship between epistemic beliefs and the complexity of knowledge exploration. The results show that epistemic beliefs were connected to the complexity of learners' knowledge exploration. Complex learners were more likely to benefit from the epistemic activation to (1) view the task as complex and subjective (and thus perceive their learning to be insufficient), (2) adopt more complex strategies to evaluate web information veracity, and (3) perceive the value of studying specific cases (e.g., empirical studies, first-hand experiences, etc.). This research contributes to (1) theoretical understandings of personal epistemology in connection to learning complexity when learning resources are not pre-selected and learning tasks are open-ended and unstructured, and (2) the investigation of the pedagogical value of a teaching strategy (i.e., activating learners' epistemic beliefs prior to learning) to promote deep learning in Internet-based learning environments.

Copyright by
TIANYI ZHANG
2011

Dedicated to:
My husband, Michael Ulyshen
My mother, Ying Shi
My father, Shibao Zhang
My relatives and closest friends who supported me in completing my degree.

ACKNOWLEDGEMENTS

I thank my advisor, Dr. Matthew Koehler, for his guidance and mentorship throughout these years. His help was paramount! I thank my dissertation chair, Dr. Matthew Koehler, and my committee members Dr. Rand Spiro, Dr. Punya Mishra, and Dr. Douglas Hartman for their time commitment and their great help with my dissertation project. I thank Dr. Fei Gao and Paul Morsink for their precious suggestions on my dissertation. I would like to express my gratitude to the late Dr. Jere Brophy, who was always very supportive and who greatly influenced me as a researcher. I thank all my instructors, my classmates, and the people I have been working with for sharing their thoughts and enthusiasm. I appreciate the support from our program, Educational Psychology and Educational Technology, the College of Education, and the Graduate School, which funded my dissertation. I want to thank my parents, who not only gave me life, but also raised me and gave me the best education.
I feel so grateful to my husband, who is always my first reader providing insightful suggestions, who takes me to the woods to see birds and insects, and who enriches my life! I thank my closest friends who think of me, encourage me, and stand by me. I dedicate this work to all of them.

TABLE OF CONTENTS

LIST OF TABLES ... viii
LIST OF FIGURES ... xi

CHAPTER 1
INTRODUCTION AND LITERATURE REVIEW ... 1
  Overview ... 1
  Personal Epistemology ... 2
    Developmental and Dimensional Views of Personal Epistemology ... 2
    Contextualized Personal Epistemology ... 6
    Activation of Personal Epistemology ... 10
  Learning Complexity in Internet-Based Learning Environments ... 12
    A Focus on Learning Processes, Not Outcomes ... 12
    The Complexity of Learning Processes, Theories, and Perspectives for Analysis ... 13
    Epistemology-Learning Connections in Prior Studies ... 17
  Purposes, Research Questions, and Hypotheses ... 18

CHAPTER 2
METHOD ... 22
  Participants ... 22
  Instruments and Materials ... 22
    Ill-Structured Task ... 22
    Inventories Testing General Epistemic Beliefs ... 23
    Inventories Testing Task-Specific Epistemic Beliefs ... 25
    Epistemic Prompts ... 25
    Post Survey ... 28
    Prior Content Knowledge Test ... 29
    Verbal Comprehension Test ... 29
    Video Clips ... 30
    Interview ... 30
  Design and Procedures ... 31
  Measures and Data Analysis ... 33
    Quantifying Independent Variables ... 33
    Quantifying Covariates ... 35
    Quantifying Dependent Variables – Learning Complexity Measured through Direct Analysis ... 36
    Quantifying Dependent Variables – Learning Complexity Measured through Self-Reported Methods ... 58
    Descriptive Data – The Role of Epistemic Activation ... 65
    Statistical Analysis of the Research Questions ... 65

CHAPTER 3
RESULTS ... 68
  Descriptive Statistics ... 68
  Research Questions and Results ... 72
    Research Question 1 ... 72
    Research Question 2 ... 74
    Research Question 3 ... 75
    Research Question 4 ... 76
  The Effect of Epistemic Activation from Learners' Perspectives ... 86
  The Connections between Covariates and Learning Complexity ... 88

CHAPTER 4
DISCUSSION ... 93
  Understanding the Epistemology-Learning Association ... 93
    General Epistemic Beliefs and Learning Complexity (Research Question 1) ... 93
    Task-Specific Epistemic Beliefs and Learning Complexity (Research Question 2) ... 95
    Understanding the Role of Epistemic Activation (Research Questions 3 & 4) ... 96
  Understanding the Role of Covariates ... 101
    Learning Time ... 101
    Verbal Comprehension Abilities and Effort Investment ... 102
    Prior Content Knowledge ... 103
  Implications ... 105
  Limitations ... 108

CHAPTER 5
CONCLUSIONS ... 110

APPENDICES ... 111

REFERENCES ... 154

LIST OF TABLES

Table 1: Design and Procedures of the Study ... 32
Table 2: Coding Categories to Analyze the Complexity of Knowledge Exploration Processes ... 39
Table 3: Codes, Definitions, and Examples of the Connection Dimension ... 40
Table 4: Codes, Definitions, and Examples of the Flexibility Dimension ... 42
Table 5: Codes, Definitions, and Examples of the Critical Analysis of Web Information Dimension ... 45
Table 6: Codes, Definitions, and Examples of the Novelty Dimension ... 47
Table 7: Codes, Definitions, and Examples of the Engagement Dimension ... 49
Table 8: An Example of Data Triangulation ... 54
Table 9: Triangulating the Results from the Interview and the Video Clips ... 55
Table 10: Means and Standard Deviations of the Continuous Variables Measuring Learning Complexity (Raw Scores) and Zero-Order Correlation Coefficients between These Variables and the Variables Measuring Personal Epistemology and the Covariates ... 69
Table 11: Descriptive Statistics for the Dichotomous Variables Measuring Learning Complexity ... 70
Table 12: Means (Raw Scores), Standard Deviations, and Zero-Order Correlation Coefficients (Two-Tailed) of the Variables Measuring Personal Epistemology and Covariates ... 71
Table 13: Overview of the Results When Personal Epistemology Was Measured by the CFI ... 91
Table 14: Overview of the Results When Personal Epistemology Was Measured by the OMPI ... 92
Table 15: Hierarchical Analysis of Multiple Regression Models Predicting the Observed Learning Complexity (Integrated) ... 134
Table 16: Hierarchical Analysis of Multiple Regression Models Predicting the Connection Dimension of Learning Complexity ... 135
Table 17: Hierarchical Analysis of Multiple Regression Models Predicting the Flexibility Dimension of Learning Complexity ... 136
Table 18: Hierarchical Analysis of Multiple Regression Models Predicting the Critical Analysis of Web Information Dimension of Learning Complexity ... 137
Table 19: Hierarchical Analysis of Multiple Regression Models Predicting the Critical Analysis of Web Information – Source Sub-Dimension of Learning Complexity ... 138
Table 20: Hierarchical Analysis of Multiple Regression Models Predicting the Critical Analysis of Web Information – Recentness Sub-Dimension of Learning Complexity ... 139
Table 21: Hierarchical Analysis of Multiple Regression Models Predicting the Critical Analysis of Web Information – Content Sub-Dimension ... 140
Table 22: Hierarchical Analysis of Multiple Regression Models Predicting the Novelty Dimension of Learning Complexity ... 141
Table 23: Hierarchical Analysis of Multiple Regression Models Predicting the Engagement Dimension of Learning Complexity ... 142
Table 24: Hierarchical Analysis of Multiple Regression Models Predicting Learner Satisfaction ... 143
Table 25: Hierarchical Analysis of Multiple Regression Models Predicting Perceived Extent of Knowledge Exploration ... 144
Table 26: Hierarchical Analysis of Multiple Regression Models Predicting Overestimation ... 145
Table 27: Hierarchical Analysis of Logistic Regression Models Predicting Perceived Insufficiency of Learning ... 146
Table 28: Hierarchical Analysis of Logistic Regression Models Predicting Participants' Plans to Explore Empirical Studies ... 147
Table 29: Hierarchical Analysis of Logistic Regression Models Predicting Participants' Plans to Explore Individual Cases ... 148
Table 30: Hierarchical Analysis of Logistic Regression Models Predicting Participants' Plans to Explore the Views from Different Stakeholders ... 149
Table 31: Hierarchical Analysis of Logistic Regression Models Predicting Indecisiveness ... 150
Table 32: Hierarchical Analysis of Logistic Regression Models Predicting Indecisiveness due to the Context-Dependency Concern ... 151
Table 33: Hierarchical Analysis of Logistic Regression Models Predicting the Adoption of Low Criteria Determining When to Stop Exploration ... 152
Table 34: Hierarchical Analysis of Multiple Regression Models Predicting the Breadth of Knowledge Exploration ... 153

LIST OF FIGURES

Figure 1: Comparing the role of segmentation in an excerpt of the protocols ... 51
Figure 2: An example for segmenting the protocols and ordering the segments ... 52
Figure 3: An example for calculating the test-retest agreement ... 57
Figure 4: The epistemology-learning connection was investigated independently based on the two inventories collecting personal epistemology ... 67
Figure 5: Partial regression plot (with regression lines) depicting the two-way interaction between general epistemic beliefs and epistemic activation on the critical analysis of web information through content sub-dimension ... 78
Figure 6: Partial regression plot (with regression lines) depicting the effect of the two-way interaction between general epistemic beliefs and epistemic activation on perceived extent of knowledge exploration ... 79
Figure 7: Partial regression plot (with regression lines) depicting the effect of the two-way interaction between general epistemic beliefs and epistemic activation on overestimation ... 80
Figure 8: Partial regression plot (with regression lines) depicting the effect of the two-way interaction between task-specific epistemic beliefs and epistemic activation on perceived extent of knowledge exploration ... 81
Figure 9: Partial regression plot (with regression lines) depicting the effect of the two-way interaction between task-specific epistemic beliefs and epistemic activation on overestimation ... 82
Figure 10: Relationship between general epistemic beliefs and the need (i.e., future plan) to explore empirical studies for activation and non-activation groups ... 83
Figure 11: Relationships between task-specific epistemic beliefs and the need (i.e., future plan) to explore individual cases for activation and non-activation groups ... 84
Figure 12: Relationships between task-specific epistemic beliefs and perceived insufficiency of learning for activation and non-activation groups ... 85

CHAPTER 1
Introduction and Literature Review

Overview
The Internet is becoming an increasingly important tool for researchers of all kinds, from school children working on homework assignments to tenured academics. Whether or not this powerful resource is being used to its full potential, however, is in some doubt, particularly for ambiguously defined problems with more than one "solution". The issue of global warming, for instance, does not adhere to a single discipline of knowledge with clear-cut boundaries and can be approached by several avenues of investigation, none of which is necessarily more "correct" than the others. Kitchener (1983) and Wood (1983) named such cases "ill-structured problems" to distinguish them from "well-structured problems," which are specific, clearly defined, and can be judged on correctness (e.g., the trajectory of a rocket's flight, an example proposed by King and Kitchener, 1994).

Learning to solve ill-structured problems (i.e., learning in ill-structured domains) requires deep understanding of the issues at hand (i.e., complex learning or advanced knowledge exploration). That is, learners derive their own goals, investigate multiple cases and alternatives, build connections across information, generate questions, elaborate and justify theories, relate the new information to the real world, and recognize their own biases (Hare, 2003; Marton & Säljö, 2005; Vermunt & Vermetten, 2004; Spiro, Feltovich, Jacobson, & Coulson, 1991). Aided by hyperlinks, Internet users exploring an ill-structured problem can search expansively for individual cases, counter-examples, and personal stories, and then process the information deeply by comparing these cases, synthesizing them into their existing knowledge structure, and remaining open to alternatives and prepared to reconstruct ideas. Hypothetically, then, the Internet is an environment that can nurture complex learning. Yet prior studies have documented low levels of knowledge exploration in Internet-based learning environments. Both K-12 students and university students tend to use the Internet merely to collect information or to find quick answers, rather than to elaborate, explore, and justify ideas (e.g., Mansourian & Ford, 2007; Wallace, Kupperman, Krajcik, & Soloway, 2000). To take full advantage of this powerful resource to solve ill-structured problems and to improve how people learn, it is essential to study the factors associated with learning complexity in Internet-based learning environments.

Previous studies have revealed connections between learning complexity and personal epistemology (i.e., individuals' beliefs about knowledge and knowing) in environments using pre-programmed hypertext systems (Jacobson & Spiro, 1995; Windschitl & Andre, 1998) and printed materials (Bråten & Strømsø, 2010). However, these experimental learning environments were perhaps too highly controlled to accurately reflect the realities of online learning. The purpose of the current study, therefore, was to explore the relationship between personal epistemology and learning complexity in a real-world online environment. The major concepts and hypotheses underpinning this research are outlined below. First, issues related to personal epistemology are reviewed, followed by a discussion of learning complexity.

Personal Epistemology

Personal epistemology refers to individuals' beliefs about knowledge and knowing (Hofer, 2002). Prior studies have focused their discussions on three issues: (1) developmental and dimensional views of personal epistemology, (2) contextualized personal epistemology, and (3) activation of epistemic beliefs.
Developmental and Dimensional Views of Personal Epistemology

Developmental views of personal epistemology. Influenced by Perry's (1970) initial work, developmental psychologists consider personal epistemology to be an integrated cognitive structure developing from simple to complex stages. Although the specific developmental stages in different models (e.g., Baxter Magolda, 1987; Belenky, Clinchy, Goldberger, & Tarule, 1986; Chandler, Boyes, & Ball, 1990; King & Kitchener, 1994; Kuhn, Cheney, & Weinstock, 2000; Perry, 1970) vary and scholars have adopted different terms in their models, these models share a general sequence, starting with an absolutist view, followed by a multiplist level, and ending with an evaluativist position (the terms used here are from Kuhn et al.'s (2000) model).

At the absolutist stage, individuals hold a right-or-wrong view of the world. That is, knowledge is seen as objective and can be evaluated as true or false. Absolutists believe that knowledge exists externally within authorities (e.g., teachers, textbooks, experts, etc.) who are responsible for passing the knowledge to others. Thus, knowledge in their view is fixed and certain (e.g., "If I know it, then I know it. My answer is either correct or incorrect!"). Later, individuals begin to realize the subjectivity of knowledge and embrace a multiplist stance. But individuals in this transitional stage perceive knowledge and knowing as uncertain. Truth lies only within the self. The absolute answer to any question does not exist. All viewpoints are relative and equally valid, because each person can form his or her own subjective opinion. This stage, therefore, reflects a radical view of subjective knowledge (e.g., "Everyone knows some of it, so everyone is correct as well as incorrect."). Finally, individuals who move to the evaluativist stage realize that knowledge and truth are contextual and relative. They perceive both the objectivity and the subjectivity of knowledge; that is, they conceive that some judgments are more reasonable or valid than others, which leads them to coordinate diverse evidence to draw conclusions across perspectives. Thus, this stage involves intensive reasoning, critical reflection, and awareness that conclusions are uncertain and subject to re-assessment. At this most complex level, learners see their role as that of a meaning maker, which reflects a constructivist perspective of learning (e.g., "Someone's answers may be more reasonable than others', although they are correct as well as incorrect to some extent.").

Dimensional views of personal epistemology. During the 1990s, Schommer questioned the unitary construct of personal epistemology proposed by developmental psychologists and posited five relatively independent dimensions depicting the epistemic construct: (1) simplicity of knowledge (ranging from the belief that knowledge is isolated and simple to the belief that knowledge is interrelated and complex), (2) certainty of knowledge (ranging from the belief that knowledge is absolute to the belief that knowledge is tentative), (3) speed of learning (beliefs about whether or not learning occurs quickly), (4) implicit ability theories (beliefs about whether or not the ability to learn is innate), and (5) source of knowledge (a continuum from the view that knowledge comes from authority to the view that knowledge is derived from reasoning). Schommer believed that these dimensions were relatively independent.
That is, learners who assume knowledge is isolated and simple may nevertheless believe that knowledge is tentative. Although Schommer's initial work to validate these dimensions failed to identify the source of knowledge dimension (Schommer, 1990), Schraw and his colleagues (Schraw, Dunkle, & Bendixen, 1995; Schraw, Bendixen, & Dunkle, 2002) successfully extracted this dimension through factor analysis and confirmed the existence of the other four dimensions.

Educational psychologists holding dimensional views have not reached a consensus on how many and which dimensions depict personal epistemology sufficiently and efficiently. Other dimensions, such as the structure of knowledge, knowledge construction and modification, and learning as an orderly process, have been validated (Jehng, Johnson, & Anderson, 1993; Wood & Kardash, 2002). Moreover, epistemic dimensions are perhaps not entirely orthogonal (as opposed to the dimensional independence view Schommer initially proposed). Empirical data (e.g., Hofer, 2000; Nussbaum & Bendixen, 2003; Qian & Alvermann, 1995) have shown significant interrelationships across some dimensions, and more and more scholars are inclined to assume dimensional interdependence (e.g., Bråten & Strømsø, 2005; Schommer-Aikins, 2004; Hofer, 2004). Accordingly, when they approach the personal epistemology construct, they opt for oblique rotations in factor analysis (e.g., Bråten & Strømsø, 2010) or add up dimensional scores to reflect the integrated epistemic construct (e.g., Demetriadis, Papadopoulos, Stamelos, & Fischer, 2008).

Consistency between developmental and dimensional views. Dimensional and developmental views of personal epistemology are not fundamentally inconsistent. Hofer (see Hofer & Pintrich, 1997; Hofer, 2004) compared these two views and proposed that personal epistemology functions at the metacognitive level and includes the nature of knowledge and the nature of knowing. The nature of knowledge refers to individuals' knowledge about knowledge and addresses questions such as "What is knowledge?" "Is knowledge tentative and situated in its contexts (i.e., contextually based)?" "Is knowledge interconnected?" It thus embraces Schommer's two dimensions of certainty and simplicity of knowledge. The nature of knowing is individuals' knowledge about knowing, and addresses questions like "How do people learn?" "How do people justify their views?" "Whom can people learn from?" It relates to Schommer's source of knowledge dimension as well as to the knowledge justification process implied by developmental models (e.g., whether learning is to accept external authoritative information or to make meaning from the external information). The nature of knowledge and the nature of knowing are correlated. Individuals who assume that knowledge is embedded in diverse contexts, reflected through multiple lenses, interconnected, and evolving more readily think that knowing is a process of evaluating the soundness of information in its contexts, integrating multiple instantiations, and synthesizing conclusions. Therefore, the dimensional view of personal epistemology fundamentally resonates with the developmental view (Hofer, 2004; Limón, 2006).

Because of the interdependent dimensions of personal epistemology and the internal consistency between the developmental and dimensional views, personal epistemology is treated here as a holistic construct consisting of several interrelated dimensions. Epistemic beliefs range from simplistic to complex.
Individuals with simple epistemic beliefs (i.e., less complex thinkers) assume that knowledge is objective, fixed, isolated, uni-dimensional, and contextually independent (i.e., the nature of knowledge), and that knowing is a process of receiving and copying information from external authorities without self-evaluation or justification (i.e., the nature of knowing). In contrast, individuals with complex epistemic beliefs (i.e., more complex thinkers) assume that knowledge is objective as well as subjective, and is tentative, interrelated, multi-dimensional, and contextually bounded. Learning, in the view of a complex learner, is a meaning-making process involving collecting, comparing, and synthesizing contextually based instantiations to validate self-judgment.

Contextualized Personal Epistemology

Early studies of personal epistemology focused on contextually independent (i.e., general) personal epistemology (e.g., King & Kitchener, 1994; Perry, 1970; Schommer, 1990), assuming that general epistemic beliefs serve as a core basis from which contextually dependent epistemic beliefs derive (Schommer-Aikins, 2002). Subsequently, researchers have grounded their investigations within disciplines and domains (Buehl, Alexander, & Murphy, 2002; Buehl & Alexander, 2005; Kuhn, Cheney, & Weinstock, 2000; Kuhn & Weinstock, 2002), implying that personal epistemology is domain-sensitive. This domain-sensitive view of personal epistemology assumes that (1) as individuals gain more expertise in one domain, they are likely to form a coherent conception of knowledge and knowing within that domain (Limón, 2006), so levels of expertise can affect individuals' epistemic thinking (Buehl & Alexander, 2001, 2006; Buehl et al., 2002); and (2) personal epistemology across domains may not be consistent, so knowledge and knowing should be investigated in specific domains. Recent studies have found that epistemic beliefs differ even across specific contexts in the same domain (e.g., diSessa et al., 2003; Leach, Millar, & Ryder, 2000; Louca, Elby, Hammer, & Kagey, 2004), which suggests the need to measure contextualized personal epistemology (Baxter Magolda, 2004; Hofer, 2004; Louca et al., 2004; Mason, Boldrin, & Ariasi, 2010). Some scholars then argue that a core set of pure epistemic beliefs does not exist, especially at early ages (e.g., Louca et al., 2004), and that general epistemic beliefs can only be inferred by comparing contextualized epistemic beliefs across contexts (Limón, 2006). The relationship between general and contextualized epistemic beliefs remains poorly understood.

Methodology. Different methodologies are used to measure general and contextualized personal epistemology. Two methods used to collect general epistemic beliefs are interviews (e.g., Perry, 1970; King & Kitchener, 1994) and surveys (e.g., Jehng, Johnson, & Anderson, 1993; Schommer, 1990; Schraw et al., 2002; Spiro, Feltovich, & Coulson, 1996; Wood & Kardash, 2002). The interview method widely used by developmental psychologists is time-consuming and requires trained coders. The Schommer Epistemological Questionnaire (SEQ; Schommer, 1990) and the Epistemic Beliefs Inventory (EBI; Schraw et al., 1995, 2002) were subsequently developed to measure different epistemic dimensions and have been widely used. There are some challenges, however.
As mentioned earlier, there is no agreement on epistemic dimensions, and both questionnaires focus on testing the nature of knowledge while lacking items reflecting knowledge justification (Hofer, 2004), which is activated frequently during learning processes that use Internet search engines (Mason et al., 2010). Grounded in the notion that personal epistemology is an integrated construct encompassing several dimensions, Spiro, Feltovich, and Coulson's (1996) Cognitive Flexibility Inventory (CFI) and Germer, Efran, and Overton's (1982) Organicism-Mechanism Paradigm Inventory (OMPI) effectively counteract this limitation and include sufficient items addressing beliefs about knowledge (e.g., whether or not knowledge is interconnected, tentative, contextually based, etc.) and the knowing process (e.g., whether or not learning is a knowledge construction process investigating and synthesizing cases across their contexts). Thus, this study adopted the revised CFI and OMPI to collect personal epistemology.

In contrast, measuring contextualized personal epistemology involves think-aloud protocols, retrospective interviews, and/or direct observations (e.g., Hofer, 2004; Limón, 2006; Louca et al., 2004; Mason et al., 2010). These methods are successful in demonstrating the impact of personal epistemology on learning (Hofer, 2004; Mason et al., 2010; Whitmire, 2003). For example, Mason et al. (2010) found that participants spontaneously evaluated the source of web information they encountered while learning online, showing their willingness to trust authoritative resources (i.e., the source of knowledge aspect of personal epistemology). But as Schommer-Aikins (2004) pointed out, contextualized epistemic beliefs measured through these methods are confounded by various contextualized learning factors, such as learning materials and learners' prior knowledge. Thus, contextualized epistemic beliefs are bound to appear closely related to learning. In addition, considering the unlikelihood of replicating an identical learning context, the results from studies of contextualized epistemic beliefs can have uncertain implications. Even though research can demonstrate tight relationships between a given contextualized epistemic belief and learning in a particular situation, this finding cannot be generalized to other situations automatically (i.e., without empirical testing). What remains unknown is whether or not the pure construct of epistemic beliefs (i.e., general epistemic beliefs) relates to learning.

Another way to collect contextualized epistemic beliefs is to adopt task-specific epistemic questionnaires, composing items on the basis of learning tasks. Some scholars have adopted this method. For instance, when studying the domain-consistency of personal epistemology through survey methods, Schommer and Walker (1995) imposed short directions on the questionnaire and also situated the items in the context of the subject areas of interest. In this study, to collect participants' contextualized epistemic beliefs pertinent to the given ill-structured task, the questionnaires collecting their general epistemic beliefs were revised to fit the context in which participants would use the Internet to explore the given ill-structured task. In this way, the collected contextualized personal epistemology is less likely to be confounded with learning variables (and thus to exaggerate the learning-epistemology connection). Meanwhile, it is oriented to the given task and thus also reflects the particular context well.
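Because both the general and the task-specific inventories combine many item ratings into a single score locating a respondent on the simple-to-complex continuum, a minimal scoring sketch may make the measurement concrete. Everything below is hypothetical (the item names, ratings, scale length, and the choice of reverse-keyed items); the actual items and keying for this study appear in the appendices.

    # Minimal sketch: combining Likert-type inventory items into one
    # epistemic-complexity score. Responses, item names, and the set of
    # reverse-keyed items are hypothetical.
    import pandas as pd

    # Rows = participants; columns = items rated 1 (simple) to 6 (complex).
    responses = pd.DataFrame({
        "item1": [2, 5, 4],
        "item2": [6, 3, 5],   # reverse-keyed: a high raw rating = simpler belief
        "item3": [1, 4, 6],
    })

    scale_max = 6
    reverse_keyed = ["item2"]
    # Reverse-keying maps a rating r to (scale_max + 1) - r.
    responses[reverse_keyed] = (scale_max + 1) - responses[reverse_keyed]

    # One composite per participant; higher = more complex epistemic beliefs.
    composite = responses.sum(axis=1)
    print(composite)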
Finally, Schommer-Aikins (2002) argues that general epistemic beliefs exist and serve as a core basis from which contextualized epistemology springs forth. Supporting evidence for this view comes from an empirical study demonstrating the consistency of participants' epistemic beliefs about math and social studies (Schommer & Walker, 1995). When categorizing participants' beliefs into high or low levels in both domains, Schommer and Walker found that a majority of the participants classified into the low or high level in one domain were also categorized into the same level in the other. Although other studies (e.g., Hofer, 2000) show that individuals assess the nature of knowledge differently across domains, these studies only compared mean epistemic scores across domains, rather than examining participants' relative epistemic stances across domains. In other words, a difference revealed in mean scores across domains does not necessarily entail low correlations of epistemic scores across domains. Thus, these studies cannot reject Schommer-Aikins' stance.

Activation of Personal Epistemology

The contextualized epistemology extracted from think-aloud protocols, retrospective interviews, and questionnaires reflects individuals' professed (i.e., stated) epistemic beliefs (Limón, 2006). Yet epistemic beliefs refer to implicit personal beliefs about knowledge and knowing. Thus, there is a concern that professing epistemic beliefs can prime participants to be more cognizant of their epistemology, which may change their learning processes, such as by leading them to adopt different learning strategies or increasing their metacognitive awareness of what should be acquired (Louca et al., 2004). The priming effect can be more salient if task-specific epistemic questionnaires are completed right before learning. Reading and contemplating the items can raise participants' metacognitive awareness of their epistemic beliefs (Hofer, 2004), possibly leading them to recognize a complexity in the given task that they might not consider otherwise (Schraw, 2000).

On the other hand, because the priming effect can change the subsequent learning process, it is interesting to understand how this happens. Priming may affect learning in at least two ways. First, studies have shown that stimulating individuals to reflect on metacognitive prompts each time they are exposed to new learning material during the learning process can activate their metacognitive thinking and result in more complex learning (Bannert, 2006; Demetriadis, Papadopoulos, Stamelos, & Fischer, 2008). Since epistemic thinking operates at the metacognitive level (Hofer, 2004), it is reasonable to assume that presenting task-oriented epistemic prompts to activate learners' epistemic awareness during learning can enhance learning complexity. It is noteworthy that the participants in Bannert's (2006) and Demetriadis et al.'s (2008) studies repeatedly received activation prompts from pre-programmed computers during their learning processes, whenever they were exposed to new learning material. Nevertheless, it is not very practical for classroom teachers to ask their students to contemplate activation prompts each time a new learning material is presented, especially when students learn at their own pace on the Internet. Unfortunately, the effect of activation prompts presented prior to learning has not been investigated.
If contemplating these prompts prior to learning results in a greater extent of complex learning, this approach may be of considerable value to teachers.

Second, scholars believe that if individuals are aware of their epistemic beliefs, the influence of these beliefs on learning may be magnified (e.g., Muis, 2007; Kitchener, 1983). Therefore, they suggest activating learners' personal epistemology through prompts posed before learning to strengthen the epistemology-learning connection (Kitchener, 1983). When individuals search the Internet to explore an ill-structured task, they may spontaneously activate some dimensions of their epistemic beliefs. For instance, in Mason et al.'s (2010) study, all of the participants spontaneously activated their epistemic beliefs during their Internet-based learning processes to evaluate the quality of Internet information in order to form their own interpretations. Thus, two epistemic dimensions – the source of knowledge (i.e., whether knowledge exists externally or within individual learners) and the knowledge justification process (i.e., whether learning is to accept authoritative information or to make meaning from the external information) – were activated frequently during their learning processes. But only a few participants reflected on the simplicity of knowledge and the certainty of knowledge dimensions. Thus, although there is some evidence disclosing the spontaneous arousal of epistemic assumptions during the learning process, not all epistemic dimensions are likely to be activated spontaneously (Mason et al., 2010). It is necessary, then, to write activation prompts that cover the epistemic dimensions comprehensively.

Learning Complexity in Internet-Based Learning Environments

A Focus on Learning Processes, Not Outcomes

Learning complexity has been studied through several dimensions/constructs, such as the amount of knowledge acquired (Mason et al., 2010), knowledge application in novel situations (Jacobson & Spiro, 1995), knowledge integration across multiple sources (Bråten & Strømsø, 2010), perceived extent of knowledge exploration (Wu & Tsai, 2005, 2007), open-mindedness to alternatives (Whitmire, 2003), and satisfaction (Mansourian & Ford, 2007). Among these constructs, the ones related to learning outcomes (e.g., the amount of knowledge acquired, knowledge application in novel situations, knowledge integration across multiple sources) have been collected through post-tests, and the constructs germane to learning processes (e.g., open-mindedness to alternatives, perceived extent of knowledge exploration, satisfaction) have been collected through interviews (e.g., Whitmire, 2003) and surveys (e.g., Mansourian & Ford, 2007; Wu & Tsai, 2005).

In Internet-based open-ended learning environments, the fact that learners can select learning materials and decide the sequence of learning leads to individual differences in learning outcomes (e.g., What have learners acquired? How deeply have learners understood a certain issue?). Therefore, in such open-ended learning environments, investigating learning processes seems more important than gauging learning outcomes. Marton and Säljö (2005) also stressed the importance of studying learning processes, because they lead to the variability of learning outcomes. No prior study has investigated the complexity of learning processes through direct analysis.
Instead, prior studies collecting data to understand how learners explore a certain issue using the Internet (i.e., studies of learning processes) have relied exclusively on self-reported data gathered through interview or survey methods. The results from these self-reported methods are not equivalent to the results from direct analyses of learning processes. Although analyzing learning processes is challenging and requires a vast amount of time, it is necessary to gauge the complexity of learning processes through direct analyses of how learners explore a task, which can then be corroborated with these self-reported methods.

The Complexity of Learning Processes, Theories, and Perspectives for Analysis

Several existing theories explore diverse cognitive activities to identify the essence of how learners approach learning tasks, and these provide perspectives for analyzing individuals' Internet-based knowledge exploration processes. Marton and Säljö (see Marton & Säljö, 1976, 2005), for instance, studied how learners understand text and found that some learners focus on memorizing the text and quantitatively acquiring the information from it, while others attempt to understand the text by connecting ideas or relating them to their lives. They believe that these two types of learners reflect surface and deep levels of processing, respectively. More specifically, learners who adopt the surface approach do not engage in learning actively. They focus on taking in information to complete requirements (i.e., extrinsic motivation) through shallow and minimal interactions with the text. On the other hand, learners who adopt the deep approach engage in active interaction with the text. Their learning is driven by internal interests (i.e., intrinsic motivation) to find out the questions important to them and to seek meaning from the text.

Svensson (1976, 1977) also investigated learners' approaches to processing texts and identified atomistic versus holistic approaches. Atomistic approaches involve local comprehension, such as making specific comparisons within the text and memorizing details without being able to interpret them through a broader context; with holistic approaches, by contrast, learners process the text as a whole, contemplate the author's intention, and comprehend the text based on its larger contexts. These two dichotomous divisions are consistent (Marton & Säljö, 2005). In order to understand a text, readers have to connect, integrate, and synthesize the information based on the author's goals and its embedded contexts.

Vermunt and Vermetten (see Vermunt & Vermetten, 2004; Vermunt, 1996, 1998) further extend the investigation of cognitive processing strategies and take affective, metacognitive, and motivational domains into consideration, the whole of which they call learning style. Some learners in their studies view knowledge as something personal and to be constructed, which reflects the meaning-directed style, with features of understanding and elaborating learning materials, investigating similarities and differences, relating learning materials to reality, regulating learning processes, and holding a personal interest in learning. Some learners consider learning to be the intake of the given information, which shows the reproduction-directed learning style.
They are apt to write down definitions, read aloud without elaboration, memorize and rehearse facts, study discrete information in a predictable order, and be regulated and motivated by external factors (e.g., learning objectives, specific requirements, etc.). Other learners, who demonstrate the application-directed learning style, opt for the practical value of what is learned and contemplate how to apply the acquired knowledge. Still others display the undirected learning style when they cannot regulate their own learning processes well or hold an ambivalent attitude toward learning. Although Vermunt and Vermetten categorize four types of learning style, the meaning-directed and application-directed styles resonate with the deep and holistic approaches, and the reproduction-directed and undirected learning styles align with the surface and atomistic approaches, in Marton and Säljö's and Svensson's dichotomous divisions.

Spiro et al. (1992) also pinpoint the contextually dependent nature of knowledge construction and stress presenting a certain concept in multiple cases in which learners can explore and compare similarities and differences to form a deep understanding of the concept. Attending to the contexts in which information is oriented calls for open-mindedness. Understanding that the presentation of a concept in one situation can differ from others may help learners become more sensitive and tolerant to alternative views, cases, and facts. Following Hare (2003), open-minded learners should also be capable of recognizing their own biases. That is, their self-understandings are provisional and subject to change. Therefore, they are willing to rethink the issue at hand from other perspectives and take different evidence and views into consideration.

As Marton and Säljö disclosed, learners who adopt deep approaches to learning engage in reading actively. Active learners are inquisitive and curious. They generate and explore questions (or inquiries) in their learning processes (Graesser, McMahen, & Johnson, 1994; King, 1994). Asking questions to conceptually understand the text demonstrates deeper levels of interaction with the text, because learners need to interpret, synthesize, and restructure new information to fit their knowledge base. Thus, learners who generate questions are more likely to comprehend reading materials well (Taboada & Guthrie, 2006). Learners who generate and explore questions also demonstrate their internal interest in learning, as opposed to fulfilling external demands.

Based on Svensson's categories, atomistic learners integrate information at local levels, so they may not be able to produce coherent and integrated representations of the text as a whole. Coherent understanding, however, relies on readers' effort to make inferences beyond the text. Thus, inferential processes reflect the deep approach to learning, and research shows a positive correlation between reading comprehension and inference-making skills (Cain, Oakhill, Barnes, & Bryant, 2001).

When individuals use the Internet to explore ill-structured problems, they engage reciprocally in information processing and information searching. The above theories and related studies provide perspectives to conceptualize and measure the complexity levels of information processing.
In general, deep processing reflects the constructivist perspective and refers to an active meaning-making process in which learners derive their own learning goals (i.e., intrinsic motivation), generate and explore their own inquiries, interpret concepts based on their diverse contexts, synthesize and restructure new information, make inferences for coherent understanding, relate new information to the real world (e.g., personal experiences), remain open to counter-views, and recognize their own biases. In contrast, surface processing involves learners' shallow and minimal interactions with the text and aims at reproducing and memorizing the local text. Learning using the Internet, however, is unique in its information searching process, because learners decide what to read. Learners who search for diverse cases and alternatives, open up their exploration to a broader context, and bring in new issues to explore demonstrate an expansive search pattern; whereas learners who search for decontextualized statements, avoid alternatives, and are easily satisfied with what they have learned display a reductive search pattern. Therefore, when directly examining the complexity of learning processes on the Internet, two components should be considered: the depth of information processing and the expansiveness of information searching.

Epistemology-Learning Connections in Prior Studies

The above section presents conceptual perspectives on learning complexity. Prior studies have revealed that personal epistemology relates to some of these perspectives in varying contexts. Compared to their less complex peers, complex thinkers are more likely to (1) benefit from case-based learning environments in which the interconnections among cases are accentuated (Jacobson & Spiro, 1995; Windschitl & Andre, 1998); (2) conceive of themselves as capable of critiquing and assessing web information and of being open to conflicting arguments (Whitmire, 2003); (3) participate in task-oriented Internet communications (Bråten & Strømsø, 2006); and (4) favor learning environments facilitating inquiry and reflective thinking (Tsai & Chuang, 2005). These findings suggest that personal epistemology may link, at least to some extent, to five aspects of learning processes: (1) making connections across multiple texts; (2) evaluating information veracity; (3) being open to alternatives (such as counter-views and counter-examples); (4) valuing individual cases, personal stories, and first-hand experiences; and (5) generating and exploring inquiries or questions.

In addition, learner satisfaction with how well they have learned may also relate to personal epistemology. When individuals use the Internet for knowledge exploration, some are too ready to feel satisfied with what they have learned online, because their goals are set to locate the pertinent information rather than to develop and justify ideas (Mansourian & Ford, 2007). Hypothetically, when facing the same task, less complex thinkers may be more ready to feel satisfied and stop learning, because their internal criteria (about what should be acquired and how much should be acquired) for determining when to stop are low if they assume that knowledge is simple, isolated, certain, contextually independent, uni-dimensional, and obtainable from authorities. As a result, to these less complex thinkers, learning on the Internet means finding and recording all available web information from reputable sources.
Once they believe they have already achieved this goal, they will probably stop learning and feel that they have explored the topic very well (i.e., perceived extent of knowledge exploration) although they have not (i.e., overestimation). This assumption is supported by Schommer's (1990) study, in which the participants who believed that learning happens quickly or not at all demonstrated oversimplified conclusions and low test scores, but overconfidence in their test performance. Moreover, because these less complex learners perceive knowledge to be isolated, certain, and contextually independent, hypothetically, they may be less likely to (1) explore a wide variety of indirectly related issues (because they cannot conceive of their connection to the given topic), (2) propose indecisive conclusions at the end of their learning (because they assume that knowledge is generic and absolute), (3) perceive their learning to be insufficient (because they think that their learning is good enough and there is thus no need to learn more in the future), or (4) establish future learning plans to explore individual cases (because they assume that knowledge is contextually independent). Unfortunately, these assumptions have not been tested in prior studies.

Purposes, Research Questions, and Hypotheses

The purpose of this study was to test the interrelationships between personal epistemology and learning complexity in the context of approaching an ill-structured task using the Google search engine. Specifically, the given ill-structured task asked participants to form and justify their opinion on whether or not genetically modified crops are safe to eat. Two lines of inquiry were examined. First, this study investigated the connections between personal epistemology (i.e., general and task-specific epistemic beliefs, respectively) and learning complexity (explained later). Second, this study explored the effect of a pedagogical intervention – epistemic activation by contemplating prompts prior to learning – on learning complexity or on the epistemology-learning connection. Specifically, four research questions were developed:

1. Is there a connection between general epistemic beliefs and the complexity of participants' knowledge exploration processes (i.e., learning complexity) when working on the given ill-structured task using Google?

2. Is there a connection between task-specific epistemic beliefs and the complexity of participants' knowledge exploration processes (i.e., learning complexity) when working on the given ill-structured task using Google?

3. Is there an impact of activating participants' task-oriented epistemic beliefs prior to learning on the complexity of their knowledge exploration processes (i.e., learning complexity) when working on the given ill-structured task using Google?

4. Is there an impact of activating participants' task-oriented epistemic beliefs prior to learning on the connections between personal epistemology and the complexity of knowledge exploration processes (i.e., learning complexity) when working on the given ill-structured task using Google?

The complexity of participants' knowledge exploration processes (i.e., learning complexity) was collected through three methods:

1. A direct analysis of their learning processes, gauging the depth of information processing and the expansiveness of information searching demonstrated during their learning processes (i.e., ranging from simple reasoning and minimal exploration to deep reasoning and expansive exploration);
2. A survey collecting learner satisfaction with their knowledge exploration (i.e., Are they satisfied with how well they have learned about the given task?), their perceived extent of knowledge exploration (i.e., How thoroughly do they think they have explored the given task?), and their overestimation of learning complexity (i.e., the discrepancy between perceived extent of knowledge exploration and observed learning complexity); and

3. An interview collecting their perceived insufficiency of learning (i.e., the likelihood that participants perceive their learning to be insufficient), their future learning plans (i.e., What do they want to explore in the future to enhance their understanding of the given topic?), indecisiveness and its reasons (i.e., the likelihood of proposing indecisive conclusions for certain reasons), internal criteria determining when to stop knowledge exploration, and the breadth of knowledge exploration.

In addition, the interview was used to understand the role of epistemic activation from the learners' perspective.

It was hypothesized that, compared to less complex epistemic thinkers or to learners who did not receive the task-oriented epistemic activation prior to learning, learners with complex epistemic beliefs or whose epistemic beliefs were activated prior to learning would be more likely to:

1. Explore the given task expansively and engage in deep reasoning (i.e., demonstrate more complex learning processes),

2. Feel dissatisfied with how well they had learned (i.e., less learner satisfaction),

3. Perceive themselves to be less thorough with respect to knowledge exploration (i.e., perceive a lesser extent of knowledge exploration),

4. Underestimate the complexity of their knowledge exploration (i.e., perceive themselves to be less thorough than their actual learning complexity),

5. Perceive their learning to be insufficient (i.e., perceived insufficiency of learning),

6. Propose indecisive and tentative conclusions due to their context-dependency concern (i.e., indecisiveness due to the context-dependency concern),

7. Establish future learning plans to explore empirical studies, individual cases, and views from different stakeholders (i.e., future learning plans),

8. Adopt higher internal criteria for deciding when to stop learning, and

9. Explore broader issues related to the given task (i.e., the breadth of knowledge exploration).

In addition, two more hypotheses were generated:

10. Learners whose task-oriented epistemic beliefs were activated before learning would demonstrate stronger connections between personal epistemology and learning complexity, compared to their peers who did not receive the epistemic activation; and

11. Participants' task-specific epistemic beliefs would connect to learning complexity more strongly than their general epistemic beliefs.

CHAPTER 2
Method

Participants

Fifty-three undergraduate students from a Midwestern university participated in this study voluntarily. The recruiting criteria were: (1) participants must use Google as their primary search engine, (2) English must be their native language, and (3) participants must be undergraduate students. Each participant received $30 in monetary compensation upon completing all tasks. There were 32 females (60.4%) and 21 males (39.6%). Their ages ranged from 18 to 26, with a mean of 20.19 (SD = 1.70). Ten were freshmen (18.9%), 12 were sophomores (22.6%), 21 were juniors (39.6%), and 10 were seniors (18.9%).
Forty-one were Caucasian (non-Hispanic, 77.4%), five were African American (non-Hispanic, 9.4%), one was Hispanic or Latino (1.9%), three were Asian or Pacific Islander (5.7%), and three were biracial or multiracial (5.7%).

Instruments and Materials

Ill-Structured Task

The ill-structured task adopted in this study (see Appendix A) asked participants to explore and research diverse issues on the web to form and validate their own views on whether or not genetically engineered (GE) crops are safe to eat. There was no time limit, and participants could stop whenever they felt satisfied with their learning and confident that their views were well supported. Without an imposed time limit, participants were able to explore the task as thoroughly as they wanted. More complex learners might have higher standards in terms of the depth and expansiveness of their learning. Thus, freeing participants from time restrictions could increase the variability of learning complexity observed in the sample.

Participants were also told that they would answer some questions after they explored the task, to defend their view and to test how well they had learned the topic, but they did not see the questions beforehand. They could take notes while exploring the task if needed, but note-taking was not required.

Inventories Testing General Epistemic Beliefs

The revised Cognitive Flexibility Inventory (CFI; Spiro et al., 1996) and the Organicism-Mechanism Paradigm Inventory (OMPI; Germer, Efran, & Overton, 1982; see Appendix B) were adopted to collect participants' general epistemic beliefs. As argued in the literature review, personal epistemology in this study is assumed to be a holistic construct consisting of several interdependent epistemic dimensions. Targeted at ill-structured knowledge domains, both selected inventories have been validated to test a mindset of interactive epistemic dimensions, with their integrated structure depicting personal epistemology from simplicity to complexity.

The CFI addresses the following epistemic dimensions: (1) the relationship between a system and its parts (i.e., whether or not a system can be analyzed through its independent parts); (2) a concept and its implications (i.e., whether or not a concept should be examined in practice); (3) multiple lenses and perspectives for interpretation (i.e., whether a system can be analyzed through multiple lenses); (4) active versus passive learning processes; (5) knowledge justification (i.e., whether knowledge is acquired through self-construction or by accepting authorities); (6) individuals' preference for complexity; and (7) individuals' tolerance of ambiguity and irregularity. Although the interconnections among these dimensions and the validity of the CFI have been confirmed through factor analysis (see Spiro et al., 1996), its internal consistency and test-retest reliabilities have not been reported.
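For reference, the internal-consistency statistics at issue here (unreported for the CFI, but reported below for the OMPI) can be computed directly from an item-response matrix. The following is a minimal sketch of Cronbach's alpha; the response data are entirely hypothetical and serve only to show the calculation.

    # Minimal sketch of Cronbach's alpha from an item-response matrix.
    # Rows = participants; columns = forced-choice items scored 0/1
    # (1 = the more complex statement was chosen). Data are hypothetical.
    import pandas as pd

    items = pd.DataFrame({
        "item1": [0, 1, 1, 0, 1],
        "item2": [1, 1, 0, 0, 1],
        "item3": [0, 1, 1, 1, 1],
        "item4": [1, 0, 1, 0, 1],
    })

    k = items.shape[1]                        # number of items
    item_variances = items.var(axis=0).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var()  # variance of participants' totals

    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
    print(round(alpha, 2))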
An earlier pilot study involving 11 other undergraduate students confirmed the interpretability (including the accuracy of interpretation) of the revised CFI statements.

Considering that the CFI was initially validated among medical students, who differ from the targeted participants in this study (i.e., undergraduate students of all majors), the OMPI was also used to enhance the validity of the measurement. The OMPI (See Appendix B, part II) includes 26 pairs of forced-choice statements, and participants were asked to identify, in each pair, the statement pertaining to them more closely. The OMPI differentiates individuals as Mechanists or Organicists. At the less complex end of the spectrum, Mechanists are individuals who see reality as stable (i.e., Schommer's certainty-of-knowledge dimension) and isolated (i.e., Schommer's simplicity-of-knowledge dimension), and who view the world as a machine whose parts can be understood separately and whose interactions are systematic and follow a certain law (like an equation) (Pepper, 1942). At the more complex end of the spectrum, in contrast, Organicists can see the tentative and interconnected nature of reality and assume that the world should be understood through constant integration of its parts. Parts of a system have effects on other parts, so a system cannot simply be seen as the sum of all its parts (Pepper, 1942). The statements in italics (See Appendix B, part II) reflect more complex epistemic beliefs (i.e., Organicists' beliefs). The OMPI has satisfactory internal consistency (the Guttman split-half coefficient is 0.86, and the Cronbach's alpha coefficient is 0.76) and satisfactory test-retest reliability (0.77 over a three-week interval; see Johnson, Germer, Efran, & Overton (1988) for more information).

Inventories Testing Task-Specific Epistemic Beliefs

To collect participants' task-specific epistemic beliefs, the statements in the CFI and the OMPI were also revised to fit the context of this study – using the Google search engine to explore the given ill-structured task (See Appendix C, part I and part II). A short description was presented at the beginning of each inventory asking participants to imagine that they were using the Internet to explore the given task while reading through the statements. The format of the task-specific CFI and OMPI was identical to that of the general CFI and OMPI, respectively. There were 13 pairs of conflicting statements in the task-specific CFI and 18 pairs in the task-specific OMPI. Some pairs in the general CFI and OMPI were not rewritten due to the difficulty of adapting them to fit the specific context of this study. The contextualized specification also reduced the variability of the statements; for example, rewriting pairs 13 and 16 of the general OMPI both resulted in pair 11 of the task-specific OMPI. A pilot study (among 11 undergraduate students) was conducted to enhance the clarity of the statements and the accuracy of interpretation. The reliability of these inventories is reported in the Measures and Data Analysis section below. To avoid the instruments' priming effect, the task-specific CFI and OMPI were completed two weeks before participants explored the task.

Epistemic Prompts

As discussed in the literature, learners who are more cognizant of their epistemic beliefs may approach learning tasks differently, but learners cannot spontaneously activate all dimensions of their epistemic beliefs during Internet-based learning (Mason et al., 2010).
To test whether or not learners' awareness of their epistemic beliefs can lead to a more complex learning process (i.e., research question 3) or to a stronger epistemology-learning relationship (i.e., research question 4), 27 randomly selected participants in this study received five prompts (see Appendix D) immediately before learning to activate diverse dimensions of their task-oriented epistemic beliefs. While these 27 participants were working on the prompts, the other 26 participants completed vocabulary tests so that all participants spent an equal amount of time before exploring the given task online.

The prompts comprised five hypothetical scenarios that participants might encounter when exploring the given ill-structured task. The prompts were composed using specific scenarios pertinent to the given task to help participants think concretely and to increase the impact of the epistemic activation on learning (if any). Before working on these prompts, participants were informed that the purpose of contemplating them was to help prepare their minds for the upcoming task, rather than to test how well they could answer the questions or how much they already knew about the issue. Participants were required to respond to all prompts and submitted their responses online. Their responses were not analyzed, because this study aimed to examine the effect of the prompts, not how participants answered them.

The prompts were generated based on two rules: (1) the prompts should cover diverse epistemic dimensions so that participants' epistemic beliefs can be activated comprehensively; and (2) the prompts need to avoid disclosing expected answers, because their focus is to activate (i.e., enhance self-awareness of), rather than to change, participants' epistemic beliefs. As introduced in the literature, personal epistemology is a construct embracing several interdependent dimensions. Although scholars have proposed differing epistemic dimensions in their own frameworks (e.g., Jehng et al., 1993; Schommer, 1990; Wood & Kardash, 2002), the four-dimensional framework synthesized by Hofer and Pintrich (1997) has been widely used in the field. These four dimensions are: certainty of knowledge, simplicity of knowledge, source of knowledge, and knowledge justification. In addition, Spiro et al. (1992) stressed the context-dependent nature of knowledge. Therefore, five questions were developed to prompt these epistemic dimensions.

Specifically, the first question in the prompts (see Appendix D) focuses on activating the context-dependence-of-knowledge dimension. Participants who selected to search for or read summaries instead of individual cases demonstrated their insensitivity to contexts in knowledge exploration. Participants' answers to questions two and four may address the simplicity, certainty, and context-dependence of knowledge dimensions: participants who conceived of the co-existence of alternative views and counter-facts could activate their epistemic beliefs that knowledge is tentative, subjective, and interconnected. Question three focuses on recalling learners' strategies for evaluating information veracity.
If participants only report strategies for evaluating the authority of web information (e.g., trusting websites from .gov domains or the opinions of scientists and experts) without thinking about strategies to assess its content (e.g., whether or not a study is well designed, its logical soundness, the sufficiency of back-up evidence, etc.), it reveals their underlying epistemic belief that knowledge exists externally, not within themselves. Thus, this question addresses the source-of-knowledge dimension. Finally, questions one and five encourage students to think about how to approach the given task and justify their views (i.e., contemplating the nature of knowing). The epistemic belief that knowing is a process of accepting authoritative information is activated if participants plan to collect information from trustworthy websites or to decide their view based on the expert opinions shown online. Participants who are aware that knowing means constructing meanings from the text are more likely to investigate the interconnections among individual cases and examine contextual meanings. Thus, these two questions address the knowledge-justification dimension. The pilot study was conducted to ensure that participants' responses could cover these epistemic dimensions.

Post Survey

Participants completed a post survey (See Appendix E) immediately after they explored the given task using the Internet. The post survey included items for three constructs: 1) perceived effort investment in knowledge exploration processes; 2) learner satisfaction; and 3) perceived extent of knowledge exploration.

Effort investment. Participants' effort investment in learning processes may influence their learning satisfaction, their perceived extent of knowledge exploration, or the observed learning complexity. Thus, this construct was treated as a covariate, and it was collected through three 7-point Likert-scale items in the post survey (See Appendix E, part I, items 1-3).

Learner satisfaction. Participants' satisfaction with their knowledge exploration processes (i.e., were they satisfied with how well they had learned about the given task?) was collected through five 7-point Likert-scale items (See Appendix E, part I, items 4-8) in the post survey.

Perceived extent of knowledge exploration. Participants' perceived extent of their knowledge exploration processes ranges from surface processing and minimal exploration to deep processing and expansive exploration. It was collected through twenty-three 5-point Likert-scale items (See Appendix E, part II). These 23 items were written based on (1) the elaboration and match dimensions in Wu and Tsai's (2007) Information Commitment Survey; and (2) the theories depicting the conceptual meanings of learning complexity (e.g., Marton & Säljö, 1976, 2005; Vermunt & Vermetten, 2004; Spiro et al., 1992; Hare, 2003). Specifically, the elaboration dimension in Wu and Tsai's (2007) Information Commitment Survey measures the degree to which learners integrate web information during learning, whereas the match dimension reflects the extent to which learners focus on finding the most relevant information efficiently. Thus, they reflect expansive and minimal exploration, respectively. The internal consistencies of these two dimensions are 0.84 and 0.72, respectively. In addition, the literature suggests other dimensions that can depict the complexity of knowledge exploration processes, such as sensitivity to contexts, flexibility of reasoning, and meaning construction.
Therefore, more items were composed and added to the survey. Because the majority of the items in the post survey were newly composed, all items were assessed in the pilot study to ensure their clarity and the accuracy of interpretation. The internal consistencies are reported in the Measures and Data Analysis section.

Prior Content Knowledge Test

Learners' prior content knowledge can help with query selection and may affect how learners approach a task (e.g., Palmquist & Kim, 2000; Wildemuth, 2004). Therefore, fourteen true-or-false questions and an open-ended question were composed to test participants' prior content knowledge about the given topic (See Appendix F), and this construct was treated as a covariate in the statistical analyses.

Verbal Comprehension Test

Learners' verbal comprehension abilities (i.e., an individual's ability to understand the English language; French, Ekstrom, & Price, 1963) relate to their reading comprehension (Qian, 2002). Thus, an 8-minute version of Advanced Vocabulary Tests I and II (36 items in total) from the Kit of Factor-Referenced Cognitive Tests (Ekstrom, French, & Harman, 1976) was used to measure participants' verbal comprehension abilities. The tests have been widely used in other studies (e.g., Saccuzzo, Craig, Johnson, & Larson, 1996; Visser, Ashton, & Vernon, 2006), but only a few of these reported reliabilities: the Cronbach's alpha for Test I was 0.53 in Barchard's (2003) study, and the split-half reliability (corrected with the Spearman-Brown prophecy formula) for Test II was 0.89 in Hirumi and Bowers' (1991) study.

Video Clips

While exploring the given task online, participants were asked to verbalize their thoughts. Participants practiced this think-aloud method before they explored the task (see Appendix I). These think-aloud protocols and participants' knowledge exploration processes were recorded and saved as video clips (in the .wmv format). It is possible that participants who spent more time exploring the task online felt more satisfied with the thoroughness of their knowledge exploration, and had greater opportunities to think in more sophisticated ways and to enact expansive searches on diverse issues. Thus, learning time was another covariate in this study.

Interview

After exploring the task, participants were interviewed for three main reasons: 1) the interview data were examined to corroborate the results from direct analyses of participants' knowledge exploration processes (See Appendix G, questions 4-9); 2) the interview protocols helped examine how (if at all) the activation prompts influenced participants' exploration of the task from their own perspectives (See question 12); and 3) the interview solicited other variables measuring learning complexity, which may relate to personal epistemology. These variables included: the likelihood that participants perceived their learning to be insufficient (see question 1), their future learning plans (see question 2), the indecisiveness of their conclusions on food safety and its reasons (see question 3), their internal criteria determining when to stop learning (see question 10), and the breadth of knowledge exploration (see question 11).

Design and Procedures

Treatment, general epistemic beliefs, and task-specific epistemic beliefs are the three variables whose interactions with learning complexity were of primary interest in this study.
Participants were randomly assigned to two groups: non-activation (the control group) and activation (the treatment group) of task-oriented epistemic beliefs through prompts prior to learning. Each participant completed two lab sessions (See Table 1). During the first session, all participants completed the prior content knowledge test and the inventories measuring task-specific epistemic beliefs (i.e., the task-specific CFI and OMPI). Participants in the activation group also completed the vocabulary test measuring their verbal comprehension abilities. Training on Google search techniques (See Appendix H) was provided at the end of this session to all participants to reduce individual differences in search efficiency (Palmquist & Kim, 2000). Considering the possibility that the inventories for task-specific epistemic beliefs might activate participants' epistemic metacognition (and thus affect their knowledge exploration processes), the learning session was scheduled two weeks later.

During the second session, all participants reviewed the Google search techniques and then practiced the think-aloud method (See Appendix I for the instructions). The participants in the activation group were then exposed to the task and the epistemic prompts. They were asked to contemplate and respond to these prompts online; no question could be skipped. There was no time limit for responding to these prompts, but the estimated time was 20 minutes based on the pilot study. Then they started exploring the task. To control for differences in experimental time between groups, while participants in the activation group worked on the activation prompts, the participants in the non-activation group completed two sets of vocabulary tests before exploring the given task; yet only the 8-minute version of Advanced Vocabulary Tests I and II, completed by all participants, was graded. When participants explored the given task, they were asked to think aloud throughout learning. Their learning processes and think-aloud protocols were recorded for subsequent analyses. It was also stressed to participants that there was no time limit for exploring the task. Upon finishing the task, all participants completed the post survey on effort investment, satisfaction, and perceived extent of knowledge exploration; were interviewed; and completed the two inventories measuring general epistemic beliefs.

Table 1
Design and Procedures of the Study

Session 1 (35-50 mins)
  Non-Activation: 1. Testing prior content knowledge; 2. Testing task-specific epistemic beliefs; 3. Training on basic Google techniques.
  Activation: 1. Testing prior content knowledge; 2. Testing task-specific epistemic beliefs; 3. Testing verbal comprehension; 4. Training on basic Google techniques.

Session 2 (two weeks later, 2.5 hours)
  Non-Activation: 1. Reviewing the techniques to locate web information; 2. Practicing the think-aloud method; 3. Testing verbal comprehension; 4. Exploring the given task; 5. Completing the post survey; 6. Conducting an interview; 7. Testing general epistemic beliefs.
  Activation: 1. Reviewing the techniques to locate web information; 2. Practicing the think-aloud method; 3. Completing epistemic prompts; 4. Exploring the given task; 5. Completing the post survey; 6. Conducting an interview; 7. Testing general epistemic beliefs.
Measures and Data Analysis

To examine the connection between learning and personal epistemology, hierarchical regression analyses were conducted by regressing each variable measuring learning complexity (i.e., the dependent variables) on the covariates and the variables related to personal epistemology (i.e., the independent variables, or predictors) in two steps (step 1, main effects; step 2, interactive effects; more details below). This section introduces the procedures used to quantify all variables and to analyze the descriptive data generated from the interview, and ends with a description of the statistical analyses addressing the research questions.

Quantifying Independent Variables

General and task-specific epistemic beliefs. One of the main interests of this study was to test the connections of general and task-specific epistemic beliefs to the different variables measuring learning complexity (i.e., research questions 1 and 2). There were four inventories testing personal epistemology: general epistemic beliefs tested by the CFI (GEB(CFI), see Appendix B, part I), general epistemic beliefs tested by the OMPI (GEB(OMPI), see Appendix B, part II), task-specific epistemic beliefs tested by the CFI (TSEB(CFI), see Appendix C, part I), and task-specific epistemic beliefs tested by the OMPI (TSEB(OMPI), see Appendix C, part II). In these four inventories, the statements in italics reflect more complex epistemic beliefs. The items in each inventory were averaged to yield the corresponding epistemic score. Thus, four epistemic scores were generated for each participant: the GEB(CFI) score, the GEB(OMPI) score, the TSEB(CFI) score, and the TSEB(OMPI) score. The GEB(CFI) and TSEB(CFI) scores ranged from one to six (because each item in the CFI was composed on a 6-point Likert scale), whereas the GEB(OMPI) and TSEB(OMPI) scores ranged from zero to one (because each item in the OMPI was a forced selection between two statements). Higher scores indicated more complex epistemic beliefs. Means and standard deviations of the raw epistemic scores are shown in Table 12. Because the different epistemic inventories yielded epistemic scores on different scales (e.g., a 1-6 scale or a 0-1 scale), it was hard to compare the consistency of epistemic beliefs within each participant across inventories. Therefore, all epistemic scores were converted to standardized z-scores. There were no missing data, because participants' responses were checked immediately after they completed the inventories.

The Cronbach's alpha of each inventory was calculated. Results showed that if item 19 in the general OMPI, items 1, 2, 3, 13, and 15 in the task-specific OMPI, and items 1, 4, 5, 7, and 9 in the task-specific CFI were eliminated, the internal consistency of these three inventories would increase substantially. Thus, these items were excluded when calculating the corresponding epistemic scores. The internal consistencies (Cronbach's alpha) of the remaining items in the CFI measuring general epistemic beliefs (14 items), the OMPI measuring general epistemic beliefs (25 items), the CFI measuring task-specific epistemic beliefs (8 items), and the OMPI measuring task-specific epistemic beliefs (13 items) were 0.69, 0.63, 0.62, and 0.50, respectively.
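To make the scoring steps above concrete, here is a minimal sketch, not the author's actual analysis code, of how an inventory could be averaged, standardized, and checked for internal consistency. The function and column names (score_inventory, cfi_1, and so on) are hypothetical, the data are simulated, and the drop argument mirrors the item-elimination step just described.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a participants-by-items matrix."""
    k = items.shape[1]
    item_variances = items.var(ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def score_inventory(items: pd.DataFrame, drop: tuple = ()) -> pd.Series:
    """Average the retained items, then standardize to z-scores."""
    kept = items.drop(columns=list(drop))
    raw = kept.mean(axis=1)                 # raw scale: 1-6 (CFI) or 0-1 (OMPI)
    return (raw - raw.mean()) / raw.std(ddof=1)

# Simulated responses: 53 participants, 14 CFI pairs on a 6-point scale.
rng = np.random.default_rng(0)
geb_cfi = pd.DataFrame(rng.integers(1, 7, size=(53, 14)),
                       columns=[f"cfi_{i}" for i in range(1, 15)])
print(round(cronbach_alpha(geb_cfi), 2))    # alpha of the full item set
geb_cfi_z = score_inventory(geb_cfi)        # standardized GEB(CFI) scores
```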
Group. Participants were randomly assigned to two groups: activation (treatment) and non-activation (control). The 27 participants in the activation group contemplated the prompts prior to learning to activate their task-oriented epistemic beliefs. Meanwhile, the remaining 26 participants in the non-activation group worked on the vocabulary tests, and thus their task-oriented epistemic beliefs were not activated before learning. A dichotomous group variable was constructed to reflect the two groups (0 – non-activation, 1 – activation), and its connection to the different variables measuring learning complexity (i.e., the dependent variables) was the main interest of this study (i.e., research question 3).

Group-epistemology interactions. This study also focused on examining the impact of epistemic activation on the epistemology-learning connection (i.e., research question 4). Thus, two interaction terms were constructed: the interaction between general epistemic beliefs and the group variable, and the interaction between task-specific epistemic beliefs and the group variable. Their relationships to each dependent variable measuring learning complexity were investigated. When constructing these interaction terms, only the z-scores of epistemic beliefs were used and entered into the regression models, in order to reduce multicollinearity (Reinard, 2006, p. 389). The relationship between the interaction terms and the different variables measuring learning complexity (i.e., the dependent variables), then, reveals whether or not epistemic activation can affect the epistemology-learning connection.

Quantifying Covariates

Prior content knowledge. Collected from the prior content knowledge test, the prior content knowledge score was the total number of correct responses to the true-or-false questions plus the number of correct concepts in participants' responses to the open-ended question (See Appendix F, question 15). The mean and standard deviation of this variable were 7.19 and 2.74, respectively. The split-half reliability with the Spearman-Brown correction for the true-or-false questions was .51.

Verbal comprehension abilities. The number of correct responses to the Advanced Vocabulary Tests I and II (36 items) from the Kit of Factor-Referenced Cognitive Tests (Ekstrom, French, & Harman, 1976) was calculated to reflect participants' verbal comprehension abilities. The mean and standard deviation of this variable were 16.15 and 5.52, respectively. The split-half reliability (i.e., the correlation between the two tests) was .73.

Effort investment. Participants' perceived effort investment in learning processes was collected through the post survey. The items (See Appendix E, part I, items 1-3) were averaged, and the averaged score ranged from 1 to 7, with higher scores indicating greater perceived effort investment. The mean and standard deviation were 6.06 and 0.58, respectively. The internal consistency of these three items (Cronbach's alpha) was .69.

Learning time. On average, participants spent 70.87 minutes (SD = 26.17) exploring the given task.

Because the original scales of the four covariates differed, it is more convenient to compare participants on a common scale. Therefore, these raw scores were converted to z-scores and entered into the regression models as predictors. Cohen, Cohen, West, and Aiken (2003) also recommended centering continuous predictors when a regression analysis contains interactions, in order to reduce multicollinearity.
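The two-step structure described above can be written compactly with statsmodels. The sketch below is a hypothetical illustration on simulated data, with invented variable names such as tseb_z and complexity; it is not the analysis script used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 53
df = pd.DataFrame({
    "complexity": rng.normal(size=n),       # one learning complexity DV
    "prior_z": rng.normal(size=n),          # z-scored covariates
    "verbal_z": rng.normal(size=n),
    "effort_z": rng.normal(size=n),
    "time_z": rng.normal(size=n),
    "tseb_z": rng.normal(size=n),           # z-scored epistemic beliefs
    "group": rng.integers(0, 2, size=n),    # 0 = non-activation, 1 = activation
})

# Step 1: covariates plus the main effects of epistemic beliefs and group.
step1 = smf.ols("complexity ~ prior_z + verbal_z + effort_z + time_z"
                " + tseb_z + group", data=df).fit()

# Step 2: add the group-by-epistemology interaction, built from z-scores
# to reduce multicollinearity (Cohen, Cohen, West, & Aiken, 2003).
df["tseb_x_group"] = df["tseb_z"] * df["group"]
step2 = smf.ols("complexity ~ prior_z + verbal_z + effort_z + time_z"
                " + tseb_z + group + tseb_x_group", data=df).fit()

print(step2.rsquared - step1.rsquared)      # R-squared change at step 2
print(step2.params["tseb_x_group"])         # moderation by epistemic activation
```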
Quantifying Dependent Variables – Learning Complexity Measured through Direct Analysis

The dependent variables – the complexity of knowledge exploration – included (1) learning complexity measured through direct analysis of participants' learning processes (i.e., analyzing the video clips); this observed learning complexity construct was quantified into several dimensional scores as well as an integrated complexity score (see details below); and (2) learning complexity measured through self-reported methods (i.e., the interview and survey methods). This section introduces how the observed learning complexity was quantified (i.e., by analyzing the video clips), and the next section discusses how the self-reported data were analyzed.

Development of coding categories. The coding categories measuring the complexity of participants' learning processes were generated through both top-down and bottom-up procedures (Chi, 1997).

In the top-down procedure, preliminary codes were generated based on the existing theories introduced in the literature review. Learning with deep levels of processing has the following characteristics (i.e., preliminary codes): 1) connecting new ideas (e.g., comparing and synthesizing); 2) relating new ideas to reality (e.g., evaluating their implications and practical values); 3) intrinsic motivation (e.g., internal interest in learning); 4) comprehending the text through its larger contexts (e.g., contemplating the author's intention, making inferences, etc.); 5) flexibility of thinking (e.g., attending to contexts, being open to alternatives, recognizing self-biases, rethinking, etc.); and 6) inquisitiveness and curiosity (e.g., generating and exploring questions). Learning with surface levels of processing, on the other hand, has the following characteristics: 1) memorizing the text (e.g., writing down definitions); 2) quantitatively acquiring information from the text (i.e., accumulating as much information as possible without synthesizing); 3) extrinsic motivation (i.e., learning to complete requirements); and 4) local comprehension (e.g., failing to interpret an idea through its broader contexts, studying discrete cases or pieces of information without making connections, etc.).

Knowledge exploration using open-ended Internet resources also includes an information searching component, which refers to learners' behavioral reactions to their information processing, such as searching for specific topics, ceasing to read a certain webpage, or clicking on a hyperlink. Behavioral reactions are derived from learners' processing of previous or current reading as well as their prior knowledge, and learners need to process the new web information once a behavioral reaction is enacted. Thus, the information processing and information searching procedures are reciprocal. Yet this study adopted this artificial division, because the current literature has not established perspectives for analyzing the information searching procedure in terms of its expansiveness, which is unique to the Internet-based open-ended learning environment. At this stage, these preliminary codes were very general and ambiguous, and relied on the bottom-up procedure to finalize their embodied meanings and examples.

In the bottom-up procedure, open-coding techniques (Strauss & Corbin, 1990) were adopted to analyze a sample of the video clips recording participants' learning processes.
To select a sample, participants were categorized as either complex or less complex thinkers based on each of the four epistemic scores (i.e., the general CFI and OMPI scores, and the task-specific CFI and OMPI scores), using their medians as cut-off points. Then five participants were randomly selected from those categorized consistently as complex thinkers on all four inventories, and another five were randomly selected from those categorized consistently as less complex thinkers. This sampling strategy maintained individual differences and thus preserved the variability of learning complexity.

During the open coding procedure, the video clips were transcribed using the Transana software. Transcripts reflecting the constructs of interest – the depth of information processing and the expansiveness of information searching – were kept and described, resulting in a long list of ideas. A comparative analysis (Strauss & Corbin, 1990) was conducted by contrasting individual ideas against the rest, aiming to combine similar ideas into broader categories. Categories generated through this process were also compared with the preliminary codes developed from the top-down procedure. The final coding categories are shown in Table 2.

Table 2
Coding Categories to Analyze the Complexity of Knowledge Exploration Processes

Connection
  Information processing: Recall-information, Recall-prior-knowledge, Compare, Synthesize
  Information searching: Hyperlinks-within-text, Connect-through-reference

Flexibility
  Information processing: Investigate-contextual-meanings, Rethink, Provisional-understanding*, Intolerance-of-ambiguity*, Tolerance-of-ambiguity*
  Information searching: Case-avoided*, Case-pursued*, Alternative-pursued*, Alternative-avoided*, Biased-argument*

Critical Analysis of Web Information
  Critical-analysis-Sources, Critical-analysis-Recentness, Critical-analysis-Content-references, Critical-analysis-Content-triangulation, Critical-analysis-Content-writing, Critical-analysis-Content-reasoning, Critical-analysis-Universal-bias*, Critical-analysis-Reason-to-read, Critical-analysis-Remind-bias

Novelty
  Bring-in-new-ideas, Make-inferences, Generate-new-inquiry

Engagement
  Notes-for-exploring, Notes-for-recording, Generate-new-inquiry, Identify-issues-to-explore, Explore-inquiry-and-issues, Outcome-goal*, Internal-interest*

Note. Codes in italics (in the original table) indicate less complex learning processes (i.e., simple reasoning or minimal exploration). * indicates codes scored for the qualitative difference (see Scoring for details).
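For readers who want to work with the scheme programmatically, the coding categories in Table 2 lend themselves to a simple data structure. The sketch below is a hypothetical encoding (two dimensions shown; the names CODING_SCHEME and QUALITATIVE_CODES are invented for illustration), not part of the study's materials.

```python
# A hypothetical encoding of part of Table 2: each dimension maps to its
# codes, and the asterisked codes are flagged as qualitative (scored 0/1
# rather than by frequency; see the Scoring discussion later in this section).
CODING_SCHEME = {
    "connection": [
        "Recall-information", "Recall-prior-knowledge", "Compare",
        "Synthesize", "Hyperlinks-within-text", "Connect-through-reference",
    ],
    "novelty": [
        "Bring-in-new-ideas", "Make-inferences", "Generate-new-inquiry",
    ],
}

QUALITATIVE_CODES = {  # the asterisked codes in Table 2
    "Provisional-understanding", "Intolerance-of-ambiguity",
    "Tolerance-of-ambiguity", "Case-avoided", "Case-pursued",
    "Alternative-pursued", "Alternative-avoided", "Biased-argument",
    "Critical-analysis-Universal-bias", "Outcome-goal", "Internal-interest",
}
```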
Interpretation of the coding categories. In Table 2, the codes are grouped based on their conceptual meanings. When learners use open-ended Internet resources to explore an ill-structured task, the complexity of their knowledge exploration processes can be measured through five dimensions.

First, the connection dimension resonates with Marton and Säljö's and Vermunt and Vermetten's arguments that deep learning approaches involve learners' efforts to connect new ideas and to relate new ideas to real life. Six codes were included in this dimension; Table 3 lists their definitions and examples. When processing the text, participants in the sample recalled their prior knowledge or previously acquired web information. Yet the more advanced skills for connecting ideas were comparison and synthesis, which demonstrated participants' efforts to understand the text and to integrate new information into their own knowledge structures. To respond to the text they were reading, some participants solicited (i.e., searched for) more resources through hyperlinks or references provided by the author(s). Thus, from one webpage, these participants connected to more web pages and opened up their inquiry to broader issues, which demonstrated an expansive information searching pattern. All six codes in this dimension, then, reflect complex learning processes.

Table 3
Codes, Definitions, and Examples of the Connection Dimension

Recall-information
Definition: Participants recalled the web information they had read or the views they had proposed previously.
Example: "What are the potential dangers of GM foods?" [Reading...] I am going to put "toxic" again and put two "**" [on my notes], because this is the second time I've heard about it.

Recall-prior-knowledge
Definition: Participants recalled a specific piece of prior knowledge or personal experiences related to what they were reading.
Example: I don't believe it a hundred percent, but there should be some truth to it. Maybe not all the food losing nutritional values, but …I've seen oranges, and pick them up on the tree, and they are green, but if you take them into store, they are really orange.

Compare
Definition: Participants compared new information with the information they had acquired online, or compared new information with their existing prior knowledge, in order to understand the new information.
Example: I am thinking that obviously there is another group that is opinionated on the matter, although I like their presentation a little bit more. Greenpeace has a better website. This is an ugly website, but they aren't as propaganda-ist and they are displaying their information I guess. They are just telling you what they found, why they don't support it; whereas Greenpeace is very, almost attacking the people who do support it.

Synthesize
Definition: Participants related a new concept to the concept(s) they had acquired online or already knew so that they could structure/reorganize it into their knowledge base.
Example: "However, there are GMOs produced naturally" I don't know what that means. Unless they are talking about what I wrote down earlier - selective breeding. But I don't know because my understanding is it [selective breeding] did not follow under the category of genetic modification.

Hyperlinks-within-text
Definition: Participants clicked on the hyperlinks embedded in the text of a web page (not the ones in navigation bars, content tables, or references).
Example: [Reading the Wikipedia page, "Genetically Modified Food," ...] Click on "transgenic plant" [a hyperlink in the text], because I don't know what that means.

Connect-through-reference
Definition: Participants used the current website, or planned to use a certain website, to solicit more resources.
Example: [Googling "genetically modified food + wiki"] This way I can check and see what Wikipedia says. Probably not going to write down the stuff I have, but just scroll through real fast and then to check their sources, and figure out which one they are getting the most information from and then go there.

The second dimension, flexibility, echoes the idea put forth by Spiro et al. (1992) on flexible knowledge assembly and the discussion by Hare (2003) on open-mindedness. This dimension included 10 codes (see Table 2), whose definitions and examples are shown in Table 4. Based on observation of their learning processes, some participants demonstrated efforts to investigate the contextual meanings of the arguments they were reading. They frequently asked themselves questions like "What is the author's purpose in proposing this argument? Can the result be generalized to other populations? What are these stakeholders' perspectives?" They attempted to interpret issues through multiple lenses and proposed questions such as "What does it mean to politicians, scientists, and consumers? Is this issue different if it is viewed through other perspectives?" Because they attended to the role of contexts, they were more likely to search for specific cases, alternatives, or even biased arguments; to tolerate ambiguous conclusions to a larger extent; to realize that their understandings were tentative and biased; and to expect ambiguities. In short, this dimension consisted of five types of cognitive activities and five types of behavioral reactions emerging from the analysis of participants' information processing and searching procedures, respectively.

Table 4
Codes, Definitions, and Examples of the Flexibility Dimension

Investigate-contextual-meanings
Definition: Participants' reasoning focused on contextual features of the argument(s) that they were reading online (e.g., the author's perspective, implications, generalizability, etc.).
Example: So then the next paragraph, "In 2010, another experiment…" So it [GE corns] causes liver, kidney damage in mammals. Then they said that the reanalysis of the experiment, funded by the company developing the GMO, and then obviously, they are going to say that the study is flawed. ... So now I am thinking how that will affect the safety issues. Just to see how hard it is to get a clear answer. So it will be important to see the companies, like anything we read, who fund the research, who is publishing it, and the stuff like that.

Provisional-understanding
Definition: Participants considered their understanding of the text or their opinions provisional, tentative, limited, or even biased; or participants discredited an absolute statement or opinion they read online.
Example 1: So I mean based on what I know, and what I am reading here, I am going to tentatively conclude that GE is not a problem.
Example 2: "To date, no adverse effects attributed to GE have been document in the human population" Pretty bold outright statement! That makes me feel a little bit discredit that. When people put ultimate statement saying "This is right. This is bad. This is good," I am a lot more data-based, fact-based. This is what the data show, so may indicate bla bla bla, so need to be modest.

Rethink
Definition: Participants rethought web information through lenses different from the text, or, triggered by the text they were reading, they rethought a previous issue through a different perspective.
Example: Basically I am going to write down, I think the issue is, so efficiency of production is in relation to two things: you can either see an efficiency of production as a positive, because it could help farmers and increase yields, and then I can see it as a negative because they may do this and not pay attention to the health risks.

Intolerance-of-ambiguity
Definition: Participants assumed or searched for a clear answer, or diminished the value of arguments that did not provide clear-cut answers.
Example: Looking at "genetically modified plants and human health" Then I saw the short intro[duction] saying "Effect of diets containing genetically modified potatoes expressing..." I thought it is interesting, so I will click on that. Because it seems that they can show you [me] yes or no answer.

Tolerance-of-ambiguity
Definition: Participants were tolerant of the ambiguous nature of what they were reading.
Example: Conclusion, [reading...] Basically not saying anything. That makes sense. It is from government, so they don't want to make any clear conclusions. Just the both sides of the conclusions, not completely useless, because I get some sides and general ideas and concerns.

Alternative-pursued
Definition: Participants searched for, selected to read, or read alternative arguments or cases (e.g., counterarguments, arguments from different stakeholders).
Example: [On the search result page, reading the links to the scholarly articles.] Let's click on the third one, because it is going to criticize the risk assessment. So far from what I've got, it says that they are fine with risk assessment, but I think this link is going to say that the risk assessment is not good enough.

Alternative-avoided
Definition: Participants avoided information contradicting their existing view.
Example: Anything that makes my argument looks better, I will copy down. And leave the stuff that makes a little [bit] bad, because there are always two sides of the story no matter what.

Biased-argument
Definition: Participants were interested in reading biased opinions or arguments (even though they were aware of their biases), or they recognized the value of reading biased arguments.
Example: [On the Google search result page, reading the choices…] "Say no to genetic engineering" from Greenpeace. …So they were against GE. Maybe that could be a good source. … They will be biased, but maybe they did some research. Just like the company did research too, but they would be biased as well. I want to see why they are against the GE.

Case-avoided
Definition: Participants skipped specific examples, cases, or studies unless they had other reasons (e.g., repetitive cases, unreliable cases, etc.).
Example: [Reading "unexpected effects are common in genetically engineered organisms"...] I will keep scrolling...so I don't think I would get into individual examples, like what this happens to potatoes.

Case-pursued
Definition: Participants looked for specific cases (e.g., scientific studies, personal stories, news reports, etc.).
Example: OK. So I don't see what types of food causing this [allergy] …and I still feel like this whole bacterium issue might just be related to food poisoning, rather than GE foods specifically. I'd like to find a very specific case, very specifically of genetic engineering directly related to allergy.

The third dimension, critical analysis of web information, is more unique to the Internet-based learning environment. In traditional learning environments, the learning materials are censored by authorities to some extent (e.g., teachers select textbooks, editors review publications, etc.), but anyone can write online. Therefore, evaluating the veracity of web information, instead of trusting everything being read, reflects deeper levels of information processing. This dimension included nine codes, whose definitions and examples are shown in Table 5. Participants in the sample demonstrated three main strategies to evaluate information quality: source, recentness, and content. Most simply, participants knew to use at least basic techniques, such as the URLs, organizations, the identity of authors, and the recentness of web information, to decide how likely they could trust the information.
More advanced strategies, however, refer to participants' evaluations of the content per se, such as writing levels, the existence and soundness of evidence, data triangulation, the flow of arguments, and logical soundness. Moreover, a few participants evaluated information veracity less mechanically. They took their learning goals into consideration and read what they needed (i.e., the Critical-analysis-Reason-to-read code), even though it might be biased. For example, some participants would not read Wikipedia at all even when they had run out of resources, because they had been told that Wikipedia was unreliable and contained misinformation. Other participants, however, treated Wikipedia as a starting point to solicit more resources or to extract potential issues that they could explore, while remaining aware of the possibility of inaccurate information. Thus, reading biased information to achieve their learning goals while keeping that bias in mind demonstrated a higher level of veracity judgment. At the most complex end of the spectrum, a few participants argued that no information was completely unbiased: the universal bias was unavoidable.

Table 5
Codes, Definitions, and Examples of the Critical Analysis of Web Information Dimension

Critical-analysis-Sources
Definition: Participants evaluated the veracity of web information based on the identity of its authors, the URLs, the organizations, or the original websites from which the information was derived, and on scholarly articles.
Example 1: I will start off by doing Google scholar, because …I trust scholarly article[s] more than just trust someone who is not a doctor, someone who is not qualified to be doing research.
Example 2: [Googling "royal society of medicine press"] I just want to see their actual website, because I want to make sure things are credible.

Critical-analysis-Recentness
Definition: Participants evaluated the veracity of information by checking the recentness of the text posted online.
Example: "Genetic engineering can cause unexpected mutations in an organism" this makes sense. But again this is from 1995, so makes me wonder how credible this source is if it is 15 years from now.

Critical-analysis-Content-references
Definition: Participants judged the veracity of arguments by checking whether they had citations.
Example: But the same time, …because this website…is so clear and concise, I feel that it is not reliable. Because it is not showing studies or anything, it is just saying things. It is showing that there are links [references] to these, but I feel that if it is definitely reliable to me, the study will be online. They would actually be showing me these on the site.

Critical-analysis-Content-triangulation
Definition: Participants judged, or intended to judge, the veracity or the importance of arguments by evaluating their reoccurrence during learning.
Example: I am pretty convinced. So far, it is not on human… but obviously, a lot of testing for many different products is … on animals, specifically rats, and the effect that I am seeing here and also on Wikipedia which I believe these two to be pretty reliable sources, especially since the whole rats study was from another source.

Critical-analysis-Content-writing
Definition: Participants evaluated the veracity of the web information based on its writing level, tone, or layout.
Example 1: So this is …"Mother for natural law." That's what the title of this webpage says, so sounds very environmental.
Example 2: Right away, the first thing I noticed is the writing level. …this is written at about a 3rd grade level. …so I kind of start to disregard something like this.

Critical-analysis-Content-reasoning
Definition: Participants evaluated the veracity and reasonableness of the web information through logical and concrete reasoning (e.g., investigating the underlying agenda, backing up the new information, judging the logical soundness of the author's arguments, etc.).
Example 1: "We believe" [then reading its content...]. I like to hear people's opinions, but this really did not change my opinion, because they did not … give me the facts.
Example 2: [Reading "increased cancer risks"...] Don't really like that. I kind of think that people jump to the speculation that cancer can be formed from anything. …so it seems that they are just speculating that, but they don't have specific examples.
Example 3: Keep reading...that kinds of confuse me because the argument before is that rats cannot digest the food [then human may not digest the food], so that's why GM food is unsafe. But now it is saying GM peas are no harmful on animals, but they could be harmful on humans, so the logic doesn't apply across species consistently.

Critical-analysis-Universal-bias
Definition: Participants assumed that all web information had its own agenda, or participants applauded the authors who admitted their own bias.
Example 1: Keep reading...It [the article] admitted it has bias. That is always good.
Example 2: [Reading ...] OK. If ICSU and WHO are behind it, it has more validity, but they also have their own agenda. People are very political, …so could not trust them completely. No person is completely unbiased.

Critical-analysis-Reason-to-read
Definition: Information veracity evaluation was not mechanical; participants would not evaluate web information just based on its source or recentness. Instead, what to read depended on what they needed at that moment (learning goals).
Example 1: Alright, the language here is not very technical, and it is very clear that there is a strong stand on it, but I will continue reading just to see if I can pull some information about this.
Example 2: I just want the Wikipedia page. I don't want to read the main [official FDA] page, because it is going to be biased and they may not report these issues.
Example 3: "Say no to GE" from Greenpeace. I am going to skip it because it is biased. I am going to go back to see their views of it, what they think to say no, but I want to see what it is first.

Critical-analysis-Remind-bias
Definition: Participants reminded themselves of the (potential) biases or information inaccuracy of what they were reading, would read, or had read.
Example: [The page of Peer Reviewed Publications on the Safety of GM Foods on AgBioWorld is pulled out] Knowing that it is peer-reviewed, but it is also from AgBioWorld who is a proponent of GM foods, so have to read this with that in mind.

The fourth dimension, novelty, reflects the extent to which learners come up with new arguments during learning. It included three codes, whose definitions and examples are shown in Table 6. While reading the Internet information, some participants proposed new concerns and suggestions based on their prior knowledge, generated questions that they wanted to solve subsequently, and read between the lines to draw new meanings. These observations revealed their curiosity and active interaction with the text. Thus, these cognitive activities refer to deep levels of information processing.
Table 6
Codes, Definitions, and Examples of the Novelty Dimension

Bring-in-new-ideas
Definition: Participants generated new suggestions, concerns, or personal views that were not included in the text they were reading.
Example: ["GM foods...with no report of ill effects"] The problem with that is there is still stuff that we are doing right now. Even like back at the day that when our parents play[ed] with mercury, they did not know for years later, so just because people have been eating it for 15 years with no report of ill effects. That means nothing. It could 30 years down to the road for kids' kids to start seeing the effects in general. [The long-term effect concern was not addressed in the text]

Make-inferences
Definition: Participants drew new meaning or hidden information from the text they were reading based on their prior knowledge or the information they had acquired previously.
Example 1: [Reading the table ...] These are FDA approved crops, so there has to be some research to receive FDA approval.
Example 2: So what they are saying "in order to be virulent, the bacterium must contain a tumor-inducing plasmid." So I would guess, they use bacterium that they don't have the pTi. In other words, if they don't use the pTi, then it is safe.

Generate-new-inquiry
Definition: Participants generated new questions or hypotheses in response to the text they were reading, or they generated inquiries that they wanted to explore subsequently.
Example 1: Obviously there are difficulties in formulating the corresponding transgenic and non-transgenic diets so that they are both "isocaloric and identical with respect to all measured components." I was kind of thinking about that earlier, when you genetically engineered foods, does it change like how many calories it is? I guess this is something [I] have to look at.
Example 2: So now I am trying to think if GE only has to do with plants. They are taking genes from plants and that's the food they are talking about. I don't know if they are talking [about] meat also. So I know they can put animal genes into plants, or plant genes into plants. So I want to know if plant genes can be put into animals.

The fifth dimension, engagement, reflects the locus of learner motivation. Some participants in the sample were extrinsically motivated. They focused on generating outcomes (e.g., they were eager to select one side of the arguments and to find more evidence to support that side) and/or memorizing web information (e.g., they jotted down conclusions so that later, when they were asked more questions about what they had learned, they could remember these facts). On the other hand, some participants seemed to engage in learning itself. They generated new questions, identified issues they wanted to study, took notes to remind themselves of what they needed to learn, and explored these questions and the issues they cared about, even though some of these issues did not relate closely to the given task. One participant, for instance, examined the demographic factors affecting consumers' selection of genetically engineered food. He was aware that this was not pertinent to the safety issue, but his strong interest drove him to examine it. Although he might not have addressed the safety issues as pertinently as other participants did, his engagement in learning should be appreciated.
More importantly, he learned what he wanted, not what other people wanted him to learn. Learning for externally imposed reasons (i.e., extrinsic motivation) reflects a less active learning process and is thus considered a surface learning approach by Marton and Säljö (see the codes in italics in Table 2), whereas engagement in learning driven by learners' internal interests features deep levels of knowledge exploration. Table 7 shows the codes (definitions and examples) included in this dimension. The generate-new-inquiry code is included in both the engagement and novelty dimensions, because its conceptual meanings fit both dimensions well; its definition and example are displayed in Table 6 only. Details on how this code was quantified for the dimensional scores as well as the integrated learning complexity score are discussed next.

Table 7
Codes, Definitions, and Examples of the Engagement Dimension

Identify-issues-to-explore
Definition: Participants identified issues or subtopics from the text they were reading and wanted to explore these issues later.
Example 1: Allerginicity was mentioned again, so I am definitely going to look into that later.
Example 2: "A pig was controversially engineered to produce omega-3 fatty acids" I want to look that up after reading this.

Explore-inquiry-and-issues
Definition: Participants explored the inquiries they had generated or the issues they had identified previously.
Example: Can we create a new tab? Then let's googling "triglycerides" because I want to know why it is a problem with triglycerides increasing. [Previously, this participant was reading that rats fed with GE potatoes increased triglycerides, so he was wondering why this can cause problems]

Notes-for-exploring
Definition: Participants took notes for later explorations.
Example: [Reading...] I don't know what "out-crossing" is, so I write it down, [and] maybe look it up in the dictionary.

Notes-for-recording
Definition: Participants took notes in order to keep and memorize the information.
Example: [Reading…] OK. So they are saying that the pest eventually will become immune to it. So that's a con, so I am writing that down.

Outcome-goal
Definition: Participants focused on producing outcomes, reflecting that their learning was driven by external requirements, not their own interest or curiosity.
Example 1: Alright. I am done with this article. It is useful, if I am going to decide that they [GE crops] are bad for you.
Example 2: Then "How GM foods are regulated and government's role" I feel like I am not technically looking for how it is regulated …So I probably skip that. But I will look at it if I got specific question like how it is regulated.

Internal-interest
Definition: Participants explored some issues based on their internal interest, although sometimes they felt they were off topic.
Example 1: I don't think this will pertain directly to the health, but this is interesting to me, so I am going to read it.
Example 2: The "Nature of the protest" [the heading of the webpage]. I guess anytime something catches my eyes, I kind of get distracted. [keep reading the page…]

Generate-new-inquiry
Definition and example: see Table 6.

These five dimensions are interrelated, because all of them reflect the degree to which learners actively interact with the text and make meanings from it. Thus, they should have strong internal consistency; in other words, participants who outperformed others in one dimension should also do well in the other dimensions.
Assigning codes to these dimensions is also arbitrary to some degree. For example, the cognitive activity of making inferences may also go well with the connection dimension, because to obtain new meanings, readers sometimes have to activate their prior knowledge or connect to information acquired earlier. Learners who develop questions while reading also demonstrate their engagement in learning. Grouping the codes, however, is necessary in this study: as an initial attempt to empirically test the epistemology-learning correlation in Internet-based open-ended learning environments, maintaining dimensions can provide a detailed relational pattern to the interested audience.

Coding procedure. All the video clips recording participants' learning processes (including their think-aloud protocols and nonverbal behaviors, such as opening a new webpage or inputting query words) were transcribed. The codes clarify the specific instances to search for in the protocols, and thus segmentation is not necessary (Chi, 1997). For example, Figure 1 shows an excerpt of a participant's protocols with and without segmentation; the number of codes and the codes assigned were the same. Nevertheless, to make participants' learning processes more transparent and to ease inter-coder communication, the protocols were segmented upon shifts in activities (Jordan & Henderson, 1995). The three types of shifts in activities were search, select, and (reading a certain web) page. Each segment was numbered based on its sequence (see Figure 2 for an example) and analyzed using the coding categories generated in the prior step (see Table 2). Each segment could have multiple codes.

[On a new webpage] So these are facts about GE foods. When I look at this [information] …like the first one "Animals have become seriously ill[ed] or died from GE foods." That's not very specific to me, so it doesn't necessarily give me a lot of evidence or support for something. [Critical-analysis-Content-reasoning]
----- segment break (segmented version only) -----
If I was writing a paper on this, that's kind of like where I am looking at the stand point from, I guess I could quote that, but I have to go into analysis about like how many animals, what kind of animals. …So I will make a note like "animals died" and then next would be "who? numbers? and humans?" [Notes-for-exploring] [Notes-for-recording] Like if there are any humans died due to GE food, because we are animals too. [Investigate-contextual-meanings] [Generate-new-inquiry]

Figure 1. Comparing the role of segmentation in an excerpt of the protocols. The unsegmented and segmented versions contain the same text and receive the same codes; the segmented version inserts a break at the marked point.

1. Search 1: I am typing "genetic modification foods United States" because I feel it is important to learn it...
2. Search 1_Select 1: [On the search result page] So the first one [is] from Wikipedia. I am reading that "GM foods were first put on the market in earlier 1990s" so I think it will be the good thing to start with. [Click on it]
3. Search 1_Select 1_Page 1: [On the Wikipedia page] Reading... I want to click on the citation number 5, because I see "its safety issue" …that might be useful.
4. Search 1_Select 1_Page 2: [On the new page, reading…] …So it is a good source to start out. But there are more that I want to look into.
5. Search 2: Back to Google. I am done with the Wikipedia… Now I will look into the safety of the food. so type in "safety of genetic modified food in the United states"

Figure 2. An example of segmenting the protocols and ordering the segments.
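As a purely illustrative aid, segmented protocols of this kind could be held in a structure like the following sketch; the Segment class and the sample entries are hypothetical (the labels mirror Figure 2, and the attached codes are examples, not the actual coding of that participant).

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    label: str                                  # e.g., "Search 1_Select 1_Page 1"
    text: str                                   # transcribed think-aloud excerpt
    codes: list = field(default_factory=list)   # a segment can carry multiple codes

protocol = [
    Segment("Search 1", 'Typing "genetic modification foods United States"...'),
    Segment("Search 1_Select 1", "Reading the Wikipedia result...",
            ["Critical-analysis-Sources"]),
    Segment("Search 1_Select 1_Page 1", "Clicking on citation number 5...",
            ["Connect-through-reference"]),
]
```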
exploring] [Notes-for-recording] Like if there [Investigate-contextual-meanings][Generate- are any humans died due to GE food, because new-inquiry] we are animals too. [Investigate-contextualmeanings][Generate-new-inquiry] Figure 1. Comparing the role of segmentation in an excerpt of the protocols. 51 The frequency score reflected quantitative differences among participants. Some codes (see codes with an asterisk in Table 2), however, only indicated the qualitative difference. For example, participants could search for alternatives multiple times or just one time during their learning processes, but both situations reflected their open-mindedness to alternatives. Also participants assumed that everyone was biased to some extent believed about the universal bias, no matter how many times they verbalized it during their learning processes. Thus, these codes 1. Search 1 I am typing "genetic modification foods United States" because I feel it is important to learn it... 2. Search 1_Select 1 [On the search result page] So the first one [is] from Wikipedia. I am reading that "GM foods were first put on the market in earlier 1990s" so I think it will be the good thing to start with. [Click on it] 3. Search 1_Select 1_Page 1 [On the Wikipedia page] Reading... I want to click on the citation number 5, because I see “its safety issue” …that might be useful. 4. Search 1_Select 1_Page 2 [On the new page, reading…] …So it is a good source to start out. But there are more that I want to look into. 5. Search 2 Back to Google. I am done with the Wikipedia… Now I will look into the safety of the food. so type in "safety of genetic modified food in the United states" Figure 2. An example for segmenting the protocols and ordering the segments. 52 only differentiated participants qualitatively, and should be scored dichotomously as either “observed” (indicated by 1) or “not observed” (indicated by 0). Scores of all codes, including the frequency scores and the dichotomous qualitative scores, were converted to z-scores so that they were equally weighted. Each dimensional score (see Table 2) was the averaged z-scores of all its included codes. The z-scores for all codes were averaged to quantify learning complexity as an integrated structure. Table 10 listed the mean and the standard deviation of each dimensional score and the integrated score of the observed learning complexity. The generate-new-inquiry code was included in both novelty and engagement dimensions, because it conceptually matched both equally. Yet the final integrated learning complexity score only included this code once so that its impact was not double-counted. Taking notes was an option for all participants. Ten participants did not take notes during their learning processes. Thus, to average the code scores in the engagement dimension, the denominator for the participants who took notes was seven, whereas the denominator for the participants who opted not to take notes was five. The same method applied when calculating the integrated learning complexity score as well. Data triangulation. To corroborate the direct analysis of the video clips recording participants’ knowledge exploration processes, the interview protocols of question four to nine (see Appendix G) were transcribed and analyzed using the same coding categories (see Table 2). 
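To make this scoring procedure concrete, here is a minimal sketch, assuming each participant's protocol has already been reduced to raw code counts; the function and column names are hypothetical, and the exact handling of the note codes in the standardization step is an assumption. Frequency codes keep their counts, asterisked codes collapse to 0/1, all scores are standardized, and the dimensional and integrated scores are means of the relevant z-scores, with the note codes dropped from the denominator for participants who took no notes and the dual-membership code entering the integrated mean only once.

    import numpy as np
    import pandas as pd

    def complexity_scores(counts, qualitative_codes, dimensions, took_notes,
                          note_codes=("notes-for-exploring", "notes-for-recording")):
        # counts: DataFrame (participants x codes) of raw code frequencies.
        # qualitative_codes: asterisked codes scored 1 (observed) / 0 (not observed).
        scores = counts.astype(float).copy()
        scores[qualitative_codes] = (scores[qualitative_codes] > 0).astype(float)
        z = (scores - scores.mean()) / scores.std(ddof=0)  # equal weighting via z-scores
        # Note codes do not enter the averages of participants who took no notes,
        # which shrinks their denominator (e.g., 5 instead of 7 for engagement).
        z.loc[~took_notes, list(note_codes)] = np.nan
        out = pd.DataFrame(index=counts.index)
        for dim, codes in dimensions.items():
            out[dim] = z[codes].mean(axis=1)  # mean() skips the NaN cells
        unique_codes = sorted({c for cs in dimensions.values() for c in cs})
        out["integrated"] = z[unique_codes].mean(axis=1)  # dual code counted once
        return out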
Data triangulation. To corroborate the direct analysis of the video clips recording participants' knowledge exploration processes, the interview protocols for questions four to nine (see Appendix G) were transcribed and analyzed using the same coding categories (see Table 2). For a given code, the two data sources are consistent for a participant when that type of instance is either identified in both the video clip and the interview protocol or identified in neither. For instance, Table 8 displays five participants' frequency scores for the rethink code as observed in the video clips and as reported in the interviews. The analyses of the two types of data were consistent for participants two, four, and five, and inconsistent for participants one and three. Although the rethink code was assigned to the fifth participant three times in the video clip and he or she recalled only one instance fitting the rethink category during the interview, the interview protocol still supported the conclusion that this participant enacted the rethink strategy during the learning process; thus, the two types of data were consistent for this participant. The consistency of each code was quantified as the percentage of all participants showing consistency across the two types of data. In this particular example, the consistency for the rethink code is 60% (three divided by five). Table 9 (column 2) shows the consistency for each code across the two types of data.

Table 8
An Example of Data Triangulation

Participant    Rethink instances (video)    Rethink instances (interview)
1              1                            0
2              0                            0
3              0                            1
4              0                            0
5              3                            1

Although higher consistencies are preferred, low consistencies are not always problematic. Inconsistency can arise for two reasons. First, during the interview, participants might not recall certain instances they enacted while exploring the task. For instance, participant one in this example (see Table 8) did not recall any instance of the rethink strategy he or she used during the learning process. Multiple factors contributed to such unsuccessful recall: the interview questions did not cover all codes (e.g., instances of making inferences, bringing in new ideas, taking notes, etc.), and some instances were simply hard to recall (e.g., synthesizing, reasoning to decide what to read, etc.). This type of inconsistency, therefore, was of less concern.

Table 9
Triangulating the Results from the Interview and the Video Clips

Codes                                      Consistency (%)   Inconsistency (%)
Recall-information                         95.65             0
Recall-prior-knowledge                     30.43             0
Compare                                    80.43             6.52
Synthesize                                 41.30             0
Hyperlinks-within-text                     82.61             0
Connect-through-reference                  78.26             0
Investigate-contextual-meanings            63.04             6.52
Provisional-understanding                  76.09             2.17
Rethink                                    84.78             4.35
Intolerance-of-ambiguity                   86.96             6.52
Tolerance-of-ambiguity                     91.30             2.17
Alternative-pursued                        80.43             15.22
Alternative-avoided                        91.30             2.17
Biased-argument                            73.91             2.17
Case-avoided                               91.30             0
Case-pursued                               69.57             10.87
Critical-analysis-sources                  95.65             2.17
Critical-analysis-recentness               82.61             4.35
Critical-analysis-Content-references       71.74             4.35
Critical-analysis-Content-triangulation    69.57             8.70
Critical-analysis-Content-writing          65.22             2.17
Critical-analysis-Content-reasoning        78.26             0
Critical-analysis-Universal-bias           71.74             4.35
Critical-analysis-Reasontoread             60.87             0
Critical-analysis-Remindbias               47.83             0
Bring-in-new-ideas                         NA                NA
Make-inferences                            NA                NA
Generate-new-inquiry                       47.83             0
Identify-issues-to-explore                 NA                NA
Explore-inquiry-and-issues                 41.30             0
Notes-for-exploring                        NA                NA
Notes-for-recording                        NA                NA
Outcome-goal                               82.61             4.35
Internal-interest                          91.30             2.17

Note. NA = No participant reported the corresponding instances during the interview.
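Both percentages are simple to compute. Below is a minimal sketch with hypothetical tables of per-participant instance counts; the inconsistency index anticipates the higher-concern case defined in the next paragraph (an instance reported at the interview but never observed in the video record).

    import pandas as pd

    def triangulation_rates(video_counts, interview_counts):
        # video_counts / interview_counts: DataFrames (participants x codes)
        # holding instance counts from the two data sources.
        seen_video = video_counts > 0
        seen_interview = interview_counts > 0
        # Consistent: the code is identified in both sources or in neither.
        consistency = (seen_video == seen_interview).mean(axis=0) * 100
        # Higher-concern inconsistency: reported at interview, never observed.
        inconsistency = (seen_interview & ~seen_video).mean(axis=0) * 100
        return consistency, inconsistency

    # The Table 8 example reproduces the 60% / 20% figures for the rethink code.
    video = pd.DataFrame({"rethink": [1, 0, 0, 0, 3]})
    interview = pd.DataFrame({"rethink": [0, 0, 1, 0, 1]})
    cons, incons = triangulation_rates(video, interview)
    print(cons["rethink"], incons["rethink"])  # 60.0 20.0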
Inconsistency also included another situation, in which participants recalled certain instances during the interview, but those instances were not observed in their knowledge exploration processes. For example, participant three in this example (see Table 8) reported a rethink instance during the interview that was not identified in his or her learning process. Compared with failure to recall instances at the interview, failure to demonstrate corresponding instances in the learning process was of higher concern. Thus, for each code, the percentage of all participants showing this second type of inconsistency (i.e., failing to demonstrate in the learning process a type of instance that was reported at the interview) was calculated to indicate the inconsistency between the two types of data. Only the third participant in this example (see Table 8) failed to demonstrate the rethink strategy in his or her learning process while reporting it during the interview; thus, the inconsistency for the rethink code is 20% (one out of five participants). When participants demonstrated such inconsistency, their video clips were reviewed again to resolve it. Table 9 (the right column) shows this type of inconsistency for each code.

Validity and reliability assessment. The coding categories were generated through both top-down and bottom-up procedures. This method ensured that the instances depicted by the codes were well supported by the literature; the coding categories should therefore be valid for quantifying the complexity of participants' learning processes.

Three methods were used to enhance the reliability of coding. First, all of the protocols of participants' knowledge exploration processes (i.e., the video clips) were coded twice: the first round of coding started in September 2010 and lasted until December 2010, and all protocols were then re-analyzed in January 2011. The test-retest agreement was examined by dividing the number of consistent coding activities by the total number of coding activities (i.e., the sum of the consistent and inconsistent coding activities). For example, Figure 3 displays an excerpt of a participant's knowledge exploration protocol coded at both times. At both times, the connect-through-reference code was assigned. The generate-new-inquiry code was assigned at Time 1 but not at Time 2; the recall-prior-knowledge code was assigned at Time 2 but not at Time 1; and the critical-analysis-universal-bias code assigned at Time 1 was revised to critical-analysis-remindbias at Time 2. Thus, in this example, there is one consistent coding activity and three inconsistent coding activities across the two times.

Protocols coded at Time 1:
I will go back to Wikipedia, read other resources. Then go to external links... [Connect-through-reference] ...[after reading Wikipedia page] I don't know, it doesn't have anything that I am looking for. I probably will look for something from the FDA, because when I think food safety, the first thing I will think is FDA. They tell me what to eat, but this doesn't mean they are not biased. I usually go to WebMD for medical stuff and FDA for food stuff. [Critical-analysis-Universal-bias] [Generate-new-inquiry]

Protocols coded at Time 2:
I will go back to Wikipedia, read other resources. Then go to external links... [Connect-through-reference] ...[after reading Wikipedia page] I don't know, it doesn't have anything that I am looking for.
I probably will look for something from the FDA, because when I think food safety, the first thing I will think is FDA. They tell me what to eat, but this doesn't mean they are not biased. I usually go to WebMD for medical stuff and FDA for food stuff. [Recall-prior-knowledge] [Critical-analysis-Remindbias]

Figure 3. An example for calculating the test-retest agreement.

The test-retest agreement for this excerpt, then, is 25% (one consistent coding activity out of four total coding activities). Applying this method to all protocols, the overall test-retest agreement was 84.63%; coding differences were reanalyzed and resolved. Second, once coding was stable within the first coder, a randomly selected sample (the knowledge exploration protocols of 11 of the 53 participants, 20.75%) was sent to a second coder, who had been trained beforehand to understand the coding categories and who then independently coded the sample. Using the same calculation method as for the test-retest agreement, the inter-rater agreement was 85.09%. Because this inter-rater reliability exceeds 80% (Riffe, Lacy, & Fico, 1998, p. 128), coding was stable across individual coders, and the coding results of the first coder were therefore used to quantify the observed learning complexity. Third, methods triangulation through the two types of data (Johnson, 1997) was described above (see Table 9 for the consistency and inconsistency results). Finally, the internal consistency (Cronbach's alpha) of the dimensional scores (see Table 2 for the dimensions) was 0.82, supporting the interdependence of the five dimensions.

Quantifying Dependent Variables – Learning Complexity Measured through Self-Reported Methods

Self-reported data were collected from the post survey and the interview. The post survey included items measuring two dependent variables: (1) learner satisfaction with the knowledge exploration, and (2) perceived extent of knowledge exploration. A third variable, (3) overestimation of learning complexity, was calculated from the perceived extent of knowledge exploration (see details below). The interview addressed the following variables: (4) perceived insufficiency of learning, (5) future learning plans, (6) indecisiveness, (7) internal criteria determining when to stop learning, and (8) the breadth of knowledge exploration. These variables and their descriptive statistics are listed in Tables 10 and 11.

Learner satisfaction. The post survey items testing learners' satisfaction with their knowledge exploration processes (i.e., were they satisfied with what and how well they had learned? see Appendix E, Part I, Items 4 to 8) were averaged. The learner satisfaction score therefore ranged from 1 to 7, with higher scores reflecting greater satisfaction. The mean and the standard deviation of this variable were 5.42 and 0.71, respectively (also see Table 10). The internal consistency of these five items (Cronbach's alpha) was 0.77.

Perceived extent of knowledge exploration. The post survey items collecting participants' perceived extent of their knowledge exploration processes (see Appendix E, Part II) were averaged. The averaged score ranged from 1 to 5, with higher scores indicating perceived deeper processing and more expansive exploration. The mean and the standard deviation of this variable were 3.65 and 0.35, respectively (also see Table 10). The internal consistency of these 23 items (Cronbach's alpha) was 0.72.
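For reference, scale scores of this kind and their internal consistencies can be computed as below. This is a minimal sketch, assuming a hypothetical participants-by-items array and the standard Cronbach's alpha formula; it is not the study's original analysis script.

    import numpy as np

    def scale_score_and_alpha(items):
        # items: 2-D array, participants x items (e.g., five satisfaction
        # items on a 1-7 scale); returns per-person means and Cronbach's alpha.
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                                 / items.sum(axis=1).var(ddof=1))
        return items.mean(axis=1), alpha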
Overestimation. Whether or not participants overestimated the complexity of their knowledge exploration processes was quantified by subtracting the standardized z-score of the integrated learning complexity score (i.e., the actual learning complexity demonstrated in the learning process) from the standardized z-score of the perceived extent of knowledge exploration measured through the post survey; that is, overestimation = z(perceived extent) - z(observed integrated complexity). Higher scores reflect greater overestimation of the complexity of one's own learning process. The mean and the standard deviation of this variable were -0.01 and 0.87, respectively (also see Table 10).

The breadth of knowledge exploration. Interview question 11 (see Appendix G) asked participants to identify the issues they had explored that did not address the safety of GE foods directly but that participants with more complex epistemic beliefs could have considered important for understanding the safety issue thoroughly. It was thus possible that the number of such issues explored was positively associated with personal epistemology. This variable was quantified as the total number of issues identified by participants during the interview. The mean and the standard deviation were 3.28 and 1.22, respectively (also see Table 10).

Perceived insufficiency of learning. Interview question one (i.e., do you think you need more time to learn about this topic so that your view on the safety of GE foods is more solid and reasonable?) measured whether participants perceived their learning to be insufficient when they finished their knowledge exploration processes. Responses were coded dichotomously (1 = yes, 0 = no); a response of yes reflected awareness that one's learning was insufficient and could be improved. Thirty-six participants (67.9%) perceived that their learning was insufficient (also see Table 11). Statistical analyses were conducted to test this variable's connection to personal epistemology (including all the independent variables mentioned above).

Future learning plans. Interview question two (i.e., if you could have more time working on this topic to enhance your understanding of whether or not GE foods are safe to eat, what would you research more?) solicited participants' future learning plans. Although there was no time limit for exploring the task, participants had to stop at a certain point, and it is important to know what other information about the given topic they believed was worthwhile to know but had not yet been explored sufficiently when they stopped. For example, even though a learner wanted to check out individual cases, he or she might have focused during the experiment on understanding the general concepts relevant to the task, lacking sufficient prior knowledge of the topic; without this question, we could not know that this learner valued individual cases. Seven themes were extracted from the interview protocols through open coding techniques (Strauss & Corbin, 1990):

1. Thirty-one participants (58.49%) reported that they would check out more empirical studies in the future to understand the topic better;
2. Nineteen participants (35.85%) said that they wanted to check out individual cases;
3. Thirteen participants (24.53%) reported an interest in exploring the views of different stakeholders;
4. Three participants (5.66%) said that they would look for more recent information as a next step;
5. Thirty-seven participants (69.81%) listed at least one specific content area (or a certain inquiry) that they wanted to explore next, as a response to what they had just learned online;
6. One participant (1.89%) said she wanted to explore some issues that interested her but were irrelevant to the given topic; and
7. One participant (1.89%) said that he would read more summaries or general information on this topic in the future.

The first three themes – looking for empirical studies (the future plan to explore empirical studies variable in Table 11), individual cases (the future plan to explore individual cases variable in Table 11), and the views of different stakeholders (the future plan to explore the views from different stakeholders variable in Table 11) – reflect participants' inclination toward expansive search and flexible reasoning; thus, for each of these three variables, its connection to the independent variables was examined.

Indecisiveness. The third interview question (see Appendix G) asked for participants' views on the safety of GE foods on a 5-point Likert scale. Participants who selected 1 (GE crops are safe to eat) or 5 (GE crops are unsafe to eat) provided decisive or absolute conclusions, whereas participants who selected 2, 3 (depends), or 4 provided indecisive or tentative conclusions. This variable (the indecisiveness of conclusions variable in Table 11) was coded as either decisive/absolute (0) or indecisive/tentative (1). Forty-six participants (86.8%) concluded indecisively. Statistical analyses were conducted to test whether or not personal epistemology (including all the independent variables mentioned above) was connected to this variable. In addition, the reasons given by the 46 participants who proposed indecisive conclusions were summarized:

1. Two participants (4.35%) reported that they did not know all the information currently available online, so they had a very limited amount of knowledge;
2. Eighteen participants (39.13%) indicated that the existing web information was insufficient for them to make an absolute proposition on the safety of GE foods, and that future research was needed;
3. Eleven participants (23.91%) mentioned that exceptions were inevitable, so they were unwilling to make absolute decisions; and
4. Seventeen participants (36.96%) believed that the safety issue depends on many factors, such as who eats the GM food, how much GM food is in a person's diet, and how the GM foods are engineered; thus, it is impossible to judge the safety issue in general, due to this context-dependency concern.

Because the final reason – the context-dependency concern about the given topic (the indecisiveness due to the context-dependency concern variable in Table 11) – reflected flexible reasoning, whether or not this variable related to personal epistemology was tested.

Internal criteria determining when to stop learning. When receiving the task before learning, participants were told that there was no time limit for exploring the task and that their own satisfaction was the only rule for deciding when to stop. It was unclear, however, what factors contributed to participants' satisfaction with their learning. Interview question 10 (see Appendix G) therefore addressed this issue and solicited their internal criteria for determining when to stop knowledge exploration. Based on participants' responses, eleven themes were extracted:

1. Cognitive overload.
One participant (1.89%) reported that she stopped learning because her brain was overloaded and could not handle more information;
2. Fatigue. Three participants (5.66%) indicated physical fatigue, which led to their decision to stop exploring the task;
3. External force. One participant (1.89%) was stopped by the experimenter after exploring the task for over two hours;
4. Loss of interest. Three participants (5.66%) reported that they lost interest in the task or that what interested them could not be retrieved from the Internet, so they decided to stop;
5. Reluctance to explore details. Two participants (3.77%) indicated that they were satisfied with general information and did not want to spend a large amount of time delving into details;
6. View well supported but subject to change. Six participants (11.32%) explained that they stopped because they felt confident that their views on the safety of GE crops were well supported by the web information they had explored, while remaining aware that their views could change if more studies were conducted in the future;
7. View well supported but not subject to change. Two participants (3.77%) reported that they stopped because their views on the safety of GE crops were correct and unlikely to change even if more research were done in the future;
8. Generating outcomes. Fourteen participants (26.42%) stopped exploring because they had generated their views (and so had met the requirement);
9. Authoritative answers. Two participants (3.77%) reported that they found an authoritative answer online and therefore stopped;
10. Repetitive information. Twenty-eight participants (52.83%) stopped because they encountered the same information repeatedly, so they felt that the web information related to this topic had been saturated; and
11. Insufficient web information. Five participants (9.43%) stopped because they could not find sufficient germane web information and would have preferred to use other resources (e.g., asking experts, reading books) to study the given topic.

Theme 7, view well supported but not subject to change, reflected participants' assumption that a fixed and absolute solution to this open-ended, ill-structured task exists. Theme 8, generating outcomes, demonstrated participants' extrinsic motivation for learning. Theme 9, authoritative answers, revealed participants' underlying assumption that knowledge exists outside themselves (so that their learning goal is to find knowledge from authorities, not to construct it). These three themes therefore represented low (or less advanced) criteria for determining when to stop learning, and might be related to less complex epistemic beliefs or a lack of epistemic activation. A dichotomous variable was then constructed to reflect whether or not participants adopted any of these three criteria to determine when to stop their knowledge exploration (the low criteria determining when to stop learning variable in Table 11), and whether or not this variable was connected to the independent variables was examined.

Descriptive Data – The Role of Epistemic Activation

Interviews may provide more qualitative information, from participants' own perspectives, for understanding the role of epistemic activation. Thus, interview question 12 (see Appendix G) asked the participants in the activation group to comment on the role of contemplating the prompts immediately before exploring the task.
Their responses were transcribed, and themes were extracted through the open coding procedure (Strauss & Corbin, 1990); these themes are displayed in the Results chapter.

Statistical Analyses of the Research Questions

The first three research questions examined whether or not the different variables measuring learning complexity were associated with general epistemic beliefs (Research Question 1), task-specific epistemic beliefs (Research Question 2), and epistemic activation (Research Question 3), when the covariates (i.e., learning time, effort investment, verbal comprehension abilities, and prior content knowledge) were controlled. Thus, at Step 1, each variable measuring learning complexity was regressed on the four covariates, the general epistemic beliefs, the task-specific epistemic beliefs, and the group variable (dichotomous: activation vs. non-activation).

Research Question 4 – whether or not activating participants' task-oriented epistemic beliefs can affect the correlations between personal epistemology and learning complexity – was investigated by (1) constructing the epistemology-group interactions and (2) testing their relationships to each variable measuring learning complexity. Therefore, two two-way interaction terms (the interaction between general epistemic beliefs and the group variable, and the interaction between task-specific epistemic beliefs and the group variable) were entered into the regression model as two additional predictors at Step 2. The first step tested the main effects of general epistemic beliefs, task-specific epistemic beliefs, and epistemic activation on each variable measuring learning complexity; the second step tested the interactive effects (i.e., the interaction between general epistemic beliefs and group, and the interaction between task-specific epistemic beliefs and group). Hierarchical regression analysis was adopted to present a clear picture of both the main effects and the interactive effects.

The study used two inventories – the CFI and the OMPI – to measure participants' general and task-specific epistemic beliefs. Thus, for each variable measuring learning complexity, its relationships to personal epistemology as measured by the two inventories were examined independently. For example (see Figure 4), when the CFI was used to measure general and task-specific epistemic beliefs in order to examine their relationships to learner satisfaction, learner satisfaction was regressed, in the two steps described above, on the four covariates (COVs), general epistemic beliefs measured by the CFI (GEB(CFI)), task-specific epistemic beliefs measured by the CFI (TSEB(CFI)), the group variable, the interaction between group and GEB(CFI), and the interaction between group and TSEB(CFI). When the OMPI was used, learner satisfaction was regressed, in the same two steps, on the four covariates, general epistemic beliefs measured by the OMPI (GEB(OMPI)), task-specific epistemic beliefs measured by the OMPI (TSEB(OMPI)), the group variable, the interaction between group and GEB(OMPI), and the interaction between group and TSEB(OMPI). Using two inventories to measure personal epistemology could enhance the validity of the measurement, and checking the consistency of the epistemology-learning connection across the two inventories could also strengthen the reliability of the results.
When using the CFI to measure personal epistemology:
Satisfaction = Constant + COVs + GEB(CFI) + TSEB(CFI) + Group [Step 1]
             + Group*GEB(CFI) + Group*TSEB(CFI) [Step 2]

When using the OMPI to measure personal epistemology:
Satisfaction = Constant + COVs + GEB(OMPI) + TSEB(OMPI) + Group [Step 1]
             + Group*GEB(OMPI) + Group*TSEB(OMPI) [Step 2]

Figure 4. The epistemology-learning connection was investigated independently for the two inventories measuring personal epistemology.

Some dependent variables were dichotomous, namely perceived insufficiency of learning, future plan to explore empirical studies, future plan to explore individual cases, future plan to explore the views from different stakeholders, low criteria determining when to stop learning, indecisiveness of conclusions, and indecisiveness due to the context-dependency concern (also see Table 11). Hierarchical logistic regression analyses were therefore used for these dichotomous dependent variables.

CHAPTER 3
Results

The descriptive statistics of all variables and their zero-order correlations are shown in the Descriptive Statistics section. The four research questions are then addressed in the Research Questions and Results section, followed by a description of the role of epistemic activation based on the interview data. Finally, the connections between learning complexity and the covariates are summarized.

Descriptive Statistics

Tables 10 and 11 list all variables measuring learning complexity (i.e., the dependent variables), with the continuous variables in Table 10 and the dichotomous variables in Table 11. For each continuous variable, Table 10 shows the mean and the standard deviation as well as its zero-order (two-tailed) correlations with the variables measuring personal epistemology and the four covariates. While the continuous variables were collected through direct analysis and self-reported methods, the dichotomous variables in Table 11 were collected from the interview; the number of participants whose responses fit each category of every dichotomous variable is listed there. Table 12 shows the descriptive statistics of the variables measuring personal epistemology and the covariates. Because these variables were entered as predictors in the regression models, high correlations among them could produce multicollinearity. The correlation between general epistemic beliefs measured by the CFI and task-specific epistemic beliefs measured by the CFI was large (r = 0.59, p < 0.01) based on Cohen (1988). Thus, multicollinearity was checked in each analysis by calculating Variance Inflation Factors (VIF); all VIF values were below 4, indicating no serious multicollinearity.
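As an illustration of this two-step procedure, the sketch below shows how the Step 1 and Step 2 models, the R-squared change test, and the VIF check could be run with statsmodels. It is a minimal sketch under stated assumptions, not the study's original analysis script, and the column names (time, effort, verbal, prior, geb, tseb, group) are hypothetical.

    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    def hierarchical_regression(df, dv):
        # Step 1: the four covariates plus the three main predictors.
        step1 = ["time", "effort", "verbal", "prior", "geb", "tseb", "group"]
        # Step 2 adds the two epistemology-by-group interaction terms.
        df = df.assign(group_x_geb=df["group"] * df["geb"],
                       group_x_tseb=df["group"] * df["tseb"])
        step2 = step1 + ["group_x_geb", "group_x_tseb"]
        X1, X2 = sm.add_constant(df[step1]), sm.add_constant(df[step2])
        m1 = sm.OLS(df[dv], X1).fit()
        m2 = sm.OLS(df[dv], X2).fit()
        # Significance of the R-squared change from Step 1 to Step 2.
        f_stat, p_value, df_diff = m2.compare_f_test(m1)
        # Multicollinearity check: VIF for each predictor (constant excluded).
        vifs = {name: variance_inflation_factor(X1.values, i)
                for i, name in enumerate(X1.columns) if name != "const"}
        return m1, m2, m2.rsquared - m1.rsquared, (f_stat, p_value), vifs

For the dichotomous dependent variables, sm.Logit(df[dv], X).fit() would take the place of sm.OLS within the same two-step scheme.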
Table 10
Means and Standard Deviations of the Continuous Variables Measuring Learning Complexity (Raw Scores) and Zero-Order Correlation Coefficients between these Variables and the Variables Measuring Personal Epistemology and the Covariates

Variable                            M      SD     GEB     GEB     TSEB    TSEB    Time    Effort  Verbal  Prior
                                                  (CFI)   (OMPI)  (CFI)   (OMPI)
Observed variables
Learning complexity (integrated)    0.00   0.42   .51**   .35*    .00     .29*    .53**   .39**   .26     .10
Connection                          0.00   0.64   .51**   .31*    .43**   -.01    .22     .34*    .28*    -.03
Flexibility                         0.00   0.48   .41**   .16     -.02    .30*    .37**   .32**   .19     .20
Critical analysis of web info       0.00   0.51   .33*    -.12    .05     .40**   .46**   .41**   .33*    .16
Novelty                             0.00   0.69   .46**   .38**   .45**   .04     .11     .15     .20     .01
Engagement                          -0.01  0.59   .30*    .03     .17     .23     .39**   .16     .41**   -.06
Self-reported variables
Learner satisfaction                5.42   0.71   .22     .21     .16     .20     -.20    .61**   .24     .01
Perceived extent of knwl explr      3.65   0.35   .44**   .20     .28*    .12     .35*    .01     .19     .10
Overestimation                      -0.01  0.87   .26     .10     .12     .00     .14     .01     .05     .06
Breadth of knowledge exploration    3.28   1.22   .36**   .24     .38**   .37**   .46**   .02     -.09    -.18

Note. *p < .05. **p < .01. Two-tailed. n = 53. GEB = general epistemic beliefs; TSEB = task-specific epistemic beliefs; Time = learning time (covariate); Effort = effort investment (covariate); Verbal = verbal comprehension (covariate); Prior = prior content knowledge (covariate).

Table 11
Descriptive Statistics for the Dichotomous Variables Measuring Learning Complexity

Dichotomous variable measuring learning complexity           Reported   Not reported
Perceived insufficiency of learning                          36         17
Future plan to explore empirical studies                     31         22
Future plan to explore individual cases                      19         34
Future plan to explore the views from different stakeholders 13         40
Low criteria determining when to stop learning               15         38
Indecisiveness of conclusions                                46         7
Indecisiveness due to the context-dependency concern         17         36

Note. Cell entries are numbers of participants.

Table 12
Means (Raw Scores), Standard Deviations, and Zero-Order Correlation Coefficients (Two-Tailed) of the Variables Measuring Personal Epistemology and the Covariates

Variable                M       SD      2      3      4      5      6       7       8
Covariates
1. Prior knwl           7.19    2.74    .25    .02    .09    .05    -.05    .11     .09
2. Verbal               16.15   5.52    -      .09    .02    .13    .00     .15     .02
3. Effort               6.06    0.58           -      .02    .05    .07     .12     .04
4. Time                 70.87   26.17                 -      .12    -.04    .30*    .25
Personal epistemology
5. GEB (CFI)            3.82    0.69                         -      .57**   .59**   .30*
6. GEB (OMPI)           0.14    0.65                                -       .29*    .34*
7. TSEB (CFI)           3.59    0.55                                        -       .43**
8. TSEB (OMPI)          0.17    0.79                                                -

Note. *p < .05. **p < .01. Two-tailed. n = 53. Prior knwl = prior content knowledge; Verbal = verbal comprehension; Effort = effort investment; Time = learning time; GEB = general epistemic beliefs; TSEB = task-specific epistemic beliefs.

Research Questions and Results

The results are summarized in Tables 13 and 14 and presented in detail in Appendix J (Tables 15-34).

Research Question 1

The first research question addressed whether there was a connection between general epistemic beliefs and the complexity of participants' knowledge exploration processes (i.e., the different variables measuring learning complexity) when working on the given ill-structured task using Google.
Each variable measuring learning complexity was regressed on the three main predictors (i.e., general epistemic beliefs, task-specific epistemic beliefs, and group) and the four covariates (i.e., learning time, verbal comprehension, effort investment, and prior content knowledge). The first research question was then answered by testing the significance of the regression coefficient for the general epistemic beliefs predictor.

Results when the CFI was used. When the CFI was used to measure participants' personal epistemology (see Table 13, the first column, for the regression model), general epistemology was significantly related to the observed learning complexity (i.e., based on the direct analyses of the video clips). Specifically, participants' general epistemic beliefs were positively associated with their integrated learning complexity scores (β = 0.42, p = 0.001, f-square = 0.27) and with the connection (β = 0.47, p = 0.001, f-square = 0.31), flexibility (β = 0.36, p = 0.02, f-square = 0.13), critical analysis of web information (β = 0.33, p = 0.03, f-square = 0.13), and novelty (β = 0.29, p = 0.05, f-square = 0.10) dimensions. That is, compared to the participants with less complex general epistemic beliefs, the participants with complex general epistemic beliefs demonstrated more complex levels of knowledge exploration, such as integrating the web information they encountered, processing web information flexibly (e.g., interpreting the text from different angles, being sensitive to contexts, etc.), evaluating the veracity of web information, and bringing in new ideas during their learning processes.

In interpreting the critical analysis of web information dimension, three major strategies participants used to evaluate the veracity of web information were identified: source (e.g., the identity of authors, URLs, etc.), recentness (i.e., how recent the web information was), and content (e.g., writing quality, sufficiency of evidence, logical soundness, etc.). Although all three strategies are complementary, evaluating the content of the web information is more advanced than the other two. Participants' general epistemic beliefs were positively connected to the content sub-dimension (β = 0.49, p < 0.001, f-square = 0.34), but not to the source or recentness sub-dimensions. Thus, complex participants outperformed their less complex peers in adopting more advanced strategies to evaluate the quality of web information while exploring the given task, whereas the less advanced strategies (i.e., evaluation based on source and recentness) were adopted by learners at all epistemic levels.

Besides the connection between general epistemology and the observed learning complexity, participants' general epistemic beliefs were also positively associated with their perceived extent of knowledge exploration (β = 0.44, p = 0.005, f-square = 0.20), suggesting that, compared to their less complex peers, the complex participants not only demonstrated more complex learning processes but also believed their learning processes to be deep and expansive.

Results when the OMPI was used. When the OMPI was used to measure participants' epistemic beliefs (see Table 14, the first column, for the regression model), general epistemic beliefs were positively connected to the integrated learning complexity score (β = 0.24, p = 0.05, f-square = 0.11) as well as to its connection (β = 0.27, p = 0.04, f-square = 0.12) and novelty (β = 0.27, p = 0.05, f-square = 0.11) dimensions.
Participants' general epistemic beliefs were also positively related to their adoption of advanced strategies for evaluating the quality of the web information they encountered (i.e., the content sub-dimension of critical analysis of web information; β = 0.33, p = 0.01, f-square = 0.18), and negatively associated with the likelihood that they embraced low criteria for determining when to stop learning (B = -0.77, SE = 0.39, Wald(1) = 3.93, p = 0.05, odds ratio = 0.47).

Summary. The results show that, compared to the participants with less complex general epistemic beliefs, the participants with complex general epistemic beliefs demonstrated more complex learning processes: building connections across the web information they encountered, processing web information flexibly, evaluating the veracity of web information with advanced strategies, bringing in new ideas, and so on. In addition, the participants with more complex general epistemic beliefs perceived their knowledge exploration processes to be more complex and were less likely to adopt low criteria for determining when to stop learning (e.g., stopping because they got an answer from authorities, or stopping because they believed their answers were unchangeable).

Research Question 2

The second research question addressed whether there was a connection between task-specific epistemic beliefs and the complexity of participants' knowledge exploration processes (i.e., the different variables measuring learning complexity) when working on the given ill-structured task using Google. Each variable measuring learning complexity was regressed on the three main predictors (i.e., general epistemic beliefs, task-specific epistemic beliefs, and group) and the four covariates (i.e., learning time, verbal comprehension, effort investment, and prior content knowledge). The second research question was then answered by testing the significance of the regression coefficient for the task-specific epistemic beliefs predictor.

Results when the CFI was used. When personal epistemology was measured by the CFI (see Table 13, the first column, for the regression model), participants' task-specific epistemic beliefs were not connected to any of the variables measuring learning complexity.

Results when the OMPI was used. When personal epistemology was measured by the OMPI (see Table 14, the first column, for the regression model), participants' task-specific epistemic beliefs were positively associated with the likelihood that they generated indecisive/tentative conclusions due to the context-dependency concern of the given topic (B = 1.18, SE = 0.46, Wald(1) = 6.69, p = 0.01, odds ratio = 3.26).

Summary. Only one learning variable was found to be connected to task-specific epistemic beliefs: when determining whether or not GE foods are safe to eat based on the web information they had explored, the participants with more complex task-specific epistemic beliefs were more likely to be indecisive because they considered contextual factors (e.g., how the foods were engineered, the conditions of the consumers, the specific food consumed, etc.).
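The B, SE, Wald, and odds-ratio values reported for the dichotomous outcomes come from hierarchical logistic regressions. As a reference point, the sketch below shows how such estimates relate to one another; note that the odds ratio is exp(B), so the value above checks out (exp(1.18) ≈ 3.25, in line with the reported 3.26). The column names are hypothetical, and this is a minimal illustration rather than the study's original analysis script.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def logistic_effects(df, dv, predictors):
        # Fit one step of a hierarchical logistic regression for a 0/1 outcome
        # (e.g., indecisiveness due to the context-dependency concern).
        X = sm.add_constant(df[predictors])
        fit = sm.Logit(df[dv], X).fit(disp=False)
        return pd.DataFrame({
            "B": fit.params,                         # logit coefficients
            "SE": fit.bse,
            "Wald(1)": (fit.params / fit.bse) ** 2,  # Wald chi-square, df = 1
            "Odds ratio": np.exp(fit.params),
        })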
Research Question 3

The third research question addressed whether activating participants' task-oriented epistemic beliefs prior to learning had an impact on the complexity of their knowledge exploration processes (i.e., the different variables measuring learning complexity) when working on the given ill-structured task using Google. Each variable measuring learning complexity was regressed on the three main predictors (i.e., general epistemic beliefs, task-specific epistemic beliefs, and group) and the four covariates (i.e., learning time, verbal comprehension, effort investment, and prior content knowledge). The third research question was then answered by testing the significance of the regression coefficient for the group predictor.

Results when the CFI was used. When personal epistemology was measured by the CFI (see Table 13, the first column, for the regression model), the statistical test showed that epistemic activation increased the likelihood that participants adopted advanced strategies to evaluate the veracity of the web information they encountered (i.e., evaluating the content of the web information for its veracity judgment; β = 0.28, p = 0.01, f-square = 0.14).

Results when the OMPI was used. When personal epistemology was measured by the OMPI (see Table 14, the first column, for the regression model), the statistical analysis showed the same result: epistemic activation promoted participants' evaluation of the content of the web information they encountered for its veracity (β = 0.25, p = 0.04, f-square = 0.10).

Summary. Presenting participants with prompts designed to activate their task-oriented epistemic beliefs before learning significantly increased the degree to which they evaluated information veracity based on its content, but had no effect on the other measures of learning complexity.

Research Question 4

The last research question addressed whether activating participants' task-oriented epistemic beliefs prior to learning had an impact on the relationship between personal epistemology and the complexity of knowledge exploration processes when working on the given ill-structured task using Google. At Step 1, each variable measuring learning complexity was regressed on the three main predictors (i.e., general epistemic beliefs, task-specific epistemic beliefs, and group) and the four covariates (i.e., learning time, verbal comprehension, effort investment, and prior content knowledge); the two interaction terms – (1) the interaction between group and general epistemic beliefs, and (2) the interaction between group and task-specific epistemic beliefs – were then entered at Step 2. The fourth research question was answered by testing the significance of the R² change from Step 1 to Step 2 and the significance of the regression coefficients for the two interaction terms entered at Step 2.

Results when the CFI was used. When personal epistemology was measured by the CFI (see Table 13, the first column, for the regression model), the results (see Table 13, Step 2) showed that the critical analysis of web information through content sub-dimension was positively associated with the interaction between group (i.e., activation vs. non-activation) and general epistemic beliefs (β = 0.39, p = 0.03, f-square = 0.14).
That is, compared to the participants in the non-activation group, the participants in the activation group (i.e., those whose task-oriented epistemic beliefs were activated before learning) demonstrated a stronger correlation between their general epistemic beliefs and their likelihood of evaluating the content of the web information they encountered for its veracity. Figure 5 shows the partial regression lines for the two groups. In the non-activation group, participants' general epistemic beliefs failed to predict the critical analysis of web information through content sub-dimension (β = 0.23, t(19) = 1.25, p = 0.23, f-square = 0.08), whereas in the activation group, participants' general epistemic beliefs were positively correlated with their likelihood of evaluating the web information based on its content (β = 0.75, t(20) = 5.06, p < 0.001, f-square = 1.33).

Figure 5. Partial regression plot (with regression lines) depicting the two-way interaction between general epistemic beliefs and epistemic activation on the critical analysis of web information through content sub-dimension.

In addition, the interaction between group and personal epistemology predicted the perceived extent of knowledge exploration (ΔR² = 0.09, F(2, 43) = 3.38, p = 0.04, f-square = 0.16) and overestimation (ΔR² = 0.12, F(2, 43) = 3.47, p = 0.04, f-square = 0.16). Specifically, the participants in the non-activation group demonstrated a stronger correlation between general epistemic beliefs and their perceived extent of knowledge exploration than the participants in the activation group (β = -0.39, p = 0.05, f-square = 0.08). Figure 6 illustrates this relationship: in the non-activation group, participants' general epistemic beliefs were positively correlated with their perceived extent of knowledge exploration (β = 0.89, t(19) = 4.05, p = 0.001, f-square = 0.87), whereas in the activation group there was no correlation between the two (β = 0.24, t(20) = 1.20, p = 0.24, f-square = 0.08).

Figure 6. Partial regression plot (with regression lines) depicting the effect of the two-way interaction between general epistemic beliefs and epistemic activation on perceived extent of knowledge exploration.

Similarly, the participants in the non-activation group also demonstrated a stronger correlation between general epistemic beliefs and the likelihood of overestimating the complexity of their knowledge exploration processes than the participants in the activation group (β = -0.46, p = 0.05, f-square = 0.10). As shown in Figure 7, the general epistemic beliefs of the participants in the non-activation group (β = 0.90, t(19) = 4.06, p = 0.001, f-square = 0.87), but not those in the activation group (β = 0.08, t(20) = 0.36, p = 0.72, f-square = 0.08), were positively connected with the likelihood of overestimating the complexity of knowledge exploration. The positive correlation between overestimation and personal epistemology in the non-activation group, however, was counter-intuitive and inconsistent with Schommer's (1990) study.

Figure 7. Partial regression plot (with regression lines) depicting the effect of the two-way interaction between general epistemic beliefs and epistemic activation on overestimation.
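The within-group coefficients above come from probing each significant interaction with separate covariate-adjusted regressions per group. The sketch below illustrates that follow-up step under hypothetical column names; it is a minimal illustration, not the study's original script.

    import statsmodels.api as sm

    def within_group_slopes(df, dv, belief, covariates):
        # Probe a significant group x belief interaction by refitting the
        # covariate-adjusted model within each group (cf. Figures 5-7).
        slopes = {}
        for label, sub in df.groupby("group"):
            X = sm.add_constant(sub[[belief] + list(covariates)])
            fit = sm.OLS(sub[dv], X).fit()
            slopes[label] = (fit.params[belief], fit.pvalues[belief])
        # e.g., {0: (slope, p) for non-activation, 1: (slope, p) for activation}
        return slopes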
Epistemic activation also moderated the relationship between task-specific epistemic beliefs and the perceived extent of knowledge exploration (β = 0.53, p = 0.02, f-square = 0.13), although in each group the partial regression coefficient was not significant (β = -0.44, t(19) = -1.86, p = 0.08, f-square = 0.18 in the non-activation group; β = 0.12, t(20) = 0.53, p = 0.60, f-square = 0.01 in the activation group). Figure 8 displays this relationship.

Figure 8. Partial regression plot (with regression lines) depicting the effect of the two-way interaction between task-specific epistemic beliefs and epistemic activation on perceived extent of knowledge exploration.

Epistemic activation also moderated the relationship between task-specific epistemic beliefs and the overestimation variable (β = 0.62, p = 0.02, f-square = 0.14). In the non-activation group, consistent with Schommer's (1990) finding, participants' task-specific epistemic beliefs were negatively correlated with the likelihood of overestimating the complexity of their knowledge exploration processes (β = -0.57, t(19) = -2.39, p = 0.03, f-square = 0.30); in the activation group, by contrast, participants' overestimation was not correlated with their task-specific epistemic beliefs (β = 0.13, t(20) = 0.54, p = 0.59, f-square = 0.01). Figure 9 displays this relationship.

Figure 9. Partial regression plot (with regression lines) depicting the effect of the two-way interaction between task-specific epistemic beliefs and epistemic activation on overestimation.

Results when the OMPI was used. When personal epistemology was measured by the OMPI (see Table 14, the first column, for the regression model), the results (see Table 14, Step 2) showed group differences in (1) the connection between participants' general epistemic beliefs and their likelihood of establishing a future learning plan to explore empirical studies (B = 2.30, SE = 1.01, Wald(1) = 5.22, p = 0.02, odds ratio = 9.92), (2) the connection between their task-specific beliefs and their likelihood of establishing a future learning plan to explore individual cases (B = 2.75, SE = 1.17, Wald(1) = 5.56, p = 0.02, odds ratio = 15.68), and (3) the connection between their task-specific beliefs and their perceived insufficiency of learning (B = 5.54, SE = 2.81, Wald(1) = 3.88, p = 0.05, odds ratio = 253.84). Specifically, in the non-activation group, participants' general epistemic beliefs could not predict the likelihood that they embraced the need to explore empirical studies; in the activation group, by contrast, more complex learners valued the role of examining empirical studies in understanding the given task to a greater extent than their less complex peers. Figure 10 illustrates this relationship.

Figure 10. Relationships between general epistemic beliefs and the need (i.e., future plan) to explore empirical studies for the activation and non-activation groups.

Similarly, the complex participants (based on their task-specific epistemic beliefs) in the activation group were more likely to feel a need to explore individual cases in the future than their less complex peers in the activation group, whereas in the non-activation group this relationship did not exist. Figure 11 demonstrates this relationship.

Figure 11. Relationships between task-specific epistemic beliefs and the need (i.e., future plan) to explore individual cases for the activation and non-activation groups.
Finally, epistemic activation also seemed to strengthen the correlation between task-specific epistemic beliefs and the perceived insufficiency of learning. In the activation group, the participants with complex task-specific epistemic beliefs were more likely to perceive their learning as insufficient than their less sophisticated peers; this relationship was less evident in the non-activation group (see Figure 12).

Figure 12. Relationships between task-specific epistemic beliefs and perceived insufficiency of learning for the activation and non-activation groups.

Summary. Activating participants' task-oriented epistemic beliefs prior to learning was more likely to trigger complex learners than their less complex peers to (1) perceive the insufficiency of their learning, (2) plan on exploring more details, such as individual cases and empirical studies, and (3) adopt more advanced strategies to evaluate the quality of web information. In addition, epistemic activation weakened the positive correlations of general epistemic beliefs with the perceived extent of knowledge exploration and with overestimation. Finally, epistemic activation changed the relationship between task-specific epistemic beliefs and the perceived extent of knowledge exploration and weakened the negative connection between task-specific epistemic beliefs and overestimation.

The Effect of Epistemic Activation from Learners' Perspectives

Although the statistical analyses above examined the impact of epistemic activation on the different variables measuring learning complexity, it is also necessary to understand this impact from learners' perspectives, especially because no prior study has examined the effect of epistemic activation in this learning context. This qualitative investigation was conducted through the last interview question, which solicited participants' opinions on the effect of epistemic activation based on their own learning experience. The interview protocols from the participants in the activation group yielded nine themes.

Theme 1, no effect at all, referred to the situation in which participants believed that contemplating the activation prompts had no impact on their subsequent learning. One participant (3.70%) reported this situation.

Theme 2, forget prompts once started, reflected that participants produced learning plans or had some ideas of what they should explore because of the prompts, but forgot about their plans once they started learning. Six participants (22.22%) reported this situation, and one suggested that keeping the prompts with him could have been helpful for reminding him of his plans. Yet participants were allowed to take notes while working on the activation prompts and could read those notes during their learning processes; that participant did not take advantage of this rule, and the majority of the participants who took notes while working on the prompts likewise did not refer to them when exploring the task. Verbally reminding participants of their plans during their learning processes may help, but this requires future investigation.
Theme 3, raise awareness prior to learning, was indicated by participants who said that the activation prompts drew their attention to, or reminded them of, issues such as information veracity judgment and openness to alternatives, but who believed they would have done the same things (e.g., checking the veracity of information, looking for alternatives) even without the prompts. Twelve participants (44.44%) reported this situation.

Theme 4, plan and get ready, referred to the situation in which participants reported that contemplating the prompts helped them get more ready or plan their learning better. For example, one participant said, "[if I did not think about these prompts] I think I might search similarly, but might not have been as fluent as it was today ... because these questions prepare my plan to some extent." Nine participants (33.33%) reported this theme.

Theme 5, attend to contextual meanings, reflected the scenario in which participants believed the prompts encouraged them to think about the contexts from which conclusions were derived. An excerpt of the interview protocols exemplifies this situation: "the one [prompt] saying that is it possible for two trustworthy sites saying contradicting things, that made me think that I should make sure that the sites talking about the same issue, using the same set up, or conditions in the lab to prove their points." Two participants' (7.41%) protocols fit this category.

Theme 6, content, referred to the situation in which the prompts provided participants with specific content to check out; for example, some participants searched for the World Health Organization website, which was mentioned in the prompts. Five participants (18.52%) reported this category.

Theme 7, information veracity, referred to participants' perception that the prompts provoked them to think about different ways to evaluate the trustworthiness of web information. This theme was commonly reported during the interview, by 18 participants (66.67%).

Theme 8, openness to alternatives, referred to participants' indication that the prompts reminded them to be more open-minded toward alternatives and different perspectives and viewpoints. Sixteen participants (59.26%) mentioned this effect during the interview.

Theme 9, subjectivity and complexity, was indicated by participants who considered that the prompts helped them recognize the subjectivity of the given topic and thus heightened their sense of its complexity. One participant, for instance, reported that the prompts made him think that "there were more threads, ...it was more subjective or more opinion-based. ...there were more than just scientific facts." Twelve participants (44.44%) indicated this theme during the interview.

In addition, six participants recalled the epistemic prompts during their knowledge exploration processes. Three of them specifically searched for the World Health Organization website because it was mentioned in one of the epistemic prompts (consistent with Theme 6), and the other three connected the web information they encountered to the thoughts they had generated while working on the prompts.
For example, when reading web information, a participant said, "so this is something actually I said in previous questions about what type of research is being done, who is funding it." These recollections demonstrated the possibility that learners can spontaneously link new information to the thoughts they produced while working on the prompts.

The Connections between Covariates and Learning Complexity

Although the main focus of this study was to test the epistemology-learning connection, it is also informative to examine the interrelationships between the variables measuring learning complexity and the covariates in this study. The results (see Tables 13 and 14) showed consistent patterns no matter which inventory was used to measure personal epistemology. Thus, the summary of their correlations is combined.

Learning time was positively correlated with the integrated learning complexity (β = 0.46, p < 0.001, f-square = 0.49, using the CFI; β = 0.49, p < 0.001, f-square = 0.46, using the OMPI) and all of its dimensions, the breadth of knowledge exploration (β = 0.40, p = 0.002, f-square = 0.23, using the CFI; β = 0.42, p = 0.001, f-square = 0.26, using the OMPI), and the perceived extent of knowledge exploration (β = 0.32, p = 0.01, f-square = 0.13, using the CFI; β = 0.37, p = 0.01, f-square = 0.14, using the OMPI). More interestingly, learning time was positively connected to participants' perceived insufficiency of learning (B = 1.48, SE = 0.58, Wald(1) = 6.59, p = 0.01, odds ratio = 4.41, using the CFI; B = 1.42, SE = 0.60, Wald(1) = 5.67, p = 0.02, odds ratio = 4.12, using the OMPI), and negatively connected to learner satisfaction (β = -0.23, p < 0.05, f-square = 0.13, using the OMPI).

Participants' verbal comprehension abilities were positively connected to their integrated learning complexity scores (β = 0.29, p = 0.007, f-square = 0.18, using the CFI; β = 0.35, p = 0.004, f-square = 0.21, using the OMPI), as well as to its connection (β = 0.30, p = 0.01, f-square = 0.16, using the CFI; β = 0.35, p = 0.006, f-square = 0.18, using the OMPI) and critical analysis of web information (β = 0.34, p = 0.007, f-square = 0.19, using the CFI; β = 0.36, p = 0.005, f-square = 0.20, using the OMPI) dimensions.

Participants' perceived effort investment in exploring the given task was connected positively to their satisfaction with learning (β = 0.60, p < 0.001, f-square = 0.64, using the CFI; β = 0.60, p < 0.001, f-square = 0.68, using the OMPI), but negatively to their perceived insufficiency of learning (B = -1.53, SE = 0.56, Wald(1) = 7.57, p < 0.01, odds ratio = 0.22, using the CFI; B = -1.54, SE = 0.60, Wald(1) = 6.66, p = 0.01, odds ratio = 0.21, using the OMPI) and to planning to explore empirical studies (B = -0.83, SE = 0.37, Wald(1) = 5.09, p = 0.02, odds ratio = 0.44, using the CFI; B = -0.80, SE = 0.36, Wald(1) = 4.87, p = 0.03, odds ratio = 0.45, using the OMPI).

Finally, participants' prior knowledge did not correlate with any variables measuring learning complexity at the .05 significance level. It was, however, negatively associated with the perceived insufficiency of learning (B = -0.79, SE = 0.45, Wald(1) = 3.09, p = 0.08, odds ratio = 0.45, using the CFI; B = -0.83, SE = 0.48, Wald(1) = 3.08, p = 0.08, odds ratio = 0.43, using the OMPI) and the breadth of knowledge exploration (β = -0.22, p = 0.08, f-square = 0.07, using the CFI; β = -0.22, p = 0.08, f-square = 0.07, using the OMPI) at the .10 significance level.
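For readers less familiar with the logistic-regression statistics reported above, the odds ratios and Wald statistics follow directly from the coefficients (B) and standard errors (SE). The minimal sketch below uses the learning-time coefficient from the CFI model as a worked example; the small discrepancies from the values reported in the text simply reflect rounding of B and SE before they were published.

```python
import numpy as np

# Reported values for learning time predicting perceived insufficiency of
# learning (CFI model): B = 1.48, SE = 0.58, Wald(1) = 6.59, odds ratio = 4.41.
B, SE = 1.48, 0.58

odds_ratio = np.exp(B)   # exp(1.48) ~= 4.39 (reported: 4.41, from unrounded B)
wald = (B / SE) ** 2     # (1.48 / 0.58)^2 ~= 6.51 (reported: 6.59)

print(f"odds ratio = {odds_ratio:.2f}, Wald(1) = {wald:.2f}")
```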
Table 13
Overview of the Results When Personal Epistemology Was Measured by the CFI

Dependent variables (columns, left to right): LC(Inti), LC(Cn), LC(Flx), LC(Cri), LC(CriSrc), LC(CriRcnt), LC(CriCnt), LC(Nlt), LC(En), Satis, PercKnwExpl, Ovrest, PrcvInsff, EmpStu, IndivCase, ViewStkh, Indcs, IndcsCntx, LwCritr, Brdt

Predictors (rows):
Step 1:
Time +** +** +* +** +** +* +* +** +** +**
Verbal +** +** +** +* +**
Effort +** -** -*
Prior knowledge GEB (CFI) TSEB (CFI) Group +** +** +* +* +** +* +** +**
Step 2:
Group*GEB(CFI) +* -* -*
Group*TSEB(CFI) +* +*

Note. +/- reflects a positive/negative correlation. *p < .05. **p < .01. 2-tailed. LC(Inti) = Learning complexity (integrated). LC(Cn) = The connection dimension of learning complexity. LC(Flx) = The flexibility dimension of learning complexity. LC(Cri) = The critical analysis of web information dimension of learning complexity. LC(CriSrc) = The critical analysis of web information – source sub-dimension of learning complexity. LC(CriRcnt) = The critical analysis of web information – recentness sub-dimension of learning complexity. LC(CriCnt) = The critical analysis of web information – content sub-dimension of learning complexity. LC(Nlt) = The novelty dimension of learning complexity. LC(En) = The engagement dimension of learning complexity. Satis = Learner satisfaction. PercKnwExpl = Perceived extent of knowledge exploration. Ovrest = Overestimation. PrcvInsff = Perceived insufficiency of learning. EmpStu = Future plan to explore empirical studies. IndivCase = Future plan to explore individual cases. ViewStkh = Future plan to explore the views of different stakeholders. Indcs = Indecisiveness of conclusions. IndcsCntx = Indecisiveness due to the context-dependency concern. LwCritr = Low criteria determining when to stop learning. Brdt = Breadth of knowledge exploration.

Table 14
Overview of the Results When Personal Epistemology Was Measured by the OMPI

Dependent variables (columns, left to right): LC(Inti), LC(Cn), LC(Flx), LC(Cri), LC(CriSrc), LC(CriRcnt), LC(CriCnt), LC(Nlt), LC(En), Satis, PercKnwExpl, Ovrest, PrcvInsff, EmpStu, IndivCase, ViewStkh, Indcs, IndcsCntx, LwCritr, Brdt

Predictors (rows):
Step 1:
Time +** +** +* +** +** +** +** -* +** +* +**
Verbal +** +** +** +* +**
Effort +** -** -*
Prior knowledge GEB (OMPI) TSEB (OMPI) Group +* +* +** +* -* +** +*
Step 2:
Group*GEB(OMPI) +*
Group*TSEB(OMPI) +* +*

Note. +/- reflects a positive/negative correlation. *p < .05. **p < .01. 2-tailed. LC(Inti) = Learning complexity (integrated). LC(Cn) = The connection dimension of learning complexity. LC(Flx) = The flexibility dimension of learning complexity. LC(Cri) = The critical analysis of web information dimension of learning complexity. LC(CriSrc) = The critical analysis of web information – source sub-dimension of learning complexity. LC(CriRcnt) = The critical analysis of web information – recentness sub-dimension of learning complexity. LC(CriCnt) = The critical analysis of web information – content sub-dimension of learning complexity. LC(Nlt) = The novelty dimension of learning complexity. LC(En) = The engagement dimension of learning complexity. Satis = Learner satisfaction. PercKnwExpl = Perceived extent of knowledge exploration. Ovrest = Overestimation. PrcvInsff = Perceived insufficiency of learning. EmpStu = Future plan to explore empirical studies. IndivCase = Future plan to explore individual cases. ViewStkh = Future plan to explore the views of different stakeholders. Indcs = Indecisiveness of conclusions.
IndcsCntx = Indecisiveness due to the context-dependency concern. LwCritr = Low criteria determining when to stop learning. Brdt = Breadth of knowledge exploration.

CHAPTER 4

Discussion

The Internet has been widely used as a tool to find quick answers (Mansourian & Ford, 2007). Yet using the Internet to explore ill-structured tasks demands complex knowledge exploration involving deep cognitive processing and expansive searching. By analyzing learning processes directly, and by using interview and survey methods to understand learners' perceptions, this study examined different aspects of the complexity of participants' knowledge exploration processes as they worked on a given ill-structured task using open-ended Internet resources, and the relationships of these aspects to personal epistemology (including general epistemic beliefs, task-specific epistemic beliefs, and epistemic activation). The results are organized and discussed in five parts: (1) the epistemology-learning association; (2) the role of covariates; (3) the unique characteristics of using the Internet to explore ill-structured tasks; (4) implications; and (5) limitations.

Understanding the Epistemology-Learning Association

General Epistemic Beliefs and Learning Complexity (Research Question 1)

The results supported the existence of positive connections between general epistemic beliefs and the observed learning complexity. Specifically, when participants' general epistemic beliefs were measured through the CFI, general epistemology was positively correlated with the observed learning complexity overall and with its embodied dimensions (except the engagement dimension). When the OMPI was used to measure general epistemic beliefs, general epistemology was also positively connected to the integrated learning complexity as well as to its connection, novelty, and critical analysis of web information-content dimensions. Unlike prior studies measuring learning complexity through self-reported methods (e.g., Whitmire, 2003; Wu & Tsai, 2005, 2007), the complexity of participants' knowledge exploration processes in this study was gauged through direct analysis of their learning processes. Thus, the epistemology-learning connection identified in this study was more than a correlation between perceptions.

Three strategies were used to enhance the reliability of measuring the complexity of learning processes: (1) all knowledge exploration protocols were coded twice; (2) a second coder was involved to increase measuring stability across researchers; and (3) the direct analysis of learning processes was corroborated with the self-reported interview data (i.e., methods triangulation). Moreover, personal epistemology was measured through two instruments, which consistently yielded a connection between general epistemic beliefs and the observed learning complexity. These efforts to enhance the reliability and validity of the analyses increase confidence in the conclusion that general epistemology is positively connected to the observed complexity of learning processes (i.e., Hypothesis 1 was supported).

Results also showed that general epistemic beliefs related positively to the perceived extent of knowledge exploration (i.e., Hypothesis 3 was supported), but negatively to participants' adoption of low standards for stopping learning, such as "I stopped because I found the answer from some authoritative web pages" and "I stopped because my view is very solid and cannot be improved with more work" (i.e., Hypothesis 8 was supported).
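To make the analytic approach behind these regression results concrete, the following minimal sketch (in Python with statsmodels; the column names and the synthetic data are hypothetical stand-ins, not the study's dataset) illustrates a two-step hierarchical regression in which the covariates enter first and an epistemic-beliefs score enters second, with Cohen's f-square computed from the change in R-square.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; variable names are hypothetical.
rng = np.random.default_rng(0)
n = 53
df = pd.DataFrame({
    "time": rng.normal(size=n),      # learning time
    "verbal": rng.normal(size=n),    # verbal comprehension
    "effort": rng.normal(size=n),    # perceived effort investment
    "prior": rng.normal(size=n),     # prior content knowledge
    "geb": rng.normal(size=n),       # general epistemic beliefs score
})
# Fabricated outcome for illustration only: observed learning complexity.
df["lc"] = 0.4 * df["time"] + 0.3 * df["geb"] + rng.normal(size=n)

# Step 1: covariates only.
step1 = smf.ols("lc ~ time + verbal + effort + prior", data=df).fit()
# Step 2: add the epistemic-beliefs predictor.
step2 = smf.ols("lc ~ time + verbal + effort + prior + geb", data=df).fit()

# Cohen's f-square for the added predictor, from the R-square change.
f2 = (step2.rsquared - step1.rsquared) / (1 - step2.rsquared)
print(f"coefficient for geb: {step2.params['geb']:.2f}, f-square: {f2:.2f}")
```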
The findings for Hypotheses 3 and 8, however, were not generated by both of the inventories (i.e., the CFI and the OMPI) measuring general epistemology. Thus, more research is needed to confirm them in the future. None of the hypotheses assuming correlations between general epistemic beliefs and learner satisfaction (i.e., Hypothesis 2), overestimation (i.e., Hypothesis 4), participants' perceived insufficiency of learning (i.e., Hypothesis 5), the breadth of knowledge exploration (i.e., Hypothesis 9), the likelihood of establishing future plans to explore empirical studies, individual cases, or views from different stakeholders (i.e., Hypothesis 7), or generating indecisive conclusions based on the context-dependency concern (i.e., Hypothesis 6) was supported by the analyses. Yet task-specific epistemic beliefs were found to be connected to some of these variables, and epistemic activation also differentiated the epistemology-learning connection; both findings are discussed next.

Task-Specific Epistemic Beliefs and Learning Complexity (Research Question 2)

The results in this study were not strong enough to demonstrate a connection between task-specific epistemic beliefs and most aspects of learning complexity. Only Hypothesis 6 – that task-specific epistemic beliefs would be positively correlated with the likelihood that participants made indecisive or tentative conclusions due to their context-dependency concern – was supported by the data. One possible explanation is that participants' general and task-specific epistemic beliefs were entered into the regression model simultaneously. The effect of task-specific epistemic beliefs on learning could have been diminished after the impact of general epistemic beliefs was controlled. Accordingly, Hypothesis 11 – that task-specific epistemic beliefs would have a stronger connection to learning complexity than general epistemic beliefs – was not supported in this study.

This counter-intuitive result may stem from the fact that the task-specific CFI and OMPI inventories were not empirically validated before they were used in this study, although a pilot study was conducted to enhance their interpretability and the accuracy of interpreting each item. The items in the original inventories had been significantly revised to collect participants' task-specific epistemic beliefs in this study. Thus, to improve the reliability of the results derived from these two inventories, future studies should also aim to validate the revised versions.

Another possible explanation is that the epistemology-learning connection is more complex. It may depend on other factors, such as learners' self-awareness of their epistemic beliefs prior to learning. A special group of learners may exist whose epistemic beliefs are more ready to interact with their learning complexity. The results of this study are consistent with this assumption and are discussed next.

Understanding the Role of Epistemic Activation (Research Questions 3 & 4)

The effect of activating participants' task-oriented epistemic beliefs prior to learning was investigated through both quantitative (i.e., regression analysis) and qualitative (i.e., interview) methods. Three themes describing the function of epistemic activation from participants' perspectives were consistently identified in the interview protocols.
That is, the activation prompts encouraged participants to contemplate diverse strategies to evaluate web information veracity (66.67%), reminded them to be open-minded to alternatives (59.26%), and aroused their awareness of the subjectivity and complexity of the task (44.44%). The following discussion addresses each theme in detail.

Effects on information veracity evaluation. The regression analysis showed that the participants in the activation group (M = 0.27, SD = 1.04) evaluated the quality of web information based on its content significantly more often than the participants in the non-activation group (M = -0.28, SD = 0.89). As mentioned before, evaluating the content of web information for its veracity (e.g., its logical soundness, the sufficiency of backing evidence, etc.) is more complex than assessing its source or recentness. No group differences in the source or recentness dimensions were found in the study. Therefore, presenting the activation prompts immediately before knowledge exploration could have provided an opportunity for participants to think about advanced strategies to evaluate web information quality. In other words, learners may be unable to spontaneously activate these advanced strategies during learning unless they are specifically asked to think about their strategies before learning. As one participant stated during the interview:

They [activation prompts] gave me the idea that I had in my mind that is fresh and right there. …After prompts, I wanted to focus on data, things that can be proven, not just opinions. … I want[ed] to look for evidence that have citation and can be reinforcement. …These five questions made me aware of my thinking.

The analysis of the interview protocols was consistent with the statistical result. Therefore, these epistemic prompts seemed to be successful in activating participants' advanced strategies to evaluate the veracity of web information.

More interestingly, the correlation between general epistemology and evaluating the content of the encountered web information for veracity judgment was stronger in the activation group than in the non-activation group (see Figure 5). This indicates that, compared to the less complex participants, the complex participants in this study may have benefited more from the epistemic activation in terms of recalling advanced strategies to evaluate the quality of web information. The less complex participants, however, could not recall these advanced strategies even though the prompts provided them with an opportunity to refresh their memories. Perhaps their epistemic beliefs were not complex enough for them to think about these advanced strategies. Therefore, such prompts may benefit not all learners, but only the more epistemically complex ones.

Effects on open-mindedness. Based on the interview protocols, almost 60% of the participants thought that the activation prompts reminded them to be open-minded to alternatives and different viewpoints. Thus, the dichotomous variable (0 – not observed, 1 – observed) for the code alternative-pursued was regressed on the four covariates and the group variable through logistic regression. Group differences existed (B = -1.74, SE = 0.76, Wald(1) = 5.31, p = 0.02, odds ratio = 0.18), suggesting that participants in the activation group (85.19%) were more likely to pursue alternatives during their learning processes than their peers in the non-activation group (57.69%).
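A minimal sketch of this kind of analysis appears below (Python with statsmodels; the variable names, the group coding, and the generated data are hypothetical stand-ins for the study's coded variables, not the actual dataset). It regresses a dichotomous outcome on the covariates and a group indicator and reads the group effect off as an odds ratio.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; names and coding are hypothetical.
rng = np.random.default_rng(1)
n = 53
df = pd.DataFrame({
    "time": rng.normal(size=n),
    "verbal": rng.normal(size=n),
    "effort": rng.normal(size=n),
    "prior": rng.normal(size=n),
    "group": rng.integers(0, 2, size=n),  # e.g., 1 = non-activation, 0 = activation
})
# Dichotomous code: 1 if alternative-pursued was observed, else 0 (fabricated here).
lin = 1.0 - 1.5 * df["group"] + 0.3 * df["time"]
df["alt_pursued"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = smf.logit("alt_pursued ~ time + verbal + effort + prior + group",
                  data=df).fit(disp=False)
b, se = model.params["group"], model.bse["group"]
print(f"B = {b:.2f}, SE = {se:.2f}, "
      f"Wald(1) = {(b / se) ** 2:.2f}, odds ratio = {np.exp(b):.2f}")
```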
The statistical result thus corroborated the interview data: both analyses showed that these epistemic prompts triggered learners' open-mindedness to alternatives.

Effects on sensing complexity and subjectivity. During the interview, over 40% of the participants in the activation group reported that the prompts made them realize that this task was more complex and more subjective than they had assumed. Participants expressed that the activation prompts gave them "a sense of complexity." They felt that they probably could not find "a straightforward answer to it [the question whether or not GE crops are safe]," and that the topic was "going to be inconclusive." Their awareness of subjectivity contributed to the sense of complexity; as one participant stated, "I feel this issue is a lot broader than just safety. It has something about perception of science and technology and how you interpret from different angles."

These anecdotal examples of the prompts enhancing participants' perceptions of task complexity were corroborated by statistical analyses. When learning time was controlled, participants in the activation group (M = 5.90, SD = 0.60) perceived less effort investment in their knowledge exploration processes than their peers in the non-activation group (M = 6.22, SD = 0.52; F(1, 50) = 4.08, p = 0.049, partial η² = 0.08). In other words, among participants who spent the same amount of time exploring the given task, those in the activation group felt their effort investment in the task was less sufficient than those in the non-activation group.

The activation prompts, however, might be more likely to help complex participants sense the complexity of the given task. In the activation group, there was no correlation between participants' general epistemic beliefs and their perceived extent of knowledge exploration (or overestimation of their learning complexity). Yet in the non-activation group, such a correlation was positive (see Figures 6 and 7). Perhaps contemplating these activation prompts ahead of time made the complex learners (but not the less complex learners) realize the complexity of the given topic. When they stopped, they felt there was a great deal of information left to explore, which diminished their perceptions of the thoroughness of their knowledge exploration (or reduced the likelihood that they overestimated the complexity of their knowledge exploration processes). But the results also showed that the correlation between task-specific epistemic beliefs and the perceived extent of knowledge exploration (or overestimation of learning complexity) was significantly stronger in the activation group than in the control group, which seems contradictory to this explanation. Because the revised inventories collecting task-specific epistemic beliefs were not tested statistically for their validity, the results derived from the general epistemic beliefs are more trustworthy. Future studies should confirm the results derived from the task-specific epistemic inventories.

Moreover, if the activation prompts made complex learners more aware of the complexity of the task, then when they stopped learning, they should have viewed their learning as insufficient or incomplete compared to their less complex peers. The results of this study supported this expectation.
Once participants stopped their knowledge exploration, they were asked whether or not they would need more time in the future to learn about the given task to improve their views on the safety of GE crops (i.e., interview question 1). While complex learners (i.e., as measured by the OMPI) in the non-activation group did not feel a greater need for more time than their less complex peers, those in the activation group did (see Figure 12). The same result was found when participants were asked what they would research more if they could have more time to work on the given topic (i.e., interview question 2). Only in the activation group did complex participants report a greater interest in further examining empirical studies or individual cases than their less complex peers (see Figures 10 and 11). Hypothesis 10 was therefore supported.

In short, the results showed that epistemic activation seemed to have decreased learner satisfaction among the more complex participants, because they perceived their learning to be less thorough or less sufficient than did their less complex peers in the activation group. Nevertheless, learner satisfaction – the construct directly measuring how well participants believed they had explored the given task – was not associated with personal epistemology or with the group-epistemology interaction. Thus, not all results have confirmed this conclusion, and further investigation of the relationship between personal epistemology and learner satisfaction is still needed.

Prior Content Knowledge and Using the Prompts. The activation prompts were designed for pedagogy (i.e., triggering participants to think about how they could approach the task), rather than content (i.e., gaining a specific keyword query or website from the prompts). Almost 20% of the participants, however, tried to search for the specific website (i.e., the World Health Organization website) mentioned in the prompts. These participants all verbally expressed (i.e., either in think-aloud protocols or interview protocols) that they had no prior knowledge of the given topic, and thus they craved a hint to start with or a content guideline to search for. This could have decreased their chance to think about how they should approach the topic. As one participant claimed:

I think if I am doing a research that I already have some background knowledge already, then these prompts may change how I explore a little bit. But in this case I was really just looking for anything that would give me the knowledge to start…

Because whether or not learners have some prior content knowledge may affect their responses to epistemic prompts, future studies investigating the effect of epistemic prompts should be conducted in these two populations separately.

Unexpected Results. This study also yielded some unexpected results. In the non-activation group, the correlation between participants' general epistemic beliefs and their overestimation of learning complexity was positive, which is opposed to Schommer's (1990) empirical finding that epistemic beliefs related negatively to overconfidence. In addition, although task-specific epistemic beliefs were found to be negatively correlated with overestimation of learning complexity in the control group, this connection was insignificant in the activation group. Why the complex thinkers in the non-activation group were less likely to feel overconfident about their learning than the complex thinkers in the activation group is unexplainable at present.
Future studies should be conducted to retest these findings.

Understanding the Role of Covariates

Although the main purpose of this study was to examine the epistemology-learning relationship, some valuable findings also emerged from examining the covariates.

Learning Time

Learning time was positively connected to the observed learning complexity (integrated) and all of its dimensions. The more time participants spent exploring the given task, the more opportunities they had to enact complex intellectual activities during the learning process, such as making connections, evaluating information veracity, and generating questions. When predicting learning variables collected through self-reported methods, learning time was associated positively with the perceived extent of knowledge exploration. That is, the more time participants spent, the more likely they were to perceive their knowledge exploration processes as thorough. The interesting results, however, were the positive relationship between learning time and perceived insufficiency of learning, and the negative relationship between learning time and satisfaction. In other words, when they stopped learning, participants who had explored the task longer were more likely to feel dissatisfied with their learning and to perceive a need to improve their learning in the future, whereas participants who stopped learning quickly were more likely to feel good about what they had learned. It remains unclear, however, whether study time is a cause or an effect of learner dissatisfaction and the perceived insufficiency of learning.

Verbal Comprehension Abilities and Effort Investment

Participants' verbal comprehension abilities were correlated positively with learning complexity (integrated) and with its dimensions of connection and critical analysis of web information. Verbal comprehension is known to predict reading comprehension and general intellectual ability (Qian, 2002; Stanovich, 2000). Thus, it is not surprising that participants with strong verbal comprehension abilities could exceed those with weaker verbal comprehension abilities in building connections across information and in evaluating the quality of web information more often. In addition, no variables collected through self-reported methods were correlated with verbal comprehension, showing that learners' perceptions of their learning (e.g., how well they had explored the task, whether or not they should learn more in the future, etc.) did not depend on their verbal comprehension. Thus, verbal comprehension seemed to relate to the concrete, observed learning process rather than to self-reported learning variables.

Another covariate, participants' perceived effort investment in exploring the given task, related positively to their satisfaction with learning, and negatively to their perceived insufficiency of learning and to planning to check empirical studies. Unlike verbal comprehension's connection to the observed learning variables, effort investment is a self-reported variable and related to other variables derived from participants' perceptions.

Prior Content Knowledge

The role of learners' prior content knowledge in Internet-based learning has been studied by Wu and Tsai (2005, 2007). Using survey methods, they found that, compared to novices, content experts were more likely to experience deep learning on the web, such as using elaborative search strategies (e.g., summarizing, comparing, etc.) and validating personal judgment.
This study, however, showed that prior content knowledge was not associated with the observed complexity of knowledge exploration processes or its dimensions. Wu and Tsai measured participants' perceived learning complexity through self-reported instruments, whereas the learning complexity in this study was gauged through direct analysis and was corroborated with retrospective interview protocols. Yet in this study, participants' self-reported perceived extent of knowledge exploration (i.e., equivalent to the learning complexity construct measured in Wu and Tsai's studies) also did not relate to their prior content knowledge. Thus, more study is needed to test these results.

Previous studies have also shown that prior content knowledge can increase the effectiveness of information seeking by helping learners formulate appropriate search query words, judge the relevancy of web information, and prepare for the Internet search (Bilal, 1998; Hsieh-Yee, 1993; Marchionini, 1989; McDonald & Stevenson, 1998; Shute & Smith, 1993; Vakkari, Pennanen, & Serola, 2003; Wildemuth, 2004). Nevertheless, this study showed that having more prior content knowledge was not that beneficial. Specifically, participants with greater prior content knowledge were less likely to perceive the insufficiency of their learning, and their knowledge exploration processes were more focused (statistical results were significant at the .10 level); these patterns did not depend on participants' epistemic beliefs. These results suggest that for both complex and less complex learners, prior content knowledge often reduces the breadth of knowledge they explore and also makes them feel good about what they have learned.

The statistical results corroborated the interview data. One participant, for example, stated, "I am more information-driven, just because I don't know about the topic. If I had prior knowledge, I would be more like self-control, [because] I know what I am looking for, and what I should look for." Three other participants reported the same idea (i.e., that prior content knowledge can support a more structured search) during the interview. A structured search, however, may leave learners blind to the things they do not know, and learners who know in advance what they will explore may be less likely to recognize the insufficiency of their learning. When exploring ill-structured tasks on the web to understand their embodied issues comprehensively and deeply, and when searching for the right answer and locating certain information quickly and accurately are no longer the priority, learners need to be modest about what they already know so that they feel a need to learn more. These findings disclose the sophisticated relationship between prior knowledge and learning in different contexts (i.e., exploring ill-structured vs. well-structured tasks) with different goals (i.e., deep understanding vs. finding answers quickly and accurately), which deserves further investigation.

Although prior content knowledge can lead to a feeling of learning sufficiency and make learning more focused and structured, a lack of prior content knowledge may influence learning as well. As discussed above, some learners without prior content knowledge used the activation prompts as a source for a content guideline, rather than as an occasion to think pedagogically. But prior content knowledge seemed to affect participants' decisions about how to approach the task as well.
One participant, for example, said: "if I already know what it is all about, then I would not have so much need to get that basic information. I could go straight to the individual cases or the case studies." That is, greater prior content knowledge might connect to participants' attempts to search for individual cases or empirical studies (i.e., instances fitting the case-pursued code) during their knowledge exploration processes. This assumption was tested but not supported (F(1, 49) = 2.18, p = 0.15). Differences, however, existed between complex and less complex thinkers (F(1, 46) = 4.74, p = 0.035, partial η² = 0.09; complex vs. less complex thinkers were categorized by a median split on the general epistemic beliefs measured by the CFI, with the effects of learning time, effort investment, and verbal comprehension controlled). That is, among complex thinkers, prior content knowledge predicted searching for cases, whereas among less complex thinkers this relationship did not exist. The interaction among prior content knowledge, epistemic beliefs, and the knowledge exploration processes is intricate and needs further research.

Implications

This study confirms the epistemology-learning connection among undergraduate students who explored a given ill-structured task using open-ended Internet resources. Theoretically, learners who believe that knowledge is interconnected, tentative, and sensitive to contextual factors are more likely to explore concrete cases, compare them through multiple lenses, and construct their own understandings, rather than finding and accepting authoritative information online. Thus, it is not surprising that the complex learners in this study searched expansively and processed web information deeply.

This finding informs teachers and educators that learner characteristics account for variance in the complexity of students' knowledge exploration processes. Thus, teaching should focus not only on increasing students' content knowledge, but also on cultivating students' complex beliefs about knowledge and knowing. Although many psychologists have studied the developmental tendencies of personal epistemology (e.g., Perry, 1970; King & Kitchener, 1994; Kuhn et al., 2000), research on how to effectively change epistemic beliefs is scarce. Suggestions such as adding hands-on experiments in science classrooms, where students make observations, test hypotheses, and draw conclusions based upon evidence (Conley, Pintrich, Vekiri, & Harrison, 2004), or providing students with conflicting information (Muis, 2007), are preliminary and should be further tested.

This study also reveals that using prompts to activate learners' task-oriented epistemic beliefs can enhance their sense of task complexity, help them recall more advanced strategies to evaluate information veracity, and make them more open to alternatives. Therefore, in order to improve the complexity of learning, instructors may consider using prompts to raise students' self-awareness of their epistemic beliefs. Studies have shown positive effects of prompting learners to reflect on their learning during their knowledge exploration processes (e.g., Bannert, 2006; Demetriadis et al., 2008). The strategy tested in this study – prompting learners immediately before learning – is also effective, and it is more easily implemented by classroom and online instructors, especially when learning is not highly structured and learning resources are not pre-selected (e.g., learning with the Internet).
Before sending students to explore a task in front of computers, it may be wise for instructors to prepare some questions and initiate a classroom discussion asking students to reflect on the nature of knowledge and knowing. In an online course, the instructor can post some questions for epistemic reflection and ask students to think about them individually or to discuss them in groups.

On the other hand, instructors should also keep in mind that students with complex epistemic beliefs are more likely to benefit from this strategy. When presenting prompts to raise learners' self-awareness of their metacognitive thinking during their learning processes, Demetriadis et al. (2008) found that, compared to their less complex peers, complex thinkers were more likely to have their performance improved by prompts. In the present study, although the prompts were presented before learning and focused on epistemic metacognition, a similar result emerged: only complex learners seemed to benefit from these epistemic prompts in terms of increased learning complexity. Therefore, learners' epistemic beliefs are not only connected to learning complexity, but also affect how well learners can take advantage of prompts.

The prompts in this study provided participants with an opportunity to systematically think through the nature of knowledge and the process of knowing pertinent to the given task. Less complex thinkers did not embrace complex epistemic beliefs (such as considering knowledge to be subjective, contextually sensitive, interconnected, and something to be constructed rather than obtained directly from authorities). Therefore, even though they were provided with an opportunity to contemplate the nature of knowledge and knowing, they could not recall complex epistemic beliefs, because theirs were quite simple. On the other hand, the prompts in this study were targeted at activating, not changing, participants' epistemic beliefs. It is unknown how these less complex learners would be influenced by prompts aimed at improving their epistemic thinking. Moreover, if teachers lead a classroom-wide (face-to-face or online) discussion before students' knowledge exploration to activate their epistemic beliefs, can less complex learners benefit from this activity if their more complex peers share their more complex epistemic thinking, or if instructors enact coaching strategies (e.g., scaffolding) during the discussion? Our understanding of how to use prompts is therefore extremely limited, and additional investigations would be of great value.

Limitations

As mentioned, the items in the CFI and the OMPI were revised significantly to measure participants' task-specific epistemic beliefs. The revised inventories were not examined for their validity, although the interpretability of their items was tested in the pilot study. Thus, the results derived from these two inventories should be interpreted cautiously.

In addition, when reflecting on the design of this study retrospectively, perhaps the general epistemic beliefs should have been measured before the interview, because some of the interview questions may have primed participants to be more epistemically complex. It is also noteworthy that the interviewer may have subconsciously prompted some participants to speak more than others for diverse reasons (e.g., some participants were less elaborative than others). This may have affected participants' responses to a certain degree.
Third, the dependent variables of learning complexity were interrelated; thus, a Bonferroni correction should be considered when evaluating the significance of the results or estimating the likelihood of replicating the findings in the future. However, the p-values for all statistical analyses are provided.

This study makes an initial attempt to examine the effect of epistemic activation. Thus, the results are both informative and limited. It is unclear how each prompt affected learners' knowledge exploration processes, and it is important for future work to explore the role of each specific prompt. The participants in this study spent 20 minutes on average working on these prompts. If some prompts were redundant in terms of activating certain learners' epistemic beliefs, or if some prompts were not effective at increasing learners' self-awareness, these prompts should be eliminated so that the activation process can be more succinct. Moreover, the prompts were designed to raise participants' awareness of their epistemic beliefs prior to learning, and the results show that complex thinkers responded to the prompts better than less complex thinkers in terms of activating advanced strategies to evaluate web information veracity and feeling dissatisfied with their learning processes (in particular, perceiving their learning to be insufficient). Thus, it seems that contemplating prompts prior to learning reinforces the complex epistemic beliefs embraced by complex thinkers. However, the results do not address whether the prompts affected the epistemic beliefs of the less complex thinkers. For example, questions like "Did the prompts reinforce their less complex epistemic beliefs?" and "Did the prompts increase the complexity of epistemic beliefs for these less complex thinkers because they were offered an opportunity to think about diverse epistemic issues?" are not addressed by this study and should be studied in the future.

This study reveals that prompting students to be cognizant of their task-oriented epistemic beliefs can be beneficial in terms of improving their learning complexity, but it remains unknown how prompts should be written to maximize their benefit. Although prior studies have examined different types of prompts and their relationships to learning outcomes (e.g., Davis, 2003; Chen & Bradshaw, 2007), more information is needed. As early as the 1980s, scholars (e.g., Kitchener, 1983) called for studying the role of activating learners' epistemic beliefs and for testing the epistemology-learning relationship in ill-structured knowledge domains, but investigators have not systematically examined them. It is hoped that the findings from the current study will be of considerable benefit to the field.

CHAPTER 5

Conclusions

Exploring ill-structured tasks using the Internet requires expansive searching and deep processing. This study contributes to the literature in three ways. First, it proposes a framework for analyzing open-ended, unstructured knowledge exploration processes to measure their complexity. The proposed framework and its described procedure for quantifying learning complexity may have utility in future studies. Second, this is the first study to demonstrate the connection between personal epistemology and observed learning complexity through statistical analyses. Specifically, it was found that the complexity of knowledge exploration processes is associated with general epistemic beliefs, but not with task-specific epistemic beliefs.
This may be attributed to the lack of validation of the task-specific epistemic inventories. Further studies, therefore, should test the relationship between task-specific epistemic beliefs and learning complexity. Finally, this study examined the role of a pedagogical intervention that can be easily adapted to classroom settings as well as online instruction, and found that complex thinkers benefit from epistemic activation to a greater extent than their less complex peers, and thus demonstrate a more complex level of knowledge exploration. For classroom and online instructors, at least two ways to take advantage of epistemic activation have been discussed. First, they can post epistemic prompts online and ask their students to think about them individually, similar to the approach employed in this study. A second possibility would be to lead an in-class or online discussion. This approach may have the added benefit of opening a dialogue among learners of differing epistemologies, possibly to the benefit of less complex learners. This idea deserves future testing.

APPENDICES

APPENDIX A

The Ill-Structured Task

Genetic engineering (GE), also known as genetic modification (GM), is a recent innovation. But just like any new invention, GE implementation has potential costs. There are many concerns about the safety of GE crops. Because GE crops are widely grown in the U.S., we need to know their impacts on our health. Please take your time to explore and research diverse issues related to this topic on the web to form and validate your own view on whether or not GE crops are safe to eat.

It is possible that you have heard about some discussions and could have formed your own point of view before you start your research. In this case, please hold a neutral view on GM crops prior to your online research and avoid letting your preconceptions interfere with the formation of your own view. Your understanding of the safety of GE crops should build only upon the web information you research later today.

Although most participants spend 30 to 120 minutes researching the topic, there is no time limit. Please try your best to get to know the topic as thoroughly as possible so that your view is valid, reasonable, and supported by evidence. Please stop only when you feel satisfied with what you have learned and confident that your view is well supported.

If needed, you can take notes while you are researching. A piece of paper and a pencil (or pen) are provided. While you are researching, please verbalize what you are thinking, especially when you form or change (if applicable) your view based upon web information you encounter. Please also verbalize what you write down if you take notes, and what you are thinking when you take notes.

After your research, you will need to answer some questions to defend your view and to test how well you have researched this topic. You can NOT use the Internet anymore, but you can use your notes.

Please treat this task seriously! Try your best to research the topic with your full effort. Otherwise, we may not be able to use your data to help understand how to improve learning on the web. Additionally, since GE crops are widely grown in the U.S., investing substantial effort to understand the safety issues of GE crops is also important to your own well-being!

APPENDIX B

General Epistemic Beliefs Inventories

Part I.
• Each of the following items contains two opposing statements about learning.
• Please select the degree to which each statement matches how you think.
• ONLY ONE OPTION ON EACH ITEM (OR LINE) CAN BE SELECTED.
• There is no right or wrong answer, and we just want to know how you think.

Example

For the following item:
Item 1. Statement A: "I am an introverted person."  Statement B: "I am an extroverted person."
Options: Strongly agree with A | Mostly agree with A | Somewhat agree with A | Somewhat agree with B | Mostly agree with B | Strongly agree with B
Thus, if you think that you are more likely to be an introverted person, you would select one of the options on the "agree with A" side.

[Part I starts below…]

1. A: I have learned some topic best when I can account for various phenomena using some single, more abstract, explanatory system, framework, or perspective.
   B: I have learned some topic best when I can examine its various phenomena through different explanatory systems, frameworks, or perspectives.

2. A: Complex topics should be best broken down into parts and studied separately. In most areas of study, the whole topic is usually equal to the sum of its parts.
   B: Breaking down complex topics into separate parts is often misleading because components tend to interact and affect each other. In most areas of study, the whole is usually not the same as the sum of the parts.

3. A: Different aspects or subtopics of knowledge should be compartmentalized in the mind so that I can see how one aspect can neatly build off the rest.
   B: Different aspects or subtopics of knowledge should be highly interrelated in the mind along varying dimensions so that I can see their different roles from different perspectives.

4. A: When phenomena appear inconsistent, it is probably because a single system or lens for explanation cannot be found. Multiple explanatory systems should be used so that they could be explained thoroughly.
   B: When phenomena appear inconsistent, it is probably because a system for explaining them has not yet been found. But it is likely that such a system exists.

5. A: I enjoy encountering difficult, conflicting, and disorderly concepts and find them challenging.
   B: I prefer simplicity, consistency, and orderliness. Whenever possible, I prefer not to encounter complex concepts in school (although I deal with complexity when I have to).

6. A: I do not find ambiguity or inconsistency too troubling. It's all right if things don't always have a clear answer or cannot be explained uniformly. Yet it is essential that I should know the underlying factors accounting for the ambiguity and inconsistency.
   B: I feel intolerant of ambiguity or inconsistency, because it indicates a limit to what is known. Things should have a clear answer if we know enough about them.

7. A: The notion that ideas should 'come to life' makes no sense. Concepts are merely abstractions.
   B: Ideas need to 'come to life'. Concepts should be personally experienced in a vital manner.

8. A: When previously learned information has to be applied, I usually recall specific contexts in which I use some general rule to solve similar problems. Then I try to align these contexts with the context of the new case. I usually do NOT directly try out some general rule or follow some general process when I deal with new cases.
   B: When previously learned information has to be applied, I usually tend to recall some general rule and then try it out in the new situation, or I usually recall the general process of solving other cases for what I should do in the new situation.
9. A: Learning is essentially a process in which I receive information, record it appropriately in my memory, and retrieve it accurately for later use.
   B: Learning is essentially a process in which I personally construct understandings and acquire the ability to apply my knowledge in new ways to various kinds of new situations.

10. A: Learning works best when I am told explicitly what I am supposed to learn and how I should learn.
    B: Learning works best when I am left with a lot of flexibility regarding what should be learned and how I should learn.

11. A: Learning works best for me when it is self-directed.
    B: Learning works best for me under the guidance of experts (e.g., teachers).

12. A: I am very concerned with how others evaluate me. Doing well on exams is my most important learning goal.
    B: I set my own personal standards; self-evaluation matters most to me. Exams are important, but they are not the ultimate goal of my learning.

13. A: All issues could NOT have any certain absolute answer applicable to all situations, even if they are well studied and are scientific and theory-based.
    B: All scientific and theory-based issues should have a single certain absolute answer applicable to all situations if they are well studied.

14. A: I am highly motivated by internal factors (e.g., what I intrinsically want to do and think is best).
    B: I am highly motivated by external factors (e.g., what other people expect of me).

Part II.
• Listed below are pairs of statements concerning thoughts, attitudes, and ways of behaving.
• Please read each statement carefully and find the one which pertains to you more closely. No statement is more "correct" than the other.
• Please answer all items, but circle only one statement ("a" or "b") in each pair.

1) a. Schools should be where a child learns to think for him/herself. b. Schools should be where a child learns basic information.
2) a. Things really look different if we change how we see them. b. Things really look different only if they are changed.
3) a. Organisms change by forces from outside themselves. b. Organisms can change themselves.
4) a. A good judge is purely objective. b. A good judge is not objective and knows it.
5) a. Great discoveries come from scientific imagination. b. Great discoveries come from scientific experimentation.
6) a. All things stay basically the same over time. b. All things change from one moment to the next.
7) a. A business executive needs time to analyze the facts. b. A business executive needs time for creative thinking.
8) a. Before making a big decision, I like to sleep on it. b. Before making a big decision, I like to get all the information.
9) a. Progress in science occurs when there is a new way of looking at events. b. Progress in science occurs when an important observation is made.
10) a. A criminal is just a burden to society. b. A criminal has a function in society.
11) a. Our knowledge is limited by our observations. b. Our knowledge is limited by our imagination.
12) a. Living is a process of using up the available supplies. b. Living is a process of exchanging supplies back and forth.
13) a. Events are sometimes just the same as before. b. Events are always new and different in some way.
14) a. Divorce is often a phase in each partner's growth. b. Divorce is usually the result of incompatible personalities.
15) a. Facts are more useful than a good idea. b. Facts are less useful than a good idea.
16) a. Each relationship I have is different. b. Each relationship I have is much like the previous one.
17) a. Things are changed only when they are directly affected. b. Things are changed by everything else.
18) a. We learn by carefully examining individual facts. b. We learn by finding order in an array of facts.
19) a. To live independently of other people is not a realistic goal. b. To live independently of other people is a realistic goal.
20) a. War can be better understood by examining what purpose it served. b. War can be better understood by examining its causes.
21) a. The world is like a large, living organism. b. The world is like a large, complex machine.
22) a. A child discovers the world by being praised and punished. b. A child discovers the world by testing his/her dreams and fears.
23) a. I can change things in my family only by planned action. b. I can change things in my family just by being who I am.
24) a. A child's world is different from mine. b. A child's world is like mine, but he/she knows less.
25) a. Persons are made by their environments. b. Persons and their environments affect each other.
26) a. To resolve a family dispute, it is important how we look at the facts. b. To resolve a family dispute, it is important to discover all the facts.

Statements in Italic indicate complex levels of epistemic beliefs.

APPENDIX C

Task-Specific Epistemic Beliefs Inventories

Part I
• Each of the following items contains two opposing statements about how you would learn about a certain topic.
• Please select the degree to which each statement matches how you think.
• ONLY ONE OPTION ON EACH ITEM (OR LINE) CAN BE SELECTED.
• There is no right or wrong answer, and we just want to know how you think.

Example

For the following item:
Item 1. Statement A: "I am an introverted person."  Statement B: "I am an extroverted person."
Options: Strongly agree with A | Mostly agree with A | Somewhat agree with A | Somewhat agree with B | Mostly agree with B | Strongly agree with B
Thus, if you think that you are more likely to be an introverted person, you would select one of the options on the "agree with A" side.

[Part I starts below]

Instructions: Imagine that you had to use the Google search engine to learn how genetically engineered foods could influence human health. Keep this task in mind as you answer the following questions.

1. A: I will feel intolerant of ambiguity or inconsistency about the issue related to the safety of genetically engineered foods, because it shows the limit of my knowledge. I should have a clear answer if I learn about it sufficiently.
   B: I will NOT feel intolerant of ambiguity or inconsistency about the issue related to the safety of genetically engineered foods. It's all right if this issue doesn't have a clear answer. Yet it is essential that I need to know the underlying factors accounting for the ambiguity and inconsistency.

2. A: Learning the safety of genetically engineered foods will be a process in which I read web information carefully, record it appropriately in memory, and retrieve it accurately for later use.
   B: Learning the safety of genetically engineered foods will be a process in which I personally construct understandings and acquire the ability to apply my knowledge in new ways to explain various phenomena.

3. A: Learning diverse issues about the safety of genetically engineered foods will work best when I am told explicitly what I should learn and how I should learn.
   B: Learning diverse issues about the safety of genetically engineered foods will work best when I am left with a lot of flexibility regarding what I should learn and how I should learn.

4. A: If phenomena or studies related to the effect of a certain type of genetically engineered foods appear inconsistent or contradictory, it is probably because a single system or lens for explanation cannot be found. Multiple explanatory systems should be used so that they could be explained thoroughly.
   B: If phenomena or studies related to the effect of a certain type of genetically engineered foods appear inconsistent or contradictory, it is probably because a system for explaining them has not yet been found. But it is likely that such a system exists.

5. A: All issues related to the given topic, such as whether or not eating a certain type of genetically engineered foods can cause problems to the human immune system, should have a single certain absolute answer applicable to all situations if they are well studied, scientific, and theory-based.
   B: All issues related to the given topic, such as whether or not eating a certain type of genetically engineered foods can cause problems to the human immune system, could NOT have any certain absolute answer applicable to all situations, even if they are well studied, scientific, and theory-based.
6. A: I will learn best if I can account for various phenomena about genetically engineered foods using some single, abstract explanatory system, framework, or perspective.
   B: I will learn best if I can examine various phenomena about genetically engineered foods through different explanatory systems, frameworks, or perspectives.

7. A: I will break this learning task (i.e., how genetic modification can influence human health) apart and learn each individual part or component separately. The whole topic should be equal to the sum of its parts.
   B: Breaking this task (i.e., how genetic modification can influence human health) apart to study each individual part or component separately will be misleading because parts must interact and affect each other. The whole topic is not the same as the sum of the parts.

8. A: I will enjoy encountering difficult, inconsistent, and disorderly concepts, and find them challenging during the process of understanding diverse views on how genetic modification can relate to human health.
   B: I will prefer simplicity, consistency, and orderliness when learning the issues about how genetic modification can relate to human health. Whenever possible, I will prefer not to encounter complex or inconsistent concepts during the learning (although I will deal with complexity when I have to).

9. A: Concepts and arguments about the effects of genetically engineered crops on human health are probably abstractions. Their meanings are not likely to be changed in different situations.
   B: Concepts and arguments about the effects of genetically engineered crops on human health should be concrete. That is, they will make no sense unless they can be personally experienced or relevant.

10. A: If I had to take an exam on what I had learned about genetic modification after searching the Internet, I would be very concerned with how others (e.g., the experimenter) evaluate me. Doing well on the exam testing how much and how well I have learned about the given topic will be my most important learning goal.
    B: I set my own personal standards; self-evaluation of what I should learn about the given topic matters most to me. If I had to take an exam on what I had learned about genetic modification after searching the Internet, although performing well on the exam would be important, it would not be the ultimate goal of my learning in this situation.
11. A: I will learn best about the connections between genetically engineered crops and human health when learning is self-directed.
B: I will learn best about the connections between genetically engineered crops and human health under the guidance of an expert (e.g., an expert gives me a list of websites to explore the topic).

12. A: Different aspects or subtopics of genetic modification should be highly interrelated in my mind along varying dimensions so that I can see their varying roles from different perspectives.
B: Different aspects or subtopics of genetic modification should be compartmentalized in my mind so that I can see how one aspect can neatly build off the rest.

13. A: During the learning process of genetic modification, I will be highly motivated by internal factors (e.g., what I intrinsically want to learn and what is best about learning this topic).
B: During the learning process of genetic modification, I will be highly motivated by external factors (e.g., what the experimenter expects of my learning, and how I am supposed to learn this topic).

Part II
• Listed below are pairs of statements concerning the ways to learn about a certain topic.
• Please read each statement carefully and find the one which pertains to you more closely. No statement is more "correct" than the other. Even if you partly agree with both, you should pick the one closest to you.
• Please answer all items, but circle only one statement ("a" or "b") in each pair.

Instructions: Still imagine that you had to use the Google search engine to learn how genetically engineered foods could influence human health. Keep this task in mind as you answer the following questions.

1) When learning how genetically engineered foods affect human health, the Internet should be an environment in which
a. learners learn to think for themselves.
b. learners learn basic information.
2) The impacts of genetically engineered foods on human health really look different to different people
a. if they see the impacts differently.
b. only if the impacts themselves are different.
3) The process of learning on the web about how genetically engineered foods affect human health will change
a. in response to the environmental factors learners are exposed to.
b. in response to learners' improved understanding of the subject.
4) When learning about the relationships between genetically engineered foods and human health on the web,
a. a good learner is purely objective and unbiased.
b. a good learner is not objective and is aware of his/her bias.
5) Concepts or principles related to the effect of genetically engineered foods on human health
a. stay basically the same over time.
b. change constantly.
6) Progress in learning about genetically engineered foods and their impacts on human health occurs when
a. there is a new way of looking at what we have observed.
b. an important new observation is made.
7) When searching on the Web to learn how genetically engineered foods can affect human health,
a. misinformation (i.e., false or inaccurate information) is always a problem for learning.
b. misinformation has a function in learning.
8) Using the Internet to learn the connections between genetically engineered foods and human health is a process of
a. finding and knowing all available web information.
b. comparing web information among several sources.
9) The process of learning the connections between genetically engineered foods and human health on the web is
a. sometimes just the same as before.
b. always new and different in some way.
10) When discussing the impacts of genetically engineered foods, disagreements between learners are
a. usually a phase in each learner's growth.
b. usually the result of incompatible understandings.
11) Each individual fact learners explore online about a certain function of a certain type of genetically engineered foods is probably
a. similar to other facts.
b. different from other facts.
12) Learners' understanding of the impacts of genetically engineered foods on human health, using the Internet, can be changed
a. by only directly related topics online.
b. by everything else, such as other web information and the environment (e.g., air, chair, learner's physical condition, etc.).
13) We learn the impact of genetically engineered foods on human health by
a. carefully examining individual facts.
b. finding order in an array of facts.
14) Conflicting views on the function of genetically engineered foods can be better understood by
a. examining values of these conflicting views.
b. examining causes of these conflicting views.
15) Another learner's understanding of how genetically engineered foods affect human health is
a. different from mine.
b. like mine, but he/she knows more or less.
16) How well learners can understand the impacts of genetically engineered foods is determined by
a. what web information they read.
b. their improved understanding of the web information they read.
17) To resolve an inconsistent issue about genetic modification, it is important
a. to understand how people look at the facts included.
b. to discover all the facts included.
18) When searching on the Web to learn how genetically engineered foods can affect human health, biased information (e.g., individual opinions)
a. is always a problem for learning.
b. has a function in learning.
Statements in Italic indicate complex levels of epistemic beliefs.

APPENDIX D
Activation Prompts

Before you begin, we would like you to contemplate the following five scenarios. Please answer the questions embedded in each scenario with as much detail as possible. The purpose of this exercise is not to test how well you can answer the questions, but to help prepare your mind for the upcoming task. If, while answering these questions, you have any thoughts on how to best carry out your online research, feel free to jot them down to assist you later.

1. When you study the effect of genetic engineering on human health, you can find and read 1) summaries posted by different people or organizations (e.g., conclusions from the World Health Organization website, effects of GE products presented online as bullet points, etc.); or 2) individual cases posted by different people (e.g., consumers describing their health issues after eating GE foods, physicians' and nutritionists' opinions on GE products, farmers talking about their GE crops, specific studies testing the safety of a certain type of GE food, interviews with policy makers and representatives of biotechnology companies, ecologists' observations of agricultural systems in which GE crops grow, etc.). Which of these two approaches do you think helps you understand the issue better and is more helpful for forming and justifying your own view on whether or not GE crops are safe to eat? Why?

2. Do you think it is possible that two trustworthy websites may show opposing information on a certain topic (e.g., opposing results found in rat feeding tests assessing the impact of a certain type of genetically engineered potato on rats' immune system)? Why or why not?
What are some possible explanations you can think of for the contradiction?

3. As you build your own knowledge about the given topic, how certain are you that what you read is true, reasonable, or believable? What factors do you think may affect the veracity of web information? What evidence, facts, or empirical data will you decide is acceptable justification for particular views related to this topic?

4. Suppose you find several websites providing evidence to support the view that genetically engineered foods are safe to eat, but several other websites provide evidence that genetically engineered foods are unsafe to eat. Which one of the following situations is most likely? 1) One view is correct and the other view is incorrect. 2) Both views can be equally correct or incorrect. 3) One view is more correct or reasonable than the other, but both can be correct or incorrect to some extent. Why? What are some possible explanations you can think of for the contradiction? How will you reconcile inconsistent information when judging whether or not genetically engineered foods are safe to eat?

5. Does the issue – whether or not genetically engineered foods are safe to eat – have a clear and correct answer? Why or why not? How would you address this issue, and how do you know if you have learned this issue thoroughly and sufficiently?

APPENDIX E
Post Survey

Part I. Please read each statement below and indicate the number that best applies to you. PLEASE BE HONEST! There is no right or wrong answer!

1. Do you think you have sufficiently invested effort in the task?
1--------------2-------------3------------4------------5-------------6-----------7
(1 = Not sufficient at all; 7 = Completely sufficient)

2. How would you rate the completeness of the information you researched about the topic online?
1--------------2-------------3------------4------------5-------------6-----------7
(1 = I did the minimal search to finish the task quickly; 7 = I searched extensively until I did not find more worthwhile information)

3. Do you think you have sufficiently explored the topic to the fullest degree necessary?
1--------------2-------------3------------4------------5-------------6-----------7
(1 = Not at all; 7 = Yes, completely)

4. In terms of the richness/broadness of the information, how satisfied are you that you researched and understood the topic to the fullest extent necessary?
1--------------2-------------3------------4------------5-------------6-----------7
(1 = Not satisfied at all; 7 = Completely satisfied)

5. In terms of the depth of the information, how satisfied are you that you researched and understood the topic to the fullest extent necessary?
1--------------2-------------3------------4------------5-------------6-----------7
(1 = Not satisfied at all; 7 = Completely satisfied)

6. To what extent do you feel that there is information on the Web that you did NOT find but is important and pertinent to the topic?
1--------------2-------------3------------4------------5-------------6-----------7
(1 = I did not successfully find any important information on the Web; 7 = I found all of the important information on the Web)

7. To what extent do you feel you have understood the given topic thoroughly?
1--------------2-------------3------------4------------5-------------6-----------7
(1 = I have understood very little of this topic; 7 = I have understood this topic very well)

8. To what extent do you feel that there is more information on the Web that you did not find but could have made you understand the topic more thoroughly?
1--------------2-------------3------------4------------5-------------6-----------7
(1 = Lots of such information left online that could have made me learn more thoroughly; 7 = No such information left online that could have made me learn more thoroughly)

Part II. Please read each statement and indicate to what extent you agree or disagree with it (Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree).

When I researched the information about the given task on the Web …
1. I frequently connected ideas or examples across websites.
2. What I found usually led me to what I should search next.
3. I frequently integrated web information across websites.
4. I frequently compared web information across websites.
5. If I could find a relevant website with lots of information I wanted, searching for other information was probably unnecessary.
6. I was eager to find a single website which contained the most fruitful and reliable information.
7. The particular information I needed to research was really clear to me at the beginning of my search.
8. There were moments when I felt overwhelmed by the amount of information relevant to the given task.
9. I purposefully searched out or would have searched out alternative views or evidence, if what I found was all from one side of arguments.
10. Sometimes, I skimmed web pages NOT only to find out the pertinent information addressing whether or not GE foods are safe to eat, but also to check out other indirectly related information that could help me understand this issue better.
11. Even after acquiring a fair amount of relevant information, I still tended to search for some different ideas, perspectives, or evidence.
12. I assumed that all web information was biased and was not flawless to some extent.
13. I preferred reading summaries to reading about individual experiences or opinions (e.g., ideas from physicians, consumers, politicians, biochemical engineers).
14. I would still purposefully search for alternatives even if I thought alternative views might not exist for some issues.
15. I evaluated the trustworthiness of the information I read online by checking out whether or not alternative views, explanations, or disconfirming evidence existed.
16. I mainly focused on finding and reading the websites presenting information in a clear and simple way (e.g., conclusions summarized with bullet points).
17. I evaluated the trustworthiness of web information by checking out its source, such as who posted it or what website it was from.
18. I evaluated the trustworthiness of web information by assessing the content per se, such as whether its evidence was sufficient and convincing, or whether its experiment was set up reasonably, etc.
19. I not only read summaries, but also read individual opinions or experiences shared by some stakeholders.
20. I frequently reflected on what I had learned to determine what I should explore next.
21. I used particular web pages to get an overview of the topic or to locate additional (or original) sources of information for exploration.
22. It was necessary to read some research papers to form my view.
23. I evaluated the trustworthiness of web information by checking whether or not it was written recently.
Note. Statements in Italic were reversed in data analyses.

APPENDIX F
Prior Content Knowledge Test

Age ________  Sex ________  Major ________________________________
I am a: Freshman / Sophomore / Junior / Senior / Other (specify) ________
1. A large amount of genetically engineered/modified food safety research has been conducted on human subjects and has shown that genetically engineered foods can be risky to human health.
a. True b. False c. I don't know
2. Foods made from genetically engineered/modified crops are required to pass human testing conducted by the Food and Drug Administration (FDA).
a. True b. False c. I don't know
3. Most foods derived from genetically engineered/modified crops contain the same number of genes as food produced from their conventional (non-genetically engineered/modified) crops.
a. True b. False c. I don't know
4. If we live in the United States, it is almost certain that we have eaten foods that are genetically modified.
a. True b. False c. I don't know
5. Labeling food that is genetically modified is NOT required in the United States.
a. True b. False c. I don't know
6. Individual genetically engineered/modified foods and their safety investigations should be assessed on a case-by-case basis, because different genetically modified organisms include different genes inserted in different ways.
a. True b. False c. I don't know
7. When genetic modification was first introduced on the market, its major goal was to produce crops with more nutritional value.
a. True b. False c. I don't know
8. Genetically engineered/modified plants are now being developed for the production of recombinant medicines and industrial products, such as vaccines, plastics, and biofuels.
a. True b. False c. I don't know
9. Genetically engineered/modified plants can be used to produce drugs to treat human disease.
a. True b. False c. I don't know
10. Genetically engineered/modified plants can NOT contaminate the ecosystem.
a. True b. False c. I don't know
11. A genetically engineered/modified plant can contain a gene from an unrelated plant or from a completely different species.
a. True b. False c. I don't know
12. Monsanto is a biotechnology company providing most of the genetically engineered seeds.
a. True b. False c. I don't know
13. Biotechnology companies are required to conduct safety tests of new genetically engineered crops before marketing them.
a. True b. False c. I don't know
14. The United States Department of Agriculture (USDA), the Environmental Protection Agency (EPA), and the Food and Drug Administration (FDA) are currently the three agencies regulating the safety of genetically engineered crops in the U.S.
a. True b. False c. I don't know
15. What else do you know about genetically engineered crops and their safety issues? Please specify below:

APPENDIX G
Interview Questions

1. Do you think you need more time to learn about this topic so that your view on the safety of GE foods is more solid and reasonable? Why or why not?
2. If you could have more time working on this topic to enhance your understanding of whether or not GE foods are safe to eat, what would you research more?
3. After synthesizing the information you researched on the web, what's your view on whether or not GE crops are safe to eat? Why do you think so?
1----------2----------3----------4----------5
(1 = GE crops are safe to eat; 3 = depends; 5 = GE crops are unsafe to eat)
4. Did you check the existence of alternative views, disconfirming evidence, or different ways to interpret a certain issue? If so, please provide some examples, and rate to what extent you did that.
1---------------2---------------3---------------4--------------5-------------6
(1 = Never; 6 = Always)
5. Did you do anything to evaluate the trustworthiness of the web information you read?
If so, please provide some examples and rate to what extent you did that.
1---------------2---------------3---------------4--------------5-------------6
(1 = Never; 6 = Always)
6. To what extent did you pay attention to things such as how certain views or conclusions on the web were formed, and how well these views or conclusions could be applied to other situations? Please provide some examples.
1---------------2---------------3---------------4--------------5-------------6
(1 = Never; 6 = Always)
7. To what extent did you try to find or read personal opinions or individuals' experiences or stories? Please provide some examples.
1---------------2---------------3---------------4--------------5-------------6
(1 = Never; 6 = Always)
8. While you were researching the given topic, to what extent did you try to connect, compare, synthesize, or integrate web information you read on different sites? Examples?
1---------------2---------------3---------------4--------------5-------------6
(1 = Never; 6 = Always)
9. How did you approach this learning task to form your own view on whether or not GE foods are safe to eat?
10. How did you decide when to stop researching?
11. Besides finding information directly addressing the safety of GE foods, what other information did you also intentionally search for or spend some time reading? Why did you work on them? Please provide some examples for the item(s) you select. (This could address exploration or expansiveness vs. closure.)
o what genetic engineering or genetically engineered plants are
o some specific terms you encountered while reading web pages
o the regulation system of genetically engineered crops
o the environmental impact of GE plants
o the value or benefits of genetic engineering or genetically engineered crops
o moral or religious discussions of genetic engineering
o conventional (non-GE) foods
o other issues that seemed not directly related to the safety of GE foods
12. (The activation group only) Did you keep the prompts in mind while searching? Do you think working on the prompts changed your way of learning? How?

APPENDIX H
Training Instructions of Google Search Techniques

Google's default behavior is to consider all the words in a search, except that 'the,' 'a,' and 'for' are usually ignored.

Search the page with exact words (""):
• Double quotes return the exact words in that exact order, without any change.
• E.g., "the country with largest population"

Terms you want to exclude (-):
• Attaching a minus sign immediately before a word indicates that you don't want pages that contain this word to appear in your results.
• The minus sign should appear immediately before the word and should be preceded by a space.
o Anti-virus software: here the minus sign is treated as a hyphen, not an exclusion symbol.
o Anti-virus –software: will search for the words 'anti-virus' but exclude references to software.
• Exclude more than one word by placing the minus sign in front of all the words you want to exclude.
o Population –china –US

Search for specific word(s) on a web page:
• Cached highlights the query words you input in the search box.
• Ctrl + F highlights the query words you want to search on each page.

APPENDIX I
Instructions to Practice the Think-Aloud Method

Later today, you are going to research a given topic using the Internet. You will need to verbalize your thoughts during the process. To prepare for that, we will take some time to practice. I am going to sit beside you during your search. I will hold the mouse and click the websites for you. Please imagine that we are researching this topic together.
You will have complete control of what we read and how we learn. But you need to explain to me why you want us to read a certain webpage or why you want us to approach this topic in a certain way. Some specific things you should explain include:
• the reason why you want me to click a certain web page, and why not others;
• anything that you are reading, including titles, authors, URLs, hyperlinks, main texts, etc. Do not worry about pronunciation; the main focus is to know what specific lines you are reading on each website;
• anything that comes to mind as you read a webpage (e.g., something you find useful or not useful, certain information you think you would like to explore further, something you have learned about previously, the reason why the webpage sends you on your way to somewhere else, etc.);
• what you want to do next;
• if you are taking notes, you should verbalize everything that you write down, explain why you write it down, and say anything that comes up in your mind.
Any questions? Then try this example – how can global warming change our lives?

APPENDIX J
Hierarchical Regression Analysis Results for Each Dependent Variable

Table 15
Hierarchical Analysis of Multiple Regression Models Predicting the Observed Learning Complexity (Integrated)

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .46   4.44   <.001  1.10      .49   4.35   <.001  1.08
Verbal             .29   2.83    .007  1.10      .35   3.05    .004  1.08
Effort            -.03  -0.25    .80   1.10     -.04  -0.31    .76   1.10
Prior knowledge    .00   0.03    .98   1.07      .01   0.12    .91   1.08
GEB                .42   3.40    .001  1.55      .24   2.00    .05   1.16
TSEB              -.01  -0.10    .92   1.69      .08   0.69    .50   1.26
Group              .10   0.95    .35   1.09      .08   0.70    .49   1.10
R² = .56, F(7, 45) = 8.03, p < .001 (CFI); R² = .47, F(7, 45) = 5.61, p < .001 (OMPI)
Step 2: Interaction effects
Group*GEB          .03   0.15    .88   2.88      .10   0.59    .56   2.49
Group*TSEB        -.02  -0.09    .93   3.44      .11   0.69    .49   2.08
ΔR² < .01, F(2, 43) = .01, p = .99 (CFI); ΔR² = .01, F(2, 43) = .59, p = .56 (OMPI)
Note. Dependent variable was the integrated learning complexity score (direct analysis of video clips). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 16
Hierarchical Analysis of Multiple Regression Models Predicting the Connection Dimension of Learning Complexity

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .41   3.69    .001  1.10      .44   3.66    .001  1.08
Verbal             .30   2.70    .01   1.10      .35   2.89    .006  1.08
Effort            -.05  -0.46    .65   1.10     -.07  -0.55    .59   1.10
Prior knowledge   -.13  -1.17    .25   1.07     -.11  -0.94    .36   1.08
GEB                .47   3.60    .001  1.55      .27   2.17    .04   1.16
TSEB              -.08  -0.58    .56   1.69      .02   0.14    .89   1.26
Group              .02   0.15    .88   1.10     -.01  -0.07    .95   1.10
R² = .50, F(7, 45) = 6.32, p < .001 (CFI); R² = .39, F(7, 45) = 4.14, p = .001 (OMPI)
Step 2: Interaction effects
Group*GEB         -.20  -1.16    .25   2.88     -.13  -0.71    .48   2.49
Group*TSEB        -.16  -0.82    .42   3.44     -.07  -0.39    .70   2.08
ΔR² = .05, F(2, 43) = 2.32, p = .11 (CFI); ΔR² = .01, F(2, 43) = .45, p = .64 (OMPI)
Note. Dependent variable was the connection dimension of learning complexity (direct analysis of video clips). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.
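The two-step procedure reported in Tables 15 through 26 can be reproduced with standard regression software. The following is a minimal sketch in Python using the statsmodels library, assuming a data frame with illustrative column names (time, verbal, effort, prior_knowledge, geb, tseb, group); these names and the helper function are hypothetical, not the analysis script used in this study.

import pandas as pd
import statsmodels.api as sm

def hierarchical_regression(df: pd.DataFrame, dv: str):
    """Step 1 enters the covariates and main effects; Step 2 adds the
    Group x belief interaction terms; returns both fits plus the
    R-squared change test reported in the DeltaR2 rows."""
    main = ["time", "verbal", "effort", "prior_knowledge",
            "geb", "tseb", "group"]
    df = df.copy()
    # Interaction terms are products of group and the belief scores.
    df["group_x_geb"] = df["group"] * df["geb"]
    df["group_x_tseb"] = df["group"] * df["tseb"]

    y = df[dv]
    X1 = sm.add_constant(df[main])
    X2 = sm.add_constant(df[main + ["group_x_geb", "group_x_tseb"]])

    step1 = sm.OLS(y, X1).fit()
    step2 = sm.OLS(y, X2).fit()

    # F test for the increment in R-squared when the two
    # interaction terms are added (the F(2, 43) values in the tables).
    f_change, p_change, df_diff = step2.compare_f_test(step1)
    delta_r2 = step2.rsquared - step1.rsquared
    return step1, step2, delta_r2, f_change, p_change

With standardized predictors, the coefficients of step1 and step2 correspond to the β columns, and step2.compare_f_test(step1) yields the ΔR² significance test.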
Table 17
Hierarchical Analysis of Multiple Regression Models Predicting the Flexibility Dimension of Learning Complexity

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .30   2.36    .02   1.10      .30   2.28    .03   1.08
Verbal             .13   1.03    .31   1.10      .18   1.34    .19   1.08
Effort             .01   0.10    .92   1.10      .01   0.09    .93   1.10
Prior knowledge    .11   0.83    .41   1.07      .10   0.76    .45   1.08
GEB                .36   2.36    .02   1.55      .13   0.91    .37   1.16
TSEB              .004   0.03    .98   1.69      .19   1.31    .20   1.26
Group              .18   1.37    .18   1.09      .17   1.28    .21   1.10
R² = .33, F(7, 45) = 3.13, p = .009 (CFI); R² = .27, F(7, 45) = 2.33, p = .04 (OMPI)
Step 2: Interaction effects
Group*GEB          .10   0.46    .65   2.88      .28   1.37    .18   2.49
Group*TSEB        -.22  -0.96    .34   3.44      .03   0.16    .87   2.08
ΔR² = .01, F(2, 43) = .47, p = .63 (CFI); ΔR² = .04, F(2, 43) = 1.13, p = .33 (OMPI)
Note. Dependent variable was the flexibility dimension of learning complexity (direct analysis of video clips). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 18
Hierarchical Analysis of Multiple Regression Models Predicting the Critical Analysis of Web Information Dimension of Learning Complexity

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .40   3.34    .002  1.10      .41   3.33    .002  1.08
Verbal             .34   2.82    .007  1.10      .36   2.93    .005  1.08
Effort             .05   0.37    .71   1.10      .03   0.23    .82   1.10
Prior knowledge    .07   0.56    .58   1.07      .07   0.59    .56   1.08
GEB                .33   2.29    .03   1.55      .17   1.32    .19   1.16
TSEB              -.15  -1.01    .32   1.69     -.05  -0.34    .74   1.26
Group              .18   1.52    .14   1.09      .17   1.34    .19   1.10
R² = .41, F(7, 45) = 4.48, p = .001 (CFI); R² = .37, F(7, 45) = 3.70, p = .003 (OMPI)
Step 2: Interaction effects
Group*GEB          .12   0.63    .53   2.88      .05   0.25    .80   2.49
Group*TSEB         .08   0.39    .70   3.44      .28   1.64    .11   2.08
ΔR² = .02, F(2, 43) = .61, p = .55 (CFI); ΔR² = .05, F(2, 43) = 1.65, p = .20 (OMPI)
Note. Dependent variable was the critical analysis of web information dimension of learning complexity (direct analysis of video clips). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 19
Hierarchical Analysis of Multiple Regression Models Predicting the Critical Analysis of Web Information – Source Sub-Dimension of Learning Complexity

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .51   3.95   <.001  1.10      .51   4.06   <.001  1.08
Verbal             .26   2.05    .05   1.10      .25   2.00    .05   1.08
Effort            -.01  -0.04    .97   1.10     -.02  -0.18    .86   1.10
Prior knowledge    .03   0.24    .81   1.07      .04   0.31    .76   1.08
GEB                .10   0.68    .50   1.55      .07   0.53    .60   1.16
TSEB              -.19  -1.19    .24   1.69     -.19  -1.42    .16   1.26
Group              .13   1.00    .32   1.10      .12   0.90    .37   1.10
R² = .34, F(7, 45) = 3.23, p = .01 (CFI); R² = .34, F(7, 45) = 3.36, p = .01 (OMPI)
Step 2: Interaction effects
Group*GEB         -.14  -0.69    .50   2.88      .05   0.81    .81   2.49
Group*TSEB         .25   1.08    .29   3.44      .01   0.97    .97   2.08
ΔR² = .02, F(2, 43) = .59, p = .56 (CFI); ΔR² = .001, F(2, 43) = .04, p = .97 (OMPI)
Note. Dependent variable was the critical analysis of web information – source sub-dimension (direct analysis of video clips). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.
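As a reading aid, the ΔR² rows in these tables correspond to the standard F test for the change in R² between steps. Because the Step 1 models report F(7, 45), the sample size must be n = 53 (45 = 53 - 7 - 1), and adding the two interaction terms in Step 2 (k₂ = 9 predictors) yields the F(2, 43) degrees of freedom shown:

\[
F\left(k_2 - k_1,\; n - k_2 - 1\right) \;=\; \frac{\left(R_2^2 - R_1^2\right)/\left(k_2 - k_1\right)}{\left(1 - R_2^2\right)/\left(n - k_2 - 1\right)}, \qquad k_1 = 7,\; k_2 = 9,\; n = 53.
\]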
Table 20
Hierarchical Analysis of Multiple Regression Models Predicting the Critical Analysis of Web Information – Recentness Sub-Dimension of Learning Complexity

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .29   2.02    .05   1.10      .26   1.83    .07   1.08
Verbal             .06   0.39    .70   1.10      .06   0.44    .67   1.08
Effort             .13   0.93    .36   1.10      .12   0.82    .42   1.10
Prior knowledge    .16   1.16    .25   1.07      .16   1.14    .26   1.08
GEB                .18   1.09    .28   1.55      .16   1.11    .28   1.16
TSEB              -.14  -0.78    .44   1.69      .08   0.49    .62   1.26
Group             -.06  -0.42    .68   1.10     -.05  -0.37    .72   1.10
R² = .16, F(7, 45) = 1.26, p = .29 (CFI); R² = .18, F(7, 45) = 1.41, p = .22 (OMPI)
Step 2: Interaction effects
Group*GEB         -.01  -0.06    .95   2.88     -.11   0.63    .63   2.49
Group*TSEB         .21   0.80    .43   3.44      .28   0.15    .15   2.08
ΔR² = .02, F(2, 43) = .44, p = .65 (CFI); ΔR² = .04, F(2, 43) = 1.07, p = .35 (OMPI)
Note. Dependent variable was the critical analysis of web information – recentness sub-dimension (direct analysis of video clips). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 21
Hierarchical Analysis of Multiple Regression Models Predicting the Critical Analysis of Web Information – Content Sub-Dimension

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .19   1.76    .09   1.10      .22   1.91    .06   1.08
Verbal             .45   4.15   <.001  1.10      .48   4.24   <.001  1.08
Effort            -.02  -0.14    .89   1.10     -.04  -0.37    .72   1.10
Prior knowledge    .03   0.32    .75   1.07      .05   0.46    .65   1.08
GEB                .49   3.83   <.001  1.55      .33   2.79    .01   1.16
TSEB              -.19  -1.42    .16   1.69     -.08  -0.64    .52   1.26
Group              .28   2.58    .01   1.09      .25   2.16    .04   1.10
R² = .52, F(7, 45) = 7.06, p < .001 (CFI); R² = .46, F(7, 45) = 5.36, p < .001 (OMPI)
Step 2: Interaction effects
Group*GEB          .39   2.33    .03   2.88      .08   0.45    .65   2.49
Group*TSEB        -.18  -0.95    .35   3.44      .12   0.77    .44   2.08
ΔR² = .06, F(2, 43) = 2.81, p = .07 (CFI); ΔR² = .01, F(2, 43) = .56, p = .58 (OMPI)
Note. Dependent variable was the critical analysis of web information – content sub-dimension (direct analysis of video clips). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 22
Hierarchical Analysis of Multiple Regression Models Predicting the Novelty Dimension of Learning Complexity

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .30   2.43    .02   1.10      .34   2.73    .01   1.08
Verbal             .01   0.04    .97   1.10      .07   0.55    .58   1.08
Effort            -.21  -1.68    .10   1.10     -.20  -1.60    .12   1.10
Prior knowledge   -.03  -0.24    .81   1.07     -.01  -0.11    .92   1.08
GEB                .29   2.04    .05   1.55      .27   2.06    .05   1.16
TSEB               .22   1.44    .16   1.69      .22   1.63    .11   1.26
Group             -.13  -1.05    .30   1.09     -.14  -1.14    .26   1.10
R² = .39, F(7, 45) = 4.16, p = .001 (CFI); R² = .36, F(7, 45) = 3.54, p = .004 (OMPI)
Step 2: Interaction effects
Group*GEB          .14   0.69    .50   2.88      .21   1.08    .29   2.49
Group*TSEB        -.05  -0.23    .82   3.44     -.03  -0.20    .85   2.08
ΔR² = .01, F(2, 43) = .26, p = .77 (CFI); ΔR² = .02, F(2, 43) = .59, p = .56 (OMPI)
Note. Dependent variable was the novelty dimension of learning complexity (direct analysis of video clips). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.
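The VIF columns report the variance inflation factor for each predictor, a standard collinearity diagnostic computed from the R² obtained when that predictor is regressed on all other predictors in the model:

\[
\mathrm{VIF}_j \;=\; \frac{1}{1 - R_j^2}.
\]

Values near 1, as for the covariates here, indicate little collinearity; the larger VIFs for the Group*GEB and Group*TSEB terms (roughly 2 to 3.4) are expected for interaction terms built from their constituent main effects.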
Table 23
Hierarchical Analysis of Multiple Regression Models Predicting the Engagement Dimension of Learning Complexity

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .33   2.59    .01   1.10      .40   2.98    .005  1.08
Verbal             .17   1.30    .20   1.10      .21   1.58    .12   1.08
Effort            -.06  -0.43    .67   1.10     -.04  -0.33    .75   1.10
Prior knowledge   -.13  -1.03    .31   1.07     -.11  -0.83    .41   1.08
GEB                .09   0.57    .57   1.55      .13   0.93    .36   1.16
TSEB               .25   1.56    .13   1.69      .07   0.50    .62   1.26
Group             -.13  -1.01    .32   1.09     -.15  -1.12    .27   1.10
R² = .32, F(7, 45) = 3.03, p = .01 (CFI); R² = .26, F(7, 45) = 2.30, p = .04 (OMPI)
Step 2: Interaction effects
Group*GEB         -.20  -0.96    .34   2.88     -.01  -0.06    .96   2.49
Group*TSEB         .34   1.48    .15   3.44     -.01  -0.04    .97   2.08
ΔR² = .03, F(2, 43) = 1.11, p = .34 (CFI); ΔR² < .001, F(2, 43) = .003, p = 1 (OMPI)
Note. Dependent variable was the engagement dimension of learning complexity (direct analysis of video clips). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 24
Hierarchical Analysis of Multiple Regression Models Predicting Learner Satisfaction

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time              -.21  -1.83    .07   1.10     -.23  -2.07    .05   1.08
Verbal             .17   1.46    .15   1.10      .19   1.75    .09   1.08
Effort             .60   5.29   <.001  1.10      .60   5.42   <.001  1.10
Prior knowledge   -.06  -0.49    .62   1.07     -.06  -0.57    .57   1.08
GEB                .18   1.31    .20   1.55      .10   0.90    .37   1.16
TSEB               .03   0.24    .81   1.69      .20   1.72    .09   1.26
Group              .05   0.42    .68   1.10      .06   0.49    .62   1.10
R² = .47, F(7, 45) = 5.80, p < .001 (CFI); R² = .50, F(7, 45) = 6.43, p < .001 (OMPI)
Step 2: Interaction effects
Group*GEB         -.02  -0.12    .90   2.88      .22   1.29    .20   2.49
Group*TSEB        -.02  -0.09    .93   3.44     -.08  -0.51    .62   2.08
ΔR² = .001, F(2, 43) = .03, p = .97 (CFI); ΔR² = .02, F(2, 43) = .84, p = .44 (OMPI)
Note. Dependent variable was learner satisfaction (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 25
Hierarchical Analysis of Multiple Regression Models Predicting Perceived Extent of Knowledge Exploration

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .32   2.58    .01   1.10      .37   2.71    .01   1.08
Verbal             .12   0.97    .34   1.10      .17   1.24    .22   1.08
Effort             .04   0.29    .77   1.10      .02   0.18    .86   1.10
Prior knowledge    .05   0.38    .71   1.07      .06   0.46    .65   1.08
GEB                .44   2.93    .005  1.55      .22   1.54    .13   1.16
TSEB              -.08  -0.53    .60   1.69     -.04  -0.24    .81   1.26
Group              .21   1.70    .10   1.10      .19   1.36    .18   1.10
R² = .35, F(7, 45) = 3.49, p = .004 (CFI); R² = .24, F(7, 45) = 2.05, p = .07 (OMPI)
Step 2: Interaction effects
Group*GEB         -.39  -2.00    .05   2.88     -.12  -0.58    .57   2.49
Group*TSEB         .53   2.51    .02   3.44      .07   0.35    .73   2.08
ΔR² = .09, F(2, 43) = 3.38, p = .04 (CFI); ΔR² = .01, F(2, 43) = .19, p = .83 (OMPI)
Note. Dependent variable was the perceived extent of knowledge exploration (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.
Table 26
Hierarchical Analysis of Multiple Regression Models Predicting Overestimation

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .15   1.01    .32   1.10      .18   1.22    .23   1.08
Verbal            -.003 -0.02    .98   1.10      .02   0.16    .88   1.08
Effort             .06   0.38    .71   1.10      .05   0.30    .76   1.10
Prior knowledge    .05   0.36    .72   1.07      .07   0.44    .66   1.08
GEB                .30   1.73    .09   1.55      .13   0.86    .39   1.16
TSEB              -.09  -0.49    .63   1.69     -.08  -0.51    .61   1.26
Group              .20   1.35    .18   1.09      .17   1.16    .25   1.10
R² = .13, F(7, 45) = .94, p = .49 (CFI); R² = .08, F(7, 45) = .55, p = .79 (OMPI)
Step 2: Interaction effects
Group*GEB         -.46  -2.05    .05   2.88     -.19  -0.83    .41   2.49
Group*TSEB         .62   2.53    .02   3.44      .02   0.11    .91   2.08
ΔR² = .12, F(2, 43) = 3.47, p = .04 (CFI); ΔR² = .02, F(2, 43) = .35, p = .70 (OMPI)
Note. Dependent variable was overestimation (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 27
Hierarchical Analysis of Logistic Regression Models Predicting Perceived Insufficiency of Learning

                           CFI (1)                             OMPI (2)
Variable             B     SE B   Wald    p      OR      B     SE B   Wald    p      OR
Step 1: Main effects
Time               1.48   0.58   6.59   .01    4.41    1.42   0.60   5.67   .02    4.12
Verbal             0.66   0.46   2.03   .15    1.94    0.57   0.46   1.51   .22    1.77
Effort            -1.53   0.56   7.57   .006   0.22   -1.54   0.60   6.66   .01    0.21
Prior knowledge   -0.79   0.45   3.09   .08    0.45   -0.83   0.48   3.08   .08    0.43
GEB                0.17   0.51   0.11   .74    1.19   -0.56   0.46   1.45   .23    0.57
TSEB               0.05   0.51   0.01   .92    1.05    0.76   0.46   2.69   .10    2.13
Group             -0.91   0.78   1.36   .24    0.40   -1.01   0.82   1.53   .22    0.36
Step 2: Interaction effects
Group*GEB         -1.54   1.14   1.81   .18    0.22   -3.96   2.43   2.67   .10    0.02
Group*TSEB         0.72   1.24   0.34   .56    2.06    5.54   2.81   3.88   .05  253.84
Note. Dependent variable was the perceived insufficiency of learning (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. OR = Odds ratio. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 28
Hierarchical Analysis of Logistic Regression Models Predicting Participants' Plans to Explore Empirical Studies

                           CFI (1)                             OMPI (2)
Variable             B     SE B   Wald    p      OR      B     SE B   Wald    p      OR
Step 1: Main effects
Time              -0.07   0.33   0.04   .83    0.93    0.08   0.32   0.06   .81    1.08
Verbal             0.53   0.36   2.15   .14    1.70    0.58   0.35   2.75   .10    1.78
Effort            -0.83   0.37   5.09   .02    0.44   -0.80   0.36   4.87   .03    0.45
Prior knowledge    0.18   0.32   0.31   .58    1.19    0.20   0.33   0.39   .53    1.23
GEB                0.48   0.42   1.27   .26    1.61    0.46   0.35   1.77   .18    1.58
TSEB               0.28   0.44   0.40   .53    1.32    0.04   0.36   0.01   .92    1.04
Group              0.31   0.66   0.14   .71    1.28    0.35   0.66   0.29   .59    1.42
Step 2: Interaction effects
Group*GEB         -0.14   0.88   0.03   .87    0.87    2.30   1.01   5.22   .02    9.92
Group*TSEB        -0.27   0.86   0.10   .76    0.77   -1.08   0.82   1.77   .18    0.34
Note. Dependent variable was future plan to explore empirical studies (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. OR = Odds ratio. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.
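For the logistic models in Tables 27 through 33, the odds ratio and Wald statistic are the standard transformations of the coefficient B and its standard error:

\[
\mathrm{OR} = e^{B}, \qquad \mathrm{Wald} = \left(\frac{B}{SE_B}\right)^{2}.
\]

For example, for Time in the CFI model of Table 27, \( e^{1.48} \approx 4.4 \), consistent with the reported odds ratio of 4.41 given rounding of B.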
Table 29
Hierarchical Analysis of Logistic Regression Models Predicting Participants' Plans to Explore Individual Cases

                           CFI (1)                             OMPI (2)
Variable             B     SE B   Wald    p      OR      B     SE B   Wald    p      OR
Step 1: Main effects
Time               0.25   0.33   0.57   .45    1.28    0.14   0.32   0.18   .67    1.15
Verbal            -0.45   0.33   1.86   .17    0.64   -0.48   0.33   2.12   .15    0.62
Effort            -0.20   0.32   0.39   .53    0.82   -0.19   0.33   0.32   .57    0.83
Prior knowledge    0.15   0.31   0.22   .64    1.16    0.10   0.32   0.08   .78    1.09
GEB               -0.34   0.41   0.69   .41    0.71   -0.20   0.34   0.33   .57    0.82
TSEB               0.03   0.40   0.01   .95    1.03    0.37   0.36   1.07   .30    1.45
Group             -0.65   0.64   1.06   .30    0.52   -0.79   0.64   1.50   .22    0.45
Step 2: Interaction effects
Group*GEB          0.53   0.93   0.32   .57    1.70    0.52   0.81   0.41   .52    1.68
Group*TSEB         0.50   0.81   0.38   .54    1.65    2.75   1.17   5.56   .02   15.68
Note. Dependent variable was future plan to explore individual cases (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. OR = Odds ratio. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 30
Hierarchical Analysis of Logistic Regression Models Predicting Participants' Plans to Explore the Views from Different Stakeholders

                           CFI (1)                             OMPI (2)
Variable             B     SE B   Wald    p      OR      B     SE B   Wald    p      OR
Step 1: Main effects
Time               0.23   0.36   0.41   .52    1.26    0.28   0.36   0.58   .45    1.32
Verbal            -0.46   0.35   1.70   .19    0.63   -0.39   0.36   1.19   .28    0.68
Effort            -0.16   0.40   0.16   .69    0.85   -0.04   0.40   0.01   .91    0.96
Prior knowledge    0.14   0.37   0.14   .71    1.15    0.07   0.36   0.04   .84    1.07
GEB                0.64   0.46   1.91   .17    1.90    0.29   0.40   0.53   .47    1.34
TSEB               0.24   0.45   0.28   .60    1.27    0.65   0.41   2.51   .11    1.91
Group             -0.26   0.73   0.12   .73    0.77   -0.35   0.73   0.22   .64    0.71
Step 2: Interaction effects
Group*GEB          0.95   1.04   0.84   .36    2.59   -0.17   0.83   0.04   .84    0.84
Group*TSEB        -0.54   0.90   0.35   .55    0.59    0.90   0.91   0.98   .32    2.46
Note. Dependent variable was future plan to explore views from different stakeholders (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. OR = Odds ratio. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 31
Hierarchical Analysis of Logistic Regression Models Predicting Indecisiveness

                           CFI (1)                             OMPI (2)
Variable             B     SE B   Wald    p      OR      B     SE B   Wald    p      OR
Step 1: Main effects
Time               1.09   0.62   3.15   .08    2.98    0.97   0.68   2.05   .15    2.62
Verbal             0.18   0.48   0.14   .71    1.20    0.24   0.44   0.31   .58    1.27
Effort            -0.60   0.52   1.36   .24    0.55   -0.49   0.46   1.10   .30    0.62
Prior knowledge   -0.46   0.52   0.77   .38    0.63   -0.49   0.58   0.72   .40    0.61
GEB               -0.01   0.63   0.00   .99    0.99    0.14   0.51   0.08   .78    1.15
TSEB               0.36   0.73   0.24   .63    1.43    1.07   0.63   2.92   .09    2.92
Group              1.72   1.09   2.49   .11    5.60    1.95   1.23   2.51   .11    7.02
Step 2: Interaction effects
Group*GEB          1.60   1.83   0.76   .38    4.94    2.86   1.66   2.97   .09   17.54
Group*TSEB        -3.51   2.32   2.29   .13    0.03   -1.05   1.56   0.45   .50    0.35
Note. Dependent variable was indecisiveness of conclusions (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. OR = Odds ratio. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.
Table 32
Hierarchical Analysis of Logistic Regression Models Predicting Indecisiveness due to the Context-Dependency Concern

                           CFI (1)                             OMPI (2)
Variable             B     SE B   Wald    p      OR      B     SE B   Wald    p      OR
Step 1: Main effects
Time               0.01   0.32   0.00   .98    1.01   -0.19   0.35   0.31   .58    0.82
Verbal             0.20   0.35   0.33   .57    1.22    0.22   0.38   0.34   .56    1.25
Effort             0.38   0.35   1.22   .27    1.47    0.56   0.40   1.95   .16    1.75
Prior knowledge    0.28   0.34   0.65   .42    1.32    0.19   0.35   0.30   .58    1.21
GEB                0.50   0.42   1.41   .24    1.64   -0.06   0.36   0.02   .88    0.95
TSEB               0.06   0.41   0.02   .88    1.06    1.18   0.46   6.69   .01    3.26
Group             -0.37   0.66   0.32   .57    0.69   -0.65   0.73   0.82   .37    0.52
Step 2: Interaction effects
Group*GEB         -0.28   0.89   0.10   .75    0.75   -1.83   1.32   1.91   .17    0.16
Group*TSEB         1.05   0.88   1.45   .23    2.86    5.70   3.33   2.94   .09  299.61
Note. Dependent variable was indecisiveness due to the context-dependency concern (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. OR = Odds ratio. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 33
Hierarchical Analysis of Logistic Regression Models Predicting the Adoption of Low Criteria Determining When to Stop Exploration

                           CFI (1)                             OMPI (2)
Variable             B     SE B   Wald    p      OR      B     SE B   Wald    p      OR
Step 1: Main effects
Time              -0.07   0.34   0.05   .83    0.93   -0.30   0.35   0.73   .39    0.74
Verbal            -0.34   0.35   0.94   .33    0.71   -0.43   0.35   1.50   .22    0.65
Effort             0.06   0.34   0.03   .86    1.06    0.01   0.35   0.00   .98    1.01
Prior knowledge   -0.43   0.36   1.39   .24    0.65   -0.44   0.36   1.48   .22    0.65
GEB               -0.03   0.41   0.00   .95    0.97   -0.77   0.39   3.93   .05    0.47
TSEB              -0.61   0.49   1.55   .21    0.54    0.08   0.40   0.04   .84    1.09
Group              0.45   0.71   0.40   .53    1.56    0.20   0.71   0.08   .78    1.23
Step 2: Interaction effects
Group*GEB          1.19   0.91   1.69   .19    3.28    0.25   0.78   0.10   .75    1.28
Group*TSEB        -1.61   1.03   2.45   .12    0.20    0.46   0.79   0.34   .56    1.58
Note. Dependent variable was low criteria determining when to stop (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. OR = Odds ratio. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

Table 34
Hierarchical Analysis of Multiple Regression Models Predicting the Breadth of Knowledge Exploration

                        CFI (1)                       OMPI (2)
Variable            β      t      p     VIF       β      t      p     VIF
Step 1: Main effects
Time               .40   3.23    .002  1.12      .42   3.40    .001  1.09
Verbal            -.10  -0.86    .40   1.10     -.05  -0.43    .67   1.08
Effort             .03   0.22    .83   1.10      .03   0.27    .79   1.10
Prior knowledge   -.22  -1.82    .08   1.08     -.22  -1.79    .08   1.09
GEB                .26   1.76    .09   1.55      .16   1.23    .23   1.17
TSEB               .16   1.03    .31   1.73      .25   1.88    .07   1.26
Group              .11   0.90    .37   1.10      .11   0.85    .40   1.11
R² = .39, F(7, 45) = 4.14, p = .001 (CFI); R² = .37, F(7, 45) = 3.85, p = .002 (OMPI)
Step 2: Interaction effects
Group*GEB         -.18  -0.90    .37   2.92      .08   0.42    .68   2.58
Group*TSEB         .15   0.67    .51   3.43      .25   1.48    .15   2.08
ΔR² = .01, F(2, 43) = .42, p = .66 (CFI); ΔR² = .04, F(2, 43) = 1.50, p = .24 (OMPI)
Note. Dependent variable was the breadth of knowledge exploration (analysis of self-reported data). GEB = General epistemic beliefs. TSEB = Task-specific epistemic beliefs. 1. GEB and TSEB were measured by the CFI. 2. GEB and TSEB were measured by the OMPI.

REFERENCES

Afflerbach, P. (2000). Verbal reports and protocol analysis. In M.L. Kamil, P.B. Mosenthal, P.D. Pearson, & R. Barr (Eds.), Handbook of reading research, Vol. 3 (pp. 168-180). Mahwah, NJ: Lawrence Erlbaum.
Bannert, M. (2006). Effects of reflection prompts when learning with hypermedia. Journal of Educational Computing Research, 35, 359-375.
Barchard, K.A. (2003). Does emotional intelligence assist in the prediction of academic success?
Educational and Psychological Measurement, 63, 840-858.
Baxter Magolda, M.B. (1987). The affective dimension of learning: Faculty-student relationships that enhance intellectual development. College Student Journal, 21, 46-58.
Baxter Magolda, M.B. (2004). Evolution of a constructivist conceptualization of epistemological reflection. Educational Psychologist, 39, 31-42.
Belenky, M.F., Clinchy, B.M., Goldberger, N.R., & Tarule, J.M. (1986). Women's ways of knowing: The development of self, voice, and mind. New York: Basic Books.
Bilal, D. (1998). Children's search processes in using World Wide Web search engines: An exploratory study. Proceedings of the 61st ASIS annual meeting, 35, 45-53.
Bråten, I., & Strømsø, H.I. (2005). The relationship between epistemological beliefs, implicit theories of intelligence, and self-regulated learning among Norwegian postsecondary students. British Journal of Educational Psychology, 75, 539-565.
Bråten, I., & Strømsø, H.I. (2006). Epistemological beliefs, interest, and gender as predictors of Internet-based learning activities. Computers in Human Behavior, 22, 1027-1042.
Bråten, I., & Strømsø, H.I. (2010). Effects of task instruction and personal epistemology on the understanding of multiple texts about climate change. Discourse Processes, 47, 1-31.
Buehl, M.M., & Alexander, P.A. (2001). Beliefs about academic knowledge. Educational Psychology Review, 13, 385-418.
Buehl, M.M., & Alexander, P.A. (2005). Motivation and performance differences in students' domain-specific epistemological belief profiles. American Educational Research Journal, 42, 687-726.
Buehl, M.M., & Alexander, P.A. (2006). Examining the dual nature of epistemological beliefs. International Journal of Educational Research, 45, 28-42.
Buehl, M.M., Alexander, P.A., & Murphy, P.K. (2002). Beliefs about schooled knowledge: Domain specific or domain general? Contemporary Educational Psychology, 27, 415-449.
Cain, K., Oakhill, J.V., Barnes, M.A., & Bryant, P.E. (2001). Comprehension skills, inference-making ability, and their relation to knowledge. Memory & Cognition, 29, 850-859.
Chandler, M., Boyes, M., & Ball, L. (1990). Relativism and stations of epistemic doubt. Journal of Experimental Child Psychology, 50, 376-395.
Chen, C., & Bradshaw, A.C. (2007). The effect of web-based question prompts on scaffolding knowledge integration and ill-structured problem solving. Journal of Research on Technology in Education, 39, 359-375.
Chi, M.T.H. (1997). Quantifying qualitative analyses of verbal data: A practical guide. The Journal of the Learning Sciences, 6, 271-315.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New Jersey: Lawrence Erlbaum.
Cohen, J., Cohen, P., West, S.G., & Aiken, L.S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Erlbaum.
Conley, A.M., Pintrich, P.R., Vekiri, I., & Harrison, D. (2004). Changes in epistemological beliefs in elementary science students. Contemporary Educational Psychology, 29, 186-204.
Davis, E.A. (2003). Prompting middle school science students for productive reflection: Generic and directed prompts. Journal of the Learning Sciences, 12, 91-142.
Demetriadis, S.N., Papadopoulos, P.M., Stamelos, I.G., & Fischer, F. (2008). The effect of scaffolding students' context-generating cognitive activity in technology-enhanced case-based learning. Computers & Education, 51, 939-954.
diSessa, A.A., Elby, A., & Hammer, D. (2003). J's epistemological stance and strategies.
In G.M. Sinatra & P.R. Pintrich (Eds.), Intentional conceptual change (pp. 237-290). Mahwah, NJ: Lawrence Erlbaum Associates.
Ekstrom, R.B., French, J.W., & Harman, H.H. (1976). Manual for the kit of factor-referenced cognitive tests. Princeton, NJ: Educational Testing Service.
Ericsson, K., & Simon, H. (1993). Protocol analysis: Verbal reports as data (2nd ed.). Boston: MIT Press.
French, J.W., Ekstrom, R.B., & Price, L.A. (1963). Manual for kit of reference tests for cognitive ability. Princeton, NJ: Educational Testing Service.
Germer, C.K., Efran, J.S., & Overton, W.F. (1982, April). The Organicism-Mechanism Paradigm Inventory: Toward the measurement of metaphysical assumptions. Paper presented at the 53rd Annual Meeting of the Eastern Psychological Association, Baltimore, MD.
Graesser, A.C., McMahen, C.L., & Johnson, B.K. (1994). Question asking and answering. In M.A. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 517-538). San Diego, CA: Academic.
Hare, W. (2003). The idea of open-mindedness and its place in education. Journal of Thought, 38, 3-10.
Hirumi, A., & Bowers, D.R. (1991). Enhancing motivation and acquisition of coordinate concepts by using concept trees. The Journal of Educational Research, 84, 273-279.
Hsieh-Yee, I. (1993). Effects of search experience and subject knowledge on the search tactics of novice and experienced searchers. Journal of the American Society for Information Science, 44, 161-174.
Hofer, B.K. (2000). Dimensionality and disciplinary differences in personal epistemology. Contemporary Educational Psychology, 25, 378-405.
Hofer, B.K. (2002). Personal epistemology as a psychological and educational construct: An introduction. In B.K. Hofer & P.R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 3-14). Mahwah, NJ: Lawrence Erlbaum Associates.
Hofer, B.K. (2004). Epistemological understanding as a metacognitive process: Thinking aloud during online search. Educational Psychologist, 39, 43-55.
Hofer, B.K., & Pintrich, P.R. (1997). The development of epistemological theories: Beliefs about knowledge and knowing and their relation to learning. Review of Educational Research, 67, 88-140.
Jacobson, M.J., & Spiro, R.J. (1995). Hypertext learning environments, cognitive flexibility, and the transfer of complex knowledge: An empirical investigation. Journal of Educational Computing Research, 12, 301-333.
Jehng, J.J., Johnson, S.D., & Anderson, R.C. (1993). Schooling and students' epistemological beliefs about learning. Contemporary Educational Psychology, 18, 23-35.
Johnson, R.B. (1997). Examining the validity structure of qualitative research. Education, 118, 282-292.
Johnson, J.A., Germer, C.K., Efran, J.S., & Overton, W.F. (1988). Personality as the basis for theoretical predilections. Journal of Personality and Social Psychology, 55, 824-835.
Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. The Journal of the Learning Sciences, 4, 39-103.
King, A. (1994). Autonomy and question asking: The role of personal control in guided student-generated questioning. Learning and Individual Differences, 6, 163-185.
King, P.M., & Kitchener, K.S. (1994). Developing reflective judgment: Understanding and promoting intellectual growth and critical thinking in adolescents and adults. San Francisco: Jossey-Bass.
Kitchener, K.S. (1983). Cognition, metacognition, and epistemic cognition: A three-level model of cognitive processing. Human Development, 26, 222-232.
Kuhn, D. (2000).
Metacognitive development. Current Directions in Psychological Science, 9, 178-181.
Kuhn, D., Cheney, R., & Weinstock, M. (2000). The development of epistemological understanding. Cognitive Development, 15, 309-328.
Kuhn, D., & Weinstock, M. (2002). What is epistemological thinking and why does it matter? In B.K. Hofer & P.R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 121-144). Mahwah, NJ: Lawrence Erlbaum Associates.
Leach, J., Millar, R., & Ryder, J. (2000). Epistemological understanding in science learning: The consistency of representations across contexts. Learning and Instruction, 10, 497-527.
Limón, M. (2006). The domain generality-specificity of epistemological beliefs: A theoretical problem, a methodological problem or both? International Journal of Educational Research, 45, 7-17.
Louca, L., Elby, A., Hammer, D., & Kagey, T. (2004). Epistemological resources: Applying a new epistemological framework to science instruction. Educational Psychologist, 39, 57-68.
Mansourian, Y., & Ford, N. (2007). Search persistence and failure on the web: A "bounded rationality" and "satisficing" analysis. Journal of Documentation, 63, 680-701.
Marchionini, G. (1989). Information-seeking strategies of novices using a full-text electronic encyclopedia. Journal of the American Society for Information Science, 40, 54-66.
Marton, F., & Säljö, R. (1976). On qualitative differences in learning: II. Outcome as a function of the learner's conception of the task. British Journal of Educational Psychology, 46, 115-127.
Marton, F., & Säljö, R. (2005). Approaches to learning. In F. Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning: Implications for teaching and studying in higher education (3rd ed., pp. 39-58). Retrieved June 4, 2010, from University of Edinburgh, Centre for Teaching, Learning and Assessment Web site: http://www.tla.ed.ac.uk/resources/EoL.html
Mason, L., Boldrin, A., & Ariasi, N. (2010). Searching the Web to learn about a controversial topic: Are students epistemically active? Instructional Science, 38, 607-633.
McDonald, S., & Stevenson, R.J. (1998). Effects of text structure and prior knowledge of the learner on navigation in hypertext. Human Factors, 40, 18-27.
Muis, K.R. (2007). The role of epistemic beliefs in self-regulated learning. Educational Psychologist, 42, 173-190.
Nussbaum, E.M., & Bendixen, L.D. (2003). Approaching and avoiding arguments: The role of epistemological beliefs, need for cognition, and extraverted personality traits. Contemporary Educational Psychology, 28, 573-595.
Palmquist, R.A., & Kim, K. (2000). Cognitive style and on-line database search experience as predictors of web search performance. Journal of the American Society for Information Science, 51, 558-566.
Pepper, S.C. (1942). World hypotheses. Berkeley: University of California Press.
Pressley, M.P., & Hilden, K. (2004). Verbal protocols of reading. In N.K. Duke & M.H. Mallette (Eds.), Literacy research methodologies (pp. 308-321). New York: Guilford.
Perry, W.G. (1970). Forms of intellectual and ethical development in the college years: A scheme. New York: Holt, Rinehart and Winston.
Qian, D.D. (2002). Investigating the relationship between vocabulary knowledge and academic reading performance: An assessment perspective. Language Learning, 52, 513-536.
Qian, G., & Alvermann, D. (1995). Role of epistemological beliefs and learned helplessness in secondary school students' learning science concepts from text.
Journal of Educational Psychology, 87, 282-292.
Reinard, J.C. (2006). Communication research statistics. Thousand Oaks, CA: Sage.
Riffe, D., Lacy, S., & Fico, F.G. (1998). Analyzing media messages: Using quantitative content analysis in research. Mahwah, NJ: Lawrence Erlbaum Associates.
Saccuzzo, D.P., Craig, A.S., Johnson, N.E., & Larson, G.E. (1996). Gender differences in dynamic spatial abilities. Personality and Individual Differences, 21, 599-607.
Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82, 498-504.
Schommer-Aikins, M. (2002). An evolving theoretical framework for an epistemological belief system. In B.K. Hofer & P.R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 103-118). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Schommer-Aikins, M. (2004). Explaining the epistemological belief system: Introducing the embedded systemic model and coordinated research approach. Educational Psychologist, 39, 19-29.
Schommer, M., & Walker, K. (1995). Are epistemological beliefs similar across domains? Journal of Educational Psychology, 87, 424-432.
Schraw, G. (2000). Assessing metacognition: Implications of the Buros Symposium. In G. Schraw & J.C. Impara (Eds.), Issues in the measurement of metacognition. Lincoln, NE: Buros Institute of Mental Measurements.
Schraw, G., Bendixen, L.D., & Dunkle, M.E. (2002). Development and validation of the Epistemic Belief Inventory (EBI). In B.K. Hofer & P.R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 261-275). Mahwah, NJ: Erlbaum.
Schraw, G., Dunkle, M.E., & Bendixen, L.D. (1995). Cognitive process in well-defined and ill-defined problem solving. Applied Cognitive Psychology, 9, 523-528.
Shute, S.J., & Smith, P.J. (1993). Knowledge-based search tactics. Information Processing and Management, 29, 29-46.
Spiro, R.J., Feltovich, P.J., Jacobson, M.J., & Coulson, R.L. (1991). Knowledge representation, content specification, and the development of skill in situation-specific knowledge assembly: Some constructivist issues as they relate to cognitive flexibility theory and hypertext. Educational Technology, 31(9), 22-25.
Spiro, R.J., Collins, B.P., Thota, J.J., & Feltovich, P.J. (2003). Cognitive flexibility theory: Hypermedia for complex learning, adaptive knowledge application and experience acceleration. Educational Technology, 44, 5-10.
Spiro, R.J., Coulson, R.L., Feltovich, P.J., & Anderson, D.K. (1988). Cognitive flexibility theory: Advanced knowledge acquisition in ill-structured domains. In Proceedings of the 10th Annual Conference of the Cognitive Science Society (pp. 375-383). Hillsdale, NJ: Erlbaum. Also appeared in R.B. Ruddell, M.R. Ruddell, & H. Singer (Eds.) (1994). Theoretical models and processes of reading (pp. 602-615). Newark, DE: International Reading Association.
Spiro, R., Feltovich, P., & Coulson, R. (1996). Two epistemic world-views: Prefigurative schemas and learning in complex domains. Applied Cognitive Psychology, 10, S51-S61.
Spiro, R.J., Feltovich, P.J., Jacobson, M.J., & Coulson, R.L. (1992). Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. In T.M. Duffy & D.H. Jonassen (Eds.), Constructivism and the technology of instruction: A conversation (pp. 57-76). Hillsdale, NJ: Lawrence Erlbaum Associates.
Stanovich, K.E. (2000).
Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.
Svensson, L. (1976). Study skill and learning. Göthenburg: Acta Universitatis Gothoburgensis.
Svensson, L. (1977). On qualitative differences in learning: III. Study skills and learning. British Journal of Educational Psychology, 47, 233-243.
Taboada, A., & Guthrie, J.T. (2006). Contributions of student questioning and prior knowledge to construction of knowledge from reading information text. Journal of Literacy Research, 38, 1-35.
Tsai, C.C., & Chuang, S.C. (2005). The correlation between epistemological beliefs and preferences toward Internet-based learning environments. British Journal of Educational Technology, 36, 97-100.
Vakkari, P., Pennanen, M., & Serola, S. (2003). Changes of search terms and tactics while writing a research proposal: A longitudinal case study. Information Processing and Management, 39, 445-463.
Vermunt, J.D. (1996). Metacognitive, cognitive and affective aspects of learning styles and strategies: A phenomenographic analysis. Higher Education, 31, 25-50.
Vermunt, J.D. (1998). The regulation of constructive learning processes. British Journal of Educational Psychology, 68, 149-171.
Vermunt, J.D., & Vermetten, Y.J. (2004). Patterns in student learning: Relationships between learning strategies, conceptions of learning, and learning orientations. Educational Psychology Review, 16, 359-384.
Visser, B.A., Ashton, M.C., & Vernon, P.A. (2006). Beyond g: Putting multiple intelligences theory to the test. Intelligence, 34, 487-502.
Wallace, R.M., Kupperman, J., Krajcik, J., & Soloway, E. (2000). Science on the web: Students online in a sixth-grade classroom. The Journal of the Learning Sciences, 9, 75-104.
Whitmire, E. (2003). Epistemological beliefs and the information-seeking behavior of undergraduates. Library and Information Science Research, 25, 127-142.
Wildemuth, B.M. (2004). The effects of domain knowledge on search tactic formulation. Journal of the American Society for Information Science and Technology, 55, 246-258.
Windschitl, M., & Andre, T. (1998). Using computer simulations to enhance conceptual change: The roles of constructivist instruction and student epistemological beliefs. Journal of Research in Science Teaching, 35, 145-160.
Wood, P.K. (1983). Inquiring systems and problem structure: Implications for cognitive development. Human Development, 26, 249-265.
Wood, P., & Kardash, C. (2002). Critical elements in the design and analysis of studies of epistemology. In B.K. Hofer & P.R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 231-260). Mahwah, NJ: Erlbaum.
Wu, Y.-T., & Tsai, C.-C. (2005). Information commitments: Evaluative standards and information searching strategies in web-based learning environments. Journal of Computer Assisted Learning, 21, 374-385.
Wu, Y.-T., & Tsai, C.-C. (2007). Developing an information commitment survey for assessing students' web information searching strategies and evaluative standards for web materials. Educational Technology & Society, 10(2), 120-132.