METACOGNITIVE KNOWLEDGE IN SECOND LANGUAGE WRITING

By

Soo Hyon Kim

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Second Language Studies – Doctor of Philosophy

2013

ABSTRACT

METACOGNITIVE KNOWLEDGE IN SECOND LANGUAGE WRITING

By

Soo Hyon Kim

Metacognitive knowledge, "a relatively stable information human thinkers have about their own cognitive processes and those of others" (Wenden, 1998, p. 516), is considered an important factor in language learners' success not only inside but also outside the classroom. However, confounding definitions and measures of metacognitive knowledge in SLA have made it challenging to compare and synthesize the body of research carried out in this area thus far (cf. Schoonen et al., 2003). This study aims to establish a clear definition and operationalization of metacognitive knowledge in the context of L2 writing by identifying its components. The study also demonstrates the process of developing a psychometrically valid and reliable measure to assess L2 writers' metacognitive knowledge, using both quantitative and qualitative methods. The following research questions guide the present study: 1) What are the underlying components of metacognitive knowledge in L2 writing? 2) How can L2 learners' metacognitive knowledge of writing be reliably assessed? 3) What does introspective data reveal about learners' metacognitive knowledge in task-based L2 writing? 4) What is the relationship between L2 writers' metacognitive knowledge, L2 proficiency, and L2 writing performance?

First, 253 intermediate Korean university EFL learners participated in a multi-phase procedure for the development of a self-report instrument that can reliably measure L2 writers' metacognitive knowledge. The multi-phase procedure included item pool generation, questionnaire administration, and questionnaire evaluation. Six components of metacognitive knowledge were identified from these data using Principal Component Analysis (PCA): metacognitive knowledge of 1) task, 2) personal learning process, 3) strategy, 4) text and accuracy, 5) problem solving, and 6) discourse features. Using participants' factor scores on these six components, in addition to L2 proficiency scores from DIALANG and L2 writing scores from an argumentative writing task, a multiple regression analysis was conducted to investigate the relationship between L2 writers' metacognitive knowledge, L2 proficiency, and L2 writing performance. It was found that L2 proficiency and metacognitive knowledge collectively predicted 38.6% of the variance in L2 writing performance. L2 proficiency was the strongest predictor, while metacognitive knowledge of strategy use and metacognitive knowledge of problem solving also made significant contributions to predicting L2 writing. In addition, introspective data were collected via stimulated recall interviews with a subset of the participants after they completed the argumentative writing task. The qualitative data gleaned from this process enabled triangulation of the large-scale quantitative data, strengthening the validity of the study and offering further insight into the results. The empirical results of this study indicated that the components of metacognitive knowledge in L2 writing mostly converged with those identified in early theoretical literature (e.g., Flavell, 1979).
In addition, it was found that the particular context of L2 writing – in this case, an emphasis on assessment – had a strong influence on writers' metacognitive knowledge. Qualitative data also revealed that students' previous experience with L1 and L2 writing instruction had a stronger influence on their metacognitive knowledge than has been acknowledged in previous literature.

Copyright by
SOO HYON KIM
2013

ACKNOWLEDGMENTS

Over the years, I have often heard academics say that "it takes a village to write a dissertation." Now, looking back at the process of writing my dissertation, I wholeheartedly embrace this sentiment, and would like to express my sincere gratitude to the many people in my village who have tirelessly supported my efforts.

First and foremost, I would like to thank Dr. Charlene Polio for her unwavering support as my dissertation committee chair and academic advisor. Having her as a sounding board during the conception of my dissertation was the highlight of my graduate studies. I only hope to emulate the insight, generosity, and patience she showed me as she encouraged me to continue to explore a topic with which I was initially unfamiliar, yet passionate about. Without doubt, her guidance and expertise were instrumental in the solidification and refinement of the ideas that formed the basis of this dissertation.

I am also immensely thankful for the support and guidance of my dissertation committee members, Drs. Susan Gass, Paula Winke, and Shawn Loewen, who selflessly gave of their time and expertise. Without each of their unique perspectives and constructive criticism, the completion of this dissertation would not have been possible.

A heartfelt thanks goes to the instructors who helped me with data collection in Korea: Ms. Jinhee Jeong, Dr. Youngwoo Cho, Ms. Jenny Kim, Dr. Ji Hyon Kim, Ms. Junhee Kim, and Dr. Eunmin Park. I am also grateful for the help of my student participants in Korea, who graciously shared not only their writing, but also their stories about triumphs and failures as learners and writers of a second language.

Writing this dissertation required the collection of a vast volume of data, which would not have been possible without the support of the SLS Ph.D. program, the College of Arts and Letters, and the MSU Graduate School in the form of two summer research grants, a Summer College Research Abroad Monies (SCRAM) Award, and a Dissertation Completion Fellowship. I also thank the TESOL International Association for their support in conducting and disseminating the results of this study through the TESOL Ruth Crymes Fellowship for Graduate Study.

I am indebted to many others at Michigan State University who helped me at various stages of conducting this study: Grace Amuzie, Jimin Kahng, Nobuhiro Kamiya, Roman Chepyshko, Maike Buckingham, Baburhan Uzum, LLT 841 students, Tomoko Okuno, Hyojung Lim, and Russ Werner come to mind. I will also warmly remember the friendship and support that friends in the SLS program have shown me throughout the years. Also, a special thanks goes to my dear friend and mentor Dr. Hongli Li, whose constant encouragement helped me pull through challenging times.

Last but not least, none of this would have been possible without the love and support of my family in Korea. Through them, I have learned the value of hard work and perseverance, and the importance of following my dreams and keeping hope. Thank you.

TABLE OF CONTENTS

LIST OF TABLES ............ x

LIST OF FIGURES ............ xii
CHAPTER 1: INTRODUCTION AND LITERATURE REVIEW ............ 1
    Literature Review ............ 2
    Language Learning Strategy Research ............ 2
    Metacognition ............ 6
    Metacognitive Knowledge and Language Learning ............ 7
    Metacognitive Knowledge and Writing ............ 9
    Metacognitive Knowledge and L2 Writing Research ............ 10
    Measures of Metacognitive Strategies ............ 20
    Developing and Validating Measures of Metacognitive Knowledge in SLA ............ 23
    Summary ............ 25
    Statement of Purpose ............ 26
    Hypotheses ............ 26
    Overview of Study ............ 28

CHAPTER 2: METHODS ............ 30
    Participants ............ 30
    Instruments ............ 32
        Metacognitive knowledge self-report instrument ............ 32
        Writing task ............ 35
            Essay prompt 1 ............ 36
            Essay prompt 2 ............ 36
        Language proficiency test ............ 38
        Language learning background questionnaire ............ 39
        Semi-structured interview guides ............ 40
    Procedure ............ 40
        Developing an item pool ............ 41
            Literature review ............ 42
            Preliminary instrument development ............ 42
            Think-aloud protocols and semi-structured interview ............ 43
        Piloting the questionnaire ............ 46
        Administering the revised questionnaire ............ 47
        Evaluating the final questionnaire ............ 47
        Stimulated recalls and semi-structured interviews ............ 48
    Quantitative Data Analysis ............ 49
        Data entry and preliminary analysis ............ 49
            L2 argumentative essay writing task rating ............ 50
            Exploratory factor analysis ............ 50
            Metacognitive knowledge test scores ............ 51
            L2 proficiency test ............ 51
        Multiple regression analysis ............ 51
            Multicollinearity ............ 51
            Entry method ............ 52
    Qualitative Data Analysis ............ 52
        Stimulated recall and semi-structured interviews ............ 52

CHAPTER 3: RESULTS ............ 53
    Principal Component Analysis Assumptions ............ 53
        Interval data ............ 54
        Normal distribution ............ 54
        Sample size ............ 54
        Multicollinearity and singularity ............ 55
    Factor Extraction ............ 56
    Factor Rotation ............ 57
    PCA Results ............ 58
    Factor Names ............ 60
    PCA for Revised Version of Questionnaire ............ 61
    PCA Assumptions ............ 62
    Factor Extraction and Rotation ............ 63
    Descriptive Statistics ............ 64
    PCA Results ............ 66
    Metacognitive Knowledge Components ............ 69
    Reliability Analysis ............ 73
    Regression Analysis ............ 75
        Sample size ............ 76
        Quantitative variables ............ 76
    Regression Analysis Results ............ 77
        Relationship between metacognitive knowledge and L2 writing ............ 78
        Relationship between metacognitive knowledge and L2 proficiency ............ 79
        Relationship between L2 proficiency, metacognitive knowledge, and L2 writing ............ 80
    Assessing the Regression Model ............ 82
        Model fit ............ 82
        Generalization ............ 82
    Summary of Quantitative Results ............ 83
    Qualitative Results ............ 84
        Metacognitive knowledge of task ............ 85
        Metacognitive knowledge of strategy use ............ 86
        Metacognitive knowledge of person ............ 86

CHAPTER 4: DISCUSSION ............ 88
    Research Question 1 ............ 88
        Subcomponents within broad categories ............ 89
        Goal orientation of metacognitive knowledge components ............ 91
        Influence of learning context ............ 92
    Research Question 2 ............ 93
        Self-report instrument reliability ............ 93
        Measures of metacognitive knowledge and strategies ............ 95
    Research Question 3 ............ 96
        Influence of learning context ............ 97
        Triangulation of quantitative data ............ 98
    Research Question 4 ............ 99
        Metacognitive knowledge of problem solving and strategy use ............ 101
        A cognitive model of L2 writing ............ 102
    Theoretical Implications ............ 102
    Pedagogical Implications ............ 104
    Limitations ............ 107
    Future Research ............ 108

CHAPTER 5: CONCLUSION ............ 110

APPENDICES ............ 112
    Appendix A: English Proficiency Test (DIALANG Grammar Section) ............ 113
    Appendix B: Argumentative Writing Task Prompt 1 ............ 116
    Appendix C: Argumentative Writing Task Prompt 2 ............ 117
    Appendix D: Language Learning Background Questionnaire ............ 119
    Appendix E: Language Learning Background Questionnaire in Korean ............ 121
    Appendix F: Semi-structured Interview Questions ............ 123
    Appendix G: Initial Pool of Questionnaire Items ............ 127
    Appendix H: List of Metacognitive Knowledge/Metacognition/Writing Questionnaires Consulted ............ 130
    Appendix I: First Version of Questionnaire (64 items) ............ 132
    Appendix J: Second Version of Questionnaire (45 items) ............ 136
    Appendix K: Second Version of Questionnaire (45 items) in Korean ............ 139
    Appendix L: Stimulated Recall Instructions and Guidelines ............ 144
    Appendix M: TWE Scoring Guide ............ 146

REFERENCES ............ 148

LIST OF TABLES

Table 1. Study Procedure ............ 41
Table 2. Participant Essay and Think-Aloud Protocol ............ 44
Table 3. Extracted Components and Explained Variance ............ 60
Table 4. Extracted Metacognitive Knowledge Factors ............ 60
Table 5. Questionnaire Item Descriptive Statistics ............ 64
Table 6. Extracted Components and Explained Variance ............ 66
Table 7. Factor Loadings ............ 68
Table 8. Component Correlation Matrix ............ 69
Table 9. Factor 1 Metacognitive Knowledge of Task ............ 69
Table 10. Factor 2 Metacognitive Knowledge of Personal Learning Process ............ 70
Table 11. Factor 3 Metacognitive Knowledge of Strategy ............ 71
Table 12. Factor 4 Metacognitive Knowledge of Text and Accuracy ............ 72
Table 13. Factor 5 Metacognitive Knowledge of Problem Solving ............ 72
Table 14. Factor 6 Metacognitive Knowledge of Discourse Features ............ 73
Table 15. Reliability for Metacognitive Knowledge Component Subscales ............ 74
Table 16. Item-Total Statistics ............ 75
Table 17. Descriptive Statistics for Multiple Regression Variables ............ 77
Table 18. Inter-Variable Correlations ............ 78
Table 19. Multiple Regression Analysis Summary for the Effects of Metacognitive Components on L2 Writing ............ 79
Table 20. Multiple Regression Analysis Summary for the Effects of Metacognitive Components on L2 Proficiency ............ 79
Table 21. Multiple Regression Analysis Summary for the Effects of L2 Proficiency and Metacognitive Components on L2 Writing Scores ............ 81

LIST OF FIGURES

Figure 1. A model of metacognition ............ 7
Figure 2. Overall study design ............ 29
Figure 3. Scree plot for first version of self-report instrument ............ 59
Figure 4. Scree plot for revised version of questionnaire ............ 67
Figure 5. Relationship between L2 proficiency, metacognitive knowledge, and L2 writing ............ 82
Figure 6. Choices for the new space in the student union ............ 116
Figure 7. Choices for housing for exchange students ............ 117
Figure 8. First version of questionnaire example instructions ............ 132
Figure 9. First version of questionnaire ............ 132
Figure 10. Second version of questionnaire example instructions ............ 136
Figure 11. Second version of questionnaire ............ 136
Figure 12. Example instructions for second version of questionnaire in Korean ............ 139
Figure 13. Second version of questionnaire in Korean ............ 139

CHAPTER 1: INTRODUCTION AND LITERATURE REVIEW

Second Language Acquisition (SLA) researchers and language teachers have always shown great interest in what makes a good language learner. Research on the good language learner flourished in the 1970s as researchers sought to describe the practices of successful learners of a second or foreign language (Gan, Humphreys, & Hamp-Lyons, 2004; Naiman, Frolich, Stern, & Todesco, 1996; Politzer & McGroarty, 1985; Rubin, 1975; Rubin & Thompson, 1994; Stern, 1975; Stevick, 1989). By identifying key features that characterized successful learners, researchers and language teachers hoped to capture the essence of effective language learning practices and impart that knowledge to other language learners as well. Meanwhile, a more theory-driven impetus behind this strand of research was to explore the ways that learner-internal factors, as opposed to external factors (e.g., amount of language instruction), affect language learning, and how these factors may lead to differential learning outcomes (Lujan-Ortega & Clark-Carter, 2000; Wenden & Rubin, 1987). Thus came about the body of research in SLA on individual differences among learners and how they affect the process of L2 acquisition.
In the past few decades, learner characteristics such as learner motivation, language learning aptitude, learning strategies, and learning styles have appeared as topics of research in numerous studies (for reviews, see, e.g., Dörnyei & Skehan, 2003; Robinson, 2002; Ellis, 2004). Recently, several types of individual differences that play a central role in learner autonomy and self-regulation have come into the spotlight due to a renewed interest in autonomous language learning. With the development of technology and its application to learning, language learning is increasingly taking place outside formal instruction contexts, for example, in the form of blended/distance learning and computer- or mobile-assisted language learning (CALL/MALL). Along with this trend, understanding individual differences that are closely linked to learner autonomy and self-regulation (e.g., learner strategies), and how these individual differences operate within a learner's self-regulatory system, has become as important as ever.

Learners' metacognitive knowledge is one such individual difference that is crucial to the development of learner autonomy and self-regulation, and it is the focus of this study. Whereas many studies in the field of SLA have attempted to describe and measure L2 learners' metacognitive strategy use, this study focuses on the metacognitive knowledge that learners have and access in order to plan, monitor, and evaluate their L2 writing. To this end, the study aims to: 1) establish a clear definition and operationalization of metacognitive knowledge in the context of L2 writing; 2) develop a psychometrically valid and reliable measure to assess L2 writers' metacognitive knowledge, using both quantitative and qualitative methods; and 3) examine the relationship between learners' metacognitive knowledge, L2 proficiency, and L2 writing performance.

Literature Review

The following section provides an overview of the literature on metacognitive knowledge in the context of L2 writing. First, it critically examines previous research on language learning strategies, and then introduces the construct of metacognition (including metacognitive knowledge) as a potentially larger framework within which to view learner strategies. Next, previous studies that have examined metacognitive knowledge as it relates to L2 writing are reviewed with respect to their research goals, as well as their definitions and measures of metacognitive knowledge. The literature review ends with a brief discussion of issues regarding the conceptualization and assessment of metacognitive knowledge in L2 writing.

Language Learning Strategy Research

Language learning strategies (LLS), according to a commonly cited definition by Oxford (1999), are "specific actions, behaviors, steps, or techniques that students use to improve their own progress in developing skills in a second or foreign language. These strategies can facilitate the internalization, storage, retrieval, or use of the new language" (p. 518). At the core of this definition is the attractive idea that learner strategies are a concrete set of facilitative actions or procedures that learners can master and use at their discretion. The allure of this idea to those in the profession of language teaching and research may be why LLS is one of the most prominent topics of interest in individual differences research.
While research on LLS has certainly been plentiful (for a review, see Chamot, 2001), it has also been a source of great confusion, mainly for two reasons: 1) the lack of consensus on a definition and operationalization of what constitutes a strategy, and 2) the lack of psychometrically valid measures to assess strategy use (e.g., Dörnyei, 2003; Macaro, 2006).

With regard to the first issue, several researchers have pointed out the ambiguity of what exactly a language learning strategy is. For example, in his recent meta-analysis of strategy instruction in L2 learning, Plonsky (2011) noted that researchers disagree on several issues related to the definition of learner strategies, some of which were: whether strategies are a plan of action or the actual action itself (Phakiti, 2003), and thus whether strategies are better considered internal or external mechanisms (Wenden, 1987), and how strategies are alike or different from other terms frequently used to define strategies (e.g., tactic, process, action, technique) (Oxford, 1990).

Macaro (2006) shared many of the same concerns and pointed out that the various problems in LLS and strategy instruction (SI) research stem from "the lack of theory underlying the construct labelled a learner strategy" (p. 325). Some other important criticisms he directed at the extant body of LLS research were: 1) the lack of consensus on a classification system; 2) the lack of a theoretically motivated framework or hierarchy; 3) insufficient evidence with regard to the integrity of learner strategies across different learning tasks and situations; 4) inadequate empirical evidence regarding their presumably facilitative effects; 5) a lack of consensus on the role of LLS in language processing; 6) a lack of consensus on the relationship between learner strategies and skills and processes; and 7) a lack of agreement on the long-term effects of LLS on language learning.

The second major problem regarding LLS research is the lack of psychometrically valid and reliable instruments that measure language learners' strategy use. This problem is inextricably linked to the issue outlined above, i.e., there is no consensus on a clear definition or operationalization of LLS. Depending on the definition or classification system used, LLS could refer to learners' plans, their concrete actions, or both, and herein lies the problem of using inappropriate, or inadequately examined, instruments for LLS research.

Most LLS research thus far has relied on self-report instruments to examine learners' strategy use because learner strategies (e.g., cognitive, affective, and metacognitive strategies) are often perceived as processes that cannot easily be observed with the human eye. However, using self-report instruments to tap into cognitive processes that are not accessible via external observation is a task fraught with difficulty. According to Schellings (2011), one can obtain different information about learner strategies depending on several variables related to the self-report instruments, such as "the level of specificity, target group, type of questions/statements, number of scales and items, type of answer scales, time to complete, reliability and validity" (p. 94). However, despite the complexity that these variables introduce to self-report instrument development, not many instruments used in SI or LLS research have been closely examined for these features, especially their validity or reliability.
Not only have the issues outlined above been detrimental to achieving a coherent research agenda in LLS research, but they have also left strategy instruction without a theoretical rationale or robust empirical support. Plonsky (2011) argued that the "methodological flaws in previous research…, the complexity of variables that affect L2 strategy use, uncertainty of long-term effects, cost/benefit ratio concerns, non-empirically justified strategies, a lack of valid and reliable instruments, and the absence of a comprehensive theory" (p. 2) have caused serious doubts about the effectiveness of strategy instruction. Needless to say, it will be difficult to justify learner strategy training in classroom settings until these and other unsettled theoretical and methodological issues are resolved in SI and LLS research (Rees-Miller, 1993).

Recently, in response to these criticisms, several researchers (e.g., Dörnyei, 2005; Dörnyei & Skehan, 2003; Tseng, Dörnyei, & Schmitt, 2006; Zhang, 2010) have put forward proposals to situate LLS research within the larger framework of metacognition or self-regulated learning, concepts derived from the field of educational psychology. In contrast to previous LLS research, which focused on the behavioral aspects of LLS (e.g., frequency of strategy use), these proposals involve a shift towards identifying and measuring the underlying cognitive processes of LLS. A remaining problem is that while this new framework of metacognition and self-regulation appears promising, it has not been sufficiently tested to determine whether it can be successfully incorporated into the field of SLA. The following section of the literature review examines the constructs of metacognition and metacognitive knowledge as they have been defined in educational psychology and SLA, and also looks into the ways these concepts have been applied to issues in L2 learning.

Metacognition

Metacognition, simply defined, is thinking about thinking. A more refined definition that captures the complexities of metacognition in the context of learning may be: "a higher order executive process that monitors and coordinates other cognitive processes engaged during learning, such as recall, rehearsal, or problem solving" (Tobias & Everson, 2009, p. 108). What constitutes this abstract construct, however, has been subject to debate over the years, and several interchangeably used terms referring to similar concepts (e.g., self-regulation, self-direction, executive control) have added to the confusion (Livingston, 1997). These various terms often highlight different aspects of metacognition, and certain terms are preferred over others in different fields of study. For example, the term self-regulation is often used in cognitive psychology, while self-direction is more frequently used in adult education and in work on learner autonomy in foreign/second language learning (Wenden, 1998).

One of the most influential frameworks of metacognition thus far has been Flavell's (1979) model of cognitive monitoring. Flavell's model of metacognition comprises several components: metacognitive knowledge, metacognitive experiences, goals (or tasks), and actions (or strategies). The interactions among these components, Flavell argued, are responsible for the monitoring of various human cognitive experiences. Among the different components in Flavell's model, metacognitive knowledge is the primary component that anchors the overall cognitive monitoring system.
Metacognitive knowledge as defined by Flavell is "that segment of your...stored world knowledge that has to do with people as cognitive creatures and with their diverse cognitive tasks, goals, actions, and experiences" (p. 906). He further explained that it mostly consists of "knowledge or beliefs about what factors or variables act and interact in what ways to affect the course and outcome of cognitive enterprises" (p. 907). Metacognitive knowledge is different from metacognitive strategies in that metacognitive knowledge is the information people have about cognitive processes, whereas metacognitive strategies are the actual skills (e.g., planning, monitoring, evaluating) that are utilized to control, oversee, and regulate cognitive activities. Thus, metacognitive knowledge and metacognitive strategies are two distinct components of the broader construct of metacognition, and accessing metacognitive knowledge to deploy metacognitive strategies is what is referred to as self-regulation.

[Figure 1. A model of metacognition: metacognition comprises metacognitive knowledge (person knowledge, task knowledge, strategy knowledge) and metacognitive strategies (planning, monitoring, evaluating).]

Metacognitive Knowledge and Language Learning

While Flavell's (1979) model of metacognition was meant to explain human cognitive experiences in general, it is clear that he also recognized its applicability to language learning. Metacognition, he explained, "plays an important role in oral communication of information, oral persuasion, oral comprehension, reading comprehension, writing, language acquisition, attention, memory, problem solving, social cognition, and various types of self-control and self-instruction" (Flavell, 1979, p. 906).

Wenden (1998) also recognized the value of metacognition for language learning. Pointing out an absence of explicit, theory-based discussion of the role of metacognitive knowledge in language learning, Wenden (1998) adapted Flavell's (1979) framework of metacognition, and was among the first to argue for its application to SLA research. She defined metacognitive knowledge as "a relatively stable information human thinkers have about their own cognitive processes and those of others" (p. 516). Influenced by Flavell (1979), she further classified metacognitive knowledge into three components: 1) person knowledge (general knowledge learners have acquired about human factors that facilitate or inhibit learning); 2) task knowledge (what learners know about the purpose of the task and how it will serve their language learning needs); and 3) strategic knowledge (general knowledge about what strategies are, why they are useful, and specific knowledge about when and how to use them).

Several learner statements excerpted and presented in Wenden (1998) illustrate the different types of metacognitive knowledge. Person knowledge is exemplified by a statement that a Japanese learner of English made about his perceived language aptitude:

    I think the improvement of language is due to some ability, personal ability. In my case, I have no personal ability. So I think it will take a long time. There is no good way to speed up my learning. (Wenden, 1998, p. 515)

Wenden (1998) further explained that person knowledge includes factors in SLA such as age, language aptitude, motivation, and learners' knowledge of how these factors affect their language learning experience.
In addition, learners' beliefs about their effectiveness as learners in general, their effectiveness in specific domains (e.g., reading, writing, speaking, listening), and their ability to achieve specific goals are also included in person knowledge.

Task knowledge, according to Wenden (1998), includes not only learners' knowledge of the purpose of a task and how it will affect their learning, but also their knowledge of the nature of the task and its demands. An example of a learner's metacognitive knowledge of task demands would be a learner recalling that "speaking about everyday topics with Americans was easier than speaking about more abstract topics" (Wenden, 1987, p. 578).

Strategic knowledge includes both general beliefs about approaches to language learning and learners' knowledge of particular strategies and their effectiveness. For example, "Grammar background is important to learn. Without grammar background you can't improve" (Wenden, 1987, p. 580) reflects learners' general strategic knowledge, while a learner deciding to make a list of technical vocabulary while summarizing a reading passage (Wenden, 1998, p. 524) would reflect the learner's knowledge of a specific strategy. Wenden (1998) also pointed out that learner accounts collected through introspective methods reflect learners' stored metacognitive knowledge of strategy.

Metacognitive Knowledge and Writing

While learners have varying levels of metacognitive knowledge, different learning situations also vary in their potential to induce language learners to access their own metacognitive knowledge. Flavell (1979) suggested that:

    …metacognitive experiences are especially likely to occur in situations that stimulate a lot of careful, highly conscious thinking: in a job or school task that expressly demands that kind of thinking; in novel roles or situations, where every major step requires planning beforehand and evaluation afterwards; where decisions and actions are at once weighty and risky; where high affective arousal or other inhibitors of reflective thinking are absent (cf. Langer, 1978). Such situations provide many opportunities for thoughts and feelings about your own thinking to arise and, in many cases, call for the kind of quality control that metacognitive experiences can help supply. (p. 908)

Flavell's descriptions of situations in which metacognitive experiences are likely to occur clearly resonate with descriptions of the process of writing: writing is without doubt a process that calls for a great amount of deliberate, conscious thinking; writers in formal education contexts are frequently assigned academic writing tasks, for which they need to plan and evaluate their writing; their decisions during the writing process are important, as they determine the quality of the final product and, ultimately, whether the goals of the writing task are successfully met; and finally, while writing is not without factors that inhibit reflective thinking (e.g., writing anxiety, a lack of vocabulary or grammar knowledge), compared to speaking or listening, which have greater time constraints and often require real-time interaction with interlocutors, writing presents a situation in which reflective thinking and the activation of metacognitive knowledge are more likely.

Metacognitive Knowledge and L2 Writing Research

In the realm of L2 writing, studies on metacognitive knowledge have largely derived from the body of research on process writing and writing strategies (Manchón, 2001).
In her critical analysis of the conceptualization of composing strategies, Manchón (2001) observed that studies on L2 writing processes have focused on compiling a general repertoire of L2 writing strategies (e.g., Hirose & Sasaki, 1994; Raimes, 1987; Victori, 1999; Zamel, 1983), or have looked at L2 writers' behaviors at different stages of the writing process such as planning (e.g., Jones & Tetroe, 1987), formulation (Roca, 1996), or revision (e.g., Hall, 1990; Sengupta, 2000). Other studies have looked at specific writing strategies such as the use of the L1 in L2 writing (e.g., Kobayashi & Rinnert, 1992; Qi, 1998), backtracking (e.g., Manchón et al., 2000), and restructuring (Roca et al., 1999).

However, just as Macaro (2006) critiqued the ambiguity of the term strategy in LLS research, Manchón (2001) also pointed out that "the term strategy in its application to L2 writing, has become inoperative due to its generality" (p. 95). She claimed that with few exceptions (e.g., Cumming, 1989) the term has been used to refer to a variety of "processes, methods, actions, and means implemented/used both to complete and approach writing tasks" (p. 95). In other words, any type of behavior observed during writing has been referred to as a writing strategy. While acknowledging the value of descriptive studies that document L2 writing behaviors, Manchón (2001) called for a narrower, theory-based conceptualization and definition of writing strategies for the purpose of theory building. She proposed conceptualizing writing strategies as problem-solving and control mechanisms, and noted that these conceptualizations were in line with what cognitive psychology refers to as metacognitive knowledge.

Studies on metacognitive knowledge in L2 writing vary in their focus. Early studies on metacognitive knowledge were exploratory in nature, and they mainly described the different categories of metacognitive knowledge in L2 writing. There were also studies that had a more pedagogical focus and investigated ways to enhance students' metacognitive knowledge in L2 writing through curriculum design and classroom instruction. Other studies examined students' metacognitive knowledge (often referred to as metacognitive awareness) of specific genres of writing, such as their knowledge of expository or argumentative writing. More recently, some studies have examined metacognitive knowledge as one of many factors within a cognitive model of L2 writing. In the following section, studies on metacognitive knowledge and L2 writing are reviewed with respect to their objectives and results, and most importantly, the way they defined and measured metacognitive knowledge in L2 writing.

Early studies on the topic of metacognitive knowledge and L2 writing were similar to the early studies that emerged in LLS research in that they took a more exploratory and descriptive approach to examining metacognitive knowledge (e.g., Wenden, 1982, 1983). These were among the first studies to move away from traditional LLS research and instead introduce the construct of metacognitive knowledge in language learning and L2 writing. Perhaps for this reason, they put an emphasis on differentiating between metacognitive strategies in LLS research and metacognitive knowledge, and also explored the relationships between the two.
Wenden (1991) adapted Flavell's (1979) framework of metacognitive knowledge (i.e., person, task, and strategy knowledge) and investigated the cognitive processes underlying the implementation of metacognitive strategies in completing a writing task. Using think-aloud protocols and observations, she collected data from eight participants who wrote short argumentative essays in response to a given writing prompt. From these data she was able to generate descriptive profiles of the three most successful L2 writers' metacognitive strategy use and the underlying metacognitive knowledge used in the implementation of their strategies. World knowledge, rhetorical knowledge, and linguistic knowledge were found to be accessed during L2 writers' planning of a writing task, while message knowledge, rhetorical knowledge, linguistic knowledge, and person knowledge were summoned for task evaluation and monitoring. Wenden concluded that researchers should take a different approach to LLS research and examine LLS from the perspective of how language learners regulate their learning of a task (cf. Tseng et al., 2006), and that learners should be trained to develop "strategy-knowledge networks" (p. 317) so that they have the necessary metacognitive knowledge and strategies to successfully perform tasks. While Wenden's descriptive profiles of learners and examples of their think-aloud protocols are informative, a question that remains, and that would benefit from further research, is whether L2 writers of lower language and/or writing proficiency would display similar patterns.

Victori (1999), similar to Wenden (1991), presented a description of four student writers' writing processes using Flavell's metacognitive knowledge framework. She defined metacognitive knowledge as "the knowledge that a person develops about his or her own cognitive processes and about the nature and requirements involved in undertaking a cognitive task" (p. 538). While the study was similar in its exploratory and descriptive approach to Wenden (1991), a small departure from that study was that Victori (1999) shed light on the differences in metacognitive knowledge between more and less effective writers. The two specific aims of the study were to determine whether there is a relationship between students' metacognitive knowledge and their use of writing strategies, and whether differences in the writing skills of more and less effective writers could be attributed to their metacognitive knowledge. Using think-aloud protocols and oral interviews, Victori found that compared to the two unsuccessful writers, the two successful writers had more elaborate metacognitive knowledge in each of its three categories (i.e., person, task, and strategy), and their writing behaviors were in accordance with this knowledge. Based on these results, Victori argued that L2
For example, Cotterall and Murray (2009) looked at the instruction and development of students’ language proficiency and metacognitive knowledge which, following Flavell (1979), they defined as “what students know about themselves, the tasks they complete and their learning strategies” (Cotterall & Murray, 2009, p. 34). They reported on a longitudinal study they conducted with 400 Japanese EFL learners enrolled in a self-directed language learning course. Students in this course identified their own language learning goals and problems, decided on the materials they will use, and monitored and evaluated their learning processes as a way of developing their metacognitive knowledge (see Holec,1981 for a model of learner autonomy). Data were collected from six sources: language belief questionnaires, language learning histories, learner portfolios, a course evaluation (20 Likert scale questions and 6 open-ended questions), interviews, and focus group discussions. The authors reported that through the learning structure/environment enabled by the self-directed language learning course, students developed more positive beliefs about self-directed learning, and were better able to self-regulate (i.e., plan, monitor, and evaluate) their learning processes. The significance of Cotterall and Murray’s (2009) study lies in the fact that it aimed to enhance students’ metacognitive knowledge not by a short-term intervention, but by immersing students in a self-directed learning environment over a long period of time. Perhaps because of the various sources of data involved in the study, the authors did not provide information on how 14 the research instruments (e.g., questionnaire) were designed, what sources they were based on, and whether they were examined for their validity and reliability. Also, while the authors addressed the development of students’ metacognitive knowledge in length, they did not report on the relationship between students’ enhanced metacognitive knowledge and gains in language proficiency, which was one of their initial objectives for the study. They noted that due to intervening variables, this relationship was difficult to discern from their study, and called for further research on the relationship between the development of language proficiency and metacognitive knowledge. Kasper (1997) examined the development of L2 writers’ metacognitive knowledge (as defined by Flavell, 1979), but with a greater focus on its relationship with changes in language proficiency. The major goals of her study were to examine if the three categories of metacognitive knowledge (i.e., person, task, strategy) have an equal impact on students’ writing proficiency, how this differs for learners of different English proficiency levels, and how the components of metacognitive knowledge develop as learners’ language proficiency develops. While the questions that the author asked were interesting, it seemed that the instruments she used would better serve as pedagogical tools rather than as reliable measures of change in the different components of metacognitive knowledge. The instruments she utilized in the study were pre- and post-course writing autobiographies and a cognitive style questionnaire. For the writing autobiography, students wrote about the pros and cons of their writing experiences. 
The cognitive style questionnaire comprised three mostly open-ended questions asking students to define good writing, explain the strategies they use to overcome difficulties in writing, and list different features of writing (e.g., grammar, organization, clarity) in order of their importance. Kasper found that students who passed the end-of-semester writing assessment had significantly higher ratings on their writing autobiographies and on their cognitive style questionnaires, and that there was a significant positive correlation between the writing assessment scores and the change in metacognitive knowledge indicated by the study instruments. Kasper acknowledged that it is difficult to claim a causal relationship between enhanced metacognitive knowledge and L2 writing proficiency, and called for further studies that could shed light on this issue.

Negretti and Kuteeva (2011) brought the concept of metacognitive knowledge together with the concept of genre awareness in their exploratory case study, which investigated how L2 writers use their metacognitive awareness to analyze the rhetorical features of texts in academic reading and writing. The authors noted that although the concept of metacognitive knowledge of genre has previously been discussed in the L2 writing literature, it has more often been discussed through concepts such as "academic discoursal consciousness" (Belcher & Braine, 1995, p. xv) and "rhetorical consciousness raising" (Hyland, 2007, p. 160). In their study, the term metacognitive awareness was used interchangeably with metacognitive genre awareness, defined by the authors as "metacognitive processes that have as their object knowledge of genre, discourse, and rhetorical aspects of academic texts" (p. 96). Reflections on weekly genre analysis tasks, a post-course survey, and a final written genre analysis of two texts were collected from eight Swedish pre-service teachers enrolled in an academic reading and writing class, and were analyzed using qualitative methods. The authors claimed that after engaging in genre awareness-raising tasks, learners developed declarative, but not conditional, metacognitive knowledge of discourse features in academic writing (conditional knowledge here referring to being able to adapt their knowledge to the genre-bound purpose of the text).

Unlike the studies reviewed above, which investigated metacognitive knowledge as their main focus, some studies have examined metacognitive knowledge as one factor among many within larger cognitive models of L2 writing. Schoonen and colleagues have conducted a prolific line of research (Schoonen & De Glopper, 1996; Schoonen, Snellings, Stevenson, & Van Gelderen, 2009; Schoonen, Van Gelderen, De Glopper, Hulstijn, Simis, Snellings, & Stevenson, 2003; Schoonen, Van Gelderen, De Glopper, Hulstijn, Snellings, Simis, & Stevenson, 2002) on the relationships between L2 linguistic knowledge, speed of L2 processing, metacognitive knowledge, L1 writing proficiency, and L2 writing proficiency. Schoonen et al. (2003) developed a model of L2 writing by conducting a longitudinal study that examined the development of 400 Dutch EFL learners' L2 writing, compared to their development of L1 writing. Data collected included the L2 writers' metacognitive knowledge, linguistic knowledge (grammar, vocabulary, spelling), and their fluency in lexical retrieval and sentence building.
Using structural equation modeling for analysis, the researchers found that L2 writing was more strongly correlated with linguistic knowledge and fluency (i.e., retrieval) than was L1 writing, and that there was a strong relationship between L1 and L2 writing proficiency, with metacognitive knowledge and linguistic fluency mediating this relation. Metacognitive knowledge in this study was defined as "knowledge of what constitutes a good text and which writing strategies are likely to be successful in dealing simultaneously with all the constraints writing a text poses" (p. 168). To measure metacognitive knowledge in L1 and L2 writing, the researchers used an 80-item questionnaire that comprised statements about text characteristics as well as reading and writing strategies, and participants were asked to indicate whether they agreed or disagreed with each of these statements. Because the authors considered metacognitive knowledge to be a language-independent or language-neutral construct, all statements in the questionnaire were in Dutch, and this test was used as the single measure of participants' metacognitive knowledge for both their L1 and L2 writing.

While the researchers used the term metacognitive knowledge, it seems that the instrument used in the study to test metacognitive knowledge reflects a rather narrow conceptualization of the construct. The instrument was designed to test only participants' knowledge of text characteristics and certain reading and writing strategies. Perhaps because metacognitive knowledge was included as one of many factors in the L2 writing model, the authors did not report whether the questionnaire was based on any particular theoretical framework or literature, or how the 80 statements in the questionnaire were compiled.

This limited conceptualization of metacognitive knowledge may have had an impact on Schoonen et al.'s (2003) study results. In interpreting the results of their study, the researchers observed that the correlation between participants' L1 and L2 writing proficiencies was not fully explained by their metacognitive knowledge measure. The researchers explained this by positing that there may be other, more general writing-related variables that their model of writing did not account for, for example, "general 'world knowledge', certain writing strategies, or general cognitive fluency (e.g., working memory)" (p. 192). However, general world knowledge and writing strategies, given in Schoonen et al. (2003) as examples of possible other variables that affect writing, are often categorized under metacognitive knowledge in influential frameworks of metacognitive knowledge (e.g., Flavell, 1979; Wenden, 1998) that have been proposed in the past. Thus, in addition to the possibility that there were miscellaneous factors that were not accounted for in the study's model of writing, it may be that metacognitive knowledge had been defined too narrowly, and thus the measure used was not sufficiently comprehensive to capture the full contribution of metacognitive knowledge to L1 and L2 writing proficiency. The researchers also acknowledged that their "operationalization of metacognitive knowledge…was, for practical reasons, rather straightforward" (p. 192), and that the construct could be extended to include more components, putting forward a call for further explorations of the relation between metacognitive knowledge and writing proficiency.
Sasaki and Hirose also conducted a series of closely related studies (Hirose & Sasaki, 1994; Sasaki & Hirose, 1996; Hirose & Sasaki, 2000) that included metacognitive knowledge as one of the factors within their model of EFL writing. Within the context of Japanese EFL writing, Sasaki and Hirose (1996) looked at the relationships between L2 language proficiency, L1 writing proficiency, L1 writing strategies, L2 writing strategies, L2 writing metaknowledge, past writing experience, and instructional background. Sasaki and Hirose's (1996) study was different from Schoonen et al. (2003) in that the researchers examined different factors to be included in their model of writing (e.g., L1 and L2 writing strategies, past writing experience, instructional background), and they more explicitly stated that their model of writing was for a specific genre: expository writing. Also, while Schoonen et al. (2003) used the term metacognitive knowledge, Sasaki and Hirose (1996) used the term metaknowledge in their study to refer to the knowledge of L2 writing that writers need in order to successfully complete a given writing task. Citing Reid (1990), the authors explained that this knowledge includes "what is socially and culturally appropriate in terms of the writer roles, audience expectations, rhetorical and stylistic conventions, and situational or contextual features of written text" (p. 201). Despite the different terminology, however, metaknowledge in Sasaki and Hirose (1996) seems to refer to a construct similar to those in other studies (metacognitive knowledge in Schoonen et al., 2002, 2003, 2011, and metacognitive genre awareness in Negretti & Kuteeva, 2011).

The test of metaknowledge on English expository writing used in Sasaki and Hirose (1996) assessed L2 writers' knowledge of important concepts in English expository writing such as topic sentence, unity, coherence, and organization. The test consisted of 12 items organized under three broad sections: 1) open-ended questions asking students to explain key terms (e.g., coherence); 2) multiple-choice questions asking students to choose the most appropriate statement about English expository writing; and 3) a task asking students to read five English paragraphs, select the best organized one, and justify their choice. The authors did not provide a detailed explanation of how the tests were scored. Results of the study indicated that the participants' L2 proficiency, L1 writing proficiency, and metaknowledge all significantly predicted variance in L2 writing proficiency (52%, 18%, and 11%, respectively). However, a more interesting finding was that L1 writing proficiency and metaknowledge gained explanatory power only when L2 proficiency surpassed a certain threshold. Based on these results, Sasaki and Hirose (1996) hypothesized that L2 writers may need to reach or surpass a certain level of L2 proficiency in order to transfer these competencies to their L2 writing proficiency.

Measures of Metacognitive Strategies

Most of the existing research on metacognitive knowledge in L2 writing suffers from difficulties in using appropriate methods to examine or measure L2 writers' metacognitive knowledge. According to Van Hout-Wolters (2000), the methods that have been used to examine learners' cognitive processes can largely be divided into two categories: offline and online.
Offline methods include language learning questionnaires, oral interviews, stimulated recall interviews, and portfolios, whereas online methods aim to access learners' real-time cognitive processes; think-aloud protocols, eye-tracking (eye-movement measurement), computer log files, observation of behavior, trace analysis, and performance assessment are some examples (Van Hout-Wolters, 2000).

Self-report instruments (also referred to as questionnaires) are one of the methods that have been most frequently used to tap into language learning strategies. Yet recently, with an increasing amount of scrutiny applied to the use of these self-report instruments, questions have been raised with regard to the reliability and validity of such measures for language learning strategies. Insights gained on learning strategies through self-report instruments may differ depending on a range of factors. For example, they can vary based on the goal and content of the self-report instrument, as well as the "level of specificity, target group, type of questions/statements, number of scales and items, type of answer scales, time in which to complete the questionnaire, reliability and validity" (Schellings, 2011, p. 94). Therefore, it is crucial to select the appropriate type and format in order to ensure that the self-report instrument is in fact measuring what it set out to measure.

Tseng, Dörnyei, and Schmitt (2006) explained in detail several limitations of self-report instruments used in LLS research. They took as an example the Strategy Inventory for Language Learning (SILL), a commonly used self-report instrument developed by Oxford (1990), and demonstrated issues that are found in self-report instruments. Similar to other self-report instruments in LLS research, the SILL presents the learner with a list of various items describing specific strategies, and asks learners to indicate on a 5-point Likert scale (1: never/almost never – 5: always/almost always) the frequency with which they use each strategy. One of the problems pointed out by Tseng et al. (2006) was that while the scores in the SILL tell the researcher/teacher about the frequency (quantity) of strategy use, they do not tell them anything about appropriate/effective use (quality). The authors also argued that "the scales in the SILL are not cumulative and computing mean scale scores is not justifiable psychometrically" (Tseng et al., 2006, p. 83). By this they mean that a student could use one specific cognitive strategy very frequently and effectively, but still receive a low average score on the Cognitive Strategy scale if s/he does not report using any of the other types of cognitive strategies. Thus, the SILL would not be a psychometrically valid and reliable self-report instrument for measuring LLS use.

Schellings and Van Hout-Wolters (2011) also add to this discussion of the potential problems of measuring strategy use with self-report instruments by reviewing five empirical studies that demonstrate the advantages and disadvantages of such instruments and address validity issues with regard to the measurement of cognitive and metacognitive strategies. As an alternative to using self-report instruments as the sole measure of metacognitive strategy use, the authors suggest that researchers use multiple methods, mixed methods, or triangulation of different methods to obtain a more comprehensive picture of strategy use.
However, they also point out some caveats in doing so. In using different methods to measure strategy use, it is likely that researchers will obtain different results, and deciding how to combine and weight these results for triangulation can be both a difficult question and a time- and labor-intensive endeavor (Schellings & Van Hout-Wolters, 2011, p.85). The considerations the authors suggest should guide the selection of a self-report instrument to assess metacognitive strategy use are: 1) the match between the strategies the self-report instrument measures and the learning strategies of interest; 2) whether the learning task accompanying the self-report instrument is representative of the learning tasks to which generalizations will be made; 3) validity and reliability requirements; and 4) issues of generalizability of measuring methods (Schellings & Van Hout-Wolters, 2011, p.84). If self-report instruments are used to measure metacognitive strategy use despite their limitations, the authors suggest that items in the questionnaire should be task-specific, and that the questionnaire should be administered directly before or after participants complete the task so as to increase the accuracy of the questionnaire responses. In addition to these suggestions related to the administration of a questionnaire, researchers are encouraged to conduct think-aloud protocols during questionnaire completion with a subset of participants in order to identify whether the participants are accurately interpreting the questionnaire items. Both think-aloud protocols and stimulated recalls should be done, as these two methods may capture different information, and the correlations between subscales of offline and online measures, as well as correlations for the overall instrument, should be measured (Schellings & Van Hout-Wolters, 2011). Developing and Validating Measures of Metacognitive Knowledge in SLA With the aforementioned methodological issues surfacing in discussions regarding the use of self-report instruments to assess L2 learners' strategic learning, there have been several recent research efforts in the field of SLA aimed at developing and validating measures for the construct of metacognitive knowledge. For example, SLA researchers have documented the development and validation of self-report instruments that measure students' metacognition or self-regulation in various domains of language learning such as L2 vocabulary learning and L2 listening comprehension. Arguing for the development of self-report instruments that tap into the "underlying trait" (p.82) of learners' strategic learning as opposed to identifying isolated instances of strategy use, Tseng et al. (2006) examined the construct of L2 learners' "self-regulatory capacity" (p.78) in the domain of vocabulary learning. Upon critically reviewing the self-report instruments and taxonomies used thus far in vocabulary learning strategies (VLS) research, such as the VLS taxonomy (Schmitt, 1997), the Vocabulary Learning Questionnaire (VLQ) (Gu & Johnson, 1996), and the Vocabulary Learning Strategies Inventory (VOLSI) (Stoffer, 1995), the authors conclude that these measures are inadequate with regard to capturing the complete picture of L2 learners' strategic learning. They contrast the makeup of these instruments with the Motivated Strategies for Learning Questionnaire (MSLQ) developed in the field of Educational Psychology (Pintrich et al.,
1991; reprinted in VanderStoep & Pintrich, 2003), and posit that a questionnaire format that aims to assess the underlying traits of strategy use, such as the MSLQ, would be a more valid measure of the strategic use of vocabulary. The self-regulatory system that Tseng and colleagues (2006) adopted for their self-regulatory capacity in vocabulary learning included five factors: commitment control, metacognitive control, satiation control, emotion control, and environmental control. Through extensive piloting and questionnaire administration, the researchers produced the Self-Regulating Capacity in Vocabulary Learning Scale (SRCVOC), composed of 20 items that elicit students' responses on the five facets of self-regulation. In the realm of L2 listening, Vandergrift et al. (2006) carefully document the development and validation of a metacognitive knowledge self-report instrument to tap into learners' regulation of listening comprehension strategies, namely, the Metacognitive Awareness Listening Questionnaire (MALQ). The instrument departs from previous self-report instruments that have been used to measure language learners' use of strategies (e.g., the SILL) in that it adopts a broader theoretical framework of metacognition. The authors do not use the term metacognitive knowledge, choosing instead to refer to the construct as metacognitive awareness and perceived use of strategies. With extensive piloting, in addition to exploratory and confirmatory factor analyses, the researchers develop a 21-item measure of metacognitive awareness in listening comprehension comprising five factors: problem-solving, planning and evaluation, mental translation, person knowledge, and directed attention. Summary With the increasing recognition that existing approaches to examining learner strategies may be ill-suited for establishing a coherent body of LLS research, several researchers (e.g., Dörnyei, 2005; Tseng et al., 2006; Wenden, 1991, 1998) have proposed that we should adopt a framework of metacognition and investigate LLS within this context. At the very core of learner metacognition is the construct of metacognitive knowledge, which is the knowledge learners have about their own cognitive processes. While metacognitive knowledge is relevant to the learning of various language skills, it seems especially relevant to L2 writing. However, the lack of studies on the relationship between L2 learners' metacognitive knowledge and writing is surprising, considering the opportunity that the context of writing affords learners to access their metacognition. Also, even with the small number of existing studies, conducting a systematic and thorough examination of studies on metacognitive knowledge in L2 writing is a difficult undertaking. This is in part due to the various confounding terms and definitions that are used for metacognitive knowledge. Different terms such as metaknowledge (e.g., Sasaki & Hirose, 1996), metacognitive awareness or metacognitive genre awareness (e.g., Negretti & Kuteeva, 2011), and metacognitive strategies (e.g., O'Malley & Chamot, 1990) are often used in the literature, sometimes interchangeably with metacognitive knowledge, and at other times as more limited or broader interpretations of metacognitive knowledge. Even when the term metacognitive knowledge is used (e.g., Schoonen et al., 2002, 2003, 2011), it is sometimes not used according to definitions based on theory or on previous literature.
What further complicates the problem is that many instruments built to measure metacognitive knowledge in L2 writing are based on different definitions of the construct, and are often not examined for their validity or reliability. Statement of Purpose The purpose of this study is to address the issues outlined above and disentangle the many confounding definitions, operationalizations, and measures of metacognitive knowledge within the context of L2 writing. The study aims to: 1) establish a clear definition and operationalization of metacognitive knowledge in the context of L2 writing; 2) develop a psychometrically valid and reliable measure to assess L2 writers' metacognitive knowledge, using both quantitative and qualitative methods; and 3) determine the relationship between L2 writers' metacognitive knowledge in L2 writing, L2 writing proficiency, and L2 proficiency. The following are the main research questions this study seeks to answer: 1. What are the underlying components of metacognitive knowledge in L2 writing? 2. How can L2 learners' metacognitive knowledge of writing be reliably assessed? 3. What does introspective data reveal about learners' metacognitive knowledge in task-based L2 writing? 4. What is the relationship between L2 learners' metacognitive knowledge, L2 writing proficiency, and L2 proficiency? Hypotheses The following are the hypotheses made for each of the research questions, based on previous literature on the topic: H1. Metacognitive knowledge will include person, task, and strategy knowledge of the specific L2 writing task. With regard to the first research question, it is hypothesized that L2 writers' metacognitive knowledge of the writing task will include knowledge of themselves as language learners (i.e., person knowledge), their knowledge of the specific pedagogical task (i.e., task knowledge), and their knowledge of effective strategy use (i.e., strategy knowledge) (Wenden, 1998, adapted from Flavell, 1979). H2. A theory-based self-report instrument developed through a multi-phase validation and revision process, complemented with introspective data using stimulated recall, will enable valid and reliable measurement of learners' metacognitive knowledge in L2 writing. Because metacognitive knowledge is stored within L2 learners' cognitive systems, it is impossible to directly observe and assess it. Therefore, the most widely used method of measurement has been self-report instruments. However, in order to design self-report instruments that reliably assess and measure the construct of metacognitive knowledge, it is imperative to implement a procedure that ensures the compilation and validation of theoretically valid and empirically studied questionnaire items. In addition, insights gleaned from introspective data using stimulated recall interview methods can complement information gathered from the administration of the self-report instrument. H3. Introspective data will allow triangulation of L2 writers' metacognitive knowledge as measured by the self-report instrument, and potentially offer insight into the relationship between MK and metacognitive strategies.
Previous literature on the validity of self-report instruments (e.g., Schellings & Van Hout-Wolters, 2011; Veenman, 2011) has noted that data from large-scale administration of self-report instruments alone offers a rather incomplete picture of learners' cognition for various reasons (e.g., misinterpretation of questionnaire items, participants' bias towards social desirability, influence from previous instruction and teachers' expectations, trouble accessing specific information from memory). Some researchers have also suggested that offline self-report instruments and online introspective methods such as think-aloud protocols or immediate stimulated recall interviews elicit different information; while offline self-reports elicit information stored in learners' long-term memory, online methods elicit information about learners' strategy use and behavior. Thus, complementing data collected via the self-report instrument with introspective data will enable better understanding of the relationship between metacognitive knowledge and strategies. H4. L2 proficiency and metacognitive knowledge will contribute to L2 writing proficiency. A high level of metacognitive knowledge will not be able to compensate for an inadequate level of L2 proficiency. It is hypothesized that learners' access to their metacognitive knowledge will not enable them to overcome difficulties in L2 writing that stem from an inadequate level of L2 proficiency (cf. Sasaki & Hirose, 1996; Schoonen et al., 2003). Previous research that has found L2 proficiency and metacognitive knowledge to be among the variables contributing to L2 writing performance (Sasaki & Hirose, 1996) also supports this hypothesis. Overview of Study The present study is largely organized into two parts. The first part of the study involves establishing a clear definition of metacognitive knowledge by identifying its components, as well as the development of a psychometrically valid self-report instrument that can reliably measure L2 writers' metacognitive knowledge. Qualitative data gathered through introspective methods using stimulated recall interviews also informs the self-report instrument development process, and complements the large-scale quantitative data collected. Results from this section of the study will enable the researcher to answer the first, second, and third research questions of the present study. The second part of the study involves investigating the relationship between L2 proficiency, metacognitive knowledge, and L2 performance. Results from this part of the study will answer the fourth research question. Details of the study participants, materials, and procedures are given in the next chapter. Figure 2. Overall study design. (Phase 1, instrument development, addresses RQ1, RQ2, and RQ3 through a multi-phase procedure of MK item compilation, questionnaire administration, and questionnaire evaluation, triangulated with introspective data; Phase 2 addresses RQ4, the relationship between MK, L2 proficiency, and L2 writing.) CHAPTER 2: METHODS Participants Participants were Korean EFL learners enrolled in English classes at six universities in South Korea. A total of 619 students participated at different stages throughout the study, including the materials development stage, two pilot studies, the main questionnaire administration, and the small-scale think-aloud and stimulated recall interviews.
Reported in this chapter are data on the participants of the main questionnaire study (n = 164) who submitted all study tasks (i.e., English proficiency test, language background questionnaire, metacognitive knowledge questionnaire, and English argumentative essay). The participants were enrolled at two universities located in Seoul: one, a relatively high-ranking women's university, and the other, a very high-ranking public university. There were 151 females and 13 males, whose ages ranged from 19 to 31 (M = 20.95, SD = 1.89). They were mostly freshmen (n = 134), with a few sophomores (n = 3), juniors (n = 7), and seniors (n = 19), and were pursuing various areas of study including, for example, advertising, business administration, computer science, education, modern languages (e.g., German, French, Chinese, Spanish, Korean), engineering, arts and design, and music. There are largely two types of English classes that these students take: foundational English classes and elective English classes. Enrollment in either of these classes is usually not a reliable indicator of students' English proficiency. Students taking basic, foundational English classes are rarely placed in them based on any kind of assessment. Rather, English majors are required to take these foundational English classes during their freshman and/or sophomore years as prerequisites to other English classes offered in their major, and non-English majors can take these classes as General Education classes at any time during their tenure as a student at the university. In the case of elective English classes, students are free to select classes that match their interests and seem appropriate for their level. As can be seen, in either case, students' English proficiency does not, in any important way, determine which English classes they take. Participants were anticipated to have, at a minimum, an intermediate English proficiency level according to the Common European Framework of Reference (CEFR), because they would need to have obtained a considerably high score on the English portion of the national college entrance exam in order to enter either university. On a scale of one to five, with one indicating poor and five indicating excellent, they self-rated their overall English proficiency to be 2.64, on average. They were more confident in their reading (M = 2.98, SD = 0.73) and listening skills (M = 2.93, SD = 0.71) than they were in their writing (M = 2.31, SD = 0.85) and speaking skills (M = 2.08, SD = 0.95). Meanwhile, their scores on the English proficiency test (DIALANG) ranged from 6 to 27 (M = 17.15, SD = 4.51) out of a possible score of 30. In terms of their English learning backgrounds, participants reported a range of 3 to 19 years of English study (M = 11.53, SD = 2.76). Some participants (n = 62) reported having taken standardized English tests in the past, including the Test of English Proficiency (TEPS), the Test of English for International Communication (TOEIC), the Test of English as a Foreign Language (TOEFL), and the International English Language Testing System (IELTS). Participants who reported having had some kind of writing instruction in the past (n = 59) had mostly enrolled in private classes to prepare for these standardized tests or, less frequently, had attended an international school where they had taken an ESL class that included writing instruction.
The few participants who were juniors or seniors reported on their experiences with their freshman English reading and writing courses at their universities. A small number of students (n = 16) had studied abroad in an English-speaking country, and the average length of residence was 3.06 months (SD = 16.34). Instruments Data for this study came from five major sources: 1) a metacognitive knowledge self-report instrument; 2) argumentative essay writing prompts; 3) the DIALANG English proficiency test; 4) a language learning background questionnaire; and 5) semi-structured interviews. The metacognitive knowledge self-report instrument (see Appendices I and J) provided a measure of participants' metacognitive knowledge in L2 writing, while the argumentative essays written in response to the prompts (see Appendices A and B) allowed insight into their English writing proficiency. The grammatical structure section of the DIALANG test (see Appendix A) provided an index of participants' English proficiency, and the language learning background questionnaire helped gather information about their language backgrounds and previous language learning experiences. The semi-structured interviews elicited further information from participants on other aspects of their perceptions about English learning and writing. Each of the instruments is described in further detail below. Metacognitive knowledge self-report instrument. The metacognitive knowledge self-report instrument that was developed and used in this study consisted of 31 questionnaire items. Each item was a statement about a certain aspect of students' metacognitive knowledge in the context of L2 writing. Each statement began with a phrase that oriented participants' attention to the context of writing an English essay. This was in order to obtain the most accurate results with regard to participants' metacognitive knowledge by creating a self-report instrument that oriented participants to the specific task. The metacognitive knowledge questionnaire items were presented in Korean. This was to prevent participants' English proficiency from becoming a confounding factor in the questionnaire results. However, because the first version of the questionnaire was developed based on previous literature on metacognitive knowledge and questionnaire items that had been utilized in previous studies, the items were initially written in English. Then, these English items were translated into Korean by the researcher, whose L1 is Korean. The Korean translations were then translated back into English by an instructor of Korean as a foreign language, who is also a native speaker of Korean. This was to ensure the accuracy and equivalence of the Korean translations of the questionnaire items. The questionnaire items were presented to participants in a random order. Participants were asked to read each of the items and indicate, on a 6-point Likert scale, the extent to which they agreed with the statement. Point 1 on the Likert scale indicated strongly disagree, 2 disagree, 3 somewhat disagree, 4 somewhat agree, 5 agree, and 6 strongly agree.
The descriptors of the scale (ranging from strongly disagree to strongly agree) elicited students' perceptions of the extent to which the statements in the items were true, rather than the frequency with which they believed the statement in each item was true during the task, as recommended by recent critical reviews of self-report instruments used in previous studies (cf. Tseng et al., 2006). The number of response categories for this study was decided based on a review of previous literature on the relationship between questionnaire response categories and scale reliability, in addition to pilot testing with a large group of participants (n = 98). Initially, an 8-point Likert scale was constructed, based on literature in educational measurement claiming that the reliability of a scale continues to increase as the number of response categories increases, up to an 11- or 12-point scale (Nunnally, 1978, cited by Neuman, 2000). Researchers have claimed that reliability increases very rapidly as the number of response categories increases, but this increase tends to level off at about 7, and there is very little increase after 11 (Nunnally, 1978; Preston & Colman, 2000). Also, an even-numbered scale was chosen so that it would deter participants from giving a neutral answer as a default (e.g., point 3 on a 5-point scale), and so that dividing the responses into a dichotomous variable (i.e., Yes, this statement is true of me or No, this statement is not true of me) would be possible in data analysis. However, this initial 8-point scale was reduced to a 6-point scale after piloting the questionnaire. Participants (n = 98) in the pilot study were encouraged to report on any difficulties they had in responding to the questionnaire, for example, ambiguous wording. While the participants did point out a few ambiguously worded items, a more noticeable trend in their comments was that the 8-point scale was difficult to work with. For example, several of the participants indicated that the difference between 7 (strongly agree) and 8 (very strongly agree), or the difference between 1 (very strongly disagree) and 2 (strongly disagree), on the Likert scale was hard to discern. They commented on the difficulty of having to grade small differences in abstract concepts such as their thought processes and levels of awareness (e.g., I am aware of my thinking when I write in English). Other participants reported that they drew a vertical line through the middle of the scale as a way to more clearly visualize the divide between negative and positive responses, and thus make it easier to grasp the degree of agreement that each point on the scale represented. Upon obtaining this feedback from the pilot study, additional research was conducted to find support for the use of Likert scales with a smaller number of response categories. Amid competing guidelines and suggestions with regard to the optimal number of response categories for self-report instruments, it was found that several studies, in fact, offered empirical support for scales with fewer than seven points. Lozano et al. (2008), for example, suggested using scales of between four and seven points. The researchers found that the benefits of increasing the number of response categories were few beyond seven, and that increasing the alternatives offered would likely introduce additional measurement error if participants had difficulty discerning between the response categories offered.
They concluded that "in practice, on determining the maximum number of response alternatives, it is advisable to complement the psychometric criterion with consideration of the particular characteristics of the sample in question" (p.78). Other researchers have echoed this sentiment, suggesting that instruments should be appropriate for participants' cognitive ability to discriminate between the scale points offered, in addition to the psychometric properties of the questionnaire itself (Andrich & Masters, 1988; Komorita & Graham, 1965; Krosnick, 1999). Weng (2003), more specifically, suggested that for participants whose cognitive ability approaches that of college students (as in the present study), "an odd-numbered, 7-point scale and an even-numbered, 6-point scale should be able to provide consistent and reliable participant responses" (p.969). Based on both the pilot study results and previous research in educational measurement, it was concluded that despite the argued benefits of using a scale with a large number of response categories for better reliability, there was a strong rationale for using an even-numbered, 6-point scale in the present study. Writing task. Two different but comparable essay prompts were used for the writing tasks in this study. The writing tasks required participants to engage in logical reasoning and write an argumentative essay in response to the following prompts (see Appendices B and C): Essay prompt 1. Your university has just announced that it is considering several different businesses to move into an empty space in the Student Union building on campus. The university has invited students to give their opinions in the school newspaper about which business they would like to have in the building. The university will give priority to the business that meets as many of the following requirements as possible: 1) The business is environmentally friendly; 2) The business helps improve students' academic performance; 3) The business will be able to fit into a small space (30 m²). Choose one of the following businesses and write a short article for the school newspaper, arguing that it is the best choice for the new space in the union. Essay prompt 2. Your university is developing an exchange student program. Exchange students will have the chance to study for 9 months at a university in the United States. Your university is currently considering several different housing arrangements for these exchange students. Read the following descriptions of each housing arrangement, and choose the one you think would be best. Then, write a short essay for the university newspaper, arguing that it is the best choice for exchange students. The first writing prompt was used in the first phase of the study to generate questionnaire items for the self-report instrument. A subset of the participants (n = 8) were asked to think aloud while writing in response to this first essay prompt. The participants' utterances were then transcribed and analyzed to inform the compilation of questionnaire items. The same writing prompt was also used in the main part of the study, during which the larger group of participants (n = 164) wrote essays and then responded to the metacognitive knowledge questionnaire. The second writing prompt was used in the last phase of the study. At this stage of the study, students (n = 8) participated in a stimulated recall interview after writing an essay based on the second writing prompt and responding to a metacognitive knowledge questionnaire.
Student utterances from this stimulated recall were transcribed and analyzed, this time to triangulate with their responses to the questionnaire. The prompts included several requirements students needed to consider in order to write a successful argumentative essay, which prompted them to make use of their logical thinking skills. The rationale for adding an additional dimension to the writing task (i.e., decision-making based on a given set of criteria) was to add to the complexity (cf. Robinson, 2011) and perceived novelty of the writing task. This is in accordance with Flavell's (1979) claim that metacognitive knowledge is more likely accessed in "…situations that stimulate a lot of careful, highly conscious thinking: in a job or school task that expressly demands that kind of thinking; [and] in novel roles or situations" (p. 908). By having students take additional requirements into account when writing their essays, the writing prompts in this study offered the complexity and novelty needed to stimulate participants' metacognitive thought processes. The writing tasks used in this study were somewhat similar to the writing tasks used by Kuiken and Vedder (2011), who examined the effects of task complexity on L2 writing. In their study, students had to write a letter to a friend giving advice about selecting a destination for a holiday trip. In order to write the letter, the participants had to choose and make an argument for a vacation destination that fulfilled several requirements (e.g., access to a garden, how quiet it is, availability of physical exercise facilities). The major difference between the writing tasks in the present study and the ones in Kuiken and Vedder (2011) was the genre of writing that the participants had to produce. Argumentative essay writing seemed to be a good choice for the writing tasks in the present study because it is a genre of writing that Korean EFL learners frequently perform in academic settings. It also seemed that using an argumentative writing task would broaden the pedagogical implications of the study results, as argumentative writing tasks seem to be a more prevalent genre of writing across different L2 writing instructional contexts compared to letter writing. Language proficiency test. DIALANG (www.dialang.org) is a computer-based, diagnostic language assessment that provides tests of listening, reading, writing, grammar, and vocabulary skills in fourteen European languages as target languages. It was developed by several higher education institutes in Europe, and can be used by both language learners and teachers for self-assessment and diagnostic purposes. The test is composed of an optional placement test, an optional self-assessment test, and five language skill tests (i.e., listening, reading, writing, grammar, and vocabulary tests). The optional placement test is used to determine which level of the DIALANG test the learner should take among the three different versions of the test: easy, intermediate, and difficult. If learners choose to skip the placement test, they automatically receive the set of questions for intermediate learners. The learners can also complete a self-assessment test of their language skills, but this is also optional. Each language skill test comprises 30 items and takes approximately 20 to 30 minutes to complete. Test takers are free to choose how many tests they would like to take, and there is no time limit for the tests.
After the test takers complete a language skill test, DIALANG provides them with feedback regarding their level based on the Common European Framework scale (A1, A2, B1, B2, C1, or C2). Level A on this scale indicates basic proficiency, B indicates independent, and C indicates proficient. In the present study, the English grammar test was used to assess participants’ language proficiency level. While it would have been ideal to administer all sections of the DIALANG including tests of listening, reading, writing, grammar, and vocabulary skills, with approximately 20 to 30 minutes allotted for administering each section, this testing process would have taken over 2 hours which would be unrealistic for conducting research in a classroom setting. The 38 grammar section was chosen as an index for participants’ language proficiency because it was judged that it targeted the language skills that were most representative of participants’ English proficiency in general, and also those used in L2 writing. The 30 items in the grammar test tapped into a range of students’ grammatical knowledge including sentence structure, vocabulary, and morphology. Although DIALANG is a computer-based test administered online, paper-and-pencil versions of the test were administered because the present study was carried out in classroom contexts without computer access. The computer-based and paper-and-pencil based administration of the test are considered comparable (Scheuermann & Pereira, 2008). The use of DIALANG as a language proficiency test in the present study enables easy interpretation and comparison of the participants’ English proficiency, because DIALANG was developed in accordance with the CEFR which is widely used in other studies and recognized across different contexts. The DIALANG test was also useful in that it provides a description of participants’ proficiency levels according to the CEFR. Meanwhile, for a more fine-grained comparison of the participants’ language proficiency within the study, the raw scores (a total of 30) were used, because each level of proficiency (e.g., A1, A2, B1) has a broad band-width, and it would be difficult to make more subtle distinctions between participants who fell under the same CEFR proficiency level, but differed in their level of proficiency measured by their raw scores. Language learning background questionnaire. In addition to participants’ results from the DIALANG test, their answers to a language learning background questionnaire (see Appendix D, Korean translation Appendix E) provided valuable information about their English proficiency and relevant language learning experiences. Students’ self-reported standardized test scores (TOEFL and/or TOEIC scores) were used as a secondary source of information about 39 students’ English proficiency. In addition, students provided information on other factors that may have a bearing on their metacognitive knowledge in L2 writing, such as their language backgrounds (e.g., any additional languages spoken), self-perceived English proficiency, goals for language learning, length of formal and informal English study, previous writing instruction, and possible study abroad experiences. Semi-structured interview guides. An interview guide (Appendix F) was also prepared to gather more in-depth information about participants with respect to their language backgrounds, previous language learning experiences, writing instruction, self-perception as a writer (in both their L1 and L2), and writing processes. 
The first part of the interview guide consisted of more general questions asking participants about their views on the qualities of a good writer, as well as their experiences with L1 and L2 writing. The second part of the interview guide included questions that probed specific aspects of each participant's writing process, such as their brainstorming techniques, revision strategies, word choice, problem-solving, monitoring, and self-evaluation throughout the writing process. All of the questions were in the participants' L1, Korean. Procedure Data was collected over the course of 12 weeks. The first part of the study involved the development of a self-report instrument to measure metacognitive knowledge. This entailed the following four phases: 1) developing an item pool; 2) piloting the instrument; 3) administering the revised instrument; and 4) evaluating the instrument. These four phases followed the guidelines by Dörnyei (2003), who outlined procedures to ensure high levels of reliability for survey instruments. The second part of the study involved collecting metacognitive knowledge scores using the questionnaire developed in the first part of the study, and collecting English proficiency and L2 writing proficiency data in order to determine the relationship between metacognitive knowledge, English proficiency, and L2 writing proficiency. The study procedure is summarized and presented in chronological order in Table 1.

Table 1
Study Procedure

Phase 1: Developing an item pool (January–April)
- Literature review (materials: questionnaire items from previous studies)
- Pilot study with IEP students (n = 39) (materials: preliminary questionnaire)
- Feedback from graduate seminar (n = 10) (materials: preliminary questionnaire)
- Think-aloud protocol pilot study (n = 2) (materials: preliminary argumentative essay writing prompt)
- Think-aloud protocols and semi-structured interviews with Korean EFL students (n = 8) (materials: revised argumentative essay writing prompt)

Phase 2: Piloting the questionnaire (April–May)
- Pilot study 1 (n = 98) (materials: revised questionnaire 1, English proficiency test, essay)
- Pilot study 2 (n = 90) (materials: revised questionnaire 2, English proficiency test, essay)

Phase 3: Administering the revised questionnaire (June, weeks 2–4)
- Administration of revised questionnaire (n = 164) (materials: revised questionnaire 3, English proficiency test, essay)

Phase 4: Evaluating the final questionnaire (July, week 3)
- Stimulated recall and semi-structured interviews (n = 8) (materials: argumentative essay writing prompt)

Developing an item pool. The first step to establishing a definition and developing a measure for metacognitive knowledge is generating and refining a pool of test items that potentially correspond to the underlying components of the construct. With this goal in mind, the initial item pool (see Appendix G) for the present study was compiled via three methods: an extensive review of relevant theoretical literature, a compilation of test items utilized in previous studies on metacognitive knowledge and strategies (see Appendix H), and think-aloud protocols and semi-structured interviews with a small group of participants after their completion of a writing task. Literature review.
First, a literature review was carried out on previous studies in mainly three areas: the construct of metacognitive knowledge, the development of self-report instruments to measure metacognitive knowledge, and the role of metacognitive knowledge in language learning and L2 writing. A review of the theoretical literature on metacognitive knowledge and strategies helped establish a working definition of metacognitive knowledge in L2 writing, and provided a theoretical foundation for predicting potential factors underlying the construct of metacognitive knowledge. At later stages in the study, these theoretical predictions were compared against empirical data collected via the instrument developed in this study. Meanwhile, previous literature (e.g., Dörnyei, 2003; Tseng et al., 2006) that discussed issues regarding the development of self-report instruments informed the process of developing a valid and reliable questionnaire. The few extant empirical studies (e.g., Manchón, 2009; Tseng et al., 2006; Vandergrift et al., 2006) that examined the relationship between metacognitive knowledge (or metacognitive strategies) and language learning using self-report instruments were especially helpful as the metacognitive knowledge questionnaire items reported in these studies served as a starting point for compiling a pool of items to adapt and pilot in the present study. Preliminary instrument development. Once a preliminary draft of the metacognitive selfreport instrument (in English) was completed via the three methods described above, it was piloted with 39 ESL students enrolled in an Intensive English Program (IEP) and 10 graduate 42 students enrolled in a graduate seminar on Individual Differences in SLA. The main goal of piloting the preliminary draft of the self-report instrument at this stage was to obtain general feedback on the appropriateness of the items, level of comfort with the Likert scale, and the format of the self-report instrument. Think-aloud protocols and semi-structured interview. Additional items were generated from think aloud protocols with a small group of Korean EFL learners (n = 8). These participants were similar in terms of background and English proficiency to the larger group of participants (n = 164) that the final questionnaire was administered to, and represented a range of English proficiency levels. The think-aloud protocols and interviews were carried out individually with each participant in a quiet setting. First, participants read and signed the consent form for the study, and were given time to ask any questions they had about the study procedures. Then, they were briefly trained on how to produce a think-aloud protocol. Participants were then asked to think aloud while completing a 40-minute timed English argumentative writing task. The prompt for the argumentative writing task had been piloted with two Korean ESL learners to anticipate any problems in conducting the think-aloud protocols. Participants were given a sheet of scratch paper to use if needed, and were encouraged to use the allotted time to plan, draft, and revise their essay. The entire writing process was recorded on video tape. After participants completed the think-aloud protocols, semi-structured interviews were conducted in order to gain insight into certain components of participants’ metacognitive knowledge such as person knowledge that might not have been reflected as well in participants’ think-aloud protocols. 
The interview questions included both general and specific questions about participants' writing processes, perceptions of self as a writer, and previous writing experiences, for example, participants' perceived strengths and weaknesses when writing in their L1 and L2, and previous instruction in L1 and L2 writing. The following is an example of how participants' verbalizations produced via think-aloud protocols and semi-structured interviews were incorporated into the metacognitive knowledge questionnaire. A side-by-side illustration of a participant's argumentative essay and corresponding verbalizations is shown in Table 2. After writing the first sentence of paragraph 3, which discussed possible reasons for supporting a fast food restaurant for the new space, the participant debates whether she should continue to develop paragraph 3 or quickly move on to the next paragraph instead. As she ponders this decision, she recalls what her writing instructor, Professor Williams (pseudonym), had said in class about the importance of having well-developed paragraphs in order to get high scores on essays, and decides to continue with paragraph 3.

Table 2
Participant Essay and Think-Aloud Protocol

Participant's essay (paragraph 3): "Second, if a fast food restaurant opens at the new space, it can help students having a meal in short break time… …In addition, the fast food restaurant can make big profits. It seems quite good but there is a problem with small space. People who eat hamburgers want to eat their meal inside of the restaurant. Hamburgers are pretty smelly, and eating smelly food in a class I not an eticket even in break time."

Verbalization during think-aloud protocol: "Should I write one more reason? Time…time…Yeah, but Professor Williams (pseudonym) said having well-developed points gets better scores than just having a lot of points…let's just finish this." (MHR, April 19, 2012, think-aloud)

Participant's essay (paragraph 4): "The reason is why I recommand famous coffee franchise is…"

Based on the participant's verbalizations regarding awareness of evaluation criteria, questionnaire item 40 was developed and included in the metacognitive knowledge questionnaire: "I do not give much thought to the evaluation criteria when working on an English writing task." Similarly, participants' utterances during semi-structured interviews were also examined for meaningful themes that were not already reflected in the pool of questionnaire items. For example, awareness of instruction and feedback emerged as an important theme during the semi-structured interviews. Several participants talked about the different types of corrective feedback they had access to at their university. A participant described the general procedure for receiving feedback on writing: "In class, we read about two or three essays together and then repeat and memorize the different expressions there. Then we go home, write an essay, turn it in and get feedback from the instructor" (KMS, April 19, 2012, semi-structured interview). Meanwhile, according to another participant, students could also receive feedback on their writing from the university's online writing center: "They take a video of the essay and record as they talk about what we got wrong in the essay" (MHR, April 19, 2012, semi-structured interview).
Participants expressed their preferences and beliefs regarding the efficacy of different types of feedback as well: When I send my paper to the tutor…he tells me that the verb is wrong and that I should think about what the error is very carefully, but I have no idea why the verb is wrong so even if I do think about it very carefully, I have no idea and I might get it wrong again! [I felt that] I wasn't getting explicit feedback so I didn't use that service very much. (AYH, April 20, 2012, semi-structured interview) Based on participants' comments on the topic of corrective feedback, questionnaire item 39, "After writing an English essay, I think about effective ways to incorporate my teacher's feedback into my writing," was developed and included in the metacognitive knowledge questionnaire. Piloting the questionnaire. The first version of the questionnaire (see Appendix I), created through the process described above, included 64 items and was distributed to a total of 90 Korean EFL learners. In May, the researcher visited English classes at two universities and explained the overall goal of the research project and the procedures of the tasks that participants would complete. Students who agreed to participate in the study received a packet including 1) a language learning background survey, 2) an English proficiency test (DIALANG), 3) an argumentative essay prompt, and 4) a metacognitive knowledge questionnaire. Participants first wrote an argumentative essay in response to the writing prompt they received. They had 40 minutes to plan, write, and revise an essay of approximately 250 to 300 words, and were provided with scratch paper to use if needed when planning their essay. Directly after the participants completed their essays, they were asked to respond to the metacognitive knowledge questionnaire. There was no time limit for completing the questionnaire, and students were asked to take note of items that were difficult to understand or otherwise worded inappropriately. The questionnaire took approximately 20 minutes to complete. Then, participants filled out the language background survey, which asked them questions about their previous language learning experiences (e.g., How many years of English instruction have you had?). As the final task, participants took the grammatical structure section of the DIALANG test in paper-and-pencil format. This test took them approximately 20 minutes to complete. In total, the study procedure took approximately 100 minutes to complete.
To ensure that the instructors had a good understanding of the purpose of the tasks and the study procedure, the researcher had a meeting with each of the instructors before the administration of the tasks, during which the instructors asked questions about specific study procedures and instructions for students. Another meeting was held with the instructors after they had administered the questionnaire and writing task. At this time, the researcher received the completed questionnaires, essays, and tests from the instructors, as well as any feedback or remaining questions the instructors had. Evaluating the final questionnaire. The final stage of developing a self-report instrument involved evaluating the reliability of the overall instrument as well as the reliability of its subscales. In order to do this, the data collected (n = 164) was first subjected to a factor analysis to identify the underlying components of metacognitive knowledge, and to eliminate items that did not load onto any of these factors. This process reduced the number of 47 questionnaire items from 45 to 31. Each of the subscales was examined for their internal reliability, and only the items that contributed to high subscale reliability were retained. Then, the reliability for each of the subscales was compared to the standard reliability criteria recommended in SLA literature. Stimulated recalls and semi-structured interviews. In addition to checking the reliability of the instrument against standard reliability indices, data from stimulated recall and semi-structured interviews were also analyzed to discern the validity of the metacognitive knowledge components that were identified by the self-report instrument. A small number of students (n = 8) were selected to participate in a more in-depth study of their use of metacognitive knowledge in the context of L2 writing using stimulated recalls and semistructured interviews. The participants were asked to write a short essay of approximately 250 words in length based on an essay prompt (see Appendix C) that was comparable to the previous writing task. Participants had 40 minutes to complete the planning, writing, and revision of their essays. Then the researcher conducted stimulated recall interviews with each participant. During these interviews (guided by instructions from Gass & Mackey, 2000, see Appendix L), the researcher sat with each participant and replayed the recorded video of the participant’s writing process. The researcher occasionally stopped the video and prompted the participant to recall what he or she was thinking at that particular moment. The participants were also encouraged to stop the video at any time when they wanted to comment on any part of the recording. The purpose of conducting stimulated recall interviews with participants was to “explore …[the] learner’s thought processes or strategies …[and] prompt the learner to recall and report thoughts that she or he had” (Mackey & Gass, 2000, p.78) during the writing task. 48 This enables insight into when and how participants accessed their metacognitive knowledge during the writing task, and how this affected the planning, evaluation, and monitoring of their writing. After the participants finished the stimulated recall interview, they filled out a brief questionnaire about their language learning background and proficiency. 
Finally, a semi-structured interview with the researcher followed to clarify and/or probe other issues of interest that had come up during the stimulated recall interviews and the rest of the study procedure. The interview questions were based on the questions used at the beginning stage of the study, when the questionnaire was first constructed. Information collected through this stimulated recall and interview procedure helped triangulate the large-scale quantitative data collected through the administration of the questionnaire. In summary, over the course of 12 weeks, a self-report instrument of metacognitive knowledge in L2 writing was developed through the following phases: developing an item pool, piloting the instrument, administering the revised instrument, and evaluating the instrument. Once the final version of the instrument was completed through this process, it was evaluated based on standard reliability criteria outlined in the SLA literature, and triangulated with introspective data collected through stimulated recall and semi-structured interviews. Quantitative Data Analysis Data entry and preliminary analysis. Participants' performance on the L2 writing task, the metacognitive knowledge questionnaire, the language learning background questionnaire, and the L2 proficiency test was scored and entered in SPSS for further descriptive and inferential statistical analyses. L2 argumentative essay writing task rating. A writing rubric used by the TOEFL writing test (see Appendix M) was selected for the rating of participants' argumentative essays because the writing tasks in this study were similar to the tasks in TOEFL writing tests. This rubric was a holistic scale ranging from 1 to 6, with 1 indicating the lowest writing proficiency and 6 the highest. Each score on the rubric represented a certain level of English writing proficiency, which was described in the rubric. Half-points (e.g., 2.5, 3.5, 4.5) were allowed. The researcher and another rater, who had a TESOL degree and many years of ESL teaching experience, participated in the rating process. Before rating the essays, the raters met to conduct a rater norming session, during which they familiarized themselves with the writing task and instructions used in the study, as well as the rating scale. The norming process continued until an acceptable level of inter-rater reliability was reached. Then, all of the essays (n = 164) were rated by both raters. After all the essays were rated, the inter-rater reliability between the raters was calculated via Cohen's kappa (.80). Exploratory factor analysis. The main statistic used in the study was exploratory factor analysis. According to Field (2003), factor analysis searches for a smaller set of latent factors to represent the larger set of variables and enables researchers to understand the internal structure of the data set. Factor analysis is commonly used in questionnaire-based research, and can help examine whether or not all of the questions on the questionnaire relate to, or measure, the same construct. It can also be used to validate a questionnaire. Factor analysis was conducted at two stages in the present study: during the pilot administration of the self-report instrument, and during the administration of the revised self-report instrument for the main part of the study.
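To make this factor-analytic workflow concrete, the sketch below shows how Likert-scale responses could be submitted to an exploratory factor analysis and how weakly loading or cross-loading items might be flagged for removal. It is a minimal illustration assuming the third-party Python package factor_analyzer and synthetic data; the study itself ran these analyses in SPSS, and the .40 cutoff used here is only a common convention, not necessarily the one adopted in the study.

```python
# Sketch: exploratory factor analysis on Likert-scale questionnaire responses.
# Assumes the third-party `factor_analyzer` package and synthetic data; the
# study ran the equivalent analysis in SPSS, so this only approximates it.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
# Synthetic stand-in for 164 participants responding to 45 six-point items.
responses = pd.DataFrame(
    rng.integers(1, 7, size=(164, 45)),
    columns=[f"item_{i + 1}" for i in range(45)],
)

# Principal-component-style extraction with varimax rotation, six factors.
fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

# Flag items whose highest absolute loading is weak (below an illustrative
# .40 cutoff) or that load comparably on more than one factor.
cutoff = 0.40
weak = loadings.abs().max(axis=1) < cutoff
cross = (loadings.abs() > cutoff).sum(axis=1) > 1
print("Candidates for removal:", loadings.index[weak | cross].tolist())
```

Retained items within each factor could then be examined for internal consistency (e.g., Cronbach's alpha), as the following paragraphs describe.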
The purpose of conducting a factor analysis at both of these stages was to identify the components (factors) of the construct of metacognitive knowledge, and to eliminate questionnaire items that did not load onto any major factor. Before proceeding to the main analysis, a couple of other statistical procedures needed to be carried out, in addition to testing the assumptions of factor analysis (e.g., multicollinearity, singularity), in order to ensure that the data set was appropriate for conducting a factor analysis. All statistical analyses in this study were done with SPSS ver. 19. The resulting factors were named based on common themes found in the questionnaire items that loaded onto each component. Previous literature on the definition of metacognitive knowledge (e.g., Flavell, 1979; Wenden, 1998) was also consulted in this process. Subsequently, an internal consistency reliability analysis was performed to ensure the reliability of each subscale of the instrument. Metacognitive knowledge test scores. The factors that were obtained from the final exploratory factor analysis were seen as components of metacognitive knowledge in L2 writing. Participants' questionnaire responses to each of these components of metacognitive knowledge were treated as factor scores and entered into SPSS. L2 proficiency test. Participants' responses to the 30 items in the grammatical structure section of the DIALANG test were scored against the answer key. Participants' raw scores were used as an index of their English proficiency and were entered in SPSS. Multiple regression analysis. Three multiple regressions were carried out in order to determine the relationships between the variables: 1) metacognitive knowledge factors and L2 writing proficiency; 2) metacognitive knowledge factors and L2 proficiency; and 3) metacognitive knowledge factors, L2 proficiency, and L2 writing proficiency. Multicollinearity. Before running the regression analysis, the researcher checked whether the data set met the necessary assumptions for regression analysis. This process included, among other procedures, checking for collinearity between the predictor variables. A high correlation among the predictor variables could pose problems in the regression model because it could cause a good predictor variable to be found statistically non-significant and, consequently, rejected from the regression model. Entry method. The forced entry method was selected for the first two regression analyses. The forced entry method, rather than the stepwise entry method, is recommended for theory testing because the stepwise entry method can be influenced by random variation in the data (Studenmund & Cassidy, 1987, cited in Field, 2005). Meanwhile, a hierarchical entry method was used for the third analysis because L2 proficiency has been found in previous research to have a relationship with L2 writing performance. Qualitative Data Analysis Stimulated recall and semi-structured interviews. The recorded stimulated recall interviews and the semi-structured interviews were transcribed. Then, these transcripts were coded to identify themes that emerged in relation to participants' use of metacognitive knowledge while they were engaged in the writing task. This data was interpreted in relation to the components of metacognitive knowledge that were identified through the questionnaire-based, quantitative part of the study.
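Before turning to the results, the regression procedure described above can be sketched roughly as follows. The example assumes the statsmodels and pandas packages, a hypothetical data file, and placeholder variable names; the actual analyses were run in SPSS, so this only approximates the hierarchical entry and collinearity check.

```python
# Sketch: collinearity check (VIF) and hierarchical regression predicting
# L2 writing scores from L2 proficiency and MK factor scores.
# Assumes statsmodels/pandas, a hypothetical "participants.csv" file, and
# placeholder column names; the study ran these analyses in SPSS.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("participants.csv")  # hypothetical file, one row per participant
predictors = ["l2_proficiency"] + [f"mk_factor{i}" for i in range(1, 7)]
X = sm.add_constant(df[predictors])

# Collinearity screen: variance inflation factor for each predictor.
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 2))

# Step 1 (hierarchical entry): L2 proficiency entered first.
step1 = sm.OLS(df["writing_score"], sm.add_constant(df[["l2_proficiency"]])).fit()
# Step 2: metacognitive knowledge factors added on top of L2 proficiency.
step2 = sm.OLS(df["writing_score"], X).fit()

print("R-squared, step 1:", round(step1.rsquared, 3))
print("R-squared, step 2:", round(step2.rsquared, 3))
print("R-squared change:", round(step2.rsquared - step1.rsquared, 3))
```

The change in R-squared between the two steps corresponds to the contribution of the metacognitive knowledge factors over and above L2 proficiency.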
First, results obtained from the administration of the first and second versions of the metacognitive knowledge self-report instrument will be reported. These results will shed light on the first and second research questions (RQ1: What are the underlying components of metacognitive knowledge in L2 writing? RQ2: How can L2 learners' metacognitive knowledge of writing be reliably assessed?). Next, participants' scores on the metacognitive knowledge instrument, English proficiency test, and English essay task will be reported. Results from analyzing these data will answer the fourth research question on the relationship between participants' metacognitive knowledge, L2 proficiency, and L2 writing proficiency.

Principal Component Analysis Assumptions

A principal component exploratory factor analysis was used for the analysis of the first version of the metacognitive knowledge self-report instrument (see Appendix I). The primary purpose of this analysis is data reduction: to “derive a relatively small number of components that can account for the variability found in a relatively large number of measures” (1998, p.3). In conducting a Principal Component Analysis (PCA), examining whether the data set meets the assumptions of the statistical analysis is especially important, because factor analysis is a complex procedure that involves subjective decisions, for example, how many factors the researcher decides to retain. These decisions later affect the results and interpretation of the analysis. Therefore, in this section, several assumptions that must be met in order to conduct a factor analysis will be examined in detail for the data set in this study. The assumptions that are tested include: data measured at the interval level, roughly normal distribution of the data, adequate sample size for factor analysis, and no extreme multicollinearity or singularity among variables. Following the testing of these assumptions, criteria for deciding on the number of factors to be retained and for choosing the rotation method are reported.

Interval data. Both PCA and Pearson's correlation require that the data submitted to the analysis be interval data in order to provide an accurate measure of the relationship between two variables. The data in the current study met this assumption because they were scores from Likert scales, which are assumed to be interval.

Normal distribution. Next, the data were tested to see if they met the assumption of a roughly normal distribution. Both the Kolmogorov-Smirnov and the Shapiro-Wilk tests indicated that the data were not normally distributed (p < .001). This may be because the sample size was too small for the data to be normally distributed, or because of the nature of the data set. Because the data came from a questionnaire using Likert scales, in which scores indicate how much participants agree or disagree with a certain statement, it is possible that the distribution of scores would be skewed if participants shared a common tendency to answer one way or the other. Because of the non-normally distributed data, the results of the analysis should be interpreted with caution.

Sample size. Correlation coefficients tend to fluctuate more in small samples, and therefore, having a large sample size is important for conducting factor analyses (Field, 2000).
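Returning briefly to the distributional screening just described: the tests were run in SPSS, but the same item-level checks could be scripted as in the minimal Python sketch below, assuming SciPy and pandas and a hypothetical data file.

import pandas as pd
from scipy import stats

responses = pd.read_csv("questionnaire_pilot.csv")   # hypothetical file of Likert-scale responses

for item in responses.columns:
    sw_stat, sw_p = stats.shapiro(responses[item])                       # Shapiro-Wilk test
    ks_stat, ks_p = stats.kstest(stats.zscore(responses[item]), "norm")  # K-S test against a standard normal
    if sw_p < .001 or ks_p < .001:
        print(f"{item}: distribution departs from normality (Shapiro-Wilk p = {sw_p:.3f})")

Criteria for judging the adequacy of the sample size are discussed next.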
There are various criteria for how large the sample size should be, for example, 10 times as many participants as variables (Nunnally, 1978), or 5 to 10 participants per variable up to 300 (Kass & Tinsley, 1979). The sample size in the current data set, 89, did not meet the criteria stated in previous research on the issue. When sampling adequacy was tested with the Kaiser-Meyer-Olkin (KMO) measure, the obtained value was .65, which according to Kaiser (1974) falls in the mediocre range (.50 to .70) of sampling adequacy for factor analysis.

Multicollinearity and singularity. In order to avoid having variables that do not correlate with any other variables, or variables with extreme multicollinearity (i.e., variables that are very highly correlated) or singularity (i.e., variables that are perfectly correlated), the intercorrelations between variables were examined using Pearson's correlation coefficient. An examination of the intercorrelations revealed that, overall, the variables did not yield very high correlations (i.e., none exceeded r = .90). The highest correlation found was r = .57. Multicollinearity was further examined by looking at the determinant of the R-matrix in the main analysis. The determinant of the R-matrix was greater than .00001, which indicated that there were no problems in terms of multicollinearity. While problems with multicollinearity were not found, an examination of the correlation matrix revealed that a few variables had very low correlations with certain other variables. For example, item 6, “When writing an English argumentative essay, I think about what I should do to make it persuasive,” had a very weak positive correlation (r = .002) with item 14, “Before I begin an English essay assignment, I think about whether I know enough English to successfully complete the task.” In order to run a factor analysis, variables should correlate fairly well, but not too highly, and items that do not correlate with any of the other variables should be excluded (Field, 2005). While items 6 and 14 did not correlate well with each other, they did have stronger correlations with several other items in the questionnaire, and were therefore retained. Other items that had low correlations with a few other items were also retained for the same reason. Bartlett's test of sphericity helped confirm this decision. The test result was significant, p < .001, indicating that the correlation matrix was not an identity matrix, meaning that there was some relationship among the variables and that a factor analysis was appropriate for the data. In addition, an examination of the anti-image correlation matrix indicated that sampling adequacy was acceptable for the individual variables: the diagonal elements in the matrix were mostly greater than .50, and the off-diagonal elements were close to zero, indicating a good model. The reproduced correlation matrix was also examined to determine the accuracy of the model. There were 752 (37%) non-redundant residuals with absolute values greater than .05 in the reproduced correlation matrix, which is below the 50% threshold suggested by Field (2005) and indicates that the differences between the observed correlation coefficients and those predicted by the model were small.

Factor Extraction

In this section, various criteria used to extract factors are examined, and the rationale for the criteria used in the present study is reported.
According to Field (2005), there is debate with regard to what criteria to use when deciding how many factors to retain, but it is important to consider the purpose of running the factor analysis when deciding on criteria for factor extraction. Jolliffe (1972, 1986) suggested that all factors with eigenvalues over .70 should be retained. While this may be helpful for the present study, in that the study is exploratory and seeks to investigate various underlying constructs in the questionnaire, it may be too liberal, extracting more factors than can be meaningfully interpreted. Meanwhile, Kaiser (1960) recommended retaining factors with eigenvalues over 1. This criterion is accurate when the sample size is larger than 250 and the resulting communalities average .60 (Field, 2005). The data in this study did not completely meet Kaiser's criteria in that the sample size was smaller than 250. The average communality was approximately .60, but because of the smaller sample size, it would have been preferable for the data to have a higher average communality. Yet another criterion for factor extraction is using the point of inflection in the curve of a scree plot as a cut-off point for selecting factors. According to Stevens (1992), when the sample size is larger than 200, the scree plot is a relatively reliable criterion for factor extraction. With the above criteria taken into consideration, three main criteria of factor extraction were used in this study: Kaiser's eigenvalue over 1, the scree plot, and the interpretability of the factor solution. In addition to these main criteria, the total amount of variance accounted for by the extracted factors and the communalities after extraction were also examined. Using several criteria for deciding how many factors to retain allowed the researcher to take widely accepted guidelines into consideration while also finding the best factor solution for the purpose of the present study. Although the current data set did not exactly meet the prerequisites for using these criteria, it was deemed relatively close to meeting them.

Factor Rotation

Factor rotation was carried out to improve the interpretability of the factors. Field (2005) noted that constructs examined through factor analyses in the social sciences and in education are rarely independent. The underlying factors in this study were assumed to be correlated as well, and therefore an oblique rotation method, direct oblimin, was selected. The cut-off point for assigning variables to factors was an absolute value of .40, following Stevens's (1992) recommendation; a loading of this size corresponds to a factor accounting for roughly 16% of the variance in a variable, which is considered important.

PCA Results

The Kaiser criterion suggested a factor solution that included 19 factors accounting for 77.87% of the variance. Meanwhile, the scree plot (see Figure 3) was somewhat more difficult to interpret; the curve in the scree plot leveled off at multiple points, none of which suggested factor solutions that were congruent with those suggested by the Kaiser criterion. The first point at which the curve in the scree plot leveled off was 3, suggesting a two-factor solution. However, the two factors only accounted for 31.58% of the variance, and the communalities after extraction were also very low, averaging approximately .30. The two-factor solution was also difficult to interpret because the items fell under two broad categories.
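As an illustrative aside (the study's analyses were run in SPSS ver. 19), the extraction and rotation choices described in this section could be scripted with the third-party factor_analyzer package; this is a sketch under assumptions, and the file name, column names, and candidate factor number below are hypothetical.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("questionnaire_pilot.csv")   # hypothetical file of item responses

# Kaiser (eigenvalues > 1) and Jolliffe (eigenvalues > .70) counts from the correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(responses.corr().values))[::-1]
print((eigenvalues > 1).sum(), (eigenvalues > 0.7).sum())

# Principal components with direct oblimin rotation; n_factors is set to whichever
# candidate solution is being inspected against the scree plot and interpretability
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="oblimin")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings[loadings.abs() >= .40].dropna(how="all"))   # items loading at |.40| or above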
The other points of inflection in the scree plot were more difficult to identify, as they were clustered together and the natural break point in the data was not as distinct as point 3. Following Costello and Osborne's (2005) guidelines for addressing this issue, factor solutions suggested by points 8 and 11 were examined, as well as the factors above and below the inflection points (i.e., 7 and 9, 10 and 12). Upon examining the different potential factor solutions based on strong item loadings, few item cross-loadings, and the number of items per factor, a nine-factor solution was selected.

Figure 3. Scree plot (eigenvalues by component number) for the first version of the self-report instrument.

Table 3 shows the eigenvalues of the nine factors that were extracted, and also the percentage of variance each of the factors accounted for. The eigenvalues for the nine factors were mostly over 2, and in total, the factors explained 57.66% of the variance.

Table 3
Extracted Components and Explained Variance

Component   Eigenvalue   % of Variance   Cumulative %
1           16.34        25.53           25.53
2            3.87         6.05           31.58
3            2.82         4.41           35.99
4            2.78         4.34           40.33
5            2.50         3.91           44.24
6            2.31         3.60           47.84
7            2.20         3.44           51.28
8            2.10         3.26           54.54
9            1.99         3.12           57.66

Factor Names

The items that loaded onto each of the nine factors were analyzed thematically in order to identify the potential constructs each factor represented. At this phase of the study, however, the purpose of performing the PCA was more to reduce the number of items in the first version of the questionnaire than to clearly identify the components of metacognitive knowledge. Therefore, while the themes of the nine factors and the questionnaire items that loaded onto those factors are reported in Table 4, they will not be discussed in as much detail as the factors in the final version of the questionnaire.

Table 4
Extracted Metacognitive Knowledge Factors

Factor   Theme                                      Items
1        Strategy (planning, monitoring)            25, 40, 26, 22, 36
2        Discourse (textual cohesion)               35, 10, 19, 3, 20
3        Idea development/generation                2, 59, 38, 29, 32
4        Person (aptitude)                          13, 42, 63, 7, 14
5        Person, strategy knowledge                 30, 39, 56, 44, 49, 31
6        Assessment, L2 proficiency, L2 learning    52, 34, 33, 54, 64
7        Task (background knowledge, purpose)       58, 27, 48, 60, 6, 11
8        Strategy (problem solving)                 18, 8, 62
9        Strategy (monitoring, problem solving)     17, 21, 55

The themes for the nine factors in this first version of the instrument mostly converged with those discussed in previous literature on metacognitive knowledge (i.e., person, task, strategy). Factors 4 and 5 included items that involved participants' person knowledge, while Factors 2 and 7 included items that were broadly related to participants' task knowledge. Factor 2 was later named discourse knowledge in order to distinguish it from Factor 7, which included items about participants' awareness of the requirements and constraints of carrying out a task. Factors 1, 8, and 9 were related to strategy knowledge, and there were overlaps in terms of the themes they covered. Factor 1 included items related to participants' awareness of planning and monitoring strategies. Factor 9 shared the theme of monitoring but also included problem solving, and Factor 8 mostly included items on problem solving. Two factors did not fall under a category of metacognitive knowledge that had been listed in previous literature.
Factor 3 had to do with participants' idea development and generation when writing in English, and Factor 6 was related to participants' awareness of their English learning and proficiency, and of assessment criteria. Broadly speaking, Factor 3 may fall under the category of strategy knowledge in that idea development and generation involves planning for writing. It is also somewhat relevant to metacognitive knowledge of task, in that idea development involves an awareness of task requirements and purpose. Factor 6 may fall under person knowledge in that the items are related to participants' awareness of their particular learning process, context, and proficiency.

PCA for Revised Version of Questionnaire

The 45 items that were retained from the procedure outlined above were once again subjected to a PCA. According to Field (2005), there is much debate with regard to the choice between PCA and EFA with questionnaire data. Theoretically, the main difference between PCA and EFA is that, whereas PCA is primarily a data reduction method, EFA is used to identify the latent variables in a data set. That is, in PCA, the components are based on the responses to the measures in the questionnaire, whereas in EFA, it is assumed that the responses to the measures are influenced by the latent factors. However, some scholars have argued that in practice, there is little difference between the factor solutions obtained by PCA and EFA (Guadagnoli & Velicer, 1998; Stevens, 2002, cited in Field, 2005). In this study, factor solutions obtained from both PCA and EFA were examined, and it was found that there was little difference between the two. While the number of factors extracted and the total variance explained were the same for the two types of analyses, PCA produced a factor solution that was clearer in terms of interpretation and the number of items per subscale, and was thus selected as the method of analysis.

PCA Assumptions

As with the first version of the questionnaire, the data collected from the administration of the revised, 45-item questionnaire (n = 164) were examined as to whether they met the assumptions for conducting a PCA. The data in the current study met the assumption of interval data, as the data came from Likert scale responses measured in intervals. The Kolmogorov-Smirnov and Shapiro-Wilk tests revealed that the data were not normally distributed (p < .001). The participants in this study tended to respond to the items positively (i.e., agree, strongly agree), and thus the distribution of the data was skewed to the left. Because of the non-normally distributed data, the results of the analysis should be interpreted with caution. The Kaiser-Meyer-Olkin (KMO) measure for the data was .80, which according to Kaiser (1974) indicates good (.70 to .80) sampling adequacy for factor analysis. Next, the Pearson's correlation coefficients were examined for multicollinearity and singularity. The intercorrelations of the variables were fairly high, but none were excessively high (r > .90), indicating no multicollinearity or singularity. The highest correlation found was r = .61. Multicollinearity was further examined by looking at the determinant of the R-matrix in the main analysis. The determinant of the R-matrix was greater than .00001, which indicated that there were no problems in terms of multicollinearity.
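These screening statistics were obtained from SPSS. Purely as an illustration of the logic, a minimal Python sketch of equivalent checks is shown below; it assumes the third-party factor_analyzer package and a hypothetical data file.

import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

responses = pd.read_csv("questionnaire_main.csv")   # hypothetical file of the 45-item responses

kmo_per_item, kmo_overall = calculate_kmo(responses)            # sampling adequacy (KMO)
chi_square, p_value = calculate_bartlett_sphericity(responses)  # tests whether R is an identity matrix
determinant = np.linalg.det(responses.corr().values)            # should exceed .00001

print(round(kmo_overall, 2), p_value < .001, determinant > 1e-5)

Values in line with those reported above (a KMO near .80, a significant Bartlett's test, and a determinant above .00001) would support proceeding with the PCA.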
Although there were a few items that showed very low correlations with certain variables (e.g., Item 3 “I am aware of the importance of reading the writing assignment carefully before writing an English essay.” And Item 27, “I am aware of the differences and similarities between texts in English and in Korean.”), they were retained because they had reasonable correlations with other variables in the instrument. Results from the Bartlett’s test of sphericity was also significant, p < .001, which indicated that the correlation matrix was not an identity matrix, and that the data met the assumptions for running a factor analysis. Factor Extraction and Rotation The same criteria used for extracting and retaining factors in the PCA for the first version of the instrument were used for the second version of the instrument as well. These included Kaiser’s eigenvalue over 1, the scree test, and interpretability of the factor solution. In addition, the total amount of variance explained by the factors and the communalities after the extraction were taken into consideration as well. As with the previous PCA, the factors measured by the instrument were assumed to be correlated as they were psychological constructs (i.e., types of metacognitive knowledge). Therefore, an oblique rotation method, direct oblimin, was selected 63 for factor rotation. The absolute value of .40 was selected as the cut-off point for retaining factors. Descriptive Statistics The descriptive statistics for the 45 items included in the questionnaire are shown in Table 5 in order to provide a more comprehensive picture of the data. Table 5 Questionnaire Item Descriptive Statistics Item M SD 1 When writing an English argumentative essay, I think about what I should do to make it persuasive. 4.48 1.03 2 4.68 .92 4.78 1.19 4.05 1.16 3.63 1.22 3.57 1.21 5.12 .89 8 Before I start an English essay writing task, I try to determine what the essay prompt requires. I do not feel that it is important to read the writing assignment carefully before writing an English essay. Before I start writing an English essay, I try to predict which parts will be easy and what parts will be difficult. When I write an English essay, I plan what I am going to do so I can use my time well. Before I begin an English essay assignment, I think about whether I know enough English to successfully complete the task. When writing an English essay, I carefully think of the things I want to write about on the topic. I am aware of the need to plan my course of action when writing in English. 4.48 1.11 9 I am aware of my level of aptitude for English writing. 2.88 1.22 10 When writing an English essay, I think about how to select and organize relevant information to answer the essay prompt. 11 Before writing an English essay, I ask myself how the essay prompt relates to what I already know. 12 Before I begin writing an English essay, I decide how important it is for me to successfully finish the writing task. 13 Before I start to write an English essay, I think about the order in which I will put my ideas. 14 When writing in English, I try to understand the purpose of the writing task. 4.60 .984 4.52 1.02 4.08 1.15 4.58 1.05 4.62 1.06 15 I am aware of effective ways to use a dictionary when writing in English. 3.63 1.21 3 4 5 6 7 64 Table 5 (cont’d) 16 I am aware of aspects of my English writing that need improvement. 3.85 1.24 17 I am aware of English writing strategies that are not effective. 
3.03 1.08 18 I think about whether the organization of my English essay is effective for the particular genre of writing. 19 I am aware of the function that words like furthermore, besides and moreover perform in the organization of an English essay. 20 I periodically check my English essay while writing it. 3.50 1.05 4.34 1.07 4.29 1.15 21 I am not aware of the difference in word order between an English sentence and in a Korean sentence. 22 When I write in English, I think about whether the English expressions I use sound natural to native speakers of English. 23 I am aware of the differences between spoken and written English. 4.06 1.16 4.05 1.20 3.23 1.21 24 When writing an English essay, I think about what the title tells the readers about the essay. 25 I am not sure what to do when I am writing an English essay and I can’t find the right word. 26 I check my accuracy as I progress through writing. 4.33 1.06 3.28 1.39 3.82 1.04 27 I am aware of the differences and similarities between texts in English and in Korean. 28 I think about the roles that words like but and however play in the organization of an essay. 29 When I write an English essay, I am aware of how much of the composition I have left to complete. 30 I am aware of alternative ways of saying what I mean when I have difficulty writing a sentence in English. 31 I think about whether the way I study English writing is effective. 3.41 1.16 4.73 .956 4.55 1.28 3.77 1.07 3.53 1.21 32 I am aware of effective ways to use Korean translation to solve the problem when I have difficulty writing an English sentence. 33 When writing an English essay, I think about the roles that words like first and second play in organizing the text. 34 I am not aware of which thinking technique or strategy to use when writing in English. 35 Before I hand in my English essay, I check my work. 3.35 1.18 4.74 .940 3.43 1.08 4.39 1.15 36 I try to learn from the mistakes I make in English writing. 4.61 .990 37 I am aware of my own thinking when writing in English. 3.59 1.33 38 I think about whether I am making progress in learning how to write in English. 3.99 1.14 65 Table 5 (cont’d) 39 After writing an English essay, I think about effective ways to incorporate my teacher’s feedback into my writing. 40 I do not think about the evaluation criteria when working on an English writing task. 4.23 1.08 4.30 1.06 41 I am aware of which type of feedback (teacher, peer, self) is most effective for improving my English writing. 42 I am aware of how my English writing proficiency compares to my Korean writing proficiency. 43 I think about ways to improve my English writing proficiency on my own. 44 After I finish an English essay writing task, I think about how I can do better the next time. 45 I set goals for myself in English writing. 4.01 1.18 4.25 1.17 3.88 4.11 1.08 1.00 4.04 1.07 PCA Results The PCA produced 13 factors with eigenvalues greater than 1, which accounted for 66.98% of the variance. Table 6 shows the thirteen factors that had eigenvalues greater than one, and the percentage of variance each of the factors accounted for. 
Table 6 Extracted Components and Explained Variance Initial Eigenvalues % of Component Total Variance Cumulative % 1 9.96 22.14 22.14 2 3.55 7.88 30.03 3 2.43 5.41 35.44 4 1.97 4.39 39.82 5 1.94 4.30 44.12 6 1.69 3.77 47.88 7 1.44 3.20 51.08 8 1.40 3.11 54.19 9 1.31 2.92 57.11 10 1.26 2.80 59.91 11 1.13 2.51 62.42 12 1.05 2.34 64.76 13 1.00 2.22 66.98 66 Extraction SSM Loadings % of Total Variance Cumulative % 9.96 22.14 22.14 3.55 7.88 30.03 2.43 5.41 35.44 1.97 4.39 39.82 1.94 4.30 44.12 1.69 3.77 47.88 Rotation SSM Loadings Total 6.79 5.90 2.70 5.13 3.12 3.75 Meanwhile, the scree plot (see Figure 4) showed two points of inflection where the curve leveled off: 4 and 7. These points each suggested a 3 and 6-factor solution. Eigenvalue Scree Plot Component Number Figure 4. Scree plot for revised version of questionnaire. 67 In order to find the best model, the item loading tables were compared to find the factor structure that had item loadings above .30, few item cross-loadings, and no factor with fewer than three items (Costello & Osborne, 2005). Upon examining the two factor solutions, the 6factor solution which accounted for 47.88% of the variance was selected, and items with factor loadings greater than .40 and no cross-loadings over .32 were retained. The factor loadings for each of the six components can be found in Table 7. Table 7 Factor Loadings Item 10 2 13 11 7 1 8 24 14 29 43 41 37 44 38 39 31 40 25 34 9 21 42 35 33 20 28 36 Component 1 2 3 4 5 .818 .781 .686 .669 .581 .563 .537 .463 .447 .379 .676 .651 .650 .650 .587 .585 .423 .409 .331 .356 .680 .647 .620 .446 -.339 .385 .329 .679 .678 .618 .557 .509 .429 68 6 Table 7 (cont’d) 26 17 19 3 5 15 6 32 16 4 12 30 27 23 18 22 45 .453 .338 .506 -.475 .430 .419 -.330 .360 .656 .534 .445 .438 .435 .421 .331 .363 .405 .321 -.625 -.494 -.482 -.471 -.418 -.375 .335 Table 8 shows the Component Correlation Matrix where we can observe the relationships among the 6 factors that were extracted. Correlations among the factors were relatively low, ranging from .02 between Factor 3 and 5, to .36 between Factor 1 and Factor 4. Table 8 Component Correlation Matrix Component 1 2 3 4 5 6 1 1.00 .22 .11 .36 .14 -.21 2 1.00 .02 .11 .19 -.18 3 1.00 .09 .02 -.07 4 1.00 .11 -.11 5 1.00 -.13 Metacognitive Knowledge Components The PCA uncovered six components that tapped into various areas of metacognitive knowledge in writing including: metacognitive knowledge of task, personal learning process, 69 6 1.00 strategy, text and accuracy, problem solving, and discourse features. In the following tables, the questionnaire items that strongly loaded onto each of the factors are grouped together. Table 9 Factor 1 Metacognitive Knowledge of Task Item 10 When writing an English essay, I select and organize relevant information to answer the essay prompt. Before I start an English essay writing task, I try to determine what the essay prompt requires. Before I start to write an English essay, I think about the order in which I will put my ideas. Before writing an English essay, I ask myself how the essay prompt relates to what I already know. When writing an English essay, I carefully think of the things I want to write about on the topic. When writing an English argumentative essay, I think about what I should do to make it persuasive. I am aware of the need to plan my course of action when writing in English. When writing an English essay, I think about what the title tells the readers about the essay. 
When writing in English, I try to understand the purpose of the writing task. 2 13 11 7 1 8 24 14 The first factor was labeled Metacognitive Knowledge of Task. There were nine items that loaded onto this factor. The majority of these items related to participants’ awareness of various aspects pertinent to the writing task. For example, questionnaire items relevant to the writing prompt (requirements, relevant information and background knowledge), the purpose of the writing task, and the characteristics of the genre of writing were included in this factor. Also included were items related to planning and generating ideas appropriate for the writing task. Table 10 Factor 2 Metacognitive Knowledge of Personal Learning Process Item 43 41 I think about ways to improve my English writing proficiency on my own. I am aware of which type of feedback (teacher, peer, self) is most effective for improving my English writing. 70 Table 10 (cont’d) 37 44 I am aware of my own thinking when writing in English. After I finish an English essay writing task, I think about how I can do better the next time. I think about whether I am making progress in learning how to write in English. I do not give much thought to the evaluation criteria when working on an English writing task. 38 40 The second factor was labeled Metacognitive Knowledge of Personal Learning Process. This factor was composed of six items that related to participants’ awareness of various aspects of learning to write in English. Items 43 and 41 tapped into participants’ awareness about ways to improve their L2 writing proficiency by themselves or through instruction. Items 44 and 40 kept with the theme of learning to write, but were more relevant to participants’ awareness of learning from and being evaluated on writing tasks. Item 37 reflects participants’ general level of awareness when writing in English, and item 38 reflects participants’ tendency to reflect on and monitor their learning process. Table 11 Factor 3 Metacognitive Knowledge of Strategy Item 25 34 9 I am not sure what to do when I am writing an English essay and I can’t find the right word. I am not aware of which thinking technique or strategy to use when writing in English. I am aware of my level of aptitude for English writing. Factor 3 was labeled Metacognitive Knowledge of Strategy. Item 34 involves participants’ awareness of effective strategies in L2 writing in general, while item 25 taps into participants’ awareness of effective strategy use to compensate for a lack of vocabulary knowledge. Interestingly, Item 9, (I am aware of my level of aptitude for English writing) which is often 71 given as an example of person knowledge in the metacognitive literature, loaded onto this factor as well. Table 12 Factor 4 Metacognitive Knowledge of Text and Accuracy Item 35 33 Before I hand in my English essay, I check my work When writing an English essay, I think about the roles that words like first and second play in organizing the text. I periodically check my English essay while writing it. I think about the roles that words like but and however play in the organization of an essay. I check my accuracy as I progress through writing. 20 28 26 The fourth factor was named Metacognitive Knowledge of Text and Accuracy. The items largely fell into two categories. Items 33 and 28 were related to participants’ awareness of the use of discourse markers in English writing, while items 35, 20, and 26 tapped into participants’ awareness of accuracy in writing. 
While these two categories may seem somewhat different from each other, they relate to participants’ attention to textual features of English writing. Table 13 Factor 5 Metacognitive Knowledge of Problem Solving Item 5 15 32 4 When I write an English essay, I plan what I am going to do so I can use my time well. I am aware of effective ways to use a dictionary when writing in English. I am aware of effective ways to use Korean translation to solve the problem when I have difficulty writing an English sentence. Before I start writing an English essay, I try to predict which parts will be easy and what parts will be difficult. Factor 5 comprised four items that were relatively more difficult to interpret as items loading onto a single factor. Overall, the four items seemed to fall under the broad notion of problem solving; Items 15 and 32 tapped participants’ awareness of means (i.e., dictionary use, 72 translation) of problem solving when confronted with difficulty in sentence formation. Items 5 and 4 seemed to be more related to participants’ anticipation and management of difficult task conditions such as time constraints. Other items that had initially loaded onto this factor, but were discarded due to cross-loadings were also related to notions of task difficulty, proficiency, and successful task completion. Therefore, the factor was labeled Metacognitive Knowledge of Problem Solving. Table 14 Factor 6 Metacognitive Knowledge of Discourse Features Item 30 27 23 22 I am aware of alternative ways of saying what I mean when I have difficulty writing a sentence in English. I am aware of the differences and similarities between texts in English and in Korean. I am aware of the differences between spoken and written English. When I write in English, I think about whether the English expressions I use sound natural to native speakers of English. The final factor was named Metacognitive Knowledge of Discourse Features. A couple of the items that loaded onto this factor were related to participants’ awareness of the characteristics of discourse in English and Korean (item 27), as well as in writing and speaking (item 23). Meanwhile, items 30 and 22 were more specifically related to participants’ awareness of expressions in English with regard to range and variety (item 30) as well as collocations (item 22). Reliability Analysis A reliability analysis was conducted to test instrument reliability. According to Kline (1999), a value of .80 is generally accepted as reliable, while cut-off points for ability tests are often lower, at around .70. Also, when testing psychological constructs, as in the present study, values under .70 may be accepted as well. 73 The Cronbach’s alpha for the overall questionnaire was α = .94. While this Cronbach’s alpha value was high, it is likely that this was due to the large number of items included in the questionnaire. Also, it is recommended that the Cronbach’s alpha formula be applied to each subscale within a questionnaire when multiple factors exist (Cronbach, 1951, cited in Field, 2005). Therefore, it is more meaningful to examine the reliability of each subscale. Table 15 shows the Cronbach’s alpha for the six subscales of the metacognitive knowledge questionnaire. 
Table 15 Reliability for Metacognitive Knowledge Component Subscales Factor 1 2 3 4 5 6 Factor Name Task Requirements Personal Learning Process Strategy Use Text and Accuracy Problem Solving Discourse Features Number of Items 9 6 3 5 4 4 Cronbach’s α .84 .78 .62 .76 .49 .68 Factor 1, Factor 2, and Factor 4 showed acceptable levels of reliability, whereas factors 3 and 6 showed lower levels of reliability. This could be due to the nature of the study (i.e., measuring psychological constructs) as Kline (1999) noted, or it could also be due to the relatively small number of questionnaire items included in these subscales, especially for Factor 3 which only had three items. The most problematic, however, was Factor 5 for which the Cronbach alpha value was .49. Table 16 shows the items and their Corrected Item-Total Correlation and Cronbach’s alpha if Item Deleted for each of the items in Factor 5. The Item-Total statistics indicated that item 15 had a deleterious effect on the subscale’s reliability. The correlation between item 15 and the scale was .20 which is lower than the value .30, recommended by Field (2005). Also, the Cronbach’s Alpha if Item Deleted indicated that the overall reliability would improve if Item 15 74 were excluded. However, item 15 was retained despite its low correlation with the scale because it was found that excluding the item resulted in a change in the overall factor structure of the PCA. Also, the improvement in reliability would be .01 which would not be a meaningful increase given that the increased value of Cronbach’s alpha (i.e., .50) would still indicate low reliability. Table 16 Item-Total Statistics Item # Item 5 15 32 4 Corrected Item-Total Correlation When I write an English essay, I plan what I am going to .35 do so I can use my time well. I am aware of effective ways to use a dictionary when .20 writing in English. I am aware of effective ways to use Korean translation to .24 solve the problem when I have difficulty writing an English sentence. Before I start writing an English essay, I try to predict .37 which parts will be easy and what parts will be difficult. Cronbach's α if Item Deleted .36 .50 .46 .35 Regression Analysis After the six components of metacognitive knowledge were identified via the PCA, a regression analysis was carried out in order to answer the fourth research question regarding the relationship between metacognitive knowledge, L2 proficiency, and L2 writing proficiency. First, a multiple regression looking at how the six components of metacognitive knowledge affect L2 writing performance was conducted. Then, the relationship between the six components of metacognitive knowledge and L2 proficiency was explored. This was to examine whether having higher levels of certain aspects of metacognitive knowledge contributed to a higher level of L2 proficiency. Finally, an analysis of the relationship between L2 proficiency, the six components 75 of metacognitive knowledge, and L2 writing proficiency was conducted. In the following section, assumptions for running and generalizing from the results of a regression analysis are reported, in addition to the descriptive statistics of metacognitive knowledge factor scores, L2 proficiency scores, and L2 writing scores. Sample size. First, the sample size was examined to see if it was adequately large to obtain a reliable regression model. 
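Stepping back briefly to the reliability analysis above: the subscale alphas and item-total statistics in Tables 15 and 16 were obtained from SPSS, but a minimal Python sketch of the same computations is given below; the file name is hypothetical and the item grouping is an abbreviated, illustrative placeholder rather than the full set of subscales.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k / (k - 1) * (1 - sum of item variances / variance of the summed scale)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

responses = pd.read_csv("questionnaire_main.csv")                     # hypothetical file name
problem_solving = responses[["item5", "item15", "item32", "item4"]]   # illustrative column names

print(round(cronbach_alpha(problem_solving), 2))                      # subscale alpha (cf. Table 15)

# Corrected item-total correlation: each item against the sum of the remaining items (cf. Table 16)
for col in problem_solving.columns:
    rest = problem_solving.drop(columns=col).sum(axis=1)
    print(col, round(problem_solving[col].corr(rest), 2))

The sample-size check for the regression model is taken up next.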
The most widely accepted rule of thumb suggests 10 to 15 cases per predictor, and based on this criterion, the present study had an adequate number of cases (n = 164) for its six (i.e., factor scores) or seven (i.e., factor scores and L2 proficiency) predictor variables. However, Field (2005) noted that this rule of thumb is overly simplistic, and recommended that effect size and power be taken into consideration as well. In order to achieve a high level of power (.80) with a medium effect size and six to ten predictors, a sample size of between 100 and 150 should be sufficient (Miles & Shevlin, 2001, cited in Field, 2005). The sample size for the present study met this criterion as well, and was deemed appropriate for a regression analysis.

Quantitative variables. Next, the variables in the study were examined as to whether the predictor variables were quantitative or categorical, and whether the outcome variable was quantitative, continuous, and unbounded. The predictor variables in this study, which were L2 proficiency test scores and factor scores for the six components of metacognitive knowledge, met the assumption that predictor variables are quantitative. The outcome variable for the study was the set of L2 writing proficiency scores from the English argumentative writing task, ranging from 1 to 6, which were quantitative. While these scores were based on a rating scale and thus interval data, they were treated as continuous data because the underlying concept of the rating scale was continuous, and the descriptors of the rating scale aimed to make the intervals between the points approximately equal. For the regression analysis investigating the relationship between L2 proficiency, metacognitive knowledge, and L2 writing, L2 proficiency scores and participants' scores for the six metacognitive knowledge components were the predictor variables, and participants' scores on the L2 writing task were the outcome variable. L2 proficiency scores were participants' raw scores on the DIALANG test, and the metacognitive knowledge component scores were calculated using the weighted sum scores method. The weighted sum scores method was chosen as the method of computing factor scores because it maintains the advantages of the sum score method while accounting for the strength of the factor loading for each item. The sum score method is recommended for exploratory data, and preserves the variation and metric in the original data, which allows for easier interpretation (DiStefano, Zhu, & Mindrila, 2009).

Table 17
Descriptive Statistics for Multiple Regression Variables

Variable                              M       SD
L2 proficiency                        17.15   4.51
L2 writing                            2.77    1.68
Factor 1 Task                         2.84    0.42
Factor 2 Personal Learning Process    2.38    0.49
Factor 3 Strategy                     2.07    0.63
Factor 4 Text and Accuracy            2.68    0.46
Factor 5 Problem Solving              1.87    0.39
Factor 6 Discourse Features           1.81    0.42

Regression Analysis Results

Before examining the regression model, the correlation matrix (Table 18) was consulted to note the correlations among the variables. This also served as a preliminary check for any variables with very high correlations (r > .90) that would cause multicollinearity. There appeared to be no multicollinearity, with correlations among the different variables ranging from .05 to .55. The predictor variable L2 proficiency had the highest positive correlation with the outcome variable, L2 writing performance (r = .55, p < .001), indicating that it would likely account for the greatest amount of variance in the outcome.
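As an aside on how the metacognitive knowledge predictors themselves were formed, the weighted sum scores described above could be computed along the following lines. The study used SPSS, and the file name, column names, and loadings here are illustrative placeholders; reverse-worded items are assumed to have been recoded already.

import pandas as pd

responses = pd.read_csv("questionnaire_main.csv")   # hypothetical file of recoded item responses

# Illustrative pattern-matrix loadings for the items on one component (e.g., Factor 3, Strategy)
strategy_loadings = {"item25": 0.68, "item34": 0.65, "item9": 0.62}

# Weighted sum score: each item response multiplied by its loading, then summed across items
strategy_score = sum(responses[item] * loading for item, loading in strategy_loadings.items())

print(strategy_score.mean(), strategy_score.std())

Analogous scores for the other five components, together with the DIALANG raw scores, would then enter the regression models described below.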
Among the six metacognitive knowledge variables, Factor 4, Metacognitive Knowledge of Text and Accuracy, had the highest correlation with L2 writing performance (r = .29, p < .05). Meanwhile, the metacognitive knowledge variable that had the highest correlation with L2 proficiency was Factor 1, Metacognitive Knowledge of Task (r = .33, p < 0.001). The six metacognitive knowledge variables also showed small to medium correlations among themselves. Table 18 Inter-Variable Correlations Writing Proficiency Factor1 Factor2 Factor3 Factor4 Factor5 Factor6 Writing 1.00 Proficiency .55* 1.00 Factor1 .25* .33* 1.00 Factor2 .15* .12 .39* 1.00 Factor3 .26* .16* .19* .12* 1.00 Factor4 .29* .28* .51* .30* .13 1.00 Factor 5 .25* .05 .34* .24* .16* .36* 1.00 Factor6 .16* .23* .44* .48* .25* .37* .33* 1.00 N= 145, * p < .05 Relationship between metacognitive knowledge and L2 writing. A forced-entry multiple regression analysis between the six components of metacognitive knowledge and L2 writing revealed that the metacognitive components collectively explained 16% of the variance in L2 writing. The adjusted R2 indicated that this model would lose 3.7% of its predictive power when generalized. All of the metacognitive knowledge components except for discourse feature knowledge (factor 6) had a positive relation with L2 writing. Two predictors, metacognitive 78 knowledge of strategy (factor 3) and text and accuracy (factor 4) made statistically significant contributions to the regression model. Table 19 shows the summary results for the regression analysis. Table 19 Multiple Regression Analysis Summary for the Effects of Metacognitive Components on L2 Writing Variable Constant F1 Task F2 Personal learning F3 Strategy F4 Text and accuracy F5 Problem solving F6 Discourse features 2 Note. R = .16 (p < .001). *p < .05 B .80 .16 .06 .32 .38 .32 .11 SE B .57 .21 .17 .12 .19 .20 .22 β .07 .03 .21* .19* .20 .22 t 1.40 .74 .35 2.62 2.01 1.55 -.52 p .17 .46 .73 .01 .047 .12 .60 Relationship between metacognitive knowledge and L2 proficiency. Next, a forcedentry multiple regression analysis was conducted with the six components of metacognitive knowledge as predictor variables, and L2 proficiency as the outcome variable. The results in Table 20 show that the metacognitive components accounted for 18.5% of the variance in L2 proficiency. The predictive power of this model would decrease 3.3% when generalized to a population. Table 20 Multiple Regression Analysis Summary for the Effects of Metacognitive Components on L2 Proficiency Variable Constant F1 Task F2 Personal learning F3 Strategy B 2.88 2.92 -.68 1.03 SE B 2.99 1.10 .90 .64 79 β .25** -.07 .12 t .97 2.65 -.76 1.60 p .34 .009 .45 .11 Table 20 (cont’d) F4 Text and accuracy 1.84 .98 .17 1.89 .06 F5 Problem solving -1.76 F6 Discourse features 1.78 2 Note. R = .185 (p < .001). **p < .01 1.05 1.13 -.14 .15 -1.67 1.58 .10 .12 Four of the metacognitive knowledge components (task, strategy use, text and accuracy, discourse features) had positive relationships with L2 proficiency, whereas two metacognitive components (personal learning and problem solving) had a negative relationship with L2 proficiency. Task knowledge was the only component of metacognitive knowledge that made a statistically significant (p = .009) contribution to the model, although the contribution that metacognitive knowledge of text and accuracy made did approach significance (p = .06) at the .05 level. Relationship between L2 proficiency, metacognitive knowledge, and L2 writing. 
Results of a hierarchical multiple regression analysis revealed that L2 proficiency alone accounted for 29.9% of the variation in L2 writing performance. When the six components of metacognitive knowledge were added into the model, this increased to 38.6%. In other words, together, L2 proficiency and metacognitive knowledge predicted 38.6% of the variance in L2 writing performance. The adjusted R2 for the first model (i.e., L2 proficiency) was .294 indicating that this model had good cross-validity, while the adjusted R2 for the second model (i.e., L2 proficiency and metacognitive knowledge) was .355 indicating a 3% decrease in the model’s ability to explain the variance of the outcome when generalized to a population. 80 Table 21 Multiple Regression Analysis Summary for the effects of L2 Proficiency and Metacognitive Components on L2 Writing Scores Variable B SE B β Step 1 Constant 1.79 .22 L2 Proficiency .10 .01 .55 Step 2 Constant .41 .50 L2 Proficiency .10 .01 .52** F1 Task -.13 .19 -.06 F2 Personal Learning .12 .15 .06 F3 Strategy .24 .11 .16* F4 Text and accuracy .21 .16 .11 F5 Problem solving .48 .18 .21** F6 Discourse features -.23 .19 -.10 2 2 Note. R = .30 for Step 1; Δ R = .09 (p = .005) for Step 2. *p < .05, **p < .01 t p 8.11 7.81 .00 .00 .83 7.11 -.70 .79 2.33 1.28 2.72 -1.24 .41 .00 .49 .43 .02 .20 .01 .22 A closer look at each of the predictors in the second model revealed that L2 proficiency, and metacognitive knowledge of personal learning, strategy use, text and accuracy, and problem solving had a positive relationship with L2 writing performance, while metacognitive knowledge of both task and discourse features had a negative relationship with L2 writing performance. L2 proficiency was the predictor that made the greatest individual contribution to the model, and this contribution was statistically significant (p < .001). Meanwhile, only two of the metacognitive knowledge components made statistically significant contributions to the model: metacognitive knowledge of strategy use (p = .02) and problem solving (p = .01). The standardized beta values indicated that metacognitive knowledge of problem solving carried a slightly greater importance in the model compared to metacognitive knowledge of strategy use. 81 L2 Proficiency Strategy Knowledge .52** .16 * L2 Writing .21** Problem Solving Knowledge *p < .05, **p < .01 Figure 5. Relationship between L2 proficiency, metacognitive knowledge, and L2 writing. Assessing the Regression Model After a model has been produced via a regression analysis, it should be examined with regard to its fit with the observed data and its ability to draw conclusions about a general population. In the following section, the model fit and generalization of the three models produced via regression analysis will be examined. Model fit. The fit of a regression model to its observed data is tested by looking for indications of the model being unduly influenced by a small number of cases (Field, 2005). In order to do this, the three regression models in this study were examined for possible outliers and influential cases. An examination of residual statistics and DFBeta statistics revealed that there were few outliers and influential cases in the data. Generalization. To be able to draw conclusions about a general population from the results of a regression analysis, the data should meet several assumptions. First, because the data for the outcome variable came from independent cases, the assumption of independence was met. 
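Returning briefly to the hierarchical model just reported: although the study's analyses were run in SPSS, the two-step structure and the change in R2 can be sketched in Python with statsmodels; the file and column names below are hypothetical.

import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("regression_data.csv")   # hypothetical file with writing, proficiency, and factor scores
factors = ["f1_task", "f2_personal", "f3_strategy", "f4_text", "f5_problem", "f6_discourse"]

# Step 1: L2 proficiency as the sole predictor of L2 writing scores
step1 = sm.OLS(data["writing"], sm.add_constant(data[["proficiency"]])).fit()

# Step 2: L2 proficiency plus the six metacognitive knowledge components
step2 = sm.OLS(data["writing"], sm.add_constant(data[["proficiency"] + factors])).fit()

r2_change = step2.rsquared - step1.rsquared   # in the study, roughly .386 - .299
print(step1.rsquared, step2.rsquared, step2.rsquared_adj, round(r2_change, 3))

The remaining checks for generalizing from the regression models are discussed below.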
82 Next, multicollinearity was examined in several ways. The initial correlation matrix indicated that there were no variables that correlated at an extremely high level. The VIF and tolerance statistics from each of the regression models were examined as well. Several criteria ( largest VIF smaller than 10, VIF not substantially greater than 1, tolerance above .10 or .20) listed in Field (2005) were used to determine whether these values were acceptable. The VIF values were below 2, and the tolerance statistics were all well above .20, confirming there was no multicollinearity. In addition, the assumption of independent errors (i.e., residual terms for any two observations are uncorrelated) for the regression models was tested by examining the DurbinWatson statistics. Results showed that the statistics were all between 1 and 3, which were acceptable values. Plots of standardized residuals and standardized predicted values were examined to check the assumptions of homoscedasticity and linearity. These assumptions were met. The normality of residuals was also confirmed by examining the histograms and normal probability plots for the data. An assumption that may not have been met was that the predictors in the model should be uncorrelated with external variables, which Field (2005) defines as “variables that haven’t been included in the regression model which influence the outcome variable” (p.170). It is possible that there were additional predictor variables that were not measured in this study and thus excluded from the regression models. Therefore, the assumption of no correlation with external variables may be violated if these potential external variables correlate with the predictor variables included in this study (i.e., L2 proficiency and metacognitive knowledge components). Summary of Quantitative Results 83 The multi-phase process of item compilation, pilot administration, and revision produced a 31-item self-report instrument on metacognitive knowledge in the context of L2 writing. The self-report instrument comprised six subscales that each represented a component of metacognitive knowledge: metacognitive knowledge of 1) task; 2) personal learning process; 3) strategy use; 4) text and accuracy; 5) problem solving, and 6) discourse features. A reliability analysis of each subscale revealed good reliability for measuring metacognitive knowledge of task (.84), personal learning process (.78), and text and accuracy (.76). The subscales of strategy use (.62) and discourse features (.68) yielded relatively lower, yet acceptable levels of reliability, while the subscale for problem solving had a low level of reliability (.49). Next, multiple regression analyses were conducted in order to shed light on the relationships between L2 writing, L2 proficiency, and metacognitive knowledge. Results indicated that the six metacognitive components accounted for 16% of the variance in L2 writing performance and 18.5% of the variance in L2 proficiency. Metacognitive knowledge and L2 proficiency together explained 38.6% of the variance in L2 writing. L2 proficiency was the most significant predictor in this model, while metacognitive knowledge of strategy and problem solving also made statistically significant contributions. Qualitative Results The qualitative data collected in this study through introspective methods such as thinkaloud protocols and stimulated recalls, in addition to semi-structured interviews also offered valuable insight into participants’ metacognitive knowledge of L2 writing. 
In this section, these qualitative results are presented thematically. The themes emerged from a cyclical analysis of the transcribed data, and roughly corresponded to the metacognitive knowledge components that were obtained through the Principal Component Analyses conducted in the study. 84 Metacognitive knowledge of task. One of the most prominent themes that emerged from analysis of participants’ utterances was their awareness of the task requirements. This may be due to participants’ orientation to the specific task at hand while thinking aloud or participating in a stimulated recall interview. For example, during a stimulated recall interview, a participant pointed out a moment in the recorded video where he checked on the requirements of the writing task that were laid out in the writing prompt: “Here, I was looking at the writing prompt again because I wanted to see if I needed to talk about all three businesses in the essay” (stimulated recall interview, YJH). Meanwhile, the relative novelty of the task may have also prompted the participants to become more aware of the different characteristics of the task. The following comment from a participant during a semi-structured interview illustrates this: It was refreshing to work with a writing prompt that’s different from the usual agree/disagree ones. Before writing, I thought a lot about the different requirements that the essay needed to meet, and I think it was fun that there were multiple possibilities. (semi-structured interview, HSY) The participants were also able to verbalize more general types of task knowledge when prompted during semi-structured interviews. For example, when asked what her perceptions of an effective argumentative essay was, a participant was able to give several criteria: What is a good argumentative essay? I think that a good argumentative essay needs to have a strong main argument with several, several, maybe three supporting reasons, and you should also be able to restate your opinion in the conclusion in a very clear way. (semi-structured interview, AYH) 85 Metacognitive knowledge of strategy use. While metacognitive knowledge of strategy use also featured prominently in the data, the participants’ utterances often resembled a running commentary, possibly due to the nature of think-aloud protocols, and to a lesser degree, stimulated recall interviews. The following comment is a somewhat more detailed example of how a participant described a challenging situation while writing in English, and used the strategy of simplifying her message in order to get across her meaning: Here, what I wanted to say- I wanted to say it’s easier for students to use these services, but- but I couldn’t think of a good expression to use, and so I decided to use a simple- I used ‘better school life’ instead. (stimulated recall, MHR) Similar to the comments on task knowledge, participants were also able to describe their general perceptions of effective and ineffective strategies when prompted during semi-structured interviews. For example: You know how there’s a lot of ways you can do brainstorming? Well, mind-mapping doesn’t really work for me, but I think making a list or free-writing are okay. So what I do is, I make a list and then start free writing based on that. (semi-structured interview, AYH) Metacognitive knowledge of person. In previous literature on metacognitive knowledge, person knowledge has often been described as knowledge we have about human factors that influence learning (Wenden, 1998). 
For example, the following quote illustrates a participant commenting on her self-perception of being more effective with certain genres than others: “I sighed here, because I was thinking I’m better at writing narratives (than argumentative essays)” (stimulated recall, HJY). 86 However, in general, data from the think-aloud protocols and stimulated recall interviews included few utterances pertaining to participants’ person knowledge as defined by Wenden (1998). This is not surprising when considering participants’ orientation and attention to the text they were given (writing prompt), and the text they were asked to produce (argumentative essay). Rather, the majority of the utterances produced during stimulated recalls and think-aloud protocols were directly related to specific aspects of the text (e.g., accuracy), or their engagement with the text (e.g., word search). Therefore, most of the person knowledge that was elicited from participants was through semi-structured interviews. The following comments show examples of participants’ responses during semistructured interviews on personal factors that had to do with their English writing. The first participant had a high level of awareness with regard to her L2 writing development. She commented, “Overall, I think I’ve improved in terms of content, and even though I only use simple vocabulary, I’ve become much more fluent in writing compared to the past” (semistructure interview, HSY). The next example shows a participant who had established a personal goal for improvement in English writing: When my friends read my essays, they say that it’s easy to understand, and that the examples I use are easy to relate to. That’s what my goal is. My goal for writing is to write in a way that that a lot of people can relate to and is easy to understand. (semi-structured interview, AYH) 87 CHAPTER 4: DISCUSSION This chapter discusses the research results in further depth and within the broader context of the fields of SLA and educational psychology. The chapter is largely organized into three parts. First, each of the research questions and hypotheses stated in the beginning of the dissertation will be discussed in light of the results obtained from the study. Next, the theoretical and practical implications of the study will be presented. The chapter will conclude with a brief discussion of limitations of the study, in addition to suggestions for further avenues of research in the areas of metacognition and language learning. As outlined at the beginning of the dissertation, there were three major purposes for this study. The first was to establish a clear definition and operationalization of metacognitive knowledge of L2 writing by identifying its constituent components. Next, the study aimed to demonstrate a multi-phase process for developing a psychometrically valid and reliable measure that could be used to assess L2 writers’ metacognitive knowledge. Finally, the instrument developed through this process was used to measure Korean EFL writers’ metacognitive knowledge. This data, together with data on these participants’ English proficiency and English writing performance were examined to shed light on the relationship between L2 writers’ metacognitive knowledge, L2 proficiency, and L2 writing performance. In the following section, each of the research questions that guided this study are answered in light of the research findings. 
Research Question 1 The first research question for this study was: What are the underlying components of metacognitive knowledge in L2 writing? At the outset of the study, it was hypothesized that the components of metacognitive knowledge of L2 writing would mostly converge with the 88 categories of metacognitive knowledge that have been outlined in the Educational Psychology and SLA literature (e.g., Flavell, 1979; Wenden, 1998). These categories broadly consisted of person knowledge (knowledge of individual characteristics of a learner), task knowledge (knowledge of the requirements, purpose, constraints of a learning task), and strategy knowledge (knowledge of effective and ineffective strategies). In the following section, the six components of metacognitive knowledge that were identified in this study are discussed in relation to the three-component (i.e., person, task, strategy) classification system proposed by Wenden (1998). Subcomponents within broad categories. The make-up of metacognitive knowledge discovered in this study was indeed similar to that outlined in the theoretical literature. A small departure from the categories that had been previously discussed was that there were six distinct components instead of three, namely, metacognitive knowledge of 1) task, 2) personal learning process, 3) strategy, 4) text and accuracy, 5) problem solving, and 6) discourse features. Some of these components represented constructs that seemed to fall under one of the three anticipated categories of metacognitive knowledge (i.e., person, task, and strategy). Meanwhile, other components were more difficult to place within this theoretical framework. For example, metacognitive knowledge of task, text and accuracy, and discourse features seemed to collectively represent the construct of task knowledge defined as what learners know about the purpose of the task and how it will serve their language learning needs (Wenden,1998). Questionnaire items that represented metacognitive knowledge of task (e.g., Before I start an English essay writing task, I try to determine what the essay prompt requires), text and accuracy (e.g., When writing an English essay, I think about the roles that words like first and second play in organizing the text), and discourse features (e.g., When I write in English, I think about 89 whether the English expressions I use sound natural to native speakers of English.) seemed to align with this definition. Meanwhile, the metacognitive knowledge components of personal learning process, strategy, and problem solving were somewhat more complex in their interpretation. These metacognitive components seemed to be a conglomeration of several ideas, and did not fit squarely within the anticipated categories. For example, the metacognitive knowledge component labeled ‘personal learning process’ included questionnaire items that targeted participants’ general awareness of their thought processes when writing, but also included questionnaire items that were more relevant to instruction and assessment. Metacognitive knowledge of strategy included participants’ awareness of aptitude (item 9, I am aware of my level of aptitude for English writing) which is interesting in that aptitude is often given as an example of person knowledge in the metacognitive literature. While item 9 seemed somewhat unrelated to the other two items on strategies, it may be interpreted in a few different ways. 
Participants’ awareness of a low level of aptitude for L2 writing may trigger their search for compensatory strategies. It could also be that participants’ conceptualization of English writing aptitude closely relates to how effective they are in deploying strategies when writing. Meanwhile, metacognitive knowledge of strategy and problem solving have been seen as falling under the category of strategy knowledge in previous literature, and were thus anticipated to be closely related in this study. It was found that even though the questionnaire items that loaded onto the two factors were in many ways similar, metacognitive knowledge of problem solving factored out as a separate component from strategy knowledge. An explanation could be that metacognitive knowledge of problem solving had a greater emphasis on using strategies for 90 the purpose of overcoming difficult situations when writing. That is, this factor had a more specific context, and was more task-oriented, and thus involved more ideas related to task difficulty, proficiency, and the employment of strategies for successful task completion. In the following section, the idea of task (or goal)-orientation is further discussed in relation to the six components of metacognition. Goal orientation of metacognitive knowledge components. In order to gain a better understanding of the characteristics of the six metacognitive knowledge components, the correlations among the factors were examined. An interesting finding was that the six metacognitive knowledge components in this study seemed to be on a continuum of more task/goal oriented, to less task/goal oriented, and this seemed to determine the relationship among the components. That is, the questionnaire items included in some of the components seemed to be focused on the idea of successful task completion more than others, and this, in turn, appeared to have an influence on the correlations among the components. The following example illustrates this point. Metacognitive knowledge of task was one of the most task/goal oriented, and was found to have a high positive correlation with metacognitive knowledge of text and accuracy. As the name suggests, task metacognitive knowledge included items that targeted L2 writers’ orientation to task requirements, constraints, and purpose which are crucial to the successful completion of a writing task. Meanwhile, the text and accuracy component also had a strong orientation to the goal of successful task completion, as it included items that tapped participants’ orientation towards producing an accurate text for the writing task at hand. It seemed that this common characteristic was the cause of the high correlation between the two components. 91 Also, metacognitive knowledge of problem solving, similar to metacognitive knowledge of strategy but with a stronger task/goal orientation, had a stronger positive correlation with task metacognitive knowledge, which also seemed to support that the different components had different degrees of task/goal orientation as well. Metacognitive knowledge of discourse features, which seemed to be the most general type of knowledge among all of the components, was found to have a negative correlation with metacognitive knowledge of task and several other metacognitive knowledge components. Influence of learning context. Another characteristic of the metacognitive knowledge components in this study was that they seemed to be influenced by the learning context. 
Previous literature has noted that metacognitive knowledge is task-specific (e.g., Wenden, 1991). When considering the fact that pedagogical tasks are part of a broader learning context, it is not surprising, then, that metacognitive knowledge in this study was also found to be influenced by the conditions of the particular learning context: university EFL classes in Korea. Although there is now a greater level of interest, English writing has traditionally not received much attention in Korea, perhaps mirroring the way writing instruction is embedded within reading instruction in the L1 as well. While most of the participants reported that they had studied English for approximately 12 years at the time of the study, only about 35% of the participants reported having had some kind of English writing instruction in the past. Therefore, for the majority of participants, the EFL writing course they were enrolled in at the time of the study was the first time they received instruction on L2 writing. Most of the students who reported having had writing instruction in the past had enrolled in private classes to prepare for the writing portions of standardized tests such as the TOEFL. The fact that English writing instruction started at the university freshman level, and that a great deal of this instruction was focused on standardized tests and assessment, seemed to be reflected in the participants' responses to the metacognitive knowledge self-report instrument. The characteristics of this particular learning context may be the reason why awareness of instruction and assessment featured prominently in this study, and factored out together with students' awareness of their own thought processes. Together they created a metacognitive component labeled 'personal learning process' that was somewhat different from the person metacognitive knowledge that has often been described in the literature. Results from the semi-structured interviews and stimulated recalls also seemed to confirm this potential explanation.

Research Question 2

While interest in metacognitive knowledge and its relationship with language learning has steadily increased in recent years, there has been a dearth of psychometrically valid and reliable measures that can accurately assess metacognitive knowledge. In response to this gap in research, the second research question in this study investigated ways to reliably assess L2 writers' metacognitive knowledge. To this end, the study documented a process for the development of a reliable measure of metacognitive knowledge in the domain of L2 writing. If the self-report instrument resulting from this process (i.e., development of an item pool, administration and evaluation of the instrument, and collection of introspective data) meets the standard reliability criteria outlined in the literature, it could be claimed to be a reliable means of assessing L2 writers' metacognitive knowledge. Self-report instrument reliability. The results of the study revealed that the subscales of the questionnaire mostly had good reliability, falling within the range of .70 to .80 recommended by Kline (1999). The two subscales for metacognitive knowledge of discourse features and metacognitive knowledge of strategy had Cronbach's alpha levels of .68 and .62, respectively, but for the measurement of psychological constructs such as metacognitive knowledge, this has also been discussed as an acceptable level of reliability (Kline, 1999).
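As a point of reference, the subscale reliabilities reported in this section are Cronbach's alpha coefficients, and the length-adjusted estimates discussed in the next paragraph rely on the Spearman-Brown prophecy formula. The following is only a minimal sketch of the standard forms of both statistics; the worked values simply re-derive, from the standard formula, the estimates reported below for the four-item problem-solving subscale (α = .49). For a k-item scale,

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item i and \(\sigma^{2}_{X}\) is the variance of the total scale score. When a scale is lengthened by a factor m, the predicted reliability is

\[
\alpha_{\text{new}} = \frac{m\,\alpha_{\text{old}}}{1 + (m-1)\,\alpha_{\text{old}}},
\]

so that extending a four-item scale with α = .49 to six items (m = 1.5) yields 1.5(.49)/[1 + 0.5(.49)] ≈ .59, and extending it to eight items (m = 2) yields 2(.49)/1.49 ≈ .66, matching the estimates reported below.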
On the other hand, the subscale for metacognitive knowledge of problem solving had an unacceptably low reliability (α = .49) and was cause for concern. There are several possible reasons for the low level of reliability of this metacognitive component. The first possible reason is the relatively small number of questionnaire items included in the subscale. Scales consisting of fewer than three items are considered weak and unstable, while scales including at least five strongly loading items (.50) indicate a solid factor (Costello & Osborne, 2005). The scale for metacognitive knowledge of problem solving had four items that loaded onto the factor, and two of the factor loadings were only slightly above the cutoff value of .40. In order to determine whether the low level of reliability of this subscale was indeed caused by the number of items, the Spearman-Brown prophecy formula was applied. Results from applying this formula to the scale to estimate the effects of increasing its length indicated that with six items, the reliability of the scale would increase to .59, and with eight items, the reliability would reach .66. These results indicated that while the low reliability of the scale may be partially caused by the somewhat small number of items, other variables were also contributing to the low level of reliability. Turning to the content of the questionnaire items for metacognitive knowledge of problem solving, it is also conceivable that item 15 (I am aware of effective ways to use a dictionary when writing in English) was less explicit in its relevance to the idea of problem solving compared to the other items, and thus detracted from the reliability of the scale. It is noteworthy, however, that item 15 was similar to item 32 (I am aware of effective ways to use Korean translation to solve the problem when I have difficulty writing an English sentence), which had a good correlation with the subscale. Both items discussed means to overcome problems through the use of either a dictionary or L1 translation. The inter-item correlation matrix confirmed that item 15 did, in fact, have the highest correlation with item 32. The problem, then, seemed to be that item 15 did not correlate very well with the two other items: items 5 (When I write an English essay, I plan what I am going to do so I can use my time well) and 4 (Before I start writing an English essay, I try to predict which parts will be easy and which parts will be difficult), which were related to participants' planning in order to overcome difficulties when writing. The wording of the items may also have had an impact on the way that participants responded to the four items in the questionnaire, and may have caused item 15 to lower the scale's overall reliability. Item 32, which was found to have a relatively high correlation with item 15, was slightly more explicit in its wording than item 15 with regard to its relevance to the idea of problem solving. Measures of metacognitive knowledge and strategies. Another plausible reason for the relatively low reliability of the 'problem solving' subscale is its relationship with metacognitive strategies. Previous theoretical literature has characterized metacognitive knowledge as relatively stable (Wenden, 1998), implying that it is a more general type of knowledge that learners have.
This characteristic is what has traditionally set apart the construct of metacognitive knowledge from metacognitive strategies, which are the specific actions of planning, monitoring, and evaluation that learners deploy during the performance of a task. The two components of metacognitive knowledge that addressed awareness of these strategies were metacognitive knowledge of problem solving (α = .49) and strategy use (α = .62), and these components also had the lowest levels of scale reliability among all components. Discussion of the limitations of using self-report instruments to measure metacognitive strategy use (and not metacognitive knowledge of strategies) provides support for this potential explanation. One of the greatest concerns about using self-report instruments to measure strategy use has been that these questionnaires require learners to "abstract one general characterization of executing learning strategies over multiple occurrences and events of strategic learning" (Samuelstuen & Braten, 2007, p. 93). In other words, questionnaires prompt learners to retrieve information on their strategy use from their long-term memories. However, as Richardson (2004) argues, it may be unlikely that learners have an accurate record of these mental activities in long-term memory. For critics of strategy research, this supports the argument that claims about strategy use are invalid, because what is in fact being measured is learners' perceptions of their strategy use. However, even for research on metacognitive knowledge, for which self-report instruments would be an acceptable methodology, it is possible that having to generalize over multiple events of using a variety of strategies (see Oxford, 1991, for a classification system) can lead to less reliable results. This is not to say that other types of metacognitive knowledge are not susceptible to this limitation as well. However, it seems that accessing strategy knowledge would be more demanding, and perhaps less reliable, than, for example, accessing knowledge of differences in spoken and written English (metacognitive knowledge of discourse features).

Research Question 3

In order to answer the third research question for the present study (What does introspective data reveal about learners' metacognitive knowledge in task-based L2 writing?), qualitative data were collected from three sources: 1) participants' utterances produced via think-aloud protocols while writing; 2) semi-structured interviews; and 3) stimulated recall interviews with participants directly after writing. These data were anticipated to enable triangulation of L2 writers' metacognitive knowledge as measured by the self-report instrument, and to offer greater insight into the relationship between the metacognitive knowledge participants have and the metacognitive strategies they deploy during a writing task. As reported in the previous chapter, the qualitative data gathered from introspective methods in this study revealed several aspects of participants' metacognitive knowledge of task, person, and strategy in L2 writing. Thus, the results mostly corroborated the components of metacognitive knowledge proposed in the early theoretical literature (e.g., Flavell, 1979; Wenden, 1991, 1998). Influence of learning context. While the broad categories of metacognitive knowledge found in the introspective data were mostly in line with the theoretical components of metacognitive knowledge, subtle differences were found as well.
While Wenden (1998) explained that person knowledge includes learners' knowledge of SLA factors such as age, aptitude, and motivation, and how these factors may influence their language learning, a prominent theme that emerged in this study was how writing instruction shaped participants' self-perceptions as L2 writers. This seemed to be due to the particular context in which the study was conducted. Participants, the majority of whom had received English writing instruction for the first time through their university EFL writing course, often commented that their current English writing proficiency and their beliefs about English writing were mostly developed through instruction they received at the university: "I learned how to write in English for the first time when I got into college, so I think that the professor taught me 95% of what I know about writing" (semi-structured interview, LKH). Several participants commented on how much their freshman English composition course had affected their perceptions of English writing, for example, what they perceived to be good English writing, effective ways to improve their English writing, and their self-efficacy with regard to English writing. One participant summarized the students' collective experience by saying: "We call it the Williams (pseudonym of writing instructor) Effect" (semi-structured interview, MHR). Meanwhile, participants who had received previous English writing instruction had done so at private institutions that prepared students for standardized tests such as the TOEFL. These students' self-efficacy beliefs regarding their English writing proficiency and their heightened awareness of evaluation criteria when writing in English were strongly influenced by their experiences at these institutions. As a result, awareness of assessment also featured as a theme in the introspective data, in addition to English writing instruction. Triangulation of quantitative data. The introspective data in this study revealed unique aspects of participants' metacognitive knowledge affected by the particular learning context within which they were learning how to write. This finding is meaningful in that it demonstrates how the context in which a study is conducted may influence the study results, and in that it highlights the need to validate instruments across different learning contexts and populations. In addition, it shows how introspective data allows greater insight into quantitative results obtained through large-scale administration of self-report instruments. However, it is also important to point out that the information gleaned through the introspective methods in this study mostly complemented the results obtained from the quantitative data rather than rigorously triangulating each participant's responses to the metacognitive knowledge questionnaire. The metacognitive knowledge questionnaire tapped various aspects of participants' metacognitive knowledge through a comprehensive repertoire of questionnaire items, whereas the stimulated recall interviews that were conducted only reflected the metacognitive knowledge that participants accessed during one particular writing task, and also only the information that they were able to recall. The relative scarcity of participants' person knowledge in the stimulated recall data is illustrative of these limitations.
Nevertheless, the information collected through various forms of introspection in this study provided valuable insights into what types of metacognitive knowledge were accessed during L2 writers' completion of a writing task.

Research Question 4

Whereas the relationship between L2 proficiency and L2 writing performance has been scrutinized in L2 writing research, the relationship between metacognitive knowledge and L2 writing performance has received less focused attention. Furthermore, few studies have examined the relationship among all three of these factors, with the exception of a handful of studies that have looked at the role of metacognitive knowledge within a larger cognitive model of L2 writing (e.g., Sasaki & Hirose, 1996; Schoonen et al., 2003). Thus, the fourth research question of this study asked: What is the relationship between L2 learners' metacognitive knowledge, L2 writing proficiency, and L2 language proficiency? It was anticipated that L2 proficiency and metacognitive knowledge would contribute to L2 writing performance. This hypothesis was based on the results of previous research (Sasaki & Hirose, 1996) that explored the relationship among these variables, albeit with a different measure of metacognitive knowledge than in the present study. The prediction was borne out in the results of this study; L2 proficiency and the components of metacognitive knowledge predicted 38.6% of the variance in L2 writing performance. It was also hypothesized that L2 proficiency would affect L2 writing performance to a greater degree than would metacognitive knowledge, and thus, that a high level of metacognitive knowledge might not be able to compensate for a very low level of L2 proficiency (cf. Sasaki & Hirose, 1996). This, too, was confirmed in the present study. Upon examining the effects of L2 proficiency and metacognitive knowledge on L2 writing, it was found that L2 proficiency had a greater influence on L2 writing, predicting 29.9% of the variance, while metacognitive knowledge predicted only 8.7%. Even when L2 proficiency was taken out of the equation, it was found that metacognitive knowledge explained 16% of the variance in L2 writing, which was less than what L2 proficiency explained. Results from Sasaki and Hirose (1996) were similar; participants' L2 proficiency and metacognitive knowledge predicted 52% and 11% of the variance in L2 writing proficiency, respectively. Meanwhile, whereas L2 proficiency made a significant contribution to predicting the variance in L2 writing performance, this did not hold true for all of the metacognitive knowledge components. Among the six components, only metacognitive knowledge of strategy and metacognitive knowledge of problem solving were found to be statistically significant predictors. The potential reasons for the prominence of these two components, problem solving and strategy, are discussed in the following section. In interpreting these results, it should be noted that L2 proficiency in the present study was measured by the grammar component of the DIALANG test. Thus, the variable 'L2 proficiency' in this study is, in fact, grammatical accuracy. Only one section of the DIALANG test was chosen for administration due to practical time constraints. The grammar component was chosen as an index of L2 proficiency among the different sections of the test because it assessed both vocabulary knowledge and grammatical accuracy, which were thought to be central aspects of L2 proficiency required for L2 writing.
However, it should be acknowledged that selecting grammatical accuracy as an index of L2 proficiency most likely affected the regression results indicating the relationship between L2 proficiency and L2 writing. For example, had the listening component of the DIALANG test been chosen as the index of L2 proficiency, the strength of the relationship between L2 proficiency and L2 writing may have been different, and possibly weaker. Metacognitive knowledge of problem solving and strategy use. When considering why it may be that metacognitive knowledge of problem solving and strategy use were the most prominent predictors of L2 writing, it is useful to recall what Flavell (1979) described as conditions that are favorable to metacognition. Flavell stated that metacognitive thoughts are likely to occur in novel situations that stimulate highly conscious thinking, planning, and evaluation. From this description, we can see that a challenging learning context requiring problem solving and strategy use (i.e., planning and evaluation) would be conducive to the emergence of metacognition. Thus, it may be that in challenging learning situations, such as a timed essay writing task, the metacognitive components of problem solving and strategy are the components that are accessed the most. This may be the reason that among the six different components of metacognitive knowledge, the two components of problem solving and strategy use were found to be the most significant predictors of L2 writing performance. Livingston (1997) also confirms this idea of the importance of problem solving and strategy use to metacognitive knowledge. She claimed that metacognitive strategies "often occur when cognitions fail…[and] such an impasse is believed to activate metacognitive processes as the learner attempts to rectify the situation" (Roberts & Erdos, 1993, cited in Livingston, 1997, p. 2). A cognitive model of L2 writing. While the two variables of L2 proficiency and metacognitive knowledge did predict L2 writing performance to some extent in this study, the 38.6% of variance that was explained by these variables left approximately 60% of the variance in L2 writing performance unaccounted for. The large proportion of variance that was not explained is most likely due to additional variables that were not measured or included in the regression model but would have had an effect on the outcome, L2 writing performance. While this does not pose a problem in the current study, because its primary purpose was limited to elucidating the relationship between L2 proficiency, metacognitive knowledge, and L2 writing proficiency, other factors would have to be taken into consideration in order to build a more comprehensive model of L2 writing. For example, Schoonen and colleagues (e.g., Schoonen, Snellings, Stevenson, & Van Gelderen, 2009; Schoonen, Van Gelderen, De Glopper, Hulstijn, Simis, Snellings, & Stevenson, 2003) have proposed cognitive models of L2 writing composed of a range of variables such as L2 linguistic knowledge, speed of L2 processing, metacognitive knowledge, L1 writing proficiency, and L2 writing proficiency. Another example is a series of studies by Sasaki and Hirose (Hirose & Sasaki, 1994; Sasaki & Hirose, 1996; Hirose & Sasaki, 2000), who included L2 language proficiency, L1 writing proficiency, L1 writing strategies, L2 writing strategies, L2 writing metaknowledge, past writing experience, and instructional background in their model of L2 writing.
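Purely as an illustrative sketch, and not the study's actual analysis code, the hierarchical regression logic described above, entering L2 proficiency first and the six metacognitive knowledge factor scores second, and reading the change in R-squared as the contribution of metacognitive knowledge, could be expressed as follows. The simulated data, variable names, and coefficients here are hypothetical.

# Hypothetical sketch of a hierarchical multiple regression predicting L2
# writing scores from L2 proficiency and six metacognitive knowledge (MK)
# factor scores. Data are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 253  # sample size reported in the study

proficiency = rng.normal(size=n)          # e.g., a standardized grammar score
mk_factors = rng.normal(size=(n, 6))      # six MK factor scores from the PCA
writing = (0.5 * proficiency
           + 0.2 * mk_factors[:, 2]       # strategy
           + 0.2 * mk_factors[:, 4]       # problem solving
           + rng.normal(size=n))          # toy outcome with noise

# Step 1: L2 proficiency only
model1 = sm.OLS(writing, sm.add_constant(proficiency)).fit()

# Step 2: L2 proficiency plus the six MK components
X_full = sm.add_constant(np.column_stack([proficiency, mk_factors]))
model2 = sm.OLS(writing, X_full).fit()

print(f"R^2, proficiency only: {model1.rsquared:.3f}")
print(f"R^2, full model:       {model2.rsquared:.3f}")
print(f"Delta R^2 for MK:      {model2.rsquared - model1.rsquared:.3f}")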
Theoretical Implications The present study makes a meaningful contribution to the fields of SLA and Educational Psychology in several ways. First and foremost, it disentangles confounding definitions and terms in the literature on metacognition in general, and more specifically, metacognitive knowledge. Thus far, research in the realm of metacognition and metacognitive knowledge has 102 been somewhat fragmented, in part, due to the incoherent definitions and operationalizations of the construct of metacognition and metacognitive knowledge. A more coherent understanding of the construct of metacognitive knowledge is invaluable in that it provides fodder for further consistency/development in metacognitive research. In recent years, there has been a growing number of systematic reviews that synthesize empirical research studies on various issues in SLA via meta-analyses (see Norris & Ortega, 2006; Oswald & Plonsky, 2010; Plonsky & Gass, 2011). By producing a broad-stroke picture of empirical studies in the various sub-fields of SLA, these meta-analyses have allowed for reflection on what has been accomplished thus far in the field, and a sense of what it is that needs further exploration. A prerequisite to this kind of largescale research synthesis, however, is an ample number of empirical studies, and a consistent definition and operationalization of the constructs of interest. With a limited number of previous empirical studies investigating the role of metacognitive knowledge in L2 writing, the present study does not attempt to synthesize the research in this area by means of meta-analysis. Rather, it conducts a review of previous literature by applying scrutiny to the definitions and operationalizations of the construct metacognitive knowledge, and paves the way for future large-scale syntheses, once an adequate number of empirical studies have been accumulated. In addition to laying down the groundwork for meta-analyses of research in the domain of metacognitive knowledge, results from this study would create a more solid foundation upon which theoretical models of L2 writing can be built. Previous studies (e.g., Sasaki & Hirose, 1996; Schoonen et al., 2003) have found conflicting results in their studies with regard to how variables in L2 writing (e.g., metacognitive knowledge, L1 writing proficiency, L2 language proficiency, L2 writing proficiency) relate to one another. At the core of the problem lies the different operationalizations of what metacognitive knowledge means in the context of L2 103 writing. Thus, a clearer conceptualization and operationalization of metacognitive knowledge in L2 writing will help researchers build a more accurate model of the different variables that affect and are affected by L2 writing. The study of metacognitive knowledge in L2 writing also offers insight into other strands of writing research. For example, it can provide a more complete picture of the temporal aspects of L2 writing (e.g., Roca de Larios, Manchon, Murphy, & Marin, 2008) by allowing insight into what metacognitive resources L2 writers access at each stage of writing (i.e., planning, drafting, revising). It can also shed light on issues in task-based language learning, for instance, the effect of planning on L2 learners’ performance on a pedagogical writing task. Also, research on metacognitive knowledge can add to the research on individual differences and their influence on L2 writing performance. 
The present study also fills a research gap with regard to writing as a domain of language learning in both the fields of Educational Psychology and SLA. Researchers in both fields who investigate metacognition and/or language learning strategies (e.g., Plonsky, 2011; Veenman, 2011; Wenden, 1998) have noted the lack of research on these topics in relation to writing, and have called for more empirical investigation in these areas. Finally, the development of a psychometrically valid self-report instrument carried out in this study serves as an example of the procedures needed for instrument validation. It also contributes to the discussion of validity issues regarding self-report instruments (e.g., Tseng et al., 2006; Veenman, 2005), particularly with regard to whether self-report instruments are a valid reflection of the construct of metacognitive knowledge.

Pedagogical Implications

While this study certainly has important implications for theory building and research in SLA by means of establishing a definition and developing a measure for metacognitive knowledge, another significant aspect of the study is its potential to enhance pedagogical practices that aim to develop L2 students' awareness and use of their metacognitive knowledge in writing. Increased metacognitive knowledge can encourage L2 students to take ownership of their learning and become more reflective, autonomous learners both inside and outside of the classroom. Within a classroom setting, the metacognitive knowledge test developed in the present study can be used as a diagnostic tool for classroom teachers who would like to accurately assess their students' current metacognitive knowledge about writing. Oftentimes teachers administer diagnostic tests at the beginning of a new course to gauge students' language and writing proficiency. These tests often comprise items that assess students' grammatical knowledge of the language, or they resemble writing tasks that students will be expected to complete throughout the course. While the results of these tests give teachers some indication of students' language and writing proficiency, they do not offer much information about students' beliefs with regard to their self-efficacy in L2 writing, how students analyze and approach academic writing tasks, or students' ability to select and deploy effective writing strategies. The instrument developed in the present study would be easy to administer in a short amount of time within a classroom environment, and would provide teachers with a wealth of information about their students' person, task, and strategy knowledge regarding writing. These insights, in addition to other information gained through traditional diagnostic tests, would enable teachers to obtain a more comprehensive picture of students as language learners and writers, and to address areas of metacognitive knowledge in which their students are lacking. In addition to using the metacognitive knowledge questionnaire as a diagnostic tool and springboard for a course that nurtures more self-directed L2 writers, completing the metacognitive knowledge questionnaire in itself can also serve as a way to raise students' awareness. Several participants in the present study commented on the usefulness of the metacognitive knowledge questionnaire, think-aloud protocols, and interviews in providing them with the opportunity to reflect on themselves as language learners and writers.
While these activities were used as data collection methods in the current study, they can be employed in classroom settings as effective awareness-raising activities as well. The results of this study also have implications for designing and implementing pedagogical writing tasks within a writing classroom. In this study, it was found that metacognitive knowledge of strategy and problem solving made significant contributions to predicting L2 writing proficiency. These findings, combined with theoretical discussions of learning environments that require problem solving and the activation of metacognitive knowledge (e.g., Flavell, 1979; Livingston, 1997), suggest that it would benefit L2 writers to engage in pedagogical writing tasks that require active problem solving and the deployment of metacognitive strategies. Thus, while it is useful to have students write about familiar topics and work with familiar genres to practice their L2 writing, it is also important to design and implement writing tasks that challenge L2 writers to actively access their metacognitive knowledge of writing. For example, in the present study, an additional layer of complexity was added to an otherwise familiar writing prompt in order to create an environment in which students would have to actively plan, monitor, and evaluate their writing. Also, instead of simply assigning academic writing tasks for students to work on their own, it could be helpful to discuss and analyze sample writing tasks together in class. This could 106 exemplify effective ways of approaching the writing task, for example, identifying the purpose of the task, its requirements, and constraints. This practice extends students’ focus from formulating words and sentences to regulating and overseeing their overall writing process to achieve their goal for the task. This, in turn, reinforces students’ existing task knowledge and addresses possible gaps in other aspects of their metacognitive knowledge as well. Meanwhile, the metacognitive knowledge test also has potential for applications outside the classroom. With the wide-spread development of computer-assisted language learning (CALL), more and more language learning is taking place outside the classroom in various forms. However, thus far, tools to help students monitor and evaluate their learning in these distance learning contexts have not been readily available. This is especially true in the area of L2 writing, with existing online self-assessments mostly limited to tests of grammar and/or vocabulary. The instrument developed in the present study provides L2 writers with the opportunity to autonomously monitor and self-evaluate their metacognitive knowledge, especially in learning environments where self-assessment carries greater importance. Limitations There were several limitations to this study. While a large sample size is desirable for most quantitative research studies, it is especially important for studies that use statistical procedures that are conducted based on the assumption of an adequate sample size. Principal Component Analysis used in this study is one such procedure, and a larger sample size may have produced more reliable results. Also, in future studies, the scales for the measures of L2 proficiency, L2 writing proficiency, and metacognitive knowledge should be examined with an eye to improving their ability to capture variance in the data set. 
The small amount of variance in 107 the data may have been one reason why several of the metacognitive knowledge components were not found to have a significant effect on L2 writing performance. Future Research While the results obtained from this exploratory study contribute to a better understanding of L2 writers’ metacognitive knowledge, many questions remain, and thus call for further research. This study found that the context of research as well as participants’ previous experiences with instruction had an impact on the study results. This points to the necessity of cross-validating the metacognitive self-report instrument across different contexts and populations. This would allow for a generalization of the research results to a larger population, which would likely increase the practical utility of the instrument. To this end, a multiinvestigator research project on the validation of this self-report instrument in various learning contexts should yield meaningful results. Especially valuable would be the opportunity to validate the instrument with L2 participants of different language backgrounds or ages. This may also allow insight into the effects of L1 on metacognitive knowledge of L2 writing, and also offer the opportunity to study the development of metacognitive knowledge as well. Once a cross-validated self-report instrument is obtained, the next logical step would be to investigate the development of metacognitive knowledge. Some questions to ask would be: Do metacognitive knowledge and L2 proficiency develop in step or independently? Is the development of metacognitive knowledge incremental, or is it expedited by certain contexts or variables? In investigating the development of metacognitive knowledge, a longitudinal study design using qualitative methods such as in-depth interviews, journals, and observations could be adopted to gain a more rich and personal account of the development of metacognitive knowledge in writing. 108 Armed with greater knowledge of the construct and reliable measures for its assessment, it would be interesting to investigate metacognitive knowledge in relation to other variables in SLA as well. For example, metacognitive knowledge could be examined in relation to other individual differences such as motivation and language aptitude, or in relation to task characteristics such as task complexity. 109 CHAPTER 5: CONCLUSION This dissertation has provided an overview of the existing literature on the definitions and measures of metacognitive knowledge in language learning. In doing so, the study has highlighted problems with the confounding definitions and unreliable measures used in previous research which have made it difficult to draw any meaningful conclusions about metacognitive knowledge both in terms of theory and practice. In response, the present study identified six metacognitive knowledge components from data collected with a 31-item self-report instrument. The metacognitive knowledge components identified in this study were metacognitive knowledge of: 1) task, 2) personal learning process, 3) strategy, 4) text and accuracy, 5) problem solving, and 6) discourse features. These metacognitive knowledge components mostly converged with those described in early theoretical literature on metacognitive knowledge (e.g., Flavell, 1979), which is meaningful in that the results of this study provide empirical support for the components of metacognitive knowledge that have been proposed in theory but not empirically tested. 
Meanwhile, the study results also revealed that the different components of metacognitive knowledge were influenced by the particular context of the study, which in this case was an environment that placed heavy emphasis on standardized tests and assessment. This finding, in addition to others gleaned from introspective data remind us that large-scale quantitative studies provide the best insights when complemented with contextualized, qualitative data. With this in mind, future studies on metacognitive knowledge would benefit from greater attention to the instructional contexts in which the research is conducted as well as students’ previous literacy experiences and language learning backgrounds. 110 The self-report instrument of metacognitive knowledge was developed through a multiphase process of item compilation, questionnaire administration and evaluation. This process was carefully documented in this dissertation in hopes that it will allow for future replication and reliable instrument development. The resulting instrument was found to be reliable for the most part with the exception of the test for metacognitive knowledge of problem solving. Several reasons were speculated in relation to the nature of the scale and the construct, but further research is needed for better insight into the issue. Finally, the findings indicated that both metacognitive knowledge and L2 proficiency were able to predict L2 writing performance. Not surprisingly, L2 proficiency was the most significant predictor of learners’ L2 writing performance, and metacognitive knowledge of strategy use and problem solving also made statistically significant contributions. The main contribution of this study is that it serves as a starting-point for future investigation into the study of metacognition and its relationship with language learning. With further cross-validation across different learning contexts and learner populations, the self-report instrument developed in this study will serve as a valuable assessment tool for both research and instruction. 111 APPENDICES 112 Appendix A English Proficiency Test (DIALANG Grammar Section) English Proficiency Test Choose the best word for the gap in the following sentences: 1. Do you know who he was talking _____ on the phone? 1) to 2) at 3) for 2. ___________ late tonight or can we go to the cinema? 1) You are working 2) Are you working 3) Did you work 3. The price of gas has gone up again. _______ have increased the price to reduce the use of the car. 1) They 2) One 3) Each 4) It 4. I’m sorry. I have _______ much work to do that I can’t meet you this week. 1) so 2) such 3) very 4) too 5. I have never seen _________ beautiful girl in my life. 1) such a 2) such an 3) a such 4) such 6. Something always goes wrong when ____ is my turn to work on the machine. 1) there 2) here 3) it 4) its 7. I’ve nearly finished. There’s very _________ left to do. 1) little 2) less 3) least 8. She gave me her ___________ when I got my degree. 1) congratulations 2) congratulates 3) congratulation 9. The dog ate _______ food and went to sleep. 1) it’s 2) its 3) its’ 10. There can be __________ reasons why people fail their driving test. 1) dozens 2) dozen of 3) dozens of 11. Charles Dickens was a successful writer in his own country, and he was also known in America. _________works were enthusiastically received there, too. 1) Dickens’ 2) Dicken’s 3) Dickenses 4) Dickenses’ 5) Dickens’es 12. Everybody’s here ______________? 1) are they 2) aren’t they 3) isn’t he 113 4) is he 13. 
I was counting ______ my father to help pay for my studies. 1) with 2) for 3) on 14. He left his ______ in my apartment while he went on holiday. 1) belongings 2) belonging 3) belongs 15. People use a local language at home, but English _______ in all government offices. 1) speaks 2) is spoken 3) is speaking 4) was speaking 16. She asked me _____________ 1) what is my name 2) what my name was 3) what my name are 4) what my name be 17. The man with _____________ I’m supposed to be traveling hasn’t turned up yet. 1) which 2) whom 3) what 4) who 18. A ___________ car is handy for a family with children. 1) five door 2) fifth door 3) five doors 4) five door’s 19. _____________ hearing the news, he decided not to go on holiday. 1) At 2) On 3) By 20. Make a sentence with the three words listed below and write it in the blank. Use all the words.  drink, Pepsi, sometimes  I don’t always drink Coke. I __________________. 21. Make a sentence with the words listed below and write it in the blank. Use all the words.  I, get, up, on weekends, always, late  ______________________________________ 22. Write the correct form of the word ‘child’ in the blank.  This is our first child but we both want at least three more ______________. 23. What is the best word or group of words for the gap in the sentence? Write it in the blank.  Please, no more of this, and ______________ speak before you are asked to! 24. Write the correct form of the word ‘come’ in the blank.  Her criticism brought me to the sudden realization, which __________ as something of a shock, that I was wrong. 25. Write the correct form of the word ‘knife’ in the blank.  We don’t need forks and ____________. We can eat with our hands. 26. What is the best word or group of words for the gap in the sentence? Write it in the blank.  He _______________ there by now. He left two hours ago and it’s only a 20 minute 114 walk. 27. What is the best word or group of words for the gap in the sentence? Write it in the blank.  Tell Auntie what it is. _________________ shy! 28. Fill in the gap in the following sentence. Use the correct form of the word ‘talk’.  The boss _____________________ with his visitor for two hours now. I hope it isn’t bad news. 29. What is the best word or group of words for the gap in the sentence? Write it in the blank.  Mom’s pretty baby! Mom’s pretty baby! ____________ pretty baby are you? Mom’s! Yes, you are Mom’s own pretty baby! 30. Fill in the gap in the sentence. Use the correct form of ‘captivate’.  Perhaps you would like to say something about how you came to _______________ by this topic. 115 Appendix B Argumentative Writing Task Prompt 1 Your university has just announced that it is considering several different businesses to move into a small space that has opened up in the Student Union building on campus. The university has invited students to voice their opinions in the school newspaper about which business they would like to have in the building. The university has stated that they will give priority to businesses that fulfill all of the following requirements: 1) The business is environmentally friendly. 2) The business helps improve students’ academic performance. 2 3) The business will be able to fit into a small space (30 m ). Choose one the following businesses and write a short article for the school newspaper, arguing that it is the best choice for the new space in the union. Famous coffee franchise Bookstore Fast food restaurant Figure 6. Choices for the new space in the student union. 
(For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation.) You have 40 minutes to complete this essay. The essay should be approximately 250-300 words in length. Remember that it is important to consider how well the business matches the five requirements set by the University. 116 Appendix C Argumentative Writing Task Prompt 2 Housing for Exchange Students Your university is developing an exchange student program. Exchange students will have the chance to study for 9 months at a university in the United States. Your university is currently considering several different housing arrangements for these exchange students. Read the following descriptions of each housing arrangement, and choose the one you think would be best. Then, write a short essay for the university newspaper, arguing that it is the best choice for exchange students. You have 40 minutes to complete this essay. The essay should be approximately 250 - 300 words in length.     Home-stay with an American family 20 minutes by car from campus Home-cooked meals provided $1000/month  Dormitory with fellow Korean students from your university Located on campus Cafeteria meals provided $700/month        Figure 7. Choices for housing for exchange students. 117 University apartment with roommates from various language and cultural backgrounds 15 minutes walking-distance from campus No meals provided $400/month Images retrieved from : http://lashell8908.wordpress.com/2010/10/28/southeast-hostsfamily-home-stay-program/ http://mediumutm.ca/blog/2011/09/06/how-to-stay-sane-on-residence/ http://www.sulit.com.ph/index.php/view+classifieds/id/2479986/%3A 118 Appendix D Language Learning Background Questionnaire PART 2 The following are questions about your English learning experiences. Read each item carefully, and place a check (√) mark next to the appropriate answer, or fill in the blank with a brief answer. 1. Name:_______________________ 2. Gender: □Male □Female 3. Year of Study: □Freshman □Senior □Sophomore □Junior 4. Major:_______________________ 5. I studied/am studying English from ___ to ___ year old. 6. Indicate the number that best represents your English proficiency.      Overall English: Reading: Writing: Speaking: Listening: (1: Minimal, 2: Basic, 3: Good , 4: Very good, 5: Excellent) □1 □2 □3 □4 □5 □1 □2 □3 □4 □5 □1 □2 □3 □4 □5 □1 □2 □3 □4 □5 □1 □2 □3 □4 □5 7. Have you ever lived in an English-speaking country (for example, USA, UK, Canada, Australia, Philippines, Singapore, Hong Kong)? □Yes □No If yes: Country: _____________ Length of Residence: ________ months 8. Have you taken a standardized English test (for example, TOEFL, TOEIC, TEPS, IELTS) □Yes □ No  Test: _____________________  Approximate date: Year_________ Month ________  Score: _________ 9. Have you received any English instruction? 119 □Yes □No 10. If yes, please briefly describe your experience (e.g., what kind of course, private/group tutoring) in the following box: When: Course title: Course description: 120 Appendix E Language Learning Background Questionnaire in Korean 제2부 다음은 본인의 영어 학습 경험에 대한 질문입니다. 각 문항을 잘 읽어보신 후, 그 문항에 알맞은 답에 체크(√) 표시를 하거나 간단하게 서술하여 주십시오. 성별: 1. □남 나이: _____________________ □여 3.전공:_______________________ 4. 학년: □1 □2 □3 □4 11.나는 영어 학습을 _____살부터 _____살 까지 했다/하고 있다. 12.자신의 영어 능력을 가장 잘 설명하는 숫자를 골라 표기하여 주십시오. 
(1: 최소한, 2: 기초적, 3: 준수한 , 4: 우수한, 5: 탁월한)  전반적 능력 : □1 □2 □3 □4 □5  읽기 능력: □1 □2 □3 □4 □5  쓰기 능력: □1 □2 □3 □4 □5  말하기 능력: □1 □2 □3 □4 □5  듣기 능력: □1 □2 □3 □4 □5 13.영어권 국가 (예시: 미국, 영국, 호주, 필리핀, 싱가폴, 홍콩) 거주 경험은? □있다 □없다 있다면: 나라: _____________ 거주 기간: ________ 개월 14.토플/토익/텝스/IELTS 등 영어 시험을 보신 적이 있습니까?  시험 이름: _____________________  대략적인 시험 날짜: ____________년 ________월 121 □예 □ 아니오  점수: _________ 15.영작문 지도를 받은 경험이 있습니까? □예 □ 아니오 16.만약 있다면, 언제, 어떠한 수업을 들었는지 (혹은 개인/그룹 지도를 받았는지), 간략하게 설명해 주십시오.  언제 (학년):  어디에서:  어떠한 영작문 수업/지도: 122 Appendix F Semi-structured Interview Questions General Questions What do you think good writing is?  Is this the same in both Korean and English? What makes a good writer? How do you feel about writing Korean?   What type of writing do you do in Korean? What are your strengths and weaknesses when writing in Korean? How do you feel about writing in English?   What type of writing do you do in English? What are your strengths and weaknesses when writing in English? Can you describe any experiences you have with (English/Korean) writing instruction?    When and where did this instruction take place? Can you describe what you did in class? Can you tell me about the instructor’s feedback? Specific Questions How did you feel about writing the English essay for this study? Can you tell me what you did before starting to write the essay?    Do you usually plan before you write? If yes: o What do you plan? (For example, main ideas, number of paragraphs…) o What are some things you do when planning? (For example, outlining) o Why do you do these things? o How specific are your plans for writing? If no: o Can you describe what you do at the beginning of your writing process? What came to your mind when you read the essay writing prompt? Can you tell me what you thought about during writing?   Did you think about the plans you made (if you made plans) while you are writing? Did you make any new plans while you are writing? 123   Did you re-read what you have already written? o Why, or why not? What were the things that you paid the most attention to while you were writing your essay? Can you describe what you did as you were finishing up your essay? What were the things that you paid most attention to when you were finishing up your essay? Did you make any changes to your essay?   If yes: o When did you make these changes? o Why did you make them? o Do you usually make changes to your writing? If no: o How did you feel about what you had written? o Do you usually make changes to your writing? How did you decide when to stop writing? How did you feel about your essay after you completed it? Did you have any difficulties writing the essay?   If yes: o What did you do to overcome these problems? If no: o What do you generally do when you run into problems when writing? Do you have any strategies you use when writing? Do you think in Korean or English when you write an English essay? (Korean Translations) 일반적인 질문 좋은 글이란 무엇이라고 생각하세요?  그것은 한국어 작문과 영작문에 있어서 같은가요? 글을 잘 쓰는 사람은 어떠한 사람이라고 생각하세요? 본인의 한국어 글쓰기에 대해 어떻게 생각하세요?  주로 어떤 종류의 한국어 글쓰기를 하시나요? 124  한국어로 글을 쓸 때 본인의 장단점이 무엇이라고 생각하세요? 본인의 영작문에 대해 어떻게 생각하세요?   주로 어떤 종류의 영작문을 하시나요? 영어로 글을 쓸 때 본인의 장단점이 무엇이라고 생각하세요? 한국어 작문이나 영작문 지도를 받은 경험에 대해서 말씀해주실 수 있으세요?    언제 어디서 지도를 받으셨습니까? 수업 시간 활동에 대해 설명해주실 수 있으세요? 선생님이 어떠한 피드백을 주셨나요? Specific Questions 이번 실험 때 한 영작문에 대해 어떻게 생각하세요? 글을 쓰기 전에 무엇을 하셨는지 설명해 주실 수 있으세요?    주로 글을 쓰시기 전에 계획을 세우시나요? 만약 그렇다면: o 어떠한 것들을 계획하십니까? (예를 들어, 글의 요지, 문단 갯수…) o 계획을 세우기 위해서 어떤 것들을 하시나요? 
(예를 들어, 개요쓰기) o 계획을 세우기 위해 이런 것들은 하는 이유는 무엇인가요? o 글 쓰기 위한 계획은 얼마나 자세하게 세우시나요 ? 그렇지 않다면: o 글 쓰기 과정의 시작 부분에서 주로 무엇을 하시는지 설명해 주실 수 있나요? 영작문 과제에 대한 설명을 읽은 후에 어떤 생각이 들었나요? 글 쓰는 동안에 어떤 생각을 하셨는지 말씀 해주시겠어요?     만약 글쓰기 계획을 사전에 세우셨다면, 그 계획에 대해서 다시 생각하셨습니까? 글을 쓰는 동안에 새로운 계획을 세우셨나요? 이미 쓰신 부분의 글을 다시 읽어보셨나요? o 그렇다면 왜 그러셨는지, 그렇지 않다면 왜 그렇지 않은지? 글을 쓰는 동안 가장 신경을 많이 쓴 부분은 무엇입니까? 글 쓰기를 마무리하는 과정에서 무엇을 하셨는지 설명해 주실 수 있나요? 글 쓰기를 마무리하면서 가장 신경을 많이 쓴 부분은 무엇입니까? 이미 쓴 부분의 글을 추후에 고치셨습니까? 125   만약 그렇다면: o 언제 고치셨습니까? o 왜 고치셨나요? o 보통 글 쓰실 때 쓰신 글을 많이 고치시나요? 그렇지 않다면: o 다 쓰신 글에 대해 어떤 생각이 드셨나요? o 보통 글 쓰실 때 쓰신 글을 많이 고치시나요? 영어 작문 과제를 다 마쳤다는 결정을 어떻게 하셨습니까? 영작문을 마치신 뒤, 쓰신 글에 대해 어떤 생각이 드셨습니까? 글을 쓰는 데 혹시 어려움을 겪으셨습니까?   만일 그렇다면: o 그러한 어려움들을 극복하기 위해 무엇을 하셨습니까? 만일 그렇지 않다면: o 보통 글을 쓸 때 문제가 생기면 어떻게 해결하십니까? 글을 쓸 때 본인이 사용하는 전략이 있나요? 영작문을 할 때 한국어로 생각을 하십니까, 영어로 생각을 하십니까? 126 Appendix G Initial Pool of Questionnaire Items 1) “Questions about yourself and what you know about texts, reading, and writing” NELSON Project Questionnaire used in: Schoonen, van Gelderen, Glopper, Hulstijn, Simis, Snellings, & Stevenson (2003). First language and second language writing: The role of linguistic knowledge, speed of processing, and metacognitive knowledge. Language Learning, 53(1), 165-202. 2) “Metacognitive strategies questionnaire” from Purpura, J. (1997). An analysis of the relationships between test takers’ cognitive and metacognitive strategy use and second language test performance. Language Learning, 47, 289-325. 3) “Self-assessment questionnaire” from O’Neil, H., & Abedi, J. (1996). Reliability and validity of a state metacognitive inventory: Potential for alternative assessment. The Journal of Educational Research, 89, 234-245. 1. The purpose of paragraphs is to make sure that there are not too many sentences in a row. 2. Words like first and second are used to show that a text contains several things about the same topic. 3. Changing the order of information in a text, can change the meaning of the text as a whole. 4. To be able to understand a text properly, you sometimes need to know things that are not said in the text. 5. Words like furthermore, besides and moreover are used to show that the writer is adding some extra information. 6. Words like he and she often refer to people the writer has not mentioned before. 7. Not all parts of a text deal with the main idea of the text. 8. All the sentences in a text are equally important for the main idea of the text. 9. There is one possible word order in a sentence. 10. A paragraph usually has more than one main idea. 11. Words like but and however are followed by something which differs from what has already been said. 12. The title often tells you what a text is about. 13. Texts in English and in Korean are clearly organized differently. 14. The new information contained in a sentence is usually put at the beginning of the sentence. 15. The order of the information in a text is usually unimportant. 16. Spoken language and written language are exactly the same. 17. The way texts are divided into paragraphs is generally the same in English and Korean. 18. The word order in an English sentence is often different from the word order in a Korean sentence. 19. Texts sometimes differ in the way they are organized. 20. If you can’t think of a word in English, it is a good idea to leave a blank space in the 127 sentence. 21. When you think of a sentence, the most important thing is to make sure that the idea you want to express is clear. 22. 
Before you start writing, it is useful to think of the things you want to write about the topic. 23. If you have to write a difficult sentence in English, it is important to translate it as literally as possible from Korean. 24. After you have finished writing, it is wise not to change anything in the text. 25. It is a good thing to start writing as soon as you have read the topic you have to write about. 26. When you are writing, it is a good idea to think of one idea at a time, write it down, and then think of a new idea. 27. It is wise not to think too much about your readers and what they know about the topic while you are writing. 28. While you are writing, it is useful to check that what you are writing makes sense. 29. Before you start to write, it is a good idea to decide in which order you will put your ideas. 30. It is sensible to think of the things you know about a subject after you have started writing. 31. It is sensible to think about the purpose of your text before you start writing. 32. Before you start writing, it is sensible to think of all the words you want to use in the text. 33. If you have difficulty finding a word in English, it is useful to look it up in a KoreanEnglish dictionary. 34. It is useful to always include everything you know about a topic in your text. 35. When you are having difficulty writing a sentence in English, it is useful to think of alternative ways of saying it. 36. After you have finished writing, it is sensible to read through the whole text again. 37. It is useful to read the writing assignment carefully before you start writing. 38. Before you start writing, it is useful to first think of as many different ideas about the topic as you can. 39. It is sensible to put your ideas in the text in the same order in which you think of them. 40. If it is difficult to finish a sentence, it is important to make sure you finish the sentence without changing the beginning of it. 41. When you are writing, it is a good idea to keep the organization of the whole text in mind. 42. If it is difficult to think of a word in English, it is sensible to stop writing until you can find the English word. 43. Before you start writing, it is useful to think about what you already know about the subject. 44. If you reread what you have written, it is sensible to check that the text is well-organized. 45. Before I write an English essay, I think about whether my grammar is good enough to express my ideas. 46. Before I start writing an essay, I try to see which parts will be easy and what parts will be difficult. 47. When I write an English essay, I plan what I am going to do so I can use my time well. 48. Before I hand in my English essay, I check my work. 49. I try to understand the purpose of the writing task. 128 50. When I reread what I wrote, I can identify places the reader might not understand. 51. I test my knowledge of English grammar rules by applying them to new situations when writing. 52. I test my knowledge of new English words by using them in new situations when writing. 53. I set goals for myself in English writing. 54. I think about whether I am making progress in learning how to write in English. 55. When I am writing a timed English essay, I know how much time has gone by. 56. Before I write an English essay, I think about how it will be scored. 57. When I write in English, I know when I make grammar mistakes. 58. After I have finished an English essay assignment, I think about how I can do better the next time. 59. 
I try to learn from the mistakes I make in English writing. 60. When I am writing an English essay, I try to concentrate on what I am doing. 61. Before I begin an English essay assignment, I think about whether I know enough English to do it. 62. Before I begin an English essay, I decide how important it is for me to get a good grade on the test. 63. Before I use my English, I think about how I can ask for help if I cannot express myself clearly or if I do not know a word. 64. I try to find out all I can about language learning by reading books or articles. 65. After I write something in English, I think about whether the reader will understand what I mean. 66. Before I write an essay, I think about how much the reader knows about what I'm going to write. 67. I was aware of my own thinking when writing in English. 68. I was aware of which thinking technique or strategy to use when writing in English. 69. I was aware of the need to plan my course of action when writing in English. 70. I was aware of my ongoing thinking processes when writing in English. 71. I asked myself how the essay prompt related to what I already knew. 72. I used multiple thinking techniques or strategies to write in response to the essay prompt. 73. I selected and organized relevant information to answer the essay prompt. 74. I tried to determine what the essay prompt required. 75. I made sure I understood what had to be done to write an effective essay. 76. I periodically checked my essay while I was writing it. 77. I corrected my errors as I was writing. 78. I almost always knew how much of the composition I had left to complete. 79. I kept track of my progress in writing and, if necessary, I changed my techniques or strategies. 80. I checked my accuracy as I progressed through writing. Appendix H List of Metacognitive Knowledge/Metacognition/Writing Questionnaires Consulted 1. Strategy Inventory for Language Learning (SILL) 2. Metacognitive Knowledge test – NELSON Project (Schoonen et al., 2011) 3. Writing Beliefs Questionnaire – Manchón, R. (2009). Individual differences in foreign language learning: The dynamics of beliefs about L2 writing. RESLA, 22, 245-268. 4. Sasaki's classroom questionnaire for metacognitive knowledge about expository writing 5. Metacognitive Awareness Questionnaire (Sinclair, 1999): Sinclair, B. (1999). Wrestling with a jelly: The evaluation of learner autonomy. In B. Morrison (Ed.), Experiments in evaluation in self-access language learning (pp. 95-109). Hong Kong: HASALD. 6. The Awareness of Independent Learning Inventory (AILI) for general education: Elshout-Mohr, M., Meijer, J., van Daalen-Kapteijns, M. M., & Meeus, W. (2003). A self-report inventory for metacognition related to academic tasks. Amsterdam: University of Amsterdam, SCO-Kohnstamm Instituut. (This is in Hayo Reinders's website list.)  Meijer et al. (2006). Construction and validation of a questionnaire on metacognition. Paper presented at the second bi-annual conference of the special interest group on Metacognition of EARLI, Cambridge. 7. Metacognitive Awareness Inventory (MAI, Schraw & Dennison, 1994): Assessing metacognitive awareness. Contemporary Educational Psychology, 19, 460-475. 8. Metacognitive Awareness of Reading Strategies Inventory, Version 1.0 (MARSI: Mokhtari & Reichard, 2002): Assessing students' metacognitive awareness of reading strategies. Journal of Educational Psychology, 94(2), 249-259. 9.
State Metacognitive Inventory (SMI, O'Neil & Abedi, 1996): Reliability and validity of a state metacognitive inventory: Potential for alternative assessment. Journal of Educational Research, 89(4), 234-245. 10. Writing Style Questionnaire: Kieft, M., Rijlaarsdam, G., & Van den Bergh, H. (2006). Writing as a learning tool: Testing the role of students' writing strategies. European Journal of Psychology of Education, 21(1), 17-34. Kieft, M., Rijlaarsdam, G., & Van den Bergh, H. (2008). An aptitude-treatment interaction approach to writing-to-learn. Learning and Instruction, 18, 379-390. Appendix I First Version of Questionnaire (64 items) English Writing and Metacognitive Knowledge This questionnaire is part of a research study being carried out in Applied Linguistics at Michigan State University. The purpose of the task is to encourage English language learners to reflect on their English writing processes. Part 1 consists of questions that ask about your thought processes when you write in English, and Part 2 asks about your English learning experiences. The answers you provide will strictly be used for research purposes only. There are no correct or incorrect answers, so your honest responses will be much appreciated! Please read each of the following statements carefully, and circle (o) the number that corresponds to the degree to which you agree with the given statement. 1 = strongly disagree 2 = disagree 3 = somewhat disagree 4 = somewhat agree 5 = agree 6 = strongly agree Example: Let's say that you like to study English very much, and you are not too stressed out about it. In this case, you would circle (o) the appropriate number on the questionnaire, as shown in the following example. 1 I like studying English. 1 2 3 4 5 6 2 I am stressed about studying English. 1 2 3 4 5 6 Figure 8. First version of questionnaire example instructions. Please respond to the following statements following the example above. (Each statement is rated on the 1-6 agreement scale above.) 1. I am aware of the role that paragraphs play in the organization of an essay. 2. When writing an English essay, I carefully think of the things I want to write about on the topic. 3. I think about the roles that words like but and however play in the organization of an English essay. 4. When writing an English essay, I try to always include everything I know about a topic in the text. 5. When writing an English essay, I try to start writing as soon as I have read the topic I have to write about. 6. When writing an English argumentative essay, I think about what I should do to make it persuasive. Figure 9. First version of questionnaire. 7. Before I begin writing an English essay, I decide how important it is for me to successfully finish the writing task. 8. I periodically check my English essay while writing it. 9. I am aware of the differences and similarities between texts in English and in Korean. 10. I am aware of the function that words like furthermore, besides and moreover perform in the organization of an English essay. 11. I do not feel that it is important to read the writing assignment carefully before writing an English essay. 12. Before I hand in my English essay, I check my work. 13. I think about whether my way of studying English writing is effective. 14. Before I begin an English essay assignment, I think about whether I know enough English to successfully complete the task.
15 When I write an English essay, I believe it is a good idea to keep the organization of the whole text in mind. 16 When I write an English essay, I am aware of how much of the composition I have left to complete. 17 I am aware of an effective strategy to use when writing an English sentence and I can’t think of a word in English. 18 When writing an English essay, I am aware of effective ways to express my ideas even when it is difficult to finish a sentence. 19 When writing an English essay, I think about the roles that words like first and second play in organizing the text. 20 I am aware of the difference in word order between an English sentence and in a Korean sentence. 21 Before I start writing an English essay, I try to predict which parts will be easy and what parts will be difficult. 22 I am aware of my own thinking when writing in English. 1 2 3 4 5 6 23 Before I start writing an English essay, I think about the words I want to use in the text. 24 Before I start writing in English, I think about what I already know about the subject. 25 I am aware of the need to plan my course of action when writing in English. 26 When I write an English essay, I plan what I am going to do so I can use my time well. 27 When writing in English, I try to understand the purpose of the writing task. 28 When I reread what I wrote, I look for places the reader might not understand. 1 2 3 4 5 6 133 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 Figure 9 (cont’d) 29 When writing an English essay, I think about how to select and organize relevant information to answer the essay prompt. 30 I set goals for myself in English writing. 1 2 3 4 5 6 31 I am not aware of which thinking technique or strategy to use when writing in English. 32 When writing an English essay, I think about in which order I should organize my ideas in the text. 33 I try to learn from the mistakes I make in English writing. 1 2 3 4 5 6 34 I do not give much thought to the evaluation criteria when working on an English writing task. 35 I am aware of the differences and similarities between texts in English and in Korean. 36 I check my accuracy as I progress through writing. 1 2 3 4 5 6 37 When I write in English, I am aware of when I make grammar mistakes. 38 I am aware of the differences between spoken and written English. 1 2 3 4 5 6 39 I think about whether I am making progress in learning how to write in English. 40 Before I start to write an English essay, I think about the order in which I will put my ideas. 41 When writing an English essay, I correct my errors as I write. 1 2 3 4 5 6 42 I believe I can improve my English writing on my own. 1 2 3 4 5 6 43 I am aware of how changing the order of information in a text can change the meaning of the text as a whole. 44 I am aware of effective ways to use a dictionary when writing in English. 45 When writing in English, I think about the order of the information I am presenting. 46 When I am writing an English essay, I try to concentrate on what I am doing. 47 I try to think of as many different ideas about the topic as I can when writing an English essay. 48 When writing an English essay, I think about what the title tells the readers about the essay. 49 After I finish an English essay writing task, I think about how I can do better the next time. 
50 After I finish writing an English essay, I reread what I have written and check if the text is well-organized. 1 2 3 4 5 6 134 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 Figure 9 (cont’d) 51 I use multiple thinking techniques or strategies to write in response to the essay prompt. 52 I am aware of aspects of my English writing that need improvement. 1 2 3 4 5 6 1 2 3 4 5 6 53 When writing an English essay, I think about what needs to be done to 1 2 3 4 5 6 successfully finish the task. 54 I am aware of how my English writing proficiency compares with my 1 2 3 4 5 6 Korean writing proficiency. 55 After I have finished writing, I read through the whole text again. 1 2 3 4 5 6 56 I am aware of effective ways to use Korean translation to solve the problem when I have difficulty writing an English sentence. 57 Before I write an English essay, I think about how much the reader knows about what I'm going to write. 58 Before I start an English essay writing task, I try to determine what the essay prompt requires. 59 I think about whether the organization of my English essay is effective for the particular genre of writing. 60 When writing in English, I think about how relevant the essay topic is to my background knowledge. 61 I am aware of my English writing proficiency compared to my proficiency in English speaking, listening, and reading. 62 When writing a timed-English essay, I think about how much time has passed as I write. 63 I am aware of my level of aptitude for English writing. 1 2 3 4 5 6 64 After writing an English essay, I think about effective ways to incorporate my teacher’s feedback into my writing. 1 2 3 4 5 6 135 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 Appendix J Second Version of Questionnaire (45 items) English Writing and Metacognitive Knowledge This questionnaire is part of a research study being carried out in Applied Linguistics at Michigan State University. The purpose of the task is to encourage English language learners to reflect on their English writing processes. Part 1 consists of questions that ask about your thought processes when you write in English, and Part 2 asks about your English learning experiences. The answers you provide will strictly be used for research purposes only. There are no correct or incorrect answers, so your honest responses will be much appreciated! Please read each of the following statements carefully, and circle (o) the number that corresponds to the degree to which you agree with the given statement. 1 = strongly disagree 2 = disagree 3 = somewhat disagree 4 = somewhat agree 5 = agree 6 = strongly agree Example: Let’s say that you like to study English very much, and you are not too stressed out about it. In this case, you would circle (o) the appropriate number on the questionnaire, as shown in the following example. 1 I like studying English. 1 2 3 4 5 6 2 I am stressed about studying English. 1 2 3 4 5 6 Figure 10. Second version of questionnaire example instructions. Please respond to the following statements following the example above. 1 2 3 4 5 6 When writing an English argumentative essay, I think about what I should do to make it persuasive. Before I start an English essay writing task, I try to determine what the essay prompt requires. I do not feel that it is important to read the writing assignment carefully before writing an English essay. 
Before I start writing an English essay, I try to predict which parts will be easy and what parts will be difficult. When I write an English essay, I plan what I am going to do so I can use my time well. Before I begin an English essay assignment, I think about whether I know enough English to successfully complete the task. Figure 11. Second version of questionnaire. 136 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 Figure 11 (cont’d) 7 When writing an English essay, I carefully think of the things I want to write about on the topic. I am aware of the need to plan my course of action when writing in English. I am aware of my level of aptitude for English writing. 1 2 3 4 5 6 10 When writing an English essay, I think about how to select and organize relevant information to answer the essay prompt. 11 Before writing an English essay, I ask myself how the essay prompt relates to what I already know. 12 Before I begin writing an English essay, I decide how important it is for me to successfully finish the writing task. 13 Before I start to write an English essay, I think about the order in which I will put my ideas. 14 When writing in English, I try to understand the purpose of the writing task. 15 I am aware of effective ways to use a dictionary when writing in English. 16 I am aware of aspects of my English writing that need improvement. 1 2 3 4 5 6 17 I am aware of English writing strategies that are not effective. 1 2 3 4 5 6 18 I think about whether the organization of my English essay is effective for the particular genre of writing. 19 I am aware of the function that words like furthermore, besides and moreover perform in the organization of an English essay. 20 I periodically check my English essay while writing it. 1 2 3 4 5 6 21 I am not aware of the difference in word order between an English sentence and in a Korean sentence. 22 When I write in English, I think about whether the English expressions I use sound natural to native speakers of English. 23 I am aware of the differences between spoken and written English. 1 2 3 4 5 6 24 When writing an English essay, I think about what the title tells the readers about the essay. 25 I am not sure what to do when I am writing an English essay and I can’t find the right word. 26 I check my accuracy as I progress through writing. 1 2 3 4 5 6 27 I am aware of the differences and similarities between texts in English and in Korean. 28 I think about the roles that words like but and however play in the organization of an essay. 1 2 3 4 5 6 8 9 137 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 Figure 11 (cont’d) 29 When I write an English essay, I am aware of how much of the composition I have left to complete. 30 I am aware of alternative ways of saying what I mean when I have difficulty writing a sentence in English. 31 I think about whether the way I study English writing is effective. 1 2 3 4 5 6 32 I am aware of effective ways to use Korean translation to solve the problem when I have difficulty writing an English sentence. 33 When writing an English essay, I think about the roles that words like first and second play in organizing the text. 34 I am not aware of which thinking technique or strategy to use when writing in English. 35 Before I hand in my English essay, I check my work. 1 2 3 4 5 6 36 I try to learn from the mistakes I make in English writing. 
1 2 3 4 5 6 37 I am aware of my own thinking when writing in English. 1 2 3 4 5 6 38 I think about whether I am making progress in learning how to write in English. 39 After writing an English essay, I think about effective ways to incorporate my teacher’s feedback into my writing. 40 I do not give much thought to the evaluation criteria when working on an English writing task. 1 2 3 4 5 6 41 I am aware of which type of feedback (teacher, peer, self) is most effective for improving my English writing. 42 I am aware of how my English writing proficiency compares to my Korean writing proficiency. 43 I think about ways to improve my English writing proficiency on my own. 44 After I finish an English essay writing task, I think about how I can do better the next time. 45 I set goals for myself in English writing. 1 2 3 4 5 6 138 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 Appendix K Second Version of Questionnaire (45 items) in Korean 영어 작문과 초인지 본 설문지는 미시간 주립대학교 응용 언어학과에서 진행되는 연구 과제의 일부로, 영어 학습자들이 자신의 영어 작문 과정에 대해 체계적으로 생각해 볼 수 있도록 만들어졌습니다. 설문지 제 1 부는 영어 작문을 할 때 본인의 사고과정에 대한 질문들로 이루어져 있고, 제 2 부는 본인의 영어 학습 경험에 관한 질문들로 이루어져 있습니다. 여러분이 기재해 주신 답은 오로지 연구 목적으로 사용될 것이며, 정답은 없으니, 편안한 마음으로 솔직하게 답해주시면 되겠습니다. 감사합니다! 제1부 각 문항을 잘 읽어보신 후, 그 문항의 내용에 대해 본인이 동의하는 정도에 맞는 숫자에 동그라미(o)를 쳐주십시오. 간혹 다른 문항들과 반대 형식으로 쓰여진 문항들도 있으니, 각 문항을 신중히 읽고 답해주시기 바랍니다. 1 = 전혀 동의하지 않는다 2 = 동의하지 않는다 3 = 다소 동의하지 않는다 4 = 다소 동의한다 5 = 동의한다 6 = 매우 동의한다 예시: 여러분이 영어 공부를 하는 것을 아주 좋아하고, 영어 공부에 스트레스를 받고 있지 않다고 가정해봅시다. 이럴 경우, 설문지에 다음과 같이, 알맞은 숫자에 동그라미 (o)를 쳐서 답해주시면 됩니다. 1 나는 영어 공부를 하는 것을 1 2 3 4 5 6 좋아한다. 2 나는 영어 공부 때문에 1 2 3 4 5 6 스트레스를 받는다. Figure 12. Example instructions for second version of questionnaire in Korean. 1 나는 영어로 주장문을 쓸 때, 어떻게 하면 설득력 있는 글을 1 2 3 4 5 6 쓸 수 있을지 생각해본다. 2 나는 영어 작문을 할 때, 주어진 영작문 과제가 무엇을 요구하는지에 대해 생각해 본다. Figure 13. Second version of questionnaire in Korean. 139 1 2 3 4 5 6 Figure 13 (cont’d) 3 나는 영작문을 시작하기 전, 작문 과제에 대한 설명을 잘 1 2 3 4 5 6 살펴볼 필요성을 느끼지 못한다. 4 나는 영작문 과제를 시작하기 전에 내 앞에 놓인 영작문 1 2 3 4 5 6 과제의 어떤 부분이 쉽고, 어떤 부분이 어려울지에 대해 생각해본다. 5 나는 영작문을 시작하기 전, 주어진 시간을 어떻게 활용할지 1 2 3 4 5 6 생각해본다. 6 나는 영작문을 시작하기 전, 내가 영작문 과제를 성공적으로 1 2 3 4 5 6 마칠 만큼의 영어 능력을 가지고 있는지에 대해 생각해본다. 7 나는 영작문을 시작하기 전, 주어진 주제에 대해 쓸 내용을 1 2 3 4 5 6 생각해본다. 8 나는 영작문을 할 때 계획을 세우는 일의 중요성을 알고 1 2 3 4 5 6 있다. 9 나는 영작문에 내가 얼마만큼의 소질이 있는지 잘 모르겠다. 10 영작문을 할 때, 나는 주어진 영작문 주제에 알맞게 답하기 1 2 3 4 5 6 1 2 3 4 5 6 위해 나의 아이디어 중 주제와 가장 관련 있는 것을 선택, 정리해본다. 11 영작문을 할 때, 나는 주어진 영작문 주제가 나의 1 2 3 4 5 6 배경지식과 어떤 연관이 있는지에 대해 생각해본다. 12 나는 영작문 과제를 할 때, 영작문을 성공적으로 마치는 1 2 3 4 5 6 것이 나에게 얼마만큼 중요한지에 대해 생각해본다. 13 나는 영작문을 할 때, 내가 말하고자 하는 내용을 어떤 1 2 3 4 5 6 순서로 쓸지 계획을 세운다. 14 나는 영작문을 할 때 내가 쓰는 글의 목적이 무엇인지 별로 1 2 3 4 5 6 생각해보지 않는다. 15 나는 영작문을 할 때, 사전을 효과적으로 활용하는 방법을 1 2 3 4 5 6 알고 있다. 16 나는 영작문을 할 때 내가 개선해야 할 점이 무엇인지 알고 있다. 140 1 2 3 4 5 6 Figure 13 (cont’d) 17 나는 여러 가지 글쓰기 전략 중 나에게 도움이 되지 않는 1 2 3 4 5 6 전략들은 어떤 것들인지 알고 있다. 18 나는 서로 다른 장르의 영어 글을 써야 할 나의 글이 각각의 1 2 3 4 5 6 장르에 따라 알맞게 구성됐는지 생각해본다. 19 나는 영작문을 할 때, furthermore, besides 그리고 1 2 3 4 5 6 moreover 와 같은 단어들이 글을 구성하는 데 어떤 기능을 하는지 잘 생각하며 쓴다. 20 나는 영작문을 하는 동안 때때로 나의 글을 점검해본다. 1 2 3 4 5 6 21 나는 영어 문장의 어순이 한국어 문장의 어순과 비교했을 때 1 2 3 4 5 6 어떠한지 잘 모르겠다. 22 나는 영작문을 할 때, 내가 쓰는 영어 표현이 원어민이 1 2 3 4 5 6 읽었을 때 자연스러운 표현인지 생각해본다. 23 나는 영작문을 할 때 쓰이는 영어가, 영어 회화를 할 때 1 2 3 4 5 6 쓰이는 영어와 비교했을 때 어떤 특징을 가지고 있는지 알고 있다. 24 나는 영작문을 할 때 글의 제목이 독자에게 무엇을 알려줘야 1 2 3 4 5 6 할지 생각해본다. 25 나는 영작문을 하다가 적절한 영어 단어가 떠오르지 않을 때 1 2 3 4 5 6 어떻게 해결해야 할지 모르겠다. 
26 나는 영작문을 하는 동안, 내가 얼마나 정확한 문법을 1 2 3 4 5 6 사용하고 있는지 점검한다. 27 나는 영어로 쓰여진 글이 한글로 쓰여진 글과 비교했을 때 1 2 3 4 5 6 어떤 특징을 가지고 있는지 알고 있다. 28 나는 영작문에서 but 과 however 와 같은 단어들이 글의 1 2 3 4 5 6 구성에 어떤 역할을 하는지 생각하며 쓴다. 29 나는 시간 제한이 있는 영작문을 할 때, 글을 써가면서 얼마만큼의 시간이 지났는지 확인한다. 141 1 2 3 4 5 6 Figure 13 (cont’d) 30 나는 영어 문장을 완성하기 어려울 때, 내가 아는 다른 1 2 3 4 5 6 표현을 사용하여 문제를 효과적으로 해결하는 방법을 알고 있다. 31 나는 내가 영작문을 공부하는 방법이 효과적인지에 대해 1 2 3 4 5 6 별로 생각해보지 않는다. 32 나는 영어로 어려운 문장을 써야 할 때 한국어 번역을 1 2 3 4 5 6 활용하여 문제를 효과적으로 해결하는 방법을 알고 있다. 33 나는 영작문을 할 때 first 와 second 과 같은 단어들이 글의 1 2 3 4 5 6 구성에 어떤 역할을 하는지 생각하며 쓴다. 34 나는 영작문을 할 때 어떤 전략적 사고를 하는 것이 좋은지 1 2 3 4 5 6 잘 모르겠다. 35 나는 영작문을 마치고 글을 제출하기 전에 나의 글을 다시 1 2 3 4 5 6 한번 읽으면서 점검해본다. 36 나는 내가 영작문을 하면서 저지른 실수들을 통해 더 1 2 3 4 5 6 배워나간다. 37 나는 영작문에 관한 나만의 목표를 가지고 있다. 1 2 3 4 5 6 38 나는 내가 영작문 실력을 늘려가고 있는지에 대해 1 2 3 4 5 6 생각해본다. 39 나는 영작문 후 선생님께 받은 피드백을 나의 영작문 공부에 1 2 3 4 5 6 효과적으로 활용할 방법을 생각해본다. 40 나는 영작문 과제를 할 때 평가 기준에 대해 별로 1 2 3 4 5 6 생각해보지 않는다. 41 나는 어떤 종류의 피드백 (선생님께, 같은 반 학생에게, 1 2 3 4 5 6 스스로 점검 등)을 받아야 나의 영작문 공부에 도움이 되는지 알고 있다. 42 나는 나의 영작문 능력이 한국어 글쓰기 능력과 비교했을 때 어떠한지 알고 있다. 142 1 2 3 4 5 6 Figure 13 (cont’d) 43 나는 나의 영작문 능력을 스스로 향상시킬 수 있는 방법에 1 2 3 4 5 6 대해 생각해본다. 44 나는 영작문을 마친 후, 다음에 어떻게 하면 더 잘 쓸 수 1 2 3 4 5 6 있을지 생각해본다. 45 나는 영작문을 하는 동안 내 자신의 사고과정에 대해 스스로 잘 알고 있다. 143 1 2 3 4 5 6 Appendix L Stimulated Recall Instructions and Guidelines Instruction for research participants: What we’re going to do now is watch the video. We are interested in what you were thinking at the time you were participating in the activity. We can hear what you were saying by looking at and listening to the video, but we don’t know what you were thinking. So, what I’d like you to do is tell me what you were thinking, what was in your mind at that time while you were completing the task. I’m going to put the camcorder in front of you, so you can pause the video any time that you want. If you want to tell me something about what you were thinking at any time, you can press pause. If I have a question about what you were thinking, then I will press pause and ask you to talk about that part of the video. (Korean Translation) 이제 조금 전에 촬영한 비디오를 볼 것입니다. 저희는 OO 씨가 과제를 수행하는 동안 어떤 생각을 하고 계셨는지에 대해 관심을 갖고 있습니다. 저희는 비디오를 보고 들음으로써 OO 씨가 어떤 말씀을 하셨는지 알 수 있으나, 어떤 생각을 하고 계셨는지는 알지 못합니다. 따라서, 과제를 수행했던 그 당시에 어떤 생각을 하고 계셨는지 말씀해주시면 감사하겠습니다. OO 씨가 원하시면 언제든 비디오를 멈출 수 있게, 여기 OO 씨 앞에 캠코더를 놓겠습니다. 아무때나 OO 씨가 어떤 생각을 하고 계셨는지 말씀해주시려면 비디오를 멈추시면 됩니다. 간혹 제가 OO 씨에게 어떤 생각을 하고 계셨는지 질문 드릴 일이 있으면 비디오를 멈추고 OO 씨에게 그 부분의 비디오에 대해서 물어보겠습니다. Instructions for researcher collecting recall data: After reading the instructions to the participant, model stopping the video and asking a question. For example, choose a segment and stop the video. Ask your question. If they stop the video, listen to what they say. If you stop the video, ask something general like: What were you thinking here/at this point/right then? 144 Can you tell me what you were thinking at that point? I see you’re laughing/looking confused/saying something there, what were you thinking then? If the participant says “I don’t remember,” accept the comment and move on. “Fishing” for recall comments that were not immediately given by the participant will increase the likelihood that the recall comments will be based on what participants think now, some other memory/perception, or some flawed or biased recollection. 
Try not to focus or direct participants' answers beyond "what were you thinking then." If the participant is not stopping the tape, you should stop the tape at some of the following points: 1. When there is a long pause in the interaction. 2. When there appears to be a misunderstanding. 3. When the native speaker recasts. If the participant begins to talk over the tape, pause the tape and angle the remote control towards the participant so that she can release the pause when she is finished talking. Additionally, researchers should not give concrete reactions to participants' responses. Backchannelling or non-responses are preferable, for example, "Oh/mhm/I see/uh-huh/ok." It is important to avoid extended responses or three-part exchanges, because providing feedback or input to learners may alter the nature of their recall comments. In short, try to be a "warm body," not a conversational partner. Appendix M TWE Scoring Guide Score 6: Clearly demonstrates competence on both the rhetorical and syntactic levels, though it may have occasional errors. A paper in this category: is well organized and well developed; effectively addresses the writing task; shows unity, coherence, and progression; displays consistent facility in the use of language; demonstrates syntactic variety and appropriate word choice. Score 5: Demonstrates competence in writing on both the rhetorical and syntactic levels, though it will have occasional errors. A paper in this category: is generally well organized and well developed, though it may have fewer details than does a 6 paper; may address some parts of the task more effectively than others; shows unity, coherence, and progression; demonstrates some syntactic variety and range of vocabulary; displays facility in language, though it may have more errors than does a 6 paper. Score 4: Demonstrates minimal competence in writing on both the rhetorical and syntactic levels. A paper in this category: is adequately organized; addresses the writing topic adequately but may slight parts of the task; uses some details to support a thesis or illustrate ideas; demonstrates adequate but undistinguished or inconsistent facility with syntax and usage; may contain some serious errors that occasionally obscure meaning. Score 3: Demonstrates some developing competence in writing, but it remains flawed on either the rhetorical or syntactic level, or both. A paper in this category may reveal one or more of the following weaknesses: inadequate organization or development; failure to support or illustrate generalizations with appropriate or sufficient detail; an accumulation of errors in sentence structure and/or usage; a noticeably inappropriate choice of words or word forms. Score 2: Suggests incompetence in writing. A paper in this category is seriously flawed by one or more of the following weaknesses: failure to organize or develop; little or no detail or relevant specifics; serious and frequent errors in usage or sentence structure; serious problems with focus. Score 1: Demonstrates incompetence in writing. A paper in this category will contain serious and persistent writing errors, may be illogical or incoherent, or may reveal the writer's inability to comprehend the question. A paper that is severely underdeveloped, or one that exhibits no response at all, also falls into this category. REFERENCES Andrich, D., & Masters, G. N. (1988). Rating scale analysis. In J. P.
Keeves (Ed.), Educational research, methodology, and measurement: An international handbook (pp. 297-303). Oxford, UK: Pergamon. Bagozzi, R., & Yi, Y. (1988). On the evaluation of structural equation models. Journal of the Academy of Marketing Science, 16, 74-94. Belcher, D., & Braine, G. (1995). Introduction. In D. Belcher & G. Braine (Eds.), Academic writing in a second language (pp. xiii-xxxiv). Portsmouth, NH: Boynton/Cook. Chamot, A. (2001). The role of learning strategies in second language acquisition. In M. P. Breen (Ed.), Learner contributions to language learning: New directions in research (pp. 25-43). Harlow, England: Longman. Costello, A., & Osborne, J. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10(7), 1-9. Cotterall, S., & Murray, G. (2009). Enhancing metacognitive knowledge: Structure, affordances and self. System, 37, 34-45. Cronbach, L. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334. Cumming, A. (1989). Writing expertise and second language proficiency. Language Learning, 39(1), 81-141. DeCoster, J. (1998). Overview of factor analysis. Retrieved February 3, 2012, from http://www.stat-help.com/notes.html DIALANG – Project & System [Software]. (2002). Retrieved February 3, 2012, from http://www.dialang.org/english/summary.html/ DiStefano, C., Zhu, M., & Mindrila, D. (2009). Understanding and using factor scores: Considerations for the applied researcher. Practical Assessment, Research & Evaluation, 14(20), 1-11. Dörnyei, Z. (2003). Questionnaires in second language research: Construction, administration, and processing. Mahwah, NJ: Lawrence Erlbaum. Dörnyei, Z. (2005). The psychology of the language learner: Individual differences in second language acquisition. Mahwah, NJ: Lawrence Erlbaum. Dörnyei, Z., & Skehan, P. (2003). Individual differences in second language learning. In C. J. Doughty & M. H. Long (Eds.), The handbook of second language acquisition (pp. 589-630). Oxford: Blackwell. Ellis, R. (2004). Individual differences in second language learning. In A. Davies & C. Elder (Eds.), The handbook of applied linguistics (pp. 525-551). Oxford: Blackwell. Field, A. (2005). Discovering statistics using SPSS. London: Sage. Flavell, J. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911. Gan, Z., Humphreys, G., & Hamp-Lyons, L. (2004). Understanding successful and unsuccessful EFL students in Chinese universities. Modern Language Journal, 88, 229-244. Gass, S., & Mackey, A. (2000). Stimulated recall methodology in second language research. Mahwah, NJ: Lawrence Erlbaum. Gu, Y., & Johnson, K. (1996). Vocabulary learning strategies and language learning outcomes. Language Learning, 46, 643-679. Guadagnoli, E., & Velicer, W. (1988). Relation of sample size to the stability of component patterns. Psychological Bulletin, 103, 265-275. Hall, C. (1990). Managing the complexities of revision across languages. TESOL Quarterly, 24(1), 245-266. Hirose, K., & Sasaki, M. (1994). Explanatory variables for Japanese students' expository writing in English: An exploratory study. Journal of Second Language Writing, 3, 203-229. Hirose, K., & Sasaki, M. (2000). Effects of teaching metaknowledge and journal writing on Japanese university students' EFL writing. JALT Journal, 22, 94-113. Holec, H. (1981). Autonomy and foreign language learning. Oxford: Pergamon. Hyland, K. (2007).
Genre pedagogy: Language, literacy and L2 writing instruction. Journal of Second Language Writing, 16, 148-164. Jones, S., & Tetroe, J. (1987). Composing in a second language. In A. Matsuhashi (Ed.), Writing in real time: Modeling production processes (pp. 34-57). Norwood, NJ: Ablex. Jolliffe, I. (1972). Discarding variables in a principal component analysis, I: Artificial data. Applied Statistics, 21, 160-173. Jolliffe, I. (1980). Principal component analysis. New York: Springer-Verlag. Kaiser, H. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141-151. Kaiser, H. (1974). An index of factorial simplicity. Psychometrika, 39, 31-36. Kaplan, R., & Saccuzzo, D. (1993). Psychological testing: Principles, applications, and issues. Belmont, CA: Wadsworth. Kasper, L. (1997). Assessing the metacognitive growth of ESL student writers. TESL-EJ, 3(1). Kass, R., & Tinsley, H. (1979). Factor analysis. Journal of Leisure Research, 11, 120-138. Kieft, M., Rijlaarsdam, G., & Van den Bergh, H. (2006). Writing as a learning tool: Testing the role of students' writing strategies. European Journal of Psychology of Education, 21(1), 17-34. Kieft, M., Rijlaarsdam, G., & Van den Bergh, H. (2008). An aptitude-treatment interaction approach to writing-to-learn. Learning and Instruction, 18, 379-390. Kline, P. (1999). The handbook of psychological testing. London: Routledge. Kobayashi, H., & Rinnert, C. (1992). Effects of first language on second language writing: Translation versus direct composition. Language Learning, 42(2), 183-215. Komorita, S. S., & Graham, W. K. (1965). Number of scale points and the reliability of scales. Educational and Psychological Measurement, 25, 987-995. Krosnick, J. A. (1999). Survey research. Annual Review of Psychology, 50, 537-567. Kuiken, F., & Vedder, I. (2007). Cognitive task complexity and written output in Italian and French as a foreign language. Journal of Second Language Writing, 17(1), 48-60. Langer, E. J. (1978). Rethinking the role of thought in social interaction. In J. H. Harvey, W. J. Ickes, & R. F. Kidd (Eds.), New directions in attribution research (Vol. 2). Hillsdale, NJ: Erlbaum. Livingston, J. (1997). Metacognition: An overview. Retrieved November 12, 2012, from http://gse.buffalo.edu/fas/shuell/cep564/Metacog.htm Lozano, L., García-Cueto, E., & Muñiz, J. (2008). Effect of the number of response categories on the reliability and validity of rating scales. Methodology, 4(2), 73-79. Lujan-Ortega, V., & Clark-Carter, D. (2000). Individual differences, strategic performance and achievement in second language learners of Spanish. Studia Linguistica, 54, 280-287. Manchón, R. (2001). Trends in the conceptualizations of second language composing strategies: A critical analysis. International Journal of English Studies, 1(2), 47-70. Manchón, R. (2009). Individual differences in foreign language learning: The dynamics of beliefs about L2 writing. RESLA, 22, 245-268. Manchón, R., Roca, J., & Murphy, L. (2000). An approximation of the study of backtracking in L2 writing. Learning and Instruction, 10(1), 13-35. Meijer, J., Elshout-Mohr, M., Van Daalen-Kapteijns, M., Meeus, W., & Tempelaar, D. (2006). Construction and validation of a questionnaire on metacognition. Paper presented at the second bi-annual conference of the special interest group on Metacognition of EARLI, Cambridge. Macaro, E. (2006). Strategies for language learning and for language use: Revising the theoretical framework.
The Modern Language Journal, 90(3), 320-337. Miles, J., & Shevlin, M. (2001). Applying regression and correlation: A guide for students and researchers. London: Sage. Mokhtari, K., & Reichard, C. (2002). Assessing students' metacognitive awareness of reading strategies. Journal of Educational Psychology, 94(2), 249-259. Naiman, N., Fröhlich, M., Stern, H. H., & Todesco, A. (1996). The good language learner. Toronto: Ontario Institute for Studies in Education (OISE) Press. (Original work published 1978) Negretti, R., & Kuteeva, M. (2011). Fostering metacognitive genre awareness in L2 academic reading and writing: A case study of pre-service English teachers. Journal of Second Language Writing, 20, 95-110. Neuman, W. L. (2000). Social research methods: Qualitative and quantitative approaches. USA: Allyn & Bacon. Norris, J. M., & Ortega, L. (Eds.). (2006). Synthesizing research on language learning and teaching. Amsterdam: John Benjamins. Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill. O'Malley, J., & Chamot, A. (1990). Learning strategies in second language learning. In The international encyclopedia of education (Vol. 6, pp. 3329-3335). Oxford: Pergamon Press. O'Neil, H. F., Jr., & Abedi, J. (1996). Reliability and validity of a state metacognitive inventory: Potential for alternative assessment. Journal of Educational Research, 89(4), 234-245. Oswald, F. L., & Plonsky, L. (2010). Meta-analysis in second language research: Choices and challenges. Annual Review of Applied Linguistics, 30, 85-110. Oxford, R. L. (1990). Language learning strategies: What every teacher should know. Boston: Heinle & Heinle. Oxford, R. L. (1999). Learning strategies. In B. Spolsky (Ed.), Concise encyclopedia of educational linguistics (pp. 552-560). Oxford: Elsevier. Phakiti, A. (2003). A closer look at gender and strategy use in L2 reading. Language Learning, 53, 649-702. Pintrich, P., Smith, D., Garcia, T., & McKeachie, W. (1991). A manual for the use of the motivated strategies for learning questionnaire (MSLQ). Ann Arbor, MI: The University of Michigan. Plonsky, L. (2011). The effectiveness of second language strategy instruction: A meta-analysis. Language Learning, 61, 993-1038. Plonsky, L., & Gass, S. (2011). Quantitative research methods, study quality, and outcomes: The case of interaction research. Language Learning, 61, 325-366. Politzer, R., & McGroarty, M. (1985). An exploratory study of learning behaviors and their relationship to gains in linguistic and communicative competence. TESOL Quarterly, 19, 103-123. Preston, C., & Colman, A. (2000). Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104, 1-15. Purpura, J. (1997). An analysis of the relationships between test takers' cognitive and metacognitive strategy use and second language test performance. Language Learning, 47, 289-325. Qi, D. S. (1998). An inquiry into language-switching in second language composing processes. The Canadian Modern Language Review, 54(3), 413-435. Raimes, A. (1987). Language proficiency, writing ability, and composition strategies: A study of ESL college student writers. Language Learning, 37(3), 439-467. Rees-Miller, J. (1993). A critical appraisal of learner training: Theoretical bases and teaching implications. TESOL Quarterly, 27, 679-689. Reid, J. (1990). Responding to different topic types: A quantitative analysis from a contrastive rhetoric perspective. In B.
Kroll (Ed.), Second language writing: Research insights for the classroom (pp. 191-210). Cambridge: Cambridge University Press. 153 Roberts, M., & Erdos, G. (1993). Strategy selection and metacognition. Educational Psychology, 13, 259-266. Robinson, P. (2002). Individual differences and instructed language learning. Amsterdam: John Benjamins. Robinson, P. (2011). Second language task complexity: Researching the cognition hypothesis of language learning and performance. Amsterdam: John Benjamins. Roca, J. (1996). Linearization strategies in L2 writing: Some observations. Lenguaje y Textos, 8, 191-208. Roca, J., Murphy, L., Manchón, R. (1999). The use of restructuring strategies in EFL writing: A study of Spanish learners of English as a foreign language. Journal of Second Language Writing, 8(1), 13-44. Roca de Larios, J., Manchón, R., Murphy, L., & Marín, J. (2008). The foreign language writer’s strategic behaviour in the allocation of time to writing processes. Journal of Second Language Writing, 17, 30–47. Rubin, J. (1975). What the “good language learner” can teach us. TESOL Quarterly, 9, 41–51. Rubin, J., & Thompson, I. (1994). How to be a more successful language learner (2nd ed.). Boston: Heinle & Heinle. Sasaki, M., & Hirose, K. (1996). Explanatory variables for EFL students’ expository writing. Language Learning, 46, 137-174. Samuelstuen, M. S., & Braten, I. (2007). Examining the validity of self-reports on scales measuring students’ strategic processing. British Journal of Educational Psychology, 77, 351 - 378. Schellings, G. (2011). Applying learning strategy questionnaires: problems and possibilities. Metacognition and Learning, 6, 91-109. Schellings, G., & Van Hout-Wolters. (2011). Measuring strategy use with self-report instruments: Theoretical and empirical considerations. Metacognition and Learning, 6, 83-90. Schmitt, N. (1997). Vocabulary learning strategies. In N. Schmitt and M. McCarthy (Eds.) Vocabulary: Description, acquisition and pedagogy. Cambridge: Cambridge University Press. Scheuermann, F., & Pereira, A. (2008). (Eds.) Towards a research agenda on computer-based assessment: Challenges and needs for European Educational Measurement. JRC Scientific and Technical Reports. 154 Schoonen, R. , & De Glopper, K. (1996). Writing performances and knowledge about writing. In G. Rijlaarsdam, H. van den Bergh, & M. Couzijn (Eds.), Theories, models and methodology in writing research (pp. 87-107). Amsterdam: Amsterdam University Press. Schoonen, R., Snellings, P., Stevenson, M., & Van Gelderen, A. (2009). Towards a blueprint of the foreign language writer: The linguistic and cognitive demands of foreign language writing. In R. M. Manchon (Ed.), Learning, teaching, and researching writing in foreign language contexts (pp. 77-101). Clevedon, UK: Multilingual Matters. Schoonen, R., Van Gelderen, A., De Glopper, K., Hulstijn, J., Simis, A., Snellings, P. & Stevenson, M. (2003). First language and second language writing: The role of linguistic knowledge, speed of processing, and metacognitive knowledge. Language Learning, 53(1), 165-202. Schoonen, R., Van Gelderen, A., De Glopper, K., Hulstijn, J., Snellings, P., Simis, A., & Stevenson, M. (2002). Linguistic knowledge, metacognitive knowledge and retrieval speed in L1, L2 and EFL writing. A structural equation modeling approach. In Ransdell & M. –L. Barbier (Eds.), New directions for research in L2 writing (pp. 101-122). Dordrecht: Kluwer Academic. Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. 
Contemporary Educational Psychology, 19, 460-475. Sengupta, S. (2000). An investigation into the effects of revision strategy instruction on L2 secondary school learners. System, 28, 97-113. Stern, H. H. (1975). What can we learn from the good language learner? Canadian Modern Language Review, 31, 304-318. Stevens, J. (1992). Applied multivariate statistics for the social sciences. Hillsdale, NJ: Erlbaum. Stevick, E. W. (1989). Success with foreign languages: Seven who achieved it and what worked for them. London: Prentice Hall. Stoffer, I. (1995). University foreign language students' choice of vocabulary learning strategies as related to individual differences variables. Unpublished doctoral dissertation, University of Alabama. Studenmund, A., & Cassidy, H. (1987). Using econometrics: A practical guide. Boston: Little, Brown. Tobias, S., & Everson, H. T. (2009). The importance of knowing what you know: A knowledge monitoring framework for studying metacognition in education. In D. L. Hacker, J. Dunlosky, & A. Graesser (Eds.), Handbook of metacognition in education (pp. 107-127). New York: Routledge, Taylor and Francis. Tseng, W., Dörnyei, Z., & Schmitt, N. (2006). A new approach to assessing strategic learning: The case of self-regulation in vocabulary acquisition. Applied Linguistics, 27(1), 78-102. Vandergrift, L., Goh, C., Mareschal, C., & Tafaghodtari, M. (2006). The metacognitive awareness listening questionnaire: Development and validation. Language Learning, 56(3), 431-462. VanderStoep, S., & Pintrich, P. (2003). Learning to learn: The skill and will of college success. Upper Saddle River, NJ: Prentice Hall. Van Hout-Wolters, B. (2000). Assessing self-directed learning. In P. R. J. Simons, J. van der Linden, & T. Duffy (Eds.), New learning (pp. 83-101). Dordrecht: Kluwer. Veenman, M. V. J. (2005). The assessment of metacognitive skills: What can be learned from multimethod designs? In C. Artelt & B. Moschner (Eds.), Lernstrategien und Metakognition: Implikationen für Forschung und Praxis (pp. 75-97). Berlin: Waxmann. Veenman, M. V. J. (2011). Learning to self-monitor and self-regulate. In R. Mayer & P. Alexander (Eds.), Handbook of research on learning and instruction (pp. 197-218). New York: Routledge. Victori, M. (1999). An analysis of writing knowledge in EFL composing: A case study of two effective and two less effective writers. System, 27, 537-555. Wenden, A. (1982). The processes of self-directed learning: A study of adult language learners. Unpublished doctoral dissertation, Teachers College, Columbia University. Wenden, A. (1983). The process of intervention: Review essay. Language Learning, 33(1), 103-121. Wenden, A. (1987). Conceptual background and utility. In A. Wenden & J. Rubin (Eds.), Learner strategies in language learning (pp. 3-13). London: Prentice Hall. Wenden, A. (1991). Metacognitive strategies in L2 writing: A case for task knowledge. In A. James (Ed.), Georgetown University Round Table on Languages and Linguistics (pp. 303-322). Washington, DC: Georgetown University Press. Wenden, A. (1998). Metacognitive knowledge and language learning. Applied Linguistics, 19(4), 515-537. Wenden, A., & Rubin, J. (Eds.). (1987). Learner strategies in language learning. Oxford, UK: Pergamon Press. Weng, L. (2004). Impact of the number of response categories and anchor labels on coefficient alpha and test-retest reliability.
Educational and Psychological Measurement, 64, 956-972. Zamel, V. (1983). The composing processes of advanced ESL students: Six case studies. TESOL Quarterly, 17(2), 165-187. Zhang, L. (2010). A dynamic metacognitive systems account of Chinese university students' knowledge about EFL reading. TESOL Quarterly, 44(2), 320-353.