This is to certify that the dissertation entitled Individual Differences in Choice During Learning: The Influence of Learner Goals and Attitudes in Web-based Training presented by Kenneth Guy Brown has been accepted towards fulfillment of the requirements for the Ph.D. degree in Psychology.

Major professor

Date

INDIVIDUAL DIFFERENCES IN CHOICE DURING LEARNING: THE INFLUENCE OF LEARNER GOALS AND ATTITUDES IN WEB-BASED TRAINING

By

Kenneth Guy Brown

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Psychology

1999

ABSTRACT

INDIVIDUAL DIFFERENCES IN CHOICE DURING LEARNING: THE INFLUENCE OF LEARNER GOALS AND ATTITUDES IN WEB-BASED TRAINING

By Kenneth Guy Brown

In recent years the growth of the World Wide Web has sparked an interest in using the web to deliver workplace training. Although there are many potential benefits of placing training on the web, there is little empirical evidence that such training can be effective. As one of the defining characteristics of web-based training (WBT) is the presence of hyperlinks and the control that they afford the trainee, research on learner control has the potential to offer useful theory and data regarding how and when such training can be useful. Unfortunately, the learner control research has been criticized for lack of theory and sound research (e.g., Reeves, 1993; Williams, 1996). The purpose of this dissertation is to examine the learner control research in light of the trend toward WBT, to develop a theory regarding how trainees use control during such training, and to test the theory. Research on learner control, individual differences in trainee characteristics, and the learning process is reviewed. To integrate existing theory and empirical evidence, a theoretical model depicting the influence of individual differences on the choices that trainees make during training is advanced. This theory, labeled the individual differences in choice during learning theory, emphasizes trainee motivation. The theory suggests that learner goals, attitudes toward the content, self-efficacy for learning the content, and self-efficacy for using the technology are antecedent to two critical choices trainees must make during training: (1) strategy and (2) effort. These choices in turn influence knowledge gain and post-training attitudes such as self-efficacy for applying training back at work. A study of 80 trainees in a Fortune 500 manufacturing firm is presented to test this model. Trainees completed a web-based training course that was originally offered as three days of instructor-led training. All trainees completed the course at a central facility, but they were allowed to proceed through the course at their own pace. Overall, the theory provides a number of valid predictions. The results support the influence of goals and attitudes on a number of strategic and effort learning choices. Individual differences were also found to predict application self-efficacy. Effort choices regarding percent of activities to complete were found to be the best predictors of two measures of knowledge gain.
Time on task was a marginally significant predictor. A number of other predictions of the theory were not confirmed. Neither individual differences nor strategic choices were found to predict knowledge gain. The best predictor of knowledge gain, percent of activities completed, was the process that was least well predicted by the individual difference measures. These findings are discussed with regard to the structure and predictive validity of the learner choice theory. Future research directions are discussed, particularly the need to conduct more detailed research on the learning process and to search for motivational constructs that more effectively predict trainee activity levels.

ACKNOWLEDGMENTS

I want to acknowledge those people who made this dissertation a reality. First, I want to acknowledge the support of the managers and team members at Strategic Interactive (SI), especially Tom and Mark. Because of these gentlemen, SI is a cutting-edge, forward-thinking company. They recognize that research has both intrinsic and market value and, as a result, provided me with the access and support necessary to complete this project. Second, I want to recognize the design team at Strategic Interactive that completed the course that is the subject of this dissertation. In particular, Keith Hamilton provided invaluable technical assistance and Wendy Golden offered her ear at every opportunity. I feel fortunate to have worked with such capable (and fun-loving) people. I also want to recognize the SI client who allowed this project to proceed. Because of a need to maintain anonymity, client company employees and managers cannot be thanked by name. However, I want to be clear about my appreciation for their efforts supporting this project.

While getting access to data for this dissertation was a tremendous hurdle, the challenge of conceptualizing and writing was no less daunting. My thanks go to Kevin Ford for his clarity of thought and caring demeanor throughout this process. I am fortunate to have encountered a role model like Kevin who somehow manages to wear many hats and keep many balls in the air without ever losing sight of the importance of people and relationships. I also want to thank the other members of the dissertation committee: Steve Kozlowski, Ray Noe, and Ann Marie Ryan. Steve, in particular, has been a powerful influence on my thinking. I appreciate his patience with me in my early years as a graduate student.

Fellow students and departmental staff played no less important roles in ensuring that this dissertation was completed. In that regard, I have to give a million thanks to Dan Weissbein for sticking by me through all the trials and tribulations of graduate school. He is truly a dear friend. Other students, including Stan, Eleanor, Earl, Morrie, Becca, and Karen, all had significant influences on me. I appreciate the opportunity to have worked with each of them. With regard to departmental staff, I feel fortunate to have worked with some of the nicest people on earth. Suzie did magic by making the system work. Marcy helped out with everything, and helped keep me sane with gossip and chat when I needed it most. They, along with many others, simply made the psychology department a great place to work.

Finally, I want to thank my family. I am grateful to my Mom and Dad, brother, and sisters for instilling in me a love of learning, which is one of my greatest strengths. I am indebted to my wife for putting up with me despite how annoying this "strength" can become.
In the learning environment of life, she has been without a doubt the best choice I have ever made. Without the love and support of these individuals, I would have been lost in a tangle of TV and junk food a long time ago. This dissertation is truly a collaborative product. Without the assistance of these people and others whom I may have neglected to mention, I never would have typed a single word. Thanks go to all of you.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
INTRODUCTION
LITERATURE REVIEW
    Learner Control
    Learning Choices
    Individual Differences
    Learning Outcomes
    Summary
THEORETICAL AND RESEARCH MODELS
    Learner Choice Theory
    Research Model and Hypotheses
METHOD
    Sample
    Research Design
    Power Analysis
    Training Technology
    Training Course
    Procedure
    Measures
    Data Analysis
RESULTS
    Controls
    Individual Difference Effects on the Learning Process
    Learning Choice Effects on Training Outcomes
    Direct and Indirect Effects of Individual Differences on Training Outcomes
DISCUSSION
    Malleable Individual Differences
    Learning Choices
    Training Design: The Unmeasured Factor
    Limitations and Implications
    Conclusion
APPENDIX A: INFORMED CONSENT
APPENDIX B: SURVEY AND TEST ITEMS
APPENDIX C: APPLICATION TEST KEY AND SAMPLE CODING SHEET
REFERENCES

LIST OF TABLES

TABLE 1. Course Modules and Learning Events
TABLE 2. Measures
TABLE 3. Rotated Component Matrix of Goal Measures
TABLE 4. Correlations Among Raters and Questions on Application Pre-Test
TABLE 5. Correlations Among Raters and Questions on Application Post-Test
TABLE 6. Descriptive Statistics and Correlations
TABLE 7. Regression Results of Attentional Focus (Perceived Focus Measure)
TABLE 8. Regression Results of Attentional Focus (Time on Task Measure)
TABLE 9. Regression Results for Metacognition
TABLE 10. Regression Results for Activity Level (Percent Measure)
TABLE 11. Regression Results for Activity Level (Words Measure)
TABLE 12. Regression Results for Activity Level (Repeats Measure)
TABLE 13. Regression Results for Application Self-Efficacy
TABLE 14. Regression Results for Verbal Knowledge
TABLE 15. Regression Results for Application Knowledge
TABLE 16. Regression Results for Knowledge Composite
TABLE 17. Regression Results for Application Self-Efficacy Training Outcome
TABLE 18. Regression Results for Verbal Knowledge and Individual Differences
TABLE 19. Regression Results for Application Knowledge and Individual Differences
TABLE 20. Summary of Results

LIST OF FIGURES

FIGURE 1. Individual Differences in Learning Choice Theory
FIGURE 2. Individual Differences in Learning Choice Research Model

INTRODUCTION

In recent years, the growth of the World Wide Web and its associated technologies has triggered an interest in using the web to deliver training. Web-based training (WBT) is training that is delivered via the Internet or corporate Intranet. For purposes of clarity, WBT refers to structured information intended to improve job-relevant knowledge and skill. This differentiates WBT from information simply deposited or posted on the web (e.g., bulletin boards), from education via the web targeted at students (e.g., on-line classrooms), and from computer-assisted learning (CAL) where computers and/or the web are used to supplement classroom activity rather than to convey instruction. A key feature of WBT is that the learner controls many aspects of the learning experience, such as which information to review, which exercises to complete, and how long to stay in the learning environment.

There are many potential benefits of placing training on the web. Information on the web is generally stored in one location and transmitted as requested to remote sites. At a remote site, a trainee can access this information using widely available computer programs called web browsers (e.g., Internet Explorer, Netscape).
Compared to traditional CBT, these features lower training development cost, simplify updating or revising materials, and increase accessibility (Hall, 1997; Khan, 1997). In addition, training that is available via the web does not have to be taken at a central location; it can be taken in the workplace, closer to the time that the skill is necessary. "Just-in-time" training delivery has the potential to lower the chances that trainees will forget learned material before it can be used on the job.

The potential benefits of WBT have been recognized and its use is growing (Hall, 1997; Owston, 1997). In fact, the American Society for Training and Development suggests that, while the percentages of computer-based training and self-paced training in other formats have remained constant at 3% and 7% respectively, the percentage of internet/network distance education increased from .4% to 2% between 1994 and 1996. This percentage is likely to increase even further in the coming years (Hall, 1997).

Despite the growth in WBT, there is little empirical research to demonstrate its effectiveness (Craiger & Weiss, 1997). Existing research on learner controlled training can be informative for determining how WBT should be designed and when it should be used. Unfortunately, the learner control research has been criticized for lack of theory and sound research. The purpose of this dissertation is to examine learner control research in light of its relevance to WBT. Reviews of this research area suggest that studies to date have primarily focused on the issue of learner control versus program control. In other words, existing research addresses the question, "Should trainees have control over different aspects of their training?" Because WBT being developed today offers such learner control, more relevant research questions may be "How do trainees use the control afforded to them?" and "Can trainees use such control effectively?" Answers to these questions have been pursued in a few studies, but no organizing framework has been advanced for research on these questions.

More specifically, no single coherent framework has been advanced to explain which trainees are more likely to succeed in learner controlled environments. Research to date has focused on ability or personality determinants of control choices, without considering malleable influences such as trainees' goals and attitudes. A more subtle but no less important issue is that an organizing framework has not been advanced for understanding the types of choices trainees make during training. In other words, there is no process model for understanding how individual differences influence training outcomes in learner controlled environments. This fact suggests that there is a set of broader theoretical questions that have yet to be fully addressed, including "How do learners use control during learning?" and "Are certain trainees more prone to use control effectively?"

To advance this area of theory and research, this dissertation addresses two theoretical issues. First, without arguing against the influences of ability or personality, this dissertation attempts to balance the individual differences considered in learner control research by examining malleable motivational variables. Motivational variables that are relevant to choices and activity during learning are examined and placed within a comprehensive theoretical framework.
Second, this dissertation advances a learning process model that specifies the types of choices and activity that occur in learner controlled training environments. In combination, these two focal points provide for a theory of individual differences in learning choices (hereafter referred to as a theory of learning choices).

Why would such a theory provide a contribution? First, a process theory can become a powerful lever for future practice because it offers the understanding necessary to modify the training and/or provide additional interventions in order to improve outcomes. With regard to this dissertation, understanding the process by which individual differences affect training outcomes may suggest how WBT should be modified, supplemented, or possibly replaced to be more effective. Moreover, focusing on malleable individual differences provides greater direction and opportunity for modifying training than focusing on immutable differences. Training can be purposefully designed to begin with instruction to influence malleable characteristics of trainees.

Second, a process theory is also useful for future research because it identifies factors that should be measured while studying important questions in training research such as: What individual differences influence learning? What training design is most effective? What features of training influence training outcomes? These questions have been addressed, at least to some degree, in traditional training environments (see Tannenbaum & Yukl, 1992). However, there is very little research that focuses on training in which the trainee, rather than the instructor, controls key features of the learning environment. A process theory would offer understanding of these environments, rather than the empirical prediction attempted by current learner control research.

With regard to individual differences, this manuscript advances an integrated theoretical perspective that combines self-efficacy (Bandura, 1997) and goal theories (Dweck, 1986). With regard to the learning process, a combination of theoretical perspectives is offered, including theories from mental workload and attention (Kanfer & Ackerman, 1989; Fisher & Ford, 1998); metacognition and learning strategies (Ford, Smith, Weissbein, Gully, & Salas, 1998; Pintrich & DeGroot, 1990); and learner practice and activity (Ford et al., 1998). Together these theories offer predictions for how particular trainees will use the control provided to them in WBT, and guidance on how to conduct further research on this issue. An empirical study that tests many of the predictions offered by this theory is proposed and presented.

LITERATURE REVIEW

Web-based training (WBT) is a formal effort to change job-related knowledge and skill through the use of information and activities presented on the computer. The training typically involves computer programs and data that reside on a single computer but can be accessed by many computers via network technology. Trainees can use commonly available programs called web browsers to access different types of information (i.e., text, pictures, audio, animations) over an Intranet or the Internet. At the core, WBT is basically training delivered via computer. Many of the technical features that differentiate WBT from computer-based training (CBT) are not experienced directly by the user. These technical features include how the training is programmed and delivered to the trainee¹.
From the learner's perspective, there are two key features that distinguish WBT from CBT. These features are the presence of hyperlinks and the control they provide². Hyperlinks, or links for short, are connections between documents that allow the user to quickly and easily move from one document to another. Learning environments with such links afford users tremendous control over such issues as information displayed, sequencing of information, and pacing.

¹ CBT generally runs as a stand-alone computer program. As a result, it is typically designed for particular computer platforms. WBT, on the other hand, is generally programmed using a set of languages that are not specific to particular machines. Also unlike CBT, the programming for and information in WBT reside on a server rather than on trainees' computers; the training material is transferred or downloaded from the server as the user and/or program requests.

² WBT can be created without links, but to do so would make it indistinguishable from traditional CBT. Similarly, CBT can be designed to appear as if it has hyperlinks. That type of training would not have the practical advantages of WBT, but it would be the same from the learners' perspective.

It is important to note that published empirical research that focuses on WBT is practically non-existent. Although there is a great deal of anecdotal evidence regarding web-based instruction (e.g., Khan, 1997), and some evidence for its cost effectiveness (e.g., Hall, 1997), there is currently neither a systematic theory of learning from WBT nor any systematic evaluation of this type of training (Craiger & Weiss, 1997). In fact, many of the hypermedia solutions developed today are driven more by technology than by instructional theory (Yang & Moore, 1995), so the focus of evaluation is often on interface issues (Kommers, 1996). Nonetheless, there is research on learning theory and instructional design that can be used to aid in the design and evaluation of WBT. The current lack of attention to theory and evaluation is critical because, without either, the technology may be used ineffectively and the capital investments necessary to deliver this type of training may be wasted.

Despite the lack of research on WBT directly, many studies have been conducted regarding when it is efficient and inefficient to give learners control of their learning environment. As learner control is a defining characteristic of WBT (Park, 1991; Wilson & Jonassen, 1989), learner control research should be explored to increase our understanding of how trainees learn in these linked environments. Moreover, there is additional research in instructional technology, training, education, and organizational behavior that is relevant to understanding the effective use of WBT. The research literature that should be reviewed is noted below in the order it will be presented in the next sections of the dissertation.

First, as noted above, research on learner control, including when it is effective and when it is not, offers critical theoretical background for the study of WBT. Second, research must investigate the learning process during learner controlled training. More specifically, the choices that trainees make during training must be understood. Research on the learning process in web-based environments, including the choices about what information to view, how to study it, and for how long, is notable only for its absence. The second literature review section explores research on the learning process.
Third, individual differences in the use of this technology are also fundamental. Systematic differences among individuals in training outcomes have been shown to be exacerbated by offering learner control (e.g., Tennyson, 1980), so individual difference effects in WBT may be even more powerful than those identified in current training research. The section following the process literature review focuses on individual differences. The final literature review section discusses research on different learning outcomes. The evaluation of any training intervention should include multiple outcomes (Kraiger, Ford, & Salas, 1993), and outcomes that are particularly relevant to WBT are discussed here.

Learner Control

Learner control refers to instruction that allows learners to make their own way through training materials. Control can include the option to choose content (i.e., what to study), sequence (i.e., in what order to study), activity (i.e., how much to practice), pace (i.e., how long to study), display (i.e., how the material looks), and/or any other feature of the instructional environment or process (e.g., Hannafin, 1984; Milheim & Martin, 1991; Chung & Reigeluth, 1992). Learner control is often contrasted with program control, where the instructor or machine determines the nature of the instruction (Reeves, 1993).

The rationale for allowing learner control is that learners know what is best for them. Milheim and Martin (1991) indicate that allowing control should improve learning for a number of reasons. First, from a motivational perspective, control allows trainees to choose information that is personally relevant to them, and to pace and sequence that information as they desire. By allowing trainees to make these choices, learner controlled training may increase motivation relative to program controlled training, where the trainer or program makes the choices. Second, attribution theory adds that learner control may be related to expectations for success. If allowing control pushes trainees to ascribe success to personal, stable, and controllable factors, then increased motivation and learning may result. Third, from the information processing perspective, control allows trainees to organize information in a way that is personally relevant, increasing attention and presumably retention. Compared to program controlled training, learner controlled training may increase learning by allowing trainees to encode the training material in a manner consistent with their existing knowledge structures.

Even though learner control is thought to have numerous potential benefits, reviews of the empirical research suggest that results have been mixed (Kinzie, 1990; Milheim & Martin, 1991; Steinberg, 1977; Williams, 1996). Some studies find learner control results in higher knowledge post-test scores than program control (e.g., Avner, Moore, & Smith, 1980; Ellermann & Free, 1990; Kinzie, Sullivan, & Berdel, 1988), while others find program control results in higher scores (e.g., Morrison, Ross, & Baldwin, 1992; Pollock & Sullivan, 1990; Tennyson, Tennyson, & Rothen, 1980). The majority of studies, however, find no differences between the two with regard to post-test achievement (Carrier, Davidson, Higson, & Williams, 1984; Lee & Lee, 1991; Murphy & Davidson, 1991; Pridemore & Klein, 1991, 1995; for a review see Williams, 1996). The mixed findings in this literature suggest a number of possible limitations to the research being conducted.
Reeves (1993) notes the following limitations of existing learner control research: lack of an adequate theoretical foundation, inadequate definitions of learner control, and poor methods and data analysis procedures. In the discussion that follows, these last two points are combined with comments by other authors (e.g., Williams, 1996) under a single heading regarding problems with the design and interpretation of learner control research. Another concern is that much of the research on learner control has been conducted in laboratory settings with students. Concerns about generalizability to WBT raise a number of other possible limitations, including the research population, training outcomes, and learner characteristics studied.

Inadequate Theoretical Foundation. The majority of research in this area is focused on comparing program versus learner control, and does not offer a framework regarding how learners use the control provided to them. As noted by Williams (1996), ". . . In the simple pursuit of the winner in the contest between learner control and program control, too much learner-control research has proceeded in the absence or ignorance of relevant basic psychological research that might clarify the actual phenomena being studied, namely, the act of learner choice" (p. 965). Montazemi and Wang (1995) have suggested that lack of theory is a problem that plagues not only learner control research but also nearly all research on computer-based instruction. Research in learner control and computer-based training should build theory regarding why trainees make the choices they do, and how those choices influence learning outcomes. This calls for learner-focused research that explores how learners make decisions in these environments. Referring specifically to the hypertext that is used in WBT, Wilson and Jonassen (1989) note that "when we examine the instructional aspects of hypertext, we need to look at how the learner makes use of the hypertext environment . . ." (p. 35).

This limitation can also be seen as an emphasis on outcome rather than process. Researchers currently focus on the outcomes of program versus learner control without studying the process by which these differences emerge. Researchers have to understand the learning process to understand how differences in outcomes emerge from exposure to training. To accomplish this goal, researchers need theory regarding why trainees make the choices they do, and models of the learning process so that these choices can be measured. There is some recent literature that focuses on the choices that trainees make during learner controlled training (e.g., Carrier & Williams, 1988; Milheim, 1995; Relan, 1995). A review of the learner control literature published in the late 1980's suggested that there has been an increased interest in learning strategies used by trainees in learner controlled environments (Steinberg, 1989). However, Steinberg noted only one study in this category, and it was not published in a traditional peer review journal (Rubincam & Olivier, 1985). It is only more recent research that offers some process focus (e.g., Milheim, 1995; Relan, 1995). Unfortunately these studies, which will be reviewed later in the manuscript, measure process without advancing a theory to explain the nature of different learning choices.

Inadequate Definitions. Reeves (1993) notes that whenever learner control is present, researchers should define what exactly the learner can control.
The "control of what" question is critical for determining the effects of different forms of learner control (Ross & Morrison, 1989). Unfortunately, few authors are explicit about the nature of the control afforded to trainees (Reeves, 1993). Clear definitions are presented in a few studies. For example, Pridemore and Klein (1991) focus on control of feedback following practice exercises. The nature of control was that students could choose whether they wanted to review feedback after answering questions. All other aspects of training, including format and sequencing, were fixed. Similarly, Carrier and Williams (1988) tested the effects of choosing optional elaborative material offered during the lesson, compared to students offered the minimum and maximum amounts of material. Reeves (1993) notes that studies such as these are in the minority. He suggests that research must clarify the choices available to the trainee so that other researchers can understand and classify the nature of the learner control provided. This issue is particularly important if researchers seek to understand the learning process. For example, studies that do not indicate which aspects of control are available and which are fixed or controlled by the program are unlikely to effectively model control of these features as part of the learning process.

Design and Interpretation Issues. This limitation refers to a combination of design issues that may affect interpretation and generalization of results. An interpretation problem often encountered in learner control research is the natural confound of time on task, amount of instruction, and learner control (Williams, 1996). Trainees in program control conditions often see additional material or spend more time on task than trainees in learner control conditions. Even when comparisons are not being made between different forms of control, it is important to address the issue of material viewed and time on task, because these factors have been linked to training outcomes in a number of studies (see Williams, 1996). As a result, material viewed and time spent should be assessed in research on WBT, or results may be difficult to interpret.

Narrow Focus on Learner Characteristics. Reviews of the recent learner control literature note that two primary cognitive characteristics are the focus of study: cognitive ability and prior knowledge (e.g., Hannafin, 1984; Milheim & Martin, 1991; Steinberg, 1989). Non-cognitive, personality or affective characteristics of the learner have been studied, but these studies are generally limited to trait-like constructs such as spatial ability (e.g., Campagnoni & Ehrlich, 1989), locus of control (e.g., Gray, Barber, & Shasha, 1991), field dependency (Lee, 1989), and learning style (Wey, 1992). These individual differences are stable characteristics that cannot be changed through additional instruction or intervention. There is a dearth of research on the effects of malleable individual differences that can be influenced by organizational interventions, such as goals and attitudes. With regard to goals, many authors assume that students have learning goals, rather than actually assessing trainees' goals for training (e.g., Milheim & Martin, 1991). Yet motivation appears to be central to the issue of how much material trainees choose to view during instruction (Hancock et al., 1993), and it should have implications for the amount of effort trainees will exert. Unfortunately, goals have rarely been studied directly with regard to learner control.
Limited Range of Outcomes. Almost all of the studies noted above focus on the outcome of post-test knowledge. While studies often present results for learning times and satisfaction (e.g., Kinzie et al., 1988), these variables are not focused on as learning outcomes. More often, results focus on a single learning outcome, implying that learning is unidimensional. Although researchers disagree slightly about taxonomies of learning outcomes (e.g., Bloom, 1964; Gagne, Briggs, & Wager, 1993; Jonassen & Tessmer, 1996/7; Kraiger, Ford, & Salas, 1993), it is clear that at least three different outcomes can be distinguished: cognitive, skill-based, and affective. Each of these outcomes is important. Recent research suggests that constructs from each category of outcomes can be important for determining trainee performance on transfer or skill generalization tasks (Ford et al., 1998; Kozlowski et al., 1995). These findings reinforce the point that all three categories of outcomes are critical if the ultimate concern is whether training influences behavior and performance back on the job. As a result, research on learner control in WBT should focus on cognitive, skill-based, and affective outcomes.

In summary, research on learner control suffers from a number of problems that limit our ability to draw strong conclusions regarding the influence of motivation on the choices learners make during learner controlled training. Research is needed that brings a theory base to this issue, carefully defines the nature of control provided to trainees, and captures the activity of the learner and how that activity is influenced by malleable individual differences, particularly motivational variables such as goals. Also, these effects should be studied for a range of training outcomes, not just verbal knowledge post-tests.

Learning Choices

To understand the choices that trainees make during WBT, research must have a theoretical framework for understanding the learning process. As noted earlier, a focus on process is noticeably absent from the learner control and CBT literatures (Milheim, 1995). A few recent studies in the instructional design and instructional technology literature are reviewed below as examples of research that does assess learner choice (Hancock, Thurman, & Hubbard, 1995; Lee & Lee, 1991; Milheim, 1995; Pridemore & Klein, 1991, 1995; Relan, 1995). These studies suggest the importance of assessing how active trainees are in terms of viewing material and completing practice exercises. However, these studies offer mixed findings because they ignore the thought processes used by trainees. To address the neglected cognitive aspects of the learning process, recent studies in industrial and organizational psychology are also reviewed.

Instructional Design/Technology Research. Milheim (1995) conducted one instructional design study that focused on trainees' choice of material and activities. Milheim studied the activity of 28 graduate students in an interactive computer-based lesson that used the same form of interaction (e.g., non-sequential access to information) as typically displayed with hypertext. Milheim recorded trainee characteristics and sought to determine how age, sex, grade-point average, and test scores influenced repeated viewing of screens, skipping over screens, and completion of input fields. The study did not make a connection between any of these processes and learning outcomes.
The results of this study suggest there are some significant differences among demographic categories in use of the medium. For example, he found that those with lower GPAs were more likely to skip screens. The primary limitation of this study is the lack of a theoretical framework for understanding why different types of people make different choices. Presumably demographic categories serve as indicators of underlying psychological constructs that influence the choices made and the thought processes used by trainees. For example, students with lower GPAs may possess less desire to learn from the material and/or greater desire to finish the lesson quickly. This may have led them to skip more screens than those with higher GPAs. Direct examination of psychological factors such as goals could illuminate the causes behind different approaches to the learning task. This study is useful because it demonstrates a concern for what information trainees choose to view and what activities they choose to complete. However, from a broader perspective this study is uninformative because it does not relate these choices to outcome measures.

A second study improved on learning process research by linking trainee choice and learning outcomes. Hancock, Thurman, and Hubbard (1995) studied the choice to study feedback following quiz questions. In this study, 54 undergraduates were presented with a HyperCard learning activity. After reviewing the material, trainees were presented with drill questions and asked their confidence regarding the answers provided. After answering, students were offered an opportunity to review feedback that explained the correct answer and presented a demonstration of it. In general, choice patterns were predicted such that trainees would spend more time reviewing feedback following answers that they were not confident of, and following answers that they got incorrect. Students who followed this choice pattern were expected to learn more from the course. The results confirmed that trainees who scored higher on average tended to study feedback longer, and to study feedback more when they were incorrect. This suggests that choice of material to view, in this case explanatory feedback, is related to verbal knowledge learning outcomes. Nonetheless, many subjects deviated from this pattern, and the strength of these results was not impressive. No data regarding total time on task were presented. The authors conclude by suggesting that differences in mindfulness when reviewing feedback were a critical factor in determining learning. Mindfulness, or the extent to which trainees actively concentrated on the material, was unmeasured. Similarly, the authors conclude that subjects who did not view optional material likely had higher-order goals that were not focused on learning, but goals were not assessed.

While Hancock et al. (1995) found that choices to review explanatory materials were related to learning, Relan (1995) found that total amount of review was generally unrelated to a post-test presented immediately at the end of the lessons (r = -.16). In this study, 107 sixth-graders were given a computer-based science tutorial. The authors conclude that review may not have influenced learning because certain subjects may have engaged in mindless review.
The authors conclude that, "Extensive use of a strategy during training does not necessarily improve performance on a learner-controlled task; mindful use of a strategy along with strategy monitoring may be required . . ." (p. 147). Again, however, no data regarding the cognitive activity of trainees were collected.

Pridemore and Klein (1991) and Pridemore and Klein (1995) used essentially the same paradigm to investigate the effects of explanatory feedback on training outcomes. Explanatory feedback is information provided to the trainee following a response to questions or problems posed during training. This information adds supplemental instruction tied directly to the response given by the trainee. This type of feedback is provided in addition to outcome feedback, which is a simple statement regarding whether a trainee's response was correct or incorrect. In the Pridemore and Klein studies, program versus learner control made no difference on outcomes, but providing explanatory feedback had an overall learning benefit over outcome feedback alone. Furthermore, trainees in the learner control condition who chose feedback more often following incorrect answers scored higher on the knowledge post-test. This finding suggests that choice of elaborative material may prove useful, particularly when it is used for material that the trainee does not know well.

Another study by Lee and Lee (1991) offers a caution regarding the distinction between optional activity provided early in training, during skill acquisition, and optional activity provided later in training, during review of that material. The study involved 56 eleventh-grade chemistry students learning to solve chemistry problems using a computer-aided learning system. Half of the learner control trainees received an introductory lecture on the topic before being presented with options regarding practice (control during knowledge review); the other half of the learner control trainees were immediately given access to the computer and control over practice (control during knowledge acquisition). Thus, the primary difference between conditions was the timing of the control provided, during acquisition or review.

Lee and Lee (1991) found little difference between scores on practice activities (M = 16.86 acquisition vs. M = 16.77 review) but large differences on final criterion scores (M = 17.93 acquisition vs. M = 23.32 review). Perhaps more importantly, the correlations between previous chemistry knowledge and criterion test performance were dramatically different. The relationship between test performance and prior achievement in the learner control acquisition group was r = .75, while the relationship between test performance and prior achievement in the learner control review group was r = -.11. The difference between these correlations suggests that optional practice and review may have very different effects depending on when they are provided. Optional activities provided during initial knowledge acquisition maintained prior differences in knowledge. Optional activities provided after initial knowledge acquisition, however, eliminated prior differences. These findings were interpreted to suggest that trainees with prior content knowledge make better choices during training than trainees with less content knowledge. This interpretation supports previous research showing that trainees with high prior content knowledge can be more efficient and learn more from learner controlled training than trainees with low prior content knowledge (e.g., Gay, 1986; Tobias, 1987).
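The size of the gap between the two correlations can be checked with a standard Fisher r-to-z comparison of independent correlations. The computation below is an illustration added here, not a test reported by Lee and Lee (1991), and it assumes the 56 students were split evenly across the two learner control conditions (n = 28 per group):

    z(r) = \frac{1}{2}\ln\frac{1+r}{1-r}, \qquad z(.75) \approx 0.973, \qquad z(-.11) \approx -0.110

    Z = \frac{z(r_1) - z(r_2)}{\sqrt{\frac{1}{n_1-3} + \frac{1}{n_2-3}}} = \frac{0.973 - (-0.110)}{\sqrt{\frac{1}{25} + \frac{1}{25}}} \approx \frac{1.083}{0.283} \approx 3.8

Under these assumptions the difference is significant well beyond p < .001, which is consistent with reading the two conditions as producing genuinely different roles for prior knowledge.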
It is interesting to note that the studies reviewed above that most clearly support the link between choosing to view additional material and learning were conducted by Pridemore and Klein (1991, 1995). These researchers found a learning benefit for offering information (feedback) after the basic material was presented. In other words, trainees' choices were made after receiving exposure to the material. The effects of prior exposure to material may stem, in part, from an increase in trainees' awareness of their current state of knowledge. Research by Flavell (1979) and Nelson, Leonesio, Shimamura, Landwehr, and Narens (1982) suggests that students are generally poor at estimating how much they know about a topic, and Williams (1996) suggests this problem is exacerbated when students have little knowledge of the content area. Williams (1996) summarizes this argument as follows: "It could very well be, then, that people often really don't know what they don't know, and that those who know very little know even less about what they don't know" (p. 966).

The awareness of current knowledge is closely tied to what researchers have referred to as metacognition. Metacognition is defined as knowledge of and control over one's cognition (Flavell, 1979). Metacognitive activities include planning, monitoring, and revising goal appropriate behavior (Brown, Bransford, Ferrara, & Campione, 1983). It is possible that the effects of prior knowledge result from an increased capacity to engage in this type of cognitive activity, as suggested by research linking metacognition and expertise (Eteläpelto, 1993), and metacognition and learning (Pintrich & DeGroot, 1990). Of note is the fact that metacognitive activity varies across individuals even at the same level of expertise, and it is generally uncorrelated with cognitive ability (Ridley, Schutz, Glanz, & Weinstein, 1992; Schraw & Dennison, 1994). If these findings hold true in WBT, then differences in metacognitive activity during training may explain substantial portions of variance in learning outcomes.
In particular, two recent studies have isolated constructs that capture both behavioral and cognitive aspects of the learning process. These Studies suggest that metacognition, attentional focus, and practice activity are all critical learning processes that Should be captured in order to understand choices during learning. Organizational Psychology Research. Ford et al. (1998) studied choice of practice in learner controlled training. In this study, 93 undergraduates learned a novel computer simulation over the course of two days. After an initial introduction to the 21 task on day one, subjects chose the nature of the practice scenarios on day two, which varied in difficulty along two key task dimensions. Trainees could choose among 9 different practice scenarios that varied along each dimension from high, medium, or low difficulty. At the end of the second day of training, students were provided with a complex trial to assess their ability to generalize the skills learned. The purpose of the study was to determine the influence of different learning strategies and link those strategies to trait measures of goal orientation. The results indicate that, among three different learning strategies, metacognitive activity was the most influential. Controlling for the nature of the practice scenarios chosen, students who reported greater metacognitive activity performed better on a knowledge test near the end of training, performed better on a final training trial, and reported higher levels of self-efficacy, or task-Specific confidence. All of these measures were in turn predictive of greater skill generalization on the very last trial. Another important learning strategy used by subjects in the Ford et al. (1998) study was called activity level. Activity level was defined as the extent to which trainees practiced key task Skills. This concept is conceptually Similar to the completion of practice exercises in computer-based instruction (e. g., Lee & Lee, 1991) because it reflects the extent to which trainees choose to explore the task at hand. Higher activity levels were associated with greater knowledge and final training performance at the end of training. It is important to note that the effects for metacognition were found while accounting for activity level, and vice versa. 22 The Ford et al. Study provides evidence that the nature of attention to the task can influence outcomes above and beyond the choices regarding practice. Limitations in the study prevent strong statements about the nature of this effect, though. These limitations are reviewed briefly below. First, the metacognitive measure was collected after training, making it difficult to claim that metacognitive activity caused learning outcomes. Actual performance during training may have influenced the metacognitive ratings provided (i.e., it seemed like I did well, I must have been thinking hard about the task). Alternatively, how well people were performing could have influenced their willingness to report leaming-focused activity. For example, if I did not seem to have done well, I might be unwilling to admit that I invested effort into learning. In either case, obtaining a rating of metacognitive activity during training may provide a more precise measure of the construct and strengthen the support for the hypothesized causal link. Second, the overall extent to which trainees focused on the task was not assessed. 
Metacognition captures the type of cognitive activity, but it does not suggest how much effort was invested overall during training. For example, trainees can report engaging in metacognitive activity, but that activity may have only occurred for a brief segment of the total training content. Furthermore, it is possible that trainees engage in metacognitive activity but also engage in a great deal of off-task related thinking, an occurrence that should interfere with learning. Metacognitive activity captures a particular learning strategy, but it does not address the issue of general attention and effort devoted toward the material. The amount of effort and on-task attention should influence learning, above and beyond the use of metacognition or any other learning strategy. The issue of effort was raised in a study by Fisher and Ford (1998).

Fisher and Ford (1998) studied mental effort during training. Effort was operationalized using time on task and self-reported attention. This research is based on a theory by Kanfer and Ackerman (1989), which suggests that attention during skill acquisition is often divided among on-task, off-task, and self-regulatory activity. Kanfer and Ackerman (1989) demonstrated that trainees who engage in greater off-task cognition tend to acquire less skill during training. Fisher and Ford (1998) used a similar measure of off-task attention to determine the extent to which trainees focused on topics other than the task at hand. They also measured time on task to determine its relationship with attention and learning outcomes. For this study, 121 undergraduates learned a stock prediction task. In addition to off-task attention, the authors measured cognitive ability and mental workload. The results suggest that off-task attention and cognitive ability are not correlated (r = .02), and off-task attention and mental workload are negatively correlated (r = -.48). Furthermore, off-task attention predicted final verbal knowledge (r = -.35) and application knowledge (r = -.33). Regression analyses indicate that either the measure of off-task attention or the measure of mental workload was a significant predictor of learning outcomes, controlling for cognitive ability and other individual difference measures, learning strategies, and time on task. The correlation between mental workload and off-task attention suggests a possible collinearity problem, which results in only one or the other construct being significant in the regression analyses. Both constructs, however, measure the extent of attention that was devoted to the task. Time on task was less predictive of learning outcomes than either the effort or attention measures. The authors note that time is generally a deficient measure, as it does not actually capture the focus of cognitive activity (i.e., on-task or off-task).

Unfortunately, this study did not measure mindfulness in the same way Ford and colleagues (1998) assessed it. Fisher and Ford (1998) did not assess metacognitive activity, although they did measure the effects of the other learning strategies of organizing, elaborating, and rehearsal (e.g., Gagne, 1984). None of these strategies had an influence on learning.

Both of these studies used student learners participating in research for extra credit. Research is needed on these learning processes with adult learners in job-relevant training programs. Given the lack of relevant research, it is unclear whether the influence of metacognition and attention differs between adults and students. In addition, research should examine the individual difference factors that are associated with greater attention and metacognition during learning.
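The collinearity problem noted in the Fisher and Ford (1998) results can be made concrete with a brief simulation. The sketch below is illustrative only, not the authors' data or analysis; the variable names, coefficients, and sample values are invented, with population correlations loosely matched to those reported above. It shows how two correlated attention measures that each predict an outcome can split their shared variance when entered into the same regression, so that one or the other loses significance.

    # Illustrative simulation of collinear predictors (hypothetical values;
    # not Fisher & Ford's data). Requires numpy and statsmodels.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(seed=1)
    n = 121  # sample size borrowed from the study; all other values are invented

    workload = rng.normal(size=n)
    # off-task attention correlated with workload at roughly r = -.48
    off_task = -0.48 * workload + np.sqrt(1 - 0.48**2) * rng.normal(size=n)
    # knowledge declines with off-task attention at roughly r = -.35
    knowledge = -0.35 * off_task + np.sqrt(1 - 0.35**2) * rng.normal(size=n)

    for label, cols in [("off-task only", [off_task]),
                        ("workload only", [workload]),
                        ("both entered", [off_task, workload])]:
        X = sm.add_constant(np.column_stack(cols))
        result = sm.OLS(knowledge, X).fit()
        # p-values for the predictors (index 0 is the intercept)
        print(label, np.round(result.pvalues[1:], 4))

Because the two measures overlap, the regression cannot uniquely apportion the outcome variance between them, even though both index the same underlying quantity: the amount of attention devoted to the task.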
Individual Differences

Learner control research has traditionally focused on a limited range of individual differences. The two most dominant individual difference constructs assessed are prior knowledge and cognitive ability (Williams, 1996), although there are studies that focus on personality constructs such as locus of control (e.g., Tobias, 1987). The vast majority of these studies investigate the effects of fixed, stable individual characteristics. Williams (1996) notes the potential for studying the role of achievement motivation, but cites only one study that applied these ideas to the study of learner control (i.e., Carrier & Williams, 1988).

Arguably the most critical characteristics to study in learner control, and consequently WBT, are motivational constructs. Trainee motivation should have a critical influence on choice behaviors in training, including how much material to view and how much effort to exert while viewing it. Motivation is an important concept in training (Campbell, 1989; Mathieu & Martineau, 1997), and it is even more critical in learner controlled training, when choices regarding how training will proceed are left up to the trainee. The dominant perspective of motivation in this research comes from an expectancy framework, where the desire to engage in a particular behavior is driven by expectations that the behavior will help bring about valued outcomes (Vroom, 1964).

The centrality of motivation to learning can be seen in current studies of training. Recent research has emphasized the role of situational influences on learning (e.g., Mathieu, Tannenbaum, & Salas, 1992), but much of this influence is mediated through motivational constructs of intentions and attitudes. For example, research models of training effectiveness emphasize that situational characteristics have their dominant effect on learning through pre-training motivation (Mathieu & Martineau, 1997; Quinones, 1997). Pre-training motivation is the most proximal influence on learning, yet it remains relatively neglected in research on training effectiveness (Noe, 1986).

Goals and attitudes are even more critical in WBT because of differences between student and adult learners. Theories of adult learning (e.g., Knowles, 1984; Rogers & Freiberg, 1994) emphasize that adults are self-directed learners who focus on material that has the greatest perceived relevance to them. For example, one of the basic principles of andragogy, the adult learning theory advanced by Knowles, is that adults will be most interested in subjects that have immediate utility in life or at work. This highlights the importance of motivational differences between trainees in determining where they will focus effort and attention during training.

Goals. Goals are desired end-states that mobilize and direct behavior. Trainees can hold many different types of goals for training (Brett & VandeWalle, 1997). Training goals can be focused on learning a particular skill, performing to a certain level of competency, appearing competent to observers, or removing oneself from the training situation as quickly as possible. The dominant perspective of goals in industrial and organizational psychology is driven by research on goal setting by Latham and Locke (1991). This research is typically focused on how the provision of performance goals influences task performance.
The industrial and organizational psychology literature on goals focuses almost entirely on goals provided by a manager or researcher: difficult and specific levels of performance for which trainees should strive (e.g., Latham & Locke, 1991). This research consistently demonstrates that individuals with difficult and specific goals perform better than individuals with vague "do your best" goals, provided that trainees are committed to the goal and have the capability to accomplish it. In one goal setting study, Kanfer and Ackerman (1989) provided performance goals to trainees learning a complex radar simulation, and compared their performance to trainees who were provided vague "do your best" goals. Contrary to established findings, Kanfer and Ackerman (1989) found that performance goals decreased performance relative to do-your-best goals. Other authors have argued that this finding reflects a boundary condition of performance-oriented goal setting (e.g., DeShon, Brown, & Greenis, 1996; Earley, Connolly, & Ekegren, 1989). These authors present evidence suggesting that, during complex learning tasks, performance goals during training can actually hurt performance. However, contrary to the position asserted by Kanfer and Ackerman (1989), these later studies argue that it is not goals per se but performance goals that lead to decrements in learning.

Recent research has studied the provision of goals that are focused on learning rather than performance. For example, Winters and Latham (1996) provided learning goals to trainees by asking them to learn shortcuts for performing a scheduling task. One hundred fourteen undergraduate business majors participated in the study. Winters and Latham (1996) found that, for complex tasks, learning goals ultimately led to greater performance than performance goals. This effect occurred because trainees in the learning goal condition learned more shortcuts early in training, and they were able to use these shortcuts to improve their performance later in training. Given the straightforward finding with regard to learning goals, it is surprising that additional research has not investigated the effects of these types of goals on learning in training environments (for exceptions, see Kozlowski et al., 1995, 1996).

An examination of training research suggests that one of the most commonly employed motivational constructs is motivation to learn. A review of research on motivation to learn suggests that this construct is actually a form of self-reported learning goal. Research in industrial and organizational psychology has examined the effects of motivation to learn on training outcomes. Motivation to learn is defined as the extent to which trainees desire to gain knowledge and skill from a given training experience (Noe, 1986). This definition, and the measure used to assess the construct, were developed based on an expectancy framework, such that higher levels of motivation to learn represent greater motivational force with regard to engaging in learning-oriented behaviors during training. As it is currently defined, motivation to learn is indistinct from a self-reported learning goal.

Unfortunately, research on motivation to learn has not provided clear evidence of its importance as a predictor of learning outcomes. For example, research by Hicks and Klimoski (1987) used an overall measure of motivation to learn but found little effect for motivation on final role-play and test performance.
Their study, however, showed few significant predictors of these criteria, indicating possible contamination or deficiency problems. Similarly, Tannenbaum, Mathieu, Salas, and Cannon-Bowers (1991) tested the effects of various training characteristics on attitudinal outcomes of training. While it was not the focus of their study, they did find significant relationships between training motivation and three of five training outcome variables. One of these was in a negative direction, opposite what one might predict. The variables that were not predicted were honors and demerits, outcomes that likely have significant influences from sources external to the individual. Learning goals should not be expected to have a significant effect on this type of criterion. The Tannenbaum et al. (1991) study did find that test performance, a variable that is more likely to be influenced by motivational differences, was significantly related (in the predicted direction) to training motivation. In a study of educational administrators, Noe and Schmitt (1986) found effects for training motivation, although the effects were small.

This research provides only marginal support for the use of self-reported motivational constructs in predicting training outcomes. However, as noted above, criterion problems make some of these findings suspect. In addition, each study offered summary motivation indices as predictors of learning outcomes without studying the types of behaviors and activities that trainees engaged in during training. For greater clarity, research on learning goals should focus on the types of choices and behaviors that trainees engage in during training, depending on the nature of their goal. In other words, research should seek to explicate the link between a learning goal and the learning process.

Another issue that should be considered in research on training motivation is the presence of alternative, competing goals. Motivation to learn is generally measured in an isolated fashion, without reference to alternative outcomes that trainees might desire from training. Educational research offers an alternative approach to goals that involves measuring multiple goals. In the past 15 years, educational research has focused on the influence of multiple goal orientations, or trait-like tendencies to pursue certain types of tasks and outcomes in school settings (e.g., Dweck, 1986). Most of this research has focused on mastery and performance orientations, although there is a third goal orientation, work avoidance, that is also studied and will be reviewed later. This research has indicated that learning orientation, or the degree to which an individual values challenge and learning, significantly affects how individuals approach difficult tasks (Bouffard, Boisvert, Vezeau, & Larouche, 1995; Dweck, 1986, 1989; Elliot & Dweck, 1988). Similarly, performance orientation, or the degree to which an individual values performance and achievement, significantly affects how individuals react to failure in achievement situations. While learning oriented individuals view errors as a challenge and show increased effort and persistence in the face of adversity, performance oriented individuals focus on demonstrating competence and disengage from activities that are difficult and hard to learn (Dweck, 1986, 1989). This form of disengagement is similar to the well-researched phenomenon of learned helplessness, in which individuals withdraw task effort after repeated negative feedback (Dweck, 1986; Mikulincer, 1994).
Applying the notion of goal orientation to the acquisition of job-relevant knowledge and skill has been the topic of a number of recent studies. For example, Boyle and Klimoski (1995) presented research on learning from a computer tutorial that demonstrated learning orientation and verbal knowledge outcomes were positively correlated. What this study does not indicate is the types of choices trainees made that accounted for the influence of goal orientation.

A study by Kozlowski and colleagues (1995) used a student population to study goal orientation. These researchers found performance orientation to be negatively related to verbal knowledge outcomes, and mastery orientation to be positively related to the exploration of various task features. Similar results were found for manipulated goals. Thus, students who were more oriented toward performance learned less from the training, and students who were more oriented toward learning explored the task more thoroughly. While it is not clear whether these effects were mediated through state goals adopted by the trainees, the pattern of results suggests that both situational and dispositional effects were operating in a consistent manner. Whether the goals adopted by trainees can account for that effect in its entirety is a question beyond the scope of this research, but it is reasonable to assume that state goals play a significant role.

Fisher and Ford (1998) indicated that effort was influenced by mastery goal orientation. They found mastery goal orientation to significantly predict reported workload, such that individuals with higher mastery reported greater workload, and performance goal orientation to significantly predict off-task attention, such that individuals with high performance orientation thought more about non-task related issues. Similarly, Ford and colleagues (1998) found that mastery goal orientation was positively related to metacognitive activity. They did not find that goal orientation variables were related to activity level, or the choice to practice activities most similar to the training objectives. There is additional evidence that learning oriented individuals engage in more metacognitive activity (Bouffard et al., 1993).

As evident from this review, goal orientation is generally considered to be a stable, trait-like characteristic of individuals. In general, little research has been conducted on the distinction between goal orientation as a state and as a trait, although recent research seems to indicate that they are distinguishable constructs (e.g., Brett & VandeWalle, 1997; Fisher, 1998; Kozlowski et al., 1995, 1996). These studies clearly indicate that goals can be thought of as having a fixed personality component and a more malleable state component. It is likely that the malleable state component is the more proximal influence on learning, as suggested by Noe (1986) and demonstrated by Brett and VandeWalle (1997).

Research also suggests that avoidance goals may be a relevant motivational orientation. In a study of students reported by Meece (1994), work avoidance goals were positively correlated with superficial engagement of course materials (see also Meece, Blumenfeld, & Hoyle, 1988). The goal constructs used by Meece (1994) are defined as goal orientations, but the measures focus on course-specific intentions. Thus, this author is focusing more on self-reported goals, rather than global orientations.
Overall, Meece's work suggests that individuals with completion goals, if given the opportunity, would avoid working hard by moving through a course more quickly and using as little effort as possible. There are few studies that report work avoidance orientations, but it may be a particularly important consideration in learner controlled training. Research on mastery and performance orientation has neglected avoidance goals, perhaps because of possible redundancy between work avoidance and low mastery orientations. It seems reasonable to assert that individuals with low mastery orientations may avoid working hard because they have no desire to learn. This behavioral pattern would be indistinguishable from that of those who are work avoidant. The results presented by Meece (1994) suggest that work avoidance is negatively correlated with mastery goals (r = -.50). However, the lack of any additional research on this topic suggests that the relationships among completion, mastery, and performance goals should be the focus of further study.

An integrated approach to the study of goals in training is offered here. Research suggests that learning, performance, and avoidant goals generally exhibit consistent effects on learning, regardless of their definition and operationalization. Thus, the mechanisms by which these motivational effects occur must be similar. Based on research on intentions summarized by Ajzen (1991), the most powerful influences on behavior should be intentions that refer specifically to the behavior in question. Thus, the greatest motivational influence on learning should be course-specific goals. In fact, it is likely that dispositional and situational influences operate through behaviorally-specific intentions regarding the training material. That is, specific course-related intentions should capture the influence of both disposition and situation, as they are determined by individuals' general theories (dispositional goal orientations) and by task-specific factors (Nicholls, 1992).

Two studies in the instructional technology literature provide anecdotal evidence for the importance of course-specific learner goals. Carrier and Williams (1988) studied the options selected by trainees of different levels of initial task persistence. The number of options selected in the first exercise was used to assess task persistence. Trainees with greater levels of task persistence learned more from learner controlled training than trainees with lower levels of task persistence. This effect held for both the immediate post-test and the delayed post-test. In the study, the amount of material seen was related to learning. The authors note that, "Future research should examine those characteristics that make various options appealing to students" (p. 303). These authors used a task-specific measure of motivation that required early measures of task exposure to be collected and deemed characteristic of the individual. From a scientific perspective, this type of motivational construct provides no generalizability and little clarity on the nature of the constructs. For purposes of this dissertation, it is possible that the extent to which learners held a goal for learning the course content might explain the differences found in task persistence.

Hancock, Thurman, and Hubbard (1995) also present anecdotal evidence for the importance of goals in the use of response feedback.
These experimenters conducted an informal post-experimental questionnaire asking 23 students about their priorities during the experiment. Students who reported learning goals as their top priority spent more than twice as long studying feedback messages as students who reported getting finished as one of their priorities (10.83 vs. 5.05). This finding was used to explain why some subjects did not follow normative learning patterns, such as studying feedback longer following incorrect responses. While goals were offered as a central feature of the explanations in this study, only post hoc data were presented.

Current research on goals focuses on either one or, at most, two goals held by the trainee. Generally, research in this area collects measures of both goals and uses both as predictors of behavior and outcomes. To the extent that goals are uncorrelated, this is a reasonable process. However, research on completion and learning goals suggests that it would be difficult for an individual to pursue both goals simultaneously. Similarly, although research suggests that mastery and performance orientations tend to be uncorrelated, it is unclear whether an individual could actively pursue both goals simultaneously. The exclusivity of goals should be captured as negative correlations among goal measures. It is possible that an individual will hold a greater range of intentions than time and resources will allow to be fulfilled (a point that is very evident to the author right about now), which would lower the relationship between intention and behavior. This attenuation of the relationship due to over-reporting of intentions suggests that some measure of prioritization would be a useful measure of training motivation. In other words, research should determine which of these three goals is a trainee's dominant motivating factor. While prioritization is a common focus in research on values (e.g., Chapman, 1989), it has not been studied in research on goals.

Content Attitudes. In addition to goals, attitudes toward the training content have significant relationships with learning outcomes (e.g., Alliger & Janak, 1989). As noted earlier, adults will favor material that they believe is useful to them. This idea is captured in the construct of perceived utility, or the extent to which trainees feel that training content will be useful on the job (Alliger et al., 1997; Warr & Bunce, 1995). Unfortunately, most research on attitudes considers them to be outcomes of training. Noe (1986) suggests that attitudes toward training content are critical factors in training motivation, yet they are seldom assessed. Moreover, Vroom's portrayal of expectancy theory would suggest that utility, as a composite measure of instrumentality and valence, should be a powerful predictor of motivational force. Warr and Bunce (1995) reported high correlations between utility perceptions and learning. They used utility perceptions as an outcome, but it is reasonable to suggest that many trainees may have perceptions about utility prior to training. This is particularly true for adult trainees who are being taught job-relevant knowledge and skill. These perceptions may influence the effort that trainees are willing to exert in training.

In addition to utility, research indicates that pre-training self-efficacy can influence the effort that trainees exert during training. For example, Martocchio (1994) found that pre-training self-efficacy for computer use predicted performance on a knowledge post-test after computer training.
Self-efficacy theory suggests that high levels of self-efficacy will be associated with high levels of attention to and effort on the task. This basic finding has been supported in many studies (Bandura, 1997).

Technology Attitudes. The study by Martocchio (1994) measured self-efficacy with computers for a training class on computer skills delivered primarily by computer. This measurement approach poses an interesting question. Is the self-efficacy effect identified by Martocchio (1994) an effect for content efficacy (i.e., I can learn this content), or for efficacy with the technological medium of training (i.e., I can use the computer to learn)? If the instruction had been provided through another technology (i.e., lecture or instructional television), or the content of the training had been different (i.e., how to use the La Machine kitchen preparation tool), self-efficacy for both the content and the technology could have been studied for their influence on learning. This idea is consistent with research in educational technology (e.g., Salomon, 1981) that suggests confidence with learning media can influence learning outcomes.

When trainees learn through technology that is novel to them, self-efficacy for the technology may be just as important as self-efficacy for the content. Trainees with low technology self-efficacy may become anxious about interacting with the technology during training (e.g., Martocchio, 1994). In particular, one effect of low technology efficacy might be avoidance of the unique and difficult aspects of that medium. Hyperlinks are unique to WBT as a medium, and they offer the potential to confuse trainees (e.g., Park & Hannafin, 1993). Thus, trainees with lower technology self-efficacy may be less likely to use the links to optional materials.

Learning Outcomes

The most popular model of training outcomes in use today was developed by Kirkpatrick almost 40 years ago (Kirkpatrick, 1959-1960, cited in Alliger & Janak, 1989; Kirkpatrick, 1974). This model specifies four steps of training evaluation: Reactions, learning, behavior, and results. Reactions are defined as "how well the trainees liked a particular training program" (Kirkpatrick, 1974, p. 18-2) and are usually assessed using ratings regarding course content. Kirkpatrick notes that positive reactions do not ensure learning, but that positive reactions are necessary for maximal learning. Learning is defined as "the principles, facts, and skills which were understood and absorbed by the conferees" (Kirkpatrick, 1974, p. 18-11). Classroom performance and paper-and-pencil tests are ordinarily used to assess this step. The third step is behavior, and it is defined as "on-the-job behavior" (Kirkpatrick, 1974, p. 18-16). Kirkpatrick suggests that job performance should be measured before and after training to assess whether training influences performance. The final step is results, which includes factors like "reduced turnover, reduced costs, improved efficiency, reduction in grievances, increase in quality and quantity of production, or improved morale, which, it is hoped, will lead to some of the previously stated results" (Kirkpatrick, 1974, p. 18-21).

Recent literature has noted that there are a number of faulty assumptions rooted in how researchers use Kirkpatrick's steps. More specifically, Alliger and Janak (1989) review three assumptions that underlie the current use of Kirkpatrick's model. The first assumption is that the four steps are arranged in ascending order of information.
This assumption can lead to the belief that results are the highest in a hierarchy and thus the best measure of training effectiveness. This is a difficult assumption to support, because there are instances where dollar estimates can be impossible to obtain, and such estimates may even be misleading about the results of training. The second and third assumptions are related in that they both involve the assumption that the levels of evaluation are causally linked. This assumption is also untenable. While it is true that trainees who seriously dislike training may withdraw from learning, reactions and learning are not necessarily linked (Goldstein, 1993; Mathieu, Tannenbaum, & Salas, 1992). Furthermore, behavior change is not always preceded by indications of verbal learning (Lewicki, Hill, & Czyzewska, 1997). This assertion is supported by the relatively low correlation between learning and behavior reported in a recent meta-analysis (Alliger et al., 1997).

These problems suggest that the Kirkpatrick model may have limited usefulness as a taxonomy of learning outcomes. An outcome taxonomy should provide clear links to learning processes and, consequently, training interventions (Jonassen & Tessmer, 1996/1997). Furthermore, the taxonomy should be easily linked to real world job performance, as the ultimate concern of training is to influence some aspect of performance back on the job. The Kirkpatrick steps do not provide this guidance because they confound the type of learning outcome (i.e., knowledge or skill) with the time it is assessed (i.e., at the end of training or back on the job) in describing the second and third steps. The third step also confounds behavior and job performance by suggesting that archival measures of job activities (i.e., absenteeism, supervisor ratings) are reasonable measures of on-the-job behavior. As suggested by recent writing on job performance, job performance is best considered as behavior on the job, and research should be careful not to treat the results of behavior as uncontaminated measures (Campbell, McCloy, Oppler, & Sager, 1993). Training outcomes should be stated as distinct psychological constructs that can be assessed with different methods and at different times, but that always clearly result from particular learning processes and comprise, or at least influence, components of work-related behavior.

More recent outcome taxonomies incorporate these critical features (e.g., Gagne, Briggs, & Wager, 1990; Jonassen & Tessmer, 1996/1997; Kraiger, Ford, & Salas, 1993). These taxonomies identify psychological constructs as outcomes, based on current principles of learning, and can be easily linked to training interventions and to job performance. These taxonomies are actually quite similar, and differ mostly in the level of detail employed at the highest level of the respective taxonomies. For this evaluation, I will adopt the Kraiger, Ford, and Salas (1993) version for its relative parsimony and closer tie to the literature on job performance. Kraiger, Ford, and Salas (1993) suggest there are three primary categories for training evaluation: Cognitive, skill-based, and affectively-based outcomes. The focus of this dissertation is on cognitive and skill-based outcomes, so these categories of outcomes are reviewed below.

Cognitive learning outcomes refer to the quantity and type of knowledge available to the trainee. The traditional tests of cognitive learning are achievement tests of verbal knowledge.
While tests of verbal knowledge have been criticized for being unable to discriminate among learners at higher levels of development, they are useful during early stages of skill acquisition. In addition to verbal knowledge, other cognitive outcomes include knowledge organization and cognitive strategies. These outcomes become more critical at later stages of skill acquisition, so they will not receive a great deal of attention here.

Skill-based learning outcomes also tend to reflect later stages of learning. According to Kraiger et al. (1993), the two major components of skill-based outcomes are compilation and automatization. Compilation involves the combination of discrete behaviors into domain-specific routines that are relatively fast and efficient. During this stage errors are reduced, verbal rehearsal is eliminated, and behavior is more task-focused. During early stages of skill acquisition this may take place and be ascertained by direct or indirect observations of performance. Indirect observations can be taken by reviewing task performance for evidence that compilation is beginning to occur--fewer errors and faster production or reaction time. Automatization, on the other hand, involves an even greater level of skill. Automatization implies that tasks or portions of tasks can be handled without conscious monitoring. The lack of monitoring frees cognitive resources to engage in other activities. Few training programs bring trainees to the point of automaticity, as it requires extensive, time-consuming practice.

In WBT, skill-based outcomes can be obtained by having trainees engage in activities that represent the skill of interest. Computer recording and tracking can be used to record that activity, which can then be reviewed, either by the computer or by a person, for its quality. Unfortunately, it requires a great deal of resources to reproduce skill-based environments on the computer (i.e., interacting with a team to solve a problem), unless that skill is itself computer-based, as in the research by Frese and colleagues (Frese & Altmann, 1989). One solution to this problem is to assess an outcome that indicates skill-based learning but is assessed in a manner similar to verbal knowledge. An outcome that implies skill-based learning is what Bloom (1956) and Fisher and Ford (1998) call application knowledge. This is the use of verbal knowledge to answer novel questions or make judgments in new situations. This type of outcome is similar to skill-based outcomes when the performance of interest is highly cognitive, such as problem-solving or trouble-shooting. Application knowledge can be assessed using open-ended situational questions, as demonstrated by Fisher and Ford (1998).

Summary

Research on learner control, learning choices, individual differences, and learning outcomes was reviewed. The first three sections provide insight into a number of missing elements in the literature that could be used to understand WBT effectiveness. Learner control research to date has been very limited in scope and, perhaps more importantly, atheoretical. An integrated theory of learner control that identifies important process variables and considers malleable individual difference characteristics would provide a meaningful contribution to this area of research. The learning choices and individual differences reviews were conducted to identify previous empirical research that might address that issue.
Learning choices regarding effort, such as activity and attention, and regarding strategy, such as metacognition, are critical influences on learning outcomes. Individual differences in goals and attitudes appear likely to be effective predictors of these choices. The final review in this section discussed learning outcomes. This section clearly indicates that any attempt to provide a theory of choice in learner control must model the influence of such control on multiple training outcomes. In the next section these ideas will be integrated into a theoretical framework, and a research model and hypotheses for an empirical study will be offered.

THEORETICAL AND RESEARCH MODELS

The literature review suggests that an integrated theory of learning in learner controlled environments is absent from the literature. The empirical results discussed, coupled with existing theories on motivation and learning, can be used to create such a model. This theory, individual differences in learning choice, is presented in Figure 1 and described below. Following the general description of the theory, a more specific research model is presented. This more specific model is used to derive specific hypotheses for the dissertation research.

Learner Choice Theory

The learner choice theory is an input-process-output model that depicts the link between individual differences as an input to training and learning oriented activity during training, and the link between this activity and learning outcomes. The theory is similar to the learner control model advanced and tested by Ford and colleagues (1998), but it integrates malleable individual differences with immutable differences such as personality. The model is distinct from existing learning models, such as Noe (1986), because it focuses more on the learning process than on the varied inputs or outputs to training. Perhaps most importantly, it is a learner-centered model of training effectiveness. In other words, the theory addresses what learners do, rather than what training designers or trainers do (e.g., Gagne, Briggs, & Wager, 1992).

[Figure 1. Individual differences in learning choice: the theoretical input-process-output model linking immutable and malleable individual differences to strategy and effort choices during training, and those choices to knowledge and skill gain and post-training attitudes.]

Individual Differences. In the theory, individual differences are classified as immutable or malleable. Distinguishing these categories allows researchers to clearly consider whether the individual differences of interest are amenable to change at the beginning of or during training, or whether they are fixed factors that will remain constant throughout training. The emphasis in this dissertation is clearly on malleable characteristics. Nonetheless, both types of individual differences are relevant to training outcomes and both will be discussed.

The immutable characteristics can be classified into three categories: Ability, personality, and experience. Much of the existing learner control research has focused on the effects of these constructs. Ability, whether general mental ability or a more specific facet relevant to the training task, should influence knowledge gain directly (e.g., Ree & Earles, 1991). Trainees with higher ability should be able to learn more from the same exposure to materials. Content-related experience should have a similar effect.
Williams (1996) summarizes research indicating that trainees with more content-relevant knowledge and experience learn more in learner controlled training. In addition to direct effects, immutable characteristics influence the learning process and outcomes through state, or malleable, individual differences. In particular, personality will influence the goals and attitudes trainees bring to the learning environment. Personality can be conceptualized as a broad, cross-situational intention that in turn influences more specific goals in particular situations (Brett & VandeWalle, 1997). Training research supports this conceptualization. For example, Noe (1986) suggests that locus of control serves to influence learning outcomes through motivation to learn. Similarly, Martocchio and Judge (1997) demonstrate that conscientiousness influences learning in part through self-efficacy.

The malleable characteristics of relevance to outcomes in learner controlled environments are motivational in nature. Motivation is likely to influence the types of strategies used during, and the extent of effort devoted to, training. Learner controlled environments like WBT provide trainees with many choices, and goals and attitudes relevant to training should be the dominant influence on how they make those choices. Research and theory on motivation to learn suggest that this is a determinant of learning (e.g., Colquitt & Simmering, 1997; Noe, 1986; Noe & Schmitt, 1986). As previously suggested, motivation to learn is indistinguishable from holding a learning goal. Similarly, research suggests that attitudes such as perceived utility and self-efficacy will be important determinants of the effort exerted during training (e.g., Warr & Bunce, 1995).

It is worthwhile to note that some researchers would argue about the causal ordering of attitudes and goals (e.g., Bagozzi, 1981). Attitudes and goals are related in that goals are driven by value judgments. For example, perceptions of the value of training should be related to the goals adopted with regard to that training program. However, from the perspective of designing and administering training programs, both learner constructs are exogenous. Thus, the causal order of goals and attitudes is not the focus of this study; instead, their joint influence on the learning process will be assessed. The purpose of placing attitudes and goals together is to maximize the prediction of learning choices. Except for their relationship with immutable characteristics, goals and attitudes are considered exogenous factors in this theory.

Learning Choices. To understand how individual differences influence learning, the learning process must be modeled. Based on the literature reviewed above, the learning process in learner controlled environments requires trainees to make decisions about two major factors: (1) Strategy and (2) Effort. Strategies are internal processes that learners use to select or modify their ways of attending, learning, remembering, and thinking (Gagne, Briggs, & Wager, 1992). Research suggests that there are a number of categories of learning strategies that learners employ, including rehearsal, organizing, and elaboration (e.g., Fisher & Ford, 1998). Metacognition is often considered just another learning strategy (e.g., Pintrich et al., 1991), but it may be more accurately represented as a broader, more inclusive category of learning strategy because it involves both awareness and control of one's cognition (Flavell, 1979).
Thus, metacognition can be viewed as a latent factor explaining all strategies that involve monitoring learning and making calculated adjustments to learning processes. With this broad definition of metacognition, other mindful learning strategies (i.e., those that involve deeper processing of the training material) are simply indicators of the attempt to be more strategic and purposeful in learning activity.

Metacognition clearly influences knowledge gain (Ford et al., 1998; Pintrich & DeGroot, 1990; Pintrich et al., 1991). It is less clear whether metacognition, or any subordinate learning strategy, would influence post-training attitudes. Because learning strategies target changes in remembering and thinking, the theory suggests that the greatest effects for metacognition should be demonstrated on knowledge gain. While it may be possible for changes in knowledge to later influence attitudes, learning strategies would most directly influence knowledge alone.

Strategic choices regarding metacognition and other learning strategies are not expected to influence effort consistently, so they are portrayed as independent in the theory. As suggested by the classic saying "Work smarter, not harder," strategy and effort are not always related. Trainees who engage in mindful learning strategies think differently and focus their attention differently than trainees who do not use such strategies. However, mindful strategies do not always require greater effort or practice. A metacognitive judgment about learning may lead a trainee to focus on only one part of the material and skip over the other parts. Thus, metacognition may allow a trainee to reduce total effort but ensure the effort is used wisely. Similarly, repeated practice may not necessarily lead to knowledge gain if the wrong things are practiced. In other words, repetition that is not guided by a judgment of current learning may dramatically increase effort in the form of practice and time on task but have no appreciable effect on learning outcomes. So, while it is possible for metacognition to result in some modulation of effort level, the modulation cannot be predicted across trainees because it can occur either up or down. More important for prediction across trainees is the fact that strategic choices like metacognition will influence the focus of effort and consequently knowledge gain.

Effort has been operationalized in many different ways (e.g., Paas, 1992), including time on task, mental workload, and task persistence. These different indicators of effort are better understood when they are divided into cognitive and behavioral categories. Behavioral effort is the amount of activity that trainees engage in during a learning episode. An example of behavioral effort is the activity level construct employed by Ford et al. (1998). Trainees who engaged in more practice of key task skills were exerting more behavioral effort. Cognitive effort, on the other hand, is the amount of attention devoted to the learning task. An example of cognitive effort is the off-task attention measure of Kanfer and Ackerman (1989), Fisher and Ford (1998), and Brown (1996). This measure determines the extent to which trainees devoted attention to on- or off-task topics. These research studies indicate that trainees who exert greater cognitive effort gain more knowledge and skill than trainees who exert less effort.
Trainees who exert either cognitive or behavioral effort should be more likely to have enactive mastery experiences in which they have success with key skills during training and build confidence in their ability to succeed. Thus, greater effort in either form should result in improved self-efficacy and improved attitudes toward the training content.

Learning Outcomes. Training should be evaluated based on a range of important outcomes. In workplace training the ultimate concern is typically whether or not trainees are able to use acquired skill back on the job (Baldwin & Ford, 1988; Noe, 1986). Consequently, outcomes assessed at the end of training should be those outcomes that are most likely to predict positive transfer. Research by Kozlowski and colleagues (1995, 1996) and Ford et al. (1998) suggests that knowledge test scores, skill practice scores, and self-efficacy predict generalization of skill. Consequently, training should be evaluated based on at least these three criteria. In the learning choices theory, knowledge and skill gain are placed together because they are likely to have similar antecedents. Higher strategy use and effort will result in greater knowledge and skill gain than low strategy use and effort. As noted above, post-training attitudes are influenced by effort but not by strategy. Post-training attitudes are also directly influenced by pre-training attitudes. Attitudes such as self-efficacy and perceived utility are influenced by immutable individual differences and by unmeasured environmental influences. Such influences are unlikely to change over the course of a training program, so a significant portion of the variance in attitudes is likely to remain unchanged.

Summary. The purpose of this theory is to identify malleable individual differences that are relevant to the choices that trainees make during learner controlled training. The model specifies goals and attitudes as critical antecedents to two major categories of decisions that trainees must make in these environments--strategy and effort. These choices are linked to training outcomes of knowledge and skill gain and post-training attitudes. A number of the links in this model have not been tested; some of them have been tested, but only in pieces. While it is more than likely that modifications to the model will be necessary as research progresses, the model offers a useful guide for developing future research on learner controlled training.

Research Model and Hypotheses

The individual differences in learner choice theory can be used to develop a more specific research model for empirical testing. Figure 2 presents a research model that offers specific constructs to be tested as part of each category noted in the theory. The hypothesized effects suggested by this research model are stated explicitly below. First, hypotheses regarding individual differences are presented. Then, hypotheses regarding the effects of the learning choices are discussed. The hypotheses end with a section on the direct and indirect effects of individual differences on outcomes.

Individual Differences Effects on Learning Choices. Immutable individual difference characteristics will influence malleable characteristics, but they will also influence learning directly. Two common ability and experiential variables available in organizations are education and content experience. These constructs serve as indicators of general mental ability and of practical abilities that may influence learning.
Moreover, content experience is the experiential variable most commonly identified to influence learning outcomes in learner controlled environments (Williams, 1996). Because the focus of the dissertation is on malleable rather than immutable characteristics, these two constructs will serve as control variables rather than variables of substantive interest.

In terms of malleable individual differences, both goals and attitudes should be assessed. Research on goal theory indicates that three different types of goals can be distinguished: Learning, performance, and completion. First, learning goals are intentions to gain new knowledge and skill from the training experience. Second, performance goals are intentions to perform well on exercises and quizzes in order to appear intelligent. The third goal has been called work avoidant in the educational literature (e.g., Meece, 1994). While the educational literature implies that such individuals seek to avoid work, it is reasonable to assume that individuals may seek to avoid hard work in a particular course because they have other, more pressing responsibilities to which they must attend. As a result, the term completion goal will be used, and defined as a desire to finish training as quickly as possible.

[Figure 2. Individual differences in learning choices: the research model, specifying education and content experience; learning, performance, and completion goals; learning and technology self-efficacy and perceived utility; metacognition, attentional focus, and activity level; and changes in verbal knowledge, application knowledge, and application self-efficacy.]

Goals should be considered both in terms of their absolute level and in terms of their relative importance. Trainees may have high learning, high performance, and high completion goals because they desire to obtain all three goals. The reality, however, is that one can only pursue a limited number of goals at a similar action level (Kluger & DeNisi, 1996). Thus, while trainees may desire to achieve all three, they may find that only one goal can be actively pursued during training. In other words, trainees may have to prioritize their desired outcomes such that one goal is dominant. As a result, it is possible to speak both of a trainee with a high learning goal and of a trainee who is pursuing a learning goal. The former describes a trainee who endorses a learning goal; the latter describes a trainee who endorses a learning goal over other possible training goals. To date this issue has not been addressed, because the majority of research utilizing multiple goals has focused on goal orientations (traits) rather than states. While trait variables might indeed be empirically independent, allowing a trainee to be both high learning- and high performance-oriented, the states and behaviors that these traits induce may be mutually exclusive when considered in a narrower span of time. Consequently, an effort should be made to examine not only the level of each goal, but also the structure of goals, to determine if one is dominant.
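To illustrate the distinction between goal level and goal priority, consider a simple scoring rule: a trainee is classified as pursuing a goal only when that goal's endorsement clearly exceeds the others. The sketch below is a hypothetical illustration of this idea, not the operationalization used in this study; the function name and the margin rule are my own.

```python
from typing import Optional

def dominant_goal(learning: float, performance: float, completion: float,
                  margin: float = 0.5) -> Optional[str]:
    """Return the goal a trainee is *pursuing* (dominant by at least
    `margin` scale points), or None if no single goal clearly leads.
    The margin rule is illustrative, not the dissertation's scoring."""
    scores = {"learning": learning, "performance": performance,
              "completion": completion}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (top, top_score), (_, second_score) = ranked[0], ranked[1]
    return top if top_score - second_score >= margin else None

# A trainee can endorse all three goals highly yet still have one dominant:
print(dominant_goal(4.8, 4.1, 3.9))  # 'learning'
print(dominant_goal(4.2, 4.1, 4.0))  # None: no clear priority
```

Under such a rule, a trainee can hold three high goals yet still be classified by the single goal that dominates, which is the sense of "pursuing" a goal used in the hypotheses below.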
Trainees' goals influence both the amount and type of effort that will be exerted during training (Nolen, 1988). Previous research has focused on either dispositional goal orientations or situationally-induced goal states. In contrast, little research has focused on the goals trainees hold for a specific learning episode. As the internal representation of both disposition and situation, specific course-related goals should influence the learning process in much the same way that dispositions and situations have in previous research. Goals influence behavior by focusing attention (Latham & Locke, 1990). Trainees with high learning goals will focus on course material more than trainees with low learning goals will. Learning goal trainees focus on the task in order to learn the material.

H1: Trainees with high learning goals will have greater attentional focus than trainees with low learning goals.

Compared to trainees who focus on learning goals, trainees with high performance goals may focus some of their attention on how well they are doing, reducing their on-task attention. Similarly, trainees with high completion goals seek every opportunity to exit the training environment. Thus, these trainees are unlikely to focus their attention on the training task, directing it instead to things outside of the training environment and/or to any opportunity that may arise to exit early. Compared to learning focused trainees, performance and completion trainees will have lower attentional focus.

H1a: Trainees pursuing learning goals will have greater attentional focus than trainees pursuing completion goals.

H1b: Trainees pursuing learning goals will have greater attentional focus than trainees pursuing performance goals.

In addition to differences in attention, trainees with a high learning goal should be more likely to use deep processing strategies than trainees with a low learning goal (Nolen, 1988). Deep processing involves the use of learning strategies such as metacognition. Deep processing is often contrasted with surface processing, which involves little reflection or thought about the material. For trainees focused on learning, reflecting on their knowledge and moving to improve it is necessary for goal progress. Research supports the link between trait mastery or learning orientation and learning strategies (e.g., Meece, 1994; Nolen, 1988), and supports the idea that learning strategies and other active forms of learning are not employed unless trainees are motivated to use them to learn (Garcia & Pintrich, 1994; Garner, 1990). Compared to trainees pursuing learning goals, trainees with performance and completion goals are likely to use surface engagement strategies that are not oriented toward learning (Meece, 1994). In other words, these trainees will not focus on their level of knowledge because it is not directly relevant to their goals.

H2: Trainees with high learning goals will engage in more metacognition than trainees with low learning goals.

H2a: Trainees pursuing learning goals will engage in more metacognition than trainees pursuing completion goals.

H2b: Trainees pursuing learning goals will engage in more metacognition than trainees pursuing performance goals.

Goals should also influence the choices trainees make about training activities. Trainees with high learning goals should see practice activities as a learning opportunity and make use of them. Trainees with high completion goals should take the opportunity to skip ahead and proceed with the course in order to accomplish their goal of completing the course as quickly as possible. Trainees with high performance goals are likely to avoid supplemental exercises because the use of optional material might be interpreted as a need for remedial instruction, which would imply low ability.
One hallmark of individuals with a performance orientation is a desire to avoid looking unintelligent (Dweck, 1986), so they should avoid any chance of being labeled as such. When compared to the others, trainees focused on learning goals should engage in more activities than performance or completion goal trainees.

H3: Trainees with high learning goals will have higher activity levels than trainees with low learning goals.

H3a: Trainees pursuing learning goals will have higher activity levels than trainees pursuing completion goals.

H3b: Trainees pursuing learning goals will have higher activity levels than trainees pursuing performance goals.

In addition to goals, training research has identified trainee attitudes as having an impact on training outcomes. Traditional industrial and organizational psychology research focuses on trainee attitudes as they relate to training content. Research in instructional technology suggests that learning can be influenced by attitudes toward the technology through which training information is conveyed. Web-based training is completely computer-mediated, so attitudes toward the technology should be assessed in addition to attitudes toward content. Technology attitudes have been shown to influence how trainees use the medium during learning activity (e.g., Salomon, 1981). With web-based training, trainees with confidence in their ability to use computers for learning should be more comfortable using the technology. They should be more able to focus on the activity rather than the interface, and be willing to employ the various features of the technology necessary to engage in extensive practice.

H4: Trainees with high self-efficacy with the training technology will have higher activity levels than trainees with low self-efficacy.

Differences in technology use should also occur because of content attitudes. Trainees who perceive the training to be useful, and who feel confident that they can engage in training behaviors successfully, should be more likely to complete the activities offered by the computer.

H5: Trainees who perceive the training content to be useful for performing their job will have higher activity levels than trainees who do not believe the content is useful.

H6: Trainees with high self-efficacy for learning the training content will have higher activity levels than trainees with low self-efficacy.

A similar argument can be made for the effects of attitudes on attentional focus. Trainees who perceive the training to be useful, and who are confident that they can perform those behaviors, will be more likely to focus their attention on the task. In short, trainees with positive attitudes toward the content will be more willing to engage in cognitive as well as behavioral effort to learn that content.

H7: Trainees who perceive the training content to be useful for performing their job will have higher attentional focus than trainees who do not believe the content is useful.

H8: Trainees with high self-efficacy for learning the training content will have higher attentional focus than trainees with low self-efficacy.

Trainees who are confident in their ability to use the technology will also be able to spend less time worrying about how to use the technology to learn. A reduction in anxiety and technology-focused cognitions will allow for greater attention to be placed on-task.

H9: Trainees with high self-efficacy with the technology will have higher attentional focus than trainees with low self-efficacy.
Learning Choices Effects on Training Outcomes. Trainees in learner controlled environments must make choices about the effort to exert and the strategies to use. Particular learning choices should improve training outcomes. One important outcome to consider is the confidence that trainees have that they can apply learned knowledge and skill on the job. Research reviewed earlier indicates the importance of self-efficacy for predicting the generalization of skill. Consequently, the confidence that trainees have leaving the training environment is an important outcome. This outcome is expected to be influenced by greater activity levels and greater attentional focus. Trainees who focus on the task at hand, and actively practice it, are more likely to gain confidence that they can apply the material back at work.

H10: Trainees with higher activity levels will have higher application self-efficacy at the conclusion of training.

H11: Trainees with greater attentional focus will have higher application self-efficacy at the conclusion of training.

The learning benefits of metacognitive activity have long been the focus of educational research. Research suggests that individuals who reflect on the state of their knowledge and the learning process will learn more from a given episode (Pintrich et al., 1991). Yet how metacognition is related to application knowledge is unclear, as much of the research on metacognition occurs in classroom settings where outcome measures are traditional verbal knowledge measures (e.g., Pintrich & DeGroot, 1990). Given that metacognition involves thinking about thinking, metacognitive activity is most likely to focus on whether terms and concepts in the course are understood. Metacognition may not allow for insight into how well a set of concepts can be applied to new situations; insight regarding skill levels is more likely to come from experts or observers. Consequently, it is hypothesized that the relationship between metacognition and changes in application knowledge will not be as strong as the relationship between metacognition and changes in verbal knowledge.

H12: Trainees who use more metacognition will gain more verbal knowledge than trainees who use less metacognition.

H13: Metacognition will be more related to verbal knowledge gain than to application knowledge gain.

The extent to which trainees' attention is directed at task-related rather than off-task cognitions has been shown to influence learning (Brown, 1996; Fisher & Ford, 1998; Kanfer & Ackerman, 1989). When attention is focused away from the task, trainees spend less time actively thinking about the task. Any diversion of attention away from task-related cognitions should impair the acquisition of verbal and application knowledge. Conversely, greater on-task attention should facilitate skill acquisition.

H14: Trainees with higher attentional focus will gain more verbal knowledge than trainees with lower attentional focus.

H15: Trainees with higher attentional focus will gain more application knowledge than trainees with lower attentional focus.

There are a number of different indicators of cognitive effort or attentional focus. Two such indicators that have received research attention are perceived (self-reported) focus and time on task. As suggested by previous research, time on task is not expected to be a good indicator of effort, or a good predictor of learning (Fisher & Ford, 1998).
As a measure of attentional focus, time is a deficient measure because it does not reflect what trainees are thinking about while information is displayed on their computer screens. Perceived attentional focus, on the other hand, should be an effective indicator of the quality of trainees' attention. Trainees are uniquely suited to judge the amount of mental effort that they exert toward the task.

H16: Perceived attentional focus will be more related to verbal knowledge gain than time on task.

H17: Perceived attentional focus will be more related to application knowledge gain than time on task.

Practice activities are created in training to increase the number of times important skills are practiced and relevant concepts are applied. Active reproduction of to-be-learned behaviors has always been considered one of the most effective methods of learning (Goldstein, 1993). As a result, it is hypothesized that the more activities trainees complete, the more they will learn.

H18: Trainees with greater activity levels will gain more verbal knowledge than trainees with lower activity levels.

H19: Trainees with greater activity levels will gain more application knowledge than trainees with lower activity levels.

Direct and Indirect Effects of Individual Differences on Training Outcomes. The theory and research models suggest that many of the effects of individual differences on training outcomes are mediated by learning choices. A few effects, however, are hypothesized to be direct. More specifically, trainees' self-efficacy about learning at the beginning of training is expected to directly affect the application self-efficacy trainees hold at the end of training. Confidence related to the training task, whether it be learning or performing, is expected to be fairly constant over time. Similarly, trainees who believe that the training will be useful back at work should be very confident that they can use that skill back at work. Thus, pre-training attitudes of utility and self-efficacy should have a strong influence on the post-training attitude of application self-efficacy.

H20: Trainees with high self-efficacy for learning the training content will have higher application self-efficacy at the end of training than trainees with low self-efficacy.

H21: Trainees who perceive the training content to be useful at the start of training will have higher application self-efficacy at the end of training than trainees who do not perceive the training to be useful.

Unlike attitudes, goals are not expected to have a direct effect on the post-training attitude of self-efficacy. As a proximal antecedent to behavior, goals should influence cognition and behaviors such as practice during training. Practice activity and exposure to the task should be more powerful influences on self-efficacy than pre-training goals.

H22: The effects of goals on application self-efficacy will be mediated by the choices trainees make while learning.

In terms of indirect effects, the individual differences of technology efficacy, learning efficacy, perceived content utility, and goals are expected to influence changes in verbal and application knowledge through the strategy and effort choices made during training. In other words, the learning choices identified here should account for nearly all of the variance in outcomes that is associated with individual differences.

H23: The effects of individual differences on verbal knowledge gain will be mediated by the choices trainees make while learning.
H24: The effects of individual differences on application knowledge gain will be mediated by the choices trainees make while learning.

METHOD

Sample

The sample for this study is 80 trainees who are technical employees or contractors of a Fortune 500 manufacturing company. The trainees were on a waiting list to take the traditional instructor-led version of a course and were offered the opportunity to take it early by completing a web-based version. Eighty-four trainees volunteered, but only 80 attended the training. One trainee did not finish reviewing all the training materials but still took the final post-test measures. Two trainees completed the entire course but did not complete all post-test measures. A few trainees opted not to complete all of the on-line surveys. Data for all eighty trainees were maintained, but the sample size for statistical analyses varies slightly depending on the measures involved.

The majority of trainees had college degrees (42, 52%), while just over a third indicated some graduate school experience in addition to college (30, 37%). The remaining trainees (8, 10%) all had trade school or college experience beyond high school. In terms of relevant content experience, trainees were split among strongly disagree (20, 25%), disagree (23, 29%), and agree (29, 36%) responses to the statement "I am familiar with the concepts and skills covered in this course." Only 4 trainees (5%) selected strongly agree as their response.

Five of the trainees indicated that they were contractors rather than employees of the company. Although contractors tended to be less educated than employees, there were no significant differences between these trainees and trainees who were company employees. It is common practice in this company to allow and even encourage contractors to take training courses along with company employees.

Data regarding differences between employees who chose the web-based over the instructor-led course are unavailable because demographic information is not typically collected from trainees at this company. In addition, the company has no records regarding how many or which kinds of trainees prefer web-based courses, because this course was the first on-line technical course sponsored by the corporate training office. Nonetheless, it is likely that the sample is representative of the company's technical population who would select WBT, for two reasons. First, trainees were drawn from the waitlist of an ongoing technical course. Second, this particular course is one of the most popular in a set of courses required for technical employees and recommended for contractors. Nearly all employees and contractors take this course at one time or another.

Research Design

The design is a non-manipulated field study. Trainees received equal opportunity for exposure to the course material and equal amounts of testing. To avoid potential confounds from different instructional environments, all trainees took the course at the company's central training facility.

Power Analysis

The primary statistic of interest in this study is the correlation coefficient. Many of the hypothesized relationships have not been studied in previous research, so effect size estimates are unavailable for certain relationships. Prior research has, however, used the process measures studied here, including metacognition and attentional focus. In the Ford et al. (1998) study, the correlation between metacognition and verbal knowledge was .32.
In Fisher and Ford (1998), the correlation between off-task attention and verbal knowledge was -.35, and the correlation between off-task attention and application knowledge was -.33. With the sample size of 80, medium effect sizes such as these provide power of .78 to reject the null hypothesis of no relationship at an alpha level of .05 (Cohen, 1988).

Training Technology

The course was designed to work on the company's standard office computers, which are IBM-compatibles with 80486 or Pentium processors, 15" monitors, no sound cards, and no graphics accelerators. These characteristics precluded the use of sophisticated animation or sound in the course. As a result, only text and basic graphics were used as course materials. The relatively low technological sophistication of the office computers did not, however, interfere with the creation of interactive learning events using server-side programming. Interactivity was created by having trainees type in answers to questions and select options from lists on the computer screen. Trainees would click a button on the screen to submit this information via the web to the company's server. Programming on the company's server generated feedback based on trainees' responses, which was sent back to trainees and displayed on their computer screens.

Trainees completed the course in one of three computer laboratories at the corporate training center. These laboratories contained 12 to 20 computers set in rows of 6 to 8 machines. The computers had Intel Pentium processors, 15" monitors, and standard two-button mice. Screen resolution was set at 800 x 600 pixels. The course was optimized for Netscape 2.01, which is the company's standard web browser, and was designed to work without special plug-ins or software modifications. Although the course can and does run when viewed with other browsers, Netscape was used by all trainees in this study.

Training Course

The training course teaches a standardized problem-solving process developed by the company. This process was created and is trained as part of a corporate-wide manufacturing initiative to improve quality. The course presents information regarding how to identify, describe, and solve manufacturing problems, including steps for emergency, interim, and permanent solutions that protect the customer from the effects of problems.

The course contains nine modules that cover each step in the problem-solving process. These nine steps and the associated training modules are: Prepare for the Problem-Solving Process, Establish the Team, Describe the Problem, Develop Interim Containment Action (ICA), Define and Verify Root Cause and Escape Point, Choose and Verify Permanent Corrective Action (PCA), Implement and Validate Permanent Corrective Action, Prevent Recurrence, and Recognize Contributions.³

The course was originally designed for and delivered as instructor-led, face-to-face instruction. Strategic Interactive (SI), an outsourced technology firm, used the original course materials to develop an on-line version of the course. The principal investigator, subject matter experts from the company, and SI employees all contributed to the design effort. Although the basic format of the course follows that of the face-to-face course, information in the course had to be translated from a primarily oral to an entirely written presentation. To make this translation, the SI instructional designer divided each module into sections and pages.
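Before this structure is described in detail, it may help to see it sketched as a simple data structure: a course is a list of modules, each module holds several sections, and each section holds pages. The sketch below is purely illustrative; the class names and example titles are hypothetical and are not drawn from the actual system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Page:
    title: str
    body: str                     # text and basic graphics only

@dataclass
class Section:
    title: str
    pages: List[Page] = field(default_factory=list)

@dataclass
class Module:
    title: str                    # e.g., "Describe the Problem"
    sections: List[Section] = field(default_factory=list)  # 3 to 10 per module

    def page_count(self) -> int:
        # Modules in the course ranged from 15 to 52 pages (mean = 33).
        return sum(len(s.pages) for s in self.sections)

# The course menu lists the nine modules; each module menu lists its sections.
course: List[Module] = [Module("Prepare for the Problem-Solving Process"),
                        Module("Establish the Team")]  # ...and so on
```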
Each module was divided into sections that covered similar knowledge or skill. Sections were further divided into pages that would be displayed on the computer screen at any one time. Menus were created at both the course level, displaying the 9 module options, and the module level, displaying 3 to 10 sections depending on the module. Trainees could select items on these menus to view the pages associated with that topic. To provide an indication of scope, the modules ranged from 15 to 52 pages in length, with an average of 33 pages each.⁴

An iconic user interface was present on the screen at all times so trainees could make decisions about how to "navigate" through the course or select the next material to be presented on the screen. The navigational options available to trainees included: go to main course menu, go to current module menu, go back a page, and go forward a page. Other features of the course that could be accessed via icons were: a course map (depicting the structure of the course materials), a glossary (containing definitions of key terms), and job aid diagrams (summaries of key tools presented in the course).

³ Minor wording changes have been made in order to maintain the company's anonymity.

⁴ These numbers were calculated by counting the number of content screens within each module. Test materials and questionnaires were not included. Similarly, although each quiz item was presented and scored on a separate page, each quiz was counted as a single page.

In translating the material from face-to-face to WBT, attempts were made to incorporate learning events or activities to keep the trainee actively involved rather than passively reading. These activities comprised trainee responses to questions and feedback regarding those responses. While some of these activities were modified from the instructor-led class, the SI instructional designer added a number of additional activities for the web-based version. Following the redesign there were 47 possible responses involved in these activities, at least three in each module. Table 1 summarizes the objectives of each module and indicates the learning activities and other instructional features present.

Additional learning events include the case study and quiz questions at the end of each module. In the case study activities, trainees are asked to read material and make decisions using the knowledge and skill taught earlier in the module. Some of the activities involved selecting a response from a closed set of alternatives, while others involved typing in a response. Most case studies were contained in the original course, although a few were created jointly by the instructional designer and the principal investigator. Case study activities created during redesign were reviewed by a SME to ensure the correctness of responses and feedback. There were 20 cases in total, and most were continuations of 3 core cases used throughout the course. The cases involved 74 opportunities for response activities. The majority of the cases (17, 85%) were presented at or near the end of the modules. To complete a module, trainees had to page through some of the cases but not all of them. Moreover, trainees did not have to select a response to move forward; they could simply select to continue. As a consequence, the amount of case study activity (i.e., how many responses were selected) was under trainee control.
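The interactivity just described followed a simple request-and-response pattern: the trainee submits a response, the server program looks up or generates feedback, records the event, and returns a page to display. A minimal sketch of that pattern follows, with a modern Flask handler standing in for the 1999 server-side program; the route, field names, and feedback entries are all hypothetical.

```python
from flask import Flask, request

app = Flask(__name__)

# Hypothetical canned feedback keyed by (activity id, selected option).
FEEDBACK = {
    ("m3_problem_statement", "a"): "Correct: a problem statement names the object and the defect.",
    ("m3_problem_statement", "b"): "Not quite: this option describes a symptom, not a problem statement.",
}

@app.route("/activity", methods=["POST"])
def activity():
    activity_id = request.form["activity_id"]
    response = request.form["response"]
    # Log the response so activity-level variables can be computed later.
    app.logger.info("trainee=%s activity=%s response=%s",
                    request.form.get("trainee_id"), activity_id, response)
    # Return feedback to be displayed on the trainee's screen.
    return FEEDBACK.get((activity_id, response),
                        "Thank you. Compare your answer with the model answer on the next page.")
```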
TABLE 1. Course Modules and Learning Events

Step 1: Prepare for the Problem-Solving Process
  Objectives: Determine whether or not to use the process; describe the function of assessing questions; explain the key functions of the supporting software; choose, verify, implement, and validate an ERA.
  Features: Explanation of importance. Presentation of materials with example(s). Practice and compare feedback on whether to use the process. Practice and compare feedback on quantifying symptoms. Optional case activity. Case study practice and feedback on whether to use the process and on how to implement and verify an ERA. Quiz with feedback.

Step 2: Establish the Team
  Objectives: Describe the guidelines for determining team membership; describe team roles, their functions, and how they are implemented; explain the three elements of team operating procedures; describe characteristics of team synergy.
  Features: Explanation of importance. Presentation of materials with example(s). Practice and compare feedback recognizing team roles and their effects on teams. Optional case activity. Case study practice and feedback on who to put on teams. Quiz with feedback.

Step 3: Describe the Problem
  Objectives: Explain the process for describing a problem; develop a problem statement; develop a problem description.
  Features: Explanation of importance. Presentation of materials with example(s). Practice and compare feedback on developing problem statements. Practice and feedback on developing a problem description. Two optional case activities. Case study practice and feedback on developing a problem statement and problem description. Quiz with feedback.

Step 4: Develop ICA
  Objectives: Define and explain the features of an Interim Containment Action (ICA); distinguish between verification and validation; explain how to verify; explain how to validate.
  Features: Explanation of importance. Presentation of materials with example(s). Optional case activity. Case study practice and feedback on selecting an ICA. Quiz with feedback.

Step 5: Define/Verify Root Cause
  Objectives: Use the problem-solving process and worksheet to identify the root cause of a problem; identify the escape point of a problem.
  Features: Explanation of importance. Presentation of materials with example(s). Practice and feedback on defining root cause. Practice and feedback on verifying root cause. Three optional case activities. Case study practice and feedback using the problem-solving process and worksheet. Quiz with feedback.

Step 6: Choose/Verify PCA
  Objectives: Define Permanent Corrective Action; choose a PCA using the seven-step decision-making process; use a decision-making worksheet; explain how to verify a PCA.
  Features: Explanation of importance. Presentation of materials with example(s). Practice and feedback on choosing a PCA. Optional case activity. Case study practice and feedback on using the decision-making process and worksheet. Quiz with feedback.

Step 7: Implement/Validate PCA
  Objectives: Describe the elements of planning a PCA implementation; describe the elements of problem prevention.
  Features: Explanation of importance. Presentation of materials with example(s). Practice and feedback implementing a PCA. Optional case activity. Case study practice on planning PCA implementation. Quiz with feedback.

Step 8: Prevent Recurrence
  Objectives: Explain how to identify opportunities to improve on factors affecting the present problem; explain how to identify improvement opportunities for similar problems; explain how to make recommendations for systemic improvements.
  Features: Explanation of importance. Presentation of materials with example(s). Case study practice and feedback on how to identify system improvements. Quiz with feedback.
Step 9: Recognize Contributions
  Objectives: Describe the theory of recognition; explain the closure process.
  Features: Explanation of importance. Presentation of materials with example(s). Optional case activity. Case study practice on how to get closure. Quiz with feedback.

Quizzes are also used at the end of each module to stimulate learning. Multiple-choice questions with 2 to 5 response alternatives were written based on module objectives. Two different quiz formats were created, and trainees were randomly assigned to receive different quizzes. The first type of quiz included only outcome feedback regarding correct or incorrect answers. The second type of quiz provided somewhat more detailed feedback about why particular responses were correct or incorrect. The difference between the quiz types was in the feedback provided, not in the questions asked. As quiz type was not the focus of this investigation, it is dummy coded and controlled in all analyses.

The quiz items were written by the principal investigator and reviewed by the SI instructional designer and a subject matter expert from the company. There were 37 quiz questions divided into sets of 2 to 8. Trainees answered a set of items at the end of each module. To complete a module, trainees had to select answers to the quiz questions. Because responses to quizzes were required to proceed through the course, they are not optional activities over which trainees had control.

It is important to summarize from this discussion the exact nature of the control provided to trainees in this course. First, trainees had control over pacing. They controlled how long to view each screen and how long to spend on each module. Second, trainees had control over whether to complete and/or repeat within-module and case exercises. Finally, trainees had control over sequencing. Trainees were able to select any module from the main menu and select any subsection from each module menu. However, given that this course teaches a step-by-step process, it was not expected that trainees would skip over material or proceed in any other non-linear fashion. Rather, trainees were expected to proceed through the course in order and use the menu to jump back occasionally for review. This is in fact what transpired. All trainees proceeded through the course in order, conforming to the structure of the problem-solving process. So although trainees were offered control over sequence, this type of control is not of interest here because of the lack of variance across trainees.

Procedure

Trainees on the instructor-led course waitlist were contacted via phone by registrars of the company's central training facility. They were offered the opportunity to volunteer for the web-based version of the course. Trainees who volunteered were scheduled for one of several two-day time periods. Although the course could be taken at trainees' desktops via the corporate Intranet, the company opted to pilot the training at a centralized training site. Having a centralized pilot allowed the company to evaluate the characteristics of the course while holding the learning environment constant. From a research perspective, a centralized pilot provides control over the learning environment and stimulus materials that is often unavailable in field research.
To facilitate the goal of holding the environment constant, nearly identical computer laboratories were used for all sessions, the facilitators followed a scripted protocol, and no content instruction other than that presented on the computer was provided.

When they arrived on their scheduled day, trainees were greeted by one or more facilitators, employees of the company and/or the vendor, and led to a personal computer in the laboratory. Trainees worked on the course individually but took the course in groups that varied in size from 8 to 15. Following a script, the facilitators introduced themselves and explained that they were present only to help with navigation questions and feedback about the course. It was explained that the computer was to serve as the instructor. While trainees were encouraged to comment on any questions or concerns they had about the interface or the course, content questions were answered only by reiterating material already displayed on the computer screen. No additional materials or explanations were provided. After this introduction, trainees started the course.

Trainees began by reviewing a screen that contains the informed consent for this research. The consent screen is displayed in Appendix A. Trainees then completed a pre-test questionnaire that included demographic and individual differences measures as well as the knowledge and application pre-tests. During the training, a number of other questions were asked about the trainees' attention to and activity during the course. A summary of the measures and when they were collected is in Table 2.

Trainees proceeded by using the mouse to select icons that controlled which material to place on the screen. All trainees received the same basic instructional material, although trainees did have the option of how long to spend on each screen, in each section and module, and on the course as a whole. Trainees also had control over whether to complete the exercises and activities offered. Trainees were neither encouraged nor discouraged from completing these activities. The post-tests and other outcome measures were presented after trainees had completed all training material.

Measures

Computer-administered surveys were used to collect the self-report data. The company's web server kept records of trainee responses as well as trainee time on task and activity. These data were downloaded and provided to the principal investigator following the conclusion of the study. The only data collected via pencil and paper were the application tests, which were 3-question open-ended instruments administered at the beginning and end of training. The instruments are contained in Appendix B.

TABLE 2. Measures

Individual Differences (beginning of training): Education (1); Content experience (1); Training goal (12); Training priority ratings (15); Technology Efficacy (4); Learning Efficacy (4); Content Utility (4)
Learning Process (during training): Metacognition (8); Attentional Focus (6)
Cognitive Outcomes: Verbal Knowledge pre-test (25) at the beginning of training; Verbal Knowledge post-test (25) at the end of training
Skill-Based Outcomes: Application Knowledge pre-test (3) at the beginning of training; Application Knowledge post-test (3) at the end of training
Affective Outcomes (end of training): Application Efficacy (4)

Note. Numbers in parentheses reflect the number of items in each scale. Questions used in each scale are in Appendix B. Time on task and activity level variables are calculated from data saved on the server as trainees complete the course.
Demographics. Education and content experience were collected on the pre-test questionnaire. Each was collected using a single multiple-choice question.

Quiz Type. The type of quiz feedback provided was coded 0 or 1. As with the demographics, this variable is used as a control.

Goals. Goals are the outcomes that trainees want to receive from training. Three types of goals were assessed with the pre-test questionnaire: (1) learning, (2) performance, and (3) completion. Each goal was assessed with a four-item measure created from statements to which trainees noted their agreement. Items for the learning and performance scales were derived in part from 8-item scales that measure trait goal orientations (e.g., Button, Mathieu, & Zajac, 1995). However, the items were reworded to focus on state intentions for the course, rather than general intentions and task preferences. This rewording brings the item wording closer to the scales used by Meece et al. (1988). Examples of learning goal statements are "I plan on learning as much as I can from this course" and "It's important to me that I learn about this problem-solving process." Examples of performance goal statements are "I plan on doing better than other trainees throughout this course" and "I want to impress others with my knowledge of this subject." Examples of completion goal statements are "My primary goal for this course is just to complete it" and "I want this course to be as easy as possible." Responses were provided along a 4-point scale with "strongly agree," "agree," "disagree," and "strongly disagree" anchors.

The completion scale contained one item with a negative corrected item-total correlation. Removing this item improved the coefficient alpha from .13 to .52. Similarly, one item in the performance goal scale had a low corrected item-total correlation. This item was removed to improve the reliability of the scale from .69 to .71. The reliability of the learning goal scale was .65. These coefficient alphas are lower than those obtained in other research studies.

A factor analysis was used to examine the underlying structure of the goal constructs. While the small sample size prohibits drawing strong conclusions from such an analysis, the question of factor structure is an important one. There has been some debate in the literature regarding the factor structure of goal orientation scales (see Button et al., 1996). In addition, the introduction of completion goals raises another issue. Past research has assessed completion goals without addressing whether having a completion goal is identical to having a weak or low learning goal. To examine this issue, the revised scale items were entered into a principal components analysis. Three components with eigenvalues over 1 were extracted, and the resulting component matrix was submitted to a Varimax rotation. The resulting matrix, presented in Table 3, indicates that there is indeed substantial overlap between the learning and completion items. The analysis indicates that items were split into two components containing items from both scales. In addition, most of the items displayed substantial cross-loadings between these two components. An examination of factors estimated using only common variance, a principal axis analysis, revealed a similar structure. An analysis that explicitly recognizes that the underlying completion and learning goal constructs may be related was attempted but could not be completed: an oblique rotation did not converge, a likely result of the small sample size.
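For readers who wish to reproduce this kind of analysis, a minimal sketch of a principal components extraction with the eigenvalue-over-1 rule and a Varimax rotation follows. The data here are random stand-ins for the real 80 x 10 matrix of goal-item responses, and the function names are hypothetical; this is an illustration, not the software actually used in the study.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Standard Varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    variance = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        if s.sum() < variance * (1 + tol):
            break
        variance = s.sum()
    return loadings @ rotation

rng = np.random.default_rng(0)
items = rng.normal(size=(80, 10))        # stand-in for the goal-item responses

corr = np.corrcoef(items, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]
keep = eigenvalues[order] > 1.0          # eigenvalue-over-1 extraction rule
loadings = eigenvectors[:, order][:, keep] * np.sqrt(eigenvalues[order][keep])
rotated_loadings = varimax(loadings)     # compare with the layout of Table 3
```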
These findings stimulated an attempt to find a combined measure of learning/completion. That is, different item combinations were explored in an attempt to create a combined scale capturing both goals. However, none of the alternative combinations, including those suggested by the factor analysis, resulted in an improvement over the internal consistency reliability obtained from the a priori scales. Consequently, completion and learning goals were maintained as independent scales despite their modest negative correlation (r = -.30).

TABLE 3. Rotated Component Matrix of Goal Measures

Item      Component 1   Component 2   Component 3
Perf1         .86          -.18          -.04
Perf3         .85          -.05          -.04
Perf2         .55           .34           .17
Comp3         .21           .76           .07
Learn2        .27          -.73           .23
Learn1        .34          -.57           .33
Learn3        .19          -.04           .82
Learn4       -.03          -.00           .73
Comp1         .04           .43          -.52
Comp2         .19           .37          -.43

Note. N = 80. Solution obtained via principal components analysis and Varimax rotation. Perf = performance goal item; Comp = completion goal item; Learn = learning goal item.

As a further means to address the structure of trainee goals, paired comparisons were collected. Trainees completed 15 comparisons in which they selected the most important outcome of the course for them from two options (i.e., which of the following is more important to you with regard to this course?). The comparisons are randomly ordered pairs of 6 terms, with 2 terms representing each goal. Learning was represented by the terms "learn a lot" and "gain new skill." Performance was represented by the terms "look knowledgeable" and "avoid mistakes." Completion was represented by the terms "avoid thinking too hard" and "finish quickly." Order of presentation (i.e., learning vs. performance compared to performance vs. learning) was fixed to one order for all pairs and all trainees, cutting the number of necessary ratings in half. This procedure fixes possible order effects to be constant across all trainees.

Pairwise ratings yield ipsative judgments regarding which goals are most important to the rater relative to other goals. As behaviors that further the pursuit of these goals may conflict (i.e., it would be difficult to both learn the material and leave early), a forced-choice rating scale offers an estimate of the goal priorities held by trainees. Ipsative measures are inherently within-subject measures, so descriptive statistics derived on these scales are not useful for research purposes. However, ipsative ratings can be used to make between-subjects comparisons when they are used to categorize individuals. This is accomplished by counting the total number of times a trainee endorses a particular goal, as indicated by his or her selection of one of each pair of terms. In counting endorsements, the 3 comparisons that paired two terms representing the same goal were not counted. Thus, trainees could have at most 8 endorsements for any one goal, and the sum of all endorsements could not exceed 12.

Results indicate little variability in ratings. All but 2 trainees selected learning as their first priority (i.e., trainees endorsed more learning goal statements than other goal statements). Thus only the second-priority goal (i.e., the goal with the second greatest number of endorsements) could be used to distinguish among trainees. Trainees were categorized as to whether performance or completion was their second priority. This variable is called "completion priority" and was coded 1 for those with completion as their second priority and 0 for those with performance as their second priority.
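A minimal sketch of this categorization step follows, using hypothetical endorsement counts (how many of the 12 cross-goal comparisons each trainee decided in favor of each goal). The function name and example values are illustrative only.

```python
from typing import Dict

def completion_priority(endorsements: Dict[str, int]) -> int:
    """Code 1 if completion is the second-priority goal, 0 if performance is.

    Assumes learning is the first priority, as it was for all but two trainees.
    """
    ranked = sorted(endorsements, key=endorsements.get, reverse=True)
    return 1 if ranked[1] == "completion" else 0

# Example: 7 learning, 3 completion, and 2 performance endorsements (sums to 12).
print(completion_priority({"learning": 7, "completion": 3, "performance": 2}))  # -> 1
```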
Content Utility. This is the perceived usefulness of the training content for improving job performance. Four-point Likert scales with "strongly agree" and "strongly disagree" anchors are used to measure this construct. A 4-item scale was developed specifically for this study and found to have an internal consistency reliability, based on coefficient alpha, of .76. Examples of statements used in this scale are "The content of this course will be useful for me back on the job" and "If I do not learn this material, I may have difficulty performing my job well."

Learning Self-Efficacy. This construct captures the confidence that trainees feel with regard to their ability to learn the problem-solving process presented in the course. This is a four-item scale answered using the four-point response anchors noted earlier. Brown (1996) used a similar scale. Sample items are "I am confident that I can gain the skills necessary to perform a G8D" and "I can learn the material in this course." This construct was assessed at the beginning of training and found to have a coefficient alpha of .71. Removal of one item with a low corrected item-total correlation improved the internal consistency reliability to .76. The more reliable scale was selected for all analyses.

Technology Self-Efficacy. This construct captures the confidence that trainees have with regard to using the web browser to learn new knowledge and skill. Four-point Likert scales were used for this construct as well. A 4-item scale was developed using wording similar to the learning self-efficacy scale. Sample items are "I am confident that I can learn using this training delivery technology" and "I am comfortable taking courses and receiving training via computer." This construct was also assessed at the beginning of training. The internal consistency reliability of this scale was improved by removing an item with a near-zero corrected item-total correlation. The alpha coefficient for the revised scale is .63.

Metacognition. Metacognition is awareness of and control over one's cognition. In this study metacognition is considered to include self-awareness of knowledge level (i.e., do you understand the material?) and strategy use to improve that knowledge (i.e., do you use particular learning strategies to learn the content?). Metacognition was assessed with an 8-item self-report Likert measure adopted from Pintrich et al. (1991). A similar measure, although slightly longer at 12 items, was used by Ford et al. (1998). The measure was collected twice in the middle of training. Sample items are "I asked myself questions to see if I understood material" and "I tried to monitor whether I understood the material I was reading."

The alpha coefficient of the scale was .55 and .61 at administrations one and two, respectively. This is substantially lower than the reliability obtained by Ford et al. (1998). Differences in reliability across the studies may be attributable to different numbers of scale points (i.e., four versus five), different times of administration (i.e., during versus after training), and/or differences in research populations (i.e., adult versus student learners). No subset of items was found to offer higher internal consistency reliability; thus, the scale was retained in its original form for hypothesis testing. The estimate of test-retest reliability, obtained from the correlation between the two scales, is .58. This finding suggests moderate consistency across time.
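Because scale refinement throughout this section repeatedly follows the same recipe (compute coefficient alpha, inspect the corrected item-total correlations, drop an offending item, and recheck alpha), a minimal computational sketch may be useful. The response matrix here is a random stand-in for a real 80-respondent by 4-item scale; the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
scale = rng.integers(1, 5, size=(80, 4)).astype(float)  # stand-in responses

def cronbach_alpha(items):
    """Coefficient alpha for an (n respondents x k items) matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_totals(items):
    """Correlation of each item with the sum of the remaining items."""
    return np.array([
        np.corrcoef(items[:, j], np.delete(items, j, axis=1).sum(axis=1))[0, 1]
        for j in range(items.shape[1])
    ])

# Drop the weakest item and recheck alpha, mirroring the procedure above.
r_it = corrected_item_totals(scale)
revised = np.delete(scale, np.argmin(r_it), axis=1)
print(cronbach_alpha(scale), cronbach_alpha(revised))
```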
Unfortunately, eighteen trainees (23%) did not complete the second administration of the metacognition scale. Consequently, the metacognition construct is set to the value obtained in the first administration.

Attentional Focus. This is the amount of attention devoted to the course materials as opposed to unrelated topics or material; it is the cognitive facet of effort. This construct has two operationalizations. The first measure is perceived focus, a self-report Likert scale asking the extent to which trainees thought about task-related and task-unrelated subjects. The scale is adopted from Fisher (1995) and Brown (1996). It includes statements such as "I let my mind wander while I was learning the material" and "I concentrated on the training materials." Most of the items in these two scales were derived from Kanfer and Ackerman's (1989) measure of off-task attention. The six-item measure used in this study is most similar to the measure employed by Brown (1996). Items about attention to off-task topics were reflected so that high scores reflect greater task-related attentional focus. As with metacognition, this construct was assessed twice during training. One item in the scale was found to have a low corrected item-total correlation, and its removal, for both administrations, improved the reliability. The revised scale displayed alphas of .78 at both administrations. Test-retest reliability was .67, suggesting moderate consistency in attention over time. Unfortunately, as with the metacognition construct, sixteen trainees (20%) did not complete the second survey. Consequently, the first measure is used as the indicator of attention.

The second operationalization of attentional focus is time on task, or the time it takes trainees to complete the course. This is calculated using time stamps from data stored in a database (a computational sketch appears below). Time stamps recorded the times trainees started and ended each module. Total time was calculated for each day by taking the difference between the first time stamp (i.e., the first module started) and the last time stamp (i.e., the last module started or ended). The sum of time from both days was used as the time on task measure of attentional focus. As the measure was generated from computer files, reliability is assumed to be high, but no estimate is available. This measure introduces some error because trainees did not all take the exact same amount of time for lunch or breaks during the day.

The theory advanced in this manuscript suggests that these two operationalizations are different manifestations of the same underlying construct. The zero-order correlation between perceived attentional focus and time on task, however, is -.04. This suggests that these measures are empirically and perhaps conceptually independent. As a result of the obtained correlation, both measures of attentional focus are maintained as separate indicators throughout the analyses.

Activity Level. This is the amount of practice trainees perform during training; it is the behavioral facet of effort. Practice included answering questions, filling in forms, marking checklists, and entering text at certain points during training. There were 121 such activities in the program, 47 (39%) of them presented throughout the modules and 74 (61%) presented as cases at or near the end of the modules. Quiz and test questions were not counted in this number because those activities were required to progress through the course.
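Both of the computer-generated measures, time on task and the activity-level variables described here, derive from the server's records. A minimal sketch of the computations follows, assuming a hypothetical log table with one row per recorded event; the column names and stand-in data are illustrative only.

```python
import pandas as pd

# Stand-in server log: module starts/ends and activity responses per trainee.
log = pd.DataFrame({
    "trainee_id":  [1, 1, 1, 1, 1, 2, 2, 2],
    "day":         [1, 1, 1, 2, 2, 1, 1, 1],
    "timestamp":   pd.to_datetime(["1999-03-01 08:05", "1999-03-01 10:15",
                                   "1999-03-01 16:40", "1999-03-02 08:10",
                                   "1999-03-02 15:55", "1999-03-01 08:07",
                                   "1999-03-01 11:02", "1999-03-01 16:20"]),
    "event":       ["module_start", "activity", "module_end", "module_start",
                    "module_end", "module_start", "activity", "module_end"],
    "activity_id": [None, "act_07", None, None, None, None, "act_07", None],
})

# Time on task: per trainee and day, last time stamp minus first, summed over
# the two days (lunch and breaks introduce some error, as noted above).
spans = log.groupby(["trainee_id", "day"])["timestamp"].agg(["min", "max"])
time_on_task = (spans["max"] - spans["min"]).groupby("trainee_id").sum()

# Activity level (percent): distinct optional activities completed out of 121.
N_ACTIVITIES = 121
responses = log[log["event"] == "activity"]
percent_activities = (responses.groupby("trainee_id")["activity_id"].nunique()
                      / N_ACTIVITIES * 100)
```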
The completion rate for quiz and test questions was 100%, equivalent across all trainees.

The primary operationalization of activity level is the percent of all possible activities completed. This measure captures the extent to which trainees used the practice activities offered through the computer. This operationalization adheres closely to the construct definition of level of behavioral practice. In this course, activities were designed to provide the practice necessary to achieve the course objectives, and the objectives were used to develop the evaluation instruments. Thus, completing all exercises is the best way a trainee can practice the targeted knowledge and skill. This operationalization does not, however, capture whether or not trainees repeated or reviewed activities in an effort to gain further practice. In addition, this measure does not provide an indication of the quality of effort exerted during each activity.

To address the quality of practice, the total number of words typed for open-ended questions can be measured. Forty-four of the 121 activities (36%) were open-ended questions for which trainees were asked to type a response. The total number of words entered by trainees was summed. This measure reflects the thoroughness with which trainees attempted to answer these questions. However, it may be contaminated by individual differences in communication skills, such as the ability to write or type effectively.

Another operationalization of activity is the number of repeated activities. This measure addresses the issue of repetition in practice. It is a sum of the number of times exercises were completed more than once. This measure provides additional information because, rather than capturing the amount of coverage, it captures the extent to which trainees sought additional practice by repeating activities.

An alternative measure of activity level would capture whether trainees viewed optional materials or screens. In other words, the amount of optional material viewed could serve as an indicator of effort. Unfortunately, the database created by the program did not record whether trainees viewed optional material; it only recorded whether trainees used optional activities and exercises. Consequently, the activity operationalization focuses on active practice rather than on both review and practice.

Application Self-Efficacy. This construct captures the confidence trainees have for using the problem-solving process back on the job. This is a four-item scale very similar to the scales used to measure learning and technology self-efficacy. Sample statements used to assess this construct include "I am confident that I have gained the skills necessary to perform a problem-solving project" and "Even though it may be difficult, I know that I can use the problem-solving process." As with the other self-efficacy scales, the alpha coefficient of the scale was improved by removing one item. The internal consistency reliability of the revised 3-item scale was found to be .73.

Verbal Knowledge. Verbal knowledge is assessed as the number of items correct on the pre- and post-tests. The pre-test and post-test are identical 25-item measures that ask trainees to select the correct answer from a list of distracters. Twenty-four of the items contain 4 distracters; one item contains 5. Eighteen of these items were taken from the original company pre/post test. On the post-test, all items indicated positive corrected item-total correlations.
The internal consistency of the post-test as indicated by alpha is .84. The pre-test had a lower alpha, .54, because a number of items had low and even negative corrected item-total correlations. An analysis of item difficulties suggests that the tests were moderately difficult and therefore capable of discriminating among trainees with different knowledge levels. Item difficulties for the pre-test varied from .10 to .81 with a mean of .53 (SD = .20). Item difficulties on the post-test varied from .28 to .91 with a mean of .72 (SD = .18). These results indicate that the post-test was easier than the pre-test; yet the post-test contained items that were difficult enough to provide variability.

Application Knowledge. Application knowledge is reflected in a trainee's ability to apply concepts discussed in the course to new problems. Three multiple-part essay questions are used to tap this construct for different activities covered in the course. These questions were written by the principal investigator. Based on feedback from the company's SME, an answer key was developed that employed a 3-point rating scale. Trainees were given a 0 for incorrect answers; a 1 for answers that reflect the course materials in a technically accurate manner but do not provide evidence of application; and a 2 for answers that are both technically accurate and reflect application of the course materials. This scoring key explicitly acknowledges that application knowledge requires verbal knowledge and conceptual understanding. Scoring focused on technical accuracy and evidence of application while ignoring presentational issues such as spelling, grammar, and punctuation. The answer keys and sample coding sheets are contained in Appendix C.

Two advanced graduate students were employed as raters and were trained on how to grade the responses using these keys. The training involved an initial discussion of the training content and practice grading 5 sample answers created by the principal investigator. Differences in perspective on the sample cases were resolved through discussion. Following training, the raters coded all answers independently.

Multiple responses within a question were averaged to yield a within-rater question score for each question and trainee ranging from 0 to 2. The resulting scores were correlated to generate the equivalent of a multi-method, multi-trait matrix. In this case the methods are the two raters and the traits are the responses to the three application questions. From such a matrix, the reliability of raters can be estimated by examining the correlation of scores provided by different raters for the same question. In addition, the correlation of scores from the same rater but for different questions provides an estimate of internal consistency. This analysis was conducted for both the pre- and post-tests and is displayed in Tables 4 and 5.

TABLE 4. Correlations among Raters and Questions on Application Pre-Test

                      RATER 1              RATER 2
              Q1     Q2     Q3     Q1     Q2     Q3
RATER 1  Q1  1.00
         Q2   .03   1.00
         Q3   .20    .03   1.00
RATER 2  Q1   .75    .10    .13   1.00
         Q2  -.06    .63    .16   -.06   1.00
         Q3   .09    .06    .85    .04    .13   1.00

Note. N = 80. Correlations in bold are significant at p < .05. Correlations in the lower square diagonal represent the reliability estimates of interest.

TABLE 5. Correlations among Raters and Questions on Application Post-Test

                      RATER 1              RATER 2
              Q1     Q2     Q3     Q1     Q2     Q3
RATER 1  Q1  1.00
         Q2   .25   1.00
         Q3   .48    .38   1.00
RATER 2  Q1   .80    .29    .43   1.00
         Q2   .18    .80    .35    .23   1.00
         Q3   .37    .38    .85    .33    .33   1.00

Note. N = 80. Correlations in bold are significant at p < .05. Correlations in the lower square diagonal are the reliability estimates of interest.
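The summary values discussed next can be computed directly from such a matrix. A minimal sketch follows, using random stand-in scores in place of the real 80-trainee by 2-rater by 3-question ratings; the array name and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
ratings = rng.uniform(0, 2, size=(80, 2, 3))  # stand-in for the 0-2 scores

flat = ratings.reshape(80, 6)            # columns: R1Q1, R1Q2, R1Q3, R2Q1, R2Q2, R2Q3
corr = np.corrcoef(flat, rowvar=False)   # the 6 x 6 matrix of Tables 4 and 5

# Cross-rater, same-question correlations: the inter-rater reliability estimates.
cross_rater = [corr[q, q + 3] for q in range(3)]

# Within-rater, cross-question correlations: the internal consistency estimates.
within_rater = [corr[i, j] for r in (0, 3)
                for i in range(r, r + 3) for j in range(r, i)]

print(np.mean(cross_rater), np.mean(within_rater))
```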
Tables 4 and 5 indicate that the correlations between raters are high, indicating rater reliability, but that internal consistency is low. On the pre-test, the average cross-rater, same-question correlation is .74, while the average within-rater, cross-question correlation is .08. The post-test scores offer the same basic pattern; however, the off-diagonal correlations are generally higher. For the post-test, the average cross-rater, same-question correlation is .82, while the average within-rater, cross-question correlation is .33. Despite the overall increase in the off-diagonal correlations for the post-test, the data indicate that the raters provided consistent scores. Consequently, ratings were averaged to generate a single score for each question.

The low item-to-item correlations, both within and across raters, suggest that successful application of one skill from the course is not highly correlated with successful application of the others. Alphas, calculated using the averaged ratings, are .63 for the post-test and .20 for the pre-test. Nonetheless, the question scores were averaged to create composite measures of application knowledge. This criterion is acknowledged to be complex and multi-dimensional, but it can be reliably coded.

Data Analysis

Hierarchical regression will be used to test the hypotheses in this study. Although regression results are influenced by random error in the measures, a structural equation modeling (SEM) analysis that adjusts for unreliability was not attempted because of the small sample size. The maximum-likelihood estimation used to generate solutions in SEM typically requires large sample sizes in order to obtain stable parameter estimates. Regression analysis opts for more stable estimates while sacrificing the ability to adjust for the effects of random error. The verbal and application knowledge constructs are studied for their change from the beginning to the end of training, so all analyses with these constructs will examine the effects of independent variables on post-test values while controlling for pre-test values. This method for the analysis of change is superior to the analysis of change scores because it avoids the use of notoriously unreliable difference scores (Johns, 1981).

RESULTS

Table 6 presents the descriptive statistics and correlations for the variables created in this study. Correlations that are significant at p < .05 are presented in bold. Appropriate reliability estimates are presented in parentheses on the diagonal. The correlation matrix provides important information about the relationships among the individual differences, learning processes, and outcome variables.

With regard to individual differences, the correlations suggest that the control variables do not have strong relationships with learning choices or with outcomes. Neither education nor content experience is significantly related to the learning process variables, nor are these variables significantly related to post-test scores. However, as would be expected, content experience is positively correlated with the verbal knowledge pre-test score (r = .23) and the application knowledge pre-test score (r = .16).

The goal and attitude measures are moderately correlated. The learning goal measure is negatively correlated with both completion goal measures (continuous and dichotomous), but positively correlated with the performance goal measure.
The correlation between learning and completion goals suggests that those who hold strong learning goals are less likely to want to complete the course quickly. Learning goal and content utility are positively correlated (r = .41), suggesting that those who think the content will be useful are more likely to want to learn the material than those who think the content will not be useful. Learning self-efficacy is positively related to content utility (r = .41) and to technology self-efficacy (r = .39).

[TABLE 6. Descriptive Statistics and Correlations. The matrix itself could not be recovered from the scanned original; as noted above, N = 80, correlations significant at p < .05 appear in bold, and reliability estimates appear in parentheses on the diagonal.]

One interesting and unexpected individual differences finding is the pattern of correlations for learning goals and content utility. Both of these measures have negative correlations with activity (repeats and percent) and with the knowledge post-tests.

With regard to learning choices, the strongest correlations observed are for time on task and for the percent and word activity levels. Time on task is significantly and positively related to activity as measured by the number of words (r = .29) and the percent of activities completed (r = .27). All three of these process measures are significantly and positively correlated with knowledge test scores. The repeats measure of activity level is not highly correlated with any measured variable.

The low correlation of time on task and self-reported attentional focus (r = -.04) does not support the assertion that they are indicators of a single underlying construct. Further evidence against the assertion is the pattern of correlations with technology self-efficacy and knowledge test scores. Technology self-efficacy is negatively related to time on task (r = -.15) but positively related to perceived attentional focus (r = .19). Similarly, time on task is significantly related to test scores (r = .15 to .37) while perceived focus is not (r = -.10 to .01). Because the two measures of attentional focus appear to be empirically independent, both will be used in later analyses.
With regard to outcome measures, the knowledge test measures appear to be related to each other but independent of application self-efficacy. The correlation between the verbal and application post-test scores is .57, and the correlations between these scores and application self-efficacy are -.10 and .00, respectively. The strength of the former correlation suggests that a single underlying knowledge construct may be sufficient to explain the data. Because the hypotheses were worded for each knowledge outcome, independent analyses will be presented along with a composite measure.

An examination of descriptive statistics is also useful for determining the distributional properties of the variables. In this regard, three variables in this study have large enough standard deviations to warrant concern. The time on task and the word and repeats activity measures display considerable variability across trainees, raising concerns about whether the variables are normally distributed. Many time and count variables like these have variances that exceed their means, an event called overdispersion (Long, 1997). One method for reducing variance and bringing overdispersed variables closer to a normal distribution is to use a square-root transformation. Transforming data can be a useful technique for ensuring that data meet analytic assumptions, but it can create difficulty for the interpretation of findings. Specifically, the interpretation of b and beta weights can be confusing because the variable metrics are no longer tied to the metric that was used to collect the data. Square-root transformations were conducted on these measures, and the new measures were examined for their descriptive properties. The transformed variables had reduced variances and distributions that more closely resembled the normal. In addition, correlations with the transformed variables were slightly larger than those obtained with the untransformed variables. However, the pattern of correlations remained the same, and none of the conclusions drawn from the regression analyses changed. Because results from transformed variables can be difficult to interpret, only the results from the untransformed data are presented here.

Hypothesis testing is conducted following the major sections of the hypotheses, starting with the influence of individual differences on learning choices, followed by the influence of choices on outcomes, and concluding with an examination of the mediational hypotheses. Within each section, however, the results are organized by dependent variable because of the nature of the analyses. This organization can pose some difficulty for the reader because not all hypotheses are presented in numeric order. To make the interpretation of the results more manageable, a table summarizing the results is presented at the end of this section.

Controls

Control variables used in this study are education, content experience, and quiz type. The first two variables are used to control for the ability and experiential components of the individual differences in learning choice model. Although the model indicates that immutable trainee characteristics will have their major influence directly on knowledge gain, these control variables are used in every analysis in order to provide a more conservative test of the hypotheses. The third variable, quiz type, is used to control for possible learning process and learning outcome differences that arise as a result of the different types of quizzes. None of the results for these variables are significant; nonetheless, they are maintained in all analyses to ensure that the statistical tests conform to the theory.

Individual Difference Effects on the Learning Process

H1, H7, H8, and H9 suggest that trainees with higher learning goals, technology self-efficacy, learning self-efficacy, and perceptions of content utility will evidence greater attentional focus than trainees who are low on these characteristics.
None of the results for these variables are significant, none-the-less they are maintained in all analyses to ensure that statistical tests conform to the theory. Individual Differences Effects on the Learning Process H1, H7, H8, and H9 suggest that trainees with higher learning goals, technology self-efficacy, learning self-efficacy, and perceptions of content utility will evidence greater attentional focus than trainees who are low on these characteristics. 96 These hypotheses are tested for both operationalizations of attention: Perceived attentional focus (self-report) and time on task (computer-generated). Tables 7 and 8 present the regression of perceived focus and time on task, respectively, on the control variables and individual differences. TABLE 7. Regression Results of Attentional Focus (Perceived Focus Measure). Step: Variable(s) B R2 (If ARZ Adf 1: Education -.16 .03 2, 67 -- -- Content Experience -.06 2: Quiz Type .03 .03 3, 66 .00 1, 66 3: Learning Goal .48 .36 9, 6O .33 6, 6O Completion Goal -.15 Performance Goal -.26 Learning Self-Efficacy .10 Technology Self-Efficacy .25 Content Utility -.21 Notes. Dependent variable is attentional focus. Bold indicates significance at p < .05. Content utility is marginal at p < .10. The results depicted in Table 7 support Hl regarding the influence of learning goals and H9 regarding technology self—efficacy, but they do not support H8 regarding learning self-efficacy. The coefficient for perceived utility is marginally significant (p < .10) suggesting that utility may be associated with lower attentional focus. The direction of this finding is counter to H7. Trainees who perceive the training to be 97 more useful back on the job are less likely to focus their attention on task-related topics. Another unexpected finding in this analysis was the negative influence of performance orientation. Trainees with higher performance orientations were less likely to focus their attention on task. The self-efficacy findings were examined in more detail to determine whether collinearity between the two different measures of self—efficacy affected the results. The two predictors are correlated (r = .40) so this is a reasonable concern. To examine this possibility the analysis was conducted separately for each self-efficacy construct. The conclusions drawn from this supplemental analysis do not differ. Technology self-efficacy is clearly a better predictor of perceived attentional focus than leaming' self-efficacy. TABLE 8. Regression Results for Attentional Focus (Time on Task Measure) Step: Variable(s) B R2 df A R2 (If 1: Education -.07 .01 2, 71 -— -- Content Experience -.09 2: Quiz Type -.02 .01 3, 70 .00 1, 7O 3: Learning Goal .02 .13 9, 64 .11 6, 64 Completion Goal -.29 Performance Goal -.01 Learning Self-Efficacy .12 Technology Self-Efficacy -. 19 Content Utility -. 12 Notes. Dependent variable is time on task. Bold indicates significance at p < .05. 98 Time on task measure of attentional focus was also regressed onto the control variables and the individual differences. The results are very different. As indicated by Table 8, none of the hypotheses were supported. Instead, completion goal was a significant predictor of time on task. As would be expected given the construct definition of completion goal, trainees with high completion goals spent less time on task. The results of this analysis provide further support for the idea that time on task and perceived attentional focus are not indicators of a single underlying construct. 
H2 suggests that trainees with learning goals will engage in more metacognitive activity. Table 9 contains the regression analysis. The significant beta on learning goal indicates that H2 is supported. No other individual differences were significant predictors of metacognition.

TABLE 9. Regression Results for Metacognition

Step: Variable(s)                 β      R²     df      ΔR²    Δdf
1: Education                    -.04    .03    2, 67    --     --
   Content Experience           -.17
2: Quiz Type                    -.24    .08    3, 66    .05    1, 66
3: Learning Goal                 .29    .17    9, 60    .09    6, 60
   Completion Goal               .18
   Performance Goal             -.00
   Learning Self-Efficacy       -.04
   Technology Self-Efficacy      .12
   Content Utility              -.01

Note. Dependent variable is metacognition. Bold indicates significance at p < .05.

H3, H4, H5, and H6 suggest that trainees with higher learning goals, technology self-efficacy, perceived utility, and learning self-efficacy, respectively, will have higher activity levels. To test these hypotheses, regression analyses were conducted on the primary measure of activity, percent, and then on the remaining two measures, words and repeats.

TABLE 10. Regression Results for Activity Level (Percent Measure)

Step: Variable(s)                 β      R²     df      ΔR²    Δdf
1: Education                    -.17    .03    2, 71    --     --
   Content Experience           -.23
2: Quiz Type                    -.05    .08    3, 70    .05    1, 70
3: Learning Goal                 .04    .17    9, 64    .09    6, 64
   Completion Goal               .21
   Performance Goal              .04
   Learning Self-Efficacy        .26
   Technology Self-Efficacy     -.01
   Content Utility              -.32

Note. Dependent variable is activity level (percent). Coefficients in bold are significant at p < .05. R-squares on steps 1 and 3 are marginal (p < .10). The coefficient for learning self-efficacy is marginal (p < .10).

The results for the regression of the percent operationalization on the individual difference measures are presented in Table 10. The results suggest marginal support for H6, that learning self-efficacy predicts activity, but no support for H3, H4, and H5 regarding the other individual differences. The results suggest that utility is a significant predictor of activity, but in a direction opposite to that hypothesized: trainees who perceived the content to be useful were less likely to complete activities.

Table 11 presents the regression of activity as operationalized by the number of words typed. The results of this analysis indicate that H6 was supported, because learning self-efficacy was positively related to activity, but H3, H4, and H5 were not supported. In fact, most of the betas predicting this activity operationalization were near zero.

TABLE 11. Regression Results for Activity Level (Words Measure)

Step: Variable(s)                 β      R²     df      ΔR²    Δdf
1: Education                    -.16    .03    2, 71    --     --
   Content Experience            .02
2: Quiz Type                     .15    .05    3, 70    .02    1, 70
3: Learning Goal                 .01    .16    9, 64    .12    6, 64
   Completion Goal               .05
   Performance Goal             -.05
   Learning Self-Efficacy        .30
   Technology Self-Efficacy      .05
   Content Utility               .07

Note. Dependent variable is activity level (words). Coefficients in bold are significant at p < .05.

The results for the number of repeated activities operationalization of activity, presented in Table 12, are similar. None of the hypotheses are supported, although the direction of the learning self-efficacy relationship matches those for the other analyses. An unexpected finding was that the relationship between learning goal and activity is in a direction opposite to that hypothesized: trainees with higher learning goals repeated fewer of the activities presented.
Step: Variable(s)               β      R²     df      ΔR²    Δdf
1: Education                   -.20    .05    2, 70    --     --
   Content Experience          -.10
2: Quiz Type                   -.15    .06    3, 69    .02    1, 69
3: Learning Goal               -.30    .19    9, 63    .13    6, 63
   Completion Goal             -.07
   Performance Goal             .16
   Learning Self-Efficacy       .22
   Technology Self-Efficacy     .00
   Content Utility              .08

Note. Dependent variable is activity (repeats). Coefficients in bold are significant at p < .05.

H1a, H2a, and H3a suggest that attentional focus, metacognition, and activity level will all be higher for trainees with learning goals than for those with completion goals. Similarly, H1b, H2b, and H3b suggest that these measures will be higher for trainees with learning goals than for those with performance goals. Unfortunately, nearly all trainees indicated learning as their top priority, so it is impossible to test these hypotheses directly. The dichotomous goal measure created from the goal priority ratings only contrasts performance and completion.

An alternative method for testing these hypotheses is to use the continuous goal measures to categorize trainees into goal categories. There are two ways trainees could be categorized. In the first method, trainees could be placed in a 2 x 2 matrix created by median splits of the two goals required in an analysis. In other words, to examine the learning-completion comparison, median splits could be used to assign each trainee to one of four cells: High Learning/Low Completion, High Learning/High Completion, Low Learning/Low Completion, or Low Learning/High Completion. One difficulty with interpretation of this analysis is that any planned contrasts do not control for main effects that occur for each type of goal. Controlling for these main effects in the two analyses, no significant interaction effects emerge. This finding suggests that differences between individuals with different goals occur as the result of one or both of the goals that divide them, rather than as a result of a combination of the two. Consequently, according to the first method of testing, no support is found for the between-goal hypotheses.

The second method of testing these hypotheses is to categorize trainees based on which goal was more heavily endorsed. Because each goal was rated on the same 4-point scale, a simple comparison of which goal score is higher provides the necessary data. Because so few trainees rated performance or completion goals highly, trainees were classified as endorsing a performance or completion goal if their scores were equal to or above the score for learning goal. Such counts indicate that 5 of 80 trainees endorsed completion goals as much as or more than learning goals, and 16 of 80 trainees endorsed performance goals as much as or more than learning goals. T-tests indicate that the only learning choice that demonstrated significant between-group differences was time on task by learning-completion goals. Trainees with completion goals averaged 389 minutes in training and trainees with learning goals averaged 506 minutes in training. The average difference of 117 minutes, nearly 2 hours, was significant (t = 2.40, df = 78, p < .05). This provides partial support for H1a, that attentional focus will be higher for trainees with learning goals than for trainees with completion goals.
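As a concrete illustration of this second classification method, the sketch below splits trainees by whether a rival goal was endorsed at least as strongly as the learning goal and then compares a learning choice across the two groups. The data frame and column names are hypothetical, and each goal score is assumed to be the trainee's mean rating on the 4-point response format.

from scipy import stats

def compare_by_dominant_goal(df, rival_goal, choice):
    """Split trainees on whether the rival goal was endorsed at least as
    strongly as the learning goal, then t-test a learning choice."""
    rival = df[df[rival_goal] >= df["learning_goal"]]
    learning = df[df[rival_goal] < df["learning_goal"]]
    t, p = stats.ttest_ind(learning[choice], rival[choice])
    print(f"learning n = {len(learning)}, rival n = {len(rival)}, "
          f"t = {t:.2f}, p = {p:.3f}")

# Example call (hypothetical data): the learning-completion contrast on
# time on task reported above.
# compare_by_dominant_goal(data, "completion_goal", "time_on_task")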
Learning Choice Effects on Training Outcomes

Three training outcomes were examined: Application self-efficacy, verbal knowledge, and application knowledge. The criteria are examined in the order listed. To test H10 and H11, application self-efficacy was regressed on the set of choice variables. As Table 13 indicates, application self-efficacy is not well predicted by any choice variables. Neither hypothesis is supported, because none of the coefficients is significant.

TABLE 13. Regression Results for Application Self-Efficacy

Step: Variable(s)               β      R²     df      ΔR²    Δdf
1: Education                   -.03    .04    2, 67    --     --
   Content Experience           .20
2: Quiz Type                   -.04    .04    3, 66    .00    1, 66
3: Attentional Focus           -.20    .14    7, 62    .09    4, 61
   Time on Task                -.09
   Metacognition                .11
   Activity Level (percent)     .17

Note. Dependent variable is application self-efficacy. Coefficients in bold are significant at p < .05.

The second outcome examined was verbal knowledge. H12, H14, and H18 suggest that higher verbal knowledge gain will result from higher metacognition, attentional focus, and activity, respectively. Table 14 presents the regression of verbal post-test score on these constructs, controlling for pre-test score. The addition of pre-test score to the control variables provides a test for change in verbal knowledge. Before using the pre-test score as a covariate, a test of invariance was conducted by testing for significant interactions between the pre-test and each of the other predictor variables. No significant interactions were discovered, suggesting that the prediction of post-test scores by process variables is invariant across different levels of pre-test score.

TABLE 14. Regression Results for Verbal Knowledge

Step: Variable(s)               β      R²     df      ΔR²    Δdf
1: Education                    .07    .01    2, 67    --     --
   Content Experience           .01
2: Verbal Pre-Test              .38    .14    3, 66    .14    1, 66
3: Quiz Type                   -.04    .14    4, 65    .00    1, 65
4: Attentional Focus            .05    .38    8, 61    .26    4, 61
   Time on Task                 .19
   Metacognition                .06
   Activity Level (percent)     .43

Note. Dependent variable is verbal knowledge. Coefficients in bold are significant at p < .05. Time on task is marginal (p < .10).

The results suggest support for H18 but not H12. The coefficient for activity level (percent) is significant and positive, indicating that trainees who completed more training activities gained more verbal knowledge than trainees who performed fewer activities did. The analysis also suggests some support for H14, that attentional focus will predict verbal knowledge gain, because the effect of time on task was marginal (p < .10). Time on task was marginal in the final equation but is significant if entered into the equation separately from activity. This indicates that while there is some overlap between time and activity, a learner who chooses to increase both may receive incremental gains from each.

Alternative regression analyses with the word and repeats activity operationalizations were conducted. The analysis with words provided very similar results; the pattern of relationships was essentially the same. However, words and percent were predicting similar variance in outcomes, because an exploratory analysis with both predictors in the equation results in the word measure becoming non-significant. In isolation, either measure is a significant predictor, but when both are placed in the equation, collinearity renders the word measure non-significant. Repeats was not a significant predictor of verbal knowledge gain either alone or in conjunction with percent activity.
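This kind of overlap is easy to expose by fitting the competing operationalizations alone and then together. The sketch below does so for the verbal knowledge outcome; it is illustrative only, and the data frame and column names are hypothetical stand-ins for the study variables.

import statsmodels.formula.api as smf

def compare_overlapping_predictors(df):
    """Fit each activity measure alone and then both together, printing
    the activity coefficients and p-values to show the shared variance."""
    controls = "education + content_experience + verbal_pretest + quiz_type"
    for rhs in ("pct_activities", "words_typed",
                "pct_activities + words_typed"):
        fit = smf.ols(f"verbal_posttest ~ {controls} + {rhs}", data=df).fit()
        n = rhs.count("+") + 1  # number of activity terms in this model
        print(rhs)
        print(fit.params.tail(n).round(2))
        print(fit.pvalues.tail(n).round(3))

# compare_overlapping_predictors(data)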
The next dependent variable examined was application knowledge. As with verbal knowledge, the pre-test measure was entered into the regression equation first. Before using the pre-test score as a covariate, a test of invariance was conducted by testing for significant interactions between the pre-test and each of the other predictor variables. No significant interactions were discovered, suggesting that the prediction of post-test scores by process variables is invariant across different levels of application pre-test score.

TABLE 15. Regression Results for Application Knowledge

Step: Variable(s)               β      R²     df      ΔR²    Δdf
1: Education                    .18    .03    2, 67    --     --
   Content Experience          -.00
2: Application Pre-Test         .59    .37    3, 66    .33    1, 66
3: Quiz Type                    .03    .37    4, 65    .00    1, 65
4: Attentional Focus            .02    .47    8, 61    .10    4, 61
   Time on Task                 .14
   Metacognition               -.17
   Activity Level (percent)     .21

Note. Dependent variable is application knowledge. Coefficients in bold are significant at p < .05.

H15 and H19 suggest that attentional focus and activity level will each predict gain in application knowledge. Table 15 presents regression results that support H19 but not H15. While activity level is a significant predictor of knowledge gain, neither time on task nor self-reported attention is significant. An alternative analysis run with total activity did not change these conclusions.

Alternative regression analyses with the word and repeats activity operationalizations were conducted. The analysis with words provided very similar results. However, words and percent were predicting similar variance in outcomes, because an exploratory analysis with both predictors in the equation results in both measures becoming non-significant. In isolation, either measure is a significant predictor, but when both are placed in the equation, collinearity renders both non-significant. Repeats was not a significant predictor of application knowledge gain either alone or in conjunction with percent activity.

The post-test application and verbal knowledge scores exhibit a strong relationship (r = .57). Examining predictors of these constructs independently increases the number of non-independent statistical tests run and, as a result, may produce results that capitalize on chance. To examine this possibility, a composite measure of verbal and application knowledge was created by averaging standardized scores. The standardized scores were submitted to the same analysis that was presented in Tables 14 and 15. The results of the composite outcome analysis are shown in Table 16. The results confirm that, controlling for pre-test scores, percent activity level is a significant predictor of post-test knowledge and time on task is marginally significant.

TABLE 16. Regression Results for Knowledge Composite

Step: Variable(s)               β      R²     df      ΔR²    Δdf
1: Education                    .07    .02    2, 67    --     --
   Content Experience           .01
2: Verbal Pre-Test              .57    .33    3, 66    .33    1, 66
3: Quiz Type                   -.00    .33    4, 65    .00    1, 65
4: Attentional Focus            .03    .49    8, 61    .16    4, 61
   Time on Task                 .16
   Metacognition               -.05
   Activity Level (percent)     .36

Note. Dependent variable is the average of standardized verbal and application post-test scores. Coefficients in bold are significant at p < .05. Time on task is marginal (p = .10).
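The composite is simple to construct: each post-test is standardized and the two z-scores are averaged. A minimal sketch, again with hypothetical column names and expecting a pandas data frame:

def knowledge_composite(df):
    """Average the z-scores of the two post-tests into a single outcome."""
    def z(series):
        return (series - series.mean()) / series.std()
    return (z(df["verbal_posttest"]) + z(df["application_posttest"])) / 2

# data["knowledge_composite"] = knowledge_composite(data)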
H13, H16, and H17 posit that the strength of relationships will differ depending on the choice variable and the outcome of interest. Specifically, H13 suggests that metacognition will be a better predictor of verbal than of application knowledge. The R² values for metacognition when entered in the last step of regression equations predicting verbal and application knowledge are .00 and .02, respectively. Neither of these values is significant, so neither is considered a useful predictor. As a result, H13 is neither confirmed nor disconfirmed; the negligible effect sizes obtained make any comparison meaningless at this level of power.

H16 predicts that perceived focus, the self-report measure of attentional focus, will be a better predictor of verbal knowledge gain than time on task. When these measures are entered into the regression equation separately, the R² values for the self-report and time on task measures are .00 and .03, respectively. The time on task measure is marginally significant (p < .10), but neither is significant according to traditional standards. Thus, as with H13, H16 is neither confirmed nor disconfirmed.

H17 predicts that perceived focus would be a better predictor of application knowledge gain than time on task. The R² values for the self-report and time on task measures are not significant at .00 and .02, respectively. As with the results for verbal knowledge gain, the hypothesis is neither confirmed nor disconfirmed because neither measure is a reliable predictor of application knowledge.

Despite the lack of statistical significance, the pattern of the findings for H16 and H17 is important to note. The findings are opposite of the predictions in that time on task appears to be the better predictor. These findings suggest that, given increased power, it is more likely that time on task, rather than perceived focus, would be the more useful predictor.

Direct and Indirect Effects of Individual Differences on Training Outcomes

Individual differences were hypothesized to have some direct effects on training outcomes. H20 and H21 predict that trainees with high learning self-efficacy and high perceived utility will have higher application self-efficacy at the end of training. Table 17 presents the relevant regression results. The table indicates that both hypotheses are supported, as both learning self-efficacy and perceived utility are significant predictors of application self-efficacy. Learning and performance goals were also significant predictors of application self-efficacy, although no specific hypotheses were provided in this regard.

TABLE 17. Regression Results for Application Self-Efficacy Training Outcome

Step: Variable(s)               β      R²     df      ΔR²    Δdf
1: Education                   -.02    .04    2, 69    --     --
   Content Experience           .20
2: Quiz Type                   -.06    .04    3, 68    .00    1, 64
3: Learning Goal                .29    .46    9, 62    .42    6, 58
   Completion Goal              .15
   Performance Goal            -.24
   Learning Self-Efficacy       .22
   Technology Self-Efficacy     .07
   Content Utility              .41

Note. Dependent variable is application self-efficacy. Coefficients in bold are significant at p < .05.

The final hypotheses suggest that the influence of goals on application self-efficacy (H22) will be mediated by learning choices, and that all individual difference influences on gains in verbal (H23) and application (H24) knowledge will be mediated by learning choices. To demonstrate mediation, a relationship between the independent variables, in this case the individual differences, and the ultimate dependent variable, training outcomes, must be established.
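The tests that follow use this familiar causal-steps logic: establish the total effect, then add the proposed mediators and observe whether the predictor's coefficient shrinks. Below is a minimal sketch of that comparison with hypothetical column names; note that these are raw regression coefficients, whereas the text reports standardized betas.

import statsmodels.formula.api as smf

def mediation_steps(df, predictor, mediators, outcome, controls):
    """Compare the predictor's coefficient with and without the mediators."""
    base = f"{outcome} ~ {controls} + {predictor}"
    full = base + " + " + " + ".join(mediators)
    b_total = smf.ols(base, data=df).fit().params[predictor]
    b_direct = smf.ols(full, data=df).fit().params[predictor]
    print(f"{predictor}: {b_total:.2f} without mediators, "
          f"{b_direct:.2f} with mediators controlled")

# mediation_steps(data, "learning_goal",
#                 ["attentional_focus", "time_on_task",
#                  "metacognition", "pct_activities"],
#                 "application_se",
#                 "education + content_experience + quiz_type")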
Table 17 provides the preliminary regression results necessary to test H22, because it indicates that both learning and performance goals are significant predictors of application self-efficacy. When controlling for learning choices, the effects listed in the table are reduced. More specifically, the beta weight for learning goal is reduced from .29 to .21, becoming non-significant (p > .10). The beta weight for performance goal is reduced from -.24 to -.21 and is marginally significant (p < .10) in the expanded equation. Thus, learning choices at least partially mediate the relationship between goals and application self-efficacy. This is partial support for H22.

Implicit in this hypothesis is that attitudes will be fairly constant from pre- to post-training. In other words, the relationship between pre-training and post-training attitudes should not vary when learning choices are controlled. This idea is partially supported. The effect for learning self-efficacy is reduced from .22 to .13, becoming non-significant (p > .10), suggesting partial mediation. However, the effects for utility are unchanged: the beta weight is .41 in the original equation and .44 when learning choices are controlled.

Tables 18 and 19 present the regression of knowledge test scores on the set of individual differences to test H23 and H24. The tables indicate that none of the individual difference variables is a significant predictor of knowledge gain. However, perceived content utility is marginally significant in its prediction of verbal knowledge. Because utility does predict activity level and activity level does predict change in verbal knowledge, mediation of this effect should be examined. Controlling for the entire set of process measures, content utility drops from β = -.26, p < .10, to β = -.11, p > .10, suggesting at least partial mediation. To verify that this mediation is not simply an issue of lost power due to the number of variables in the equation, an analysis was run with only utility and activity level entered beyond the control variables. When entered first, utility was a significant predictor of verbal knowledge (β = -.30, p < .05). This relationship was reduced when activity level was entered first, but the beta weight was still marginally significant (β = -.20, p < .10). Thus, activity level appears to partially mediate the relationship between utility and verbal knowledge. These results provide partial support for H23 but no support for H24.

TABLE 18. Regression Results for Verbal Knowledge and Individual Differences

Step: Variable(s)               β      R²     df       ΔR²    Δdf
1: Education                    .06    .01    2, 69     --     --
   Content Experience           .05
2: Pre-Test                     .41    .17    3, 68     .16    1, 68
3: Quiz Type                   -.04    .17    4, 67     .00    1, 67
4: Learning Goal               -.08    .29    10, 61    .12    6, 61
   Completion Goal              .02
   Performance Goal             .06
   Learning Self-Efficacy       .08
   Technology Self-Efficacy    -.18
   Content Utility             -.26

Note. Dependent variable is verbal knowledge. Coefficients in bold are significant at p < .05. Content utility is marginally significant at p < .10.

TABLE 19. Regression Results for Application Knowledge and Individual Differences

Step: Variable(s)               β      R²     df       ΔR²    Δdf
1: Education                    .14    .02    2, 69     --     --
   Content Experience           .01
2: Pre-Test                     .58    .34    3, 68     .32    1, 68
3: Quiz Type                    .08    .35    4, 67     .01    1, 67
4: Learning Goal               -.08    .40    10, 61    .05    6, 61
   Completion Goal             -.07
   Performance Goal            -.09
   Learning Self-Efficacy       .15
   Technology Self-Efficacy    -.07
   Content Utility             -.15

Note. Dependent variable is application knowledge. Coefficients in bold are significant at p < .05.

A summary of the results obtained is contained in Table 20. The table indicates that many of the hypothesized relationships did not hold. Nonetheless, a number of significant findings emerged, particularly for relationships between activity level and knowledge gain and between individual differences and application self-efficacy. Some findings did not hold for all operationalizations of each construct, and these findings are noted as "partial" support in the table.
TABLE 20. Summary of Results

Hypothesis  Summary                                                     Support?

Individual Differences and Learning Choices
1    Learning Goal predicts Attentional Focus (AF)                      Partial
1a   Learning Goal trainees will have higher AF than                    Partial
     Completion Goal trainees
1b   Learning Goal trainees will have higher AF than                    No
     Performance Goal trainees
2    Learning Goal predicts Metacognition (MC)                          Yes
2a   Learning Goal trainees will have higher MC than                    No
     Completion Goal trainees
2b   Learning Goal trainees will have higher MC than                    No
     Performance Goal trainees
3    Learning Goal predicts Activity Level (AL)                         No
3a   Learning Goal trainees will have higher AL than                    No
     Completion Goal trainees
3b   Learning Goal trainees will have higher AL than                    No
     Performance Goal trainees
4    Technology Self-Efficacy predicts Activity Level                   No
5    Perceived Utility predicts Activity Level                          No
6    Learning Self-Efficacy predicts Activity Level                     Partial
7    Perceived Utility predicts Attentional Focus                       No
8    Learning Self-Efficacy predicts Attentional Focus                  No
9    Technology Self-Efficacy predicts Attentional Focus                Partial

Learning Choices and Training Outcomes
10   Activity Level predicts Application Self-Efficacy                  No
11   Attentional Focus predicts Application Self-Efficacy               No
12   Metacognition predicts Verbal Knowledge gain                       No
13   Metacognition better predictor of Verbal than Application          --
     Knowledge gain
14   Attentional Focus predicts Verbal Knowledge gain                   Partial
15   Attentional Focus predicts Application Knowledge gain              No
16   Perceived Attentional Focus better predictor of Verbal             --
     Knowledge gain than Time on Task
17   Perceived Attentional Focus better predictor of Application        --
     Knowledge gain than Time on Task
18   Activity Level predicts Verbal Knowledge gain                      Partial
19   Activity Level predicts Application Knowledge gain                 Partial

Individual Differences and Training Outcomes
20   Learning Self-Efficacy predicts Application Self-Efficacy          Yes
21   Content Utility predicts Application Self-Efficacy                 Yes
22   Goal effects on Application Self-Efficacy mediated by              Partial
     Learning Choices
23   Individual Difference effects on Verbal Knowledge gain             Partial
     mediated by Learning Choices
24   Individual Difference effects on Application Knowledge             No
     gain mediated by Learning Choices

Note. Partial indicates the hypothesis is supported for some operationalizations or statistical tests but not all.

DISCUSSION

This dissertation attempted to study the effectiveness of WBT by addressing unanswered research questions on the issue of learner control. In this regard, a theoretical model was developed to predict and understand learning outcomes in learner-controlled environments. A number of motivational individual difference constructs were suggested as important determinants of two critical choices that learners make during learner-controlled training: (1) Strategy choices and (2) Effort choices. An empirical study was reported that tested the direct and indirect effects of these individual differences on learning choices and outcomes. The results of this study provide information about the effectiveness of the individual differences as predictors of learning choices and outcomes, as well as implications for future research on learner control and for future web-based training design efforts.

Overall, the theory provides a number of valid predictions.
For example, the results support the importance of goals and attitudes in determining a number of strategic and effort learning choices, including metacognition, attentional focus, time on task, and activity level. More specifically, learning goals predicted metacognition and self-reported attentional focus; completion goals predicted time on task; and perceived content utility predicted activity level. All of these effects were moderate in size, with correlations ranging from .20 to .40. These individual difference constructs were found to have somewhat larger predictive relationships with application self-efficacy. Learning goals, performance goals, learning self-efficacy, and content utility were all significant predictors and, as a set, explained nearly half of the variance in application self-efficacy (ΔR² = .42). In terms of the relationships among learning choices and training outcomes, effort choices regarding level of activity were found to predict knowledge gain (β = .36 for the composite). Time on task was also a significant predictor (β = .16 for the composite), although it is only marginally significant when the level of activity is controlled.

Unfortunately, a number of other predictions of the theory were not confirmed. In general, individual differences were not found to predict knowledge gain. Strategic choices, as measured by metacognition, also did not influence knowledge gain. The best predictor of knowledge gain, activity level, was the process that was least well predicted by the individual difference measures. The size of these relationships, typically small with correlations less than .15, suggests that low power is not a sufficient explanation for these findings. In particular, the small effect size obtained in this study does not replicate the medium effect size obtained in previous research on metacognition and knowledge gain.

These results are discussed in more detail in the following paragraphs. First, the results for the structure and predictive validity of individual differences are discussed. Second, the results for the structure and predictive validity of learning choices are discussed. Third, implications for training design are discussed. Finally, a comment on limitations is provided. Future research directions are suggested in each section.

Malleable Individual Differences

Goals. The theory suggests that goals should indirectly influence learning and post-training attitudes through learning choices. Goals did predict choices regarding metacognition, time on task, and perceived attentional focus. Unfortunately, none of these measures had a significant influence on knowledge gain. The primary importance of goals, based on an examination of learning outcomes, was their prediction of application self-efficacy. Both learning and performance goals predicted application self-efficacy at the end of training. This effect was only partially mediated by learning choices. Overall, the effects of goals seem restricted to particular choices during training and to attitudinal outcomes.

The results of this study also provide information about the structure of state goal constructs. This study assessed three types of training goals: Learning, performance, and completion. While the first two have been researched in a number of studies, at least as traits, very little research has been conducted on completion goals. A question offered at the beginning of this study was whether completion goals are distinct from learning goals.
The results suggest that there is indeed considerable overlap in the two concepts; a factor analysis was unable to clearly distinguish them. However, the patterns of results for completion and learning goals were quite different. In particular, while learning goals predicted metacognition, completion goals predicted time on task. The reverse did not hold true. Thus, these goals do seem to represent distinct but related constructs: Learning goals predict strategy choices and completion goals predict time on task. This suggests that completion goals may be a valuable addition when studying task persistence, an activity traditionally connected with learning goals (Dweck, 1986, 1989; Kozlowski et al., 1995, 1996). Future research would benefit from examining the concept of completion goal in more detail. Recent research on learning and performance orientations (e.g., Button et al., 1995) suggests that considerable validation went into the development of current goal scales. Similar validation work is needed for completion goals. In particular, research should address whether trainees with completion goals are work avoidant, as currently studied in the educational literature, or whether they seek to gain the most learning benefit in the least time. Trainees who follow this latter goal may actually prove to be more strategic in their learning choices because they are seeking to maximize gain. The current measure of completion goal may have confounded these two different goals, and an attempt to clearly distinguish them would prove a valuable advance.

Attitudes. The theory suggests that attitudes would influence knowledge gain through effort learning choices. The results confirm that attitudes did not predict strategic choices, as expected. However, attitudes also did not predict most other learning choices. The strongest relationship found was opposite the direction hypothesized: Perceived utility and percent of activity were negatively correlated (r = -.24). Thus, trainees who perceived the training to be something they could use on the job completed fewer training activities. Training activities, in turn, predicted knowledge gain. Consequently, perceived utility contributed indirectly to reducing learning.

One possible explanation for these counterintuitive findings lies with the nature of WBT. Trainees in this study were told that the information presented would be available during training and on the web for later use. This "on-demand" feature of WBT is one of its key strengths. This strength, however, may influence the motivational dynamics of the learning environment. A number of trainees asked for information about how to continue to access the site after training, and many more trainees asked to save and print materials from the course to take back to work. It is possible that those trainees who thought the course content was going to be useful planned to use the site for performance support following training. These trainees would have expended greater effort to save or print out materials to take back to work. Moreover, these trainees may have sought to learn the task at a broader level of detail. That is, these trainees may have directed their effort toward learning the basic structure of the site and the basic information necessary to be able to use the site for performance support.
Trainees with these intentions would very likely skip over any activities that were not seen as central to understanding the basic concepts, a hypothesis that may explain the negative correlation between utility and percent activity. Unfortunately, the data to test this post hoc hypothesis are unavailable; no records exist on which trainees requested further information or asked for access to the web site. Clearly, future research should examine how the presence of possible performance support following training influences learning outcomes. It is possible that, given the increased use of technology for performance support, some traditional motivation measures may be negatively correlated with traditional measures of learning. Instead, trainees may focus on learning where to get information and how to use it. Adding trainees' ability to use the system to collect information to the outcomes evaluated would provide a way to test whether trainees improved with regard to that skill.

This discussion prompts another future research direction. The measures of goals used in this study were specific to the situation but not specific in terms of defining behavioral objectives for trainees. Without moving toward specific behavioral intentions (e.g., Ajzen, 1990), the goal measures used in future motivation studies could focus on a more detailed analysis of what trainees hope to learn. For example, trainees could be asked the extent to which they plan to learn each of the major concepts offered in the class. Trainees could also be asked the depth to which they intend to learn those concepts.

An alternative method for studying this issue would be to use a qualitative research approach whereby trainees are probed about the specific outcomes they would like to achieve from training. This type of qualitative analysis would move from an etic, or externally imposed, perspective on motivated choice to an emic perspective in which trainees' motivation is examined without such externally defined constraints. Instead, the researcher would assume motivation differs in substance across individuals and would seek to understand each individual's motivational structure. The current research assumes that trainees hold some level of each of the three goals without examining the possibility that other goals may be salient. The value of an emic approach would be greater clarity about the possible range of outcomes desired from training.

As noted above, the introduction of new technologies to training raises the possibility that past research findings on motivation in training may be contradicted. The traditionally "motivated" trainee according to current research standards may not perform well on standard evaluation measures because they plan to gain a different type of knowledge from training than the evaluation may have intended or even planned for.

Learning Choices

Cognitive Effort. The theory suggests that cognitive effort, as indicated by perceived attentional focus and time on task, would influence knowledge gain and application self-efficacy. Perceived attentional focus did not predict any outcomes, but time on task was a marginally significant predictor of verbal knowledge gain. When activity level was not entered into the equation, time on task was significant at conventional levels. Thus, it appears that there is some overlapping influence of time and activity level on verbal knowledge gain.
One conclusion that could be drawn from this analysis is that, in learner-controlled environments, facilitating activity and time on task may both provide incremental gains in knowledge. This is an important finding because, as noted in the literature review, the current perspective in the learner control literature is that time on task is a measure of efficiency rather than an important indicator of learning. These results suggest that time on task can influence learning and that it should be considered along with other learning choices.

Time on task and perceived focus were initially offered as indicators of a single underlying construct: cognitive effort. The results do not support this assertion. These measures predicted different outcomes and were predicted by different individual differences. This suggests that in learner-controlled environments these are not different indicators of the same construct, but different constructs in their own right. Upon further reflection, this seems readily apparent. Attentional focus is the amount of on- versus off-task cognition that occurs during training. Trainees who think about off-task topics during a fixed lecture or short activity period will be unlikely to learn. However, trainees who think about off-task topics during a learner-controlled activity have the option to return to the training following the episode of off-task thought. Thus, trainees can engage in a great deal of off-task cognition but make effective choices for learning by staying in the training environment longer. As a result, perceived attentional focus and time on task are relatively independent learning process constructs that should be considered in research.

Perceived attentional focus is perhaps the better indicator of cognitive effort because it addresses the extent to which trainees focused their mental effort toward the task. Time on task, however, may be better conceptualized as an indicator of behavioral effort. To the extent that time on task reflects sustained effort over time, it reflects task persistence rather than focus per se. The pattern of correlations supports this distinction: Time on task correlations were more similar to activity level correlations, an indicator of behavioral effort, than to perceived focus.

This finding contradicts the Fisher and Ford (1998) finding that attentional focus, as indicated by self-reported mental workload, is a better predictor of learning outcomes than time on task. The experiment used by Fisher and Ford involved a short training exposure time, as compared to the two days of training in the present study. This raises the possibility that their experiment involved less between-subject variability in time on task. To the extent that time on task can vary significantly across trainees, the choices that learners make in this regard will be an important issue for research and design. Future research could address how to influence time on task during longer training courses. Providing social cues about appropriate lengths of time, in conjunction with information about current levels of learning, may prove a useful means to increase time on task for those who need it. Future research should explore the potential for these design features to improve learning through time on task.

Behavioral Effort. The theory suggests that activity level will influence knowledge gain and application self-efficacy.
Two operationalizations of activity level, percent of activities completed and words typed, did predict gain on both knowledge tests. The other operationalization, repeats, was not an effective predictor. Despite the similar predictive validities of the words and percent measures, the three measures were quite different. Repeats had low correlations with all variables in the study. Words had low correlations with individual differences but high correlations with outcomes. This suggests that the words measure may be contaminated with other constructs that influence learning outcomes, such as communication skills or general mental ability. The percent measure was the best predictor of the three, and in fact the best measure in the study. Percent activity was the only learning process to connect individual differences and learning outcomes.

The word and repeats measures were developed to capture the quality of activity level. Unfortunately, because neither measure improved on the prediction of outcomes over percent activity, neither appears to fully capture quality. Other measures of quality, such as the completeness of an answer, should be developed to more thoroughly explore this issue. The existing data do not offer many alternatives for developing a more detailed quality measure, but future research could explore possibilities for rating the activity of learners. One interesting possibility would be to track attentional focus and metacognition while trainees complete two or three of the major activities in a training course. By connecting the cognitive effort and strategy measures directly to a behavioral episode, the quality of that learning experience might be more effectively captured. While such a detailed analysis is beyond the scope of this dissertation, a WBT study could be designed to accommodate such a fine-grained analysis of learning choices.

Another critical avenue for future research on activity is to find individual difference variables that predict learning choices. This research proposed that course-specific goals would be a primary determinant of learning activity. Unfortunately, activity was not predicted well. Future research could examine both more general constructs to predict training activity, such as job involvement (e.g., Noe, 1986), and more detailed constructs, such as interest in the content or interest in the particular activity. Because percent activity had the strongest impact on knowledge gain, future research should focus on finding those factors that determine activity.

Strategy. The metacognitive measure used in this study was predicted to influence knowledge gain. Metacognition was predicted by learning goals, as suggested by theory, but it did not serve as a good predictor of training outcomes. This finding is counter to that of Ford et al. (1998). Differences in the timing of administration or the nature of the sample may have played a role in these findings. As noted earlier, the measure used by Ford et al. (1998) was collected at the end of training. The timing of that measure makes it difficult to claim that metacognition caused changes in performance, because performance may have influenced metacognitive ratings. This study attempted to remedy that concern by administering the measure of metacognition during training. The reduction in validity obtained with this measurement strategy suggests that the relationship identified by Ford et al. (1998) may be in part spurious.
Before drawing this conclusion, however, a few other possibilities should be considered. It is possible that the validity findings were heavily influenced by the poor reliability of the metacognitive measure. However, the obtained results suggest that random error is not the sole explanation for these findings. An examination of the beta weights indicates that metacognition is negatively related to the development of application knowledge but positively related to the development of verbal knowledge. Thus, assuming no shift in sign would occur through an increase in sample size and power, metacognition may be negatively associated with gains in application knowledge. Conversely, metacognition would be positively associated with gains in verbal knowledge.

It is also possible that trainees in this study had a less accurate perception of their learning strategies than trainees in the Ford et al. (1998) study because of the timing of the survey administration. Trainees in this study may have switched strategies throughout training, and a report during training may have only covered strategies used for part of training. This possibility could be examined by collecting multiple measures of metacognitive activity throughout training and verifying the issue of stability and predictive validity. The multiple administration strategy was attempted here, but the data from the second administration suffered from many missing data points. While the sample size is relatively low, the existing data suggest that metacognition was relatively stable throughout training. This finding casts doubt on the alternative hypothesis that trainees in this study only reported learning strategies that they used in part of the training course. Thus, it is possible that the relationship between metacognitive activity and learning reported by Ford et al. (1998) is at least in part spurious. A study specifically designed to address this hypothesis should be completed.

Current research in educational psychology supports the influence of metacognition and other learning strategies on learning outcomes (e.g., Pintrich et al., 1991; Pintrich & DeGroot, 1990). What might explain the difference between that research and this research study? An examination of the educational literature indicates that the majority of learning strategy research is conducted on student learners, and the current measures of strategy are largely derived from this research. It is possible that adults use different learning strategies, particularly in workplace training courses. Again, an emic research approach to this issue would prove valuable. Having adult trainees verbally shadow their thoughts during training would provide insight into the thought processes and strategies used. Such research may demonstrate that the measurement of metacognition and other learning strategies in adult populations will be difficult with existing scales. Modified scales may be necessary for adult populations.

As a result of the inconsistencies between this study and previous research, future research should compare different conceptualizations and operationalizations of learner choices. This dissertation only provides one operationalization of strategy: metacognition. Future research should examine more specific learner strategies, such as rehearsal, organization, and elaboration (e.g., Fisher & Ford, 1998). Similarly, this study operationalized effort with activity level, perceived focus, and time on task.
In learning environments where sequencing and content differences emerge across trainees, research should explore how these choices influence learning outcomes. For example, some trainees may choose to skip supplemental "case" examples. What leads trainees to make such a choice? Does skipping this extra material influence knowledge gain? Examining such research questions will offer greater insight into the learning process and, more practically, help trainers discover factors that determine how and when to offer core versus supplemental or optional content.

Training Design: The Unmeasured Factor

An unmeasured variable in this study is the quality of the training design. Because this research focused on a single training program, there was no variance in design features to examine. Current research and practice were used extensively in making design decisions about this course. The course has been highly praised by the customer and others who have seen it; in fact, the course recently won a national award for multimedia training design. Thus, it is reasonable to suggest that the design of this course was high quality, in that it is easy for trainees to use and it actively engages the learner in activities that are appropriate for accomplishing espoused objectives.

The high quality of this training course may explain some of the findings. For example, technology self-efficacy was not found to be an important factor in determining learning outcomes. While it did predict perceived attentional focus, perceived focus was not a significant predictor of knowledge gain. The well-designed interface may have helped all trainees move through the course easily. Another factor in the importance of technology self-efficacy may have been the use of volunteers to take the training. All trainees who volunteered knew they would be taking the course on the computer. Despite obvious differences in computer familiarity witnessed during training, it is likely that the range of comfort with technology was reduced through the solicitation of volunteers. Consequently, these results should not be used to imply that computer skills or associated confidence are unimportant. Rather, in a sample of volunteers working with a well-designed course, the level of confidence did not have a significant impact on training outcomes.

The quality of the course may also explain the strength of the activity level findings. Training that is more poorly designed might have many activities that provide no learning. Such was clearly not the case here: The greater the percent of activities completed, the greater the learning. This suggests that the training activities were well designed and effectively placed throughout the course. Because training was constant for all trainees, these conjectures cannot be empirically tested in this study. Future research could examine design features as well as interactions between training design features and individual differences. The theory of learning choices could also be expanded to address possible interactions between individual differences and training features.

Limitations and Implications

There are a number of limitations to this study that influence the generalizations that can be drawn. Despite these limitations, implications for the design of web-based training can be drawn. The first limitation in this study relates to features of the measurement. The reliabilities of a number of the scales, particularly the metacognition and application knowledge scales, were below the .70 threshold often applied. Low reliabilities in this study may have resulted from the number of items used or from the number of scale points, both of which were reduced relative to earlier studies. Low reliabilities attenuate bivariate relationships and have the potential to either increase or decrease observed relationships in multiple regression. The latter issue does not appear to be a significant problem in this study, because none of the conclusions drawn run counter to findings in the correlation matrix. Consequently, it would seem that the biggest concern for this study is the underestimation of population parameters. Longer scales and more scale points, both factors that were influenced by the company supporting this research, would prove valuable for obtaining more accurate estimates of population parameters.
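The attenuation at issue follows Spearman's classic correction: an observed correlation is the true-score correlation shrunk by the square root of the product of the two reliabilities. The sketch below inverts that relationship to estimate a disattenuated correlation; the numbers in the example are illustrative only, not values from this study.

import math

def disattenuate(r_observed, rel_x, rel_y):
    """Estimate the true-score correlation from an observed correlation
    and each measure's reliability (e.g., coefficient alpha)."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Illustrative values only: an observed r of .15 between two scales with
# alphas of .60 and .80 implies a disattenuated estimate near .22.
# disattenuate(0.15, 0.60, 0.80)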
A similar design problem that was influenced by the company supporting this research was sample size. The sample of 80 was far larger than any controlled pilot that the sponsoring company had ever run before. However, while this sample size provides adequate power for testing bivariate relationships, the power to test mediating relationships is substantially lower. To address this issue, the dissertation focuses on both significant and marginally significant results and presents effect size estimates (i.e., correlations, betas) where appropriate.

The lack of actual ability and personality measures in this study may also be considered a limitation. While education level was controlled, no direct measure of g was collected. Neither issue is truly a limitation. The lack of an ability measure is not problematic because research suggests that g influences learning outcomes directly; there is no evidence suggesting that those with higher ability use more effort or strategy. In fact, the current research on metacognition and learning strategy provides convincing evidence that intelligence and strategy are independent (e.g., Garner, 1990). Similarly, Brown (1996) measured attentional focus and cognitive ability and found them to be negligibly related. Consequently, ability may explain variance in learning outcomes that is not fully captured in this study, but because the evidence indicates that covariation between ability and learning processes is negligible, it is unlikely that the relationships found between process and outcomes are spurious.

The personality issue is similar. While personality was not measured, it is unlikely that any relationships found between individual differences and process are spurious. This conclusion can be drawn because existing research indicates that personality operates primarily through goals and attitudes (e.g., Brett & VandeWalle, 1997; Colquitt & Simmering, 1997). Consequently, the malleable individual differences in this study are simply exogenous variables. Predicting these variables through combinations of dispositional and situational variables is left for future research.

A potentially more serious limitation of the study is the use of broad goals for predicting behavior in training. The rationale for using such goal measures is that they capture desired outcomes at the same level of specificity for which the process measures and outcomes are defined: the entire training course. The limitation in this approach is that trainees may not have goals, and may not be regulating their behavior, with a focus on the whole course. Instead, trainees may have specific intentions regarding different parts of the course.
To complicate the matter even further, these fine-grained goals may arise during training rather than before training. In other words, trainees may begin each module with a search for information that determines their level of interest, make a judgment in that regard, and set a goal regarding the type of learning that they plan to pursue. Trainees may decide that a particular section of the course is less important for their work and consequently set a goal to finish that section as quickly as possible. This may occur while the trainee maintains a course-level goal to learn as much as possible. There is considerable value in the analyses presented in this paper because goals, processes, and outcomes were matched in their level of specificity. The limitation, and the issue for future research, is whether trainees self-regulate at the level of specificity assumed by the measures used here. As noted earlier, an emic research approach would not assess one focus of motivation (i.e., what do you plan to do with regard to the whole course?) but instead look to discover which focus trainees hold (i.e., do you have plans for your interactions with this course? What kind of plans are they?). Such research could provide data that would generate hypotheses for a more detailed etic research approach. Given a clearer understanding of how trainees approach the training course, research could examine goals, activity, and attention at multiple focal action levels simultaneously (e.g., Kluger & DeNisi, 1996).

Despite these limitations, there are a number of strengths to this study. First, trainees in this study are adults seeking to learn a process that is valued by the company that employs them. Second, the training was conducted in a controlled environment, which allowed environmental influences on learning to be held constant. This allows for greater precision in testing the process theory offered in this manuscript. The control provided here does, however, limit the focus of this study to learning in general rather than to learning on the desktop. With one of the strengths of WBT being that trainees can take training "anytime, anywhere," future research should consider how workplace features influence trainees' learning efforts. The model offered in this manuscript provides a process model to use for this research; the necessary addition is a taxonomy of work environment features that will influence these learning choices.

The strengths of this study do provide suggestions for WBT design. The most important of these suggestions relates to the design and encouragement of activity. The most powerful finding in this dissertation was the influence of percent activity completion on learning outcomes. Practice activities should be built in throughout training so trainees have the opportunity to practice key skills. Furthermore, trainees should be encouraged to make use of these opportunities throughout the course. Providing trainees with encouragement to complete these activities, in the form of rewards or positive feedback, may prove useful.

Conclusion

A theory of learner choice was developed to explain the learning process in learner-controlled training. Learner control is of critical importance today because of the increasing popularity of web-based training, which places many decisions about the nature of the learning experience into the hands of the trainee. Transitioning training from the classroom to the web is one of the major trends defining the training and development arena today.
Unfortunately, this work is proceeding with little or no evidence about the effectiveness of WBT. The study presented here brings existing theory and research to bear on this issue. The theory created based on previous research, individual differences in learning choices, suggests that malleable individual differences, motivational differences in particular, are critical determinants of the choices that trainees make when they are allowed to control their learning experience. This and many previous studies demonstrate that not all trainees make choices that benefit their learning. More specifically, trainees who choose to skip over practice opportunities designed into the course gain less verbal and application knowledge than trainees who use these practice opportunities. Consequently, the importance of learning activity is emphasized in this study, and design implications of this point were highlighted. Training that is left for trainees to explore on their own must be provided in such a way as to encourage full coverage of the material presented. It is clear from these results that simply placing training on the web is unlikely to provide great learning benefits. Instead, active learning experiences must be designed, and trainees must be encouraged to complete those experiences.

This manuscript ended with future research directions regarding the study of individual differences and learning choices. These were noted in an attempt to stimulate further research on the issues of learner control and web-based training. The key point of this discussion is that, as training continues to move toward being more technology-mediated, research must do more than determine whether technology-mediated training is better or worse than instructor-led training. Instead, guided by theory about individuals' choices in these environments, research should investigate who can learn in these environments and how they do it. Research should also investigate how to design and deploy these environments so that maximum learning gain is attained. The technologies of computers and the web offer many potential benefits to companies, but these benefits will not materialize without explicit attention to when and how they should be used to enact changes in employees' knowledge.

APPENDICES

APPENDIX A

INFORMED CONSENT

IMPORTANT NOTICE REGARDING THE COURSE

Please read before continuing...

Your activity and responses during this course are recorded and saved in a database that was designed and is maintained by representatives of the XXXXXXX Company (xxx). All information contained in these databases is the property of xxx. All or part of this database may be reviewed by researchers external to xxx. This review is for research that will seek to improve this and other web-based courses in future administrations. At no time during this external research will your individual responses be identified or singled out. As a result, there are no risks associated with having your data reviewed by external researchers. Even so, your participation in this research is completely voluntary. If you have questions regarding this process, or you would like to have your responses removed from the database before it is reviewed for research purposes, please contact XXXXXXXXX. You can e-mail him right now by clicking on his address here (XXXXXXXX@xxx.com), or call him at xxx-xxx-xxxx. Alternatively, if you do not feel comfortable taking the course under these conditions, alternative arrangements can be made for you without penalty.
Simply notify the site coordinator (if you are at a central facility) or exit from the program now and call xxx-xxx-xxxx to set up an alternative training arrangement. If you would like to ask questions about web-based training at xxx, or about the research being conducted here, contact information is presented below. If you would like to print this page and save it, you can do so by hitting the "print" icon at the top of the page.

XXXXXX XXXXXXXXXXXXXX
Address
Phone
E-mail

Thank you for your attention. Your efforts in this course are greatly appreciated, as they are invaluable to improving all forms of training offered by xxx. We hope you enjoy the course and, of course, learn a lot from it!

APPENDIX B

SURVEY AND TEST ITEMS

DEMOGRAPHICS

The following questions are designed to help us understand a little about you. This information will be used to group your responses so we can understand how we can modify or gear the course for particular groups of employees.

- What is the highest level of education you have completed?
  - High School Diploma or equivalent
  - Technical/Vocational Degree
  - Associates Degree or 2 years of college education
  - Bachelors Degree
  - Masters or equivalent advanced professional degree
  - Ph.D. or equivalent

- I am familiar with the concepts and skills covered in this course.
  - Strongly Agree
  - Agree
  - Disagree
  - Strongly Disagree

[Unless otherwise noted, the remaining questions use a pull-down menu with the following options: Strongly agree, agree, disagree, strongly disagree.]

TECHNOLOGY SELF-EFFICACY
- Even though I may have some difficulty with the training technology, I know that I will be able to figure out how to use it correctly.
- I am concerned because I don't know how to use a web browser effectively.
- I am confident that I can learn using this training delivery technology.
- I am comfortable taking courses and receiving training via computer.

CONTENT UTILITY
- The content of this course will be useful for me back on the job.
- I will never use anything that I am learning here.
- If I do not learn this material, I may have difficulty performing my job well.
- The knowledge and skill taught in this course are valuable to me.

LEARNING SELF-EFFICACY
- Even though it may be difficult, I know that I am able to learn the G8D process.
- I am concerned that I will not be able to understand all components of G8D.
- I am confident that I can gain the skills necessary to perform a G8D.
- I can learn the material in this course.

LEARNING GOAL
- I plan on learning as much as I can from this course.
- I want this course to provide a learning challenge for me.
- It's important to me that I learn about the G8D process.
- I intend to gain new knowledge and skill as I work through this course.

COMPLETION GOAL
- My primary goal for this course is just to complete it.
- I can't wait until this course is over.
- I want this course to be as easy as possible.
- I intend to do as little work as possible to finish this course.

PERFORMANCE GOAL
- I plan on doing better than other trainees throughout this course.
- I want to impress others with my knowledge of this subject.
- It's important to me to avoid making mistakes while I work through this course.
- I intend to score better than other trainees on the quizzes, exercises, and tests.

PAIRED JUDGMENT TERMS FOR GOALS

For each pair of items below, select the most important outcome that you wish to obtain from taking this course:
Avoid thinking too hard 0 0 Avoid mistakes vs. Finish quickly 0 0 Learn a lot vs. Avoid mistakes 0 0 Finish quickly vs. Look knowledgeable o o Iearn—a—let—w.—Gain—new—skill————e 0 Avoid mistakes vs. Gain new Skill 0 Gain new skill vs. Look knowledgeable Finish quickly vs. Learn a lot Avoid thinking too hard vs. Avoid mistakes Look knowledgeable vs. Avoid thinking too hard Gain new skill vs. Finish quickly Look knowledgeable vs. Learn a lot 000000 000000 * crossed-out pairs are within-goal comparisons that were not used in developing goal priority constructs. ATTENTIONAL FOCUS I thought about how well or how poorly l was doing. I daydreamed while I was learning. I lost interest in learning the material for short periods of time. I thought about other things I have to do today. I let my mind wander while I was learning the materials. I concentrated on the training materials (R) 09009. 140 METACOGNITION 09...... While going through the course, I made up questions to help focus my attention. When I became confused about something, I went back to figure it out. Before I read through materials, I skimmed ahead to see how it was organized. I asked myself questions to see if I understood the material. I tried to think through the process and determine what I am supposed to learn from each module. I set goals for myself while I was working through the course. I tried to monitor closely whether I was understanding the material I was reading. I noticed where I made mistakes on questions and exercises and tried to focus on that material. APPICATION SELF-EFFICACY O O O 0 Even though I may have some difficulty using the problem-solving process, I know that I will be able to use it effectively. I am concerned that I will not know enough to be able to use the problem-solving process at work. I am confident that I can use Skills from problem-solving course back on the job. I am comfortable applying the problem-solving process to solve problems at work. KNOWLEDGE PRE/POST TEST 1. At step D0, the Beamen Aviation team should consider which of the following in the problem- solving process? (ORIGINAL) a. Determining appropriate Permanent Corrective Action. b. Whether to initiate a G8D and whether to protect the customer with an Emergency Response Action.* c. Immediately initiating action to determine the cause of the crashes. (1. Immediately initiating action to prevent recurrence of the crashes. Which of the following statements best describes the purpose of the Trend chart, the Pareto chart, and the Paynter chart? (ORIGINAL) a. To display, prioritize, and stairstep the symptoms. b. To display, prioritize, and determine the Root Cause. c. To display, prioritize, and establish Given and Want criteria. (1. To display, prioritize, and validate results“ What is the difference between verifying and validating an Emergency Response Action (ERA)? (QUIZ) a. Verification requires testing the effectiveness of the ERA and validation does not. b. Verification occurs before the corrective action is implemented, validation is ongoing evidence that the action is working.* c. Validation must’ be done before the ERA is implemented, in order to ensure that customers are protected from the identified symptom as intended. (1. Validation requires. that customers and affected parties be identified, verification does not. Which of the following statements best describes what the Beamen Aviation team would do at Step D1 of the 08D process? (ORIGINAL) a. Verify that the control system is capable of detecting the problem. b. 
b. Determine that the Interim Containment Action provides the best balance of Benefits and Risks.
c. Know who is affected by the problem and establish the team.*
d. Choose a Process Map that can be used with this problem.

5. Which of the following best describes an issue the team should consider at Step D1? (ORIGINAL)
a. Choosing a Prevent Action.
b. Determining whether an Emergency Response Action is necessary.
c. Identifying where the problem entered the system.
d. Establishing team operating procedures.*

6. Select which of the following teams best meets the guidelines for G8D team membership: (QUIZ)
a. Team composed of 3 electrical engineers solving a problem that appears to be an electrical shortage. Two of the engineers have 5 years of experience; the other has only 6 months.
b. Team composed of 2 electrical engineers, 2 mechanical engineers, and a member of the human resources staff. The problem appears to have to do with both staffing and machine design.*
c. Team composed of 6 machine operators and a manufacturing engineer. All operators have over 10 years of experience; the engineer has been on the job 8 months. The problem involves a particular machine that all operators have used in the past.
d. Team composed of 3 engineers, 4 operators, 2 managers, and 2 training specialists. These individuals have a good range of skill and experience with the problem involved.
e. Team composed of 3 managers who travel frequently but have a great deal of expertise in the problem area, 1 service representative, and 1 computer programmer. The problem appears to be a programming glitch that freezes the representatives' computers during certain operations.

7. Which of the following statements best describes what must occur at Step D2? (ORIGINAL)
a. The team must isolate and verify the root cause by testing each possible cause against the Problem Description.
b. The team must select the best Permanent Corrective Action to address the Escape Point.
c. The team must modify the necessary systems, including policies, practices, and procedures.
d. The team must develop a clear Problem Statement and Problem Description.*

8. As a team works through Step D2 of the G8D process, they use Repeated Why's and Is/Is Not analysis. Which of the following does the use of these techniques assist the team in doing? (ORIGINAL)
a. Developing the Problem Description*
b. Identifying system benefits and system risks
c. Analyzing control systems to identify the Escape Point
d. Developing Prevent Actions

9. A problem statement does which of the following? (QUIZ)
a. Determines who belongs on the G8D team.
b. Identifies the Root Cause.
c. Serves as a starting point for the Problem Description.*
d. Narrows the search for a simple, concise statement of the object and defect.

10. Which of the following best represents the components of a valid Problem Description? (ORIGINAL)
a. Givens, Wants, Risks, and Benefits
b. What, Where, When, and How Big*
c. Symptom, Cause, Problem, and Escape Point
d. Who, When, Why, and How

11. Which of the statements below best describes why the Beamen Aviation team would implement an Interim Containment Action? (ORIGINAL)
a. To determine the Root Cause of the HAWK crashes.
b. To secure the HAWK crash sites.
c. To avoid further crashes of the HAWK aircraft.*
d. To modify the aircraft design.

12. Completing an action plan as part of a G8D assists the team in identifying which of the following? (ORIGINAL)
a. What will be done, who will do it, and how it will be funded.
b. The champion, who will perform tasks, and completion dates.
c. What will be done, who will do it, and when it will be completed.*
d. What will be done, why it will be done, and how much it will cost.

13. Which of the following definitions best captures the VERIFICATION process at step D3? (QUIZ)
a. Following the management cycle by planning, doing, and studying the Interim Containment Action.
b. Ensuring that customers continue to be protected from the problem.
c. Analysis of benefits and risks associated with isolating customers from the problem.
d. Indicating before implementation that the Interim Containment Action prevents customers from experiencing the problem.*

14. In a Comparative Analysis, the team would need to perform which of the following? (ORIGINAL)
a. Identify differences, identify and date all changes.*
b. Identify measurables, find and date all changes.
c. Review the original Failure Mode Effects Analysis.
d. Identify Givens and Wants to make a final Balanced Choice.

15. At Step D4, how would you go about developing theories for differences and changes? (QUIZ)
a. Use subject matter experts to determine how a change impacts the system.
b. Use brainstorming techniques to generate ideas.*
c. Start with the most likely cause, and perform a trial run using critical thinking.
d. Use critical thinking to build statements about how changes created trouble.

16. The team has identified the Root Cause of the aircraft crashes. The Beamen Aviation team verified that the Root Cause could be eliminated by which of the following techniques? (ORIGINAL)
a. Identifying potential problem areas
b. Asking Is/Is Not questions
c. Utilizing the Change-How theory
d. Making the problem come and go*

17. Permanent Corrective Actions are chosen and verified to eliminate which of the following? (ORIGINAL)
a. Problem Statement and Description
b. Potential risks and off-standard costs
c. Symptom and possible cause
d. Root Cause and its Escape Point*

18. Which of the following techniques would the team use to systematically evaluate the Permanent Corrective Action based on Features, Benefits, and Risks at Step D5? (ORIGINAL)
a. Failure Mode and Effects Analysis
b. Process Improvement approach
c. Decision-making process*
d. Brainstorming techniques

19. A team has just finished making ratings of Givens and Wants at Step D5. Use the following table to answer the question: (QUIZ)

Choice   Given 1   Given 2   Want 1   Want 2
#1       NO        YES       10       7
#2       YES       YES       6        5
#3       YES       NO        2        9
#4       YES       YES       4        4

Which of the choices above should be considered in later steps of the process?
a. Choice #1 only
b. Choice #1 and Choice #3
c. Choice #2 and Choice #4*
d. Choices #1, #2, and #3

20. Which statement below best describes the reason for the team performing step D6? (ORIGINAL)
a. To implement the Permanent Corrective Action and verify the outcome.
b. To implement the Permanent Corrective Action and validate the outcome.*
c. To implement the Permanent Corrective Action and reward the team members.
d. To implement the Permanent Corrective Action and prevent recurrence.

21. Which of the following is one function of the Planning and Problem Prevention Worksheet at Step D6? (ORIGINAL)
a. To identify the Root Cause
b. To identify the Escape Point
c. To identify systematic prevent recommendations
d. To identify the Action Plan steps*

22. How would you use the Repeated Why's technique to find the root cause of the root cause at Step D7? (QUIZ)
a. Ask why the symptoms occurred; continue asking "why" for every cause and effect identified.
b. Start with the Problem Statement and ask how and where this problem entered our process.
c. Ask why the symptoms occurred until you get to the Root Cause.
d. Start with the Problem Statement; ask why the problem happened until you begin to answer questions about why the Root Cause was present.*

23. At Step D7 of the G8D process, the Beamen Aviation team is concerned with preventing recurrence of the problem with the HAWK aircraft. In addition to taking actions to prevent the present problem and similar problems from recurring, which of the following would the team do? (ORIGINAL)
a. Assure their process is in control
b. Recommend systematic improvements, if necessary*
c. Talk to the customer to verify the Permanent Corrective Action's effectiveness
d. Eliminate the Interim Containment Action

24. Which of the following must be done to complete step D8? (ORIGINAL)
a. Recognize people outside the team who have made significant contributions.*
b. Implement Prevent Recurrence actions.
c. Implement and validate the Permanent Corrective Action.
d. Detail the problem in quantifiable terms.

25. Which of the following is a common question that the team must address throughout the G8D process? (ORIGINAL)
a. Do we have the right team composition to proceed to the next step?*
b. How well does the proposed G8D meet the application criteria?
c. Should any moral, social, or legal obligations related to this problem be considered?
d. What management policy, system, or procedure allowed this problem to occur or escape?

QUIZZES

0.1 The following is a scenario where a G8D team should not be assembled. Linda works in packaging, and she notices that all of the parts from a certain machine have the same defect, a long scratch. She does some preliminary investigation to determine that this scratch is not present on the same part from other machines. All parts from this particular machine go to the single largest customer of their plant, so she is concerned. She talks to a few machine operators, who are unsure about the reason for the scratch. So, she takes her concern to the production engineer who oversees this process. The production engineer looks up the part specifications and notes that, for this particular part, surface appearance is not considered critical. After looking over the part and the offending machine, the production engineer cannot find the cause of the scratches. Select from the list below the primary reason why a G8D team is inappropriate in this scenario:
a. There is no definition of the symptom.
b. The G8D customer who experienced the symptom has not been identified.
c. A performance gap does not exist.*
d. The cause is known.
e. The complexity of the symptom exceeds the ability of one person to resolve the problem.

0.2 What is the difference between verifying and validating an Emergency Response Action (ERA)?
a. Verification requires testing the effectiveness of the ERA and validation does not.
b. Verification occurs before the corrective action is implemented; validation is ongoing evidence that the action is working.*
c. Validation must be done before the ERA is implemented, in order to ensure that customers are protected from the identified symptom as intended.
d. Validation requires that customers and affected parties be identified; verification does not.

0.3 Which of the following is a key function of the G8D software?
a. Track and document the G8D process.*
b. Provide a reference tool of the G8D application criteria.
c. Save resources by helping to identify who and what is needed for the G8D process.
d. Replace the role of note-taker and record-keeper in the team.

0.4 What is the function of the assessing questions?
a. Provide structure for what needs to be done at every step of the G8D process.
b. Serve as a project management tool that increases reusability.
c. Provide assistance in measuring or quantifying symptoms that initiate the G8D process.
d. Serve as an advance organizer, interim check, and memory jogger.*

0.5 Which of the following is the most important issue to address at step D0?
a. How well does the G8D meet the application criteria?*
b. Are Emergency Response Actions (ERA) necessary?
c. Will the new G8D duplicate an existing G8D?
d. Do you have the right team composition to proceed to the next step?

1.1 Select which of the following teams best meets the guidelines for G8D team membership:
a. Team composed of 3 electrical engineers solving a problem that appears to be an electrical shortage. Two of the engineers have 5 years of experience; the other has only 6 months.
b. Team composed of 2 electrical engineers, 2 mechanical engineers, and a member of the human resources staff. The problem appears to have to do with both staffing and machine design.*
c. Team composed of 6 machine operators and a manufacturing engineer. All operators have over 10 years of experience; the engineer has been on the job 8 months. The problem involves a particular machine that all operators have used in the past.
d. Team composed of 3 engineers, 4 operators, 2 managers, and 2 training specialists. These individuals have a good range of skill and experience with the problem involved.
e. Team composed of 3 managers who travel frequently but have a great deal of expertise in the problem area, 1 service representative, and 1 computer programmer. The problem appears to be a programming glitch that freezes the representatives' computers during certain operations.

1.2 Which of the following best captures the distinction between a champion and a team leader?
a. The Leader allocates time to agenda items; the Champion determines the agenda.
b. The Leader ensures that all team members have an opportunity to contribute; the Champion determines who is on the team.
c. The Leader acts as the team's business manager; the Champion works with the team to set objectives and tasks.
d. The Leader asks for and summarizes team member opinions; the Champion removes organizational barriers to the G8D process.*

1.3 Which of the following is NOT a true statement about how to implement roles?
a. Roles can be changed during a meeting.
b. Roles cannot be shared.*
c. Roles are not people.
d. Facilitation is essential throughout discussion.

1.4 What are the three elements of team operating procedures?
a. Establish ground rules; conduct task observations; use maintenance behaviors.
b. Build a cohesive team; use speaking skills; make sure all team members communicate effectively.
c. Establish ground rules; observe task, maintenance, and processes; use communication/speaking skills.*
d. Build a cohesive team; use maintenance behaviors; implement team roles effectively.

2.1 Why is it so important to develop accurate and specific problem statements? Choose the best answer:
a. Because identifying the wrong cause can lead to the wrong corrective action.*
b. Because later steps in the problem-solving process demand the information contained in this step.
c. Because the Global 8D process must be followed as it is laid out, step by step.
d. Because once a conclusion is made, it is difficult to back up.

2.2 Identify this as an observation (a) or conclusion (b): Theresa notes that "all customers who bought product X from my department have never purchased another product from my department again."
a. Observation*
b. Conclusion

2.3 Identify this as an observation (a) or conclusion (b): John says, "my computer crashes every day when I try to type memos because I am using an old version of my word processor application."
a. Observation
b. Conclusion*

2.4 A problem statement does which of the following?
a. Determines who belongs on the G8D team.
b. Identifies the root cause.
c. Serves as a starting point for the problem description.*
d. Narrows the search for a simple, concise statement of the object and defect.

2.5 Which of the following are the steps involved in developing a problem statement?
a. Ask what is wrong with what, divide symptoms into multiple statements, and then use the Repeated Why's technique.*
b. Use the Repeated Why's technique, then answer the physics questions of who, what, where, and when.
c. Keep the team focused, narrow the search for the root cause, and define the problem.
d. Ask what is wrong with what, use the Repeated Why's technique to narrow the search for the root cause, and define the problem.

2.6 Which of the following is NOT a question that problem descriptions answer?
a. What the problem is and is not.
b. How the problem occurs and how it does not.*
c. Where the problem is and is not.
d. When the problem occurs and when it does not (but could) occur.

2.7 Which of the following best describes the process for developing a problem description?
a. Ask what is wrong with what, and then use the Repeated Why's technique.
b. Ask what, where, when, and how big.*
c. Ask IS/IS NOT for the object and the defect in question.
d. First, develop the problem statement, then narrow the search for the root cause using the Repeated Why's technique.

3.1 Which of the following effectively describes an ICA?
a. An action that isolates the effects of the problem from both internal and external customers until a permanent corrective action can be found.*
b. An action implemented that works against the root cause of the problem to ensure customers are not affected by that problem.
c. An action that moves beyond the emergency response action (ERA) by correcting problems created by that earlier action.
d. An action implemented to minimize costs incurred by the ERA and maximize the benefits of eliminating the symptoms, with proof that it will not introduce new problems.

3.2 Which of the following definitions best captures the VERIFICATION process?
a. Following the management cycle by planning, doing, and studying the ICA.
b. Ensuring that customers continue to be protected from the problem.
c. Analysis of benefits and risks associated with isolating customers from the problem.
d. Indicating before implementation that the ICA prevents customers from experiencing the problem.*

4.1 What is a root cause?
a. A point or location where the problem could be found.
b. An action implemented by the team that solves the problem quickly and effectively.
c. The single verified reason that a problem exists.*
d. A change in a manufacturing/service process that influences the observations made by the team.

4.2 What is the purpose of a comparative analysis?
a. To limit the search for the root cause.*
b. To ensure the customer continues to be protected from the problem.
c. To analyze the benefits and risks of different approaches to eliminating the root cause.
d. To determine what caused the problem.

4.3 How would you go about developing theories for differences and changes?
a. Use subject matter experts to determine how a change impacts the system.
b. Use brainstorming techniques to generate ideas.*
c. Start with the most likely cause, and perform a trial run using critical thinking.
d. Use critical thinking to build statements about how changes created trouble.

4.4 If your team has just completed a comparative analysis and developed theories of differences and changes, what would you do next?
a. Create statements of ways that changes or differences created the problem.
b. Consider the influence of factors including people, machines, materials, methods, measurements, and mother nature.
c. Verify that the root cause identified is indeed the root cause of the problem.
d. Trial run each theory.*

4.5 What is an escape point?
a. The earliest location in the process at which the problem could have been detected but was not.*
b. A control point within the system that is used to check compliance.
c. Where you change the root cause through passive verification.
d. Where your theory indicates quality slipped, making the product/process fall below customer expectations.

5.1 Which of the following most clearly explains the relationship between a PCA, ICA, and ERA?
a. The ERA, ICA, and PCA use the same basic process; they are just done at different times.
b. The ERA and ICA are similar in that they deal with symptoms; the ICA and PCA are similar in that they both must be verified and validated.*
c. The PCA and ICA must be done quickly to avoid damaging the company's relationship with its customers. The ERA is an optional step that need not be done.
d. The ERA and ICA mask problems and do not deal with the root cause. The PCA also masks problems, but it does so in a way that matches needs and wants but avoids risk.

5.2 Which of the following is the first step of the 7-step decision-making process?
a. List decision criteria
b. Decide on the PCA
c. Describe the end results*
d. Outline choices

5.3 If your team has just finished a risk analysis and found that the #1 choice has few risks, what is the next step you should take?
a. Make the balanced choice.*
b. Verify that the solution does in fact work.
c. Review givens to ensure that the choice meets each given.
d. Subtract risks from wants to calculate a total value score.

5.4 Use the following table to answer this question:

Choice   Given 1   Given 2   Want 1   Want 2
#1       NO        YES       10       7
#2       YES       YES       6        5
#3       YES       NO        2        9
#4       YES       YES       4        4

Which of the choices above should be considered in later steps of the process?
a. Choice #1 only
b. Choice #1 and Choice #3
c. Choice #2 and Choice #4*
d. Choices #1, #2, and #3

5.5 Use the following table to answer this question:

Choice   Given 1   Given 2   Want 1   Want 2
#1       NO        YES       10       7
#2       YES       YES       6        5
#3       YES       NO        2        9
#4       YES       YES       4        4

Which of the choices is the best choice going into the next stage of the process?
a. Choice #1
b. Choice #2*
c. Choice #3
d. Choice #4

5.6 Which of the following is an effective way to verify a PCA?
a. Verify with an SME that the proposed solution appears to effectively resolve the Root Cause.
b. Review the process used by the team to ensure that all assessing questions were answered.
c. Survey customers to ensure they are not experiencing the problem.
d. Conduct an off-line demonstration run.*

6.1 What are the first three steps of planning for PCA implementation?
a. State the objective, identify key steps, identify barriers.
b. Identify key steps, identify barriers and prevention actions, identify protection actions.
c. Identify key steps, identify barriers, identify prevention actions.
d. State the objective, identify standards/conditions, identify key steps.*

6.2 On what two characteristics do you rate key steps of the implementation phase?
a. Severity and probability.*
b. Probability and frequency.
c. Frequency and severity.
d. Importance and probability.

6.3 Why is it so important to identify barriers during the implementation process?
a. Because barriers provide information about cues and responsibilities.
b. Because barriers clarify the probability that a particular step will not be completed successfully.
c. Because barriers identify the problems that prevention and protection actions address.*
d. Because barriers determine how PCA validation can proceed quickly and efficiently.

6.4 How is validation in the implementation of a PCA different from validation of the ICA?
a. Validation of the PCA must address the impact on the customer.
b. Validation of the PCA should prove that the unwanted effect has been totally removed.*
c. Validation of the PCA must include multiple methods.
d. Validation of the PCA should be run by the champion.

7.1 Why is it important to focus on problem recurrence, even after implementing a PCA?
a. Because outdated policies and procedures may cause similar problems to occur later.*
b. Because the PCA may fail if barriers were identified incorrectly in D6.
c. Because problem recurrence means the G8D effort was wasted.
d. Because the PCA may not have handled the root cause of the problem.

7.2 How would you use the Repeated Why's technique to find the root cause of the root cause?
a. Ask why the symptoms occurred; continue asking "why" for every cause and effect identified.
b. Start with the problem statement and ask how and where this problem entered our process.
c. Ask why the symptoms occurred until you get to the root cause.
d. Start with the problem statement; ask why the problem happened until you begin to answer questions about why the root cause was present.*

7.3 How do you identify system improvements for the current/similar problems?
a. Use the Repeated Why's to determine the root cause of the root cause, and remove it.
b. Brainstorm on what can be done to prevent this problem from happening again.*
c. Have the champion use his or her authority to carry out systemic changes.
d. Rate the probability of recurrence for each cause identified from the Repeated Why's.

8.1 To be provided most effectively, recognition should be all of the following except:
a. Sincere
b. Timely
c. Tangible*
d. Focused
e. Equal in measure to the contribution of the team or individual

8.2 The most critical issue during closure is:
a. Express complaints and regrets to team members.
b. Celebrate the achievement.
c. Ensure that external recognition is provided by the Team Champion. This should involve his/her attendance at the last team meeting and the provision of recognition.
d. Retain key documents and record lessons learned.*

APPLICATION PRE/POST TEST

1. Use the information below to answer the next set of questions.

You get a call from one of your customers, a large after-market parts shop. They have had a number of problems with the last few shipments of parts from your plant. First, the exterior packaging was ripped open on two or three of the boxes, and a few of the parts were lost. Second, ever since the new packaging system was implemented, a few parts in each box get scratched from banging together. Third, one of the twelve boxes in the latest shipment had parts with seams that were only partially welded. They haven't had a failure reported yet, but they only sold a few of the parts before catching the problem. Your boss suggests that you make the call on whether to form a G8D team, and she charges you with the task of determining how to proceed. You do some preliminary investigation and find that a few new processes are being implemented in your plant. First, a new packaging process was developed and implemented a few weeks ago. It uses a lighter packing material that reduces shipping cost. Second, a few new machines were recently rotated into the assembly line, and the parts from these machines were recently added to inventory. No one seems to know anything about the complaints from the customer, and no one has a quick answer for why those complaints might have come about.

a. What would you do to develop a problem statement?
b. Based on the information given, write a problem statement.
c. After the problem statement is written, what information would you want to collect to develop the problem description?

2. These questions continue with the same case as the last question. Your team collects the information necessary to fill out an IS/IS NOT worksheet. You then begin work on a comparative analysis. Part of the comparative analysis is displayed below. Use this information to answer the next questions.

DIFFERENCES: Weld defect only occurs for a seam completed by a set of new machines.
CHANGES: New machinery uses a slightly different welding process.
DATES: New machinery in limited use over the last year.

DIFFERENCES: Only parts built by one shift have the weld defect.
CHANGES: New machinery used on that shift; two new employees on that shift.
DATES: Brought on-line for that shift last week; less than two months.

DIFFERENCES: Only occurred in the last two weeks, never seen previously.
CHANGES: Very high humidity.
DATES: Last two weeks.

The next step in the process is to develop theories of differences and changes.
a. If you were with your team, how would you do this step?
b. Start this step using the process identified in the course. Write 3 to 4 sentences/lines.

3. Use the following situation to answer the questions below. The team has identified the root cause and has completed the first steps of the decision-making process. Below is part of the decision-making worksheet your team completed. Use this sheet to answer the next questions.

CHOICE A: Train new employees on how and when to adjust valves.
GIVENS                                        Y/N
a. zero defects                               Yes
b. keep new machinery                         Yes
c. cost less than monthly product profit      Yes
WANTS                                         SCORE
a. implemented by end of month                40
b. insensitive to current changes in
   process and personnel                      12

CHOICE B: Automate valve control by building a sensor and control system.
GIVENS                                        Y/N
a. zero defects                               Yes
b. keep new machinery                         Yes
c. cost less than monthly product profit      No
WANTS                                         SCORE
a. implemented by end of month                8
b. insensitive to current changes in
   process and personnel                      60

a. Given this information, what choice would you focus on for the next step of the process? Explain your reason.
b. Start the next step in the process. Write 3 to 4 sentences/lines.
c. Outline how you would plan for implementation.
d. Once implemented, how would you validate the Permanent Corrective Action?
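The selection rule embodied in this worksheet, and in quiz items 5.4 and 5.5 above, can be stated algorithmically: discard any choice that fails one or more Givens, then carry the surviving choice with the highest total Want score forward to risk analysis. The following is a minimal sketch in Python; the encoding and function names are illustrative only and are not part of the G8D course software.

def meets_all_givens(choice):
    """A choice stays in contention only if every Given is satisfied (all Yes)."""
    return all(choice["givens"])

def balanced_choice(choices):
    """Screen on Givens, then pick the survivor with the highest total Want score."""
    survivors = {name: c for name, c in choices.items() if meets_all_givens(c)}
    return max(survivors, key=lambda name: sum(survivors[name]["wants"]))

# Illustrative encoding of the worksheet above (True = Yes, False = No).
worksheet = {
    "Choice A (train employees)": {"givens": [True, True, True], "wants": [40, 12]},
    "Choice B (automate valves)": {"givens": [True, True, False], "wants": [8, 60]},
}

print(balanced_choice(worksheet))  # Choice A (train employees)

Applied to the worksheet above, only Choice A survives the Givens screen, because Choice B fails the cost Given; this matches the keyed answer in Appendix C. Applied to the table in quiz items 5.4 and 5.5, Choices #2 and #4 survive and Choice #2 wins on total Want score, again matching the keyed answers.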
APPENDIX C
APPLICATION TEST KEY AND SAMPLE CODING SHEET

WEB-BASED TRAINING EVALUATION APPLICATION TEST CODING MANUAL

This manual provides keys and coding sheets for the 3-item application test used as part of a WBT training evaluation. The key for each question provides instruction on what grade to assign to particular responses. The coding sheets can be used so that the values assigned to each response are simply circled.

The basic logic behind the grading scheme is simple. The test is designed to provide open-ended questions that tap trainees' ability to recall and apply information used in the course. For each question, trainees are assigned a score of 0, 1, or 2. The grading scheme is as follows:

0 = Incorrect answer or left blank
1 = Correct recall of relevant information from the course, or application that is only partially correct
2 = Correct application of course information to the problem at hand

All answers must be coded with one of these 3 numbers. Judgment calls will often have to be made, and some of these calls are depicted in the samples. For those that are not depicted in the samples, follow the basic logic of the scheme employed above. I have tried, to the best of my ability, to provide guidelines on each key that will help you to make those calls.

KEY FOR QUESTION #1

QUESTION #1: Use the following situation to answer the questions below. [The first part is the same as the last question.] You get a call from one of your customers, a large after-market parts shop...

Examples of responses that should be graded as 0, 1, or 2 are provided for each part. Note that this question has 3 parts, separated by blank lines on the page.

Part 1: What would you do to develop a problem statement?
[A problem statement is developed by asking what is wrong with what object. This essentially provides a simple statement of the issue at hand.]
0 = No answer provided, or an answer other than those below
1 = Ask what is wrong with what, or determine what is wrong
2 = Ask what is wrong with the parts, or determine what is wrong with the parts

Part 2: Based only on the information given, draft a problem statement.
[A problem statement should address a single issue, so if two issues are mentioned a 2 should not be provided. A problem statement should also be specific, so a 2 should be provided only if the response mentions both the parts and something about the faulty welding or weld seam. Also, for this particular question, the issue trainees should focus on is the welding, not the shipping or packaging.]
0 = No answer provided, or an answer other than those below
1 = "Welding problem," "Quality is poor," "Parts have a problem," or "Parts shipped are bad" (too generic); or any statement that includes a specific statement below AND another statement (because a proper problem statement focuses on only one issue)
2 = "Parts have seams that are only partially welded" or "Parts have poor welds"

Part 3: After the problem statement is written, what information would you want to collect to develop the problem description?
[Information about what, where, when, and how big should be collected. Trainees should provide at least 2 questions/issues from the categories listed in each response in order to receive that score. This information is usually recorded on an IS/IS NOT Worksheet. If they mention the worksheet, it should be considered a generic response because they are supposed to know what information is contained on that worksheet.]
0 = No answer provided, or an answer other than those below
1 = What, where, when, and how big (or generic questions that cover at least two of these issues), or "fill out the IS/IS NOT worksheet"
2 = At least two specific questions for each of the following terms: what, where, when, and how big. For example: what is wrong with the parts (weld or seam), where is the problem occurring (one machine, many machines), when is the problem occurring (time of day, for how long, since when), and how big is the problem (how many parts affected, how serious is the defect)

KEY FOR QUESTION #2

QUESTION #2: The questions continue with the same case as the last question. Your team completes an IS/IS NOT worksheet. Then, you begin work on a comparative analysis...

Examples of responses that should be graded as 0, 1, or 2 are provided for each part. Note that this question has 2 parts, separated by blank lines on the page.

Part 1: If you were with your team, how would you do this next step? Explain.
[The correct answer here is that the group should brainstorm how changes in the process could have caused or led to the problem occurring. At this stage they should not be determining which theory is most likely, or otherwise eliminating options. Trainees who mention that should receive a 0.]
0 = No answer provided, an answer other than those below, or any attempt to rule out possible explanations by looking at start dates
1 = Brainstorm, or brainstorm explanations/theories about the cause of the defect, or look at differences and changes and develop theories (generic)
2 = Brainstorm how each change could have caused or led to the seams or parts being partially welded or defective

Part 2: Start the process of developing theories of differences and changes.
[This process is done by developing a theory for how a change could have caused or contributed to the problem. For a 2, trainees must use information from the change column and make a specific statement about how that change could result in bad seam welds. We are not grading the correctness of these theories, merely whether or not a plausible explanation derived from the change column is presented. Also, trainees should not be ruling out explanations at this point or asking further questions, just developing explanations in the form of possible causes. If they are ruling out explanations or asking questions, the highest score should be a 1 (and the 1 should be provided only if they are addressing items from the change column: machinery, employees, or humidity). Also, trainees should be creating theories, not restating facts. Restating data from the table should be given a 0.]
0 = None mentioned, an answer other than those below, or simply restating facts presented in the table (such as "problem occurred during high humidity")
1 = Asking how the change(s) could have caused the problem or weld defect (generic), or any statement indicating a conclusion rather than a theory. Also generic responses such as "new machinery is the cause," or questions such as "is the machinery new?" (which do not capture the full spirit of developing a theory)
2 = At least one specific reference to a theory derived from the "change" column, such as:
- New machinery causes/leads to/results in...
- New employees don't know, can't operate, need training...
- Humidity causes/leads to/triggers/results in...
KEY FOR QUESTION #3

QUESTION #3: Use the following situation to answer the questions below. The team has identified the root cause and has completed the first steps of the decision-making process...

Examples of responses that should be graded as 0, 1, or 2 are provided for each part. Note that this question has 3 parts, separated by blank lines on the page.

Part 1: Given this information, what choice would you focus on for the next step in the process, A or B?
[The correct answer is A because the trainee should select the option that meets all given criteria; that is, there should be all YES's in the Y/N column. The explanation should clearly indicate either that all givens are met or that cost parameters are acceptable. A simple note that costs are lower is not sufficient; the response must indicate that cost meets the givens or is acceptable.]
0 = No answer provided, B, or an answer other than those below
1 = A, with no explanation or an incorrect explanation
2 = A, because it meets all the givens or criteria, or because cost is acceptable

Part 2: Start the next step in the process. Write 3 to 4 sentences below:
[The correct answer is to analyze the risks of the selected choice in order to ensure that a balanced decision is made. Implementation issues are NOT the next step and should be scored as 0.]
0 = No answer, or an answer other than those below
1 = Analyze, calculate, consider, or identify risks (or similar wording)
2 = Analyze risks of the selected option to make the best (or balanced) choice (anything that provides more than just "analyze risks")

Part 3: Once implemented, how would you validate the Permanent Corrective Action?
[Validation requires field testing of this action. Trainees should note that the parts need to be measured, checked, or otherwise verified for their quality. A 2 should be provided if the response clearly notes to check on the weld seams.]
0 = No answer, or an answer other than those below
1 = Measure quality, or ask customers, or check the process
2 = Measure/check/test the weld seams, or follow up with/question customers about the weld seams

SAMPLE CODING SHEET QUESTION #1
Each sheet has an ID code in the upper left-hand corner. Find that number and match it to the number in the left-hand column on this sheet. This sheet is for the sample practice questions.

ID#         Part 1    Part 2    Part 3
Sample #1   0 1 2     0 1 2     0 1 2
Sample #2   0 1 2     0 1 2     0 1 2
Sample #3   0 1 2     0 1 2     0 1 2
Sample #4   0 1 2     0 1 2     0 1 2
Sample #5   0 1 2     0 1 2     0 1 2

SAMPLE CODING SHEET QUESTION #2
Each sheet has an ID code in the upper left-hand corner. Find that number and match it to the number in the left-hand column on this sheet. This sheet is for the sample practice questions.

ID#         Part 1    Part 2
Sample #1   0 1 2     0 1 2
Sample #2   0 1 2     0 1 2
Sample #3   0 1 2     0 1 2
Sample #4   0 1 2     0 1 2
Sample #5   0 1 2     0 1 2

SAMPLE CODING SHEET QUESTION #3
Each sheet has an ID code in the upper left-hand corner. Find that number and match it to the number in the left-hand column on this sheet. This sheet is for the sample practice questions.

ID#         Part 1    Part 2    Part 3
Sample #1   0 1 2     0 1 2     0 1 2
Sample #2   0 1 2     0 1 2     0 1 2
Sample #3   0 1 2     0 1 2     0 1 2
Sample #4   0 1 2     0 1 2     0 1 2
Sample #5   0 1 2     0 1 2     0 1 2
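The coding sheets yield one 0/1/2 code per part: three parts each for Questions #1 and #3, and two for Question #2. The following is a minimal tallying sketch, assuming (as the manual implies but does not state) that part codes are summed into a total application-test score; the data layout and names are hypothetical, not taken from the dissertation's data files.

# Hypothetical layout: one coded sheet per trainee, keyed by (question, part),
# each value being the circled rubric code (0, 1, or 2).
PARTS_PER_QUESTION = {1: 3, 2: 2, 3: 3}  # parts per question, per the coding sheets

def total_application_score(codes):
    """Sum the 0/1/2 part codes into a total score (0 to 16 for this test)."""
    for (question, part), code in codes.items():
        if question not in PARTS_PER_QUESTION or not 1 <= part <= PARTS_PER_QUESTION[question]:
            raise ValueError(f"unknown question/part: ({question}, {part})")
        if code not in (0, 1, 2):
            raise ValueError(f"rubric codes must be 0, 1, or 2, got {code}")
    return sum(codes.values())

coded_sheet = {(1, 1): 2, (1, 2): 1, (1, 3): 2,
               (2, 1): 1, (2, 2): 2,
               (3, 1): 2, (3, 2): 0, (3, 3): 1}
print(total_application_score(coded_sheet))  # 11 of a possible 16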
Acquisition of cognitive skill. Psychological Review, 89, 369-406. Anderson, J. R. & Fincham, J. M. (1994). Acquisition of procedural skills from examples. Joumflf Experimental Psychology: Learning, Memog, and Cognition. 20 1322-1340. Anderson, J. C. & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two step approach. Psychological Bulletin, 103, 411-423. Audia, G., Kristof-Brown, A., Brown, K. G., & Locke, E. A. (1996). Relationship of goals and microlevel work processes to performance on a multipath task. Journzi of Applied Psychology, 81. 483-497. Avner, A., Moore, C. & Smith, S. (1980). Active external control: A basis for superiority of CBT. Journal of Computer-Based Instruction, 5, 115-118. Ajzen, I. (1991). The theory of planned behavior. Organizationfiehwr and Human Decision Processes, 50. 179-21 1. Bagozzi, R. P. (1981). Attitudes, intentions, and behavior: A test of some key hypotheses. Journal of Personality & Social Psychology. Vol 41,607-627. Baldwin, T. T. & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology. 41. 63-103. Bates, R. A., Holton, E. F., & Syler, D. L. (1996). Principles of CBI design and the adult learner: The need for further research. Performance Improvement Quarterly, 9, 3-24. Bloom, B. S. (1956). Taxonomy of educhional objectives: The classification of educational goals. London: Longmans, Green, and Company. 160 Bouffard, T., Boisvert, J., Vezeau, C., & Larouche, C. (1995). The impact of goal orientation on self-regulation and performance among college students. British Joumal of Educational Psychology, 65, 317-329. Boyle, K. A. & Klimoski, R. J. (1995). The role of goal orientation in a training context. Paper presented at the Tenth Annual Meeting of the Society for Industrial and Organizational Psychology, Orlando, FL. Brett, J. F. & VandeWalle, D. (1997). Goal orienfitation and goal content as predictors of performance in a training proggam. Unpublished manuscript. Brooks, D. W. (1997). Web-teaching: A gpide to designing interactive teaching for the world wide web. New York: Plenum Press. Brown, K. G. (1996). Motivational and informational consequences of errors in early skill acquisition: The effects of individual differences and training strategy on perceptions of negative feedback. Unpublished Masters Thesis. Michigan State University, East Lansing, MI. Brown, K. G., Mullins, M., Weissbein, D., Toney, R., & Kozlowski, S. W. J. (1997, April). Mastery goals and strategic reflection: Preliminary evidence for learning interference. In S. W. J. Kozlowski (Chair) and M. Quinones and J. Martocchio (Discussants) Symposium “Metacognition and Training,” presented at the Twelfth Annual Meeting of the Society of Industrial and Organizational Psychology, St. Louis, MO. Button, S. B., Mathieu, J. E., & Zajac, D. M. (1996). The development and psychometric evaluation of measures of learning goal and performance goal orientation. Organizational Behavior and Human Decision Processes, 67, 26-48. Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 71-98). San Francisco, CA: Jossey-Bass. Campbell, J. P. (1989). An agenda for theory and research. In I.L.Goldstein (Ed.), Training and development in organizations (pp. 469-486). San Francisco, CA: Jossey-Bass. Carrier, C. A., Davidson, G. V., Higson, V., and Williams, M. (1984). 
Selection of options by field independent and field dependent children in a computer- based concept lesson. Journal of Computer-Bagd Instruction, 1 1, 49-54. Carrier, C. A. & Williams, M. D. (1988). A test of one learner-control strategy with students of differing levels of task persistance. American Educational Research Journal 25 285-306. 161 Carroll, J. M. (1990, Ed.). The Nurenberg Funnel: Designing minimalist instruction for practical computer skill. MIT Press: Cambridge, MA. Chung, J. & Reiguluth, C. M. (1992). Instructional prescriptions for learner control. Educational Technology, 32, 14-20. Cohen, J. (1988). Statistical power apalysis for the behaviogil sciences (2nd ed.). Hillsdale, NJ: Erlbaum. Cotton, E. G. (1997). The online classroom: Teaching with the intemet (2nd ed.). Bloomington, IN: EDInfo Press. Craiger, J. P. & Weiss, R. J. (1997). Traveling in cyberspace: Web-based instruction. The Industrial-Organizational Psychologist, 35, 11-18. DeShon, R. P. Brown, K .G., & Greenis, J. L. (1996). Does self-regulation require cognitive resources? Evaluation of resource allocation models of goal setting. Journal of Applied Psychology, 81, 595-608. Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41, 1040-1048. Earley, P. C., Connolly, T. & Ekegren, G. (1989). Goals, strategy development, and task performance: Some limits on the efficacy of goal setting. Journal of Applied Psychology. 7i 24-33. Ellermann, H. H. & Free, E. L. (1990). A subject-controlled environment for paired associate learning. Journal of Computer-Based Instruction. 17. 97-102. Elliott, E. S., & Dweck, C. S. (1988). Goals: An approach to motivation and achievement. Journal of Personalitfignd Social Psychology. 54, 5-12. Etapelto, A. (1993). Metacognition and expertise of computer program comprehension. Scandinavian Journal of Educfiatyional Research. 37, 243-254. Farr, J. L., Hofmann, D. A., & Ringenbach, K. L. (1993). Goal orientation and action control theory: Implications for industrial and organizational psychology. In C.L.Cooper & I.T.Robertson (Eds.), International Review of Industrial and Organizational Psychology (pp. 193-232). New York: Wiley. Filipczak, B. (1996, Feb.). Training on the intranets: The hope and the hype. Training, 24-32. Fisher, S. L. & Ford, J. K. (1998). Differential effects of learner effort and goal orientation on two learning outcomes. Personnel Psychology. 51, 397-420. 162 Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist. 34. 906-911. Ford, J. K., & Kraiger, K. (1995). The application of cognitive constructs and principles to the instructional systems model of training: Implications for needs assessment, design, and transfer. In C.L.Cooper & I.T.Robertson (Eds.), International Review of Industrial and Organizational Psychology (pp. 1-48). New York: Wiley. Ford, J. K., Quinones, M., Sego, D., & Sorra, J. (1992). Factors affecting the opportunity to perform trained tasks on the job. Personnel Psychology, 45, 51 1-527. Ford, J. K. Smith, E. M., Weissbein, D. A., Gully, S. M., & Salas, E. (1998). Relationships of goal orientation, metacognitive activity, and practice strategies with learning outcomes and transfer. Journal of Applied Psychology, 83, 218-233, Frese, M. & Altmann, A. (1989). The treatment of errors in learning and training. In L. Bainbridge & S. A. R. Quintanilla (Eds.), Developing skills with information technology (pp. 658-682). New York: Wiley. 
Gagné, R.M., Briggs, L.J., & Wager, W.W. (1992). Principles of instructional design (4th Ed.). New York: Harcourt Brace Jovanovich College Publishers. Garner, R. (1990). When children and adults do not use learning strategies: Toward a theory of settings. Review of Educational Research. 60, 517-529. Gay, G. (1986). Interaction of learner control and prior understanding in computer-assisted video instruction. Journal of Educational Psychology. 3. 225-227. Giardina, M., Laurier, M, & Meunier, C. (1997). A 3D model to operationalize interactivity in multimedia learning environments. Training Research Journal 2 162-179. Gist, M. E. (1987). Self-efficacy: Implications for organizational behavior and human resource management. Ac_ademy of Management Review. 12, 472-485. Gist, M. E., Rosen, B., & Schwoerer, C. (1988). The influence of training method and trainee age on the acquisition of computer skills. Personnel Psychology. 4_1, 255-265. Gist, M. E., Schwoerer, C., & Rosen, B. (1989). Effects of alternative training methods on self-efficacy and performance in computer software training. Journal of Applied Psychology, 74, 884-891. Goldstein, I. L. (1993). Training in organizations: Needs assessment, develpment, and evaluation. (3rd ed.). Monterey, CA: Brooks/Cole. 163 Goldstein, I. L. & Gilliam, P. (1990). Training system issues in the year 2000. American Psychologist. 45A 134-143. Hall, B. (1997). Web-based training cookbook. New York: Wiley. Hancock, T. E., Thurman, R. A., & Hubbard, D. C. (1995). An expanded control model for the use of instructional feedback. Contemporfiary Educational Psychology. 20, 410-425. Hannafin, M. J. (1984). Guidelines for using locus of instructional control in the design of computer-assisted instruction. Journal of Instructional DevelopmenLL 6-10. Johns, G. (1981). Difference score measures of organizational behavior variables: A critique. Organizational Behavior and Human Performance. 27, 443-463. J onassen, D. & Tessmer, D. (1996/1997). An outcomes-based taxonomy for instructional systems design, evaluation, and research. Training Research Journal, 2, 1 1-46. Jorde-Bloom, P. (1988). Self-efficacy expectations as a predictor of computer use: A look at early childhood administrators. Computers in the Schools. 5. 45-63. Kanfer, R. & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative aptitude-treatment interaction approach to skill acquisition [Monograph]. Journal of Applied PsychologyL74, 657-690. Kinzie, M. B. (1990). Requirements and benefits of effective interactive instruction: Learner control, self-regulation, and continuing motivation. Educational Technology Research and Development; 38, 1-21. Kinzie, M. B., Sullivan, H. J., & Berdel, R. L. (1988). Learner control and achievement in science computer-assisted instruction. Journal of Educational Psychology. 80. 299-303. Kirkpatrick, D. L. (1974). Evaluation of training. In R. L. Craig (Ed.), Training and Development Hapidbook (2nd ed., pp. 18-1:18-27). New York: McGraw Hill. Kluger, A. N. & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Pg/chological Bulletin. 119, 254-284. Knowles, M. (1984). The adult learner: A neglected species (3rd ed.). Houston, TX: Gulf Publishing. Kommers, P.A.M. (1996). Research on the use of hypermedia. In P.A.M. Kommers, S. Grabinger, & J .C. Dunlap (Eds.), Hypermedia learning environments: Instructional design fld integration (pp. 33-75). 
Mahwah, NJ: Lawrence Erlbaum. Kozlowski, S. W. J ., Gully, S. M., Smith, E. M., Brown, K. G., Mullins, M., & Williams, A. (1996, April). Enhancing the effect of practice: The effects of sequenced mastery goals and advance organizers. In K. Smith-Jentsch (Chair) and Paul Thayer (Discussant) Symposium “When, how, and why practice makes perfect?” presented at the Eleventh Annual Conference of the Society for Industrial and Organizational Psychology, San Diego, CA. Kozlowski, S. W. J ., Gully, S. M., Smith, E. M., Nason, E. R., & Brown, K. G. (1995, May). Sequenced mastery training and advance organizers: Effects on learning, self-efficacy, performance, and generalization. In R. J. Klimoski (Chair) and R. G. Lord (Discussant) symposium “Thinking and feeling while doing: Understanding the learner in the learning process” presented at the Tenth Annual Conference of the Society for Industrial and Organizational Psychology, Orlando, FL. Kraiger, K., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill- based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78, 311-328. Latham, G. P. & Locke, E. (1991). Self-regulation through goal setting. Organizational Behgiflnd Human Decision Processes, 50, 212-247. Lee, S. & Lee, Y. H. K. (1991). Effects of leamer-control versus program- control strategies on computer-aided learning of chemistry problems: For acquisition or review? Journal of Edmtional Psychology, 83, 491—498. Lewicki, P., Hill, T, & Bizot, E. (1988). Acquisition of procedural knowledge about a pattern of stimuli that cannot be articulated. Cogpitive Psychology, 20, 24-37. Lewicki, P., Hill, T., & Czyzewska, M. (1997). Hidden covariation detection: A fundamental and ubiquitous phenomenon. Journal of Experimengtl Psychology: Learning, Memory, & Cognition, 23, 221-228. Long, J. S. (1997). Regression models for—categorical and limited dependent variables. Thousand Oaks, CA: Sage. Main, J. & Sarenpa, D. (1983). Distributed training: Meeting the challenges of the ‘803. Journal of Instructional Development. 6. 15-19. Martocchio, J. J. (1992). Microcomputer usage as opportunity: The influence of context in employee training. Personnel Psychology. 45, 529-552. 165 Martocchio, J. J. (1994). Effects of conceptions of ability on anxiety, self- efficacy, and learning in training. Journal of Applied Psychology. 79, 819-825. Martocchio, J. J., & Dulebohn, J. (1994). Performance feedback effects in training: The role of perceived controllability. Personnel Psychology, 47, 357-373. Martocchio, J. J. & Judge, T. (1997). Relationship between conscientiousness and learning in employee training: Mediating influences of self-deception and self- efficacy. Joumal of Applied Psychology. 82. 764-773. Martocchio, J. J ., & Webster, J. (1992). Effects of feedback and cognitive playfulness on performance in microcomputer software training. Personnel Psychology. 45, 553-578. Mathieu, J. E. & Martineau, J. W. (1997). Individual and situational influences on training motivation. In J. K. Ford and Associates (Eds.), Improving training effectiveness in work organizations (pp. 193-222). Mahwah, NJ: Erlbaum. Mathieu, J. E., Martineau, J. W., & Tannenbaum, S. I. (1993). Individual and situational influences on the development of self-efficacy: Implications for training effectiveness. Personnel Psychology. 46, 125-147. Mathieu, J. E., Tannenbaum, S. I., & Salas, E. (1992). 
Influences of individual and situational characteristics on measures of training effectiveness. Academy of Management Journal, 35, 828-847. Milheim, W. D. & Martin, B. L. (1991). Theoretical bases for the use of learner control: Three different perspectives. Journal of Computer-Based Instruction. 18, 99-105. Meece, J. L. (1994). The role of motivation in self-regulated learning. In D. H. Schunk & B. J. Zimmerman (Eds.), Self-regu_lation of learningand performance: Issues and educational applications (pp. 25-44). New Jersey: Erlbaum. Meece, J ., Blumenfeld, P. C., & Hoyle, R. (1988). Students’ goal orientations and cognitive engagement in classroom activities. Journal of Educational Psychology. fl, 514-523. Montazemi, A. R. & Wang, F. (1995). An empirical investigation of CBI in support of mastery learning. Journal of Educational Computing Research, 13. 185- 205. 166 Morris, C. D., Bransford, J. D., & Franks, J. J. (1978). Level of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior. 16. 519-533. Morrison, G. R., Ross, S. M. & Baldwin, W. (1992). Learner control of context and instructional support in learning elementary school mathematics. Educational Technology Research and Development. 40, 5-13. Murphy, M. A. & Davidson, G. V. (1991). Computer-based adaptive instruction: Effects of learner control on concept learning. Journal of Computer- Based Instruction, 18. 51-56. Noe, R. A. (1986). Trainee’s attributes and attitudes: Neglected influences on training effectiveness. Academy of Management Review. 1 1, 736-749. Noe, R. A. & Ford, J. K. (1992). Emerging issues and new directions for training research. In K. Rowland & G. Ferris (Eds.), Research in Personnel M Human Resources Management (Vol. 10, pp. 345-384). Greenwich, CT: JAI Press. Noe, R. A. & Schmitt, N. (1986). The influence of trainee attitudes on training effectiveness: Test of a model. Personnel Psychology. 39. 497-523. Nolen, S. B. (1988). Reasons for studying: Motivational orientations and study strategies. Cognition and Instruction. 5. 169-287. Nolen, S. B. & Haladyna, T. M. (1990). Motivation and studying high school science. Journal of Research on Science Teaching, 27, 115-126. Owston, R. D. (1997, March). The world wide web: A technology to enhance teaching and learning? Educatioral Researcher, 27-32. Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem- solving skill in statistics: A cognitive-load approach. Journal of Educational Psychology, 8A 429-434. Park, 0. (1991). Hypermedia: Functional features and research issues. Educational Technology. 31. 24-31. Park, I. & Hannafin, M. J. (1993). Empirically based guidelines for the design of interactive multimedia. Educational Technology Research and Develgament. 41. 63-85. Phillips, J. M. & Gully, S. M. (1997). Role of goal orientation, ability, need for achievement, and locus of control in the self-efficacy and goal-setting process. Journal of Applied Psychology. 82, 792-802. 167 Pintrich, P. R. & de Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40. Pintrich, P. R. & Garcia, T. (1991). Student goal orientation and self- regulation in the college classroom. In P. Pintrich & M. Maehr (Eds.), Advances in motivation and aphievement (Vol. 7, pp. 85-114). Greenwich, CT: JAI Press. Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). 
A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ) (Technical Report No. 91-B-004). Ann Arbor, MI: University of Michigan.

Pollock, J. C., & Sullivan, H. J. (1990). Practice mode and learner control in computer-based instruction. Contemporary Educational Psychology, 15, 251-260.

Porter, L. R. (1997). Creating the virtual classroom: Distance learning with the Internet. New York: Wiley.

Pridemore, D. R., & Klein, J. D. (1991). Control of feedback in computer-assisted instruction. Educational Technology Research and Development, 39, 27-32.

Pridemore, D. R., & Klein, J. D. (1995). Control of practice and level of feedback in computer-based instruction. Contemporary Educational Psychology, 20, 444-450.

Quiñones, M. A. (1995). Pretraining context effects: Training assignment as feedback. Journal of Applied Psychology, 80, 226-238.

Quiñones, M. A. (1997). Contextual influences on training effectiveness. In M. A. Quiñones & A. Ehrenstein (Eds.), Training for a rapidly changing workplace: Applications of psychological research. Washington, DC: American Psychological Association.

Ree, M. J., Carretta, T. R., & Teachout, M. S. (1995). Role of ability and prior job knowledge in complex training performance. Journal of Applied Psychology, 80, 721-730.

Ree, M. J., & Earles, J. A. (1991). Predicting training success: Not much more than g. Personnel Psychology, 44, 321-332.

Reeves, T. C. (1993). Pseudoscience in computer-based instruction: The case of learner control research. Journal of Computer-Based Instruction, 20, 39-46.

Reiser, R. A., & Gagné, R. M. (1983). Selecting media for instruction. Englewood Cliffs, NJ: Educational Technology Publications.

Ridley, D. S., Schutz, P. A., Glanz, R. S., & Weinstein, C. E. (1992). Self-regulated learning: The interactive influence of metacognitive awareness and goal-setting. Journal of Experimental Education, 60, 293-306.

Rogers, C. R., & Freiberg, H. J. (1994). Freedom to learn (3rd ed.). Columbus, OH: Merrill/Macmillan.

Ross, S. M., & Morrison, G. R. (1989). In search of a happy medium in instructional technology research: Issues concerning external validity, media replications, and learner control. Educational Technology Research and Development, 37, 29-33.

Rubincam, I., & Olivier, W. P. (1985). An investigation of limited learner-control options in a CAI mathematics course. AEDS Journal, Summer, 211-226.

Salomon, G. (1981). Communication and education: Social and psychological interactions. Beverly Hills, CA: Sage.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19, 460-474.

Shaw, D. S. (1992). Computer-aided instruction for adult professionals: A research report. Journal of Computer-Based Instruction, 19, 54-57.

Shin, E. C., Schallert, D. L., & Savenye, W. C. (1994). Effects of learner control, advisement, and prior knowledge on young students' learning in a hypertext environment. Educational Technology Research and Development, 42, 33-46.

Small, R. V., & Grabowski, B. L. (1992). An exploratory study of information-seeking behaviors and learning with hypermedia information systems. Journal of Educational Multimedia and Hypermedia, 1, 445-464.

Steinberg, E. R. (1977). Review of student control in computer-assisted instruction. Journal of Computer-Based Instruction, 3, 84-90.

Steinberg, E. R. (1989). Cognition and learner control: A literature review, 1977-1988. Journal of Computer-Based Instruction, 16, 117-121.

Tannenbaum, S. I., Mathieu, J. E., Salas, E., & Cannon-Bowers, J. A.
(1991). Meeting trainees' expectations: The influence of training fulfillment on the development of commitment, self-efficacy, and motivation. Journal of Applied Psychology, 76, 759-769.

Tannenbaum, S. I., & Yukl, G. (1992). Training and development in work organizations. Annual Review of Psychology, 43, 399-441.

Tennyson, C. L. (1980). Instructional control strategies and content structures as design variables in concept acquisition using computer-based instruction. Journal of Educational Psychology, 72, 225-232.

Tennyson, C. L. (1981). Use of adaptive information for advisement in learning concepts and rules using computer-assisted instruction. American Educational Research Journal, 18, 425-438.

Tennyson, C. L., Tennyson, R. D., & Rothen, W. (1980). Content structure and instructional control strategies as design variables in concept acquisition. Journal of Educational Psychology, 72, 499-505.

Tobias, S. (1987). Mandatory text review and interaction with student characteristics. Journal of Educational Psychology, 79, 154-161.

Vroom, V. (1964). Work and motivation. New York: Wiley.

Warr, P., & Bunce, D. (1995). Trainee characteristics and the outcomes of open learning. Personnel Psychology, 48, 347-375.

Weiner, B. (1986). Attribution, emotion, and action. In R. M. Sorrentino & E. T. Higgins (Eds.), Handbook of motivation and cognition: Foundations of social behavior. New York: Guilford Press.

Williams, M. D. (1996). Learner-control and instructional technologies. In D. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 957-983). New York: Simon & Schuster Macmillan.

Wilson, B. G., & Jonassen, D. H. (1989). Hypertext and instructional design: Some preliminary guidelines. Performance Improvement Quarterly, 2, 34-49.

Winters, D., & Latham, G. P. (1996). The effect of learning versus outcome goals on a simple versus a complex task. Group and Organization Management, 21, 236-250.

Yang, C., & Moore, D. M. (1995). Designing hypermedia systems for instruction. Journal of Educational Technology Systems, 24, 3-30.