THE EFFECTS OF TRAINING DESIGN ON LEARNER-CONTROLLED PRACTICE DIFFICULTY: HOW LEARNER DECISIONS IMPACT ADAPTIVE TRANSFER

By

Jessica Marie Webb

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Psychology–Master of Arts

2016

ABSTRACT

THE EFFECTS OF TRAINING DESIGN ON LEARNER-CONTROLLED PRACTICE DIFFICULTY: HOW LEARNER DECISIONS IMPACT ADAPTIVE TRANSFER

By

Jessica Marie Webb

The present study sought to understand how two different training conditions, error management training and proceduralized, error-avoidant training, influenced learners' decisions about practice difficulty when learning a complex task. Goal orientation was examined as a moderator of this relationship. Other relationships were investigated to understand the effect of learner-controlled practice difficulty on self-regulatory processes as well as on performance in an adaptive transfer trial. Results showed that self-evaluation activity played a key role in mediating the relationship between learner-controlled practice difficulty and performance in the adaptive transfer trial. Self-evaluation was also found to mediate the relationship between practice difficulty and strategic knowledge. In addition, performance goal orientation moderated the relationship between training design and learner-controlled practice difficulty such that participants in the proceduralized, error-avoidant condition chose higher levels of practice difficulty on average when mastery goal orientation was low, which suggests a beneficial matching effect for performance-oriented individuals in proceduralized, error-avoidant training conditions. Additional findings are discussed with regard to self-regulatory processes and training outcomes.

This work is dedicated to Alexander Webb, my family, Rory Dixon, Dia Chatterjee, Morgan Showler, and Marissa Hatch.

ACKNOWLEDGMENTS

I would like to thank my advisor, J. Kevin Ford, for his enduring support throughout the thesis process.
He encouraged me to take a mastery orientation during each phase of the manuscript, which allowed me to grow and develop my ideas, as well as my skills, as an academic writer. I would also like to express my gratitude to my committee members, Steve W. J. Kozlowski and Christopher Nye, for providing excellent guidance during the proposal and defense meetings. The discussion in those meetings stoked my excitement about the research and ultimately made the manuscript a better product. I would also like to express my deepest gratitude to my support network over the past three years, though I cannot adequately do so in one paragraph. Thank you to my husband, Alexander Webb, my immediate family, and the members of my cohort for the countless sense-making conversations, their open and listening ears, and their constant emotional support.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
INTRODUCTION
TRAINING DESIGN AND LEARNING
GOAL ORIENTATION AND LEARNING
PRACTICE DIFFICULTY AND SELF-REGULATION
SELF-REGULATION AND LEARNING
METHOD
    Participants
    Task
    Design
    Procedure
        Phases
            Familiarization phase
            Training phase
            Practice phase
            Adaptive transfer
    Pilot studies
        First pilot session
            Level design
            Results from the first pilot session
            Conclusions from the first pilot session
            Actions taken in response to the first pilot session
        Second pilot session
            Results from the second pilot session
            Conclusions from the second pilot session
    Measures
        Control variables
            Cognitive ability
        Demographics
        Trait goal orientation
        Condition
        State goal orientation
        Practice difficulty
        Self-efficacy
        Practice performance
        Self-evaluation activity
        Metacognition
        Effort
        Knowledge
        Adaptive transfer performance
    Analytic strategy
RESULTS
    Trends of the repeated measures variables
    Results from the a priori model
    Results from the simple slopes analysis
    Post-hoc analyses
        Overall model fit
        Strategic knowledge
        Self-evaluation activity
DISCUSSION
    Limitations
    Practical implications
    Future directions
APPENDICES
    APPENDIX A Flow of experiment
    APPENDIX B Demographics
    APPENDIX C Trait goal orientation (Vandewalle, 1997)
    APPENDIX D Training topics for experiment
    APPENDIX E Proceduralized, error-avoidant training manipulations
    APPENDIX F Error management training manipulations
    APPENDIX G State mastery orientation (Horvath et al., 2001)
    APPENDIX H Self-regulation measures
    APPENDIX I Knowledge assessment
REFERENCES

LIST OF TABLES

Table 1 Trial design characteristics for the first pilot session
Table 2 Means and standard deviations for engagement difficulty in the first pilot session
Table 3 Means and standard deviations for execution difficulty in the first pilot session
Table 4 Means and standard deviations for prioritization difficulty in the first pilot session
Table 5 Means and standard deviations for overall difficulty in the first pilot session
Table 6 Means and standard deviations for comparisons of difficulty in the first pilot session
Table 7 Trial design characteristics for the second pilot session and final experiment
Table 8 Means, standard deviations, and bivariate correlations for all variables of interest
Table 9 Number of participants who selected a practice difficulty level for a corresponding practice trial
Table 10 Results from the simple slopes analysis for hypotheses 4 and 5

LIST OF FIGURES

Figure 1 Graphical depiction of hypothesis 4
Figure 2 Graphical depiction of hypothesis 5
Figure 3 A heuristic of the proposed hypotheses
Figure 4 Mean practice difficulty choices before the eight practice trials
Figure 5 Mean self-efficacy scores before the eight practice trials and the adaptive transfer task
Figure 6 Mean self-evaluation following the eight practice trials and the adaptive transfer task
Figure 7 Mean metacognition following the eight practice trials and the adaptive transfer task
Figure 8 Results of the a priori model
Figure 9 Graphical depiction of the simple slopes analysis for hypotheses 4 and 5
Figure 10 Results of the post-hoc adjusted a priori model
Figure 11 Results from the confirmatory factor analysis of the items in the knowledge test

INTRODUCTION

As e-learning and web-based training settings become more commonplace, methods to foster positive training outcomes will be of great importance. Understanding learner control (i.e., the extent to which the learner is able to make decisions about one or more variables in his or her learning environment; Kraiger & Jerden, 2007) is critical to improving learning and the transfer of learning onto the job in these types of computer-based settings. Learner-controlled training offers numerous benefits: organizations can deliver training consistently and lower the costs of classroom instructors, and trainees gain much more flexibility (i.e., in time and location) as well as control over the pace, the lesson sequence, and the truncation of lessons, practice, and/or feedback while learning. Of these, learner-controlled pace has the most positive effects on training outcomes.
A recent meta-analysis showed that learner-controlled pace during training is positively associated with transfer of training (Carolan, Hutchins, Wickens, & Cumming, 2014). This finding implies that when trainees are allowed to control their own pace, they are better able to align their ability with the complexity of the training task. Carolan et al. (2014) also found that truncation of practice and/or feedback had negative effects on transfer of training. Learner control can come in another form, however, that was not investigated in the meta-analysis: learner-controlled practice difficulty. Learner-controlled practice difficulty is conceptualized here as a learner's ability to choose the task complexity (as defined by Wood, 1986) of the conditions under which they apply their knowledge and practice their skills during training. In a recent study, learner-controlled practice difficulty was shown to be directly and positively related to task knowledge and post-training performance, as well as indirectly and positively related to transfer of training (Hughes et al., 2013); however, this line of research is new and leaves much to be discovered. The antecedents of learner-controlled practice difficulty and its effects on self-regulatory activity have only been studied by a few researchers, including Hughes and his colleagues. Within a learner-controlled training environment, learners take relevant information into account, including the training design and their own individual differences, in order to make the best decisions possible. One of the most important goals of a training program is for trainees to transfer newly learned knowledge and skills to their work environment.
In order to facilitate adaptive transfer, or the transfer of knowledge and skills to tasks and/or situations significantly different from the learning setting (Blume, Ford, Baldwin, & Huang, 2010), researchers have investigated the effects that errors have on learners' cognitive, behavioral, and affective processes. Error management training (EMT), or encouraging learners to make errors as they explore their learning environment, has been found to promote adaptive transfer more than proceduralized, error-avoidant training (Keith & Frese, 2008). It is likely that learners' decisions will differ based on the instructions they receive on how to treat and think about errors in the training environment. At the same time, learners' individual differences impact the choices they make in the learner-controlled environment. The goals the learner sets during training will affect how they proceed through the learner-controlled program. Goal orientation, or an individual's goal preference in an achievement situation, takes two forms: mastery or performance. Individuals with a mastery orientation tend to focus on learning and developing knowledge and skills within an achievement situation; individuals with a performance orientation tend to focus on managing others' judgments about their ability (Dweck, 1986). Kanfer (1990) notes that adopting a mastery orientation during training behooves the learner because they focus on developing competence in their new skills instead of focusing on outperforming their peers or failing to perform in front of those peers. Together, the treatment of errors and the goal orientation of the learner will have different effects on his or her self-regulatory activity during training. Self-regulatory activity encompasses the processes of self-monitoring, self-evaluation, and self-reaction (Kanfer & Ackerman, 1989).
Studies of the components of EMT (i.e., error encouragement and exploratory learning) have shown that learners in these conditions engage in more self-regulatory activity than learners trained using proceduralized and/or error-avoidant designs (Hughes et al., 2013; Keith & Frese, 2005). In addition, individuals with a mastery orientation tend to engage in more self-regulatory activity than those with a performance orientation (Bell & Kozlowski, 2008; Ford, Smith, Weissbein, Gully, & Salas, 1998). To examine how learners' decisions affect adaptive transfer, behavioral considerations also need to be taken into account in addition to the cognitive and affective processes of self-regulatory activity. Learners' performance during training and practice, as well as the effort they expend, will affect their self-regulatory activity (e.g., when learners perform well in a low-complexity task, they will feel more self-efficacious and therefore may expend less effort the next time they encounter the same task), but to what extent? To investigate this question, the present study seeks to understand how adaptive transfer is affected by training design, goal orientation, and choices of practice difficulty over time in a learner-controlled training environment. By encouraging error management (i.e., encouraging trainees to make errors as they explore the training environment), the present study seeks to show that an active and safe training environment can influence learners to engage in more difficult practice than learners who are trained using a proceduralized, error-avoidant design. The present study also examines how learners' goal orientation moderates the relationship between training design and practice difficulty. At its core, the present study will show how learners' choices of practice difficulty impact self-regulatory processes and behavior over time to affect adaptive transfer. Thus, the current study aims to contribute to the literature in the following ways.
First, the current study will assess the degree of safety that EMT provides trainees. According to Ericsson (2009), trainees need to practice their skills in an optimal training environment (e.g., one perceived as safe to try more difficult tasks and fail without penalty) that allows them to raise their level of skill over time. By relating EMT to choices of practice difficulty, the safety of the training design can be directly linked to learners' level of skill over time. Second, the current study will examine the effects of goal orientation on the relationship between training design and learner-controlled practice difficulty to understand how learners' preferences for goals strengthen or weaken the relationship between training design and choice. Finally, the current study will dynamically examine how self-regulation and behavior change over time to impact adaptive transfer. In the sections that follow, the literature on error-related training designs and goal orientation will be reviewed in relation to learner-controlled practice difficulty. First, EMT and proceduralized, error-avoidant training will be defined and explained. The review will compare EMT and proceduralized, error-avoidant training and explain the benefits and disadvantages of each. Second, a brief review of the conceptualization of goal orientation will be provided, as well as its relations to training outcomes. A brief history of learner-controlled practice difficulty will be examined, drawing from Wood (1986). Training design and goal orientation will be reviewed in relation to learner-controlled practice difficulty, and hypotheses will be proposed. Then, past research on self-regulatory processes (i.e., self-monitoring, self-evaluation, and self-reaction) and behavior (i.e., performance and effort) will be reviewed to relate learner-controlled practice difficulty to adaptive transfer.
TRAINING DESIGN AND LEARNING

EMT is a training intervention that involves active exploration as well as explicit encouragement for learners to make errors during training and to learn from those errors (Keith & Frese, 2008). The foundation of EMT is the idea that errors are ubiquitous, that human error is natural, and that errors cannot always be prevented (Frese & Keith, 2015). Exploratory learning and active exploration stand in contrast to approaches that require external guidance, rule following, and a mandatory, proceduralized sequence of behaviors during practice (Keith & Frese, 2008). Active exploration requires that individuals explore their environment independently. Encouragement to make errors is the other necessary component of EMT. The aim is for individuals to avoid frustration from errors and, instead, view them as an opportunity to learn about the task in their learning environment. Overall, the goal of EMT is for individuals to persevere in the face of errors and to continue exploring their learning environment. According to Frese et al. (1991), errors act as information, or system indicators, for the individual. For instance, an error may indicate that the individual does not know the skill well enough or does not have all the knowledge he or she needs. In addition, errors can provide insights into individuals' learning strategies and give individuals the chance to self-reflect. An error can signal individuals to monitor and reflect on the state of their knowledge, or to choose alternative strategies to reach a desired end state. By engaging in such self-regulatory behavior, individuals create more efficient mental models because they know that the error exists and that it results in an outcome that should be avoided. Overall, it is known that encouragement to avoid errors or to make errors initiates different self-regulatory processes (Gully, Payne, Koles, & Whiteman, 2002).
To investigate these processes further, many studies have examined the differences in learning outcomes between EMT and proceduralized, error-avoidant training. In one of the earliest studies of EMT, Frese and colleagues (1991) measured the performance of two small participant samples using EMT and error-avoidant training. They found that participants in the EMT condition had higher scores on non-speed performance tests. Like many later EMT studies, the task centered on learning computer-based skills. Specifically, the task employed by Frese et al. (1991) was learning a word processing software program. The participants were provided with introductory material and leaflets of the commands to serve as a starting point for exploratory learning. Error framing phrases, such as "I have made an error. Great!", "There is a way to leave the error situation," "Look at the screen," and "I watch what is on the screen and what is changing," were shown on posters in the training setting. Frese and colleagues' (1991) study also examined how participants' affective responses differed between conditions. When errors were forced in later trials, participants in the EMT condition showed significantly less frustration than participants in the error-avoidant training condition. In a similar study, Nordstrom, Wendland, and Williams (1998) found that participants in the EMT condition decreased in frustration over time, whereas frustration in the error-avoidant condition increased over time. Similarly, Dormann and Frese (1994) found superior performance using EMT over error-avoidant training in a study where participants learned how to perform numerous statistical commands in a statistical software package. As in Frese et al. (1991), participants in the EMT condition had no instruction on how to proceed, but received a leaflet with a short description of the commands needed to accomplish the tasks.
Similar error framing phrases were used, such as "Errors are a natural part of learning. They point out what you can still learn!", "There is always a way to leave an error situation!", and "Look at the screen so that you can see what is changing there!" Overall, participants who received EMT showed higher task performance, made fewer requests for assistance, and reported lower frustration in computer-based training tasks (Chillarege, Nordstrom, & Williams, 2003; Dormann & Frese, 1994; Frese et al., 1991; Heimbeck, Frese, Sonnentag, & Keith, 2003; Ivancic & Hesketh, 2000; Keith & Frese, 2005; Lazar & Norcio, 2003; Nordstrom et al., 1998; Wood, Kakebeeke, Debowski, & Frese, 2000). EMT has also been shown to positively affect transfer of training. For instance, a study that trained participants to use Microsoft Excel found that error training combined with error management instructions resulted in better near and far transfer than error training without error management instructions or error-avoidant training (Heimbeck et al., 2003). As in previous studies, the instructions provided to participants in the error training with error management instructions condition framed errors positively, encouraged learning from errors, and included the positive error phrases on posters from the Frese et al. (1991) study. Similarly, Keith and Frese (2005) found that EMT alone and EMT combined with a metacognitive module led to better transfer performance than error-avoidant training in a study where participants learned Microsoft PowerPoint. The EMT and metacognitive module combination differed from EMT alone in that participants were instructed to pose questions to themselves throughout the training (e.g., "What am I trying to achieve? What do I know about the program so far that can be useful now?").
The key finding of this study was that self-regulatory processes, specifically emotion control and metacognitive activity, mediated the effects of EMT on adaptive transfer performance. In another study of transfer of training, Bell and Kozlowski (2008) explored the two components of EMT, exploratory learning and error encouragement, separately. They found that exploratory learning (i.e., participants were instructed to explore the task and experiment to discover the best strategy to use during the task, but were not given task solutions, rules, or strategies) and error encouragement training (i.e., prior to each training block, participants were given a list of potential errors and positively framed errors) positively affected performance on an adaptive transfer task. In addition, Bell and Kozlowski (2008) found that participants in the exploratory learning condition reported higher levels of metacognitive activity than participants in the proceduralized learning condition. Despite the findings of positive learning outcomes associated with the exploratory learning aspect of EMT, there are two major limitations. First, the exploratory learning component of EMT is often not effective in positively influencing learning outcomes for novices. For instance, Debowski, Wood, and Bandura (2001) found that novices had higher levels of self-efficacy, satisfaction with performance during training, and strategy quality, as well as lower levels of wasted effort, on electronic search tasks under guided exploration (i.e., participants followed topics in a proceduralized manner and an experimenter modeled appropriate responses) than under self-guided, active exploration. They concluded that self-guided, active exploration by novices is less effective when feedback does not facilitate learning or internal motivation. In addition, research has found that active, guided exploration leads to better learning outcomes for novices than self-guided exploration (Kirschner, Sweller, & Clark, 2006).
Kirschner and colleagues (2006) argue that self-guided exploration training techniques overload novices' working memory and hinder their ability to learn. Furthermore, when the second component of EMT, error encouragement, is examined, individuals with low cognitive ability have been shown to perform worse than individuals with high cognitive ability when encouraged to make errors. In a computerized radar simulation task, Gully and colleagues (2002) provided participants with instructions prior to the task that included phrases such as "it is likely that you will find yourself making mistakes," "making errors is simply part of the learning process," "making mistakes is common," and "making mistakes is expected" (p. 147). Overall, they found that participants high in cognitive ability tended to report higher levels of declarative knowledge, task performance, and self-efficacy in error encouragement training than participants with low cognitive ability. These two weaknesses of the components of EMT highlight that its active, exploratory elements do not stimulate learning for participants with certain individual differences, due to the burden those elements can place on working memory. Keith and Frese (2008) conducted a meta-analysis to determine the overall effectiveness of EMT. They found that EMT leads to better post-training transfer, but not better within-training performance, than proceduralized or exploratory training methods. They also found that the effectiveness of EMT was larger for adaptive transfer than for analogical, near transfer performance. The underlying theory behind these results aligns with past research. For instance, Hesketh (1997) identified two major reasons why researchers find differences in learning outcomes between analogical contexts and adaptive, far transfer.
The first is that the principles, schemas, and mental models that individuals develop during training may be less useful in the immediate training context but more useful generally (i.e., on the job). Second, individuals tend to develop general metacognitive skills during training, which may be more useful when individuals encounter adaptive transfer tasks. In EMT, trainees are encouraged to practice their skills as they learn them during training trials. Practice is regarded as a learning and development opportunity that helps to build self-efficacy (Bell & Kozlowski, 2002; Debowski et al., 2001; Wood et al., 2000). As trainees engage in practice, their practice should strengthen their satisfaction with their progress (Wood et al., 2000). Numerous studies of training emphasize the need for active practice (e.g., Chillarege et al., 2003; Kozlowski & Bell, 2006; Kozlowski, Gully, et al., 2001; Martocchio, 1994; Stevens & Gist, 1997; Tabernero & Wood, 1999), but few capture the difficulty at which participants practice trained skills. Therefore, little is known about how practice difficulty relates to learning outcomes. In the context of error management, only one study has investigated the impact of error framing on practice difficulty. In a study of a learner-controlled training environment, Hughes et al. (2013) found that positive error framing led participants to choose more difficult practice conditions than negative error framing or no error framing. Trainees who receive EMT are explicitly instructed to make errors, and difficult practice will undoubtedly lead to more errors because the skill level required is higher than the trainee's current skill level. The Hughes et al. (2013) results support the notion that trainees who received positive error framing chose more difficult practice because they were not afraid or nervous about making errors in the training setting.
This finding is consistent with the work of Ericsson and his colleagues, who have found that practice with the aim of achieving superior performance and developing competence (i.e., deliberate practice) is characterized by an individual pursuing new challenges beyond his or her current skill level in a safe, optimal learning environment that provides immediate, high-quality feedback (Ericsson, 2009). Engaging in deliberate practice under these conditions, then, is likely to facilitate optimal learning outcomes. For instance, deliberate practice has been found to relate positively to performance on the job (Sonnentag & Kleine, 2000), as well as to increase the likelihood that individuals achieve proficient or expert performance (Charness, Tuffiash, Krampe, Reingold, & Vasyukova, 2005; Duvivier et al., 2011; Ericsson, 2004; Ericsson, 2006; Ericsson, 2009; Ericsson & Charness, 1994; Ericsson, Krampe, & Tesch-Römer, 1993; Ericsson & Lehmann, 1996; Ericsson, Nandagopal, & Roring, 2009; Krampe & Ericsson, 1996). According to Kraiger (2002), learning is advanced by lasting changes in individual knowledge, skills, and attitudes that directly result from practice and experience. It is evident, then, that practice is necessary to develop competence and advance through stages of skill acquisition, but the conditions under which it occurs need to be safe and optimal for the trainee. EMT contains these design components identified by Ericsson (2009). As seen in studies of EMT, trainees learn in a safe environment that provides feedback about the training task and error situations. Many studies have used posters displaying positive, encouraging phrases about errors placed in clear view of the trainees. Trainees know that it is safe to make errors and that doing so will not penalize them. In fact, errors are good and should be made in order to advance learning. Trainees are encouraged to explore the training and task environment without punishment.
As seen in Keith and Frese (2005), trainees felt comfortable thinking aloud and posing questions that were metacognitive in nature in EMT conditions. Thus, the exploratory nature of the environment combined with the positive error framing feedback provided in EMT should create the necessary conditions for trainees to engage in practice that pushes them past their current skill level. Given the safety that EMT provides trainees both affectively and cognitively, trainees should be more likely to engage in challenging practice, where they can develop their skills and knowledge, than under training designs without components of EMT.

Hypothesis 1: On average, individuals who participate in EMT will engage in more difficult practice sessions than individuals who participate in proceduralized, error-avoidant training.

Hypothesis 2: EMT and practice difficulty are positively related.

Hypothesis 3: Proceduralized, error-avoidant training and practice difficulty are negatively related.

GOAL ORIENTATION AND LEARNING

Inherently, learners set goals during a training program, whether for knowledge or skill development or for some standard of performance. Goal setting lies in the primary phase of self-regulation. Central to goal setting is the dispositional influence on the types of goals that individuals choose to set, especially in an achievement situation. An individual's goal preference in an achievement situation is described as his or her goal orientation (Dweck, 1986). Dweck distinguished between two types of goals that individuals prefer in achievement situations: learning (mastery) goals and performance goals. Individuals who set mastery goals tend to seek challenges and persist when faced with obstacles. Mastery-oriented goals often focus on the development and improvement of competence. Individuals who set performance goals tend to involve themselves in situations where they can elicit favorable judgments about their ability.
Performance goals focus on gaining favorable judgments, or avoiding negative judgments, about competence in achievement situations (e.g., Dweck, 1986; Elliot & Harackiewicz, 1994). The trait perspective treats goal orientation as an individual's disposition: an individual difference that is stable and long-lasting (e.g., Bell & Kozlowski, 2002; VandeWalle, 1997). Trait goal orientation can be characterized in two main ways: mastery-oriented and performance-oriented. Individuals with mastery goal orientations may be better able to adapt in achievement situations than those with performance goal orientations (Dweck, 1986). With a focus on development and competence, individuals who tend to set mastery goals tend to learn from failure better than those who tend to set performance goals. Performance-oriented individuals set achievement goals that will influence the judgments of those who perceive their competence. Those with performance-avoid orientations often fear failure in the presence of their peers; those with performance-prove orientations often want to demonstrate their ability to their peers and outperform others. Research finds that individuals with a mastery goal orientation tend to fare better on learning outcomes than individuals with a performance goal orientation. Overall, there is evidence that having a mastery goal orientation is positively linked to the use of important self-regulation and cognitive strategies, such as metacognition (Bell & Kozlowski, 2008; Ford et al., 1998), and to intrinsic motivation (Nordstrom et al., 1998). For example, Nordstrom and colleagues (1998) found that individuals who participated in EMT and were given situational cues to set learning goals reported higher levels of performance and intrinsic motivation and decreased frustration.
Despite the evidence for positive relationships between mastery orientation and cognitive and affective strategies, there is little evidence that trait mastery orientation directly influences skill-based practice strategies. For instance, in a study of training on a computer-based radar simulation program, Ford et al. (1998) found that mastery-oriented trainees reported higher levels of metacognition and self-efficacy than performance-oriented trainees. Ford and colleagues (1998) also investigated the practice strategies that participants used, including activity level (i.e., the frequency with which trained skills were practiced) and identical elements (i.e., the extent to which trainees practiced in conditions similar to the transfer task). Their results indicated that trainees' trait goal orientation and trainees' choices to follow an identical elements strategy for practice were not significantly related. Why were the researchers in this study unable to find a relationship between goal orientation and identical elements? One reason may be that trait goal orientation alone was not a strong enough influence to persuade trainees to engage in practice that aligned with the transfer task and develop competence in the task. Another reason may be that setting mastery goals does not in and of itself provide a safe enough environment in which to practice skills at a more challenging level. In an error management situation, however, individuals with a mastery goal orientation may feel more comfortable pushing themselves past their current skill level in order to learn and develop competence in those skills.
The answer to this question is unknown, as the studies that have investigated EMT and mastery goal orientation in tandem largely focus on cognitive and/or affective learning processes and adaptive transfer of training rather than on behavioral aspects of training (i.e., performance and amount of effort expended) or practice strategies (i.e., choosing certain levels of task complexity) that can facilitate the development of skills. For example, in their study of analogical and adaptive transfer, Bell and Kozlowski (2008) found that trait mastery orientation and error framing interacted to influence trainees' state mastery orientation. Error framing did not significantly influence the state mastery orientation of individuals high in trait mastery orientation; however, it did significantly impact the state mastery orientation of individuals low in trait mastery orientation, such that these individuals displayed greater state mastery orientation under error-encouragement conditions than under error-avoidance conditions. State mastery orientation was positively related to metacognitive activity, which led to adaptive transfer mediated first through self-evaluation and second through strategic knowledge (i.e., cognitive strategies of how and when to use knowledge). Based on Bell and Kozlowski (2008), it appears that mastery goal orientation is important in facilitating learning outcomes, but what is less clear are the behaviors that benefit trainees during training. In a similar vein, Heimbeck et al. (2003) found that individuals' trait mastery orientation did not significantly moderate the relationship between EMT and performance in a study that trained participants on the software program Microsoft Excel. Performance was measured by a 45-minute test immediately following training and one week following training.
Overall, participants in the EMT condition performed better than participants in the error-avoidant condition, but the practice strategies that led to participants' performance were not investigated. Despite knowing that EMT and mastery goal orientation facilitate successful analogical and adaptive transfer, there is value for researchers and organizations in knowing the kinds of behaviors and practice strategies that trainees engage in during training (when given choice) to develop and advance their skill. By identifying the factors that influence these strategies, the appropriate conditions needed for trainees can be identified, further researched, and compared to other methods to encourage more successful transfer on the job. Until now, few have explicitly commented on the safety that EMT provides trainees. By framing errors positively, trainees are less apt to react negatively when they encounter an error. The anxiety surrounding errors is dissipated by the elements of the training design (e.g., emotion control strategies). As mentioned previously, deliberate practice is characterized by pursuing challenges beyond the current level of ability in a safe and optimal learning environment (Ericsson, 2009). Therefore, in training environments that emphasize error management, the expectation would be that trainees are more likely to engage in more difficult practice over time. By creating an error management situation, the mastery-oriented learner may be more likely to engage in more difficult practice to develop trained skills without being fettered by stress or anxiety. There is evidence that individuals with a mastery goal orientation also tend to engage in more practice strategies than performance-oriented individuals (Smith, 2005). Therefore, the mastery-oriented learner should be encouraged to learn from mistakes in training and safely advance through the stages of skill acquisition when participating in EMT.
In this safe environment, trainees will be more likely to engage in self-regulatory behaviors and employ the strategies needed to advance their skill (i.e., choosing more difficult practice conditions to challenge them past their current skill level) than trainees in proceduralized, error-avoidant training. Figure 1 shows a graphical depiction of the fourth hypothesis.

Hypothesis 4: Mastery goal orientation will moderate the relationship between training design and practice difficulty such that the relationship between training design and practice difficulty will be more positive under conditions of EMT than under proceduralized, error-avoidant training.

Figure 1. Graphical depiction of Hypothesis 4 (practice difficulty plotted by training condition, EMT versus proceduralized error-avoidant, for high and low mastery orientation).

Conversely, the performance-oriented learner is likely to base his or her goals on the judgments of others. In an effort to show his or her ability, the performance-oriented learner is more likely than the mastery-oriented learner to select practice conditions with low difficulty in order to show that he or she can accomplish the task well. In EMT conditions where errors are encouraged, the performance-oriented learner will likely choose more difficult practice tasks than performance-oriented learners in proceduralized, error-avoidant conditions, where the need to be perfect and avoid failure will strengthen the learner's disposition and encourage lower difficulty tasks.

Hypothesis 5: Performance goal orientation will moderate the relationship between training design and practice difficulty such that the relationship between training design and practice difficulty will be more negative under conditions of proceduralized, error-avoidant training than under EMT.
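Hypotheses 4 and 5 are moderation predictions, which are conventionally probed with a training-condition by goal-orientation interaction term in a regression model. The sketch below illustrates that logic on simulated data; the variable names, coding, and effect sizes are hypothetical, and this is not the analysis actually reported in the present study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Hypothetical coding: emt = 1 for EMT, 0 for proceduralized error-avoidant.
emt = rng.integers(0, 2, n)
mastery = rng.normal(0.0, 1.0, n)  # standardized trait mastery orientation

# Simulated practice difficulty with a condition-by-orientation interaction
# (the 0.5 / 0.2 / 0.3 effect sizes are illustrative, not from the study).
difficulty = 0.5 * emt + 0.2 * mastery + 0.3 * emt * mastery + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({"emt": emt, "mastery": mastery, "difficulty": difficulty})

# 'emt * mastery' expands to both main effects plus the interaction term,
# which carries the moderation hypothesis.
model = smf.ols("difficulty ~ emt * mastery", data=df).fit()
print(model.params["emt:mastery"])
```

A significant, positive `emt:mastery` coefficient would correspond to the pattern predicted by Hypothesis 4; Hypothesis 5 would be probed the same way with performance orientation substituted for mastery orientation.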
Figure 2. Graphical depiction of Hypothesis 5 (practice difficulty plotted by training condition, EMT versus proceduralized error-avoidant, for high and low performance orientation).

PRACTICE DIFFICULTY AND SELF-REGULATION

Learner-controlled practice difficulty is defined following Wood's (1986) typology of task complexity. Wood (1986) defines three types of task complexity: component complexity, coordinative complexity, and dynamic complexity. Component complexity is the summation of the number of information cues to be processed during the performance of each act of each subtask in the task. Coordinative complexity involves the relationships between the information cues, acts, and products of the task, specifically focusing on the timing, frequency, intensity, and location of the required acts of the task. Dynamic complexity is defined by the need to adapt to changes in the task environment that affect the relationships between information cues, acts, and products of the task. Following past research by Hughes et al. (2013) and Bell and Kozlowski (2008), an increase from one level of practice difficulty to another in the present study involves an increase in each of the three types of complexity mentioned above. In training, self-regulatory strategies are commonly employed when trainees encounter a difficult task in order to meet its cognitive and behavioral demands. Self-regulatory strategies include self-monitoring (e.g., metacognition), self-evaluation (e.g., comparing the current state to the desired end state), and self-reaction (e.g., affective judgments of the situation) (Kanfer & Ackerman, 1989). Of these, metacognition plays a pivotal role in determining learning outcomes. Metacognition consists of (1) assessing and understanding one's knowledge and cognition and (2) selecting strategies to employ one's knowledge and cognition (Ericsson, 2009). Thus, metacognition consists of both monitoring and control.
The monitoring aspect of metacognition involves the assessment of one's knowledge and cognitive processes; the control aspect involves the regulation of those cognitive processes (Koriat, Ma'ayan, & Nussinson, 2006). In educational settings, researchers have found that individuals tend to engage in metacognitive activity when confronted with high task difficulty. For instance, Veenman and Beishuizen (2004) found that when students encountered a difficult reading comprehension task, they tended to engage in metacognitive activity to accomplish the task. In order to make sense of the unfamiliar, difficult task, these students determined which task strategies to employ to orient themselves and comprehend the readings. In learner-controlled environments, metacognition plays an important role in influencing learning outcomes. Kraiger and Jerden (2007) posit that the positive relationship between learner control and learning outcomes will be strengthened by trainees' use of self-regulatory strategies, including metacognition. In their study of a learner-controlled training environment, Schmidt and Ford (2003) found that participants in the metacognitive intervention condition showed greater post-training declarative knowledge, performance, and self-efficacy than participants in the control condition. In complex tasks, engaging in metacognitive activity allows for self-directed learning such that trainees monitor their own performance and assess when they require new learning to become competent in knowledge or skill (Ericsson, 2009). For instance, when an individual is first learning to park a car, he or she needs to attend to many stimuli in the environment and will often refer back to his or her current knowledge about the environment. When first learning to park the car, the individual might park over the white parking lines; the individual will then exit the car and notice the error.
Upon re-entering and parking the car again, the individual will revise the strategies he or she employed the first time: not turning the wheel as tightly, not pushing the gas pedal as far, and checking the side-view mirrors to ensure the car is within the parking lines. Here, the individual engaged in metacognition to monitor his or her cognitive processes and revise strategies where necessary. Thus, when trainees learn complex skills, they are likely to engage in some degree of metacognition. As trainees become more competent and encounter more difficult situations, it is likely that they will aim to better their skills and engage in more difficult practice to become competent in those skills. Referring back to the parking example, the individual will likely encounter more difficult situations in which to park the car (e.g., parking on an uphill slope) over time. He or she will need to engage in metacognition to understand where the gaps in learning exist and employ the relevant strategies to fill those gaps (e.g., turning the wheels to face the curb or using the parking brake to ensure the car does not roll down the hill). It follows, then, that when trainees engage in more difficult practice, they will need to monitor and regulate their cognitive processes to meet the demands of the task.

Hypothesis 6: On average, individuals who choose higher levels of practice difficulty will be more likely to engage in metacognition than individuals who choose lower levels of practice difficulty.

Self-evaluation, or comparing one's current state to a desired end state, is also likely to be affected by practicing in more difficult conditions.
In Hughes and colleagues' (2013) study, the results showed that practice difficulty and self-evaluation had a strong positive relationship: participants who practiced more difficult levels of a video game tended to self-evaluate (by reviewing the strategies and resources they used during the game and explaining why they used them) more often than participants who practiced less difficult levels. According to Kanfer and Ackerman (1989), self-regulatory behavior (i.e., self-monitoring, self-evaluation, and self-reaction) and overall cognitive effort are likely to increase as the difficulty of the task increases, which is in line with what Hughes et al. (2013) found.

Hypothesis 7: On average, individuals who choose higher levels of practice difficulty will be more likely to engage in self-evaluation than individuals who choose lower levels of practice difficulty.

Engaging in more difficult practice also affects learners' performance. In Hughes and colleagues' (2013) study, participants who practiced more difficult levels of a video game tended to perform more poorly. This finding is in line with the literature. According to Locke, Shaw, Saari, and Latham (1981), when tasks require unattained knowledge, more cognitive effort, and more ability, individuals will not perform well.

Hypothesis 8: On average, individuals who engage in more difficult practice will be less likely to perform well during practice than individuals who engage in less difficult practice.

The training literature has emphasized the need for practice when acquiring knowledge and skills in order to develop self-efficacy. Identical elements theory, which calls for maximizing the similarities in stimuli and responses between training and transfer contexts, emphasizes that practice elements should mimic the elements of the achievement situation (Thorndike & Woodworth, 1901).
Gagne (1962) affirms that individuals learn by engaging in practice that emulates the achievement situation, but to reinforce such learning individuals need to gain the right knowledge, determine the best procedure for practice, and then practice the subcomponents of the procedure in the proper sequence. Applying identical elements theory, Ford et al. (1998) found that practicing scenarios most similar to the transfer task was positively related to post-training self-efficacy. Similarly, Hughes et al. (2013) found that practice difficulty was positively related to post-training self-efficacy, controlling for performance during practice.

Hypothesis 9: On average, individuals who choose higher levels of practice difficulty will be more self-efficacious than individuals who choose lower levels of practice difficulty.

As tasks increase in complexity and difficulty, the cognitive and behavioral demands of the task increase as well. Kahneman (1973) notes that an easy task demands little effort whereas a difficult or complex task demands great effort. Therefore, when the complexity of the task increases, the learner must increase his or her cognitive and behavioral effort in order to meet the rising demands of the task.

Hypothesis 10: On average, individuals who choose higher levels of practice difficulty will expend more effort than individuals who choose lower levels of practice difficulty.

SELF-REGULATION AND LEARNING

As previously mentioned, self-regulation is characterized by three distinct activities: self-monitoring, self-evaluation, and self-reaction (Kanfer & Ackerman, 1989). Self-monitoring involves allocating attention to one's behavior and the consequences of that behavior in relation to a desired end state or goal. Self-evaluation involves comparing one's current level of performance to the desired end state. Finally, self-reactions are either affective judgments or self-efficacy expectations of goal-directed behavior made by the individual.
Metacognition, akin to self-monitoring, has been defined as "knowledge and cognition about cognitive phenomena" (Flavell, 1979, p. 1) and has since been characterized by the monitoring of cognition, including knowledge, experiences, goals, tasks, behaviors, strategies, and actions. In a review of how metacognition and self-regulation have been discussed in the educational literature, Dinsmore, Alexander, and Loughlin (2008) noted that metacognition may be thought of as nested within self-regulation: self-regulation involves monitoring and controlling cognition, behavior, and affect, whereas metacognition focuses on monitoring and controlling cognition alone. The importance of metacognition has been discussed in numerous outlets. Overall, metacognition has been found to be positively related to declarative and strategic knowledge (Ford et al., 1998; Schmidt & Ford, 2003) and performance (Berardi-Coletta, Buyer, Dominowski, & Rellinger, 1995; Ford et al., 1998; Schmidt & Ford, 2003). Additionally, metacognition has been found to be especially beneficial in learner-controlled training contexts (Schmidt & Ford, 2003). Trainees who engage in metacognitive activity attempt to select the appropriate strategies for the task at hand. In a study of self-regulation behavior in a learner-controlled training environment, Young (1996) found that participants who reported low self-regulatory strategy use fared worse on a performance task in learner-controlled settings than participants who reported high self-regulatory behavior. Inherently, metacognition should lead to higher strategic knowledge as trainees gain mastery through trial and error in choosing the correct cognitive strategies for the task.

Hypothesis 11: On average, individuals who engage in more metacognition will have higher levels of strategic knowledge than individuals who engage in less metacognition.
The second step of the self-regulatory process, self-evaluation, plays a major role in active learning contexts and has been linked to numerous learning outcomes. For instance, Hughes et al. (2013) found that self-evaluation, operationalized as trainees reflecting on the reasons why they used resources and implemented strategies during training, was positively related to practice difficulty but was not related to level of task knowledge. It is possible that the operationalization of self-evaluation in this study was not strong enough to capture the definitional elements of self-evaluation. For instance, Hughes and colleagues (2013) did not explicitly ask participants to provide a standard against which to compare current performance during self-evaluation, yet a critical component of self-evaluation is comparing the current level of performance to the desired end point. Contrasting results were found in Sitzmann, Bell, Kraiger, and Kanar's (2009) study, where trainees who were prompted to self-evaluate or self-monitor showed greater declarative and procedural knowledge than trainees who were not prompted to self-regulate. Similarly, Kozlowski and Bell (2006) found that trainees who engaged in more self-evaluation activity (operationalized as time spent reading feedback) tended to show higher levels of basic and strategic knowledge. Likewise, Bell and Kozlowski (2008) found that trainees who engaged in self-evaluation (operationalized as time spent reading feedback) were more likely to show higher levels of strategic knowledge (i.e., locating the perimeters of the task and prioritizing targets) in a computer-based radar simulation.

Hypothesis 12: On average, individuals who engage in more self-evaluation activity will have higher levels of strategic knowledge than individuals who engage in less self-evaluation activity.
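Hypothesis 12 sits inside a hypothesized chain of indirect effects (practice difficulty leading to self-evaluation, which in turn leads to strategic knowledge). One common way to test such an indirect effect is a percentile bootstrap of the product of path coefficients. The sketch below is only an illustration on simulated data: the variable names and path values are hypothetical, and this is not necessarily the estimation approach used in the present study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Simulated chain: difficulty -> self_eval -> strategic
# (the 0.4 and 0.5 path values are illustrative only).
difficulty = rng.normal(size=n)
self_eval = 0.4 * difficulty + rng.normal(size=n)
strategic = 0.5 * self_eval + rng.normal(size=n)

def indirect_effect(idx):
    """a*b indirect effect for the bootstrap resample given by idx."""
    x, m, y = difficulty[idx], self_eval[idx], strategic[idx]
    a = np.polyfit(x, m, 1)[0]                    # path a: x -> m
    X = np.column_stack([np.ones(len(x)), x, m])  # regress y on x and m
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]   # path b: m -> y given x
    return a * b

boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))
```

A 95% bootstrap interval that excludes zero is the usual evidence for a nonzero indirect effect.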
Individuals with high self-efficacy tend to be more committed to goals, and to discover and implement more effective task strategies to attain goals, than individuals with low self-efficacy (Locke & Latham, 2002). In line with this finding, Bell and Kozlowski (2008) found positive correlations between self-efficacy and strategic knowledge, as well as between self-efficacy and strategic performance (e.g., the understanding of the underlying complexities of the task and the more complex strategies required; Kozlowski, Toney, et al., 2001; Sitzmann et al., 2009). In addition, Bell and Kozlowski (2008) found a significant positive correlation between self-efficacy and training performance. Similarly, Ford et al. (1998) found that self-efficacy was positively related to transfer performance (e.g., performance on a more difficult scenario than the training scenarios).

Hypothesis 13: On average, individuals who are more self-efficacious will have higher strategic knowledge than individuals who are less self-efficacious.

Ericsson et al. (1993) argue that the amount a learner practices, in addition to the amount of effort that he or she expends, allows the learner to develop competence in knowledge and skill. Similarly, Kozlowski and Bell (2006) note that effort alone is not sufficient to develop strategic knowledge; focused self-evaluative activity and attention to the task are also required. Nevertheless, in order to develop strategic knowledge, learners will need to expend cognitive and behavioral effort within the task to know when and where to apply their knowledge in order to perform well.

Hypothesis 14: On average, individuals who expend more effort will have higher strategic knowledge than individuals who expend less effort.
As noted in Baldwin and Ford's (1988) model of transfer, both the training design (i.e., EMT and influencing state mastery goal orientation) and the work environment (i.e., the opportunity to perform trained skills at work) impact the learning and retention of knowledge and skills. The model implies that, during the training of knowledge and skills, trainees attend to elements of the training design as well as to the environment they will return to post-training. Blume and colleagues (2010) distinguish between the success of near and far transfer. According to Blume et al. (2010), transfer of trained skills is more likely to occur in near transfer tasks than in far transfer tasks. The distance of transfer can be conceptualized in a few ways. First, transfer can be thought of temporally: near transfer could occur within the training session or a few days later, whereas far transfer could occur many days, months, or years later. Second, the distance of transfer can be characterized by location: near transfer could occur in the same location as training, whereas far transfer could occur in a different location. Finally, the distance of transfer can be characterized by similarity to the training task: near transfer could occur in tasks that are very similar to the training task, whereas far transfer could occur in tasks that are less similar (e.g., the elements of the task and the skills needed differ from those in the training task). The present study views near and far transfer as a difference in the similarity of design features. In the far transfer task, trainees will be required to adapt their knowledge and skills. In the near transfer task, identical elements will dominate and trainees will need to replicate their knowledge and skills from the training task. Far (adaptive) transfer tasks inherently impose a higher cognitive load than near transfer tasks.
Often, adaptive transfer tasks require trainees to use and apply the knowledge they have learned in new and more challenging ways. Therefore, strategic knowledge is required for effective performance in tasks with high cognitive load so that trainees can apply the most effective strategies to complete the task (van Gog, Paas, & van Merriënboer, 2008). In past studies, strategic knowledge has been shown to be positively related to adaptive transfer (Bell & Kozlowski, 2008; Schmidt, 2008).

Hypothesis 15: On average, individuals with high levels of strategic knowledge will perform better on the adaptive transfer task than individuals with low levels of strategic knowledge.

A summary of the proposed hypotheses is provided in Figure 3.

Figure 3. A heuristic model of the proposed hypotheses, linking training design (EMT versus error-avoidant training), mastery and performance goal orientation, practice difficulty, self-regulatory processes (metacognition, self-evaluation, self-efficacy, effort, and performance), strategic knowledge, and adaptive transfer.

METHOD

Participants

A power analysis was conducted (using http://www.danielsoper.com/statcalc3/calc.aspx?id=89) to determine an a priori sample size for structural equation models with an effect size of .15 (Keith & Frese, 2008) and an alpha of .05. The minimum sample size suggested by the power analysis calculator was 204 participants. To confirm this estimate, the minimum sample size was compared to those of similar published studies and found to be comparable. A post-hoc power analysis was conducted to confirm the appropriate sample size for the a priori model, using a desired power of .80, a null RMSEA of .05, an alternative RMSEA of .08, an alpha of .05, and 54 degrees of freedom. The resulting minimum sample size was 202.
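The RMSEA-based power analysis just described follows the logic of MacCallum, Browne, and Sugawara (1996), which calculators of this kind implement: under each hypothesized RMSEA, the model chi-square follows a noncentral chi-square distribution, and power is the probability of exceeding the critical value implied by the null RMSEA. The sketch below is an independent reimplementation of that computation, not the calculator's actual code.

```python
from scipy.stats import ncx2

def rmsea_power(n, df, e0=0.05, e1=0.08, alpha=0.05):
    """Power to reject H0: RMSEA = e0 against H1: RMSEA = e1
    for a model with df degrees of freedom and sample size n."""
    ncp0 = (n - 1) * df * e0**2  # noncentrality under the null RMSEA
    ncp1 = (n - 1) * df * e1**2  # noncentrality under the alternative
    crit = ncx2.ppf(1 - alpha, df, ncp0)
    return 1 - ncx2.cdf(crit, df, ncp1)

def min_sample_size(df, target_power=0.80, **kwargs):
    """Smallest n whose power meets the target."""
    n = 10
    while rmsea_power(n, df, **kwargs) < target_power:
        n += 1
    return n

# df = 54, null RMSEA = .05, alternative RMSEA = .08, alpha = .05
print(min_sample_size(54))
```

With the parameters reported above, this computation lands near the minimum of 202 that the calculator produced.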
A total of 333 individuals participated in the present study. Participants were recruited from the human participant research pool at a large Midwestern university and received six psychology course credits for their participation. Nine participants were removed from the total sample because their data showed evidence of insufficient effort: each spent less than 66 seconds completing the knowledge test, which was more than two standard deviations below the mean completion time across participants. Another thirty participants were removed from the overall sample because they did not complete all eight of the practice trials, most likely due to insufficient attention to the task. In the same vein, 33 participants were removed from the analysis because they responded with "disagree" or "strongly disagree" to items designed to detect insufficient effort responding (IER). The two IER detection items were taken from Meade and Craig (2012), in whose study these items had factor loadings of .65 or greater on the diligence factor of participant engagement. Following Meade and Craig's (2012) recommendations, the items were placed after participants had answered approximately 50 items. The first IER detection item, "I carefully read every survey item," was given to participants following the last item on one of nine metacognition measures. This measure was administered after the feedback for the second practice trial, approximately one hour into the two-hour experiment. Five participants responded "strongly disagree" to this item; 24 responded "disagree." The second IER detection item, "I put forth my best effort in responding to this survey," was given to participants following the last item on one of nine self-efficacy measures.
This measure was administered after participants set their goal level prior to the final performance trial, approximately one hour and 45 minutes into the experiment. One participant responded "strongly disagree" to this item; four responded "disagree." Finally, the default settings in Mplus screened out 23 participants because the data contained too little information, specifically pertaining to cognitive ability, to estimate the model using maximum likelihood estimation; these 23 participants were not included in the analysis. Altogether, data were used from 238 participants, which was sufficient to detect a significant effect based on the a priori and post-hoc power analyses. Of these 238 participants, 36.6% were male (63.4% female), and the average age was 19.49 years (SD = 1.50). Participants were mostly White or Caucasian (70.2%), followed by Black or African-American (13.0%), Asian (7.1%), Latino (2.5%), or other (7.1%). Most of the sample were sophomores in college (44.5%), followed by freshmen (26.9%), juniors (17.6%), and seniors (10.5%). One participant did not provide information about their year in college. The majority of the sample majored in the social sciences (34.5%), natural sciences (20.2%), or business (10.5%). The remainder majored in communication arts and science (8.8%), education (7.1%), nursing (5.5%), engineering (2.9%), liberal arts (2.9%), and other pursuits (7.6%). The demographic characteristics of this smaller sample did not drastically differ from those of the original sample (N = 333). Of the 333 participants, 34.5% were male (65.5% female), and the average age was 19.53 years (SD = 1.48). Participants in the larger sample were mostly White or Caucasian (65.2%), followed by Asian (13.2%), Black or African-American (12.0%), Latino (2.7%), and other (6.9%).
Most of the larger sample were sophomores in college (40.8%), followed by freshmen (28.5%), juniors (17.7%), and seniors (12.3%). The majority of this larger sample majored in the social sciences (33.9%), natural sciences (18.0%), or business (11.7%). The remainder majored in communication arts and science (10.5%), education (6.6%), nursing (5.4%), engineering (3.0%), liberal arts (2.1%), and other pursuits (8.7%). Task A computer-based, radar-tracking simulation called TANDEM was used for the present study. TANDEM has been used as a valid training task in previous research (Bell & Kozlowski, 2008; Ford et al., 1998; Gully et al., 2002). TANDEM presents participants with a black radar screen and numerous menus. Each participant has a "ship" in the middle of the radar screen that must be protected in order to win points in the simulation. The ship is surrounded by an inner perimeter and an outer perimeter. On the screen are numerous targets with characteristics programmed by the experimenter. TANDEM allows researchers to control many aspects of the task, such as the number of targets, the placement and speed of targets, the length of the trials, the information presented before and after trials, and the rules for allocating points for engaging and executing targets. To perform well in the task, participants must make a number of decisions about each target before making a final decision (i.e., to clear or shoot the target). Before each trial, participants are given time to read and study a digital instruction manual, which provides information about how to make correct decisions about targets. The decision rules are based on three characteristics: the type of the target (e.g., air, submarine, surface), the class of the target (e.g., civilian, military), and the intent of the target (e.g., hostile, peaceful).
If the target is hostile, participants must shoot it to gain points; if the target is peaceful, they must clear it to gain points. If any of these four decisions is made incorrectly, participants lose points. The instruction manual also tells participants how to zoom in and out of the radar screen. Participants are required to zoom in and out in order to monitor the outer perimeter, which takes three zoom-outs to find. The outer perimeter is invisible to participants and must be located using marker targets, which are stationary targets that sit just inside the outer perimeter. Participants must learn to hook these targets to locate the outer perimeter so that they can hook and execute targets that may penetrate it. Participants lose 10 points each time a target crosses either the inner or outer perimeter. Once participants have read and studied the information about how to make decisions in the simulation, they are taken to a trial. During a trial, participants must select (i.e., hook) targets to retrieve the information needed to make their type, class, and intent decisions. Simultaneously, participants must not allow targets to penetrate the inner or outer perimeters; thus, they are required to use the zoom function during the trial to avoid losing points. Points are awarded for correct target decisions (i.e., type, class, intent, and engagement); 100 points are lost for each incorrect decision, and 10 points are lost each time a target penetrates a perimeter. Following the completion of a trial, participants are taken to a screen displaying performance feedback, which spans four pages. The first page provides information about participants' scores.
For instance, participants are told the total number of points they accrued, the number of targets they hooked, the number of targets they engaged correctly and incorrectly, and how many points each of those correct or incorrect engagements cost them. The second page provides information about participants' decisions: the number of correct type, class, intent, and engagement decisions they made. The third page informs participants how many targets they allowed to cross the inner and outer perimeters and how many points they lost as a result of those crossings. Finally, the fourth page tells participants the average time they spent per target, the number of pop-up targets they engaged, how many pop-up targets they engaged correctly, the number of high-priority targets they engaged (i.e., targets programmed to cross the perimeters), the number of marker targets they hooked, and the number of times they zoomed in and out. Design The present study used a between-subjects design comparing two training conditions: EMT and proceduralized, error-avoidant training. Participants were randomly assigned to one of the two conditions. The study comprised four phases. First, participants engaged in a familiarization phase in which the TANDEM task was introduced and performance was not recorded. Second, participants were trained on the TANDEM task with the instructions assigned to their condition and completed three training trials. Third, participants engaged in the practice phase, completing eight practice trials. Finally, participants completed a final adaptive transfer task. In total, the experiment lasted two hours. See Appendix A for the overall flow of the experiment. Procedure Phases. As previously stated, the present study had four main phases: familiarization, training, practice, and transfer. Familiarization phase.
Participants were first asked to complete an informed consent form. Once consent was given, participants completed a questionnaire asking for demographic information (see Appendix B) and a questionnaire assessing their trait goal orientation (see Appendix C). The experimenter then demonstrated the fundamental components of the task through an eight-minute video highlighting how to hook targets, how to zoom, and the proper sequence for deciding whether or not to engage targets. Following the presentation, participants were given two minutes to look over the instruction manual, which provided in-depth information about the topics discussed in the video. When the study period was over, participants completed a one-minute familiarization trial, in which performance was not recorded, to expose them to the simulation prior to beginning training. Training phase. Participants were provided instructions on how to proceed through training. The instructions varied by condition, but the training objectives were consistent across conditions (see Appendix D). The instructions were similar to those of past studies that used error management framing and/or proceduralized, error-avoidant framing (e.g., Bell & Kozlowski, 2008; Stevens & Gist, 1997). See Appendices E and F for specific information about the manipulations. First, participants received instructions on how to proceed through training and practice based on their condition. Then, the experimenter read a variation of the instructions aloud, emphasizing that participants should refer to these instructions throughout the experiment that day. Next, participants were given one minute to read through the instructions they received. Once participants finished reading, they received a document providing specific instructions on how to proceed during the first training block. Altogether, the training phase consisted of three training blocks.
Before each training block, participants were given specific instructions on how to proceed during that block. In the first block, participants were instructed on making decisions about targets (i.e., type, class, intent, and engagement) and navigating the task environment. In the second block, participants were instructed to prevent targets from crossing the perimeters by learning to operate the zoom function. In the third and final block, participants were instructed to prioritize targets and balance tradeoffs between the decisions they needed to make and the possibility of targets crossing the perimeters. In the EMT condition, participants received and listened to instructions emphasizing that making errors was positive and encouraged. The instructions included nine positive error phrases, which remained on participants' desks for the duration of the experiment. Example phrases included "It is good to make mistakes" and "The more errors you make, the more you learn!" In the instructions given prior to each training block, participants received information about the types of errors they were likely to make, and should make, during the task. For more specific information, see Appendix F. In the proceduralized, error-avoidant condition, participants received and listened to instructions emphasizing that errors should be avoided. They were also told to follow the 15 steps outlined for them in the instructions during the training and practice phases. Prior to each training block, participants received information about the types of errors commonly made in the simulation. These were the same errors presented to participants in the EMT condition, but participants in the proceduralized, error-avoidant condition were told to avoid them during training and practice. For more specific information, see Appendix E.
Each 330-second training trial had the following format: an opportunity to study the instruction manual (120 seconds), the training trial itself (150 seconds), and performance feedback (60 seconds). Participants then completed a state mastery goal orientation measure, though no hypotheses were made about this measure (see Appendix G). Practice phase. Following the training phase but before the practice trials began, the EMT or proceduralized, error-avoidant manipulation was reiterated to participants to ensure that the instructions and frame were salient prior to the practice phase. Participants in the EMT condition were told to explore the simulation and feel free to make errors during practice, while participants in the proceduralized, error-avoidant condition were told to refer to their procedural list for engaging targets in the simulation. However, the strategies employed and the specific skills practiced (e.g., zooming out, monitoring the perimeters) during the eight practice trials were entirely decided by the participant. Once the experimenter finished framing the practice period, participants were asked to choose among five practice difficulty levels. Participants were told that the most difficult level (i.e., level five) was the level most similar to the adaptive transfer task and that the levels increased in difficulty from level one to level five. Participants were also told that they could practice the levels in any order they chose and would likely repeat a few of the five levels across the eight practice trials. Once participants chose a trial to practice, they were asked to state a desired goal level (i.e., points to achieve during that trial) before each of the eight practice trials, to use as a referent when self-evaluating (see Appendix G). Then, participants answered a questionnaire about their self-efficacy in the simulation.
Participants then began interacting with the simulation and engaging targets during the practice trials. Following each practice trial, participants filled out a short questionnaire regarding their metacognitive activity (see Appendix H). Following the eighth and final practice trial, participants were given a strategic knowledge test to assess the knowledge they had gained during practice and whether they could apply it correctly (see Appendix I). Adaptive transfer. After the strategic knowledge test, participants were asked to provide a goal level for the transfer task and to report how self-efficacious they felt going into it. Participants then engaged in the adaptive transfer task. The task was similar in its elements to the most difficult practice trial but had higher task complexity. As a result, participants were required to zoom out, prioritize multiple contacts, and diligently monitor the inner and outer perimeters during the transfer task. Following prior research (e.g., Bell & Kozlowski, 2002; Bell & Kozlowski, 2008; Ford et al., 1998; Kozlowski, Gully, et al., 2001), the adaptive transfer task had an increased total number of contacts, a longer time interval (i.e., 10 minutes), more pop-up targets, more points deducted when contacts crossed the inner (-175 points) and outer (-125 points) perimeters, more pop-up targets near the defensive perimeter, and more targets threatening the outer perimeter. Pilot studies Similar to the studies by Ford et al. (1998) and Hughes et al. (2013), participants were able to control the difficulty of the trials throughout the practice phase. The present study was initially proposed to include nine 7-minute practice trials based on the studies mentioned above and on laboratory time constraints. To explore the viability of this proposal, two pilot sessions were conducted for three reasons.
First, the optimal number of practice trials needed to be determined so that the necessary performance and effort data could be collected without fatiguing participants. Second, the number of practice trials needed to fit within the experimental session's time constraints while leaving time for training, questionnaires, and the final adaptive transfer trial. Finally, the complexity of the trials needed to be tested so that participants could reliably differentiate the proposed "levels." First pilot session. A total of 27 participants completed the first pilot session; they were not included in the larger sample (N = 333). Approximately 29.6% of this sample was male (70.4% female), and the mean age was 19.74 years (SD = 1.13). The majority of the pilot sample was White or Caucasian (51.9%), followed by Black or African-American (18.5%), Asian (11.1%), Latino (3.7%), and other (14.8%). Most participants were sophomores in college (55.6%), followed by juniors (22.2%), freshmen (14.8%), and seniors (7.4%). These participants filled out the demographics and trait goal orientation questionnaires before being trained. They watched an eight-minute video explaining the nature of the task before engaging in a one-minute familiarization trial. Following the familiarization trial, participants were asked to complete three training trials. After the training period, participants were told that they would have the opportunity to engage in ten practice trials that would increase in difficulty from level 1 (the easiest trial) to level 5 (the most difficult trial). The sequence of practice trials was fixed such that each participant began with level 1 and ended with level 5. Each trial lasted three minutes, and participants interacted with each trial twice. Before each trial began, participants were asked to report their level of self-efficacy and the number of points they wanted to reach.
After each trial, participants were provided feedback on their performance. Then, participants were asked to reflect on the extent to which they had engaged in metacognitive activity. After every other trial, participants answered four lab-made questions about the difficulty of the trial on a scale from 1 ("Very easy") to 5 ("Very difficult"). Participants were asked how easy or difficult the level was in terms of engaging, executing, and prioritizing targets, as well as how easy or difficult the level was overall. After interacting with a level for the first time (e.g., during the third trial, participants interacted with level 2 for the first time), they were also asked to decide which level was more difficult overall: the trial they had just played or the trial they had played before. Participants were also asked to provide a short explanation of why they thought the trial they chose was more difficult. Level design. As previously mentioned, practice difficulty was manipulated according to Wood's (1986) taxonomy. Across the levels, complexity was manipulated by (1) increasing the speed of high-priority targets (i.e., targets that must be executed in order to avoid losing points), (2) increasing the number of ambiguous targets (i.e., targets presenting information that did not exactly match a particular engagement decision) and pop-up targets (i.e., targets that "popped up" on screen without warning), and (3) increasing the proximity of targets to the inner and outer perimeters. Difficulty was manipulated by increasing the total number of targets and the number of targets appearing on the opening screen as the level number increased.
Table 1
Trial design characteristics for the first pilot session

                                             Level 1  Level 2  Level 3  Level 4  Level 5  Transfer
Total number of targets                           30       30       40       50       60        60
Number of targets on opening screen                9        9       11       13       16        20
Number of close range targets (<20 nm)             2        4        6        8       10        12
Number of pop-up targets                           3        3        4        5        6        10
Number of pop-up targets on opening screen         0        1        1        2        2         5
Number of high priority pop-up targets             0        0        0        0        0         2
Number of inner crossings                          0        2        1        2        2         3
Number of outer crossings                          0        0        1        1        2         2
Number of ambiguous targets                        0        5       10       20       30        40

The adaptive transfer trial was manipulated in the same vein as the practice trials, but there were marked differences between the two trial types. First, the cost of making a mistake (i.e., allowing a target to intrude the inner or outer perimeter) increased at least tenfold. During practice trials, perimeter intrusions cost only 10 points for each target that crossed either perimeter; in the transfer trial, allowing a target to cross the inner perimeter cost 150 points, and allowing a target to cross the outer perimeter cost 100 points. Second, the proportion of pop-up targets rose from one-tenth of the total targets to one-sixth. Notably, one-half of these pop-up targets appeared on the opening screen near the inner perimeter, where intrusions were the most costly; in practice trials, the highest proportion of pop-up targets appearing on the opening screen was one-third. Finally, the requirements for avoiding point losses changed. In the practice trials, pop-up targets appeared on screen but were never set to cross either perimeter; they came dangerously close to the perimeters but never actually intruded them. In the transfer trial, participants had to engage two pop-up targets during the task or they lost 250 points from one intrusion at each of the inner and outer perimeters.
Thus, the conditions of the transfer trial required participants to adapt and use their engagement skills at a moment's notice. A full description of how the trials were manipulated is shown in Table 1.

Table 2
Means and standard deviations for engagement difficulty in the first pilot session

             Mean    SD
Level 1      3.30   1.79
Level 2      3.70   1.41
Level 3      3.20   1.41
Level 4      3.84   1.49
Level 5      4.16   1.41
Transfer     3.15   1.43

Table 3
Means and standard deviations for execution difficulty in the first pilot session

             Mean    SD
Level 1      3.00   1.75
Level 2      3.67   1.49
Level 3      3.04   1.43
Level 4      3.80   1.44
Level 5      3.92   1.44
Transfer     3.12   1.37

Results from the first pilot session. The results from the first pilot session are shown in Tables 2, 3, 4, 5, and 6. Level 5 had the highest difficulty ratings of all the levels assessed in the pilot: it was rated the most difficult trial in terms of engaging, executing, and prioritizing targets, as well as the most difficult trial overall. The easiest trial varied depending on the question asked. Participants rated level 1 the easiest trial for executing targets and the easiest trial overall, but rated the transfer trial the easiest for engaging and prioritizing targets. However, level 3 was rated easier than level 1 in terms of engaging and prioritizing targets, and easier than level 2 in terms of executing targets. Nevertheless, the mean of the comparison question was above 1.50 for each comparison asked of the participants. Because the easier trial was coded 1.00 and the more difficult trial 2.00, a mean above 1.50 indicated that participants picked the more difficult trial more often than the easier trial. Conclusions from the first pilot session. After careful consideration, it was determined that edits needed to be made to the level 3 trial so that it could be more easily differentiated from levels 1 and 2.
In addition, an unintended error was made in the wording of the difficulty items. In the questionnaires, participants were asked how easy or difficult the "level" was with regard to the different criteria. It was determined that the term "level," paired with an increasing number, itself implied difficulty, such that level 2 would be assumed to be more difficult than level 1 simply because of its higher number. Actions taken in response to the first pilot session. In light of the results from the first pilot session, a major revision was made to the level 3 trial to increase the likelihood that participants would differentiate the difficulty of levels 2, 3, and 4 in the intended fashion. Specifically, the number of inner perimeter crossings in level 3 was increased from one target to two. The data from the first pilot session suggested that although the difficulty of the third trial had increased, in that participants needed to execute an outer perimeter target, participants did not realize that they needed to zoom out to prevent losing points. However, they most likely did notice that the number of inner perimeter crossings had decreased from two targets to one, creating the perception that the third level was easier than the second. The details of the changes made to the practice trial levels are shown in Table 7.

Table 4
Means and standard deviations for prioritization difficulty in the first pilot session

             Mean    SD
Level 1      3.85   1.92
Level 2      3.89   1.58
Level 3      3.60   1.58
Level 4      3.96   1.34
Level 5      4.12   1.42
Transfer     3.23   1.48

Table 5
Means and standard deviations for overall difficulty in the first pilot session

             Mean    SD
Level 1      3.26   1.66
Level 2      3.85   1.35
Level 3      3.68   1.44
Level 4      4.00   1.56
Level 5      4.24   1.36
Transfer     3.42   1.33

A second edit was made before holding another pilot session.
To avoid leading participants to believe one level was more difficult than another, the difficulty items were revised to ask how easy or difficult the "scenario," rather than the "level," was with regard to the corresponding criteria.

Table 6
Means and standard deviations for comparisons of difficulty in the first pilot session

Comparison                       Mean    SD
Level 1 compared to Level 2      1.74   .45
Level 2 compared to Level 3      1.60   .50
Level 3 compared to Level 4      1.68   .48
Level 4 compared to Level 5      1.68   .48
Level 5 compared to Transfer     1.65   .49

Table 7
Trial design characteristics for the second pilot session and final experiment

                                             Level 1  Level 2  Level 3  Level 4  Level 5  Transfer
Total number of targets                           30       30       40       50       60        60
Number of targets on opening screen                9        9       11       13       16        20
Number of close range targets (<20 nm)             2        4        6        8       10        12
Number of pop-up targets                           3        3        4        5        6        10
Number of pop-up targets on opening screen         0        1        1        2        2         5
Number of high priority pop-up targets             0        0        0        0        0         3
Number of inner crossings                          0        2        2        3        2         7
Number of outer crossings                          0        0        1        1        2         3
Number of ambiguous targets                        0        5       10       20       30        40

Second pilot session. A total of thirty participants completed the second pilot session and were included in the larger sample (N = 333). Approximately 23.3% of this sample was male (76.7% female), and the mean age was 19.73 years (SD = 1.29). The majority of the pilot sample was White or Caucasian (70.0%), followed by Black or African-American (13.3%), Asian (10.0%), Latino (3.3%), and other (3.3%). Most participants were juniors (33.3%) or sophomores (30.0%), followed by freshmen (20.0%) and seniors (16.7%). The procedures were the same as in the first pilot session, with the exception of the practice trials.
After the training period, participants were told that they would have the opportunity to engage in eight practice trials and could choose among trials ranging in difficulty from the first trial (the easiest) to the fifth trial (the most difficult). Each trial lasted 150 seconds. Before each trial began, participants were asked to report their level of self-efficacy and the number of points they wanted to reach. After each trial, participants were provided feedback on their performance. Then, participants were asked to reflect on the extent to which they had engaged in metacognitive activity. After every trial, participants answered four lab-made questions about the difficulty of the scenario on a scale from 1 ("Very easy") to 5 ("Very difficult"). Participants were asked how easy or difficult the scenario was in terms of engaging, executing, and prioritizing targets, as well as how easy or difficult the scenario was overall. In addition, they were asked to decide which scenario was more difficult overall: the scenario they had just played or the scenario they had played before. Participants were also asked to provide a short explanation of why they thought the scenario they chose was more difficult. Results from the second pilot session. Many of the participants followed the sequential order of the scenarios (e.g., starting with scenario 1, then scenario 2, and so on through scenario 5), but some strayed from this ordered sequence. As a result, the data were more difficult to analyze, since participants had a choice of which scenarios to play. The data were therefore compared in pairs, such that each sequential pair of scenarios that occurred was analyzed to understand the difficulty comparison between those two scenarios.
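The pairwise scoring used for these comparison questions can be illustrated with a short script. Following the coding described for the first pilot session (1.00 = the easier scenario chosen, 2.00 = the more difficult scenario chosen), a mean above 1.50 indicates that most choices matched the intended difficulty ordering. The data below are invented for illustration.

```python
def comparison_summary(choices):
    """Summarize 'which was harder?' responses for one scenario pair.

    Each response is coded 2 when the participant picked the objectively
    harder scenario and 1 otherwise; a mean above 1.5 means most picks
    matched the intended ordering."""
    mean = sum(choices) / len(choices)
    prop_consistent = sum(c == 2 for c in choices) / len(choices)
    return mean, prop_consistent

# Invented example: 17 of 20 picks match the intended difficulty order.
choices = [2] * 17 + [1] * 3
mean, prop = comparison_summary(choices)
print(round(mean, 2), prop)  # 1.85 0.85
```

The mean and the proportion carry the same information here: the proportion of order-consistent picks is simply the mean minus 1.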
On the whole, participants rated the levels in the expected order: level 1 was perceived to be the easiest and level 5 the most difficult. Level 2 was perceived to be more difficult than level 1 but easier than level 3; level 3 was perceived to be more difficult than level 2 but easier than level 4; and level 4 was perceived to be more difficult than level 3 but easier than level 5. Of the 209 choices that participants made, only 19 (9.09%) were not in the expected order. Inspection of the open-ended responses accompanying these unexpected choices indicated that some participants were simply learning more about the simulation, which affected their difficulty ratings. For example, one participant said that level 3 was more difficult than level 1 because they were "just getting better and really learning how to use the information." Another participant with a discrepancy on a question comparing level 2 to level 5 noted that "[level 5], although it was more difficult than the last... was easier because I really studied the manual so I knew what I was doing more." Conclusions from the second pilot session. After reviewing the data on trial difficulty, it was determined that the manipulation of practice difficulty was working as intended. On the whole, participants perceived the more difficult trials to indeed be more difficult for engaging, executing, and prioritizing targets. In addition, the entire experiment, including questionnaires, training, practice, and transfer, fit within the two-hour window allotted. Thus, the data from the second pilot session were included in the final sample, and data collection began. Measures Measures were distributed at three time points throughout the experiment. Cognitive ability, demographic information, and trait goal orientation were measured following the receipt of informed consent.
State goal orientation was measured following training, once the error framing had been delivered by the experimenter. During the practice phase, participants repeatedly completed measures of self-regulation. Specifically, participants' chosen practice difficulty, goal level, and perceptions of self-efficacy were measured before each practice trial. After each practice trial, participants' self-evaluation activity and metacognitive activity were measured. Once participants had completed all practice trials, they took a strategic knowledge assessment and filled out the same self-regulatory measures in relation to the adaptive transfer task.
Control variables. Cognitive ability was used as a control variable in the present study. Research has shown that individuals with high cognitive ability tend to fare better on learning outcomes than individuals with low cognitive ability in active, exploratory learning conditions (Bell & Kozlowski, 2008; Gully et al., 2002; Sitzmann et al., 2009). Therefore, the effects of cognitive ability were controlled for in this study. Cognitive ability was collected before the familiarization phase.
Cognitive ability. Participants' SAT and ACT scores were collected as a proxy for cognitive ability. SAT and ACT scores have been found to be valid and suitable measures of intelligence (Frey & Detterman, 2004; Koenig, Frey, & Detterman, 2008). Participants' SAT scores were converted into ACT scores using the estimated SAT-ACT relationship table provided on the ACT website (https://www.act.org/aap/concordance/estimate.html). Thus, cognitive ability was measured using participants' ACT or converted ACT scores. The minimum ACT score one can receive is 11.00; the maximum ACT score is 36.00. In the reduced sample (N = 238), only 5.9% of participants provided SAT scores that needed to be converted into ACT scores. The reduced sample had an average ACT score of 25.02 (SD = 3.44). Scores ranged from 17.00 to 33.00.
The original sample (N = 333) did not differ greatly from the reduced sample: participants had an average ACT score of 24.87 (SD = 3.52), and scores ranged from 15.00 to 33.00. In the original sample, approximately 5.1% of the 333 participants' test scores needed to be converted into ACT scores.
Demographics. Participants' year in school, major, gender, age, and race were collected to identify sample characteristics and determine the generalizability of the current findings. Demographics were collected before the familiarization phase.
Trait goal orientation. VandeWalle's (1997) 13-item measure was used to assess trait goal orientation. Response options ranged from 1 = strongly disagree to 6 = strongly agree. An example item for trait mastery orientation was "I am willing to take on challenges that I can learn a lot from." An example item for trait prove orientation was "I'm concerned with showing that I can perform better than my peers." An example item for trait avoid orientation was "I prefer to avoid situations where I might perform poorly." Trait goal orientation was measured before the familiarization phase. The mean for the five mastery goal orientation items was 5.03 (SD = .69), the mean for the four performance prove items was 4.45 (SD = .94), and the mean for the four performance avoid items was 3.33 (SD = 1.06). Cronbach's alpha was .85 for the five mastery trait goal orientation items and .83 for the eight performance trait goal orientation items. The original sample (N = 333) had a mean of 4.96 for mastery goal orientation (SD = .73), a mean of 4.41 for performance prove orientation (SD = .93), and a mean of 3.41 for performance avoid orientation (SD = 1.04).
Condition. The experimental condition was numerically coded such that 0 = proceduralized, error-avoidant and 1 = EMT.
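The SAT-to-ACT conversion step described above can be sketched as a simple lookup. This is an illustrative sketch only: the mapping values and function name below are hypothetical placeholders, not the actual concordance values published on the ACT website.

```python
# Hypothetical stand-in for the published SAT-ACT concordance table;
# these values are illustrative only, not the actual concordance.
SAT_TO_ACT = {1000: 20, 1100: 22, 1200: 25, 1300: 28, 1400: 31}

def cognitive_ability_score(act=None, sat=None):
    """Use the ACT score when reported; otherwise convert the SAT score
    using the nearest tabled SAT value at or below the reported score."""
    if act is not None:
        return act
    key = max(k for k in SAT_TO_ACT if k <= sat)
    return SAT_TO_ACT[key]
```

In the study, roughly 6% of participants reported only SAT scores, so the converted values and raw ACT scores were pooled into one cognitive ability variable.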
Approximately 47.9% of the sample participated in the EMT condition (52.1% in the proceduralized, error-avoidant condition).
State goal orientation. Horvath, Scheu, and DeShon's (2001) measure of state goal orientation was used to assess participants' state goal orientation following the error framing after training. Response options ranged from 1 = strongly disagree to 5 = strongly agree. An example item for state mastery orientation was "On this task, my goal is to learn the task as well as I can." An example item for state prove orientation was "It is important to me to impress others by doing a good job on this task." An example item for state avoid orientation was "On this task, I would like to hide from others that they are better than me." Cronbach's alpha was .82 for the five mastery goal orientation items, .82 for the four performance prove goal orientation items, and .77 for the five performance avoid goal orientation items. State goal orientation was measured once, before the practice phase, in line with previous research (Bell & Kozlowski, 2008; Baard, 2013; Perry, 2015). No formal hypotheses were included in the present study about how state goal orientation would affect the self-regulation process.
Practice difficulty. Levels of difficulty ranged from 1 to 5, where 1 was the easiest trial and 5 was the most difficult. Participants chose a level of difficulty to practice before beginning each of the eight practice trials. Altogether, practice difficulty was collected eight times over the course of the experiment and averaged across all eight trials. As previously described, practice difficulty was manipulated according to Wood's (1986) taxonomy of task complexity. The aspects of the task that were manipulated are shown in Table 7.
Based on the results of two pilot studies, participants were able to differentiate between the levels of difficulty such that lower levels of practice trials were rated lower on lab-made difficulty scales than higher levels of practice trials.
Self-efficacy. A three-item adaptation of Ford and colleagues' (1998) eight-item measure of self-efficacy was used to assess participants' self-efficacy before each practice trial. An example item was "I am certain that I can manage the requirements of this task." Response options ranged from 1 = strongly disagree to 5 = strongly agree. Three items were chosen so that participants did not feel overburdened with questionnaires throughout the eight practice trials. To choose these three items, an exploratory factor analysis was performed on the first pilot session data to investigate which items loaded most highly on the self-efficacy factor. The exploratory factor analysis used varimax rotation, and one factor emerged. This factor explained greater than 60.9% of the variance across the eight different iterations of the self-efficacy measure. The three items chosen all had factor loadings larger than .62. Self-efficacy was measured eight times, immediately before each practice trial. Cronbach's alpha for the three items ranged from .82 to .93 across the eight practice trials; the average across trials was .90. For the purposes of testing the hypotheses, the three items were averaged to compute a scale score for each participant, and the eight scale scores were then averaged to compute an overall mean self-efficacy score.
Practice performance. Practice performance was operationalized as the number of points accrued during each practice trial. Participants received 100 points per target if they correctly made all four decisions (i.e., type, class, intent, and engagement) for that target; 100 points were deducted if any of the four decisions was incorrect.
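The point-scoring rule for practice performance can be expressed directly in code. This is a minimal sketch of the stated rule; the function names and data layout are my own choices, not part of the TANDEM simulation's implementation.

```python
def trial_points(targets):
    """Score one practice trial: +100 when all four decisions (type, class,
    intent, engagement) for a target are correct, -100 when any decision is
    incorrect. `targets` is a list of 4-tuples of booleans, one per target."""
    return sum(100 if all(decisions) else -100 for decisions in targets)

def mean_practice_performance(trials):
    """Average the point totals across trials, matching how the overall
    practice performance score was computed over the eight practice trials."""
    return sum(trial_points(t) for t in trials) / len(trials)
```

For example, a trial with one fully correct target and one target with a single wrong decision nets zero points (+100 - 100).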
The number of targets varied throughout the practice trials; for more information about the number of targets included in each trial, refer to Table 7. In order to test the hypotheses, the number of points accrued across the eight practice trials was averaged to compute an overall mean practice performance score.
Self-evaluation activity. Self-evaluation activity was operationalized as the amount of time (in seconds) spent reviewing feedback following each practice trial. To ensure that participants were self-evaluating (i.e., comparing the current state to the desired end state), participants were prompted to provide a point goal they would like to achieve before starting the chosen scenario. The participants then interacted with the scenario and were prompted to read performance feedback. Participants had the opportunity to read four pages of feedback for each of the eight practice trials. Across the four pages, participants were presented with information about their point score, the number of engagements they made, the correctness of those engagements, the number of targets for which they made a correct classification decision, the number of perimeter intrusions they allowed, the average amount of time they spent per target, the number of pop-up targets they engaged, the number of high-priority targets they engaged, and the number of zooms they used. Timing of feedback review began immediately following completion of the scenario. To test the hypotheses, the amount of time spent reviewing feedback was averaged over the eight trials to create an average self-evaluation activity score.
Metacognition. An adapted three-item version of Ford and colleagues' (1998) twelve-item measure of metacognition was used for this experiment.
An example item was "As I practiced the scenario, I evaluated how well I was learning the skills of the simulation." Response options ranged from 1 = strongly disagree to 5 = strongly agree. Three items were chosen so that participants did not feel overburdened with questionnaires throughout the eight practice trials. To choose these three items, an exploratory factor analysis was performed on the first pilot session data to investigate which items loaded most highly on the metacognition factor. The exploratory factor analysis used varimax rotation, and one factor emerged. This factor explained greater than 63.0% of the variance across the eight different iterations of the metacognition measure. The three items chosen all had factor loadings larger than .60. Metacognition was measured after each of the eight practice trials, immediately following the feedback participants received after completing a trial. Cronbach's alpha for the three items ranged from .75 to .93 across the eight practice trials; the average across trials was .86. For the purposes of testing the hypotheses, the three items were averaged to compute a scale score for each participant, and the eight scale scores were then averaged to compute an overall average metacognition score for the eight trials.
Effort. Behavioral effort was operationalized as the average number of targets hooked by the participant per trial. In order to hook a target, participants needed to click on the target to receive information about its type, class, and intent in the simulation. For the purposes of testing the hypotheses, effort was averaged across the eight trials to compute an average effort score.
Knowledge. A 22-item multiple-choice test was given to assess declarative and strategic knowledge about the TANDEM task. Twelve items assessed strategic knowledge while ten items assessed declarative knowledge.
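The item-selection and scoring pipeline used for both the self-efficacy and metacognition scales (pick top-loading items, check internal consistency, average items per trial, then average across trials) can be sketched as follows. As an approximation of the one-factor EFA, loadings are taken here from the first principal component of the item correlation matrix; with a single factor, rotation does not change which items load most strongly. Function names are mine.

```python
import numpy as np

def top_loading_items(responses, n_keep=3):
    """Approximate one-factor loadings via the first principal component of
    the item correlation matrix, then keep the highest-loading items.
    `responses` is an (n_respondents x n_items) array."""
    R = np.corrcoef(np.asarray(responses, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues ascending
    loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
    return sorted(np.argsort(-np.abs(loadings))[:n_keep].tolist())

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def overall_scale_score(trials):
    """Average the items within each trial, then average the per-trial scale
    scores into one overall score per participant."""
    per_trial = [np.asarray(t, dtype=float).mean(axis=1) for t in trials]
    return np.mean(per_trial, axis=0)
```

The same two-step averaging (items within trial, then trials) yields the overall self-efficacy and metacognition scores used in the hypothesis tests.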
The present study only made hypotheses about strategic knowledge; thus, only the twelve strategic items were used to operationalize this variable. The ten declarative knowledge items were included for exploratory purposes. Strategic knowledge was assessed by twelve questions asking for the appropriate strategies to prioritize contacts and monitor perimeters, as well as how to use the zoom function to manage the targets in the scenarios. Response options were coded such that a correct response was coded as 1 and all incorrect responses were coded as 0. These recoded responses were summed to create one strategic knowledge score out of a possible twelve points. Cronbach's alpha for the twelve binary recoded strategic knowledge items was poor (α = .38). In light of the poor internal consistency, the response options were recoded to create a scale such that the best answer received 4 points, the second-best answer received 3 points, the third-best answer received 2 points, and the worst answer received 1 point. When Cronbach's alpha was computed for the strategic knowledge scale using this scoring format, alpha worsened (α = .29). Thus, the previous scoring (i.e., each response option coded 1 = correct, 0 = incorrect) was maintained for the analyses. Further analyses were performed to understand the underlying response patterns of the data, which are described in the results section. The questions chosen for this assessment were adapted from Bell and Kozlowski (2008) and have been used in other research studies (see Baard, 2013; Perry, 2015). The knowledge assessment was given following the eighth practice trial. See Appendix H for the questions included in the assessment.
Adaptive transfer performance. Performance in the adaptive transfer task was operationalized similarly to practice performance in the practice trials.
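The two scoring schemes compared for the strategic knowledge test can be sketched as follows. The item keys and option rankings in the example are hypothetical placeholders; the actual items appear in Appendix H of the thesis.

```python
def score_binary(responses, keys):
    """Original scoring: 1 point when the chosen option matches the keyed
    answer, 0 otherwise; summed over the twelve strategic items."""
    return sum(int(r == k) for r, k in zip(responses, keys))

def score_graded(responses, rankings):
    """Alternative partial-credit scoring tried in the thesis: the best
    option earns 4 points, the second best 3, the third best 2, and the
    worst 1. `rankings` lists each item's options from best to worst."""
    return sum(4 - ranking.index(r) for r, ranking in zip(responses, rankings))
```

In the study, the graded scheme lowered internal consistency, so the binary scheme was retained for analysis.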
Consequently, the number of points that participants accrued in the adaptive transfer task was used to measure adaptive transfer performance.
Analytic strategy
In order to test whether there were mean differences in average practice difficulty between experimental conditions, a univariate analysis of variance (ANOVA) was used. To test the a priori model shown in Figure 3, as well as all other a priori hypotheses, path analysis was employed, following the procedures of Bollen (1989) and Muthén and Muthén (2012). The model was analyzed in Mplus version 7.4. No latent variables were included in the path analysis. The repeated measures used in this study were averaged across the eight time points to compute an overall mean for each self-regulation variable. Due to the nature of the self-regulation variables, an overall measurement model could not be estimated for the self-regulation scales. Path analysis assumes that observed variables are measured perfectly; therefore, a path model was estimated for the present study and only the structural model was tested. Maximum likelihood estimation was used to estimate the path model, as the variables of interest were approximately normally distributed. The performance variables were particularly leptokurtic, but maximum likelihood is robust to violations of the normality assumption. In addition, the sample size was large and each observation was independent. The path diagram in Figure 3 was entered accordingly into Mplus version 7.4. Cognitive ability was included as a control variable in the model such that cognitive ability predicted practice difficulty (Hughes et al., 2013), self-efficacy (Chen, Casper, & Cortina, 2001), practice performance, effort, and performance (Kanfer & Ackerman, 1989). Cognitive ability was also initially included as a predictor of self-evaluation, but this path was removed because the coefficient was nonsignificant.
The experimenter who led the session was also included as a control variable predicting adaptive transfer performance, but it was removed from the model because the path was nonsignificant. The direction, magnitude, and significance level of the path coefficients were used to determine whether or not support existed for each of the a priori hypotheses. One thousand bootstrapped samples were taken to test for indirect effects and to compute 90% bootstrapped confidence intervals for each coefficient estimate. Standardized coefficients from the Mplus version 7.4 output were used to determine the direction, magnitude, and significance level of the coefficients. Coefficients were considered significant if the two-tailed p-value was below .05. Two interaction terms were included in the path model by multiplying condition by mastery trait goal orientation and multiplying condition by performance trait goal orientation. The simple slopes of these interactions were tested using SPSS PROCESS version 2.15, created by Andrew Hayes (for more information refer to Hayes, 2013), using model template 2. One thousand bootstrapped samples were taken to test for indirect effects and to compute 95% bootstrapped confidence intervals. The components of the interaction products were mean-centered to alleviate collinearity between the variables (Cronbach, 1987).
RESULTS
Table 8 shows the means, standard deviations, and intercorrelations for all variables measured in the experiment. Cronbach's alphas are reported on the diagonal where appropriate. Note that the Cronbach's alphas reported in the table for the metacognition and self-efficacy scales are means of the eight alphas corresponding to the eight practice trials. The high standard deviations for the performance variables provide support for the leptokurtic nature of these data.
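The construction of the two mean-centered product terms entered in the path model can be sketched as follows; this is an illustrative NumPy sketch, and the function name is mine.

```python
import numpy as np

def interaction_terms(condition, mastery, performance):
    """Mean-center each component (Cronbach, 1987) and form the two product
    terms entered in the path model: condition x mastery trait goal
    orientation and condition x performance trait goal orientation."""
    c = np.asarray(condition, dtype=float)
    m = np.asarray(mastery, dtype=float)
    p = np.asarray(performance, dtype=float)
    c, m, p = c - c.mean(), m - m.mean(), p - p.mean()
    return c * m, c * p
```

Centering before multiplying leaves the interaction coefficients interpretable at the sample means of the moderators and reduces the correlation between each product term and its components.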
Almost every participant had a mean positive score across the practice trials (except for 9% of the sample); approximately 56.30% of the sample scored between 300 and 600 points across the eight trials. Thus, during the two minutes and thirty seconds provided, participants were able to correctly execute three to six targets on average. When given ten minutes to execute targets in the adaptive transfer performance trial, 68.35% of participants scored between 500 and 2000 points. This is the equivalent of correctly executing five to twenty targets, or 8% to 33% of the total targets shown in the simulation. As a result of the amount of time provided and participant behavior during that time, the two performance variables deviate from a normal distribution. As expected, there were significant positive correlations between practice difficulty and self-evaluation (r = .34, p = .00) and between self-evaluation and strategic knowledge (r = .20, p = .00). Adaptive transfer performance was significantly and positively related to all of the averaged self-regulation variables, including practice difficulty, metacognition, self-evaluation, self-efficacy, effort, and practice performance. In addition, the control variable of cognitive ability was positively related to practice difficulty, effort, practice performance, strategic knowledge, and adaptive transfer performance. Table 8 also shows that mastery and performance trait goal orientation were negatively related: those who had higher scores on mastery trait goal orientation tended to have lower scores on performance trait goal orientation (and vice versa). In line with previous research, mastery trait goal orientation was also positively related to metacognition and self-efficacy during the simulation (Ford et al., 1998). Alternatively, performance trait goal orientation was negatively related to the amount of effort put forward in the simulation.
Some self-regulation variables also had significant intercorrelations, including practice performance and self-efficacy (r = .18, p = .00) and practice performance and effort (r = .17, p = .00).
Trends of the repeated measures variables
No formal a priori hypotheses were made about the trajectory of or changes in the self-regulation variables over time. However, descriptive analyses were conducted on these self-regulation variables to provide context for participant behavior throughout the experiment. Figure 4 shows the mean practice difficulty choices for all participants across the eight practice trials. On average, participants in both experimental conditions followed a steady upward trend through the levels of practice difficulty until they reached practice trial 6. Although participants in both conditions were told that they could practice the levels in any order of their choosing, the practice difficulty means show that most participants proceeded through the levels in consecutive order. One possible explanation for this finding is how the difficulty options were presented: the options appeared in consecutive order for all participants, which may have primed them to follow the predetermined order. In hindsight, participants should have been presented with difficulty options in random order to avoid possible priming effects.

Table 8
Means, standard deviations, and bivariate correlations for all variables of interest

Variable                                   M        SD     1      2      3      4      5      6      7      8      9     10     11
1. Experimental condition                 .48      .50     -
2. Mastery trait goal orientation        5.02      .69   -.02   (.85)
3. Performance trait goal orientation    3.89      .83    .04   -.15*  (.83)
4. Practice difficulty                   3.12      .69   -.09   -.07   -.10    -
5. Metacognition                         3.92      .48    .02    .28** -.05   -.00   (.86)
6. Self-evaluation                      20.39     7.61   -.09   -.04   -.07    .34**  .06    -
7. Self-efficacy                         3.94      .48    .08    .30**  .02    .02    .69** -.03   (.90)
8. Effort                                8.26     1.93    .12    .05   -.14*   .07    .01    .04    .04    -
9. Practice performance                371.49   213.29    .12   -.08    .00   -.06    .09   -.03    .18**  .17**  -
10. Strategic knowledge                  5.47     1.93   -.04    .03   -.09    .07   -.03    .20** -.02    .12   -.02   (.38)
11. Adaptive transfer performance     1231.43   851.43    .12   -.06   -.02    .19**  .17*   .16**  .22**  .20**  .65**  .08    -
12. Cognitive ability                   25.02     3.44    .06    .06   -.00    .19**  .04    .02    .13    .22**  .26**  .31**  .33**

Note. *p < .05; **p < .01. Cronbach's alphas for the scales are provided on the diagonal. The alphas reported for metacognition and self-efficacy are averages of the eight Cronbach's alphas across the practice trials.

In the remaining three practice trials, the mean for practice difficulty hovered around the midpoint as the participants prepared for the adaptive transfer trial. Table 9 provides an in-depth description of how participants selected practice difficulty levels across time in each experimental condition. Notably, the majority of the participants in the procedural, error-avoidant condition followed the pattern of the majority of the total sample. Participants in the EMT condition deviated from the pattern of the total sample only before engaging in trial 6. Before trial 6, the majority of EMT participants chose level 1 whereas the majority of procedural, error-avoidant participants chose level 5.
Figure 4
Mean practice difficulty choices before the eight practice trials
[Line graph: level of difficulty (1 to 5) across trials 1 through 8 for the procedural, error-avoidant condition, the EMT condition, and the total sample.]

Table 9
Number of participants who selected a practice difficulty level for a corresponding practice trial

                             Trial 1  Trial 2  Trial 3  Trial 4  Trial 5  Trial 6  Trial 7  Trial 8
Procedural, error-avoidant
  Level 1                       86        6        5       10        7       22       22       29
  Level 2                       15       79       11        8        7        7       16       16
  Level 3                       16       15       76       19       12       23       15       18
  Level 4                        4       15       18       70       28       24       20       18
  Level 5                        3        9       14       17       70       48       51       43
EMT
  Level 1                       75        7       10       14       15       32       28       21
  Level 2                        8       59       16        8        7       10       14       12
  Level 3                       22       20       53       20       11       19       20       22
  Level 4                        3       12       22       54       31       26       21       23
  Level 5                        6       16       13       18       50       27       31       36
Total sample
  Level 1                      161       13       15       24       22       54       50       50
  Level 2                       23      138       27       16       14       17       30       28
  Level 3                       38       35      129       39       23       42       35       40
  Level 4                        7       27       40      124       59       50       41       41
  Level 5                        9       25       27       35      120       75       82       79

Figure 5 shows the mean self-efficacy scale scores over the eight practice trials and before the adaptive transfer trial. On average, self-efficacy was slightly higher in the EMT condition throughout the experiment, though this difference was not statistically significant. In the first four practice trials, participants across the conditions felt self-efficacious on average. There was a minor dip before practice trial 5, where 50.40% of the sample chose to practice level 5 and 24.80% of the sample chose to practice level 4. On average, after participants completed the fifth practice trial, they felt more self-efficacious than at any other time during the experiment. Notably, participants' average self-efficacy dropped again before beginning the adaptive transfer trial.

Figure 5
Mean self-efficacy scores before the eight practice trials and the adaptive transfer task

Figure 6 shows mean self-evaluation activity after the eight practice trials and after the adaptive transfer performance trial.
Both experimental conditions showed similar trends of self-evaluation activity throughout the experiment. Average self-evaluation activity decreased over time, such that participants tended to spend less time viewing their performance feedback as they practiced the simulation. However, average self-evaluation activity increased after the adaptive transfer task.

Figure 6
Mean self-evaluation following the eight practice trials and the adaptive transfer task

Figure 7 shows the mean metacognition scale scores following the eight practice trials and the adaptive transfer performance trial. There were no significant mean differences between the conditions, and average metacognitive activity remained above the midpoint of the scale throughout the experiment. There was a slight increase in metacognitive activity following the adaptive transfer trial.

Figure 7
Mean metacognition following the eight practice trials and the adaptive transfer task

Results from the a priori model
Overall, the goodness-of-fit statistics for the a priori model indicated that the data fit the model poorly (χ2 = 396.72, df = 54, p = .00; CFI = .27; TLI = -.03; SRMR = .11; RMSEA = .16). Figure 8 presents a depiction of the findings for the a priori model. For reference, acceptable goodness-of-fit indices are as follows: CFI > .90, TLI > .90, SRMR < .08, and RMSEA < .06. An alternative model was tested by including a direct path predicting adaptive transfer performance from condition. The path coefficient for the condition-performance relationship was nonsignificant (β = .10, p = .10), and fit did not greatly differ from the original a priori model (χ2 = 394.12, df = 53, p = .00; CFI = .27; TLI = -.04; SRMR = .10; RMSEA = .16).
Figure 8
Results of the a priori model
[Path diagram not reproduced.]

Hypothesis 1 posited that participating in EMT would encourage the choice of more difficult practice sessions on average than participating in proceduralized, error-avoidant training. A univariate analysis of variance (ANOVA) revealed no significant difference in average practice difficulty between conditions. Thus, hypothesis 1 was not supported. Hypothesis 2 posited that EMT and mean practice difficulty would have a positive relationship, and hypothesis 3 posited that proceduralized, error-avoidant training and mean practice difficulty would have a negative relationship. In the a priori model, the coefficient for the condition to mean practice difficulty path was positive, implying that participants in the EMT condition chose more difficult practice sessions on average; however, this path was nonsignificant (β = .21, p = .73). Thus, hypotheses 2 and 3 were not supported. Hypothesis 6 posited that those who chose more difficult practice on average would be more likely to engage in metacognition on average. The path coefficient for this relationship was approximately zero and nonsignificant (β = -.00, p = .98). Thus, hypothesis 6 was not supported. Hypothesis 7 posited that those who chose more difficult practice on average would be more likely to engage in self-evaluation on average. The path coefficient for this relationship was positive and significant (β = .34, p = .00). Thus, hypothesis 7 was supported.
Hypothesis 8 posited that those who chose more difficult practice trials on average would be less likely to perform well on average during those practice trials. The path coefficient for this relationship was negative as expected, but nonsignificant (β = -.12, p = .12). Thus, hypothesis 8 was not supported. Hypothesis 9 posited that those who chose more difficult practice trials on average would be more self-efficacious than those who chose less difficult practice trials. The path coefficient for this relationship was near zero and nonsignificant (β = .00, p = .99). Thus, hypothesis 9 was not supported. Hypothesis 10 posited that those who chose more difficult practice trials on average would expend more effort on average than those who chose easier practice trials. The path coefficient for this relationship was positive but nonsignificant (β = .03, p = .66). Thus, hypothesis 10 was not supported. Hypothesis 11 posited that those who engaged in more metacognition on average would have higher levels of strategic knowledge than those who engaged in less metacognition on average. The path coefficient for this relationship was negative and nonsignificant (β = -.04, p = .62). Thus, hypothesis 11 was not supported. Hypothesis 12 posited that those who engaged in more self-evaluation on average would have higher levels of strategic knowledge than those who engaged in less self-evaluation. The path coefficient for this relationship was positive and significant (β = .19, p = .00). Thus, hypothesis 12 was supported. Hypothesis 13 posited that those who had higher self-efficacy on average would have higher levels of strategic knowledge than those who had lower self-efficacy on average. The path coefficient for this relationship was negative and nonsignificant (β = -.01, p = .87). Thus, hypothesis 13 was not supported. Hypothesis 14 posited that those who expended more effort on average would have higher levels of strategic knowledge than those who expended less effort on average.
The path coefficient for this relationship was positive and nonsignificant (β = .06, p = .32). Thus, hypothesis 14 was not supported. Hypothesis 15 posited that those who had higher levels of strategic knowledge would perform better in the adaptive transfer task than those who had lower levels of strategic knowledge. The path coefficient for this relationship was negative and nonsignificant (β = -.03, p = .67). Thus, hypothesis 15 was not supported.
Results from the simple slopes analysis
The plot of the simple slopes is shown in Figure 9, where -.48 = procedural, error-avoidant and .52 = EMT. The conditional effects of condition on mean practice difficulty at values of the two moderators, mastery and performance trait goal orientation, are shown in Table 10. Hypothesis 4 predicted that mastery goal orientation would moderate the relationship between condition and mean practice difficulty such that the relationship would be more positive under conditions of EMT than proceduralized, error-avoidant training. Hypothesis 5 predicted that performance goal orientation would moderate the condition to mean practice difficulty relationship such that the relationship would be more negative under conditions of proceduralized, error-avoidant training than EMT. As shown in Table 10, there were no significant differences between conditions at different values of mastery goal orientation when performance goal orientation was low or at its mean. Thus, hypothesis 4 was not supported. Alternatively, hypothesis 5 received partial support. At low levels of performance goal orientation, there were no differences between conditions, as hypothesized. However, those in the proceduralized, error-avoidant condition tended to choose higher difficulty practice trials on average than those in the EMT condition when they had low scores on mastery goal orientation and high scores on performance goal orientation. This conditional effect was significant (t = -2.05, p = .04).
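The conditional effects reported in this kind of simple slopes analysis come from evaluating the moderated regression at chosen moderator values (here, roughly one standard deviation below the mean, at the mean, and one above). A generic sketch follows; the coefficient values in the example are hypothetical, not the study's estimates.

```python
def conditional_effect(b_condition, b_cond_x_mgo, b_cond_x_pgo, mgo, pgo):
    """Simple-slope (conditional) effect of condition on mean practice
    difficulty at centered moderator values mgo and pgo, from a model of
    the form:
    difficulty = b0 + b_condition*cond + b_m*mgo + b_p*pgo
                 + b_cond_x_mgo*cond*mgo + b_cond_x_pgo*cond*pgo
    The condition effect at (mgo, pgo) is the partial derivative with
    respect to cond."""
    return b_condition + b_cond_x_mgo * mgo + b_cond_x_pgo * pgo
```

Evaluating this expression at each combination of low, mean, and high moderator values yields a grid of conditional effects like the one reported in Table 10, each then tested against zero.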
Table 10
Results from the simple slopes analysis for hypotheses 4 and 5

Mastery Trait GO   Performance Trait GO   Effect    SE      t       p
-.69               -.83                   -.03      .16    -.18    .86
 .00               -.83                    .00      .13     .01    .99
 .69               -.83                    .03      .15     .21    .83
-.69                .00                   -.16      .13   -1.31    .19
 .00                .00                   -.13      .09   -1.52    .13
 .69                .00                   -.10      .13    -.83    .41
-.69                .83                   -.30      .15   -2.05    .04
 .00                .83                   -.27      .12   -2.15    .03
 .69                .83                   -.24      .16   -1.49    .14

[Figure 9. Graphical depiction of the simple slopes analysis for hypotheses 4 and 5.]

Post-hoc analyses

Overall model fit. As previously mentioned, the a priori model fit poorly, as evidenced by the poor global fit indices. In particular, the negative TLI indicated that the model was misspecified and fit poorly. In addition, large residuals from the a priori model suggested that fit would be improved by including direct paths from the averaged self-regulation variables (i.e., metacognition, self-efficacy, self-evaluation, practice performance, and effort) to adaptive transfer performance as well as a path predicting practice performance from self-efficacy. Direct paths from the averaged self-regulation variables to adaptive transfer were included because these variables may be only partially, not fully, mediated through strategic knowledge. In addition, the path from mean self-efficacy to mean practice performance was included based on its theoretical basis in the literature, where self-efficacy and performance share a positive meta-analytic relationship (Stajkovic & Luthans, 1998). All paths were expected to be positive. Thus, a post-hoc model was fit to the data that included these missing paths. Figure 10 presents a depiction of the findings of the adjusted a priori model. This adjusted a priori model fit the data better than the original a priori model such that the goodness of fit statistics markedly improved (χ2 = 258.67, df = 48, p = .00; CFI = .55; TLI = .29; SRMR = .09; RMSEA = .14).
The path predicting mean practice performance from mean self-efficacy was positive and significant, as seen in previous research (Stajkovic & Luthans, 1998). In addition, the path predicting adaptive transfer performance from mean self-evaluation was significant (β = .17, p = .00), as was the path predicting it from mean practice performance (β = .59, p = .00). It is important to note that though the adjusted a priori model fit better than the a priori model, the global fit statistics did not meet the rules of thumb. Thus, the modification indices were investigated for the adjusted a priori model. The modification indices for this model indicated that additional paths should be included, such as metacognition predicted by self-efficacy, self-efficacy predicted by metacognition, metacognition predicted by mastery trait goal orientation, and self-efficacy predicted by mastery trait goal orientation. The modification indices provide evidence that metacognition, self-efficacy, and mastery trait goal orientation should be correlated or predicted in some way. A model with these paths was not estimated because the modification indices indicated a reciprocal relationship between self-efficacy and metacognition. Future research should examine past studies of this relationship to understand the theory guiding it. Once a theoretical argument is developed, researchers should create a priori hypotheses and test them to understand how mastery trait goal orientation, self-efficacy, and metacognition relate to each other.

Strategic knowledge. The strategic knowledge assessment had low internal consistency (α = .38), and the results showed that strategic knowledge was not significantly related to adaptive transfer performance. This result was unexpected.
Thus, to understand whether malfunctioning items were included in the strategic knowledge assessment, all items from the knowledge assessment (including both declarative and strategic knowledge items) were analyzed with confirmatory factor analysis in Mplus version 7.4. All 22 items from the complete knowledge assessment were entered into the analysis to test whether a two-factor model, with separate declarative and strategic knowledge factors, would fit. The ten items designed to assess declarative knowledge were entered to load onto the declarative knowledge factor; the twelve items designed to assess strategic knowledge were entered to load onto the strategic knowledge factor. As shown in Figure 11, the fit statistics indicated that this model fit poorly (χ2 = 307.19, df = 208, p = .00; CFI = .61; TLI = .56; SRMR = .07; RMSEA = .05). Upon further investigation, it was found that numerous items loaded negatively onto the two factors. Thus, five items were removed from loading onto the declarative knowledge factor and four items were removed from loading onto the strategic knowledge factor. Another CFA was performed using only the items that loaded positively on the two factors (for the strategic knowledge items, α = .32). This model fit better than the full model, but the CFI and TLI statistics were still quite low (χ2 = 79.70, df = 64, p = .09; CFI = .80; TLI = .76; SRMR = .05; RMSEA = .03). After performing this condensed CFA, five items that loaded positively onto the strategic knowledge factor were included to operationalize the strategic knowledge construct. The a priori model and adjusted a priori model were analyzed again with the revised strategic knowledge construct, but this did not improve model fit (a priori model: χ2 = 393.21, df = 54, p = .00; CFI = .26; TLI = -.04; SRMR = .11; RMSEA = .16; adjusted a priori model: χ2 = 257.33, df = 48, p = .00; CFI = .55; TLI = .28; SRMR = .09; RMSEA = .14).
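The RMSEA and TLI values reported above are simple functions of the chi-square statistics and degrees of freedom (RMSEA also requires the sample size, which is not restated here). A sketch of the two formulas follows; the input values are hypothetical and chosen only to illustrate the computation, not taken from this study.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def tli(chi2, df, chi2_base, df_base):
    """Tucker-Lewis index relative to the baseline (null) model; it can go
    negative when the fitted model does worse per degree of freedom than
    the baseline, as with the a priori model reported here."""
    return ((chi2_base / df_base) - (chi2 / df)) / ((chi2_base / df_base) - 1.0)

# Hypothetical values for illustration only
r = rmsea(100.0, 50, 201)
t = tli(100.0, 50, 500.0, 60)
```

The negative TLI of the a priori model falls directly out of the second formula: when χ2/df for the fitted model exceeds χ2_base/df_base for the baseline model minus the scaling, the numerator turns negative.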
The possibility that strategic knowledge played little to no role in the overall self-regulation process in training was entertained. Thus, the adjusted a priori model was tested without the strategic knowledge construct such that the self-regulatory variables (i.e., metacognition, self-efficacy, self-evaluation, effort, and practice performance) predicted adaptive transfer task performance directly. This post-hoc proposition was not supported when comparing the global fit indices of this model to the adjusted a priori model (χ2 = 262.82, df = 52, p = .00; CFI = .55; TLI = .34; SRMR = .09; RMSEA = .13), as model fit did not improve.

Self-evaluation activity. The hypotheses that included self-evaluation activity were supported throughout the analysis. Thus, the mediation of the practice difficulty-strategic knowledge relationship was tested with self-evaluation as the mediator. Cognitive ability was controlled for in the dependent variable. The mediation was tested using SPSS PROCESS version 2.15 and the fourth model template. One thousand bootstrapped samples were taken to test for indirect effects and to compute 95% bootstrapped confidence intervals for the indirect effects. The 95% confidence interval for the indirect effect of practice difficulty on strategic knowledge through self-evaluation was positive and did not include zero (95% CI = (.08, .37) using unstandardized coefficients). A Sobel test of the indirect effect corroborated these results (Z = 2.79, p = .01). Similarly, the mediation of the practice difficulty-adaptive transfer performance relationship was tested with self-evaluation as the mediator. The same mediation procedures were used as described above. The 95% confidence interval for the indirect effect of practice difficulty on adaptive transfer performance was positive and did not include zero (95% CI = (5.73, 101.45) using unstandardized coefficients).
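The mediation procedure just described estimates the indirect effect as the product of the X→M path (a) and the M→Y path controlling for X (b), bootstraps that product, and takes percentile confidence limits; the Sobel test instead divides a·b by a normal-theory standard error. A self-contained sketch of both on simulated data follows; the variable roles, effect sizes, and sample size are invented for illustration, and covariates such as cognitive ability are omitted.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b: slope of m ~ x times the slope on m in y ~ x + m."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic: a*b / sqrt(b^2*se_a^2 + a^2*se_b^2)."""
    return (a * b) / np.hypot(b * se_a, a * se_b)

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                       # stand-in for practice difficulty
m = 0.6 * x + rng.normal(scale=0.5, size=n)  # stand-in for self-evaluation
y = 0.6 * m + rng.normal(scale=0.5, size=n)  # stand-in for transfer performance

# 1,000 bootstrap resamples of the indirect effect, 95% percentile CI
boot = np.array([indirect_effect(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, size=n) for _ in range(1000))])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

A confidence interval that excludes zero, as in the results above, is the bootstrap criterion for a significant indirect effect; the Sobel test is more conservative because it assumes a normal sampling distribution for the product a·b.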
A Sobel test of this indirect effect was marginally significant (Z = 1.76, p = .08). Thus, based on the bootstrapped confidence interval, self-evaluation activity mediated the relationship between practice difficulty and adaptive transfer performance.

[Figure 10. Results of the post-hoc adjusted a priori model (path diagram).]

[Figure 11. Results from the confirmatory factor analysis of the items in the knowledge test.]

DISCUSSION

The results from this study provide evidence that self-regulation plays a role in understanding how learner-controlled practice difficulty relates to knowledge and behavioral outcomes post-training. In particular, self-evaluation activity mediated the relationships between learner-controlled practice difficulty and strategic knowledge as well as learner-controlled practice difficulty and adaptive transfer performance. Choosing more difficult practice trials on average was positively related to participants' average self-evaluation during the practice period. Specifically, these participants spent more time reading and trying to understand the performance feedback given to them on average. Based on the mediation results, it is likely that this self-evaluative process allowed for self-reflection and deeper learning of the simulation for the participants. Notably, these results are similar to Bell and Kozlowski's (2008) study of self-regulation and active learning, where they found positive relationships between self-evaluation and strategic knowledge. As learners reviewed their performance feedback, they likely were able to understand the mistakes they made and refine their knowledge about the simulation.
However, these results differ from Hughes and colleagues' (2013) study of learner-controlled practice difficulty. Though practice difficulty was positively related to self-evaluation, most likely due to the need for greater cognitive effort (Kanfer & Ackerman, 1989), self-evaluation did not significantly relate to task knowledge. One reason the results of the present study differ from those of Hughes et al. (2013) is the operationalization of self-evaluation. The present study operationalized self-evaluation as the average amount of time spent reviewing performance feedback after the eight practice trials, whereas the Hughes et al. (2013) study used trained assistants to rate participants' open-ended responses about the strategies and resources they used in the simulation on a scale from no self-evaluation to high self-evaluation. It is possible that participants evaluated themselves in relation to the performance feedback they received in the Hughes et al. (2013) study, but that this activity could not be explicitly explained by the participants in the format they were given. As seen in the present study, self-evaluation activity positively related to both cognitive and behavioral outcomes. The present study also provided evidence for differential effects of individual difference variables on self-regulatory processes. For instance, the results indicate that cognitive ability plays a large role in predicting self-regulatory variables, such as self-efficacy, performance, effort, and strategic knowledge. Learners with higher cognitive ability tended to be more self-efficacious on average, have higher practice performance on average, expend more behavioral effort on average, have more strategic knowledge, and score higher on the adaptive transfer task than learners with lower cognitive ability.
These results fit with previous research finding that cognitive ability affects self-regulation processes in training, such as metacognition and goal orientation (Bell & Kozlowski, 2008), the effectiveness of self-regulation prompts (Sitzmann et al., 2009), goal assignment (Kanfer & Ackerman, 1989), and the overall effectiveness of error training (Gully et al., 2002). It is apparent that when using learner control in training, learners with lower cognitive ability may not fare as well on training outcomes as those with higher cognitive ability, either due to the cognitive demands of the task or the choices that learners are required to make during training. As the results of the present study showed, cognitive ability and practice difficulty had a positive relationship such that participants with higher abilities tended to choose more difficult practice trials on average than participants with lower abilities. Thus, low ability learners may need cues in the training environment to prompt them to choose more difficult practice trials so that they increase their chances of beneficial training outcomes. Other individual differences examined in this study show that learners can benefit from a learning environment tailored to their needs. Though unexpected, the moderating influence of performance goal orientation on the condition-practice difficulty relationship showed that matching learners' traits to the training mode can lead those learners to engage in difficult practice trials on average. In the proceduralized, error-avoidant condition, participants who were highly performance oriented and less mastery oriented chose trials with higher practice difficulty on average than participants who were both highly performance and mastery oriented.
This result implies that matching performance oriented individuals to proceduralized, error-avoidant training could unlock potential benefits in learner-controlled environments such that they practice more difficult tasks, engage in more self-evaluation, and gain the knowledge and skills they need to perform well in transfer, more so than learners whose goal orientations do not match the framing of the training environment. Similar results were found by Heimbeck et al. (2003). In their study comparing error training to error-avoidant training, they found that performance-prove and performance-avoid oriented learners performed better in the proceduralized, error-avoidant condition. In this condition, learners were asked to follow a step-by-step procedure that was designed to reduce the number of errors learners could make. Thus, if learners followed instructions, the performance-prove and performance-avoid learners excelled because they had an opportunity to show off their ability and had little chance of failing in the scripted scenario. These results indicate that when designing learner-controlled training programs, careful attention should be paid to the individual differences of the learners so that the program can be tailored appropriately to those differences in order for the learners to succeed. Though performance oriented learners chose more difficult practice trials in error-avoidant training than mastery oriented learners, it was expected that mastery oriented learners would choose more difficult practice trials on average in EMT than performance oriented learners. In addition, it was expected that mean practice difficulty would be higher in the EMT condition for the mastery oriented learners than for the performance oriented learners; however, neither portion of this hypothesis was supported. Prior research from Heimbeck et al.
(2003) has suggested that the error framing condition presents a strong situation that overrides and washes out any individual difference effects on the dependent variable of interest. This may be the case when performance orientation is low, given that the practice difficulty means of the participants were essentially the same across high, mean, and low levels of mastery goal orientation in the EMT condition. Since the means are the same across levels of mastery goal orientation, it is possible that the EMT condition presents a strong situation that eliminates the effect of mastery goal orientation on practice difficulty; however, this is not likely because the practice difficulty means were not significantly different from those in the proceduralized, error-avoidant condition. Thus, the argument from Heimbeck et al. (2003) does not hold within the context of learner-controlled practice difficulty, as the researchers in that study measured effects on task performance. It is possible that the mechanisms that guide learners to choose certain levels of practice difficulty are not susceptible to changes via exploratory or proceduralized formats or based on different types of error framing. In Hughes and colleagues' (2013) study, pre-training self-efficacy positively predicted chosen practice difficulty. Thus, learners' cognitive mechanisms pertaining to choosing tasks to practice may be based on pre-training beliefs about their abilities and competency, not necessarily the tendencies they have when presented with achievement contexts. Nevertheless, the present study aligned with previous research investigating the relationships between mastery goal orientation and self-regulation variables. In particular, Ford and colleagues' (1998) study showed that mastery goal orientation was positively related to both metacognitive activity and self-efficacy during training.
The present study also found positive and significant Pearson correlations between mastery goal orientation and these self-regulation variables, though these paths were not tested in the a priori or adjusted a priori models. Thus, learners with high mastery trait goal orientation engaged in more "thinking about thinking" on average than low mastery oriented learners, evaluating and revising their methods for learning the skills of the simulation. In addition, high mastery oriented learners also felt more self-efficacious about enacting those skills in the simulation on average than low mastery oriented learners. Unlike Ford et al. (1998), no causal inferences about those self-regulation variables and adaptive transfer performance could be made in the present study. In an effort to replicate Ford and colleagues' (1998) findings, an alternative static model was analyzed without operationalizing the self-regulation variables as averages. Instead, each self-regulation variable was operationalized as the measurement from the last practice trial (immediately preceding the adaptive transfer performance trial), but no significant changes in fit or coefficients resulted. There could be multiple reasons for the failure to replicate the findings of Ford et al. (1998). For one, participants in the Ford et al. (1998) study engaged in twelve practice trials prior to the performance trial, whereas participants in the present study engaged in only eight practice trials. Participants in the prior study likely had a greater understanding of the simulation by engaging with it more often and had more opportunities to evaluate and revise their strategies in the simulation than participants in the present study. By the same token, participants in the prior study likely felt more self-efficacious about enacting their skills because they had more instances of practice than the participants in the present study.
Differences in the methodology of the two studies could also contribute to the present study's failure to make causal inferences. Participants in the Ford et al. (1998) study had exposure to the metacognitive activity and self-efficacy scales only once, just before the final training performance trial; in the present study, participants had taken the same scales eight times before the adaptive transfer performance trial. Nye, Brummel, and Drasgow (2010) highlight the different types of change that can occur over time when measuring organizational constructs. Alpha and gamma change are particularly important to the present study. Alpha change occurs when the actual level of a construct changes from one time point to a later time point. Gamma change occurs when participants' perception of the construct changes from one time point to a later time point. Though an exploratory aim of the present study was to examine alpha change over time for the self-regulation variables, it is possible that participants' perceptions of the self-regulatory variables changed as they had repeated exposure to both the measurement scales and the simulation. Although the operationalization of the self-regulation variables was the same in both studies, the perceptions of the variables in the present study and the Ford et al. (1998) study could be markedly different due to the methodology in which the scales were employed. Due to possible misalignment in perceptions, the causal inferences of the two studies could be substantially different. When looking at cognitive outcomes in the present study, the twelve items assessing strategic knowledge had poor reliability (α = .38); however, when the assessment was edited according to the factor loadings from the CFA, there was still no improvement in model fit, nor were there any path coefficient changes that would affect the support for the proposed hypotheses.
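The reliability figure discussed above is Cronbach's alpha, which can be computed directly from the item-score matrix as k/(k-1) times one minus the ratio of summed item variances to total-score variance. A small illustrative sketch follows; the data are made up for illustration, not the thesis data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Perfectly correlated items give the maximum, alpha = 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Low values such as the α = .38 reported here arise when item covariances are weak relative to item variances, which is consistent with the negative factor loadings found in the CFA of the strategic knowledge items.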
Though Bell and Kozlowski (2008) were able to show that strategic knowledge was related to adaptive transfer, the present study was unable to replicate that result. One possible reason could be that the assessment includes items similar to situational judgment items. Situational judgment tests (SJTs) are typically used in selection contexts where an applicant's judgment is assessed by asking how he or she would or should respond in a situation in the workplace (Whetzel & McDaniel, 2009). SJTs are low-fidelity assessments in that they assess behavioral intentions or knowledge instead of actual behavior. Ouellette and Wood's (1998) meta-analysis reports that behavior is guided by intentions and past behavior; thus, the results from the present study should have shown positive relationships between both practice performance and adaptive transfer performance, as well as strategic knowledge and adaptive transfer performance. In addition, previous research has shown that situational judgment questions assess procedural knowledge and skills rather than declarative knowledge, which is consistent with the operationalization of this assessment (Taylor & Small, 2002). In looking closer at the assessment, the strategic knowledge items are similar to situational judgment items in that they ask the respondent to determine what they would or should do if they were presented with a certain situation in the task. Ployhart and Ehrhart (2003) have shown that situational judgment items that ask what a respondent would do produce more favorable characteristics than items that ask what a respondent should do. In addition, Ployhart and Ehrhart (2003) found that correlations between otherwise identical items were low when the word "would" was replaced with "should". The strategic knowledge assessment used in the present study uses both "would" and "should" in the items.
Thus, it could be that the wording of the items caused problems in the underlying response patterns of the participants. In addition, the strategic knowledge assessment asked participants to choose only one response. This type of format has been shown to have the lowest internal consistency ratings (e.g., α = .24) of the different types of SJTs in previous research (Ployhart & Ehrhart, 2003). Altogether, the strategic knowledge assessment was likely not a reliable predictor of adaptive transfer performance in this study.

Limitations

There are numerous limitations to this study. First, this study recruited an undergraduate sample, which may not be generalizable to the greater population of workers. In particular, the present study used an undergraduate sample that was not necessarily highly motivated to perform well in the simulation. The present study did not make any claims that the participants would be able to transfer the skills that they learned in the simulation to their daily or work lives. Although the participants were compensated for their participation with course credit, the only requirement for compensation was to remain in the laboratory for the entire duration of the experiment. While the present study employed detection methods for insufficient effort responding, it is possible that participants put forth minimal cognitive effort that went undetected by those methods. Therefore, low intrinsic motivation to transfer learned skills in the sample population may have contributed to the lack of effects seen between the self-regulation variables and training outcomes. While previous research has incentivized undergraduate participation by offering a raffle or cash prize for performing at the top of the sample, the present study chose not to include those incentives as they may have masked true score differences in training outcomes between experimental conditions.
Second, the present study employed numerous self-report measures, which are subject to common method bias. To combat common method bias in the present study, variables that could be measured by behavior were operationalized as such. The only variables that required self-report were cognitive phenomena that had no methodological basis for being measured through observable behavior (e.g., metacognition, self-efficacy). As noted previously, however, these measurement scales may have been subject to gamma change such that the participants' perceptions of the constructs changed over time. Future analyses of these data will employ latent growth curve models to understand the trajectory of the self-regulation variables and what type of change was measured in the present study. Third, the present study was conducted in a laboratory setting for two hours, with transfer occurring immediately following training. In real-world work settings, transfer of learned knowledge and skills occurs well beyond one hour post-training. Thus, a more ecologically valid approach to studying transfer for this task would be to bring participants back approximately one to two weeks post-training. In addition, the salience of errors while practicing a task is likely more apparent in ecologically valid settings than in the laboratory. Participants in the present study were told of the types of errors they made or could make in the simulation. In particular, they were provided with a list of common errors that could be made in the simulation, they received feedback in the simulation when they made an incorrect engagement, and they received feedback about their total performance in each trial. Nevertheless, these errors were not tied to any extrinsic incentives (e.g., monetary compensation, promotions, social influence) that would motivate participants to attend to or correct them.
In workplace settings, the potential cost of making an error influences the worker to prevent, attend to, and correct errors in the surrounding environment, especially if the organization has a strong climate for error management (Frese & Keith, 2015). If an organization has a climate for error management, this climate is likely upheld and propagated by management, co-workers, norms, and policies on a regular basis. Thus, the present laboratory experiment may not have been able to create the conditions for incentivizing the prevention of errors and attending to those errors in a timely fashion and with sufficient effort. Another limitation, therefore, resides in the lack of salience of the experimental conditions. Overall, there was a marginally significant mean difference between the two training conditions such that participants in the EMT condition performed better in the adaptive transfer task than the participants in the proceduralized, error-avoidant condition. However, multiple studies have shown that learners perform significantly better under exploratory, error management conditions than proceduralized, error-avoidant conditions (Chillarege et al., 2003; Dormann & Frese, 1994; Frese et al., 1991; Heimbeck et al., 2003; Hughes et al., 2013; Keith & Frese, 2005; Lazar & Norcio, 2003; Nordstrom et al., 1998; Wood et al., 2000). The marginally significant findings in the present study are likely due to a lack of salience of the training conditions. Participants were given three documents during training that encouraged them either to explore the simulation and feel free to make errors (i.e., the EMT condition) or to follow a step-by-step procedure and avoid making errors (i.e., the proceduralized, error-avoidant condition).
It is possible that participants did not attend to the language in the training documents that encouraged them to act one way or another, or the message was not communicated in such a way that participants felt it was an important aspect of the simulation. The three experimenters who ran the data collection sessions recited a script that reiterated the rhetoric of the training documents, but perhaps these experimenters spoke in a monotone or did not place proper emphasis on the error management/error-avoidant phrases to convince participants to use and apply these instructions during training or practice. In similar studies, researchers have posted signs around the laboratory to emphasize the condition (e.g., there is always a way to leave an error situation), which likely instilled the error framing into the learners more so than the present study.

Practical implications

Organizations can use the findings from this study to improve training outcomes in learner-controlled environments in three ways. First, professionals in training and development should encourage learners to self-evaluate their performance as they progress through learner-controlled training programs. As the results from this study show, self-evaluation activity mediated the relationship between practice difficulty and adaptive transfer performance. As a result, positive transfer outcomes can be achieved by encouraging learners to practice their trained skills in difficult environmental settings and to set goals along the way to evaluate their progress against. In particular, the medical field has focused on training its medical professionals through high-fidelity medical simulations whereby trainees learn, practice, and hone their skills with computerized mannequins that respond to actions taken by the trainees (e.g., blood oxygen levels increase when the mannequin receives oxygen via a manual resuscitator).
In their systematic review, Issenberg, McGaghie, Petrusa, Gordon, and Scalese (2005) show that repetitive practice and a range of difficulty levels facilitate learning in medical simulations. In a recent meta-analysis, Zendejas, Brydges, Hamstra, and Cook (2013) note that medical professionals who practiced skills in simulations with high task complexity had longer training times but better skill outcomes than those who practiced nonclinically relevant tasks. Though long training times could be due to many factors, one likely factor is trainees' self-evaluation of learned skills and performance feedback during the simulation. To improve training outcomes, training and development professionals can provide information to learners about creating specific and reachable goals within the difficult practice setting so that positive training outcomes are achieved in learner-controlled settings. Second, practitioners should provide performance feedback during practice that people at all levels of cognitive ability can understand and digest. The present study showed that cognitive ability had positive relationships with chosen level of practice difficulty and with performance both in practice and in adaptive transfer. Thus, feedback should be presented so that it is easily understandable to the learning audience. Specifically, Hattie and Timperley (2007) advocate that feedback should provide the following information in order to positively impact performance: a restatement of the learner's goal, an explanation of the discrepancy between the learner's goal and his or her performance, and guidance on how this information applies to the learner moving forward. Though the present study did not manipulate feedback, it is evident that reading, processing, and understanding performance feedback impacted overall performance in the adaptive transfer task.
Therefore, understanding how to tailor feedback to learners will be important in ensuring that self-evaluation occurs under the appropriate learning conditions. Finally, practitioners should provide ample opportunity for learners to read and process the performance feedback they receive. In this study, self-evaluation was operationalized as the average amount of time spent reading performance feedback following a performance trial. When participants spent more time reading performance feedback, they were more likely to perform well in the adaptive transfer task. Therefore, practitioners should ensure that learners have enough time to read performance feedback following a practice episode so that the positive effects of self-evaluation emerge. It is unclear exactly how long learners should spend reading feedback. For instance, in this study, participants had only one minute to read through performance feedback following each practice trial. It is possible that for more complex tasks that relate directly to the workplace, learners may require more than one minute to read and understand feedback in relation to their goals.

Future directions

Based on the finding that self-evaluation played a major role in influencing adaptive transfer performance, future research should explore the optimal amount of time learners should be given to review feedback, the best format in which feedback should be presented, and which aspects of performance feedback learners attend to most. Answering these questions will help researchers and practitioners understand which levers to adjust and which interventions to provide in order to help learners gain the knowledge and skills they need to perform well in transfer contexts. Currently, few articles explore manipulating feedback within a self-regulation framework.
Often, researchers who do study self-regulation and provide performance feedback within their training programs give few details about the nature of the feedback they provide. Future research should aim to close these gaps so that learners can self-evaluate effectively while training and reap the benefits of performing well in adaptive transfer. In this study, self-evaluation was predicted by learner-controlled practice difficulty. Like this study, Hughes et al. (2013) showed that practice difficulty played a key role in predicting adaptive transfer; however, both studies examined practice difficulty under learner-controlled conditions. To better understand the effects of practice difficulty on self-regulatory processes and training outcomes, future research should examine practice difficulty under laboratory-controlled conditions so that researchers and practitioners can understand how to appropriately progress learners through stages of practice difficulty during training. Much of the research on practice during training focuses on manipulating the schedule of practice (i.e., blocked versus alternating schedules), but not on how this differs systematically across individuals. In this study, average practice difficulty was related to adaptive transfer performance such that participants who chose more difficult trials on average tended to perform better in the adaptive transfer task than participants who chose less difficult trials on average. The trend in Figure 4 showed that participants tended to move through the levels of difficulty in consecutive order, especially in the first five practice trials. It is possible that consecutive order was not the best method for participants to reap the benefits of performing well in the adaptive transfer task. For instance, some participants, particularly those with low cognitive ability, may have needed to practice each level twice before moving on to the next highest level to ensure mastery of their learned skills.
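The mediation chain discussed here (learner-controlled practice difficulty predicting self-evaluation, which in turn predicts adaptive transfer performance) is commonly tested with a bootstrap estimate of the indirect effect, in the spirit of Hayes (2013), cited in the references. The sketch below uses simulated data only; the variable names and effect sizes are hypothetical illustrations, not the study's actual dataset or analysis.

```python
# Minimal sketch of a percentile-bootstrap test of an indirect (mediation)
# effect, in the spirit of Hayes (2013). All data are simulated for
# illustration; variable names (difficulty, self_eval, transfer) and the
# simulated path coefficients are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Simulate: practice difficulty -> self-evaluation time -> transfer score.
difficulty = rng.normal(size=n)
self_eval = 0.5 * difficulty + rng.normal(size=n)                    # a-path
transfer = 0.4 * self_eval + 0.1 * difficulty + rng.normal(size=n)   # b- and c'-paths

def ols_coefs(y, predictors):
    """Least-squares coefficients (intercept first) for y on the predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect_effect(x, m, y):
    a = ols_coefs(m, [x])[1]        # effect of x on the mediator m
    b = ols_coefs(y, [x, m])[2]     # effect of m on y, controlling for x
    return a * b

# Point estimate of the indirect effect a*b.
ab = indirect_effect(difficulty, self_eval, transfer)

# Percentile-bootstrap 95% confidence interval.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(difficulty[idx], self_eval[idx], transfer[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {ab:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero is the usual criterion for concluding that the mediator carries part of the effect; with real data one would substitute the measured variables for the simulated ones.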
Alternatively, it may be that the best way to prepare for the adaptive transfer trial was to practice only the highest level during all eight instances. It could also be that participants should have received some sort of guidance or scaffolding through the levels from the program itself. Instead of receiving arbitrary performance feedback, the feedback could have been tailored to an individual's performance. For instance, if the participant had exceptional scores in the level with the easiest difficulty, the feedback could have stated, "You have mastered this level. You are ready to move on to the next level of difficulty." If the participant scored poorly, the feedback could have stated, "You had many incorrect type decisions during this trial. Try this level again and focus on making correct type decisions." Thus, future research should focus on discovering the optimal progression through practice difficulty for learners with various individual differences (e.g., cognitive ability, goal orientation, personality traits) to ensure successful transfer. In addition, future studies are needed that operationalize strategic knowledge in a way that does not rely on self-report methods. As discussed previously, the Cronbach's alpha for the scale was low. To minimize measurement error and avoid psychometric issues with the self-report scale, future research could examine behavioral indicators of strategic knowledge. By operationalizing the construct behaviorally, investigators would be able to understand the actual knowledge and skills that participants studied and used during training and practice. It is possible that a training condition effect on strategic knowledge may emerge when the construct is operationalized behaviorally instead of by self-report. One way to operationalize strategic knowledge behaviorally would be to examine the amount of time spent viewing material in the instruction manual and how often that material was viewed.
Operationalizing the variable in this way would allow investigators to understand whether the training conditions had differential effects on the type of material that was studied (e.g., declarative topics or strategic topics). For example, future research could answer whether participants in the EMT condition were more likely to study strategic topics (like zooming in and out) than participants in the proceduralized, error-avoidant condition. In this way, one could test whether the specific type of knowledge studied during training is linked to the skills practiced during the pre-transfer trials. Finally, investigations into self-regulation should be expanded from between-person studies to include within-person studies. As learner-controlled training settings and online training programs continue to grow in popularity and demand, it will become increasingly important to understand how the self-regulation process evolves over time within a person. Specifically, researchers and practitioners should study what kinds of interventions learners with specific individual differences need and, moreover, at what times they need these interventions to ensure successful training transfer. In the present study, levels of average self-efficacy decreased before the fifth trial. As seen in the results, average self-efficacy was positively related to average practice performance. Before Trial 5, a self-efficacy intervention could have been introduced to boost participants' motivation. For instance, participants could have been told to focus on behavioral outcomes instead of learning outcomes from within the simulation (Brown & Latham, 2002). Using repeated-measures designs, researchers can benchmark the patterns of various self-regulatory variables, such as self-efficacy, self-evaluation, and metacognition, for learners with differing individual characteristics.
As this body of within-person research grows, researchers and practitioners can develop profiles of learners and provide complementary interventions or training approaches to influence the learner to follow optimal self-regulatory trends.

APPENDICES

APPENDIX A

Flow of experiment

Familiarization Phase:
Informed consent, demographics measure, trait goal orientation measure (10 minutes)
Demonstration of TANDEM task (8 minutes)
Familiarization trial (4 minutes)
Total time: 22 minutes

Training Phase:
Training trial 1 instructions (1 minute)
View manual (2 minutes)
Scenario 1 (2.5 minutes)
Feedback (1 minute)
Training trial 2 instructions (1 minute)
View manual (2 minutes)
Scenario 3 (2.5 minutes)
Feedback (1 minute)
Training trial 3 instructions (1 minute)
View manual (2 minutes)
Scenario 5 (2.5 minutes)
Feedback (1 minute)
Total time: 16.5 minutes

Practice Phase:
Reiteration of EMT/error-avoidant frame (3 minutes)
Learner-controlled practice scenarios (5.5 minutes) (repeated eight times)
Self-regulation measures (1 minute) (repeated eight times)
Total time: 55 minutes

Self-efficacy and strategic knowledge (15 minutes)
Transfer task: Scenario (10 minutes)
Total time: 25 minutes

Debrief (1.5 minutes)

Total time for experiment: 120 minutes

APPENDIX B

Demographics Questionnaire

Please provide as much of the following information as is applicable. It is important to understand that these scores will be kept confidential and used only for research purposes. If you do not remember your exam scores, please put a zero in that space.

Gender: _____ (M / F)
Age: _____
SAT score: ____________
ACT score: ____________
Year in College: ____
Major: __________________
Race: ___________________

APPENDIX C

Trait goal orientation (Vandewalle, 1997)

For each of the following statements, please indicate how true it is for you on the scale provided below.
1 = Strongly Disagree, 2 = Disagree, 3 = Slightly Disagree, 4 = Slightly Agree, 5 = Agree, 6 = Strongly Agree

Goal Orientation Learning:
1. I am willing to take on challenges that I can learn a lot from.
2. I often look for opportunities to develop new skills and knowledge.
3. I enjoy challenging and difficult activities where I'll learn new skills.
4. For me, development of my abilities is important enough to take risks.

Goal Orientation Prove:
1. I prefer to do things that require a high level of ability and talent.
2. I'm concerned with showing that I can perform better than my peers.
3. I try to figure out what it takes to prove my ability to others.
4. I enjoy it when others are aware of how well I am doing.
5. I prefer to participate in things where I can prove my ability to others.

Goal Orientation Avoidance:
1. I would avoid taking on a new task if there was a chance that I would appear rather incompetent to others.
2. Avoiding a show of low ability is more important to me than learning a new skill.
3. I'm concerned about taking on a task if my performance would reveal that I had low ability.
4. I prefer to avoid situations where I might perform poorly.

APPENDIX D

Training topics for experiment

Training Topics for Training Trial 1

In this first trial, the major focus of training is getting familiar with the simulation and making contact decisions. You should focus on the following training topics:
1. Using the mouse and other equipment to operate the simulation.
2. Hooking contacts and accessing the pull-down menus.
3. Making type contact decisions.
4. Making class contact decisions.
5. Making intent contact decisions.
6. Making final engagement contact decisions.
7. Viewing right-button feedback after making contact decisions.

Training Topics for Training Trial 2

In this second trial, the major focus of training is preventing contacts from crossing the defensive perimeters. You should focus on the following training topics:
1.
Using the zoom function to view the "big picture" and monitoring the inner and outer perimeters.
2. Using marker contacts to locate the outer defensive perimeter.
3. Watching for pop-up contacts that appear suddenly on your screen.

Training Topics for Training Trial 3

In this third trial, the major focus of training is being able to apply strategies that are used to better prevent contacts from crossing the defensive perimeters. You should focus on the following training topics:
1. Prioritizing contacts located on the radar screen to determine high- and low-priority contacts and the order in which contacts should be prosecuted.
2. Making trade-offs between contacts that are approaching your inner and outer defensive perimeters.

APPENDIX E

Proceduralized, error-avoidant training manipulations

Proceduralized, Error-avoidant Instructions

During the training trials, there will be 30 contacts on the radar screen. When you start each trial, you will see a number of these contacts on the screen. Since your focus is on learning basic features of the task and making contact decisions, you should focus on these contacts for now. After the experimenter instructs you to start the scenario and the timer begins to count down, you will focus on hooking contacts, making decisions about the contacts, and viewing feedback about your decisions. You should follow the instructions below for Trial 1. During this trial, follow these steps:
1. Hook a contact of your choice.
a. Using the mouse, place the arrow on a contact and click the left mouse button.
b. When the contact is properly hooked, it will turn green and the Hooked Track # in the lower right corner of your radar screen changes to correspond to the contact number.
c. When you gather information from your ship's sensors, that information will be given for the contact you currently have hooked.
2. Make TYPE, CLASS, INTENT sub-decisions for hooked contacts.
a.
After the contact is hooked, place the arrow on the TYPE menu button located in the top right of your radar screen. Click on the right mouse button to display the menu options.
b. Move your arrow to the button that says "Speed" and click and hold the right mouse button to view the contact's speed. Use the chart on the next page to see what type of contact is indicated by the speed information. After viewing contact speed, do the same for "Altitude/Depth" and "Communication Time." Once again, use the chart on the next page to see what type of contact is indicated by the cue values. Note that one value you gathered may be inconsistent with the other two; if this is the case, choose the option indicated by the majority (2 out of 3) of the values.
c. After viewing the three pieces of information, you are ready to make the TYPE decision. Choose "ID_Air/Sub/Surface" from the bottom of the TYPE menu.
d. A list of choices appears in a menu on the lower right corner of your radar screen. Choose the option that was indicated by the majority of the cues you collected by clicking your right mouse button on the option.
e. Perform steps a through d for the CLASS decision and the INTENT decision.
3. Make FINAL ENGAGEMENT decision.
a. After you have made the TYPE, CLASS, and INTENT decisions for a contact, you can then make the FINAL ENGAGEMENT decision.
b. Move your arrow to the OPER menu and click the right mouse button.
c. Move your arrow to the menu option that says "Engage_Shoot/Clear" and click the right mouse button. A list of choices appears in the lower right corner of your radar screen.
d. If the INTENT of the contact was Peaceful, you should click your right mouse button on the "clear" option, but if the INTENT of the contact was Hostile, you should click your right mouse button on the "shoot" option. See next steps before doing this.
e.
When you click your right mouse button on either "clear" or "shoot," you can hold it down to receive information on whether you have engaged the contact correctly.
4. REPEAT
a. After you make the final engagement decision, the contact will disappear and you should repeat steps 1-3 for another contact.
b. If you eliminate all contacts in your viewing range, place your arrow on the OPER menu and click the right mouse button. Then place the arrow on the "Zoom-Out" option and click your right mouse button. When you do this, more contacts should appear and you can continue.

Training Trial 1 Instructions

[Training topics for training trial 1]

Below is a list of common errors made in this trial. During training, you are encouraged to avoid these errors. For training to be effective, you should try not to make these errors. Errors are detrimental to the learning process and will hurt your understanding of the simulation. The more errors you avoid, the more you learn.
Clicking on the wrong mouse button (left/right) to hook a contact or access a menu.
Not properly evaluating contact information and making incorrect contact decisions (type, class, intent, and final engagement).
Making contact decisions based on a single cue value. For example, deciding a contact's type based only on speed information.
Making contact decisions too quickly.

Training Trial 2 Instructions

[Training topics for training trial 2]

Below is a list of common errors made in this trial. During training, you are encouraged to avoid these errors. For training to be effective, you should try not to make these errors. Errors are detrimental to the learning process and will hurt your understanding of the simulation. The more errors you avoid, the more you learn.
Focusing only on the inner perimeter rather than zooming out to see the "big picture" and to monitor the outer perimeter.
Hooking the wrong marker contacts or not using marker contacts to locate the outer perimeter.
Focusing only on stable contacts and ignoring contacts that pop up suddenly on the screen. Often people do not monitor their screen for pop-up contacts.
Allowing contacts to cross the inner and outer defensive perimeters.

Training Trial 3 Instructions

[Training topics for training trial 3]

Below is a list of common errors made in this trial. During training, you are encouraged to avoid these errors. For training to be effective, you should try not to make these errors. Errors are detrimental to the learning process and will hurt your understanding of the simulation. The more errors you avoid, the more you learn.
Focusing on low-priority rather than high-priority contacts.
Not gathering information on the speeds of contacts close to the inner or outer perimeter.
Preventing all contacts from crossing one perimeter while ignoring the other perimeter.

APPENDIX F

Error management training manipulations

EMT Instructions

During today's training, do not hesitate to make errors as you explore and learn the simulation task. Errors are a natural part of any training experience! When you make errors, you create opportunities to learn from your mistakes. By making errors, you can develop a better understanding of the simulation task. When you make an error during the simulation, view it as a chance to learn something new! Remember that you should make errors in the training simulation today. Read and recite these phrases to yourself when participating in the simulation:
I have made an error. Great!
There is always a way to leave the error situation!
Look at the screen to find out what happened and when an error was made.
Watch what is on the screen and what is changing.
It is good to make mistakes. You can learn from errors.
Errors are a natural part of any training experience.
Errors inform you about what you can still learn!
The more errors you make, the more you learn!
Training Trial 1 Instructions

[Training topics for training trial 1]

Below is a list of common errors you are likely to make and should make in this part of the simulation. Remember that making these errors will help you to learn more about the simulation!
Clicking on the wrong mouse button (left/right) to hook a contact or access a menu.
Not properly evaluating contact information and making incorrect contact decisions (type, class, intent, and final engagement).
Making contact decisions based on a single cue value. For example, deciding a contact's type based only on speed information.
Making contact decisions too quickly.

Training Trial 2 Instructions

[Training topics for training trial 2]

Below is a list of common errors you are likely to make and should make in this part of the simulation. Remember that making these errors will help you to learn more about the simulation!
Focusing only on the inner perimeter rather than zooming out to see the "big picture" and to monitor the outer perimeter.
Hooking the wrong marker contacts or not using marker contacts to locate the outer perimeter.
Focusing only on stable contacts and ignoring contacts that pop up suddenly on the screen. Often people do not monitor their screen for pop-up contacts.
Allowing contacts to cross the inner and outer defensive perimeters.

Training Trial 3 Instructions

[Training topics for training trial 3]

Below is a list of common errors you are likely to make and should make in this part of the simulation. Remember that making these errors will help you to learn more about the simulation!
Focusing on low-priority rather than high-priority contacts.
Not gathering information on the speeds of contacts close to the inner or outer perimeter.
Preventing all contacts from crossing one perimeter while ignoring the other perimeter.
APPENDIX G

State mastery orientation (Horvath et al., 2001)

For each of the following statements, please indicate how true it is for you with regard to how you approach this task on the scale provided below.

1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree Nor Disagree, 4 = Agree, 5 = Strongly Agree

Goal Orientation Learning:
1. I prefer to work on aspects of this task that force me to learn new things.
2. I am willing to work on challenging aspects of this task that I can learn a lot from.
3. The opportunity to learn new things about this task is important to me.
4. The opportunity to work on challenging aspects of this task is important to me.
5. On this task, my goal is to learn the task as well as I can.

Goal Orientation Prove:
1. It is important to me to perform better than others on this task.
2. It is important to me to impress others by doing a good job on this task.
3. I want the experimenters and other students to recognize that I am one of the best on this task.
4. I want to show myself how good I am on this task.
5. On this task, my goal is to perform well.

Goal Orientation Avoidance:
1. On this task, I would like to hide from others that they are better than me.
2. On this task, I would like to avoid situations where I might demonstrate poor performance to myself.
3. On this task, I would like to avoid discovering that others are better than me.
4. I am reluctant to ask questions about this task because others may think I'm incompetent.
5. On this task, my goal is to avoid performing poorly.

APPENDIX H

Self-regulation measures

Goal Level (to be used in conjunction with time spent reading feedback for self-evaluation)

Please indicate your desired level of performance on the next practice trial.
Total points ____

Metacognition (Ford et al., 1998)

1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree Nor Disagree, 4 = Agree, 5 = Strongly Agree

1. While practicing a scenario, I monitored how well I was learning its requirements.
2.
As I practiced the scenarios, I evaluated how well I was learning the skills of the simulation.
3. I tried to monitor closely the areas where I needed the most practice.
4. I noticed where I made the most mistakes during practice and focused on improving those areas.

Self-efficacy (Ford et al., 1998)

1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree Nor Disagree, 4 = Agree, 5 = Strongly Agree

1. I am certain that I can manage the requirements of this task.
2. I believe I can develop methods to handle changing aspects of this task.
3. I am certain I can cope with task components competing for my time.

APPENDIX I

Knowledge assessment

The following is a knowledge test about the simulation. Please select the response that best answers the question.

1. Your outer defensive perimeter is located at:
a. 64 nm
b. 128 nm
c. 256 nm
d. 512 nm

2. If you zoom out to find three targets clustered together near the outer perimeter, how would you determine which to engage first?
a. Check the ranges
b. Check the speeds
c. Check both range and speed for each target
d. Zoom to 256 nm and engage the first target closest to the center

3. If a target is outside of the current radius of your screen, you can view it by doing what?
a. There is nothing you can do
b. Wait for the target to enter your screen
c. Zoom out
d. Zoom in

4. If you've just noticed three contacts near your inner perimeter, which of the following should you do next?
a. Engage the contact nearest the inner perimeter
b. Engage the faster contact near the inner perimeter
c. Zoom out to check the outer perimeter
d. Zoom in to check how close the contacts are to the inner perimeter

5. If you zoom out to find three contacts around your outer perimeter, how would you determine which contact is the marker contact?
a. Check to see which contact is closest to the outer perimeter
b. Check the speeds of the contacts
c. Check to see which contact is civilian
d. Check to see which contact is hostile

6.
In which of the following situations would looking up the range of targets be MOST critical?
a. To determine which of two targets is closer to the inner perimeter
b. To compare how close targets are to the outer perimeter
c. To identify targets that are an equal distance between the inner and outer penalty circles
d. Range is not important to look up in any situation because speed is the critical factor

7. What is the purpose of marker contacts?
a. To determine which contacts are hostile and which are peaceful
b. To locate your inner defensive perimeter
c. To quickly determine the speeds of contacts near your perimeters
d. To locate your outer defensive perimeter

8. Which of the following pieces of information is not useful for prioritizing contacts?
a. The distance of contacts from the outer defensive perimeter
b. Whether the contact is peaceful or hostile
c. The distance of contacts from the inner defensive perimeter
d. The speed of contacts near your inner and outer defensive perimeter

9. Which of the following functions is most useful for identifying marker contacts?
a. Zoom in
b. Right-button feedback
c. Engage shoot or clear
d. Zoom out

10. If three contacts are about 10 miles outside your outer defensive perimeter, which of the following should you do to prioritize the contacts?
a. Engage the fastest contact
b. Engage the hostile contact
c. Engage the closest contact
d. It makes no difference in what order you engage the contacts

11. Which of the following targets would be the lowest priority?
a. A target which has a speed of 15 knots
b. A target which has just crossed your inner defensive perimeter
c. A target which is Peaceful
d. A target which is Civilian

12. Which of the following would be the most effective strategy for defending your outer defensive perimeter?
a. Zoom-out to 128 nm, locate the marker contacts, and check the speed of contacts near the outer perimeter
b.
Zoom-out to 256 nm, locate the marker contacts, and check the speed of contacts near the outer perimeter
c. Zoom-out to 128 nm, locate a hostile air contact, and check the speed of contacts near the outer perimeter
d. Zoom-out to 256 nm, locate a hostile air contact, and check the speed of contacts near the outer perimeter

13. If you have three targets, Target A with range = 255 nm, speed = 150 knots; Target B with range = 260 nm, speed = 100 knots; Target C with range = 25 nm, speed = 120 knots, which target should be engaged first?
a. Target A
b. Target B
c. Target C
d. Engage in any order

14. What information should you use to prioritize targets to engage?
a. Speed and initial bearing
b. Speed and range
c. Range and initial bearing
d. Speed

15. If all penalty intrusions cost -100 points, which would be the most effective strategy?
a. Do not allow any contacts to enter your inner defensive perimeter, even if it means allowing contacts to cross your outer defensive perimeter
b. Do not allow any contacts to enter your outer defensive perimeter, even if it means allowing contacts to cross your inner defensive perimeter
c. Defend both your inner and outer defensive perimeters
d. None of these are effective strategies

16. It is important to make trade-offs between contacts:
a. That are hostile and those that are peaceful
b. Approaching your inner and outer perimeters
c. That are civilian and those that are military
d. That have already crossed your inner defensive perimeter and those that are approaching your outer defensive perimeter

17. If a Response is given, what is the likely Intent of the contact?
a. Military
b. Hostile
c. Civilian
d. Peaceful

18. If a contact's altitude/depth is 10 feet, what is the Type of the contact?
a. Air
b. Surface
c. Submarine
d. Unknown

19. If a contact's Speed is 40 knots, what does this suggest about the contact?
a. The contact is Air
b. The contact is Surface
c. The contact is Civilian
d. The contact is Military

20.
A Communication Time of 52 seconds indicates that the contact is likely:
a. Air
b. Surface
c. Submarine
d. Unknown

21. If a contact's Intelligence is Unavailable, what Class does this suggest for the contact?
a. Air
b. Civilian
c. Military
d. Unknown

22. If a contact's Maneuvering Pattern is Code Echo, this suggests that the contact falls into which category?
a. Class is Unknown
b. Class is Military
c. Class is Hostile
d. Class is Peaceful

REFERENCES

Baard, S. K. (2013). An insight into adaptation: Self-regulatory mechanisms as a driver of adaptive performance over time (Master's thesis). Retrieved from http://catalog.lib.msu.edu/record=b10289241~S39a.

Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41, 63-105.

Bell, B. S., & Kozlowski, S. W. (2002). A typology of virtual teams: Implications for effective leadership. Group & Organization Management, 27, 14-49.

Bell, B. S., & Kozlowski, S. W. (2008). Active learning: Effects of core training design elements on self-regulatory processes, learning, and adaptability. Journal of Applied Psychology, 93, 296-316.

Berardi-Coletta, B., Buyer, L. S., Dominowski, R. L., & Rellinger, E. R. (1995). Metacognition and problem solving: A process-oriented approach. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 205-223.

Blume, B. D., Ford, J. K., Baldwin, T. T., & Huang, J. L. (2010). Transfer of training: A meta-analytic review. Journal of Management, 36, 1065-1105.

Bollen, K. A. (1989). Structural equations with latent variables. New York, NY: John Wiley & Sons, Inc.

Brown, T. C., & Latham, G. P. (2002). The effects of behavioral outcome goals, learning goals, and urging people to do their best on teamwork behavior on a group problem-solving task. Canadian Journal of Behavioural Science, 34, 276-285.

Carolan, T. F., Hutchins, S. D., Wickens, C. D., & Cumming, J. M. (2014).
Costs and benefits of more learner freedom: Meta-analyses of exploratory and learner control training methods. Human Factors, 56, 999-1014.

Charness, N., Tuffiash, M., Krampe, R., Reingold, E., & Vasyukova, E. (2005). The role of deliberate practice in chess expertise. Applied Cognitive Psychology, 19, 151-165.

Chen, G., Casper, W. J., & Cortina, J. M. (2001). The roles of self-efficacy and task complexity in the relationships among cognitive ability, conscientiousness, and work-related performance: A meta-analytic examination. Human Performance, 14, 209-230.

Chillarege, K. A., Nordstrom, C. R., & Williams, K. B. (2003). Learning from our mistakes: Error management training for mature learners. Journal of Business and Psychology, 17, 369-385.

Cronbach, L. J. (1987). Statistical tests for moderator variables: Flaws in analyses recently proposed. Psychological Bulletin, 102, 414-417.

Debowski, S., Wood, R. E., & Bandura, A. (2001). Impact of guided exploration and enactive exploration on self-regulatory mechanisms and information acquisition through electronic search. Journal of Applied Psychology, 86, 1129-1141.

Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on metacognition, self-regulation, and self-regulated learning. Educational Psychology Review, 20, 391-409.

Dormann, T., & Frese, M. (1994). Error training: Replication and the function of exploratory behavior. International Journal of Human-Computer Interaction, 6, 365-372.

Duvivier, R. J., van Dalen, J., Muijtjens, A. M., Moulaert, V. R., van der Vleuten, C. P., & Scherpbier, A. J. (2011). The role of deliberate practice in the acquisition of clinical skills. BMC Medical Education, 11, 101-107.

Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41, 1040-1048.

Elliot, A. J., & Harackiewicz, J. M. (1994). Goal setting, achievement orientation, and intrinsic motivation: A mediational analysis.
Journal of Personality and Social Psychology, 66, 968-980. Ericsson, K. A. (2004). Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Academic Medicine, 79, S70-S81. Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. J. Feltovich, & Hoffman, R. R. (Eds.), The Cambridge handbook of expertise and expert performance (pp. 683-703). Cambridge, UK: Cambridge University Press. Ericsson, K. A. (2009). Enhancing the development of professional performance: Implications form the study of deliberate practice. In K. A. Ericsson (Ed.), Development of professional expertise: Toward measurement of expert performance and design of optimal learning environments (pp. 405-431). Cambridge, UK: Cambridge University Press. Ericsson, K. A., & Charness, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49, 725-747. Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363-406. 108 Ericsson, K. A., & Lehmann, A. C. (1996). Expert and exceptional performance: Evidence of maximal adaptation to task constraints. Annual Review of Psychology, 47, 273-305. Ericsson, K. A., Nandagopal, K., & Roring, R. W. (2009). Toward a science of exceptional achievement. Annals of the New York Academy of Sciences, 1172, 199-217. Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitiveŒdevelopmental inquiry. American Psychologist, 34, 906-911. Ford, J. K., Smith, E. M., Weissbein, D. A., Gully, S. M., & Salas, E. (1998). Relationships of goal orientation, metacognitive activity, and practice strategies with learning outcomes and transfer. Journal of Applied Psychology, 83, 218-233. Frese, M., Brodbeck, F., Heinbokel, T., Mooser, C., Schleiffenbaum, E., & Thiemann, P. (1991). 
Errors in training computer skills: On the positive function of errors. International Journal of Human-Computer Interaction, 6, 77-93. Frese, M., & Keith, N. (2015). Action Errors, Error Management, and Learning in Organizations. Annual Review of Psychology, 66, 661-687. Frey, M. C., & Detterman, D. K. (2004). The relationship between the scholastic assessment test and general cognitive ability. Psychological Science, 15, 373-378. Gagne, R. M. (1962). The acquisition of knowledge. Psychological Review, 69, 355-365. Gully, S. M., Payne, S. C., Koles, K., & Whiteman, J. A. K. (2002). The impact of error training and individual differences on training outcomes: An attribute-treatment interaction perspective. Journal of Applied Psychology, 87, 143-155. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81-112. Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York, NY: Guilford Press. Heimbeck, D., Frese, M., Sonnentag, S., & Keith, N. (2003). Integrating errors into the training process: The function of error management instructions and the role of goal orientation. Personnel Psychology, 56, 333-361. Hesketh, B. (1997). Dilemmas in training for transfer and retention. Applied Psychology: An International Review, 46, 317-339. Horvath, M., Scheu, C. R., & DeShon, R. P. (2001). Goal orientation: Integrating theory and measurement. Paper presented at the 16th Annual Conference of the Society for Industrial and Organizational Psychology, San Diego, CA. 109 Hughes, M. G., Day, E. A., Wang, X., Schuelke, M. J., Arsenault, M. L., Harkrider, L. N., & Cooper, O. D. (2013). Learner-controlled practice difficulty in the training of a complex task: Cognitive and motivational mechanisms. Journal of Applied Psychology, 98, 80-98. Issenberg, S. B, McGaghie, W. C., Petrusa, E. R., Gordon, D. L., & Scalese, R. J. (2005). 
Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Medical Teacher, 27, 10-28. Ivancic IV, K., & Hesketh, B. (2000). Learning from errors in a driving simulation: Effects on driving skill and self-confidence. Ergonomics, 43, 1966-1984. Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall, Inc. Kanfer, R. (1990). Motivation and individual differences in learning: An integration of developmental, differential, and cognitive perspectives. Learning and Individual Differences, 2, 219-237. Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative/aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psychology, 74, 657-690. Keith, N., & Frese, M. (2005). Self-regulation in error management training: Emotion control and metacognition as mediators of performance effects. Journal of Applied Psychology, 90, 677-691. Keith, N., & Frese, M. (2008). Effectiveness of error management training: A meta-analysis. Journal of Applied Psychology, 93, 59-69. Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational psychologist, 41, 75-86. Koenig, K. A., Frey, M. C., & Detterman, D. K. (2008). ACT and general cognitive ability. Intelligence, 36, 153-160. Koriat, A., Ma'ayan, H., & Nussinson, R. (2006). The intricate relationships between monitoring and control in metacognition: Lessons for the cause-and-effect relation between subjective experience and behavior. Journal of Experimental Psychology: General, 135, 36-69. Kozlowski, S. W., & Bell, B. S. (2006). Disentangling achievement orientation and goal setting: effects on self-regulatory processes. Journal of Applied Psychology, 91, 900-916. 110 Kozlowski, S. W. J., Gully, S. M., Brown, K. G., Salas, E., & Nason, E. 
R. (2001). Effects of training goals and goal orientation traits on multidimensional training outcomes and performance adaptability. Organizational Behavior and Human Decision Processes, 85, 1-31. Kozlowski, S. W., Toney, R. J., Mullins, M. E., Weissbein, D. A., Brown, K. G., & Bell, B. S. (2001). Developing adaptability: A theory for the design of integrated-embedded training systems. In E. Salas (Ed.), Advances in human performance and cognitive engineering research, (pp. 59-123). Amsterdam: JAI/Elsevier Science. Kraiger, K. (2002). Decision-based evaluation. In K. Kraiger (Ed.) Creating, implementing, and managing effective training and development (pp. 331-375). San Francisco: Jossey-Bass. Kraiger, K., & Jerden, E. (2007). A meta-analytic investigation of learner control: Old findings and new directions. In S. M. Fiore & E. Salas (Eds.), Toward a science of distributed learning, (pp. 65-90). Washington, DC: American Psychological Association. Krampe, R. T., & Ericsson, K. A. (1996). Maintaining excellence: Deliberate practice and elite performance in young and older pianists. Journal of Experimental Psychology: General, 125, 331-359. Lazar, J., & Norcio, A. (2003). Training novice users in developing strategies for responding to errors when browsing the web. International Journal of Human-Computer Interaction, 15, 361-377. Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57, 705-717. Locke, E. A., Shaw, K. N., Saari, L. M., & Latham, G. P. (1981). Goal setting and task performance: 1969Œ1980. Psychological Bulletin, 90, 125-152. Martocchio, J. J. (1994). Effects of conceptions of ability on anxiety, self-efficacy, and learning in training. Journal of Applied Psychology, 79, 819-825. Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17, 437-455. Muthen, L. K., & Muthen, B. O. (2012). 
Mplus user™s guide: Seventh edition. Los Angeles, CA: Muthen & Muthen. Nordstrom, C. R., Wendland, D., & Williams, K. B. (1998). fiTo err is humanfl: An examination of the effectiveness of error management training. Journal of Business and Psychology, 12, 269-282. Nye, C. D., Brummel, B. J., Drasgow, F. (2010). Too good to be true? Understanding change in organizational outcomes. Journal of Management, 36, 1555-1577. 111 Oullette, J. A., & Wood, W. (1998). Habit and intention in everyday life: The multiple processes by which past behavior predicts future behavior. Psychological Bulletin, 124, 54-74. Perry, S. K. B. (2015). The process of adaptation. (Doctoral dissertation). Retrieved from ProQuest Dissertations Publishing. (Accession No. 3702530). Ployhart, R. E., & Ehrhart, M. G. (2003). Be careful what you ask for: Effects of response instructions on the construct validity and reliability of situational judgment tests. International Journal of Selection and Assessment, 11, 1-16. Schmidt, A. M., & Ford, J. K. (2003). Learning within a learner control training environment: The interactive effects of goal orientation and metacognitive instruction on learning outcomes. Personnel Psychology, 56, 405-429. Schmidt, G. M. (2008). Learning and performance goals: Disentangling the effects of goal specificity. (Unpublished thesis). Michigan State University, East Lansing, MI. Sitzmann, T., Bell, B. S., Kraiger, K., & Kanar, A. M. (2009). A multilevel analysis of the effect of prompting self-regulation in technology-delivered instruction. Personnel Psychology, 62, 697-734. Smith, B. P. (2005). Goal orientation, implicit theory of ability, and collegiate instrumental music practice. Psychology of Music, 33, 36-57. Sonnentag, S., & Kleine, B. M. (2000). Deliberate practice at work: A study with insurance agents. Journal of Occupational and Organizational Psychology, 73, 87-102. Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. 
Psychological Bulletin, 124, 240-261. Stevens, C. K., & Gist, M. E. (1997). Effects of self-efficacy and goal-orientation training on negotiation skill maintenance: What are the mechanisms? Personnel Psychology, 50, 955-978. Tabernero, C., & Wood, R. E. (1999). Implicit theories versus the social construal of ability in self-regulation and performance on a complex task. Organizational Behavior and Human Decision Processes, 78, 104-127. Taylor, P. J., & Small, B. (2002). Asking applicants what they would do versus what they did do: A meta-analytic comparison of situational and past behavior employment interview questions. Journal of Occupational and Organizational Psychology, 75, 277-294. Thorndike, E. L., & Woodworth, R. S. (1901). The influence of improvement in one mental function upon the efficiency of other functions. Psychological Review, 8, 247-261. 112 Vandewalle, D. (1997). Development and validation of a work domain goal orientation instrument. Educational and Psychological Measurement, 57, 995-1015. Van Gog, T., Paas, F., & van Merriënboer, J. J. (2008). Effects of studying sequences of process-oriented and product-oriented worked examples on troubleshooting transfer efficiency. Learning and Instruction, 18, 211-222. Veenman, M. V., & Beishuizen, J. J. (2004). Intellectual and metacognitive skills of novices while studying texts under conditions of text difficulty and time constraint. Learning and Instruction, 14, 621-640. Whetzel, D. L., & McDaniel, M. A. (2009). Situational judgment tests: An overview of current research. Human Resource Management Review, 19, 188-202. Wood, R. E. (1986). Task complexity: Definition of the construct. Organizational Behavior and Human Decision Processes, 37, 60-82. Wood, R. E., Kakebeeke, B. M., Debowski, S., & Frese, M. (2000). The impact of enactive exploration on intrinsic motivation, strategy, and performance in electronic search. Applied Psychology: An International Review, 49, 263-283. Young, J. D. (1996). 
The effect of self-regulated learning strategies on performance in learner controlled computer-based instruction. Educational Technology Research and Development, 44, 17-27. Zendejas, B., Brydges, R., Hamstra, S. J., & Cook, D. A. (2013). State of the evidence on simulation-based training for laparoscopic surgery: A systematic review. Annals of Surgery, 257, 586-593. 113