REGULATING APPLICATIONS & MEDIA DISTRACTED BEHAVIOR: NOT CLEARLY THE ANSWER TO A QUESTIONABLE PROBLEM

By

Colin A. Terry

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Educational Psychology and Educational Technology – Doctor of Philosophy

2019

ABSTRACT

REGULATING APPLICATIONS & MEDIA DISTRACTED BEHAVIOR: NOT CLEARLY THE ANSWER TO A QUESTIONABLE PROBLEM

By

Colin A. Terry

Students are increasingly distracted and off-task with technology. While contemporary research has clearly argued the pervasive nature and problematic effects of media distracted behavior, research has yet to identify and validate, by way of a real-world experiment, an efficacious and promising practical or pedagogical response. This dissertation used a quasi-experimental, longitudinal design to test regulating smartphone applications that purport to mitigate distracted technology use and heighten student attention. To test whether or not different regulating applications “work” as purported, this study examined two different regulating applications and their effects on media distracted behavior, student engagement, behavioral regulation, perceptions of technology dependency, and course performance. The experiment included first-year college students enrolled in a mandated entry-level science course at a medium-sized public STEM and applied science university. Stratified random assignment permitted experimental, contamination, and control treatment group comparisons. Long-term motivation effects (including student-held feelings related to self-efficacy, expectancy-value, and achievement goals) were also considered. Last, varying application affordances and design approaches were contrasted by way of feelings related to self-determination.

The results of quantitative and qualitative data analyses indicated that the applications sporadically and minimally lowered student-reported media distracted behavior in and outside of class, but had no effect on engagement, behavioral regulation, or perceived dependency on technology. Unexpectedly, there was a negative effect on Chemistry motivation, as students reported lower expectancy-value, more negative achievement goals, and lower self-efficacy. Last, application use negatively affected student performance in the course, as those asked to use regulating applications generally performed more poorly than those in the control and contamination groups. Challenging the promising assertions of regulating applications, the results of this dissertation suggest that rather than alleviate the problem, these particular apps may actually exacerbate media distraction’s negative effects by also diminishing engagement, regulation, achievement, and motivation.

Copyright by
COLIN A. TERRY
2019

For Meagan & Reese

ACKNOWLEDGMENTS

Academic efforts are, at the most fundamental level, the product of hard work. I first learned this sitting next to my mom. She sat patiently while we diligently worked through a 7th grade assignment on Travels with Charley, by J. Steinbeck. I remember feeling overwhelmed and nervous, afraid that I didn’t possess the intellectual ability to succeed. Together, one step at a time, we trudged through the assignment. This degree did not come easy for me. Without question, my colleagues, professors, and papers pushed my intellectual capacities beyond anything comfortable; sometimes beyond anything seemingly possible.
There were many late nights and early mornings where I trudged through the work, often feeling overwhelmed and unfit for the task. I am forever grateful for the opportunity to work hard and earn a Ph.D. from such a respected and prolific department. Thank you for believing in me, giving me a chance, and supporting my efforts along the way. To Punya Mishra, Ph.D., thank you for the opportunity to publish and collaborate within a vibrant research group, and for the invaluable foundation in my first few years of the program. Thank you for continuing to advance my scholastic efforts by way of service on the Dissertation Committee. It was an honor to work with you. Additionally, I want to thank Danah Henriksen, Ph.D., and the members of the Deep-Play Research Group, who pushed my writing and presentation skills to new levels while providing opportunities to research divergent topics. To Matthew Koehler, Ph.D., your feedback, guidance, and critique are of the highest quality, and I am grateful for the opportunity to start and end my degree with you. To Lisa Linnenbrink-Garcia, Ph.D., while we didn’t work together prior to my Dissertation, your additions and guidance were invaluable when it mattered most. To Kris Renn, Ph.D., Kelly Mix, Ph.D., Cindy Okolo, Ph.D., and Jack Smith, Ph.D., it was a true privilege to learn from you. To John Bell, Ph.D., thank you for advancing the CEPSE Design Studio for the betterment of hybrid students. Balancing a Ph.D. while working full-time was never easy. I was often a great student or a great employee, but never both. Words cannot express the depth of my gratitude for my colleagues at Colorado School of Mines. Thank you, Caroline Fuller, Amy Argyris Dupont, Jessica Keefer, Brad Lindeberg, Amanda Davis, Katie Ludwin, and Jen Drumm; I am so grateful for your support and patience. Thank you for working so hard and for being such quality friends. To Rebecca Flintoft, thank you for modeling great humility and professionalism. It has been an honor to be your supervisee and colleague. To Dan Fox, Ph.D., thank you for the support, laughs, friendship, guidance, and care. To Brent Waller, Ph.D., you are a stellar professional and inspiring friend. And to Derek Morgan, Ph.D., it has been an honor to work for you, call you my friend, and work alongside you while learning from the best. To my fellow Cohortians and EPET students, it was an honor to work alongside students who will undoubtedly prove to be international leaders in research. To Josh Rosenberg, Ph.D., thank you for the friendship. You are a brilliant, humble thinker. To Nick Holton, Ph.D., and Virginia Hiltz, Ph.D., this degree was not possible without your community, support, and levity. Nick – I am proud to call you my friend. And Ginny – you are quite possibly one of the most generous, hardworking, and brilliant individuals I’ve had the pleasure of calling a friend. And to Bret Staudt Willet, thank you for your help with my dissertation. To my extended Chicago family, thank you for the lodging, the Chicago pizza nights, and the unwavering support. To my Spokane family, thank you for supporting my efforts all these years, including those before MSU. To Robbie and Andy, it is an honor to be your son-in-law. Thank you for believing in me and supporting our family during hard times. To my grandpa (Vovo), you’ve always been a pillar of hard work and intellect – Montana meets medicine. You have been an inspiration for years. To my parents, thank you for Beano and your unbridled belief in me and patience with me.
I am proud to be your son and grateful for the life lessons. To Dan, Justin, and Ryan, you three exemplify intellect, achievement, and kindness. You have been there for some of my most favorite life moments, and I hope you are for years to come. To Ryan, thank you for always being kind, loving, supportive, and sharp. To Justin, thank you for modeling the qualities of a great father and thinker. And to Dan, thank you for our morning talks, moments of raw realness, and unfettered support. I am lucky to be your friend. Very literally, this degree would not have been possible without Cary Roseth, Ph.D. To Cary, I will never be able to thank you enough for all the support, care, tutelage, and assistance you’ve provided. Additionally, it was an honor to work for you as EPET Liaison. Greg LeMond won the Tour de France by eight seconds in 1989. As a cyclist, you understand that unrelenting drive can make all the difference. I am so grateful for the drive that defined your tutelage and my learning. And now, I am grateful to call you my friend. To Reese, your smile, your touch, your laugh, and your developmental moments fill me to the brim. In many ways, this was for you. To Meagan, you are beautiful, more now than ever before. And through all the storms we have weathered together, you have always been there to support, love, and challenge me. I have learned as much at home as I did as part of this degree. Your sacrifices made possible this degree and the family of my dreams. I am grateful for the ways you’ve made me more empathetic, loving, thoughtful, and present. I cannot wait to spend more time with you, and I look forward to supporting your professional and personal efforts. I love you.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER ONE: Introduction
    Statement of the Problem
    Purpose of the Study
        Do the applications work?
        Media distracted behavior and academic performance
        Motivational, long-term effects
        Continued use of the applications

CHAPTER TWO: Literature Review
    Attention and Distraction
    The Emergence of Multitasking
    Media Multitasking Pervasiveness and Concerns
    Human Limitations with Multitasking
    Suggested and Tested Responses

CHAPTER THREE: Pilot Study, Dissertation Study, and Research Questions
    Pilot Study
    Regulating Applications
    Dissertation Study
    Research Questions

CHAPTER FOUR: Methods
    Overview
    Sample
    Conditions and design
    Procedure
    Data collection
    Measures
        Media distracted behavior
        Behavioral regulation
        Dependence on technology and anxiety without technology
        Student engagement
        Academic achievement
        Student motivation in Chemistry
            Expectancy-value
            Achievement goals
            Self-efficacy
        Treatment fidelity
        Subscale reliability
        Convergent validity
        Frequency and course preparation
        Demographic information
        Prior use and contamination
        Self-determined motivation to use regulating applications
        Additional considerations
        Qualitative measures
    Data Analyses
        Quantitative analyses
        Qualitative analyses
        Data protection and integrity

CHAPTER FIVE: Results & Discussion
    Participant Flow
    Measure Reliability and Convergence Validity
    Media Distracted Behavior In and Out of Class
    Hours Preparing for Class
    Frequency and Duration of Application Use
    Research Question 1: Media Distracted Behavior
        Media distracted behavior results: During class
        Media distracted behavior results: Outside of class
        Media distracted behavior discussion
    Research Question 2: Student Regulation
        Student regulation results: Control of attention
        Student regulation results: Behavior regulation
        Student regulation discussion
    Research Question 3: Technology Dependency
        Technology dependency results
        Technology dependency discussion
    Research Question 4: Student Engagement
        Student engagement results: Behavioral engagement
        Student engagement results: Persistence
        Student engagement discussion
    Research Question 5: Performance Achievement
        Performance achievement results
        Performance achievement discussion
    Research Question 6: Expectancy-Value
        Expectancy-value results: Interest value
        Expectancy-value results: Attainment value
        Expectancy-value results: Utility value
        Expectancy-value results: Opportunity cost
        Expectancy-value results: Effort cost
        Expectancy-value results: Psychological cost
        Expectancy-value discussion
    Research Question 7: Achievement Goal
        Achievement goal results: Mastery approach
        Achievement goal results: Performance approach
        Achievement goal results: Performance avoidance
        Achievement goal discussion
    Research Question 8: Self-Efficacy
        Self-efficacy results
        Self-efficacy discussion
    Research Question 9: Self-Determination
        Self-determination results
        Self-determination discussion

CHAPTER SIX: Summary and Conclusion
    Implications
    Limitations
    Directions for Future Research & Conclusion

APPENDICES
    APPENDIX A: Tables
    APPENDIX B: Figures

REFERENCES

LIST OF TABLES

Table 1: Demographic & Participant Breakdown by Condition
Table 2: Participant Experience Chart
Table 3: Communication with Treatment Groups
Table 4: Participant Flow
Table 5: Correlations: Course Scholarly Performance
Table 6: Media Distracted Behavior In and Out of Class,
Hours Preparing for Class
Table 7: Frequency of Application Use and Duration

LIST OF FIGURES

Figure 1: Media Distracted Behavior: During Class (Linear Mixed Method Analysis)
Figure 2: Media Distracted Behavior: Outside of Class (Linear Mixed Method Analysis)
Figure 3: Control of Attention (Linear Mixed Method Analysis)
Figure 4: Behavioral Regulation (Linear Mixed Method Analysis)
Figure 5: Technology Dependency (Linear Mixed Method Analysis)
Figure 6: Behavioral Engagement (Linear Mixed Method Analysis)
Figure 7: Persistence (Linear Mixed Method Analysis)
Figure 8: Course Performance (Linear Mixed Method Analysis)
Figure 9: Interest Value (Linear Mixed Method Analysis)
Figure 10: Attainment Value (Linear Mixed Method Analysis)
Figure 11: Utility Value (Linear Mixed Method Analysis)
Figure 12: Opportunity Cost (Linear Mixed Method Analysis)
Figure 13: Effort Cost (Linear Mixed Method Analysis)
Figure 14: Mastery Approach (Linear Mixed Method Analysis)
Figure 15: Performance Approach (Linear Mixed Method Analysis)
Figure 16: Performance Avoidance (Linear Mixed Method Analysis)
Figure 17: Self-Efficacy (Linear Mixed Method Analysis)

CHAPTER ONE

Introduction

Advancements in educational technology have made possible that which was previously unimaginable. With technology, we can teach in remote locations around the world (e.g. course management systems, video-conferencing tools, and robotic telepresence), individualize pedagogical approaches for hundreds of students simultaneously (e.g. responsive question modules), and support new approaches towards deeper understandings of content (e.g. virtual and immersion simulations). Modern classrooms are defined by their technological integration in hardware, software, and pedagogical approach. Juxtaposed to the ever-expanding list of benefits, opportunities, and discoveries attributed to advancements in educational technology, the proliferation of ubiquitous technology has also presented new challenges. Contemporary technology-enabled challenges include cyberbullying (e.g. Aydin, 2012), narcissism and depression (e.g., Campbell & Twenge, 2015), and unhealthy codependence, even to the point of self-harm (e.g., Wilson, Reinhard, Westgate, Gilbert, et al., 2014), to name a few. This binary presentation of “positive” and “negative” effects is overly simplistic. In reality, technology tenuously prompts both positive and negative considerations concurrently. As an example, smartphones present desirable affordances like texting and ever-present connectivity. In the vehicle, however, ever-present connectivity and pervasive texting are associated with gravely concerning spikes in motor vehicle fatalities as a result of distracted driving (Richtel, 2014). As it pertains to this study, advancements in technology have prompted new and challenging considerations related to distraction and attention. In the last decade, coupled with awesome advancements in clandestine and portable technologies such as smartphones, individuals are both distracted and engaged by new technologies at previously unimaginable rates (Alter, 2017). From an educational perspective, constant student connectivity and distraction have burgeoned in the last decade and brought with them new concerns related to personal wellness, distracted learning, and social development (Courage, Bakhtiar, Fitzpatrick, Kenny, & Brandeau, 2015; Radesky, Schumacher, & Zuckerman, 2015).
This study focused on the tenuous relationship between student use of technology, learning, and distracted behavior. A growing body of contemporary research related to media distracted behavior, often in the form of technology misuse (e.g. YouTube when studying) and multitasking (e.g. texting while attending class), underscores a myriad of undesired effects including, but not limited to, heightened distraction and inattention, significantly hindered productivity, diminished scholastic performance, and encumbered and inaccurate cognitive processing/learning (Bergen, Grimes, & Potter, 2005; Hembrooke & Gay, 2003; Sana, Weston, & Cepeda, 2013; Unsworth, McMillan, Brewer, & Spillers, 2012). Despite these widely known effects, product designers and companies are increasingly adept at creating stimulating and immersive user experiences and environments (Alter, 2017). And yet, the ability of students to regulate and direct their attention when studying or learning remains as important as ever. Research has thoroughly documented the prevalence of distracted learning and the paired, arguably causal effects. More recently, researchers have also shifted their inquiries from diagnosing the situation (i.e. what is happening) to treating the phenomenon (i.e. how to best respond or address the issue).

Statement of the Problem

This study considered attention and distraction in the form of technology use, and more specifically considered student media distracted behavior, which encapsulates media multitasking behavior (Carrier, Kersten, & Rosen, 2015) and a student’s focused misuse or off-task use of technology. First coined by the Kaiser Foundation (Roberts, Foehr, & Rideout, 2005), media multitasking is “the consumption of two or more streams of content,” facilitated by technology, at the same time (Ophir, Nass, & Wagner, 2009). Media multitasking is an important application of media distracted behavior, but it fails to capture instances wherein a student is purely off task with technology but not consuming multiple streams of content. Consider, for example, the student who watches YouTube for an hour despite intending to spend this time studying. To date, research related to student media distracted behavior has predominantly focused on describing the phenomenon and its common effects (see e.g., Aagaard, 2015; Foehr & Kaiser Family Foundation, 2006) as well as identifying potential predictors of the common behavior (see e.g., Part IV: Multitasking from The Wiley Handbook of Psychology, Technology and Society, 2015). Research has found that distraction and technology misuse may be strongly facilitated by students’ anxiety without technology and perceived dependency on technology (Terry, Mishra, & Roseth, 2016), bottom-up psychosocial stimulus and reward (Goleman, 2013), and the perceived value of technology use and media multitasking behavior (Rosen, Whaling, Carrier, Cheever, & Rokkum, 2013). While contemporary research has advanced public understanding of media multitasking, little is known about how practitioners/teachers can regulate this ever-present behavior, or how students can efficaciously regulate their own media distracted tendencies. This study aimed to advance the discussion of how to regulate media distracted behavior by testing a promising, but unverified and unevaluated, approach: using personal software applications (or “apps”) to regulate media distracted behavior and tendencies.
Purpose of the Study

While it may seem ironic to address a technology-enabled problem with technology, numerous software applications have claimed the ability to heighten attention and mitigate distracted technology use. Examples include, but are not limited to, Freedom, Self-Control, Forest, Focus, Pomodoro, and Focus Booster. For example, Freedom claims to “eliminate distractions”, “break bad habits”, “build new habits”, and make users “more productive”, all as a result of using the application. There are practical reasons for considering a technological response to media distracted behavior. First, if an individual possesses the distracting device, such as a smartphone, they may also possess the regulating application. In other words, these regulating applications are well “positioned” to address the behavior at the source of multitasking and distraction. Second, technology applications also have the potential to help regulate the behavior beyond the physical delineation of a classroom or study space. With an application on the same phone used for media distracted behavior, the treatment or solution is always available. At a more theoretical level, it is also possible that the applications could induce long-term motivational shifts and increase students’ self-regulation. In line with previous media multitasking research, this study argued that media distracted behavior can be regulated, as can the desire to be physically and cognitively focused or attentive (Magen, 2017; Zimmerman, 2002). As it pertains to the theoretical effect of application use, it may be plausible for these applications to bolster a student’s desire, as well as a student’s effort, to be more regulated, and thereby prompt long-term positive behavioral or motivational change.

This study considered the credibility of regulating applications as a promising response to pervasive and problematic media distracted behavior. Specifically, the study first considered whether or not they “work” by examining their effect on media distracted behavior, student regulation (as measured by attentional control and behavioral regulation), perceived dependency on technology, student engagement, and academic performance (as measured by student scholastic achievement in a course). It was hypothesized that using the regulating applications would lower media distracted behavior and perceived dependency on technology while heightening student regulation, student engagement, and academic performance. Beyond the effective consideration of whether or not an application “works” to mitigate media distracted behavior, this study also considered whether regulating software applications prompt long-term motivational changes related to Chemistry. This line of questioning may suggest “how” (e.g. through sustained motivational shifts in the student) the regulating applications “work” (e.g. attentional control, engagement, media use, etc.). For example, a positive shift in student-held feelings related to self-efficacy in Chemistry may further explain observed improvements in academic performance. To measure this motivational change, this study considered expectancy-value, achievement goals, and self-efficacy. Because the effect of regulating applications depends on application use, it was also important to understand whether and why some applications were used more than others.
This study addressed this issue by comparing two different regulating software approaches, each with varying affordances and constraints in the user experience. One approach controls the user and overrides self-regulatory failure, while the other incentivizes the user toward heightened self-regulation. As a way to examine why certain applications are more effective at fostering sustained use, this study considered the user experience with the two varying applications from a self-determination motivational perspective. One application incentivizes long-term use and provides freedom of choice; the other does not incentivize use but removes all autonomy from the user. This study also examined which is more successful in fostering comparatively greater continued utilization. All three of the above considerations – Do regulating applications work? How do they prompt long-term motivational change? Are some applications used more than others? – were considered through quantitative and qualitative data analysis, wherein the qualitative analysis aimed to elaborate on and complement the quantitative findings.

CHAPTER TWO

Literature Review

Attention and Distraction

Attention has been an important psychology and education research concept for more than a century, long before its contemporary relevance to personal electronic technology (e.g. James, 1899). This long-standing area of research, however, has garnered revitalized interest and complexity as individuals struggle to direct and sustain their attention in contemporary, hypermedia-laden, distracting environments (Rosen, Carrier, & Cheever, 2013). The human ability to maintain focus towards “stimulus, context, and goal information in the face of interference or other sources of conflict” (Engle & Kane, 2004, p. 149) has never been more challenged or more at peril (Alter, 2017). Attention – an individual’s ability to direct and sustain cognitive focus – is a neurological process at the bedrock of learning (Schunk, 2011). Information processing theory asserts that adequate attention, as well as accurate perception, are necessary components of any learning effort (Schunk, 2011). By definition, a learning disability is a measurable deficit in a learner’s ability to sustain necessary attention or perceive accurately (e.g. ADHD and Autism, respectively) (Schunk, 2011). Further, intelligence is highly correlated with an individual’s ability to direct, and sustain, their own attention. As Unsworth et al. (2012) noted, “the ability to control one’s attention [attentional control] is an important predictor of everyday attentional failures and a major source of individual differences in scholastic abilities and intelligence...” (p. 1771).

The broad topic of attention presents physical and cognitive considerations that are related, but separate. Given this study’s focus on participant behavior (e.g. multitasking and technology use) and cognitive effects (e.g. motivation shifts), it is important to clearly define the terms and phenomenological considerations related to attention as a reciprocal negotiation between cognition and individual, behavioral regulation. An individual must present the cognitive and behavioral capacity to carry out basic attention functions such as “achieving and maintaining the alert state, orienting to sensory events, and controlling thoughts and feelings” (Posner, 2012, p. 2).
At this level, there are behavioral considerations, such as a child’s ability to direct eye movement towards priority tasks at hand, or an individual’s ability to physically engage with the attention priority while shielding, ignoring, or removing off-task stimuli (e.g. inhibitory control) (Rothbart & Posner, 2015). These are the basic behavioral and cognitive necessities that mediate an individual’s ability to process and sustain focus on a set of stimuli, a neurological process often referred to as attention scope (Cowan, Fristoe, Elliott, Brunner, & Saults, 2006). In the framework of information processing and neuropsychology, attention is defined as an individual’s ability to cognitively accept, process, and then encode working memory stimuli (Cowan et al., 2006). Attention research related to multitasking and distraction most directly considers the executive attention network. Executive attention is one of three networks that enable human attention, with the other two identified as alerting (e.g. external, incoming stimuli) and orienting (e.g. selection of information from sensory input) (Posner, 2012). Executive attention involves “mechanisms for monitoring and resolving conflict among thoughts, feelings, and responses” (p. 19). As the most complex component of an individual’s network of attention, executive attention is often coupled with self-regulatory awareness and behavior as well as a wide “variety of cognitive tasks underlying intelligence” (Posner, 2012, p. 22). Accordingly, broad references to “attention” consider both cognitive processing (e.g. attentional scope and executive attention) as well as the coupled, inextricably necessary self-regulatory behavioral capacity and behavioral execution of an individual.

Coupled with attention, distraction is an intimately human attribute. The finely tuned neurological ability to be distracted and to interpret external or internal stimuli is a highly evolved process fundamental to our daily persistence (Posner & Rothbart, 1998). It is the neurological basis by which we notice an oncoming car in our peripheral vision and the neurological basis by which we daydream and imagine, a phenomenon empirically linked with creative thinking (Baird, Smallwood, Mrazek, Kam, Franklin, & Schooler, 2012; Lavie, 2010; Unsworth, Redick, Lakey, & Young, 2010). Humans are meant to be distracted, and we are predisposed to distraction. In an interesting twist, most distractions start as a purposeful attention alert (Posner, 2012). Through this lens, distraction is an effective and laudable cognitive phenomenon. Heatherton and Wagner (2011) note that attentional failure, often the term used to describe distraction within the attention literature, is frequently instigated by subcortical regions of the brain, often leading to a breakdown in prefrontal, top-down processing and focus. Subcortical regions of the brain are responsible for basic human existence and needs, including the desire to procreate, eat, and survive when encountering danger. This is also the part of the brain responsible for primal human emotions such as fear and happiness. Accordingly, attention and distraction are concepts understood by considering both the human as well as the environment the human operates within (Rueda, Posner, & Rothbart, 2005). This segues to a much larger, complex, and evolving challenge: the modern, media-laden environment within which students must be attentive, but are often, and increasingly, distracted (Alter, 2017).
Research pertaining to student attention in distracting technology environments emerged in the 1950s, when televisions first began to define the modern home (Greenstein, 1954). Today, the average professional is interrupted or demonstrates a shift between tasks nearly every three minutes (Mark, Gonzalez, & Harris, 2005). On the road, distracted driving has prompted new state and federal laws meant to combat the rise in unsafe, inattentive driving (Engström, Johansson, & Ostlund, 2005; National Safety Council, 2012). In homes across the Western world, rampant technology misuse and technology-laden distraction have prompted concerning parenting implications such as absentee parenting (Novotney, 2016). And in the classroom, the distracted student switches tasks an average of twenty-seven times per hour and opens more than sixty-five computer windows (e.g. applications or webpages) per hour (Kraushaar & Novak, 2006; Marci, 2012).

There are a multitude of hypotheses to explain the dramatic emergence of distracted, technology-driven behavior in Western youth. Hypotheses include a fear of missing out (FOMO) (Carrier, Rosen, Cheever, & Lim, 2015), the value attributed to the innate human desire to advance social capital and connection (Steinfield, Ellison, & Lampe, 2008), and the desire to feel good about oneself, even if exhibiting narcissistic behavior and values (Campbell & Twenge, 2015). Considering that technology-abetted distraction is commonly exhibited in the form of social connectivity (e.g. social media and social networking) or immediate access to gratification, information, and news (e.g. smartphone use), these hypotheses are well-reasoned (Junco, 2013; Junco, Heiberger, & Loken, 2011). As Hassoun (2014) noted, the result of pervasive multitasking has been “the production of a generation of learners overly reliant on instant gratification and disturbingly inefficient at processing information about the world” (p. 1681). Without suggesting one specific hypothesis is more accurate or salient, this paper intimated that, at a neurological level, the driving force behind distraction may be some level of emotional reward, despite the associated cost to high-level thinking, operation, or learning (Wang & Tchernev, 2012). And distraction is both instigated by and, more recently, abetted by the emotional affordances of modern technology (Heatherton & Wagner, 2011).

The Emergence of Multitasking

Technology-driven distraction is most commonly presented in the form of multitasking with technology. Multitasking is defined as the ability and behavior of quickly switching between differing tasks (e.g. task-switching) or mutually dividing mental faculties between two concurrent tasks (e.g. multitasking) (Alzahabi & Becker, 2013; Rothbart & Posner, 2015). Multitasking research, whether explicitly referencing multitasking or task-switching, is most appropriately described as the action of switching measurable cognitive attention between task efforts, often task-relevant and task-irrelevant efforts (e.g. a distracting stimulus), either in rapid succession (e.g. task-switching) or simultaneously (e.g. multitasking) (Shao & Shao, 2012). It is important to note that multitasking or task-switching considers two or more cognitively relevant tasks (Aagaard, 2015). Multitasking is not, for example, chewing gum while taking notes, as the cognitive effort to chew gum in this hypothetical example is negligible; there remains only one cognitive effort: taking notes.
Conversely, an individual is multitasking when taking notes while also listening to a lecture, or texting while driving. The majority of multitasking research casually interchanges the terms multitasking and task-switching, sometimes within the same study; or the two are colloquially referenced as multitasking, as they produce similar effects (Rothbart & Posner, 2015). This study used the colloquial reference of multitasking.

In recent years, the most common form of multitasking is media multitasking, reflecting the pervasive and ubiquitous nature of personal technology (Schuur, Baumgartner, Sumter, & Valkenburg, 2015). “Media multitasking has become increasingly popular thanks to the versatility and accessibility of computers, smartphones, and tablets which allow for the seamless integration of work, play, and social interaction” (Xu, Wang, & David, 2016, p. 243). Media multitasking, a term first coined by the Kaiser Foundation, is defined as the consumption of more than one stream of content, at once, and facilitated by technology (Ophir et al., 2009). Extant media multitasking research often considers an individual’s poor regulation and exhibited inability to sustain attention. For example, media multitasking often presents the unregulated behavior within the context of technology misuse (e.g. web-browsing within a classroom), task-irrelevant behavior (e.g. social media use while studying), or complete distraction, often prompted by external stimuli (e.g. an incoming text message) or facilitated by immersive environments (e.g. online video games) (Xu et al., 2016).

Media multitasking behavior arguably defines the modern classroom. In fact, “the percentage of students using a smartphone for academic purpose was about twice as many in 2012 (67%) than 2011 (37%) through a variety of mobile friendly institutional services and resources, including grade checking” (Chen & Yan, 2016, p. 35). However, as Chen and Yan argue, when students have access to mobile phones, they are also more likely to “engage in off-task multitasking behaviors” (p. 35). Now, 90% or more of university students text message during class, typically task switch every 19 seconds, and express significant, disabling anxiety when removed from their phones (Alter, 2016; Kraushaar & Novak, 2006; Tindell & Bohlander, 2012; Zhang, 2015).

Media multitasking resides within the attentional control construct (e.g. Wood, Zivcakova, Gentile, Archer, De Pasquale & Nosko, 2012), and within the broader umbrella of self-regulation (e.g. Hagger, Wood, Stiff & Chatzisarantis, 2010). Daydreaming and mind-wandering also reside within attentional control (Smallwood & Schooler, 2006; Unsworth et al., 2012). These laudable and naturally occurring phenomena, which have been linked with creative and abstract thought (Baird et al., 2012), are purely cognitive, whereas media multitasking involves cognitive and behavioral engagement with technologies (Alzahabi & Becker, 2013). Accordingly, media multitasking can be considered a manifestation or operationalization of distraction. The placement of media multitasking within attentional control and, even more broadly, self-regulation underscores a key assumption of this study – namely, that media multitasking can be a monitored, controlled, and planned behavior. In the context of this study, self-regulation is defined “as the process whereby students activate and sustain cognitions and behaviors systematically oriented towards the attainment of their learning goals” (Schunk, 2008, p.
465). Zimmerman (2002) also noted that “self-regulation is not a mental ability or an academic performance skill; rather, it is the self-directive process…” (p. 65). Thus, media multitasking can be regulated, as can the desire to be physically and cognitively focused or attentive. More, it may be plausible for these applications to bolster a student’s desire and effort to be more regulated, and thereby potentially mitigate their media multitasking tendencies.

Media multitasking presents a multitude of dynamic and interwoven considerations. Research suggests that the relevance of tasks, the modalities of tasks, the complexity of tasks, and the novelty of the tasks can heavily influence an individual’s media multitasking endeavors (Wang, Irwin, Cooper, & Srivastava, 2015). Additionally, there are media multitasking considerations related to environment (e.g. appropriate technology use in the presence of others or in quiet spaces such as a library), motivation (e.g. an individual’s decision to not multitask because they are engaged in a valued exercise or endeavor), and temporal considerations such as the immediacy of task-relevant matters (e.g. differing multitasking tendencies as a result of pending deadlines) (Xu et al., 2016). As Hassoun (2014) noted as part of an ethnographic field study, “my observations suggest that multitasking has become a deeply ‘ordinary’ practice, one whose norms of conduct reveal not passive modes of distraction, but rather a rich series of negotiations with one’s co-present environment” (p. 1681). Akin to attention and distraction, the media multitasking phenomenon presents common, yet complex, human mediating factors such as environment, motivation, familiarity, and self-regulation. In an effort to capture the complexity of ubiquitous media multitasking behavior, this study considered media multitasking, by way of media distracted behavior, over the course of an academic term (e.g. 16 weeks), in an authentic, real-world application and setting. In extant media multitasking research, this type of study design was rare.

Media Multitasking Pervasiveness and Concerns

Humans have been distracted for decades (Courage et al., 2015). So why the concern now? The recent critical response to media distracted behavior is attributable to the rise in pervasiveness of the phenomenon and its negative effects (Terry et al., 2016). In and out of the classroom, students between the ages of 8 and 18 – those who media multitask more than any other age group (Alzahabi & Becker, 2013) – engage with technology at levels previously unimagined. Moreno, Jelenchick, Koff, Eikoff, Diermyer, and Christakis (2012) found, during an experience sampling study, that students were using the Internet and multitasking more than 50% of the time. This included time in the classroom. “There has been a 120% increase in time that youth between the ages of 8 and 18 years old multitask with media” (Alzahabi & Becker, 2013, p. 1). As a result, “American children and teenagers, on average, squeezed 10 hours and 45 minutes’ worth of daily media content into 7.5 hours with media” (Carrier et al., 2015, p. 373). A recent national study with more than 500 undergraduate students noted that 73% of students were unable to study without some form of technology, and 38% exhibited distracted behavior within 10 minutes of studying (Kessler, 2011).
Kraushaar and Novak (2006) found students multitasking nearly half of class time (42%) and opening an average of 65.8 computer windows, such as Internet browsers or software applications, per hour-long class lecture. The vast majority of these windows were deemed to be off-task. Ofcom and GfK (2010) found that the 16- to 26-year-olds studied used media 9.5 hours per day, of which 52% involved multitasking. In a recent study of 992 respondents, time spent on communication and media devices/endeavors reached 39 hours per day, a number only possible through multitasking behavior (David, Kim, Brickman, Ran, & Curtis, 2014). Generally, students are distracted and off-task within 10 minutes of a lecture (Rosen, Carrier, et al., 2013).

Beyond the ubiquity of media multitasking, the foremost concern with “pervasive student multitasking is more directly related to the negative performance implications” (Terry et al., 2016, p. 243). As Judd (2014) noted, “real concerns are being raised around the interaction between multitasking and learning, in light of strong evidence that multitasking can interfere with the learning process” (p. 195). The associated performance implications can be attributed to the distractive potential of technology used for multitasking purposes. In addition, negative performance can be attributed to the “negative effects of multitasking – these include a decline in task accuracy and performance” (Judd, 2014, p. 195). Specific to students, studies routinely suggest a positive correlation between media distracted behavior and comparatively poorer scholastic performance (Kraushaar & Novak, 2006; Rosen, Lim, Carrier, & Cheever, 2011; Sana et al., 2013; Wood et al., 2012). In a repeated controlled experiment, Wood et al. (2012) found a strong correlation between diminished performance and student multitasking. Considering the neurological limitations related to multitasking, it is not surprising that multitasking comes at a cost to performance.

While extant research on media multitasking has focused on performance implications in scholastic environments, there are other significant concerns. Beyond the contemporary and well-known concerns related to distracted driving, there are relevant concerns related to well-being, social relatedness, and psycho-social development. For example, “research has found that among children, [media multitasking] negatively correlates with the feeling of normalcy and capabilities to develop intimate relationships with friends, and it has been associated with the symptoms of depression and social anxiety in adults” (Xu et al., 2016, p. 242). These concerns more appropriately reflect the antecedents for engaging in media multitasking behavior (e.g. seeking social connection) or the type of multitasking behavior pursued (e.g. Facebook). These effects have prompted further concerns related to presentations of narcissism (Campbell & Twenge, 2015), interpersonal communication (Turkle, 2012; Turkle, 2015), and anxiety (Becker, Alzahabi, & Hopwood, 2013; Rosen, Whaling, et al., 2013). In sum, media multitasking behavior is ubiquitous, and the overwhelming body of research underscores a list of undesired effects, not the least of which is measurably poorer cognitive performance. This study contributed to the existing body of literature on media multitasking, as practitioners, parents, administrators, and students are ill-equipped to combat the commonly correlated negative effects of media multitasking.
Human Limitations with Multitasking

Admittedly, multitasking is often an integral, even praiseworthy, professional competency (Monk, Trafton, & Bohm-Davis, 2008). It is a necessary skill for certain professions, such as emergency response or medicine. And multitasking is an unavoidable reality in daily life, especially when considering our human predisposition to daydreaming and distraction (Wood & Zivcakova, 2015). In instances where multitasking moments are well-regulated or monitored, multitasking can also lead to creative problem solving and high efficiency (Altmann & Trafton, 2002; Brasel & Gips, 2011; Levine, Waite, & Bowman, 2007; Lin, 2009). In these examples, there exists an important caveat related to expert, professional knowledge versus novice familiarity, and the varying associated cognitive burden (Lin, Robertson, & Lee, 2009). Emergency response and medical professionals, for example, record hundreds, if not thousands, of hours in training and preparation so that they may become competent multitaskers in crisis situations. This training is necessary, of course, because distracted multitasking behavior – which is starkly different from the well-regulated and well-rehearsed multitasking exhibited in medicine – is nearly always problematic.

Two reasons are often cited for the problems associated with multitasking: limited cognitive ability and poor cognitive performance. For cognitive ability, the human brain simply cannot simultaneously process two stimuli with equal cognitive attention (Levy & Pashler, 2001). More to the point, the brain is unable to simultaneously process two disparate cognitive tasks or endeavors. Neuroscientists argue that competing tasks trigger different areas of the brain, mainly the prefrontal cortex (top-down processing: high-functioning, high-order) and the striatum or subcortical regions (bottom-up processing: low-level, reactionary, emotional stimulus) (Bozeday, 2010; Lin, 2009; Rosen, Lim et al., 2011). As a result, the brain frequently prioritizes one over the other, unable to stimulate both (Bozeday, 2010; Foerde, Knowlton, & Poldrack, 2006). Accordingly, the brain is limited by its physical ability to encode two or more external stimuli at once, as well as in its ability to process two or more neurological, cognitive endeavors simultaneously.

Because of this inability to encode or process stimuli simultaneously, the brain often defaults to a task-switching approach. However, the brain is poor at task-switching effectively and efficiently. This is the second of two reasons why multitasking and distracted behavior nearly guarantee detrimental effects. “The cognitive literature clearly indicates that competing cognitive tasks affect performance. Early and more modern researchers of attention have agreed that attentional capacities are limited and that dividing attention among one or more different tasks leads to decrements in performance” (Bowman, Levine, Waite, & Gendron, 2010, p. 928). Salvucci, Taatgen, and Borst’s (2009) model for a Unified Theory of the Multitasking Continuum states that not all cognitive tasks are similar. Instead, tasks fall along a spectrum of complexity, with more complex, sequential tasks (e.g., writing a paper) requiring more attention and time. Further, tasks that necessitate attention to detail, complex reflection, or significant concentration are challenging for the multitasking or task-switching student (Bowman et al., 2010).
New, unfamiliar, and arcane tasks may also present more complexity than familiar or known cognitive tasks. Additionally, research has shown that tasks similar in nature, such as instant messaging and transcribing notes, which both involve focused cognition and typing, can be a greater challenge than two completely unrelated tasks, such as instant messaging (IMing) and Facebook (Bowman et al., 2010). Moreover, inherently more engaging (as defined by emotional reward) tasks, such as Facebook versus e-mail, prompt further attentional failure (Wood et al., 2012). Regardless of varying task characteristics, Salvucci et al. (2009) posit that all tasks vie for the same, limited cognitive resources, and consequently any one cognitive task must wait its turn to use necessary resources or modules.

Studies frequently consider three agents with multitasking: the primary task (or task-relevant), the interruptive task (or task-irrelevant), and the lag between them (Rosen, Carrier, et al., 2013). The process of activating the dormant task while switching from the current task requires additional energy and time. This is referred to as interruption lag (cognitive decoupling from one task to another) and resumption lag (acclimation to the new task). Both lags increase total time and energy exertion (Salvucci et al., 2009). Though this time and energy may be minimal, “it may help explain why some people feel more productive while multitasking; the increased brain activity makes them feel like they have accomplished more” (Grinols & Rajesh, 2014, p. 90). In reality, the time lost in switching from task to task produces cumulative cognitive inefficiencies (Altmann & Trafton, 2002; Grinols & Rajesh, 2014). To illustrate: a student who task switches every 19 seconds incurs roughly 150 switches over a 50-minute lecture, so even a hypothetical half-second of combined interruption and resumption lag per switch would compound into more than a minute of lost engagement, before accounting for encoding errors.

In sum, the cognitive negotiation inherent to task-switching efforts is mediated by multiple factors and considerably complex. The product of this complexity is an increased chance of inaccurate, disjointed information encoding (Sana et al., 2013). As the brain toggles between disparate cognitive tasks, the rate of errors increases and the learning effort is substantially more ineffective, fraught with mistakes (Salvucci et al., 2009). As Bowman et al. (2010) suggest, distracted, multitasking behavior has been “linked to slower time to complete tasks, forgetting of target activities, and more errors in carrying out those activities” (p. 958). In stark contrast, “when focused on a single primary task, our attentional resources are well directed and uninterrupted, and information is adequately processed, encoded, and stored” (Sana et al., 2013, p. 24). Not surprisingly, controlled laboratory studies routinely produce heightened errors when comparing multitasking students versus non-multitasking students when tested against focus assessments (e.g. driving tests or Stroop assessments) and memory recall tests (e.g. multiple-choice questions following a written passage) (Bailey & Konstan, 2006; Bellur, Nowak, & Hull, 2015; Kraushaar & Novak, 2010; McVay & Kane, 2010). The multitasking mind is an inefficient and ineffective mind, arduously toggling between tasks while further perpetuating the chance for greater error.

Yet some people do multitask “better” than others. Students with heightened measured intelligence and working memory capacity have been linked to heightened levels of multitasking proficiency, yet they are not immune to the same eventual limitations (Cowan et al., 2006; Unsworth et al., 2012; Unsworth et al., 2010).
Those with higher self-regulatory awareness and self-regulatory behavior also tend to multitask with less detrimental consequences (Rosen, Carrier, et al., 2013). This suggests that students can regulate their multitasking and mediate their behavior to limit the negative effects. Last, Ophir et al. (2009) found that those who were calculated to be high media multitaskers were, in fact, comparatively worse multitaskers in terms of proficiency and efficiency than those who were calculated to be low multitaskers. Ironically, Ophir et al. also found that high multitaskers believed they were more efficient and effective multitaskers. Such variations in multitasking aptitude and self-reflection may explain variations observed as part of this study.

Suggested and Tested Responses

As the Literature Review has shown, existing media multitasking research has sufficiently diagnosed the media multitasking phenomenon (e.g. motivation, effects, breadth, associated challenges, etc.). However, media multitasking research has yet to identify promising ways to prevent it, even as a handful of approaches to regulating media multitasking have been suggested by practitioners and researchers. Some call for limits or bans on technology (e.g. Turkle, 2015), while others encourage student “technology breaks”, so as to incentivize their temporary attention (e.g. Rosen, Whaling, et al., 2013). Others encourage full integration of the distracting devices into the pedagogical plan (Willingham, 2010), or encourage practitioners to discuss “the consequences of laptop use with their students” in an effort to catalyze a student’s own self-regulatory efforts (e.g. Sana et al., 2013, p. 30; Terry et al., 2016).

Research illuminates fundamental flaws with all previous recommendations. For example, technology bans, limits, or breaks are only as effective as the instructor who controls the technology or behavior, and only possible when a student’s scholastic efforts are monitored or controlled (e.g. happening in the classroom). This is problematic, as students often struggle to regulate media both in and out of the classroom (Junco, 2012; Turkle, 2015). Research has also found that banning or removing student device(s) does little to address the anxiety and subsequent distraction, thereby creating an equally undesirable state of inattentive anxiousness (Rosen, Carrier, et al., 2013). More, technology bans or limits are quite impractical given many modern teaching environments with technology-infused pedagogical approaches or large lecture halls where clandestine technology use is prominent (Sana et al., 2013). In fact, most post-secondary institutions rely on student cellphone access/use in crisis situations such as active-shooter situations or bomb threats (Tindell & Bohlander, 2012).

The recommendation to heighten student awareness and thereby empower greater regulatory behavior has also proven to be problematic and ineffective. Students acknowledge that technology can be distracting and verbalize known potential negative effects; and yet, they still persist in exhibiting media multitasking behavior (Terry et al., 2016). This disconnect between awareness and behavior is akin to other addictive behaviors such as smoking and suggests that technology misuse persists despite an acknowledged risk (Rosen, Whaling, Rab, Carrier, & Cheever, 2013; Richtel, 2014; Strayer, Watson, & Drews, 2011).
Without verified, real-world examination of effective responses, practitioners are ill-equipped to address and mitigate the rampant behavior. Existing research hypotheses have often left practitioners feeling caught between two choices in how to respond – permit or restrict the behavior (Hassoun, 2014). Permitting the behavior comes with the near-guaranteed risk of distracted and disengaged students. Conversely, restriction is often unsuccessful. As Hassoun notes, “such regulatory efforts are frequently unsuccessful, with many students sidestepping policies through strategies of concealment” (p. 1682). More, there is research to suggest that external control of individual devices amplifies student anxiety and frustration (Rosen, Whaling, Rab, et al., 2013). Conversely, “since banning [technology] may prove to be impractical if not impossible in the long run, instructors can take the opposite approach: They can create ways to make smartphone usage contribute to the learning environment” (Grinols & Rajesh, 2014, p. 92). And while this theoretical argument is logically espoused, there is little empirical evidence to support such an approach. In fact, research more commonly argues that the heightened potential for distracted behavior and hampered learning is positively associated with the mere presence of, and access to, personal technology in the learning environment (Aagaard, 2015). In short, no generally credible response to technology misuse has been identified, studied, or verified, and the need remains to identify and examine an approach to regulating media distracted behavior that works in and out of the classroom, can be self-directed, and is not reliant on an outside agent, such as a teacher. This study suggested that regulating software applications may be a credible response.

It is ironic to suggest that technology may be the answer and most efficacious response to a technology-relevant problem. If, however, the affordances of a particular technology can empower desired outcomes, such as focused attention, the particular technology may be abetting a motivation to be focused, rather than distracted. And there is extant research to suggest that an individual’s motivation or behavior can be positively influenced by technology. At an applied level, research suggests that technology can positively influence behavior and motivation. Contemporary examples include technology’s role in promoting improved personal health care choices, increasing students’ collegiate retention through intentional text messaging, and heightening athlete motivation (Castleman & Page, 2015; Kratzke & Cox, 2012; Patel, Asch, & Volpp, 2015). In these examples, the affordances of technology – including the pervasiveness with which individuals use the available technology – yield positive results. At a theoretical level, there is also robust research related to the “relationship” between an individual and a piece of technology. Consider the ways in which technology can teach social understanding (e.g. structured multiplayer games such as The Sims, or social cues and rewards as part of a children’s game), reward behavioral efforts (e.g. Nintendo’s Pocket Pikachu, designed to motivate users to be active), and advance individual social and economic capital through purposeful wielding of technology (e.g. Facebook) (Fogg, 2002). Nass and Yen (2012) even suggested that we treat computers like other people; we argue with them, form bonds with them, and empathize with them.
Accordingly, human behavior and motivation can be positively or negatively connected with an individual’s use of technology. This study argued that the problematic tools of mass distraction may also present the key to heightened focus and attention. 23 CHAPTER THREE Pilot Study, Dissertation Study, and Research Questions Pilot Study In preparation for the semester-long study, a truncated, two-week pilot was conducted to test-run all aspects of the full study. This included, but was not limited to, the evaluation of the software applications, written instructions on how to download/install, the proposed qualitative questions (approachability, clarity of questioning, etc.), and the full measure (question sequencing, survey monkey design, time to complete, etc.). Students provided feedback via a survey as well as a facilitated focus group lead by the researcher. Twenty incoming first-year students registered in the summer session of Chemistry 1 participated in the pilot. This was a desirable sampling frame as the full study would ultimately involve three-hundred and eighty first-year students in Chemistry 1, during the fall semester. Students were incentivized via a free catered lunch, during which time the researcher asked reflective questions about students’ experience. There were four notable findings from the pilot, all of which ultimately informed the full study. First, those who participated in the pilot noted that social comparison and peer behavior heavily influenced individual participation in the pilot. This was not surprising as research has previously suggested that students are heavily influenced by peer distracted and focused behavior (Sana et al., 2013). As a result, the full study considered a contamination effect among student participants with the introduction of two, semi-random contamination treatment groups. Second, the pilot underscored the need to only consider smartphone application use rather than computer operating system and web-extension applications of the same software. While desktop and web- extensions were used by a small handful of participants (less than five) in the pilot, the shared 24 experience among all participants centered heavily around the smartphone application. Accordingly, the full study prioritized the smartphone applications use, solely. Third, the initial drafted survey measure was overly onerous as a result of the inclusion of the Media Multitasking Index (MMI) (Ophir et al., 2009) and too many qualitative questions. Students universally noted that they would “most likely not” complete the measure over multiple administrations. As a result, the codebook was revisited, the MMI was removed from the second and third administration, a truncated media use measure was drafted to capture similar information (e.g. use of media) in place of the MMI, and the qualitative questions were cut by two-thirds. All measure components are outlined in the next Chapter. Last, it was decided not to use the software application, Freedom. This software application did not appropriately support non-iOS operating systems (despite claiming to support Android), was found to be cumbersome and less than intuitive. Worse, Freedom failed to work in a predictable and reliable manner for student participants (e.g. the app often crashed). Adversely, Cold Turkey and Forest proved to be easy to install and use, reliable, and bug free. 
More, neither application required a profile or user-account with the software company to use the application, unlike other applications such as Freedom. Regulating Applications Two different regulating software applications were ultimately included as part of the Dissertation study: Cold Turkey and Forest (https://getcoldturkey.com/) and (https://www.Forestapp.cc/en/, respectively). The two applications shared similar affordances and design considerations. For example, both require the user to “activate” or “turn-on” the application. In other words, both require the user to set a pre-determined length of time they wish to be focused (i.e. a “timer”). Additionally, both can be used on student smartphones. Most 25 importantly, both Forest and Cold Turkey claim to abet an individual’s desire to be focused and attentive. Forest helps a user “stay focused” by putting “down your phone and [helping to] focus on what’ more important in your life.” Cold Turkey, similarly, “lets you temporarily block distractions to help you focus and finish your work sooner.” While both applications abet this desire by regulating the individual’s media distracted behavior, the two applications differ in their approach. The similarities between applications related to user-regulation and user-activation, but variance in approach, presented a desirable opportunity to examine the efficacy of regulation generally while also more narrowing examining if one approach prompted comparatively greater utilization. Cold Turkey was chosen because it “blocks distractions” and overrides the student user in moments of self-regulation failure. Cold Turkey purports to block distractions, break bad habits, help to build new habits, and help an individual to be more productive. As they state, “Think of Cold Turkey as a firewall for your attention.” Harkening self-regulation theory and Willpower (e.g. Baumeister & Tierney, 2011), Cold Turkey blocks incoming distractions and also blocks users from accessing distracting sites, such as Facebook, in moments of self-regulatory failure. There is research to support this type of intervention (e.g. see Turkle, 2015), and there is research to support the application approach (e.g. blocking) as media multitasking has been correlated with attentional failure due to external and internal stimuli (Rueda, Posner, & Rothbart, 2010). But, there is also research to suggest that simply blocking or removing the problematic technology can be short-sided, and unsuccessful (Cheever, Rosen, Carrier, & Chavez, 2014), as it does little to lower perceived technology dependence, and thereby can induce a level of unproductive anxiety. More, blocking and overriding the user may introduce 26 new challenges, such as the need to block some online behavior (e.g. viewing Facebook) but not others (e.g. accessing a learning management system such as Canvas or Desire 2 Learn). Forest offers a different approach. Whereas Cold Turkey blocks distraction originating from outside the user and overrides the user from navigating away from the application towards off-task behavior, Forest prompts the user to be mindful, and an active agent in their intentional choice to remain focused. Like Cold Turkey, Forest calls on the user to execute self-regulatory decisions towards task-relevant material and avoid task-irrelevant tendencies. Leveraging behaviorist learning mechanisms, Forest encourages the user to achieve their set goal by rewarding them with the flourishment of a plant (e.g. 5 minutes of focus), bush (e.g. 
20 minutes of focus), or tree (e.g. 60 minutes of focus), depending on the length of time determined by the user. See images included in Appendix A of Forest and Cold Turkey. As Forest states, “Forest provides an interesting solution to beat your phone addiction. You can plan a seed in forest…However, if you cannot resist the temptation and lave this app to check Facebook or play a game, your tree will wither away.” The assumption is, of course, that cultivating one’s “Forest” with plants, bushes and trees will positively reinforce task-relevant behavior such as attentional control and self-regulation while killing one’s Forest will discourage task-irrelevant behavior like media multitasking. As Forest states, “the sense of achievement and responsibility will drive our users to stay away from their phone.” As there was extant research to suggest that Cold Turkey’s blocking approach will work, there is also research to suggest that media distracted behavior can be mitigated through intentional choice and individual self-regulation, the underlying mechanism for Forest (Zhang, 2015). Forest’s use of rewards aims to prompt a sense of responsibility and accomplishment, thereby underscoring individual choice whereas Cold Turkey removes choice from the user. As 27 argued, both presented theoretically sound approaches towards limited media distracted behavior to the benefit of greater attention and focus. But both applications also rely on an important assumption related to utilization; quite simply, for either Forest or Cold Turkey to work, they must be used. And for either application to positively affect pervasive media distracted behavior, they must be used in an ongoing, consistent manner, akin to the effective value of a treadmill or workout machine. For this reason, it was important to consider how the applications abetted or thwarted ongoing utilization. This underscores the value in considering the user-experience in an effort to illuminate if a particular approach is more successful in abetting ongoing utilization. Relevant to these applications, did assumed self-regulatory failure prompt greater utilization at the expense of control and choice, or did reward or punishment prompt greater ongoing utilization at the expense of potential self-regulatory failure? Accordingly, student motivation by way of application experience was considered through a self-determination perspective. Dissertation Study This dissertation study involved a quasi-experimental, nonequivalent groups design administered over a short-term longitudinal period, with three query times spanning the duration of a standard collegiate semester (e.g. 16 weeks). Surveys were sent to participating students at the beginning, middle, and end of term. The study presented a qualitative sequential explanatory data analysis wherein the qualitative data served as a secondary dataset meant to extrapolate discussion related to quantitative analysis (Creswell, 2009). By introducing an embedded qualitative data set into a sequential explanatory design (quantitative data collection > quantitative data analysis > qualitative data collection > qualitative data analysis), the qualitative data served as a secondary 28 dataset within a larger dataset (Creswell, 2009). As no other known study had examined regulatory applications, there existed the obvious potential for significant variability within the study. 
For example, while this study presented regulatory applications as a potentially credible response to media multitasking behavior, the qualitative data may capture relevant unmeasured variables or factors such as a student’s inability to regulate non-technology forms of distraction and multitasking such as external peer stimuli. Accordingly, the qualitative data within a sequential explanatory design can “be used to examine any surprising results in more detail” or deepen discussion with quantitative findings (Creswell, 2009, p. 211). This was the first known media distracted quasi-experimental study to consider student behavior, performance, and motivation in an authentic, long-term real-world environment. Research Questions Do the applications work? In an effort to assess the credibility of regulating applications as a response to prevalent distracted behavior, this study compared the effects of different regulatory applications on students’ media multitasking behavior, self-regulation (as measured by attentional control and behavioral regulation), perceived dependency on technology, engagement, and academic performance (as measured by achievement in the course). Broadly, the first set of research questions and associated hypotheses addressed the question: “Do these regulating applications work as intended?” RQ1: What effect does the use of regulation software have on reported media distracted behavior? 29 H1: It was hypothesized that the use of regulating software would reduce students’ media multitasking reported behavior as compared to students who did not use these regulating applications. RQ2: What effect does the use of regulation software have on student regulation, as measured by control of attention and behavioral regulation? H2: Based on research indicating that self-regulation can be a strengthened (Baumeister & Tierney, 2011), it was hypothesized that the use of regulating software would increase attentional control and behavioral regulation as compared to students who did not use these regulating applications. RQ3: What effect does the use of regulation software have on technology dependency? H3: As the commonly associated variable with media multitasking is emotional reward, so- called “top-down” approaches involving self-regulatory cognitions and external regulation strategies such as software applications may be misdirected. This study therefore considered whether regulatory application use reduces perceptions of dependency for technology, which is defined as held “anxieties about missing out on technology as well as feeling dependent on technology” (Rosen, Whaling, Carrier, et al., 2013, p. 2507). If a reduction occurred as hypothesized, then it may be inferred that top-down approaches such as regulating applications can override behavior that is conditioned by emotional rewards. Conversely, if regulatory application did not reduce dependency for technology, then it may be because bottom-up drivers like emotional rewards are simply too powerful to change by top-down regulatory strategies. 30 RQ4: What effect does the use of regulation software have on student engagement in the Chemistry course? H4: Framing student engagement as a cognitive and behavior manifestation of an individual’s effort, motivation, and attention towards the scholastic environment (Jang, Joo, & Reeve, 2016), it was hypothesized that the use of the regulation software would also increase students’ engagement with course material compared to students who did not use these regulating applications. 
Media distracted behavior and academic performance. As the literature review clearly argued, media multitasking and distraction research has garnered popular salacious attention as a result of the seemingly causal, undesired effects, such as distracted driving, impaired interpersonal skills and communication, and distracted student scholarship. Consider, for example, the forewarnings of Carr (2011) who argues we, as a society, are turning into shallow thinkers. Or Campbell and Twenge (2015) who suggest media use produces narcissistic, shallow friendship and social capital. Or Turkle (2015) who argues that we are losing our ability to hold sustained conversations or Stone (2009) who argues that we’ve become a society of continuous partial attention. The concerns are legitimate and the research is convincing. But as it pertains to scholarly performance and real-world situations, a multitude of factors influence and, potentially predict, scholarly achievement and performance to a greater degree than media multitasking behavior alone (Hassoun, 2014; Junco, 2012). This study’s examination of real-world achievement may potentially illuminate an important gap in the media multitasking research while addressing a 31 potentially sensational yet unsupported assumption that media distracted behavior “guarantees” poor performance. Research suggests that those who report heightened multitasking and distraction will, also, perform comparatively poorer to those who report less multitasking and distraction (Bowman, Waite, & Levine, 2015; Junco & Cotton, 2011; Sana et al., 2013; Wood et al., 2011, Zhang, 2015). And while laboratory-based studies intimate near-causal comparatively poorer performance with distracted behavior (End, Worthman, Matthews, & Wetterau, 2010), no known research has examined student performance over the course of a term against media multitasking behavior when controlling for initial content competency or whilst examining a full battery of motivational considerations. If, for instance, incoming competency is found to be more highly correlated and suggestive of overall performance as compared to media multitasking behavior, the most promising “intervention” may be more focused on course content, rather than student technology use. RQ5: What effect does the use of regulation software have on academic achievement in the course? And, to what degree did student media use and distraction correlate with overall performance in the course as compared with additional relevant considerations such as motivation and incoming content competency? H5: Because attentional failure and prevalent media multitasking behavior have been strongly correlated with diminished academic achievement (Cain, Leonard, Gabrieli, & Finn, 2016), it was hypothesized that the use of regulation software would increase academic achievement, as measured by course grades. Additionally, it was hypothesized that increased achievement would 32 correlated, in an expected manner, with higher-levels of student engagement, regulation and motivation, and lower levels of perceived anxiety and dependency. Motivational, long-term effects. Students’ motivation was also examined in order to understand how the applications affected students’ willingness and reasons for engaging in Chemistry. Assuming the regulatory applications worked as intended (i.e. 
lowered media multitasking behavior, technology dependency, and increased attentional control, engagement, and achievement), then students’ motivation in Chemistry should also increase as they would feel more efficacious in learning and develop a deeper appreciation for the value of Chemistry. Conversely, if the applications did not “work” as intended, then students’ motivation would not change compared to control conditions or, perhaps, even decrease as negative experiences with the regulatory software was generalized to the Chemistry course. An important part of this inquiry was examining whether long-term changes could be attributed to the use of regulatory applications. Analogous to the immediate and acute value of technology bans during a class lecture, regulating applications may work when used, but otherwise present little lasting effect. However, if significant shifts in student motivation occur, then one could argue that the applications produced more enduring outcomes. In particular, this study suggests that positive motivational changes may positively affect the self-regulatory efforts of the students now abetted by the regulatory applications. Accordingly, by broadly connecting student application use to changes in motivation, this study considers the way motivational dynamics relates to changes in self-regulatory control inhibition and approach. In short, the second set of research questions and hypotheses addressed the question: “How did the regulatory applications work?” 33 RQ6: As a result of regulation application use, do students report significant shifts in held expectancy-value motivation? H6: As students with higher regulatory control frequently perform at higher levels in their educational pursuits as they structure and execute efforts towards the achievement of a learning goal, often while inhibiting task-irrelevant or media multitasking tendencies (Wei, Wang, & Klausner, 2012), it was hypothesized that the use of regulation software would increase their expectancy-value beliefs – i.e., students would expect to do better in Chemistry and value the course further. RQ7: What effect does the use of regulation software have on student achievement goals, as measured by mastery approach, performance approach, and performance avoidance? H7: In concert with increases in self-regulation, engagement, and expectancy-value, it was also hypothesized that the use of regulation software would increase mastery approach and performance approach goals and decrease performance avoidance goals. RQ8: What effect does the use of regulation software have on students’ self-efficacy in Chemistry? H8: In keeping with increased regulatory control, it was hypothesized that the use of regulation software would also increase students’ self-efficacy in Chemistry. Theorists argue that, “students who have the capabilities to detect subtle progress in learning will increase their levels of self- satisfaction and their beliefs in their personal efficacy…Clearly their motivation does not stem from the task itself but rather form their use of self-regulatory processes such as self-monitoring, and the effects these processes on their self-beliefs” (Zimmerman, 2002, p. 66). 34 Continued use of the applications. The aforementioned lines of inquiry – “do they work” and “how do they work” – assume student continued use of the software over time. However, it was also possible that students choose to not use the applications, even when they are personally motivated to regulate their behavior. 
Accordingly, it was also important to consider the way specific features of the applications potentially support or thwart students’ motivation to continue using the application. Self-determination theory offers one way to conceptualize how application features may affect students’ motivation to use them. Broadly, self-determination theory suggests that “specific psychology nourishments present in activities are necessary for activities to be experienced as inherently enjoyable or fun, and it is these nourishments that influence the effects the activities have on motivation and well-being” (Przybylski, Rigby, & Ryan, 2010, p. 155). More, self-determination theory research shows activities foster “greater intrinsic motivation to the extent to which they satisfy three fundamental human needs: the need for competence (sense of efficacy), autonomy (volition and personal agency) and relatedness (social connectedness)” (p. 155). Accordingly, students who perceive a desirable level of autonomy and competence are more likely to be intrinsically motivated and exhibit continued effort and engagement (Grolnick, Kurowsky, Dunlap, & Hevey, 2000). Conversely, students who perceive they are being controlled or do not have sufficient competence in a task are more likely to exhibit lower levels of effort and engagement (Chen & Jang, 2010). While both applications promote focused behavior through a self-regulation approach, the two applications do so in different ways. Cold Turkey controls and inhibits the user by blocking external stimuli, while Forest encourages the user to stay regulated and on-task for a preset amount of time using rewards (i.e., plants, bushes, trees) and punishments (killing the 35 plants, bushes, and trees) for task-irrelevant behavior. By way of self-determination theory, the loss of individual control in Cold Turkey might thwart ongoing software use whereas the introduction of rewards in Forest might also be perceived as an external form of control, undermining their autonomy. In either case, the effect would decrease students’ motivation to continue using the applications (Burgers, Eden, Engelenburg, & Buningh, 2015; Chen and Jang, 2010). On the other hand, it was also possible that Forest might incentivize relatively greater use over the course of the semester compared to Cold Turkey. After all, Forest positively reinforces the user for achieving set goals with rewards and the growth of a Forest while Cold Turkey simply “ends the session” without any external reward or feedback. Moreover, Forest does not prevent or override the user from ending the session prematurely, should they choose to do so. As a result, it may be perceived as less controlling and more autonomy supportive than Cold Turkey, which does not allow the user to override a pre-determined session. Students might perceive Forest as being less controlling and more reinforcing than Cold Turkey, and in turn support more long-term use. In contrast, Cold Turkey might be perceived in the opposite way – i.e., as thwarting autonomy and providing no reinforcement, both of which undermine motivation, especially over time (Chen & Jang, 2010). To measure these hypothesized disparities, this study also assessed autonomy, competence, and self-determined motivation (external regulation, intrinsic motivation, amotivation, and identified regulation). RQ9: Do the regulating applications have differential effects on students’ self-determination motivation and continued use? 
36 H9: While Cold Turkey and Forest both necessitate a level of individual (self-regulatory) activation (i.e. the choice to use the application or preset a timer), only Cold Turkey thwarts user autonomy by way of total external control. It is therefore hypothesized that Forest will result in comparatively greater autonomy, intrinsic motivation, and identified regulation, and less extrinsic motivation, introjected motivation, and amotivation. 37 CHAPTER FOUR Methods Overview This study was a quasi-experimental, nonequivalent groups design administered over a semester. Using three waves of measurement, the study considered within-participant variation (e.g. shifts in individual media multitasking behavior) as well as between-group variation (e.g. differences between control and experimental group behavior). The also study included qualitative data collection as prescribed with a mixed methods sequential explanatory design protocol whereby the qualitative data corroborates, deepens, or clarifies quantitative findings (Creswell, 2009). Sample The study was administered at Colorado School of Mines, in the 2017 Fall Semester. All students enrolled in four separate sections of CHGN101 (colloquially known as “Chemistry 100” or “Chemistry 1”) were invited to participate in the study via e-mail, following a brief visit to each of the four sections by the lead researcher. However, only students 18 years of age, or older, were permitted to participate. Students validated their eligibility with their unique Campus Wide Identification Number (CWID) and institutional e-mail (e.g. spartan@mines.edu). Students who were interested in participated but were not 18 years of age at time of consent were awarded the full 30 extra credit points by course faculty. Chemistry 100 is an entry-level university course. The assumption is that all enrolled students have the necessary requisite knowledge to succeed in the course. All incoming students, with the exception of those who have previously earned Chemistry 100 credit (traditionally less than 20% of the incoming class), were registered in Chemistry 100 in the fall term. The vast 38 majority (more than 95%) of students enrolled in Chemistry 100 are first-year students of traditional college age (e.g. 18 or 19 years of age), though there were a handful of transfer students as well as repeat students (e.g. those who failed to pass the course in previous terms). Student demographics data (e.g. first-year, transfer, repeat, etc.) are reported in Table 1, found in Appendix A. The four course sections of Chemistry 100, while independent from one another, were identical in learning outcomes, course content, lesson plans, class assignments, and pedagogical approach. The four sections were taught by two lecturers and two coordinators. As a team, the four professors shared responsibility for ensuring continuity between the four sections. For example, lesson plans were shared between all four sections as were modules and labs. Last, students from the four sections were given the same examinations at the same time (e.g. a common exam, at a common hour). Thus, the random enrollment of the four sections, shared pedagogical approach, and common exams increase confidence that any difference between sections can be attributed to the independent variable. Across the four sections, 380 students enrolled (n = approximately 90 to 100 per section). 
This accounts for approximately 35% of the total first-year student population, and approximately 9.5% of the total undergraduate enrollment. Chemistry 100 is a required course for all Colorado School of Mines students regardless of major or institutional college, and as such the course presented an authentic sampling of the total undergraduate population. Moreover, enrollment in a specific section was random and administratively handled by the institution (i.e. students may not self-register for particular sections of the course). While the course enrollment process increases the study’s generalizability, the population was not without its own limitations. Colorado School of Mines (Mines) is a rigorous and 39 selective public Science, Technology, Engineering, and Mathematics (STEM) and applied- science university that only confers Bachelor of Science degrees at the undergraduate level. Admitted students in 2016 were in the top 10% of their graduating high-school class and had an average high school GPA of 3.8 and an ACT score of 32. Like many STEM and applied-science universities, the school is predominantly male, with 30% of the undergraduate population identifying as female. The school is also fairly racially homogenous, with 18.5% of the undergraduate population identified as belonging to an underrepresented minority; and an additional 8% represent international, non-US citizens. It follows that Mines should not be equated with all four-year institutions and is more comparable to highly-rigorous research institutions such as MIT, Cal-Tech, Stanford, Georgia Tech, CalPoly, and NYU Polytechnic, all of whom are identified peer institutions for Colorado School of Mines. Of note, however, a previous study administered at Colorado School of Mines did present generalizable national data in the areas of technology usage and student-held attitudes towards technology, despite these acknowledged demographic differences (Terry et al., 2016). Procedure Student operating system distribution and class enrollment combined to make the stratified levels for random assignment to the five treatments groups: Cold Turkey and Forest (those explicitly asked to use the Cold Turkey or Forest application, respectively), Cold Turkey Control and Forest Control (those registered in the same class as students in the Cold Turkey and Forest experimental groups, respectively), and a Pure Control (participants enrolled in a separate course with no participants from the other four groups). 40 All five groups completed the same measures, with the exception that students in the Cold Turkey and Forest experimental groups received slightly longer measures due to application-specific questions (e.g. frequency of use of application). All five groups received the same e-mails, directions, and correspondence. Students were incentivized to participate. Those who participated received 30 extra credit points towards their overall Chemistry 100 grade. Additionally, students who participated had an opportunity to win randomly awarded Amazon Gift Cards valued at $20.00. Of note, the Cold Turkey and Forest groups were also given paid, premium downloads as part of their participation in the study. The opportunity to award gift cards and provide premium downloads was supported through a fellowship conferred by the College of Education, at Michigan State University. Conditions and design. Following collection of the first administration, course sections were assigned via random assignment. 
Four Chemistry 1 course sections participated in the study. Random assignment was used to identify a single course as Pure Control. Additionally, random assignment was used to assign an additional single section as Forest and Forest Control. The remaining two course sections were identified as Cold Turkey and Cold Turkey Control, with members of both groups in each of the two sections. Once the course sections were randomly identified, stratified random assignment based on operating system and class section enrollment. Stratified assignment was necessary because Cold Turkey was only accessible on Android operating system, whereas Forest was iOS and Android compatible. The prevailing use of iOS contributed to the necessity of identifying two of the four class sections as Cold Turkey and Cold Turkey Control, which is only accessible on Android. Table 2, found in Appendix A, summarizes the timing of all procedures. The lead researcher visited all four Chemistry 100 sections at the beginning of the term. During this visit, 41 the lead researcher asked students to consider participating in the study. The call for participation was kept brief and consistent for all four sections. It was presented as a study that considered “Study habits in Chemistry 100” and subsequently the measures, of “which there would be three over the course of the term would consider student motivation, opinions about effort, technology use, and individual student performance in the course”. All students were invited to participate via their institutional e-mail. The Consent to Participate Disclosure generally noted: “Students are being asked to participate in a semester- long study that considers their academic performance in the Chemistry 100 course against their use of technology and self-reported thoughts related to motivation, such as their perceived ability to do well in Chemistry 100.” E-mails were distributed via MailChimp. Following the initial call for participation, and the two subsequent calls for survey completion, MailChimp was used to send reminder emails – one reminder per administration. Following assignment of participants to their conditions, a second e-mail was sent to those who have agreed to participate and completed the first measure. Those enrolled in the stratified Pure Control group received an e-mail acknowledging their participation. Additionally, this e-mail requested that they complete forthcoming measures, when distributed. Those assigned to the four additional groups (Cold Turkey, Control Turkey Control, Forest and Forest Control) received a version of the same e-mail sent to the Pure Control group. That e-mail also referenced research that suggests that misuse of technology and distracted behavior with technology is routinely correlated with hampered learning. Those in the purposeful contamination groups were encouraged to seek individual proactive ways to focus their scholastic efforts, while those in Cold and Turkey were encouraged to use the highlighted application. For example, “Technology misuse and student distraction often linked with poor 42 academic performance. Cold Turkey application may prove to be a promising and helpful tool in your own study habits.” Table 3 summarizes student communication. Students were not made aware of other treatment conditions or planned experiences. Installation directions were sent by the researcher to students individually. The researcher was available, via e-mail, phone, and in-person to assist with any technological matter. 
The e- mail relaying account information also linked to the software application webpages with helpful guides and tutorials on how to use the software; though, it should be noted that these two applications present user-friendly and intuitive design. Finally, two follow-up emails were sent to all five groups. These emails acknowledged their ongoing participation in the study and noted when a future survey would be sent. These follow-up emails were distributed approximately one week before the next measure. For the Cold Turkey and Forest groups, the first follow-up e-mail also reminded the students of their paid, personal accounts with the software and encouraged the use of the applications in a similar manner to the first e-mail. While reminding the students of the applications may arguably confound the results as we remind students to use the applications, it may also heighten utilization of the apps, a comparatively desirable alternative. In an effort to limit the degree to which it may confound the data, the reminder e-mails used much of the same verbiage from the initial e-mail. Additionally, follow-up e-mails may prove to be a particularly important effort for the Pure Control group who may present comparatively greater attrition as the semester progresses (Rubin & Babbie, 2009). Data collection. Three surveys were administered during weeks when the Chemistry 100 professors felt adequate student attention could be given to the measures, rather than preparation for an examination or work on significant lab or homework assignment. The first survey was 43 distributed one week after the entrance diagnostic exam. The second was sent one week after the second course examination. And the third, final survey was distributed one week prior to the final course examination. All measures were distributed via Mailchimp and hosted on SurveyMonkey (www.surveymonkey) to an account only accessible to researchers. Students had the opportunity to withdraw from the study and request that their data be purged from analysis. Measures Measures assessed media distracted behavior, behavioral regulation, perceived dependency on technology, student engagement, academic achievement, motivation (e.g., expectancy-value, self-determination, etc.) See Appendix C for the complete Dissertation Codebook. Media distracted behavior. Accurately measuring media distracted behavior is an increasingly challenging endeavor. With the advent of newer, more ubiquitous and more clandestine technology, the traditional approach of observation is problematic, and untenable (Rosen, Whaling, Carrier, et al., 2013). As Rosen et al. state, “With a Wi-Fi enabled mobile device, people can access the Internet, e-mail, text, and use applications that can do most traditional computing activities anywhere and at any time of the day or night and research shows that people are doing just that” (p. 2501). While some innovative approaches have been utilized, such as monitoring software (e.g. Ravizza, Uitvlugt, & Fenn, 2016) and eye-tracking software (e.g. Calderwood, Ackerman, and Conklin, 2014), these innovative approaches present significant limitations related to their participant intrusiveness as well as limited distribution. Other approaches, including self-report and experience sampling, are more common. These are not, however, without their limitations as students often report erroneous/incorrect data (Junco, 2013). 
44 In place of the full Media Multitasking Index, a nineteen-question subscale was created to assess student media use and distracted behavior. Twelve of the nineteen questions pertained to media use, measured in frequency, in and outside of the Chemistry 100 class. Analogous to the MMI scale, these twelve questions asked about social media, game play, watching videos, doing homework, surfing the web, and communication (text, message, or email). As an example, “While attending Chemistry…On average, how often do you watch videos?” Answers were on a five-point Likert scale ranging from “Never” to “Very Frequently”. questions were adopted from Junco (2012). The remaining seven questions pertained to distracted behavior and were taken from previous studies (Killingsworth & Gilbert, 2010; Ravizza, Hambrick, & Fenn, 2014). These seven questions consider student beliefs related to their own distracted behavior and that of their peers. For example, “In thinking of your use of technology, to what degree do you believe your multitasking or distracted behavior affects your learning?” and “In thinking of yourself as a student, generally I am often distracted or engaged in non-schoolwork efforts when I should be studying or listening.” These were valuable additions to the overall measure as they concretely consider student beliefs related to their own distracted impact or effects. Behavioral regulation. Behavioral regulation was measured with two scales: Control of Attention and Behavioral Regulation. For Control of Attention, this study administers an adapted form of the Control of Learning Beliefs subscale from the Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich, Smith, Garcia, & McKeachie, 1991). This measure considers attentional efforts specific to student behavior during scholastic endeavors, such as “If I try, then I will be able to focus and direct my attention to learn the material in this course”. Questions pertaining to behavior associated with scholastic endeavors is a desirable alternative to general, 45 self-report questions related to attention and multitasking (e.g. questions found as part of the Multitasking Preference Inventory) as such questions are, indeed, a poor indicator to actual multitasking behavior (Terry et al., 2016). Additionally, this study used a Behavioral Regulation subscale adopted from the MSLQ. This six-question subscale presents questions such as, “I made sure to keep up with my weekly readings and assignments for Chemistry” and “I forced myself to finish my Chemistry coursework even when there were other things I’d rather be doing.” This scale focused more on the regulating behaviors, rather than efforts captured as part of the behavioral engagement scale from Skinner, Furrer, Marchand, and Kindermann (2008) (see below). The inclusion of both subscales provides a complete behavioral lens for analysis – both effort and action. Dependence on technology and anxiety without technology. The Media and Technology Usage and Attitudinal Scale (MTAUS; Rosen, Whaling, Carrier et al., 2013) was used to assess Dependency on Technology, and Anxiety without Technology. As Rosen, Whaling, Carrier et al. (2013) and Terry et al. (2016) exhibited, this particular subscale presents high reliability and validity in examining student media multitasking tendencies. Questions from this scale include, “I get anxious when I don’t have my cell phone” and “I am dependent on my technology”. Student engagement. 
Student engagement was operationalized in terms of Behavioral Engagement and Persistence. Behavioral Engagement was assessed using a subscale adapted from Skinner et al., (2008), which considers the degree to which a student actively engages in the course material. Questions were adapted to fit the Chemistry course. For example, “I tried hard to do well in Chemistry.” Additionally, this scale used an adapted form of the behavioral regulation subscale from the MSLQ (Pintrich et al., 1991). In addition, this study presents a 46 Persistence subscale adapted from Fan and Wu (2016). As example item is: “Even if my school is dull or boring, I keep at it until I am finished.” Academic achievement. All students enrolled in Chemistry 100 are required to take a pre-term content assessment/examination. This is a graded examination administered in the first week of school administered with the intent of assessing incoming competency. Similar to the pre-term assessment, the final cumulative exam considers student competency with all course material. Between the pre-term assessment and the final cumulative exam, Chemistry 100 presents three content exams. These are different from the pre-term and final exam as they only pertain to content taught in the weeks preceding the exam, whereas the pre-term and final consider the totality of course material. Additionally, each participant’s final course grade was collected, which reflects examination performance as well as laboratory performance, homework and attendance. Student motivation in Chemistry. Students’ motivation in Chemistry was assessed via expectancy-value, achievement goals, and self-efficacy. All scales used a 5-point Likert scale ranging from Strongly Disagree to Strongly Agree. Expectancy-value. For expectancy-value, the following subscales were administered: Interest value, Attainment Value, Utility Value, Opportunity Cost, Effort Cost, and Psychological Cost. The task values of interest, attainment, and utility were adopted from Conley (2012) and include questions such as, “I enjoy the subject of Chemistry”, and “It is important for me to be someone who is good at solving problems in Chemistry”, and “Chemistry will be useful for me later in life”, respectively. The cost values of opportunity, effort, and psychology were adapted from Perez et al. (2014) and present questions such as, “I have to give up a lot to do well in Chemistry”, and “When I think about the hard work needed to be successful in Chemistry, I 47 am not sure that studying Chemistry is going to be worth it in the end”, and “I worry that others will think I am failure if I do not do well in Chemistry”, respectively. Achievement goals. In addition, this study captured achievement goal orientations towards Chemistry, including Mastery Approach, Performance Approach, and Performance Avoidance. All three were adapted from PALS (2000) and Bonney (2006). They presented questions such as, “One of my goals in Chemistry is to learn as much as I can”, and “One of my goals is to show others that I’m good at Chemistry”, and “It’s important to me that I don’t look stupid in Chemistry”. Self-efficacy. Additionally, this study considers student self-efficacy with an adapted form of Wood and Locke’s (1987) more recently used in Fan & Wu (2016) self-efficacy scale. The seven-question measure included questions such as, “I am confident in my ability to concentrate and stay fully focused on the materials being presented throughout each Chemistry course”. 
Self-determined motivation to use regulating applications. Students’ self-determined motivation to use the regulating applications was assessed in terms of three fundamental needs (competence, autonomy, and relatedness). To assess autonomy, the 6-item Perceived Autonomy scale was adapted from Standage, Duda, and Nutoumanis (2005; see also Chen & Jang, 2010) to fit the Chemistry 100 application. As an example, “With Forest/Cold Turkey, I can decide how I want to use the application.” An adapted version of the Situational Motivation Scale (SIMS; Guay, Vallerand, & Blanchard, 2000) was used to assess intrinsic motivation, external regulation, amotivation, and identified regulation. An example item includes, “Because I think this activity is interesting” was adapted to read, “Because I think this software application is interesting.” 48 Additional considerations. Additional considerations included the frequency of application use, level of course preparation, demographic information, and prior use of applications and potential contamination effects. Frequency and course preparation. Students in the Forest and Cold Turkey groups were asked to note their frequency of use with the applications, respectively. Rather than quantifying use (e.g. hours of use), asking for frequency presents fewer conflating variables that may come about from a variance in study habits (Rosen, Whaling, Carrier, et al., 2013). Accordingly, students were asked how frequently they used Forest or Cold Turkey while studying outside of class with the possible answer options of Never; Seldom; Half of the Time; More Often than Not, All the Time. Additionally, students were asked how frequently they use Forest or Cold Turkey while attending class with the following possible answer options: Never; Seldom; Half of the Time; More Often than Not, All the Time. Additionally, students were asked for what duration they most commonly use the applications. Finally, students noted the number of hours they personally spent preparing for class. This may prove to be important in the instance students do not study for class, thereby undermining the potential benefit for a regulating application. Demographic information. This study captured general demographic information including age, race/ethnicity, native language, and year in school. In addition, students were asked to note if this is the second time they have taken Chemistry 100 at Mines or if they entered the institution as a transfer student. Additionally, students were asked if English was their native language, when they started at Mines (e.g. Fall 2017 or Fall 2016), if they had previously been enrolled in the course, and what operating system they used. Last, students were asked about their prior experience with regulating applications. 49 Prior use and contamination. Students were asked to note their previous experience with software applications as a means to regulating behavior. As part of the first survey, students were asked if they have ever used “an application to help manage their distraction and keep you focused”. If they answered in the affirmative, students were asked to note the name of the application and to note “to what degree the application has worked for you?”. As part of the final survey, students were also asked a series of contamination questions in order to understand neighboring student’ (e.g., the student seated nearby) in-class technology use and distraction (Sana et al., 2013). Questions included, “Have you spoken with any colleagues about this study? 
”and “Have you spoken with any colleagues about Forest/Cold Turkey?” Those who answered in the affirmative were asked if their conversation “influenced their participation in the study” and if they had begun using an application to “help manage their distraction and keep you focused”. Qualitative measures. A series of qualitative questions were also administered. Specifically, all students were asked two sets of questions. The first was: “Think about your ability to focus (i.e. avoid distractions) when studying Chemistry this term. Was it better than expected? Worse? Or about normal? How do you think your ability to focus affected your performance in Chemistry? Please explain.” The second was: “Did your ability to focus when studying Chemistry change over the semester? If yes, in what ways? If no, why?” Additionally, those in the Cold Turkey and Forest treatment groups were asked a set of questions related to the applications themselves, including what “worked” and what “didn’t work” about the applications. The questions were intended to be understandable for all participants (see Appendix C) 50 Data Analyses Quantitative analyses. Quantitative data was analyzed through a series of tests, all of which are clearly noted in the results section. Prior to full analysis of effects, individual reliability of each integrated scale was calculated. Additionally, outliers were checked across the board. Where appropriate, questions were removed from scale means so as to improve individual scale reliability. Additionally, Pearson-product correlations were calculated between subscales to ensure expected convergence or divergence (e.g. convergent validity). A series of linear mixed models (LMMs; Fitzmaurice, Laird, & Ware, 2004) were used to compare the relative effect of Cold Turkey and Forest on growth curve trajectories while accounting for the correlated error terms associated with repeated measures. Specifically, repeated measures (e.g., media distraction, motivation, etc.) were at Level 1 (“within” group variance) and student data (i.e., intervention condition) were at Level 2 (“between” group variance). Fixed-effect dummy coding was used to examine the main and interactive effects of time (coded 0, 1, 2) and condition (0, 1). Model building used a “top-down” strategy (West, Welch, & Galecki, 2007), starting with a “loaded” model that contained fixed effects for all covariates (including interactions), and then deviance statistics (e.g., -2LL) and a significance level of alpha = .05 in the MIXED procedure in SPSS 24 to compare model fit using different residual covariance structures and fixed effects associated with interactions and growth parameters (slope, quadratic). Finally, non-significant findings were trimmed from LMMs in order to confirm the robustness of findings and present the most parsimonious final model. Given the structure of our study and breakdown of treatment groups, “LMMs accommodate the nested structure of the data and the correlated error terms associated with the repeated measures” (Roseth, Lee, & Saltarelli, 2018, p. 8). In this study, the nesting is found 51 within the four classrooms and three repeated measures. In line with previous LMM research, “conditional models using fixed effect dummy coding were constructed to examine the main and interactive effects of time and condition” (Roseth et al., 2018, p. 8). In the results, this study only reports marginally significant (alpha < .10) effects when removing them resulted in a significant decrease in model fit. 
Throughout the model-building process, I also used normal probability plots to screen residuals for outliers and to check for violations of assumptions (e.g., normality, constant variance). Qualitative analyses. Acknowledging the large number of potential participants (n = 380) and the desire to adhere to the repeated measures design, students only received the qualitative questions (via open-ended text fields in the survey) as part of their third, and final survey. While all participants were asked to complete the open-ended questions, time constraints required that only a random sample of students’ qualitative data was analyzed. Specifically, fifteen randomly selected responses from each treatment were chosen for coding and analysis. In full, 75 participant responses were randomly selected and coded for each qualitative question. This accounts for 34.7% of the collected qualitative data provided by all five treatment groups (e.g. Question 1 and Question 2). For the remaining qualitative questions only answered by those in the Forest and Cold Turkey treatment groups, the 30 randomly selected qualitative responses accounts for 43.4% of the collected answers. Qualitative questions were coded via an open coding, line-by-line approach, with a subsequent inter-rater reliability procedure set at 80%, or better. To achieve this reliability standard, a qualitative codebook was constructed by the researcher. This codebook presented the various open-code answers and themes identified after reading through the data multiple times. It was not the intent of the qualitative analysis to create themes or categories (e.g. axial or 52 selective). Instead, it was the intent to quantify various responses in an effort to capture unmeasured and unconsidered responses at a raw-data level. This codebook and a random sampling of 20% of responses collected for each question was then shared with an independent colleague. This colleague independently coded data according to the Codebook created (see Appendix D). The codebook remained the same whenever 80% or greater perfect congruence was recorded. In instances where 80% was not met, the coding was reexamined as appropriate, and then redistributed with a new 20% sample of student responses. Data protection and integrity. All data was held on an encrypted hard-drive, with password protection. Participant names and Campus Identification Numbers were purged following curation of a random participant identification number. All SPSS calculations were run from drafted Syntax. 53 CHAPTER FIVE Results & Discussion Participant Flow Of the 380 students eligible for the study, 267 (70.3%) completed the first survey and noted an interest in participating in the study. Twenty students noted an interest in participating but were 17 years or younger at the time of study consent. No students wrote during the time of the study with a request to be removed or have data purged. Of the students who participated (n = 267), 63.3% identified as male and 36.3% responded as female (slightly higher than the undergraduate population percentage of 30% who identify as female). One student requested not to identify a gender. Additionally, 79.4% identified as white, 8.2% Asian/Pacific Islander, and 6.7% Hispanic/Latino. The remaining 5.7% were split between African American/Black, Native American/American Indian, Not Listed or Other, or Prefer to Not Respond. 
This demographic distribution nearly mirrors that of the undergraduate population of Colorado School of Mines where 18.5% of the undergraduate population identifies as belonging to an underrepresented minority. Also as expected, 88.4% of the participants were 18 years of age when consenting to the study. An additional 7.9% of students were 19 years of age. 93.3% of the students identified English as their native language. Last, more than 98% of the students participating started college in the same semester as the study, were new to college as a traditional first-year student and had never previously been enrolled in Chemistry 100 at Colorado School of Mines. Demographic distributions among the treatment groups are reported in Appendix A. Treatment fidelity. As noted in the Methods section, stratified-random assignment was used to assign experimental and control groups. The breakdown was as follows: 54 • Class Section 1: Forest Application and Forest Control (n = 39 and 27, respectively) • Class Sections 2 & 3: Cold Turkey and Cold Turkey Control (n = 45 and 83, respectively) • Class Section 4: Pure Control (n = 73) Pearson chi-square tests of independence indicated that demographic values were similar across all conditions, which suggests that random assignment worked as intended. Table 1, found in Appendix A, presents the chi-square test results (e.g. X2) alongside the demographic breakdowns by condition. Participation throughout the study was strong with 231 (86.5%) students completing the second survey and 216 (80.8%) completing the third survey. Participation rates are noted in Table 4. Measure Reliability & Convergence Validity Subscale reliability. Prior to any comparative analysis, Cronbach alpha calculations were completed for each subscale, at each measure administration (e.g. Time 1, 2, and 3). Control of Attention was found to have the weakest initial reliability of all subscales (α = .59). The subscale consists of four questions. By removing Question Four (“If I don’t understand the course material, it is because I didn’t focus my attention”), the reliability improved to an acceptable α = .63. All other subscales scales presented acceptable or desirable reliability with the complete set of questions presented. Of the twenty-two subscales with reliability calculations, eighteen presented reliability scores, on average between the three administrations ³ .75. See Appendix A for descriptive statistics and participant flow of the subscales, including mean and standard deviation for each treatment by each scale. In cases where outliers were removed, the means and standard 55 deviations were re-calculated without the outliers in order to assess the extent to which they biased the summary statistics. In each instance, the effect of the outliers was negligible and, accordingly, no outliers were removed from the study. Convergent validity. Pearson correlations were run between related measures to examine evidence of convergent and divergent validity. For example, as expected by theory and research, interest value, attainment value, and utility value were all positively correlated, with r’s ranging from .39 to .78, all p’s < .01. Likewise, opportunity cost, effort cost, and psychological cost were all positively correlated in the expected manner, with r’s ranging from .16 to .77, p’s < .05. Likewise, behavior regulation, engagement, self-efficacy, control of attention, and persistence were also positively correlated in the expected manner, with r’s ranging from .30 to .77, p’s < .01. 
Convergent validity. Pearson correlations were run between related measures to examine evidence of convergent and divergent validity. For example, as expected by theory and research, interest value, attainment value, and utility value were all positively correlated, with r's ranging from .39 to .78, all p's < .01. Likewise, opportunity cost, effort cost, and psychological cost were all positively correlated in the expected manner, with r's ranging from .16 to .77, p's < .05. Behavioral regulation, engagement, self-efficacy, control of attention, and persistence were also positively correlated in the expected manner, with r's ranging from .30 to .77, p's < .01. Media use in and out of the classroom were positively correlated, as were the course grades with one another (entrance exam vs. final vs. examinations 1, 2, and 3), with r's ranging from .44 to .90, p's < .01. Last, the six achievement measures (e.g., entrance exam, exam 1, exam 2) were consistently and highly correlated with one another, with r's ranging from .42 to .90, all p's < .01 (see Table 5). This strengthens confidence in the reliability of the achievement measures across time. Taken together, this pattern of correlations provided evidence of convergent and divergent validity and strengthens confidence in the psychometric quality of the study's measures. For a full report of all correlations, see Appendix E.

Media Distracted Behavior In and Out of Class

While attending class, students across all conditions reported rarely, or less than rarely (i.e., M = 1.0 to 2.0), using media to play games, watch videos, do other homework, surf the web, text, or engage with social media. Outside of class, students across all conditions reported sometimes, or less than sometimes (i.e., M = 2.0 to 3.0), engaging in media distracted behavior while studying. See Table 6 for all reported media use and hours preparing for class.

Hours Preparing for Class

Hours of preparation generally declined across the semester in all conditions. However, the pattern of change differed between Cold Turkey and Cold Turkey Control, as indicated by a significant quadratic difference between the two conditions, B = 0.30, p = .04. That is, hours of preparation for class among Cold Turkey students decreased from Time 1 to 2 and then increased from Time 2 to 3, while Cold Turkey Control students did the opposite. There were no other significant findings for hours preparing for class between groups or across time.

Frequency and Duration of Application Use

Simple descriptive statistics of application use and duration suggested similar levels of utilization, and a lack of change in utilization, for both the Cold Turkey and Forest applications. Specifically, on a five-point scale ranging from 1 (Never) to 5 (As Much as Possible), Cold Turkey and Forest participants both reported less than "seldom" utilization of the applications (i.e., less than 2.0) across all time periods, both while attending class and outside of class. See Table 7 for a descriptive breakdown of utilization, as well as the duration of use when used.
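The research question results that follow report fixed-effect estimates (B) and p values for condition contrasts and for linear and quadratic trends across the three administrations. As a rough, purely illustrative sketch of the kind of repeated-measures growth model such estimates imply, assuming long-format data with hypothetical column names (the study's actual models were run from SPSS syntax, not Python):

```python
# A sketch of a repeated-measures growth model with linear and quadratic time
# terms and condition contrasts against Pure Control. The data, column names,
# and exact specification are hypothetical stand-ins, not the study's models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_students, waves = 90, [0, 1, 2]                    # three survey administrations
conditions = rng.choice(["PureControl", "ColdTurkey", "Forest"], n_students)
intercepts = rng.normal(scale=0.4, size=n_students)  # student-level variation

df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), len(waves)),
    "condition": np.repeat(conditions, len(waves)),
    "baseline": np.repeat(intercepts, len(waves)),
    "time": np.tile(waves, n_students),
})
# Simulated outcome with a decelerating trend (negative linear, positive
# quadratic), loosely echoing the trajectories reported in this chapter.
df["outcome"] = (3.0 - 0.4 * df["time"] + 0.1 * df["time"] ** 2
                 + df["baseline"] + rng.normal(scale=0.3, size=len(df)))

# Random intercept per student; the fixed effects are the B-style estimates
# for condition contrasts and for the linear and quadratic time trends.
model = smf.mixedlm(
    "outcome ~ C(condition, Treatment('PureControl')) * (time + I(time ** 2))",
    data=df, groups=df["student"],
)
print(model.fit().summary())
```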
Research Question 1: Media Distracted Behavior

Research question: What effect does the use of regulation software have on media distracted behavior?

Media distracted behavior results: During class. We started by evaluating the intervention effects on media distraction while attending class. As displayed in Figure 1, results indicated that individuals with Cold Turkey reported lower rates of media distraction at the semester's midpoint (Time 2) as compared to Pure Control, B = -0.32, p = .01. However, this result was qualified by evidence that both Cold Turkey Control, B = -0.43, p < .001, and Forest Control, B = -0.48, p < .001, also showed significantly lower mid-semester rates compared to Pure Control. This suggests that merely being in the presence of others using Cold Turkey or Forest also diminished mid-semester levels of multitasking behavior. This pattern of findings changed over time, however, as Forest also increased media distraction compared to Forest Control at the end of the semester, B = 0.37, p = .03. There was also marginally significant evidence that Cold Turkey Control increased media distraction during class compared to Pure Control, B = 0.23, p = .052. Across the semester, there was no evidence that Forest significantly differed from Pure Control.

Media distracted behavior results: Outside of class. Next, we evaluated the intervention effects on media distraction while studying outside of class, finding a pattern of change opposite to that observed inside of class. At the semester's midpoint (Time 2), Cold Turkey students reported increased media distraction compared to Cold Turkey Control, B = 0.40, p = .01. However, this changed over time, as Cold Turkey students reported decreasing rates outside of class while Forest, B = -0.39, p = .03, and Cold Turkey Control, B = -0.34, p = .04, reported increasing rates. Moreover, Forest's rate of increase was also significantly greater than Pure Control's, B = 0.43, p = .02. As displayed in Figure 2, by the end of the semester these changes both eliminated the initial difference between Cold Turkey (M = 2.75) and Cold Turkey Control (M = 2.72) and increased the difference between Cold Turkey and Forest (M = 3.16). Thus, Cold Turkey decreased media distracted behavior outside of class over time while Forest increased it. This differs from the in-class results, where both applications increased media distraction over time.

Media distracted behavior discussion. Taken together, these findings provided mixed support for the original hypothesis that application use would decrease media distracted behavior in and out of the classroom and, broadly, suggest that the applications' efficacy varied by the environment in which they were used and by the specific application. During class, using Cold Turkey, and merely being in the presence of students using Cold Turkey or Forest, significantly lowered media distraction, and these effects remained relatively stable over time (i.e., little to no difference in slopes between conditions). This corroborates our hypothesis and is also consistent with extant research demonstrating the way peer-to-peer dynamics influence media distraction within classrooms (e.g., Sana et al., 2013). These findings also suggest that merely being in the presence of students using Cold Turkey or Forest may be as effective as using Cold Turkey itself in reducing media distracted behavior. Why might this be? Whereas Cold Turkey's effects would traditionally be attributed to its blocking of media, the effects in the Cold Turkey and Forest control groups suggest that simply being in the presence of other students who are using these applications might also change behavior by raising awareness of media distraction and/or increasing the social desirability of reporting lower levels of media distraction.

Adding more complexity to this picture, the efficacy of the two applications also changed over time, with Forest increasing media distraction at the end of the semester as compared to Forest Control. This was contrary to the hypothesized decrease over time and suggests that the two applications were not equally effective in mitigating media distracted behavior within the classroom environment. At the same time, however, it is important to note that media distraction during class increased over the semester in all conditions but Pure Control, which suggests that the efficacy of the applications may also be short-lived, demonstrating initial effects that do not endure over time.
I return to the question of why this might be below, when discussing the results for self-determined application use.

Outside of class, the effects of Cold Turkey and Forest were noticeably different. Rather than decreasing media distraction at the semester's midpoint, Forest and Cold Turkey students reported rates of media distracted use similar to Pure Control and greater than Forest Control and Cold Turkey Control. Here again, these findings are contrary to what was hypothesized. However, over time, Cold Turkey and Pure Control reported less media distracted behavior as compared to Forest, Forest Control, and Cold Turkey Control. This suggests that the effects of both applications on media distracted behavior outside of class emerged slowly over the semester, and that Cold Turkey was more effective than Forest. This is an important finding, as no extant research has compared the efficacy of different regulating applications on media distracted behavior. One explanation for this difference may be that the environment outside of class fosters less sensitivity among students to rewards and more openness to external control. In other words, outside of class, students' motivation may be directed towards activities that diminish the efficacy of Forest (i.e., reward sensitivity) and, relatively speaking, increase the efficacy of Cold Turkey (i.e., openness to external control).

Importantly, being in the presence of those with applications did not prompt others (those from Cold Turkey Control and Forest Control) to start using either application. Only 11 of 147 students from the pure control and contamination groups reported, as part of their last administration, that they were "aware of the software applications being used as part of the study." And of those eleven, none started using Cold Turkey or Forest during the time of the study.

Research Question 2: Student Regulation

Research question: What effect does the use of regulation software have on student regulation, as measured by control of attention and behavioral regulation?

Student regulation results: Control of attention. For control of attention, there were significant linear, B = -0.27, p < .001, and quadratic trends, B = 0.07, p = .03. This indicated that control of attention significantly decreased between Times 1 and 2 and then leveled off between Times 2 and 3. Figure 3 shows the differences between the three time periods and their altered slopes. At Time 1, before participants had received their treatment assignments, there were marginally significant differences between Pure Control and Cold Turkey, B = -0.14, p = .07, and between Pure Control and Forest, B = -0.15, p = .07. However, there were no significant differences between treatments at Times 2 and 3, so application use did not significantly improve or hinder students' control of attention.

Student regulation results: Behavioral regulation. For behavioral regulation, there were also significant linear, B = -0.48, p < .001, and quadratic trends, B = 0.11, p < .001. This suggests that behavioral regulation decreased over time but leveled off between Times 2 and 3 (see Figure 4), similar to control of attention. There were no significant differences between Cold Turkey, Forest, and Pure Control, nor between any other conditions. Here, too, application use did not differentiate reported behavioral regulation.

Student regulation discussion.
There is no evidence that application use increased student regulation, as measured by behavioral regulation or control of attention. Instead, regulation generally decreased across all treatments. This finding was inconsistent with the hypothesis that using the regulating applications would prompt greater behavioral regulation, and that the sporadically observed decreases in media distracted behavior (see RQ #1) would correspond with increased behavioral regulation. Instead, the results suggest that student regulation was unaffected – either positively or negatively – by application use. No other known research has considered the student regulation effects of applications designed to mitigate media distracted behavior.

Research Question 3: Technology Dependency

Research question: What effect does the use of regulation software have on technology dependency?

Technology dependency results. There was no evidence that application use affected perceived levels of dependency on technology. Instead, feelings of anxiety without technology and perceptions of dependency on technology remained constant throughout the semester in all conditions. As displayed in Figure 5, students generally noted a "neutral" response to the questions presented as part of the dependency on technology subscale, on a 5-point Likert scale from "Strongly Disagree" to "Strongly Agree."

Technology dependency discussion. It was hypothesized that student anxiety without technology and perceptions of dependency on technology would decrease over time, in accord with the hypothesized decrease in media distraction and increase in regulation. The results, however, failed to support this hypothesis. Further, the lack of change over time was particularly surprising given the aforementioned downward trend in student regulation, because one explanation for decreasing regulation might be that students' dependence on technology exhausted their ability to regulate its use. In fact, there was evidence of a strong positive correlation between media distracted behavior and technology dependency (see Appendix E), which is in line with prior research reporting a positive relationship between perceptions of dependency and media distracted behavior (Rosen, Whaling, Carrier, et al., 2013; Terry et al., 2016). Thus, this study's results suggest that some mechanism other than technology dependence depleting behavioral regulation may account for the observed increases in media distraction over time (see RQ #1).

Research Question 4: Student Engagement

Research question: What effect does the use of regulation software have on student engagement in the Chemistry course?

Student engagement results: Behavioral engagement. For behavioral engagement, there was also evidence of significant linear, B = -0.46, p < .001, and quadratic trends, B = 0.11, p < .001, reflecting a steep decrease from Times 1 to 2 followed by a leveling off between Times 2 and 3. There was no evidence of any significant differences between conditions at any time of collection, including Time 1, which predated the intervention (see Figure 6).

Student engagement results: Persistence. Similar to behavioral engagement, there was evidence of a significant linear trend, B = -0.14, p < .001, with persistence decreasing over time. However, there was no significant quadratic change between time periods, suggesting that persistence declined steadily throughout the semester rather than dropping sharply between Times 1 and 2. There was no evidence of any difference by condition (see Figure 7).
Student engagement discussion. Results provided no evidence that application use changed student engagement in the course, as behavioral engagement and persistence decreased over time in a similar way among all conditions. This suggests that despite lower media distracted behavior in the classroom for Cold Turkey, Cold Turkey Control, and Forest Control, and lower reported media distracted behavior for Cold Turkey outside of class towards the end of the semester, there was no corresponding increase in engagement. These findings failed to support the hypothesis that engagement would increase as a result of application use, increased student regulation, and lowered anxiety without technology. As no other known study has examined the effect of regulating applications on engagement, this finding neither challenges nor affirms prior work. It does, however, corroborate the internal findings that application use had no effect on regulation or technology dependency.

Research Question 5: Performance Achievement

Research question: What effect does the use of regulation software have on academic achievement in the course?

Performance achievement results. We started by evaluating the intervention effects on academic achievement, as measured by Exams 1 through 3. Results revealed a significant negative linear trend across all conditions over the course of the semester, B = -6.47, p < .001. Among the planned contrasts between conditions, Forest showed a significantly steeper decline than Forest Control (B = -3.48, p = .01) as well as a significantly steeper decline than Cold Turkey (B = 3.05, p = .01). As Figure 8 shows, exam scores generally decreased over the semester in all conditions, with Forest exhibiting a relatively steeper decline in achievement compared to Forest Control and Cold Turkey.

Next, we considered the intervention effects on students' overall course grade, controlling for the entrance competency assessment. Three significant differences between conditions were identified: Cold Turkey performed worse than Cold Turkey Control (B = -2.12, p = .001), Forest performed worse than Forest Control (B = -4.27, p < .001), and Forest Control performed better than Pure Control (B = 3.22, p = .04). This suggests that, contrary to predictions, the Cold Turkey and Forest applications both diminished students' overall course grades compared to their respective control conditions (i.e., Cold Turkey worse than Cold Turkey Control; Forest worse than Forest Control). One interesting exception to this pattern was that Forest Control performed better than Pure Control, which suggests that being in the presence of other students using Forest improved Forest Control students' course grades.

Performance achievement discussion. The data suggest that use of the regulating software did not present significant positive effects on academic performance in the course. This was the opposite of what was hypothesized. In fact, the results suggest that application use introduced unintended, and undesirable, achievement effects. There is no extant research related to the achievement effects of regulating application use. Accordingly, we are limited in the degree to which these findings concretely challenge or affirm outside research. However, there exists relevant and transferable research on the performance effects of personal fitness wearable technologies, such as Garmin and Fitbit.
That is, prior research on wearable technologies has shown that devices meant to improve performance can instead induce negative, adverse motivation (as measured through a self-determination construct) and thereby lower performance (Kerner & Goodyear, 2017). As Kerner and Goodyear (2017) noted, the technologies negatively affected individuals' feelings of internal and social pressure, guilt, and competition, thereby lowering overall motivation and undermining the desired performance outcomes, viewed through a health lens. Thus, one explanation of this study's findings is that application use lowered student motivation, thereby prompting comparatively lower achievement. Accordingly, the next three research questions are highly informative in validating or challenging this interpretation of the results for academic achievement.

Research Question 6: Expectancy-Value

Research question: As a result of regulation application use, do students report significant shifts in expectancy-value motivation?

Expectancy-value results: Interest value. For interest value, there was again evidence of significant linear, B = -0.42, p < .001, and quadratic trends, B = 0.11, p < .005, indicating that interest value decreased significantly from Time 1 to 2 but then increased slightly from Time 2 to 3. There was also evidence of multiple significant differences between treatments, but in the opposite direction than expected. Those in Cold Turkey, B = -0.60, p = .001, Forest, B = -0.45, p < .05, and Cold Turkey Control, B = -0.62, p < .001, all reported lower interest value as compared to Pure Control throughout all three time periods (see Figure 9).

Expectancy-value results: Attainment value. Once again, there was evidence of a significant linear trend, B = -0.49, p < .001, with attainment value decreasing over time. As with interest value, there were also significant initial differences between Pure Control and Cold Turkey (B = -0.31, p = .002), Forest (B = -0.32, p = .003), and Cold Turkey Control (B = -0.35, p < .001). As displayed in Figure 10, and contrary to the hypothesis, the Pure Control treatment group reported significantly higher attainment value throughout all three time periods as compared to the regulatory software conditions.

Expectancy-value results: Utility value. Following the same pattern, the linear trend was significant for utility value, B = -0.49, p < .001, as ratings decreased over time. Likewise, the Pure Control treatment group reported higher initial utility value as compared to Cold Turkey (B = -0.61, p < .001), Forest (B = -0.27, p = .06), Cold Turkey Control (B = -0.52, p < .001), and Forest Control (B = -0.20, p = .06) (see Figure 11).

Expectancy-value results: Opportunity cost. For opportunity cost, there was evidence of significant linear, B = 0.56, p < .001, and quadratic trends, B = -0.20, p < .001, indicating that opportunity cost increased from Time 1 to 2 and then leveled off or decreased from Time 2 to 3 (see Figure 12). There was also evidence of treatment differences, with students in Pure Control reporting significantly lower initial opportunity cost compared to Forest (B = 0.28, p = .033) and Cold Turkey Control (B = 0.23, p = .028).

Expectancy-value results: Effort cost. Once again, there was evidence of significant linear, B = 0.66, p < .001, and quadratic trends, B = -0.20, p < .001, indicating that effort cost increased from Time 1 to 2 and then leveled off or decreased from Time 2 to 3 (see Figure 13).
There was also evidence of treatment differences, with students in Pure Control reporting significantly lower initial effort cost as compared to those in Cold Turkey (B = 0.55, p < .001), Forest (B = 0.34, p = .016), and Cold Turkey Control (B = 0.43, p < .001). This trend remained throughout all three time periods, as seen in Figure 13. Thus, unexpectedly, students in the regulatory software conditions reported higher effort cost in Chemistry.

Expectancy-value results: Psychological cost. Again, there was evidence of significant linear, B = 0.21, p < .001, and quadratic trends, B = -0.08, p = .024, indicating that psychological cost increased from Time 1 to 2 and then leveled off or decreased from Time 2 to 3. However, unlike the other forms of cost, there were no significant differences between conditions at any time throughout the semester, underscoring the null effect attributable to the applications or to being in the presence of the applications.

Expectancy-value discussion. For the three types of expectancy-value (i.e., interest, attainment, and utility), the pattern of findings was the same for Cold Turkey, Forest, and Cold Turkey Control: contrary to the hypothesis, expectancy-value was lower as compared to Pure Control. That students in the Pure Control group consistently reported higher interest, attainment, and utility value than those using, or in the presence of, the applications suggests an interesting uniqueness to the Pure Control section (i.e., initial differences prior to intervention) while underscoring the null effect of the applications. This is unexpected and inconsistent with the hypothesis that regulatory application use would increase students' perceived value of Chemistry.

The pattern of findings was also similar for the cost scales, with all four experimental groups reporting greater perceived cost than Pure Control across all three time periods. This too was in the opposite direction than expected and suggests that students not only perceived Chemistry as having less value when using or being in the presence of regulatory applications; they also perceived an increase in the cost of doing well in Chemistry. As was the case with the expectancy-value subscales of interest, attainment, and utility, Pure Control was initially different from the experimental groups. These initial cost differences, however, remained throughout, with no evidence that application use or exposure affected feelings related to cost. Moreover, because application use was generally low, this also suggests that students valued the course less and perceived increased cost despite a minimal dosage of the applications.

Research Question 7: Achievement Goals

Research question: What effect does the use of regulation software have on students' achievement goals?

There was no evidence that the frequency of application use moderated the applications' effects on mastery approach, performance approach, or performance avoidance achievement goals.

Achievement goal results: Mastery approach. For mastery approach, there was evidence of significant linear, B = -0.42, p < .001, and quadratic trends, B = 0.09, p = .009, indicating that mastery-approach goals decreased significantly from Time 1 to 2 but then increased slightly from Time 2 to 3 (see Figure 14).
In addition, at Time 1, prior to the introduction of the applications, there were significant differences between Pure Control and both Cold Turkey (B = -0.39, p < .001) and Cold Turkey Control (B = -0.30, p < .001), both of which reported lower levels of mastery approach. As Figure 14 shows, Pure Control reported the highest mastery approach throughout the term, whereas Cold Turkey reported the lowest, with a continual decreasing trend throughout the term.

Achievement goal results: Performance approach. For performance approach, there was also a slight general decline over time, as indicated by a significant linear trend, B = -0.07, p = .005. At Time 1, prior to the introduction of the applications, Cold Turkey was also significantly lower than Pure Control, B = -0.39, p = .019, and there was a marginally significant difference between Cold Turkey and Forest, B = -0.37, p = .052, with Cold Turkey reporting lower levels of performance approach. Throughout the semester, Cold Turkey trended downward while Pure Control trended upward. In fact, only Pure Control trended towards greater performance approach while all other treatments declined (see Figure 15).

Achievement goal results: Performance avoidance. This pattern of findings was also found for performance avoidance goals. That is, there was evidence of significant linear, B = -0.44, p < .001, and quadratic trends, B = 0.10, p = .018, with performance avoidance goals decreasing over time, but at a slower rate between Times 2 and 3. At Time 1, there was also evidence of a significant difference between Cold Turkey and Pure Control, B = -0.34, p = .018, and a significant difference between Cold Turkey and Forest, B = -0.37, p = .044 (see Figure 16). Thus, Cold Turkey performance approach and avoidance were both lower than Pure Control throughout the semester, at all three time periods.

Achievement goal discussion. Generally speaking, all three achievement goals declined over time, with Pure Control presenting the highest reports of mastery approach, performance approach, and performance avoidance, both initially and throughout the semester. Unexpectedly, Cold Turkey and Forest were also associated with different achievement goals: Cold Turkey decreased both mastery and performance approach compared to Pure Control and Forest, while Forest increased performance avoidance compared to Pure Control and Cold Turkey. Generally, the pattern of findings was inconsistent with our hypothesis that mastery and performance approach goals in Chemistry would increase as a result of application use while performance avoidance goals would decrease. While neither application increased achievement goals, the discrepancy between Cold Turkey and Forest, with Forest behaving in a manner more similar to the control conditions and Cold Turkey universally reporting lower mastery approach, performance avoidance, and performance approach, merits further investigation into possible differences between the applications. One explanation of this pattern of findings is that Forest increased performance approach and avoidance compared to Cold Turkey by way of the social comparison of trees, forest growth, and the like. In other words, it may be that Forest made social comparison more salient to students in a way that Cold Turkey did not. Future research should examine the effects of Cold Turkey on emotions (e.g., anxiety, frustration, boredom) to further understand the mechanisms by which this occurs.
Research Question 8: Self-Efficacy

Research question: What effect does the use of regulation software have on students' self-efficacy in Chemistry?

Self-efficacy results. As with all other motivation variables, there was evidence of a negative linear trend, B = -0.05, p = .010, with self-efficacy declining across the semester. At Time 1, there was also evidence of a significant difference between Cold Turkey and Forest, B = -0.14, p = .010, with Forest students reporting higher levels of self-efficacy than Cold Turkey students. Interestingly, the rate of change in self-efficacy over time also differed between Pure Control and both Forest, B = 0.09, p = .034, and Forest Control, B = 0.10, p = .043, with Forest and Forest Control students reporting increasing levels of self-efficacy over time while Pure Control students reported decreasing levels. This suggests that application use may have had the hypothesized effect on self-efficacy for Forest and Forest Control. Figure 17 shows the upward trend of Forest and Forest Control, whereas Cold Turkey Control, Pure Control, and Cold Turkey trended downward or maintained a consistent level.

Self-efficacy discussion. While there was a general trend towards lower self-efficacy, and while those in the Pure Control group reported and maintained higher initial levels of self-efficacy, Cold Turkey decreased self-efficacy over time while Forest did the opposite. These findings sporadically aligned with our original hypothesis that self-efficacy would increase as a result of application use. Forest and Forest Control exhibited increases in self-efficacy as compared to Pure Control, which declined consistently through the term, and to Cold Turkey and Cold Turkey Control, which declined and then leveled off between Times 2 and 3. Akin to the reported achievement goal orientations, Cold Turkey and Forest behaved slightly differently from one another, with Cold Turkey presenting the comparatively worse, or more unintended, findings (i.e., lower achievement orientation and lower feelings of self-efficacy).

When considering the long-term motivational effects of application use on Chemistry, it was broadly hypothesized that application use would "work" in heightening student regulation and engagement, thereby effecting comparatively better performance in the course. By extension, it was hypothesized that students' long-term motivational feelings towards Chemistry would also trend in a desirable direction. As noted above, however, the results for long-term motivation did not affirm our hypothesis. And, as no other study has considered expectancy-value, achievement goal orientations, or self-efficacy in connection with regulating application use, this study is limited in the degree to which it can affirm or challenge external research. One explanation is that the unintended effects on Chemistry engagement and motivation, coupled with low application use, may have caused students to report lower self-efficacy in Chemistry. That is, treatment students may have felt that they should use the applications but failed to do so, inducing cognitive dissonance that they subsequently resolved by inferring that they had lower self-efficacy and perceived value, and increased cost perceptions. Future research should examine this issue by testing whether believing that one ought to use an application, but then failing to do so, has indirect effects on other aspects of motivation.
Research Question 9: Self-Determination

Research question: Do the two regulating applications, Forest and Cold Turkey, have differential effects on students' self-determined motivation?

Self-determination results. For all self-determination scales, including amotivation, external regulation, intrinsic motivation, identified regulation, and perceived autonomy, there was no evidence that Cold Turkey and Forest had significantly different effects. This suggests that the applications' varying approaches to mitigating media distracted behavior and heightening focus presented no differential effect on students' self-determination to use them.

Self-determination discussion. It was hypothesized that the differing user experiences of the applications would produce significant differences in self-determination feelings related to application use. In particular, it was hypothesized that the loss of control and autonomy would be significant for Cold Turkey as compared to Forest, an application which values the autonomous choice to stay "focused." Additionally, it was hypothesized that the reward system of Forest would prompt significant differences related to external regulation and intrinsic motivation, as compared to Cold Turkey, and as a result also incite greater utilization. As a whole, however, the results failed to support these hypotheses.

One explanation is that "how" the applications work (whether distractions are blocked entirely or abstention is incentivized by the growth of a bush or tree) is less important than the initial choice to set a timer in order to be self-regulated. That is, the individual's desire to be regulated and to engage in less media distracted behavior may be the one truly relevant choice or student "experience," especially as compared to how a student is then kept less distracted, which is, by definition, merely an extension of that initial choice. If this is true, it may explain the relatively low application use: the initial inertia to "be focused" by selecting a timer may be too great, or perceived to be too great. Future research could specifically assess the perceived anxiety of "starting" an application, akin to extant biofeedback research that has studied induced anxiety when removing smartphones from students' physical control (e.g., Cheever, Rosen, Carrier, & Chavez, 2014). Additionally, future research could study regulating application use in relation to perceptions of the value of media distracted behavior. That is to say, might those who value media distracted behavior more demonstrate less engagement with the applications? Last, while frequency of use did not moderate effects, there was also evidence to suggest that utilization was low among all users throughout the semester. This suggests that while the applications did not present differential effects by way of self-determination, they also collectively failed to incite significant use among participants.

CHAPTER SIX

Summary and Conclusion

Contemporary media multitasking research has clearly shown a near-universal pattern of detrimental performance outcomes as a result of distracted, ill-regulated technology use. Extant research has detailed, with impressive breadth, the pervasiveness of media multitasking as well as its immediate, arguably causal, effects, which are broader than performance considerations and include undesired interpersonal and intrapersonal effects. But as it pertains to performance, suggested and espoused responses have remained largely untested and unpromising.
This study aimed to test the efficacy of an increasingly popular response to media distracted behavior – namely, regulatory software applications – in a real-world setting over an extended period of time. Related research suggested that regulating software applications would lower media distracted behavior. Additionally, it was hypothesized that students' regulation (control of attention, behavioral regulation) and engagement would increase as a result of less media distracted behavior. It was also predicted that students' perceptions of dependency would decrease and that academic performance would increase, again as an indirect effect of the regulatory applications' reduction of media distracted behavior. In an effort to consider long-term motivational shifts and effects of application use, this study also considered students' achievement goals, perceptions of value and cost, and self-efficacy. In accordance with increased regulation and engagement, it was hypothesized that desirable motivational shifts would result from application use.

Last, as a way of considering how application design can potentially influence long-term use, this study purposefully used two different applications: one which supported student autonomy but incentivized long-term use, and one that took away autonomy and expected self-regulatory failure. Self-determination theory provided a germane lens through which to consider the impact of user experience in terms of external regulation and perceived autonomy. The following section summarizes the study findings while attempting to comment, more broadly, on the findings as a whole.

As to whether or not the applications "worked," there was evidence to suggest that students with Cold Turkey did, in fact, report less media distracted behavior while attending class as compared to Pure Control. Additionally, students "in the presence" of those with applications, including Forest, also reported less media distracted behavior as compared to those in the isolated pure control group. As displayed in Figure 1, however, all four treatment groups increased media distraction over time whereas Pure Control remained constant. Forest behaved in a similar manner to Cold Turkey (e.g., increased media distracted behavior by end of term) but failed to present significantly lower media distracted behavior at any measured point. This suggests that the effects of the two applications differed for in-class media distracted behavior, but their long-term effects were comparably null. The findings were quite different outside the classroom where, contrary to the hypothesis, Forest actually increased media distracted behavior, while Cold Turkey and Pure Control reported lower media distracted behavior towards the end of the term. In sum, there were consistent trends in media distracted behavior (e.g., Forest increased media distracted behavior in and outside of the classroom) and also inconsistent trends (e.g., Cold Turkey was initially significantly lower than Pure Control in the classroom but trended towards more media distracted behavior, whereas its media distracted behavior outside the classroom progressively decreased).

Further complicating any certain assertion about media distracted behavior as an effect of application use was qualitative data suggesting that the applications did "work." To the question, "As a result of your use of FOREST/COLD TURKEY, did you change other aspects of the way you study? If yes, in what ways?
If no, why?", twice as many students answered "Yes" as "No." Of the six possible codes for this question, 85% of those who answered "Yes" noted that the applications "Worked," defined by the qualitative codebook as, "The application is meant to heighten focus and limit distraction. In explaining, the respondent explicitly suggests that the application DID heighten focus or limit distraction as a result of the design of the application." Additionally, to the question, "What did you like about FOREST/COLD TURKEY?", the vast majority (>71%) referenced "Application Efficacy/Value," defined in the codebook as, "Respondent expressed an appreciation for the WAY or the 'CONCEPT/IDEA' as to how the application worked. This would include how the application blocked distractions. This would include commentary on how the application 'worked' for the respondent – i.e. it further motivated them to stay focused." On both questions, students with Cold Turkey and Forest were effectively split, with both groups answering in the affirmative. For example, those with Cold Turkey noted, "I liked how you could set your own time limit to force yourself to study without distractions" and "I like that it kept me from getting distracted." Similarly, those with Forest noted, "I like the strictness of Forest because it helped me stay off my phone and do my work," "It was a cool design and a good idea," and "It kept me off my phone for longer periods of time." This qualitative data suggests that students believed the applications worked as they were designed and intended.

However, this contradicts the quantitative findings on media distracted behavior. It may be, considering the relatively low application use and the increasing media distraction throughout the term, that espoused value in the applications did not translate into consistent, substantial, ongoing application use. Additionally, it may be that students' perception of the value of an application was a poor gauge of its actual effectiveness. In other words, while students perceived value in the two applications similarly, it may also be true that one application was more effective than the other. In this case, Cold Turkey was more effective at lowering media distracted behavior, especially outside of class, as compared to Forest, which appeared to be broadly ineffective.

Juxtaposed with the sporadic examples of lowered media distracted behavior and the espoused value of application use in the qualitative accounts is the absence of any increase in students' attentional control, behavioral regulation, or engagement in the course, at least as measured by the quantitative measures. Indeed, there were no significant effects between groups, though each of these considerations generally declined over the course of the term. This may suggest a high potential for social desirability, given that the first administration was distributed in the first few weeks of school. Additionally, there was no measurable decrease in technology dependency. These findings are consistently inconsistent with our hypothesis and fail to account for the sporadic examples of lowered media distracted behavior. One explanation for this pattern of findings may be that students' beliefs about their own ability to regulate their media distraction may undermine, rather than support, internalized shifts in behavioral regulation, engagement, or technology dependency.
In other words, students may fail to exhibit behavioral regulation change because they do not "see a need" to improve. In this way, their use of an application may lower distracted behavior but not change the user, because they merely "go through the motions" without internalizing the potential positive effect of lowered media distracted behavior. This is particularly plausible given that the media distraction effects of the applications may have been short-lived.

This segues to an important consideration related to awareness and behavior change. Media multitasking research has shown that students are cognizant of their distracted and off-task nature (Terry et al., 2016). In this study, students verbalized an awareness of media distracted behavior and its effects that is seemingly incongruent with their exhibited behavior. Consider, for example, the more than 50% who answered "Agree" or "Strongly Agree" to the prompt, "In thinking of yourself as a student...During lecture, I often miss important points because I am thinking of other things." As a second example, consider the more than 60% who noted that their learning was "greatly impeded" or "somewhat impeded" because of their multitasking and distracted behavior with technology (see Appendix E for the full descriptive details and figures). This suggests that students are generally aware of the ramifications of media distracted behavior. This may affirm, as Hassoun (2014) argues, that students proceed in a deeply ordinary yet complex negotiation of when, where, how, and to what degree they engage in media distracted behavior, despite the known risks. Thus, the null findings for attentional control, engagement, and regulation may be attributable to an arguably passive complacency toward a pervasive distracted behavior, further attenuating the short-lived application intervention.

Additionally, it is possible that some aspect of the classroom environment fostered less media distracted behavior. While media distracted behavior increased in class throughout the term, mean levels of media distracted behavior in class were always consistently lower than those outside of class, despite the fact that Forest, Forest Control, and Cold Turkey Control increased media distracted behavior outside of class.

The qualitative data suggest one explanation for why media distracted behavior differed inside versus outside the classroom. To the question, "Did COLD TURKEY/FOREST work for you? If yes, why? If no, why?", approximately half of the coded respondents answered in the affirmative, with "Yes"; the other half noted "No." In qualifying their answers, application efficacy and value were noted both as reasons why participants liked Cold Turkey or Forest and as reasons why they did not. For example, students who liked their application's efficacy and value noted, "I liked the lack of distraction," "The phone was unusable," and "It shut off all my distractions." In answering what they disliked, one student noted, "I would often misjudge how much time I needed and wait longer than needed to use my phone." Future research might consider how application effects vary as a function of whether or not students believed the application worked for them. In the spirit of interpreting qualitative data towards unexpected and surprising quantitative findings (Creswell, 2009), it may be that the classroom environment better supports application use or underscores the value of using the applications.
For instance, classes were taught during the day, for a predetermined amount of time, and many students might also have friends in class, effectively reducing the need for social connectivity or the fear of missing out (FOMO). That is to say, the affordances of a regulation application may be more welcome during class, as compared to out of class, because students feel less dependent on their phones to stay connected to friends during this time. A relevant and possible future study may consider the anxiety students hold in class versus out of class. In sum, the in-class relevance and value of a regulating application may be present, but still not enough to have a positive effect on behavioral engagement, regulation, and motivation more broadly (i.e., the application works when convenient but is not transformative in changing the student's approach). The question of application effectiveness – i.e., "Do they 'work'?" – remains unconvincing and sporadic: the data suggest that application use had a null effect on engagement, regulation, and dependence, but there is also some evidence that the applications significantly lowered media distracted behavior.

There was, however, a salient, significant, and unsavory narrative related to the semester-long effects of application use on Chemistry motivation. Specifically, the data suggest that application use was marginally correlated with heightened feelings of psychological, effort, and opportunity cost, and with lower feelings of attainment, utility, and interest value. This is the opposite of the hypothesized motivational effects. More, it is a concerning and unintended outcome. Data related to achievement goal orientation suggest that application use influenced mastery approach and performance approach orientations in an unintended and undesired manner. Specifically, application use correlated with comparatively lower mastery and performance approach in Chemistry, while there was evidence of increased performance avoidance associated with application use, as compared to the pure control group. Finally, students in the pure control treatment group reported greater self-efficacy in Chemistry as compared to the experimental groups.

The culmination of these motivational considerations suggests a provocative and undesired outcome: regulatory applications may have mixed and inconsistent value in terms of lowering media distracted behavior, yet they present more salient and consistent negative effects on long-term motivation for Chemistry. This is a particularly unsavory finding when coupled with the absence of any positive, desired, and hypothesized change related to student behavioral regulation, control of attention, technology dependency, or performance.

With the data presented, it is challenging to suggest why students' value, achievement goals, and self-efficacy related to Chemistry were diminished by application use. There is extant research to suggest, however, that the introduction of a personal technology may prompt adverse internal self-reflections (Kerner & Goodyear, 2017). Take, for example, the unintended effects of wearable fitness technologies. As the Literature Review suggested, there is research to suggest that personal fitness technologies, such as Garmin and Fitbit, further motivate and drive greater personal wellness.
However, there is also evidence that these applications and hardware fail to produce any effect (Lines, 2017; Maddox, 2014; Patel et al., 2015) or, worse, have adverse motivational effects when individuals come to new, personal realizations about their poor health, lack of motivation or value, or missing fitness wherewithal (Kerner & Goodyear, 2017). In short, the same technology that was designed to motivate may have the unintended effect of inducing discouraging reflections that reduce motivation, much like the proverbial double-edged sword. Thus, the regulation applications may have similarly negatively affected the goals, value, and self-efficacy held by the students using them, or by those who felt they should be using them but chose not to. Perhaps students using the regulating applications realized how poor they were at actually regulating their distracted tendencies and then generalized this difficulty to their motivation in Chemistry. In other words, students may have reported lower motivation in Chemistry as a result of negative experiences with the applications' effectiveness or their own utilization, perhaps as the result of cognitive dissonance wherein students reported negative Chemistry motivations as a way of reconciling their acknowledged and recognized low use of the applications. We did not test for this, nor does the qualitative data provide any insight into this theoretical hypothesis.

The qualitative data also failed to explain why students reported similar levels of self-determination for both applications. While a few students noted self-determination-relevant considerations, such as, "It was somewhat annoying when the trees die" and "Not being able to access my phone at all, even in an emergency, is terrifying to me," there was little qualitative data to suggest that loss of autonomy or external rewards were influential in altering student use of the applications. While the applications differed in their approach to mitigating media distracted behavior, student responses to these differences did not result in corresponding changes in self-determined motivation, at least as measured by the quantitative scales. Future research may consider regulating applications through a self-determination lens, with all experimental groups included. Akin to Kerner and Goodyear (2017), it may be that regulating applications incite undesired feelings of self-determined motivation, thereby undermining desired performance outcomes or effective value.

Last, this study's findings suggest that academic performance was poorly correlated with student media distracted behavior. More concerning, student use of regulating software applications was associated with comparatively poorer achievement as compared to using no technology or being in the presence of the technology. This finding challenges Bellur, Nowak, and Hull (2015), who found that those "who reported multitasking with technology in class had lower GPAs than students who did not multitask" (p. 68). Burak (2012) also found that classroom multitasking behavior was associated with poorer GPA. However, Bellur et al. also found that multitasking while doing homework had no correlation with GPA. And Clem and Junco (2015) noted that multitasking and various social media uses were both positively and negatively related to GPA, mediated by the activity of the distracted behavior and social engagement. As Chen and Yan (2015) argue, the associations between media multitasking behavior and academic performance are inconsistent.
Internally, the comparatively poorer performance of students with the regulating software applications parallels the salient, undesired motivational effects related to Chemistry in an understandable and seemingly valid manner (e.g., face validity). This suggests that introducing applications to regulate media distracted behavior, even when utilization is relatively low and there is only sporadic evidence of lowered media distracted behavior, also introduces far more concerning effects on student motivation and, by further extension, student performance. This study contributes to the inconsistent literature on the performance effects of media distracted behavior. However, whereas other studies have more narrowly considered performance in reference to media distracted behavior, this study also considered relevant motivational, regulatory, and engagement considerations, as well as incoming competency with the course material. Thus, while media distracted behavior may be a worthy consideration when examining performance, it may also be necessary to consider students' motivation to have a complete understanding of the media distraction phenomenon.

Implications

There are three significant implications from this study. First, there is some evidence to suggest that regulating software applications designed to heighten focus and mitigate media distracted behavior can, in fact, work. This evidence was split between the two applications: Cold Turkey presented initial, but not lasting, change in media distraction during class, while Forest presented universally unremarkable effects on lowering media distracted behavior. These findings are qualified by equally important findings suggesting that merely being in the presence of the applications, especially during class, may be just as effective in mitigating media distracted behavior. This evidence is further complicated by the lack of associated changes in students' engagement, regulation, and technology dependence. Thus, further research is needed to better understand the environmental factors or characteristics that underscore or undermine an application's ability to work as a regulating tool.

Second, there may be conflated or adverse effects of regulating application use. As noted before, the findings suggest that the long-term student impact was null or in the opposite direction than intended. Indeed, as compared to the single hypothesized effect of lowered media distracted behavior in the classroom, the unintended effects of the applications on decreasing motivation were considerable. This finding is quite significant when considering the growing popularity of regulating applications. Forest downloads on Android devices alone are categorized at 1 million to 5 million per year. This is particularly notable when considering that Android accounts for the minority share of the market as compared to iOS (Apple). From an economic, wellness, performance, and integration perspective, the true value and effects of regulating applications must be further examined, especially before practitioners, institutions, or districts widely support or adopt such approaches to regulating the media distracted behavior of their students. As a corollary to the exhibited undesired long-term motivational effects on Chemistry, our study also suggests that application use, even when relatively low, instigated comparatively poorer course performance, as measured by multiple data points.
In particular, students with the applications did worse than those with no technology and those who were merely in the presence of those with the technology. This underscores the need to further examine all effects of regulating applications, including the possibility of introduced cognitive dissonance among users at the expense of lowered motivation and poorer performance.

And last, media multitasking research has, as the Literature Review argued, garnered attention because of the pervasive nature of the behavior and its associated, problematic effects. The findings of this study, however, suggest that academic performance in a real-world setting, as measured by three content examinations, a final cumulative exam, and an overall course grade, was poorly correlated with media distracted behavior. Correlations between media distracted behavior and performance were sporadic and limited. This argument does not challenge existing research, which clearly shows diminished real-time performance effects of media distracted behavior (e.g., distracted driving tests or memory recall tests following distracted behavior in a laboratory setting). However, this finding may suggest that, over more extended periods of time like a university semester, when performance marks are more reflective of the totality of a student's effort rather than a specific constrained experience, students' efforts might mitigate the degree to which their media distracted behavior actually affects performance. This may underscore Thompson's (2013) broad argument that new technology use is a new literacy, which may be wielded towards desirable or equally adequate performance measures over time as individuals change their behavior in accord with the positive and negative effects of the technology. As Thompson (2013) suggested, it may be that "we panic that life will never be the same, that our attentions are eroding, that culture is being trivialized. But, as in the past, we adapt, learning to use the new and retaining what is good of the old."

Specific to university settings, this study's unintended negative effects on motivation and performance suggest that a more promising response from practitioners may be to consider other ways to improve student engagement and behavioral regulation. Additionally, these data suggest that an entrance assessment is a more informative data point for predicting future performance than media distracted tendencies. Accordingly, an intervention or response targeted at students who underperform on an entrance assessment may, in fact, merit greater performance outcomes as compared to an intervention or response targeting highly media distracted individuals.

Limitations

As with any study, this dissertation study had several limitations. First and foremost, this study is limited in the degree to which it can comment on the effectiveness or impact of all software applications that purport to heighten focus and mitigate distraction. There are many additional software applications on the market with theoretically sound designs and user experiences that differ from Cold Turkey and Forest.

Second, this study depends heavily on the self-reported media distracted behavior of students. This is problematic for two reasons. First, self-report can be inherently unreliable. This is particularly true of student self-perceptions related to media use and technology (Junco, 2013). Students often woefully overestimate or underestimate their actual technology use (Junco, 2013).
Limitations

As with any study, this dissertation study had several limitations. First and foremost, this study is limited in the degree to which it can comment on the effectiveness or impact of all software applications that purport to heighten focus and mitigate distraction. There are many additional software applications on the market with theoretically sound designs and user experiences that differ from Cold Turkey and Forest.

Second, this study depends heavily on the self-reported media distracted behavior of students. This is problematic for two reasons. First, self-report can be inherently unreliable. This is particularly true of student self-perceptions related to media use and technology (Junco, 2013). Students often woefully overestimate or underestimate their actual technology use (Junco, 2013). Additionally, with the increasingly seamless experience between personal technology and non-personal technology, as exemplified by smartwatches and smartphones, the delineation between use and non-use is more opaque than ever before (Alter, 2016).

Third, the study is limited in the degree to which it can comment on attention and focus, which are difficult to measure. This study suggests, in line with previous laudable media multitasking research, that distracted behavior strongly correlates with inattention, split attention, or distracted attention (Magen, 2017). However, mind wandering is a pervasive and blatant challenge to this presumed causal link. A student's behavior can be "on-task" while their mind wanders, and that student is therefore equally unfocused or distracted (Smallwood & Schooler, 2006). Accordingly, this study is limited in the degree to which it can comment on the efficacy of regulatory software applications in helping students concentrate or focus. Instead, this study can comment on reported student media distracted behavior and, by proxy, on focus.

Fourth, the relationship between performance and media distracted behavior is correlational, and this study may have failed to capture all relevant performance factors, such as the impact of other enrolled courses, family or personal matters, or social adjustment to school. This is both the desired benefit and an acknowledged limitation of a field-study experiment. While the four courses were taught with identical pedagogical approaches, there exists a potential for variation in teacher pedagogical aptitude, student perceptions of teacher engagement, and unexpected events such as a canceled class. There is always a chance that individual classes present unmeasured factors that alter the class dynamic, class motivation, or class performance. Moreover, this study could not control for the variety of additional courses in which students were concurrently enrolled. It is possible that these additional courses affected how much a student was able to prioritize Chemistry. For example, a student may have had multiple tests during the Chemistry 100 mid-term examination, thereby potentially influencing their ability to perform as well as they otherwise could.

Additionally, there were multiple applicable motivation and self-regulatory considerations that remain untested. For example, student self-regulatory shifts may come about through natural acclimation to post-secondary collegiate rigors experienced in all courses, not just Chemistry 100. In addition, we are limited in the degree to which we can assert that motivational shifts in self-regulation and self-efficacy are directly associated with potential software use. Shifts in these areas may reflect unmeasured considerations, such as parental or teacher guidance and support. A related, additional unmeasured but relevant consideration that could confound or alter the recorded data was positive or negative student-held attitudes toward technology. Relatedly, this study did not measure student-held beliefs explicitly related to their desire or interest to be more attentive or less distracted. Nor did we measure the student-held value of these laudable efforts. This is relevant as application use may have been undermined by a lack of initial value in the affordances provided by the applications. Additionally, extant research would argue that the application or technology is not a change-agent as much as it is a tool to support and abet pre-existing interest and motivation (Patel et al., 2015).
These opinions, measurable through multiple scales, have proven to be informative and worthwhile correlated antecedents to understanding student technology use (Terry et al., 2016), but the inclusion of these scales would have pushed the measure well beyond a reasonable length.

An additional concern is the extent to which social desirability also influenced findings. Our choice to administer the first survey early in the term (i.e., Week Two) was a calculated risk common to repeated-measures, longitudinal studies. However, there exists a chance that answers, especially those related to motivation and regulation, were slightly skewed toward the higher end.

While the participation and completion rates were laudable, especially for a sixteen-week quasi-experimental study, there were students who chose not to complete all three measures, and there were also students who dropped out of the course, thereby not earning a final exam grade or course grade. It should be noted that many students who dropped out of the course did, still, complete the study. The relatively small number missing final exam grades and course grades, however, may limit the degree to which we can fervently argue that media distracted behavior is poorly correlated with expected or earned performance in the course.

Additionally, this study was limited in the degree to which it captured the frequency of media distracted behavior as part of the first survey administration. The inclusion of the MMI as part of the first survey proved to be more problematic than helpful, with open-field multitasking inquiries producing unusable data (e.g., instead of "5 hours," students would note "a lot" or "1,000,000,000" in response to the question "How often do you use social media?"). Accordingly, the MMI was not considered as part of any data analysis. Conversely, the curated media and distraction subscale was quite informative, above and beyond its own internal reliability, and should have been used in the first administration, displacing the MMI entirely.

The study's qualitative data helped to deepen our quantitative analysis. However, the following question was purposefully omitted from the qualitative data: "Think about your ability to focus (i.e. avoid distractions) when studying Chemistry this term. Was it better than expected? Worse? Or about normal? How do you think your ability to focus affected your performance in Chemistry? Please explain." Following two rounds of interrater discussion and independent coding, it was clear that the poorly written question merited poor data. As an example, independent coding merited agreement levels less than 20% following two revisions to the codebook, with even worse agreement within the required code related to an individual's ability to focus, which allowed the possible answers of "Better than Expected," "Worse than Expected," "Comprehensive Response," and "Respondent Did Not Answer" (an illustrative computation of such agreement statistics appears at the end of this section). In hindsight, the question should have been split into two separate questions.

Last, the unique characteristics of the sample population significantly limit the degree to which the study findings can be generalized to other collegiate settings or traditional college-aged individuals. While the data are quite comparable to similar STEM and applied science universities, this is a niche post-secondary education community defined by high incoming ACT and SAT marks as well as predominantly male populations, thereby compromising generalizability.
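As a concrete illustration of the agreement statistics reported for the omitted qualitative question, percent agreement and a chance-corrected alternative could be computed as follows. The coder labels below are hypothetical, and the dissertation does not specify which agreement statistic or software was used; this is a sketch only.

```python
# Minimal sketch of interrater agreement computation.
# Coder labels are hypothetical examples; not the study's actual data.
from sklearn.metrics import cohen_kappa_score

CODES = ["Better than Expected", "Worse than Expected",
         "Comprehensive Response", "Respondent Did Not Answer"]

# Hypothetical codes assigned by two independent coders to ten responses.
coder_a = [CODES[0], CODES[1], CODES[2], CODES[0], CODES[3],
           CODES[1], CODES[2], CODES[0], CODES[1], CODES[2]]
coder_b = [CODES[1], CODES[1], CODES[0], CODES[2], CODES[3],
           CODES[2], CODES[2], CODES[1], CODES[0], CODES[3]]

# Simple percent agreement (the kind of figure the text appears to report).
percent_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Cohen's kappa additionally corrects for chance agreement.
kappa = cohen_kappa_score(coder_a, coder_b, labels=CODES)

print(f"Percent agreement: {percent_agreement:.0%}, Cohen's kappa: {kappa:.2f}")
```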
Directions for Future Research & Conclusion

Future research is needed to further understand why student regulation, engagement, and perceptions of dependency were arguably unchanged despite recorded differences in media distracted behavior in and out of class. Additionally, more research is needed to understand why application use negatively affected, to a greater comparative degree, student-held motivations related to Chemistry. These were not hypothesized findings, and they may speak to the greater, more concerning effects regulating applications present as a result of use. More importantly, further research is needed to consider the perceived value of regulating applications as well as the effects regulating applications present to the user. It has been argued that cognitive dissonance, or some type of projected internalization of low application use with social desirability, may have prompted students to report low, and unexpected, feelings of motivation with Chemistry. Additionally, it was argued that this cognitive dissonance, coupled with low feelings of motivation, led to poorer performance overall. There is extant research, from the area of wearable technologies, that supports this interpretation of the findings. But further research is needed, in specific relation to media distracted behavior, to better understand the phenomenon (motivation, antecedents, factors, etc.) between users' distracted behavior and their individual choice to still engage in and exhibit said behavior, despite readily possessing the tools to mitigate such behavior and often the knowledge or recognition, as our research has also argued, that such behavior is not conducive or productive.

In sum, this study sought to test regulating software applications as a credible and worthwhile response to a pervasive and problematic behavior. The literature related to how practitioners can effectively respond to media distracted behavior is scant and unconvincing. Regulating applications are not clearly the answer to this ubiquitous behavior, in and out of the classroom. In fact, any recommendation to use regulating applications comes with a more likely assurance that long-term motivation and scholastic achievement will suffer as a result. Thankfully, the study also suggests that media distracted behavior was a poor predictor of performance, regardless of application use or being in the presence of application use. While there is no doubting the pervasiveness of the phenomenon or the negative performance effects distracted behavior has produced via a plethora of controlled experiments, this study does challenge the degree to which it is a truly problematic issue when considering performance over an extended period of time.
APPENDICES

APPENDIX A

Tables

Table 1
Demographic & Participant Breakdown by Condition
Note: Values are n (% within condition), ordered Cold Turkey (n = 45); Forest (n = 39); Cold Turkey Control (n = 83); Forest Control (n = 27); Pure Control (n = 73); Total (N = 267). Chi-square test statistics noted as X2.

Sex (X2 = 11.57)
  Male: 35 (77.78%); 24 (61.54%); 52 (62.65%); 20 (74.07%); 38 (52.05%); 169 (63.30%)
  Female: 10 (22.22%); 15 (38.46%); 31 (37.35%); 7 (25.93%); 34 (46.58%); 97 (36.33%)
  Transgender: 0 (0.00%) in all conditions
  Prefer to Not Answer: 0 (0.00%); 0 (0.00%); 0 (0.00%); 0 (0.00%); 1 (1.37%); 1 (0.37%)

Operating System (X2 = 155.68)
  iOS (Apple): 0 (0.00%); 30 (76.92%); 82 (98.80%); 16 (59.26%); 53 (72.60%); 181 (67.79%)
  Android: 45 (100.00%); 8 (20.51%); 0 (0.00%); 9 (33.33%); 19 (26.03%); 81 (30.34%)
  iOS and Android: 0 (0.00%); 1 (2.56%); 0 (0.00%); 0 (0.00%); 0 (0.00%); 1 (0.37%)
  Other: 0 (0.00%); 0 (0.00%); 1 (1.20%); 2 (7.41%); 1 (1.37%); 4 (1.50%)

Ethnicity (X2 = 22.68; categories: African American/Black, Asian/Pacific Islander, Hispanic/Latino, Native American/American Indian, White, Not Listed or Other, Prefer Not to Respond)
  African American/Black: 0 (0.00%); 1 (2.56%); 1 (1.20%); 1 (3.70%); 0 (0.00%); 3 (1.12%)
  White: 33 (73.33%); 35 (89.74%); 64 (77.11%); 22 (81.48%); 58 (79.45%); 212 (79.40%)
  Asian/Pacific Islander: 22 (8.24%) of the total sample
  Native American/American Indian: 2 (0.75%) of the total sample
  Not Listed or Other: 5 (1.87%) of the total sample
  Prefer Not to Respond: 1 (0.37%) of the total sample

Age (X2 = 21.36; categories: 17 or Younger, 18, 19, 20, 21, 22, 23 or Older)
  18: 236 (88.39%) of the total sample

Native Language (X2 = 3.52)
  English: 40 (88.89%); 38 (97.44%); 78 (93.98%); 24 (88.89%); 69 (94.52%); 249 (93.26%)
  Non-English: 5 (11.11%); 1 (2.56%); 5 (6.02%); 3 (11.11%); 4 (5.48%); 18 (6.74%)

Table 2
Participant Experience Chart
A week-by-week (Weeks 1 through 16) timeline of the participant experience, aligning the course examinations (Entrance Assessment; Exam #1; Exam #2; Exam #3; Cumulative Final Exam), relevant communication with students (visited all students to invite participation; distribution of conditions and treatment instructions following census; two emails noting upcoming surveys), and the three survey administrations (First, Second, and Third Administrations).

Table 3
Communication with Treatment Groups

E-mail #1 Content (all groups: Pure Control; Forest and Cold Turkey Contamination; Forest Experimental; Cold Turkey Experimental): Consent to participate, with the full quantitative measure available to those who consent.

E-mail #2 Content (all groups): Acknowledgement of participation and request for continued involvement throughout the term.

E-mail #3 Content:
  Forest and Cold Turkey Contamination Groups: Potential harmful influences of technology misuse and media multitasking; encouragement to self-direct regulatory efforts afforded by apps, with no reference to apps.
  Forest Experimental Group: Potential harmful influences of technology misuse and media multitasking; encouragement to yield regulatory efforts afforded by apps, with explicit reference to the Forest application; instructions on how to use the Forest application, including details of their own personal, paid account.
  Cold Turkey Experimental Group: Potential harmful influences of technology misuse and media multitasking; encouragement to yield regulatory efforts afforded by apps, with explicit reference to the Cold Turkey application; instructions on how to use the Cold Turkey application, including details of their own personal, paid account.
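The chi-square statistics in Table 1 compare demographic distributions across the five conditions. The dissertation does not report its analysis software (its references cite SPSS; Field, 2013), so purely as an illustration, the sex-by-condition comparison could be computed from the Table 1 counts as follows. This is not the author's code, and category handling affects the statistic.

```python
# Illustrative chi-square test of independence for the sex-by-condition
# breakdown in Table 1 (not the author's actual analysis code).
from scipy.stats import chi2_contingency

# Rows: Male, Female, Prefer to Not Answer; columns: Cold Turkey, Forest,
# Cold Turkey Control, Forest Control, Pure Control (counts from Table 1).
observed = [
    [35, 24, 52, 20, 38],  # Male
    [10, 15, 31,  7, 34],  # Female
    [ 0,  0,  0,  0,  1],  # Prefer to Not Answer
]

chi2, p, dof, expected = chi2_contingency(observed)
# Table 1 reports X2 = 11.57; this sketch may differ if the original analysis
# collapsed or excluded sparse categories (e.g., the empty Transgender row)
# differently.
print(f"X2({dof}) = {chi2:.2f}, p = {p:.3f}")
```

Because Table 1 contains sparse categories, the computed statistic is sensitive to which categories are retained, which is why this sketch is illustrative rather than a reproduction of the reported value.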
Table 4
Participant Flow
Note: Values ordered Cold Turkey; Forest; Cold Turkey Control; Forest Control; Pure Control; Total (N).
  Time 1: 45; 39; 83; 27; 73; 267
  Time 2: 37; 33; 71; 23; 67; 231
  Time 2 % Complete: 82.22%; 84.61%; 85.54%; 85.18%; 91.78%; 86.51%
  Time 3: 34; 35; 64; 23; 60; 216
  Time 3 % Complete: 75.55%; 89.74%; 77.10%; 85.18%; 82.91%; 80.89%
  Number Who Did Not Complete Time 2 or Time 3: 3; 8; 10; 2; 5; 28
  Number of Students Withdrawn from Course and Did Not Complete Time 2 and 3: 1; 1; 0; 0; 0; 2

Table 5
Correlations: Course Scholarly Performance
Note: Pairwise Ns in parentheses. Variables: 1 = Entrance Assessment; 2 = Final Exam Grade; 3 = Examination 1; 4 = Examination 2; 5 = Examination 3; 6 = Course Grade (Overall).
  1. Entrance Assessment with: 2) .42** (243); 3) .54** (243); 4) .47** (243); 5) .44** (243); 6) .51** (243)
  2. Final Exam Grade with: 3) .51** (256); 4) .61** (256); 5) .66** (256); 6) .90** (256)
  3. Examination 1 with: 4) .60** (266); 5) .49** (258); 6) .69** (256)
  4. Examination 2 with: 5) .61** (258); 6) .76** (256)
  5. Examination 3 with: 6) .75** (256)
  ** Correlation is significant at the p < 0.01 level (2-tailed).

Table 6
Media Distracted Behavior In and Out of Class, Hours Preparing for Class
Note: Values are M (SD), ordered Cold Turkey; Forest; Cold Turkey Control; Forest Control; Pure Control.
  Time #1 (n = 45; 39; 83; 27; 73)
    Media Distracted Behavior During Class: 1.73 (0.62); 1.94 (0.86); 1.62 (0.51); 1.56 (0.58); 2.04 (0.68)
    Media Distracted Behavior Outside of Class While Studying: 2.98 (0.97); 2.95 (0.81); 2.56 (0.76); 2.52 (0.89); 2.72 (0.82)
    Hours per Class: 1.76 (0.43); 1.95 (0.51); 1.82 (0.54); 1.81 (0.55); 1.79 (0.49)
  Time #2 (n = 37; 33; 71; 23; 67)
    Media Distracted Behavior During Class: 1.93 (0.80); 2.16 (0.84); 1.89 (0.75); 1.77 (0.62); 2.04 (0.73)
    Media Distracted Behavior Outside of Class While Studying: 2.75 (0.85); 3.16 (0.74); 2.72 (0.79); 2.91 (0.77); 2.61 (0.86)
    Hours per Class: 1.65 (0.71); 1.73 (0.62); 1.90 (0.77); 1.74 (0.68); 1.75 (0.72)
  Time #3 (n = 34; 35; 64; 23; 60)
    Hours per Class: 1.79 (0.77); 1.63 (0.77); 1.59 (0.63); 1.65 (0.57); 1.65 (0.68)

Table 7
Frequency of Application Use and Duration
Note: Values are M (SD), ordered Cold Turkey; Forest.
  Time 2 (n = 37; 33)
    Outside: 1.70 (0.70); 2.00 (0.90)
    Attending: 1.46 (0.76); 1.42 (0.66)
    Duration: 3.22 (2.07); 2.76 (1.83)
  Time 3 (n = 34; 35)
    Outside: 1.82 (0.79); 1.80 (0.83)
    Attending: 1.50 (0.70); 1.40 (0.73)
    Duration: 3.53 (2.12); 2.77 (2.08)
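The figures in Appendix B summarize linear mixed model analyses (cf. West, Welch, & Galecki, 2006) of each outcome across the three survey administrations. The dissertation's model syntax is not published; purely as an illustration of the general model form, with hypothetical variable names, an analogous condition-by-time model could be specified in Python's statsmodels as follows.

```python
# Illustrative linear mixed model (random intercept per student) for a
# repeated-measures outcome such as in-class media distracted behavior.
# Variable and file names are hypothetical; not the author's analysis code.
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per student per administration, e.g. columns
#   student_id, condition (5 groups), time (1, 2, 3), mdb_during_class
df = pd.read_csv("survey_long.csv")  # hypothetical long-format data file

# Fixed effects for condition, time, and their interaction; the interaction
# terms test whether conditions diverge over the term. Random intercepts
# account for repeated observations nested within students.
model = smf.mixedlm(
    "mdb_during_class ~ C(condition) * C(time)",
    data=df,
    groups=df["student_id"],
)
result = model.fit()
print(result.summary())
```

Under these assumptions, the interaction terms capture the condition-by-time differences that the figures below visualize.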
APPENDIX B

Figures

Figure 1. Media Distracted Behavior: During Class (Linear Mixed Model Analysis)
Figure 2. Media Distracted Behavior: Outside of Class (Linear Mixed Model Analysis)
Figure 3. Control of Attention (Linear Mixed Model Analysis)
Figure 4. Behavioral Regulation (Linear Mixed Model Analysis)
Figure 5. Technology Dependency (Linear Mixed Model Analysis)
Figure 6. Behavioral Engagement (Linear Mixed Model Analysis)
Figure 7. Persistence (Linear Mixed Model Analysis)
Figure 8. Course Performance (Linear Mixed Model Analysis)
Figure 9. Interest Value (Linear Mixed Model Analysis)
Figure 10. Attainment Value (Linear Mixed Model Analysis)
Figure 11. Utility Value (Linear Mixed Model Analysis)
Figure 12. Opportunity Cost (Linear Mixed Model Analysis)
Figure 13. Effort Cost (Linear Mixed Model Analysis)
Figure 14. Mastery Approach (Linear Mixed Model Analysis)
Figure 15. Performance Approach (Linear Mixed Model Analysis)
Figure 16. Performance Avoidance (Linear Mixed Model Analysis)
Figure 17. Self-Efficacy (Linear Mixed Model Analysis)

REFERENCES

Aagaard, J. (2015). Media multitasking, attention, and distraction: A critical discussion. Phenomenology and the Cognitive Sciences, 14(4), 885–896. https://doi.org/10.1007/s11097-014-9375-x

Alter, A. (2017). Irresistible: The rise of addictive technology and the business of keeping us hooked. New York, NY: Penguin Press.

Altman, E. M., & Trafton, J. G. (2002). Memory for goals: An activation-based model. Cognitive Science, 26(1), 39–83.

Alzahabi, R., & Becker, M. W. (2013). The association between media multitasking, task-switching, and dual-task performance. Journal of Experimental Psychology: Human Perception and Performance, 39(5), 1485–1495.

Artino, A. (2005). Review of the Motivated Strategies for Learning Questionnaire (MSLQ). Retrieved from https://eric.ed.gov/?id=ED499083

Aydin, S. (2012). A review of research on Facebook as an educational environment. Educational Technology Research and Development, 60(6), 1093–1106.

Bailey, B. P., Konstan, J. A., & Carlis, J. V. (2001). The effects of interruptions on task performance, annoyance, and anxiety in the user interface. Proceedings of INTERACT '01, 593–601. https://doi.org/10.1109/ICSMC.2000.885940

Baird, B., Smallwood, J., Marek, M. D., Kam, J. W. Y., Franklin, M. S., & Schooler, J. W. (2012). Inspired by distraction: Mind wandering facilitates creative incubation. Psychological Science, 23(10), 1117–1122. https://doi.org/10.1177/0956797612446024

Barker, V. (2012). A generational comparison of social networking site use: The influence of age and social identity. The International Journal of Aging and Human Development, 74(2), 163–187. https://doi.org/10.2190/AG.74.2.d

Barkley, R. A. (2012). Executive functions: What they are, how they work, and why they evolved. New York, NY: Guilford Press.

Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego depletion: Is the active self a limited resource? Journal of Personality and Social Psychology, 74(5), 1252–1265.

Baumeister, R., & Tierney, J. (2011). Willpower: Rediscovering the greatest human strength. London, England: Penguin.

Becker, M. W., Alzahabi, R., & Hopwood, C. J. (2013). Media multitasking is associated with symptoms of depression and social anxiety. Cyberpsychology, Behavior and Social Networking, 16(2), 132–135. https://doi.org/10.1089/cyber.2012.0291

Bellur, S., Nowak, K. L., & Hull, K. S. (2015). Make it our time: In class multitaskers have lower academic performance. Computers in Human Behavior, 53, 63–70. https://doi.org/10.1016/j.chb.2015.06.027

Bergen, L., Grimes, T., & Potter, D. (2005). How attention partitions itself during simultaneous message presentations. Human Communication Research, 31(3), 311–336. https://doi.org/10.1111/j.1468-2958.2005.tb00874.x
Bonney, C. R. (2006). Investigating the influence of the 2 x 2 achievement goal framework on college athletes' motivation and performance. Dissertation Abstracts International Section A: Humanities and Social Sciences, 67(2-A), 457.

Bowman, L. L., Levine, L. E., Waite, B. M., & Gendron, M. (2010). Can students really multitask? An experimental study of instant messaging while reading. Computers & Education, 54(4), 927–931. https://doi.org/10.1016/j.compedu.2009.09.024

Bowman, L. L., Waite, B. M., & Levine, L. E. (2015). Multitasking and attention. In The Wiley handbook of psychology, technology, and society (pp. 388–403).

Bozeday, G. (2013). Media multitasking and the student brain. School Specialty. Retrieved from https://store.schoolspecialty.com/OA_HTML/xxssi_ibeGetWCCFile.jsp?docName=G1984922&minisite=10206

Brasel, S. A., & Gips, J. (2011). Media multitasking behavior: Concurrent television and computer usage. Cyberpsychology, Behavior and Social Networking, 14(9), 527–534. https://doi.org/10.1089/cyber.2010.0350

Burgers, C., Eden, A., Van Engelenburg, M. D., & Buningh, S. (2015). How feedback boosts motivation and play in a brain-training game. Computers in Human Behavior, 48, 94–103. https://doi.org/10.1016/j.chb.2015.01.038

Cain, M. S., Leonard, J. A., Gabrieli, J. D. E., & Finn, A. S. (2016). Media multitasking in adolescence. Psychonomic Bulletin & Review, 1932–1941. https://doi.org/10.3758/s13423-016-1036-3

Calderwood, C., Ackerman, P. L., & Conklin, E. M. (2014). What else do college students "do" while studying? An investigation of multitasking. Computers and Education, 75, 19–29. https://doi.org/10.1016/j.compedu.2014.02.004

Campbell, K. W., & Twenge, J. M. (2015). Narcissism, emerging media, and society. In The Wiley handbook of psychology, technology, and society (pp. 358–370).

Carr, N. (2011). The shallows: What the Internet is doing to our brains. New York, NY: W. W. Norton.

Carrier, L. M., Kersten, M., & Rosen, L. D. (2015). Searching for generation M. In The Wiley handbook of psychology, technology, and society (pp. 371–387). Retrieved from http://onlinelibrary.wiley.com/doi/10.1002/9781118771952.ch21/summary

Carrier, L. M., Rosen, L. D., Cheever, N. A., & Lim, A. F. (2015). Causes, effects, and practicalities of everyday multitasking. Developmental Review, 35, 64–78. https://doi.org/10.1016/j.dr.2014.12.005

Castleman, B. L., & Page, L. C. (2016). Freshman year financial aid nudges: An experiment to increase FAFSA renewal and college persistence. Journal of Human Resources, 51(2), 389–415. https://doi.org/10.3368/jhr.51.2.0614-6458R

Cheever, N. A., Rosen, L. D., Carrier, L. M., & Chavez, A. (2014). Out of sight is not out of mind: The impact of restricting wireless mobile device use on anxiety levels among low, moderate and high users. Computers in Human Behavior, 37, 290–297. https://doi.org/10.1016/j.chb.2014.05.002

Chen, K. C., & Jang, S. J. (2010). Motivation in online learning: Testing a model of self-determination theory. Computers in Human Behavior, 26(4), 741–752.

Chen, Q., & Yan, Z. (2016). Does multitasking with mobile phones affect learning? A review. Computers in Human Behavior, 54, 34–42. https://doi.org/10.1016/j.chb.2015.07.047

Chisholm, C. D., Collison, E. K., Nelson, D. R., & Cordell, W. H. (2000). Emergency department workplace interruptions: Are emergency physicians "interrupt-driven" and "multitasking"? Academic Emergency Medicine, 7(11), 1239–1243.

Conley, A. M. (2012). Patterns of motivation beliefs: Combining achievement goal and expectancy-value perspectives. Journal of Educational Psychology, 104(1), 32–47.

Courage, M. L., Bakhtiar, A., Fitzpatrick, C., Kenny, S., & Brandeau, K. (2015). Growing up multitasking: The costs and benefits for cognitive development. Developmental Review, 35, 5–41. https://doi.org/10.1016/j.dr.2014.12.002

Cowan, N., Fristoe, N. M., Elliott, E. M., Brunner, R. P., & Saults, J. S. (2006). Scope of attention, control of attention, and intelligence in children and adults. Memory & Cognition, 34(8), 1754–1768. Retrieved from http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1868392&tool=pmcentrez&rendertype=abstract

Creswell, J. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage Publishing.

David, P., Kim, J.-H., Brickman, J. S., Ran, W., & Curtis, C. M. (2014). Mobile phone distraction while studying. New Media & Society. Advance online publication. https://doi.org/10.1177/1461444814531692

End, C. M., Worthman, S., Mathews, M. B., & Wetterau, K. (2010). Costly cell phones: The impact of cell phone rings on academic performance. Teaching of Psychology, 37(1), 55–57. https://doi.org/10.1080/00986280903425912

Engle, R. W., & Kane, M. J. (2004). Executive attention, working memory capacity, and a two-factor theory of cognitive control. The Psychology of Learning and Motivation: Advances in Research and Theory, 145–199. https://doi.org/10.1016/S0079-7421(03)44005-X

Engström, J., Johansson, E., & Östlund, J. (2005). Effects of visual and cognitive load in real and simulated motorway driving. Transportation Research Part F: Traffic Psychology and Behaviour, 8(2), 97–120. https://doi.org/10.1016/j.trf.2005.04.012

Field, A. (2013). Discovering statistics using IBM SPSS Statistics (4th ed.). London, England: Sage Publishing.

Fitzmaurice, G. M., Laird, N. M., & Ware, J. H. (2004). Applied longitudinal analysis. New York, NY: Wiley.

Foehr, U. G. (2006). Media multitasking among American youth: Prevalence, predictors, and pairings. Retrieved from The Henry J. Kaiser Family Foundation website: https://kaiserfamilyfoundation.files.wordpress.com/2013/01/7592.pdf

Foerde, K., Knowlton, B. J., & Poldrack, R. A. (2006). Modulation of competing memory systems by distraction. Proceedings of the National Academy of Sciences of the United States of America, 103(31), 11778–11783. https://doi.org/10.1073/pnas.0602659103

Fogg, B. J. (2003). Computers as persuasive social actors. In Persuasive technology: Using computers to change what we think and do (pp. 89–121).

Goleman, D. (2013). Focus: The hidden driver of excellence. New York, NY: Harper Collins.

Greenstein, J. (1954). Effects of television on elementary school grades. Journal of Educational Research, 48, 161–176.

Grinols, A. B., & Rajesh, R. (2014). Multitasking with smartphones in the college classroom. Business and Professional Communication Quarterly, 77(1), 89–95. https://doi.org/10.1177/2329490613515300

Grolnick, W. S., Kurowski, C. O., Dunlap, K. G., & Hevey, C. (2000). Parental resources and the transition to junior high. Journal of Research on Adolescence, 10(4), 465–488. https://doi.org/10.1207/SJRA1004_05

Guay, F., Vallerand, R. J., & Blanchard, C. (2000). On the assessment of situational intrinsic and extrinsic motivation: The Situational Motivation Scale (SIMS). Motivation and Emotion, 24(3), 175–213. https://doi.org/10.1023/A:1005614228250

Hagger, M. S., Wood, C., Stiff, C., & Chatzisarantis, N. L. D. (2010). Ego depletion and the strength model of self-control: A meta-analysis. Psychological Bulletin, 136(4), 495–525. https://doi.org/10.1037/a0019486

Hassoun, D. (2014). "All over the place": A case study of classroom multitasking and attentional performance. New Media & Society, 1–16. https://doi.org/10.1177/1461444814531756

Heatherton, T. F., & Wagner, D. D. (2011). Cognitive neuroscience of self-regulation failure. Trends in Cognitive Sciences, 15(3), 132–139.

Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: The effects of multitasking in learning environments. Journal of Computing in Higher Education, 15(1), 46–64. https://doi.org/10.1007/BF02940852

James, W. (1899). Talks to teachers on psychology and to students on some of life's ideals. New York, NY: Holt.

Jang, H., Kim, E. J., & Reeve, J. (2016). Why students become more engaged or more disengaged during the semester: A self-determination theory dual-process model. Learning and Instruction, 43, 27–38. https://doi.org/10.1016/j.learninstruc.2016.01.002

Jenkins, H. (2006). Convergence culture: Where old and new media collide. New York, NY: New York University Press.

Judd, T. (2014). Making sense of multitasking: The role of Facebook. Computers and Education, 70, 194–202. https://doi.org/10.1016/j.compedu.2013.08.013

Junco, R. (2012). Too much face and not enough books: The relationship between multiple indices of Facebook use and academic performance. Computers in Human Behavior, 28(1), 187–198. https://doi.org/10.1016/j.chb.2011.08.026

Junco, R. (2012). In-class multitasking and academic performance. Computers in Human Behavior, 28(6), 2236–2243. https://doi.org/10.1016/j.chb.2012.06.031

Junco, R. (2013). Comparing actual and self-reported measures of Facebook use. Computers in Human Behavior, 29(3), 626–631. https://doi.org/10.1016/j.chb.2012.11.007

Junco, R., & Cotten, S. R. (2012). No A 4 U: The relationship between multitasking and academic performance. Computers and Education, 59(2), 505–514. https://doi.org/10.1016/j.compedu.2011.12.023

Junco, R., Heiberger, G., & Loken, E. (2011). The effect of Twitter on college student engagement and grades. Journal of Computer Assisted Learning, 27(2), 119–132.

Juneja, P., & Roper, K. (2010). Experience of concentration or distraction: The cost to knowledge organizations. International Conference in Facilities Management (pp. 163–173).

Kerner, C., & Goodyear, V. A. (2017). The motivational impact of wearable healthy lifestyle technologies: A self-determination perspective on Fitbits with adolescents. American Journal of Health Education, 48(5), 287–297. https://doi.org/10.1080/19325037.2017.1343161

Kessler, S. (2011). 38% of college students can't go 10 minutes without tech. Mashable. Retrieved from http://mashable.com/2011/05/31/college-tech-devicestats

Killingsworth, M. A., & Gilbert, D. T. (2010). A wandering mind is an unhappy mind. Science, 330, 932. https://doi.org/10.1126/science.1192439

Kratzke, C., & Cox, C. (2012). Smartphone technology and apps: Rapidly changing health promotion. International Electronic Journal of Health Education, 15, 72–82.

Kraushaar, J. M., & Novak, D. C. (2006). Examining the effects of student multitasking with laptops during the lecture. Journal of Information Systems Education, 21(2), 241–252.

Lavie, N. (2010). Attention, distraction, and cognitive control under load. Current Directions in Psychological Science, 19(3), 143–148.

Levine, L. E., Waite, B. M., & Bowman, L. L. (2007). Electronic media use, reading, and academic distractibility in college youth. Cyberpsychology & Behavior, 10(4), 560–566.

Levy, J., & Pashler, H. (2001). Is dual-task slowing instruction dependent? Journal of Experimental Psychology: Human Perception and Performance, 27(4), 862–869.

Lin, L. (2009). Breadth-biased versus focused cognitive control in media multitasking behaviors. Proceedings of the National Academy of Sciences, 106(37), 15521–15522.

Lin, L., Robertson, T., & Lee, J. (2009). Reading performances between novices and experts in different media multitasking environments. Computers in the Schools, 26(3), 169–186.

Loukopoulos, L. D., Dismukes, R. K., & Barshi, I. (2008). The multitasking myth: Handling complexity in real-world operations. Journal of Adult Education, 37(1), 33–38.

Magen, H. (2017). The relations between executive functions, media multitasking and polychronicity. Computers in Human Behavior, 67, 1–9. https://doi.org/10.1016/j.chb.2016.10.011

Marci, C. (2012, March). A (biometric) day in the life: Engaging across media. Paper presented at Re:Think 2012, New York, NY.

Mark, G., Gonzalez, V. M., & Harris, J. (2005). No task left behind? Examining the nature of fragmented work. Proceedings of CHI '05, 321–330.

McVay, J. C., & Kane, M. J. (2012). Why does working memory capacity predict variation in reading comprehension? On the influence of mind wandering and executive attention. Journal of Experimental Psychology: General, 141(2), 302–320. https://doi.org/10.1037/a0025250

Midgley, C., Maehr, M. L., Hruda, L. Z., Anderman, E., Anderman, L., Freeman, K. E., … Roeser, R. (2000). Manual for the Patterns of Adaptive Learning Scales (PALS), 734–763.

Monk, C. A., Trafton, J. G., & Boehm-Davis, D. A. (2008). The effect of interruption duration and demand on resuming suspended goals. Journal of Experimental Psychology: Applied, 14(4), 299–313.

Moreno, M. A., Jelenchick, L., Koff, R., Eikoff, J., Diermyer, C., & Christakis, D. A. (2012). Internet use and multitasking among older adolescents: An experience sampling approach. Computers in Human Behavior, 28(4), 1097–1102.

Nass, C., & Yen, C. (2012). The man who lied to his laptop: What we can learn about ourselves from our machines. New York, NY: Penguin Random House.

National Safety Council. (2012). Annual estimate of cell phone crashes 2010. Retrieved from http://www.nsc.org/safety_road/Distracted_Driving/Documents/Attributable%20Risk%20Summary.pdf

Novotney, A. (2016). Smartphone = Not-so-smart parenting? Monitor on Psychology, 47(2), 52–55.

Ofcom & GfK. (2010). The consumer's digital day. Retrieved April 17, 2013, from http://stakeholders.ofcom.org.uk/binaries/research/811898/consumers-digital-day.pdf

Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences of the United States of America, 106(37), 15583–15587.

Patel, M. S., Asch, D. A., & Volpp, K. G. (2015). Wearable devices as facilitators, not drivers, of health behavior change. Journal of the American Medical Association, 313(5), 459–460. https://doi.org/10.1001/jama.2014.14781

Perez, T., Cromley, J. G., & Kaplan, A. (2014). The role of identity development, values, and costs in college STEM retention. Journal of Educational Psychology, 106, 315–329.

Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in learning and teaching contexts. Journal of Educational Psychology, 95(4), 667–686.

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire.

Posner, M. (2012). Attention in a social world. New York, NY: Oxford University Press.

Posner, M. I., & Rothbart, M. K. (1998). Attention, self-regulation and consciousness. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 353(1377), 1915–1927.

Przybylski, A. K., Rigby, C. S., & Ryan, R. M. (2010). A motivational model of video game engagement. Review of General Psychology, 14(2), 154. https://doi.org/10.1037/a0019440

Radesky, J. S., Schumacher, J., & Zuckerman, B. (2015). Mobile and interactive media use by young children: The good, the bad, and the unknown. Pediatrics, 135(1), 1–3. https://doi.org/10.1542/peds.2014-2251

Ravizza, S. M., Hambrick, D. Z., & Fenn, K. M. (2014). Non-academic internet use in the classroom is negatively related to classroom learning regardless of intellectual ability. Computers & Education, 78, 109–114.

Ravizza, S. M., Uitvlugt, M. G., & Fenn, K. M. (2016). Logged in and zoned out: How laptop internet use relates to classroom learning. Psychological Science. https://doi.org/10.1177/0956797616677314

Richtel, M. (2014). A deadly wandering: A tale of tragedy and redemption in the age of attention. New York, NY: Harper Collins.

Roberts, D. F., Foehr, U. G., & Rideout, V. (2005). Generation M: Media in the lives of 8-18-year-olds. The Henry J. Kaiser Family Foundation.

Rosen, L. D., Carrier, L. M., & Cheever, N. A. (2013). Facebook and texting made me do it: Media-induced task-switching while studying. Computers in Human Behavior, 29(3), 948–958.

Rosen, L. D., Lim, A. F., Carrier, L. M., & Cheever, N. A. (2011). An empirical examination of the educational impact of text message-induced task-switching in the classroom: Educational implications and strategies to enhance learning. Psicologia Educativa, 17(2), 163–177.

Rosen, L. D., Whaling, K., Carrier, L. M., Cheever, N. A., & Rokkum, J. (2013). The media and technology usage and attitudes scale: An empirical investigation. Computers in Human Behavior, 29(6), 2501–2511. https://doi.org/10.1016/j.chb.2013.06.006

Rosen, L. D., Whaling, K., Rab, S., Carrier, L. M., & Cheever, N. A. (2013). Is Facebook creating "iDisorders"? The link between clinical symptoms of psychiatric disorders and technology use, attitudes and anxiety. Computers in Human Behavior, 29(3), 1243–1254.

Roseth, C. J., Lee, Y. K., & Saltarelli, W. A. (2018). Reconsidering jigsaw social psychology: Longitudinal effects on social interdependence, sociocognitive conflict regulation, motivation, and achievement. Journal of Educational Psychology. https://doi.org/10.1037/edu0000257

Rothbart, M. K., & Posner, M. I. (2015). The developing brain in a multitasking world. Developmental Review, 35, 42–63. https://doi.org/10.1016/j.dr.2014.12.006

Rubin, A., & Babbie, E. (2009). Essential research methods for social work (2nd ed.). Pacific Grove, CA: Brooks/Cole.

Rueda, M. R., Posner, M. I., & Rothbart, M. K. (2005). The development of executive attention: Contributions to the emergence of self-regulation. Developmental Neuropsychology, 28(2), 37–41.

Salvucci, D. D., Taatgen, N. A., & Borst, J. (2009). Toward a unified theory of the multitasking continuum: From concurrent performance to task-switching, interruption, and resumption. Proceedings of CHI '09, 1819–1828.

Sana, F., Weston, T., & Cepeda, N. J. (2013). Laptop multitasking hinders classroom learning for both users and nearby peers. Computers & Education, 62, 24–31.

Schunk, D. H. (2008). Metacognition, self-regulation, and self-regulated learning: Research recommendations. Educational Psychology Review, 20(4), 463–467. https://doi.org/10.1007/s10648-008-9086-3

Schunk, D. H. (2011). Learning theories: An educational perspective. Boston, MA: Allyn & Bacon.

Shao, D. H., & Shao, L. P. (2012). The effects of multitasking on individual's task performance. International Journal of Business Strategy, 12(1), 75–80.

Skinner, E., Furrer, C., Marchand, G., & Kindermann, T. (2008). Engagement and disaffection in the classroom: Part of a larger motivational dynamic? Journal of Educational Psychology, 100(4), 765–781. https://doi.org/10.1037/a0012840

Smallwood, J., & Schooler, J. W. (2006). The restless mind. Psychological Bulletin, 132(6), 946–958.

Standage, M., Duda, J. L., & Ntoumanis, N. (2005). A test of self-determination theory in school physical education. The British Journal of Educational Psychology, 75(3), 411–433.

Steinfield, C., Ellison, N. B., & Lampe, C. (2008). Social capital, self-esteem, and use of online social network sites: A longitudinal analysis. Journal of Applied Developmental Psychology, 29(6), 434–445.

Stone, L. (2009). Continuous partial attention. Retrieved April 2018, from https://lindastone.net/qa/continuous-partial-attention/

Strayer, D. L., Watson, J. M., & Drews, F. A. (2011). Cognitive distraction while multitasking in the automobile. The Psychology of Learning and Motivation, 54, 29–58.

Terry, C. A., Mishra, P., & Roseth, C. J. (2016). Preference for multitasking, technological dependency, student metacognition, & pervasive technology use: An experimental intervention. Computers in Human Behavior, 65, 241–251.

Tindell, D. R., & Bohlander, R. W. (2012). The use and abuse of cell phones and text messaging in the classroom: A survey of college students. College Teaching, 60(1), 1–9.

Turkle, S. (2012). Alone together: Why we expect more from technology and less from each other. New York, NY: Basic Books.

Turkle, S. (2015). Talk to me: How to teach in an age of distraction. The Chronicle of Higher Education, 62(6), B6.

Unsworth, N., McMillan, B. D., Brewer, G. A., & Spillers, G. J. (2012). Everyday attention failures: An individual differences investigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(6), 1765–1772.

Unsworth, N., Redick, T. S., Lakey, C. E., & Young, D. L. (2010). Lapses in sustained attention and their relation to executive control and fluid abilities: An individual differences investigation. Intelligence, 38(1), 111–122.

Van der Schuur, W., Baumgartner, S. E., Sumter, S. R., & Valkenburg, P. M. (2015). The consequences of media multitasking for youth: A review. Computers in Human Behavior, 53, 204–215.

Wang, Z., Irwin, M., Cooper, C., & Srivastava, J. (2015). Multidimensions of media multitasking and adaptive media selection. Human Communication Research, 41, 102–127.

Wang, Z., & Tchernev, J. M. (2012). The "myth" of media multitasking: Reciprocal dynamics of media multitasking, personal needs, and gratifications. Journal of Communication, 62(3), 493–513.

Wei, F.-Y. F., Wang, Y. K., & Klausner, M. (2012). Rethinking college students' self-regulation and sustained attention: Does text messaging during class influence cognitive learning? Communication Education, 61(3), 185–204.

West, B., Welch, B., & Galecki, A. (2006). Linear mixed models: A practical guide using statistical software. London, England: Chapman & Hall.

Willingham, D. T. (2010). Have technology and multitasking rewired how students learn? American Educator, Summer, 23–29.

Wilson, T. D., Reinhard, D. A., Westgate, E. C., Gilbert, D. T., Ellerbeck, N., Hahn, C., … Shaked, A. (2014). Just think: The challenges of the disengaged mind. Science, 345(6192), 75–77.

Wood, E., Zivcakova, L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012). Examining the impact of off-task multi-tasking with technology on real-time classroom learning. Computers & Education, 58(1), 365–374.

Wood, R. E., & Locke, E. A. (1987). The relation of self-efficacy and grade goals to academic performance. Educational and Psychological Measurement, 47, 1013–1024.

Wu, F., & Fan, W. (2016). Academic procrastination in linking motivation and achievement-related behaviours: A perspective of expectancy-value theory. Educational Psychology, 1–17.

Xu, S., Wang, Z., & David, P. (2016). Media multitasking and well-being of university students. Computers in Human Behavior, 55, 242–250.

Zhang, W. (2015). Learning variables, in-class laptop multitasking and academic performance: A path analysis. Computers & Education, 81, 82–88. https://doi.org/10.1016/j.compedu.2014.09.012

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70.