COMMUNICATION NEUROSCIENCE ON A SHOESTRING: EXAMINING ELECTROCORTICAL RESPONSES TO VISUAL MESSAGES VIA MOBILE EEG

By Nolan T. Jahn

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Communication – Master of Arts

2020

ABSTRACT

Visual communication plays a crucial role in sharing relevant social information. Vision has been studied extensively in the domain of neuroscience, and visual communication has been explored through traditional social science avenues. However, the field can benefit greatly from work at the crossroads of communication and neuroscience, much as biology and chemistry intersect in biochemistry. One roadblock has been the cost and difficulty of incorporating neuroscience methods into communication studies. This study tested a novel electroencephalography (EEG) device that is far cheaper, easier to use, and more mobile than previous devices. The EEG system was used to compare event-related potentials (ERPs) to affective visual stimuli - representative of the kinds of engaging content that pervade modern social media. While no differences were found between positive and neutral stimuli, ERPs were successfully detected by the new EEG system, and the moderate strength of our affect manipulation may have precluded stronger effects. Additionally, making use of a “foot-in-the-door” compliance-gaining technique in participant instructions led to significantly improved data capture. These results support the use of this EEG system in future communication studies and provide evidence for an easy social influence tactic that can improve data quality as neuroscience is scaled up to big-data studies. Having an affordable and mobile EEG system makes it possible to incorporate neuroimaging into a variety of communication paradigms, extending beyond visual communication.
ACKNOWLEDGEMENTS

I want to thank my committee for their guidance on this thesis project. Specifically, I want to highlight Dr. Ralf Schmälzle, who is the best advisor an aspiring researcher could ask for, which is why I have chosen to stay at Michigan State to continue to learn from him. Additionally, the Department of Communication quickly became a family that supported me through all my work. I would not have finished my degree without the help of Thomi and Marge, especially as life shifted to the virtual world due to Covid-19. I was fortunate to be a part of an amazing cohort of master’s students, and they were a great help throughout the entire program. I will cherish the friendships that were formed through all the grueling work. Lastly, I want to thank my friends and family for supporting me over the last two years. Graduate school can be an emotional roller coaster, and they have always been there when I needed them. As the youngest of four in my family, I have three awesome older siblings and parents who are always pushing me forward in my career, and I cannot thank them enough.
TABLE OF CONTENTS

LIST OF FIGURES
KEY TO ABBREVIATIONS
Introduction
  Visual communication
  Neuroscience of vision
  Motivated attention to emotional visual content
  Research challenges
  The current study goals
    Goal one
    Goal two
    Goal three
Methods
  Participants
  Apparatus
  Stimuli
  Part B stimuli
  Procedure
  Part B procedure
  EEG data analysis
Results
  ERP results
  Passive viewing of positive vs. neutral images
  ERP stability analysis
  “Foot-in-the-door” analysis
  Part B analysis
Discussion
  Strengths
  Limitations
  Future research
Conclusion
REFERENCES

LIST OF FIGURES

Figure 1: Measuring ERPs from affective images with Muse EEG device. IAPS images will be presented one at a time to participants wearing the Muse system and ERPs will be recorded. A difference in ERPs is expected when viewing a chair and an astronaut in space.

Figure 2: IAPS image ratings. The ratings for arousal and valence for the 60 images used as stimuli in this study were graphed in this scatter plot. The images are clearly organized into two groups: positive and neutral.

Figure 3: Comparing stimuli from Part A to Part B. The image on the left is the unedited neutral image, used in the positive versus neutral stimuli test. The image on the right is the edited image, with the green square, to signify that it is a target to be counted by participants. These edits were the same on the neutral and positively valenced stimuli.

Figure 4: Grand-average ERP waveforms for positive and neutral images. The schematic figure in the center illustrates the approximate position of the Muse device, which measures EEG data from four channels (reference electrodes lie between AF7 and AF8).

Figure 5: Grand-average ERP waveforms for positive and neutral images, averaged across sensors TP9 and TP10. Top figure: analysis across the entire sample; the figures below are the two subsamples, which help demonstrate the stability of the results. The bottom right figure is from Flaisch et al. (2008) and shows results from a sensor that corresponds to TP9, with a reference similar to the Muse’s. The similarity between the ERP waveforms supports the results from the Muse.

Figure 6: Comparative analysis of the “foot-in-the-door” conditions. The figure compares the sample drop percentage of the control and experimental groups and is broken down by sessions one and two. The difference between the experimental group’s sample drop rate in sessions one and two was significant (p = 0.0485). There was a significant difference (p = 0.0032) between the control group and experimental group in session two.

Figure 7: The ERP waveform results for counted targets versus non-targets. The figure on the left displays the waveforms for the targets (in gray) and the non-targets (in green), and it reveals that there was no significant difference in the waveforms. The figure on the right is ERP results from the oddball paradigm from Krigolson et al. (2017). While the waveforms may at first appear dissimilar in overall morphology and amplitude (due to differences in presentation rate, content, etc.), closer inspection does reveal similar waveform trajectories in the 200-350 ms interval.

KEY TO ABBREVIATIONS

EEG: Electroencephalography
ERP: Event-related potential
ERPs: Event-related potentials
IAPS: International Affective Picture System
LPP: Late Positive Potential
LC4MP: Limited Capacity Model of Motivated Mediated Message Processing

Introduction

Images are one of the fastest-growing trends on social media and they strongly impact millions of recipients.
On Facebook and Twitter, posts that contain images gather about ten times more engagement than text-only posts; in recent years, numerous dedicated image-based platforms, such as Instagram or Snapchat, have emerged, and their growth outpaces that of Facebook and Twitter (Balm, 2014); lastly, images are also key to popular platforms like Pinterest, Tumblr, and Flickr - or the thumbnails used for Netflix’s and YouTube’s previews. From a communication perspective, the success of images raises questions about why images are so interesting and appealing, and how the people who receive them are affected by the content those images convey.

Visual communication

Visual communication - that is, messages that come in the form of images - has played a significant role throughout human history as a means to share personally relevant social information. From ancient cave drawings (Tylén et al. 2020) to modern-day posts on popular social media (Waterloo et al. 2018) - an image often says more than a thousand words. Over the course of history, visual messages were perhaps less dominant than text-based ones, but images are clearly on the rise as a means to express oneself, communicate experiences, and affect the recipient in a powerful and immediate way. Across dozens of social media platforms like Twitter and Facebook, and dedicated image-sharing platforms like Snapchat and Instagram, there is now an abundance of visual content being posted. The trend towards visual content is further underscored by the large user base of Instagram - one billion monthly users, hundreds of millions more than Twitter (Torkildson, 2019). Beyond social media, images have also always played a central role in advertising and public communication campaigns, where it has been taken for granted that they affect the recipient in a powerful and very immediate manner (Messaris, 1997).
Moreover, because vision can be considered a universal language that is shared across cultures, images are also uniquely positioned to overcome language barriers. Thus, visual messages play a central role in modern communication, yet far more emphasis has been given to the study of the reception and processing of textual as compared to visual communication, and widely used methods are largely based on verbal information (Messaris, 1997).

Neuroscience of vision

Although visual communication has received relatively less attention from communication science, vision is among the best-studied domains within neuroscience and many of its mechanisms have already been deciphered (Werner & Chalupa, 2014; Bear, Connors, & Paradiso, 2016). Specifically, over the past century, we have learned a great deal about how patterns of light are converted into neural impulses, how these neural impulses travel via the thalamus to the primary visual cortex in the occipital lobe, and from there into the parietal and temporal lobes (Bear et al., 2016). Since the advent of functional neuroimaging, a large body of knowledge has been gathered on how the visual system perceives color and motion, and how objects are recognized by matching the incoming information with stored memory representations (Bear et al., 2016). However, the interaction between basic visual processing and higher-order processes that relate to social cognition, emotion, and attention is far less understood. Based on their content, visual stimuli can activate core motivational circuits and interface with social-cognitive processes. When visual messages connect to evolutionarily relevant topics like procreation, danger avoidance, or feeding, they can attract attention in an almost reflex-like fashion and command deeper processing of the depicted content (Bradley, Keil, & Lang, 2012). Moreover, because humans are naturally social, many of these motivational mechanisms are strongly interfaced with social processes.
There are dedicated perception systems in the brains of primates that deal with social information like faces, inferring face identity (to recognize others), face expression (to infer emotional states), and face evaluation (to detect, e.g., attractiveness and other traits) (Zebrowitz 2011; Adolphs et al. 2016; Todorov 2017). These social processes are intimately interwoven with affective brain circuits (Forgas et al. 2013; Adolphs 2003). This also helps explain, at least in part, why social content is so popular in the media (e.g. ‘clickbait’ about the fate and current looks of celebrities, basic gossip, or simply images with moderate nudity).

Motivated attention to emotional visual content

The motivated attention model (Bradley, Codispoti, Cuthbert, & Lang, 2001) and the more foundational bio-informational theory of emotional imagery (Lang, 1979) detail how attention is attracted to stimuli that carry motivational significance. In brief, motivational significance is considered to be a function of the valence and arousal of the incoming stimulus, which are widely viewed as the two fundamental organizing dimensions of the human affect system (Lang et al., 1993; Russell, 1980). To give some examples, high-arousing and positively valenced images typically fall under the genre of erotica; sports-related images, which often feature humans, are also positively valenced and relatively arousing. On the negative side, by contrast, high-arousing and negative images are pictures depicting attacks on people or mutilations. Ample support for this bi-dimensional model of emotion organization comes from several studies examining psychophysiological responses to affective images (Lang et al., 1993; Bradley et al., 2001; Schupp et al. 2004; Bradley et al., 2015; Ferrari et al., 2011).
In addition to clinical-psychophysiological (Lang et al., 1993) and media-psychological (Potter and Bolls, 2011) research, many neuroimaging studies have examined the reception of affective visual images by the brain. For instance, Schupp and colleagues (2004) used electroencephalography (EEG) to examine event-related potentials (ERPs) in response to emotional images. Using the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 2008), they presented images that were high in arousal and extreme in valence polarity, like erotica and mutilations, and compared these against neutral images, like household objects. It was found that the P300 component and subsequent late positivities (LPP) of the ERP are larger for positive and negative stimuli compared to neutral stimuli. The P3/LPP is an endogenous component that reflects post-sensory processing, and its generation depends on subcortical-cortical interactions that comprise arousal-related functions (Nieuwenhuis et al. 2005; Nieuwenhuis et al. 2011). Functionally, these effects have been interpreted as a form of motivated attention, a form of natural selective attention that emerges without instruction and is reflected in the enhanced allocation of processing resources, deeper processing, and behaviorally as response facilitation in reaction times or subsequent memory performance (Weymar et al., 2012; Lang et al., 1993). Motivated attention also plays a key role in the LC4MP, and the IAPS system itself makes use of images as mediated representations of actual real-life phenomena (Lang, 2009). Indeed, there exist some EEG studies that examine the reception and processing of emotional visual media content using neuroimaging methods, albeit these studies were carried out with a focus on audiovisual media like TV, advertising, and cinema (Stróżak and Francuz 2017; Reeves and Thorson 1986).
Numerous ERP studies that have examined the emotional impact of images have largely focused on stimuli at the extremes of valence and arousal in order to strongly express the phenomenon under study. While this strategy seems appropriate for studying motivated attention under neuroscientific laboratory conditions, it can be argued that such extreme content is not representative of everyday visual communication, where images are less extreme and the affective responses they evoke tend to be more moderate, at least in normal web traffic. As discussed above, the visual domain is on the rise in communication, and the limited work in communication studies has yet to incorporate neuroscience methods similar to the studies mentioned above. Studying more commonly encountered visual stimuli will be fruitful for communication work going forward.

Research challenges

In addition to moving neuroscientific studies more towards common visual stimuli, efforts are also needed to make neuroscience methods compatible with visual communication research. Schupp et al. (2004) and Ito et al. (1998) showed the value of using EEG in studying emotional responses to visual stimuli, revealing split-second differences in brain activity between emotional vs. neutral images. A further benefit of EEG is that measures can be taken without interrupting the perceptual, cognitive, or affective process, circumventing the need to ask any overt and language-dependent questions. As such, EEG methods seem promising for use as nonreactive measures that might be able to assess affective responses to images that are common during visual communication (e.g. images of disasters on newspaper websites, affective content on Instagram, etc.). Unfortunately, there are key obstacles when it comes to incorporating neuroscience methods into communication research: the money, time, immobility, and technical skills required.
These limitations holding EEG back are well known, however, and commercial companies are seeking to provide solutions. The progression of EEG technology has resulted in easy-to-use, portable, and, most importantly, inexpensive devices; there are four companies offering EEG devices under $1,000 and another nine that offer devices under $25,000 (Farnsworth, 2019). With their low cost, mobility, and ease of use, these new devices can help bring neuroscience measures into other fields of research.

One EEG system on the market that has potential for use in communication research is the Muse device from Interaxon Inc. (SCR_014418). It is a small EEG device with four sensors - AF7, AF8, TP9, and TP10 - and the Fpz electrode serving as the reference (Krigolson, Williams, Norton, Hassall, and Colino, 2017). The unit, which is shown in the schematic Figure 1, is low-cost, can transmit data through a Bluetooth connection, and can be set up in a matter of minutes. This device is sold commercially for meditation. While it has not been primarily developed for research, its dry electrodes and amplifier specifications can be considered adequate for EEG research. In fact, the Muse system has already been used to detect the P300 during an oddball paradigm (Krigolson et al., 2017). The authors tested and validated the Muse device with two tasks: an oddball task with different colored shapes, and a reward task. There was a standard group wearing an EEG cap with 64 sensors and a test group with the Muse headband. When analyzing the data, they performed a standard analysis of the 64-sensor cap and a reduced analysis using only the same four sensors and reference that the Muse system uses. The reduced analysis from the cap had results similar to the analysis of the Muse data, which successfully detected the P300 and N200.
The results for the P300 from the Muse system were comparable to the full cap; however, since the sensors for the Muse were at TP9 and TP10 with the reference at Fpz, the polarity was reversed. This was also the case when the 64-sensor cap had its analysis reduced to mimic the data from the Muse system. The success of Krigolson et al. (2017) is encouraging, but more testing is needed to certify the use of the Muse device in communication research, particularly its suitability for examining brain responses to the emotional images that play a major role in visual communication. Overall, the low-cost and portable Muse system could be a promising platform to bring neuroscience methods into a new field of study, and thus warrants additional testing.

The current study goals

Goal one

In this study, we propose to expand the scope of the Muse system from studying cognitive ERPs, as done by Krigolson et al. (2017), towards the domain of affective visual stimuli. In brief, participants will view images from the IAPS while EEG responses are recorded. The images chosen focus on positively valenced images and neutral stimuli. Images extreme in valence and high in arousal, such as mutilations and erotica, were excluded in order to focus on stimuli that are more regularly encountered during day-to-day web browsing activities.

Figure 1: Measuring ERPs from affective images with Muse EEG device. IAPS images will be presented one at a time to participants wearing the Muse system and ERPs will be recorded. A difference in ERPs is expected when viewing a chair and an astronaut in space.

A large body of research on motivated attention has consistently demonstrated that affectively valenced images prompt enhanced late positivities in the EEG, and these selectively enhanced responses seem to be based on the images’ content, as they are observed without any explicit instruction. Accordingly, we also expect a difference in ERPs towards emotional versus neutral images.
In sum, this study will test the utility of the Muse EEG device and attempt to replicate the findings of Schupp et al. (2004). Thus, hypothesis one is as follows.

H1: There will be a discernible difference between event-related potentials (ERPs) for the neutral stimuli compared to the positively valenced stimuli.

Goal two

A second aim is to improve data collection through the “foot-in-the-door” compliance-gaining technique, by giving everyone brief and easy-to-follow instructions and giving some participants more detailed and intensive instructions as the experiment progresses. If this simple compliance-gaining technique can improve the information gathered, it will be useful to implement in all future experiments. EEG quality metrics can serve as an indirect and objective outcome metric for persuasion success.

The applied motivation for this goal lies in the fact that EEG data are noisy and that even minor movements create artifact signals that are multiple times larger than the to-be-measured EEG signals. Thus, great care is taken to ensure that participants comply with instructions that maximize data quality. However, as previously expensive neuroscientific equipment becomes commodified, as is the case with the Muse system (less than 200 USD), more inexperienced users, citizen scientists, and early career researchers may be inclined to enter this area. As these users may be less aware of the issue, it will become more important to provide best practices, standardized instructions, and other types of training to ensure that the data they gather is of use for science. Of note, the issue of standardization of analysis pipelines and reproducibility has recently been very prominent in neuroscience and has begun to attract attention in communication (Poldrack et al. 2017; Gorgolewski et al.
2016), but the related issue of quality control through well-standardized and outcome-optimized laboratory practices, and how these are influenced by social-psychological factors, has been far less prominent. For example, Orne’s famous article “On the social psychology of the psychological experiment” (1962) seems to have been largely forgotten, although these issues are perhaps among the most promising avenues to enhance the reproducibility of often context-dependent social-psychological effects (Van Bavel et al., 2016). Aside from these practical considerations, there is also much scientific merit in studying the role of instructions as a communication and social-cognition phenomenon (De Houwer et al., 2017). Clearly, the way that instructions are delivered and worded can have a profound effect on participants’ behavior within an experiment - a simple and ubiquitous real-life instance of persuasion or compliance gaining.

One powerful compliance-gaining technique is the “foot-in-the-door” technique. In 1966, Freedman and Fraser demonstrated that when homeowners received a small request that was followed by a later, larger request, they were far more likely to comply with the larger request than a control group. The same effect was replicated in an organ donation study in which some subjects were asked to fill out a questionnaire about organ donation; two weeks later, those subjects who took the questionnaire were significantly more willing to be organ donors than subjects who did not get the questionnaire (Carducci, Deuser, Bauer, Large, & Ramaekers, 1989). In 1999, Gueguen and Fischer-Lokou took this phenomenon to the street and asked people for money, but some of the subjects were first asked for the time of day. Their results showed that subjects who were asked the time were more likely to give money, and they gave more money on average (Gueguen & Fischer-Lokou, 1999).
The “foot-in-the-door” is a tried-and-true method for compliance gaining, and studies have replicated results supporting it across many different contexts. As discussed above, it is crucial for the success of neuroscientific studies that participants listen and adhere to instructions from the administrators. Indeed, having participants remain relatively still is often one of the most important aspects of obtaining clean data. There are often a lot of instructions given to participants before the start of an experiment, and requesting that they limit all body and head movement is a big ask that they are unlikely to fully adhere to. Furthermore, as the experiment goes on, complying with this instruction is associated with a fair amount of discomfort and self-regulation. However, if a smaller request, such as “please sit still,” precedes the much larger request, participants will be more likely to heed the instructions, according to the “foot-in-the-door” literature. It would be incredibly advantageous if such a simple change in directions to participants could yield better results for these types of studies. Moreover, given that EEG quality metrics - i.e. the number of unusable trials, eye-blinks, or paroxysmal artifacts due to body motion - comprise an objective measure that is not distorted by social desirability or introspection bias, we can use these metrics as an objective outcome of the “foot-in-the-door” compliance-gaining technique, which is generally considered the gold standard in persuasion research (Rhodes and Ewoldsen 2013). An additional advantage is that EEG quality metrics, such as the sample drop rate (i.e. the fraction of trials that are dropped due to missing quality standards), can be considered an implicit measure (Nosek et al. 2011).
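To make the outcome metric concrete: the sample drop rate reduces to a simple ratio of rejected to recorded trials. The sketch below illustrates this computation; the function name and the trial counts are illustrative assumptions, not values from the study.

```python
def sample_drop_rate(trials_total: int, trials_dropped: int) -> float:
    """Fraction of trials rejected for failing quality standards."""
    if trials_total <= 0:
        raise ValueError("trials_total must be positive")
    return trials_dropped / trials_total

# Hypothetical counts: 60 image presentations per session,
# 9 trials rejected for one group, 4 for the other.
control = sample_drop_rate(60, 9)
experimental = sample_drop_rate(60, 4)
print(f"control: {control:.1%}, experimental: {experimental:.1%}")
```

Because the rejection of a trial is decided by fixed signal-quality thresholds rather than by the participant or the experimenter, this ratio serves as the objective, non-self-report outcome described above.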
Thus, whereas the decisions to donate money, fill out organ donation forms, or other outcomes of “foot-in-the-door” studies involve a rather high amount of reflective cognition (Strack and Deutsch 2004), the somatomotor functions that control bodily motion during an EEG experiment are far more automatized and likely to operate outside of conscious awareness. In sum, these considerations lead us to deduce the second hypothesis for the study.

H2: The data sample drop percentage for participants who received additional instructions (experimental group) will be lower than for participants who did not receive additional instructions (control group).

Goal three

Another goal, pursued within a subsample of participants, was to examine the processing of target vs. non-target stimuli, which is more closely related to classical work on the P300 ERP component. The P3 component is perhaps the most prominent ERP component, with thousands of published P3-ERP experiments focusing on mostly cognitive tasks (Luck, 2005). Thus, while the main goal of this study was to examine the reception of affective images, testing a paradigm that resembles the early work investigating the P3 would be beneficial to help further validate the Muse device. In the majority of classical P3 studies (e.g. Courchesne et al., 1975; Isreal et al., 1980; Kutas et al., 1977) the participants’ task was to discern targets from non-target stimuli, or to attend to sequences of stimuli that were interrupted by salient ‘oddball’ stimuli. These tasks all elicited strong P3 responses, and further research found that the amplitude of the P3 was related to the uncertainty of a target stimulus and the resource demands of the task (Luck, 2005). The greater the uncertainty, the larger the P3 amplitude when a target is finally presented; likewise, the more resources allocated to the task, the stronger the recorded response (Luck, 2005).
However, if subjects struggle to discern the difference between the targets and non-targets, the P3 amplitude will not be as strong. This leads to the task created for this study, which will ask participants to count target stimuli versus non-target stimuli. The target stimuli will be edited versions of the affective images used in the first part of the study, which will be easily identifiable by the participants. There will be a large sample of images presented in a two-minute period, which will require a great deal of resource allocation. This leads to the final hypothesis for this study. (Note: the study made use of two subsamples, with the second sample receiving an additional test, which served to further validate the use of the Muse device by trying to capture an ERP during a task. This third goal is referred to as “Part B” moving forward.)

H3: There will be a significant difference between event-related potentials during a counting task for target stimuli compared to non-target stimuli.

Methods

Participants

Undergraduates (n = 70) from a large Midwestern university were randomly assigned to the control or experimental group. There were two subsamples, each consisting of 35 participants. The second sample participated in additional ERP research, referred to as Part B, involving the task of counting target stimuli. All additional testing happened after the original research trials. Some participants were excluded due to technical issues with the EEG device and excessive participant movement.

Apparatus

All participants wore an original Muse EEG device from Interaxon Inc. in Toronto, Canada (SCR_014418). The Muse headband is a commercial EEG device that has four electrodes at locations corresponding to AF7, AF8, TP9, and TP10, with a reference at Fpz. The headband recorded the EEG data at a sampling rate of 250 Hz. The device streamed data via a Bluetooth connection through an application called Bluemuse (Kowaleski, 2019).
This was connected to a Python notebook (Keasson, 2019), which visualized the data stream in real time and stored the data. Data were streamed for the whole duration of the study, and the stimuli were presented through PsychoPy (Peirce et al., 2019). This software package enabled markers to be embedded in the EEG data stream to note when and which stimulus was presented, aiding the process of ERP analysis. The visual stimuli were presented on a 14” LCD monitor at full brightness. The distance from the monitor was kept the same for each participant. Stimuli The stimuli used in the experiment were a total of 60 IAPS images (Lang et al., 2008): 30 positively valenced images and 30 neutral images. The positively valenced images were mildly arousing (nature and sports images), as defined by IAPS norms. Highly arousing erotica images were not used. The neutral images were household objects in neutral colors, which are low in arousal and neutral in valence. Figure 2 shows normative ratings of valence and arousal for the 60 images. Figure 2: IAPS image ratings. The ratings for arousal and valence of the 60 images used as stimuli in this study are graphed in this scatter plot. The images are clearly organized into two groups: positive and neutral. Part B stimuli In this portion of the study the same 60 images were used; however, half of the images were edited to distinguish them as targets for the participants to count. Half of the positive and half of the neutral stimuli were randomly selected to be edited with a bright green square in the middle. This made them distinctive targets for participants to focus on and count. Figure 3 depicts the differences between an image from the first part of the study and the edited version used in Part B. Figure 3: Comparing stimuli from Part A to Part B. The image on the left is the unedited neutral image, used in the positive versus neutral stimuli test.
The image on the right is the edited image, with the green square, signifying that it is a target to be counted by participants. These edits were the same on the neutral and positively valenced stimuli. Procedure Upon arrival, participants were shown to the computer station within the lab. They were instructed to read the consent form and to sign at the bottom. The participants received a brief verbal description of the EEG device and were instructed how to put it on. Once the device was on, the quality of the signal was checked with the data stream visualization tool in Python. Any issues with the data stream were resolved by adjusting the EEG position or by reestablishing the Bluetooth connection. Once a quality data stream was established, the participants were instructed to “Please sit still and focus on the middle of the screen.” A two-minute stimulus package then presented the IAPS images (viewing session one). The participants viewed the assortment of IAPS images for two minutes, and the images were time-locked and marked in the EEG recording through the PsychoPy software. After session one was completed, the participants were told that they could relax before session two. Once the participants were ready for session two, the signal was checked again. Before starting the stimulus, participants received one of two instructions. The control group received the same instruction as in session one: “Please sit still and focus on the middle of the screen.” The participants in the experimental group received the instruction below, which served as the more intensive request: “It is imperative to focus on the middle of the screen. Please relax and limit all body, head, and facial movements. If possible, minimize the amount that you blink during the two short presentations. Any movement can affect the signal and we hope to obtain the best results possible during this video.” After the instruction was given, the stimulus package was started.
It was the same package of photos, consisting of a series of neutral and positively valenced images. Following the end of the second viewing, the participants were told they could remove the EEG headband, then were debriefed, thanked, and compensated for participation. If the participants were part of the second subsample, there was an additional viewing. Part B procedure For participants in the second subsample, there was an additional run that immediately followed the two picture viewings. This run, however, contained edited and unedited IAPS images from the previous viewings, and participants were given the instruction to count the target stimuli, i.e. the images that were edited to contain green squares in the center. The presentation lasted for a similar amount of time as the previous viewing sessions. After the end of the viewing session, participants were asked how many target images they had counted, and then they were told they could remove the EEG device. EEG data analysis EEG data were analyzed using MNE-Python software (Gramfort et al., 2013). The Muse device has four sensors - AF7, AF8, TP9, TP10 - with a reference at Fpz, which also serves as the reference during analysis. The recorded data were loaded, filtered with a bandpass filter from 0.1 Hz to 15 Hz, and epoched from 100 milliseconds before stimulus onset to 800 milliseconds after. Rejection of epochs due to artifacts was based on MNE’s automated artifact rejection routines and complemented by visual inspection (Jas et al., 2017; Gramfort et al., 2014). The segmented data were separated by the conditions of the experiment - neutral and positive images - based on the marker signal sent out by the PsychoPy presentation software (Peirce and MacAskill, 2018). Finally, clean epochs from each condition (positive, neutral) were averaged to create ERPs for each individual subject and averaged across subjects to produce grand-average waveforms.
From the 240 trials that each participant saw over the course of the experiment (of note, some participants received slightly fewer trials due to a change in the code, but all received well over 200 trials), we applied fairly strict artifact control criteria that led to a rejection of about 49% of the trials (average across participants: 49.8%, SD = 19.9%). The ERP waveforms were subsequently computed by averaging together the clean epochs for positive and neutral images, respectively. These averages were based on approximately 56 epochs per condition (Mpos = 56.74, SD = 22.6; Mneu = 56.06, SD = 22.94; t-test for dependent samples, n.s.). Finally, the ERPs for positive/neutral images from individual subjects were averaged to derive a grand-average ERP, and subtracted from each other to obtain a difference waveform, which represents the cortical signature of the difference between positive and neutral stimuli. Similar analysis steps were used in Part B, but with target versus non-target stimuli instead of positive versus neutral. Results ERP results The ERP results demonstrate that the Muse device was able to capture the millisecond-by-millisecond electrocortical signature evoked during passive picture viewing. As is evident from the grand-averaged ERPs shown in Figure 4, the signals measured at sensors TP9 and TP10 reveal waveforms that are consistent with the ERP literature on visual picture viewing (Luck, 2005). These results provide strong evidence that the Muse device can be used to conduct event-related potential studies. Figure 4: Grand-average ERP waveforms for positive and neutral images. The schematic figure in the center illustrates the approximate position of the Muse device, which measures EEG data from four channels (the reference electrode lies between AF7 and AF8). Passive viewing of positive vs. neutral images With respect to the ERPs towards mildly positive versus neutral images, there was no evidence of a differential effect.
Rather, the ERP waveforms for positive and neutral images largely resembled each other, which further attests to the robustness of ERP measurement itself, although it is contrary to our hypothesis. To statistically analyze the waveforms for the two conditions, mean amplitudes for each grand average were computed over a time window from 150 to 300 ms. The mean amplitude was 2.29 microvolts for the positive images and 2.02 microvolts for the neutral images. A t-test (t = 0.43, p = 0.67) revealed no significant difference between the two conditions. ERP stability analysis In an additional analysis, we explored the stability of these results. Specifically, we conducted two independent analyses with two independent subsamples. As illustrated in Figure 5, the ERP waveforms of the two smaller samples show very high consistency, with temporal correlations of rpositive: A vs. B = 0.945 and rneutral: A vs. B = 0.936 (p’s < 0.0001). Aside from demonstrating the consistency of the overall waveform morphology, we also find that the peaks are temporally very similar (positives: sample A: 254 ms, sample B: 262 ms; neutrals: sample A: 234 ms, sample B: 250 ms). Furthermore, the bottom right plot in Figure 5 shows for comparison ERP waveforms that were obtained in earlier research on affective picture processing using a high-density 256-channel EEG system (for reference, see Flaisch et al., 2008). Of note, these ERP results stem from a similar affective image viewing paradigm that included many high-arousing stimuli and used a faster presentation rate. To facilitate comparison with the current results, we re-referenced the ERP waveforms to the Fpz reference sensor, which is the Muse’s physical reference, and the sensor shown in Figure 5 corresponds roughly to sensor TP9 of the Muse system. As can be seen, the ERP results are quite similar across the two studies, which further underscores that the Muse system is capable of recording ERPs with adequate quality.
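The two analyses described above, extracting per-subject mean amplitudes in the 150-300 ms window for a paired t-test and correlating the grand-average waveforms of two subsamples, can be sketched as follows. All data here are simulated; the subject count, split, and amplitudes are illustrative, not the study's values.

```python
import numpy as np
from scipy import stats

# Simulated per-subject ERP waveforms (subjects x timepoints, microvolts)
rng = np.random.default_rng(1)
sfreq = 250.0
times = np.arange(-0.1, 0.8, 1 / sfreq)       # -100 ms to +800 ms
n_sub = 40                                     # placeholder sample size
erp_pos = rng.normal(2.3, 1.0, size=(n_sub, times.size))
erp_neu = rng.normal(2.0, 1.0, size=(n_sub, times.size))

# Mean amplitude in the 150-300 ms window, one value per subject
win = (times >= 0.150) & (times <= 0.300)
amp_pos = erp_pos[:, win].mean(axis=1)
amp_neu = erp_neu[:, win].mean(axis=1)

# Paired (dependent-samples) t-test of positive vs. neutral amplitudes
t, p = stats.ttest_rel(amp_pos, amp_neu)

# Stability check: temporal correlation between the grand-average
# waveforms of two independent subsamples, as in the split analysis
ga_a = erp_pos[: n_sub // 2].mean(axis=0)
ga_b = erp_pos[n_sub // 2 :].mean(axis=0)
r, p_r = stats.pearsonr(ga_a, ga_b)
```

With real data, high values of `r` across subsamples would mirror the waveform consistency reported above, while `t` and `p` correspond to the window-amplitude comparison.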
Overall, these findings bolster our confidence in the results, provide further validation for using the Muse EEG device as a scientific research tool, and suggest that this is possible even with moderately sized samples. Figure 5: Grand-average ERP waveforms for positive and neutral images, averaged across sensors TP9 and TP10. The top panel shows the analysis across the entire sample, and the panels below show the two subsamples, which help demonstrate the stability of the results. The bottom right panel is from Flaisch et al. (2008) and shows results from a sensor that corresponds to TP9, with a reference similar to the Muse’s. The similarity between the ERP waveforms supports the results from the Muse. “Foot-in-the-door” analysis Our second hypothesis was about whether the “foot-in-the-door” compliance gaining technique could be used to improve signal quality. Specifically, we reasoned that the additional instruction that the experimental group received would lead to a decrease in the sample drop rate in the EEG data, which in this case serves as an implicit measure of persuasion success. In order to quantify this, we analyzed the sample drop percentage rates for each session separately for the control and experimental groups. This analysis revealed that the average sample drop rate for the first session of the control group (M = 44.4, SD = 18.4) was similar to that of the first session of the experimental group (M = 40.6, SD = 26.3). For the second session, the experimental group (i.e. with added “foot-in-the-door” directions) had a mean drop rate of 31.1 (SD = 18.9), whereas the control group had a mean drop rate of 49.2 (SD = 20.8). An ANOVA with “Session” as the within- and “Condition” as the between-subjects factor revealed a significant Session × Condition interaction.
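One way to illustrate this 2 (Session: within) × 2 (Condition: between) design is to exploit the fact that, in a 2 × 2 mixed ANOVA, the interaction test is equivalent to an independent-samples t-test on the per-participant difference scores (session two minus session one). The sketch below uses simulated drop rates whose means loosely mirror those reported above; none of it is the study's actual data or code.

```python
import numpy as np
from scipy import stats

# Simulated drop-rate data in percent (35 participants per group);
# means/SDs are placeholders echoing the values reported in the text
rng = np.random.default_rng(2)
ctrl_s1 = rng.normal(44.4, 18.4, 35)
ctrl_s2 = rng.normal(49.2, 20.8, 35)
expt_s1 = rng.normal(40.6, 26.3, 35)
expt_s2 = rng.normal(31.1, 18.9, 35)

# Session x Condition interaction, tested as an independent-samples
# t-test on within-subject difference scores (equivalent in a 2x2
# mixed design to the ANOVA interaction F-test)
diff_ctrl = ctrl_s2 - ctrl_s1
diff_expt = expt_s2 - expt_s1
t_int, p_int = stats.ttest_ind(diff_ctrl, diff_expt)

# Follow-up comparisons analogous to those reported in the text
t_between, p_between = stats.ttest_ind(ctrl_s2, expt_s2)  # session 2, control vs. experimental
t_within, p_within = stats.ttest_rel(expt_s1, expt_s2)    # experimental group, session 1 vs. 2
```

A full mixed ANOVA (e.g. via a dedicated statistics package) would report the same interaction as an F-statistic, with F equal to the square of `t_int`.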
Following up on this interaction, t-tests revealed a significant difference between the control and experimental groups’ drop rates in the second session (p = 0.0032) and a significant reduction in the drop rate from session one to session two in the experimental group (p = 0.0485). These results can be seen in Figure 6. These results support the second hypothesis, in that the “foot-in-the-door” compliance gaining technique resulted in better signal quality, as seen in a significantly lower sample drop rate. Figure 6: Comparative analysis of the “foot-in-the-door” conditions. The figure compares the sample drop percentages of the control and experimental groups, broken down by sessions one and two. The difference between the experimental group’s sample drop rates in sessions one and two was significant (p = 0.0485). There was a significant difference (p = 0.0032) between the control group and experimental group in session two. Part B analysis The analysis for Part B was similar to the analysis for the first hypothesis, with the difference that we now compared ERPs towards target versus non-target images as opposed to positive versus neutral images. Additionally, only the second subsample participated in this part of the study, resulting in only 23 participants who provided usable data for the current analysis. In Figure 7, the ERP waveforms for the targets and non-targets are similar. The waveform difference was calculated in a similar fashion as the ERP results above (the time window was changed to 300 - 600 ms), and the t-test revealed no significant difference (t = 0.758, p = 0.457) between the targets’ mean amplitude (M = 1.21) and the non-targets’ mean amplitude (M = 0.90). Additionally, Figure 7 shows a comparison to Krigolson and colleagues’ (2017) ERP results for the oddball paradigm task. While the results do not look all that similar, the positivity at 300 ms for the oddball condition in Krigolson et al.
(2017) and in condition 2 (the target stimuli) potentially resemble each other. Reasons for the waveforms differing between the two studies and for the lack of support for hypothesis three are discussed below, and could be attributed to the task paradigm, the stimuli, and the smaller sample size for this part of the study. Figure 7: The ERP waveform results for counted targets versus non-targets. The figure on the left displays the waveforms for the targets (in gray) and the non-targets (in green), and it reveals that there was no significant difference between the waveforms. The figure on the right shows ERP results from the oddball paradigm of Krigolson et al. (2017). While the waveforms may at first appear dissimilar in overall morphology and amplitude (due to differences in presentation rate, content, etc.), closer inspection does reveal similar waveform trajectories in the 200-350 ms interval. Discussion This study serves as a methodological advancement, as it validates the use of a novel, affordable, and mobile EEG device in communication research by performing an ERP experiment with affective IAPS images. Additionally, this study tested the “foot-in-the-door” compliance gaining technique, utilizing social influence to gather more reliable neuroimaging data. Positively and neutrally valenced IAPS images served as stimuli for participants to view in serial presentation while wearing the Muse EEG device. The first hypothesis expected a significant difference between the ERPs for positive images compared to neutral images. While the Muse EEG successfully captured event-related potentials, the results indicate that no difference was found between the two conditions. The second hypothesis was related to the social influence test: participants who received additional and more intensive instructions after the first viewing session would have a lower sample drop percentage than those who did not receive the additional instructions.
This implicit measure of the “foot-in-the-door” persuasive technique was supported in the results, with a significantly lower sample drop rate for the experimental group. The third hypothesis sought to further test the Muse device with a task for participants, but the results failed to support the hypothesis. In sum, this experiment employed an emerging methodology to study communication, validated the use of a new EEG system, and showed that social influence techniques can be implemented to improve data collection. The first hypothesis predicted a significant difference in event-related potentials when participants viewed positively valenced versus neutral IAPS images. This hypothesis was not supported by the results, as can be seen in Figures 4 and 5. The Muse EEG detected ERPs for both positive and neutral stimuli, but the ERPs did not differ from each other. While no psychological effects were detected in this study, the novel EEG system was able to capture potentials for the visual stimuli, which validates the research applications of the device. The lack of support for hypothesis one could be due to the positive stimuli only receiving moderate ratings of arousal and valence. One participant noted in the debrief that they did not identify the difference between positive and neutral images. The IAPS images that depicted erotica were excluded from this study in order to maintain a focus on visual communication that would be encountered on common social networking websites, like Instagram and Facebook, but those stronger stimuli could have elicited a detectable effect. Hypothesis one was not supported by the results, but the ERP results were still promising, in that they supported the use of the Muse EEG device. The second hypothesis predicted that using the “foot-in-the-door” compliance gaining technique on participants would result in a lower sample drop percentage than for participants who did not receive the social influence tactic.
This hypothesis was supported by the results seen in Figure 6. EEG data can be affected by any bodily movements, and significant movements can result in epochs being rejected. Every participant in ERP studies has epochs rejected, which presented a unique opportunity for an implicit measure of the success of social influence techniques. Participants who received more intensive instructions after the first viewing session had a significantly lower percentage of epochs rejected. This result is promising, as a simple change in instructions could yield better data in a variety of neuroimaging and psychophysiological studies that rely on participants limiting their body movements. Additionally, these results support previous studies (Freedman and Fraser, 1966; Carducci et al., 1989; Gueguen & Fischer-Lokou, 1999) that have tested the “foot-in-the-door” compliance gaining technique. The support for the second hypothesis can help guide future uses of the Muse EEG in studies by improving data quality through carefully crafted instructions that make use of social influence. The third hypothesis predicted that there would be a difference in ERPs between target and non-target stimuli in a task-based scenario, but the data did not support this hypothesis. The fact that we did not observe a significant difference between the target and non-target stimuli might be due to several factors. The stimuli used in Part B were the same images as in Part A, with some edited to distinguish them as targets, and it was shown that these images on their own, and without direction, prompted an identifiable ERP response - they did so again in Part B. Any ERP response elicited by the target counting task would thus have to be stronger than this baseline to yield a noticeable result. Furthermore, target probability and high uncertainty are known to strongly influence the amplitude of the P3 response (Luck, 2005).
In this test, the frequency of targets was the same as that of non-targets - 50%. Thus, it was only the target versus non-target status of the stimuli that could elicit a P3 enhancement, but not the oddball effect (i.e. rare stimuli prompting high attention from a sense of surprise). We can only speculate why the instruction to count the targets was not sufficient to prompt a P3 enhancement. One possibility is that participants internally performed a yes-no categorization task in which both targets as well as non-targets are relevant (although targets still would require additional updating of the count). In any case, going forward it would be advisable to increase the strength of the target/non-target manipulation by adding an oddball-type manipulation (e.g. presenting stimuli at a 20 vs. 80% ratio, as in Krigolson et al., 2017), by making the targets more salient, or by removing the images altogether and working with simplified circles and squares. Overall, while the results failed to support the third hypothesis, there is reason to test this again with a new paradigm that is more likely to elicit a strong P3 response. From a methodological perspective, with the goal of validating the use of the Muse device in communication research, this study successfully showed that the Muse is capable of capturing an ERP in a picture-viewing paradigm that is relevant for communication research, particularly image-sharing on social media and associated topics (Meshi et al., 2015). This is an important finding, as communication and neuroscience continue to evolve and become more integrated (Schmälzle and Meshi, 2020; Weber et al., 2015). The Muse system is an affordable and easy-to-use EEG system, which makes it a very attractive option for incorporating neuroimaging into communication research, given cost limitations but also the high potential scalability of such a system. Krigolson et al.
(2017) revealed the capabilities of the Muse to capture ERPs in a reward and oddball paradigm, but this study moved the Muse into visual communication, which expands the use of this commercially available device. The results seen in Figure 5 illustrate the capability of the Muse to successfully capture ERPs in a communication study. Furthermore, having two subsamples in this study provided additional evidence, also seen in Figure 5, for the consistency of this EEG unit. This study demonstrated the potential of EEG research in the field of communication, and the use of the Muse device going forward. It no longer seems far-fetched that EEG devices will be integrated into “wearables”, such as the iPhone, smartwatches, or glasses, and the rise of VR platforms certainly creates unique opportunities for integrating bio-behavioral measures to study media reception and consumption processes. While the study was methodologically successful, in failing to detect any differences in brain activity for the IAPS images it failed to support the motivated attention model. The results of this study failed to replicate the findings from Schupp et al. (2004), which showed that positively and negatively valenced images elicited significantly different ERPs from neutral images. This failure could be due to the focus on different visual stimuli. Schupp et al. (2004) incorporated the use of erotica and mutilations, which are far more arousing and have higher ratings of valence compared to the images that were used in this study. Krigolson et al. (2017) were able to show that the Muse is sensitive enough to detect differences in motivational processes in their reward paradigm. The differences in the stimuli used in this study may not have been strong enough to elicit the psychological effect of motivated attention found in Schupp et al. (2004).
Strengths The lack of significant ERP differences aside, there are many positives that came from this study. The field of communication can benefit greatly from incorporating new methods from the field of neuroscience, but neuroscience methods traditionally have been expensive and difficult to implement. However, the Muse EEG system is incredibly affordable, easy to use, and scalable for large studies. The set-up time for this device is a matter of minutes, enabling far more rapid experiment times compared to older EEG systems or fMRI. The affordable price makes it far more accessible for researchers everywhere. These factors make it easy to scale studies up to much larger samples, including “big-data” samples of thousands of participants. Furthermore, multiple Muses can be used at a time, which opens them up for use in interaction studies and for analyzing the reactions of audiences (Schmaelzle and Grall; Dikker et al., 2017). In all, the validation of this system makes the inclusion of neuroimaging in a variety of communication studies far more feasible. In addition to the validation of the Muse, this study provides strong support for the inclusion of persuasive techniques in the instructions for neuroimaging studies to improve data quality. The use of the “foot-in-the-door” technique in this study significantly improved the data collection. EEG data face an uphill battle against outside noise, especially as devices become more affordable and mobile, so any opportunity to improve data quality is important. The “foot-in-the-door” compliance gaining technique has been thoroughly tested (Freedman and Fraser, 1966; Carducci et al., 1989; Gueguen & Fischer-Lokou, 1999), but this study provides support for its use to obtain improved data in neuroimaging studies.
Limitations There were some limitations to consider in this study regarding the sample drop rates for participants, the valence and arousal ratings of the stimuli, having only two conditions in the “foot-in-the-door” test, and the paradigm issues in Part B. The sample drop rate average was almost 50 percent, with some viewing sessions for participants exceeding that. This could be due to the four dry sensors on the device. Many EEG units make use of conductive gels or liquid solutions to help record the electrical activity under the skull. Having dry sensors makes it more difficult to record that signal and to combat the noise. Additionally, the signal is amplified within the device, which is a far cry from the much larger amplifiers of other EEG systems. Lastly, the signal is sent to the recording computer through a Bluetooth connection, which can drop data in the process. However, this study collected data in two subsamples, and when comparing the subsamples, the ERP results are highly correlated, showing that the device is capable of consistency despite the drop rate. Another limitation is the stimuli selected for the positive and neutral images. Previous IAPS studies (Schupp et al., 2004; Lang et al., 1993) made use of IAPS images that contained mutilations and erotica, which are significantly different in arousal and valence ratings compared to the neutral stimuli. In this study, only moderately positive stimuli were used, which may not have elicited the psychological effects that were desired, or any effect they may have had was not strong enough to be detected by the Muse. Additionally, the study relied on the original IAPS ratings, and participants were not asked to rate the images after participating. Future studies could test the Muse with the same stimuli that were used in previous studies (Schupp et al., 2004; Lang et al., 1993).
In designing this study, only two conditions were created for the “foot-in-the-door” test; a third condition in which participants received the intensive instructions at the beginning of the study was lacking. As it stands, it cannot be ruled out that the more intensive instructions given at the beginning would also have resulted in a lower sample drop, and the third condition would have helped rule this out. This can be tested again in future studies; however, as it is such an easy change to participant instructions, it seems like a valuable addition to any EEG study going forward. Lastly, for Part B of this study, the task created and the stimuli used could be improved to elicit the strong P3 response seen in other cognitive ERP studies. Making the target presentation more uncertain, by changing the ratio of target to non-target stimuli, would help elicit the P3 response that was expected in this test (Luck, 2005). Additionally, using stimuli better designed for this task, which lack the confound of producing an ERP themselves, would help in identifying any recognizable ERP components. Furthermore, only the second subsample participated in this portion of the study, which means the sample size was much smaller. This task-based ERP test could be repeated, adhering closer to paradigms from the original P3 studies, to further validate the Muse in ERP research moving forward. Future research Based on our encouraging results, future studies may now begin to tackle a broad variety of social-cognitive processes using immensely scalable neuroimaging methods. Although, of course, the spatial and temporal resolution of these devices remains below that of state-of-the-art tools (3-7T fMRI or high-density EEG), the high scalability, ease of use, and practically no-cost nature represent a decisive factor that can boost the adoption of mechanistically focused neuroimaging methods in the communication discipline.
One area for which this approach is obviously promising is the study of social-media sharing decisions (Scholz et al., 2019; Meshi et al., 2015) and the mass appeal and virality of emotional and social content more broadly (Hu et al., 2014; Tong et al., 2020). The current study focused on the role of visual images in social-emotional processes, but going forward, it is clear that the benefits of the Muse EEG and other low-cost EEG systems apply to other modalities. For instance, arguments similar to those we laid out for the impact of emotional and social visual content can be made for spoken and written messages. Indeed, several precursor studies exist in this domain examining the neural reception of buzzwords (Kissler et al., 2007) or clashing moral statements (Van Berkum et al., 2009). This work could likewise be scaled up using the strategy proposed in this article. Another avenue for research is the study of dynamic audio-visual media, be it on the order of seconds (YouTube, Snapchat, or Instagram) or hours (movies, TV, cinema). In the current study, we focused on event-related potential methods because they use repeated presentations to increase the signal-to-noise ratio, but future work should explore whether more advanced EEG analysis methods, such as inter-subject correlation analysis or entrainment-based methods (Schmaelzle and Grall; Lalor et al., 2006; Crosse et al., 2016), can be employed to examine the reception of dynamic messages. Conclusion Throughout history, visual communication has played a crucial role in sharing relevant social information, and its value is as important as ever, with the internet serving as a platform for social media websites like Instagram and Facebook that rely heavily on visual communication. There is immense value in studying visual communication from a communication neuroscience viewpoint, as visual processing has been studied extensively in neuroscience already.
Incorporating neuroscience in communication has been held back by difficult and expensive methodologies, but advancements in EEG technology have opened the door for this research. The Muse EEG system is an easy-to-use and extremely affordable EEG unit that can provide the temporal resolution needed to study visual processing. This study validates the use of the Muse in communication research by performing an ERP analysis of participants viewing affective images. The Muse successfully detected ERPs when participants viewed images, and data quality was significantly improved through the “foot-in-the-door” compliance gaining technique. The successful capture of ERPs with the Muse validates the use of the device in future communication studies and helps serve as a crossroads for the methods of neuroscience and the study of communication. REFERENCES Adolphs, R. (2003). Cognitive neuroscience of human social behaviour. Nature Reviews Neuroscience, 4, 165–178. Adolphs, R., Nummenmaa, L., Todorov, A., & Haxby, J. V. (2016). Data-driven approaches in the investigation of social perception. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 371. https://doi.org/10.1098/rstb.2015.0367 Balm, J. (2014, August 11). The power of pictures. How we can use images to promote and communicate science. BioMed Central. http://blogs.biomedcentral.com/bmcblog/2014/08/11/the-power-of-pictures-how-we-can-use-images-to-promote-and-communicate-science/ Bear, M. F., Connors, B. W., & Paradiso, M. A. (2016). Neuroscience: Exploring the brain (4th ed.). Philadelphia, PA: Wolters Kluwer. Bradley, M. M., Codispoti, M., Cuthbert, B. N., & Lang, P. J. (2001). Emotion and motivation I: Defensive and appetitive reactions in picture processing. Emotion, 3, 276–298. Bradley, M. M., Keil, A., & Lang, P. J. (2012). Orienting and emotional perception: Facilitation, attenuation, and interference. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2012.00493
Bradley, M. M., Costa, V. D., Ferrari, V., Codispoti, M., Fitzsimmons, J. R., & Lang, P. J. (2015). Imaging distributed and massed repetitions of natural scenes: Spontaneous retrieval and maintenance. Human Brain Mapping, 36(4), 1381–1392.

Carducci, B., Deuser, P., Bauer, A., Large, M., & Ramaekers, M. (1989). An application of the foot-in-the-door to organ donation. Journal of Business and Psychology, 4, 245–249.

Courchesne, E., Hillyard, S.A., & Galambos, R. (1975). Stimulus novelty, task relevance and the visual evoked potential in man. Electroencephalography and Clinical Neurophysiology, 39, 131–143.

Crosse, M. J., Di Liberto, G. M., Bednar, A., & Lalor, E. C. (2016). The multivariate temporal response function (mTRF) toolbox: A MATLAB toolbox for relating neural signals to continuous stimuli. Frontiers in Human Neuroscience, 10, 604.

De Houwer, J., Hughes, S., & Brass, M. (2017). Toward a unified framework for research on instructions and other messages: An introduction to the special issue on the power of instructions. Neuroscience and Biobehavioral Reviews, 81, 1–3.

Dikker, S., Wan, L., Davidesco, I., Kaggen, L., Oostrik, M., McClintock, J., Rowland, J., Michalareas, G., Van Bavel, J. J., Ding, M., & Poeppel, D. (2017). Brain-to-brain synchrony tracks real-world dynamic group interactions in the classroom. Current Biology, 27(9), 1375–1380. https://doi.org/10.1016/j.cub.2017.04.002

Farnsworth, B. (2019). EEG headset prices – An overview of 15+ EEG devices. https://imotions.com/blog/eeg-headset-prices/

Ferrari, V., Bradley, M. M., Codispoti, M., & Lang, P. J. (2011). Repetitive exposure: Brain and reflex measures of emotion and attention. Psychophysiology, 48(4), 515–522.

Forgas, J. P., Vincze, O., & László, J. (2013). Social Cognition and Communication. Psychology Press.

Flaisch, T., Junghöfer, M., Bradley, M. M., Schupp, H. T., & Lang, P. J. (2008). Rapid picture processing: Affective primes and targets. Psychophysiology, 45(1), 1–10.
Freedman, J.L., & Fraser, S.C. (1966). Compliance without pressure: The foot-in-the-door technique. Journal of Personality and Social Psychology, 4, 195–202.

Gramfort, A., Luessi, M., Larson, E., Engemann, D., Strohmeier, D., Brodbeck, C., Goj, R., Jas, M., Brooks, T., Parkkonen, L., & Hämäläinen, M. (2013). MEG and EEG data analysis with MNE-Python. Frontiers in Neuroscience, 7.

Gramfort, A., Luessi, M., Larson, E., Engemann, D. A., Strohmeier, D., Brodbeck, C., Parkkonen, L., & Hämäläinen, M. S. (2014). MNE software for processing MEG and EEG data. NeuroImage, 86, 446–460.

Gorgolewski, K. J., Auer, T., Calhoun, V. D., Cameron Craddock, R., Das, S., Duff, E. P., Flandin, G., Ghosh, S. S., Glatard, T., Halchenko, Y. O., Handwerker, D. A., Hanke, M., Keator, D., Li, X., Michael, Z., Maumet, C., Nolan Nichols, B., Nichols, T. E., Pellman, J., … Poldrack, R. A. (2016). The Brain Imaging Data Structure: A standard for organizing and describing outputs of neuroimaging experiments. bioRxiv, 034561. https://doi.org/10.1101/034561

Gueguen, N., & Fischer-Lokou, J. (1999). Sequential request strategy: Effect on donor generosity. The Journal of Social Psychology, 135, 669–671.

Hu, Y., Manikonda, L., & Kambhampati, S. (2014). What we Instagram: A first analysis of Instagram photo content and user types. Eighth International AAAI Conference on Weblogs and Social Media. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/viewPaper/8118

Ito, T.A., Larsen, J.T., & Cacioppo, J.T. (1998). Negative information weighs more heavily on the brain: The negativity bias in evaluative categorizations. Journal of Personality and Social Psychology, 75(4), 887–900.

Isreal, J. B., Wickens, C. D., Chesney, G. L., & Donchin, E. (1980). The event-related brain potential as an index of display-monitoring workload. Human Factors, 22(2), 211–224. https://doi.org/10.1177/001872088002200210

Jas, M., Engemann, D. A., Bekhti, Y., Raimondo, F., & Gramfort, A. (2017).
Autoreject: Automated artifact rejection for MEG and EEG data. NeuroImage, 159, 417–429.

Kowaleski, J. (2019). BlueMuse. https://github.com/kowalej/BlueMuse

Keasson, A. (2019). EEG Notebooks. https://github.com/NeuroTechX/eeg-notebooks

Kissler, J., Herbert, C., Peyk, P., & Junghofer, M. (2007). Buzzwords: Early cortical responses to emotional words during reading. Psychological Science, 18(6), 475–480.

Krigolson, O. E., Williams, C.C., Norton, A., Hassall, C.D., & Colino, F.L. (2017). Choosing MUSE: Validation of a low-cost, portable EEG system for ERP research. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2017.00109

Kutas, M., McCarthy, G., & Donchin, E. (1977). Augmenting mental chronometry: The P300 as a measure of stimulus evaluation time. Science, 197(4305), 792–795.

Lalor, E. C., Pearlmutter, B. A., Reilly, R. B., McDarby, G., & Foxe, J. J. (2006). The VESPA: A method for the rapid estimation of a visual evoked potential. NeuroImage, 32(4), 1549–1561.

Lang, P. J. (1979). A bio-informational theory of emotional imagery. Psychophysiology, 16(6), 495–512.

Lang, P.J., Greenwald, M.K., Bradley, M.M., & Hamm, A.O. (1993). Looking at pictures: Affective, facial, visceral, and behavioral reactions. Psychophysiology, 30, 261–273.

Lang, P.J., Bradley, M.M., & Cuthbert, B.N. (2008). International affective picture system (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8. University of Florida, Gainesville, FL.

Lang, A. (2009). The limited capacity model of motivated mediated message processing. The SAGE Handbook of Media Processes and Effects, 193–204.

Luck, S. (2005). An Introduction to the Event-Related Potential Technique. MIT Press.

Meshi, D., Tamir, D. I., & Heekeren, H. R. (2015). The emerging neuroscience of social media. Trends in Cognitive Sciences, 19(12), 771–782.

Messaris, P. (1997). Visual Persuasion: The Role of Images in Advertising. SAGE.

Nieuwenhuis, S., Aston-Jones, G., & Cohen, J. D. (2005).
Decision making, the P3, and the locus coeruleus-norepinephrine system. Psychological Bulletin, 131(4), 510–532.

Nieuwenhuis, S., De Geus, E. J., & Aston-Jones, G. (2011). The anatomical and functional relationship between the P3 and autonomic components of the orienting response. Psychophysiology, 48(2), 162–175.

Nosek, B. A., Hawkins, C. B., & Frazier, R. S. (2011). Implicit social cognition: From measures to mechanisms. Trends in Cognitive Sciences, 15(4), 152–159.

Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. The American Psychologist, 17(11), 776.

Peirce, J.W., & MacAskill, M. (2018). Building Experiments in PsychoPy. SAGE.

Peirce, J. W., Gray, J. R., Simpson, S., MacAskill, M. R., Höchenberger, R., Sogo, H., Kastman, E., & Lindeløv, J. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods. https://doi.org/10.3758/s13428-018-01193-y

Poldrack, R. A., Baker, C. I., Durnez, J., Gorgolewski, K. J., Matthews, P. M., Munafò, M. R., Nichols, T. E., Poline, J.-B., Vul, E., & Yarkoni, T. (2017). Scanning the horizon: Towards transparent and reproducible neuroimaging research. Nature Reviews Neuroscience, 18(2), 115–126.

Reeves, B., & Thorson, E. (1986). Watching television: Experiments on the viewing process. Communication Research, 13(3), 343–361.

Rhodes, N., & Ewoldsen, D. R. (2013). Outcomes of persuasion: Behavioral, cognitive, and social. The SAGE Handbook of Persuasion: Developments in Theory and Practice, 53–69.

Russell, J.A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161–1178.

Scholz, C., Jovanova, M., Baek, E. C., & Falk, E. B. (2019). Media content sharing as a value-based decision. Current Opinion in Psychology. https://doi.org/10.1016/j.copsyc.2019.08.004

Schmälzle, R., & Meshi, D. (2020).
Communication neuroscience: Theory, methodology and experimental approaches. Communication Methods and Measures, 1–20. https://doi.org/10.1080/19312458.2019.1708283

Schmaelzle, R., & Grall, C. (n.d.). Mediated messages and synchronized brains. In R. Weber & K. Floyd (Eds.), Handbook of Communication and Biology.

Schupp, H.T., Junghofer, M., Weike, A.I., & Hamm, A.O. (2004). The selective processing of briefly presented affective pictures: An ERP analysis. Psychophysiology, 41, 441–449.

Strack, F., & Deutsch, R. (2004). Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review, 8(3), 220–247.

Stróżak, P., & Francuz, P. (2017). Event-related potential correlates of attention to mediated message processing. Media Psychology, 20(2), 291–316.

Todorov, A. (2017). Face Value: The Irresistible Influence of First Impressions. Princeton University Press.

Tong, L. C., Acikalin, M. Y., Genevsky, A., Shiv, B., & Knutson, B. (2020). Brain activity forecasts video engagement in an internet attention market. Proceedings of the National Academy of Sciences of the United States of America, 117(12), 6936–6941. https://doi.org/10.1073/pnas.1905178117

Torkildson, A. (2019). Twitter vs. Instagram: Which platform is better for branding? https://socialmediaexplorer.com/digital-marketing/twitter-vs-instagram-which-platform-is-better-for-branding/

Tylén, K., Fusaroli, R., Rojo, S., Heimann, K., Fay, N., Johannsen, N.N., Riede, F., & Lombard, M. (2020). The evolution of early symbolic behavior in Homo sapiens. Proceedings of the National Academy of Sciences of the United States of America, 117, 4578–4584.

Van Bavel, J. J., Mende-Siedlecki, P., Brady, W. J., & Reinero, D. A. (2016). Contextual sensitivity in scientific reproducibility. Proceedings of the National Academy of Sciences of the United States of America, 113(23), 6454–6459.

Van Berkum, J. J.
A., Holleman, B., Nieuwland, M., Otten, M., & Murre, J. (2009). Right or wrong? The brain’s fast response to morally objectionable statements. Psychological Science, 20(9), 1092–1099.

Waterloo, S.F., Baumgartner, S.E., Peter, J., & Valkenburg, P.M. (2018). Norms of online expressions of emotion: Comparing Facebook, Twitter, Instagram, and WhatsApp. New Media & Society, 20, 1813–1831.

Weber, R., Eden, A., Huskey, R., Mangus, J. M., & Falk, E. (2015). Bridging media psychology and cognitive neuroscience. Journal of Media Psychology, 27(3), 146–156.

Werner, J.S., & Chalupa, L.M. (2014). The New Visual Neurosciences. MIT Press.

Weymar, M., Gerdes, A.B.M., Löw, A., Alpers, G.W., & Hamm, A.O. (2012). Specific fear modulates attentional selectivity during visual search: Electrophysiological insights from the N2pc. Psychophysiology, 50, 139–148.

Zebrowitz, L.A. (2011). Ecological and social approaches to face perception. In G. Rhodes, A. Calder, M. Johnson, & J.V. Haxby (Eds.), Oxford Handbook of Face Perception. Oxford University Press.