RELATIONSHIPS BETWEEN INHIBITORY CONTROL AND SEMANTIC PROCESSING IN DIFFICULT LISTENING ENVIRONMENTS

By

Stephanie L Nagy

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Communicative Sciences and Disorders – Master of Arts

2020

ABSTRACT

RELATIONSHIPS BETWEEN INHIBITORY CONTROL AND SEMANTIC PROCESSING IN DIFFICULT LISTENING ENVIRONMENTS

By

Stephanie L Nagy

Listening in challenging environments is more difficult for some individuals than others and may require individuals to devote more cognitive resources to listening, also known as greater listening effort. Difficulty with listening effort can negatively impact an individual's quality of life and reduce participation in enjoyable situations. Individuals may be affected differently by the environment, based on their cognitive, attentional, and linguistic skills. Recent findings suggest increased cognitive resources are allocated for semantic processing as listening difficulty increases. The current study evaluated relationships between individual variability in cognitive proficiencies (inhibitory control, nonverbal IQ, and language abilities) and semantic processing in challenging listening environments. Thirty-nine adults with typical hearing and language skills completed a battery of language and cognitive assessments. Event-related potentials (ERPs) were acquired during a speech-in-noise task with varying difficulty levels. Results revealed semantic processing in difficult listening environments was uniquely impacted by an individual's inhibitory control, expressive language, and receptive language skills. Together, these findings suggest that the use of cognitive resources to support semantic processing in challenging listening environments varies as a function of cognitive strengths and weaknesses.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
Introduction
Listening Effort
Inhibitory Control
Neural Processes Underlying Listening Effort
Neural Processes Underlying Semantics
Effects of Individual Variability in Cognitive Skills on Listening Effort
Gaps in the Literature
The Current Study
Method
Participants
Background Assessments
ERP Stimuli
ERP Paradigm
Procedure
EEG Data Analyses
Statistical Analyses
Results
Behavioral Performance
Performance Accuracy and Response Times
N400 Mean Amplitude
Stroop Performance
Nonverbal IQ
Overall Language
Expressive Language
Receptive Language
Discussion
Current Study Summary
Impacts of Listening Conditions on Individual Task Performance
Inhibition and Semantic Processing in Noise
Nonverbal IQ and Semantic Processing in Noise
Expressive and Receptive Language Skills and Semantic Processing in Noise
Limitations of the Current Study and Future Implications
Conclusion
APPENDICES
APPENDIX A
APPENDIX B
REFERENCES

LIST OF TABLES

Table 1: Median (range) for all participants and mean (SE) standard scores for participants with Higher and Lower performance on Stroop: Color and Word Test, Test of Nonverbal Intelligence – Fourth Edition (TONI-4), and Test of Adolescent and Adult Language – Third Edition (TOAL-3)

Table 2: Mean (SE) Accuracy (%) and Response Times for final word identification for High and Low Cloze probability final words across all participants

Table 3: Higher and Lower Performance Groups based on Stroop, TONI, and TOAL performance across all participants

LIST OF FIGURES

Figure 1: Illustration of results from Kutas & Hillyard (1984)

Figure 2: Illustration of results from Kemp et al. (2019)

Figure 3: Illustration of the neural multifunctionality model (Cahana-Amitay & Albert, 2014, 2015; Miyake et al., 2000)

Figure 4: The sequence of events during a single trial in the experimental ERP paradigm

Figure 5: Performance Accuracy for High and Low Cloze probability words for the Long, Middle, and Short response time deadlines (RTDs)

Figure 6: Grand average event-related brain potential (ERP) waveforms for the High (black) and Low (red) Cloze probability sentences for the Long response time deadline (RTD) condition

Figure 7: Grand average event-related brain potential (ERP) waveforms for the High (black) and Low (red) Cloze probability sentences for the Middle response time deadline (RTD) condition

Figure 8: Grand average event-related brain potential (ERP) waveforms for the High (black) and Low (red) Cloze probability sentences for the Short response time deadline (RTD) condition

Figure 9: N400 mean amplitudes elicited by High Cloze and Low Cloze probability final words across each of the response time deadlines for individuals with Higher and Lower Stroop Performance

Figure 10: N400 mean amplitudes elicited by High Cloze and Low Cloze probability final words across Long, Middle and Short RTD conditions for individuals with Higher and Lower Overall Language Performance
Figure 11: N400 mean amplitudes elicited by High Cloze and Low Cloze probability final words across Long, Middle and Short RTD conditions for individuals with Higher and Lower Expressive Language Performance

Figure 12: N400 mean amplitudes elicited by High Cloze and Low Cloze probability final words across Long, Middle and Short RTD conditions for individuals with Higher and Lower Receptive Language Performance

Introduction

Listening effort (LE) has been defined as "the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a [listening task]" (Pichora-Fuller et al., 2016, p. 5). Similarly, McGarrigle et al. (2014) stated, "LE is the mental exertion required to attend to, and understand, an auditory message" (p. 4). Research examining LE suggests complex listening environments require individuals to devote more cognitive resources to understanding speech than quiet environments do; in other words, they require greater LE (Desjardins & Doherty, 2013; Fraser et al., 2010; Picou et al., 2016; Sarampalis et al., 2009; Rönnberg et al., 2008).

Individuals across society, with and without hearing loss, are frequently exposed to noisy environments, such as restaurants, jobs, concerts, parties, and other social gatherings. Compensation with hearing aids or cochlear implants aims to improve speech recognition in noisy environments (Murphy et al., 2000; Picou et al., 2011, 2013; Stone & Moore, 2008). However, these devices are limited in that they may increase an individual's sensitivity to their surroundings and amplify background noise (Stone & Moore, 2004, 2008). Thus, hearing aids or cochlear implants may end up increasing LE and hindering speech recognition (Gustafsson & Arlinger, 1994; Hällgren et al., 2005; Stone & Moore, 2008; Winn et al., 2016). Importantly, difficulty with LE can negatively impact an individual's quality of life, social interactions with family and friends, and participation in enjoyable situations (Chisolm et al., 2007; Hetu et al., 1988; Mulrow et al., 1990).

Much of the literature on LE has focused on environmental factors that make listening easier or harder, including fluctuating noise levels and directionality of incoming noise (Desjardins & Doherty, 2013; McGarrigle et al., 2014; Picou et al., 2013; Zekveld et al., 2011). Although many studies have looked at individual factors that influence LE (McGarrigle et al., 2014; Picou et al., 2011, 2013), understanding of the neural mechanisms underlying LE is still limited. Additionally, individuals may be affected differently by the environment, based on their cognitive, attentional, or linguistic abilities. A recent study revealed that changes in task difficulty can affect semantic processing, an important process in understanding speech (Kemp et al., 2019). However, it is not clear how individual variability in cognitive or linguistic skills may impact semantic processing in difficult listening situations. The current project aims to fill this gap by investigating how differences in inhibitory control, an important skill for understanding speech in noisy environments, may impact semantic processing in noise.
Listening Effort

Although most of the literature on LE has focused on listening in noise in individuals with hearing loss, studies have also assessed LE in individuals without hearing loss. A variety of methodologies have been used, including self-report, behavioral measures, physiological measures, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG) paradigms (Desjardins & Doherty, 2013; McGarrigle et al., 2014; Picou et al., 2011, 2013; Rudner, 2016).

Tasks measuring LE with subjective rating scales or surveys often ask the subject to rate the effort needed to hear all words/sentences presented, using a 5-10-point rating scale (Desjardins & Doherty, 2013; Larsby et al., 2005; Rudner, 2016; Picou et al., 2013). Subjective rating scales are beneficial for assessing listening effort in individuals of various ages, ranging from young children to adults, in subjects with and without hearing loss, and across a wide variety of noise levels. Overall, these studies suggest increased task difficulty, often associated with decreased performance on a lexical, visual, or auditory processing task, is linked to greater reported LE. However, there is great variability when using subjective rating scales, as different individuals may have different definitions of what greater listening effort feels like. Rating scales are also not designed to measure individual differences in physiological changes, including fatigue, motivation, and allocation of cognitive resources, in subjects of various age ranges and with various hearing abilities.

Other studies have looked at relationships between listening effort and physiological markers, such as skin conductance levels (Mackersie & Calderon-Moultrie, 2016; Mackersie & Cones, 2011; Pichora-Fuller et al., 2016), heart rate variability (HRV), and pupil dilation (Kuchinsky et al., 2016; Zekveld et al., 2010), to provide more objective evidence regarding LE (McGarrigle et al., 2014). Research measuring skin conductance suggests that when subjects are listening in the most difficult conditions, skin conductance levels increase (Mackersie & Calderon-Moultrie, 2016; Mackersie & Cones, 2011). HRV has also been examined during difficult listening conditions: as the listening environment becomes more taxing, parasympathetic activity in the heart, the flexible mechanism that is active during resting state, decreases. These studies also reported that more difficult listening environments, and potential changes in fatigue, arousal, and motivation, affect multiple physiological measures, specifically HRV and skin conductance (Mackersie & Calderon-Moultrie, 2016; Mackersie & Cones, 2011; Pichora-Fuller et al., 2016). Additionally, pupillometry has been linked to listening effort (Kuchinsky et al., 2016; Zekveld et al., 2010); specifically, pupil dilation increased as listening environments became more difficult.

Although these studies provide evidence that physiological measures, specifically skin conductance (Mackersie & Calderon-Moultrie, 2016; Mackersie & Cones, 2011; Pichora-Fuller et al., 2016), heart rate variability (HRV), and pupil dilation (Kuchinsky et al., 2016; Zekveld et al., 2010), are significantly impacted by difficult listening environments, these different measurement techniques are based upon various assumptions about the underlying mechanisms of LE (McGarrigle et al., 2014). In other words, these physiological measurements are not able to provide information about the neural mechanisms underlying LE.
Inhibitory Control

Inhibitory control is the ability to ignore irrelevant thoughts or actions in order to focus attention on relevant information while completing a task (Rothbart & Posner, 1985; Stoltzfus et al., 1993). Inhibitory control also involves suppressing a dominant response in order to activate a less frequent response (Dong et al., 2009; Kok et al., 1986; Kok et al., 1999). Previous research has suggested that stronger performance on inhibition tasks is related to better language comprehension and word retrieval abilities (Banks et al., 2015; Janse & Adank, 2012; Sommers & Danielson, 1999). Other recent findings confirm this result, suggesting stronger inhibitory/attention skills, as measured by other tasks, are associated with stronger speech perception and lexical discrimination abilities (Huyck & Johnsrude, 2012; Kim & Phillips, 2014; Shao et al., 2014; Sommers & Danielson, 1999; Taler et al., 2010; Ou et al., 2015).

Inhibition skills are also important for listening in noise. For example, while talking with a friend at a loud party, one must inhibit other people's voices, music, and other background sounds to attend to the conversation with the friend (Eriksen & Eriksen, 1974; Kok, 1999). Inhibitory control may be related to semantic processing in noise, but very few studies have assessed the ways individual variability in inhibitory control may relate to neural processes underlying semantics in noise when task demands increase. For instance, in order to attend to a conversation in noise, an individual needs to engage cognitive resources for inhibition because they need to suppress distractor noises or incorrect responses to focus closely on the conversation (Banks et al., 2015; Cahana-Amitay et al., 2016; Constantinidou et al., 2014; Marsh et al., 2015; Rönnberg et al., 2008; Sommers & Danielson, 1999; Wagner et al., 2001).

Some research has examined the relationship between inhibitory control, via the Stroop task (Golden et al., 2003), and perception of unfamiliar accented speech in noise. In a study by Taler et al. (2010), healthy older individuals were instructed to listen to sentences in three-talker babble, at SNRs corresponding to 70% or 95% accuracy, that were either High or Low Cloze probability with target words that varied in frequency (high vs. low) and neighborhood density (high vs. low). Their findings aligned with other studies and indicated that individuals with weaker inhibitory skills, indexed by lower performance on the Stroop task, performed worse on word recall for more difficult (high neighborhood density) words during cognitively taxing listening conditions (Low Cloze probability; noise). Similarly, Janse and Adank (2012) found a significant relationship between speech perception, using a phoneme identification task in multi-talker speech, and inhibitory control. Cahana-Amitay and Albert (2016) found similar results, suggesting inhibitory skills may play a role in helping select the appropriate target word among competitors when perceiving speech in background noise. In a study by Banks et al. (2015), participants were instructed to listen to sentences presented in unfamiliar English regional accents in background noise and recall as much of the sentence as possible, without the speaker's accent. Sentences were presented in background noise, in which the SNR was altered throughout the task to increase listening difficulty.
Findings revealed that participants with stronger inhibitory skills were more accurate and faster at perceiving unfamiliar, accented speech (Banks et al., 2015). However, Banks et al. (2015) noted that no correlations were observed between individual inhibitory skills and the ability to listen in noise. The authors noted that this may be due to the small number of task stimuli, as only fifteen unaccented sentences in noise were presented (Banks et al., 2015), compared to other studies that observed a significant correlation between these skills (Deacon et al., 2004; Debruille, 2007; Janse & Adank, 2012; Kim & Phillips, 2014; Ou et al., 2015; Taler et al., 2010). Overall, these findings indicate that individual inhibitory skills may impact language comprehension abilities in noise (Banks et al., 2015; Cahana-Amitay & Albert, 2016; Deacon et al., 2004; Debruille, 2007; Janse & Adank, 2012; Sommers & Danielson, 1999; Taler et al., 2010).

Other studies examining relationships between inhibitory control and speech perception in normal listening environments found similar results (Banks et al., 2015; Deacon et al., 2004; Debruille, 2007; Janse & Adank, 2012; Kim & Phillips, 2014; Ou et al., 2015; Sommers & Danielson, 1999; Taler et al., 2010). Studies using other variations of the Stroop task, such as the Day/Night task (Berlin & Bohlin, 2002; Montgomery & Koeltzow, 2010), have found that children with better inhibitory processing skills have stronger language comprehension abilities (Kim & Phillips, 2014). Ou et al. (2015) assessed the relationship between inhibitory skills, as measured by the cancellation subtest of the WAIS-IV (Wechsler, 2010), and speech perception, specifically a tone perception task. Consistent with other findings, their results suggest inhibitory control plays a role in speech perception, as faster tone discrimination was associated with stronger inhibitory skills.

Neural Processes Underlying Listening Effort

One way of measuring listening effort is to look at changes in brain activity in different listening conditions. EEG provides a non-invasive measurement of the electrical activity of populations of neurons firing in synchrony. EEG paradigms allow for high temporal resolution, on the order of milliseconds. EEG can be time-locked to a specific target to measure event-related potentials (ERPs), which reflect the brain's response to a given stimulus. ERPs allow for examination of changes (e.g., magnitude, timing) and patterns in neural processes in response to target stimuli (Nunez, 1995; Luck, 2014). ERPs can be used to examine neural patterns that exist while processing information over time, such as speech in conversation. ERPs have previously been used to study neural processes for LE (Bertoli & Bodmer, 2014; Desjardins & Doherty, 2013; Kemp et al., 2019; McGarrigle et al., 2014; Picou et al., 2011, 2013).
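To make the time-locking idea concrete, the sketch below shows how an ERP is obtained by averaging stimulus-locked EEG epochs. It is an illustrative Python/NumPy sketch under assumed conventions (array shapes, sampling-rate argument, epoch window), not the processing pipeline used in this thesis.

```python
import numpy as np

def erp_average(eeg, onsets, sfreq, tmin=-0.1, tmax=0.8):
    """Average EEG epochs time-locked to stimulus onsets.

    eeg: (n_channels, n_samples) continuous recording
    onsets: sample indices of stimulus onsets
    tmin, tmax: epoch window in seconds relative to onset
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for s in onsets:
        if s + start >= 0 and s + stop <= eeg.shape[1]:
            epoch = eeg[:, s + start:s + stop]
            # Subtract the mean of the pre-stimulus interval
            baseline = epoch[:, :-start].mean(axis=1, keepdims=True)
            epochs.append(epoch - baseline)
    # Averaging across trials attenuates activity that is not
    # time-locked to the stimulus, leaving the ERP
    return np.mean(epochs, axis=0)
```

Averaging over many trials is what makes relatively small components, such as the N400 discussed below, visible against ongoing background EEG.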
A common way to study neural processes underlying LE is to use dual-task paradigms. Dual-task paradigms typically ask participants to attend to a primary task while simultaneously completing a secondary task. Dual-task ERP paradigms have been used to assess neural mechanisms underlying listening in noise (McGarrigle et al., 2014; Picou et al., 2013). In these tasks, participants are frequently asked to simultaneously complete a primary listening task and a secondary visual processing or memory task (McGarrigle et al., 2014; Desjardins & Doherty, 2013; Fraser et al., 2010; Hornsby, 2013; Picou et al., 2011, 2013). Findings from these studies suggest that performance on the secondary task is often worse in more difficult listening environments, likely because people have to work harder to achieve the same performance and devote more cognitive resources to listening during noisier conditions. Fewer cognitive resources may then be available for the secondary task as a result of more difficult listening, visual input, or fatigue, and thus reduced task performance has been reported compared to easier listening conditions (Bertoli & Bodmer, 2014). Interestingly, the primary listening task may also be affected: when the tasks or listening conditions become more difficult, the secondary task may impact performance on the primary listening task. The inclusion of a secondary task may impose an increased cognitive demand on an individual and require greater engagement of cognitive resources. As a result of increased cognitive engagement or fatigue, other important cognitive functions, such as attention, inhibition, and working memory, may be negatively affected.

A limitation of dual-task paradigms is that they measure LE indirectly through secondary task performance (Anderson Gosselin & Gagne, 2011; Desjardins & Doherty, 2013; Hornsby, 2013; Picou et al., 2011, 2013). Dual-task paradigms therefore make it difficult to directly determine the effects of LE on cognitive performance, because the secondary task may increase cognitive load, reduce available attention or motivation, cause mental fatigue, and thereby confound measures of LE during each task. Instead of independently assessing the effects of LE and visual memory on cognitive abilities, LE is measured indirectly through secondary visual processing task performance (Anderson Gosselin & Gagne, 2011; Desjardins & Doherty, 2013; Hornsby, 2013; Picou et al., 2013; Sarampalis et al., 2009).

Other research using single-task ERP paradigms, which require completing only one task at a time, has also examined the impact of cognitive workload on LE in adults with normal hearing (Allison & Polich, 2008; Cahana-Amitay & Albert, 2014, 2015; Cahana-Amitay et al., 2016; Kemp et al., 2019). Single-task ERP paradigms, such as sentence processing in noise tasks or sentence completion in noise tasks, may provide more direct measurement of the effects of listening effort on cognitive performance compared to dual-task paradigms, as they directly measure changes in neural processes in challenging listening conditions without the requirement of a second task (Allison & Polich, 2008; Kemp et al., 2019). For instance, single-task ERP paradigms may be less distracting to the participant providing responses, as participants only need to attend to and complete one task. Recent research using single-task paradigms provides evidence that increased cognitive resources are devoted to a listening task during more difficult listening conditions (Kemp et al., 2019).

Neural Processes Underlying Semantics

The current project focuses on neural processes underlying semantics as indexed by the endogenous event-related brain potential (ERP) component, the N400, which is sensitive to cognitive analyses of the stimulus, such as probability, location, and quality. Research consistently shows the N400 reflects ease of lexical access and/or integration, with larger amplitudes generally reflecting more effort required for lexical integration (Federmeier et al., 2007; Kutas & Federmeier, 2011; Lau et al., 2008).
The N400 component has been elicited by 9 verbal stimuli (spoken, written, or signed words) and nonverbal stimuli (gestures, faces, movies, sounds, drawings; Holcomb & Neville, 1990; Van Petten & Rheinfelder, 1995). It is a negative component that typically peaks around 400 ms post-stimulus onset (Kutas & Federmeier, 2000). Although there is a rich body of literature focused on the N400, there is ongoing debate about the theoretical basis of the N400, or what neural processes it represents. One theory, the pre- lexical view, suggests the N400 reflects pre-lexical processing of a stimulus and phonological analysis (Deacon et al., 2004; Kutas & Federmeier, 2011), in which one accesses pre-existing features of the stimulus to understand its meaning. This theory is supported by findings where the N400 is elicited during processing of non-words. Another theory, the post-lexical view, suggests the N400 reflects post-lexical access of a word, in which the N400 reflects integration of meaning of the stimulus with meaning-level representations of similar words held in memory (Federmeier et al., 2007; Kutas & Federmeier, 2000, 2011; Lau et al., 2008). A third theory, the inhibition hypothesis, suggests the N400 indexes an inhibition process, in which incompatible, competing meanings of a stimulus are inhibited and the other relevant meanings are activated (i.e., only relevant meanings of the stimulus reach conscious awareness; Debruille et al., 2008; Debruille, 2007; Howard et al., 2006; Shao et al., 2014). A fourth theory, the bi-modal interactive-activation model, includes pieces of each of the above theories. This theory suggests the early phase of the N400 incorporates components of pre-lexical activation, while later phases are hypothesized to reflect post-lexical integration and/or post-lexical inhibition (Diependaele et al., 2010; Grainger & Holcomb 2009; Kutas & Federmeier, 2011). Although resolving this theoretical debate is beyond the scope of the current study, there is abundant evidence suggesting the N400 multimodal semantic access, 10 involving automatic and controlled processing, and reflects both pre-lexical and post-lexical integration. Importantly, there is limited understanding of how semantic processing is affected by other cognitive abilities, such as receptive language skills or inhibitory control (Debruille, 2007; Deacon et al., 2004; Federmeier et al., 2007). Many studies have found that N400 amplitudes elicited by perception of incongruent final words in sentences were larger (greater negativity) than N400 amplitudes elicited by… congruent final words (Benau et al., 2011; Jamison et al., 2016; Kutas & Hillyard, 1984; Wang et al., 2012). This result is illustrated in Figure 1, which represents results from Kutas & Hillyard’s (1984) classic N400 experiment. Benau et al. (2011) examined semantic processing in children and adults using a visual sentence presentation task which included either congruent, moderately incongruent, or strongly incongruent final words. Participants were asked to determine if the sentence made sense or not. Findings revealed that strongly incongruent final words elicited the largest N400 amplitudes in adults, compared to moderately incongruent or congruent final words, suggesting a graded N400 effect across frontal (FZ and FCZ), central (CZ), and centroparietal (CPZ and PZ) electrode sites. 
Consistent with other research, the graded N400 effect was exhibited at electrode sites reflecting the dynamic cognitive system important to semantic processing (i.e., near the middle, superior, and inferior temporal regions, the temporal-parietal junction, and dorsolateral prefrontal regions) (Benau et al., 2011; Kutas & Federmeier, 2011). This graded N400 effect supports theories that the N400 reflects pre-lexical activation of the final, incongruent word. Interestingly, N400 amplitudes in children elicited by both strongly and moderately incongruent final words were larger than those elicited by congruent sentences. The authors hypothesized that children may be engaging inhibitory processes during incongruent speech perception, reflected by larger N400 amplitudes. These findings support the bi-modal interactive-activation model proposed by Holcomb & Grainger (2007). Furthermore, related to the scope of the current study, these findings suggest that cognitive effort or engagement for lexical integration may be increased when processing low context/incongruent words.

Figure 1: Illustration of results from Kutas & Hillyard (1984). N400 mean amplitudes elicited by High, Medium, and Low Cloze probability final words. Negative values are plotted upward. N400 amplitudes increased as Cloze probability decreased, with the largest N400 amplitudes for high context sentences with Low Cloze probability final words (Kutas & Hillyard, 1984).

Previous studies have found that specific aspects of language comprehension, such as word recall and semantic processing abilities, as well as attentional processing, are negatively affected by increased listening difficulty (Erlbeck et al., 2014; Jamison et al., 2016; Marsh et al., 2015; Uslar et al., 2013). For example, a recent study assessed LE in adults without hearing loss (Marsh et al., 2015). Participants were asked to listen to and recall spoken words from multiple categories. Half of the word lists were presented in noise, at a signal-to-noise ratio (SNR) of 5 decibels, while the other half were presented in quiet listening environments. Results revealed that when listening in noise, participants recalled fewer words, displayed increased difficulty identifying categories, falsely recalled spoken critical words, and clustered semantically related items. The authors concluded that listening in noise negatively impacts higher-order semantic processing (Marsh et al., 2015). In other words, in order to comprehend the stimulus, representations of the target word in noise need to be accessed in long-term memory stores, which requires more cognitive resources devoted to working memory (Banks et al., 2015; Huyck & Johnsrude, 2012; Marsh et al., 2015; Rönnberg et al., 2008; Uslar et al., 2013). Together with previous findings, this research suggests stronger working memory, attentional, and inhibition processes, as well as perceptual learning over time, contribute to more effective processing of speech in noise. However, these studies did not directly relate inhibition and semantic processing skills.

Erlbeck et al. (2014) measured LE with a different approach, examining relationships between attention and semantic processing using varied task instructions (focused; passive; ignore) during sentence processing, word priming, and oddball paradigms. During the sentence and word priming paradigms, individuals in the focused condition were required to identify incongruent and congruent stimuli using two keys as quickly and accurately as possible.
For the oddball paradigm, various harmonic sounds were presented, with standard stimuli that occurred frequently and deviant stimuli that occurred infrequently. Individuals were instructed to press a button when they heard a deviant sound. No response was required during the passive condition. In the ignore condition, individuals watched a silent movie and were instructed to respond to a given scene every time it occurred, while ignoring auditory stimuli. Individuals indicated the level of effort required to complete the task using a subjective rating scale. The largest N400 amplitudes were elicited in the focused condition, with smaller amplitudes elicited in the passive condition. No N400 response was elicited in the ignore condition. These findings suggest that when more attentional resources are engaged in sentence processing, there is greater effort engaged in lexical access (Erlbeck et al., 2014; Jamison et al., 2016; Obleser & Kotz, 2011).

A recent study assessed the impact of listening difficulty on the neural processes underlying semantics in adults with typical hearing (Kemp et al., 2019). This study used a single-task paradigm to assess semantic processing in noise when task demands varied based on response time pressure and sentence context. In this paradigm, sentences were presented in four-talker babble, in which background speech from four unfamiliar talkers was played simultaneously with the sentences. After hearing each sentence in four-talker babble, participants were asked to identify the final word, which was either a High or Low Cloze probability word, given four choice options that were visually presented for varying response time deadlines (RTDs). Task difficulty was manipulated in two dimensions: 1) final word Cloze probability (High or Low), and 2) the amount of time response options were presented on the screen (400, 700, or 1000 ms). The largest N400 amplitudes were elicited by the most difficult listening conditions: low context sentences with high response time pressure, when visual response choices were visible for only a short period of time. N400 amplitudes were smallest for the easiest condition, high context sentences with visual response choices displayed for a long period of time, as illustrated in Figure 2. This study revealed that more cognitive resources were engaged in semantic processing as listening difficulty increased, and it was the first to demonstrate that neural processes underlying semantics changed as a function of listening effort in a single-task paradigm. However, there is a high degree of individual variability in the effects of listening effort on neural processes for semantics. To date, it is unclear which cognitive factors may influence semantic processing in difficult listening conditions.

Figure 2: Illustration of results from Kemp et al. (2019). N400 mean amplitudes elicited by the final word in the High and Low Cloze probability sentences for the Long (closed circle), Middle (open circle), and Short (triangle) response time deadlines (RTDs) in a sentence in noise task. Negative is plotted upward. For the Low Cloze conditions, the largest N400 amplitudes were elicited by the Middle and Short RTDs (more difficult listening conditions) and the smallest N400 amplitudes were elicited by the Long RTD. For the High Cloze conditions, there were no significant differences between RTDs (Kemp et al., 2019).
Effects of Individual Variability in Cognitive Skills on Listening Effort

Semantic processing has been shown to be affected by various aspects of cognition, including attention, cognitive load, working memory, and inhibitory control (Cahana-Amitay et al., 2016; Janse & Adank, 2012; Miyake et al., 2000; Ou et al., 2015; Taler et al., 2010; Uslar et al., 2013). To better understand relationships between language processing and executive functions, Miyake et al. (2000) developed a model suggesting that various executive functions share a degree of commonality and diversity in the ways they interact with language skills (Wagner et al., 2001). Specifically, inhibitory control, attention shifting, and working memory abilities make up a cognitive control system that may be recruited during lexical processing and account for individual differences in semantic retrieval, language comprehension, and speech production tasks (Badre & Wagner, 2007; Cahana-Amitay & Albert, 2014; Friedman & Miyake, 2004; Miyake et al., 2000; Wagner et al., 2001).

Based on Miyake et al.'s (2000) model of executive function, Cahana-Amitay and Albert (2014, 2015) proposed the neural multifunctionality model, which suggests that neural processes underlying semantic processing have a dynamic interaction with neural processes underlying executive functions, specifically inhibitory control (Figure 3). Moreover, during semantic processing tasks, a cognitive control system made up of inhibition, attention shifting, and working memory skills may be recruited to retrieve information from long-term memory stores to help define the meaning of a word and to inhibit competitors, such as incorrect distractor choices or background noise (Cahana-Amitay & Albert, 2014, 2015; Higby et al., 2019; Miyake et al., 2000; Wagner et al., 2001). In other words, the neural multifunctionality model suggests that the semantic processes engaged to understand a message depend on complex interactions between language abilities and executive functions, such as inhibitory control (Cahana-Amitay & Albert, 2014, 2015; Higby et al., 2019). However, it is still unclear how and to what extent inhibitory, attention, and working memory skills contribute to neural processes underlying lexical access and semantic retrieval in younger adults.

Figure 3: Illustration of the neural multifunctionality model (Cahana-Amitay & Albert, 2014, 2015; Miyake et al., 2000). The neural multifunctionality model suggests that individuals need to recruit a complex cognitive control system, made up of cognitive functions including attention, inhibition, and working memory, together with language skills such as semantic processing, in order to understand speech.

Few studies to date have used ERPs to assess relationships between inhibitory control and speech perception. One study examined the role of inhibition in naming pictures of objects and actions (Shao et al., 2014). This study looked at amplitudes of the N200 component, a component that reflects non-selective and selective response inhibition (Dong et al., 2009; Kok et al., 1986; Kok et al., 1999; Shao et al., 2014; Silton et al., 2010). Findings revealed individuals use selective inhibition in the least probable naming condition, in which the pictured item has multiple common names (e.g., a picture of a young person may be called a baby, infant, toddler, child, or girl) and elicits multiple lexical concepts, compared to the most probable naming condition (e.g.,
a picture of a dog is almost always called a dog) (Shao et al., 2014). This result supports the inhibition hypothesis of lexical selection, suggesting it is a competitive process (Debruille, 2007; Howard et al., 2006; Shao et al., 2014).

It is important to understand factors that contribute to individual variability in listening effort in noisy conditions in adults with and without hearing loss. Adults without hearing loss are regularly exposed to challenging listening environments (Shargorodsky et al., 2010), and individuals may be affected differently by the environment based on their cognitive, attentional, and linguistic skills. Better understanding of individual variability in adults without hearing loss could inform understanding of listening effort in adults with hearing loss or cognitive deficits, helping to contribute to better assessments and interventions. Moreover, additional research is needed to examine individual variability in LE in order to understand why some people are more negatively affected by distractions and background noise than others and how listening environments may affect one's ability to socially interact with others.

One way to assess individual variability in listening effort is to look at relationships between cognitive skills, such as inhibition, and language processing, specifically semantic processing. Semantic processing is the process of encoding the message to make meaning of what was said (American Speech-Language-Hearing Association, 2019). Inhibitory control is important to LE because, in order to effectively attend to a conversation in noise, an individual must inhibit distractor noises in the background and focus closely on the target conversation. However, understanding of relationships between inhibitory control and semantic processing is limited (Banks et al., 2015; Huyck & Johnsrude, 2012; Marsh et al., 2015; Rönnberg et al., 2008; Uslar et al., 2013). Understanding how individual differences in inhibitory control may impact semantic processing abilities, a crucial component of understanding speech, may provide insight into why some individuals have more difficulty understanding.

Gaps in the Literature

To date, few studies have examined the effects of inhibitory control on speech perception in noise. Understanding of the ways individual variability in executive function abilities, such as inhibitory control, impacts language processing in noise, specifically semantic processing, is limited. Of the current studies examining these relationships, each one measures different aspects of language comprehension (accented speech; phoneme targeting; tone perception) in different ways, perceptually and behaviorally. Current models and findings suggest that executive function skills, like inhibitory control, are important for perceiving and recalling language (Banks et al., 2015; Cahana-Amitay et al., 2016; Janse & Adank, 2012; Miyake et al., 2000; Ou et al., 2015; Sommers & Danielson, 1999; Taler et al., 2010). These studies examined speech perception abilities using sentence recall tasks that measure accuracy, speed, or the number of words the participant repeated. Very few studies have assessed how neural processes underlying speech perception may be impacted by executive function skills in younger adults (Cahana-Amitay et al., 2016; Shao et al., 2014). Of these, almost no research exists examining the effects of inhibitory control on the N400, during quiet listening or in noise.
More research is needed to better understand the impact of inhibitory skills on neural processes underlying language, specifically the N400 indexing semantic processing.

The Current Study

The current study is designed to increase understanding of the ways in which individual differences, or variability, in inhibition skills may impact semantic processing in noisy environments. Therefore, the current study aims to answer the question, "What is the relationship between inhibition skills and neural processes underlying semantics in noisy listening environments in adults without hearing loss?" This study extends the previous work by Kemp and colleagues (2019). Specifically, adults without hearing loss completed a sentence processing in noise task. Participants were asked to identify High or Low Cloze probability final words in sentences presented in four-talker babble, at an SNR yielding 70% accuracy, in a four-alternative forced choice task in which the response options were presented for Long (1000 ms), Middle (700 ms), or Short (400 ms) response time deadlines (RTDs). All of the participants also completed a comprehensive battery of standardized language and cognitive assessments, including measures of inhibition, nonverbal intelligence, and receptive and expressive language. Evidence suggests nonverbal intelligence (Hampton Wray & Weber-Fox, 2013) and receptive and expressive language abilities (Newman et al., 2012; Weber-Fox et al., 2003) uniquely interact with semantic processing. Measures of nonverbal intelligence and receptive and expressive language skills were therefore included to understand whether they impact semantic processing similarly to or differently than inhibitory skills. ERPs elicited by the final word reflected semantic processing. The current study evaluated the ways in which sentence processing in noise was related to inhibitory skills, nonverbal intelligence, and receptive and expressive language abilities.

Based on previous studies of LE, semantic processing, and inhibition, we hypothesized that adults with better inhibitory skills would have more efficient semantic processing in challenging listening environments, indexed by smaller N400 amplitudes. We also predicted that adults with better inhibition skills would perform more accurately than individuals with weaker inhibition skills in the most difficult conditions.

Method

The current thesis project builds on a study that has been completed and published (Kemp et al., 2019).

Participants

Thirty-eight participants, aged 18-39 (mean age (SE): 22.83 (0.85) years), were recruited from the Michigan State University research recruitment website, SONA. Participants were right-handed, monolingual speakers of English, and had normal or corrected-to-normal vision. Twenty-four females and fourteen males were included. Individuals with a reported history of communication disorders or neurological diseases (attention-deficit/hyperactivity disorder, traumatic brain injury, epilepsy, concussion), or taking medication impacting neurological function, were excluded from this study. All participants completed a hearing screening, with tones presented at thresholds of 20 dB HL at frequencies between 500 and 8000 Hz, and scored within normal hearing levels at the time of the study (American National Standards Institute, 2010). Specific hearing thresholds for each participant were recorded. Participants completed a questionnaire to gather background information, including parental education level, professions, and interests.
The Hollingshead (1975) scale was used to measure each participant's education level. The scale is: 1 = less than seventh grade, 2 = junior high school, 3 = partial high school, 4 = high school graduate, 5 = partial college, 6 = standard college, 7 = graduate professional. Mean participant education level reflected partial college (Mean (SE): 5.36 (0.14), range 4-7), and mean maternal education level (Mean (SE): 5.44 (0.19), range 3-7) also reflected partial college. Study procedures were approved by the institutional review board at Michigan State University, and informed consent was obtained from all participants. All participants were paid for their participation in the current study.

Background Assessments

Participants completed a battery of standardized tasks to assess individual skills across multiple cognitive domains, specifically inhibitory control, nonverbal intelligence, and receptive and expressive language. All participants scored within the normal range on the executive function, nonverbal intelligence, and language assessments. The tasks administered are described below.

Inhibitory Control Task: The Stroop: Color and Word Test (Golden et al., 2003; Stroop, 1935), which focuses on the ability to inhibit one response and provide another using color and word naming, was administered to measure inhibitory control (Stroop). Individuals were first asked to read color words (word naming subtest), then asked to identify the color of each group of X's on the page (color naming). In the third task, individuals were presented with color words printed in a different color than the color name (e.g., the word "red" written and printed in blue) and asked to say the color of the ink, not the color word (color-word reading). This is the task that requires inhibition, because individuals must provide the less dominant response of naming the ink color. Raw scores on each subtest (color naming, word naming, and color-word reading) were calculated based on the number of correct responses to each set of stimuli. Using individuals' age and education level, predicted scores for each subtest were calculated and used to obtain residual scores by subtracting participants' raw scores from the predicted scores (Residual = Predicted - Raw). The calculated residual scores were converted to standard scores for each subtest. Inhibition scores (color-word interference) were calculated by subtracting the predicted color-word scores from the standard color-word scores. Consistent test-retest reliability has been demonstrated across various versions of the Stroop: Color and Word Test. Specifically, good test-retest reliability was observed for color (α = .89), word (α = .84), and color-word scores (α = .73) across a normalization sample of 450 high-school-aged participants.
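As a concrete restatement of the scoring steps just described, the sketch below computes the residual and interference scores. It is only an illustrative Python sketch: the predicted scores come from the test manual's age/education norms (not reproduced here), and `to_standard` is a hypothetical stand-in for the manual's residual-to-standard-score conversion table.

```python
def stroop_scores(raw, predicted, to_standard):
    """Illustrative scoring of Stroop subtests as described above.

    raw, predicted: dicts keyed by 'word', 'color', 'color_word'
    holding raw scores and the manual's age/education-predicted
    scores (assumed supplied, not computed here).
    to_standard: hypothetical stand-in for the manual's
    residual-to-standard-score conversion.
    """
    # Residual = Predicted - Raw, for each subtest
    residuals = {k: predicted[k] - raw[k] for k in raw}
    # Residuals are then converted to standard scores
    standard = {k: to_standard(res) for k, res in residuals.items()}
    # Interference: predicted color-word score subtracted from
    # the standard color-word score, per the description above
    interference = standard["color_word"] - predicted["color_word"]
    return standard, interference
```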
Nonverbal Intelligence Task: The Test of Nonverbal Intelligence – 4th Edition (Brown et al., 2010) was administered to measure nonverbal intelligence (TONI-4), specifically abstract reasoning and problem solving. The TONI-4 consists of sixty items, each consisting of a sequence of abstract figures with a missing figure in the sequence. Sequences vary based on shape, position, direction, rotation, contiguity, shading, size, and/or movement. Individuals were instructed to identify the missing figure in the sequence nonverbally, using gestures, pointing, nodding, or blinking. Correct responses were recorded as one point and incorrect responses as zero points, and these were used to calculate raw scores. Raw scores were converted to standard scores, percentiles, and age equivalents based on normative data. The TONI-4 was normed on 2,272 participants of various gender, ethnicity, race, residence, and cultural backgrounds from 32 states, representative of the U.S. population according to the U.S. Census Bureau. Internal consistency, test-retest reliability, alternate form reliability, and interrater agreement reliability estimates confirm that the TONI-4 has good reliability for screening, diagnostic, and research purposes (α = .96, σx̅ = 3). Content, construct, concurrent, and predictive validity estimates confirm that the TONI-4 is valid for the populations represented (Ritter et al., 2011).

Language Task: The Test of Adolescent and Adult Language – 3rd Edition (Hammill et al., 1994) measured receptive and expressive language abilities (TOAL-3). The TOAL-3 consists of eight subtests: Speaking/Grammar, Speaking/Vocab, Listening/Grammar, Listening/Vocab, Writing/Grammar, Writing/Vocab, Reading/Grammar, and Reading/Vocab. This study focused on Overall Language, Expressive Language, and Receptive Language, so only the Speaking/Grammar, Speaking/Vocab, Listening/Grammar, and Listening/Vocab subtests were administered. In the Speaking/Grammar subtest, the examiner presented sentences verbally and instructed the individual to repeat each sentence aloud. In the Speaking/Vocab task, individuals were presented with a target word and instructed to produce a meaningful sentence with the correct usage of the target word. In the Listening/Grammar subtest, participants were instructed to identify the two sentences with the same meaning out of three sentences presented aloud. In the Listening/Vocab task, the examiner presented a stimulus word aloud and instructed the individual to select the two pictures related to the target word. Raw scores from each subtest were converted to standard scores using normative data. Standard scores on each subtest were then used to determine composite scores defining participants' Overall Language Performance, Expressive Language Performance, and Receptive Language Performance. Specifically, Overall Language Performance (Spoken SS) was determined from the sum of standard scores for all four TOAL-3 subtests. Expressive Language Performance (Speaking SS) was determined by adding standard scores from the Speaking/Vocab and Speaking/Grammar subtests. Receptive Language Performance (Listening SS) was determined by adding standard scores from the Listening/Vocab and Listening/Grammar subtests. The TOAL-3 was normed on 3,056 individuals, aged 12-25, of various gender, ethnicity, race, residence, and cultural backgrounds. This population was determined to be representative of the U.S. population, according to the U.S. Census Bureau. Internal consistency, reported stability, and interscorer reliability estimates based on the composite scores confirm the TOAL-3 has good reliability. Criterion-referenced validity estimates confirm the TOAL-3 is valid for the populations represented.

ERP Stimuli

ERP stimuli included 300 sentences, each 6-11 words in length. The final word of each sentence, the target stimulus, served as either a High Cloze (more likely) or Low Cloze (less likely) probability ending, e.g., "At the movies, I always get popcorn." vs. "At the movies, I always get cookies." The same target word was used as both a High Cloze and a Low Cloze target word in separate sentences, so that each condition was counterbalanced. All target words were chosen using the MacArthur Communicative Development Inventories (Fenson et al., 2007) and/or the Spoken Word Count of frequent words used by 5-year-olds (Wepman & Hass, 1969), to ensure all participants would be highly familiar with the target words and to allow the paradigm to be used with children in future studies (Kemp et al., 2019).

A native English speaker with a neutral American accent recorded the stimuli using a headset microphone and a digital Linear PCM recorder at 44100 Hz. An overall energy equivalence level for the four-talker babble was determined to ensure the individual sentences contained the same energetic content. Using MATLAB, each sentence was then root-mean-square normalized using an individual gain constant.
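The thesis performed this normalization in MATLAB; the sketch below shows the same root-mean-square (RMS) normalization in Python for illustration. The target RMS level is an arbitrary assumed value, not the level used in the study.

```python
import numpy as np

def rms_normalize(signal, target_rms=0.05):
    """Scale a waveform so its RMS matches a target level.

    Each sentence receives its own gain constant, so all
    sentences end up with the same average energy.
    target_rms is an illustrative value, not the study's.
    """
    rms = np.sqrt(np.mean(signal ** 2))
    gain = target_rms / rms  # the individual gain constant
    return signal * gain
```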
In order to time-lock the EEG data to the onset of each target word, two independent raters identified the onset of each final word based on visual and auditory inspection of the initial sentence recordings using Praat software. Discrepancies greater than 20 ms in word onset were resolved by a third independent rater. Sentences were rerecorded, or replaced by a sentence with a clearly identifiable final word onset, if the word onset could not be initially determined. Using Adobe Audition, the exact onset (in ms) of each word was marked in the wave file.

Cloze probability for final words was determined by administering written surveys to 31 adults who did not participate in the current study (Kemp et al., 2019; Hagoort & Brown, 2000; Taylor, 1953). Individuals were presented with each sentence from the task without its final word and asked to fill in the final word. Based on survey responses, final word probability was calculated for the target words. A set of 150 High Cloze sentences was selected with a mean Cloze probability of 60% (range: 30%-100%), and Low Cloze final words were determined based on survey responses to create a matched set of an additional 150 Low Cloze sentences (range: 0%-30%). Each sentence base was used twice, once with a High Cloze final word and once with a Low Cloze final word. Each target final word was also used twice, in one High and one Low Cloze sentence.

ERP Paradigm

All sentences were presented in the presence of four-talker babble (Auditec of St. Louis). The four-talker babble was presented at an individualized signal-to-noise ratio (SNR) at which each participant responded with approximately 70% accuracy. Each individual's 70% SNR was determined in a training block presented prior to the experimental paradigm. During the training block, participants listened to 30 Low Cloze sentences in noise (not included in the ERP test paradigm) and identified the final word in a four-alternative forced choice task. Participants were instructed to respond as quickly as possible but did not have response time limitations. The multi-talker babble was varied throughout the training block based on the following criteria: if the participant identified two final words in a row correctly, the noise level increased by 2 dB; if the participant answered incorrectly, the noise level decreased by 1 dB. This procedure was designed to determine the noise level at which participants were 70% accurate. The noise level at the end of the 30 sentences was taken as the individual's 70% SNR, and the four-talker babble was then presented at this level throughout the ERP paradigm.
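The adaptive rule just described can be summarized in a few lines of code. The sketch below is illustrative Python under stated assumptions: `present_trial` is a hypothetical stand-in that plays one training sentence at the given noise level and reports whether the final word was identified correctly, and the starting noise level is arbitrary.

```python
def calibrate_snr(present_trial, n_trials=30, noise_db=60.0):
    """Track the ~70%-accuracy noise level (a sketch).

    present_trial(noise_db) -> bool: hypothetical stand-in that
    runs one training sentence in babble and returns whether
    the final word was identified correctly.
    """
    streak = 0
    for _ in range(n_trials):
        if present_trial(noise_db):
            streak += 1
            if streak == 2:       # two correct in a row:
                noise_db += 2.0   # raise the noise (harder)
                streak = 0
        else:
            noise_db -= 1.0       # an error: lower the noise
            streak = 0
    # The ending level is taken as the individual's 70% SNR
    # and is used throughout the ERP paradigm.
    return noise_db
```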
The noise level of the four-talker babble was individualized for each participant. On average, four-talker babble was presented at 65 dB SPL (SD = 1.4 dB; range: 62.5–70 dB SPL). The sentence stimuli were presented at ~62 dB SPL for all participants (Kemp et al., 2019). After each sentence was presented, individuals selected the sentence-final word they heard in a visual four-alternative forced-choice task. Three phonetically similar nouns were used as alternative response choices: target words were intermixed with distractor choices that contained a similar number of syllables and a similar rhyme structure to the target word (e.g., for the target word “cake”, choices included “lake”, “make”, and “steak”). Response choices were arranged in a “plus” pattern, illustrated in Figure 4. The location of the correct response was pseudorandomized between sentences and blocks. The amount of time the choices remained available on the screen following each sentence, the response time deadline (RTD; Benikos et al., 2013), varied between blocks to further manipulate task difficulty. Visual choices were displayed for 1,000 ms (Long), 700 ms (Middle), or 400 ms (Short). After the choices disappeared from the screen, individuals had additional time to respond. Analyses of behavioral responses during the ERP paradigm included responses that occurred between 200 and 2,500 ms. A total of 300 sentences were pseudorandomly divided into 12 blocks that varied by RTD (Long, Middle, Short), so that there were four blocks per RTD. Each block contained 25 sentences. No sentence base or target word was repeated within a block. Sentence presentation within each block was also pseudorandomized, so that High or Low Cloze sentences were not presented more than three times consecutively (see the sketch following this paragraph). Three block presentation orders were established to vary, between participants, the order in which the RTDs for specific sentences were presented (Kemp et al., 2019). To familiarize participants with the task, each block presentation order started with a Long RTD. RTD block presentation order was also pseudorandomized after the first block, so that the same RTD was not presented twice in a row. The ERP paradigm is illustrated in Figure 4.
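One simple way to enforce this run-length constraint is rejection sampling: shuffle the block, then reshuffle until no run of same-Cloze trials exceeds three. The MATLAB sketch below assumes, for illustration only, a 13/12 High/Low split within a 25-sentence block; the study's actual split and randomization code are not specified.

% Sketch of a run-length-constrained shuffle (no more than three consecutive
% High (1) or Low (0) Cloze trials within a block).
labels = [ones(13,1); zeros(12,1)];                   % assumed 13 High / 12 Low per block
ok = false;
while ~ok
    order = labels(randperm(numel(labels)));          % candidate trial order
    runs  = diff(find(diff([-1; order; -1]) ~= 0));   % lengths of consecutive runs
    ok    = max(runs) <= 3;                           % accept only if no run exceeds 3
end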
Procedure

All individuals completed the ERP paradigm described above, followed by the behavioral assessments, in one 2.5- to 3-hour session, taking breaks as needed. All participants were seated in a sound-attenuating booth after providing written consent for participation, passing the hearing screening, and placement of the elastic electrode cap (Biosemi, Amsterdam, Netherlands). A computer monitor was placed 145 cm in front of the participant to display the four response options during each trial (Kemp et al., 2019). The sentences and four-talker babble were presented via ER-3A insert earphones. Before beginning the paradigm, participants completed training trials, and SNR calibration was completed. The specific instructions presented to participants are listed in Appendix A.

At the beginning of the ERP session, four-talker babble was presented for 10,000 ms to help participants become familiar with the noise. The babble repeated every 18 min 40 s. Each trial began with a “READY?” screen, and participants pressed a button to begin the trial. Seven hundred fifty ms after the button press, a small crosshair appeared and remained at the center of the screen throughout the sentence. The sentence was presented after a jittered stimulus onset asynchrony of 800–1,200 ms. One thousand ms after each sentence was presented, the four visual response options appeared for the duration of the RTD (400, 700, or 1,000 ms) in Calibri font (size 68), subtending a visual angle of 4.74° vertically and ~7.5° horizontally. Immediate feedback on whether the participant's choice was correct or incorrect was provided to help increase motivation and encourage accurate responding. Following feedback, participants received a 750-ms break, the “READY?” screen reappeared, and the next trial began. Participants were told at the start of each block how much response time (Long, Middle, Short) would be available.

Figure 4: The sequence of events during a single trial in the experimental ERP paradigm. First, a “Ready?” screen was displayed, and participants pressed any button on a handheld remote to continue with the task. Four-talker babble at an individualized signal-to-noise ratio was played throughout the entire paradigm. Participants heard the target sentences after a small crosshair appeared on the monitor. Participants were instructed to select the final word they heard from four response choices on the screen, which were presented for either 1,000, 700, or 400 ms. Feedback regarding whether the participant's choice was correct was provided after each response to help increase response accuracy. A short break, indicated by a blank screen, was followed by a “Ready?” screen to begin the next trial.

EEG Data Analyses

An elastic cap containing 32 Ag/AgCl electrodes (Biosemi, Amsterdam, Netherlands) located at sites consistent with the International 10-20 system was used to collect EEG data. Electrodes were also placed above the left and right outer canthi and below the left orbital ridge to track eye movements, as well as on the left and right mastoids, which served offline as reference electrodes. Data were collected at 512 Hz. Electrode offsets, a reflection of electrode impedance, were kept within +/- 20 mV for all participants, lower than the +/- 40 mV recommended for the ActiveTwo system (Biosemi, Amsterdam, Netherlands), to ensure high-quality data were acquired. EEG data were down-sampled offline to 256 Hz for analysis. Data were referenced offline to the average of the data acquired from the left and right mastoids (Kemp et al., 2019). The data processing steps and statistical analyses described below are the portions of this project that were completed for this thesis.

EEGLAB (Delorme & Makeig, 2004) and ERPLAB (Lopez-Calderon & Luck, 2014) were used to conduct EEG and ERP analyses. Data were band-pass filtered from 0.01 to 40 Hz. Independent component analysis (ICA) was used to separate eye artifact (blinks; lateral eye movements) from the data. Based on the assumption that artifact and cortical activity are independent of each other, ICA decomposes the data from all electrodes into independent components that represent distinct EEG source activity (i.e., cortical and artifact sources). Two independent raters identified the ICA components to be removed from each data set in order to remove eye artifact and other movement artifact. When necessary, a third rater resolved discrepancies between raters regarding which components should be removed. Data were then divided into epochs from 100 ms prior to target word onset, the time window used for baseline correction, to 800 ms after the onset of the target word. Analyses included only trials on which participants correctly identified the target word.
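In EEGLAB terms, the preprocessing steps described above might look like the following MATLAB sketch. The file name, event code, and channel and component indices are illustrative assumptions rather than values from the study.

% Hedged EEGLAB sketch of the preprocessing steps described above.
EEG = pop_loadset('filename', 'sub01.set');    % hypothetical raw dataset
EEG = pop_resample(EEG, 256);                  % down-sample from 512 to 256 Hz
EEG = pop_reref(EEG, [33 34]);                 % average of mastoids (assumed channel indices)
EEG = pop_eegfiltnew(EEG, 0.01, 40);           % 0.01-40 Hz band-pass filter
EEG = pop_runica(EEG, 'icatype', 'runica');    % ICA decomposition
EEG = pop_subcomp(EEG, [1 2]);                 % remove rater-identified artifact components
EEG = pop_epoch(EEG, {'targ'}, [-0.1 0.8]);    % epoch around final-word onset (assumed event code)
EEG = pop_rmbase(EEG, [-100 0]);               % baseline-correct to the 100-ms pre-onset window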
Automatic artifact rejection procedures were completed using a 200-ms time window that moved in 50-ms increments. This automatic rejection procedure marked as artifact any eye-channel data that changed more than 100 µV, and/or data in other channels that changed more than 200 µV, within the 200-ms window. Manual inspection of each epoch was then conducted to identify and remove any additional epochs containing artifact. For High Cloze sentences, the number of accepted trials was lower for the Long RTD compared to the Middle and Short RTDs (RTD: F(1,38) = 9.53, p < .01, ηp2 = 0.2). No differences were observed in the number of accepted trials for the Low Cloze sentences across the three RTDs (RTD: F(1,38) < 1, p = .4, ηp2 = 0.02). This study replicated and expanded the findings of Kemp et al. (2019). Data analysis yielded ERPs time-locked to the onset of the final word. N400 mean amplitudes were calculated between 350 and 750 ms post-onset of the final word (Erlbeck et al., 2014; Holcomb & Neville, 1990; Kemp et al., 2019; Kutas & Federmeier, 2011). N400 composite measures were computed by averaging mean amplitude values across 12 centroparietal electrode sites, where the N400 is most prominent, specifically electrodes C3/4, CP5/6, P3/4, P7/8, PO3/4, and O1/2.
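A sketch of the moving-window artifact test (using ERPLAB's moving-window peak-to-peak function) and of the N400 composite measure follows. The thresholds, window parameters, and 350-750 ms measurement window come from the text above; channel indices and the placeholder ERP variables are assumptions.

% Moving-window peak-to-peak artifact detection (ERPLAB); EEG continues from the
% epoched dataset in the previous sketch.
EEG = pop_artmwppth(EEG, 'Twindow', [-100 800], 'Threshold', 100, ...
                    'Windowsize', 200, 'Windowstep', 50, 'Channel', 33:35);  % eye channels: 100 µV
EEG = pop_artmwppth(EEG, 'Twindow', [-100 800], 'Threshold', 200, ...
                    'Windowsize', 200, 'Windowstep', 50, 'Channel', 1:32);   % other channels: 200 µV

% N400 composite: mean amplitude from 350-750 ms averaged over 12 centroparietal sites.
erpTimes = linspace(-100, 800, 231);               % sample times in ms (placeholder)
erpData  = randn(32, 231);                         % channels x time ERP average (placeholder)
sites    = [5 6 9 10 13 14 17 18 21 22 25 26];     % assumed indices of C3/4, CP5/6, P3/4, P7/8, PO3/4, O1/2
win      = erpTimes >= 350 & erpTimes <= 750;
n400     = mean(mean(erpData(sites, win), 2));     % composite mean amplitude (µV)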
Statistical Analyses

The median score across all participants on each cognitive measure (Stroop Color-Word Interference, TONI-4, Overall Language, Expressive Language, and Receptive Language) was calculated. Participants were then divided into two groups, a Higher and a Lower Performing Group, around the median score for each cognitive assessment. Median scores for each task are included in Table 1.

Accuracy was determined for each participant as the total number of correct responses during the ERP task and was computed separately for each condition (e.g., High Cloze, Long RTD). Response times were also calculated for each participant for each condition; only correct responses that occurred between 200 and 2,500 ms post-onset of the four visual choices were included in analyses. Repeated-measures analyses of variance (ANOVAs) were run to determine whether time pressure and final word probability impacted individuals' Accuracy and Response Time on the semantic processing task. Additionally, ANOVAs were used to determine whether Accuracy and Response Times for each condition differed significantly between the Higher and Lower Performing Groups. Separate ANOVAs were computed for Accuracy and Response Time for each behavioral task (e.g., Stroop, TONI, Overall TOAL), with a between-subjects factor of Performance Group for that task (Higher, Lower) and within-subject factors of RTD (Long, Middle, Short) and Cloze probability (High, Low).

ERP data were analyzed using repeated-measures ANOVAs with a between-subjects factor of Performance Group (Higher, Lower) and within-subject factors of RTD (Long, Middle, Short) and Cloze probability (High, Low). Separate ANOVAs were calculated for the Stroop, TONI, and Overall TOAL groupings. The alpha level was set at p < .05, as there were three planned comparisons. To further analyze potentially meaningful interactions, step-down ANOVAs were run for interactions involving Group with p < .1. Step-down ANOVAs had the same structure described above, except for the factor being divided: for example, for the High Cloze condition, the ANOVA had a between-subjects factor of Performance Group (Higher, Lower) and a within-subject factor of RTD, and a similar ANOVA was used for the Low Cloze condition. For all effects with greater than one degree of freedom, Huynh-Feldt corrections are reported. Partial eta squared values are reported for all significant effects.
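As one concrete illustration of the omnibus ERP design, the analysis could be specified with MATLAB's fitrm and ranova functions. The data table T and its variable names below are assumptions (one row per participant, a Group column, and one N400 column per RTD x Cloze cell), standing in for the real composite values.

% Hedged sketch of the 2 (Group) x 3 (RTD) x 2 (Cloze) repeated-measures ANOVA.
n = 39;
T = array2table(randn(n, 6), 'VariableNames', ...
    {'LongHigh','LongLow','MiddleHigh','MiddleLow','ShortHigh','ShortLow'});  % placeholder N400 values
T.Group = categorical(randi(2, n, 1), 1:2, {'Higher','Lower'});               % placeholder grouping
W = table(categorical(["Long";"Long";"Middle";"Middle";"Short";"Short"]), ...
          categorical(["High";"Low";"High";"Low";"High";"Low"]), ...
          'VariableNames', {'RTD','Cloze'});                                  % within-subject design
rm  = fitrm(T, 'LongHigh-ShortLow ~ Group', 'WithinDesign', W);
tbl = ranova(rm, 'WithinModel', 'RTD*Cloze');   % pValueHF column carries Huynh-Feldt-corrected p-values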
Results

Behavioral Performance

Median (range) scores across all participants and mean (SE) performance of the Higher and Lower Performing Groups for each cognitive assessment (Stroop; TONI-4; TOAL-3) are presented in Table 1. Participants were divided into Higher and Lower Performance Groups based on their standardized score on each measure. Grouping for each individual based on performance on each standardized assessment (Higher/Lower Group) is presented in Appendix B. Higher Performance Groups consisted of 19 participants for Stroop Color-Word Interference, 20 participants for TONI-4, 21 participants for Overall Language, 19 participants for Expressive Language, and 22 participants for Receptive Language. Lower Performance Groups consisted of 19 participants for Stroop Color-Word Interference, 18 participants for TONI-4, 17 participants for Overall Language, 19 participants for Expressive Language, and 16 participants for Receptive Language. Four participants were in the Higher Performance Group on all three tasks, and four participants were in the Lower Performance Group on all tasks. The remaining participants were in the Higher Performance Group on some tasks and the Lower Performance Group on others.

Table 1: Median (range) for all participants and mean (SE) standard scores for participants with Higher and Lower performance on the Stroop: Color and Word Test, the Test of Nonverbal Intelligence – Fourth Edition (TONI-4), and the Test of Adolescent and Adult Language – Third Edition (TOAL-3).

                      Stroop Color-Word   Nonverbal IQ    Overall Language   Expressive Language   Receptive Language
                      Interference                        (Spoken SS)        (Speaking SS)         (Listening SS)
Median (range)        57 (31-75)          104 (83-117)    43 (29-51)         22 (15-28)            20 (13-26)
Mean (SE)             57.36 (1.36)        102.46 (1.48)   42.13 (0.96)       22.03 (0.54)          20.10 (0.55)
Higher Performance    62.59 (1.02)        109.4 (1.02)    46.38 (0.59)       24.26 (0.37)          22.36 (0.47)
Lower Performance     50.58 (1.56)        95.16 (1.36)    37.17 (0.98)       18.81 (0.48)          17.17 (0.46)
Group Difference      t(38) = 12.01,      t(38) = 14.24,  t(38) = 9.21,      t(38) = 5.45,         t(38) = 5.19,
                      p < .001            p < .001        p < .001           p < .001              p < .001

Performance Accuracy and Response Times

Mean (SE) Accuracy and Response Times for final word identification are presented in Table 2. Across all participants, a main effect of Cloze probability was observed (F(1,38) = 203.26, p < .01, ηp2 = 0.84): participants identified High Cloze final words more accurately than Low Cloze final words. An effect of RTD (F(1,38) = 33.08, p < .01, ηp2 = 0.47), as well as a trend toward an interaction between the RTD and Cloze conditions (F(1,38) = 6.59, p = .06, ηp2 = 0.15), was also observed. Step-down ANOVAs comparing Accuracy across RTD conditions separately for High and Low Cloze final words were conducted to better understand the relationships between Accuracy and RTD. As Figure 5 shows, Accuracy was higher for the Long RTD compared to the Middle and Short RTDs for both High Cloze (F(1,38) = 9.62, p < .01, ηp2 = 0.20) and Low Cloze final words (F(1,38) = 40.52, p < .01, ηp2 = 0.52). There were no differences in Response Time across the three RTD conditions (F(1,38) < 1, p = .391) or between the High and Low Cloze conditions (F(1,38) < 1, p = .594). The interaction between RTD and Cloze was not significant (all Fs < 1.85, all ps > .18).

ANOVAs were conducted to compare Performance Accuracy and Response Times between the Higher and Lower Performance Groups for each cognitive assessment (Stroop, TONI, Overall Language). No differences in Accuracy (all Fs < 1.56, all ps > .22) or Response Time (all Fs < 1.81, all ps > .18) were revealed between the Higher and Lower Stroop Performance Groups. Additionally, no significant interactions were observed between the Higher and Lower Nonverbal IQ Groups for Accuracy (all Fs < 2.5, all ps > .12) or Response Time (all Fs < 1.35, all ps > .25). No differences in Accuracy (all Fs < 2.06, all ps > .16) or Response Time (all Fs < .92, all ps > .34) were observed between individuals with Higher and Lower Overall Language Performance.

Table 2: Mean (SE) Accuracy (%) and Response Times (ms) for final word identification for High and Low Cloze probability final words across all participants.

RTD      Cloze   Accuracy (%)    Response Time (ms)
Long     High    91.72 (1.21)    1161.62 (33.2)
Long     Low     86.12 (1.16)    1142.74 (43.97)
Middle   High    88.01 (1.37)    1076.52 (31.11)
Middle   Low     75.07 (1.79)    1133.89 (34.84)
Short    High    83.81 (2.04)    1164.78 (87.39)
Short    Low     77.23 (1.38)    1083.14 (34.10)

Figure 5: Performance Accuracy for High and Low Cloze probability words for the Long, Middle, and Short response time deadlines (RTD). High Cloze final words were identified more accurately than Low Cloze final words. Participants identified High and Low Cloze words more accurately for the Long RTD compared to the Middle or Short RTDs. [Bar graph: “Behavioral Accuracy across Response Time Deadline and Cloze Conditions”; y-axis: Accuracy (%); x-axis: Response Time Deadline (RTD).]

N400 Mean Amplitude

As hypothesized, the expected N400 pattern, with larger N400 amplitudes elicited by Low Cloze compared to High Cloze final words, was observed. This pattern can be seen for the Long RTD in Figure 6, for the Middle RTD in Figure 7, and for the Short RTD in Figure 8. In each figure, the time window for analysis is highlighted on electrode C4 and the N400 is labeled on electrode C3 for illustrative purposes only.

Figure 6: Grand average event-related brain potential (ERP) waveforms for the High (black) and Low (red) Cloze probability sentences for the Long response time deadline (RTD) condition. For illustrative purposes, the N400 is labeled at C3 and the 350-750 ms time window is highlighted at electrode C4. [Waveform montage across scalp and EOG electrodes; scale: ±6 µV, -100 to 1000 ms.]

Figure 7: Grand average event-related brain potential (ERP) waveforms for the High (black) and Low (red) Cloze probability sentences for the Middle response time deadline (RTD) condition. For illustrative purposes, the N400 is labeled at C3 and the 350-750 ms time window is highlighted at electrode C4. [Waveform montage across scalp and EOG electrodes; scale: ±6 µV, -100 to 1000 ms.]

Figure 8: Grand average event-related brain potential (ERP) waveforms for the High (black) and Low (red) Cloze probability sentences for the Short response time deadline (RTD) condition. For illustrative purposes, the N400 is labeled at C3 and the 350-750 ms time window is highlighted at electrode C4.
[Waveform montage across scalp and EOG electrodes; scale: ±6 µV, -100 to 1000 ms.]

Stroop Performance

No overall effect of Group was observed (F(1,38) < 1, p = 0.555, ηp2 = 0.10). However, a significant Cloze x Stroop Performance interaction revealed that participants with Higher Stroop Performance had a larger (more negative) N400 for the Low Cloze condition than the Lower Stroop Performance Group (Cloze x Group: F(1,38) = 4.25, p = .046, ηp2 = 0.103), as illustrated in Figure 9. A trend toward a RTD x Cloze x Stroop Performance interaction was also observed (F(1,38) < 1, p = .096, ηp2 = .062); no other effects reached significance (all Fs < 4.25, all ps > .046). To determine whether these interactions were driven by the High or Low Cloze condition, step-down ANOVAs comparing Stroop Performance Groups separately for the High and Low Cloze conditions across the three RTDs were conducted. Step-down ANOVAs revealed no significant differences in the High Cloze condition. However, a significant interaction between Stroop Performance and RTD was observed for the Low Cloze condition, primarily for the Long RTD, as Low Cloze words are harder to process (RTD x Stroop: F(1,38) = 3.14, p = .049, ηp2 = .078). As can be seen in Figure 9, N400 mean amplitudes were comparable between the Higher and Lower Stroop Performance Groups for the High Cloze condition at all three RTDs, and for the Low Cloze condition at the Middle and Short RTDs. However, N400 mean amplitudes were larger for the Higher Stroop Group for the Low Cloze condition at the Long RTD.

Figure 9: N400 mean amplitudes elicited by High Cloze and Low Cloze probability final words across each of the response time deadlines for individuals with Higher and Lower Stroop Performance. Negative values are plotted upward. N400 amplitudes elicited by Low Cloze probability final words for the Long RTD were larger (more negative) for the Higher Stroop Group. [Bar graph: “N400 Mean Amplitudes for each condition based on Higher and Lower Stroop Performance”; y-axis: Mean Amplitude (µV).]

Nonverbal IQ

No significant effects of Group, or interactions including Group, were observed as a function of TONI performance (all Fs < 0.105, all ps > 0.836). These results demonstrate comparable N400 amplitudes between the Higher and Lower performing TONI Groups across all Cloze and RTD conditions.

Overall Language

A trend toward significance was observed for the Cloze x Overall Language Performance interaction (Cloze x Group: F(1,38) = 2.92, p = 0.096, ηp2 = .073). No other Group effects or interactions involving Group were observed (all Fs < 2.92, all ps > 0.096). This trend suggests similar N400 amplitudes for Low Cloze final words in the Lower and Higher Overall Language Performance Groups. However, the Lower Overall Language Group tended to have a larger (more negative) N400 mean amplitude in response to High Cloze words than the Higher Overall Language Group (Figure 10). Given that this trend was observed for overall language proficiency, and this was a receptive language task, further analyses were completed to determine whether differences in N400 amplitudes associated with language might be more closely related to receptive or expressive language abilities.
Figure 10: N400 mean amplitudes elicited by High Cloze and Low Cloze probability final words across the Long, Middle, and Short RTD conditions for individuals with Higher and Lower Overall Language Performance. Negative values are plotted upward. Differences in N400 amplitudes between individuals with Higher and Lower Overall Language Performance were only observed for the High Cloze condition. N400 amplitudes elicited by High Cloze probability final words were larger (more negative) for the Lower Overall Language Group. [Bar graph: “N400 Mean Amplitudes for High and Low Cloze Conditions based on Overall Language Performance”; y-axis: Mean Amplitude (µV).]

Expressive Language

An overall effect of Group was observed (F(1,38) = 7.11, p = 0.011, ηp2 = 0.161). No significant within-subject interactions including Group were found (all Fs < 1.184, all ps > 0.284). This finding reflects larger N400 mean amplitudes for the Lower compared to the Higher Expressive Language Group across all conditions, as illustrated in Figure 11. Both the Higher and Lower Expressive Language Performance Groups showed the expected N400 effect (Low Cloze > High Cloze).

Figure 11: N400 mean amplitudes elicited by High Cloze and Low Cloze probability final words across the Long, Middle, and Short RTD conditions for individuals with Higher and Lower Expressive Language Performance. Negative values are plotted upward. Overall differences in N400 amplitudes between individuals with Higher and Lower Expressive Language Performance were observed. N400 amplitudes were smaller (less negative) in all conditions for the Higher Expressive Language Group. [Bar graph: “N400 Mean Amplitudes for High and Low Cloze Conditions based on Expressive Language Performance”; y-axis: Mean Amplitude (µV).]

Receptive Language

A significant Cloze x Receptive Language Performance interaction was observed (Cloze x Group: F(1,38) = 6.99, p = 0.012, ηp2 = 0.159). This interaction is illustrated in Figure 12. No overall effects of Group or other interactions with Group were observed (all Fs < 0.785, all ps > 0.381). To determine whether this difference was driven by the High or Low Cloze condition, step-down ANOVAs comparing Receptive Language Performance Groups separately for the High and Low Cloze conditions across the three RTDs (Long, Middle, Short) were conducted. Step-down ANOVAs revealed no differences between Groups for the Low Cloze condition (all Fs < 0.328, all ps > 0.664). A trend toward significance was observed for the between-subjects effect in the High Cloze condition (Group: F(1,38) = 3.95, p = 0.054, ηp2 = .096), reflecting smaller N400 amplitudes in the High Cloze condition for the Higher compared to the Lower Receptive Language Group (Figure 12).

Figure 12: N400 mean amplitudes elicited by High Cloze and Low Cloze probability final words across the Long, Middle, and Short RTD conditions for individuals with Higher and Lower Receptive Language Performance. Negative values are plotted upward. Differences in N400 amplitudes between individuals with Higher and Lower Receptive Language Performance were only observed for the High Cloze condition. N400 amplitudes elicited by High Cloze probability final words were smaller (less negative) for the Higher Receptive Language Group.
[Bar graph: “N400 Mean Amplitudes for High and Low Cloze Conditions based on Receptive Language Performance”; y-axis: Mean Amplitude (µV).]

Discussion

Current Study Summary

The aim of the current study was to increase understanding of the ways in which individual variability in inhibition skills, nonverbal intelligence, and expressive and receptive language may impact speech perception and sentence processing in noisy environments. This study extended previous work by Kemp and colleagues (2019) to further examine relationships between individual cognitive and language skills and semantic processing. ERPs were collected to examine the neural underpinnings of semantic processing. Adults with normal hearing completed a sentence-processing-in-noise task in which they were asked to identify sentence final words with either High Cloze or Low Cloze probability. Response options were presented for Long (1,000 ms), Middle (700 ms), and Short (400 ms) RTDs in a visual four-alternative forced-choice task. Four-talker babble at the individualized 70% SNR was played while the sentences were presented. We measured participants' inhibition, nonverbal intelligence, and language skills using standardized language and cognitive assessments, including the Stroop: Color and Word Test (Golden et al., 2003; Stroop, 1935), the TONI-4 (Brown et al., 2010), and the TOAL-3 (Hammill et al., 1994).

Repeated-measures ANOVAs revealed that people with better inhibitory skills, as indexed by Higher Stroop performance, exhibited greater differentiation between easier and harder listening tasks, regardless of time pressure to respond. Individuals with stronger language skills exhibited more mature semantic processing, as indexed by smaller N400 amplitudes, compared to individuals with weaker language skills. No differences in semantic processing were observed as a function of nonverbal IQ performance. These findings highlight differences in semantic processing as a function of inhibitory control and language skills and suggest that it is important to account for cognitive skills when assessing semantic processing, in both typical and disordered populations.

Impacts of Listening Conditions on Individual Task Performance

Consistent with other literature, High Cloze words were identified more accurately than Low Cloze words (Benikos et al., 2013; Kemp et al., 2019). This result was expected, as High Cloze final words fit better in a specific context (e.g., “I like my coffee with cream and sugar.” vs. “I like my coffee with cream and tacos.”) and thus are easier to process and identify. Results also revealed that both High and Low Cloze final words were identified more accurately during the Long RTD than during the Middle or Short RTDs. This is consistent with other literature (Benikos et al., 2013; Kemp et al., 2019) and is logical: participants had more time to read all of the word options, making it more likely that they could accurately identify the correct response choice, whether the final word was highly predictable or not. Contrary to what was expected, no significant effects of RTD or Cloze probability on Response Time were revealed. Individuals were not faster at identifying higher probability than lower probability final words, nor faster at identifying final words under greater response time pressure.
As the task already required high cognitive engagement to process sentences in noise, participants may have needed to allocate cognitive resources to semantic processing broadly, potentially washing out differences in processing speed or Response Time.

Inhibition and Semantic Processing in Noise

Consistent with other literature, greater N400 amplitudes were elicited by less probable sentence contexts, suggesting greater ease of lexical access for highly probable sentences (Kutas & Federmeier, 2011; Kutas & Hillyard, 1980, 1984). Together with previous research, the observed N400 effect indicates that higher-context words facilitate and support semantic processing in challenging listening environments (Erlbeck et al., 2014; Kemp et al., 2019). The RTD x Stroop Performance interaction, shown in Figure 9, revealed that individuals with Higher Stroop Performance exhibited larger N400 effects (Low Cloze > High Cloze) than participants with Lower Stroop Performance for the Long RTD. Individuals with stronger inhibitory skills exhibited a similar N400 effect, or semantic processing pattern, across the Long, Middle, and Short RTDs. These findings suggest that better inhibitory skills were associated with better differentiation of higher and lower probability sentences, even for less difficult (longer RTD) tasks. The results illustrated in Figure 9 support our hypothesis that individuals with stronger inhibitory control have more efficient semantic processing, even without high time pressure to respond. In other words, people with stronger inhibitory skills may be better able to understand a message in a complex listening environment because they can more efficiently allocate cognitive resources to semantic processing (i.e., fewer resources for High Cloze sentences and more for Low Cloze sentences) across listening conditions (i.e., easier and more difficult) than peers with weaker inhibitory skills. Semantic processing in noise for people with lower inhibitory skills was comparable to that of peers with higher inhibitory control when greater time pressures (Middle, Short RTDs) existed.

Contrary to what was predicted, when time pressures existed, individuals with stronger and weaker inhibitory skills showed similar semantic processing patterns. However, with low time pressure (Long RTD), individuals with lower inhibitory control did not engage as many cognitive resources for the Low Cloze condition as individuals with stronger inhibitory control. As illustrated in Figure 9, N400 amplitudes in the Lower Stroop Performance Group were similar for High and Low Cloze sentences at the Long RTD. This suggests that the Lower Stroop Performance Group did not process less predictable final words differently from highly predictable final words when they knew they had a long time to respond. Individuals with weaker inhibitory control may have less efficient semantic processing of Low Cloze sentences under low time pressure than individuals with stronger inhibitory skills. People with weaker inhibition skills may use a different processing strategy, not recruiting as many cognitive resources for semantic processing and inhibition until time pressures are higher. Moreover, these findings suggest that individuals with weaker inhibitory skills may not receive the same benefit of context when time pressures are low.
The current findings of interactions between inhibitory control and semantic processing in noise are consistent with other research examining inhibitory skill during speech perception in noise (Cahana-Amitay & Albert, 2016; Debruille, 2007; Janse & Adank, 2012; Taler et al., 2010; Uslar et al., 2013). Other research revealed that individuals with stronger inhibitory skills, measured by Stroop performance, recognized stimuli more accurately during semantic processing in noise tasks (Janse & Adank, 2012; Taler et al., 2010; Uslar et al., 2013). Additional research examining relationships between inhibition and semantic processing using ERPs focused on the neural processes underlying selective inhibition (N200) during a lexical naming task (Kok, 1986, 1999; Shao et al., 2014). Shao et al. (2014) observed larger N200 amplitudes and longer naming reaction times in response to low probability naming conditions compared to high probability naming conditions, suggesting inhibition may support lexical selection. The current results similarly revealed that low context final words elicited larger N400 amplitudes than high context final words in noise, suggesting low context final words require greater cognitive engagement (Kemp et al., 2019; Shao et al., 2014).

To further understand relationships between inhibitory control and semantic processing, this study focused on relationships between inhibitory control and the neural processes underlying semantic task performance (N400), rather than accuracy on the semantic processing task or a different neural component. In doing so, results revealed that individuals with stronger inhibitory control better differentiated high and low probability sentences, potentially due to greater allocation of cognitive resources to semantic processing, inhibition, or other cognitive skills, regardless of response time pressure (Cahana-Amitay et al., 2014). Similar to other research observing greater N200 amplitudes in response to low probability naming conditions compared to high probability conditions, Figure 9 shows that greater N400 amplitudes were elicited by Low Cloze than High Cloze sentences in both Stroop Performance Groups when time pressures existed. This finding extends previous evidence suggesting inhibitory control may help facilitate and support semantic processing in noisy environments (Shao et al., 2014).

Although previous studies have examined interactions between inhibitory control and semantic processing, this study provides greater insight into the ways individual variability in inhibitory skills contributes to semantic processing in difficult listening environments. Together, the results suggest cognitive resources may be devoted to inhibition during low probability naming conditions, potentially to inhibit competitor choices or other distractions (Cahana-Amitay et al., 2014, 2015; Shao et al., 2014). Consistent with the neural multifunctionality model, the current results suggest that executive function skills, like inhibitory control, may be important for perceiving language in difficult listening environments (Banks et al., 2015; Cahana-Amitay et al., 2014, 2015, 2016; Janse & Adank, 2012; Miyake et al., 2000; Ou et al., 2015; Taler et al., 2010). Neural processes underlying semantics may interact dynamically with other cognitive processes, such as inhibition, working memory, and attention.
In other words, increased cognitive resources may be devoted to language skills as well as executive functions, including inhibition and attention shifting, in order to understand stimuli in noisy environments. Specifically, inhibitory control may play a role in semantic processing in noise by helping to define the meaning of the word and to inhibit competitor meanings of the target word, distractor choices, or background noise (Cahana-Amitay et al., 2014, 2015; Debruille, 2007; Debruille et al., 2008; Shao et al., 2014). These results support the neural multifunctionality model, suggesting that neural areas controlling language, as well as various executive functions including inhibitory control, may be involved in lexical processing in noisy environments (Cahana-Amitay et al., 2014).

Nonverbal IQ and Semantic Processing in Noise

Contrary to what was predicted, no significant effects were observed between the Nonverbal IQ Performance Groups for semantic processing in noise. These results suggest nonverbal intelligence may not have a direct effect on semantic processing in noise in adults. Although significant relationships between nonverbal IQ and semantic processing have been observed in previous research, evidence suggests that nonverbal IQ and the recruitment of cognitive resources for semantic processing in noise may be independent of each other (Hampton Wray & Weber-Fox, 2013). More specifically, nonverbal IQ has been suggested to play a role in the timing or processing speed of lexical integration and may be independent of resource allocation for semantic processing. The lack of significant interactions observed between nonverbal intelligence and semantic processing in noise may be due to this independence between nonverbal IQ and the engagement of cognitive resources for lexical integration. These results likely did not reveal interactions between nonverbal IQ and semantic processing because, as the task became more difficult, participants needed to recruit additional resources for semantic processing broadly, potentially washing out differences in processing speed. Our results, together with previous research, suggest that nonverbal IQ and the allocation of cognitive resources during semantic processing in noise may be independent of each other when processing individual sentences (Hampton Wray & Weber-Fox, 2013).

Expressive and Receptive Language Skills and Semantic Processing in Noise

More efficient semantic processing in difficult listening environments was associated with stronger expressive language skills. Smaller neural responses across Cloze conditions and RTDs were observed in individuals with stronger expressive language skills compared to peers with weaker expressive language skills. In other words, individuals with stronger expressive language skills showed generally more efficient semantic processing, regardless of listening condition, whereas individuals with weaker expressive language abilities may have generally less efficient sentence processing in noise. One potential implication is that stronger expressive language abilities may facilitate listening in noisy environments, easing some of the effort required to hold conversations against noisy backgrounds.

Significant differences were revealed between N400 amplitudes elicited by the High Cloze condition for the Higher and Lower Receptive Language Performance Groups. This interaction is similar to the trend revealed for the Overall Language Performance Groups.
As seen in Figure 12, N400 amplitudes elicited by the High Cloze condition were smaller for individuals with higher receptive language abilities than for peers with weaker receptive language skills. In other words, stronger receptive language skills were associated with more efficient semantic processing in noise, indexed by smaller N400 amplitudes, when sentence final words were highly predictable. Individuals with stronger receptive language skills may be better able to use sentence context to aid in processing sentences in noise. Both the Higher and Lower Receptive Language Groups exhibited the expected N400 effect, larger N400 amplitudes for lower probability than higher probability sentences, as well as similar neural responses to Low Cloze final words, as seen in Figure 12. Consistent with previous research, when context was less expected, both groups had to recruit more cognitive resources to process the final word, indexed by larger N400 amplitudes for lower probability sentences (Kutas & Hillyard, 1980). However, individuals with higher receptive language performance exhibited smaller N400 amplitudes to the High Cloze sentences than were observed to the same sentences in participants with lower receptive language performance. These results suggest stronger receptive language skills may help facilitate semantic processing in noise when the task is easier (High Cloze sentences), promoting listening in challenging environments.

Taken together, these findings suggest that differences in N400 amplitudes associated with language are closely related to expressive and receptive language abilities. People with better expressive language abilities may have generally more efficient semantic processing, regardless of the listening condition. Similarly, individuals with stronger receptive language skills showed more efficient semantic processing in highly predictable contexts than individuals with weaker receptive language skills. These individuals appear to engage fewer cognitive resources in processing the sentence, reflected by smaller N400 amplitudes. People with weaker receptive language abilities may not receive the same benefit of context as peers with better receptive language skills.

These results are similar to evidence from other studies that examined relationships between language proficiency, measured with the TOAL-3, and the neural processes underlying semantics (N400). Similar to recent findings, a more pronounced N400 effect was observed in individuals with stronger language proficiency, in this case receptive language proficiency (Newman et al., 2012). Additionally, these results align with previous evidence that N400 amplitudes elicited in individuals with stronger language proficiency were smaller (more positive) than those of peers with weaker language proficiency when processing both highly and less predictable final words (Weber-Fox et al., 2003). Expanding on this research, the current results revealed that stronger language proficiency was associated with more efficient semantic processing in noise, marked by smaller N400 amplitudes. Overall, individual variability in expressive and receptive language abilities contributes to differences in semantic processing in noise.

Limitations of the Current Study and Future Implications

Several factors should be considered when evaluating results from the current study. One limitation was the length of the overall paradigm, including the length of individual trials.
Each trial lasted at least 2,000 ms. As participants were asked to listen to sentences during the ERP portion for approximately 70 to 90 minutes, they may have experienced decreased motivation as well as increased fatigue or frustration as the experiment continued. To shorten the study in the future, the Middle and Short RTDs could be combined into one condition, as similar neural responses were observed between these conditions.

Another limitation of this study was that participants were asked not to blink until after the response options were presented. Participants may have tried not to blink during the sentence presentation and subsequently blinked when they heard the final word. Large amounts of eye artifact may therefore have occurred before the response options were presented, as well as while the four-alternative response options were visually presented on the screen. Other studies have used a two-alternative forced-choice task (Benikos et al., 2013) or a free-choice verbal response (Groppe et al., 2010) to limit eye artifact. A rapid two-alternative forced-choice task may elicit a demand to respond quickly similar to that of a four-alternative forced-choice task. A longer delay between sentence presentation and the response options may also help reduce eye movement during the task.

Other cognitive functions that were not examined in this study, including working memory and attention shifting, may also play a role in semantic processing in noisy environments and should be analyzed in future research (Cahana-Amitay et al., 2014, 2015). A next step in this line of research is to more precisely examine the ways various cognitive abilities, which are important for attending to and understanding the message, impact resource allocation for semantics when task demands are high. Examining how specific cognitive abilities within a complex cognitive network uniquely impact semantic processing will provide a more comprehensive understanding of the dynamic interactions between cognitive skills and the neural processes underlying semantics. Future research should also examine the impact of difficult listening conditions on the cognitive processes underlying semantics in other populations, including children and individuals affected by hearing loss, communication disorders, or various executive functioning difficulties.

Additionally, other types of changes in task demands may impact the neural processes available for language (Kemp et al., 2019; Pichora-Fuller et al., 2016). Specifically, environmental factors, such as the clarity of the speech signal or the level of background noise, may impact individuals' perception of the signal. Manipulating the purpose of the task (i.e., identification/naming vs. comprehension) may also alter task demand. Other methods may help increase participant motivation and the value of accurate task performance, as well as reduce fatigue (Pichora-Fuller et al., 2016).

Knowledge from this study can help improve assessments and treatments for clients with attentional, inhibitory, or semantic processing difficulties, with or without hearing loss. Specifically, these results can help clinicians use more specific and sensitive assessments that better evaluate an individual's cognitive and linguistic skills in adverse listening conditions.
More reliable assessments can help improve the development of treatment plans that target naturalization and generalization of cognitive and linguistic abilities for clients who struggle to participate fully in conversation in difficult listening environments (e.g., jobs, concerts, restaurants, parties). Additionally, understanding how the allocation of neural resources for one skill may impact other cognitive abilities may help clinicians develop better interventions that simultaneously target multiple aspects of cognition in adults with and without hearing loss.

Conclusion

Results from the current study indicate that inhibitory control as well as language abilities account for some of the individual variability in the allocation of cognitive resources during semantic processing in noise. Stronger inhibitory skills were associated with more efficient semantic processing, even when pressure to respond was low. Individuals with weaker inhibitory skills may employ a different processing strategy and may not experience the same benefit of context in difficult listening conditions. Similarly, individuals with stronger expressive language skills exhibited more efficient semantic processing, regardless of final word context or time pressure to respond. When processing highly contextual words in noise, individuals with stronger receptive language abilities likewise allocated fewer cognitive resources, reflecting more efficient semantic processing. In contrast, nonverbal intelligence was not associated with the neural processes underlying semantics in noise. These results demonstrate that stronger inhibitory control and language abilities have unique relationships with the neural processes underlying semantics in challenging listening environments. Together, these findings suggest individuals employ different semantic processing strategies, as a function of their cognitive strengths and weaknesses, to allocate neural resources to attend to and understand speech in noise.

APPENDICES

APPENDIX A

The instructions listed below were presented to all participants during the paradigm, specifically before starting the signal-to-noise calibration portion and during the experimental training trials (Kemp et al., 2019).

“For this part of the study, we will be evaluating your hearing level in noise. You will hear continuous noise, and individual sentences will be overlaid on the noise. The noise will sound like you are at a busy restaurant or party. For each sentence, a “Ready?” will appear on the screen. Press any button to begin the sentence when you are ready. Next, a crosshair will appear on the screen. While the crosshair is on the screen, look at the crosshair and try not to blink while the sentence you are listening for is played. At the end of the sentence, four words will appear on the screen. These words are arranged in the same layout as your response pad, so press the button that corresponds to the last word you heard. The words may or may not fit well in the sentence. Once the response choices disappear from the screen, nothing else will happen until you select an answer, so choose the word you heard, or the word closest to what you heard.”

APPENDIX B

Table 3: Higher and Lower Performance Groups based on Stroop, TONI, and TOAL performance across all participants.
Participant: 312km, 313ka, 314sm, 315jr, 319nu, 320lc, 321bm, 322ra, 323as, 324ks, 325vrPC, 326om, 327ad, 328ae, 329gw, 330kb, 332ss, 335mj, 336am, 337ko, 339eb, 340dh, 341jh, 342gs, 343jm, 347mg, 348cs, 349cz, 350rv, 351je, 352gg, 353jh, 356jo, 357mb

Stroop Interference / TONI: HIGH HIGH LOW LOW HIGH LOW HIGH HIGH LOW HIGH HIGH LOW HIGH HIGH HIGH HIGH HIGH HIGH HIGH LOW LOW LOW LOW LOW HIGH LOW LOW HIGH LOW LOW LOW HIGH HIGH HIGH HIGH HIGH LOW HIGH HIGH HIGH LOW LOW HIGH LOW HIGH HIGH HIGH LOW LOW HIGH LOW LOW LOW LOW LOW HIGH LOW HIGH HIGH LOW HIGH HIGH HIGH LOW LOW HIGH LOW LOW

Overall Language: HIGH HIGH HIGH LOW HIGH HIGH HIGH HIGH HIGH LOW HIGH LOW HIGH HIGH LOW HIGH LOW LOW LOW LOW LOW HIGH HIGH LOW LOW HIGH LOW LOW HIGH HIGH HIGH LOW HIGH LOW

Expressive Language: HIGH HIGH HIGH LOW HIGH HIGH HIGH HIGH LOW LOW HIGH LOW HIGH HIGH LOW HIGH LOW HIGH LOW LOW LOW HIGH HIGH LOW LOW HIGH LOW LOW HIGH LOW LOW LOW HIGH LOW

Receptive Language: HIGH HIGH HIGH HIGH LOW HIGH HIGH HIGH HIGH LOW HIGH LOW HIGH HIGH LOW HIGH LOW LOW LOW HIGH HIGH HIGH HIGH LOW LOW HIGH LOW LOW HIGH HIGH HIGH LOW LOW LOW

Table 3 (cont'd): 358rw, 359vr, 360em, 362mb, 363ai: LOW LOW HIGH HIGH LOW LOW HIGH LOW LOW HIGH LOW LOW HIGH LOW LOW LOW HIGH LOW HIGH LOW LOW LOW HIGH HIGH LOW

REFERENCES

Allison, B. Z., & Polich, J. (2008). Workload assessment of computer gaming using a single-stimulus event-related potential paradigm. Biological Psychology, 77(3), 277–283.

American Speech-Language-Hearing Association. (2019). Language in brief. Retrieved from https://www.asha.org/Practice-Portal/Clinical-Topics/Spoken-Language-Disorders/Language-In--Brief/

Anderson Gosselin, P., & Gagné, J. P. (2011). Older adults expend more listening effort than young adults recognizing speech in noise. Journal of Speech, Language, and Hearing Research, 54(3), 944.

Badre, D., & Wagner, A. D. (2007). Left ventrolateral prefrontal cortex and the cognitive control of memory. Neuropsychologia, 45(13), 2883–2901.

Banks, B., Gowen, E., Munro, K. J., & Adank, P. (2015). Cognitive predictors of perceptual adaptation to accented speech. The Journal of the Acoustical Society of America, 137(4), 2015–2024.

Benau, E. M., Morris, J., & Couperus, J. W. (2011). Semantic processing in children and adults: Incongruity and the N400. Journal of Psycholinguistic Research, 40, 225–239.

Benikos, N., Johnstone, S. J., & Roodenrys, S. J. (2013). Varying task difficulty in the go/nogo task: The effects of inhibitory control, arousal, and perceived effort on ERP components. International Journal of Psychophysiology, 87(3), 262–272.

Berlin, L., & Bohlin, G. (2002). Response inhibition, hyperactivity, and conduct problems among preschool children. Journal of Clinical Child and Adolescent Psychology, 31, 242–251.

Bertoli, S., & Bodmer, D. (2014). Novel sounds as a psychophysiological measure of listening effort in older listeners with and without hearing loss. Clinical Neurophysiology, 125(5), 1030–1041.

Borton, S. A., Mauze, E., & Lieu, J. E. C. (2010). Quality of life in children with unilateral hearing loss: A pilot study. American Journal of Audiology, 19(1), 61–72.

Brown, L., Sherbenou, R. J., & Johnsen, S. K. (2010). Test of Nonverbal Intelligence, Fourth Edition (TONI-4). San Antonio, TX: Pro-Ed, Inc.

Cahana-Amitay, D., & Albert, M. L. (2014). Brain and language: Evidence for neural multifunctionality. Behavioural Neurology, 2014, 1–16.

Cahana-Amitay, D., & Albert, M. L. (2015). Redefining recovery from aphasia. New York, NY: Oxford University Press.
Cahana-Amitay, D., Spiro III, A., Sayers, J. T., Oveis, A. C., Higby, E., Ojo, E. A., Duncan, S., Goral, M., Hyun, J., Albert, M. L., & Obler, L. K. (2016). How older adults use cognition in sentence final word recognition. Aging, Neuropsychology, and Cognition, 23(4), 418–444.

Chisolm, T. H., Johnson, C. E., Danhauer, J. L., Portz, L. J., Abrams, H. B., Lesner, S., ... Newman, C. W. (2007). A systematic review of health-related quality of life and hearing aids: Final report of the American Academy of Audiology task force on the health-related quality of life benefits of amplification in adults. Journal of the American Academy of Audiology, 18(2), 151–183.

Ciorba, A., Bianchini, C., Pelucchi, S., & Pastore, A. (2012). The impact of hearing loss on quality of life of elderly adults. Clinical Interventions in Aging, 7, 159–163.

Deacon, D., Dynowska, A., Ritter, W., & Grose-Fifer, J. (2004). Repetition and semantic priming of nonwords: Implications for theories of N400 and word recognition. Psychophysiology, 41, 60–74.

Debruille, J. B. (2007). The N400 potential could index a semantic inhibition. Brain Research Reviews, 56, 472–477.

Debruille, J. B., Ramirez, D., Wolf, Y., Schaefer, A., Nguyen, T. V., Bacon, B. A., Renoult, L., & Brodeur, M. (2008). Knowledge inhibition and N400: A within- and between-subjects study with distractor words. Brain Research, 1187, 167–183.

Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1), 9–21.

Desjardins, J. L., & Doherty, K. A. (2013). Age-related changes in listening effort for various types of masker noises. Ear and Hearing, 34(3), 261–272.

Diependaele, K., Ziegler, J., & Grainger, J. (2010). Fast phonology and the bi-modal interactive activation model. European Journal of Cognitive Psychology, 22(5), 764–778.

Dong, G. H., Yang, L. Z., Hu, Y. B., & Jiang, Y. (2009). Is N2 associated with successful suppression of behavior responses in impulse control processes? NeuroReport, 20, 537–542.

Eriksen, B., & Eriksen, C. (1974). Effects of noise letters in the identification of target letters in a non-search task. Perception and Psychophysics, 16, 143–149.

Erlbeck, H., Kübler, A., Kotchoubey, B., & Veser, S. (2014). Task instructions modulate the attentional mode affecting the auditory MMN and the semantic N400. Frontiers in Human Neuroscience, 8.

Federmeier, K. D., Wlotko, E. W., De Ochoa-Dewald, E., & Kutas, M. (2007). Multiple effects of sentential constraint on word processing. Brain Research, 1146, 75–84.

Fenson, L., Marchman, V. A., Thal, D. J., Dale, P. S., Reznick, J. S., & Bates, E. (2007). MacArthur-Bates Communicative Development Inventories. Baltimore, MD: Brookes.

Fraser, S., Gagné, J.-P., Alepins, M., & Dubois, P. (2010). Evaluating the effort expended to understand speech in noise using a dual-task paradigm: The effects of providing visual speech cues. Journal of Speech, Language, and Hearing Research, 53(1), 18–33.

Friedman, N. P., & Miyake, A. (2004). The relations among inhibition and interference control processes: A latent variable analysis. Journal of Experimental Psychology: General, 133, 101–135.

Golden, C. J., Freshwater, S. M., & Golden, Z. (2003). Stroop Color and Word Test. Wood Dale, IL: Stoelting Co.

Grainger, J., & Holcomb, P. J. (2009). Watching the word go by: On the time-course of component processes in visual word recognition. Language and Linguistics Compass, 3, 128–156.
Gustafsson, H. A., & Arlinger, S. D. (1994). Masking of speech by amplitude-modulated noise. The Journal of the Acoustical Society of America, 95(1), 518–529.

Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: Semantic ERP effects. Neuropsychologia, 38(11), 1518–1530.

Hällgren, M., Larsby, B., Lyxell, B., & Arlinger, S. (2005). Speech understanding in quiet and noise, with and without hearing aids. International Journal of Audiology, 44(10), 574–583.

Hammill, D. D., Brown, V. L., Larsen, S. C., & Wiederholt, J. L. (1994). Test of Adolescent and Adult Language, Third Edition (TOAL-3) (3rd ed.). Austin, TX: Pro-Ed, Inc.

Hampton Wray, A., & Weber-Fox, C. (2013). Specific aspects of cognitive and language proficiency account for variability in neural indices of semantic and syntactic processing in children. Developmental Cognitive Neuroscience, 5, 149–171.

Hetu, R., Riverin, L., Lalande, N., Getty, L., & St-Cyr, C. (1988). Qualitative analysis of the handicap associated with occupational hearing loss. British Journal of Audiology, 22, 251–264.

Higby, E., Cahana-Amitay, D., Vogel-Envy, A., Spiro III, A., Albert, M. L., & Obler, L. K. (2019). The role of executive functions in object and action naming among older adults. Experimental Aging Research, 45(4), 306–330.

Hoffman, H. J., Dobie, R. A., Losonczy, K. G., Themann, C. L., & Flamme, G. A. (2017). Declining prevalence of hearing loss in US adults aged 20 to 69 years. JAMA Otolaryngology–Head & Neck Surgery, 143(3), 274–285.

Holcomb, P. J., & Grainger, J. (2007). Exploring the temporal dynamics of visual word recognition in the masked repetition priming paradigm using event-related potentials. Brain Research, 1180, 39–58.

Holcomb, P. J., & Neville, H. J. (1990). Auditory and visual semantic priming in lexical decision: A comparison using event-related brain potentials. Language and Cognitive Processes, 5(4), 281–312.

Hornsby, B. W. (2013). The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear and Hearing, 34(5), 523–534.

Howard, D., Nickels, L., Coltheart, M., & Cole-Virtue, J. (2006). Cumulative semantic inhibition in picture naming: Experimental and computational studies. Cognition, 100, 464–482.

Huyck, J. J., & Johnsrude, I. S. (2012). Rapid perceptual learning of noise-vocoded speech requires attention. Journal of the Acoustical Society of America, 131, EL236–EL242.

Jahncke, H., & Halin, N. (2012). Performance, fatigue and stress in open-plan offices: The effects of noise and restoration on hearing impaired and normal hearing individuals. Noise Health, 14, 260–272.

Jamison, C., Aiken, S. J., Kiefte, M., Newman, A. J., Bance, M., & Sculthorpe-Petley, L. (2016). Preliminary investigation of the passively evoked N400 as a tool for estimating speech-in-noise thresholds. American Journal of Audiology, 25(4), 344.

Janse, E., & Adank, P. (2012). Predicting foreign-accent adaptation in older adults. Quarterly Journal of Experimental Psychology, 65, 1563–1585.

Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall.

Kemp, A., Eddins, D., Shrivastav, R., & Hampton Wray, A. (2019). Effects of task difficulty on neural processes underlying semantics: An event-related potentials study. Journal of Speech, Language, and Hearing Research.

Kim, Y. S., & Phillips, B. (2014). Cognitive correlates of listening comprehension. Reading Research Quarterly, 49(3), 269–281.

Kok, A. (1986). Effects of degradation of visual stimulation on components of the event-related potential (ERP) in go/no-go reaction tasks. Biological Psychology, 23, 21–38.

Kok, A. (1999).
Kok, A. (1986). Effects of degradation of visual stimulation on components of the event-related potential (ERP) in go/no-go reaction tasks. Biological Psychology, 23, 21-38.
Kuchinsky, S. E., Vaden, K. I., Ahlstrom, J. B., Cute, S. L., Humes, L. E., Dubno, J. R., & Eckert, M. A. (2016). Task-related vigilance during word recognition in noise for older adults with hearing loss. Experimental Aging Research, 42(1), 50-66.
Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62(1), 621-647.
Kutas, M., & Federmeier, K. D. (2000). Electrophysiology reveals semantic memory use in language comprehension. Trends in Cognitive Sciences, 4(12), 463-470.
Kutas, M., & Hillyard, S. A. (1984). Brain potentials during reading reflect word expectancy and semantic association. Nature, 307, 161-163.
Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207(4427), 203-205.
Larsby, B., Hällgren, M., Lyxell, B., & Arlinger, S. (2005). Cognitive performance and perceived effort in speech processing tasks: Effects of different noise backgrounds in normal-hearing and hearing-impaired subjects. International Journal of Audiology, 44, 131-143.
Lau, E. F., Phillips, C., & Poeppel, D. (2008). A cortical network for semantics: (De)constructing the N400. Nature Reviews Neuroscience, 9(12), 920-933.
Lev-Ari, S., & Peperkamp, S. (2014). The influence of inhibitory skill on phonological representations in production and perception. Journal of Phonetics, 47, 36-46.
Lopez-Calderon, J., & Luck, S. J. (2014). ERPLAB: An open-source toolbox for the analysis of event-related potentials. Frontiers in Human Neuroscience, 8.
Luck, S. J. (2014). An introduction to the event-related potential technique (2nd ed.). Cambridge, MA: The MIT Press.
Marsh, J. E., Ljung, R., Nöstl, A., Threadgold, E., & Campbell, T. A. (2015). Failing to get the gist of what’s being said: Background noise impairs higher-order cognitive processing. Frontiers in Psychology, 6(548).
McGarrigle, R., Munro, K. J., Dawes, P., Stewart, A. J., Moore, D. R., Barry, J. G., & Amitay, S. (2014). Listening effort and fatigue: What exactly are we measuring? A British Society of Audiology Cognition in Hearing Special Interest Group ‘white paper.’ International Journal of Audiology, 53(7), 433-445.
Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., & Wager, T. D. (2000). The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41, 49-100.
Montgomery, D. E., & Koeltzow, T. E. (2010). A review of the day-night task: The Stroop paradigm and interference control in young children. Developmental Review, 30, 308-330.
Mulrow, C. D., Aguilar, C., Endicott, J. E., Velez, R., Tuley, M. R., Charlip, W. S., & Hill, J. A. (1990). Association between hearing impairment and the quality of life of elderly individuals. Journal of the American Geriatrics Society, 38, 45-50.
Murphy, D. R., Craik, F. I. M., Li, K. Z. H., & Schneider, B. A. (2000). Comparing the effects of aging and background noise on short-term memory performance. Psychology and Aging, 15(2), 323-334.
Newman, A. J., Tremblay, A., Nichols, E. S., Neville, H. J., & Ullman, M. T. (2012). The influence of language proficiency on lexical semantic processing in native and late learners of English. Journal of Cognitive Neuroscience, 24(5), 1205-1223.
Nunez, P. L. (1995). Neocortical dynamics and human EEG rhythms (1st ed.). New York, NY: Oxford University Press.
Ou, J., Law, S. P., & Fung, R. (2015). Relationship between individual differences in speech processing and cognitive functions. Psychonomic Bulletin & Review, 22, 1725-1732.
Peelle, J. E., & Wingfield, A. (2016). The neural consequences of age-related hearing loss. Trends in Neurosciences, 39(7), 486-497.
Pichora-Fuller, M. K. (2016). How social psychological factors may modulate auditory and cognitive functioning during listening. Ear and Hearing, 37, 92S-100S.
Pichora-Fuller, M. K., Kramer, S. E., Eckert, M. A., Edwards, B., Hornsby, B. W. Y., Humes, L. E., ... Wingfield, A. (2016). Hearing impairment and cognitive energy: The framework for understanding effortful listening (FUEL). Ear and Hearing, 37(S1), 5S-27S.
Picou, E. M., Gordon, J., & Ricketts, T. A. (2016). The effects of noise and reverberation on listening effort in adults with normal hearing. Ear and Hearing, 37(1), 1-13.
Picou, E. M., Ricketts, T. A., & Hornsby, B. W. (2013). How hearing aids, background noise, and visual cues influence objective listening effort. Ear and Hearing, 34(5), e52-e64.
Picou, E. M., Ricketts, T. A., & Hornsby, B. W. Y. (2011). Visual cues and listening effort: Individual variability. Journal of Speech, Language, and Hearing Research, 54(5), 1416-1430.
Ritter, N., Kilinc, E., Navruz, B., & Bae, T. (2011). Test review [Review of Test of Nonverbal Intelligence-4 (TONI-4)]. Journal of Psychoeducational Assessment, 29(5), 484-488.
Rönnberg, J., Rudner, M., Foo, C., & Lunner, T. (2008). Cognition counts: A working memory system for ease of language understanding (ELU). International Journal of Audiology, 47, S99-S105.
Rothbart, M. K., & Posner, M. I. (1985). Temperament and the development of self-regulation. The Neuropsychology of Individual Differences: A Developmental Perspective, 99-123.
Rudner, M. (2016). Cognitive spare capacity as an index of listening effort. Ear and Hearing, 37, 69S-76S.
Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research, 52(5), 1230-1240.
Shao, Z., Roelofs, A., Acheson, D. J., & Meyer, A. S. (2014). Electrophysiological evidence that inhibition supports lexical selection in picture naming. Brain Research, 1586, 130-142.
Shargorodsky, J., Curhan, S. G., Curhan, G. C., & Eavey, R. (2010). Change in prevalence of hearing loss in US adolescents. JAMA, 304(7), 772-778.
Silton, R., Heller, W., Towers, D., Engels, A., Spielberg, J., Edgar, J., Sass, S., Stewart, J., Sutton, B., Banich, M., & Miller, G. (2010). The time course of activity in dorsolateral prefrontal cortex and anterior cingulate cortex during top-down attentional control. NeuroImage, 50, 1292-1302.
Smith, P. (2011). Attention, working memory, and grammaticality judgment in typical young adults. Journal of Speech, Language, and Hearing Research, 54, 918-931.
Sommers, M. S., & Danielson, S. M. (1999). Inhibitory processes and spoken word recognition in young and older adults: The interaction of lexical competition and semantic context. Psychology and Aging, 14(3), 458-472.
Stoltzfus, E. R., Hasher, L., Zacks, R. T., Ulivi, M., & Goldstein, D. (1993). Investigations of inhibition and interference in younger and older adults. Journal of Gerontology: Psychological Sciences, 48(4), 179-188.
Stone, M. A., & Moore, B. C. J. (2008). Effects of spectro-temporal modulation changes produced by multi-channel compression on intelligibility in a competing speech task. The Journal of the Acoustical Society of America, 123(2), 1063-1076.
Stone, M. A., & Moore, B. C. J. (2004). Side effects of fast-acting dynamic range compression that affect intelligibility in a competing speech task. The Journal of the Acoustical Society of America, 116(4), 2311-2323.
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6), 643.
Taler, V., Aaron, G. P., Steinmetz, L. G., & Pisoni, D. B. (2010). Lexical neighborhood density effects on spoken word recognition and production in healthy aging. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 65B(5), 551-560.
Uslar, V. N., Carroll, R., Hanke, M., Hamann, C., Ruigendijk, E., Brand, T., et al. (2013). Development and evaluation of a linguistically and audiologically controlled sentence intelligibility test. The Journal of the Acoustical Society of America, 134, 3039-3056.
Van Petten, C., & Rheinfelder, H. (1995). Conceptual relationships between spoken words and environmental sounds: Event-related brain potential measures. Neuropsychologia, 33, 485-508.
Wagner, A. D., Pare-Blagoev, E. J., Clark, J., & Poldrack, R. A. (2001). Recovering meaning: Left prefrontal cortex guides controlled semantic retrieval. Neuron, 31(2), 329-338.
Wang, X., Ma, Q., & Wang, C. (2012). N400 as an index of uncontrolled categorization processing in brand extension. Neuroscience Letters, 525(1), 76-81.
Weber-Fox, C., Davis, L. J., & Cuadrado, E. (2003). Event-related brain potential markers of high-language proficiency in adults. Brain and Language, 85(2), 231-244.
Wechsler, D. (2010). Wechsler Adult Intelligence Scale. San Antonio, TX: Pearson.
Wepman, J. M., & Hass, W. (1969). A spoken word count: Children, ages 5, 6, and 7. Chicago, IL: Language Research Associates.
Wild, C. J., Yusuf, A., Wilson, D. E., Peelle, J. E., Davis, M. H., & Johnsrude, I. S. (2012). Effortful listening: The processing of degraded speech depends critically on attention. The Journal of Neuroscience, 32(40), 14010-14021.
Winn, M. B. (2016). Rapid release from listening effort resulting from semantic context, and effects of spectral degradation and cochlear implants. Trends in Hearing. Advance online publication.
Zekveld, A. A., Kramer, S. E., & Festen, J. M. (2011). Cognitive load during speech perception in noise: The influence of age, hearing loss, and cognition on the pupil response. Ear and Hearing, 32(4), 498-510.