HIERARCHICAL NEURAL STRUCTURES FOR SPATIAL AND FEATURE-BASED ATTENTION IN FRONTOPARIETAL NETWORK By Youyang Hou A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for the degree of MASTER OF ARTS Psychology 2012
ABSTRACT
HIERARCHICAL NEURAL STRUCTURES FOR SPATIAL AND FEATURE-BASED ATTENTION IN FRONTOPARIETAL NETWORK By Youyang Hou
Selective attention facilitates our ability to detect important information by optimizing limited attentional capacity. Previous studies have shown that a common frontoparietal network is involved in the top-down control of both spatial and feature-based attention, yet its role in different attention tasks remains unclear. In the current study, we used fMRI and multivariate pattern analysis (similarity and cluster analysis) to examine the relationship between the attentional control of spatial and feature-based attention. Participants viewed a compound stimulus that contained multiple dot fields in two colors (red, green), two directions (upward, downward), and two spatial locations (left, right). An auditory cue instructed participants to attend to a particular feature or location on a given trial and to perform a change detection task on the cued dot fields. Different attention tasks activated a similar top-down attentional network in frontoparietal regions, including the intraparietal sulcus, frontal eye field, and ventral precentral sulcus. Only a few ROIs showed magnitude differences between attention types. More importantly, cluster analysis showed a clear hierarchical cluster structure in frontoparietal cortex for the different attention tasks. In particular, conditions belonging to the same attention type elicited similar multivoxel response patterns. This suggests that the frontoparietal network controls different types of attentional selection with distinct, hierarchically organized neural substrates.
ACKNOWLEDGEMENTS
I would like to express my appreciation to my advisory committee: Dr. Taosheng Liu, Dr. Susan Ravizza, and Dr. Pang-Ning Tan. Special thanks to Dr. Liu for his time, patience, and understanding. Thanks to Dr. Tan and Dr. Ravizza for providing helpful suggestions on computing algorithms and experimental design. Also, thanks to Scarlett Doyle and Dr. David Zhu for their help in data collection, and to the Department of Radiology at Michigan State University for the support of imaging research. My gratitude also goes to the Neuroimaging of Perception and Attention Lab. Thanks to Sarah Young, Mattew Zeigenfuse, James Miller and Michael Jigo for their help in improving the experiment. I also thank all my participants for their patience and kindness during the experiments. The most special thanks go to my best friend Yang Li and my family for their unconditional support and love throughout this long process.
TABLE OF CONTENTS
LIST OF TABLES ......................................................................................................................... vi LIST OF FIGURES ...................................................................................................................... vii KEY TO ABBREVIATIONS ...................................................................................................... viii CHAPTER 1. INTRODUCTION ................................................................................................... 1 CHAPTER 2. METHODS .............................................................................................................. 
4 2.1. Participants ........................................................................................................................... 4 2.2. Stimulus and display ............................................................................................................ 4 2.3. Design and Procedure........................................................................................................... 5 2.3.1. Attention experiment ..................................................................................................... 5 2.3.2. Practice and eye tracking ............................................................................................... 7 2.3.3. Retinotopic mapping...................................................................................................... 7 2.4. MRI Data Acquisition .......................................................................................................... 8 2.5. fMRI data analysis ............................................................................................................... 8 2.5.1. Univariate analysis ........................................................................................................ 9 2.5.2. Surface-based registration and visualization of group data ......................................... 10 2.5.3. Similarity analysis of fMRI response .......................................................................... 11 2.5.4. Cluster analysis of fMRI response............................................................................... 12 CHAPTER 3. RESULTS .............................................................................................................. 13 3.1. Behavioral results ............................................................................................................... 13 3.2. Cortical areas modulated by different attention tasks ........................................................ 14 3.3. fMRI response amplitude ................................................................................................... 16 3.4. Similarity structure of fMRI response ................................................................................ 18 3.5. Cluster structure of fMRI response .................................................................................... 19 3.6. Consistency of similarity structure of fMRI response ....................................................... 22 CHAPTER 4. DISCUSSION ........................................................................................................ 24 iv CHAPTER 5. CONCLUSION...................................................................................................... 27 REFERENCES ............................................................................................................................. 29 v LIST OF TABLES TABLE 3.1 Results of statistical analysis of fMRI response amplitude ...................................... 18 vi LIST OF FIGURES FIGURE 2.1 Schematic of an “up” trial in the attention task ......................................................... 5 FIGURE 3.1 Behavioral results in three attention types ............................................................... 14 FIGURE 3.2 Group r2 map and averaged task-defined brain areas .............................................. 15 FIGURE 3.3 Group-averaged contrast map on an atlas surface ................................................... 
16 FIGURE 3.4 Mean time course (N=12) data of 12 regions of interest in three attention types ... 17 FIGURE 3.5 Averaged similarity analysis results across participants (N=12) for each brain area ............................................................................. 19 FIGURE 3.6 Cluster analysis of between-attention correlation ................................................... 20 FIGURE 3.7 Averaged similarity analysis in different conditions across participants (N=12) for each brain area .............................................................................................................................. 22 FIGURE 3.8 Statistical analysis of variance across participants (N=12) for each brain area ...... 23
KEY TO ABBREVIATIONS
AF All feature attention conditions
AUD Auditory cortex
BT Between attention types
FEF Frontal eye field
fMRI Functional magnetic resonance imaging
FS Between feature and spatial attention conditions
hMT+ Human motion-sensitive area
IPS Intraparietal sulcus
ROI Region of interest
vPCS Ventral precentral sulcus
WT Within attention types
CHAPTER 1. INTRODUCTION
Selective attention is the ability to intentionally focus on specific information; it facilitates the processing of important features, shapes, and locations by optimizing limited processing capacity (Ungerleider, 2000). Contemporary theories suggest that visual selection influences sensory competition by biasing neural processes in favor of behaviorally relevant stimuli (Desimone & Duncan, 1995; Duncan, Humphreys, & Ward, 1997). Hence, when we attend to a particular location or feature (e.g., color), behavioral and neuronal responses to stimuli that share the selected properties are enhanced (Chawla, Rees, & Friston, 1999; Corbetta, Miezin, Dobmeyer, Shulman, & Petersen, 1990; Frohlich, 1994; Giesbrecht, Weissman, Woldorff, & Mangun, 2006; Schoenfeld, et al., 2007). These amplified neural representations are believed to result from top-down control signals biasing bottom-up sensory processing (Desimone and Duncan, 1995; Kastner and Ungerleider, 2000; Corbetta and Shulman, 2002; Yantis and Serences, 2003; Maunsell and Treue, 2006).
The neural mechanisms of different types of attentional selection have been studied in recent years, with a special emphasis on dissociating different types of selection. Several studies have combined fMRI with cued attention paradigms (Posner, Snyder, & Davidson, 1980) to identify the neural underpinnings of attentional control. Many of these studies reveal that a frontoparietal network is involved in the top-down control of spatial attention (Corbetta, Kincade, Ollinger, McAvoy, & Shulman, 2000; Corbetta, et al., 2005; Hopfinger, Buonocore, & Mangun, 2000; Thakral & Slotnick, 2009; Woldorff, et al., 2004) as well as non-spatial feature attention such as color and motion (Liu, Hospadaruk, Zhu, & Gardner, 2011; Luks & Simpson, 2004; Shulman, et al., 1999; Weissman, Mangun, & Woldorff, 2002). Importantly, only parietal and frontal areas increase equally strongly for directed attention in the absence and in the presence of visual stimuli (Kanwisher & Wojciulik, 2000). As the majority of such activations appear common to spatial and feature-based conditions, it has been suggested that selection may be subserved by a generalized top-down mechanism (H. A. Slagter, Kok, Mol, & Kenemans, 2005).
However, comparing the activated loci across participants, and even across studies, can be problematic because of the necessarily imperfect alignment of anatomically different brains, as well as differences in stimuli, tasks, and participants between experiments. Recently, more studies have directly compared top-down control signals for spatial and non-spatial attention, and many researchers have proposed a domain-general attentional control network in frontoparietal cortex. For instance, Slagter et al. (2007) examined intermixed and blocked designs of color and spatial attention tasks, and found an overlapping dorsal frontal and parietal network for both attention tasks. Another study found that several spatial and non-spatial visual attention tasks produced overlapping activations in the intraparietal sulcus, consistent with the hypothesis that these areas support several modes of visual selection (Wojciulik & Kanwisher, 1999).
However, other studies also show differences in brain activity for spatial and feature-based attentional control. For example, an fMRI study revealed both common (left IFG, parietal cortex, and preCG) and distinct (superior frontal and parietal cortex) frontoparietal areas for spatial and non-spatial (color) orienting signals during a preparation period (Giesbrecht, Woldorff, Song, & Mangun, 2003). Similarly, a TMS study revealed different neural mechanisms for spatial and feature attention (Schenkluhn, Ruff, Heinen, & Chambers, 2008): stimulation of the supramarginal gyrus (SMG) only disrupted spatial cueing, whereas TMS over the anterior intraparietal sulcus (aIPS) disrupted both spatial and feature cueing. These results again suggest that some areas, like aIPS, might contain a general, abstract attentional salience representation, whereas other regions (SMG) are specific to spatial attention.
The studies reviewed above compared relative neural activity in local brain areas among different attention conditions. These comparisons rely on signal averaging and can potentially miss important organizational details at a fine scale. A powerful tool developed in recent research is multivoxel pattern analysis, which focuses on the information contained in patterns of neural activity distributed across voxels in a given brain area. For example, the similarity between multivoxel patterns evoked in the ventral visual pathway by visual images correlates with category structure derived from behavioral measures (Haxby, et al., 2001; Weber, Thompson-Schill, Osherson, Haxby, & Parsons, 2009). In addition, Sigala et al. (2008) used both similarity analysis and cluster analysis to show a hierarchical structure of cognitive control signals in prefrontal cortex for sequential task stages. Although multivariate pattern analysis has been used to decode potential fine-scale differences within the shared frontoparietal network during attention shifts (Greenberg, Esterman, Wilson, Serences, & Yantis, 2010) and across different feature-based attention tasks (Liu, et al., 2011), no study has yet examined the neural response patterns during the maintenance of spatial and feature-based attention in the frontoparietal attentional control network, or the similarity structure of different attention tasks. In the current study, we designed a task that required participants to attend to different locations, colors, and motion directions.
To examine the relationship between the signals for different types of attentional control, we performed multivoxel similarity and cluster analyses on the fMRI responses. Our findings suggested that distinct neural activity patterns in frontoparietal areas subserved different types of attentional control, forming domain-specific mechanisms at a fine spatial scale.
CHAPTER 2. METHODS
2.1. Participants
Twelve individuals (6 females) participated in the experiment; all had normal or corrected-to-normal vision. One of the participants was left handed and all the rest were right handed. Two of the participants were authors; the rest were graduate and undergraduate students at Michigan State University. All participants gave informed consent according to the study protocol that was approved by the Institutional Review Board at Michigan State University. Participants were compensated at the rate of $30 per scanning session.
2.2. Stimulus and display
The visual display consisted of two circular apertures (9° in diameter) containing coherently moving dot fields, centered 8° to the left or right of a white central fixation disk (0.3° in diameter) on a black background. Individual dots subtended 0.9° of visual angle. In each of the two apertures, half of the dots were rendered in red and the other half in green; within each color group, half of the dots moved upward and the other half moved downward. Thus there were eight dot fields in total, four in the left aperture and four in the right aperture (2 spatial locations x 2 colors x 2 directions), with each dot field containing 15 dots. The speed of dot movement varied for each dot field between 1.7 and 2.5°/s (FIGURE 2.1).
FIGURE 2.1 Schematic of an "up" trial in the attention task. Arrows show the moving direction of the dots. For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this thesis.
All stimuli were generated using MGL (http://gru.brain.riken.jp/doku.php?id=mgl:overview), a set of custom OpenGL libraries running in Matlab (Mathworks, Natick, MA). Images were projected on a rear-projection screen located in the scanner bore by a Toshiba TDP-TW100U projector outfitted with a custom zoom lens (Navitar, Rochester, NY). The screen resolution was set to 1024 x 768 and the display was updated at 60 Hz. Participants viewed the screen via an angled mirror attached to the head coil at a viewing distance of 60 cm.
2.3. Design and Procedure
2.3.1. Attention experiment
Participants were instructed to fixate on the central disk throughout the experiment. At the beginning of each trial, an audio cue was played through the headphones that participants wore. There were three types of cues: two spatial cues ("left", "right") instructed participants to maintain attention on the dots in either the left or the right aperture; two color cues ("red", "green") instructed participants to maintain attention on either the red or the green dots in both apertures; and two direction cues ("up", "down") instructed participants to maintain attention on either the upward-moving or the downward-moving dots in both apertures. At 1.1 s after the onset of the audio cue, the two dot apertures appeared on the left and right sides of the screen for 6.6 s. Participants were required to press a button when they detected size increases in the attended dot fields.
For example, on "up" trials, participants needed to attend to all the dots moving upward and press the button when they noticed any upward-moving dots increase in size (FIGURE 2.1). The size increase always occurred in one of the eight dot fields (15 dots), and was either a target (occurring in one of the four cued dot fields) or a distractor (occurring in one of the four uncued dot fields). On each trial, there was either one target, one distractor, or one target and one distractor. A jittered inter-trial interval followed the dot stimuli (3.3 s, 5.5 s, or 7.7 s). In each scanning run, there were 4 trials for each cue condition, for a total of 24 trials. Trial order was randomly determined for each run. Participants performed 10 runs in the scanner, resulting in a total of 240 trials, with 40 trials per cue condition.
The red and green colors were set at isoluminance via heterochromatic flicker photometry. During this procedure, a red/gray checkerboard pattern was counter-phase flickered at 8.3 Hz in an annulus and participants adjusted the luminance of the gray color to minimize flicker (the red color was fixed). Then the same procedure was repeated for a gray/green checkerboard. Each participant set the isoluminance point outside the scanner three times during the practice session. The average of the three settings was used as the luminance of the green color in the attention experiment.
Before the scan, a threshold task was run to determine the size-change magnitude for the change detection task. The task was identical to the attention task above, except that the magnitude of the size increase was controlled via three separate 1-up 2-down staircases, one for each attention type (location, color, motion). We fitted the staircase data with Weibull functions and selected size-increase thresholds that yielded ~85% correct performance for the three attention types.
2.3.2. Practice and eye tracking
Each participant practiced the attention task in the behavioral lab for at least 1 hr before the fMRI scan. The practice session served to familiarize participants with the attention task. The first part of practice consisted of performing the color calibration task and the threshold task in a staircase procedure. Once participants achieved stable thresholds over several runs, we fixed the size change and had them practice the scanner version of the task. During these practice trials, we also monitored their eye position with an Eyelink II system (SR Research, Ontario, Canada) at 250 Hz. All participants took part in the eye tracking session, with each performing two runs of the attention task. Eye position data were analyzed offline using custom Matlab code.
2.3.3. Retinotopic mapping
Early visual cortex and posterior parietal areas containing topographic maps were defined in a separate scanning session for each participant. We used rotating wedges and expanding/contracting rings to map the polar angle and radial components, respectively (DeYoe, et al., 1996; Engel, Glover, & Wandell, 1997; Sereno, et al., 1995). Borders between visual areas were defined as phase reversals in a polar angle map of the visual field. Phase maps were visualized on computationally flattened representations of the cortical surface, which were generated from the high-resolution anatomical image using FreeSurfer and custom Matlab code.
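The phase maps described above were computed with FreeSurfer, mrTools, and custom Matlab code; as a rough illustration of the general approach, a minimal Fourier-based ("traveling wave") sketch is given below. The variable names (tSeries, nCycles) and the coherence-based thresholding are assumptions made for illustration, not the thesis's actual analysis code.

% Illustrative sketch of a Fourier-based polar-angle phase map (assumed approach,
% not the actual analysis code). tSeries: [nT x nVoxels] percent-signal-change
% time courses from one rotating-wedge run; nCycles: wedge rotations per run.
nCycles = 10;                                   % hypothetical value
ft   = fft(tSeries);                            % Fourier transform of each voxel's time course
comp = ft(nCycles + 1, :);                      % complex component at the stimulus frequency
ph   = mod(angle(comp), 2*pi);                  % response phase = preferred polar angle
coh  = abs(comp) ./ sqrt(sum(abs(ft(2:floor(end/2), :)).^2, 1));   % coherence, for thresholding
% High-coherence voxels are assigned their phase; borders between visual areas are
% drawn where the phase progression reverses across the flattened cortical surface.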
In addition to occipital visual areas, our retinotopic mapping procedure also identified topographic areas along the intraparietal sulcus (IPS), IPS1-4 (Liu, et al., 2011; Swisher, Halko, Merabet, McMains, & Somers, 2007). In a separate run, we also presented moving vs. stationary dots in alternating blocks and localized the human motion-sensitive area, hMT+, as an area near the junction of the occipital and temporal cortex that responded more to moving than to stationary dots (Watson, et al., 1993). Thus, for each participant, we identified the following areas: V1, V2, V3, V3ab, V4, V7, hMT+, and four full-field maps in the IPS: IPS1, IPS2, IPS3, and IPS4.
2.4. MRI Data Acquisition
All functional and structural brain images were acquired using a GE Healthcare (Waukesha, WI) 3T Signa HDx MRI scanner with an 8-channel head coil, in the Department of Radiology at Michigan State University. For each participant, high-resolution anatomical images were acquired using a T1-weighted MP-RAGE sequence (FOV = 256 mm x 256 mm, 180 sagittal slices, 1 mm isotropic voxels) for surface reconstruction and alignment purposes. Functional images were acquired using a T2*-weighted echo planar imaging sequence consisting of 30 slices (TR = 2.2 s, TE = 30 ms, flip angle = 80°, matrix size = 64 x 64, in-plane resolution = 3 mm x 3 mm, slice thickness = 4 mm, interleaved, no gap). In each scanning session, a 2D T1-weighted anatomical image was also acquired that had the same slice prescription as the functional scans, but with higher in-plane resolution (0.75 mm x 0.75 mm x 4 mm), for the purpose of aligning functional data to the high-resolution structural data.
2.5. fMRI data analysis
Data were processed and analyzed using mrTools (http://www.cns.nyu.edu/heegerlab/wiki/doku.php?id=mrtools:top) and custom code in Matlab. Preprocessing of the functional data included head movement correction, linear detrending, and temporal high-pass filtering at 0.01 Hz. The functional images were then aligned to the high-resolution anatomical images for each participant. Functional data were converted to percent signal change by dividing the time course of each voxel by its mean signal over a run, and data from the 10 scanning runs were concatenated for subsequent analysis.
2.5.1. Univariate analysis
For the univariate analysis, each voxel's time series was fitted with a general linear model whose regressors represented the six attentional states (left, right, red, green, up, down). Each regressor modeled the fMRI response in a 25 s window after trial onset. The pseudo-inverse of the design matrix was multiplied by the time series to obtain an estimate of the hemodynamic response evoked by the attention task. To measure the response magnitude of a region, we averaged the deconvolved response across all the voxels in a region of interest (ROI).
In addition to the visual and parietal regions defined by retinotopic mapping, we also defined ROIs that were active during the attention task. This was done by using the goodness-of-fit measure (r2 value), which is the amount of variance in the fMRI time series explained by the deconvolution model. The statistical significance of the r2 value was evaluated via a permutation test, by randomizing event times and recalculating the r2 value using the deconvolution model. One thousand permutations were performed, and the largest r2 value in each permutation formed a null distribution expected at chance (Nichols & Holmes, 2002).
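To make the deconvolution and permutation procedure concrete, a simplified sketch is given below. The actual analysis was carried out with mrTools; the function and variable names here (deconvR2, buildDesign, tSeries, onsets) are hypothetical, and details such as the exact regressor construction are assumptions.

% Simplified sketch of the FIR deconvolution and r2 permutation test (illustrative
% only; the thesis used mrTools). tSeries: [nT x nVoxels] percent signal change;
% onsets{c}: trial-onset TRs for condition c; e.g., nCond = 6, hrfLen = ceil(25/2.2).
function [r2, r2null] = deconvR2(tSeries, onsets, nCond, hrfLen, nPerm)
    nT = size(tSeries, 1);
    X  = buildDesign(onsets, nCond, hrfLen, nT);        % FIR design matrix
    r2 = fitR2(X, tSeries);                             % observed r2, one value per voxel
    r2null = zeros(nPerm, 1);
    for p = 1:nPerm                                     % permutation: randomize event times
        randOnsets = cellfun(@(o) sort(randi(nT, size(o))), onsets, 'UniformOutput', false);
        Xp = buildDesign(randOnsets, nCond, hrfLen, nT);
        r2null(p) = max(fitR2(Xp, tSeries));            % keep the largest r2 across voxels
    end
end

function X = buildDesign(onsets, nCond, hrfLen, nT)
    X = zeros(nT, nCond * hrfLen);
    for c = 1:nCond
        for t = onsets{c}(:)'
            idx  = t:min(t + hrfLen - 1, nT);           % one FIR regressor per post-onset time point
            cols = (c - 1) * hrfLen + (1:numel(idx));
            X(idx, cols) = X(idx, cols) + eye(numel(idx));
        end
    end
end

function r2 = fitR2(X, tSeries)
    beta  = pinv(X) * tSeries;                          % pseudo-inverse solution, as in the text
    resid = tSeries - X * beta;
    r2    = 1 - sum(resid.^2, 1) ./ sum(bsxfun(@minus, tSeries, mean(tSeries, 1)).^2, 1);
end

For example, a call such as [r2, r2null] = deconvR2(tSeries, onsets, 6, ceil(25/2.2), 1000) would produce the 1000-value null distribution described above, from which voxelwise p-values can be read off.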
Each voxel's p-value was then calculated as the proportion of values in the null distribution that exceeded the r2 value of that voxel. Using a cut-off p-value of 0.01, we defined three additional areas that were active during the attention task, in both hemispheres: auditory cortex (AUD), frontal eye field (FEF), and ventral precentral sulcus (vPCS).
To localize cortical areas differentially involved in different attention types, we performed three linear contrast analyses (location vs. color, location vs. direction, and color vs. direction) after first removing the common variance associated with each pair of regressors. Two values were obtained for each voxel in each contrast analysis: the difference in the fitted coefficients (beta weights), and the amount of variance in the time series explained by the model (r2c). The r2c value indicates how well a voxel's time course is explained by the experimental paradigm. We evaluated the statistical significance of the beta weights and r2c values by a permutation test (see below for details), and chose beta weight and r2c threshold values corresponding to a p-value of 0.003 (uncorrected for multiple comparisons).
2.5.2. Surface-based registration and visualization of group data
All analyses were performed on individual participant data, and all quantitative results reported were based on averages across individual participant results. However, to visualize the task-related brain areas, we also performed group averaging of the individual maps (see FIGURE 3.2). Each participant's two hemispheric surfaces were first imported into Caret and affine-transformed into the 711-2B space of Washington University in St. Louis. The surface was then inflated to a sphere and six landmarks were drawn, which were used for spherical registration to the landmarks in the Population-Average, Landmark- and Surface-based (PALS) atlas (Van Essen, et al., 2001). We then transformed the individual maps to the PALS atlas space and performed group averaging, before visualizing the results on the PALS atlas surface.
To correct for multiple comparisons, we thresholded the maps based on the individual voxel-level p-value in combination with a cluster-size constraint. For the r2 map (FIGURE 3.2), we derived a voxel-level p-value by aggregating the null distributions generated from the permutation test for each individual participant. Specifically, for the r2 map, 1000 randomizations were performed; in each randomization we randomly selected one sample (with replacement) from each participant's distribution (of 1000 values). This generated a distribution of 12000 values, which represented the maximum r2 values over all voxels expected at the chance level across participants. For the contrast maps (FIGURE 3.3), 1000 randomizations were performed by randomizing the labels of the different attention conditions to construct a null distribution. The p-value of each individual voxel was thus the proportion of values in the null distribution that exceeded that voxel's r2 value. For both types of maps, we then performed 10,000 Monte-Carlo simulations with AFNI's AlphaSim program to determine the appropriate cluster size given a particular voxel-level p-value, to control the whole-brain false positive rate (cut-off p-value = 0.01, cluster size = 3, whole-brain corrected false positive rate = 0.003).
2.5.3. Similarity analysis of fMRI response
For each voxel in an ROI, we first derived a response amplitude measure for each single trial.
This single-trial response was obtained in two steps. First, we performed an ROI-based deconvolution using one regressor for all trial types, which yielded an estimate of the canonical hemodynamic response in the ROI. In the second step, we convolved this canonical hemodynamic response with each trial to construct a design matrix, and fitted a general linear model to obtain an estimate of the voxel's response on every trial. For each attention condition, we then calculated the mean fMRI response across its 40 trials, resulting in a response vector across the voxels of the ROI. This vector was then normalized to have a norm of 1 (by dividing each element by the square root of the vector's sum of squares, as in the normalization used by the Matlab pdist function). We refer to this as the response vector, which captured the multivoxel response pattern in an ROI for a particular condition. This normalization procedure ensured that the similarity measurement was insensitive to mean differences between ROIs and avoided positive correlations among all attention conditions. We then assessed the similarity of the fMRI responses in different attention conditions by computing the correlations between these vectors.
Reliability assesses the stability of the activity pattern in each attention condition and is necessary for interpreting the results of the correlation analysis. We measured the reliability of each activity vector by using the Spearman-Brown formula (Nunnally, 1978). Specifically, a split-half reliability (r) was first calculated by correlating the response vector from a random half of the data with that from the other half of the data; the corrected reliability was then calculated using the formula r' = 2r / (1 + r).
2.5.4. Cluster analysis of fMRI response
All similarity analyses were performed on individual participant data, and the similarity results were averaged across participants to obtain an average similarity structure of the different attention conditions. To assess the significance of the similarity results, we conducted a cluster analysis to organize these correlations into different clusters. We used the complete linkage algorithm to build the hierarchical cluster structure. Complete linkage, or "furthest neighbor" linkage, uses the largest distance between objects in two clusters to separate them (Stanberry, Nandy, & Cordes, 2003). A dendrogram constructed from the cluster analysis reveals the hierarchical structure of the fMRI activity patterns for the different attention conditions.
In addition, to test the validity of the hierarchical cluster and similarity structure across attention types, we divided the similarity results into four groups: pairs of response vectors belonging to the same attention type (WT, e.g., left vs. right, red vs. green), pairs belonging to different attention types (BT, e.g., left vs. red, red vs. up), pairs within the feature attention conditions (AF, e.g., red vs. green, red vs. up), and pairs between feature and spatial conditions (FS, e.g., left vs. red, right vs. up). We then performed paired t-tests between two pairs of these groups (WT vs. BT, AF vs. FS) to support the hierarchical clustering.
CHAPTER 3. RESULTS
3.1. Behavioral results
Behavioral results showed that participants were able to selectively attend to the cued group of dots (FIGURE 3.1).
A repeated-measures ANOVA on the thresholds for the three attention types showed significant differences between attending to location (M = 2.62, SD = 0.57), attending to color (M = 2.56, SD = 0.52), and attending to direction (M = 2.90, SD = 0.55), F(2, 35) = 6.47, p < .01. Participants' accuracy was above 60%. In addition, there was no significant difference in response accuracy (hit rate minus false alarm rate) among the three attention types (one-way repeated-measures ANOVA, F(2, 22) = 1.929, p > .05) or the six attention conditions (F(2, 22) = 1.205, p > .05). We also conducted a signal detection analysis on these data and found no difference between attention types in the discrimination index d' (F(2, 22) = .824, p > .05) or the bias index C (F(2, 22) = 1.835, p > .05), nor between attention conditions (d': F(2, 22) = 1.005, p > .05; C: F(2, 22) = 1.674, p > .05). This pattern of results suggested that participants were able to attend to the cued group of dots and ignore the uncued group of dots, and that they performed equivalently across the different selection tasks.
Eye position data averaged across trials and participants revealed no significant difference in mean eye position within a trial between the three attention types, for either the horizontal (F(2, 22) = .148, p > .05) or the vertical eye position (F(2, 22) = 1.34, p > .05), suggesting that participants maintained fixation during the experiment and that there was no systematic difference in fixation behavior between attention conditions.
FIGURE 3.1 Behavioral results in three attention types. Error bars indicate ± s.e.m. across participants (N=12).
3.2. Cortical areas modulated by different attention tasks
We first examined the cortical areas that were active during the attention task, using the r2 value (see METHODS). This criterion selected voxels whose activities were consistently modulated by the task, regardless of their relative response amplitudes across attention conditions. The group-averaged r2 map was projected onto the atlas surface and is shown in FIGURE 3.2. Attention modulated activity in a network of areas in occipital, parietal, and frontal cortex. The occipital activity overlapped with the localizer-defined areas (V1, V2, V3, V3ab, V4, V7, hMT+). The parietal activity ran along the IPS. To simplify data presentation, for this and the following analyses we combined the four IPS areas into two areas, IPS12 and IPS34. Frontal activity included a region around the posterior superior frontal sulcus and precentral sulcus, the putative human frontal eye field (FEF, see Paus, 1996), and the ventral precentral sulcus (vPCS). We also defined an auditory cortex region (AUD) in the temporal lobe. All these areas were found in both hemispheres, displaying largely bilateral symmetry.
FIGURE 3.2 Group r2 map and averaged task-defined brain areas shown on an inflated Caret atlas surface. The approximate locations of the three task-defined areas (AUD, FEF, vPCS) and the two combined IPS regions (IPS12, IPS34) are shown on the map. The color bar indicates the scale of the r2 value. Maps were thresholded at a voxelwise r2 value of 0.15, corresponding to an estimated p-value of 0.01, and a cluster size of 3 voxels. This corresponded to a whole-brain corrected false positive rate of 0.003 according to AlphaSim (see METHODS).
FIGURE 3.3 shows group-averaged contrast maps for spatial vs. color attention, spatial vs. direction attention, and color vs. direction attention.
Positive values (yellow-red) indicate larger responses for the first condition and negative values (cyan-blue) indicate larger responses for the second condition. Direction attention evoked stronger responses than the spatial and color attention types in three clusters in frontal and parietal cortex: along the FEF, vPCS, and IPS regions. It is worth pointing out that the voxels that exhibited differential response magnitudes are a small subset of the voxels that showed an overall modulation of response (compare FIGURE 3.3 with FIGURE 3.2). In other words, the majority of voxels did not show significant differences in fMRI response amplitude.
FIGURE 3.3 Group-averaged contrast map on an atlas surface. (A) spatial vs. color; (B) spatial vs. direction; (C) color vs. direction. Positive values (yellow-red) indicate larger response for the first condition and negative values (cyan-blue) indicate larger response for the second condition.
3.3. fMRI response amplitude
We next examined the mean fMRI response amplitudes for the three attention types in the individually defined ROIs. All areas showed an increase in fMRI response relative to the baseline (fixation during the inter-trial interval). FIGURE 3.4 shows the fMRI time courses from the 12 ROIs. We compared the average response across the 2nd-8th time points in the trial between the three attention types. Overall, the three types of attention tasks (location, color, direction) elicited equivalent levels of neural activity in most ROIs (p > .066), except IPS12 (F(2, 22) = 4.645, p < .05) and vPCS (F(2, 22) = 4.458, p < .05).
FIGURE 3.4 Mean time course (N=12) data of 12 regions of interest in three attention types. Error bars denote ± s.e.m. across participants.
We then further examined the time course of the fMRI response in the two hemispheres during spatial attention trials. To test the effect of spatial attention, we performed a 2 x 2 repeated-measures ANOVA with the factors hemisphere (left, right) and attended location (attending left vs. attending right). The ANOVA results are presented in TABLE 3.1 for the 12 ROIs. The interaction effect was significant in V2, V3, and V3ab, as well as in IPS12, FEF, and vPCS. These results indicated that participants successfully performed the spatial attention task, because the contralateral hemisphere was modulated by the deployment of spatial attention.
TABLE 3.1 Results of statistical analysis of fMRI response amplitude. A two-way repeated-measures ANOVA was performed for each brain region. Shown are the statistical significance levels of the main effects and their interaction (H: Hemisphere, AL: Attended location). *: p < .05; **: p < .01. [Table: the visual areas (V1, V2, V3, V3ab, V4, V7, hMT+, AUD) and frontoparietal areas (IPS12, IPS34, FEF, vPCS) form the columns and the factors H, AL, and H x AL form the rows, with asterisks marking significant effects.]
3.4. Similarity structure of fMRI response
We first performed the similarity analysis for each individual participant (see METHODS), and then averaged the similarity results across all participants to obtain a mean similarity matrix, shown in FIGURE 3.5. The diagonal entries showed the reliability of the response vectors for each individual attention condition. The median of the reliabilities of the six attention conditions is shown at the top of each correlation matrix; all ROIs had median reliabilities around .70. These results suggested a highly repeatable pattern of fMRI response vectors for each attention condition. These high reliabilities indicated that the data were suitable for the correlation analysis.
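To illustrate how the reliability and similarity values in FIGURE 3.5 can be computed for a single ROI, a simplified sketch is shown below. The variable names are hypothetical (trialBetas stands for the single-trial response estimates of section 2.5.3), corr requires the Matlab Statistics Toolbox, and this is an illustration of the procedure rather than the analysis code itself.

% Illustrative sketch of the similarity analysis for one ROI and one participant.
% trialBetas: [nVoxels x nTrials] single-trial response estimates (section 2.5.3);
% cond: [1 x nTrials] condition labels, 1..6 (left, right, red, green, up, down).
nCond = 6;
nVox  = size(trialBetas, 1);
vec   = zeros(nVox, nCond);                  % normalized response vector per condition
rel   = zeros(1, nCond);                     % split-half reliability per condition
for c = 1:nCond
    trials   = find(cond == c);
    vec(:,c) = mean(trialBetas(:, trials), 2);
    vec(:,c) = vec(:,c) / norm(vec(:,c));    % rescale to unit norm
    % split-half reliability with the Spearman-Brown correction r' = 2r/(1+r)
    half = trials(randperm(numel(trials)));
    v1   = mean(trialBetas(:, half(1:floor(end/2))), 2);
    v2   = mean(trialBetas(:, half(floor(end/2)+1:end)), 2);
    r    = corr(v1, v2);
    rel(c) = 2 * r / (1 + r);
end
R = corr(vec);   % 6 x 6 matrix of correlations between condition response vectors
% rel corresponds to the diagonal entries and R to the off-diagonal entries of one
% correlation matrix in FIGURE 3.5.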
Off-diagonal entries in FIGURE 3.5 showed the complete matrix of correlations between the activity vectors for the different attention conditions. In general, the similarities between different attention tasks were higher in frontoparietal cortex than in visual cortex. In addition, activity vectors belonging to the same attention type showed higher correlations in their multivoxel response patterns than vectors belonging to different attention types.
FIGURE 3.5 Averaged similarity analysis results across participants (N=12) for each brain area. Diagonal entries show the reliability of each attention condition, and the median of the reliabilities is shown beside the name of the ROI. Off-diagonal entries show the similarity (correlation) between each pair of attention conditions. The symbols denote the six attention conditions: attending to left, attending to right, attending to red, attending to green, attending to upward motion, and attending to downward motion.
3.5. Cluster structure of fMRI response
To summarize the results, we converted each correlation coefficient (r) into a distance between two attention conditions (distance = 1 - r) and entered the resulting distances into a cluster analysis (complete linkage algorithm, see METHODS). FIGURE 3.6 shows the hierarchical clusters for the pairwise distance measures. Three clear clusters, each corresponding to a single attention type, are visible in frontal cortex (FEF, vPCS). These three clusters were further organized into two superordinate clusters, corresponding to spatial and feature attention. Two clear clusters for the two feature types are also visible in posterior parietal cortex (IPS12, IPS34). These results showed strong correlations between conditions of the same attention type, weaker positive correlations between conditions of different feature attention types, and the weakest correlations between feature and spatial conditions. We also observed a feature and a spatial cluster in some of the visual cortex regions (V2, V4) and in AUD.
FIGURE 3.6 Cluster analysis of between-attention correlation. The Y axis shows the distance (1 - r) between different attention conditions. The symbols denote the six attention conditions: attending to left, attending to right, attending to red, attending to green, attending to upward motion, and attending to downward motion.
To assess the statistical significance of these differences, we organized the pairwise correlation data into the groups suggested by the cluster analysis: pairs within attention types (WT), pairs between different attention types (BT), pairs within the feature attention conditions (AF), and pairs between feature and spatial attention conditions (FS). The median of the average WT similarity was 0.59 across visual cortex and AUD and around 0.71 in frontoparietal cortex; for BT it was 0.53 and around 0.60; for AF it was 0.64 and around 0.73; and for FS it was 0.46 and around 0.58. Again, frontoparietal cortex showed higher average similarity in these groups than visual cortex.
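Continuing the sketch above, the clustering and the grouping into WT, BT, AF, and FS pairs can be illustrated as follows (again a simplified, hypothetical sketch, not the analysis code; squareform, linkage, dendrogram, and ttest are Statistics Toolbox functions):

% Illustrative sketch of the cluster analysis and pair groupings.
% R: 6 x 6 similarity matrix from the previous sketch, ordered left, right, red, green, up, down.
D = 1 - R;                                        % distance = 1 - correlation
D = D - diag(diag(D));                            % force exact zeros on the diagonal
Z = linkage(squareform(D, 'tovector'), 'complete');   % complete ("furthest neighbor") linkage
dendrogram(Z, 'Labels', {'left','right','red','green','up','down'});

isSpace = [1 1 0 0 0 0];                          % spatial conditions
type    = [1 1 2 2 3 3];                          % 1 = location, 2 = color, 3 = direction
[i, j]  = find(triu(true(6), 1));                 % the 15 condition pairs
sim     = R(sub2ind([6 6], i, j));
WT = mean(sim(type(i) == type(j)));               % pairs within attention types
BT = mean(sim(type(i) ~= type(j)));               % pairs between attention types
AF = mean(sim(~isSpace(i) & ~isSpace(j)));        % pairs within the feature conditions
FS = mean(sim(xor(isSpace(i), isSpace(j))));      % feature vs. spatial pairs
% Computing WT, BT, AF, and FS per participant and per ROI, then running paired
% t-tests (e.g., ttest(WTall, BTall)) gives the comparisons reported in FIGURE 3.7.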
In addition, there were significant differences between the WT and BT similarities in frontoparietal cortex (V7, IPS12, IPS34, FEF, vPCS; p < .01), and between the AF and FS similarities in several visual cortex areas (V1, V2, V3ab, V4, V7, hMT+; p < .05) as well as in frontoparietal cortex (IPS12, IPS34, FEF, vPCS; p < .01). These results indicated that the clusters obtained from the hierarchical cluster analysis were statistically reliable: similarities between vectors of the same attention type were higher than similarities between vectors of different attention types, and similarities among the feature attention conditions were higher than similarities between feature and spatial attention conditions.
FIGURE 3.7 Averaged similarity analysis in different conditions across participants (N=12) for each brain area. The diagram shows the averaged similarity of pairs within attention types (WT), pairs between different attention types (BT), pairs within the feature attention conditions (AF), and pairs between feature and spatial attention conditions (FS). Asterisks mark the paired t-test results for the average similarity values in WT vs. BT and in AF vs. FS. Error bars are ± s.e.m. across participants. *: p < .05; **: p < .01; ***: p < .005.
3.6. Consistency of similarity structure of fMRI response
In addition to showing the mean correlations for the different similarity groups, FIGURE 3.7 also shows the variability of these correlations across participants. It is apparent that the variability (error bars) in visual areas was greater than that in frontoparietal areas. This observation suggested that the similarity patterns were fairly consistent in frontoparietal cortex, but less consistent in visual cortex. We assessed the statistical reliability of this observation by performing an F-test on the variance between V1 and each other region (FIGURE 3.8). The color indicates the p-value of the F-test; a smaller p-value indicates a significant difference in the variance across participants between the two ROIs. The results suggested that there were significant differences in the variation of the similarity structure between V1 and the frontoparietal regions as well as V7, but not between V1 and the other visual cortex regions and AUD. These results supported the observation that the variance of the similarity structure is larger in visual cortex than in frontoparietal cortex, and that the frontoparietal control network shows a more consistent similarity structure across participants.
FIGURE 3.8 Statistical analysis of variance across participants (N=12) for each brain area. An F-test on the variance of each similarity value was performed for each brain region. The color bar indicates the scale of the p-value of the F-test.
CHAPTER 4. DISCUSSION
This study showed that both spatial and feature attention tasks recruited a similar top-down attentional network in frontoparietal regions, which is consistent with previous studies (Freedman & Assad, 2009; Giesbrecht, et al., 2003; Schenkluhn, et al., 2008; H. Slagter, et al., 2007). More importantly, although most of the frontoparietal network and visual cortex showed equivalent fMRI response amplitudes during the different attention tasks, we found distributed coding of attentional control signals across neural populations in the frontoparietal network. In particular, similar attentional control tasks were associated with similar multi-voxel activity patterns in various subregions of the frontoparietal control network. Response vectors were also more similar between different feature-based attentional control signals than between spatial and feature-based attentional control signals.
Finally, the similarity structure was more consistent across participants in frontoparietal cortex than in visual cortex. These findings complemented recent studies of the pattern structure of spatial and feature attention in frontoparietal regions (Greenberg, et al., 2010; Liu, et al., 2011), and these parietal and frontal areas could serve as plausible sources of attentional feedback to early visual areas. The convergence of these studies strengthens the evidence for diverse neural patterns underlying distinct attentional priority signals.
For the different types of attention tasks, we found approximately orthogonal patterns of neural activity in frontoparietal cortex. Although some voxels discriminated different attention types (FIGURE 3.3), this discrimination was not achieved by separate voxels uniquely responsive to a particular type of attention task; instead, many voxels were active in each attention task. Within one attention type, in contrast, we found correlated activity patterns for the different attention conditions. Together, these results showed a hierarchical representation, with one basic activity pattern associated with each attention type. Both within and between attention types, frontoparietal representations were also modulated by feature and spatial attention.
The benefits of distributed, orthogonal coding are well known: it provides efficient representation and discrimination of many independent events in a fixed population of cells (Hinton, Mcclelland, & Rumelhart, 1986). Different attention tasks can involve arbitrary numbers of attended targets and dimensions, and each requires different information and operations. Orthogonal coding may allow frontoparietal cortex to support a large number of these somewhat independent types of attentional control operations. Similarity between two distributed representations, in turn, allows for similarity of their functional effects. In our experiment, the attentional control signal is presumably involved in the separate operations appropriate for each task, such as retrieving the associated target or maintaining the target description. Correlated coding for different attentional instructions within a type reflects a similar cognitive operation applied to different information.
It will be interesting to collect further evidence on the relationship between the similarity structure of attentional control signals and stimulus selectivity and/or temporal phase selectivity. For example, activity patterns for attentional control might be less correlated across different stimulus modalities (e.g., auditory and visual) and different attentional phases (preparation, maintenance, switching). Additional work is also needed to understand the mechanisms underlying this similarity structure of attentional control at the neural level. Because of the limited spatial resolution of fMRI, it is not possible to determine whether the similarity structure arises from distinct groups of neurons or from the same group of neurons with different firing patterns. Further single-neuron recording studies might elucidate the neuronal basis of attentional control for different types of selection demands.
CHAPTER 5. CONCLUSION
These results showed that each attention task was associated with its own distinct pattern of fMRI activity in the frontoparietal network, although not necessarily with a differentiated response magnitude. These patterns were similar for trials of the same attention type, and the feature attention trials were more similar to each other than to the spatial attention trials.
For different attention types, these data showed approximately independent or orthogonal frontoparietal representation. Furthermore, the similarity and cluster patterns were more consistent and stable across participants in frontoparietal cortex than visual cortex. Selective attention is a complex cognitive function with preset goals and a series of cognitive operations. For each attention task, the brain must know what kind of information is desired and what signals needed to be boosted. Our data suggest distinct activity patterns marking the separate attention control signals in frontoparietal cortex. Orthogonal codes may underlie complex, dissimilar cognitive component, and correlated codes is efficient when fixed cognitive process is applied to varying stimulus content. 27 REFERENCES 28 REFERENCES Chawla, D., Rees, G., & Friston, K. (1999). The physiological basis of attentional modulation in extrastriate visual areas. Nature neuroscience, 2, 671-676. Corbetta, M., Kincade, J. M., Ollinger, J. M., McAvoy, M. P., & Shulman, G. L. (2000). Voluntary orienting is dissociated from target detection in human posterior parietal cortex. Nature neuroscience, 3, 292-297. Corbetta, M., Miezin, F. M., Dobmeyer, S., Shulman, G. L., & Petersen, S. E. (1990). Attentional modulation of neural processing of shape, color, and velocity in humans. Science, 248(4962), 1556. Corbetta, M., Tansy, A. P., Stanley, C. M., Astafiev, S. V., Snyder, A. Z., & Shulman, G. L. (2005). A functional MRI study of preparatory signals for spatial location and objects. Neuropsychologia, 43(14), 2041-2056. Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual review of neuroscience, 18(1), 193-222. DeYoe, E. A., Carman, G. J., Bandettini, P., Glickman, S., Wieser, J., Cox, R., et al. (1996). Mapping striate and extrastriate visual areas in human cerebral cortex. Proc Natl Acad Sci U S A, 93(6), 2382-2386. Duncan, J., Humphreys, G., & Ward, R. (1997). Integrated mechanisms of selective attention. Current Opinion in Biology, 7(255-261), 2. Engel, S. A., Glover, G. H., & Wandell, B. A. (1997). Retinotopic organization in human visual cortex and the spatial precision of functional MRI. Cereb Cortex, 7(2), 181-192. Freedman, D. J., & Assad, J. A. (2009). Distinct encoding of spatial and nonspatial visual information in parietal cortex. The Journal of Neuroscience, 29(17), 5671-5680. Frohlich, Z. (1994). Combined spatial and temporal imaging of brain activity during visual selective attention in humans. Nature, 372, 8. Giesbrecht, B., Weissman, D. H., Woldorff, M. G., & Mangun, G. R. (2006). Pre-target activity in visual cortex predicts behavioral performance on spatial and feature attention tasks. Brain research, 1080(1), 63-72. 29 Giesbrecht, B., Woldorff, M., Song, A., & Mangun, G. (2003). Neural mechanisms of top-down control during spatial and feature attention. Neuroimage, 19(3), 496-512. Greenberg, A. S., Esterman, M., Wilson, D., Serences, J. T., & Yantis, S. (2010). Control of spatial and feature-based attention in frontoparietal cortex. The Journal of Neuroscience, 30(43), 14330-14339. Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539), 2425-2430. Hinton, G. E., Mcclelland, J. L., & Rumelhart, D. E. (1986). Distributed representations, Parallel distributed processing: explorations in the microstructure of cognition, vol. 
1: foundations: MIT Press, Cambridge, MA. Hopfinger, J., Buonocore, M., & Mangun, G. (2000). The neural mechanisms of top-down attentional control. Nature neuroscience, 3, 284-291. Kanwisher, N., & Wojciulik, E. (2000). Visual attention: insights from brain imaging. Nature Reviews Neuroscience, 1(2), 91-100. Liu, T., Hospadaruk, L., Zhu, D. C., & Gardner, J. L. (2011). Feature-specific attentional priority signals in human cortex. The Journal of Neuroscience, 31(12), 4484. Luks, T. L., & Simpson, G. V. (2004). Preparatory deployment of attention to motion activates higher-order motion-processing brain regions. Neuroimage, 22(4), 1515-1522. Nichols, T. E., & Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum Brain Mapp, 15(1), 1-25. Nunnally, J. C. (1978). Psychometric. Psychometric. Paus, T. (1996). Location and function of the human frontal eye-field: a selective review. Neuropsychologia, 34(6), 475-483. Posner, M. I., Snyder, C. R., & Davidson, B. J. (1980). Attention and the detection of signals. Journal of experimental psychology: General, 109(2), 160. Schenkluhn, B., Ruff, C. C., Heinen, K., & Chambers, C. D. (2008). Parietal stimulation decouples spatial and feature-based attention. The Journal of Neuroscience, 28(44), 11106. Schoenfeld, M., Hopf, J., Martinez, A., Mai, H., Sattler, C., Gasde, A., et al. (2007). Spatiotemporal analysis of feature-based attention. Cerebral Cortex, 17(10), 2468-2477. 30 Sereno, M. I., Dale, A. M., Reppas, J. B., Kwong, K. K., Belliveau, J. W., Brady, T. J., et al. (1995). Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science, 268(5212), 889-893. Shulman, G. L., Ollinger, J. M., Akbudak, E., Conturo, T. E., Snyder, A. Z., Petersen, S. E., et al. (1999). Areas involved in encoding and applying directional expectations to moving objects. The Journal of Neuroscience, 19(21), 9480-9496. Sigala, N., Kusunoki, M., Nimmo-Smith, I., Gaffan, D., & Duncan, J. (2008). Hierarchical coding for sequential task events in the monkey prefrontal cortex. Proceedings of the National Academy of Sciences, 105(33), 11969. Slagter, H., Giesbrecht, B., Kok, A., Weissman, D., Kenemans, J., Woldorff, M., et al. (2007). fMRI evidence for both generalized and specialized components of attentional control. Brain research, 1177, 90-102. Slagter, H. A., Kok, A., Mol, N., & Kenemans, J. L. (2005). Spatio-temporal dynamics of topdown control: directing attention to location and/or color as revealed by ERPs and source modeling. Cognitive Brain Research, 22(3), 333-348. Stanberry, L., Nandy, R., & Cordes, D. (2003). Cluster analysis of fMRI data using dendrogram sharpening. Human brain mapping, 20(4), 201-219. Swisher, J. D., Halko, M. A., Merabet, L. B., McMains, S. A., & Somers, D. C. (2007). Visual topography of human intraparietal sulcus. The Journal of Neuroscience, 27(20), 53265337. Thakral, P. P., & Slotnick, S. D. (2009). The role of parietal cortex during sustained visual spatial attention. Brain research, 1302, 157-166. Ungerleider, S. K. L. G. (2000). Mechanisms of visual attention in the human cortex. Annual review of neuroscience, 23(1), 315-341. Van Essen, D. C., Drury, H. A., Dickson, J., Harwell, J., Hanlon, D., & Anderson, C. H. (2001). An integrated software suite for surface-based analyses of cerebral cortex. J Am Med Inform Assoc, 8(5), 443-459. Watson, J. D., Myers, R., Frackowiak, R. S., Hajnal, J. V., Woods, R. P., Mazziotta, J. C., et al. (1993). 
Area V5 of the human brain: evidence from a combined study using positron emission tomography and magnetic resonance imaging. Cereb Cortex, 3(2), 79-94. Weber, M., Thompson-Schill, S. L., Osherson, D., Haxby, J., & Parsons, L. (2009). Predicting judged similarity of natural categories from their neural representations. Neuropsychologia, 47(3), 859-868. 31 Weissman, D., Mangun, G., & Woldorff, M. (2002). A role for top-down attentional orienting during interference between global and local aspects of hierarchical stimuli. Neuroimage, 17(3), 1266-1276. Wojciulik, E., & Kanwisher, N. (1999). The generality of parietal involvement in visual attention. Neuron, 23(4), 747-764. Woldorff, M. G., Hazlett, C. J., Fichtenholtz, H. M., Weissman, D. H., Dale, A. M., & Song, A. W. (2004). Functional parcellation of attentional control regions of the brain. Journal of Cognitive Neuroscience, 16(1), 149-165. 32