EXAMINING RACIAL BIAS IN EVIDENCE ACCUMULATION: EXPLORING THE IMPACT OF OBJECT SEARCH

By

Alejandro Carrillo

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Psychology—Doctor of Philosophy

2024

ABSTRACT

Prior investigations into racial bias in fatal police shootings have predominantly employed the First-Person Shooter Task (FPST) and the Weapon Identification Task (WIT). These paradigms have revealed consistent patterns of bias, including faster correct decisions to shoot armed Black targets (as shown in the FPST) and a bias toward misidentifying harmless objects as weapons after exposure to Black primes (as evidenced in the WIT). While these findings are valuable, they overlook the role of visual search in these high-stakes decision-making processes. The influence of visual search processes and their associated cognitive mechanisms—such as those described by Drift Diffusion Modeling (DDM)—remains relatively unexplored. This dissertation bridged this gap by examining the impact of race on search efficiency within complex visual environments and its reflection in evidence accumulation. Across two studies, I found that race did not significantly impact search efficiency or evidence accumulation. Instead, a consistent target type effect emerged: searches for guns were more efficient than searches for other objects, irrespective of racial primes, and this advantage was mirrored in credibly stronger rates of evidence accumulation. This work serves as a first step toward understanding the dynamics of racial biases within decision-making processes in high-stakes situations, emphasizing the examination of search behaviors.

ACKNOWLEDGEMENTS

First, this work was only possible with the help of my advisor, Joseph Cesario, who helped me untangle my mess of ideas into something tangible. Likewise, thanks to my committee member Mark Brandt, who always made time for my random pop-ins and urgent questions. Also, I'm grateful for David Johnson's guidance when working on a challenging portion of the analyses; I doubt I would have figured out posterior predictive checks without his help. I also want to thank research assistants Eva Valverde, Leela Grimsby, Moon Ha Tran, Annika Schoenherr, Alina Acosta, and Nerissa Viswanathan, who assisted me with my data collection. Their contribution made it possible for me to complete my work on time.

I want to thank my fellow students in the program for their tremendous support. While I appreciate all the graduate students, I am especially grateful for Jeewon Oh and our late-night work sessions. Prachi Solanki was an exceptional lab mate who helped me get through the program, and I couldn't have done it without her. I also want to give a special shout-out to Kenya Mulwa for being there with the right words of encouragement when I needed them most.

All of this wouldn't have been possible without the support of my family. I'm deeply grateful to my brother-in-law Steven; our fishing trips are a highlight I continually look forward to. My thanks also to my brother and sister, Victor and Sabrina, for their ever-present encouragement and guidance. I owe a special thank you to my mom, Ana, who may not have always understood my work but was always my loudest supporter. And to my dad, Victor, who is no longer with us: I strive every day to live up to the example he set. While my family and colleagues have been incredible, the support didn't stop there.
My friends—Sergio Marquez, Andrew Rakhshani, Emma Zblewski, Devin Fairbourn, Alex Bauer, and Eric and Juliee Chantland—were invaluable. Our weekly(ish) board game nights kept me sane; I can't imagine surviving the stress of grad school without your support. Eric, words cannot describe how much your friendship has meant to me. Our gaming sessions gave me something to look forward to, and whenever I felt like I couldn't finish this degree, you always managed to bring me out of the slump. Of course, this section wouldn't be complete without thanking my cat Artemis, who spent most of the dissertation (sleeping) by my side.

TABLE OF CONTENTS

INTRODUCTION
STUDY 1
STUDY 2
GENERAL DISCUSSION
CONCLUSION
REFERENCES
APPENDIX A: METHOD TABLES AND CODE
APPENDIX B: BEHAVIORAL RESULTS TABLES
APPENDIX C: DDM EFFECTS TABLES
APPENDIX D: POSTERIOR PREDICTIVE CHECKS

INTRODUCTION

In recent years, the issue of racial bias in police shootings has emerged as a pivotal topic in national conversations, particularly in the context of social justice and law enforcement reform. This issue has been brought into sharp focus by numerous high-profile cases involving unarmed Black Americans who were fatally shot by police officers, often under circumstances that have raised serious questions about the use of lethal force. These tragic incidents have sparked widespread public outcry and underscored the urgent need for a deeper understanding of the underlying factors contributing to these outcomes. In response to this pressing societal issue, social psychologists have investigated the decision-making processes involved in police shootings.

One paradigm that has been used to study shooting decisions is the First-Person Shooter Task (FPST), developed by Correll et al. (2002; see also Figure 1), which was designed to study the role of racial bias in simulated police shooting scenarios. The task attempts to mimic the high-pressure, instantaneous decision-making situations that law enforcement officers may encounter. In this task, participants are first shown a series of images that depict various neighborhood scenes without any people. These scenes serve as the backdrop for the task, creating the context for the presentation of target individuals. After presenting one to four empty scenes, an image of a person is suddenly introduced. This individual is typically a Black or White male and is depicted as holding an object.
The object could either be harmless (e.g., a wallet or cellphone) or threatening (e.g., a gun). Participants are tasked with making a rapid 'shoot' or 'don't shoot' decision based on the perceived threat posed by the target individual within a constrained time window of 630 to 850 milliseconds. This time constraint is designed to simulate the urgency often associated with real-life police shooting incidents. The FPST incorporates a payoff matrix that is structured to encourage shooting, reflecting the potential real-world consequences of failing to respond to a genuine threat. The payoff matrix rewards points for fast and correct decisions and penalizes slow or incorrect ones. The FPST provides a controlled environment for studying the cognitive and social factors influencing decision-making in potentially life-threatening situations, contributing to our understanding of the complex dynamics involved in police shootings (Correll et al., 2002).

Figure 1: An example of a typical FPST trial.

A separate but related paradigm is the Weapon Identification Task (WIT; Payne, 2001; see Figure 2), which complements the FPST in studying racial bias. The WIT is a sequential priming task designed to assess the speed and accuracy of object identification, with the objects in question being either "weapons" or "tools." A prime face, either Black or White, is presented for 200 milliseconds in a typical WIT trial. This is immediately followed by a target image of either a tool or a gun, also displayed for 200 milliseconds, which is then replaced by a visual mask. Participants must respond by pressing a key corresponding to either "tool" or "gun" during the presentation of the visual mask. A distinguishing feature of the WIT, as compared to the FPST, is that the prime images are generally headshots, and the target images are displayed against a neutral, empty background. This design feature allows for the isolation of the influence of racial priming on object identification, free from the potential confounding effects of contextual information. The WIT thus provides a valuable tool for investigating the cognitive mechanisms underlying racial bias in object identification and how such bias may influence decision-making in critical situations (Payne, 2001).

Figure 2: A typical WIT trial.

In both tasks, racial bias is measured based on participants' error rates or response times. For instance, in the FPST, racial bias may manifest as participants being more likely to shoot unarmed Black suspects than unarmed White suspects or responding "shoot" faster to armed Black suspects than armed White suspects (Cesario & Carrillo, 2024; Mekawi & Bresin, 2015). Similarly, racial bias is evident in the WIT when participants are faster or more accurate at identifying guns following Black faces or tools following White faces, compared to the reverse pairings (Rivers, 2017).

A significant focus of these research lines has been on the misidentification of harmless objects. This line of inquiry has been largely driven by real-world incidents where police officers have mistakenly perceived harmless objects (or no objects at all) as weapons, as in the tragic case of Amadou Diallo. However, this research tradition, while valuable, may not fully capture the complexity of decision-making processes in police encounters. An integral aspect of these encounters, often overlooked in research, is the process of not just identifying a target object but also locating it.
This process, visual search, is a key component of police academy training. Officers are taught to scan various potential threat locations, such as hands, waists, backpacks, and key locations in the general surroundings. In situations involving multiple officers, roles may be divided, with one officer engaging the suspect while others scan the environment for potential hazards. Despite its importance in real-world policing, the implications of visual search for decision-making have not been fully appreciated in the existing literature. For example, the WIT is primarily designed to study object identification without search elements. While the FPST incorporates some elements of visual search, it does so in such a way that salience or anchoring may play an important role. That is, participants view several empty background scenes back-to-back before the rapid presentation of the suspect, which acts as a signal to rapidly guide attention. How this may shape search efficiency and object identification is generally not discussed. Given these gaps in the current understanding, this dissertation proposes to investigate the role of visual search in shaping weapon identification.

Visual Search

Visual search is a cognitive process that involves identifying and localizing a target object within a visual field populated with other objects, often referred to as distractors. This process is a fundamental component of many tasks requiring object identification and is typically studied in a controlled laboratory setting. Such searches vary enormously, from finding a friend in a crowd of people, to locating your favorite brand of cereal at the supermarket, to scanning for threats in airport baggage. The standard experimental procedure generally involves participants scanning an array of objects for a target object that differs from the distractors by one or more features (Wolfe, 2020; Wolfe & Horowitz, 2017).

The efficiency of visual search, defined as the speed and accuracy with which a target is identified among distractors, can be influenced by various factors. These include the degree of differences between the target and distractors, such as their size, color, and shape (Duncan & Humphreys, 1989). The number of items in the visual field, also known as the set size, can also impact search efficiency, with larger set sizes generally leading to longer search times (Treisman & Gelade, 1980; Wolfe et al., 2010; Wolfe, 2014). Top-down factors such as the goals and expectations of the observer can also play a significant role in visual search efficiency (Wolfe, 1994, 2020). For example, if an observer is actively looking for a specific object, they may be able to identify it more quickly than if they were passively scanning the visual field. When actively searching for their keys on a cluttered desk, an individual quickly zeroes in on specific cues like shape and shine, facilitating rapid identification. In contrast, a casual glance across the same desk, without a specific target in mind, can easily miss the keys among the clutter.

Researchers commonly assess visual search efficiency by examining search slopes, which quantitatively measure how response time increases with the number of items in the search array (Treisman & Gelade, 1980). Search slopes reflect the rate at which response time grows as the set size, or the number of items in the display, increases.
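In practice, the search slope is simply the regression coefficient obtained by predicting correct response times from set size. The following is a minimal sketch in Python with fabricated response times (the analyses reported later in this dissertation used mixed models in R, not this code):

    import numpy as np
    from scipy.stats import linregress

    # Fabricated correct response times (ms) for one participant,
    # two trials at each of three set sizes.
    set_sizes = np.array([12, 12, 16, 16, 20, 20])
    rts = np.array([1150, 1230, 1310, 1390, 1480, 1545])

    fit = linregress(set_sizes, rts)
    print(f"search slope: {fit.slope:.1f} ms per item")  # ~40 ms per item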
A steeper search slope indicates a larger increase in response time per additional item, suggesting a less efficient search process. Conversely, a shallower search slope indicates a smaller increase in response time, indicating a more efficient search process (Wolfe, 1998). That is, search efficiency exists on a continuum. In a highly efficient or highly guided search, the target object "pops out" from the display, meaning that it can be quickly and effortlessly detected regardless of the number of distractors present (Egeth et al., 1972). This phenomenon is known as a "pop-out" search, where the target object captures attention automatically and stands out from the distractors. In pop-out searches, the search slope is nearly flat or absent, indicating that response time remains constant regardless of the set size. For example, a single red apple among a cluster of green apples effortlessly captures attention, showcasing pop-out search through the immediate draw of its distinct color. This contrasts with more demanding or less efficient search tasks, where the search slope is steeper, indicating a longer response time as the set size increases. As an example, imagine performing a search for a green apple among green pears; the task becomes slightly more difficult and involves looking through more of the items. Note, however, that while the use of set size is commonplace in the literature, differences in search efficiency can be reflected in the search process, the identification process, or some combination of both (Kristjánsson, 2015; Wolfe, 2016).

Guided Search Model 6.0

Determining what guides visual search is a complex task. Wolfe (2021) offers an updated model of visual search, known as Guided Search 6.0 (GS6), which provides a comprehensive framework for understanding this process. The GS6 model assumes that although we can see various items throughout a scene, our capacity for recognizing more than a handful at a time is restricted. To address this limitation, attention is utilized to select items, allowing their features to be "bound" together into recognizable objects. This attention is not random but "guided," allowing items to be processed in an efficient order.

According to the GS6 model, this guidance is derived from five sources of pre-attentive information: (1) top-down feature guidance, which refers to the influence of the observer's goals, expectations, and guiding templates; (2) bottom-up feature guidance, which is driven by the salient features of the items in the visual field (Theeuwes, 1992); (3) prior history, such as priming effects where previous exposure to an item influences its subsequent recognition; (4) reward, which can bias attention towards items associated with positive outcomes (Anderson et al., 2011; Lee & Shomstein, 2013); and (5) scene syntax and semantics, which refers to the influence of contextual information and the overall meaning of the scene (Boettcher et al., 2018; Henderson & Hayes, 2017). These sources of guidance are integrated into a spatial "priority map," a dynamic attentional landscape that evolves throughout the search process. This map helps determine the order in which items are attended to and processed, thereby guiding the visual search process. The selected object(s) are compared to target templates in long-term memory. Wolfe (2021) proposes that this process unfolds through an 'asynchronous diffuser': the identification of one item can start before the identification of the previous item has been completed, allowing for a more fluid and efficient search.
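The notion of a priority map can be made concrete with a toy illustration. The sketch below is not GS6's actual machinery; it simply treats the map as a weighted combination of the five guidance sources over display locations, with made-up signals and arbitrary weights:

    import numpy as np

    rng = np.random.default_rng(0)
    n_locations = 16

    # Made-up pre-attentive guidance signals, one value per display location.
    top_down = rng.random(n_locations)   # match to the guiding template
    bottom_up = rng.random(n_locations)  # local feature salience
    history = rng.random(n_locations)    # priming from recent trials
    reward = rng.random(n_locations)     # learned value of features/locations
    scene = rng.random(n_locations)      # scene syntax and semantics

    # Arbitrary weights; GS6 does not commit to these particular values.
    priority = (0.4 * top_down + 0.2 * bottom_up + 0.2 * history
                + 0.1 * reward + 0.1 * scene)

    visit_order = np.argsort(-priority)  # attend to locations in priority order
    print(visit_order[:5])               # the first five locations selected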
Although many aspects of visual search warrant discussion, I will focus on top-down guidance, priming effects, and the importance of templates. These elements have the potential to be influenced by factors such as race. Understanding how these components function and how racial biases might shape them can provide valuable insights into the broader dynamics of visual search processes and their implications for tasks like the FPST or WIT and, ultimately, police use of force.

Social Information and Search Processes

Top-down feature guidance is a form of attentional guidance that is influenced by an observer's knowledge or expectations about the target's features. Higher cognitive processes drive this form of guidance and direct attention toward specific features of a target that align with the observer's expectations (Eimer, 2014). For instance, if an observer is searching for a green apple among green pears, their knowledge about the shape of an apple would guide their attention toward objects with that shape.

Prior history, including priming effects, significantly impacts attentional guidance, drawing from an observer's past experiences. Priming effects can operate in multiple ways, with the most well-studied including intertrial priming and cueing. In intertrial priming, if an observer has recently seen a red apple, they are more likely to notice red objects in their visual field in subsequent trials (Kruijne & Meeter, 2015). In cueing, priming of emotional facial cues can facilitate search processes for unrelated target objects (Becker, 2009), and exposure to specific semantic categories prior to the presentation of the visual array can guide attention to semantically similar target objects (Robbins & Hout, 2015, 2020).

An important element to highlight is the role of search templates in this process. Wolfe (2021) posits that two forms of templates significantly contribute to visual search: guiding templates and target templates. Guiding templates are cognitive representations of features that guide attention by highlighting areas in the visual field that match these features (Bravo & Farid, 2009, 2012; Malcolm & Henderson, 2009; Vickery et al., 2005; Wolfe et al., 2004). These templates are flexible and can include multiple features; importantly, there are ongoing debates about the number of templates that can be held in working memory (Bahle et al., 2020; Ort & Olivers, 2020), with clear costs in speed and accuracy for multiple object searches (Menneer et al., 2012; Stroud et al., 2012). Target templates, on the other hand, are more specific and represent the target the searcher is looking for. They help in identifying targets and rejecting distractors during the search: when an item is selected, it is compared to a target representation, determining whether it is the target or a distractor.

To fully assess the impact of race on visual search, it is crucial to consider the role of guidance, not just identification. Race information may enhance the effectiveness of the search process through the interplay between guiding templates and top-down feature guidance. For example, if a participant is tasked with finding a green apple among green pears, working visual memory may adopt abstract features or attributes of that green apple to facilitate the search.
The exact mechanisms by which these representations are developed and utilized are still a subject of ongoing research. However, Yu et al. (2023) propose that guidance likely adheres to a "good-enough" principle. This principle suggests that attentional guidance is often based on the simplest, sufficient information that can provide a high-quality estimate of a potential target object's location. Importantly, Yu et al. (2023) highlight that this is context-dependent. For example, in searching for a green apple among green pears, the feature "green" would not be useful, but shape and size might be. In contrast, color would be the useful defining feature if searching for a red apple among green apples. The contents of this guiding template are influenced by many factors, including the priming of social or categorical information (Yu et al., 2023).

Robbins and Hout (2020) demonstrated the influence of scene priming on visual search tasks. Participants were primed with images of scenes contextually related to the target object they were searching for in an array. They found that semantic information activated by the scene guided attention to semantically similar items in the search array, resulting in faster response times following congruent primes. Research using empty backgrounds and classic search arrays has shown that categorical information can influence the features in a template. When primed with categorical information, guiding templates can facilitate the search for items, including object features that are typical of a category (Robbins & Hout, 2015) or that are consistent across exemplars of a category (Hout et al., 2017; Yu et al., 2016). Categorical information can also take the form of social identities; for example, Chiao et al. (2006) primed racial identities and had participants scan a search array for Black or White faces, finding that guidance was faster after congruent priming. Despite limited research directly exploring the impact of race on visual search, evidence suggests that social and categorical information can influence object search. Given these insights, it is sensible to study both object identification and the potential influence of race on search efficiency. This exploration can be achieved by examining behavioral data such as response times and by modeling the underlying cognitive processes.

Drift Diffusion Modeling

Researchers have turned to computational models such as the Drift Diffusion Model (DDM) to investigate how race affects decision-making processes. This model offers a comprehensive view of the cognitive processes that drive decision-making and facilitates a nuanced analysis of the effects of racial bias. For example, a review by Johnson et al. (2017) demonstrated that the DDM can provide novel insights into the cognitive processes underlying decision tasks like the FPST. The DDM is a widely used sequential sampling model in cognitive psychology that explains the cognitive processes involved in decision-making tasks (Ratcliff, 1978). It posits that decisions are made by accumulating evidence over time until a decision threshold is reached. The DDM has been applied to tasks like the FPST and WIT to gain insights into the role of race in decision-making processes (Correll et al., 2015; Harder, 2017, 2020; Johnson et al., 2018; Johnson et al., 2021; Pleskac et al., 2018; Todd et al., 2021).

Figure 3: The drift diffusion model.
The DDM consists of four parameters (see Figure 3): Beta (start point), Delta (drift rate), Alpha (evidence threshold), and Tau (non-decision time).

Beta, the start point, signifies the initial bias or predisposition before the process of evidence accumulation begins. In the context of the FPST, this could reflect a participant's initial bias towards shooting or not shooting, e.g., being "trigger happy."

Delta, or the drift rate, represents the rate of evidence accumulation over time. It mirrors the strength or quality of the evidence being processed per unit of time during decision-making. A steeper drift rate (higher delta) indicates a stronger accumulation of evidence, leading to quicker decisions (all else equal). Conversely, a shallow drift rate (lower delta) suggests weaker evidence accumulation, resulting in slower decisions. Factors such as the clarity of the visual stimuli or prior information can influence this parameter.

Alpha, the evidence threshold, denotes the amount of information or evidence required to make a decision. This parameter is linked to the speed-accuracy trade-off. For example, when response time windows are shorter, evidence thresholds tend to be lower, suggesting a faster but potentially less accurate decision-making process (Pleskac et al., 2018).

Lastly, Tau, the non-decision time, accounts for the time taken for processes other than decision-making, such as motor response time. This parameter helps distinguish the cognitive decision-making process from the physical response, thereby providing a more accurate depiction of the cognitive processes involved in tasks like the FPST.

The different parameters of the DDM (Beta, Delta, Alpha, and Tau) work in concert to provide a comprehensive understanding of the decision-making process in tasks like the FPST and WIT. When applying the DDM to the FPST, significant effects of race on the drift rate, or delta, are observed. Specifically, evidence is stronger in support of a 'shoot' decision when the target is Black rather than White (Correll et al., 2015; Johnson et al., 2018; Pleskac et al., 2018). This suggests that participants gather and process decision-making information more efficiently when the target is Black. In contrast, when applying the DDM to the WIT, race generally has no observed effect on the drift rate. This discrepancy could be attributed to a variety of differences between the tasks. However, as this latter WIT finding is based on a single published paper (Todd et al., 2021) and unpublished data from the Cesario lab, it should be interpreted with caution.

In the context of shooting decisions, shifts in these parameters would result in distinct changes to response times and error rates (see Figure 4). For instance, if participants receive dispatch information indicating that a suspect at the scene is armed, we might expect the beta parameter to increase, or start closer to the shoot threshold. This adjustment would likely result in faster responses when the bias aligns with the correct response. However, this could come at the cost of accuracy if the bias favors an incorrect decision. Additionally, the alpha parameter, or evidence threshold, can be influenced by the allotted response time windows. In shooting decisions, extending the time allowed for a response generally enhances accuracy: a lengthened response time window increases boundary separation, leading to longer response times but typically higher accuracy, as decisions are made with greater certainty.
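How these four parameters jointly generate choices and response times can be illustrated with a small simulation. The following is a minimal sketch of a single diffusion trial, with arbitrary parameter values rather than estimates from any study reported here:

    import numpy as np

    def simulate_ddm_trial(alpha=1.0, beta=0.5, delta=0.8, tau=0.3,
                           dt=0.001, sigma=1.0, rng=None):
        """Simulate one trial; returns (choice, response time in seconds)."""
        rng = rng if rng is not None else np.random.default_rng()
        x = beta * alpha                # start point as a proportion of alpha
        t = 0.0
        while 0.0 < x < alpha:          # accumulate until a boundary is crossed
            x += delta * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choice = "shoot" if x >= alpha else "don't shoot"
        return choice, tau + t          # tau shifts all response times equally

    # Example: a higher threshold (alpha) slows responses but makes the
    # choice track the drift direction more reliably.
    rng = np.random.default_rng(1)
    trials = [simulate_ddm_trial(alpha=2.0, rng=rng) for _ in range(1000)]
    print(np.mean([c == "shoot" for c, _ in trials]),
          np.mean([rt for _, rt in trials]))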
Figure 4: "An illustration of how changing diffusion model parameters impacts decisions and response time distributions (in blue). We assume that evidence is correctly accumulated toward Option A. Top panel: higher relative start point b increases the likelihood and speed of selecting Option A by primarily increasing modal response speed. Middle panel: higher threshold a increases the likelihood of choosing Option A and decreases the speed of choosing both options by shifting the mode and lengthening the tails of responses. Bottom panel: higher drift rate d increases the likelihood and speed of selecting Option A by shortening the tails of the responses. Nondecision time t is not depicted as it simply shifts both distributions by a fixed amount." Adapted from "Advancing Research on Cognitive Processes in Social and Personality Psychology: A Hierarchical Drift Diffusion Model Primer," by D. Johnson, C. Hopwood, J. Cesario, & T. Pleskac, 2017, Social Psychological and Personality Science, 8, p. 2. https://doi.org/10.1177/1948550617703174. Reprinted with permission.

The drift rate can be influenced by the quality or strength of the information presented in the stimuli. For example, a suspect holding a rifle, as opposed to a smaller handgun, provides stronger information, which could increase the drift rate and lead to overall faster decision-making. Finally, the non-decision time (tau) impacts response time but does not directly affect decision accuracy. An increase in tau uniformly extends the response time across all trials, irrespective of the decision difficulty or accuracy.

Visual Search and the DDM

Drift rate, representing the strength of evidence accumulation, is a multifaceted parameter influenced by numerous factors. Yet pinpointing what specifically drives changes in the drift rate can be challenging. Factors such as the clarity of visual stimuli, prior information, or the complexity of the task can all impact the rate at which evidence is accumulated. However, these are just a few examples, and the drift rate can be influenced by many other factors, some of which may not be immediately obvious or easy to measure. One such factor that has been somewhat overlooked is the role of object search. The process of searching for a specific object or feature within a visual scene could potentially influence the drift rate, as it affects how efficiently evidence can be gathered and processed. However, the exact nature of this relationship is not yet fully understood.

One key factor that comes into play is discriminability, which refers to the ability to distinguish between different stimuli. In a study conducted by Pleskac et al. (2018), the FPST was modified by blurring the object held by the target. This manipulation effectively reduced the discriminability of the object, making it harder for participants to identify it. The results showed that this decrease in discriminability led to a slower rate of evidence accumulation, as reflected in a lower drift rate. In other words, when the target object was blurred, participants took longer to gather and process the necessary evidence to make a decision about the object's identity. This study is particularly important as it provides empirical support for interpreting the drift rate (delta) as a measure of evidence strength. It demonstrates that changes in the quality of the visual stimuli, such as a decrease in discriminability, can directly impact the rate at which evidence is accumulated during decision-making tasks.
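In terms of the trial simulator sketched earlier, the Pleskac et al. (2018) blurring manipulation can be mimicked by lowering delta while leaving the other parameters alone; the values below are again arbitrary, not estimates from their study:

    # Hypothetical illustration: blurring weakens the evidence (lower delta),
    # which slows decisions even though the threshold is unchanged.
    rng = np.random.default_rng(2)
    clear = [simulate_ddm_trial(delta=1.2, rng=rng)[1] for _ in range(2000)]
    blurred = [simulate_ddm_trial(delta=0.4, rng=rng)[1] for _ in range(2000)]
    print(np.mean(clear) < np.mean(blurred))  # True: blurred trials are slower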
In a modification of the FPST, Johnson et al. (2018) sought to simulate real-world situations where officers receive dispatch information. They presented participants with race and/or weapon information for 2000 ms, operationalizing the dispatch information typically received by officers. In a within-subjects manipulation where this information was not provided, participants viewed a fixation point for the same duration instead. The findings of Johnson et al. (2018) revealed that providing race information reduced the role of racial bias in evidence accumulation. Interestingly, providing weapon information had a dual effect: it led to stronger drift rates when the weapon information was correct but weaker drift rates when the weapon information was incorrect. Johnson et al. (2018) hypothesized that these effects could be attributed to differences in search strategies. Specifically, they proposed that participants engage in an exploratory search when no prior information is given (i.e., "What object is being held?"). However, when information is given, participants shift to a confirmatory search (i.e., "Is that person holding a gun?").

This can also be conceptualized in terms of the role of templates in facilitating search and identification. In general, participants are not given information before each trial, leaving open the possibility of broad templates and possible memory searches. However, when participants are informed that the suspect is armed, a template that includes more gun-like features may be used. This shift would result in stronger evidence accumulation when the suspect is armed and substantially weaker evidence accumulation when the suspect is unarmed. However, due to the design of the task, it is challenging to determine whether these effects reflect differences in search or identification processes.

Correll et al. (2015) utilized eye-tracking technology to explore the impact of visual processing on racial bias. They examined participants' eye movements during the FPST and calculated the visual angle between the fixation point and the target object. A larger visual angle suggests a greater deviation between where participants were looking and the target object. The findings revealed that participants had larger visual angles for Black targets than White targets, irrespective of whether the target was armed or unarmed. That is, participants directed their gaze toward areas other than the suspect's hand when the suspect was Black. However, even though participants did not fully fixate on the target item when the suspect was Black, Correll et al. (2015) observed steeper drift rates for guns with Black targets. They concluded that this reflects a stereotype consistency effect, where objects appear more like guns when paired with Black targets. This could imply that identification was more accurate when stereotypes were congruent, but search efficiency might have been compromised by race. However, it is worth noting that the drift diffusion model employed in their study was overly restrictive due to the absence of hierarchical modeling. Specifically, without the use of hierarchical modeling or Bayesian estimation, the model estimates are generated from a very small number of trials, which constrains the extent to which parameters are allowed to vary (Johnson et al., 2017).
This limitation imposed artificial constraints on the model's interpretability. For example, the evidence threshold was not allowed to vary by race. As an alternative account, a lower threshold for Black targets may also explain why participants made a decision before fully fixating on the target object. In addition, without manipulating search difficulty, strong conclusions about search efficiency cannot be made. While their effort serves as a valuable first attempt, the model constraints limit the insights about visual processes that can be drawn from this study.

The potential influence of visual search processes on the drift rate, or evidence accumulation, is a common thread in the findings discussed. However, direct evidence for such an effect has not been systematically studied. Furthermore, it remains unclear how, or even if, race might influence search efficiency. Thus, a comprehensive understanding of the role of visual search is essential for deciphering the mechanisms underlying racial bias in decision-making tasks.

The Current Research Proposal

There are significant gaps in our understanding of the factors influencing evidence accumulation and decision-making in the decision to shoot. In particular, the influence of object search, an important aspect of visual perception and attention, has been largely overlooked in the existing literature. By investigating the role of race in guiding search efficiency in complex visual environments, this research aimed to fill this gap and provide a more comprehensive understanding of racial bias in deadly force decisions. Therefore, this research addressed the following questions: (1) Does race influence search efficiency? (2) Do differences in search efficiency result in distinct patterns in drift rate?

These questions were explored by introducing object search to weapon identification tasks: a random search array was added, and set sizes were manipulated. A larger set size typically leads to more challenging search tasks, as the target object becomes more difficult to locate among an increased number of distractors. By analyzing participants' search performance across different set sizes, efficiency in finding the target can be evaluated.

In Study 1, the response time window was 10 seconds to ensure that participants could perform the search task accurately. This choice was made for several reasons: first, creating a meaningful time restriction without a baseline is challenging, and second, in studies where set size is manipulated, reaction time is the measure of interest. The reaction time of correct identifications across set sizes is used to calculate search efficiency, or the search slope. A strict deadline imposed without proper consideration may place artificial limitations on the search slope, such as giving the appearance of a flatter or more efficient slope while ignoring incorrect responses. However, the long response windows limited error rates and thus precluded the use of the DDM. Building on the findings of Study 1, Study 2 introduced a response time window informed by the distributions observed in the first study. This introduced errors into the task, allowing for the application of the DDM, which gives further insight into the role of race in search.
STUDY 1

While various methods can be used to investigate the impact of race on search efficiency, this study employed a random search array organized as an 8x8 grid with set sizes of 12, 16, and 20 (Hout & Goldinger, 2010, 2012, 2015). Race information, in the form of a headshot of a Black or White man, and target information, in the form of categorical word cues, were provided before the presentation of the visual search array. These manipulations were chosen for several reasons.

First, using set sizes to understand search efficiency by analyzing reaction time as a function of set size is a well-established experimental method in the visual search literature (Eckstein, 2011; Treisman & Gelade, 1980; Wolfe, 2021). Second, allowing the target object to appear in any cell mitigates potential contextual cueing effects (Chun, 2000). For example, if a circular array were implemented, participants could adopt a search strategy driven by looking at possible object positions before object presentation, diminishing the role of search. In addition, similar to Hout and Goldinger (2010, 2012, 2015), the grid was broken into four quadrants in which 3, 4, or 5 items appeared (based on the current set size), which is intended to prevent the effects of object clustering. In general, if each object's position were allowed to vary completely at random, issues with object overlap or tightly clustered items could influence attentional guidance. Third, using an empty background in the search array allows for careful manipulation of the set size, whereas working with naturalistic scenes requires additional consideration of which elements may draw attention (Henderson & Hayes, 2017). In the same vein, when using naturalistic scenes, careful consideration must be given to how scene syntax and semantic meanings guide search (Castelhano & Heaven, 2011; Spotorno et al., 2014). Fourth, although using categorical word cues is a departure from FPST- and WIT-type studies, it serves two critical purposes here. It highlights which items should be searched for in each trial, differentiating the targets from distractors. It also works to control the specificity of the categories participants are searching for, which better aligns with the fact that search efficiency is enhanced when more precise information about the target object is provided (Hout & Goldinger, 2015; Maxfield & Zelinsky, 2012; Schmidt & Zelinsky, 2009; Yang & Zelinsky, 2009). For example, in the shooter bias literature, the object to be identified is either a gun or any of many non-gun objects (tools, phones, soda cans, etc.). The notion of a handgun encompasses a more specific category with commonly shared features, whereas the idea of "non-gun objects" is more diffuse, encompassing diverse items possessing distinct characteristics.

Participants performed the visual search task under three conditions: Black prime, White prime, or no-prime control. The search slopes were compared between the three conditions to assess the influence of race primes on search efficiency. Three levels of race presentation (Black vs. White vs. No prime) and two levels of target object type (Gun vs. Non-Gun) were manipulated within subjects. In each condition, three levels of set size (12, 16, 20) were manipulated in equal proportions.
Method

Participants

Student participants were recruited via Michigan State University's Department of Psychology HPR/SONA system to complete an "Attention and Perception" task for full credit towards the fulfillment of required credit hours in their introductory psychology class. Three hundred and twenty-nine participants were recruited; however, 6 participants who had over 100 errors in the task were excluded. In addition, 7 participants were excluded who did not have a matching Qualtrics survey due to experimenter error in setup. The remaining sample (N = 316; 55 men, 252 women, 9 NA; mean age = 19.75) was primarily White (69%), with marginal representation for Asian (14%), Black (7%), and other/multiracial (10%) participants.

Apparatus

Participants completed the task in PsychoPy (Version 2023.2.1; Peirce et al., 2019) on a 24-in. monitor (20.88 by 10.75 in.). Participants were seated approximately 21 in. (or 55 cm) away from the monitor but could adjust this distance. Note that one monitor was 22 in. (9.5 by 11.5 in.); all stimuli were scaled appropriately.

Stimuli

Images of real-world objects, such as handguns and hand-held harmless objects, were used as target stimuli. There were 33 items: 17 handguns and 16 harmless objects. Due to experimenter error, an extra handgun was left in the stimulus set. Non-guns comprised the following categories: wallet, hairbrush, cellphone, hammer, flashlight, game controller, stapler, and soda can. Although the harmless objects were made up of multiple categories, participants received specific item information before each trial. Distractor objects were images of real-world objects that are visually and categorically dissimilar from the target objects, such as fruit, bicycles, and Barbie dolls (see Table 1 for the full list of distractor objects). Most objects were sourced from the Massive Memory Database (Konkle et al., 2010), with the exception of the wallet and cellphone photos, which were taken from online searches. In the race priming condition, 40 neutral-emotion headshot images of Black and White males wearing the same clothing were used as prime stimuli, with 20 images featuring Black males and 20 featuring White males. These images were obtained from the Chicago Face Database (Ma et al., 2015). Each face appeared in each object condition three times and in each set size condition four times. (All stimuli, materials, and data can be found at OSF | Examining Racial Bias in Evidence Accumulation: Exploring the Impact of Object Search.)

Search Array Organization

A structured random search array was employed to mimic cluttered environments and facilitate object presentation (see Figure 5). The display was organized as an 8x8 grid, which divides the screen into four equal 4x4 quadrants. However, the four innermost cells were excluded to prevent participants' gaze from falling on items close to the fixation point. Each quadrant contained an equal number of objects, depending on the set size (3, 4, or 5 objects per quadrant). The grid was designed to maintain a visual angle of 2-2.5° for objects and a minimum of 1.5° between adjacent objects and between objects and the screen edges. Visual angles are a measure of the apparent size of an object when perceived from a certain distance. In this context, visual angles allow for precise control of how objects are rendered on a screen using the object's size and the observer's distance.
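The conversion between a desired visual angle and on-screen size follows from basic trigonometry: size = 2 x distance x tan(angle / 2). A quick sketch at the approximate 55 cm viewing distance used here:

    import math

    def size_for_visual_angle(angle_deg: float, distance_cm: float) -> float:
        """On-screen size (cm) that subtends angle_deg at distance_cm."""
        return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

    print(round(size_for_visual_angle(2.0, 55), 2))  # ~1.92 cm
    print(round(size_for_visual_angle(2.5, 55), 2))  # ~2.40 cm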
These visual angles were chosen as analogs to those found in the literature (e.g., Hout & Goldinger, 2015) and ensure a sufficient distance between objects to minimize crowding effects. Compared to the 6x6 grid employed by Hout and Goldinger (2015) on a 21-inch monitor, the current 8x8 grid is scaled for a 24-inch monitor and maintains the visual angle requirements for the objects and their separation.

To ensure that target positions relative to the center of the screen were equivalent across conditions, a random assignment method was used to distribute the target objects across the grid cells. Each trial randomly assigned the target objects to these cells. The randomness of this assignment was evaluated and confirmed through a simulation of 360 trials with 300 iterations, each iteration representing a participant, with varying conditions of race presentation (Black, White, and No Prime), target object type (Gun or Non-gun), and set size (12, 16, 20). The average distances from the center for each condition, averaged across all participants, were computed and are presented in Table 2. As indicated by the results, the average distance from the center is approximately equal across all conditions and participants, suggesting that the random assignment method did not introduce a systematic bias in the positioning of the target objects. Appendix A presents a detailed account of the simulation process and the Python script used to generate these values.

Figure 5: An example of the visual search array. Note that the grid lines and grey boxes were not shown during the task.

Procedure

Participants' task was to locate and identify the target object among the distractors. Participants were instructed to respond as quickly as possible while remaining accurate. At the beginning of each trial, a fixation cross appeared for 500 ms, followed by word cues of the target items for 1000 ms, followed by the visual search display, which remained until a response was recorded or 10 seconds elapsed. In the race priming condition, the race stimuli appeared for 500 ms after the word cues. Participants used a keyboard to make decisions, with "Q" representing "gun" and "P" representing "non-gun." Reaction times were measured from display onset to a button press. In any given trial, only one target object from either of the two categories appeared (see Figure 6 for an example).

Figure 6: An example of a typical trial. Note that in the no prime condition, an additional fixation point was used in place of the face.

The paired word cues always had a gun as one category and a randomly selected non-gun as the other. To encourage an active search for the target items, an additional manipulation was added to the task: in 36 randomly selected trials, all objects were replaced by a number from 1 to the current number of items in the display (see Figure 7). Using a mouse response, participants then selected the number that replaced the target object. After eight practice trials, 360 experimental trials were presented in 3 blocks of 120. There were 240 trials in the race condition and 120 trials in the no-race condition. Within each block, there were 40 trials at each set size. Within each set size, there were 20 trials for each object type. Across blocks, this resulted in 20 trials per race by object by set size condition. In the race condition blocks, each face was paired with three gun and three non-gun items, for a total of six trials across blocks.
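The cell structure of this design (18 race by object by set size cells, 20 trials each) can be written out explicitly. The sketch below is illustrative only; the actual task was programmed in PsychoPy, and trials were randomized within blocks rather than over the full list as shown here:

    import itertools
    import random

    def build_trials() -> list[dict]:
        """Cross the design factors and repeat each cell 20 times (360 trials).
        A sketch of the design's cell structure, not the original task code."""
        cells = itertools.product(["Black", "White", "No prime"],
                                  ["gun", "non-gun"],
                                  [12, 16, 20])
        trials = [{"race": race, "object": obj, "set_size": size}
                  for race, obj, size in cells
                  for _ in range(20)]
        random.shuffle(trials)
        return trials

    print(len(build_trials()))  # 360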
Participants were given one minute of rest, or longer, between blocks. Block order was randomized across participants. Set size, object type, and target location were randomized within blocks.

Figure 7: An example of the manipulation check. Once participants made a decision, the objects were replaced with numbers.

Manipulation check

Undergraduate research assistants reported during preliminary trial testing that when they could not rapidly identify a gun on the screen, they would default to choosing the non-gun response. This behavior suggests a search strategy that mirrors a target-present versus target-absent decision-making process, potentially leading to variations in response times compared to scenarios where participants actively identify an item before responding. An additional manipulation adapted from Hout and Goldinger (2015) was introduced in 36 randomly selected trials to address this. In these trials, upon making a decision, all objects on the screen were replaced with a number ranging from 1 to the total number of items displayed. After a 2-second interval, this screen was replaced by a lineup of four numbers. Participants were then required to select, via mouse response, the number corresponding to the target object. This manipulation, evenly distributed across the various conditions, aimed to encourage active item search by the participants.

Results

Behavioral

The experimental design involves three independent variables: Race condition (Black vs. White vs. No prime), Set size (12 vs. 16 vs. 20), and Object type (Gun vs. Non-Gun). These variables were manipulated within participants, with the order of conditions randomly assigned. The dependent variable is the response time (RT) for each trial, representing the time participants take to locate and identify the target object accurately. Incorrect responses, as well as response times below 300 ms or above 10,000 ms, were removed. See Figure 8 for average response times and error rates.

The data were analyzed using a linear mixed effects model to predict the response time to the target object as a function of the race condition, set size, object type, and their interactions. In addition, a logistic mixed effects model was specified to predict accuracy across conditions. To account for non-independence across participants and targets (Judd et al., 2012), I initially proposed specifying (1) the participant intercept, race condition slope, set size slope, object type slope, and their interactions for participants, (2) the target intercept and set size slope for targets, and (3) the prime intercept and object type slope for primes. However, this initial model proved too complex for practical specification. Subsequent testing of each model component revealed that a simplified model, which specified only the participant and target intercepts as random effects, was most effective. Including slopes generally led to convergence issues, while random intercepts for primes introduced singularity effects. The race condition, set size, and object type were effect coded. The analysis was conducted using the lme4 (Bates et al., 2023), lmerTest (Kuznetsova et al., 2020), and emmeans (Lenth et al., 2024) packages in R.

Figure 8: Correct response times (top) and proportion errors (bottom) for all conditions.

Response Time. A multilevel linear regression analysis was conducted to predict response time. The model included fixed effects for race condition, set size, object type, and their interactions. Random effects included random intercepts for participants and targets.
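Schematically, this model can be written as follows (a notational sketch of the model just described; the subscripts and variance labels are mine, not lme4 output):

RT_ijk = b0 + (effects of race condition, set size, object type, and their interactions) + u_i + v_j + e_ijk,

where i indexes participants, j indexes targets, and k indexes trials, with random intercepts u_i ~ Normal(0, sigma_u^2) and v_j ~ Normal(0, sigma_v^2) and residual e_ijk ~ Normal(0, sigma_e^2).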
In this model, there was a main effect of target type (b = -203.04 ms, 95% CI [-288.14, -117.93]), such that participants responded faster to guns (M = 1316 ms, 95% CI [1193, 1439]) than to non-guns, a difference of approximately 406 ms. The expected effect of set size was found, with faster responses at set size 12 (b = -169.00 ms, 95% CI [-176.49, -161.51]) and slower responses at set size 20 (b = 173.79 ms, 95% CI [166.28, 181.29]).

There was an interaction between race and target type (b = -9.22 ms, 95% CI [-16.72, -1.72]), such that participants' responses were slower on White non-gun trials (M = 1733 ms, 95% CI [1606, 1860]) compared to no prime non-gun trials (M = 1709 ms, 95% CI [1582, 1836]; b = 24.19 ms, 95% CI [5.82, 42.55]). In addition, participants responded marginally faster on White gun trials (M = 1309 ms, 95% CI [1185, 1432]) than on Black gun trials (M = 1326 ms, 95% CI [1203, 1450]; b = -17.88 ms, 95% CI [-36.23, 0.49]). An interaction was observed between target type and set sizes 12 (b = 18.85 ms, 95% CI [11.36, 26.34]) and 20 (b = -21.24 ms, 95% CI [-28.75, -13.74]), indicating that participants' responses were faster on gun trials across set sizes compared to non-gun trials. Table 7 summarizes the mean response times and 95% confidence intervals by set size and target type. A linear contrast test indicated that participants' searches were more efficient when the target item was a gun (b = 80.2 ms, 95% CI [54.2, 106.2]). That is, the increase in response times associated with increased set sizes was smaller on gun trials (see Figure 9). No significant interactions were found between race and set size, and the observed two-way interactions did not extend to a three-way interaction. Overall, participants were faster on gun trials than non-gun trials, and the typical race by object interactions were not found.

Figure 9: Search slopes by Target Type and Set Size. Bars are 95% CI.

Error Rates. To predict the proportion of correct responses, a multilevel logistic regression was estimated with fixed effects for race condition, set size, object type, and their interactions. Random effects included random intercepts for participants and targets and random slopes for set size for targets. The only effects to emerge were main effects of set size 12 (b = 0.08, 95% CI [0.03, 0.14]) and set size 20 (b = -0.09, 95% CI [-0.16, -0.03]); however, the differences are small and not particularly informative (in log-odds, set size 12: M = 3.83, 95% CI [3.72, 3.95]; 16: M = 3.76, 95% CI [3.64, 3.88]; 20: M = 3.66, 95% CI [3.55, 3.77]). No other effects were found, and thus the exploratory analysis focused on response times.

Exploratory Analysis – Block Order. As part of an exploratory analysis, a noticeable decrease in response times across blocks was observed, suggesting a possible practice effect. To account for this, a multilevel linear regression analysis was conducted to predict response time while controlling for the effect of practice. The model included fixed effects for race condition, set size, object type, and their interactions. Additionally, block order was included as a covariate but not as part of any interaction terms. Random effects included random intercepts for participants and targets.
Initial observations indicated an effect of block order, with participants' response times decreasing across blocks (Block 1: M = 1638 ms, 95% CI [1546, 1730]; Block 2: M = 1485 ms, 95% CI [1393, 1576]; Block 3: M = 1435 ms, 95% CI [1343, 1527]). Further analysis using a polynomial contrast test revealed significant linear and quadratic trends, indicating that although participants' responses sped up across blocks (b = -203 ms, 95% CI [-215.9, -190]), this effect plateaued going from block 2 to block 3 (b = 104 ms, 95% CI [81.2, 126]). That is, it appears that participants more or less understood the task by the final block. Despite these trends, the effects related to race condition, set size, and object type remained consistent (see Tables 10-15).

Exploratory Analysis – Manipulation Check. As an additional exploratory analysis, the error rate distributions from the manipulation check were examined by condition and across participants (see Figure 10). The findings show that the majority of participants demonstrated a high degree of accuracy, with 84 percent having fewer than five errors. To investigate whether response times differed across participants with higher errors, the manipulation check error rates were mean-centered and added to the multilevel model predicting response times. This model accounted for the original fixed effects and the new interaction between mouse task errors, set size, and target type. Similar to the inclusion of block order, adding manipulation check error rates to the model did not substantially alter the overall findings (see Tables 16-21). However, for each additional error, response times increased (b = 5.17 ms, 95% CI [0.75, 9.6]). Additionally, an interaction between target type and the manipulation check indicated that the per-error slowing was larger on non-gun trials (b = 7.87 ms, 95% CI [3.40, 12.35]) than on gun trials (b = 2.48 ms, 95% CI [-2.0, 6.95]; difference b = -5.40 ms, 95% CI [-6.786, -4.014]). There was no interaction between set size and manipulation check errors, nor were these effects qualified by the three-way interaction with set size. The longer response times may be attributed to a target-present versus target-absent search, such that responses are generally longer in this scenario (Wolfe, 2021), but given the small amount of data, this may be better attributed to noise or inattentiveness.

Figure 10: Count of errors by participant (top) and proportion of errors across conditions (bottom).

Process Level

Given that participants were given a large response time window to encourage accurate search, there was not enough error rate data to apply a Drift Diffusion Model.

Discussion

The purpose of Study 1 was twofold: first, to investigate whether race would affect search efficiency, and second, to establish a possible response time window for Study 2. Given that there were little to no differences in error rates across conditions, this discussion is limited to the analyses of response times. The interactions that would have suggested a search efficiency effect for race would have been found in either the race by set size interactions or the race by set size by target type interaction; however, neither reached significance, indicating that in this task, race did not improve or impair the search process meaningfully. Further, these interactions did not emerge when controlling for block order or the manipulation check.
That being said, an effect of search efficiency did emerge such that participants' searches for gun objects were more efficient than their searches for non-gun items. It is not clear what is driving this effect: participants may have been attuned to the fact that guns are likely the key item in the task, guns may simply be easier to distinguish because of some combination of features, or the diverse item set for non-gun items may have impaired the search process. In addition, it is worth noting that the typical race-by-target type interaction was not found. That is, the literature generally supports the idea that gun responses following Black primes are faster and non-gun responses following Black primes are slower. Instead, this study found an interaction such that participants' responses were slower following White and Black primes than following no prime on non-gun trials.

STUDY 2

The primary objective of this study was to deepen our understanding of the relationship between race and search efficiency. This was achieved by introducing a time window constraint to induce errors in the search process, thereby allowing for drift-diffusion modeling. Study 1 did not find an effect of race on search efficiency, and this outcome could have several interpretations. It might suggest that, in this specific task context, race is not a useful source of information, which would be represented by no credible differences between Black and White race primes. Alternatively, counter-stereotypic attitudes could influence the results, leading to findings that diverge from the broader literature on racial biases. An example of this can be seen in recent work from the Cesario lab on shooter tasks: participants demonstrated no behavioral differences in response times or error rates, yet DDM results indicated that this was driven by lower starting points and higher evidence thresholds for Black targets. In addition, if the target type search efficiency effect emerges once again, the DDM parameters can be used to infer whether these search differences are reflected in evidence accumulation.

The response window (2000 ms) was set to capture approximately 70% of the original response times for non-guns at set size 20 in Study 1. The window was kept relatively long because the focus is on search efficiency, and an overly constraining window could undermine the connection to Study 1. By reducing the response window in this way, I expected to maintain the task's sensitivity to the manipulations of interest while ensuring that participants were sufficiently pressured to respond quickly. Three levels of race presentation (Black vs. White vs. No prime) and two levels of target object type (Gun vs. Non-gun) were manipulated within subjects. In each condition, three levels of set size (12, 16, 20) were manipulated in equal proportions.

Method

Participants

Student participants were recruited via Michigan State University's Department of Psychology HPR/SONA system to complete an "Attention and Perception" task for a full credit towards the fulfillment of required credit hours in their introductory psychology class. Three hundred and twenty-two participants were recruited; however, 9 participants who had over 100 errors in the task were excluded. In addition, 6 participants were excluded who did not have a matching Qualtrics survey due to experimenter error in setup.
The remaining sample (N = 308; 118 men, 188 women, 2 unreported; mean age = 19.5) was primarily White (73%), with marginal representation for Asian (10%), Black (8%), and other/multiracial (9%) participants.

Data collection was paused twice, after 19 and after 52 participants had completed the task, to ensure that the selected response time window was appropriate. This was done by inspecting response times and error rates to determine whether the response time window behaved as expected. The first 19 participants had a response window of 2300 ms, which produced an error rate of less than 5% at the highest set size. This was not suitable for DDM entry, so the response window was restricted to 2000 ms. This second session produced an error rate between 5% and 9% from set size 12 to 20. These participants were not included in the final sample.

Apparatus

Participants completed the task in PsychoPy (Version 2023.2.1; Peirce et al., 2019) on a 24-in. monitor (20.88 by 10.75 in.). Participants were seated approximately 21 in. (or 55 cm) away from the monitor but could adjust this distance. Note that one monitor was 22 in. (9.5 by 11.5 in.), and items were scaled down appropriately.

Stimuli

All stimuli were the same as the stimuli used in Study 1.

Search Array Organization

The specifications of the search array are the same as those listed in Study 1.

Procedure

The procedure is similar to Study 1, with a few exceptions. First, a 2000 ms response time window was implemented. Second, if participants responded outside of the window, they were prompted to "Please respond faster."

Manipulation check

Several changes were made to the manipulation check to ensure it only appeared when participants made a correct decision within the response time window. Given that participants were expected to make more errors, the manipulation check only appeared on correct identification trials. Further, performing the manipulation check only when participants responded within the response time window aimed to reduce noise from guessing after the items disappeared. With this approach, it was not guaranteed that conditions would be evenly split if some participants made more errors for specific combinations of targets and races, but the code was designed to cycle through each combination iteratively and display the manipulation check for the set of conditions with the lowest count value. It aimed to record only 36 trials, 2 per possible condition.

Results

Behavioral

The design of Study 2 closely mirrors that of Study 1. The experimental design remains the same, with three independent variables manipulated within participants: Race Condition (Black vs. White vs. No prime), Set Size (12 vs. 16 vs. 20), and Object Type (Gun vs. Non-gun). The order of conditions continued to be randomly assigned. There are two dependent variables: response time, which represents the time participants take to correctly find and identify the target object, and error rates, which quantify the frequency of incorrect identifications or timeouts across trials. Responses that fell below 300 ms or above 4000 ms were excluded from both the response time and error rate analyses (Ratcliff et al., 2018). Timeouts were not treated as errors, and only correct response times were used in the response time analysis. As in Study 1, MLM was employed: a linear model was used for response times, and a logistic model was used for error rates. See Figure 11 for average response times and error rates.
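As an illustration of the preprocessing and model structure just described, the sketch below shows one way the Study 2 trial data could be prepared and modeled in R. It reuses the hypothetical column names from the earlier sketch, adding timeout (0/1) and prime_id (prime face identity); the handling of timeouts shown here is one reasonable reading of the text above, not the verbatim analysis code.

```r
library(dplyr)
library(lmerTest)

# Exclude implausible latencies from both analyses (300-4000 ms)
d2 <- filter(d2, rt >= 300, rt <= 4000)

# Timeouts are not treated as errors, so they are dropped from the
# accuracy model; only correct responses enter the RT model
d_acc <- filter(d2, timeout == 0)
d_rt  <- filter(d2, correct == 1)

# Response time model: as in Study 1, but with an additional
# random intercept for the prime faces
m_rt2 <- lmer(rt ~ race * set_size * target +
                (1 | participant) + (1 | target_id) + (1 | prime_id),
              data = d_rt)

# Error rate model: random intercepts for participants and targets
m_acc2 <- glmer(correct ~ race * set_size * target +
                  (1 | participant) + (1 | target_id),
                data = d_acc, family = binomial)
```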
Response Time. A multilevel linear regression analysis was conducted to predict response time. The model included fixed effects for race condition, set size, object type, and their interactions. Random effects included random intercepts for participants, targets, and prime faces. In this model, there was a main effect of target type (b = -122.88 ms, 95% CI [-166.34, -79.42]) such that gun responses (M = 981 ms, 95% CI [919, 1043]) were faster than non-gun responses (M = 1227 ms, 95% CI [1163, 1291]). The expected effect of set size was found, with faster responses at set size 12 (b = -74.48 ms, 95% CI [-78.02, -70.93]) and slower responses at set size 20 (b = 70.71 ms, 95% CI [67.13, 74.28]).

There was also an interaction between target type and set sizes 12 (b = 5.03 ms, 95% CI [1.49, 8.58]) and 20 (b = -6.50 ms, 95% CI [-10.08, -2.92]), indicating that the slowing associated with larger set sizes was smaller on gun trials than on non-gun trials. Table 26 summarizes the mean response times and 95% confidence intervals by set size and target type. A linear contrast test indicated that participants' searches were more efficient when the target item was a gun (b = 23.07 ms, 95% CI [10.7, 35.4]); that is, the increase in response times associated with increased set sizes was smaller on gun trials (see Figure 12). No significant interactions were found between race and set size or race and object type, and the observed two-way interactions did not extend to a three-way interaction. Overall, the response time findings closely match those of Study 1, with faster response times for guns than non-guns. In addition, the constrained response time window led to faster responses in general.

Figure 11: Correct response times (top) and proportion errors (bottom) for all conditions.

Error Rates. To predict the proportion of correct responses, a multilevel logistic regression was estimated with fixed effects for race condition, set size, object type, and their interactions. Random effects included random intercepts for participants and targets. There was a main effect of race for Black primes (b = 0.05, 95% CI [0.02, 0.08]) such that participants were more accurate following Black primes than no primes (b = 0.10, 95% CI [0.05, 0.16]). There were main effects of set size 12 (b = 0.24, 95% CI [0.21, 0.27]) and set size 20 (b = -0.20, 95% CI [-0.24, -0.17]) such that as set size increased, participants' accuracy decreased (12: M = 2.90, 95% CI [2.79, 3.02]; 16: M = 2.63, 95% CI [2.52, 2.74]; 20: M = 2.46, 95% CI [2.35, 2.57]).

There was an interaction between prime race and target type (b = -0.03, 95% CI [-0.07, -1.02e-03]): participants' responses were more accurate on non-gun trials following White (b = 0.11, 95% CI [0.03, 0.19]) and Black (b = 0.18, 95% CI [0.10, 0.26]) primes compared to the no prime condition. There was also an interaction between set size and target type such that as set size increased, participants' accuracy decreased more for guns (12: M = 2.91, 95% CI [2.76, 3.06]; 16: M = 2.56, 95% CI [2.41, 2.70]; 20: M = 2.31, 95% CI [2.16, 2.46]; b = -0.32, 95% CI [-0.43, -0.21]) than for non-guns (12: M = 2.89, 95% CI [2.74, 3.05]; 16: M = 2.71, 95% CI [2.55, 2.86]; 20: M = 2.61, 95% CI [2.46, 2.76]). A three-way interaction qualified these effects (see Figure 13) such that as set size increased, participants' accuracy decreased more following White primes than no primes in the non-gun condition (b = -0.20, 95% CI [-0.40, -0.002]).
However, it is worth noting that although participants' accuracy decreased more following White primes, they also started and ended with higher overall accuracy than in the no prime condition. Overall, accuracy decreased as set size increased, but the most interesting aspect is that participants made more errors on gun trials than non-gun trials. This particular effect is unexpected and may be better explained by performance on the manipulation check.

Figure 12: Search slopes by target type and set size. Bars are 95% CI.

Figure 13: Predicted accuracy (on the logit scale) for race, target type, and set size. Bars are 95% CI.

Exploratory analysis – Block Order. Similar to Study 1, a noticeable decrease in response times across blocks was observed, suggesting a possible practice effect. To account for this, a multilevel linear regression analysis was conducted to predict response time while controlling for the effect of practice. The model included fixed effects for race condition, set size, object type, and their interactions. Additionally, block order was included as a covariate but not as part of any interaction terms. Random effects included random intercepts for participants and targets. The corresponding multilevel logistic model predicting accuracy was specified similarly.

Initial observations indicated an effect of block order, with participants' response times decreasing across blocks (Block 1: M = 1168 ms, 95% CI [1123, 1214]; Block 2: M = 1086 ms, 95% CI [1041, 1132]; Block 3: M = 1059 ms, 95% CI [1014, 1105]). A polynomial contrast analysis revealed linear and quadratic trends such that participants responded faster across blocks (b = -109 ms, 95% CI [-115.4, -103.1]), though this effect was less pronounced in the later blocks (b = 55.3 ms, 95% CI [44.7, 65.9]). Not only did participants respond faster, but they also became more accurate from block 1 (M = 2.58, 95% CI [2.46, 2.69]) to block 2 (M = 2.71, 95% CI [2.60, 2.83]; b = -0.14, 95% CI [-0.19, -0.08]) before plateauing at block 3 (M = 2.71, 95% CI [2.59, 2.82]; b = 0.01, 95% CI [-0.05, 0.06]).

Interestingly, while the original response time model showed no effect of race, when block order was added, an effect of race emerged (b = 5.08 ms, 95% CI [0.92, 9.24]) such that participants' responses were slower following White primes (M = 1107 ms, 95% CI [1061, 1152]; b = 8.37 ms, 95% CI [2.22, 14.52]) and Black primes (M = 1108 ms, 95% CI [1063, 1154]; b = 9.95 ms, 95% CI [3.81, 16.09]) compared to no prime (M = 1098 ms, 95% CI [1053, 1144]). An interaction with target type qualified this effect such that participants' responses to non-guns were slower following White primes (M = 1233 ms, 95% CI [1169, 1297]; b = 15.70 ms, 95% CI [7.03, 24.37]) and Black primes (M = 1231 ms, 95% CI [1168, 1295]; b = 13.95 ms, 95% CI [5.29, 22.60]) compared to no primes (M = 1217 ms, 95% CI [1154, 1281]). All other effects relating to race condition, set size, and object type in the error rate and response time models remained consistent (Tables 33-47).

Exploratory analysis – Manipulation Check. As an additional exploratory analysis, the error rate distributions from the manipulation check were examined by condition and across participants (see Figure 14). The findings show that the majority of participants demonstrated a high degree of accuracy, with 77 percent having fewer than five errors.
Thus, to investigate whether response times and error rates differ across participants with worse performance on the manipulation check, the error rates were mean-centered and added to the multilevel models predicting response times and accuracy. These models accounted for the original fixed effects and the new interaction between manipulation check (mouse task) errors, set size, and target type. Adjusting for manipulation check errors, much like controlling for block order, left most results unchanged but revealed a significant effect of race (b = 5.08 ms, 95% CI [0.92, 9.24]) such that participants responded slower following White primes (M = 1107 ms, 95% CI [1061, 1152]; b = 8.37 ms, 95% CI [2.22, 14.52]) and Black primes (M = 1108 ms, 95% CI [1063, 1154]; b = 9.95 ms, 95% CI [3.81, 16.09]) compared to no prime (M = 1098 ms, 95% CI [1053, 1144]). Alone, the effect of manipulation check errors was statistically non-significant (b = -1.270 ms, 95% CI [-2.98, 0.44]). However, there was an interaction between target type and the manipulation check such that as participants performed worse on the manipulation check, they responded faster, an effect primarily driven by the gun condition (b = -0.713 ms, 95% CI [-1.391, -0.034]; see Table 53). The three-way interaction with set size did not qualify these effects.

Figure 14: Count of errors by participant (top) and proportion of errors across conditions (bottom).

When manipulation check errors were mean-centered and added to the error rate model, there was a main effect such that overall errors increased with each additional error made in the manipulation check (b = -0.013, 95% CI [-0.021, -0.004]). This was qualified by an interaction with target type such that as participants performed worse on the manipulation check, more errors were made on gun than non-gun trials (b = -0.030, 95% CI [-0.036, -0.024]). The three-way interaction with set size led to convergence issues and was omitted from the model.

Together, the response time and error rate models suggest that participants with worse performance on the manipulation check responded faster, and this was especially noticeable on gun trials. This decrease in response time may also explain why more errors were made for gun targets: if participants who generally performed worse, or who did not engage with the task as expected, tried to find the gun rapidly, then at larger set sizes the quitting threshold (Wolfe, 2021) might be smaller, leading to more misses for guns. In addition, although such participants may correctly choose non-gun when a non-gun target is present, doing so without fixating the target would produce more errors for non-guns on the manipulation check. As in Study 1, there is less data from poor performers, but this seems to suggest a pattern of differential search strategies.

Summary

The behavioral data show no effect of race on search efficiency in either the two-way or three-way interaction. For race, the drift-diffusion model can highlight how, or whether, race is being used in this task. For example, in previous work in the lab, we have found the stereotypical response in the race by object drift rates, such that participants accumulate stronger evidence when a gun is paired with Black than with White targets, but these effects were masked by participants setting wider thresholds for Black targets such that they needed more evidence to make a decision.
So it could be that some similar phenomenon masks race differences between Black and White primes, or perhaps, as in the behavioral results, no differences emerge between White and Black primes and differences instead manifest solely between these two racial categories and the no prime condition.

As in Study 1, however, there is a search efficiency effect for guns such that participants' responses are more efficient for gun items than non-gun items. This search efficiency effect did not lead to greater accuracy; in fact, it appears that starting at set size 16, participants missed more guns than non-guns. This pattern of results is unexpected. It is possible that it was driven by participants with higher manipulation check errors, such that participants with more errors responded faster and were less accurate at finding guns because of how they engaged in the search process. Thus, it seems possible that participants sometimes rapidly found the gun item they were searching for, and when they did not, they defaulted to a non-gun response. This could explain why error rates increased more for guns at the higher set sizes: if participants could not quickly find the gun, they may have assumed it was not present. It is not clear how these different results will affect the drift diffusion model parameters, given that the decreased response times for gun items should lead to stronger drift rates, or evidence accumulation, while the higher error rate suggests more noise in the evidence accumulation process; in other words, a weaker drift rate. Thus, the drift diffusion model can help disentangle what is driving the target type search efficiency effects and the race results.

Process Level

A Hierarchical Bayesian Drift Diffusion Model (HDDM) was implemented following the guidelines outlined by Pleskac et al. (2018). This version of the model allowed the start point to vary according to race prime, the threshold to vary by race prime and set size, and the drift rate and non-decision time to vary according to race of the prime, object type, and set size. Uninformative priors were used for each parameter to let the data have a maximal influence on the posterior estimates. The model was estimated using a Markov Chain Monte Carlo (MCMC) simulation in Just Another Gibbs Sampler (JAGS; Plummer, 2003) in conjunction with the Wiener module (Wabersich & Vandekerckhove, 2014). The analysis gathered 10,000 samples with an adaptive phase of 1,000 samples and a burn-in period of 1,000 samples.

Bayesian methods of inference were employed, which provide a distribution of credible values for each parameter. These credible values represent a range of potential values for a parameter that are consistent with the observed data. The most credible value, or the mode of the posterior distribution (i.e., the value with the highest probability), is reported for each parameter. In addition, the Highest Density Interval (HDI) is reported. The HDI, encompassing 95% of the posterior distribution, represents the range of credible values. An effect, such as a race by object interaction on drift rates, is considered credible when the HDI does not contain zero. If the HDI contains zero, the null hypothesis is within the range of credible values, lowering confidence that there is a difference between conditions.
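To make this specification concrete, below is a deliberately simplified rjags sketch of a Wiener diffusion model in the spirit of the description above, using the JAGS Wiener module (Wabersich & Vandekerckhove, 2014). The actual model additionally includes hierarchical participant-level structure; all data names (y, prime, obj, set) are hypothetical, and the prior ranges shown are placeholders standing in for the uninformative priors described above.

```r
library(rjags)
load.module("wiener")  # adds dwiener(alpha, tau, beta, delta) to JAGS

# y[i] holds signed response times in seconds: positive for
# upper-boundary ("gun") responses, negative for lower-boundary
# ("non-gun") responses. prime = 1:3 (Black/White/None),
# obj = 1:2 (Gun/Non-gun), set = 1:3 (12/16/20 items).
model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dwiener(alpha[prime[i], set[i]],         # threshold: race prime x set size
                   tau[prime[i], obj[i], set[i]],   # non-decision time
                   beta[prime[i]],                  # start point: race prime
                   delta[prime[i], obj[i], set[i]]) # drift: race x object x set size
  }
  for (p in 1:3) {
    beta[p] ~ dbeta(1, 1)
    for (s in 1:3) {
      alpha[p, s] ~ dunif(0.1, 5)
      for (o in 1:2) {
        delta[p, o, s] ~ dnorm(0, 0.01)  # JAGS parameterizes normals by precision
        tau[p, o, s]   ~ dunif(0.05, 1)
      }
    }
  }
}"

jm <- jags.model(textConnection(model_string),
                 data = list(y = y, prime = prime, obj = obj,
                             set = set, N = length(y)),
                 n.chains = 3, n.adapt = 1000)  # adaptive phase
update(jm, 1000)                                # burn-in
post <- coda.samples(jm, c("alpha", "tau", "beta", "delta"),
                     n.iter = 10000)
HPDinterval(post[[1]], prob = 0.95)             # 95% HDIs per parameter
```

The target-item intercepts raised in the General Discussion would extend delta with an additive item-level term (e.g., delta[...] + u[item[i]], with u drawn from a common normal), analogous to a random effect.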
Subsequently, given that this is a novel application of the diffusion model, posterior predictive checks were performed for each condition, namely Black/White/No prime and Gun/Non-gun across set sizes. These checks analyzed decision probabilities (Gun/Non-gun) and the means and distributions of response latencies. This procedure involves simulating data using the model described above and comparing the simulations to the original data.

The posterior predictive checks revealed systematic discrepancies between observed data and predictions across various conditions. Hit rates are overestimated, and false alarms are slightly underestimated, though the extent of this misestimation is minimal. When examining response times, correct responses to gun stimuli are consistently overestimated, implying the model predicts slower decision times than observed, which in turn explains the overestimated accuracy. In contrast, correct response times for non-gun stimuli align more closely with model predictions, suggesting a more accurate fit. However, both incorrect gun and non-gun responses are underestimated by a large margin, suggesting that model-predicted error responses are faster than observed.

Exploratory analyses revealed that this is likely caused by unaccounted variation in drift rate across different non-gun target items. Specifically, some non-gun items were much harder to identify and created multimodal distributions of response times that were unaccounted for in the current DDM specification. Although gun items were relatively stable, this has implications for the drift rate: within non-guns, there is a larger degree of variation in evidence accumulation, which may reflect an overestimation of the true drift rates for non-gun items. Regarding the search efficiency effect between guns and non-guns, the model's ability to capture meaningful differences is not conclusive; thus, the results are speculative, and though the various gun and non-gun effects are reported, they will be discussed within this context. Model fit, diagnostics, and all plots are listed in Appendix D.

Results. DDM results can be seen in Tables 61 to 69 and Figure 15. Contrary to expectations, the hypothesis that Black primes would lead to a higher starting point was not supported. Specifically, there were no credible differences between White primes and no primes (b = -0.001, d = -0.040 [-0.350, 0.260]) or between White and Black primes (b = 0.008, d = 0.280 [-0.030, 0.580]). However, there was a near-credible effect such that Black primes had a lower starting point than no primes (b = -0.010, d = -0.310 [-0.610, 0.010]).

Analysis of alpha effects revealed that participants' threshold separation was wider for both White primes versus no primes (b = 0.030, d = 0.170 [0.020, 0.320]) and Black primes versus no primes (b = 0.049, d = 0.260 [0.100, 0.410]). This indicates a relative preference for speed over accuracy in the no prime condition, coinciding with the faster response times found for no primes. There were no credible differences in boundary separation between White and Black primes (b = -0.016, d = -0.090 [-0.240, 0.070]). Additionally, boundary separation was found to increase with set size, demonstrating credible differences between 12 and 16 items (b = -0.049, d = -0.250 [-0.400, -0.100]), 12 and 20 items (b = -0.100, d = -0.540 [-0.680, -0.370]), and 16 and 20 items (b = -0.055, d = -0.540 [-0.440, -0.140]).
These main effects were qualified by an interaction such that participants' boundary separation was wider at set size 12 for White primes compared to no primes (b = 0.055, d = 0.320 [0.030, 0.560]). A similar effect was observed for Black primes (b = 0.082, d = 0.43 [0.15, 0.68]). No other combination of conditions was found to be credibly different (see Table 65 for details).

Figure 15: Diffusion model parameters as a function of prime race, set size, and target type for Study 2. Shapes represent modal posterior predictions at the condition level; bars are 95% HDI.

For drift rates, no credible differences were found between White and no primes (b = 0.009, d = 0.030 [-0.07, 0.13]), White and Black primes (b = -0.018, d = -0.07 [-0.16, 0.04]), or Black and no primes (b = 0.022, d = 0.08 [-0.01, 0.19]). A credible main effect was observed for target type, with guns showing stronger drift rates than non-guns (b = 0.114, d = 0.43 [0.33, 0.52]). As predicted, drift rates became weaker as set size increased, with credible differences between 12 and 16 items (b = 0.203, d = 0.77 [0.66, 0.87]), 12 and 20 items (b = 0.337, d = 1.26 [1.15, 1.39]), and 16 and 20 items (b = 0.136, d = 0.50 [0.40, 0.60]). These main effects were not qualified by the anticipated race by object interaction or the race by set size interaction (see Tables 67-69 for details).

However, a series of credible interactions were observed between target type and set size, such that drift rates were stronger for guns compared to non-guns at set sizes 12 (b = 0.259, d = 0.98 [0.82, 1.13]) and 16 (b = 0.093, d = 0.35 [0.20, 0.50]), but not at set size 20 (b = -0.011, d = -0.04 [-0.18, 0.10]). A polynomial contrast test revealed a credible linear trend, with drift rates for guns decreasing more sharply than for non-guns as set size increased (b = -0.194, d = -0.71 [-0.74, -0.70]), though this effect diminished at the final set size (b = 0.012, d = 0.04 [0.04, 0.05]). A three-way interaction did not qualify these effects.

Discussion

The purpose of Study 2 was to gain a better understanding of how race is being used in this task, if at all, and to what degree search efficiency can be observed in differences in the drift rate. Regarding race, the only credible difference to emerge was that participants collected more evidence following White and Black primes than when there was no prime. Further, there is no conclusive evidence that the null behavioral results between Black and White primes were hiding or masking race-driven differences in evidence accumulation, start point bias, or the evidence threshold.

Next, the behavioral differences in search efficiency for guns over non-guns appear to have been reflected in the drift rates such that gun items initially had stronger drift rate values, but the steeper decrease in evidence accumulation mirrored the higher error rate for gun targets. That is, the model captured a higher rate of decrease in the drift rate for gun items than non-gun items, but it is unclear whether this pattern of results would have emerged if the true drift rates had been estimated. It is possible that when the full range of variation is modeled, search efficiency effects are limited to specific non-gun conditions such that some drift rates are much weaker than those of other non-gun objects.
However, these results are speculative until modifications can be made to the drift diffusion model code to allow intercepts for target objects, similar to how they are implemented in a random-effects context where variation due to non-independence is accounted for.

GENERAL DISCUSSION

The current work explored how, or whether, race would affect search efficiency in a visual search task. Further, this work aimed to explore whether differences in search efficiency would manifest in evidence accumulation, highlighting a possible unexplored mechanism to explain differences in information processing. Across two studies, race was not found to affect search efficiency in either the race by set size interaction or the race by set size by target type interaction. In addition, the application of drift-diffusion modeling did not reveal a pattern of results indicative of the typical racial bias effect. However, a consistent search efficiency effect for target type emerged, such that participants' searches for guns were more efficient than those for non-guns in both Study 1 and Study 2. In Study 2, this effect was seen as stronger drift rates for gun items; however, participants' errors increased, which was then reflected in a large decrease in the strength of evidence accumulation. The strength and direction of these effects are still speculative, given the model misfit highlighted by the posterior predictive checks.

Search Efficiency and Modeling

This work was exploratory, testing whether race would work to guide attention to specific items in the search array. An effect of race on search efficiency was not found in either the two-way or three-way interactions on response time; this could be due to several reasons. One may be that race is not providing information that is useful or is not meaningfully changing the contents of working memory. A more nuanced discussion of the specific effects of race follows later; here, I focus on elements of search efficiency.

The second major question this dissertation aimed to answer was whether differences in search efficiency would be reflected in differences in delta, or evidence accumulation. It was found that participants' searches for guns were more efficient than those for non-guns, which was reflected in overall stronger drift rates for gun items and weaker drift rates for non-gun items. Notably, and contrary to expectations, drift rates decreased far more for guns than non-guns. When this work was first proposed, I anticipated that if there were a three-way interaction between race, target, and set size, it would be reflected such that as set size increased, the rate of decrease for the more efficient searches would be less than the rate of decrease for the less efficient searches. That is, I expected differences in search efficiency to emerge not in the overall strength of the evidence accumulation process but in the steps between each set size. This was based on an assumption that equated greater search efficiency with greater accuracy, but that was not the behavioral effect produced.

That is, although the gun searches appear more efficient, this effect was inflated since we are only looking at correct response times, and guns in Study 2 generated more errors than non-guns. Seemingly, gun items were either incredibly fast to spot or were missed in favor of a non-gun response. This effect seems to be driven by participants who made more errors in the manipulation check.
If participants assume that the gun item is consistent in every trial and an initial search does not point towards items with gun-like features, then choosing non-gun is an adequate solution. This particular strategy would explain why the greatest number of errors occurs for guns at the highest set size, where, theoretically, they would be the hardest to find in a brief search. Regardless of the speeded response times, the increase in error rates would result from weaker evidence accumulation across set sizes.

Importantly, however, until the drift diffusion model is refit, we can only speculate on whether the observed differences in evidence accumulation remain in the same direction and magnitude. If we assume that unaccounted variation in delta for target objects is the main driver of misfit, it would be interesting to see what differences emerge among the target object drift rates. The specificity of estimating a drift rate for each target object, or for target object categories, allows us to explore the possibility that stronger or weaker drift rates can be explained by differences in item features (i.e., color, shape, size, sharp or rounded features). In line with this, some non-gun stimuli seemed harder to find and identify than others, driving multimodal response time distributions. This is not necessarily bad, as real-life items can be varied and introduce these response time differences. Nevertheless, it does have implications for the comparison of search efficiency: comparing a static category (gun), where items are more similar in nature and receive more similar responses, to a dynamic category (non-gun), with drastic differences in features and in response times, may introduce task-specific effects such that search efficiency effects emerge because there is more variation in one group. Put another way, if a different set of non-gun stimuli were used that varied in ways that stood out from the search display, would the same gun search efficiency effect be found? Nailing down the features of items that impair or enhance the search process for guns could be useful in studying and identifying where police training could be improved.

Individual Differences in Search Process

First, it was not anticipated that participants would approach the task using different search strategies, which is why the manipulation check was used as an attempt to encourage participants to engage in an active search for the target items. The different search strategies could have been prompted by several design choices that diverged from the original Hout and Goldinger (2015) work. For example, participants were always looking for either a gun or some specified non-gun object, hinting that guns are the targets of interest. Effectively, then, if the instruction set is to "answer as quickly as possible while remaining accurate," participants benefit from using a search strategy that prioritizes guns over non-guns. This would especially be the case when non-gun items are harder to find in the search array. Although the manipulation check was added, due to the limited number of trials in which it appeared, it is not quite possible to disentangle all the approaches participants may have used, but here are some possibilities. The first is that participants actively search for the gun and non-gun objects simultaneously, which might be reflected in higher accuracy (Cave et al., 2018; Ort & Olivers, 2020; Stroud et al., 2012).
Note that the majority of participants in both Studies 1 and 2 had fewer than five manipulation check errors. The second is that participants search for guns and, when a gun is not present, default to non-gun without fixating the target item. This may explain the results of the manipulation check in Study 2, where participants with more manipulation check errors were generally faster than participants with fewer manipulation check errors, especially among the gun targets. When searches are treated as target-present vs. target-absent, target-absent searches are characterized by longer search times because the entire array must be scanned (Wolfe, 2021), as seen in Study 1; but if a response time window makes this untenable, then at higher set sizes we might expect more errors, or even correct guessing, to occur. This might explain the portion of participants whose manipulation check errors were driven by non-gun items and less so by gun items. A third approach is simply inattentiveness: participants either do not engage with the task, do not read the pretrial text cues, or vary in attention over blocks. This approach may be characterized by an overall high manipulation check error rate and a task error rate that is equivalent across conditions. However, there are participants who have high manipulation check error rates and a low overall error rate, which might suggest a strategy that precludes fixation of the target items. For example, it is possible for items to be processed in the field of view surrounding the current point of fixation (Wolfe, 2021). To the extent that participants see an item in the periphery, they could respond prior to full fixation. It seems likely that participants approached the task in varied ways, but the manipulation check fails to meaningfully capture or pinpoint exact differences, which should be addressed in any future work.

Race and Priming

Studies 1 and 2 show a distinct deviation from traditional findings in racial bias research using the WIT and FPST. Unlike the expected anti-Black race-by-object interactions typically found in this literature (Cesario & Carrillo, 2024), the results revealed no direct differences in object identification between White and Black primes. Instead, significant differences were observed when comparing conditions with a racial prime (either White or Black) to those without any prime. Further analysis using the DDM indicated that these unexpected findings did not obscure complex race interactions within the model's parameters. The only credible difference was noted in the evidence threshold at set size 12 between the race conditions and the no prime condition, an effect that diminished with increasing set sizes. This pattern suggests that the presence of a racial prime, rather than its specific racial identity, influenced object recognition by slowing responses.

There may be several explanations for why the typical race effects did not emerge. First, in the four years since the death of George Floyd, it is plausible that there has been a decrease in anti-Black bias influenced by the broader political and social discourse. Such shifts could be reflected in how participants respond, potentially leaning towards socially desirable behaviors (Huddy & Feldman, 2009). Second, the study design may not facilitate the use of race as a mental shortcut. Future research should focus on thoroughly examining the mechanisms through which priming may alter search processes.
First, the design of our study, which involved rapid, consecutive trials, may account for the observed tendency of race effects to manifest similarly for both White and Black primes compared to conditions without a prime. The brief intervals between trials might have led to a compounded racial priming effect, where the priming did not sufficiently decay before the next stimulus was presented. Research on priming decay suggests that longer intervals can reduce residual effects (Neely et al., 2010). To explore this further, one modification could involve adding intervals within the trial design, similar to those used in the FPST. The FPST approach incorporated sequences of empty backgrounds between trials, potentially allowing the initial priming effect to diminish. In this visual search task, it may be useful to add buffers between trials, either empty screens or a fixation cross shown for a longer period of time. Alternatively, adopting a between-subjects design could provide another means to examine these effects. In such a design, participants would be exposed to only one racial category throughout the experiment. This approach would circumvent the need for additional time in the task, as only one prime is relevant throughout.

Second, in the current design, the race prime is displayed for 500 milliseconds, which may be sufficiently long to allow participants some control over their responses. For example, when primed with race, participants may decide to be more cautious, which would be reflected in the higher evidence thresholds following a prime. To investigate whether the length of exposure to the prime affects the strength of the priming effect, or participants' ability to control their responses, a reduced exposure time could be used, perhaps closer to the 200 milliseconds used in the WIT (Payne, 2001). In this way, the rapid display of the prime prevents participants from actively changing their search strategies.

Third, race information may be used primarily in ambiguous situations (Duncan, 1976; Sagar & Schofield, 1980), and in this study, participants received word cues indicating which items would appear. These text cues might influence cognitive processes more substantially than race cues in low-ambiguity situations. In line with this, Johnson et al. (2018), in research on dispatch information, found that providing participants with specific information about a target reduced racial biases, evidenced by changes in the drift rate. This indicates that when participants have access to more nuanced information, their reliance on racial stereotypes diminishes, leading to more accurate decision-making processes. In terms of visual search, the content of participants' search templates might be shaped more by the specific information given by the pretrial text cues than by race cues (Yu et al., 2023).

One method to test this idea is a follow-up study in which participants are tasked with identifying items belonging only to a gun category. Participants would then be given different levels of pretrial information, ranging from race information only, to text cues, to text cues plus race information, to specific images of the target items. We would expect that as information gets more specific, attentional guidance would increase, leading to faster and more accurate responses. This would allow for additional insights into when and how race information is used. Moreover, it allows for broader insights into the effectiveness of priming in weapon search tasks.
It is important to consider several factors if the study's focus shifts exclusively to weapon trials. Specifically, simplifying the search task to include only gun items changes the response type to gun-present vs. gun-absent. To maintain an appropriate level of challenge and ensure that the task effectively measures search efficiency, one adjustment could be varying the orientation of the weapon in each trial. By having the gun face left or right randomly, the task would require participants to determine the direction the gun is pointing. This modification ensures that participants are actively searching for the target item and gives specific insights into search efficiency for guns without the obfuscating influence of an additional non-gun target.

While there may be additional aspects of the design that warrant further exploration, these proposed manipulations collectively enable a more comprehensive investigation into the specific conditions under which racial biases might influence search efficiency. Should these manipulations fail to reveal racial bias, it would prompt a reevaluation of whether racial biases are primarily manifested during identification processes rather than during the search processes themselves. This distinction could significantly refine our understanding of the use of race information in deadly force decisions.

Limitations and Future Directions

A notable limitation of this work is that differences in search efficiency alone do not inform us whether these differences emerge from search processes, identification processes, or a combination of both (Kristjánsson, 2015; Wolfe, 2016). Thus, a useful path forward would be to integrate eye-tracking technology in follow-up studies, as it can provide information above and beyond simple differences in search slopes, such as the point of first fixation on the target item and the point of identification (Godwin et al., 2021). Additionally, eye-tracking studies will allow researchers to identify specifically how participants are performing the task, such as whether a participant fully fixates the item before the decision is made.

Eye tracking could also be used to explain why response times and, consequently, evidence thresholds are larger when there is a race prime versus no prime. For example, differences in evidence thresholds are generally found when response times for both correct and incorrect responses shift in one direction (Ratcliff, 1978); if a participant has the time to be more accurate, this is reflected in longer response times for both correct and incorrect decisions and a lower error rate. One explanation may be that this reflects longer decision times once participants have fixated the target, but that fails to capture why response times for errors would increase. Thus, a likely explanation is that when primed, participants spend slightly more time scanning the search array for the target than when they are not primed. Eye tracking could then provide insight into whether this is the case or whether a combination of search times and decision times plays a role.

Another limitation is that the design of the tasks may meaningfully shape participants' search strategies. For instance, pretrial text cues consistently highlighted the presence of a gun, potentially biasing participants towards prioritizing guns over non-gun items.
The text cues were never two different non-gun items, which alerts participants to the fact that the items of central importance are guns and not non-guns. Previous research has indicated that instruction sets can significantly influence performance (Katsimpokis et al., 2020), suggesting that altering task instructions can impact the amount of evidence collected. Further, task-specific instructions (e.g., changing "gun or no gun" to "shoot or don't shoot" or "threat or non-threat") could modify how information is collected and processed. Importantly, domain-specific experts (e.g., radiologists, TSA agents) have been found to perform task-specific searches more efficiently than laypeople (Papesh et al., 2021). To the extent that the end goal is to understand how police make decisions to shoot, future studies should be designed specifically with this in mind. As an example, deadly force decisions may not always conform to a gun or non-gun decision but rather might fall along a threat or non-threat dimension, which can impact search and identification times (Blanchette, 2006; though see Wolfe & Horowitz, 2017).

In this vein, another design limitation concerns the absence of a payoff matrix or reward system to penalize critical errors, such as failing to identify a gun (Correll et al., 2002; Johnson et al., 2018). The lack of negative feedback may lead some participants to prefer a target-present versus a target-absent decision-making process, especially under challenging conditions. Such a strategy would explain the observed higher error rates for gun identification at increased set sizes. Taking the feedback as an example, failure to identify a gun is a costly mistake that could result in injury or death for the officer or other civilians. Importantly, the current work does not capture how experiences of threat can shape the search process, which may be an interesting avenue for future work.

CONCLUSION

Despite these limitations, the present study highlighted the use of visual science manipulations as a way of further teasing apart racial bias in weapon identification. The current work did not find racial differences in search efficiency but instead found that searches for guns were more efficient than for non-guns. This finding underscores the importance of understanding which features may impair or enhance search efficiency in such decision-making processes. Although it is premature to draw definitive conclusions, there is evidence that search is an important element, but significantly more work is needed to understand how and under what conditions it can interact with race or other social information. For instance, subsequent studies could investigate various search manipulations and modes of presenting racial information to investigate whether racial bias is predominantly a function of identification differences or whether it can manifest during the search process itself.

REFERENCES

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367–10371. https://doi.org/10.1073/pnas.1104047108

Bahle, B., Thayer, D. D., Mordkoff, J. T., & Hollingworth, A. (2020). The architecture of working memory: Features from multiple remembered objects produce parallel, coactive guidance of attention in visual search. Journal of Experimental Psychology: General, 149(5), 967–983. https://doi.org/10.1037/xge0000694

Bates, D., Maechler, M., Bolker, B., & Walker, S. (2023). lme4: Linear mixed-effects models using 'Eigen' and S4.
Becker, M. W. (2009). Panic search: Fear produces efficient visual search for nonthreatening objects. Psychological Science, 20(4), 435–437. https://doi.org/10.1111/j.1467-9280.2009.02303.x

Blanchette, I. (2006). Snakes, spiders, guns, and syringes: How specific are evolutionary constraints on the detection of threatening stimuli? Quarterly Journal of Experimental Psychology, 59(8), 1484–1504. https://doi.org/10.1080/02724980543000204

Boettcher, S. E. P., Draschkow, D., Dienhart, E., & Võ, M. L.-H. (2018). Anchoring visual search in scenes: Assessing the role of anchor objects on eye movements during visual search. Journal of Vision, 18(13), 11. https://doi.org/10.1167/18.13.11

Bravo, M. J., & Farid, H. (2009). The specificity of the search template. Journal of Vision, 9(1), 34. https://doi.org/10.1167/9.1.34

Bravo, M. J., & Farid, H. (2012). Task demands determine the specificity of the search template. Attention, Perception, & Psychophysics, 74(1), 124–131. https://doi.org/10.3758/s13414-011-0224-5

Castelhano, M. S., & Heaven, C. (2011). Scene context influences without scene gist: Eye movements guided by spatial associations in visual search. Psychonomic Bulletin & Review, 18(5), 890–896. https://doi.org/10.3758/s13423-011-0107-8

Cave, K. R., Menneer, T., Nomani, M. S., Stroud, M. J., & Donnelly, N. (2018). Dual target search is neither purely simultaneous nor purely successive. Quarterly Journal of Experimental Psychology, 71(1), 169–178. https://doi.org/10.1080/17470218.2017.1307425

Cesario, J., & Carrillo, A. (2024). Racial bias in police officer deadly force decisions: What has social cognition learned? In Carlston, D. E., Johnson, K., & Hugenberg, K. (Eds.), The Oxford handbook of social cognition (2nd ed.). Oxford University Press.

Chiao, J. Y., Heck, H. E., Nakayama, K., & Ambady, N. (2006). Priming race in biracial observers affects visual search for Black and White faces. Psychological Science, 17(5), 387–392. https://doi.org/10.1111/j.1467-9280.2006.01717.x

Chun, M. M. (2000). Contextual cueing of visual attention. Trends in Cognitive Sciences, 4, 170–178.

Correll, J., Park, B., Judd, C. M., & Wittenbrink, B. (2002). The police officer's dilemma: Using ethnicity to disambiguate potentially threatening individuals. Journal of Personality and Social Psychology, 83(6), 1314–1329. https://doi.org/10.1037/0022-3514.83.6.1314

Correll, J., Wittenbrink, B., Crawford, M. T., & Sadler, M. S. (2015). Stereotypic vision: How stereotypes disambiguate visual stimuli. Journal of Personality and Social Psychology, 108(2), 219–233. https://doi.org/10.1037/pspa0000015

Duncan, B. L. (1976). Differential social perception and attribution of intergroup violence: Testing the lower limits of stereotyping of Blacks. Journal of Personality and Social Psychology, 34, 590–598.

Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96(3), 433–458. https://doi.org/10.1037/0033-295X.96.3.433

Eckstein, M. P. (2011). Visual search: A retrospective. Journal of Vision, 11(5), 14. https://doi.org/10.1167/11.5.14

Egeth, H., Jonides, J., & Wall, S. (1972). Parallel processing of multielement displays. Cognitive Psychology, 3(4), 674–698. https://doi.org/10.1016/0010-0285(72)90026-6

Eimer, M. (2014). The neural basis of attentional control in visual search.
Trends in Cognitive Sciences, 18(10), 526–535. https://doi.org/10.1016/j.tics.2014.05.005

Godwin, H. J., Hout, M. C., Alexdóttir, K. J., Walenchok, S. C., & Barnhart, A. S. (2021). Avoiding potential pitfalls in visual search and eye-movement experiments: A tutorial review. Attention, Perception, & Psychophysics, 83(7), 2753–2783. https://doi.org/10.3758/s13414-021-02326-w

Harder, J. A. (2017). Perceptions of life history strategy and the decision to shoot [Master's thesis]. Michigan State University.

Harder, J. A. (2020). Modeling decision processes in the use of lethal force: The role of racial bias in judging faces [PhD thesis]. Michigan State University.

Hebart, M. N., Dickter, A. H., Kidder, A., Kwok, W. Y., Corriveau, A., Wicklin, C. V., & Baker, C. I. (2019). THINGS: A database of 1,854 object concepts and more than 26,000 naturalistic object images. PLOS ONE, 14(10), e0223792. https://doi.org/10.1371/journal.pone.0223792

Henderson, J. M., & Hayes, T. R. (2017). Meaning-based guidance of attention in scenes as revealed by meaning maps. Nature Human Behaviour, 1(10), 743–747. https://doi.org/10.1038/s41562-017-0208-0

Hout, M. C., & Goldinger, S. D. (2010). Learning in repeated visual search. Attention, Perception, & Psychophysics, 72(5), 1267–1282. https://doi.org/10.3758/APP.72.5.1267

Hout, M. C., & Goldinger, S. D. (2012). Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements. Journal of Experimental Psychology: Human Perception and Performance, 38(1), 90–112. https://doi.org/10.1037/a0023894

Hout, M. C., & Goldinger, S. D. (2015). Target templates: The precision of mental representations affects attentional guidance and decision-making in visual search. Attention, Perception, & Psychophysics, 77(1), 128–149. https://doi.org/10.3758/s13414-014-0764-6

Hout, M. C., Robbins, A., Godwin, H. J., Fitzsimmons, G., & Scarince, C. (2017). Categorical templates are more useful when features are consistent: Evidence from eye movements during search for societally important vehicles. Attention, Perception, & Psychophysics, 79(6), 1578–1592. https://doi.org/10.3758/s13414-017-1354-1

Huddy, L., & Feldman, S. (2009). On assessing the political effects of racial prejudice. Annual Review of Political Science, 12, 423–447.

Johnson, D., Hopwood, C., Cesario, J., & Pleskac, T. (2017). Advancing research on cognitive processes in social and personality psychology: A hierarchical drift diffusion model primer. Social Psychological and Personality Science, 8, 194855061770317. https://doi.org/10.1177/1948550617703174

Johnson, D. J., Cesario, J., & Pleskac, T. J. (2018). How prior information and police experience impact decisions to shoot. Journal of Personality and Social Psychology, 115(4), 601–623. https://doi.org/10.1037/pspa0000130

Johnson, D. J., Stepan, M. E., Cesario, J., & Fenn, K. M. (2021). Sleep deprivation and racial bias in the decision to shoot: A diffusion model analysis. Social Psychological and Personality Science, 12(5), 638–647. https://doi.org/10.1177/1948550620932723

Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54–69. https://doi.org/10.1037/a0028347

Katsimpokis, D., Hawkins, G. E., & van Maanen, L. (2020). Not all speed-accuracy trade-off manipulations have the same psychological effect.
Computational Brain & Behavior, 3(3), 252–268. https://doi.org/10.1007/s42113-020-00074-y

Konkle, T., Brady, T. F., Alvarez, G. A., & Oliva, A. (2010). Conceptual distinctiveness supports detailed visual long-term memory for real-world objects. Journal of Experimental Psychology: General, 139(3), 558–578. https://doi.org/10.1037/a0019165

Kristjánsson, Á. (2015). Reconsidering visual search. i-Perception, 6(6), 2041669515614670. https://doi.org/10.1177/2041669515614670

Kruijne, W., & Meeter, M. (2015). The long and the short of priming in visual search. Attention, Perception, & Psychophysics, 77, 1558–1573.

Kruschke, J. (2014). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan.

Kuznetsova, A., Brockhoff, P. B., Christensen, R. H. B., & Jensen, S. P. (2020). lmerTest: Tests in linear mixed effects models.

Lee, J., & Shomstein, S. (2013). The differential effects of reward on space- and object-based attentional allocation. Journal of Neuroscience, 33(26), 10625–10633. https://doi.org/10.1523/JNEUROSCI.5575-12.2013

Lenth, R. V., Bolker, B., Buerkner, P., Giné-Vázquez, I., Herve, M., Jung, M., Love, J., Miguez, F., Riebl, H., & Singmann, H. (2024). emmeans: Estimated marginal means, aka least-squares means.

Ma, D. S., Correll, J., & Wittenbrink, B. (2015). The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods, 47(4), 1122–1135. https://doi.org/10.3758/s13428-014-0532-5

Malcolm, G. L., & Henderson, J. M. (2009). The effects of target template specificity on visual search in real-world scenes: Evidence from eye movements. Journal of Vision, 9(11), 8. https://doi.org/10.1167/9.11.8

Maxfield, J. T., & Zelinsky, G. J. (2012). Searching through the hierarchy: How level of target categorization affects visual search. Visual Cognition, 20(10), 1153–1163. https://doi.org/10.1080/13506285.2012.735718

Mekawi, Y., & Bresin, K. (2015). Is the evidence from racial bias shooting task studies a smoking gun? Results from a meta-analysis. Journal of Experimental Social Psychology, 61, 120–130. https://doi.org/10.1016/j.jesp.2015.08.002

Menneer, T., Stroud, M. J., Cave, K. R., Li, X., Godwin, H. J., Liversedge, S. P., & Donnelly, N. (2012). Search for two categories of target produces fewer fixations to target-color items. Journal of Experimental Psychology: Applied, 18(4), 404–418. https://doi.org/10.1037/a0031032

Neely, J. H., O'Connor, P. A., & Calabrese, G. (2010). Fast trial pacing in a lexical decision task reveals a decay of automatic semantic activation. Acta Psychologica, 133(2), 127–136. https://doi.org/10.1016/j.actpsy.2009.11.001

Ort, E., & Olivers, C. N. L. (2020). The capacity of multiple-target search. Visual Cognition, 28(5–8), 330–355. https://doi.org/10.1080/13506285.2020.1772430

Papesh, M. H., Hout, M. C., Guevara Pinto, J. D., Robbins, A., & Lopez, A. (2021). Eye movements reflect expertise development in hybrid search. Cognitive Research: Principles and Implications, 6(1), 7. https://doi.org/10.1186/s41235-020-00269-8

Payne, B. K. (2001). Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 81(2), 181–192. https://doi.org/10.1037/0022-3514.81.2.181

Peirce, J., Gray, J. R., Simpson, S., MacAskill, M., Höchenberger, R., Sogo, H., Kastman, E., & Lindeløv, J. K. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods, 51(1), 195–203. https://doi.org/10.3758/s13428-018-01193-y

Pleskac, T.
Pleskac, T. J., Cesario, J., & Johnson, D. J. (2018). How race affects evidence accumulation during the decision to shoot. Psychonomic Bulletin & Review, 25(4), 1301–1330. https://doi.org/10.3758/s13423-017-1369-6
Plummer, M. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. Working Papers.
Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85(2), 59–108. https://doi.org/10.1037/0033-295X.85.2.59
Ratcliff, R., Huang-Pollock, C., & McKoon, G. (2018). Modeling individual differences in the go/no-go task with a diffusion model. Decision, 5(1), 42–62.
Rivers, A. M. (2017). The Weapons Identification Task: Recommendations for adequately powered research. PLOS ONE, 12(6), e0177857. https://doi.org/10.1371/journal.pone.0177857
Robbins, A., & Hout, M. C. (2015). Categorical target templates: Typical category members are found and identified quickly during word-cued search. Visual Cognition, 23(7), 817–821. https://doi.org/10.1080/13506285.2015.1093247
Robbins, A., & Hout, M. C. (2020). Scene priming provides clues about target appearance that improve attentional guidance during categorical search. Journal of Experimental Psychology: Human Perception and Performance, 46, 220–230. https://doi.org/10.1037/xhp0000707
Sagar, H. A., & Schofield, J. W. (1980). Racial and behavioral cues in Black and White children's perceptions of ambiguously aggressive acts. Journal of Personality and Social Psychology, 39, 590–598.
Sandry, J., & Ricker, T. J. (2022). Motor speed does not impact the drift rate: A computational HDDM approach to differentiate cognitive and motor speed. Cognitive Research: Principles and Implications, 7(1), 66. https://doi.org/10.1186/s41235-022-00412-7
Schmidt, J., & Zelinsky, G. J. (2009). Short article: Search guidance is proportional to the categorical specificity of a target cue. Quarterly Journal of Experimental Psychology, 62(10), 1904–1914. https://doi.org/10.1080/17470210902853530
Spotorno, S., Malcolm, G. L., & Tatler, B. W. (2014). How context information and target information guide the eyes from the first epoch of search in real-world scenes. Journal of Vision, 14(2), 7. https://doi.org/10.1167/14.2.7
Starns, J. J., & Ratcliff, R. (2012). Age-related differences in diffusion model boundary optimality with both trial-limited and time-limited tasks. Psychonomic Bulletin & Review, 19(1), 139–145. https://doi.org/10.3758/s13423-011-0189-3
Stroud, M., Menneer, T., Cave, K., & Donnelly, N. (2012). Using the dual-target cost to explore the nature of search target representations. Journal of Experimental Psychology: Human Perception and Performance, 38, 113–122. https://doi.org/10.1037/a0025887
Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51(6), 599–606. https://doi.org/10.3758/BF03211656
Todd, A. R., Johnson, D. J., Lassetter, B., Neel, R., Simpson, A. J., & Cesario, J. (2021). Category salience and racial bias in weapon identification: A diffusion modeling approach. Journal of Personality and Social Psychology, 120, 672–693. https://doi.org/10.1037/pspi0000279
Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12(1), 97–136. https://doi.org/10.1016/0010-0285(80)90005-5
Vickery, T. J., King, L.-W., & Jiang, Y. (2005). Setting up the target template in visual search. Journal of Vision, 5(1), 8. https://doi.org/10.1167/5.1.8
Wabersich, D., & Vandekerckhove, J. (2014). Extending JAGS: A tutorial on adding custom distributions to JAGS (with a diffusion model example). Behavior Research Methods, 46(1), 15–28. https://doi.org/10.3758/s13428-013-0369-3
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1(2), 202–238. https://doi.org/10.3758/BF03200774
Wolfe, J. M. (1998). What can 1 million trials tell us about visual search? Psychological Science, 9(1), 33–39. https://doi.org/10.1111/1467-9280.00006
Wolfe, J. M. (2014). Approaches to visual search: Feature integration theory and guided search. The Oxford Handbook of Attention, 11, 35–44.
Wolfe, J. M. (2016). Visual search revived: The slopes are not that slippery: A reply to Kristjansson (2015). I-Perception, 7(3), 2041669516643244. https://doi.org/10.1177/2041669516643244
Wolfe, J. M. (2020). Visual search: How do we find what we are looking for? Annual Review of Vision Science, 6(1), 539–562. https://doi.org/10.1146/annurev-vision-091718-015048
Wolfe, J. M. (2021). Guided Search 6.0: An updated model of visual search. Psychonomic Bulletin & Review, 28(4), 1060–1092. https://doi.org/10.3758/s13423-020-01859-9
Wolfe, J. M., & Horowitz, T. S. (2017). Five factors that guide attention in visual search. Nature Human Behaviour, 1(3), 1–8. https://doi.org/10.1038/s41562-017-0058
Wolfe, J. M., Horowitz, T. S., Kenner, N., Hyle, M., & Vasan, N. (2004). How fast can you change your mind? The speed of top-down guidance in visual search. Vision Research, 44(12), 1411–1426. https://doi.org/10.1016/j.visres.2003.11.024
Wolfe, J. M., Palmer, E. M., & Horowitz, T. S. (2010). Reaction time distributions constrain models of visual search. Vision Research, 50(14), 1304–1311. https://doi.org/10.1016/j.visres.2009.11.002
Yang, H., & Zelinsky, G. J. (2009). Visual search is guided to categorically-defined targets. Vision Research, 49(16), 2095–2103. https://doi.org/10.1016/j.visres.2009.05.017
Yu, C.-P., Maxfield, J. T., & Zelinsky, G. J. (2016). Searching for category-consistent features: A computational approach to understanding visual category representation. Psychological Science, 27(6), 870–884. https://doi.org/10.1177/0956797616640237
Yu, X., Zhou, Z., Becker, S. I., Boettcher, S. E. P., & Geng, J. J. (2023). Good-enough attentional guidance. Trends in Cognitive Sciences, 27(4), 391–403. https://doi.org/10.1016/j.tics.2023.01.007

APPENDIX A: METHOD TABLES AND CODE

Table 1
List of categories used (alphabetical): abacus, airplane, apple, armyguy, babushkadolls, babycarriage, backpack, bagel, ball, balloon, barbiedoll, baseballcards, basket, bathsuit, beanbagchair, bearteddy, bed, beermug, bell, bench, bike, bill, binoculars, bird, bongo, bonzai, boot, bottle, bowl, bowtie, breadloaf, broom, bucket, butterfly, button, cake, camcorder, camera, carfront, cat, ceilingfan, chair, cheese, cheesegrater, cherubstatue, chessboard, christmasstocking, cigarette, clock, coatrack, coffeemug, coin, collar, compass, computer_key, cookie, cookingpan, cookpot, crib, cupsaucer, cushion, decorativescreen, desk, dog, doll, dollhouse, domino, donut, doorknob, dresser, dumbbell, earings, easteregg_redo, exercise_equip., fan, fish hook, flag, frame, frisbee, garbagetrash, gift, glove, goggle, golfball, grill, guitar, handbag, hanger, hat, headband, headphone, helmet, hourglass, jack-o-lantern, jacket, kayak, key, keyboard, keychain, lamp, lantern, lawnmower, leaves, lei, licenseplate, lipstick, lock, magazinecovers, makeupcompact, mask, meat, microscope, microwave, motorcycle, mp3player, muffins, mushroom, nailpolish, necklace, necktie, orifan, ornamantball, pants, patioloungechair, pen, pipe, pitcher, pizza, pokercard, powerstrip, radio, razor, recordplayer, ring, ringbinder, roadsign, rock, rollerskates, rosary, rug, saddle, saltpeppershake, sandwich, scale, scrunchie, seashell, shoe, sippycup, snowglobe, socks, sofa, speakers, spoon, stamp, stool, suit, suitcase, tablesmall, tape, telescope, tennisracquet, tent, toiletseat, tongs, toothpaste, toyhorse, toyrabbit, train, tree, tricycle, trophy, trumpet, trunk, turtle, tv, vase, watch, wig, windchime, wineglass.
Note: There are 17 images per category. All distractor items were derived from the Massive Memory Database.
Table 2
Average distance from screen center by object type, set size, and prime race

Object    Set Size   Black    White    No Prime
Gun       12         3.20     3.20     3.21
Gun       16         3.21     3.12     3.23
Gun       20         3.17     3.25     3.18
Non-gun   12         3.21     3.20     3.18
Non-gun   16         3.26     3.22     3.19
Non-gun   20         3.19     3.23     3.15

Note: The values are based on the Euclidean distance between the center of a grid item and the center of the screen. The Euclidean distance is expressed as a unitless value because it is a relative measurement used for comparison rather than an absolute distance in physical units like meters or inches.

ChatGPT-4-generated code to test random assignment (verified):

import numpy as np
import random

# Define grid size
grid_size = 8

# Define center coordinates
center_x, center_y = 3.5, 3.5

# Create an empty list to store cell coordinates and their distances from the center
cell_distances = []

# Iterate over all cells in the grid
for i in range(grid_size):
    for j in range(grid_size):
        # Exclude the 4 center cells
        if not (3 <= i <= 4 and 3 <= j <= 4):
            # Calculate Euclidean distance from the center
            distance = np.sqrt((i - center_x) ** 2 + (j - center_y) ** 2)
            # Store the cell coordinates and the distance
            cell_distances.append(((i, j), distance))

# Define the conditions and set sizes
race_conditions = ['Black', 'White', 'NoPrime']
object_types = ['Gun', 'Non-gun']
set_sizes = [12, 16, 20]

# Create a list to store the average distances for each participant
all_participants_avg_distances = []

# Repeat the cell assignment and average distance calculation 300 times
for _ in range(300):
    # Shuffle the cell_distances list
    random.shuffle(cell_distances)

    # Split the list into six equal parts, one for each combination of
    # race condition and object type
    parts = [cell_distances[i::6] for i in range(6)]

    # Create a dictionary to store the cells for each condition and their average distances
    condition_cells = {}

    # Assign the cells to the conditions
    for i, race_condition in enumerate(race_conditions):
        for j, object_type in enumerate(object_types):
            # Get the cells for this combination of race condition and object type
            cells = parts[i * len(object_types) + j]
            # Further split the cells into parts for each set size
            set_size_parts = [cells[k::3] for k in range(3)]
            for k, set_size in enumerate(set_sizes):
                # Get the cells for this set size
                set_size_cells = set_size_parts[k]
                # Calculate the average distance of these cells from the center
                avg_distance = np.mean([dist for _, dist in set_size_cells])
                # Store the cells and their average distance in the dictionary
                condition = (race_condition, object_type, set_size)
                condition_cells[condition] = (set_size_cells, avg_distance)

    # Store the average distances for this participant
    all_participants_avg_distances.append(condition_cells)

# Create a dictionary to store the total distances for each condition
total_distances = {condition: 0 for condition in condition_cells.keys()}

# Iterate over all participants
for participant in all_participants_avg_distances:
    # Add the average distances of this participant to the total distances
    for condition, (cells, avg_distance) in participant.items():
        total_distances[condition] += avg_distance

# Calculate the average distance for each condition
avg_distances_all_participants = {condition: total / len(all_participants_avg_distances)
                                  for condition, total in total_distances.items()}

# Print the average distances for each condition
for condition, avg_distance in avg_distances_all_participants.items():
    print(f"Condition {condition}: Average Distance = {avg_distance}")
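In brief, the script repeatedly shuffles the 60 usable grid cells (the 8 x 8 grid minus the 4 center cells), assigns them to the 3 (prime race) x 2 (object type) x 3 (set size) conditions, and averages each condition's distance from the screen center. The near-identical averages in Table 2 indicate that random assignment did not systematically place any condition's targets closer to fixation.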
APPENDIX B: BEHAVIORAL RESULTS TABLES

Table 3
Multilevel Linear Regression Predicting Response Time from Race, Target Type, and Set Size in Study 1 (b, SE in ms)
Intercept: b = 1519.188, SE = 46.759, df = 41.585, t = 32.489, p < .001
Prime RaceW: b = 1.657, SE = 3.825, df = 109572.681, t = 0.433, p = .665
Prime RaceB: b = 6.368, SE = 3.824, df = 109573.024, t = 1.665, p = .096
Target Type: b = -203.035, SE = 43.423, df = 31.008, t = -4.675, p < .001
Set Size12: b = -168.999, SE = 3.822, df = 109572.580, t = -44.217, p < .001
Set Size20: b = 173.786, SE = 3.828, df = 109572.704, t = 45.403, p < .001
Prime RaceW x Target Type: b = -9.222, SE = 3.825, df = 109572.462, t = -2.411, p = .016
Prime RaceB x Target Type: b = 3.942, SE = 3.824, df = 109572.464, t = 1.031, p = .303
Prime RaceW x Set Size12: b = 1.569, SE = 5.406, df = 109572.560, t = 0.290, p = .772
Prime RaceB x Set Size12: b = -2.721, SE = 5.405, df = 109572.775, t = -0.503, p = .615
Prime RaceW x Set Size20: b = -1.389, SE = 5.413, df = 109572.580, t = -0.257, p = .797
Prime RaceB x Set Size20: b = 4.233, SE = 5.412, df = 109572.575, t = 0.782, p = .434
Target Type x Set Size12: b = 18.852, SE = 3.822, df = 109572.517, t = 4.932, p < .001
Target Type x Set Size20: b = -21.244, SE = 3.828, df = 109572.696, t = -5.550, p < .001
Prime RaceW x Target Type x Set Size12: b = 2.503, SE = 5.406, df = 109572.589, t = 0.463, p = .643
Prime RaceW x Target Type x Set Size20: b = 1.432, SE = 5.405, df = 109572.556, t = 0.264, p = .791
Prime RaceB x Target Type x Set Size12: b = 1.962, SE = 5.413, df = 109572.512, t = 0.362, p = .717
Prime RaceB x Target Type x Set Size20: b = -1.145, SE = 5.412, df = 109572.614, t = -0.211, p = .833
Random effects: Participant variance = 95080; Target variance = 61924. N = 316 participants, 33 targets; 109,936 observations.
Note: Race, target type, and set size were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes.
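For readers unfamiliar with the effect coding mentioned in the table notes, the sketch below shows one way to build the sum-to-zero codes implied by the labels (RaceW, RaceB, Target Type). The data frame and column names are hypothetical illustrations, not the actual analysis code; the models themselves were fit in R with lmerTest.

import pandas as pd

# Hypothetical trial-level data; column names are illustrative only.
df = pd.DataFrame({
    "prime_race": ["White", "Black", "NoPrime", "White", "Black", "NoPrime"],
    "target_type": ["Gun", "Nongun", "Gun", "Nongun", "Gun", "Nongun"],
})

# Sum-to-zero (effect) codes: +1 for the named level, -1 for the reference
# level (here the no-prime condition), 0 otherwise. Coefficients for RaceW
# and RaceB are then deviations from the grand mean, matching the table labels.
dummies = pd.get_dummies(df["prime_race"]).astype(int)
df["RaceW"] = dummies["White"] - dummies["NoPrime"]
df["RaceB"] = dummies["Black"] - dummies["NoPrime"]
df["TargetType"] = df["target_type"].map({"Gun": 1, "Nongun": -1})
print(df)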
Table 4
Contrast Tests of Estimated Marginal Means for Response Time Across Prime Race (ms; 95% confidence limits in brackets)
White: 1521 [1429, 1613]; Black: 1526 [1434, 1618]; None: 1511 [1419, 1603]
Contrasts: White - Black = -4.711 [-17.694, 8.272]; White - None = 9.683 [-3.303, 22.670]; Black - None = 14.394 [1.413, 27.376]

Table 5
Contrast Tests of Estimated Marginal Means for Response Time Across Target Type
Gun: 1316 [1193, 1439]; Nongun: 1722 [1595, 1849]
Contrast: Gun - Nongun = -406 [-17.694, 8.272]

Table 6
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size
12: 1350 [1258, 1442]; 16: 1514 [1422, 1606]; 20: 1693 [1601, 1785]
Contrasts: 12 - 20 = -343 [-355.770, -329.800]; 12 - 16 = -164 [-177.185, -151.237]; 20 - 16 = 179 [165.581, 191.567]

Table 7
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size and Target Type
12 Gun: 1166 [1042, 1290]; 16 Gun: 1314 [1190, 1438]; 20 Gun: 1469 [1345, 1592]
12 Nongun: 1534 [1407, 1662]; 16 Nongun: 1715 [1588, 1842]; 20 Nongun: 1917 [1790, 2045]
Contrasts: Linear Nongun - Gun = 32.918 [6.971, 58.867]; Quadratic Nongun - Gun = -127.466 [-172.479, -82.454]

Table 8
Contrast Tests of Estimated Marginal Means for Response Time Across Prime Race and Target Type
White Gun: 1309 [1185, 1432]; Black Gun: 1326 [1203, 1450]; None Gun: 1313 [1190, 1437]
White Nongun: 1733 [1606, 1860]; Black Nongun: 1725 [1597, 1852]; None Nongun: 1709 [1582, 1836]
Contrasts: White - Black Gun = -17.875 [-36.234, 0.484]; White - None Gun = -4.819 [-23.182, 13.543]; Black - None Gun = 13.056 [-5.305, 31.416]; White - Black Nongun = 8.453 [-9.911, 26.816]; White - None Nongun = 24.186 [5.819, 42.553]; Black - None Nongun = 15.733 [-2.622, 34.088]

Table 9
Multilevel Logistic Regression Predicting Correct Decisions from Race, Target Type, and Set Size in Study 1
Intercept: b = 3.751, z = 73.446, p < .001
Prime RaceW: b = -0.020, z = -0.798, p = .425
Prime RaceB: b = 0.030, z = 1.196, p = .232
Target Type: b = -0.012, z = -0.477, p = .633
Set Size12: b = 0.083, z = 2.909, p = .004
Set Size20: b = -0.093, z = -2.751, p = .006
Prime RaceW x Target Type: b = 0.026, z = 1.052, p = .293
Prime RaceB x Target Type: b = -0.019, z = -0.755, p = .450
Prime RaceW x Set Size12: b = 0.032, z = 0.904, p = .366
Prime RaceB x Set Size12: b = -0.018, z = -0.521, p = .603
Prime RaceW x Set Size20: b = 0.006, z = 0.182, p = .856
Prime RaceB x Set Size20: b = -0.002, z = -0.072, p = .943
Target Type x Set Size12: b = 0.053, z = 1.867, p = .062
Target Type x Set Size20: b = -0.053, z = -1.587, p = .113
Prime RaceW x Target Type x Set Size12: b = 0.024, z = 0.672, p = .502
Prime RaceW x Target Type x Set Size20: b = -0.039, z = -1.109, p = .267
Prime RaceB x Target Type x Set Size12: b = -0.059, z = -1.746, p = .081
Prime RaceB x Target Type x Set Size20: b = 0.052, z = 1.515, p = .130
Random effects: Participant = 0.558; Target = 0.012; Target-Set Size12 = 0.006; Target-Set Size20 = 0.018. N = 316 participants, 33 targets; 113,366 observations.
Note: Race, target type, and set size were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes.
Table 10
Multilevel Linear Regression Predicting Response Time from Race, Target Type, Set Size, and Block Order in Study 1
Intercept: b = 1519.035, SE = 46.762, df = 41.572, t = 32.485, p < .001
Prime RaceW: b = 0.553, SE = 3.808, df = 109570.682, t = 0.145, p = .884
Prime RaceB: b = 5.251, SE = 3.806, df = 109571.025, t = 1.380, p = .168
Target Type: b = -203.264, SE = 43.428, df = 31.006, t = -4.681, p < .001
Set Size12: b = -169.113, SE = 3.804, df = 109570.579, t = -44.454, p < .001
Set Size20: b = 173.793, SE = 3.810, df = 109570.702, t = 45.617, p < .001
Block Order1: b = 118.755, SE = 3.807, df = 109571.509, t = 31.197, p < .001
Block Order3: b = -84.222, SE = 3.811, df = 109573.399, t = -22.099, p < .001
Prime RaceW x Target Type: b = -9.116, SE = 3.807, df = 109570.461, t = -2.394, p = .017
Prime RaceB x Target Type: b = 3.969, SE = 3.806, df = 109570.463, t = 1.043, p = .297
Prime RaceW x Set Size12: b = 1.688, SE = 5.381, df = 109570.559, t = 0.314, p = .754
Prime RaceB x Set Size12: b = -2.762, SE = 5.380, df = 109570.772, t = -0.513, p = .608
Prime RaceW x Set Size20: b = -1.431, SE = 5.388, df = 109570.579, t = -0.266, p = .791
Prime RaceB x Set Size20: b = 4.353, SE = 5.387, df = 109570.574, t = 0.808, p = .419
Target Type x Set Size12: b = 18.615, SE = 3.804, df = 109570.517, t = 4.893, p < .001
Target Type x Set Size20: b = -21.155, SE = 3.810, df = 109570.694, t = -5.553, p < .001
Prime RaceW x Target Type x Set Size12: b = 2.616, SE = 5.381, df = 109570.588, t = 0.486, p = .627
Prime RaceW x Target Type x Set Size20: b = 1.228, SE = 5.380, df = 109570.554, t = 0.228, p = .819
Prime RaceB x Target Type x Set Size12: b = 2.136, SE = 5.388, df = 109570.511, t = 0.396, p = .692
Prime RaceB x Target Type x Set Size20: b = -1.366, SE = 5.387, df = 109570.611, t = -0.254, p = .800
Random effects: Participant variance = 95080; Target variance = 61924. N = 316 participants, 33 targets; 109,936 observations.
Note: Race, target type, set size, and block order were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes. Block orders 1 and 3 are the first and last blocks.

Table 11
Contrast Tests of Estimated Marginal Means for Response Time Across Target Type
Gun: 1316 [1192, 1439]; Nongun: 1722 [1595, 1849]
Contrast: Gun - Nongun = -406.529 [-576.762, -236.296]

Table 12
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size
12: 1350 [1258, 1442]; 16: 1514 [1422, 1606]; 20: 1693 [1601, 1785]
Contrasts: 12 - 20 = -342.907 [-355.831, -329.982]; 12 - 16 = -164.434 [-177.348, -151.52]; 20 - 16 = 178.473 [165.54, 191.406]

Table 13
Contrast Tests of Estimated Marginal Means for Response Time Across Block Order
Block 0: 1638 [1546, 1730]; Block 1: 1485 [1393, 1576]; Block 2: 1435 [1343, 1527]
Contrasts: Linear = -153.288 [-166.201, -140.375]; Quadratic = 252.667 [230.258, 275.077]

Table 14
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size and Target Type
12 Gun: 1165 [1042, 1289]; 16 Gun: 1314 [1190, 1437]; 20 Gun: 1468 [1345, 1592]
12 Nongun: 1535 [1407, 1662]; 16 Nongun: 1715 [1588, 1842]; 20 Nongun: 1917 [1790, 2045]
Contrasts: Linear = -153.288 [-166.2009, -140.3751]; Quadratic = 252.6673 [230.2581, 275.0766]

Table 15
Contrast Tests of Estimated Marginal Means for Response Time Across Prime Race and Target Type
White Gun: 1307 [1183, 1431]; Black Gun: 1325 [1201, 1449]; None Gun: 1315 [1191, 1439]
White Nongun: 1732 [1605, 1859]; Black Nongun: 1724 [1596, 1851]; None Nongun: 1711 [1584, 1839]
Contrasts: White - Black Gun = -17.783 [-36.056, 0.491]; White - None Gun = -7.905 [-26.183, 10.374]; Black - None Gun = 9.878 [-8.399, 28.154]; White - Black Nongun = 8.388 [-9.89, 26.666]; White - None Nongun = 20.621 [2.337, 38.904]; Black - None Nongun = 12.233 [-6.039, 30.504]
Table 16
Multilevel Linear Regression Predicting Response Time from Race, Target Type, Set Size, and Manipulation Check Errors in Study 1
Intercept: b = 1518.533, SE = 46.709, df = 41.439, t = 32.510, p < .001
Prime RaceW: b = 1.679, SE = 3.824, df = 109567.691, t = 0.439, p = .661
Prime RaceB: b = 6.368, SE = 3.823, df = 109568.037, t = 1.666, p = .096
Target Type: b = -203.054, SE = 43.414, df = 31.006, t = -4.677, p < .001
Set Size12: b = -168.935, SE = 3.821, df = 109567.581, t = -44.211, p < .001
Set Size20: b = 173.675, SE = 3.827, df = 109567.698, t = 45.384, p < .001
mouseC: b = 5.174, SE = 2.257, df = 314.161, t = 2.293, p = .023
Prime RaceW x Target Type: b = -9.222, SE = 3.824, df = 109567.467, t = -2.411, p = .016
Prime RaceB x Target Type: b = 3.932, SE = 3.823, df = 109567.471, t = 1.029, p = .304
Prime RaceW x Set Size12: b = 1.608, SE = 5.404, df = 109567.567, t = 0.298, p = .766
Prime RaceB x Set Size12: b = -2.778, SE = 5.403, df = 109567.785, t = -0.514, p = .607
Prime RaceW x Set Size20: b = -1.422, SE = 5.412, df = 109567.581, t = -0.263, p = .793
Prime RaceB x Set Size20: b = 4.300, SE = 5.411, df = 109567.582, t = 0.795, p = .427
Target Type x Set Size12: b = 18.909, SE = 3.821, df = 109567.513, t = 4.948, p < .001
Target Type x Set Size20: b = -21.343, SE = 3.827, df = 109567.667, t = -5.577, p < .001
Target Type x mouseC: b = -2.698, SE = 0.353, df = 109571.585, t = -7.633, p < .001
Set Size12 x mouseC: b = 0.341, SE = 0.499, df = 109567.691, t = 0.683, p = .494
Set Size20 x mouseC: b = -0.760, SE = 0.501, df = 109568.014, t = -1.517, p = .129
Prime RaceW x Target Type x Set Size12: b = 2.482, SE = 5.404, df = 109567.590, t = 0.459, p = .646
Prime RaceW x Target Type x Set Size20: b = 1.431, SE = 5.403, df = 109567.550, t = 0.265, p = .791
Prime RaceB x Target Type x Set Size12: b = 2.013, SE = 5.412, df = 109567.513, t = 0.372, p = .710
Prime RaceB x Target Type x Set Size20: b = -1.156, SE = 5.411, df = 109567.606, t = -0.214, p = .831
Target Type x Set Size12 x mouseC: b = 0.310, SE = 0.499, df = 109567.784, t = 0.621, p = .535
Target Type x Set Size20 x mouseC: b = -0.465, SE = 0.501, df = 109568.016, t = -0.929, p = .353
Random effects: Participant variance = 93817; Target variance = 61900. N = 316 participants, 33 targets; 109,936 observations.
Note: Race, target type, and set size were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes. MouseC is the centered manipulation check errors.
Table 17
Contrast Tests of Estimated Marginal Means for Response Time Across Target Type
Gun: 1315 [1192, 1439]; Nongun: 1722 [1595, 1848]
Contrast: Gun - Nongun = -406.108 [-576.289, -235.927]

Table 18
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size
12: 1350 [1258, 1441]; 16: 1514 [1422, 1606]; 20: 1692 [1600, 1784]
Contrasts: 12 - 16 = -164.196 [-177.167, -151.225]; 12 - 20 = -342.61 [-355.592, -329.629]; 16 - 20 = -178.415 [-191.405, -165.424]

Table 19
Contrast Tests of Estimated Marginal Means for Response Time Across Block Order
Block 0: 1638 [1546, 1730]; Block 1: 1485 [1393, 1576]; Block 2: 1435 [1343, 1527]
Contrasts: Linear = -153.288 [-166.201, -140.375]; Quadratic = 252.667 [230.258, 275.077]

Table 20
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size and Target Type
12 Gun: 1165 [1042, 1289]; 16 Gun: 1313 [1189, 1437]; 20 Gun: 1468 [1344, 1591]
12 Nongun: 1534 [1407, 1661]; 16 Nongun: 1714 [1587, 1842]; 20 Nongun: 1917 [1789, 2044]
Contrasts: Linear nongun - gun = 80.504 [54.54, 106.467]; Quadratic nongun - gun = 14.607 [-30.357, 59.572]

Table 21
Contrast Tests of Estimated Marginal Means for Response Time Across Manipulation Check Errors and Target Type
-1 SD Gun: 1297 [1169, 1425]; Mean Gun: 1315 [1192, 1439]; +1 SD Gun: 1334 [1207, 1462]
-1 SD Nongun: 1661 [1530, 1793]; Mean Nongun: 1722 [1595, 1848]; +1 SD Nongun: 1782 [1651, 1913]
Trends: Gun = 2.48 [-2, 6.95]; Nongun = 7.87 [3.4, 12.35]

Table 22
Multilevel Linear Regression Predicting Response Time from Race, Target Type, and Set Size in Study 2
Intercept: b = 1104.302, SE = 23.167, df = 36.893, t = 47.667, p < .001
Prime RaceW: b = 2.309, SE = 2.761, df = 6.763, t = 0.837, p = .431
Prime RaceB: b = 3.945, SE = 2.760, df = 6.753, t = 1.429, p = .197
Target Type: b = -122.881, SE = 22.173, df = 31.004, t = -5.542, p < .001
Set Size12: b = -74.478, SE = 1.808, df = 101207.551, t = -41.183, p < .001
Set Size20: b = 70.707, SE = 1.825, df = 101209.035, t = 38.734, p < .001
Prime RaceW x Target Type: b = -3.535, SE = 1.817, df = 101211.105, t = -1.945, p = .052
Prime RaceB x Target Type: b = -0.186, SE = 1.816, df = 101211.800, t = -0.103, p = .918
Prime RaceW x Set Size12: b = 3.245, SE = 2.558, df = 101208.207, t = 1.269, p = .205
Prime RaceB x Set Size12: b = -1.374, SE = 2.554, df = 101208.649, t = -0.538, p = .591
Prime RaceW x Set Size20: b = -1.165, SE = 2.582, df = 101209.483, t = -0.451, p = .652
Prime RaceB x Set Size20: b = 2.159, SE = 2.580, df = 101209.996, t = 0.837, p = .403
Target Type x Set Size12: b = 5.034, SE = 1.809, df = 101208.869, t = 2.783, p = .005
Target Type x Set Size20: b = -6.501, SE = 1.825, df = 101211.259, t = -3.561, p < .001
Prime RaceW x Target Type x Set Size12: b = -2.896, SE = 2.558, df = 101209.801, t = -1.132, p = .258
Prime RaceW x Target Type x Set Size20: b = -0.198, SE = 2.554, df = 101208.646, t = -0.078, p = .938
Prime RaceB x Target Type x Set Size12: b = 2.549, SE = 2.582, df = 101212.937, t = 0.987, p = .323
Prime RaceB x Target Type x Set Size20: b = -2.170, SE = 2.580, df = 101211.647, t = -0.841, p = .400
Random effects: Participant variance = 12719.43; Prime Face variance = 31.09; Target variance = 16154.10. N = 308 participants, 41 prime faces, 33 targets; 101,597 observations.
Note: Race, target type, and set size were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes.
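The linear and quadratic rows in Tables 14 and 20 are polynomial contrasts over the three set sizes. As a rough check, recomputing them from the rounded EMMs in Table 20 with the standard unnormalized weights for three equally spaced levels comes close to the reported estimates; the small discrepancies reflect rounding and the model-based standard errors. This is an illustrative recomputation, not the analysis code.

import numpy as np

# EMMs from Table 20 (set sizes 12, 16, 20)
gun = np.array([1165, 1313, 1468])
nongun = np.array([1534, 1714, 1917])

# Standard polynomial contrast weights for three equally spaced levels
linear = np.array([-1, 0, 1])
quadratic = np.array([1, -2, 1])

print(linear @ nongun - linear @ gun)        # 80, cf. Table 20's 80.504
print(quadratic @ nongun - quadratic @ gun)  # 16, cf. Table 20's 14.607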
Table 23
Contrast Tests of Estimated Marginal Means for Response Time Across Prime Race
White: 1107 [1061, 1152]; Black: 1108 [1063, 1154]; None: 1098 [1051, 1145]
Contrasts: White - Black = -1.635 [-8.702, 5.432]; White - None = 8.563 [-4.225, 21.352]; Black - None = 10.199 [-2.587, 22.984]

Table 24
Contrast Tests of Estimated Marginal Means for Response Time Across Target Type
Gun: 981 [919, 1043]; Nongun: 1227 [1163, 1291]
Contrast: Gun - Nongun = -245.762 [-332.676, -158.847]

Table 25
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size
12: 1030 [984, 1075]; 16: 1108 [1063, 1154]; 20: 1175 [1129, 1221]
Contrasts: 12 - 20 = -145.185 [-151.352, -139.018]; 12 - 16 = -78.25 [-84.391, -72.108]; 20 - 16 = 66.935 [60.736, 73.134]

Table 26
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size and Target Type
12 Gun: 912 [850, 974]; 16 Gun: 987 [925, 1049]; 20 Gun: 1046 [984, 1108]
12 Nongun: 1148 [1084, 1212]; 16 Nongun: 1229 [1166, 1293]; 20 Nongun: 1304 [1241, 1368]
Contrasts: Linear Nongun - Gun = 23.070 [10.734, 35.405]; Quadratic Nongun - Gun = 8.819 [-12.560, 30.198]

Table 27
Multilevel Logistic Regression Predicting Correct Decisions from Race, Target Type, and Set Size in Study 2
Intercept: b = 2.665, SE = 0.056, z = 47.498, p < .001
Prime RaceW: b = 0.002, SE = 0.017, z = 0.097, p = .923
Prime RaceB: b = 0.051, SE = 0.017, z = 3.048, p = .002
Target Type: b = -0.072, SE = 0.047, z = -1.547, p = .122
Set Size12: b = 0.239, SE = 0.017, z = 13.816, p < .001
Set Size20: b = -0.204, SE = 0.016, z = -12.868, p < .001
Prime RaceW x Target Type: b = -0.010, SE = 0.017, z = -0.591, p = .554
Prime RaceB x Target Type: b = -0.034, SE = 0.017, z = -2.021, p = .043
Prime RaceW x Set Size12: b = 0.012, SE = 0.025, z = 0.499, p = .618
Prime RaceB x Set Size12: b = 0.028, SE = 0.025, z = 1.111, p = .267
Prime RaceW x Set Size20: b = -0.010, SE = 0.022, z = -0.429, p = .668
Prime RaceB x Set Size20: b = -0.015, SE = 0.023, z = -0.646, p = .518
Target Type x Set Size12: b = 0.082, SE = 0.017, z = 4.717, p < .001
Target Type x Set Size20: b = -0.079, SE = 0.016, z = -4.954, p < .001
Prime RaceW x Target Type x Set Size12: b = -0.057, SE = 0.025, z = -2.325, p = .020
Prime RaceW x Target Type x Set Size20: b = 0.016, SE = 0.025, z = 0.662, p = .508
Prime RaceB x Target Type x Set Size12: b = 0.006, SE = 0.022, z = 0.271, p = .787
Prime RaceB x Target Type x Set Size20: b = 0.004, SE = 0.023, z = 0.156, p = .876
Random effects: Participant = 0.293; Target = 0.067. N = 308 participants, 33 targets; 109,925 observations.
Note: Race, target type, and set size were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes.
Table 28
Contrast Tests of Estimated Marginal Means for Accuracy Across Prime Race (log-odds scale)
White: 2.67 [2.55, 2.78]; Black: 2.72 [2.6, 2.83]; None: 2.61 [2.5, 2.73]
Contrasts: White - Black = -0.049 [-0.106, 0.007]; White - None = 0.054 [-0.001, 0.109]; Black - None = 0.103 [0.047, 0.159]

Table 29
Contrast Tests of Estimated Marginal Means for Accuracy Across Target Type
Gun: 2.59 [2.45, 2.73]; Nongun: 2.74 [2.59, 2.88]
Contrast: Gun - Nongun = -0.144 [-0.333, -0.159]

Table 30
Contrast Tests of Estimated Marginal Means for Accuracy Across Set Size
12: 2.9 [2.79, 3.02]; 16: 2.63 [2.52, 2.74]; 20: 2.46 [2.35, 2.57]
Contrasts: 12 - 16 = 0.274 [0.216, 0.333]; 12 - 20 = 0.443 [0.387, 0.5]; 16 - 20 = 0.169 [0.116, 0.222]

Table 31
Contrast Tests of Estimated Marginal Means for Accuracy Across Set Size and Target Type
12 Gun: 2.91 [2.76, 3.06]; 16 Gun: 2.56 [2.41, 2.7]; 20 Gun: 2.31 [2.16, 2.46]
12 Nongun: 2.89 [2.74, 3.05]; 16 Nongun: 2.71 [2.55, 2.86]; 20 Nongun: 2.61 [2.46, 2.76]
Contrasts: Linear Nongun - Gun = -0.320 [-0.434, -0.207]; Quadratic Nongun - Gun = 0.018 [-0.174, 0.211]

Table 32
Contrast Tests of Estimated Marginal Means for Accuracy Across Prime Race, Set Size, and Target Type
White 12 Gun: 2.86 [2.69, 3.03]; Black 12 Gun: 2.98 [2.8, 3.15]; None 12 Gun: 2.91 [2.73, 3.08]
White 16 Gun: 2.6 [2.43, 2.76]; Black 16 Gun: 2.54 [2.37, 2.7]; None 16 Gun: 2.53 [2.37, 2.7]
White 20 Gun: 2.3 [2.14, 2.46]; Black 20 Gun: 2.32 [2.16, 2.48]; None 20 Gun: 2.32 [2.16, 2.48]
White 12 Nongun: 2.98 [2.8, 3.15]; Black 12 Nongun: 2.99 [2.81, 3.17]; None 12 Nongun: 2.72 [2.55, 2.89]
White 16 Nongun: 2.66 [2.49, 2.83]; Black 16 Nongun: 2.8 [2.62, 2.97]; None 16 Nongun: 2.66 [2.49, 2.83]
White 20 Nongun: 2.61 [2.44, 2.78]; Black 20 Nongun: 2.68 [2.51, 2.85]; None 20 Nongun: 2.55 [2.38, 2.72]
Contrasts: Linear White-Black Gun = 0.096 [-0.096, 0.288]; Quadratic White-Black Gun = -0.244 [-0.568, 0.08]; Linear White-None Gun = 0.027 [-0.163, 0.217]; Quadratic White-None Gun = -0.191 [-0.514, 0.131]; Linear Black-None Gun = -0.069 [-0.262, 0.124]; Quadratic Black-None Gun = 0.052 [-0.27, 0.374]; Linear White-Black Nongun = -0.056 [-0.261, 0.15]; Quadratic White-Black Nongun = 0.182 [-0.166, 0.529]; Linear White-None Nongun = -0.199 [-0.397, -0.002]; Quadratic White-None Nongun = 0.3 [-0.036, 0.636]; Linear Black-None Nongun = -0.143 [-0.342, 0.055]; Quadratic Black-None Nongun = 0.118 [-0.225, 0.462]
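Note that the accuracy EMMs in Tables 28-32 (and in the later accuracy tables) are on the log-odds scale of the logistic model, not proportions. A quick back-transform converts them; for example, the White-prime EMM of 2.67 from Table 28:

import math

emm_logit = 2.67                             # White-prime accuracy EMM (Table 28)
p_correct = 1 / (1 + math.exp(-emm_logit))   # inverse-logit back-transform
print(round(p_correct, 3))                   # ~0.935, i.e., about 93.5% correct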
Table 33
Multilevel Linear Regression Predicting Response Time from Race, Target Type, Set Size, and Block Order in Study 2 (t = b/SE)
Intercept: b = 1104.548, SE = 23.093, df = 36.426, t = 47.83, p < .001
Prime RaceW: b = 2.262, SE = 1.807, df = 101241.062, t = 1.25, p = .211
Prime RaceB: b = 3.846, SE = 1.805, df = 101242.134, t = 2.13, p = .033
Target Type: b = -122.827, SE = 22.176, df = 31.004, t = -5.54, p < .001
Set Size12: b = -74.603, SE = 1.797, df = 101240.555, t = -41.51, p < .001
Set Size20: b = 70.691, SE = 1.814, df = 101241.038, t = 38.97, p < .001
Block Order1: b = 63.825, SE = 1.810, df = 101245.755, t = 35.26, p < .001
Block Order3: b = -45.390, SE = 1.806, df = 101244.599, t = -25.13, p < .001
Prime RaceW x Target Type: b = -3.555, SE = 1.806, df = 101241.110, t = -1.97, p = .049
Prime RaceB x Target Type: b = -0.219, SE = 1.804, df = 101241.144, t = -0.12, p = .904
Prime RaceW x Set Size12: b = 3.318, SE = 2.541, df = 101240.667, t = 1.31, p = .192
Prime RaceB x Set Size12: b = -1.346, SE = 2.538, df = 101240.563, t = -0.53, p = .596
Prime RaceW x Set Size20: b = -1.252, SE = 2.565, df = 101240.813, t = -0.49, p = .625
Prime RaceB x Set Size20: b = 2.147, SE = 2.563, df = 101240.930, t = 0.84, p = .402
Target Type x Set Size12: b = 4.951, SE = 1.797, df = 101240.931, t = 2.76, p = .006
Target Type x Set Size20: b = -6.404, SE = 1.814, df = 101241.060, t = -3.53, p < .001
Prime RaceW x Target Type x Set Size12: b = -2.757, SE = 2.541, df = 101240.352, t = -1.08, p = .278
Prime RaceW x Target Type x Set Size20: b = -0.207, SE = 2.538, df = 101240.462, t = -0.08, p = .935
Prime RaceB x Target Type x Set Size12: b = 2.616, SE = 2.565, df = 101241.123, t = 1.02, p = .308
Prime RaceB x Target Type x Set Size20: b = -2.197, SE = 2.563, df = 101240.627, t = -0.86, p = .391
Random effects: Participant variance = 12774; Target variance = 16160. N = 308 participants, 33 targets; 101,597 observations.
Note: Race, target type, set size, and block order were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes. Block orders 1 and 3 are the first and last blocks.

Table 34
Contrast Tests of Estimated Marginal Means for Response Time Across Prime Race
White: 1107 [1061, 1152]; Black: 1108 [1063, 1154]; None: 1098 [1053, 1144]
Contrasts: White - Black = -1.583 [-7.708, 4.542]; White - None = 8.37 [2.225, 14.516]; Black - None = 9.954 [3.813, 16.094]

Table 35
Contrast Tests of Estimated Marginal Means for Response Time Across Target Type
Gun: 982 [920, 1044]; Nongun: 1227 [1164, 1291]
Contrast: Gun - Nongun = -245.654 [-332.584, -158.724]

Table 36
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size
12: 1030 [984, 1075]; 16: 1108 [1063, 1154]; 20: 1175 [1129, 1221]
Contrasts: 12 - 20 = -145.294 [-151.422, -139.166]; 12 - 16 = -78.516 [-84.618, -72.414]; 20 - 16 = 66.778 [60.619, 72.938]

Table 37
Contrast Tests of Estimated Marginal Means for Response Time Across Block Order
Block 0: 1168 [1123, 1214]; Block 1: 1086 [1041, 1132]; Block 2: 1059 [1014, 1105]
Contrasts: Linear = -109.215 [-115.354, -103.076]; Quadratic = 55.303 [44.68, 65.927]

Table 38
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size and Target Type
12 Gun: 912 [850, 974]; 16 Gun: 987 [925, 1049]; 20 Gun: 1046 [984, 1108]
12 Nongun: 1148 [1084, 1212]; 16 Nongun: 1230 [1166, 1294]; 20 Nongun: 1304 [1241, 1368]
Contrasts: Linear Nongun - Gun = 22.71 [10.455, 34.966]; Quadratic Nongun - Gun = 8.72 [-12.522, 29.961]

Table 39
Contrast Tests of Estimated Marginal Means for Response Time Across Prime Race and Target Type
White Gun: 980 [918, 1042]; Black Gun: 985 [923, 1047]; None Gun: 979 [917, 1041]
White Nongun: 1233 [1169, 1297]; Black Nongun: 1231 [1168, 1295]; None Nongun: 1217 [1154, 1281]
Contrasts: White - Black Gun = -4.92 [-13.607, 3.767]; White - None Gun = 1.042 [-7.652, 9.735]; Black - None Gun = 5.961 [-2.733, 14.655]; White - Black Nongun = 1.753 [-6.884, 10.391]; White - None Nongun = 15.699 [7.026, 24.372]; Black - None Nongun = 13.946 [5.288, 22.603]
Table 40
Multilevel Logistic Regression Predicting Correct Decisions from Race, Target Type, Set Size, and Block Order in Study 2
Intercept: b = 2.667, SE = 0.056, z = 47.497, p < .001
Prime RaceW: b = 0.001, SE = 0.017, z = 0.042, p = .967
Prime RaceB: b = 0.050, SE = 0.017, z = 2.991, p = .003
Target Type: b = -0.072, SE = 0.047, z = -1.545, p = .122
Set Size12: b = 0.239, SE = 0.017, z = 13.824, p < .001
Set Size20: b = -0.204, SE = 0.016, z = -12.874, p < .001
Block Order1: b = -0.089, SE = 0.016, z = -5.549, p < .001
Block Order3: b = 0.042, SE = 0.016, z = 2.532, p = .011
Prime RaceW x Target Type: b = -0.010, SE = 0.017, z = -0.600, p = .549
Prime RaceB x Target Type: b = -0.034, SE = 0.017, z = -2.015, p = .044
Prime RaceW x Set Size12: b = 0.012, SE = 0.025, z = 0.489, p = .625
Prime RaceB x Set Size12: b = 0.028, SE = 0.025, z = 1.113, p = .266
Prime RaceW x Set Size20: b = -0.009, SE = 0.022, z = -0.416, p = .677
Prime RaceB x Set Size20: b = -0.015, SE = 0.023, z = -0.649, p = .516
Target Type x Set Size12: b = 0.081, SE = 0.017, z = 4.702, p < .001
Target Type x Set Size20: b = -0.078, SE = 0.016, z = -4.946, p < .001
Prime RaceW x Target Type x Set Size12: b = -0.057, SE = 0.025, z = -2.323, p = .020
Prime RaceW x Target Type x Set Size20: b = 0.016, SE = 0.025, z = 0.645, p = .519
Prime RaceB x Target Type x Set Size12: b = 0.006, SE = 0.022, z = 0.289, p = .773
Prime RaceB x Target Type x Set Size20: b = 0.003, SE = 0.023, z = 0.151, p = .880
Random effects: Participant = 0.282; Target = 0.067. N = 308 participants, 33 targets; 109,925 observations.
Note: Race, target type, set size, and block order were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes. Block orders 1 and 3 are the first and last blocks.

Table 41
Contrast Tests of Estimated Marginal Means for Accuracy Across Prime Race (log-odds scale)
White: 2.67 [2.55, 2.78]; Black: 2.72 [2.6, 2.83]; None: 2.62 [2.5, 2.73]
Contrasts: White - Black = -0.049 [-0.106, 0.007]; White - None = 0.051 [-0.004, 0.107]; Black - None = 0.101 [0.045, 0.157]

Table 42
Contrast Tests of Estimated Marginal Means for Accuracy Across Target Type
Gun: 2.59 [2.45, 2.74]; Nongun: 2.74 [2.59, 2.88]
Contrast: Gun - Nongun = -0.144

Table 43
Contrast Tests of Estimated Marginal Means for Accuracy Across Set Size
12: 2.91 [2.79, 3.02]; 16: 2.63 [2.52, 2.75]; 20: 2.46 [2.35, 2.58]
Contrasts: 12 - 16 = 0.275 [0.216, 0.333]; 12 - 20 = 0.444 [0.387, 0.500]; 16 - 20 = 0.169 [0.116, 0.222]

Table 44
Contrast Tests of Estimated Marginal Means for Accuracy Across Block Order
Block 0: 2.58 [2.46, 2.69]; Block 1: 2.71 [2.6, 2.83]; Block 2: 2.71 [2.59, 2.82]
Contrasts: Linear = 0.275 [0.216, 0.333]; Quadratic = 0.169 [0.116, 0.222]

Table 45
Contrast Tests of Estimated Marginal Means for Accuracy Across Set Size and Target Type
12 Gun: 2.92 [2.76, 3.07]; 16 Gun: 2.56 [2.41, 2.7]; 20 Gun: 2.31 [2.17, 2.46]
12 Nongun: 2.9 [2.74, 3.05]; 16 Nongun: 2.71 [2.55, 2.86]; 20 Nongun: 2.61 [2.46, 2.76]
Contrasts: Linear Nongun - Gun = -0.320 [-0.434, -0.207]; Quadratic Nongun - Gun = 0.018 [-0.174, 0.211]
Table 46
Contrast Tests of Estimated Marginal Means for Accuracy Across Prime Race, Set Size, and Target Type
White 12 Gun: 2.86 [2.69, 3.03]; Black 12 Gun: 2.98 [2.8, 3.15]; None 12 Gun: 2.91 [2.73, 3.08]
White 16 Gun: 2.6 [2.43, 2.76]; Black 16 Gun: 2.54 [2.37, 2.7]; None 16 Gun: 2.53 [2.37, 2.7]
White 20 Gun: 2.3 [2.14, 2.46]; Black 20 Gun: 2.32 [2.16, 2.48]; None 20 Gun: 2.32 [2.16, 2.48]
White 12 Nongun: 2.98 [2.8, 3.15]; Black 12 Nongun: 2.99 [2.81, 3.17]; None 12 Nongun: 2.72 [2.55, 2.89]
White 16 Nongun: 2.66 [2.49, 2.83]; Black 16 Nongun: 2.8 [2.62, 2.97]; None 16 Nongun: 2.66 [2.49, 2.83]
White 20 Nongun: 2.61 [2.44, 2.78]; Black 20 Nongun: 2.68 [2.51, 2.85]; None 20 Nongun: 2.55 [2.38, 2.72]
Contrasts: Linear White-Black Gun = 0.096 [-0.096, 0.288]; Quadratic White-Black Gun = -0.244 [-0.568, 0.08]; Linear White-None Gun = 0.027 [-0.163, 0.217]; Quadratic White-None Gun = -0.191 [-0.514, 0.131]; Linear Black-None Gun = -0.069 [-0.262, 0.124]; Quadratic Black-None Gun = 0.052 [-0.27, 0.374]; Linear White-Black Nongun = -0.056 [-0.261, 0.15]; Quadratic White-Black Nongun = 0.182 [-0.166, 0.529]; Linear White-None Nongun = -0.199 [-0.397, -0.002]; Quadratic White-None Nongun = 0.3 [-0.036, 0.636]; Linear Black-None Nongun = -0.143 [-0.342, 0.055]; Quadratic Black-None Nongun = 0.118 [-0.225, 0.462]

Table 47
Multilevel Linear Regression Predicting Response Time from Race, Target Type, Set Size, and Manipulation Check Errors in Study 2
Intercept: b = 1104.375, SE = 23.080, df = 36.387, t = 47.849, p < .001
Prime RaceW: b = 2.299, SE = 1.817, df = 101238.078, t = 1.265, p = .206
Prime RaceB: b = 3.945, SE = 1.816, df = 101239.181, t = 2.173, p = .030
Target Type: b = -122.892, SE = 22.171, df = 31.006, t = -5.543, p < .001
Set Size12: b = -74.441, SE = 1.809, df = 101237.591, t = -41.159, p < .001
Set Size20: b = 70.673, SE = 1.826, df = 101238.068, t = 38.713, p < .001
mouseC: b = -1.270, SE = 0.873, df = 306.185, t = -1.455, p = .147
Prime RaceW x Target Type: b = -3.538, SE = 1.817, df = 101238.143, t = -1.947, p = .052
Prime RaceB x Target Type: b = -0.183, SE = 1.816, df = 101238.176, t = -0.101, p = .920
Prime RaceW x Set Size12: b = 3.254, SE = 2.558, df = 101237.703, t = 1.272, p = .203
Prime RaceB x Set Size12: b = -1.366, SE = 2.554, df = 101237.584, t = -0.535, p = .593
Prime RaceW x Set Size20: b = -1.171, SE = 2.582, df = 101237.850, t = -0.454, p = .650
Prime RaceB x Set Size20: b = 2.147, SE = 2.580, df = 101237.972, t = 0.832, p = .405
Target Type x Set Size12: b = 5.062, SE = 1.809, df = 101237.951, t = 2.799, p = .005
Target Type x Set Size20: b = -6.530, SE = 1.826, df = 101238.108, t = -3.577, p < .001
Target Type x mouseC: b = -0.364, SE = 0.173, df = 101244.165, t = -2.106, p = .035
Set Size12 x mouseC: b = 0.411, SE = 0.243, df = 101237.613, t = 1.688, p = .091
Set Size20 x mouseC: b = -0.288, SE = 0.246, df = 101237.958, t = -1.173, p = .241
Prime RaceW x Target Type x Set Size12: b = -2.900, SE = 2.558, df = 101237.368, t = -1.134, p = .257
Prime RaceW x Target Type x Set Size20: b = -0.186, SE = 2.554, df = 101237.486, t = -0.073, p = .942
Prime RaceB x Target Type x Set Size12: b = 2.549, SE = 2.582, df = 101238.162, t = 0.987, p = .323
Prime RaceB x Target Type x Set Size20: b = -2.185, SE = 2.580, df = 101237.660, t = -0.847, p = .397
Target Type x Set Size12 x mouseC: b = 0.454, SE = 0.243, df = 101238.261, t = 1.868, p = .062
Target Type x Set Size20 x mouseC: b = -0.390, SE = 0.246, df = 101238.300, t = -1.586, p = .113
Random effects: Participant variance = 12673; Target variance = 16151. N = 308 participants, 33 targets; 101,597 observations.
Note: Race, target type, and set size were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes. MouseC is the centered manipulation check errors.
Table 48
Contrast Tests of Estimated Marginal Means for Response Time Across Prime Race
White: 1107 [1061, 1152]; Black: 1108 [1063, 1154]; None: 1098 [1051, 1144]
Contrasts: White - Black = -1.646 [-7.811, 4.519]; White - None = 8.544 [2.370, 14.718]; Black - None = 10.190 [4.021, 16.359]

Table 49
Contrast Tests of Estimated Marginal Means for Response Time Across Target Type
Gun: 981 [920, 1043]; Nongun: 1227 [1164, 1291]
Contrast: Gun - Nongun = -245.783 [-332.691, -158.876]

Table 50
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size
12: 1030 [985, 1075]; 16: 1108 [1063, 1154]; 20: 1175 [1130, 1220]
Contrasts: 12 - 16 = -78.236 [-84.378, -72.094]; 12 - 20 = -145.171 [-151.339, -139.004]; 16 - 20 = -66.935 [-73.134, -60.736]

Table 51
Contrast Tests of Estimated Marginal Means for Response Time Across Prime Race and Target Type
White Gun: 980 [918, 1042]; Black Gun: 985 [923, 1047]; None Gun: 979 [917, 1041]
White Nongun: 1233 [1169, 1297]; Black Nongun: 1231 [1168, 1295]; None Nongun: 1217 [1153, 1281]
Contrasts: White - Black Gun = -4.979 [-13.722, 3.764]; White - None Gun = 1.293 [-7.449, 10.035]; Black - None Gun = 6.272 [-2.47, 15.014]; White - Black Nongun = 1.71 [-6.984, 10.403]; White - None Nongun = 15.805 [7.083, 24.526]; Black - None Nongun = 14.095 [5.389, 22.801]

Table 52
Contrast Tests of Estimated Marginal Means for Response Time Across Set Size and Target Type
12 Gun: 912 [850, 974]; 16 Gun: 987 [925, 1049]; 20 Gun: 1046 [984, 1108]
12 Nongun: 1148 [1084, 1212]; 16 Nongun: 1230 [1166, 1293]; 20 Nongun: 1304 [1241, 1368]
Contrasts: Linear Nongun - Gun = 23.086 [10.751, 35.421]; Quadratic Nongun - Gun = 8.836 [-12.543, 30.215]

Table 53
Contrast Tests of Estimated Marginal Means for Response Time Across Manipulation Check Errors and Target Type
-1 SD Gun: 994 [930, 1057]; Mean Gun: 981 [920, 1043]; +1 SD Gun: 969 [906, 1033]
-1 SD Nongun: 1234 [1169, 1299]; Mean Nongun: 1227 [1164, 1291]; +1 SD Nongun: 1221 [1156, 1285]
Trends: Gun = -1.62 [-3.37, 0.126]; Nongun = -0.907 [-2.65, 0.836]; Gun - Nongun = -0.713 [-1.391, -0.034]

Table 54
Multilevel Logistic Regression Predicting Correct Decisions from Race, Target Type, Set Size, and Manipulation Check Errors in Study 2
Intercept: b = 2.670, SE = 0.056, z = 47.772, p < .001
Prime RaceW: b = 0.002, SE = 0.017, z = 0.097, p = .922
Prime RaceB: b = 0.051, SE = 0.017, z = 3.037, p = .002
Target Type: b = -0.063, SE = 0.047, z = -1.359, p = .174
Set Size12: b = 0.240, SE = 0.017, z = 13.844, p < .001
Set Size20: b = -0.205, SE = 0.016, z = -12.897, p < .001
mouseC: b = -0.013, SE = 0.004, z = -2.963, p = .003
Prime RaceW x Target Type: b = -0.010, SE = 0.017, z = -0.599, p = .549
Prime RaceB x Target Type: b = -0.034, SE = 0.017, z = -2.022, p = .043
Prime RaceW x Set Size12: b = 0.012, SE = 0.025, z = 0.497, p = .619
Prime RaceB x Set Size12: b = 0.028, SE = 0.025, z = 1.113, p = .266
Prime RaceW x Set Size20: b = -0.009, SE = 0.022, z = -0.423, p = .672
Prime RaceB x Set Size20: b = -0.015, SE = 0.023, z = -0.654, p = .513
Target Type x Set Size12: b = 0.082, SE = 0.017, z = 4.733, p < .001
Target Type x Set Size20: b = -0.079, SE = 0.016, z = -4.980, p < .001
Target Type x mouseC: b = -0.015, SE = 0.002, z = -9.739, p < .001
Prime RaceW x Target Type x Set Size12: b = -0.057, SE = 0.025, z = -2.330, p = .020
Prime RaceW x Target Type x Set Size20: b = 0.017, SE = 0.025, z = 0.666, p = .505
Prime RaceB x Target Type x Set Size12: b = 0.006, SE = 0.022, z = 0.269, p = .788
Prime RaceB x Target Type x Set Size20: b = 0.004, SE = 0.023, z = 0.161, p = .872
Random effects: Participant = 0.282; Target = 0.067. N = 308 participants, 33 targets; 109,925 observations.
Note: Race, target type, and set size were effect coded for analysis. "W" represents White primes, and "B" indicates Black primes. MouseC is the centered manipulation check errors.

Table 55
Contrast Tests of Estimated Marginal Means for Accuracy Across Prime Race (log-odds scale)
White: 2.67 [2.56, 2.79]; Black: 2.72 [2.61, 2.84]; None: 2.62 [2.5, 2.73]
Contrasts: White - Black = -0.049 [-0.106, 0.008]; White - None = 0.054 [-0.002, 0.109]; Black - None = 0.103 [0.047, 0.159]

Table 56
Contrast Tests of Estimated Marginal Means for Accuracy Across Target Type
Gun: 2.61 [2.47, 2.75]; Nongun: 2.73 [2.59, 2.88]
Contrast: Gun - Nongun = -0.127 [-0.310, 0.056]

Table 57
Contrast Tests of Estimated Marginal Means for Accuracy Across Set Size
12: 2.91 [2.8, 3.03]; 16: 2.64 [2.52, 2.75]; 20: 2.46 [2.35, 2.58]
Contrasts: 12 - 16 = 0.274 [0.216, 0.333]; 12 - 20 = 0.443 [0.387, 0.500]; 16 - 20 = 0.169 [0.116, 0.222]

Table 58
Contrast Tests of Estimated Marginal Means for Accuracy Across Set Size and Target Type
12 Gun: 2.93 [2.78, 3.08]; 16 Gun: 2.57 [2.42, 2.72]; 20 Gun: 2.32 [2.17, 2.47]
12 Nongun: 2.89 [2.74, 3.04]; 16 Nongun: 2.7 [2.55, 2.85]; 20 Nongun: 2.61 [2.46, 2.76]
Contrasts: Linear Nongun - Gun = -0.320 [-0.434, -0.209]; Quadratic Nongun - Gun = 0.018 [-0.174, 0.210]

Table 59
Contrast Tests of Estimated Marginal Means for Accuracy Across Manipulation Check Errors and Target Type
-1 SD Gun: 2.82 [2.66, 2.97]; Mean Gun: 2.61 [2.47, 2.75]; +1 SD Gun: 2.4 [2.24, 2.55]
-1 SD Nongun: 2.72 [2.56, 2.88]; Mean Nongun: 2.73 [2.59, 2.88]; +1 SD Nongun: 2.75 [2.59, 2.91]
Trends: Gun = -0.028 [-0.037, -0.019]; Nongun = 0.002 [-0.007, 0.011]; Gun - Nongun = -0.030 [-0.036, -0.024]

Table 60
Contrast Tests of Estimated Marginal Means for Accuracy Across Prime Race, Set Size, and Target Type
White 12 Gun: 2.88 [2.71, 3.05]; Black 12 Gun: 2.99 [2.82, 3.17]; None 12 Gun: 2.92 [2.75, 3.10]
White 16 Gun: 2.61 [2.44, 2.78]; Black 16 Gun: 2.55 [2.39, 2.72]; None 16 Gun: 2.55 [2.38, 2.71]
White 20 Gun: 2.31 [2.15, 2.47]; Black 20 Gun: 2.33 [2.17, 2.49]; None 20 Gun: 2.33 [2.17, 2.49]
White 12 Nongun: 2.97 [2.79, 3.15]; Black 12 Nongun: 2.99 [2.81, 3.17]; None 12 Nongun: 2.71 [2.54, 2.89]
White 16 Nongun: 2.66 [2.49, 2.83]; Black 16 Nongun: 2.80 [2.62, 2.97]; None 16 Nongun: 2.65 [2.48, 2.82]
White 20 Nongun: 2.60 [2.43, 2.77]; Black 20 Nongun: 2.67 [2.50, 2.85]; None 20 Nongun: 2.55 [2.38, 2.71]
Contrasts: Linear White-Black Gun = 0.097 [-0.095, 0.289]; Quadratic White-Black Gun = -0.245 [-0.569, 0.080]; Linear White-None Gun = 0.028 [-0.162, 0.218]; Quadratic White-None Gun = -0.192 [-0.515, 0.132]; Linear Black-None Gun = -0.069 [-0.263, 0.125]; Quadratic Black-None Gun = 0.053 [-0.270, 0.376]; Linear White-Black Nongun = -0.056 [-0.261, 0.150]; Quadratic White-Black Nongun = 0.184 [-0.164, 0.531]; Linear White-None Nongun = -0.199 [-0.397, -0.002]; Quadratic White-None Nongun = 0.301 [-0.035, 0.637]; Linear Black-None Nongun = -0.144 [-0.343, 0.055]; Quadratic Black-None Nongun = 0.117 [-0.226, 0.460]

APPENDIX C: DDM EFFECTS TABLES
Table 61
Main Effects of Alpha (posterior mode, 95% HDI in brackets, ESS)
White 12: 2.076 [2.040, 2.112], ESS 9371.7; Black 12: 2.098 [2.063, 2.136], ESS 9624.3; None 12: 2.021 [1.985, 2.054], ESS 9704.0
White 16: 2.110 [2.080, 2.147], ESS 10003; Black 16: 2.121 [2.090, 2.160], ESS 9646.2; None 16: 2.094 [2.060, 2.128], ESS 9599.0
White 20: 2.167 [2.132, 2.201], ESS 9554.5; Black 20: 2.179 [2.148, 2.217], ESS 10003; None 20: 2.146 [2.111, 2.178], ESS 10003
Note: Numbers in the condition labels represent the set size. ESS is the estimated sample size; Kruschke (2014) recommends an ESS of 10,000 for stable HDI estimates.

Table 62
Main Effects of Beta
White: 0.548 [0.541, 0.554], ESS 8364.6; Black: 0.539 [0.533, 0.546], ESS 8120.5; None: 0.549 [0.542, 0.555], ESS 8425.3
Note: ESS is the estimated sample size; Kruschke (2014) recommends an ESS of 10,000 for stable HDI estimates.

Table 63
Main Effects of Tau
White 12 Gun: 0.421 [0.410, 0.430], ESS 8175.7; Black 12 Gun: 0.412 [0.402, 0.423], ESS 10003; None 12 Gun: 0.422 [0.413, 0.433], ESS 10003
White 16 Gun: 0.426 [0.416, 0.437], ESS 10003; Black 16 Gun: 0.418 [0.408, 0.429], ESS 9606.4; None 16 Gun: 0.421 [0.411, 0.432], ESS 9703.6
White 20 Gun: 0.419 [0.408, 0.430], ESS 9314.6; Black 20 Gun: 0.413 [0.402, 0.424], ESS 10003; None 20 Gun: 0.425 [0.414, 0.435], ESS 10003
White 12 Nongun: 0.416 [0.403, 0.426], ESS 10003; Black 12 Nongun: 0.422 [0.411, 0.433], ESS 9498.5; None 12 Nongun: 0.422 [0.411, 0.433], ESS 10003
White 16 Nongun: 0.433 [0.422, 0.444], ESS 10003; Black 16 Nongun: 0.433 [0.421, 0.444], ESS 9322.0; None 16 Nongun: 0.434 [0.422, 0.446], ESS 9587.2
White 20 Nongun: 0.444 [0.432, 0.455], ESS 9285.2; Black 20 Nongun: 0.438 [0.427, 0.450], ESS 10003; None 20 Nongun: 0.444 [0.431, 0.455], ESS 10003
Note: Numbers in the condition labels represent set size.

Table 64
Main Effects of Delta
White 12 Gun: 1.630 [1.582, 1.685], ESS 9712.5; Black 12 Gun: 1.679 [1.629, 1.733], ESS 9428.5; None 12 Gun: 1.618 [1.564, 1.668], ESS 9575.1
White 16 Gun: 1.377 [1.327, 1.424], ESS 9609.3; Black 16 Gun: 1.368 [1.317, 1.414], ESS 9690.2; None 16 Gun: 1.336 [1.290, 1.389], ESS 10003
White 20 Gun: 1.157 [1.115, 1.210], ESS 10003; Black 20 Gun: 1.191 [1.145, 1.237], ESS 9617.5; None 20 Gun: 1.169 [1.120, 1.214], ESS 9631.7
White 12 Nongun: -1.383 [-1.434, -1.341], ESS 10138.0; Black 12 Nongun: -1.403 [-1.447, -1.356], ESS 9809.2; None 12 Nongun: -1.373 [-1.419, -1.324], ESS 10128.9
White 16 Nongun: -1.250 [-1.296, -1.206], ESS 10003; Black 16 Nongun: -1.273 [-1.322, -1.232], ESS 10094.6; None 16 Nongun: -1.270 [-1.313, -1.224], ESS 9694.4
White 20 Nongun: -1.187 [-1.231, -1.144], ESS 9464.2; Black 20 Nongun: -1.179 [-1.223, -1.137], ESS 10003; None 20 Nongun: -1.186 [-1.231, -1.143], ESS 10003
Note: Numbers in the condition labels represent set size.

Table 65
Summary Effects of Alpha (posterior mode with 95% HDI; standardized effect d with 95% HDI)
White - Black: -0.017 [-0.044, 0.013]; d = -0.090 [-0.237, 0.067]
White - None: 0.030 [0.004, 0.060]; d = 0.166 [0.020, 0.318]
Black - None: 0.049 [0.019, 0.075]; d = 0.260 [0.104, 0.406]
12 - 16: -0.049 [-0.075, -0.018]; d = -0.252 [-0.400, -0.097]
12 - 20: -0.100 [-0.126, -0.070]; d = -0.535 [-0.679, -0.373]
16 - 20: -0.055 [-0.081, -0.026]; d = -0.293 [-0.440, -0.144]
WB at set size 12: -0.019 [-0.074, 0.027]; d = -0.102 [-0.401, 0.140]
WB at set size 16: -0.013 [-0.060, 0.036]; d = -0.067 [-0.317, 0.195]
WB at set size 20: -0.018 [-0.062, 0.034]; d = -0.095 [-0.330, 0.186]
WN at set size 12: 0.055 [0.007, 0.106]; d = 0.320 [0.029, 0.558]
WN at set size 16: 0.018 [-0.032, 0.064]; d = 0.095 [-0.165, 0.348]
WN at set size 20: 0.017 [-0.027, 0.069]; d = 0.087 [-0.145, 0.368]
BN at set size 12: 0.082 [0.028, 0.128]; d = 0.434 [0.151, 0.683]
BN at set size 16: 0.027 [-0.019, 0.080]; d = 0.144 [-0.093, 0.435]
BN at set size 20: 0.034 [-0.013, 0.083]; d = 0.177 [-0.070, 0.440]
Note: Racial group comparisons: WB (White - Black), WN (White - None), BN (Black - None).
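The notes to Tables 61-65 refer to Kruschke's (2014) recommendations for highest density intervals (HDIs) and effective sample size. As a reference point, the sketch below shows the standard way a 95% HDI is computed from posterior samples (the narrowest interval containing 95% of them); it is a generic illustration, not the dissertation's analysis code.

import numpy as np

def hdi(samples, mass=0.95):
    """Return the narrowest interval containing `mass` of the samples."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.floor(mass * n))     # number of samples the interval must span
    widths = s[k:] - s[:n - k]      # width of every candidate interval
    i = int(np.argmin(widths))      # index of the narrowest one
    return s[i], s[i + k]

# Example with a skewed posterior, where the HDI differs from the
# equal-tailed interval:
rng = np.random.default_rng(1)
draws = rng.gamma(shape=2.0, scale=1.0, size=10000)
print(hdi(draws))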
Table 66
Summary Effects of Beta
White - Black: 0.008 [0.001, 0.017]; d = 0.284 [-0.027, 0.576]
White - None: -0.001 [-0.011, 0.075]; d = -0.044 [-0.348, 0.260]
Black - None: -0.010 [-0.018, 0.000]; d = -0.310 [-0.611, 0.006]

Table 67
Summary Effects of Delta, Gun - Nongun
Main effects (posterior mode with 95% HDI; d with 95% HDI):
Set size 12 - 16: 0.204 [0.177, 0.230]; d = 0.772 [0.659, 0.875]
Set size 12 - 20: 0.337 [0.310, 0.363]; d = 1.264 [1.155, 1.386]
Set size 16 - 20: 0.136 [0.108, 0.159]; d = 0.500 [0.405, 0.603]
White - Black: -0.018 [-0.043, 0.011]; d = -0.066 [-0.160, 0.041]
White - None: 0.009 [-0.019, 0.036]; d = 0.033 [-0.072, 0.133]
Black - None: 0.022 [-0.004, 0.050]; d = 0.084 [-0.010, 0.194]
Gun - Nongun: 0.114 [0.088, 0.139]; d = 0.433 [0.330, 0.523]
Interactions (rows: WB x Object; WN x Object; BN x Object; Target Type x Set Size 12-16, 12-20, 16-20; Target Type at Set Size 12, 16, 20; linear test; quadratic test). The mode, 95% HDI, d, and 95% HDI columns for these rows appear column-wise in the original as: 0.135 0.025 0.039 0.044 0.108 -0.038 -0.024 -0.019 0.056 -0.005 0.010 0.014 0.081 -0.145 -0.090 -0.069 0.208 -0.020 0.029 0.053 0.303 / 0.259 0.093 -0.011 -0.194 0.012 0.816 0.203 -0.184 -0.739 0.037 0.984 0.355 -0.041 -0.714 0.044 0.218 0.057 -0.048 -0.192 0.011 0.299 0.135 0.028 -0.188 0.013 0.405 0.105 0.505 0.202 0.053 0.159 0.108 0.027 0.077 0.091 0.147 0.169 0.408 0.604 0.295 1.132 0.496 0.100 -0.701 0.047
Note: Racial group comparisons: WB (White - Black), WN (White - None), BN (Black - None). Combined set size values indicate an interaction between the two.

Table 68
Summary Effects of Delta, Gun
RWB-1216: 8 mode -0.025 [-0.077, 0.019]; d = -0.098 [-0.289, 0.072]
RWB-1220: mode -0.011 [-0.055, 0.039]; d = -0.040 [-0.204, 0.150]
RWB-1620: mode 0.023 [-0.027, 0.064]; d = 0.087 [-0.098, 0.244]
RWN-1216: mode -0.007 [-0.059, 0.038]; d = -0.027 [-0.220, 0.145]
RWN-1220: mode 0.012 [-0.036, 0.058]; d = 0.044 [-0.137, 0.220]
RWN-1620: mode 0.019 [-0.024, 0.066]; d = 0.071 [-0.089, 0.251]
RBN-1216: mode 0.016 [-0.029, 0.067]; d = 0.061 [-0.111, 0.251]
RBN-1220: mode 0.018 [-0.025, 0.069]; d = 0.070 [-0.096, 0.260]
RBN-1620: mode 0.001 [-0.044, 0.048]; d = 0.003 [-0.176, 0.170]
Note: RWB = Race White-Black; RWN = Race White-None; RBN = Race Black-None. The numbers following the abbreviations represent set size interactions (e.g., 1216 = set size 12 vs. 16).

Table 69
Summary Effects of Delta, Gun
RWB-1216: mode 0.008 [-0.040, 0.047]; d = 0.029 [-0.153, 0.178]
RWB-1220: mode -0.013 [-0.052, 0.034]; d = -0.048 [-0.204, 0.119]
RWB-1620: mode -0.016 [-0.058, 0.026]; d = -0.062 [-0.216, 0.099]
RWN-1216: mode 0.017 [-0.028, 0.060]; d = 0.062 [-0.104, 0.231]
RWN-1220: mode 0.007 [-0.036, 0.050]; d = 0.027 [-0.133, 0.187]
RWN-1620: mode -0.008 [-0.052, 0.032]; d = -0.032 [-0.190, 0.124]
RBN-1216: mode 0.016 [-0.032, 0.057]; d = 0.061 [-0.118, 0.216]
RBN-1220: mode 0.018 [-0.027, 0.060]; d = 0.070 [-0.096, 0.233]
RBN-1620: mode 0.009 [-0.037, 0.048]; d = 0.033 [-0.139, 0.180]
Note: RWB = Race White-Black; RWN = Race White-None; RBN = Race Black-None. The numbers following the abbreviations represent set size interactions (e.g., 1216 = set size 12 vs. 16).
APPENDIX D: POSTERIOR PREDICTIVE CHECKS

To evaluate the fit of the specified drift-diffusion model, I used JAGS to simulate decision and response time data from the condition-level posterior distributions derived from the DDM. Essentially, the posterior values were used to generate 10,000 sample datasets. This yields a large amount of data: 360 trials x 308 participants x 10,000 sampled values. These data were then aggregated at the condition level, since the study analyses were based on condition-level estimates, and were used to summarize the choice probabilities, response times, and response time distributions.

For the choice probabilities, the observed and model-predicted means were plotted for each condition and response type. Hit rates are overestimated and false alarms are slightly underestimated, though the extent of this misestimation is minimal. That is, the model generated data suggesting higher accuracy than is found in the observed data.

For response times, the observed and model-predicted means were plotted for each condition and response type. These comparisons indicated that the predicted means for correct gun responses were overestimated, whereas the correct decision times for non-gun responses were accurate. The predicted incorrect gun and non-gun response times, however, were faster than the observed data by a large margin.

Finally, to better evaluate what may be causing these response time differences, the observed and model-predicted response time distributions were plotted for each condition, such that the top of each figure shows correct responses and the bottom shows incorrect responses for the same condition. For correct gun trials, the model slightly underestimates the average response time (central tendency) while tending to overestimate the frequency of longer response times (the right-hand tail of the distribution). For incorrect responses to gun stimuli, the predicted response times exhibit a strong right skew, and, notably, the observed data shift toward longer response times (a rightward shift in central tendency) with increasing set size. For correct identifications of non-gun objects, the model tends to overestimate the average response time and underestimate the extremes of the response times. For incorrect responses to non-gun stimuli, the shift in observed response times across set sizes is less marked than in the gun conditions, but the rightward shift in central tendency is still present. Moreover, the observed response time distributions are broader than the predicted ones, potentially indicating an overestimation of the drift rate.

The fit issues observed cannot readily be attributed to the threshold (alpha) or non-decision time (tau) parameters. Typically, discrepancies caused by these parameters would manifest as uniform changes in the shape of the response time distributions across both correct and incorrect decisions. However, the analysis shows that correct decisions generally fit the model predictions better than incorrect ones, indicating a different source of error.
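To make the procedure above concrete, here is a minimal sketch of how one can simulate choices and response times from posterior DDM draws and compare predicted summaries against the observed data. It assumes the Wiener parameterization used throughout (alpha = boundary separation, tau = non-decision time, beta = relative start point, delta = drift rate); the parameter values and trial counts are illustrative only, and the actual checks were generated through JAGS rather than this code.

import numpy as np

rng = np.random.default_rng(0)

def simulate_wiener(alpha, tau, beta, delta, dt=0.001, max_t=5.0):
    """Euler simulation of one Wiener diffusion trial.
    Returns (choice, rt): choice 1 = upper boundary, 0 = lower boundary."""
    x = beta * alpha                 # start point
    t = 0.0
    while 0.0 < x < alpha and t < max_t:
        x += delta * dt + rng.normal(0.0, np.sqrt(dt))  # drift + diffusion
        t += dt
    return (1 if x >= alpha else 0), t + tau

# One simulated dataset per posterior draw (two illustrative draws here).
posterior_draws = [(2.08, 0.42, 0.55, 1.63), (2.12, 0.43, 0.54, 1.60)]
predicted = [simulate_wiener(*d) for d in posterior_draws for _ in range(200)]

# Summaries to compare against their observed counterparts.
rts = np.array([rt for _, rt in predicted])
choice_prop = np.mean([c for c, _ in predicted])
print("predicted choice proportion:", round(choice_prop, 3))
print("predicted RT quantiles:", np.quantile(rts, [0.1, 0.5, 0.9]).round(3))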
In addition, the start point (beta) is unlikely to be the cause, given that the starkest differences occur across set sizes in the incorrect-decision response time distributions. Set size cannot be meaningfully modeled in the start point parameter because participants have no prior knowledge about the upcoming trial's set size. Instead, the differences in model fit are more plausibly associated with unaccounted-for variation in the drift rate (delta).

For example, closer inspection of the observed incorrect response time distributions reveals multiple peaks, suggesting that these distributions may be multimodal. One factor that could account for this is practice effects. Recall that in Studies 1 and 2, practice effects were found such that participants' response times decreased from block 1 to block 2 but plateaued from block 2 to block 3. As an exploratory analysis, I plotted the observed response time distributions for incorrect non-gun trials across blocks and set sizes to determine whether the distribution shifted in a way that would create these multimodal peaks. Looking at the first and last blocks, however, it is not clear that this was the largest contributing factor. Notably, there are instances where the first block has normally distributed response times but two strong peaks emerge by the final block.

This suggests that more may be occurring, and one candidate moderator is the manipulation check. I reasoned that manipulation check performance might be related to the different search strategies participants engage in (i.e., fast target present/absent decisions versus slower searches for the specific target). To extend this work, I examined the manipulation check and plotted response times for participants who were highly accurate (fewer than 1 error; 36%) and participants with varied errors (greater than 9 errors; 14%) in blocks 1 and 3. While not exact, most of the block 1 response time distributions are relatively similar, with divergences occurring at the final block. For the low-error group, a negatively skewed, multimodal distribution develops across set sizes and race, while for the high-error group the change in the distributions is less consistent. Neither variable fully explains the multimodal distributions found in the overall response times, given that the peaks appear in both groups.

One remaining possibility is that some of the non-gun items are more difficult to locate, leading to longer incorrect response times. The plots breaking down the response time distributions for correct and incorrect responses at set size 20 for non-gun objects reveal that this is the case: the multimodal peaks in the data are best explained by participants struggling with some of the non-gun items. The code for the drift-diffusion model therefore needs modification to include intercepts for the different object categories in the drift rates. This adjustment is essential for optimizing the model's fit to the data.

Figure 16: Posterior predictions of hit and false alarm rates for Study 2. X's represent observed condition-level choice proportions. Squares represent predicted condition-level choice proportions. Bars are the 95% HDI.

Figure 17: Posterior predictions of response times for Study 2. X's represent observed condition-level mean response times. Squares represent predicted condition-level mean response times. Bars are the 95% HDI.
Figure 18: Observed (black) and predicted (gray) response time distributions for each response type at the condition level in Study 2. The top part of the graph shows correct responses; the bottom shows incorrect responses.

Figure 19: Observed (black) and predicted (gray) response time distributions for each response type at the condition level in Study 2. The top part of the graph shows correct responses; the bottom shows incorrect responses.

Figure 20: Response time distributions for block 1 (red) and block 3 (blue) for incorrect responses in the non-gun conditions.

Figure 21: Response time distributions for block 1 (red) and block 3 (blue) for incorrect responses in the non-gun conditions in both the low and high manipulation check error groups.

Figure 22: Correct (top) and incorrect (bottom) response time distributions for non-gun objects at set size 20.

Figure 23: Correct (top) and incorrect (bottom) response time distributions for non-gun objects at set size 20.