DISPATCH INFORMATION AND POLICE USE OF FORCE: COMPUTATIONALLY MODELING SIMULATED DECISIONS TO SHOOT By David J. Johnson A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Psychology—Doctor of Philosophy 2017 ABSTRACT DISPATCH INFORMATION AND POLICE USE OF FORCE: COMPUTATIONALLY MODELING SIMULATED DECISIONS TO SHOOT By David J. Johnson The decision to use lethal force against a civilian is one of the most difficult decisions police officers face. There has been increasing concern that racial bias amongst police officers has led to increased shootings of Black Americans. Researchers have used simplified shooting tasks to test this question in the laboratory. Such studies typically reveal a bias to shoot unarmed Black men more than unarmed White men. However, such studies have a major shortcoming: they do not include several important features of the real-world decision environment. The following four studies tested whether dispatch information, information about a suspect given to police by dispatch, influenced shooting decisions and racial bias. Untrained civilians and trained officers made better decisions when dispatch information was correct. Dispatch information was also sufficient to eliminate racial bias in shooting decisions. This demonstrates a limitation in generalizing laboratory findings of racial bias to real-world shooting decisions. In addition, I used a computational model, the drift diffusion model, to test how dispatch information, race, and expertise influence shooting decisions. This modeling showed that these factors influence how individuals collect information for the decision to shoot in an online fashion, rather than creating an a priori bias to favor the decision to shoot. I discuss what part of dispatch information may reduce racial bias, as well as implications for police recruitment and training. This work is dedicated to my parents, Anne and Jeff Johnson. Your love and support provided the foundation I needed to succeed. This work is also dedicated to Fish and Lisa Seylar, who were always there for me when I needed encouragement. ACKNOWLEDGEMENTS This work would not have been possible without help from dozens of police officers from the Midwest who took the time to participate in this research. In addition to dutifully completing the experimental tasks, they also gave insightful comments on ways to improve the study design. They also provided revealing comments about their on-the-job experiences and the hardships that officers face at a time when officer-civilian relations are particularly strained. Without their support and trust, this work would not have been possible. I hope that this work provides a foundation for developing programs to help officers perform their duties and restore trust between officers and the communities they police. I am also thankful for the guidance given by my graduate advisor, Joseph Cesario, and the rest of my committee. Without Joe's assistance I would never have pursued this avenue of research, let alone had the resources necessary to successfully partner with law enforcement agencies. The computational modeling class that Timothy Pleskac taught introduced me to the world of modeling that would prove to be an integral part of my dissertation and work more generally. Despite being half a world away, Tim was willing to answer my numerous emails and requests to meet. I am indebted to him for inadvertently becoming my second advisor.
I would also like to thank my fellow graduate students, especially Peter Kvam, who helped me sort through some of the more complex analytical issues in my dissertation. These students—my friends—gave me the motivation to come into work each day and the support to keep working when I encountered difficulties. Finally, I’m grateful to Kesha Sebert, who provided the soundtrack to this manuscript. iv TABLE OF CONTENTS LIST OF TABLES ........................................................................................................................ vii LIST OF FIGURES ....................................................................................................................... ix INTRODUCTION ...........................................................................................................................1 Understanding the Decision to Shoot ..................................................................................2 Drift Diffusion Model ..........................................................................................................7 Existing DDM Research on the FPST .....................................................................9 Advantages of the DDM Approach .......................................................................11 Conceptual Advances.........................................................................................................13 Bayesian Estimation of the DDM ......................................................................................14 METHOD AND RESULTS ..........................................................................................................17 Overview of the Studies .....................................................................................................17 Study 1: The Role of Dispatch Information and Expertise ................................................18 Method ...................................................................................................................18 Participants.................................................................................................18 Procedure ...................................................................................................19 Results ....................................................................................................................20 Behavioral Analyses ..................................................................................20 Summary ........................................................................................23 Hierarchical DDM .....................................................................................24 Model Specification and Selection ................................................24 Hierarchical DDM Analysis ..........................................................24 Summary ........................................................................................28 Discussion ..............................................................................................................29 Role of Dispatch Information ....................................................................29 Unpacking Response Time Differences.....................................................30 The Lack of a Race Effect .........................................................................31 Study 2: Separating the Role of Race and Weapon Information .......................................32 Method 
...................................................................................................................33 Participants and Study Design ...................................................................33 Results ....................................................................................................................33 Behavioral Analyses ..................................................................................33 Summary ........................................................................................36 Hierarchical DDM .....................................................................................37 Model Specification and Selection ................................................37 Hierarchical DDM Analyses ..........................................................37 Summary ........................................................................................40 Discussion ..............................................................................................................40 Study 3: Model Recovery of Start Point ............................................................................42 Method and Results................................................................................................42 Discussion ..............................................................................................................44 v Study 4: Model Validation with Payoff Manipulation ......................................................45 Method ...................................................................................................................45 Results ....................................................................................................................46 Behavioral Analysis ...................................................................................46 Summary ........................................................................................48 Hierarchical DDM .....................................................................................48 Model Specification and Selection ................................................48 Hierarchical DDM Analyses ..........................................................48 Summary ........................................................................................50 Discussion ..............................................................................................................51 GENERAL DISCUSSION ............................................................................................................54 What Part of Dispatch Information Reduces Racial Bias? 
....................................................56 Modeling the Role of Dispatch Information and Expertise ...................................................58 Individual Differences in Racial Bias ....................................................................................61 Benefits of a Diffusion Model Approach to Decisions ..........................................................63 Conclusion .............................................................................................................................65 APPENDICES ...............................................................................................................................66 APPENDIX A: ANOVA Tables........................................................................................67 APPENDIX B: DDM Effects Tables.................................................................................73 APPENDIX C: Hierarchical Drift Diffusion Model ..........................................................76 APPENDIX D: JAGS Code ...............................................................................................77 APPENDIX E: Posterior Predictions .................................................................................79 REFERENCES ..............................................................................................................................92 vi LIST OF TABLES Table 1: Summary of Parameter Recovery Study..........................................................................44 Table 2: Payoff Values for the FPST by Block .............................................................................46 Table 3: ANOVA Summary Table for Study 1 Error Rates ..........................................................67 Table 4: ANOVA Summary Table for Study 1 Response Times ..................................................68 Table 5: ANOVA Summary Table for Study 2 Error Rates (Weapon vs. Control) ......................69 Table 6: ANOVA Summary Table for Study 2 Error Rates (Race vs. Control) ...........................69 Table 7: ANOVA Summary Table for Study 2 Error Rates (Weapon Condition) ........................70 Table 8: ANOVA Summary Table for Study 2 Error Rates (Race Condition) .............................70 Table 9: ANOVA Summary Table for Study 2 Response Times (Weapon vs. Control) ..............71 Table 10: ANOVA Summary Table for Study 2 Response Times (Race vs. 
Control) .................71 Table 11: ANOVA Summary Table for Study 4 Error Rates ........................................................72 Table 12: ANOVA Summary Table for Study 4 Response Times ................................................72 Table 13: Summary of Effects on Condition Level Threshold for Study 1 Students ....................73 Table 14: Summary of Effects on Condition Level Start Point for Study 1 Students ...................73 Table 15: Summary of Effects on Condition Level Non-Decision Time for Study 1 Students ....73 Table 16: Summary of Effects on Condition Level Drift Rate for Study 1 Students ....................73 Table 17: Summary of Effects on Condition Level Threshold for Study 1 Officers .....................74 Table 18: Summary of Effects on Condition Level Start Point for Study 1 Officers ....................74 Table 19: Summary of Effects on Condition Level Non-Decision Time for Study 1 Officers .....74 Table 20: Summary of Effects on Condition Level Drift Rate for Study 1 Officers .....................74 Table 21: Summary of Effects on Condition Level Threshold for Study 4 ...................................75 vii Table 22: Summary of Effects on Condition Level Start Point for Study 4 ..................................75 Table 23: Summary of Effects on Condition Level Non-Decision Time for Study 4 ...................75 Table 24: Summary of Effects on Condition Level Drift Rate for Study 4 ...................................75 viii LIST OF FIGURES Figure 1: The FPST. Participants must respond with “shoot” or “don’t shoot” and are given feedback after each trial. ..................................................................................................................4 Figure 2: The drift diffusion model. ................................................................................................8 Figure 3: The modified FPST used in Study 1. Participants always received accurate information about the race and sex of the target before each trial. On half of the trials they were informed (with 75% accuracy) that the target was armed. ............................................................................19 Figure 4: Proportion errors for students (left panel) and police (right panel) for all conditions. NI = No weapon information. WI = Weapon information. 95% confidence intervals were calculated using the methods outlined by Morey (2008). ...............................................................................21 Figure 5: Correct response times for students (left panel) and police (right panel) for all conditions. NI = No weapon information. WI = Weapon information. 95% confidence intervals were calculated using the methods outlined by Morey (2008). .....................................................22 Figure 6: Diffusion model parameters as a function of target race, dispatch information, and object for Study 1. Dots represent modal posterior predictions at the condition level; bars are 95% HDI. W = White, B = Black. NG = Nongun. GU = Gun. .....................................................25 Figure 7: Proportion errors (top panel) and correct response times (bottom panel) for all conditions. Response times were omitted for 22 participants because they failed to respond correctly in at least one condition. NG = Nongun. GU = Gun. 95% confidence intervals were calculated using the methods outlined by Morey (2008). 
..............................................................35 Figure 8: Diffusion model parameters as a function of target race, dispatch information, and object for Study 2. Dots represent modal posterior predictions at the condition level; bars are 95% HDI. W = White, B = Black, NI = No Information...............................................................38 Figure 9: Proportion errors (left panel) and correct response times (right panel) for all conditions. DS = Payoff favors not shooting. SH = Payoff favors shooting. 95% confidence intervals were calculated using the methods outlined by Morey (2008). ..............................................................47 Figure 10: Diffusion model parameters as a function of target race, payoff structure, and object for Study 2. Dots represent modal posterior predictions at the condition level; bars are 95% HDI. DS = payoff favors not shooting, SH = payoff favors shooting. ...................................................49 Figure 11: Generic diagram of the hierarchical drift diffusion model. The kth response for subject j within condition i are generated by a drift diffusion process. Vertical lines on the normal distributions indicate that the priors were truncated normals. Prec = precision. ...........................76 Figure 12: Posterior predictions of hit and false alarm rates for Study 1. Squares represent ix observed condition level choice proportions. Diamonds represent predicted condition level choice proportions. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. W = White, B = Black ......................................81 Figure 13: Posterior predictions of response times for Study 1. Squares represent observed condition level choice proportions. Diamonds represent predicted condition level choice proportions. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. W = White, B = Black ......................................................82 Figure 14: Observed (black) and predicted (gray) response time distributions for each response type at the condition level for students in Study 1. .......................................................................83 Figure 15: Observed (black) and predicted (gray) response time distributions for each response type at the condition level for officers in Study 1. ........................................................................84 Figure 16: Posterior predictions of hit and false alarm rates for Study 2. Squares represent observed condition level choice proportions. Diamonds represent predicted condition level choice proportions. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. W = White, B = Black ......................................85 Figure 17: Posterior predictions of response times for Study 2. Squares represent observed condition level choice proportions. Diamonds represent predicted condition level choice proportions. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. W = White, B = Black ......................................................86 Figure 18: Observed (black) and predicted (gray) response time distributions for the shoot response at the condition level for students in Study 2. 
................................................................87 Figure 19: Observed (black) and predicted (gray) response time distributions for the don’t shoot response at the condition level for students in Study 2. ................................................................88 Figure 20: Posterior predictions of hit and false alarm rates for Study 4. Squares represent observed condition level choice proportions. Diamonds represent predicted condition level choice proportions. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. DS = payoff favors not shooting, SH = payoff favors shooting. .............................................................................................................................89 Figure 21: Posterior predictions of response times for Study 4. Squares represent observed condition level choice proportions. Diamonds represent predicted condition level choice proportions. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. DS = payoff favors not shooting, SH = payoff favors shooting. ........................................................................................................................................90 Figure 22: Observed (black) and predicted (gray) response time distributions for each response type at the condition level for Study 4. .........................................................................................91 x INTRODUCTION In November 2014, two police officers responded to information from dispatch about a “Black male sitting on the swings…pointing [a gun] at people” (Lee, 2015a). When officers arrived on the scene they shot the individual within seconds, killing him. Unfortunately, the Black male was twelve-year-old Tamir Rice, who was playing with an airsoft pistol replica. For many people, Rice’s shooting represents bias in use of lethal force against Black Americans (Lee, 2015b). There is a widespread belief that Black men are shot at higher rates than White men due to racial biases held by police officers, and this belief compromises the legitimacy of the police institution. This results in civilians who are less likely to obey the law (Tyler, 2006), which undermines the ability of police to perform their duties (Jackson et al., 2012; Kane, 2005; Sunshine & Tyler, 2003; Tyler & Fagan, 2008). However the shooting of Tamir Rice also raises the possibility that dispatch information—information given to officers by police dispatch before seeing a suspect—might impact officers’ decisions to shoot civilians. The caller who reported Rice also told dispatch the pistol was “probably fake” and that he was “probably a juvenile” (Smith, 2015). Had the officers been given this information, they may not have decided to use lethal force. To avoid the negative consequences associated with (the perception of) police bias, there is a critical need to understand how officers make the decision to shoot, and whether dispatch information impacts this decision-making process. The current research addresses these questions by using a process model to demonstrate how this information impacts the decision to shoot, with a specific focus on how dispatch influences the effects of suspect race on shooting errors. In addition, this research also tests how expertise influences the role of suspect race and dispatch information in the decision to shoot. 
Because most experimental research on the decision to shoot is based on untrained civilians (Correll, Park, Judd, & Wittenbrink, 2002; 2007; Correll, Park, Judd, Wittenbrink, Sadler, & Keesee, 2007; Correll, Wittenbrink, Park, Judd, & Goyle, 2011; Kenworthy, Barden, Diamond, & del Carmen, 2011; Ma, Correll, Wittenbrink, Bar-Anan, Sriram, & Nosek, 2011; Plant, Goplen, & Kunstman, 2011; Plant & Peruche, 2005), firm conclusions about the pervasiveness of race bias, or the lack thereof, within the police cannot yet be drawn. Furthermore, there is no research on how police officers might respond differently to dispatch information than untrained civilians. To address these questions I analyze data from a well-known laboratory shooting paradigm, the First-Person Shooter Task (Correll et al., 2002), using a sequential sampling model, the drift diffusion model (Ratcliff, 1978; Ratcliff & McKoon, 2008). This model of fast decision-making specifies how basic cognitive processes give rise to the decision to shoot. In addition I present data from both untrained (college students) and trained individuals (police officers). Thus, this research takes an initial step towards providing the information necessary to develop effective use of force training programs and good dispatch policies. Understanding the Decision to Shoot In 2015, at least 96 unarmed men were fatally shot by police in the United States (Tate, Jenkins, Kindy, Lowery, Alexander, & Rich, 2015; Swaine, Laughland, Lartey, & McCarthy, 2015). Although Blacks comprise only 12.0% of the male population, 39.6% of those killed were Black. In contrast, although Whites comprise 62.1% of the male population, only 34.4% of those killed were White.1 These reports are consistent with other recent studies of police shootings (Ross, 2015) and FBI data (US Department of Justice, 2001). 1 20.8% of those killed were Hispanic (17.9% of the male population), and 5.2% were another race (8.0% of the male population). Although incident reports are crucial for understanding the factors that influence the use of lethal force, they suffer from three problems. First, any conclusions depend on the accuracy and completeness of those reports (James, Klinger, & Vila, 2014; James, Vila, & Klinger, 2013). If details are not recorded there is no way to understand how they impact decisions. Second, deadly force encounters involve many factors other than dispatch information and race (e.g., the suspect's demeanor, attire, and location). This makes it difficult to isolate the impact that any one factor might have in these decisions. Finally, relying on these after-the-fact reports precludes examining how specific factors impact the psychological decision process. To overcome these problems, researchers have designed experimental tasks to study the decision to shoot. The most extensively used experimental paradigm to study the decision to shoot is the First-Person Shooter Task (FPST; Correll et al., 2002). On any given trial in the FPST, participants see a fixation point, followed by a series of neighborhood scenes (see Figure 1). Eventually, a person appears in a scene with an object. Participants are told to press a "shoot" button if the object is a gun or a "don't shoot" button if the object is harmless. They are given feedback after each trial in the form of points. Correct decisions earn points: shooting an armed target earns 10 points and not shooting an unarmed target earns 5 points. Incorrect decisions incur penalties: shooting an unarmed target yields a penalty of 20 points and not shooting an armed target yields a penalty of 40 points. Thus, missing an armed target results in the worst outcome, somewhat mirroring the payoffs faced by officers in the field (i.e., failing to shoot an armed target may result in the loss of the officer's life). Finally, participants are penalized 50 points for responding slowly (after the response window has ended), as the decision to use force often requires a fast judgment. Because researchers have been most interested in how race impacts the decision to shoot, targets are typically Black men and White men.
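To make this payoff structure concrete, the sketch below implements the feedback rule just described (+10 for correctly shooting an armed target, +5 for correctly not shooting an unarmed target, -20 for shooting an unarmed target, -40 for failing to shoot an armed target, and -50 for responding after the window). It is an illustrative reconstruction rather than the original task code; the function name, argument names, and the 630ms default are my own choices.

```python
# Illustrative reconstruction of the standard FPST feedback rule (not the original task code).
def fpst_feedback(decision, target_armed, rt_ms, window_ms=630):
    """Return the points awarded on a single FPST trial.

    decision     : "shoot" or "dont_shoot"
    target_armed : True if the target is holding a gun
    rt_ms        : response time in milliseconds
    window_ms    : response window (630 ms in the standard task)
    """
    if rt_ms > window_ms:          # too slow: flat penalty regardless of the choice made
        return -50
    if target_armed:
        return 10 if decision == "shoot" else -40   # hit vs. miss
    else:
        return -20 if decision == "shoot" else 5    # false alarm vs. correct rejection

# Example: shooting an unarmed target within the window costs 20 points.
print(fpst_feedback("shoot", target_armed=False, rt_ms=500))  # -> -20
```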
[Figure 1 shows an example trial: fixation point (250–1000ms), one to four background scenes (500–1000ms each), response window (630ms), and feedback (2000ms; e.g., "good shot +10").] Figure 1: The FPST. Participants must respond with "shoot" or "don't shoot" and are given feedback after each trial. A major limitation of this design is that participants know nothing about a target until he appears on screen. Clearly, this is a gross oversimplification of what police officers face in the field. Officers often have dispatch information about a suspect before they are required to make any decision involving force. Yet, all existing variants of the shooter task provide no dispatch information about targets. Although the information dispatchers ask for varies widely based on the situation, dispatchers generally ask four interrogative questions: where is the emergency, what is the emergency, when did it happen, and who is involved (Norcomm, 2017; Kobb, 2016). The answers to these questions are passed on to officers who respond to the call. Importantly, answering the "who" question involves getting an accurate description of the suspect, including information about their sex, race, age, height, weight, hair color, and clothing. Thus, in many cases officers have accurate information about the race and sex of the suspect they are looking for well before they encounter that individual. In the case of a crime, dispatchers will routinely ask if weapons are present (Norcomm, 2017). The presence of a weapon, particularly a gun, raises the priority of a call. Higher priority calls are responded to more quickly because of their sensitive nature. Information about whether a weapon is present (as well as who is in possession of that weapon) is also given to officers. This information is sometimes inaccurate because objects are misidentified as weapons and dispatchers receive false reports about weapons. The former error is exemplified by shootings like those of Tamir Rice and John Crawford (Balko, 2014), where officers received incorrect information that the suspect was holding a firearm (both had airsoft replicas). However, a more common reason officers receive bad information that a suspect is armed is that civilians falsely report weapons to get faster police responses (Lance Langdon, personal communication, June 1, 2016). These faster responses come at a cost; officers are trained to approach these situations differently, and this training and information may influence an officer's perception of how threatening a suspect is acting. As highlighted by the shooting of Tamir Rice, the officers' decision to use lethal force may have had more to do with the inaccurate description of Rice as armed than with racial bias on the part of the officers. In addition to the omission of relevant factors like dispatch information in the FPST, there are also shortcomings in how these data are typically analyzed.
In the FPST, two types of data are collected on each trial: the decision ("shoot" or "don't shoot") and the speed of the decision (response time, in milliseconds). Standard practice in social psychology is to analyze these data separately. The typical finding is that participants are more likely (and faster) to shoot unarmed Black men than unarmed White men (Correll et al., 2002; 2007; 2011). Although race impacts the decision to shoot in this task, it is less clear how it does so. At least two competing accounts of how race impacts this decision have been proposed: that "the police have one trigger finger for whites and another for blacks" (Takagi, 1974, p. 30), or that "a gun looks more like a gun when it appears in the hands of a Black man" (Correll, Wittenbrink, Crawford, & Sadler, 2015, p. 221). Although both of these proposed accounts are consistent with the finding that participants are more likely to shoot unarmed Black men than unarmed White men, they are quite different with respect to which part of the decision process they predict to be influenced by race. In the "trigger finger" case, race bias is a predisposition to favor the shoot decision for Black men. In the other account, race bias is in the interpretation: objects look more like guns when held by Black men. Although process models such as signal detection and process dissociation have been used to understand race bias in the decision to shoot, these models have limitations (described later) that prevent them from being able to definitively address this question. To better address this issue, I use a computational model—the drift diffusion model (Ratcliff, 1978; Ratcliff & McKoon, 2008)—to identify how the race of a target affects the decision process, thus distinguishing between the two accounts above. This process-level analysis can also be used to understand how dispatch information impacts the decision process. Although dispatch information and race should both influence the use of lethal force, the ways in which they impact the decision process may be very different. The benefits of a process-level analysis are more than theoretical; knowing how factors like dispatch information and race impact the decision process is crucial for designing effective training protocols. For example, a very different training program would be required to reduce a perceptual bias in weapon identification for Black men than to reduce a category bias to shoot Black men. In sum, two critical issues must be resolved in order to extrapolate current research from the FPST to real-world contexts. First, the effect of dispatch information on shooting decisions must be considered, given its potential to have a powerful impact on officers' decisions. Second, decisions must be examined from a process-level analysis in order to understand how factors such as race and dispatch information influence the decision process. Drift Diffusion Model A major limitation of prior analytic approaches to understanding the decision to shoot is that they typically focus on the behavioral level, examining either decisions made or the speed of those decisions (response times). These analyses are not well equipped to investigate processes that underlie those decisions, because no task is a "pure" indicator of an underlying process (Jacoby, 1991; Payne, 2005).2 To better understand these processes, I modeled the decision to shoot as a drift diffusion process. 2 Decisions are often analyzed using signal detection models, which reveal that participants set a more liberal shoot criterion (decision rule) for Black men. However, these models do not provide information about the psychological processes that give rise to criterion differences.
This sequential sampling model and others like it (e.g., the linear ballistic accumulator model) are commonly used in cognitive psychology to study how people make quick decisions (Bogacz, Brown, Moehlis, Holmes, & Cohen, 2006; Ratcliff & Smith, 2004), but are relatively underused in social psychology (but see Correll et al., 2015; Klauer, Voss, Schmitz, & Teige-Mocigemba, 2007). The drift diffusion model incorporates decision and response time data to provide a process-level account of the decision to shoot. Figure 2 displays a model of the diffusion process and lists its parameters. In the FPST, participants accumulate evidence towards a decision based on the features of a scene. As the strength of evidence can vary over time, the drift rate (δ; delta) indicates the average strength of evidence (see Figure 2). It is most affected by the feature relevant to the decision: the presence of a weapon. However, evidence accumulation is a noisy process and can be biased by other features, such as dispatch information or target race. Thus, participants may drift towards the wrong threshold (e.g., shoot when a gun is not present), resulting in an error. The amount of evidence that participants require to make a decision is indicated by the threshold (α; alpha). Crossing the upper or lower threshold boundary triggers a "shoot" or "don't shoot" decision, respectively. Threshold captures the speed-accuracy trade-off and cannot be lower than zero. When threshold is high, decisions are more accurate but slower. When threshold is low, decisions are faster but less accurate. [Figure 2 depicts evidence accumulating over time from a start point β∙α toward the upper ("shoot") or lower ("don't shoot") boundary of the threshold α, with drift rate δ and non-decision time τ.] Figure 2: The drift diffusion model. A preexisting bias to shoot or not shoot at the start of the evidence accumulation process is indicated by the start point (β; beta). When the start point is closer to the shoot threshold, participants are more likely to decide to shoot. Given the steep penalties for failing to shoot an armed target in the FPST, participants often show a start point favoring the shoot decision (Pleskac, Cesario, & Johnson, 2017). All components of a response unrelated to deliberation, including encoding and motor response time, are indicated by non-decision time (τ; tau). Non-decision time is an error term, reflecting these extraneous processes and other unknown contaminants. These contaminants are generally not separable, and so a single estimate of non-decision time is produced. This estimate is directly interpretable as the length of time that these unknown processes take. Existing DDM Research on the FPST Although the DDM has not been applied to explore the effects of dispatch information on the decision to shoot, it has been previously used to model data from the FPST. Across different parameterizations and populations, the key finding from this research is that the race of a target influences the evidence accumulation process, such that objects look more like guns when held by Black men than White men (Correll et al., 2015; Pleskac, Cesario, & Johnson, 2017). This is in contrast to the idea that early interpretation of a target's race might bias the start point of the diffusion process. Said differently, participants do not appear more "trigger happy" for Black men than White men.
This suggests that higher shooting rates (and faster decisions) for unarmed Black men versus unarmed White men in the FPST do not result from individuals ignoring relevant information. Rather, the race of a target influences the interpretation of the object, perhaps through stereotypic associations between Black men and violence (Correll et al., 2015). The race of a target may influence more than just the interpretation of the gun or nongun object. Across three studies, Pleskac et al. (2017) found that participants required more evidence (i.e., they set higher thresholds) when making decisions for Black targets than White targets. This may reflect the use of motivated strategies to reduce racial bias in the decision to shoot. Insofar as this process requires motivation and ability, it is possible that this threshold difference might be enhanced when participants are motivated to act in accordance with egalitarian beliefs and have the ability to do so (i.e., under longer response windows). This demonstrates a benefit of the DDM; it can show how the race of a suspect may have opposing effects on the decision process. Although race pushes individuals to favor the shoot decision for Black men, increasing decision speed and false alarm rates, this tendency is attenuated by a countervailing increase in the amount of evidence needed to make a decision. This makes decisions slower but more accurate. Initial work validating the appropriateness of the DDM as a process model of the FPST has also begun. For example, the structure of the DDM necessitates that the start point should be influenced by task payoffs. Given that the payoff matrix in the FPST ensures better outcomes (more points on average) when the shoot decision is favored, the DDM would predict a start point difference a priori. Consistent with this hypothesis, Pleskac et al. (2017) found that start points for all targets (both Black and White men) favored the shoot response. This difference is also reflected in the observed data; participants typically make fewer errors on trials where guns are present compared to trials where harmless objects are present.3 However, strong tests of this assumption require manipulations of the payoff matrix in order to see if the start point moves accordingly. In addition, the DDM also predicts that response window differences should influence the threshold parameter, which measures the speed-accuracy trade-off. The model predicts that threshold changes should lead to higher error rates for stereotype incongruent targets when the response window is short, and few error differences but slower correct responses on stereotype incongruent trials when the response window is longer. Across three experiments with varying response windows (630, 750, and 850ms), Pleskac et al. (2017) found that the threshold parameter did in fact increase as the response window was lengthened. In sum, existing work on the FPST shows that race influences the decision process by changing the accumulation of evidence, rather than by biasing individuals to favor one decision over another. The race of a target may also influence how much information individuals collect before making a decision. When a target is Black, participants may withhold making a decision longer to ensure its accuracy. 3 Correll et al. (2015) actually found that start points were shifted towards the don't shoot threshold, despite fewer errors on gun trials than nongun trials. These divergent results may be due to inadequacies in the estimation approach used (see Pleskac et al., 2017).
Finally, initial work has validated the psychological interpretation of the DDM parameters. The model adequately captures a general bias to shoot and responds appropriately to changes in the response window. Advantages of the DDM Approach Why rely on the DDM approach over other models commonly used to understand decisions in the FPST, such as signal detection theory? A major strength of the drift diffusion model is that it can distinguish between the various accounts of how race bias affects shooting decisions. Specifically, race appears to change how evidence is accumulated (a drift rate change) rather than create an a priori bias towards a decision (a start point change). Dispatch information might also impact decision parameters in different ways. For example, information that a target is armed might bias participants to shoot before they even see the target, and might be reflected in movement of the start point towards the shoot decision. In contrast, within the FPST signal detection merely provides estimates of the ability to distinguish between guns and harmless objects (d'; sensitivity) and the decision rule set for how strong evidence must be before responding with shoot (c; criterion). While the criterion provides useful information about whether individuals are more likely to favor the shoot decision for Black or White targets, it does not describe the process that led to that decision rule. Again, only the DDM reveals that this bias occurs because race influences the accumulation of evidence to shoot. A second advantage of the DDM is that its parameters map well onto how researchers have theoretically divided the decision-making process in the FPST. For example, consider the theoretical model of shooter bias proposed by Correll et al. (2002). Like most social cognition researchers, they invoke dual process theories (e.g., Chaiken & Trope, 1999; Sherman, Gawronski, & Trope, 2014) to explain shooting decisions. When a Black target is seen in the FPST, they assume (e.g., Bargh, 1989) that "automatic" (efficient, uncontrollable, unintentional) stereotypic associations are activated first. These associations dominate decisions unless slower "controlled" processes have time to influence the decision. Although they describe three controlled processes that might be influenced by stereotypical associations (perception, interpretation, and decision certainty), they ultimately conclude that these automatic associations "may theoretically affect any or all of these processes, and it is difficult to disentangle them theoretically, let alone empirically" (p. 1326). The main problem is that dual process models have difficulty unraveling the effects stereotypic associations might have on the decision-making process because "controlled" processes like perception, interpretation, and decision certainty are lumped together. The DDM eschews the controlled versus automatic dichotomy and instead focuses on those components. Thus, the effect of stereotypic associations can be estimated for each component. Specifically, start point reflects the perception stage, the degree to which participants use perceptual information (i.e., whether they are biased towards shoot or don't shoot). Drift rate reflects the interpretation stage, whether a stimulus seems to look like a weapon or not. Finally, threshold captures the decision certainty stage, as it measures the level of information required to make a shoot or don't shoot decision.
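To make the mapping between these parameters and behavior concrete, the sketch below simulates single trials from a drift diffusion process with a threshold, relative start point, drift rate, and non-decision time. It is a minimal Euler approximation for illustration only; the parameter values, step size, and noise scale are arbitrary choices of mine, not estimates from the present data.

```python
import numpy as np

def simulate_ddm_trial(drift, threshold, start, ndt, dt=0.001, noise=1.0, rng=None):
    """Simulate one drift diffusion trial.

    drift     : average strength of evidence (delta); positive values favor "shoot"
    threshold : boundary separation (alpha); evidence begins at start * threshold
    start     : relative start point (beta) in (0, 1); 0.5 is unbiased
    ndt       : non-decision time (tau) in seconds, added to the decision time
    Returns (choice, response_time_in_seconds).
    """
    rng = rng or np.random.default_rng()
    evidence, t = start * threshold, 0.0
    while 0.0 < evidence < threshold:
        # noisy evidence accumulation: mean step drift*dt, Gaussian noise scaled by sqrt(dt)
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = "shoot" if evidence >= threshold else "dont_shoot"
    return choice, t + ndt

# Example: a positive drift (evidence consistent with a gun) plus a start point
# biased toward "shoot" yields mostly fast shoot decisions.
rng = np.random.default_rng(1)
trials = [simulate_ddm_trial(drift=2.0, threshold=1.0, start=0.6, ndt=0.3, rng=rng)
          for _ in range(1000)]
print(np.mean([c == "shoot" for c, _ in trials]))  # proportion of shoot decisions
```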
In sum, the formulation of the DDM clearly separates the underlying components of the decision process and in doing so allows tests of theoretically motivated questions of process that have eluded examination. Finally, the DDM offers advantages over other common formal approaches to modeling the FPST, such as multinomial models like the process dissociation procedure (Payne, 2001) and the quadruple model (Conrey, Sherman, Gawronski, Hugenberg, & Groom, 2005). While these models explain decision data well, they do not model response times. Given that race bias in the FPST is typically found in both decisions and response times, these models are ignoring useful information. Perhaps more important, they are also silent on whether the same processes that generate bias in decisions also produce bias in response times. The DDM shows how the same process can generate both correct and incorrect decisions as well as their speed. It utilizes all of the available data, making it a more complete account of the shooting decision process. Conceptual Advances Aside from the methodological advances of modeling the shoot decision via the DDM, what conceptual advances does the current research provide? The clearest answer is that this research provides an initial examination of how dispatch information impacts the decision to shoot in the FPST. In fact, almost no FPST research has examined how any prior information about weapons or the race of a target might impact the decision to shoot. One exception is work demonstrating that information about the dangerousness of the neighborhood impacts race bias in the decision to shoot. Correll et al. (2011) manipulated whether targets appeared in dangerous or neutral backgrounds, so that participants had prior information about the dangerousness of the situation. This information overwhelmed race bias, resulting in all targets being shot to the same high degree when they were presented in dangerous backgrounds. However, it is unclear where this change occurred in the decision process: were participants more "trigger happy," were they rushing to make decisions, or did objects look more like guns to them? One reason there is little work on how dispatch information might impact the decision to shoot is that past research has almost exclusively focused on how suspect race, as perceived by the officer, impacts the decision to shoot. Some research suggests race can bias the shooting decision process (Correll et al., 2015; Pleskac et al., 2017), but how dispatch information might either exacerbate or attenuate race bias has yet to be tested. In the Rice case, many people believe that the race information from dispatch made the officers more likely to use lethal force (Lee, 2015a). At the same time, officers may be highly motivated to avoid prejudiced actions and errors (e.g., officer Jesse Kidder refused to shoot a murder suspect who repeatedly charged him; Mazza, 2015). In other words, providing race information could decrease shooting errors for Black men because officers will wait longer to ensure their decision is correct. This would suggest that the decision to use force against Rice might be explained more by the dispatch information that he was armed.
Although there are certainly cases where officers only receive dispatch information about the race of a target or whether a weapon is present (e.g., reports of "shots fired"), in many cases officers receive information about both the race of a suspect and whether they have a weapon (Lance Langdon, personal communication, June 1, 2016). Thus, it is important to examine how dispatch reports with both types of information might influence officers' decisions. Finally, research on the FPST has typically examined the decision to shoot using untrained individuals, although some work has examined trained police officers (Correll et al., 2007; James et al., 2013; Ma et al., 2013; Plant & Peruche, 2005; Sadler et al., 2012; Sim et al., 2013). Recruiting police officers is crucial because they likely respond differently than untrained civilians when faced with the decision to use lethal force. Indeed, past research shows that police officers typically outperform untrained civilians and show less race bias. However, no work has investigated how the decision process varies between officers and civilians, nor how officers might use dispatch information differently than civilians. Understanding how officers respond to these factors is crucial for establishing good dispatch practices and effective training programs. Bayesian Estimation of the DDM Like many paradigms in social psychology, data from the FPST are based on a relatively small number of trials per condition (typically 20; see Correll et al., 2002; 2007; 2011), whereas relatively large numbers of participants are collected. Thus, analyses focus on population level estimates. This can be contrasted with cognitive psychology research, where large amounts of data (thousands of trials; e.g., Ratcliff & Rouder, 1998) are collected from relatively few participants. Estimates then focus on the participant level. In order to reliably estimate the DDM parameters from the FPST, I embed my analyses in a hierarchical framework. Participants' estimates inform other participants' estimates as well as group level estimates. This is a more reliable method for estimating parameters because of the relatively few trials completed by any given participant. A consequence of this hierarchical framework is that it produces estimates at both the individual and group levels. Because I assume that individuals are randomly drawn from an unspecified population, the individual level estimates for all parameters are also random. This creates a complex random structure that is computationally prohibitive to solve using maximum likelihood methods, which rely on optimization algorithms to find parameter values that maximize the likelihood of the data (Vandekerckhove et al., 2011). However, Bayesian methods only require the specification of a prior distribution and a tractable likelihood function to update that distribution, making such complex structures less problematic (see p. 115, Kruschke, 2014). Bayesian estimation provides an estimate of the posterior distribution of parameters after observing the data and in light of prior beliefs. The posterior distribution represents the degree of certainty regarding the parameters after observing the data. I allow the data to dominate the posterior estimate by setting uninformative priors, and I estimate the distributions via Markov Chain Monte Carlo methods, which approximate the posterior distribution given a large enough sample. More details on the estimation procedure, including the statistical model and priors used, can be found in the Appendix. Because the (marginal) posterior distribution represents certainty about a parameter, it can be used in hypothesis testing. I describe the posterior parameter distributions using their modal posterior value and 95% Highest Density Interval (HDI). The modal posterior value has the highest probability density, making it the most credible parameter estimate. Similarly, values within the 95% HDI have a higher probability density than values outside and so are more credible (Kruschke, 2014). Testing condition effects is accomplished by determining whether the 95% HDI for a contrast contains zero. When it does not, the effect of condition is credible. For example, to test whether target race impacts drift rate, I analyze whether the condition level posterior distribution for White targets is credibly different than that for Black targets.
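As an illustration of this testing procedure, the sketch below computes an approximate modal value and 95% HDI from a vector of posterior samples and checks whether the HDI of a condition contrast excludes zero. It is a generic sketch operating on hypothetical, simulated stand-ins for MCMC draws; it is not the actual estimation code, which was implemented in JAGS (see Appendix D), and the example values do not come from these studies.

```python
import numpy as np

def hdi(samples, mass=0.95):
    """Return the narrowest interval containing `mass` of the posterior samples."""
    s = np.sort(np.asarray(samples))
    n_in = int(np.floor(mass * len(s)))
    widths = s[n_in:] - s[:len(s) - n_in]   # width of every candidate interval
    lo = int(np.argmin(widths))             # narrowest one is the HDI
    return s[lo], s[lo + n_in]

def posterior_mode(samples, bins=200):
    """Approximate the modal posterior value from a histogram of the samples."""
    counts, edges = np.histogram(samples, bins=bins)
    i = int(np.argmax(counts))
    return (edges[i] + edges[i + 1]) / 2

# Hypothetical example: simulated drift-rate draws for White and Black targets.
rng = np.random.default_rng(0)
drift_white = rng.normal(1.10, 0.10, 20000)   # stand-ins for MCMC samples
drift_black = rng.normal(1.45, 0.10, 20000)
contrast = drift_black - drift_white
lo, hi = hdi(contrast)
print(f"mode = {posterior_mode(contrast):.2f}, 95% HDI = [{lo:.2f}, {hi:.2f}]")
print("credible difference" if (lo > 0 or hi < 0) else "no credible difference")
```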
To verify that the DDM was an appropriate model of the FPST data, I conducted posterior predictive checks for each study and condition for the choice probabilities, mean response times, and response time distributions. Those checks are included in the Appendix. They indicated that the model gave a good account of the data relative to other work (e.g., Ratcliff, Thapar, & McKoon, 2006), although there is room for improvement. METHOD AND RESULTS Overview of the Studies The following four studies tested how dispatch information and expertise impact the decision to shoot and change the effects of race on shooting decisions. Studies 1 and 2 tested whether dispatch information impacted the decision to shoot and whether those effects depended on expertise. Untrained students (Studies 1 and 2) and trained police officers (Study 1) completed a modified version of the FPST in which they received demographic information about targets before they saw them. On some trials they also received information that those targets were armed, which was usually but not always accurate. Data were examined using behavior-level analyses as well as process-level DDM analyses. Results from the first two studies demonstrated a clear effect of dispatch information on behavior and at the process level. However, the DDM also revealed a counterintuitive effect of dispatch information in Study 1, where information that a target was armed shifted participants' start point away from the decision to shoot. Study 2 revealed the opposite pattern. To probe whether this effect was due to problems with the DDM, Study 3 tested whether the model could recover a simulated change in start point in the expected direction under conditions similar to Study 1. Results showed that the DDM could precisely recover the simulated difference between conditions. Study 4 was a validation study demonstrating that the start point could be shifted in predictable ways using an experimental manipulation of decision payoffs. In sum, these experiments provide evidence that the DDM is a reasonable and informative model of the FPST that provides novel insight into how dispatch information influences the decision to shoot. Study 1: The Role of Dispatch Information and Expertise The purpose of Study 1 was to test how dispatch information indicating a person is armed influences the decision to shoot. In addition, by recruiting trained officers and untrained students, this experiment tested whether trained police officers responded differently to dispatch information.
Officer and student data were examined at the process level with the DDM. As no work has examined officer decisions to shoot through the lens of the DDM, this study provided the first test of how officers' shooting decision process differs from that of untrained students. Method Participants. One hundred and six undergraduates completed a version of the FPST with dispatch information. One participant was removed for not following instructions (they always chose to not shoot), and three participants were removed for responding carelessly (responding faster than 300ms on 20% or more of trials). The remaining 102 participants (Mage = 19.0, SD = 1.2) were 72.5% White, 13.7% Asian, and 3.9% Black, with 9.8% from other groups. Men (88.2%) were oversampled to better match the demographics of officers nationally, who are overwhelmingly male (87.8%; Reaves, 2015). I also collected officer data from four different police departments in the Midwestern United States. Fifty-one officers from departments of various sizes (from 30 to 1,800 sworn officers) were recruited. The study was advertised to the officers during police training or shift briefings. Officers completed the study in the department before or after their shift, or during their training. They were either paid $30 for their participation or did the study voluntarily during their training. Officers were 68.6% men, with an average of 11.7 years of experience (SD = 9.5, range [0, 45]; not all officers reported their experience). Procedure. Participants completed 160 trials (officers) or 320 trials (students) of a modified FPST with a 650ms response window. The task was the same as the traditional FPST except that participants were given dispatch information about the target before each trial. They were always given accurate demographic (race and sex) information. This design choice reflects the fact that misidentification is unlikely for these targets, whose race and sex are easily and accurately categorized. In addition, on half of the trials participants received information that the target was armed. This information was accurate 75% of the time. Figure 3 shows an example of one trial from this task. [Figure 3 shows an example trial: dispatch information (2000ms; e.g., "Dispatch: The suspect is an armed Black male."), one to four backgrounds (500–1000ms each), response window (650ms), and feedback (2000ms; e.g., "good shot").] Figure 3: The modified FPST used in Study 1. Participants always received accurate information about the race and sex of the target before each trial. On half of the trials they were informed (with 75% accuracy) that the target was armed. The study was a 2 (object: gun, nongun) × 2 (race: Black, White) × 2 (dispatch: no weapon information, weapon information) within-subjects design with expertise (officers, students) as a between-subjects factor. Targets were more likely to be armed when weapon information was presented (75% armed) than when it was not (25% armed). This made the information (and its absence) informative as to whether the participant would encounter someone with a weapon. This also made the task more realistic: officers encounter individuals with guns less frequently outside of calls where weapons are reported.
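The sketch below generates an illustrative trial list with this contingency: weapon information appears on half of the trials, and the target is actually armed on 75% of information trials versus 25% of no-information trials. The function name, the per-trial randomization, and the absence of exact counterbalancing are my own simplifying assumptions; this is not the original experiment script.

```python
import random

def make_study1_trials(n_trials=320, p_armed_given_info=0.75,
                       p_armed_given_no_info=0.25, seed=1):
    """Generate an illustrative Study 1 trial list (not the original task script).

    Weapon information is shown on half of the trials; the target is armed on
    75% of information trials and 25% of no-information trials, so the
    information (and its absence) is diagnostic of whether a gun is present.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        info = rng.random() < 0.5                      # dispatch weapon information shown?
        p_armed = p_armed_given_info if info else p_armed_given_no_info
        trials.append({
            "race": rng.choice(["White", "Black"]),    # demographic info always accurate
            "weapon_info": info,
            "armed": rng.random() < p_armed,
        })
    return trials

trials = make_study1_trials()
info_trials = [t for t in trials if t["weapon_info"]]
print(sum(t["armed"] for t in info_trials) / len(info_trials))  # roughly 0.75
```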
The purpose of point-based feedback is to mimic the payoffs that officers receive on the job based on their decisions to use lethal force. Officers likely do not need this reminder and might find that allocating points trivializes these important decisions. I removed the point-based system for both groups to prevent this issue. This decision may influence the general bias of participants to shoot more often than not because the point-based system encourages shooting behavior. However, given that officers and students likely understand the nature of the task and are aware of the real world payoffs, they may continue to show a bias to shoot.

Results

Behavioral Analyses. To test whether dispatch information impacted the decision to shoot, I conducted an ANOVA on error rates, with target race, object, and information as within-subjects factors.4 Expertise was entered as a between-subjects factor. Figure 4 shows the decision data for all conditions. Only two effects emerged. First, officers made fewer errors (M = .179, SD = .140) than students (M = .240, SD = .149), F(1, 151) = 13.23, p < .001. Second, the predicted two-way interaction between object and dispatch information was significant, F(1, 151) = 58.13, p < .001. As expected, when dispatch correctly indicated that the target was armed, participants made fewer errors (M = .191, SD = .106) than when dispatch was incorrect (M = .243, SD = .178), t(152) = -3.90, p < .001. In addition, when no dispatch information was given (thus unarmed individuals were more likely) participants made fewer mistakes for unarmed targets (M = .185, SD = .123) than armed targets (M = .260, SD = .162), t(152) = 6.33, p < .001.

4 Full ANOVA tables for all behavioral analyses are listed in the Appendix.

Figure 4: Proportion errors for students (left panel) and police (right panel) for all conditions. NI = No weapon information. WI = Weapon information. 95% confidence intervals were calculated using the methods outlined by Morey (2008).

The typical race by object interaction in error rates that is indicative of racial bias was not significant, F(1, 151) = 1.70, p = .19, nor was the three-way interaction with condition, F(1, 151) = 0.60, p = .44. In sum, there was no evidence that students or officers were impacted by the race of a target when dispatch information was incorporated into the task. This occurred regardless of whether dispatch information that the target was armed was given or not.

An ANOVA with identical predictors was run on the correct response times. Figure 5 shows the response time data for all conditions. Response times 2.5 standard deviations above a participant's mean were truncated to this value to reduce skew from inattentive responses (a short sketch of this step appears below). Officers (M = 612ms, SD = 83ms) were slower to respond than students (M = 560ms, SD = 79ms), F(1, 151) = 29.39, p < .001. Participants were also faster to respond to guns (M = 550ms, SD = 83ms) than nonguns (M = 604ms, SD = 76ms), F(1, 151) = 196.70, p < .001.
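The response time truncation described above is a simple per-participant winsorizing step. The sketch below is an assumed illustration rather than the dissertation's analysis code; the column names and the synthetic data are placeholders.

```python
# Sketch of per-participant RT truncation: correct RTs more than 2.5 SDs above a
# participant's own mean are set to that cutoff before the response time analyses.
import numpy as np
import pandas as pd

def truncate_rts(df, rt_col="rt", id_col="participant", n_sd=2.5):
    # Per-participant cutoff = mean + n_sd * SD, broadcast back to each trial.
    cutoff = df.groupby(id_col)[rt_col].transform(lambda x: x.mean() + n_sd * x.std())
    out = df.copy()
    out[rt_col] = np.minimum(out[rt_col], cutoff)   # values above the cutoff are set to it
    return out

# Toy example with two hypothetical participants.
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "participant": np.repeat([1, 2], 100),
    "rt": np.concatenate([rng.normal(560, 80, 100), rng.normal(612, 83, 100)]),
})
print(toy["rt"].max(), truncate_rts(toy)["rt"].max())
```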
Figure 5: Correct response times for students (left panel) and police (right panel) for all conditions. NI = No weapon information. WI = Weapon information. 95% confidence intervals were calculated using the methods outlined by Morey (2008).

Mirroring the decision data, there was also an interaction between object and dispatch information, F(1, 151) = 10.93, p < .001. When dispatch indicated a target was armed, participants correctly shot armed targets (M = 540ms, SD = 72ms) faster than they correctly chose to not shoot unarmed targets (M = 609ms, SD = 77ms), t(152) = 13.65, p < .001. When no dispatch information was provided (and thus unarmed individuals were more likely), participants were still faster to correctly shoot (M = 560ms, SD = 92ms) than to not shoot (M = 599ms, SD = 75ms), t(152) = 6.39, p < .001, but this difference was smaller (d = .84 vs. .46).

There was also a two-way interaction between object and expertise, F(1, 151) = 16.38, p < .001. Officers correctly shot armed targets faster (M = 574ms, SD = 79ms) than they correctly chose to not shoot unarmed targets (M = 650ms, SD = 68ms), t(50) = 11.97, p < .001. Students also correctly shot armed targets faster (M = 538ms, SD = 83ms) than they correctly chose to not shoot unarmed targets (M = 581ms, SD = 69ms), t(101) = 8.35, p < .001, but this difference was smaller (d = .93 vs. .54).

Summary. Both students and officers made more mistakes and were slower to respond when dispatch information was wrong. This pattern could be due to increased bias to choose the decision associated with the dispatch information. Alternatively, dispatch information may change how participants accumulate information in an online fashion. The DDM can disentangle these hypotheses, as the former would show up as an effect of dispatch information on start point, and the latter as an effect of dispatch information on drift rate.

When dispatch information was incorporated into the task there was no evidence that race influenced students' and officers' errors. Participants were no more likely to shoot unarmed Black men than unarmed White men, nor did they fail to shoot armed White men more than armed Black men. The absence of bias occurred regardless of whether targets were described as armed or not. This raises the question of whether the demographic information (e.g., target race) or the weapon information might reduce race bias in shooting decisions.

Although officers made fewer mistakes, they were also much slower to respond. This pattern is consistent with at least two process-level explanations. The first is that officers require more information to make a decision, perhaps because these decisions are more important to them than they are to students. An effect of expertise on the threshold parameter would support this hypothesis. However, given that officers receive extensive training and have professional experience identifying objects, I actually expect them to be better at distinguishing between guns and harmless objects. This could result in them having higher drift rates than students, and—all else equal—would mean that officers would be faster and more accurate than students. If officers are better at distinguishing guns from harmless objects, they would have to be slower for other reasons.
One possibility is that their slower response times might be explained by aging. Reaction times slow with age, and this slowing is predominantly due to an increase in the length of non-decision processes (Ratcliff, Thapar, & McKoon, 2001; Ratcliff, Thapar, Gomez, & McKoon, 2004; Thapar, Ratcliff, & McKoon, 2005). If non-decision times are longer for officers (who are generally older) than for students, this might explain why officers are slower than students even if they are better at distinguishing guns from harmless objects.

Hierarchical DDM.

Model Specification and Selection. The hierarchical DDM was specified according to the guidelines set by Pleskac et al. (2017). All parameters were allowed to vary as a function of race and information, but only drift rate and non-decision time were allowed to vary by object. A diagram of the model is provided in the Appendix. This model estimates individual variation in the parameters of the DDM, but assumes that the parameters are fixed across trials (i.e., there is no trial-by-trial variability). I also tested more complicated versions of the model with trial-by-trial variability in drift rate and start point. However, the group-level conclusions drawn from these models did not vary from the simpler versions without variability. For the sake of parsimony, I report the model without trial-by-trial variability.

Hierarchical DDM Analysis. Figure 6 shows condition-level estimates of the threshold, start point, drift rate, and non-decision time.5

5 All parameter effects and interactions are reported in the Appendix for Studies 1 and 4.

I first examined whether officers collected more information than students. There was a small effect of expertise on threshold, µdiff = 0.028, 95% HDI [0.002, 0.057], d = 0.20, 95% HDI [0.02, 0.42]. Officers had higher thresholds (µ = 1.069, 95% HDI [1.047, 1.092]) than students (µ = 1.039, 95% HDI [1.024, 1.057]). This main effect was primarily driven by an effect of expertise when no weapon dispatch information was given, µdiff = 0.051, 95% HDI [.012, .090], d = .389, 95% HDI [.099, .687]. Officers had higher thresholds (µ = 1.123, 95% HDI [1.089, 1.153]) than students (µ = 1.07, 95% HDI [1.047, 1.093]). Although this effect partially explains why officers are slower than students when no dispatch information is given, it is not sufficient to explain the large difference in response time.

Figure 6: Diffusion model parameters as a function of target race, dispatch information, and object for Study 1. Dots represent modal posterior predictions at the condition level; bars are 95% HDI. W = White, B = Black. NG = Nongun. GU = Gun.

Moving on to the non-decision time parameters, there was strong evidence that officers' non-decision processes (µ = 403ms, 95% HDI [398ms, 409ms]) took longer than students' (µ = 338ms, 95% HDI [332ms, 344ms]), µdiff = 66ms, 95% HDI [57ms, 73ms], d = 1.09, 95% HDI [0.93, 1.22]. This finding provides strong evidence against the hypothesis that officers are slower because they are waiting longer to make decisions.
Rather, their non-decision processes take much longer (67ms), which may be partially due to a slowdown of motor responses. Replicating past research (Correll et al., 2015; Pleskac et al., 2017), non-decision times were shorter for guns (µ = 360ms, 95% HDI [354ms, 365ms]) than harmless objects (µ = 382ms, 95% HDI [376ms, 387ms]), µdiff = 21ms, 95% HDI [14ms, 30ms], d = 0.34, 95% HDI [0.23, 0.48]. One possibility is that this reflects faster encoding for a well-defined category of objects (guns) than an ill-defined category (harmless objects). This difference was larger for police officers, µint = 20ms, 95% HDI [5ms, 36ms]. For officers, non-decision times were shorter for guns (µ = 388ms, 95% HDI [380ms, 395ms]) than for harmless objects (µ = 420ms, 95% HDI [412ms, 427ms]), µdiff = 32ms, 95% HDI [22ms, 42ms], d = 0.53, 95% HDI [0.35, 0.69].

There was a significant interaction between dispatch information and object in non-decision time, µint = 41ms, 95% HDI [25ms, 57ms]. When dispatch information was not given, non-decision processes were shorter for unarmed targets (µ = 369ms, 95% HDI [361ms, 377ms]) than armed targets (µ = 394ms, 95% HDI [386ms, 402ms]), µdiff = 25ms, 95% HDI [13ms, 36ms], d = 0.41, 95% HDI [.23, .60]. When dispatch information that the target was armed was given, non-decision processes were shorter for armed targets (µ = 352ms, 95% HDI [344ms, 360ms]) than unarmed targets (µ = 367ms, 95% HDI [360ms, 376ms]), µdiff = 16ms, 95% HDI [5ms, 27ms], d = 0.26, 95% HDI [0.08, 0.44]. Shorter non-decision times when the dispatch information was correct were not influenced by expertise.

To test whether officers were better at distinguishing guns from non-guns than students, I tested whether drift rates varied as a function of expertise. Supporting this hypothesis, there was a small effect of expertise on drift rate, µdiff = 0.35, 95% HDI [.26, .48], d = 0.47, 95% HDI [0.34, 0.64]. Officer drift rates (µ = 1.75, 95% HDI [1.66, 1.84]) were higher than student drift rates (µ = 1.39, 95% HDI [1.33, 1.45]). This was particularly true when the target was unarmed; officer drift rates (µ = 2.06, 95% HDI [1.91, 2.19]) were moderately higher than student drift rates (µ = 1.55, 95% HDI [1.45, 1.64]), µdiff = .50, 95% HDI [.35, .68], d = 0.66, 95% HDI [0.46, 0.90].

In addition to this effect of expertise, there was a clear interaction between dispatch information and object, µint = 1.19, 95% HDI [0.94, 1.42]. When dispatch correctly indicated that a target was armed, both students and officers accumulated evidence much more quickly to shoot (µ = 1.65, 95% HDI [1.55, 1.76]) than when no information was provided (µ = 1.04, 95% HDI [0.92, 1.17]), µdiff = 0.62, 95% HDI [0.43, 0.77], d = 0.81, 95% HDI [0.58, 1.04]. In contrast, when targets were unarmed and dispatch incorrectly identified the target as armed, participants accumulated evidence to not shoot more slowly (µ = 1.51, 95% HDI [1.39, 1.64]) than when this information was not given (µ = 2.07, 95% HDI [1.97, 2.19]), µdiff = -0.57, 95% HDI [-.73, -.39], d = -0.75, 95% HDI [-.98, -.52]. Thus, dispatch information strongly shaped how participants collected information. When information was correct, both students and officers accumulated information for the correct decision more quickly. Similar to the behavioral analysis, there was no evidence of a race by object interaction in the accumulation of information for both students (µint = 0.17, 95% HDI [-0.09, 0.46]) and officers (µint = 0.12, 95% HDI [-0.29, 0.51]).
Thus, there was no evidence that participants were influenced by the race of the target when they accumulated information for their decision.

Finally, I examined the effect of the manipulations on participants' start point. There were only two main effects. First, officers had a higher starting point (µ = .577, 95% HDI [.565, .589]) than students (µ = .533, 95% HDI [.526, .539]), µdiff = .043, 95% HDI [.030, .058], d = 0.86, 95% HDI [0.58, 1.16]. Second, both students and officers had lower start points when they received dispatch information that targets were armed (µ = .519, 95% HDI [.510, .530]) than when no dispatch information was given (µ = .590, 95% HDI [.580, .599]), µdiff = .070, 95% HDI [.056, .083], d = 1.35, 95% HDI [1.06, 1.69]. This is counterintuitive because dispatch was a reliable indicator (with 75% accuracy) of the presence of a weapon and should have biased participants to favor the shoot decision. This suggests that participants are interpreting the information given to them in an unusual way, or that there are limitations in the DDM as a description of this decision process.

As an exploratory analysis, I examined whether officer experience in years predicted the rate at which participants collected evidence.6

6 Only 46 officers reported their years of experience.

There was no correlation between the individual-level officer drift rates (collapsed across condition) and years of experience, r(44) = -.14, p = .34. Experience was not significantly correlated with start point, r(44) = .09, p = .54, threshold, r(44) = .10, p = .50, or non-decision time, r(44) = .09, p = .56. Although officers showed higher rates of evidence accumulation than students, these differences may be primarily due to the training officers receive as recruits, rather than on the job experience. Similarly, although officers had longer non-decision times than students, this is likely due to their increased age (which was not recorded) and not their experience. Although experience is correlated with age, it may be a poor proxy. This is especially true for officers who had another job before they became police officers.

Summary. While the behavioral data showed that information and expertise influenced the speed and accuracy of participants' decisions, the DDM revealed how these manipulations influenced the decision-making process. Officers' slower and more accurate responses compared to students were due to longer non-decision times and stronger drift rates, respectively. Dispatch information also influenced decisions by increasing the accumulation of information when it was correct, and decreasing it when it was incorrect. Finally, dispatch information had an unexpected effect on participants' start points, such that participants were biased towards the option inconsistent with the information.

Discussion

To test the role of dispatch information on the decision to shoot, I modified the FPST so that participants received prior information about the person they would encounter. Participants always received accurate demographic information about the person, and occasionally received information that the target was armed (with 75% accuracy). To better understand the role of expertise, I recruited trained police officers as well as untrained students. This design made the shooter task more realistic and allowed me to test how dispatch information influenced decisions.
Behavioral analyses showed that accurate dispatch information improved decision accuracy and speed for both students and officers, whereas inaccurate information increased errors and slowed responding. Officers were slower and more accurate than students, and the DDM provided a process-level explanation for why. Interestingly, race did not influence the decision to shoot when dispatch information was provided. I discuss each of these points below.

Role of Dispatch Information. Dispatch information that a target was armed had a powerful influence on participants' decisions. When this information was accurate, errors and response times decreased. Despite these clear behavioral effects, the results are consistent with different process-level explanations for how dispatch information might impact the decision to shoot. Two possibilities were explored in this study. First, dispatch information might create a bias to favor the information-consistent decision. Second, information might influence how people accumulate information when they are making the decision. The diffusion model provided a way to test these hypotheses. Results showed clear support for the latter. Participants accumulated information more quickly when dispatch information was correct. This might be due to dispatch information changing how people search for information. In the absence of any prior information, individuals may search in an exploratory way, asking, "what object is that person holding?" But when participants receive information that the person is armed, they may search for confirmatory information, asking, "is the person holding a gun?"

Although the current experiment cannot directly test whether dispatch information influences search strategies, self-reports from participants suggest it is a possibility. Multiple officers and students reported they ignored the dispatch information because it distracted them. The behavioral data clearly show these attempts were unsuccessful. However, insofar as ignoring information was a common strategy, it might explain why dispatch information influenced the decision process in unusual ways. Participants may have attempted to ignore the dispatch information that a person was armed by changing their bias to favor not shooting. This would explain the counterintuitive effect of such information on participants' start point. Despite their attempts to correct the influence of this information, these expectations may have leaked into their information search. A central finding of research on confirmation bias is that individuals are unaware they are searching for expectation-consistent information (Mynatt, Doherty, & Tweney, 1976; Wason & Johnson-Laird, 1972; for a review see Nickerson, 1998). This would explain why participants thought they had successfully ignored the information even though they accumulated evidence more quickly (had stronger drift rates) when the information was correct.

Unpacking Response Time Differences. One of the clearest benefits of the DDM was to test hypotheses about differences in the underlying decision process for untrained students and trained officers. Recall that officers were both more accurate and slower than students when making shooting decisions. Without a formal model of decisions, it would be reasonable to conclude that this was due to greater caution among officers. There are plausible reasons to support this logic; the task may be more meaningful to officers or they may be more concerned about appearing biased.
The DDM provided a way to test this caution hypothesis directly and compare it to alternative explanations. As the DDM showed, although officers did in fact have higher thresholds than students when no weapon dispatch information was given, the main reason why officers were slower than students was because their non-decision processes took longer. The protracted length of these processes obscures the fact that—all else equal—officers are better than students at distinguishing guns from harmless objects. In fact, if non-decision time were controlled for, officers would on average respond faster than untrained students because they accumulate information relevant to the decision to shoot more quickly.

The Lack of a Race Effect. The race of a target did not influence participants' decisions to shoot in the current study. This stands in contrast to past work that has found racial bias in shooting decisions (e.g., Correll et al., 2002; 2011; Pleskac et al., 2017). However, the current study modified the traditional FPST by providing dispatch information about targets' race and armed status before each trial. These are relevant pieces of information that impact shooting decisions. If providing dispatch information eliminates bias in the decision to shoot, this would suggest that findings of racial bias in the FPST might not translate into real world decisions, because officers often have dispatch information about whom they will encounter. Race may bias decisions in the absence of contextual information, but be quickly undercut by more reliable information. This leads to a troubling possibility: current FPST work may provide a skewed picture of the degree to which officers show bias in the field.

The dispatch information provided before each trial included race information as well as information about whether the target was armed. This makes it difficult to disentangle exactly which part of the information was responsible for the reduction in racial bias in decisions. There are plausible reasons that both types of information might reduce racial bias. Starting with the information that a target is armed, this information may reduce racial bias in shooting decisions because it strongly shapes how participants collect information at the process level. This may be sufficient to suppress the effects of irrelevant information like race. In contrast, accurate race information might reduce bias in participants' decisions because it may enable them to better control automatic stereotypic associations between Black men and violence that would lead to higher rates of shooting Black targets. Study 2 directly tested these competing accounts by having participants receive each piece of dispatch information independently in a blocked design. Participants also completed a version of the FPST where no information was given before each trial, which served as a control condition. This condition replicated past FPST designs and tested whether there was evidence for racial bias in shooting decisions when no dispatch information was given.

Study 2: Separating the Role of Race and Weapon Information

The goal of Study 2 was to investigate what part of the dispatch information (race or weapon information) was relevant to the decision to shoot, as well as whether the accuracy of that information mattered. This study used the same modified FPST design as Study 1, except that the race and weapon information were blocked and counterbalanced (an illustrative sketch of this trial structure appears below).
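The blocked design with probabilistically accurate dispatch information can be sketched in a few lines of code. The sketch below is an assumed illustration of the trial structure described in the Method that follows, not the actual experiment script; it also simplifies the fully crossed race × object design to random sampling for brevity.

```python
# Illustrative generator for one block of Study 2-style trials: dispatch information is
# accurate on 75% of trials in the race and weapon blocks, and absent in the control block.
import random

def make_block(block_type, n_trials=80, p_accurate=0.75, seed=None):
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        race = rng.choice(["Black", "White"])
        armed = rng.choice([True, False])
        accurate = rng.random() < p_accurate
        if block_type == "race":
            cue = race if accurate else ("White" if race == "Black" else "Black")
            dispatch = f"The suspect is a {cue} male."
        elif block_type == "weapon":
            cue = armed if accurate else not armed
            dispatch = "The suspect is armed." if cue else "The suspect is unarmed."
        else:                                  # control block: no dispatch information
            dispatch = None
        trials.append({"race": race, "armed": armed, "dispatch": dispatch})
    return trials

blocks = {b: make_block(b, seed=i) for i, b in enumerate(["none", "race", "weapon"])}
print(blocks["weapon"][:2])
```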
Information was presented in blocks of 80 trials, so that within a block participants received only race information or only weapon information. As a control, participants also completed a block of 80 trials of the traditional FPST where no dispatch information was presented.

Method

Participants and Study Design. Undergraduate students (N = 122) completed 240 trials of the FPST across three blocks. Six participants were removed for responding carelessly (responding faster than 300ms on 20% or more trials). The remaining 116 participants (Mage = 19.1, SD = 2.0) were 77.6% White, 12.9% Black, 5.1% Asian, with 4.3% from other groups. The majority of participants (64.6%) were women; men were not oversampled in this study.

Block order was counterbalanced. Each block contained 80 trials where target race (Black, White) and object (gun, nongun) were crossed. In the control block, no dispatch information was provided. In the other blocks, participants were given dispatch information about whether the target was Black/White or armed/unarmed, prior to the start of the trial. This information was accurate 75% of the time, reflecting that dispatch information is generally but not always accurate. The experiment was a 2 (target race: Black, White) by 2 (object: gun, nongun) by 3 (information: none, weapon, race) by 2 (information accuracy: correct, incorrect) within-subjects design.7

7 In the case of the control condition, information accuracy was not applicable.

Results

Behavioral Analyses. To test whether race and weapon dispatch information impacted the decision to shoot, I conducted ANOVAs on participants' error rates, with target race, object, and information as within-subjects factors (an illustrative sketch of this type of analysis appears below). Each ANOVA tested the effect of information (race or weapon) against the no information control. Follow-up analyses tested how the accuracy of information impacted decisions. The same analyses were also conducted on response times for correct decisions. Error rates and response times for each condition are displayed in Figure 7.

A follow-up ANOVA with race, object, and information (unarmed, armed) as factors was conducted on the error rates from the weapon information condition. The expected interaction between object and information was significant, F(1, 115) = 99.89, p < .001. When dispatch correctly identified a target as armed, participants made far fewer errors (M = .232, SD = .157) than when dispatch incorrectly identified a person as unarmed (M = .403, SD = .275), t(115) = 8.25, p < .001. When dispatch correctly identified a target as unarmed, participants made far fewer errors (M = .199, SD = .128) than when dispatch incorrectly identified a person as armed (M = .393, SD = .258), t(115) = 8.77, p < .001. Thus, there was strong evidence that participants were using the dispatch information presented to them. The race by object interaction was also significant, F(1, 115) = 5.78, p = .018. Importantly, this interaction was not influenced by the accuracy of the dispatch information, F(1, 115) = 0.20, p = .65.

Focusing on the response time data, an ANOVA with target race, object, and condition (weapon information, no information) as factors was run on the correct response time results. Only an effect of object emerged; participants were faster to respond to guns (M = 509ms, SD = 70ms) than harmless objects (M = 567ms, SD = 165ms), F(1, 115) = 70.99, p < .001.
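As a hedged illustration of how a fully within-subjects ANOVA of this kind might be set up, the sketch below uses statsmodels' AnovaRM on synthetic per-cell error rates. It is not the dissertation's analysis code; the column names, the synthetic data, and the reduced number of participants are all assumptions for illustration.

```python
# Sketch of a race x object x information repeated-measures ANOVA on per-participant,
# per-condition error rates (one aggregated observation per cell, as AnovaRM requires).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
conditions = [(r, o, i) for r in ["Black", "White"]
                        for o in ["gun", "nongun"]
                        for i in ["none", "weapon"]]

rows = [{"participant": p, "race": r, "object": o, "information": i,
         "error_rate": rng.beta(2, 6)}            # synthetic error rates
        for p in range(1, 21) for (r, o, i) in conditions]
cells = pd.DataFrame(rows)

res = AnovaRM(cells, depvar="error_rate", subject="participant",
              within=["race", "object", "information"]).fit()
print(res)
```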
Follow-up analyses looking at how the accuracy of dispatch information influenced decisions were not run because many participants had missing data (i.e., they made no correct responses when given incorrect information). Descriptively, participants were faster to respond when dispatch information was correct and slower when it was not.

To test the role of race information, I conducted an ANOVA on error rates, with target race, object, and condition (race information, no information) as factors. The expected two-way interaction between object and target race was significant, F(1, 115) = 37.29, p < .001. Participants shot more unarmed Black men (M = .331, SD = .164) relative to unarmed White men (M = .272, SD = .164), t(115) = 6.26, p < .001. They also failed to shoot more armed White men (M = .275, SD = .128) relative to armed Black men (M = .256, SD = .137), t(115) = 2.18, p = .031. Unlike the weapon dispatch information, this interaction was not influenced by condition, F(1, 115) = 2.07, p = .15.

Figure 7: Proportion errors (top panel) and correct response times (bottom panel) for all conditions. Response times were omitted for 22 participants because they failed to respond correctly in at least one condition. NG = Nongun. GU = Gun. 95% confidence intervals were calculated using the methods outlined by Morey (2008).

A follow-up ANOVA with race, object, and race information (White, Black) as factors was conducted on error rates from the race information condition. The race by object interaction was significant, F(1, 115) = 5.78, p = .018. This interaction was primarily driven by unarmed targets; participants shot more unarmed Black men (M = .327, SD = .206) relative to unarmed White men (M = .274, SD = .210), t(115) = 3.67, p < .001. However, this interaction was qualified by a significant interaction with information, F(1, 115) = 4.00, p = .048. Participants only shot more unarmed Black men (M = .330, SD = .158) than unarmed White men (M = .259, SD = .239) when given information that targets were Black, t(115) = 3.55, p < .001. No such difference was observed when dispatch stated the target was White, t(115) = 1.67, p = .097.

Turning to the response time data, an ANOVA with target race, object, and condition (race information, no information) as factors was run on the correct response time results. Only a significant effect of object emerged; participants were faster to respond to guns (M = 522ms, SD = 164ms) than harmless objects (M = 568ms, SD = 75ms), F(1, 115) = 29.60, p < .001.

Summary. The behavioral data demonstrate a clear effect of dispatch information about whether a person is armed. Participants made half as many errors when information was correct versus incorrect. The inclusion of weapon information also reduced racial bias in the shooting of unarmed targets. In contrast, the inclusion of race information did not influence racial bias in shooting decisions relative to a no information control condition.
This suggests that the reduction in racial bias seen in Study 1, where both weapon and race information were given, might be at least partially due to the presence of information that targets were armed. To further probe this question and to better understand the shooting decision-making process, I conducted a DDM analysis of the data.

Hierarchical DDM.

Model Specification and Selection. The hierarchical DDM was again specified according to the guidelines set by Pleskac et al. (2017). All parameters were allowed to vary as a function of race and information, but only drift rate and non-decision time were allowed to vary by object.

Hierarchical DDM Analysis. Figure 8 shows condition-level parameter estimates. Focusing on the threshold, I replicated the finding that participants' thresholds were lower when dispatch listed targets as armed (µ = 0.875, 95% HDI [0.855, 0.897]) than unarmed (µ = 0.923, 95% HDI [0.900, 0.943]), µdiff = -0.046, 95% HDI [-0.076, -0.017], d = -0.41, 95% HDI [-0.67, -0.14]. When weapon information was given, thresholds were also higher for Black men (µ = 0.913, 95% HDI [0.893, 0.936]) than White men (µ = 0.884, 95% HDI [0.863, 0.904]), µdiff = 0.032, 95% HDI [0.000, 0.059], d = 0.28, 95% HDI [0.01, 0.53]. Finally, thresholds were lower when incorrect race information was given (µ = 0.804, 95% HDI [0.778, 0.827]) than when the information was correct (µ = 0.889, 95% HDI [0.871, 0.908]), µdiff = -0.089, 95% HDI [-0.117, -0.057], d = -0.77, 95% HDI [-1.04, -0.49].

Moving on to the relative start point, there was not a credible effect of race on start point regardless of what type of dispatch information was (not) given. Weapon information had an effect on start point in the expected direction, although it was not credible, µdiff = .020, 95% HDI [.000, .039], d = 0.50, 95% HDI [-0.02, 0.96]. Participants favored the shoot decision more when targets were described as armed (µ = .542, 95% HDI [.529, .558]) than unarmed (µ = .523, 95% HDI [.510, .538]). This diverges from Study 1, where information that a target was armed decreased participants' start point to more favor the don't shoot decision.

Figure 8: Diffusion model parameters as a function of target race, dispatch information, and object for Study 2. Dots represent modal posterior predictions at the condition level; bars are 95% HDI. W = White, B = Black, NI = No Information.
Participants' non-decision times for both guns and nonguns were shorter when the weapon information was correct (µ = 343ms, 95% HDI [337ms, 349ms]) than incorrect (µ = 359ms, 95% HDI [352ms, 365ms]), µdiff = -16ms, 95% HDI [-25ms, -7ms], d = -0.24, 95% HDI [-0.37, -0.10]. Similarly, non-decision times for both guns and nonguns were shorter when the race information was correct (µ = 355ms, 95% HDI [349ms, 361ms]) than incorrect (µ = 386ms, 95% HDI [379ms, 393ms]), µdiff = -31ms, 95% HDI [-40ms, -22ms], d = -0.47, 95% HDI [-0.60, -0.33]. Replicating Study 1, non-decision time was smaller for guns (µ = 354ms, 95% HDI [350ms, 358ms]) than for nongun objects (µ = 365ms, 95% HDI [361ms, 369ms]), µdiff = -11ms, 95% HDI [-17ms, -5ms], d = -0.17, 95% HDI [-0.25, -0.08].

Turning to the drift rate, I replicated the interaction between dispatch information and object for weapon dispatch information, µint = 1.88, 95% HDI [1.50, 2.24]. When dispatch correctly indicated that a target was armed, participants accumulated evidence much more quickly to shoot (µ = 1.59, 95% HDI [1.42, 1.73]) than when no information was provided (µ = 0.52, 95% HDI [0.32, 0.71]), µdiff = 1.08, 95% HDI [0.83, 1.32], d = 1.38, 95% HDI [1.06, 1.72]. When targets were unarmed and dispatch incorrectly identified the target as armed, participants accumulated evidence to not shoot more slowly (µ = 0.72, 95% HDI [0.54, 0.92]) than when this information was not given (µ = 1.53, 95% HDI [1.38, 1.67]), µdiff = -0.78, 95% HDI [-1.05, -0.57], d = -1.06, 95% HDI [-1.37, -0.75].

When participants did not receive dispatch information, race had a different effect on drift rate for armed and unarmed targets, µint = 0.64, 95% HDI [0.25, 1.02]. This is consistent with past work showing that race is accumulated as evidence to shoot alongside the object being held. Specifically, participants accumulated evidence more quickly to not shoot unarmed White targets (µ = 1.41, 95% HDI [1.21, 1.58]) than unarmed Black targets (µ = 1.00, 95% HDI [0.81, 1.17]), µdiff = 0.41, 95% HDI [0.16, 0.67], d = 0.51, 95% HDI [0.20, 0.87]. They also accumulated evidence more quickly to shoot armed Black targets (µ = 1.44, 95% HDI [1.27, 1.64]) than armed White targets (µ = 1.19, 95% HDI [1.01, 1.38]), although this difference was not credible, µdiff = 0.24, 95% HDI [-0.02, 0.50], d = 0.34, 95% HDI [-0.02, 0.67]. Unlike Study 1, the same pattern of race bias was observed when participants were given weapon dispatch information, µint = 0.44, 95% HDI [0.07, 0.82]. Participants showed race bias for armed targets, µdiff = 0.28, 95% HDI [0.01, 0.51], d = 0.36, 95% HDI [0.00, 0.65], and unarmed targets, µdiff = 0.22, 95% HDI [-0.02, 0.46], d = 0.27, 95% HDI [-0.03, 0.59]. When participants were given race dispatch information they did not show a general pattern of bias, µint = 0.27, 95% HDI [-0.11, 0.69]. This was because participants only showed bias for unarmed targets, µdiff = 0.29, 95% HDI [0.05, 0.54], d = 0.38, 95% HDI [0.07, 0.71], and not for armed targets, µdiff = 0.01, 95% HDI [-0.27, 0.27], d = 0.02, 95% HDI [-0.35, 0.35].

Summary. The DDM results were largely consistent with and clarified the findings from Study 1. Dispatch information strongly influenced how participants collected information. Thresholds were lower when dispatch information stated that the target was armed. Non-decision times were also longer when information was incorrect. However, there were some differences between the studies.
First, in the current study both types of dispatch information reduced bias in the accumulation of information as measured by the drift rate, but neither eliminated it. Second, weapon information shifted participants' starting bias to favor the decision that matched the information (although this effect was not credible).

Discussion

Both race and weapon dispatch information had unique influences on decisions to shoot and the underlying decision process. Weapon dispatch information had the primary effect of reducing errors when correct, due to changes in how participants accumulated evidence. This process-level finding could reflect changes in visual search strategies. Both types of information reduced the influence of race on evidence accumulation, but only race dispatch information eliminated racial bias in the accumulation of evidence for armed targets. In sum, there is some evidence that both race and weapon information have independent effects on the decision to shoot that reduce racial bias.

Unlike Study 1, race generally impacted how participants accumulated evidence to shoot regardless of whether they did or did not receive dispatch information. This difference could be due to the fact that both pieces of information were presented separately, rather than together. If dispatch information reduces racial bias in the decision to shoot because it provides relevant information, giving demographic information about who to look for (race information) and information about whether they are armed (weapon information) may override the influence of stereotypic associations between Black men and violence.

A related issue with the DDM is the mixed finding that dispatch information that the target was armed pushed participants' start point to favor not shooting in Study 1 and to favor shooting in Study 2. Interpreting the start point parameter as a measure of prior bias is dependent on the measure being sensitive to factors that should change biases in a consistent way. If one holds an a priori assumption that probabilistic information about what response is correct should only create a prior bias to favor that response, the results of Study 1 rule out the DDM as a viable model. Even if this assumption is relaxed for a specific manipulation (e.g., dispatch information), interpreting the start point still requires validating it in other ways. Studies 3 and 4 did this in two different ways. Study 3 tested whether the DDM could detect a simulated change in start point in the predicted direction under conditions similar to Study 1. Study 4 used an experimental manipulation to show that the start point responds predictably to a manipulation of payoffs with actual participants.

Study 3: Model Recovery of Start Point

The purpose of Study 3 was to demonstrate that the current hierarchical implementation of the DDM can detect predicted changes in start point. In Study 1, dispatch information had a counterintuitive effect on start point and also influenced how people accumulated information (a drift rate effect). In contrast, Study 2 showed a (noncredible) effect in the predicted direction. To rule out the possibility that this unusual pattern was due to tradeoffs between the model parameters that render the results of the model uninformative, I simulated 100 datasets based on the data from Study 1 but with the expected effect of information on start point.
If the model can accurately recover differences in start point from simulated data, it would suggest that this unusual finding is not merely an artifact of the analytic method used.

Method and Results

I conducted a parameter recovery analysis to test whether the DDM can detect start point changes in a study design similar to the experimental studies described subsequently. Decision and reaction time data were simulated from the Study 1 condition-level parameter estimates using the RWiener package (Wabersich & Vandekerckhove, 2014). In the absence of a strong a priori estimate for the effect size of a potential start point difference, I based the simulations on a medium-sized effect (Cohen's d = .65). Taking into account the precision around the start point, this corresponds to a start point difference of .033 between information conditions.8

8 Given the condition-level standard deviation of .052 from Study 1 and a d of .65, the predicted difference required is given by µdiff/.052 = .65, where µdiff = .033.

I used this estimate to generate an information effect across the two information conditions. To make the design of the simulations as close to those studies as possible, I used the no dispatch and armed dispatch information conditions, collapsed across race and expertise. The condition-level means and standard deviations for these conditions are reported in Table 1. From these condition-level distributions, I created 100 unique parameter estimates, each representing a study with 100 participants. Forty trials were simulated for each participant in each condition: unarmed with no dispatch information, armed with no dispatch information, unarmed with armed dispatch information, and armed with armed dispatch information. The number of trials and participants were based on the experimental design that would be used in Study 4. I then fit the hierarchical DDM to each simulated experiment, but used the simpler four-condition variant described above to minimize computation time.

Table 1 reports the most credible parameter estimates across the 100 simulations, as well as the 95% HDI for each parameter. The model accurately captures all parameter values, including the start point for the no dispatch and armed dispatch information conditions. In addition, I examined whether the DDM recovered a credible information effect on start point by examining the difference between the condition-level start points for each simulation. The proportion of times an effect was recovered provides an estimate of power (Kruschke, 2014). This analysis revealed 85% power (95% HDI [.77, .91]) to detect a medium-sized condition-level start point difference.

In sum, the model recovery study shows that the hierarchical DDM can accurately and reliably recover the parameters used to generate data in simulated datasets. In particular, the model has an acceptable level of power to detect a medium-sized difference in start point across conditions. In other words, there is little evidence that, within the current analytical framework—using the sample sizes and numbers of trials used in this set of studies—the model cannot accurately capture a start point difference.
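In the spirit of the simulation procedure just described, the sketch below generates choices and response times from the DDM with a start point shift between two information conditions. The dissertation used the RWiener R package for this step; the hand-rolled Euler simulation here is an assumed stand-in, and the parameter values are round placeholders rather than the fitted estimates.

```python
# Illustrative DDM simulator: drift delta, threshold alpha, relative start point beta,
# and non-decision time tau. Choice 1 corresponds to the upper ("shoot") boundary.
import numpy as np

def simulate_ddm_trial(alpha, beta, delta, tau, dt=0.001, sigma=1.0, rng=None, max_time=5.0):
    rng = rng or np.random.default_rng()
    x = beta * alpha                       # start point as a proportion of the threshold
    t = 0.0
    while 0.0 < x < alpha and t < max_time:
        x += delta * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= alpha else 0), tau + t

rng = np.random.default_rng(3)
# Assumed start points: shifted toward "shoot" when dispatch says the target is armed.
conditions = {"no information": 0.54, "armed information": 0.57}
for label, beta in conditions.items():
    sims = [simulate_ddm_trial(alpha=1.1, beta=beta, delta=1.6, tau=0.37, rng=rng)
            for _ in range(2000)]
    p_shoot = np.mean([choice for choice, _ in sims])
    mean_rt = np.mean([rt for _, rt in sims])
    print(f"{label}: P(shoot) = {p_shoot:.2f}, mean RT = {mean_rt:.3f}s")
```

Refitting the hierarchical DDM to many such simulated datasets and counting how often the recovered start point contrast excludes zero in its 95% HDI yields the kind of power estimate reported above.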
Table 1: Summary of Parameter Recovery Study.

Parameter                          True Value     Mode      95% HDI
µ_α (no information)                  1.096       1.101     [1.074, 1.122]
µ_α (armed information)               1.014       1.104     [0.985, 1.043]
τ_α                                   0.134       0.135     [0.119, 0.152]
µ_β (no information)                   .538        .538     [.522, .551]
µ_β (armed information)                .572        .569     [.556, .582]
τ_β                                    .052        .054     [.044, .060]
µ_τNG (no information)                0.369       0.368     [0.354, 0.377]
µ_τNG (armed information)             0.367       0.368     [0.355, 0.379]
µ_τGU (no information)                0.394       0.393     [0.382, 0.405]
µ_τGU (armed information)             0.352       0.350     [0.339, 0.364]
τ_τ                                   0.061       0.061     [0.058, 0.066]
µ_δNG (no information)                -2.07       -2.07     [-2.20, -1.89]
µ_δGU (no information)                 1.04        1.07     [0.88, 1.20]
µ_δNG (armed information)             -1.52       -1.49     [-1.66, -1.37]
µ_δGU (armed information)              1.65        1.64     [1.51, 1.86]
τ_δ                                    0.75        0.76     [0.70, 0.81]
µ_β (armed) − µ_β (none)               .033        .034     [.009, .054]

Note. Values for the mean and 95% HDI are averaged across the simulations. µ = group-level mean; τ = group-level standard deviation; NG = nongun; GU = gun.

Discussion

The purpose of the model recovery study was to determine whether the DDM could detect simulated start point changes. This is important given the Study 1 result that dispatch information had a counterintuitive effect on the start point and also influenced the accumulation of evidence. This unusual finding raises the possibility that parameters in the DDM might trade off and create implausible parameter estimates. This concern was not supported by the simulation analysis. The DDM captured the difference in start point accurately, precisely, and with no evidence of trade-offs with any of the other parameters. The other model parameter estimates also showed no evidence of bias. Thus, there seems to be little reason to suggest that the Study 1 results are a mere artifact of the modeling process. However, there is still a question of whether the start point parameter actually measures bias to favor the shoot decision. In the next study I validated the psychological interpretation of this parameter by testing whether a targeted experimental manipulation influences start point.

Study 4: Model Validation with Payoff Manipulation

The purpose of Study 4 was to validate the start point parameter as an index of bias to favor the shoot or don't shoot decision. Unlike Study 3, which showed that the DDM can detect simulated differences in start point, this study tested whether the start point is sensitive to experimental manipulations that should influence this parameter. To test this, I manipulated the payoff matrix used in the FPST to award and deduct points based on performance. If this manipulation influences the start point, it would provide construct validity for the interpretation of the start point parameter, and would also show that it can be influenced experimentally. As the start point is not influenced by dispatch information, this would further support the interpretation that dispatch information exerts its effects on the evidence accumulation process, perhaps by changing how people search for information.

Method

One hundred five undergraduate women9 completed two blocks of the FPST with decision payoffs manipulated across blocks. Three participants were removed for responding carelessly, defined as responding faster than 300ms on 20% or more trials. The remaining 102 participants (Mage = 19.0, SD = 1.4) were 78.4% White, 7.8% Black, 9.8% Asian, with 3.9% from other groups. Each block contained 160 trials and the order of blocks was counterbalanced across participants.

9 Studies 1 and 4 came from the same pool of undergraduates. Men comprise less of this subject pool than women and were oversampled in Study 1, which was completed before Study 4. There were either no men left to participate in Study 4 or they did not sign up before women filled the sign-ups.

In order to change the likelihood of shoot or don't shoot decisions, I manipulated the payoff matrix for decisions across the two blocks (see Table 2). To encourage a bias to shoot in one block, shooting an armed target earned participants 25 points, whereas shooting an unarmed target only cost participants 5 points.
In contrast, not shooting an armed target cost participants 25 points, whereas not shooting an unarmed target only earned participants 5 points. This creates a situation where choosing to shoot consistently leads to an average payoff of 10 points per trial (versus -10 for not shooting) when collapsing across object type. In contrast, in the block where shooting is discouraged the payoffs are mirrored so that choosing to not shoot consistently leads to an average payoff of 10 points per trial. In sum, the different payoff rates in the blocks should create a bias to shoot or not shoot.

Table 2: Payoff Values for the FPST by Block.

                     Shooting Encouraged Block          Shooting Discouraged Block
                     Armed Target   Unarmed Target      Armed Target   Unarmed Target
Shoot                     25             -5                   5             -25
Don't Shoot              -25              5                  -5              25

Note. FPST = First Person Shooter Task.

Results

Behavioral Analysis. To test whether the payoff manipulation impacted the decision to shoot, I conducted a within-subjects ANOVA on error rates, with target race, object, and payoff as factors. Figure 9 (left panel) shows the decision data for all conditions. The predicted two-way interaction between object and payoff was significant, F(1, 100) = 126.58, p < .001. As expected, when the payoff structure favored shooting, participants shot more unarmed targets (M = .399, SD = .203) than when it favored not shooting (M = .228, SD = .098), t(100) = 8.49, p < .001. In addition, participants failed to shoot armed targets less (M = .205, SD = .092) when the payoff structure favored shooting than when it favored not shooting (M = .410, SD = .166), t(100) = 12.60, p < .001.

Figure 9: Proportion errors (left panel) and correct response times (right panel) for all conditions. DS = Payoff favors not shooting. SH = Payoff favors shooting. 95% confidence intervals were calculated using the methods outlined by Morey (2008).

The typical interaction between race and object was also evident in error rates, F(1, 100) = 48.82, p < .001. Participants were more likely to shoot unarmed Black men (M = .340, SD = .179) than unarmed White men (M = .288, SD = .179), t(100) = 8.28, p < .001. They were also more likely to fail to shoot armed White men (M = .298, SD = .176) than armed Black men (M = .319, SD = .169), t(100) = 2.83, p = .006. As in Study 2, I replicated the typical finding of racial bias in the decision to shoot when dispatch information was not provided. There was no evidence for a three-way interaction, F(1, 100) = 0.46, p = .50.

Focusing on the response time data, an ANOVA with identical predictors was run on the correct responses. Figure 9 (right panel) shows the response time data for all conditions. Consistent with past work, participants were faster to respond to guns (M = 501ms, SD = 54ms) than nonguns (M = 532ms, SD = 66ms), F(1, 100) = 258.66, p < .001. This effect was qualified by an interaction with payoff, F(1, 100) = 20.49, p < .001. Participants were faster to shoot armed targets (M = 495ms, SD = 63ms) when the payoff structure favored shooting than when it favored not shooting (M = 506ms, SD = 43ms), t(100) = 11.18, p < .001. Participants were not faster to not shoot unarmed targets when the payoff structure favored not shooting (M = 531ms, SD = 52ms) than when it favored shooting (M = 533ms, SD = 78ms), t(100) = 2.17, p = .72.
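As a quick illustrative check of the payoff logic in Table 2 (not code from the dissertation), the expected value of each fixed response policy can be computed directly, assuming targets are armed on half of the trials as in the standard FPST:

```python
# Expected points per trial for always shooting versus never shooting in each block.
payoffs = {
    "encouraged":  {"shoot": {"armed": 25, "unarmed": -5},
                    "dont_shoot": {"armed": -25, "unarmed": 5}},
    "discouraged": {"shoot": {"armed": 5, "unarmed": -25},
                    "dont_shoot": {"armed": -5, "unarmed": 25}},
}
for block, table in payoffs.items():
    for action, cells in table.items():
        expected = 0.5 * cells["armed"] + 0.5 * cells["unarmed"]  # targets armed half the time
        print(f"{block:>11} block, always {action}: {expected:+.1f} points per trial")
```

This reproduces the +10 versus -10 asymmetry described above, which is the asymmetry intended to shift participants' start points toward the rewarded response.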
Summary. The behavioral data show a clear effect of payoff structure. Participants more quickly and accurately shot armed targets when shooting was rewarded more than punished, and they more quickly and accurately chose to not shoot unarmed targets when the opposite was true. From a process perspective, these results are encouraging because they show a pattern consistent with a change in start bias to favor the shoot (or don't shoot) decision. However, these results could also be explained by changes in participants' accumulation of information, as indicated by the drift rate. If drift rate changes solely account for these findings, this would provide evidence against the start point as an index of bias. I disentangled these possibilities using the DDM.

Hierarchical DDM.

Model Specification and Selection. The DDM was embedded within a hierarchical framework. All parameters were allowed to vary as a function of race and payoff, but only drift rate and non-decision time were allowed to vary as a function of object (Pleskac et al., 2017).

Hierarchical DDM Analysis. Figure 10 shows condition-level estimates of the threshold, start point, drift rate, and non-decision time. The central question motivating this experiment was whether the start point parameter would capture the effect of the payoff manipulation. There was a credible medium-sized effect of payoff on start point, µdiff = .037, 95% HDI [.024, .050], d = .771, 95% HDI [0.48, 1.05]. Participants showed an initial bias to favor the shoot response more when the payoff structure rewarded shooting (µ = .535, 95% HDI [.524, .543]) than when it favored not shooting (µ = .496, 95% HDI [.488, .506]). This provides convergent validity for the start point parameter as an index of bias.

Figure 10: Diffusion model parameters as a function of target race, payoff structure, and object for Study 4. Dots represent modal posterior predictions at the condition level; bars are 95% HDI. DS = payoff favors not shooting, SH = payoff favors shooting.

I also tested whether the payoff matrix influenced other DDM parameters. The payoff manipulation did not influence participants' thresholds (µdiff = 0.012, 95% HDI [-0.017, 0.038]), non-decision time (µdiff = 8ms, 95% HDI [3ms, 19ms]), or drift rates (µdiff = 0.18, 95% HDI [0.05, 0.30]), providing divergent validity for the start point parameter as a measure of bias. However, I did find a credible interaction between drift rate and object, µint = 1.75, 95% HDI [1.47, 2.00]. For armed targets, participants showed stronger drift rates towards shooting when shooting was encouraged (µ = 1.58, 95% HDI [1.44, 1.70]) than when it was discouraged (µ = 0.52, 95% HDI [0.40, 0.65]), d = 1.32, 95% HDI [1.07, 1.54]. For unarmed targets, participants showed stronger drift rates towards not shooting when shooting was discouraged (µ = -1.34, 95% HDI [-1.47, -1.22]) than when it was encouraged (µ = -.65, 95% HDI [-0.53, -0.52]), d = 0.84, 95% HDI [.63, 1.09]. Both of these effects were large and demonstrate that the payoff manipulation influences multiple parts of the decision process.
I also replicated the typical race effect in drift rates found in Study 2 and other research (Correll et al., 2015; Pleskac et al., 2017). The interaction between race and object was credible, µint = 0.46, 95% HDI [0.21, 0.72]. Participants showed weaker rates to not shoot unarmed Black men (µ = -0.85, 95% HDI [-0.98, -.73]) than unarmed White men (µ = -1.16, 95% HDI [-1.28, -1.03]), d = 0.39, 95% HDI [0.16, 0.60]. In contrast, participants showed stronger drift rates to shoot armed Black men (µ = 1.11, 95% HDI [1.00, 1.25]) than armed White men (µ = .98, 95% HDI [0.85, 1.10]), although this difference was not credible, d = -0.18, 95% HDI [-0.40, 0.03]. There was no evidence of a three-way interaction with payoff structure, µint = -0.10, 95% HDI [-0.62, 0.44], and race did not impact any other parameters in the model.

Summary. As predicted, manipulating the payoff structure of the FPST influenced individuals' start point. Individuals set start points closer to the shoot decision when the payoff structure rewarded shooting. This provides convergent evidence that the start point indexes an initial bias to favor the shoot decision. The payoff manipulation did not influence the length of non-decision processes or participants' threshold level, providing some divergent evidence that the effect of bias is isolated to the start point. However, there was also strong evidence that the payoff manipulation did influence participants' drift rate. Insofar as the payoff manipulation is assumed to selectively influence the start point, interpreting the start point parameter as solely indicating a bias on the process level may be premature. Finally, I observed racial bias in drift rate; evidence for the shoot decision was stronger when targets were Black rather than White.

Discussion

Moving from the level of model parameters to the level of psychological processes requires validating the presumed interpretations of those parameters experimentally. To validate the interpretation of the start point parameter as a measure of initial bias to shoot, I adjusted the payoff structure of the FPST to reward or penalize shooting behavior. These payoffs had a clear effect on behavior that was reflected in changes in the DDM parameters. As predicted, when shooting was rewarded, the start point parameter shifted towards the shoot decision. When the opposite was true, participants' start point shifted towards the don't shoot decision. Combined with the fact that the payoff manipulation did not influence participants' threshold or non-decision time, this provides some validation that the start point parameter does index initial bias, and that changes in the start point parameter can be observed using experimental manipulations.

One unexpected effect of the payoff manipulation was that it also influenced the rate at which participants accumulated evidence. Drift rates were stronger (weaker) for guns when the payoff information favored (not) shooting. This violates the principle of selective influence, where manipulations designed to target a single parameter should only influence that parameter. When selective influence is achieved, it provides powerful evidence that the parameter indexes the relevant psychological construct. However, a lack of selective influence does not necessarily indicate that the psychological interpretation of a parameter is invalid. Another possibility is that the targeted experimental manipulation may have unintended consequences for other aspects of the decision process.
For example, the manipulation of payoff structure was designed to influence participants' bias for the shoot or don't shoot decision. If this effect had not been found, it would have provided strong evidence against the start point as a valid index of bias, and against the DDM as a model of the decision to shoot. However, the finding that the payoff manipulation influenced both the start point and drift rate is more ambiguous. It is possible that although the payoff manipulation was intended to solely influence an initial bias to favor one option or the other, it may have also influenced how participants accumulated information during the decision process. Insofar as people were motivated to earn the most points that they could, this manipulation may have changed participants' search strategies so that they were looking for confirmatory evidence for the decision that on average earned them more points.

In addition to the effects of payoff structure on start point, I also replicated the typical pattern of race bias in shooting decisions. Participants shot unarmed Black men more than unarmed White men, and failed to shoot armed White men more than armed Black men. In the DDM, this difference was isolated to changes in the drift rate as a function of race. Participants accumulated evidence to not shoot unarmed Black men more slowly than unarmed White men, and they accumulated evidence to shoot armed Black men more quickly than armed White men, although this difference was not credible. This result differed starkly from Study 1, where participants showed no racial bias when they were given race and weapon dispatch information beforehand. Although not a direct test, this suggests that giving reliable dispatch information does mute the influence of irrelevant information like race on the decision to shoot.

GENERAL DISCUSSION

When officers are forced to make a decision about whether or not to use lethal force, this decision is not made in a vacuum. Officers responding to a call typically have, at minimum, demographic information about the person they are to interact with. Any pertinent information about the individual, such as whether or not he or she is armed, is also passed on to officers. Despite these practices, past research on the decision to shoot has focused on how individuals make such decisions in the absence of this information. While this is one reasonable starting point for understanding shooting behavior, the ability to extrapolate these findings to real world shooting decisions is limited. The current studies found that untrained civilians reliably showed racial bias in the decision to shoot when they were not given any additional contextual information. However, when officers and students received accurate demographic information from dispatch as well as information about whether a target was armed, they showed no racial bias in shooting decisions. Thus, simply providing relevant information about the person an officer will encounter might eliminate bias in shooting decisions. An important implication of this finding is that racial bias in shooting decisions as observed in laboratory studies might be limited to rare cases of shootings where an individual is holding an ambiguous object and the officer has no prior information about the individual. That is, the race of an individual might only impact the decision to shoot in the absence of more relevant disambiguating information.
Insofar as these findings can be extrapolated to real world shootings, they provide another perspective for understanding why individuals like Tamir Rice are accidentally shot. The results from these experimental studies suggest that when officers encounter an individual who has been accurately identified by dispatch, information that the individual is armed influences how that person's actions are perceived. Rice's movements may have seemed more dangerous to the officers because of the dispatch information they had received, independent of any prior bias. In sum, events like the shooting of Tamir Rice might result from inaccurate dispatch information in addition to or instead of racial bias on the part of the officers.

The fact that race did not influence the decision to shoot when dispatch information was provided raises the question of the pervasiveness of racial bias in officer shooting decisions. Work using the traditional FPST with officers has found mixed results. Sometimes officers show no racial bias in their decisions (Correll et al., 2007). However, officers who routinely interact with minority individuals involved in gang-related crime show bias (Sim et al., 2013). Research on racial disparities in the real world has also shown mixed evidence for the existence of bias in shooting decisions, with some proposing that officers use lethal force more for Blacks than Whites (Ross, 2015). Others have not found support for such a conclusion (Cesario, Johnson, & Terrill, 2017; Fryer, 2016; Goff, Lloyd, Geller, Raphael, & Glaser, 2016).10 While a full discussion of whether certain officers or departments use lethal force disproportionately for Black civilians is beyond the scope of this research, this work highlights that bias is more likely in situations where officers have little advance information about the person they encounter.

10 Although officers may not use lethal force disproportionately against Black individuals, there is evidence for greater law enforcement use of nonlethal force (e.g., Taser use) with Black individuals than with White individuals (Fryer, 2016; Goff et al., 2016).

Even if under most circumstances the race of a civilian does not impact an officer's use of lethal force, preventing police shootings of unarmed individuals is of considerable importance. Although there are other controversial policing practices that affect more people (e.g., stop and frisk policies), shooting incidents are important because they receive a disproportionate share of media coverage. Civilians trust and obey law enforcement when they believe their authority is legitimate (Tyler, 2006; Tyler, Goff, & MacCoun, 2015), and shootings of unarmed individuals undermine legitimacy. Given that Black individuals already have low confidence in the police (31% of Blacks vs. 58% of Whites; Tyler et al., 2015), when police shoot individuals like Tamir Rice this raises questions about whether race played a role in the minds of the officers. Thus, it is particularly important to focus on improving officers' abilities to rapidly and accurately identify objects to avoid mistakes that could be perceived as being motivated by racial animus, even if those mistakes were driven by other factors.

What Part of Dispatch Information Reduces Racial Bias?

Dispatch information refers to all information that officers receive from police dispatch before they encounter a suspect.
This information varies greatly depending on what situation an officer is called to, although certain information is always transmitted (where the emergency is, what the emergency is, when it happened, and who is involved; Norcomm, 2017; Kobb, 2016). The current studies focused on two components commonly given by dispatch: demographic (race) information, and information about whether the suspect was armed. When both pieces of information were presented simultaneously (Study 1), they eliminated bias in the decision to shoot. Yet it is unclear from this design whether this reduction was due to the presence of race or weapon information. Study 2 probed whether race or weapon dispatch information alone eliminated bias by presenting each type of information separately in a blocked design. In terms of the actual decision to shoot, the addition of weapon information reduced bias to shoot unarmed Black men relative to White men, but did not influence bias for armed targets. Race information did not significantly influence shooting decisions for armed or unarmed targets. At the process level, both weapon and race information weakened the effect of race on the evidence accumulation process. This effect was strongest when race dispatch information was given for armed targets; participants accumulated evidence to shoot equally for White and Black targets. Thus, providing individuals with relevant information may independently decrease racial bias in the decision to shoot.

At this point it seems appropriate to speculate about how weapon and race dispatch information might separately reduce bias in the decision to shoot. The role of accurate weapon information is perhaps the most straightforward. The FPST is at heart a visual identification task; participants are instructed to shoot when they see guns and not shoot when they see harmless objects. Whether a weapon is present is directly relevant to the decision. This is reflected in the fact that participants accumulate evidence to shoot much more quickly when they receive correct information that the target is armed. When weapon information is generally accurate, as it is in the current task, it reduces the ambiguity surrounding what decision individuals should make. Given that stereotypes are most likely to influence judgments in the absence of disambiguating information (Duncan, 1976; Sagar & Schofield, 1980), giving directly relevant information about weapons is likely to undermine their effects. For example, consider the role of race when evaluating job candidates. When candidates of different races are equally qualified, White candidates are sometimes selected at higher rates than Black candidates (Bertrand & Mullainathan, 2004; McConahay, 1983). However, when there are clear differences in qualifications between candidates, this bias disappears. A similar process may occur for shooting decisions. When individuals have no prior information about whether a suspect is armed, race may influence their decision to shoot. However, when they have accurate information that the target is armed, the relevance of this information may overpower the influence of stereotypes that lead to disproportionately more shootings of Black men than White men.

Accurate dispatch information about the race of the target may also have impacted participants' decisions. Work using a similar paradigm, the weapon identification task (WIT; Payne, 2001; 2006), has shown that individuals primed with pictures of Black men identify weapons faster and more accurately than when primed with faces of White men.
Although this seems to suggest that the presence of race information might exacerbate bias in the decision to shoot, an important difference is that in the current study dispatch information was presented for a long period of time (2,000 ms) before the target appeared. In contrast, in the WIT, the prime appears for only a short period of time (200 ms) and is immediately replaced by the target. When participants are given this information seconds in advance, they may be able to control the automatic stereotypic associations between Black men and violence that lead to higher shooting rates for Black than White targets. In sum, the current experiments showed evidence that both weapon and race information might reduce race bias in shooting decisions. Future work should test the mechanisms by which these factors influence decisions, as well as investigate other relevant dispatch information that is missing from experimental shooting tasks.

Modeling the Role of Dispatch Information and Expertise

A secondary goal of these studies was to better understand how dispatch information and expertise impact the shooting decision process. I investigated this by modeling the decision to shoot using a common model of decision-making, the DDM. This model assumes that when making decisions between two choices, individuals repeatedly sample information from their environment relevant to the decision. When they reach some threshold of evidence they make their decision. Study 1 tested how dispatch information impacted the decision to shoot within this model using a sample of untrained students and trained officers. Behaviorally, both officers and students were faster and more likely to shoot armed targets when dispatch information was correct, and were slower and less likely to not shoot unarmed targets when the information was incorrect. The DDM isolated this pattern of results to changes in how participants accumulated information and a counterintuitive bias to ignore the information. A strength of the DDM is that it divides the decision process into different components that map well onto psychological constructs. By testing which parts of the decision process experimental factors influence, we can test different hypotheses about how those factors impact the decision to shoot.

In order to move from the model parameters in the DDM to psychological constructs, it is necessary to conduct studies validating that the parameters are good indicators of the relevant underlying processes. For example, validating the start point parameter as an index of bias to favor the shoot decision requires demonstrating that the start point is sensitive to experimental manipulations designed to change participants' biases. This issue was particularly important for the current work, given that providing participants with reliable information that targets were armed had a counterintuitive effect on their start point. This raises two questions: 1) whether the hierarchical DDM can detect predicted differences in start point, and 2) whether the start point can be manipulated experimentally. In Study 3, I tested the first question, whether the hierarchical DDM can detect simulated differences in start point while also accurately recovering the values of other parameters. The results from this study were unequivocal: given an experimental design very similar to that of Study 1, the DDM was able to capture a medium-sized simulated difference between conditions in start point with 85% power.
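The logic behind that power estimate can be illustrated with a deliberately stripped-down simulation. The Python sketch below is a crude stand-in for the full hierarchical Bayesian recovery analysis of Study 3: the true shift matches the condition-level effect observed in Study 4, but the number of subjects and the per-subject estimation noise are assumed values chosen only for illustration, and a normal-theory interval stands in for the model's 95% HDI.

import numpy as np

rng = np.random.default_rng(1)

def detects_shift(n_subjects=100, true_shift=0.037, est_sd=0.12):
    # One simulated experiment: each subject contributes a noisy estimate of
    # the effect of the manipulation on his or her relative start point.
    # The shift counts as detected if the 95% interval on the mean excludes 0.
    estimates = rng.normal(true_shift, est_sd, n_subjects)
    se = estimates.std(ddof=1) / np.sqrt(n_subjects)
    lo, hi = estimates.mean() - 1.96 * se, estimates.mean() + 1.96 * se
    return not (lo <= 0.0 <= hi)

power = np.mean([detects_shift() for _ in range(5000)])
print(f"Estimated power to detect the start point shift: {power:.2f}")

With these assumed values the estimate comes out in the same range as the 85% figure reported above, but the point is the procedure itself: simulate data with a known difference, re-estimate the parameters, and record how often the difference is recovered as credible.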
In addition, the model showed no bias in recovering the other model parameters. In Study 4, I tested whether the start point could be manipulated by changing the payoff structure of the task. Participants in this study were more likely to shoot armed targets when shooting was rewarded, and to not shoot unarmed targets when not shooting was rewarded. These behavioral changes were mirrored by a similar change in start point, such that participants favored the decision consistent with higher payoffs. There was also evidence that these payoffs influenced how participants accumulated information: like dispatch information in Study 1, the payoff manipulation affected the evidence accumulation process.

These results contrast with work in the cognitive domain. For example, Voss et al. (2004) validated the DDM parameters for a color discrimination task. Participants were asked to identify whether blue or orange was more prevalent on a grid. When participants were rewarded for choosing one decision over the other, they showed a start point bias to favor the rewarded response. Importantly, this manipulation did not affect evidence accumulation. How do we reconcile these inconsistent results? One possibility is that the social nature of the FPST changes how participants respond to information that encourages shooting. Individuals may be easily biased to favor choosing orange over blue when those decisions have little real world value. But in a task like the shooter task, where decisions are perceived as more impactful, individuals' biases may be more rigid. Instead, this information may leak into how individuals search for the object. If the influence of biasing information depends on the nature of the task, other tasks where decisions are sensitive (e.g., an approach-avoidance task using members from different groups as stimuli) should show a similar effect of biasing information on the accumulation of evidence rather than the start point.

The finding that dispatch information influences the decision to shoot by impacting the process of information accumulation has important training implications. Both officers and students accumulated information more slowly when dispatch information was incorrect, but officers outperformed students because they were better at identifying objects. Identifying what aspect of their weapons training or on-the-job experience improves officers' performance is key to improving their ability to distinguish guns from harmless objects. This weapon identification training could then be strategically employed to assist officers who are particularly poor at rapidly identifying objects, as identified by tasks like the FPST. Training officers to identify weapons accurately is most likely to help officer decisions in high-pressure situations where they need to rapidly identify an object in a suspect's hand. Such training would be just one component of broader use-of-force training focused on addressing other factors that officers must consider when using force (e.g., the intent of the person, the presence of bystanders). In many cases these other components may be more important to officer use of force. Nonetheless, given the gravity of accidentally shooting an unarmed individual, this training has a place as part of a multi-faceted approach to improving officer decisions to shoot.
Another way to tackle the issue that poor dispatch information increases mistakes in officer decisions to shoot is to consider the role of policy in shaping the information that dispatch passes on to officers. In the case of Tamir Rice, had dispatch passed on information that Rice was a juvenile and that the weapon was probably fake, perhaps the officers would not have used lethal force. Policy changes that ensure this kind of information is passed on might well help officers make better decisions. The limitation of this approach is that even if dispatch policies are improved, this will not prevent officers from getting incorrect information when it is misreported. Dispatchers, especially in metropolitan areas, often receive false reports that weapons are present on scene because civilians know this causes the police to respond more rapidly (Lance Langdon, personal communication, June 1, 2016). Thus, even if dispatch policies are improved, individual training to identify objects in these high-pressure situations is still needed.

Individual Differences in Racial Bias

The current studies were focused on racial bias in decisions at the group level. Especially when dispatch information was given, participants did not show racial bias in shooting decisions. However, these group-level differences can obscure individual variation in decisions. Although on average participants did not show racial bias in shooting decisions, some participants were more likely to shoot unarmed Black men than unarmed White men, and some participants were more likely to shoot unarmed White men than unarmed Black men. Understanding what causes variation in these decisions is crucial for knowing why individuals are more likely to shoot those from certain racial groups. While the current experiments did not test this question, there are a number of individual differences that might be relevant for understanding why some individuals are more likely to shoot unarmed Black (or White) men.

One individual difference that may predict racial bias in shooting decisions is attitudes (measured directly or indirectly) towards White and Black individuals. Work in this area has demonstrated that an explicit belief about the criminality of Black individuals predicts officers' racial bias in shooting decisions (Correll et al., 2007; Peruche & Plant, 2006). However, no work has focused on the converse, that a belief about the criminality of White individuals predicts anti-White shooter bias. In addition, this work has also relied on small sample sizes, making it unclear how reliable these effects are. Another individual difference that may predict racial bias in shooting behavior is motivation to control prejudice (Plant & Devine, 1998). If individuals do hold different attitudes towards other groups that influence their shooting decisions, whether those biases translate into behavior may depend on how motivated participants are to avoid acting on those biases. Past research has not found evidence for the influence of internal or external motivation to respond without prejudice in officer samples (Correll et al., 2007), but this work did not test whether the influence of motivation to control prejudice depends on biased attitudes. Similarly, motivation to control prejudice scales are explicitly designed to test motivation to avoid anti-Black prejudice, not anti-White prejudice.
Since some officers showed bias to shoot White targets more than Black targets, it is important to devise measures that can tap into these attitudes and motivations. Finding predictors of individual differences in officer biases is important not only for isolating the psychological mechanisms responsible for shooting decisions, but also for its practical implications. Individuals who show unbiased attitudes and/or a motivation to avoid acting in prejudiced ways could be favored during officer recruitment and selection. In addition, the FPST could be used to assess individual differences in officers' risk for accidentally shooting unarmed individuals. These individuals could receive weapon identification training to reduce the likelihood of such an outcome. Individual differences could even be incorporated into the DDM to understand not only whether they influence decisions, but how they influence them. For example, motivation to control prejudice may reduce racial bias in shooting decisions by increasing participants' cautiousness, as indicated by increased thresholds for Black targets. Initial support for this hypothesis has been observed in untrained civilians (Pleskac et al., 2017).

Benefits of a Diffusion Model Approach to Decisions

The current studies used the DDM to understand the decision to shoot because its parameters map conceptually onto how researchers have theoretically divided the decision to shoot. Consistent with past work (Correll et al., 2015; Pleskac et al., 2017), when no dispatch information was provided, race influenced the decision to shoot by influencing the accumulation of evidence during the decision (a drift rate change) rather than by creating a prior bias to favor the decision to shoot Black men relative to White men (a relative start point change). In addition to clarifying the process by which race caused individuals to shoot unarmed Black men more than unarmed White men, this process-level examination also clarified that training to reduce bias in the decision to shoot should focus on the accurate identification of weapons rather than on trying to prevent officers from being trigger-happy for Black men relative to White men. Similarly, the DDM also provided a framework for understanding the role of dispatch information and expertise in the decision to shoot. Focusing on the role of expertise, the DDM revealed that officers' more accurate but slower decisions were due to two separate aspects of the decision process. First, officers were slower than students because their non-decision processes took longer. Second, officers were more accurate than students because they were better at distinguishing guns from harmless objects. Without this process-level analysis, it might be tempting to conclude that officers' slower and more accurate performance is simply due to increased cautiousness. This ability to precisely test different process-level accounts is a benefit of the DDM that extends beyond understanding the decision to shoot.

As these examples show, the DDM provides a novel perspective for understanding the effects of categorical information on fast decisions relative to classic dual process accounts (Chaiken & Trope, 1999; Sherman et al., 2014). On such accounts, bias in fast decisions like the decision to shoot is explained by the influence of fast automatic associations between a category and some concept. When these fast associations are at odds with slower controlled judgments, the two processes compete to determine behavior.
This competition explains how categorical associations can automatically influence judgments, in this case, determining what object the person is holding. One consequence of this dual process account is that faster responses are often thought to represent greater automatic activation of concepts. This is the basic logic behind tasks like the lexical decision task and the implicit association task (Greenwald, McGhee, & Schwartz, 1998). For example, faster responses to armed targets than unarmed targets in the FPST would indicate that guns are more activated than harmless objects. But the diffusion model shows that there are different ways that fast or slow responses can come about. Within the FPST, shorter response times for guns are primarily due to shorter non-decision times for guns than for harmless objects, and not because of differences in activation. Similarly, officers are slower than students despite being better able to distinguish guns from harmless objects because their non-decision times are slower. In sum, the DDM provides a new way to understand fast decision-making that does not assume slower responses always represent changes in the activation of concepts.

Conclusion

Whenever a police officer accidentally shoots an unarmed individual the result is tragic. When the victim is a twelve-year-old boy, the loss of life is even more grievous. Instances like the shooting of Tamir Rice often become catalysts for broader concerns about racial disparities in police use of force. However, it is difficult if not impossible to tease apart the various factors in any one shooting that might have contributed to an officer's decision to use lethal force. Using the Rice case as an example, I designed an experimental shooting task to test whether dispatch information, race, and expertise influence shooting decisions. This analysis revealed a powerful impact of dispatch information on decisions. Good information helped individuals make better decisions and poor information resulted in worse decisions. Furthermore, dispatch information, whether accurate or not, overrode any influence that race had on the decision to shoot. Under these circumstances, dispatch information may play a more important role than race in whether an officer decides to use lethal force.

APPENDICES

APPENDIX A: ANOVA Tables

Table 3: ANOVA Summary Table for Study 1 Error Rates
Effect | Sum of Squares | df | Mean Square | F | p
Race | 0.008 | 1 | 0.008 | 1.062 | 0.304
Race × Exp | 8.827e-4 | 1 | 8.827e-4 | 0.111 | 0.740
Residual | 1.202 | 151 | 0.008
Object | 0.023 | 1 | 0.023 | 0.852 | 0.358
Object × Exp | 0.017 | 1 | 0.017 | 0.624 | 0.431
Residual | 4.085 | 151 | 0.027
Info | 0.003 | 1 | 0.003 | 0.379 | 0.539
Info × Exp | 0.008 | 1 | 0.008 | 1.130 | 0.289
Residual | 1.091 | 151 | 0.007
Race × Object | 0.021 | 1 | 0.021 | 2.211 | 0.139
Race × Object × Exp | 0.006 | 1 | 0.006 | 0.605 | 0.438
Residual | 1.434 | 151 | 0.009
Race × Info | 9.024e-5 | 1 | 9.024e-5 | 0.014 | 0.906
Race × Info × Exp | 0.009 | 1 | 0.009 | 1.434 | 0.233
Residual | 0.971 | 151 | 0.006
Object × Info | 0.938 | 1 | 0.938 | 44.316 | < .001
Object × Info × Exp | 0.054 | 1 | 0.054 | 2.540 | 0.113
Residual | 3.195 | 151 | 0.021
Race × Object × Info | 8.589e-4 | 1 | 8.589e-4 | 0.117 | 0.733
Race × Object × Info × Exp | 0.007 | 1 | 0.007 | 0.900 | 0.344
Residual | 1.113 | 151 | 0.007
Exp | 1.006 | 1 | 1.006 | 13.23 | < .001
Residual | 11.481 | 151 | 0.076
Note. Exp = Experience. The Exp effect and its residual are between subjects; all other effects are within subjects.
Table 4: ANOVA Summary Table for Study 1 Response Times
Effect | Sum of Squares | df | Mean Square | F | p
Race | 3705.94 | 1 | 3705.94 | 1.620 | 0.205
Race × Exp | 508.76 | 1 | 508.76 | 0.222 | 0.638
Residual | 345384.29 | 151 | 2287.31
Object | 974725.12 | 1 | 974725.12 | 196.699 | < .001
Object × Exp | 81167.16 | 1 | 81167.16 | 16.379 | < .001
Residual | 748269.49 | 151 | 4955.43
Info | 11552.75 | 1 | 11552.75 | 6.222 | 0.014
Info × Exp | 3449.69 | 1 | 3449.69 | 1.858 | 0.175
Residual | 280354.48 | 151 | 1856.65
Race × Object | 1085.33 | 1 | 1085.33 | 0.400 | 0.528
Race × Object × Exp | 873.14 | 1 | 873.14 | 0.322 | 0.571
Residual | 409316.61 | 151 | 2710.71
Race × Info | 280.06 | 1 | 280.06 | 0.177 | 0.674
Race × Info × Exp | 4048.16 | 1 | 4048.16 | 2.561 | 0.112
Residual | 238668.82 | 151 | 1580.59
Object × Info | 46106.77 | 1 | 46106.77 | 10.932 | 0.001
Object × Info × Exp | 7483.01 | 1 | 7483.01 | 1.774 | 0.185
Residual | 636857.02 | 151 | 4217.60
Race × Object × Info | 840.01 | 1 | 840.01 | 0.379 | 0.539
Race × Object × Info × Exp | 19.41 | 1 | 19.41 | 0.009 | 0.926
Residual | 334746.87 | 151 | 2216.87
Exp | 748510 | 1 | 748510 | 29.39 | < .001
Residual | 3.845e+6 | 151 | 25464
Note. Exp = Experience. The Exp effect and its residual are between subjects; all other effects are within subjects.

Table 5: ANOVA Summary Table for Study 2 Error Rates (Weapon vs. Control)
Effect | Sum of Squares | df | Mean Square | F | p
Cond | 0.042 | 1 | 0.042 | 2.840 | 0.095
Residual | 1.704 | 115 | 0.015
Race | 0.017 | 1 | 0.017 | 1.762 | 0.187
Residual | 1.097 | 115 | 0.010
Object | 0.317 | 1 | 0.317 | 9.023 | 0.003
Residual | 4.040 | 115 | 0.035
Cond × Race | 0.038 | 1 | 0.038 | 4.751 | 0.031
Residual | 0.923 | 115 | 0.008
Cond × Object | 0.022 | 1 | 0.022 | 2.388 | 0.125
Residual | 1.074 | 115 | 0.009
Race × Object | 0.278 | 1 | 0.278 | 33.288 | < .001
Residual | 0.959 | 115 | 0.008
Cond × Race × Object | 0.042 | 1 | 0.042 | 5.085 | 0.026
Residual | 0.952 | 115 | 0.008
Note. Cond = Condition (no information, weapon information).

Table 6: ANOVA Summary Table for Study 2 Error Rates (Race vs. Control)
Effect | Sum of Squares | df | Mean Square | F | p
Cond | 0.070 | 1 | 0.070 | 4.046 | 0.047
Residual | 1.985 | 115 | 0.017
Race | 0.090 | 1 | 0.090 | 9.600 | 0.002
Residual | 1.081 | 115 | 0.009
Object | 0.306 | 1 | 0.306 | 8.116 | 0.005
Residual | 4.335 | 115 | 0.038
Cond × Race | 6.061e-4 | 1 | 6.061e-4 | 0.063 | 0.802
Residual | 1.100 | 115 | 0.010
Cond × Object | 0.025 | 1 | 0.025 | 1.883 | 0.173
Residual | 1.548 | 115 | 0.013
Race × Object | 0.343 | 1 | 0.343 | 37.290 | < .001
Residual | 1.059 | 115 | 0.009
Cond × Race × Object | 0.021 | 1 | 0.021 | 2.074 | 0.153
Residual | 1.183 | 115 | 0.010
Note. Cond = Condition (no information, weapon information).

Table 7: ANOVA Summary Table for Study 2 Error Rates (Weapon Condition)
Effect | Sum of Squares | df | Mean Square | F | p
Info | 0.030 | 1 | 0.030 | 1.008 | 0.318
Residual | 3.377 | 115 | 0.029
Race | 5.280e-4 | 1 | 5.280e-4 | 0.021 | 0.885
Residual | 2.899 | 115 | 0.025
Object | 0.109 | 1 | 0.109 | 1.937 | 0.167
Residual | 6.451 | 115 | 0.056
Info × Race | 0.009 | 1 | 0.009 | 0.381 | 0.538
Residual | 2.808 | 115 | 0.024
Info × Object | 7.742 | 1 | 7.742 | 99.885 | < .001
Residual | 8.913 | 115 | 0.078
Race × Object | 0.142 | 1 | 0.142 | 5.778 | 0.018
Residual | 2.817 | 115 | 0.024
Info × Race × Object | 0.006 | 1 | 0.006 | 0.205 | 0.652
Residual | 3.202 | 115 | 0.028
Note. Info = Information (unarmed, armed).

Table 8: ANOVA Summary Table for Study 2 Error Rates (Race Condition)
Effect | Sum of Squares | df | Mean Square | F | p
Info | 0.013 | 1 | 0.013 | 0.539 | 0.465
Residual | 2.771 | 115 | 0.024
Race | 0.111 | 1 | 0.111 | 4.043 | 0.047
Residual | 3.145 | 115 | 0.027
Object | 0.047 | 1 | 0.047 | 0.777 | 0.380
Residual | 7.015 | 115 | 0.061
Info × Race | 0.005 | 1 | 0.005 | 0.220 | 0.640
Residual | 2.805 | 115 | 0.024
Info × Object | 0.005 | 1 | 0.005 | 0.204 | 0.652
Residual | 3.071 | 115 | 0.027
Race × Object | 0.228 | 1 | 0.228 | 8.116 | 0.005
Residual | 3.233 | 115 | 0.028
Info × Race × Object | 0.122 | 1 | 0.122 | 3.998 | 0.048
Residual | 3.502 | 115 | 0.030
Note. Info = Information (White, Black).
Table 9: ANOVA Summary Table for Study 2 Response Times (Weapon vs. Control)
Effect | Sum of Squares | df | Mean Square | F | p
Cond | 40585 | 1 | 40585 | 2.815 | 0.096
Residual | 1.658e+6 | 115 | 14415
Race | 1779 | 1 | 1779 | 0.112 | 0.739
Residual | 1.831e+6 | 115 | 15918
Object | 773908 | 1 | 773908 | 70.985 | < .001
Residual | 1.254e+6 | 115 | 10902
Cond × Race | 4110 | 1 | 4110 | 0.336 | 0.563
Residual | 1.408e+6 | 115 | 12245
Cond × Object | 15881 | 1 | 15881 | 1.206 | 0.274
Residual | 1.514e+6 | 115 | 13168
Race × Object | 24017 | 1 | 24017 | 2.689 | 0.104
Residual | 1.027e+6 | 115 | 8933
Cond × Race × Object | 26393 | 1 | 26393 | 2.207 | 0.140
Residual | 1.375e+6 | 115 | 11959
Note. Cond = Condition (no information, weapon information).

Table 10: ANOVA Summary Table for Study 2 Response Times (Race vs. Control)
Effect | Sum of Squares | df | Mean Square | F | p
Cond | 200.2 | 1 | 200.2 | 0.016 | 0.900
Residual | 1.450e+6 | 115 | 12609.4
Race | 10780.8 | 1 | 10780.8 | 1.248 | 0.266
Residual | 993798.1 | 115 | 8641.7
Object | 501595.5 | 1 | 501595.5 | 29.596 | < .001
Residual | 1.949e+6 | 115 | 16948.2
Cond × Race | 6708.0 | 1 | 6708.0 | 0.515 | 0.474
Residual | 1.497e+6 | 115 | 13021.6
Cond × Object | 2067.1 | 1 | 2067.1 | 0.181 | 0.671
Residual | 1.313e+6 | 115 | 11417.6
Race × Object | 8143.3 | 1 | 8143.3 | 0.665 | 0.416
Residual | 1.408e+6 | 115 | 12243.6
Cond × Race × Object | 9550.1 | 1 | 9550.1 | 0.540 | 0.464
Residual | 2.033e+6 | 115 | 17677.9
Note. Cond = Condition (no information, weapon information).

Table 11: ANOVA Summary Table for Study 4 Error Rates
Effect | Sum of Squares | df | Mean Square | F | p
Race | 0.046 | 1 | 0.046 | 11.364 | 0.001
Residual | 0.401 | 100 | 0.004
Object | 0.007 | 1 | 0.007 | 0.278 | 0.599
Residual | 2.584 | 100 | 0.026
Bias | 0.061 | 1 | 0.061 | 5.432 | 0.022
Residual | 1.120 | 100 | 0.011
Race × Object | 0.262 | 1 | 0.262 | 48.818 | < .001
Residual | 0.537 | 100 | 0.005
Race × Bias | 0.001 | 1 | 0.001 | 0.186 | 0.667
Residual | 0.527 | 100 | 0.005
Object × Bias | 7.124 | 1 | 7.124 | 126.579 | < .001
Residual | 5.628 | 100 | 0.056
Race × Object × Bias | 0.002 | 1 | 0.002 | 0.461 | 0.499
Residual | 0.490 | 100 | 0.005

Table 12: ANOVA Summary Table for Study 4 Response Times
Effect | Sum of Squares | df | Mean Square | F | p
Race | 37.04 | 1 | 37.04 | 0.107 | 0.744
Residual | 34506.08 | 100 | 345.06
Object | 196079.91 | 1 | 196079.91 | 258.658 | < .001
Residual | 75806.71 | 100 | 758.07
Bias | 4104.01 | 1 | 4104.01 | 0.780 | 0.379
Residual | 526138.11 | 100 | 5261.38
Race × Object | 12.63 | 1 | 12.63 | 0.039 | 0.843
Residual | 31964.00 | 100 | 319.64
Race × Bias | 938.91 | 1 | 938.91 | 3.519 | 0.064
Residual | 26680.21 | 100 | 266.80
Object × Bias | 9002.24 | 1 | 9002.24 | 20.494 | < .001
Residual | 43926.39 | 100 | 439.26
Race × Object × Bias | 133.96 | 1 | 133.96 | 0.398 | 0.530
Residual | 33667.66 | 100 | 336.68

APPENDIX B: DDM Effects Tables

Table 13: Summary of Effects on Condition Level Threshold for Study 1 Students
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | 0.008 | -0.024 | 0.041
Info | 0.057 | 0.025 | 0.089
Race × Info | 0.020 | -0.026 | 0.082

Table 14: Summary of Effects on Condition Level Start Point for Study 1 Students
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | .009 | -.003 | .022
Info | .070 | .057 | .083
Race × Info | -.003 | -.030 | .022

Table 15: Summary of Effects on Condition Level Non-decision Time for Study 1 Students
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | 0 | -12 | 11
Info | -2 | -14 | 9
Object | 12 | 0 | 23
Race × Info | 1 | -22 | 25
Race × Object | 4 | -19 | 28
Info × Object | -42 | -65 | -18
Race × Info × Object | 1 | -46 | 48

Table 16: Summary of Effects on Condition Level Drift Rate for Study 1 Students
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | 0.04 | -0.10 | 0.16
Info | -0.03 | -0.18 | 0.09
Object | 0.29 | 0.17 | 0.43
Race × Info | 0.07 | -0.17 | 0.35
Race × Object | 0.17 | -0.09 | 0.46
Info × Object | 1.40 | 1.11 | 1.66
Race × Info × Object | -0.03 | -0.59 | 0.50

Table 17: Summary of Effects on Condition Level Threshold for Study 1 Officers
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | -0.007 | -0.49 | 0.039
Info | 0.102 | 0.059 | 0.146
Race × Info | 0.016 | -0.073 | 0.102
Table 18: Summary of Effects on Condition Level Start Point for Study 1 Officers
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | -.001 | -.024 | .023
Info | .072 | .045 | .072
Race × Info | .007 | -.041 | .054

Table 19: Summary of Effects on Condition Level Non-Decision Time for Study 1 Officers
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | 5 | -5 | 16
Info | -5 | -17 | 4
Object | 32 | 22 | 42
Race × Info | -6 | -25 | 16
Race × Object | -8 | -27 | 14
Info × Object | -39 | -60 | -19
Race × Info × Object | -7 | -47 | 35

Table 20: Summary of Effects on Condition Level Drift Rate for Study 1 Officers
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | 0.03 | -0.17 | 0.19
Info | 0.00 | -0.18 | 0.18
Object | 0.59 | 0.40 | 0.80
Race × Info | -0.01 | -0.36 | 0.37
Race × Object | 0.12 | -0.29 | 0.51
Info × Object | 0.96 | 0.54 | 1.35
Race × Info × Object | 0.17 | -0.53 | 1.07

Table 21: Summary of Effects on Condition Level Threshold for Study 4
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | -0.003 | -0.032 | 0.023
Payoff | 0.012 | -0.017 | 0.028
Race × Payoff | 0.026 | -0.031 | 0.079

Table 22: Summary of Effects on Condition Level Start Point for Study 4
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | .011 | -.003 | .023
Payoff | -.037 | -.050 | -.024
Race × Payoff | -.009 | -.033 | .018

Table 23: Summary of Effects on Condition Level Non-Decision Time for Study 4
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | 2 | -9 | 13
Payoff | -8 | -19 | 3
Object | 4 | -4 | 17
Race × Payoff | -12 | -34 | 9
Race × Object | -5 | -25 | 18
Payoff × Object | -6 | -26 | 18
Race × Payoff × Object | -17 | -59 | 27

Table 24: Summary of Effects on Condition Level Drift Rate for Study 4
Factor | Mode | 95% HDI Lower | 95% HDI Upper
Race | 0.07 | -0.05 | 0.20
Payoff | -0.18 | -0.30 | -0.05
Object | -0.04 | -0.17 | 0.09
Race × Payoff | -0.07 | -0.31 | 0.18
Race × Object | 0.46 | 0.21 | 0.72
Payoff × Object | 1.75 | 1.47 | 2.00
Race × Payoff × Object | -0.10 | -0.62 | 0.44

APPENDIX C: Hierarchical Drift Diffusion Model

Figure 11: Generic diagram of the hierarchical drift diffusion model. The kth response for subject j within condition i is generated by a drift diffusion process. Vertical lines on the normal distributions indicate that the priors were truncated normals. Prec = precision.
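Read alongside the JAGS code in Appendix D, the structure sketched in Figure 11 can be restated in distributional form (this restatement simply transcribes the code that follows; the dwiener likelihood it relies on is supplied by the JAGS Wiener module):

y_{i,j,k} ~ Wiener(α_{i,j}, τ_{i,j}, β_{i,j}, δ_{i,j})
α_{i,j} ~ Normal(µ_{α,i}, prec_α), truncated to (.1, 5)
β_{i,j} ~ Normal(µ_{β,i}, prec_β), truncated to (.1, .9)
τ_{i,j} ~ Normal(µ_{τ,i}, prec_τ), truncated to (.0001, 1)
δ_{i,j} ~ Normal(µ_{δ,i}, prec_δ), truncated to (-5, 5)
µ_{·,i} ~ Uniform over the same ranges; prec_· ~ Gamma(.001, .001)

Here i indexes conditions, j subjects, and k trials; α is the threshold (boundary separation), β the relative start point, τ the non-decision time, and δ the drift rate. In the Study 1 model the condition-level means and precisions are additionally indexed by experience group (students versus officers).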
APPENDIX D: JAGS CODE

JAGS Code for Model Used in Study 1

model {
  # likelihood function: each trial's signed response time follows a Wiener
  # first-passage time distribution (dwiener, from the JAGS Wiener module),
  # with boundary separation alpha, non-decision time tau, relative start
  # point beta, and drift rate delta
  for (t in 1:nTrials) {
    y[t] ~ dwiener(alpha[Cond1[t], subject[t]], tau[Cond2[t], subject[t]],
                   beta[Cond1[t], subject[t]], delta[Cond2[t], subject[t]])
  }
  # subject-level parameters, drawn from truncated normals whose means and
  # precisions depend on condition and on experience group (BCon)
  for (s in 1:nSubjects) {
    for (c1 in 1:nCond1) {
      alpha[c1, s] ~ dnorm(muAlpha[c1, BCon[s]], precAlpha[BCon[s]]) T(.1, 5)
      beta[c1, s] ~ dnorm(muBeta[c1, BCon[s]], precBeta[BCon[s]]) T(.1, .9)
    }
    for (c2 in 1:nCond2) {
      tau[c2, s] ~ dnorm(muTau[c2, BCon[s]], precTau[BCon[s]]) T(.0001, 1)
      delta[c2, s] ~ dnorm(muDelta[c2, BCon[s]], precDelta[BCon[s]]) T(-5, 5)
    }
  }
  # priors
  for (b in 1:nBCon) {
    precAlpha[b] ~ dgamma(.001, .001)
    precBeta[b] ~ dgamma(.001, .001)
    precTau[b] ~ dgamma(.001, .001)
    precDelta[b] ~ dgamma(.001, .001)
    for (c1 in 1:nCond1) {
      muAlpha[c1, b] ~ dunif(.1, 5)
      muBeta[c1, b] ~ dunif(.1, .9)
    }
    for (c2 in 1:nCond2) {
      muTau[c2, b] ~ dunif(.0001, 1)
      muDelta[c2, b] ~ dunif(-5, 5)
    }
  }
}

JAGS Code for Model Used in Studies 2 – 4

model {
  # likelihood function
  for (t in 1:nTrials) {
    y[t] ~ dwiener(alpha[Cond1[t], subject[t]], tau[Cond2[t], subject[t]],
                   beta[Cond1[t], subject[t]], delta[Cond2[t], subject[t]])
  }
  # subject-level parameters (no experience grouping in these studies)
  for (s in 1:nSubjects) {
    for (c1 in 1:nCond1) {
      alpha[c1, s] ~ dnorm(muAlpha[c1], precAlpha) T(.1, 5)
      beta[c1, s] ~ dnorm(muBeta[c1], precBeta) T(.1, .9)
    }
    for (c2 in 1:nCond2) {
      tau[c2, s] ~ dnorm(muTau[c2], precTau) T(.0001, 1)
      delta[c2, s] ~ dnorm(muDelta[c2], precDelta) T(-5, 5)
    }
  }
  # priors
  for (c1 in 1:nCond1) {
    muAlpha[c1] ~ dunif(.1, 5)
    muBeta[c1] ~ dunif(.1, .9)
  }
  for (c2 in 1:nCond2) {
    muTau[c2] ~ dunif(.0001, 1)
    muDelta[c2] ~ dunif(-5, 5)
  }
  precAlpha ~ dgamma(.001, .001)
  precBeta ~ dgamma(.001, .001)
  precTau ~ dgamma(.001, .001)
  precDelta ~ dgamma(.001, .001)
}

APPENDIX E: Posterior Predictions

I tested how well the DDM predicted the data by examining the degree to which the model predictions corresponded with observed choice probabilities, response times, and response time distributions. This was done for each condition in Studies 1, 2, and 4. I used JAGS to predict decision and response time data from the DDM using the posterior condition-level distributions. Decision and response time data were generated for each trial at each step in the chain (6,000 steps). This generates an extremely large amount of data (e.g., Study 3: 320 trials × 102 participants × 6,000 sampled values). The data were summarized to the condition level because 1) the study analyses were on condition-level effects, and 2) there was not enough data at the individual level to accurately estimate response time distributions. I followed the procedures from Pleskac et al. (2017) to summarize the choice probabilities, response times, and response time distributions. For the choice probabilities, I plotted observed and model-predicted means for each condition and response type. These plots were overlaid with the mean performance for each individual to show the spread of the data. In all studies there was less performance variability in conditions where the dispatch information was correct or the point structure favored the response (e.g., shooting armed targets when shooting was encouraged), and the DDM captured this change. In general the DDM recreated the student data well but performed more poorly for police. This was particularly evident in conditions with less data (the no information, armed target condition and the armed information, unarmed target condition). One noteworthy prediction miss is the overestimate of false alarms in the Study 2 no information conditions.
However, this miss was not replicated in Study 1 or 4, nor did it bias predicted response times, so I did not adjust the model to account for it. I also examined the predicted response times for all conditions and all response types. Generally, the model did a good job of predicting the spread of the data, although in Study 1 the model systematically predicted slower correct rejections for students and faster correct rejections for officers. One way to improve these fits would be to add trial-by-trial variability in the parameters. However, the deviance information criterion values for those models indicate that although they fit better, the improvement comes at a cost in parsimony. It is possible that these models may actually overfit the data. Finally, I examined the degree of correspondence between the observed and predicted response time distributions for each condition in each study. These comparisons allowed for a more detailed examination of whether the predicted response time distribution matched the shape of the observed distribution, rather than focusing only on the mean and variability in the data. In Study 1, despite coming from two different populations, patterns of response times and misfits were extremely similar across students and officers. The model did a reasonable job recreating the data, although it consistently predicted more leptokurtic response time data for correct decisions when correct dispatch information was provided. The model also underestimated the degree of positive skew for misses when no dispatch information was provided. Study 2 showed high correspondence between the observed and predicted response time distributions. Finally, in Study 4, the model reasonably recreated the data, although it consistently predicted more leptokurtic response time data when decisions were correct (i.e., not shooting unarmed targets, shooting armed targets).
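As a rough sketch of this posterior predictive procedure (illustrative Python; simplified to a single condition, with hand-picked stand-ins for the condition-level posterior draws rather than the actual JAGS output), one can simulate trials from each sampled parameter set and summarize the predicted choice proportions and response time quantiles for comparison with the observed data:

import numpy as np

rng = np.random.default_rng(2)

def simulate_trials(delta, alpha, beta, tau, n_trials=500, dt=0.001):
    # Generate choices (1 = shoot) and response times (s) from a simple
    # diffusion process with drift delta, boundary alpha, relative start
    # point beta, and non-decision time tau.
    choices, rts = np.empty(n_trials, dtype=int), np.empty(n_trials)
    for t in range(n_trials):
        x, elapsed = beta * alpha, 0.0
        while 0.0 < x < alpha:
            x += delta * dt + np.sqrt(dt) * rng.standard_normal()
            elapsed += dt
        choices[t], rts[t] = int(x >= alpha), elapsed + tau
    return choices, rts

# Hypothetical condition-level draws of (alpha, beta, tau, delta); in the
# actual analysis these would be the sampled posterior values from JAGS.
posterior_draws = [(1.00, 0.52, 0.30, 1.2),
                   (1.10, 0.50, 0.31, 1.0),
                   (0.95, 0.53, 0.29, 1.3)]

pred_shoot, pred_rt_quantiles = [], []
for alpha, beta, tau, delta in posterior_draws:
    choices, rts = simulate_trials(delta, alpha, beta, tau)
    pred_shoot.append(choices.mean())
    pred_rt_quantiles.append(np.quantile(rts[choices == 1], [0.1, 0.5, 0.9]))

print("Predicted P(shoot):", round(float(np.mean(pred_shoot)), 3))
print("Predicted shoot RT quantiles (s):", np.round(np.mean(pred_rt_quantiles, axis=0), 3))
# These condition-level summaries are what get overlaid on the observed
# choice proportions and response time distributions in Figures 12-16.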
Figure 12: Posterior predictions of hit and false alarm rates for Study 1. Squares represent observed condition level choice proportions. Diamonds represent predicted condition level choice proportions. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores.
W = White, B = Black
Figure 13: Posterior predictions of response times for Study 1. Squares represent observed condition level choice proportions. Diamonds represent predicted condition level choice proportions. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. W = White, B = Black

Figure 14: Observed (black) and predicted (gray) response time distributions for each response type at the condition level for students in Study 1.
Figure 15: Observed (black) and predicted (gray) response time distributions for each response type at the condition level for officers in Study 1.
Figure 16: Posterior predictions of hit and false alarm rates for Study 2. Squares represent observed condition level choice proportions. Diamonds represent predicted condition level choice proportions. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. W = White, B = Black
●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ●● ●● ● ●● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ●● ● ● ● ●●● ●● ●● ● ● ● ● ● ● ●● ● ● ● ● ●● ●● ● ● ● ● ● ●●● ●● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ●● ● ● ●● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ●●● ●● ● ●● ● ● ●● ●● ● ● ● ● ● ● on e U No W na ne r U me B na d rm W Ar ed m B e Ar d W m e W dB hi te W W hi Bl te B ac k Bl W ac k B ● ● ● ● ● ● N N on e U No W na ne r U me B na d rm W Ar ed m B e Ar d W m e W dB hi te W W hi Bl te B ac k Bl W ac k B ● ● ● ● ● ● ● ● ● ● ● 900 ● 900 ● ● ● ● ● ● ● Misses (ms) ● 500 400 ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ●● ●●● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ●●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ●● ● ● ●● ●● ●● ● ● ● ● ● ● 600 ● ● ● 700 ● ● ● ●● ●● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ●● ● ● ● ●● ● ● ● ● ●● ● ●● ●● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ●● ●● ● ● ● False Alarms (ms) 800 ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ●● ● ● ● ●● ● ● ● ●● ● ●● ● ●● ● ● ● ●●● ●● ●● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●●● ● ●●● ● ● ●● ● ● ●● ● ● ●● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ●● ●● ● ● ● ●● ● ●● ● ●● ● ● ● ● ● ●● ● ● ●● ● ● ●● ● ● ●● ● ● ● ●● ● ●● ● ●● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ●● ● ● ● ● ●● ● ● ● ● ●● ● ● ●● ●● ● ● ●● ● ● ● ● ●● ● ● ● ● ● ●●● ●● ● ●● ● ● ● ● ● ● ● ●●● ● ● ● ● ●● ● ● ● ● ● ●●● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ●● ●● ● ●● ● ● ● ● ● ● ●● ● ●● ● ●● ● ● ● ●● ●● ●● ●● ● ●●● ● ●● ● ● ● ● ●● ● ● ● ● ● ● ● ●●● ● ●● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ●● ● ●● ● ● ● ●● ● ● ●● ●● ● ●● ● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ●● ●● 800 300 ● ● ● ● ●● 600 500 400 ● ●● ● ●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ●● ● ●●● ● ● ● ● ● ●● ●● ●●● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ● ● ●● ● ●● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ●● ● ●● ● ●●● ● ● ● ● ● ●● ● ● ● ● ● ●●● ● ● ● ● ● ● ● ●● ● ● ● ● ●● ●● ●● ● ● ● ● ● ●● ●● ● ● ● ● ● ● 300 ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●●● ●● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ●● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ●● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ● ● ●● ●● ●● ●● ● ● ●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 700 ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ●● ●● ● ● ● ● ● ● ● ● ● ●● ● ● ●●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●●● ● ● ● ●● ● ●● ●● ● ● ● ● ● ●● ●● ● ●● ● ●●● ● ● ●● ● ●● ● ●● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●●● ● ●●● ●● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ●● ● ● ● ● ●● ● ●● ●● ● ● ● 
[Figure 17 panels: response times in ms for hits, correct rejections, misses, and false alarms, plotted by dispatch information (None, Unarmed, Armed, White, Black) and suspect race (W, B)]
Figure 17: Posterior predictions of response times for Study 2. Squares represent observed condition-level response times. Diamonds represent predicted condition-level response times. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. W = White, B = Black.
[Figure 18 panels: response time densities (observed vs. model) for the shoot response in each suspect race × dispatch information × object condition]
Figure 18: Observed (black) and predicted (gray) response time distributions for the shoot response at the condition level for students in Study 2.
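To make the logic of these posterior predictive plots concrete, the sketch below simulates choices and response times from a simple two-boundary diffusion process. This is an illustration only, not the code used for the analyses reported here: the function simulate_ddm and all parameter values are hypothetical placeholders, whereas the figures above come from drawing parameters from the fitted hierarchical model's posterior for each participant and condition.

    # Minimal sketch (assumed, illustrative code): Euler simulation of a
    # two-boundary drift diffusion process. Parameter values are arbitrary
    # placeholders, not fitted estimates from these studies.
    import numpy as np

    def simulate_ddm(n_trials, drift, boundary, start_frac, ndt,
                     dt=0.001, noise=1.0, seed=0):
        """Return (choices, rts): choice 1 = upper boundary ("shoot"),
        choice 0 = lower boundary ("don't shoot"); rts in seconds."""
        rng = np.random.default_rng(seed)
        choices = np.empty(n_trials, dtype=int)
        rts = np.empty(n_trials)
        for i in range(n_trials):
            x = start_frac * boundary        # relative starting point (bias)
            t = 0.0
            while 0.0 < x < boundary:
                # accumulate noisy evidence in small time steps
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            choices[i] = int(x >= boundary)
            rts[i] = t + ndt                 # add non-decision time
        return choices, rts

    # Purely illustrative values: a gun-present condition represented by a
    # drift rate pushing evidence toward the "shoot" boundary.
    choices, rts = simulate_ddm(n_trials=2000, drift=1.5, boundary=1.2,
                                start_frac=0.5, ndt=0.35)
    print(f"P(shoot) = {choices.mean():.2f}, "
          f"median RT = {1000 * np.median(rts):.0f} ms")

Overlaying histograms of simulated response times, split by simulated choice, on the observed distributions yields condition-level checks of the kind shown in Figures 18 and 19, and aggregating the simulated choices gives predicted hit and false alarm rates of the kind shown in Figure 16.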
[Figure 19 panels: response time densities (observed vs. model) for the don't shoot response in each suspect race × dispatch information × object condition]
Figure 19: Observed (black) and predicted (gray) response time distributions for the don't shoot response at the condition level for students in Study 2.
[Figure 20 panels: hit and false alarm rates by suspect race and payoff condition (White DS, Black DS, White S, Black S)]
Figure 20: Posterior predictions of hit and false alarm rates for Study 4. Squares represent observed condition-level choice proportions. Diamonds represent predicted condition-level choice proportions. Blue dots represent individual participant choice proportions and have been jittered to better show the distribution of scores. DS = payoff favors not shooting, S = payoff favors shooting.
[Figure 21 panels: response times in ms for hits, correct rejections, misses, and false alarms by suspect race and payoff condition]
Figure 21: Posterior predictions of response times for Study 4. Squares represent observed condition-level response times. Diamonds represent predicted condition-level response times. Blue dots represent individual participant response times and have been jittered to better show the distribution of scores. DS = payoff favors not shooting, S = payoff favors shooting.
[Figure 22 panels: response time densities (observed vs. model) for shoot and don't shoot responses in each suspect race × payoff × object condition]
Figure 22: Observed (black) and predicted (gray) response time distributions for each response type at the condition level for Study 4.