THE ANCHORING EFFECT: A META-ANALYSIS

By

Clint Townson

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Communication – Doctor of Philosophy

2019

ABSTRACT

THE ANCHORING EFFECT: A META-ANALYSIS

By

Clint Townson

This dissertation proposed and undertook a meta-analysis of the anchoring effect, in which a numeric suggestion is used to guide subsequent judgments of a target question. Anchoring is thought to operate through a selective accessibility function, such that one who receives an anchor is primed to access specific information consistent with the number, which in turn pulls a final judgment closer to the anchoring point (Strack & Mussweiler, 1997). This phenomenon is of particular interest in the field of law, where anchors are frequently used to guide sentencing decisions and influence damage awards. Anchoring is frequently cited as a robust and strong effect, but the extent of this potency is unclear. Additionally, a number of moderators (chiefly, expertise and anchor extremity) have been hypothesized, but the true impact of these factors is obscured. Thus, a meta-analysis was conducted, using Schmidt and Hunter's (2014) variance-centric approach. A total of eighty-four effect sizes (Pearson's r) were calculated from the literature, and corrections for artifacts were made where possible. The resulting mean weighted effect size among all included anchoring studies was r = .401, with a corrected correlation estimate of .558. Initial results indicated greater variance than would be expected from sampling error alone, so a number of moderators were evaluated. Expertise was found to be an aggravating moderator; rather than mitigating the effect, having domain-relevant knowledge exacerbated it. Extremity of the anchor was found to slightly weaken the impact. The studies involving a law context (k = 34) were evaluated in a separate analysis, resulting in a mean weighted effect size of r = .360. Additional moderator analyses were conducted, with expertise following a similar pattern as in the wider collection of studies. The meaningfulness of the anchor proved to significantly strengthen the effect. The implications of these results are discussed, specifically as they may apply in the courtroom. Future research directions and optimized practices are also discussed. Finally, the limitations of this meta-analytic approach are acknowledged, several of which point back toward optimal practices for this line of research.

ACKNOWLEDGEMENTS

The first and biggest thank you goes to my parents and close family; without their support, I would not have come to Michigan State, would not have pursued graduate study, and would not have pushed to the finish. I love you and dedicate this dissertation to you. The next major thank you goes to the professor who has had the most profound impact on my academic study and my future aspirations, Dr. Frank Boster. This is the culmination of all you have taught me, and any shred of insight I generate in this dissertation is due to your continued stimulation and challenging of my intellect. There is of course a secondary but equally vital collection of professors who have taught me a great deal. A special thanks goes to my doctoral committee, for your countless comments and contributions. I also must express the fondest appreciation for the wonderful group of fellow graduate students at Michigan State, who keep me humbled and grounded.
Thank you to my idiot friends, my com-squad gal pals, and everyone I've ever shared a tailgate beverage with or sung karaoke in front of. Beyond this group, thank you to my four brothers back home, who will never read this. But I would not be able to accomplish things like this without you guys to pull me away from it from time to time. There are undoubtedly others I will forget to thank, but please know that is a product of my head and not my heart. My work is only as good as all of those who have helped me along the way, so know that I am grateful for even the smallest contributions.

Last and most definitely least, thank you to anyone who tried to obstruct me along my way. Proving you wrong will always be a source of motivation.

TABLE OF CONTENTS

LIST OF TABLES
Introduction
Anchoring Effect
    Anchoring in the Law Domain
Method
    Selection Criteria
    Analytic Approach and Artifacts
Results
    Results Specific to the Domain of Law
Discussion
    Conclusions for Anchoring at Large
    Conclusions for Law Domain
    Limitations of the Research
    Implications/Applications of the Findings
    Implications/Applications of the Findings for Law
    Final Conclusions
REFERENCES

LIST OF TABLES

Table 1. Summary of Studies Included in Analysis

Introduction

The anchoring effect is a common and formidable tool used in negotiations and law contexts. Researchers in psychology have spent many years generating experimental data to show the power of impressing an anchoring point (typically, a numerical suggestion) upon a subject, who then provides a biased judgment shifted toward the anchor (Jacowitz & Kahneman, 1995).
Many have leveraged this technique in law contexts: attorneys who argue for a specific award amount in a civil case may find that jurors' judgments are invariably swayed in the direction of their suggestions. There is a body of literature suggesting this approach can be very effective, although the precise magnitude of the effect remains elusive.

Meta-analysis is a useful tool for social scientists, allowing the accumulation and synthesis of multiple studies into one, or a few, powerful statistics (Schmidt & Hunter, 2014). There is undeniably an opportunity to utilize this tool in the field of trial consulting, where researchers conduct hundreds of focus groups, surveys, and experiments each year. Yet only a few notable topic areas have garnered meta-analytic attention. This lacuna presents an opportunity to utilize this analytical strategy.

Offered here is the execution of a meta-analysis on the effects of anchoring, with two primary goals in mind. First, it puts forth an updated view of anchoring, one which may broaden the field's understanding of the phenomenon. A meta-analysis of anchoring effects will involve a moderate quantity of studies; the existing meta-analysis on anchoring (Orr & Guthrie, 2006) did not adequately correct for experimental artifacts; and a new analysis may clarify significant law-relevant moderators.

Second, furthering the field's understanding of the anchoring effect may have notable implications. Although many attorneys already attempt to use anchoring in civil cases to pull a jury toward a desired monetary verdict, it seems reasonable that many may underestimate the power of the effect. The anchoring effect is also utilized by lawyers in negotiations, mediations, and arbitrations, making it a versatile tool throughout one's practice (Folberg & Golann, 2011). Using meta-analysis to show the strength of the effect could be very persuasive and meaningful in this field. It could also aid in the understanding of the multitude of potential moderators that have been examined with respect to the anchoring effect: how do things like anchor extremity, environmental context, and so on change the power of the effect?

With these goals in mind, the extant theory and research on the anchoring effect was reviewed. A methodological approach to this specific meta-analysis was outlined. The meta-analysis was completed, with a thorough discussion of inclusion criteria and search methods. Finally, the results are extensively discussed and implications are drawn.

Anchoring Effect

Imagine if someone asked you to estimate the weight of an African elephant in pounds. It might be a difficult estimate to make, but imagine if they then added that a Hummer H3 weighed 4700 pounds. Most people would probably mentally surmise that an elephant might weigh slightly more than that, and come to a conclusion that elephants weigh around 6000 lbs. Anchoring is generally regarded as a type of cognitive priming bias, in that the initial suggestion of a specific numerical point registers in a person's mind and frames future decision making (Furnham & Boo, 2011; Strack & Mussweiler, 1997). It is hypothesized to operate as a fundamental heuristic cue: specifically, an anchor is a cue that a person may employ in the place of cognitively-tasking calculation or rumination (Epley & Gilovich, 2010; Tversky & Kahneman, 1974).
In a way, people use the numerical suggestion as a starting point: in the above example, one might begin with the weight of the sport vehicle, and guess the target weight from that starting point (an African elephant actually weighs closer to 13,000 pounds). Strack and Mussweiler (1997) refer to this as the selective accessibility model: targets of anchoring mentally seek relevant information consistent with the anchor, which then steers their final judgment toward the anchoring point. In the above example, the target would find accessible information to confirm the anchor (an elephant is slightly larger than an SUV; a Hummer is an especially large SUV), and their subsequent estimation would reflect its impact.

One crucial piece of the above example is the uncertainty associated with the original question: most people would exhibit low confidence in their ability to guess the weight of a large, exotic animal. Perhaps this is why anchoring is thought to be so powerful in civil jury damage awards: jurors may feel extremely unsure about what 'fair' compensation for pain and suffering would be in a given case, so the anchor proves to greatly influence their final determination (Kahneman, Schkade, & Sunstein, 1998). Epley and Gilovich (2001) exhibited this phenomenon when they asked people to estimate the year in which George Washington was first elected as president. Most subjects used 1776 (the recognized year in which the U.S. declared its independence) as an approximation, then adjusted their estimate accordingly (Epley & Gilovich, 2010). In this way, people may use an anchor as a shortcut for concluding a final judgment.

An anchor may also merely pull an existing opinion toward it. Certain targets of anchoring may have well-established and well-founded attitudes. In an experiment by Englich, Mussweiler, and Strack (2006), experienced prosecutors and judges were given a rape case summary. In this instance, their preconceived notions about proper sentencing appeared to be swayed by an anchoring point. Rather than establishing a numerical starting point, these anchors served to prime a discrepancy between the experts' initial opinion and the anchor, thus moving their final opinion toward the anchor.

An alternative explanation for the functioning of anchoring is offered by Markovsky (1988), who notes that anchors may operate through either assimilation or contrast. The anchor still cognitively biases judgments, but Markovsky distinguishes instances where an anchor serves to push judgments away. He notes a classic study by Helson (1947), in which subjects who lifted a light object after lifting a heavy object estimated the second object's weight to be substantially lower than did those who lifted two of the light objects. In this way, an anchor may be utilized as a cognitive contrast.

Markovsky (1988) also contributes a succinct set of three necessary conditions for anchoring, although more recent research would seem to refute two of these. According to his research, anchoring would only occur when 1) the judgment is indeterminate (from the perspective of the judging individual), 2) the anchoring point explicitly exists, and 3) the anchor is made salient (p. 214). Accepting the second condition as intuitive, data seem to be inconsistent with the first and third conditions.
As previously noted, even experts can be susceptible to the effects of an anchor (Englich, Mussweiler, & Strack, 2006); contemporaneous research also casts doubt on his assertion, as Northcraft and Neale (1987) noted that real estate agents were equally biased by an anchor's listing price for a local property as were students. It seems inconceivable that either attorneys/judges or real estate agents would describe a given sentence or price listing, respectively, as 'indeterminate'.

As to Markovsky's (1988) third condition, the salience of an anchor is reliant on its explicitness and its lack of extremity (p. 214). The findings of Mussweiler and Englich (2005) suggest that even anchors presented outside of the judges' awareness still caused an assimilation effect. In that case, an anchoring point for estimating Germany's annual temperature was subliminally offered on a computer screen, and these stimuli still biased subsequent judgments. Additionally, Mussweiler and Strack (2001) have shown that implausible anchors still have an impact on final estimates, often even stronger than less extreme versions.

Overall, the anchoring effect appears to be a robust psychological phenomenon. It is a heuristic cue utilized by many in situations where one is asked to make a numerical estimation, and the effect appears to work consistently. Other factors appear to have a minimal impact on anchoring: increased cognitive ability only slightly decreases its effectiveness (Bergman et al., 2009); incentive to be accurate and forewarning of an anchor do not mitigate the strength of the anchor (Wilson et al., 1996); and the effect has been seen across many domains (Furnham & Boo, 2011). Anchoring in legal contexts is of particular interest here, as the outcomes are practical and vital, as opposed to those studies which view anchoring in estimating general knowledge (like the gestation period of an elephant in Epley & Gilovich, 2001).

Anchoring in the Law Domain

The utility of anchors exists in many domains, but perhaps none harnesses the power of anchoring like the law context. One notable context in which attorneys might use anchoring is in a negotiation. Anchoring has proved powerful in a variety of negotiation settings, from real estate to contracts (Folberg & Golann, 2011). In a negotiation, parties may use anchors to leverage their position and pull an eventual compromise in their direction. It also makes sense that anchors might be functional for attorneys, as they appear to be effective even when the target is an experienced negotiator (Orr & Guthrie, 2006).

Anchoring is not entirely different from sequential request compliance-gaining strategies like foot-in-the-door (FITD) or door-in-the-face (DITF), which also require the expression of an initial request followed by a notably distinct second request (Dillard, Hunter, & Burgoon, 1984). The discrepancy between the two strategies is twofold: for one, FITD and DITF are predicated on the assumption that the target will either accept the small request or reject the extreme request, respectively. Anchoring makes no such assumption, as there may be cases where the judgment aligns precisely with the anchoring point (Hinsz & Indahl, 1995). Secondly, FITD and DITF are theorized by some to operate via self-perception and reciprocation causal forces, respectively (Dillard, Hunter, & Burgoon, 1984). Anchoring in negotiation may involve an element of reciprocation, but it is cognitively driven on the basis of the priming bias.
In addition to its use in private negotiations, anchoring has worked its way into practice during trials. Moreover, its use in both criminal and civil cases has proven to be as powerful as in other areas like general knowledge and consumer pricing. Prosecutors may find they have the advantage: by speaking first in their opening statement, they are able to anchor first in either a criminal or civil proceeding. In a criminal case, a prosecutor's argument for a given sentence may not only impact a judge's final decision but may even bias the defense attorney's own sentencing suggestion (Englich, 2006). This effect once again shows the power of an anchor, such that even a litigator's expertise does not ultimately mitigate the effect. In a civil case, anchors can markedly alter juries' beliefs about damages, particularly punitive damages, which tend to be open to much more interpretation than compensatory or economic damages (Hastie, Schkade, & Payne, 1999; Reyna et al., 2015; Robbennolt & Studebaker, 1999). Even caps, which are intended to curb extreme awards, can end up inflating the final awards decided upon by juries by inadvertently serving as an anchoring point (Robbennolt & Studebaker, 1999).

Researchers have used both arbitrary and intentioned anchors in experiments. For example, the initial research on anchoring spun a 'wheel of fortune' to randomly determine an anchoring point (Tversky & Kahneman, 1974), whereas more recent research used current temperature as an anchor in perceptions of global warming (Joireman, Truelove, & Duell, 2010). Others critically determine 'high' and 'low' anchors related to a given target judgment, which elicits a greater contrast than a single anchor versus a control (Cervone & Peake, 1986). The effect seems to be quite powerful regardless of the nature of the anchor itself. This body of work has illustrated that the influence of anchors cuts in both directions: a low anchor will move judgments down toward the anchor point in the same way that a high anchor moves judgments up (Jacowitz & Kahneman, 1995).

The nature of the anchor may also impact its final influence. Specific to the legal profession, the numerical value of the anchor may be meaningful: the specific suggestion may be backed by specific evidence, previous rulings, expert testimony, or mathematical calculation (Hans, Helm, Reyna, & Hall, 2018; Raitz, Greene, Goodman, & Loftus, 1990). Although considerable research has established that even large awards will anchor a judge's assessments (Chapman & Bornstein, 1996), more recent research has found that pairing these large anchors with context-relevant meaning will fortify the effect, e.g., if an anchor is provided with justification for the dollar figure (Hans et al., 2018; Reyna et al., 2015). For instance, if an anchor in a civil case is framed as thirty years of median income, it may be a stronger anchor than one framed as the cost of renovating the courtroom (Hans et al., 2018). Thus, the environment in which an anchor is provided may alter its impact (Epley & Gilovich, 2010).

The overall impact of anchoring appears to be quite strong. It has been made clear thus far that even those with expertise or experience in a given area are as susceptible as amateurs. Myriad experiments show that anchoring significantly shapes the judgments of professionals like judges, lawyers, real estate agents, and finance professionals (Englich, 2006; Northcraft & Neale, 1987; Kaustia, Alho, & Puttonen, 2008).
A few other notable variables have been hypothesized to exert a slight influence on the power of anchoring. Among them is the language in which an anchor is presented: for instance, caps may be presented as permissive (e.g., "you may award as much as X dollars") or restrictive (e.g., "you may not award more than X dollars"). This particular feature may alter a person's perception of the anchor itself (Robbennolt & Studebaker, 1999). Presenting the anchor in terms of its magnitude may also affect its final influence. Oppenheimer, LeBoeuf, and Brewer (2008) found that when people are presented with an anchor physically larger in magnitude (for this experiment, lines drawn on a map), their subsequent estimates of later stimuli grow accordingly. This is to say that priming the idea of largeness or smallness may moderate the impact of an anchoring point.

All in all, the anchoring effect appears to be a formidable and robust psychological phenomenon, and one that is only marginally impacted by a number of moderators. Its strength as a psychological primer makes it a ripe area for meta-analysis, given the effect is generally thought to be large in size and applicable across a variety of domains. Indeed, Orr and Guthrie (2006) performed a meta-analysis on anchoring in negotiations and concluded that there is a .49 correlation between an initial anchor and the final determined outcome, a correlation that was uncorrected for artifacts. However, this meta-analysis still leaves something to be desired.

First, it is now somewhat dated: in the twelve years since its publication, there have been at least sixteen additional experimental studies on the anchoring effect. These additional studies alone would warrant a new meta-analysis, seeing as they would increase the total sample size from the original paper by 125%.

Second, Orr and Guthrie's (2006) initial sample of studies was quite small, due to several restrictions they put on the studies included in the analysis (chief among these restrictions was their inclusion of only negotiation-based anchors). Their final tally of sixteen studies does not represent the body of work in the area, although it does seem their initial goal was only to capture this effect in the context of dyadic negotiation. Anchoring is a relevant psychological factor across civil cases, criminal cases, consumer purchasing, marketing, and attitude formation. The sampling of studies could be widened and the different domains of anchoring could be noted as moderators, which would yield a more complete meta-analysis. Indeed, the effect of anchoring might be nuanced, such that it is quite small in certain domains, but larger in others.

Third, Orr and Guthrie (2006) failed to account adequately for experimental artifacts in each of the studies, which may attenuate the effect sizes reported in the meta-analysis (Schmidt & Hunter, 2014). The correlations observed in many of these experiments could be markedly attenuated due to weak inductions, which may restrict the range of relevant outcome variables. There also may be measurement error in the dependent variable, as some studies might have utilized varied approaches to determining the strength of the anchor. Orr and Guthrie note that the result they found is undeniably a large effect, but correction for these artifacts may reveal an even larger effect.
Finally, these authors only included two of the above-mentioned variables that may impact the overall power of an anchor: the environment in which the anchor is presented, and the expertise of the negotiator (Orr & Guthrie, 2006). They found results that were somewhat contrary to the literature, in that they asserted that experience does mitigate the pull of an anchor, but they do not indicate the statistical test they used to ascertain this conclusion. It appears that the contrast between the 'expert' correlation and the 'novice' correlation could be within sampling error of each other. Nonetheless, numerous studies contemporaneous with their meta-analysis document that anchoring appears to work just as effectively on experienced agents or experts as on laypersons (see Brewer et al., 2007; Englich, 2006; and Englich, Mussweiler, & Strack, 2006).

Bearing all these shortcomings in mind, it appears that a new and thorough meta-analysis on the anchoring effect is warranted. The approach used to conduct this meta-analysis is detailed below.

Method

The method of meta-analysis described by Schmidt and Hunter (2014), which focuses on the variance observed within and between studies, was utilized for conducting the meta-analysis itself. For this method, Pearson's product-moment correlation coefficient (r) must be calculated from the effect size provided in a given study. These coefficients can then be corrected for a variety of artifacts (e.g., systematic error in measurement), weighted based on sample size, and then averaged to produce a final average effect size (Schmidt & Hunter, 2014).

Studies were located via an extensive search utilizing a number of online research databases (chiefly, Michigan State University's online library), using the terms "anchoring", "adjustment", and "heuristic." Additionally, the citations of identified articles were employed in an effort to find other relevant studies. Particularly, the review article by Furnham and Boo (2011) proved useful for finding additional experiments for inclusion. A number of prominent anchoring scholars were contacted to seek out unpublished data, and two studies from Daniel Mochon were integrated into the analyses.

Selection Criteria

For inclusion in this meta-analysis, studies needed to satisfy a number of criteria. First, studies must have had the statistics necessary to calculate the relevant parameters readily available [e.g., the study by Hinsz and Indahl (1995) was excluded as it included only nonparametric analyses, without means and standard deviations]. For a number of studies, means and standard deviations had to be mathematically estimated using the approaches outlined by Hozo, Djulbegovic, and Hozo (2005) and Wang, Liu, and Tong (2014). Second, selected studies needed to be experimental in nature, such that participants were presented with a specific, numerical, high or low anchor (as an induction) and then offered a specific estimate or decision following the anchor [e.g., the study by Oppenheimer, LeBoeuf, and Brewer (2008) was excluded as its experiments utilized visual stimuli as anchors]. This also excluded studies which presented a single anchor against a control condition with no anchor (e.g., Epley & Gilovich, 2001). Third, the anchoring tasks needed to be varied between subjects, such that participants were presented with a single anchor and made a single related judgment (e.g., Strack & Mussweiler's 1997 study was excluded because anchors were varied within subjects).
Finally, these estimates needed to be individually determined; this criterion is distinct from the Orr and Guthrie (2006) meta-analysis, in which a final outcome was dyadically negotiated. This also excludes group judgments (e.g., the decision of a mock jury). Making this distinction also expands the generalization of the meta-analysis beyond negotiation and into the area of interest, which is attitude-shaping and opinion-formation (specifically in law contexts). Specifically, excluding the dyadically-determined outcomes helps to isolate the psychological impact of an anchor.

A number of other studies were excluded from the analyses for various methodological issues. Adame (2016) was excluded, as this study involved an extremely weak induction (r = .07) and the aim of the study was specifically to train participants to ignore an anchor. Mussweiler and Strack's (2001) study was excluded because their anchors varied in polarity rather than magnitude (e.g., -700 degrees Celsius vs. 900 degrees Celsius), and their study from 2000 was excluded because the target of the judgment task was ambiguous.

Many of the published articles cited involved multiple experiments, so for most of these instances, each experiment was included as a separate study and effect size. A few were excluded for lack of consistency with the above criteria. Two others were excluded as outliers (defined as falling in the top or bottom 2% of the distribution, consistent with Schmidt & Hunter's recommendations): Critcher and Gilovich's (2008) final experiment and McAuliff and Bornstein's (2010) experiment, with effect sizes of r = .02 and r = -.26, respectively (Experiment 2 from Mussweiler & Strack, 2000 was excluded as a top-2% study, with an effect size of r = .85). The McAuliff and Bornstein (2010) result is worth additional comment. They did not vary the actual anchor amount, but presented it as a per-hour, per-day, per-month, or lump-sum damage award to mock jurors. It is interesting that this resulted in a negative correlation, and it may provide some insight for how to increase the power of an anchor presented in court. According to their result, it may be powerful to translate a damage request anchor into a per-hour amount.

Analytic Approach and Artifacts

In terms of the analyses performed, the data provided within each study were used to calculate a common metric (Pearson's r). Correlations were calculated from means and standard deviations, t-tests, and F-tests. For all cases, 'high' anchors were considered to be the experimental condition, and 'low' anchors were treated as the control condition. Several studies included more than just these two conditions: for example, some included extremely high and extremely low anchors in addition to the traditional high versus low. Control conditions were left out of the analyses performed. For studies with more than two conditions, results were examined for linearity (treating the low to high anchors as a continuous variable), and if no substantial deviations from a linear pattern existed, a single effect size was calculated across conditions. For example, the Robbennolt and Studebaker (1999) study utilized three levels of anchors, and rather than calculating individual effect sizes comparing each condition, a single effect size described the magnitude of the linear pattern across the three.
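To make these conversions concrete, the sketch below shows the standard formulas for recovering r from the statistics most commonly reported in the included studies. It is a minimal illustration rather than the exact script used for this analysis, and the study values fed to it are hypothetical.

```python
import math

def r_from_t(t, df):
    """Convert an independent-samples t statistic to Pearson's r."""
    return math.sqrt(t**2 / (t**2 + df))

def r_from_f(f, df_error):
    """Convert a one-df-numerator F statistic (F = t^2) to Pearson's r."""
    return math.sqrt(f / (f + df_error))

def r_from_means(m_high, m_low, sd_high, sd_low, n_high, n_low):
    """Convert high- vs. low-anchor cell means to r via the d statistic.

    The final step (r = d / sqrt(d^2 + 4)) assumes roughly equal cell sizes.
    """
    sd_pooled = math.sqrt(((n_high - 1) * sd_high**2 + (n_low - 1) * sd_low**2)
                          / (n_high + n_low - 2))
    d = (m_high - m_low) / sd_pooled
    return d / math.sqrt(d**2 + 4)

# Hypothetical study: t(78) = 4.10 for the high vs. low anchor contrast.
print(round(r_from_t(4.10, 78), 3))  # 0.421
```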
Next, the correlations were corrected for any attenuation due to error of measurement or restriction in range. Two types of error of measurement were calculated when the requisite information was available. First, reliability in the dependent variable was only calculable for a single study, as most utilized a single item to measure the impact of an anchor. Indeed, most anchoring studies will not ask participants to make a judgment multiple times, meaning reliability cannot be determined. This is not to say, however, that the measurement of the judgment is perfectly reliable: one can assume that if the same participant were to complete the same anchoring task at multiple time points, there would be some variance in their answers. Thus, it can be assumed for the purpose of this analysis that some of the variance in the effect sizes is attributable to differential error of measurement in the dependent variable, but it cannot be accounted for mathematically.

The second source of error of measurement lies within the independent variable, and this was estimated for a number of studies. Specifically, these experiments posit a causal path in which the anchor induces some recognition/comparison of the number, which in turn drives subsequent judgments toward this numerical point. The former of these relationships can be measured via a manipulation check, and in a perfect study, the correlation between the two would be 1.00. Given that few studies achieve a perfect induction, most final effect sizes will be attenuated (Boster, 2002). For those studies that included a manipulation check, this correlation was calculated, and the final effect size was corrected to reflect the unattenuated relationship.

The final artifact which would attenuate correlations is restriction in range in either variable for the sample population. This would typically arise in cases of dichotomous independent variables or a specific selection criterion for the population. The latter does arise in a few of the studies where experts were studied, as one might expect their responses to vary less than those of a wider population. But crucially, many of these studies also have restricted variance due to the target judgment (e.g., if the judgment is a likelihood estimate, responses are constrained to 0-100). Furthermore, many of them have notable ceiling or floor effects, particularly in those cases which involve financial damages. Once again, the nature of these studies is such that restriction in range is difficult to estimate. The judgments derive from a variety of contexts, the populations are selected in different ways, and some allow for extremely variable responses (Saks et al., 1997). As a result, restriction in range could not be corrected in the effect sizes. This will be addressed further in the discussion section.

These corrected coefficients were weighted by sample size, and then averaged. The homogeneity of effect sizes is typically assessed via the 75% rule, which asserts that if 75% of the variance in effect sizes can be accounted for by known and correctable artifacts, then it can be assumed that the remaining 25% is due to uncontrollable artifacts (Schmidt & Hunter, 2014). Homogeneity was also checked within levels of each moderator variable. Finally, potential moderators were identified via assessment of effect sizes among studies utilizing different methodological approaches. Specifically, expertise, plausibility, and anchor relevance were all examined for moderating effects.
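The core Schmidt and Hunter (2014) computations described above are compact enough to sketch directly. The following is a minimal bare-bones illustration (with no artifact corrections), assuming a list of per-study sample sizes and correlations; applied to the 84 effect sizes in Table 1, this procedure yields the weighted mean and variance reported in the Results below.

```python
def bare_bones_meta(studies):
    """Bare-bones Schmidt & Hunter (2014) meta-analysis of correlations.

    `studies` is a list of (n, r) pairs, one per effect size. Returns the
    sample-size-weighted mean r, the weighted observed variance of r, the
    variance expected from sampling error alone, and the percentage of the
    observed variance attributable to sampling error.
    """
    total_n = sum(n for n, _ in studies)
    k = len(studies)
    mean_r = sum(n * r for n, r in studies) / total_n
    var_obs = sum(n * (r - mean_r) ** 2 for n, r in studies) / total_n
    n_bar = total_n / k  # average sample size
    var_err = (1 - mean_r**2) ** 2 / (n_bar - 1)
    return mean_r, var_obs, var_err, 100 * var_err / var_obs

# Toy input; the actual analysis pools all 84 (N, r) pairs from Table 1.
example = [(81, .29), (191, .73), (320, .53), (172, .31)]
print(bare_bones_meta(example))
```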
Results

In total, 84 effect sizes from 39 published studies and 2 unpublished datasets were calculated; 9,813 participants were included as part of the collective sample (see Table 1). The weighted average effect size was r = .401 (unweighted average effect size = .437), with a weighted variance of .023 (unweighted variance = .022).

Table 1. Summary of Studies Included in Analysis

Author(s)                          Year   N     r    Moderators¹
Brewer et al. Study 1              2007   81    .29  2,8
Brewer et al. Study 2              2007   191   .73  3,8
Campbell et al.                    2015   320   .53  1,2
Cervone & Peake Study 1            1986   52    .43  n/a
Cervone & Peake Study 2            1986   23    .52  n/a
Chapman & Bornstein Study 1        1996   25    .62  1,2,8
Chapman & Bornstein Study 2        1996   85    .20  1,2
Chapman & Johnson                  1999   159   .34  2
Cheek et al. Study 1               2015   63    .39  n/a
Cheek et al. Study 2               2015   79    .70  7
Cheek et al. Study 3               2015   90    .25  8
Cheek et al. Study 4               2015   94    .49  7
Cheek et al. Study 5               2015   74    .44  8
Cheek et al. Study 6               2015   131   .56  n/a
Critcher & Gilovich Study 1        2008   260   .13  n/a
Critcher & Gilovich Study 2        2008   201   .15  2
Critcher & Gilovich Study 3        2008   102   .25  2
Critcher & Gilovich Study 4        2008   49    .27  2
Englich & Mussweiler Study 1       2001   19    .55  1,3
Englich & Mussweiler Study 2       2001   44    .37  1,2
Englich & Mussweiler Study 3       2001   16    .42  1,3
Englich et al.                     2005   42    .48  1,2
Englich et al. Study 1             2006   42    .38  1,3,6,7
Englich et al. Study 2             2006   39    .33  1,3,6,7
Englich et al. Study 3             2006   52    .36  1,3,6,7
Englich & Soder Study 1            2009   85    .28  1,2,7
Englich & Soder Study 2            2009   78    .48  1,3,7
Englich & Soder Study 3            2009   52    .53  1,2,7
Englich & Soder Study 4            2009   59    .40  1,3,7
Greenstein & Velasquez             2017   303   .61  2
Glockner & Englich Study 1         2015   44    .53  1,2,5
Glockner & Englich Study 2         2015   44    .33  1,2,6
Hans et al. Study 1                2018   60    .45  1,2,5
Hans et al. Study 2                2018   60    .39  1,2,6
Hastie et al.                      1999   172   .31  1,2
Joireman et al.                    2010   81    .30  2
Kaustia et al. Study 1             2008   73    .27  3,8
Kaustia et al. Study 2             2008   139   .63  2,8
Konig                              2005   23    .59  n/a
Lecci & Martin                     2018   250   .48  1,2
Malouff & Schutte Study 1          1989   155   .57  1,2
Malouff & Schutte Study 2          1989   158   .53  1,2
Marti & Wissler Study 1            2000   360   .45  1,2,7
Marti & Wissler Study 2            2000   500   .31  1,2,8
Markovsky Study 1                  1988   205   .30  2
Markovsky Study 3                  1988   73    .25  2
McElroy & Dowd Study 1             2007   195   .51  2
McElroy & Dowd Study 2             2007   200   .06  2
Mochon & Frederick Study 1         2013   35    .55  2
Mochon & Frederick Study 2         2013   35    .45  2
Mochon & Frederick Study 3         2013   60    .42  2
Mochon & Frederick Study 4         2013   57    .54  2
Mochon Study 1 (unpublished)       n/a    161   .51  2
Mochon Study 2 (unpublished)       n/a    134   .60  2
Mussweiler Study 1                 2001   16    .53  n/a
Mussweiler Study 1                 2001   22    .46  n/a
Mussweiler Study 3                 2001   31    .71  n/a
Mussweiler Study 3                 2001   20    .79  n/a
Mussweiler & Englich Study 1       2005   35    .35  n/a
Mussweiler & Englich Study 2       2005   41    .31  n/a
Mussweiler et al. Study 1          2000   30    .64  3,7
Mussweiler et al. Study 1          2000   30    .26  3,7
Mussweiler et al. Study 2          2000   31    .62  2,7
Northcraft & Neale Study 1         1987   48    .64  2,7
Northcraft & Neale Study 1         1987   21    .60  3,7
Northcraft & Neale Study 2         1987   54    .60  2,7
Northcraft & Neale Study 2         1987   47    .52  3,7
Plous                              1989   1000  .47  n/a
Raitz et al.                       1990   150   .29  1,2
Reyna et al. Study 1               2015   88    .38  1,2,5
Reyna et al. Study 1               2015   87    .19  1,2,6
Robbennolt & Studebaker Study 1    1999   93    .49  1,2
Robbennolt & Studebaker Study 2    1999   282   .25  1,2
Saks et al. Study 1                1997   40    .49  1,2
Saks et al. Study 1                1997   40    .47  1,2
Saks et al. Study 1                1997   40    .51  1,2
Stein & Drouin                     2017   725   .20  1,2
Thomas & Handley Study 1           2008   41    .56  2,8
Thomas & Handley Study 2           2008   50    .31  2,8
Thorsteinson et al. Study 1        2008   91    .51  n/a
Thorsteinson et al. Study 2        2008   175   .38  n/a
Thorsteinson et al. Study 3        2008   157   .32  n/a
Wansink et al.                     1998   79    .56  n/a
Wistrich et al.                    2004   91    .53  1,3,5

¹ Moderators are coded as follows: 1 = law study; 2 = amateur population; 3 = expert population; 5 = meaningful anchor; 6 = meaningless anchor; 7 = plausible anchor; 8 = extreme/implausible anchor.
When the available corrections are made for attenuation due to error in measurement, the average true score correlation is .558, with a variance of .046. Based on these data, a 95% confidence interval around the true population correlation ranges from .512 to .604. Furthermore, these artifacts account for 26% of the observed variance. Utilizing the 75% rule (Schmidt & Hunter, 2014), it was posited that there may be significant moderators accounting for the additional variance beyond that which was corrected. It should also be reiterated here that very few corrections to the data could be made for the given artifacts, which would certainly mean that some of the variance unaccounted for may be attributable to these uncorrectable artifacts in individual studies. But given the data and the many moderators posited in the literature, analyses for moderators were conducted.

First, the expertise or amateurism of participants was coded in the data. Specifically, studies which selected a sample posited to have specific knowledge in the judgment domain were coded as expert samples (e.g., judges in sentencing decisions or real estate agents in a housing value judgment). Conversely, samples in which participants were specifically posited not to have expertise in the judgment domain were coded as amateurs. This means that a sample of mock jurors who made compensation decisions was counted as amateur (e.g., Hans et al., 2018), while judgments of self-performance or efficacy were excluded entirely (e.g., Cervone & Peake, 1986). This yielded 14 studies with expert judges (Mw = .504, Varw = .026, N = 762) and 50 studies with amateur judges (Mw = .378, Varw = .023, N = 6536). These results would indicate that the anchoring effect is larger in magnitude among experts than amateurs: z(62) = 2.64, p < .01; r = .32. Thus, it appears one significant moderator has been identified, albeit not in the direction expected by intuition, or by some of the individual studies.

The second moderator analyzed was the extremity of the anchor value. Specifically, any study making an explicit mention of utilizing an implausible or extreme anchor value as an induction was coded as an extreme anchor study (e.g., Thomas & Handley, 2008). Conversely, studies which selected anchors that were realistic estimates of a true judgment were included as non-extreme or plausible anchors (e.g., Englich, Mussweiler, & Strack, 2009). Studies involving uncertain 'correct' values (e.g., a likelihood estimate of troop deployment: Chapman & Johnson, 1999) or compensation/damages studies that do not explicitly mention extremity (e.g., Reyna et al., 2015) were excluded from the analysis. This coding scheme yielded 10 studies with extreme anchor values (Mw = .420, Varw = .032, N = 1264) and 17 studies with plausible anchor values (Mw = .473, Varw = .012, N = 1175). The results would indicate that the anchoring effect is modestly, but not significantly, larger in magnitude when plausible anchors are utilized versus extreme anchors: z(25) = .843, ns.
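For transparency, the moderator contrasts reported here can be reproduced by comparing the two subgroup weighted means, with the standard error of each mean estimated from that subgroup's observed variance and number of studies. The following is a sketch of that comparison; the inputs are the expertise subgroup statistics reported above.

```python
import math

def subgroup_z(mean1, var1, k1, mean2, var2, k2):
    """z test comparing two subgroup weighted mean correlations.

    The standard error of each mean is estimated as the square root of
    that subgroup's observed variance divided by its number of studies.
    """
    se = math.sqrt(var1 / k1 + var2 / k2)
    return (mean1 - mean2) / se

# Expertise moderator: experts (Mw = .504, Varw = .026, k = 14) vs.
# amateurs (Mw = .378, Varw = .023, k = 50). Matches the reported
# z = 2.64 within rounding.
print(round(subgroup_z(.504, .026, 14, .378, .023, 50), 2))
```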
Finally, the type of judgment participants were asked to make was coded and analyzed as a potential moderator. Although not specifically posited by any of the literature, an examination of effect sizes and respective variances seemed to indicate some differences in outcomes when subjects were asked to make a percentage, numerical, monetary, or Likert-scale judgment. Indeed, efforts to correct for restriction in range were largely thwarted by the variability between these different types of judgments. Thus, each of the above four was coded for within each study, resulting in 8 studies in which participants made a likelihood or percentage estimate (Mw = .408, Varw = .032, N = 2104), 54 studies in which participants offered a numerical estimation (Mw = .402, Varw = .029, N = 4321), 19 studies in which participants made a monetary/compensation estimate (Mw = .397, Varw = .014, N = 2965), and 3 studies in which participants made a judgment utilizing a Likert scale (Mw = .387, Varw = .005, N = 423). A glance at these statistics would indicate that there is no significant difference in effect sizes across these different outcome scales, and only minor total differences in variance.

A file drawer analysis was also conducted to ascertain the likelihood of 'lost' studies that may show that the anchor is not as robust as the published literature would suggest (Schmidt & Hunter, 2014). Results indicated that 253 studies (over three times the number of effect sizes in this analysis), each averaging null results, would be necessary to bring the mean effect size down to r = .10, which reaffirms the robustness of the anchoring effect.
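The file drawer computation follows directly from the weighted mean: the question is how many unlocated zero-correlation effect sizes would be needed to pull the mean down to the criterion. A minimal sketch, using the summary values reported above:

```python
def file_drawer_k(k, mean_r, r_criterion=0.10):
    """Number of unlocated effect sizes averaging zero that would be
    needed to pull the weighted mean r down to the criterion value
    (Schmidt & Hunter, 2014)."""
    return k * (mean_r - r_criterion) / r_criterion

# 84 effect sizes with a weighted mean of r = .401 yield ~253.
print(round(file_drawer_k(84, 0.401)))
```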
Results Specific to the Domain of Law

Given the principal interest of this meta-analysis (anchoring in law contexts), separate analyses were conducted on studies that involved a law-related outcome (e.g., either sentencing or compensatory damages). In total, 34 of the 84 studies (N = 4397) involved a law domain. When separately analyzed, the mean weighted average was .360 (unweighted average = .414), with a variance of .016 (unweighted variance = .013). None of these correlations were correctable for artifacts, so an estimate of the true population correlation is incalculable. Thirty-seven percent of this variance was due to sampling error alone, which indicates that there may be moderating factors in the effect.

Once again expertise versus amateurism was compared, with the coding scheme identical to the initial moderator analysis. This resulted in 8 studies utilizing expert participants, generally actual judges or practicing attorneys (Mw = .439, Varw = .006, N = 396), and 26 studies involving amateur participants, generally mock jurors or students (Mw = .362, Varw = .016, N = 4001). These analyses would indicate that the anchoring effect is modestly larger in magnitude among expert subjects than amateur subjects: z(32) = 2.09, p = .02. This result is consistent with the finding that experts may be slightly more susceptible to anchoring effects than amateurs, counter to the prevailing hypotheses in the literature.

The second moderator of interest within this subset of data is the meaningfulness of the anchor value itself. Specifically, it is proposed that within a law context, adding an element of rationale or explanation for the given anchor value may increase its power. For example, an anchor in a compensatory damage case may be accompanied by the information that it represents 10 years of a victim's annual income (Hans et al., 2018). Thus, studies were coded as having a meaningful anchor if additional information was provided alongside the anchor which indicated its relevance (e.g., Reyna et al., 2015). Studies were coded as having a meaningless or irrelevant anchor if additional information was provided alongside the anchor that indicated its irrelevance or ostensible randomness (e.g., Englich, Mussweiler, & Strack, 2006). In the entire corpus of data, these criteria were only noted in studies within the law domain. This resulted in 4 studies with meaningful/relevant anchors (Mw = .466, Varw = .004, N = 283) and 6 studies with meaningless/irrelevant anchors (Mw = .313, Varw = .006, N = 324). Notably, within this subset, 100% of the variance could be accounted for by sampling error. Further comparison of the weighted mean effect sizes indicated a substantial difference between the two types of anchors: z(8) = 3.40, p < .001. This result is consistent with the idea that meaningful anchors have a more powerful effect than meaningless anchors.

Discussion

Conclusions for Anchoring at Large

The findings of this meta-analysis share some consistencies with the meta-analysis performed by Orr and Guthrie (2006), but also deviate sharply in several ways. First and foremost, anchoring does produce a strong effect. The mean weighted effect size found in this analysis was r = .401, which exceeds most effect sizes associated with other meta-analyzed social phenomena. The difference in outcomes associated with low versus high anchors is substantial in nearly every one of these cases, and this shows in the final weighted average effect size. Transformed into the d statistic (d = 2r/√(1 − r²)), the effect size is equal to .88, which indicates a large effect. Interpreting this further, this statistic would indicate that the difference between low- and high-anchor groups is equivalent to nearly a full standard deviation on the relevant outcome variable. While Orr and Guthrie found a slightly higher correlation (.497), it is worth mentioning that they calculated it as a correlation between a single anchor amount and the final negotiation price. The above analysis instead captures the impact of inducing a high anchor rather than a low anchor.

Furthermore, it is notable that this weighted average effect size undoubtedly underestimates the true population correlation. This underestimation is the product of a few factors. First, as previously mentioned, reported reliability of measurement in the dependent variable was nearly non-existent among the studies utilized in this analysis. The outcome measure for most anchoring studies is a single judgment following the anchoring induction, which prevents any estimation of reliability. It is reasonable to assume that there would be some variation in responses if one were given the same anchoring task more than once (Nunnally, 1967); after all, these numeric estimates tend to vary wildly between subjects. It would be worth the effort to undertake a very specific repeated measures approach to anchoring: one in which the same subjects are exposed to identical anchoring tasks over the course of a few trials, to ascertain the within-subject variability. Alternatively, one could use different question formats directed at the same anchoring judgment task, which would also yield multiple indicators. Armed with that correlation, this reliability coefficient could be applied to these results, which would correct and increase the final obtained effect size.
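As an illustration, the correction such a reliability estimate would permit is the standard disattenuation formula; the test-retest value below is hypothetical, chosen only to show the direction and rough size of the adjustment.

```python
import math

def disattenuate_for_dv(r_obs, r_yy):
    """Correct an observed correlation for unreliability in the
    dependent variable (Schmidt & Hunter, 2014)."""
    return r_obs / math.sqrt(r_yy)

# Hypothetical: if repeated measurement put the judgment's test-retest
# reliability at r_yy = .80, the mean effect of r = .401 would correct
# to roughly .45.
print(round(disattenuate_for_dv(0.401, 0.80), 3))  # 0.448
```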
Second, error of measurement stemming from the induction was undoubtedly a source of attenuation. These studies differed greatly in terms of the magnitude of the anchor presented and how directly or indirectly the anchor was provided, and a few even had additional numbers included in the experimental materials that may have acted as secondary anchors. It is worth highlighting the results of the subliminal anchoring experiment by Mussweiler and Englich (2005): even anchors presented without participants' knowledge resulted in effect sizes of r = .35 and .31, respectively. This would suggest that a number included in the stimulus that was not intended to be an anchor may still act as one, which might mitigate the final power of the induction. Researchers in this area should strive to include specific manipulation checks for this type of experiment: include an item testing recognition of the anchor following the critical judgment. From this, error of measurement in the independent variable could be accounted for, the above average effect size would be corrected and increased, and additional variance among studies might be explained.

Finally, restriction in range is an artifact which could account for some of the variation between effect sizes. The final observed variance exceeded that which was expected from sampling error alone, and it seems probable that the heterogeneity of variances is in part due to restriction in range. Normally, it could be corrected for by estimating a population-level standard deviation and comparing the observed standard deviation within each study to this parameter. In this particular domain, the population-level standard deviation is extremely difficult to estimate. An attempt was made to do so using the coefficient of variation, but there is massive variation in deviations between a number of studies, particularly when comparing something like compensatory damages to the number of stairs one just climbed (Cheek et al., 2015; Reyna et al., 2015). When these coefficients are included in the meta-analysis, the heterogeneity of the variances is exacerbated. Thus, this approach was abandoned. Yet it is reasonable to assume that some of the studies involved restriction in range in the samples utilized and in the dependent measure, which would attenuate the final effect size. For future primary researchers in anchoring, the simplest prescription is that means and standard deviations should always be reported for variables of interest; a number of the above studies did not provide this information, or even enough information to provide an accurate estimate. The inclusion of these statistics may have allowed for a more reasonable estimation of population variance.
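Had a defensible population standard deviation been available, the standard (Thorndike Case II) correction for direct range restriction could have been applied. The sketch below uses purely hypothetical values and is included only to show what the correction would look like.

```python
import math

def correct_range_restriction(r_obs, sd_pop, sd_obs):
    """Thorndike Case II correction for direct range restriction,
    where u = sd_pop / sd_obs is the ratio of the unrestricted to the
    restricted standard deviation."""
    u = sd_pop / sd_obs
    return (u * r_obs) / math.sqrt(1 + (u**2 - 1) * r_obs**2)

# Hypothetical: a sample whose judgments vary half as much as the wider
# population (u = 2) would correct from an observed r = .40 to ~.66.
print(round(correct_range_restriction(0.40, 2.0, 1.0), 3))
```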
Bearing in mind all these unaccounted-for artifacts, the analysis still yielded an estimated true population correlation of r = .558, which again indicates the strength of this effect. Anchoring is clearly among the most powerful psychological effects that have been empirically observed, and it may have utility in several domains.

Two moderators were also specifically evaluated as a reflection of the extant research and the heterogeneous variance. The expertise of a target has frequently been proposed as a potential moderator of anchoring, such that experts are less prone to anchoring effects; this logically followed the conceptualization of anchoring as arising in situations involving uncertainty (Tversky & Kahneman, 1974). The fundamental argument behind this hypothesis is that experts feel less uncertain about a correct or fair outcome of a given task where an anchor may be presented, and their subsequent judgment is guided by their experience rather than the anchor (see Furnham & Boo, 2011). Indeed, Orr and Guthrie (2006) indicate that experienced negotiators were substantially less affected by an anchor than novices (p. 623). However, the results of this analysis indicate that experts are not any less prone to anchoring effects; in fact, they are more affected than amateurs. The difference between effect sizes is not massive (Δr = .126), and accounting for differences in induction strength may make this contrast even smaller. Yet this is supportive of one of Furnham and Boo's (2011) insights in their literature review: "[expert] judges elaborate and compare the reference with their existing knowledge and engage in more thorough information processing, hence [activating] the accessibility of anchor-consistent information and bias judgments" (p. 39). This exact process was likely illustrated in the Mussweiler, Strack, and Pfeiffer (2000) study, in which used car experts and amateurs were asked to judge the value of a used car. The experts probably processed the anchor value in terms of its consistency with the other information about the car, and it biased their judgments slightly more than the students, who did not have the same experience from which to draw. At a minimum, this finding conclusively rejects the idea that expertise or knowledge mitigates anchoring. Based on these data and the insight of Furnham and Boo (2011), it would seem that some primary research should shift focus to how individuals cognitively process anchors, and expertise may prove to be a notable moderator in that relationship.

The second moderator raised throughout much of the literature is the plausibility or extremity of the anchor value presented. Some research had found no difference in effects, suggesting that extreme anchors are processed in comparison to boundary plausible judgments (e.g., the maximum or minimum length of the Mississippi River), thus resulting in assimilation to the boundary judgment (Mussweiler & Strack, 2001). Others had found that providing extreme anchors might push final judgments to comparable extremes: even a $1 billion request from an attorney might drive up the final verdict in terms of damages (Chapman & Bornstein, 1996). The results of this meta-analysis would suggest that extreme anchors tend to be slightly less powerful than more plausible anchors, but the difference between the two is quite modest (Δr = .053). These data did specifically exclude some studies that did not meet other inclusion criteria, which may alter this difference somewhat. But nonetheless, it appears that both plausible and extreme anchors significantly impact final judgments.

Conclusions for Law Domain

Anchoring is an extremely important effect to consider in law contexts, particularly as it relates to sentencing decisions and damage awards. A substantial proportion of the selected studies utilized law scenarios, and a secondary analysis was performed on these studies specifically. Not surprisingly, a similar weighted average effect size was found for these studies (Mw = .360), indicating that the presence of anchoring is decidedly not limited to mere trivia or volume assessments. In these situations, the anchor may have immense implications in terms of jail time served or financial punishment rendered.
Revisiting the expertise moderator, it is interesting to note that experts in law were still more susceptible to anchoring effects than amateurs or mock jurors (Δr = .077). This seems particularly concerning when it comes to judges: after all, these are the individuals who are charged by the public to be unbiased and fair. Yet the results of this meta-analysis would suggest that they are worse at ignoring this particular cognitive bias than a standard juror would be. This finding is not limited merely to justices either; one experiment found that even an opposing counsel's sentencing demands in a criminal trial might be influenced by an anchor presented (Englich, Mussweiler, & Strack, 2005).

An additional moderator (not present in non-law studies) was the meaningfulness of a given anchor. Legal scholars have argued that the power of an anchor may be increased by providing specific rationale or support for why a particular number was selected (see Hans et al., 2018). In a context like the law (where evidence and logic are pillars of the system), providing reasoning behind an anchor, whether the reasoning is accurate or not, should bolster the effect on final damage awards. The results of the meta-analysis support this notion, as the subset of studies with meaningful or relevant anchors showed a substantially larger effect size than those with meaningless or irrelevant anchors (Δr = .153). Contrary to the findings regarding expertise, this effect may be somewhat encouraging to legal scholars. If logic and reasoning are paramount values of the justice system, it is a good thing that they appear to enhance the impact of numerical anchors presented as part of a case argument.

Limitations of the Research

While this meta-analysis illuminates elements of the anchoring effect, it is not without its limitations. First and foremost, the inclusion criteria delineated certainly limit the scope of the findings. Several studies in this area were conducted using a control condition against a single anchor, rather than high versus low (e.g., Guthrie, Rachlinski, & Wistrich, 2001). A similar meta-analysis using those studies may find a more modest effect size. Additionally, anchors may be presented non-numerically (see Langeborg & Eriksson, 2016), and these sorts of inductions may warrant further exploration.

A second limitation is the dearth of artifacts that were correctable in this analysis. Many studies did not include the statistics necessary to calculate assessments of reliability, and the difficulties in correcting for restriction in range have already been discussed. Similarly, some standard deviations had to be estimated from other parameters for calculation of the effect sizes. As a result, the weighted average effect size is likely an underestimation of the true effect, and until the above artifacts can be accounted for, the magnitude of the true correlation is obscured.

A third limitation lies in the inability to account for the significant variance in effect sizes. This limitation in part stems from the lack of information on artifacts, but it also has implications for identifying moderators. The variance-centric approach outlined by Schmidt and Hunter (2014) and their 75% rule is designed to facilitate the ruling out of moderators, but a by-product is a greater risk of Type II error. The present analysis (and other excluded research) would not assert the presence of moderators beyond the ones mentioned here, but that is not to say they could not exist.
The benefit of an additional meta-analysis is seen here as well, as different inclusion criteria may allow for the identification of other moderators. Finally, the minimal unpublished data included and the exclusion of certain studies leave this analysis vulnerable to overstating the power of anchoring. While these data would affirm its robustness, others may assert that anchoring is quite malleable, and that the effect can be reduced with subtle changes to stimulus materials (Mussweiler, 2002) or a brief training period (Adame, 2016). Certainly the conditions under which anchoring does not occur would be worth further exploration.

Implications/Applications of the Findings. There are numerous theoretical and applied conclusions that can be drawn from the above results. From a theoretical standpoint, this meta-analysis may allow anchoring researchers to recalibrate their focus and depart from existing paradigms exploring expertise and extremity as moderating factors. The effect is extremely strong and, as such, should continue to be utilized in various contexts and environments. This meta-analysis quantitatively affirms the robustness asserted by many in the literature. But given the unexpected finding that amateurs are less impacted by anchoring than experts, the cognitive process should become the primary concern for future work on anchoring.

The mechanism(s) by which anchoring works so consistently and effectively still elude those interested in the effect. Anchoring as a heuristic cue or cognitive prime does not fully explain why experts would be more affected; after all, experts would presumably have more cues to draw upon in making a judgment, which should dilute the impact of the anchor. As Furnham and Boo (2011) have proposed, it seems more likely that anchors activate central processing among experts that fosters consideration of consistency with experience. This process could possibly be mapped out with the right experimental procedure, and there may also be an opportunity to use neurological measures to ascertain the level at which anchors are processed.

Anchoring's place as a decision heuristic is also worth further theoretical consideration. It occupies a specific domain separate from framing or priming effects. While the phenomenon likely shares some similarities at the cognitive level with priming in particular, it is undoubtedly distinct in practice. Anchoring occurs in quantitative domains, as the primary marker of anchoring is the numerical suggestion. This is an important distinction when one attempts to compare the effect of anchoring against something like framing.

Extending this assertion further, the impact of multiple anchors, or of anchors working in tandem with other persuasive approaches, might be worth exploring in an experiment. In the real world, anchors are rarely presented as the singular influence on a judgment (as tends to be the case in these experiments). Rather, anchors are typically presented alongside other decision heuristics. The anchor's relative impact should be considered, in addition to possible interactions between cues. Can these other heuristics mitigate or strengthen the influence of the anchor?

This meta-analysis would also invite the exploration of anchoring in other domains, beyond law and trivia contexts.
One included study examined anchoring in a health risk communication scenario (Brewer et al., 2007); perhaps anchoring has value in attitude shaping for certain vital health contexts (e.g., the risk of contracting melanoma or heart disease). Another sought to explore anchors as they pertained to climate change (Joireman et al., 2010); anchoring could prove useful in guiding beliefs about pollution and energy efficiency, in order to promote pro-social changes in the population. The fact that anchoring is seen so consistently in lab contexts invites applied field research, which could illuminate the effect's place in day-to-day life.

This begins to expose the potential value of anchoring in applied settings. Numbers are omnipresent in the media especially, and this bombardment of anchors may impact people's attitudes on politics, crime, health risk, and the environment, to name a few. It seems probable that the magnitude of this impact is underestimated. After all, few will critique the presence of numbers in media. But these results would suggest that their influence should be noted.

Implications/Applications of the Findings for Law. Perhaps the most important application of these findings is in the courtroom, where anchors clearly have a pervasive influence that is unmitigated by experience, expertise, or title. A consistent finding of these data is that sentencing decisions may be greatly influenced by an anchor presented in court. Some of these influential anchors are persuasive for good reason: if they are presented as meaningful and truly fair, perhaps the numerical suggestion is not a bad thing. After all, one would hope that perpetrators who commit the exact same crimes also face the exact same punishment.

A more pessimistic outlook would offer that sentencing decisions may largely stem from the whims of prosecutors, who are afforded the opportunity to demand specific sentence lengths shaped by their own biases. For example, implicit bias research would suggest that certain prosecutors might generally demand longer sentences for minority defendants than for white defendants (Faigman et al., 2012). This meta-analysis would add that such initial bias will have a ripple effect, impacting a judge's, a jury's, or even a defense attorney's subsequent perceptions of a 'fair' sentence (Englich, 2006). Perhaps it is best for anchors to be excluded entirely when it comes to sentencing, with judges falling back upon their own expertise and previous cases to inform their decisions.

In civil cases, anchors warrant careful consideration by both plaintiff and defense attorneys. For plaintiffs, it would seem critical to present a numeric anchor as part of their argument, as damage awards appear to be greatly shaped by the amount requested. Attorneys should certainly select a high amount, but perhaps not an extreme quantity. Correspondingly, they should commit part of their closing argument to explaining the reasoning behind the number. If the plaintiff can provide a high anchor backed by a strong argument, it might serve their interests when the jury determines awards.

From the perspective of the defense, a few studies would suggest that anchors presented by a plaintiff are extremely difficult to counter; the effect is only marginally mitigated by counter-anchoring (Marti & Wissler, 2000), and attacking the anchor appears to have no effect (Campbell et al., 2015). The way in which an anchor is attacked might be crucial, as these data would suggest that the meaningfulness of the anchor should be targeted.
For instance, a defense attorney might attempt to persuade jurors that a plaintiff's anchor was selected arbitrarily, with no basis for its calculation. With that said, these data would suggest that a defense attorney's arguments ought to be focused on persuading jurors against a finding of liability, rather than on trying to address the anchor. But further research may be needed in this area to determine a strong strategy for overcoming a plaintiff's anchor.

Final Conclusions. This meta-analysis incorporated results from three decades of anchoring research, and the results allow for a clearer view of the strength of the effect. Moreover, it clarifies some repeated questions posed in the literature: the extremity of the anchor does not greatly impact the resulting effect, and expertise does not diminish the power of an anchor. Instead, it exacerbates it. Anchoring research that integrates these factors into its experimental protocols should seek to measure the specific processing associated with each.

The completion of the meta-analysis also exposes a methodological weakness of much of the anchoring research: assessments of reliability in measures are impractical given most designs. With respect to error in the independent variable, few studies attempt to assess the power of their induction, as most do not include a manipulation check. This issue is easily fixed by including a manipulation check and reporting the results. Assessing reliability in the dependent variable may be less straightforward, given that anchoring studies typically involve a single target judgment. But even a single study designed to assess the test-retest correlation within participants might be worth pursuing, as it could yield a reliability estimate for all other anchoring studies (see the sketch near the end of this section for how such an estimate would be applied).

The results specific to law contexts should be of particular interest to major figures in the justice system. Judges should be wary of the impact anchors may have on sentencing decisions, and perhaps the regulations regarding sentencing demands should be re-examined to minimize their presence. In civil cases, award requests may also require greater restriction, although not in the form of caps, which act as anchors themselves (Robbennolt & Studebaker, 1999). Perhaps judges should instead be more proactive in deciding whether to allow numerical anchors during arguments.

However, these results would indicate that reducing the prevalence of anchors in court may be a fruitless effort. The effect is consistent and strong, and excluding sentencing demands or requests for damages may prove wholly impractical. Both judges and jurors will likely generate an anchor even in the absence of one delivered by an attorney. For example, judges will probably reflect on recent or similar cases for guidance on sentencing, and jurors may utilize the very first number mentioned in deliberations. The presentation of numerical anchors in these kinds of studies varies greatly, from subliminal (Mussweiler & Englich, 2005) to explicit with forewarning (Wilson et al., 1996). It is reasonable to assume that in a particularly ambiguous scenario (like assigning awards for pain and suffering), people will identify some kind of anchor. In deliberations, the final awarded amount might correlate highly with the first dollar amount mentioned in connection with the case, whether it comes from an attorney or a single juror. This effect may not be preventable. All of this reaffirms the central feature of anchoring: the effect is formidable and pervasive.
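Returning to the reliability point raised above: if a test-retest study supplied a dependent-variable reliability estimate, the standard correction for attenuation described by Schmidt and Hunter (2014) could be applied to each observed anchoring correlation. The sketch below uses hypothetical values for both the observed correlation and the reliability; it illustrates the computation only.

import math

# Hypothetical values: an observed anchor-judgment correlation and a
# test-retest reliability estimate for the target judgment (dependent variable).
r_obs = 0.40
r_yy = 0.70

# Correction for attenuation with only dependent-variable reliability known;
# measurement error biases the observed correlation toward zero, so the
# corrected estimate is larger than the observed one.
r_corrected = r_obs / math.sqrt(r_yy)
print(f"observed r = {r_obs:.2f}, corrected r = {r_corrected:.3f}")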
This meta-analysis will hopefully encourage further work on the mechanisms underlying anchoring, and perhaps an expansion of its use to other domains. It may have particular value in shaping attitudes within pro-social areas like health-risk research. A meta-analysis such as this certainly provides answers to some of the persistent questions within anchoring research, but hopefully it inspires further exploration in this area.

REFERENCES

(* indicates inclusion in meta-analysis)

Adame, B. J. (2016). Training in the mitigation of anchoring bias: A test of the consider-the-opposite strategy. Learning and Motivation, 53, 36–48.
Boster, F. J. (2002). On making progress in communication science. Human Communication Research, 28(4), 473–490.
*Brewer, N. T., Chapman, G. B., Schwartz, J. A., & Bergus, G. R. (2007). The influence of irrelevant anchors on the judgments and choices of doctors and patients. Medical Decision Making, 27(2), 203–211.
*Campbell, J., Chao, B., Robertson, C., & Yokum, D. V. (2015). Countering the plaintiff's anchor: Jury simulations to evaluate damages arguments. Iowa Law Review, 101, 543.
*Cervone, D., & Peake, P. K. (1986). Anchoring, efficacy, and action: The influence of judgmental heuristics on self-efficacy judgments and behavior. Journal of Personality and Social Psychology, 50(3), 492–501.
*Chapman, G. B., & Bornstein, B. H. (1996). The more you ask for, the more you get: Anchoring in personal injury verdicts. Applied Cognitive Psychology.
*Chapman, G. B., & Johnson, E. J. (1999). Anchoring, activation, and the construction of values. Organizational Behavior and Human Decision Processes, 79(2), 115–153.
*Cheek, N. N., Coe-Odess, S. J., & Schwartz, B. (2015). What have I just done? Anchoring, self-knowledge and judgments of recent behavior. Judgment and Decision Making, 10(1), 76–85.
*Critcher, C. R., & Gilovich, T. (2008). Incidental environmental anchors. Journal of Behavioral Decision Making, 21, 241–251.
Dillard, J. P., Hunter, J. E., & Burgoon, M. (1984). Sequential-request persuasive strategies: Meta-analysis of foot-in-the-door and door-in-the-face. Human Communication Research, 10(4), 461–488.
Englich, B. (2006). Blind or biased? Justitia's susceptibility to anchoring effects in the courtroom based on given numerical representations. Law and Policy, 28(4), 497–514.
*Englich, B., & Mussweiler, T. (2001). Sentencing under uncertainty: Anchoring effects in the courtroom. Journal of Applied Social Psychology, 31(7), 1535–1551.
*Englich, B., Mussweiler, T., & Strack, F. (2005). The last word in court — A hidden disadvantage for the defense. Law and Human Behavior, 29(6), 705–722.
*Englich, B., Mussweiler, T., & Strack, F. (2006). Playing dice with criminal sentences: The influence of irrelevant anchors on experts' judicial decision making. Personality and Social Psychology Bulletin, 32(2), 188–200.
*Englich, B., & Soder, K. (2009). Moody experts—How mood and expertise influence judgmental anchoring. Judgment and Decision Making, 4(1), 41–50.
Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12(5), 391–396.
Epley, N., & Gilovich, T. (2010). Anchoring unbound. Journal of Consumer Psychology, 20(1), 20–24.
Faigman, D. L., Kang, J., Bennett, M. W., & Carbado, D. W. (2012). Implicit bias in the courtroom. UCLA Law Review, 59, 1124–1186.
Furnham, A., & Boo, H. C. (2011). A literature review of the anchoring effect. Journal of Socio-Economics, 40(1), 35–42.
*Glöckner, A., & Englich, B. (2015). When relevance matters: Anchoring effects can be larger for relevant than for irrelevant anchors. Social Psychology, 46(1), 4–12.
*Greenstein, M., & Velazquez, A. (2017). Not all anchors weigh equally: Differences between numeral and verbal anchors. Experimental Psychology, 64(6), 398–405.
Guthrie, C., Rachlinski, J. J., & Wistrich, A. J. (2001). Inside the judicial mind. Cornell Law Review, 86, 778–830.
*Hans, V. P., Helm, R. K., Reyna, V. F., & Hall, M. T. (2018). From meaning to money: Translating injury into dollars. Law and Human Behavior, 42(2), 95.
*Hastie, R., Schkade, D. A., & Payne, J. W. (1999). Juror judgments in civil cases: Effects of plaintiff's requests and plaintiff's identity on punitive damage awards. Law and Human Behavior, 23(4), 445–470.
Hinsz, V. B., & Indahl, K. E. (1995). Assimilation to anchors for damage awards in a mock civil trial. Journal of Applied Social Psychology, 25(11), 991–1026.
Hozo, S. P., Djulbegovic, B., & Hozo, I. (2005). Estimating the mean and variance from the median, range, and the size of a sample. BMC Medical Research Methodology, 5(13), 1–10.
Jacowitz, K. E., & Kahneman, D. (1995). Measures of anchoring in estimation tasks. Personality and Social Psychology Bulletin, 21(11), 1161–1166.
*Joireman, J., Barnes Truelove, H., & Duell, B. (2010). Effect of outdoor temperature, heat primes and anchoring on belief in global warming. Journal of Environmental Psychology, 30(4), 358–367.
Kahneman, D., Schkade, D., & Sunstein, C. R. (1998). Shared outrage and erratic awards: The psychology of punitive damages. Journal of Risk and Uncertainty, 16(1), 49–86.
*Kaustia, M., Alho, E., & Puttonen, V. (2008). How much does expertise reduce behavioral biases? The case of anchoring effects in stock return estimates. Financial Management, 37(3), 391–411.
*Konig, C. J. (2005). Anchors distort estimates of expected duration. Psychological Reports, 96, 253–256.
Langeborg, L., & Eriksson, M. (2016). Anchoring in numeric judgments of visual stimuli. Frontiers in Psychology, 7, 1–7.
*Lecci, L., & Martin, A. (2018). The impact of clinical diagnosis and plaintiff's award request on mock juror damage awards and injury perceptions. Psychiatry, Psychology and Law, 25(4), 1–17.
*Malouff, J., & Schutte, N. S. (1989). Shaping juror attitudes: Effects of requesting different damages amounts in personal injury trials. The Journal of Social Psychology, 129(4), 491–497.
Markovsky, B. (1988). Anchoring justice. Social Psychology Quarterly, 51(3), 213–224.
*Marti, M. W., & Wissler, R. L. (2000). Be careful what you ask for: The effect of anchors on personal injury damages awards. Journal of Experimental Psychology: Applied, 6(2), 91–103.
McAuliff, B. D., & Bornstein, B. H. (2010). All anchors are not created equal: The effects of per diem versus lump sum requests on pain and suffering awards. Law and Human Behavior, 34, 164–174.
*McElroy, T., & Dowd, K. (2007). Susceptibility to anchoring effects: How openness-to-experience influences responses to anchoring cues. Judgment and Decision Making, 2(1), 48–53.
*Mochon, D. (2019). [Anchoring responses in two different scenarios]. Unpublished raw data.
*Mochon, D., & Frederick, S. (2013). Anchoring in sequential judgments. Organizational Behavior and Human Decision Processes, 122(1), 69–79.
*Mussweiler, T. (2001). The durability of anchoring effects. European Journal of Social Psychology, 31, 431–442.
Mussweiler, T. (2002). The malleability of anchoring effects. Experimental Psychology, 49(1), 67–72.
*Mussweiler, T., & Englich, B. (2005). Subliminal anchoring: Judgmental consequences and underlying mechanisms. Organizational Behavior and Human Decision Processes, 98(2), 133–143.
Mussweiler, T., & Strack, F. (2000). Numeric judgments under uncertainty: The role of knowledge in anchoring. Journal of Experimental Social Psychology, 36(5), 495–518.
Mussweiler, T., & Strack, F. (2001). Considering the impossible: Explaining the effects of implausible anchors. Social Cognition, 19(2), 145–160.
*Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26(9), 1142–1150.
*Northcraft, G. B., & Neale, M. A. (1987). Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on property pricing decisions. Organizational Behavior and Human Decision Processes, 39(1), 84–97.
Nunnally, J. C. (1967). Psychometric theory. New York: McGraw-Hill.
Oppenheimer, D. M., LeBoeuf, R. A., & Brewer, N. T. (2008). Anchors aweigh: A demonstration of cross-modality anchoring and magnitude priming. Cognition, 106(1), 13–26.
Orr, D., & Guthrie, C. (2006). Anchoring, information, expertise, and negotiation: New insights from meta-analysis. Ohio State Journal on Dispute Resolution, 21(3), 597–628.
*Plous, S. (1989). Thinking the unthinkable: The effects of anchoring on likelihood estimates of nuclear war. Journal of Applied Social Psychology, 19(1), 67–91.
*Raitz, A., Greene, E., Goodman, J., & Loftus, E. F. (1990). Determining damages. Law and Human Behavior, 14(4), 385–395. https://doi.org/10.1007/BF01068163
*Reyna, V. F., Hans, V. P., Corbin, J. C., Yeh, R., Lin, K., & Royer, C. (2015). The gist of juries: Testing a model of damage award decision making. Psychology, Public Policy, and Law, 21(3), 280–294.
*Robbennolt, J. K., & Studebaker, C. A. (1999). Anchoring in the courtroom: The effects of caps on punitive damages. Law and Human Behavior, 23(3), 353–373.
*Saks, M. J., Hollinger, L. A., Wissler, R. L., Lee, D., & Hart, A. J. (1997). Reducing variability in civil jury awards. Law and Human Behavior, 21(3), 243–256.
Schmidt, F. L., & Hunter, J. E. (2014). Methods of meta-analysis: Correcting error and bias in research findings (3rd ed.). Thousand Oaks, CA: Sage.
*Stein, C. T., & Drouin, M. (2017). Cognitive bias in the courtroom: Combating the anchoring effect in criminal sentencing. Indiana University-Purdue University Fort Wayne.
Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73(3), 437–446.
*Thomas, K. E., & Handley, S. J. (2008). Anchoring in time estimation. Acta Psychologica, 127, 24–29.
*Thorsteinson, T. J., Breier, J., Atwell, A., Hamilton, C., & Privette, M. (2008). Anchoring effects on performance judgments. Organizational Behavior and Human Decision Processes, 107, 29–40.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Wan, X., Wang, W., Liu, J., & Tong, T. (2014). Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Medical Research Methodology, 14(135), 1–13.
*Wansink, B., Kent, R. J., & Hoch, S. J. (1998). An anchoring and adjustment model of purchase quantity decisions. Journal of Marketing Research, 35(1), 71–80.
Wilson, T. D., Houston, C. E., Etling, K. M., & Brekke, N. (1996). A new look at anchoring effects: Basic anchoring and its antecedents. Journal of Experimental Psychology: General, 125(4), 387–402.
*Wistrich, A. J., Guthrie, C., & Rachlinski, J. J. (2004). Can judges ignore inadmissible information? The difficulty of deliberately disregarding. University of Pennsylvania Law Review, 153, 1251–1345.