SUPREME COURT LEGITIMACY IN THE CONTEMPORARY ERA

By

Miles T. Armaly

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Political Science – Doctor of Philosophy

2017

ABSTRACT

SUPREME COURT LEGITIMACY IN THE CONTEMPORARY ERA

By Miles T. Armaly

Institutional legitimacy stands alone as the most important form of political capital in democratic systems. In the United States, the Supreme Court, whose members are unelected and serve life terms, is constitutionally ill-equipped to generate this capital via conventional means. Unlike the elected branches of government, whose offices are replete with legitimacy as a result of free and fair elections, the federal judiciary must amass and maintain public support. Without this support, termed "diffuse support," the elected branches on which the judiciary relies for resources, deference, and enforcement of its decisions would not be incentivized to offer those political commodities. Luckily for the Court, the American public is largely supportive of the judicial branch and tends to offer this political capital in spades. Indeed, the Court is, simply, "different." The theory of positivity bias suggests that preexisting support for the Supreme Court influences how citizens perceive Court actions and outcomes. Anterior support begets support. Furthermore, legitimizing judicial imagery – such as robes, gavels, and the dais on which the justices sit – bolsters support for the institution, even when one stands to lose on policy grounds. Concisely, the average American has a deep appreciation for the federal judiciary.

In this project, I offer three essays that examine various aspects of Supreme Court legitimacy: its formation, its malleability, and its influence on separation of powers interactions. I first demonstrate, using experimental data gathered via Amazon's Mechanical Turk (MTurk) platform, that manipulating the source of negative statements about the judiciary produces changes in one's level of support for the Supreme Court. Individuals with negative valence toward a political figure increase their level of support for the judiciary after reading negative statements that figure made about the judiciary, and vice versa. What is more, while individuals do glean some ideological information from the cue and update their position relative to the Court accordingly, changes are largely affective. Next, I capitalize on panel data fortuitously collected shortly before and shortly after the passing of Justice Antonin Scalia, as well as an experimental design embedded within the second cross-section, to examine how a sudden vacancy impacts attitudes toward the Supreme Court. Exposure to information regarding the legal importance of filling the vacancy, when coupled with exposure to legitimating judicial symbols, positively influences diffuse support. Democratic respondents, who stood to gain on policy grounds, were particularly susceptible to increases in support. The power of judicial imagery is sufficient to increase positivity even in the face of intense politicization of the Court by the elected branches. Finally, I demonstrate that a particular variant of public support conditions interactions between the judiciary and Congress. First, I consider how Congress' commitment to acting on behalf of the public, as well as the difficulty of assessing diffuse support, incentivizes members of Congress to gauge short-term public support for the judiciary.
Then, I detail how the imprecise measurement of key concepts has limited empirical inquiry in this line of research, offer a corrective strategy, and validate that the new measure behaves in a manner consistent with theory. Lastly, I provide evidence that congressional willingness to offer discretion and resources to the judiciary is contingent upon short-term, ephemeral support for the Court, as opposed to long-term, diffuse support.

To my parents, who made me want to learn.

ACKNOWLEDGEMENTS

While I would generally prefer to forgo the sentimentalities that litter the next few pages, so much is owed to so many people. First, I would like to convey my great appreciation to my many colleagues who have made these projects – and these years – much better. Members of the American Politics Research Group – particularly Elizabeth Lane and Jessica Schoenherr – have provided extremely useful and constructive feedback that has helped me improve what follows. A special thanks goes to Bob "Mr. Bobbie Dario Rogers" Lupton, whose camaraderie has been and will continue to be greatly rewarding and whose advice and encyclopedic citation knowledge continue to expand the scope of my projects. Adam Enders deserves particular recognition. His thoughtful advice, assistance, and friendship have proven indispensable. Finally, Caelyn Ditz, who has shared in this expedition with me from day one, has strengthened my resolve at every step; although no level of thanks is sufficient, thank you for supporting me.

Several people deserve acknowledgment regarding the data used in this project. First, Thomas Hammond and Charles Ostrom graciously provided funding for data collection via Amazon's Mechanical Turk. I use those data in Chapter 2. Bob Lupton and Eric Juenke assisted in the collection of the data employed in Chapter 3. Thanks to them as well. I am thankful to Mike Nelson, who was a discussant on a panel at the 2016 Annual Meeting of the Southern Political Science Association where an early version of the analyses in Chapter 4 was presented, and to Mike Zillis, who played the same role with the analysis in Chapter 2 at the 2017 SPSA meeting. Justin Wedeking provided helpful commentary on the second essay, and Patrick Wohlfarth and Joe Ura offered truly invaluable advice on the third.

Finally, I owe an incalculable debt to the members of my dissertation committee. Ian Ostrander, who was a latecomer to this project, immediately provided thoughtful insights that have since been incorporated into this and other works. Cory Smidt helped me focus my academic efforts and has pushed me to consider what, exactly, legitimacy is and does. I would be remiss to not also thank Saundra Schneider, a member of my early guidance committee, for her counsel and, especially, her part in my years at the ICPSR Summer Program. Bill Jacoby deserves particular acknowledgment. He has provided for me a great deal of assistance well beyond this project and my research, although his advice has proven helpful in that realm as well. First and foremost, Professor Jacoby exposed me to a way of thinking about social science – and the social and political world more generally – that fundamentally informs my perspective on research and on our discipline and its purpose. Beyond this orientation toward our craft, Professor Jacoby has provided me (or at least been instrumental in the provision of) several opportunities that have offered countless benefits.
Most notably, my employment at the American Journal of Political Science and the ICPSR Summer Program has not only afforded distinct educational advantages, but has also guided my insights into our discipline, this form of education, and academic research. Professor Jacoby used to joke that he would turn me into a public opinion researcher one day. I hope he is okay only being half right, as that half can be attributed to his mentorship. I express my distinct gratitude to Bill for years of service, advice, and friendship.

Lastly, and most importantly, I express my deepest gratitude to Ryan Black. The space here is too limited to detail the impact Ryan has had on my scholarship, outlook, and life. He once wrote to me, "This is a bipolar profession. High highs and low lows," a truism if there ever was one. What followed was a microcosm of his guidance: "The key is to try and keep an even keel throughout it all." He has offered me the tools to succeed, has detailed for me the steps, procedures, and best practices, has provided comments on countless papers, and has told me how to deal with colleagues, students, and "the system." But, it is his sober and uncommonly sage advice – which I'll admit I haven't always understood at first – that has helped me define what my version of success is. Coming up with that definition, bar none, has been the most defining moment of my young career. I cannot begin to pay back what Ryan has given me. I only hope to pay it forward. I'm proud I am able to call him my mentor and lucky to call him a friend. Thank you, Ryan.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
Chapter 1: Introduction
Chapter 2: Extra-judicial Actor Induced Change in Supreme Court Legitimacy
  2.1 Elite Cueing and Support for the Supreme Court
    2.1.1 The Formation of Diffuse Support
  2.2 Expected Changes in Diffuse Support
  2.3 Data, Cues, and Methodology
  2.4 Malleability Results
  2.5 Affective-Cognitive Balance or Ideological Updating?
  2.6 Discussion
  APPENDIX
Chapter 3: Politicized Nominations and Public Attitudes toward the Supreme Court in the Polarization Era
  3.1 A Political Vacancy and Salient Non-Case Events
    3.1.1 Pre-Nomination
    3.1.2 Non-Case Events
    3.1.3 The "New Normal"
    3.1.4 Policy Losers and Political Perceptions
  3.2 Research Design
    3.2.1 Treatments
  3.3 Experimental Evidence
  3.4 Policy Losers and Diffuse Support
  3.5 Beyond Support: Investigating Political Perceptions of the Court
  3.6 Discussion
  APPENDIX
Chapter 4: Supreme Court Institutionalization and Congressional Appraisal of Public Support for the Judiciary
  4.1 Congressional Assessment of Public Support
  4.2 Question Wording, Confidence, and Public Support
  4.3 Measurement Strategy: Methodology and Results
    4.3.1 Developing the Series
    4.3.2 Variable Measurement & Coding
    4.3.3 Measurement Strategy: Analysis and Results
  4.4 Supreme Court Institutionalization
  4.5 Conclusion
  APPENDIX
BIBLIOGRAPHY

LIST OF TABLES

Table 2.1: Experimental Outcomes
Table 2.2: OLS Regression on Change in Legitimacy for Treatment
Table 2.3: Randomization Check for Clinton Sample
Table 2.4: Randomization Check for Trump Sample
Table 2.5: Question Wording, Descriptive Statistics, and Psychometric Properties of Legitimacy Battery
Table 2.6: OLS Regression on Change in Legitimacy
Table 2.7: OLS Regression on Change in Legitimacy w/ Controls
Table 2.8: OLS Regression on Change in Legitimacy for Treatment w/ Controls
Table 3.1: Wilcoxon Signed-Rank Test
Table 3.2: Wilcoxon Signed-Rank Tests for Partisan Self-Identification
Table 4.1: ADL on Effects on Confidence in the Supreme Court
Table 4.2: Error Correction Models of Supreme Court Institutionalization
Table 4.3: Question wordings
Table 4.4: Variable coding and sources
Table 4.5: Augmented Dickey-Fuller Unit Root Tests
Table 4.6: Multicollinearity Diagnostics
Table 4.7: Autocorrelation Tests for Various Lag Orders
Table 4.8: Effect of IVs with Alternate Lag Orders for Diffuse ECM
Table 4.9: Granger Causality Test for Diffuse Series
Table 4.10: Replication of Ura & Wohlfarth for 1977-2004
Table 4.11: Replication of Ura & Wohlfarth with Ephemeral Series for 1977-2004

LIST OF FIGURES

Figure 2.1: Density of change in legitimacy for Clinton sample. Dotted line refers to control group, solid line to the treatment group.
Figure 2.2: Effect of treatment on change in diffuse support across affect toward Clinton (left) and Trump (right). Gray line represents estimated effect of control and the black line treatment; dashed lines are 95% confidence intervals around those estimates.
Figure 2.3: Effect of affect and change in ideological distance on change in diffuse support. For the affect figures, in the left column, lines represent estimated effect of affect on change in diffuse support; dashed lines represent 95% confidence intervals around those estimates. For the change in ideological distance figures, in the right column, circles represent point estimates for each observed value of change in ideological distance on change in diffuse support; vertical bars represent 95% confidence intervals around those estimates.
Figure 3.1: Dotplot of paired difference in means tests across experimental treatment.
Each column, separated by vertical dotted line, contains a pair of plotting symbols which represent mean diffuse support response (0-1 scale) for those who received the treatment listed on the x-axis; within each column, closed circle represents mean support for wave 1 & closed square represents mean support for wave 2. Vertical bars are 95% confidence intervals around mean estimates. Annotations at the bottom of each column are p-values for those relationships. Red annotation denotes p < 0.05 with respect to a two-tailed test.
Figure 3.2: Dotplot of paired difference in means tests across partisan self-identification. Each column, separated by vertical dotted line, contains mean estimates for each group; closed circles represent Democrats and closed squares represent Republicans. Within each column, for each party identification, the symbol on left is mean support for wave 1 & symbol on right is mean support for wave 2. Vertical bars are 95% confidence intervals around mean estimates. Annotations at the bottom of each column are p-values with respect to a two-tailed test for those relationships.
Figure 3.3: Dotplot of paired difference in means tests across experimental treatment. Each column, separated by vertical dotted line, contains a pair of closed circles, which represent mean politicization response (0-1 scale) for those who received the treatment listed on the x-axis; within each column, closed circle on left is mean politicization for wave 1 & closed square on right is mean politicization for wave 2. Vertical bars are 95% confidence intervals around mean estimates. Annotations at the bottom of each column are p-values with respect to a two-tailed test for those relationships.
Figure 3.4: Photograph showing the adornments of Scalia's chair and the bench in front of his chair following his passing. Respondents assigned to the judicial symbols treatment groups viewed this photograph. Photograph from the Supreme Court of the United States.
Figure 3.5: Histogram of media exposure. Larger values indicate greater exposure to news stories.
Figure 4.1: Support for the United States Supreme Court. Circles represent values from individual surveys. Line indicates estimated confidence. Left plot displays values for the diffuse series; right for ephemeral. Larger values indicate a larger percentage of survey respondents reporting that they are confident in the Court.
Figure 4.2: Correlation between alternate starting values for 'not confident' series. 16.67 is the mean value and the value used to initialize in the main text.
Figure 4.3: Correlation between alternate starting variances for 'not confident' series. 25 is the variance used in the main text.

Chapter 1: Introduction

Do American citizens trust the U.S. Supreme Court to make decisions that are just, fair, and appropriate? Do they believe the institution to be authoritative and an effective agent of the public will? Are they predisposed toward respecting the decisions the judiciary makes, regardless of whether they support them politically? For decades, the scholarly answer to these questions was, generally, yes. More recently, new research has cast doubt on the indomitability of positive public assessments of the judiciary. It is decidedly the case that members of the mass public believe the federal judiciary to be legitimate and that they are diffusely supportive of its actions (see Caldeira and Gibson, 1992; Gibson, Caldeira and Baird, 1998; Gibson, Caldeira and Spence, 2003b).
This support is only weakly related to immediate satisfaction with outputs (see Gibson, Caldeira and Baird, 1998), but is strongly linked to perceiving the procedure to be just (e.g., Baird and Gangl, 2006; Tyler, 2006, 2007) and fundamental values such as support for the rule of law (e.g., Gibson and Nelson, 2015). Preexisting psychological attachments – a "positivity bias" (e.g., Gibson, Caldeira and Spence, 2003a) – predict future support, even when politically displeasing decisions are made in intervening periods (Gibson and Caldeira, 2009b; Gibson, Caldeira and Spence, 2003b). Further, legitimizing judicial symbols, like robes and gavels, influence one's willingness to accept outcomes (Gibson, Lodge and Woodson, 2014).

Yet, the average American frequently relies on shortcuts when making political judgments. For instance, individuals look to cues from political elites or partisan information to determine "what goes with what" in politics (e.g., Bullock, 2011; Campbell et al., 1960; Rahn, 1993; Zaller, 1992). Specifically, Nicholson and Hansford (2014) show that partisan cues influence the acceptance of judicial outcomes. While it is true that evaluations of the Supreme Court are, simply, "different" than evaluations of other political institutions as a result of positivity bias, humans are subject to too many cognitive biases, psychological shortcuts, and political heuristics to assert that evaluations of the judiciary are, at a fundamental level, entirely dissimilar from other political assessments.

Consider, for example, a self-identified conservative who supports federal spending on schools, believes in common sense gun control, is a proponent of LGBT rights, and believes affirmative action is an appropriate method by which to correct societal shortcomings. Such an individual would be what Ellis and Stimson (2012) deem a "conflicted conservative," or one who identifies symbolically with the label of conservatism but supports decidedly liberal policy positions (also see Free and Cantril, 1967). As Bartels and Johnston (2013) and Hetherington and Smith (2007) demonstrate, it is possible that Supreme Court decisions upholding the Affordable Care Act or striking down abortion restrictions in Texas – outcomes a conflicted conservative should support operationally – would draw this individual's ire because they were liberal decisions. In other words, irrespective of true policy congruence, the mere perception that one is ideologically distant from the Supreme Court is sufficient to reduce support for the institution. In addition to conflict in policy preferences, individuals have difficulty placing the Court in ideological space, and may further misperceive their relationship to judicial rulings (see Bartels and Johnston, 2013; Hetherington and Smith, 2007). It is precisely these types of phenomena that are influencing evaluations of the federal judiciary and which deserve greater scholarly attention.

Such matters are scarcely trivial. As Hamilton notes in Federalist No. 78 (1788), "...the judiciary is beyond comparison the weakest of the three departments of power." Because the Supreme Court is equipped with no resources to enforce its decisions, it must rely on the amity of the branches beholden to the public. That is, the judiciary must rely on public goodwill to catalyze the legislature and executive to enforce Court decisions, provide it with adequate resources, and offer sufficient deference to ensure an independent judicial branch.
At the extreme, a public entirely unsupportive of the judiciary could spur a constitutional crisis where the elected branches may simply ignore Court rulings. Or, the elected branches could statutorily remove all but ministerial powers from the nation's high court. Herein lies the importance of understanding the formation, stability, and influence of public support for the judiciary.

Confidence in American institutions is trending downwards.1 So too is trust in government.2 As support decreases across the board, only the judiciary is constitutionally ill-equipped to weather the negativity storm. The elected branches derive their legitimacy from regularly scheduled elections; Congress can, and occasionally does, function and fulfill its constitutional duties with single-digit approval ratings. While the reservoir of goodwill toward the Court may be wide and deep (see Easton, 1965), the institution relies much more heavily on support as a unique form of political capital than do the other branches of government (Caldeira and Gibson, 1992). The conventional wisdom regarding legitimacy for the Courts may no longer describe the state of the world, and to assume stability in such assessments may be to disregard evidence of diminishing positivity. Conversely, positivity bias may be capable of withstanding the increasingly polarized and hostile political landscape.

The broad goals of this project are twofold. First, I set out to determine how malleable diffuse support for the Supreme Court is, what can alter legitimacy attitudes, and what can safeguard against reductions in this crucial form of political capital. My contributions highlight the importance of considering the Supreme Court as a player in the larger political realm. The Court is spared when the elected branches treat it in an overtly political manner and it appears able to withstand direct indictments by political figures. But, changes in evaluations of the Court are subject to psychological attachments to parties and politicians, indicating that assessments of the Court are not free from broader political attitudes. Secondly, I turn to institutional interpretation of public support and investigate the types of information members of Congress rely on when making decisions about the Court's independence. The durability of public support for the Court is all for naught if the elected branches are interpreting fleeting disagreements with policy as preferences regarding institutional arrangements. Indeed, this appears to be the case, raising questions about how the unelected Supreme Court maintains such a wealth of independence in the current constitutional order.

1 http://www.gallup.com/poll/1597/confidence-institutions.aspx
2 http://www.people-press.org/2015/11/23/beyond-distrust-how-americans-view-their-government/

Chapter 2: Extra-judicial Actor Induced Change in Supreme Court Legitimacy

President Donald Trump has proven to be an effective rhetorician, inducing action from corporations and consumers alike when he makes proclamations. For instance, his tweet "Cancel order!" to Boeing compelled the aircraft manufacturer to commit to rein in costs of the new Air Force One project and to donate to Trump's inauguration. Even actions only tangential to Trump have spurred action amongst consumers; #BoycottNiemans, #BoycottStarbucks, and #DeleteUber are grassroots responses to various actions of companies perceived to either support or oppose President Trump.
As journalists write, "...the heads of big American companies are being confronted by a leader willing to call them out directly and publicly for his policy and political aims" (Shear and Drew, 2016). Perhaps most striking is that 51% of Trump supporters agree with his claim that the media is the enemy. This is all to say that people react when Trump speaks, be it via boycott, "buycott," or altering or entrenching one's political attitudes.

Under scrutiny here is what might happen should the U.S. Supreme Court become the subject of Trump's ire. That is, what is the outcome when a president who effectively compels action with his words sets his sights on an institution for which there is a strong basis of support, that is viewed as highly legitimate, and that relies on public support to induce the elected branches to enforce its decisions? More broadly, can political actors – such as President Trump or one-time presidential candidate Hillary Clinton – compel individuals to reevaluate their attitudes toward the Supreme Court and disrupt the delicate separation of powers balance? And, if so, are individuals altering their attitudes in a strictly affective manner or are they learning something about the ideological location of the Court?

Members of the American public largely believe that the Supreme Court is worthy of trust and that its actions are legitimate (Caldeira and Gibson, 1992). These psychological attachments to the judiciary – termed institutional legitimacy or diffuse support (Caldeira and Gibson, 1992) – tend to be connected to enduring orientations such as democratic values (Gibson and Nelson, 2015) and support for procedural justice (Baird, 2001; Tyler, 2006). Yet, recent research suggests there is mobility in legitimacy attitudes and that they are more closely connected to performance evaluations and political cues than previously believed (Bartels and Johnston, 2013; Christenson and Glick, 2015; Clark and Kastellec, 2015). Thus, there is conflicting evidence on whether positivity toward the Court can be altered. On the one hand, some argue that a wealth of positive attitudes insulates the judiciary even when it has made an unpopular decision (Gibson, Caldeira and Spence, 2003b). On the other hand, some salient and politically charged cases may cause people to reevaluate their position vis-à-vis the judiciary and, ultimately, adjust their level of support (Christenson and Glick, 2015). Further, misperceptions of the ideological location of the Supreme Court appear capable of driving individual-level support for the institution (Hetherington and Smith, 2007; Bartels and Johnston, 2013). Here, I set out to determine whether those misperceptions can be manipulated by extra-judicial political actors such as President Trump.

There is little question that sustained disappointment with outcomes will lead to less support. It is the swiftness with which these changes occur that is open to debate. Further, it is assumed that individuals only adjust their assessments of the judiciary following the actions of the Court itself. Yet, members of the mass public frequently rely on heuristics and various source cues when generating opinions (Lupia, 1994; Goren, Federico and Kittilson, 2009; Clark and Kastellec, 2015). As Nicholson and Hansford (2014) relate, "In making political judgments, the public is most likely to draw on trusted and credible source cues" (2).
Relatively unexamined in this line of research is the role of more expressly political figures in assessments of diffuse support for the Court. While evaluations of other political institutions are related to support for the Supreme Court (e.g., Caldeira, 1986; Ura and Wohlfarth, 2010), to the best of my knowledge no scholarship asks if individual political figures can cause modifications of individual levels of legitimacy (although see Dolbeare and Hammond, 1968, who demonstrate that public attitudes toward the Court are related to whether one's preferred political party controls the White House). I suggest that individuals may desire cognitive balance when considering their preferred political figures in relation to support for the judiciary. This is an important consideration, as political figures frequently discuss the Court, its actions, and its actors. Should individuals alter their attitudes to align with frequent and occasionally disparate statements made by politicians, it calls into question whether attitudes regarding the Court are derived from assessments of the judiciary alone.

In this paper, I use two original survey experiments to test whether salient political figures – in this case, Donald Trump and Hillary Clinton – are capable of modifying individual-level positivity toward the judicial branch by making statements that indict the Court. Further, I ask whether any changes that may occur are driven by affective motivated reasoning or ideological updating. That is, are changes in evaluations of the Court a function of one's affect toward Trump or Clinton, or of receiving information from those sources and updating one's position vis-à-vis the Court ideologically? The results are clear: diffuse support is malleable and alterations are affective. Individuals who dislike a political figure increase their level of support for the Supreme Court after exposure to that person's negative statements and, at least for Clinton, vice versa. There is a 15% difference in the change in evaluations of the Court across the range of support for Clinton amongst those in the treatment group and a similar 13% difference in the change in evaluations of the Court across the range of support for Trump.

This study directly links statements of individual politicians – specifically, a presidential candidate and a president-elect, both of whom were in positions to frequently discuss the Supreme Court – to changes in diffuse support for the judiciary and demonstrates that those changes are affective in nature. Previous studies have linked particular cues to alterations in support (e.g., Christenson and Glick, 2015; Clark and Kastellec, 2015), but none have simultaneously examined diffuse support, individual political figures, and the mechanisms underpinning attitudinal change. These effects have very serious potential consequences for the Supreme Court's ability to produce decisions that are enforced. The public plays a crucial role in the separation of powers exchange such that the elected branches are compelled to offer deference to the Court when the public is supportive (Clark, 2009; Ura and Wohlfarth, 2010). That members of the elected branches, or salient political figures more generally, may be capable of altering this support is troublesome, as it would offer these institutions license to curb court authority.
2.1 Elite Cueing and Support for the Supreme Court

Downs (1957) famously noted that members of the mass public "cannot be expert in all the fields of policy...Therefore, [one] will seek assistance from [those] who are experts in those fields, have the same political goals...and have good judgment" (233). Others suggest that the masses look to the elites to find out "what goes with what" in politics (Zaller, 1992). In other words, individuals can easily obtain information about political stimuli and form attitudes by looking to their preferred political leaders. Researchers argue that there is a "dominating impact" of group influence on political beliefs (Cohen, 2003) and that political elites frequently lead this influence (Zaller, 1992). Campbell, Converse, Miller and Stokes (1960) characterize political parties as "a supplier of cues by which the individual may evaluate the elements of politics" (128). Even when individuals are capable of making informed decisions, they frequently conform to the positions advocated by their preferred partisan group and "neglect policy information in reaching evaluations" (Rahn 1993, 492; see also Bullock 2011; Iyengar and Valentino 2000; Zaller 1992). And, particularly important for the purposes here, political information can actually produce changes in assessments; partisan information motivates individuals to align with their party when they initially indicated reticence to do so (Dilliplane, 2014). While there is some skepticism regarding the degree to which these source cues alone cause opinion change (Nicholson, 2011), it is generally accepted that cues exert a formidable influence on opinion.

While these elite cues typically lead opinions on things like public policy preferences, there is little reason to believe that one's evaluations of the judiciary should be free from elite cueing, group attachments, and informational short-cuts. Indeed, partisan cues impact the degree to which one accepts particular decisions of the Court; Nicholson and Hansford (2014) show that partisan attributions (e.g., a "Republican" Court decision) impact acceptance of that decision more than the "imprimatur" of the Court. Likewise, Clark and Kastellec (2015) find that individuals oppose court curbing measures when out-party officials have advocated for their use. Concisely, elite cues are effective when it comes to attitudes about the Court. But, the Court is unique in its level of preexisting support; legitimacy is a function of factors more stable than simply the Court's outputs (Gibson, Caldeira and Baird, 1998; Gibson and Nelson, 2015). Here, I consider the consequence of "checking in" on the Court, or receiving information about the institution, when legitimating forces are not present (i.e., during a routine political event). In this study, I provide individuals with an information source that only some survey participants will find credible in order to determine if such sources are capable of impacting diffuse support. Specifically, I ask if negative statements regarding the Supreme Court by then-presidential candidate Hillary Clinton or then-president-elect Donald Trump are capable of altering individual levels of support for the judiciary. Further, where other studies examine support for particular decisions (Nicholson and Hansford, 2014) or whether the Court itself can impact support (Salamone, 2013; Zink, Spriggs and Scott, 2009), this study asks if source cues can impact diffuse support broadly.
Although Clark and Kastellec (2015) examine broad levels of support, they comment that the items they utilize to tap support are different from previous studies, "incorporate aspects of both diffuse and specific support," and that "these distinctions pose challenges of interpretation in the framework of diffuse and specific support" (525). Thus, there are two major differences in my study. First, I ask if an individual political figure is capable of moving attitudes toward the Supreme Court. Second, I ask if diffuse support is alterable.

2.1.1 The Formation of Diffuse Support

In order to understand how cues can influence attitudes regarding the judiciary, it is important to note the psychological underpinnings of diffuse support. One process by which diffuse support is built is through a sequence of decisions with which one agrees on policy grounds (Gibson, Caldeira and Baird, 1998). Some argue that support is merely a "running tally," where individuals record favorable and unfavorable outcomes (Baird, 2001). Under this conceptualization, it may be possible to increase the tallies in the favorable column swiftly. Previous research suggests that the converse may not be true; dissatisfaction with decisions only produces short-term alterations to diffuse support (Durr, Martin and Wolbrecht, 2000; Mondak and Smithey, 1997). That is, tallies in the unfavorable category quickly dissipate and only briefly factor into the overall calculation of support. However, scant attention has been paid to extra-judicial causes of unfavorability and whether such forces can alter support to a greater degree than the Court's own rulings. More concisely, while dissatisfaction with Court outputs quickly cedes to the democratic values that underpin support for the Court (Durr, Martin and Wolbrecht, 2000; Mondak and Smithey, 1997; Ura, 2014), other political stimuli – like a politician – may more fundamentally alter the considerations one makes when determining her level of support.

Recently, scholars have shown that subjective ideological (dis)agreement – a form of satisfaction with the Court's job performance – is related to legitimacy assessments (e.g., Bartels and Johnston, 2013; Christenson and Glick, 2015); up-to-date perceptions of the ideological distance between oneself and the Court predict the level of support one offers the judiciary. Those who perceive themselves to be closer to the Court ideologically, regardless of the Court's true position on the left-right policy continuum, attribute more support, and vice versa. Thus, it is not necessarily the number of tallies in the favorable column that influences support, but what types of information potential tallies in either column provide regarding one's position vis-à-vis the ideological position of the Court. As Gibson and Nelson (N.d.) note, developing these up-to-date perceptions of subjective agreement is a two-step process where "...(1) citizens evaluate the [Court's] decision, and then (2) recalculate the ideological distance between themselves and the Court, as revealed by its new decision" (4). Just as a salient Supreme Court decision can influence one's running tally, so too may other political evaluations. In other words, it is possible that individuals use some political stimulus other than a Court decision and recalculate their relationship to the Court as revealed by that political information. Indeed, many assessments of political institutions and actors are impacted by politically motivated covariates.
For instance, presidential approval is affected by evaluations of economic performance (Burden and Mughan, 2003; Norpoth, Lewis-Beck and Lafay, 1991). Likewise, certain operationalizations of support for the Supreme Court are a function of presidential approval and political events unrelated to the judiciary (Caldeira, 1986; Ura, 2014). Therefore, there is reason to suspect that various assessments of the Court are not free from other political evaluations. One's running tally of support for the Court may be influenced by non-judicial political stimuli.

Finally, individuals may face some level of cognitive dissonance when faced with competing information regarding the Supreme Court. Because support for the Court is generally high, external challenges to the Court, particularly from a credible or favored source, may produce inconsistency in evaluations. The purpose of this study is to produce and record the impact of such inconsistencies and to determine if they are affective or a product of updating upon gaining new information. On the one hand, consistency theory (Zimbardo and Leippe, 1991) suggests that individuals desire consistency between their attitudes and will alter one or both to achieve relative balance. Further, this affective-cognitive consistency suggests that adding new information to the "attitude system" – typically via a persuasive message – may bring the attitudes into balance (Simonson, 1995; Zimbardo and Leippe, 1991). It is through this framework that I examine alterations to attitudes regarding the Supreme Court following the introduction of new information. Here, and as described in greater detail below, the new information is the knowledge that Hillary Clinton or Donald Trump is not supportive of the Supreme Court. Thus, consistency theory suggests that individuals who are affectively positive toward Clinton or Trump must reconcile their beliefs regarding the Supreme Court with that valence. On the other hand, some scholars suggest that priming experiments or source cues merely inform respondents, allowing them to give informed responses to survey items (Lenz, 2009; Nicholson, 2011; Tesler, 2015). That is, there is a learning process that occurs. In this study, the mechanism of change can be determined. I record subjective ideological disagreement with the Supreme Court both pre- and post-treatment. If individuals learned the position of the Supreme Court and altered their views accordingly, they will be said to have updated ideologically; individuals who exhibit no updating have responded to the cue.

2.2 Expected Changes in Diffuse Support

The expectation is that priming survey respondents to consider their attitudes toward political figures when evaluating the Supreme Court can spur changes in those evaluations. However, given that partisan affect has powerful motivating properties (e.g., Iyengar and Westwood, 2015) and only credible sources are persuasive (Sternthal, Dholakia and Leavitt, 1978), only some respondents will find each figure's commentary compelling. This leads to the following hypotheses:

Diffuse Support Hypothesis: Affect toward Clinton/Trump will not impact support for the Supreme Court.

Support Malleability Hypothesis: Individuals who have high (low) affect toward Clinton/Trump will attribute less (more) support to the Court relative to baseline levels of support following treatment.
Judicial Autonomist/Court Hostile Hypothesis: Individuals who have high (low) affect toward Clinton/Trump will attribute more (less) support to the Court relative to baseline levels of support following treatment.

These hypotheses are depicted in Table 2.1.

Table 2.1: Experimental Outcomes

Affect     ∆ in Legitimacy    Outcome
Positive   Positive           Judicial Autonomists
Positive   Negative           Support Malleable
Positive   None               Diffuse Supporters
Negative   Positive           Support Malleable
Negative   Negative           Court Hostile
Negative   None               Diffuse Supporters

Note that Trump and Clinton's "statements" are negative, meaning agreement with them is in opposition to the Court. Those who display no change – conditions in dark gray – are "Diffuse Supporters," steadfast in their opinions toward the Court; this is the null hypothesis. Those who have positive (negative) affect for the political figures and whose ascription of legitimacy is lower (higher) after the treatment – the conditions highlighted in light gray in Table 2.1 – are "Support Malleable," meaning their attitudes toward the Court can be shaped by non-Court political figures. The other two conditions are not expected to occur systematically. First, those who have positive affect toward Trump or Clinton but state a greater amount of legitimacy following the treatment are deemed "Judicial Autonomists" – those who show more support for the Court than for their preferred political candidate, possibly because they believe the Court should be free of partisan politics. Finally, those who dislike Clinton or Trump but still attribute less support following the treatment – "Court Hostile" – may simply be amenable to any criticism of the Court, regardless of source.

2.3 Data, Cues, and Methodology

This study asks two major questions. First, can elites move diffuse support? That is – in light of recent discoveries challenging the conventional wisdom that attitudes regarding the judiciary are stable (Bartels and Johnston, 2013; Gibson and Nelson, 2015) – is diffuse support free from the considerations and political biases that impact other political evaluations? Or, is support for the Court unique in that it is a function only of Court behavior? While the Court is uniquely able to confer legitimacy upon its own decisions (e.g., Salamone, 2013; Zink, Spriggs and Scott, 2009), it is not clear that considerations of the Court are exclusively related to the judiciary. Second, should changes in evaluations of the Court be present, are they motivated by the consistency-in-evaluations account or the informational account? That is, are individuals forced to reconcile different attitudes toward two political stimuli, or do they learn new information about the Court and adjust assessments accordingly?

To test these questions, I conducted two original survey experiments, both recruited via Amazon's Mechanical Turk (MTurk). Although samples using MTurk as a recruitment tool are not as representative as national probability samples, they are generally valid (Berinsky, Huber and Lenz, 2012; Clifford, Jewell and Waggoner, 2015), are particularly useful for experimental designs (Horton, Rand and Zeckhauser, 2011), and have commonly been used to study public attitudes toward the Court (Christenson and Glick, 2015; Clark and Kastellec, 2015). First, in October 2016, 708 respondents were randomly assigned to either the Hillary Clinton treatment or the control group. In December 2016, 503 respondents were randomly assigned to either the Donald Trump treatment or the control group.
After recording baseline levels of diffuse support using the Gibson, Caldeira and Spence (2003a) legitimacy battery, which asks individuals whether they agree with statements such as "The Court gets too mixed up in politics," respondents from each survey were randomly assigned to either the control or treatment group.1 The treatment groups were presented with a vignette that read:

Recently, in a speech given to supporters, Democratic presidential candidate Hillary Clinton (president-elect Donald Trump) made some controversial remarks regarding the United States Supreme Court. Below, some of her (his) critiques will be paraphrased. Please indicate your level of agreement with Hillary Clinton's (Donald Trump's) statements.

Then, these respondents were presented with the original legitimacy battery items but were led to believe that Clinton (in the first survey) or Trump (in the second) had made those statements. For instance, instead of being asked whether they agree with "The U.S. Supreme Court ought to be made less independent so that it listens a lot more to what the people want," respondents were told "Hillary Clinton (Donald Trump) commented that 'The Supreme Court ought to be made less independent' so that it listens a lot more to what the people want. Do you agree or disagree?" The control group was simply asked to complete the legitimacy battery a second time without any vignette or cue. Importantly, I determined affect toward Clinton and Trump – as measured by a feeling thermometer ranging 0-100, where higher values indicate more positive or warm feelings – prior to random assignment to treatment groups.

For the second portion of the experiment, which determines whether the mechanism underlying changes in diffuse support is affective or a learning process, I measure subjective ideological disagreement with the Supreme Court both before and after treatment. Consistent with Bartels and Johnston (2013), ideological disagreement is measured as the absolute difference between one's ideological self-placement on a 5-point scale and one's placement of the Supreme Court on the same scale. Thus, the directionality of the disagreement does not matter, as both assessments are subjective and irrespective of operational ideology (see Ellis and Stimson, 2012). The variable ranges from 0-4, where 0 means there is no difference between one's placement of themselves and of the Court and 4 indicates maximal distance. For instance, one who leans liberal and believes the Court does as well will score 0; so too will one who leans conservative and believes the Court does. Conversely, one who leans liberal and believes the Court is solidly, but not extremely, conservative will score 3. The change in ideological distance – which ranges -4 to 4 – is then calculated by subtracting one's pre-treatment ideological distance from her post-treatment ideological distance.

1 Question wordings and descriptive statistics for the 7-item legitimacy battery appear in the supplemental materials. Individual diffuse support scores are factor scores following exploratory factor analysis and are rescaled 0-1 such that larger values indicate greater diffuse support. All scales generated using these items throughout this project have desirable psychometric properties, such as reliability (average Cronbach's alpha > 0.80) and unidimensionality (average eigenvalue for first unrotated factor > 3.0; for second < 1.0).
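To make these measures concrete, the sketch below is a minimal illustration rather than the dissertation's replication code: the data are simulated and the column names are hypothetical. Where the project builds diffuse support scores from exploratory factor analysis of the seven-item battery, the sketch simply averages the items before rescaling to 0-1; the reliability check and the ideological distance variables follow the definitions above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500

# Simulated 7-item legitimacy battery (1-5 agreement), administered before and
# after treatment, plus 5-point ideological placements of the self and the Court.
pre_items = pd.DataFrame(rng.integers(1, 6, (n, 7)), columns=[f"pre_q{i}" for i in range(1, 8)])
post_items = pd.DataFrame(rng.integers(1, 6, (n, 7)), columns=[f"post_q{i}" for i in range(1, 8)])
self_pre, court_pre = rng.integers(1, 6, n), rng.integers(1, 6, n)
self_post, court_post = rng.integers(1, 6, n), rng.integers(1, 6, n)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Scale reliability; the text reports an average alpha above 0.80."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def diffuse_support(items: pd.DataFrame) -> pd.Series:
    """Summarize the battery and rescale 0-1 so larger values mean more support."""
    score = items.mean(axis=1)
    return (score - score.min()) / (score.max() - score.min())

# Change in legitimacy: post-treatment score minus pre-treatment (baseline) score.
delta_support = diffuse_support(post_items) - diffuse_support(pre_items)

# Ideological disagreement: absolute gap between self- and Court placements (0-4).
# Its change (post minus pre) runs from -4 to 4; negative values mean the respondent
# now perceives herself as closer to the Court.
dist_pre = np.abs(self_pre - court_pre)
dist_post = np.abs(self_post - court_post)
delta_distance = dist_post - dist_pre

print(round(cronbach_alpha(pre_items), 2), round(delta_support.mean(), 3), np.unique(delta_distance))
```

Averaging the items is only a stand-in for the factor-score approach; with a unidimensional, highly reliable battery the two summaries are typically very close, which is why the shortcut is adequate for illustration.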
In order to determine whether elite condemnations of the Court can impact public support for the judiciary, I simply subtract one's pre-assignment legitimacy score from their post-assignment legitimacy score. The result is the change in one's assessment of diffuse support. Such a calculation is consistent with other work examining changes in diffuse support (e.g., Christenson and Glick, 2015). The distribution of these changes for the Clinton sample is displayed in Figure 2.1 for both the control and treatment groups. As can be seen, there is much greater variance in the treatment group's diffuse support distribution; the variance in the treatment distribution is 1.4 times that of the control distribution. A Kolmogorov-Smirnov test indicates that there is a significant difference in the overall distribution of change in diffuse support for the two groups (p = 0.02); the same test yields the same conclusion for the Trump sample (p = 0.00). This is to be expected. To be clear, the only difference between the two distributions in Figure 2.1 is that those in the treatment group believe Hillary Clinton has made negative statements about the Supreme Court, where those in the control group have responded to the same negative sentiments that are unattributed to any particular political actor.

Figure 2.1: Density of change in legitimacy for Clinton sample. Dotted line refers to control group, solid line to the treatment group.

Next, I determine the effect of Clinton Affect and Trump Affect on the change in support for the Supreme Court. In order to do so, I estimate two separate models. The first regresses the change in legitimacy onto Clinton affect, a binary variable indicating presence in the control group or treatment condition, and an interaction between the two. The data used for this model come from the first experimental sample. The second does the same, but uses Trump affect; the data for this model come from the second experimental sample. Because assignment to the treatment group was randomized and randomization was successful, I exclude control variables, such as democratic values, how politicized one believes the Court to be, and demographic characteristics.2

2.4 Malleability Results

Following the advice of Brambor, Clark and Golder (2006), I interpret the results of these interactive models graphically and omit a results table.3 Figure 2.2 displays these results. The results for the Clinton experiment appear on the left and those for the Trump experiment on the right. In each, the gray line represents the estimated effect across the range of affect toward the respective political figure on the change in diffuse support for the control group; the black line represents the same but for the treatment group. Dashed lines are 95% confidence intervals around the estimates.4

I begin with the control group (gray line) in the Clinton experiment (at left). Simply, individuals in the control group do not change their assessments of the Supreme Court based on evaluations of other political stimuli. That is, assessments of legitimacy are not dependent upon feelings toward Hillary Clinton. The same is true of the Trump experiment (gray line at right). These findings are consistent with expectations regarding the general stability of diffuse support (Caldeira and Gibson, 1992; Tyler, 2006). On the other hand, there are conditional effects of affect for both treatment groups.
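Before turning to those conditional effects, the following sketch shows how the interactive specification described above, and the graphical summary recommended by Brambor, Clark and Golder (2006), can be produced. It is a hedged illustration with simulated data and hypothetical variable names (affect, treat, delta_support), not the dissertation's code: it regresses the change in diffuse support on affect, a treatment indicator, and their interaction, then computes the predicted change (with 95% confidence intervals) for each group across the 0-100 affect scale, which is the quantity drawn as the gray and black lines in Figure 2.2.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 700
df = pd.DataFrame({
    "affect": rng.integers(0, 101, n),   # 0-100 feeling thermometer toward the cue-giver
    "treat": rng.integers(0, 2, n),      # 1 = negative statements attributed to the cue-giver
})
# Simulated outcome: in the treatment group, warmer feelings toward the cue-giver
# translate into larger drops in diffuse support.
df["delta_support"] = df["treat"] * (0.10 - 0.002 * df["affect"]) + rng.normal(0, 0.10, n)

# Distributional comparison reported in the text: Kolmogorov-Smirnov test on the
# change-in-support distributions for the treatment and control groups.
ks = stats.ks_2samp(df.loc[df["treat"] == 1, "delta_support"],
                    df.loc[df["treat"] == 0, "delta_support"])

# OLS of the change in legitimacy on affect, treatment, and their interaction.
model = smf.ols("delta_support ~ affect * treat", data=df).fit()

# Predicted change in support (with 95% CIs) for each group across the affect scale.
grid = pd.DataFrame({"affect": np.tile(np.arange(0, 101, 10), 2),
                     "treat": np.repeat([0, 1], 11)})
pred = model.get_prediction(grid).summary_frame(alpha=0.05)

print(f"KS p-value: {ks.pvalue:.3f}")
print(model.params)
print(pred[["mean", "mean_ci_lower", "mean_ci_upper"]].head())
```

Plotting the predicted values and their interval bounds against affect, separately for the two groups, reproduces the style of display in Figure 2.2.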
Beginning with Clinton, at left, for cold or negative feelings, from around 0-30, the effect is positive, suggesting that individuals who dislike Clinton increase their level of support for the Supreme Court upon hearing her criticisms of the justices and the judiciary. For moderate values of Clinton affect, around 30-55, there is no statistically significant effect. Finally, for high values, around 55-100, the effect of treatment on changes to diffuse support is negative, indicating that those who have positive feelings toward Clinton decrease their level of support for the Court upon hearing Clinton's negative statements about the institution. I discuss the substantive effects of affect on changes in diffuse support for the treatment groups below.

Figure 2.2: Effect of treatment on change in diffuse support across affect toward Clinton (left) and Trump (right). Gray line represents estimated effect of control and the black line treatment; dashed lines are 95% confidence intervals around those estimates.

Moving next to the Trump experiment (at right), the results hold for negative/cold feelings toward Trump, but are different for moderate values and those with positive/warm feelings. First, from cold to moderate values, from 0-56, individuals increase their level of support for the Supreme Court upon hearing Trump's criticisms of the judiciary. The effect is inconclusive from 57-100. I suspect this has little to do with Trump himself, although I can only speculate. There may be a substantive difference between considering how a presidential candidate treats the judiciary versus the president-elect. That is, responses may differ for a hypothetical situation compared to a tangible situation. From a neurobiological perspective, Kang, Rangel, Camus and Camerer (2011) show that offering research subjects hypothetical versus real choices activates different portions of the brain that value the stimuli differently. Given the well-established anterior support for the Court (e.g., Gibson and Caldeira, 2009b), citizens may evaluate criticisms of the Supreme Court prior to the election on different grounds than after the election. That is, citizens may be more wary to vocalize dissatisfaction with the Court once a political figure has the force of an entire branch behind his or her words relative to when that was simply a hypothetical situation.

These results are consistent with the support malleability hypothesis presented above. A salient extra-judicial actor can indeed alter levels of support for the Supreme Court. On the one hand, individuals who have high affect toward Trump or Clinton internalize their critiques of the Supreme Court and attribute less support to the judiciary (and vice versa). However, as the Trump experiment shows, while the ability to alter support is evident, the ability to decrease support is more nuanced.

2 See supplemental materials for randomization check information. Also included in these materials are regression models including control variables; no statistical or substantive conclusions presented here change in the presence of controls.
3 See supplemental materials for regression table.
4 Note that the confidence intervals pinch around the estimate near zero for the control group due to the wealth of respondents who rated their warmth toward both Clinton and Trump as 0 on the feeling thermometer.
Given that diffuse support records the extent to which an individual believes an institution, its actors, and actions are "appropriate, proper, and just" (Tyler, 2006, 375), it is troublesome to discover that simply believing that a political figure has condemned the Court can alter one's attitudes, even if it is in a positive manner. There are several possibilities for why this may be the case. First, diffuse support may be more strongly connected to political evaluations than previously believed. However, given the universal power and durability of legitimacy (Gibson, Caldeira and Baird, 1998), this claim seems unlikely. Alternatively, diffuse support may be more strongly connected to political evaluations now than in the past. This may account for recent challenges to the conventional wisdom that diffuse support is solid (e.g., Bartels and Johnston, 2013; Christenson and Glick, 2015). Finally, the primacy of salient political cues may simply overpower other evaluations. Although legitimacy has not been harmed by polarization generally (e.g., Gibson, 2007), affective polarization impacts a broad swath of both political and non-political judgments (Iyengar and Westwood, 2015). Affect toward political figures – such as Hillary Clinton and Donald Trump – may be so strong as to compel individuals to alter their attitudes toward other political stimuli to align with that affect. Below, I consider whether changes in diffuse support are affective.

2.5 Affective-Cognitive Balance or Ideological Updating?

Next, I consider whether changes to diffuse support are a product of ideological updating or of trying to balance one's attitudes. I ask whether the affective or informational components of the elite cue dominate. More specifically, I determine whether respondents (1) attempt to bring their attitudes/beliefs into alignment using affective reasoning or (2) infer the ideological position of the Court using signals provided by the cue source, whose ideological position vis-à-vis the respondent is clearer than that of the Court. This is important because classical legitimacy theory suggests that evaluations of the judiciary should be institution-specific (e.g., Tyler, 2006). If citizens update their assessments of the Court as a function of affect toward political figures, they may deprive the Court of the political capital on which it relies based on extra-judicial information. On the other hand, altering assessments upon learning information from an extra-judicial source is institution-specific and consistent with legitimacy theory.

To be clear, the ideological updating mechanism suggests that an individual has difficulty placing the Court in ideological space but has a much easier time placing a well-known politician. If one knows her own position in relation to the politician and learns the position of that politician in relation to the Court, she can more easily place herself in relation to the Court. As opposed to evaluating a Court decision and recalculating one's ideological distance as revealed by that decision, one is evaluating the politician's signal as to the position of the Court and recalculating her ideological distance as revealed by the politician's placement of the Court.
I assume that people will be better able to place Clinton and Trump ideologically than the Court because, although the American public has not always demonstrated the ability to structure its political thinking ideologically (Converse, 1964; Lupton, Myers and Thornton, 2015) and does not accurately identify the Court’s ideological location (Hetherington and Smith, 2007; Bartels and Johnston, 2013), there is variability in the ideological content of various political stimuli (Jacoby, 1995). Given that presidential candidates are much more in the public eye than the Court and that their ideological and policy views are on display and under scrutiny, it seems intuitive that Clinton and Trump will be easier to place ideologically than the Court.

To test these theories, I estimate two models. For the first, I regress the change in legitimacy, operationalized in the same manner as above, onto Clinton affect and the change in ideological distance. Only individuals in the treatment group from the Clinton survey are included in this analysis, as only those exposed to the Clinton treatment would have the opportunity to learn from the priming cue about the Court’s ideological location. The same is true for the second model, swapping Trump for Clinton. Again, control variables are omitted due to the success of randomization. The results of these regressions appear in Table 2.2.

Table 2.2: OLS Regression on Change in Legitimacy for Treatment

                               Clinton                    Trump
Variable                  Coef. (SE)       β         Coef. (SE)       β
Affect                     0.00* (0.00)   −0.46       0.00* (0.00)   −0.34
∆Ideological Distance     −0.02  (0.12)   −0.07      −0.01* (0.01)    0.02
Constant                   0.13* (0.01)               0.12* (0.17)
Sample Size                348                        246
Adjusted R²                0.21                       0.11

Cell entries are OLS coefficients; standard errors in parentheses; β = standardized regression coefficients. DV is ∆Legitimacy from t1 → t2.

The evidence for affect in Table 2.2 is clear: changes in legitimacy are a product of one’s feelings toward Clinton or Trump. Figure 2.3 displays each of these effects, with affect in the left column and ideological distance in the right column; ideological distance for the Clinton (Trump) sample appears in the top right (bottom right). The results for Clinton affect conform to what is presented above; those who dislike Clinton increase their support for the Court after hearing her negative commentary and those who like Clinton decrease Court support. The same is true for Trump, although only extremely warm feelings produce decreases in support. The changes in diffuse support across the range of affect toward Clinton and Trump, respectively, are 15% and 13%. In other words, differences in affect – a persistent and powerful force in modern politics (Iyengar and Westwood, 2015) – can represent large changes in Supreme Court legitimacy. For instance, even one who only moderately disfavors Clinton may still alter her support for the Court by a tenth of the diffuse support scale.

Figure 2.3: Effect of affect and change in ideological distance on change in diffuse support. For the affect figures, in the left column, lines represent the estimated effect of affect on change in diffuse support; dashed lines represent 95% confidence intervals around those estimates. For the change in ideological distance figures, in the right column, circles represent point estimates for each observed value of change in ideological distance on change in diffuse support; vertical bars represent 95% confidence intervals around those estimates.
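As a rough illustration of the specification reported in Table 2.2, the sketch below fits the same kind of model on simulated data and recovers standardized (β) coefficients by re-estimating the equation on z-scored variables. The data frame and column names (clinton_treated, affect, d_ideo_distance, d_legitimacy) are hypothetical placeholders; this is a sketch of the general approach, not the replication code for the results above.

    # Sketch: OLS on change in legitimacy with standardized coefficients.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_with_betas(data, formula):
        """Fit an OLS model and also return standardized (beta) coefficients."""
        raw = smf.ols(formula, data=data).fit()
        z = (data - data.mean()) / data.std(ddof=0)   # z-score every variable
        betas = smf.ols(formula, data=z).fit().params.drop("Intercept")
        return raw, betas

    # Hypothetical stand-in for the 348 treated Clinton respondents.
    rng = np.random.default_rng(1)
    clinton_treated = pd.DataFrame({
        "affect": rng.uniform(0, 100, 348),
        "d_ideo_distance": rng.normal(0, 1, 348),
    })
    clinton_treated["d_legitimacy"] = (-0.002 * clinton_treated["affect"]
                                       - 0.01 * clinton_treated["d_ideo_distance"]
                                       + rng.normal(0, 0.05, 348))

    raw, betas = fit_with_betas(clinton_treated,
                                "d_legitimacy ~ affect + d_ideo_distance")
    print(raw.params)
    print(betas)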
Next, the evidence for change in ideological distance appears, at first glance, mixed, but a more nuanced story unfolds when examined graphically in Figure 2.3. Beginning with Clinton, at top right, despite the statistically insignificant average treatment effect, those who decrease their perceived distance from the Court following exposure to the treatment (i.e., those with negative scores on change in ideological distance) change their evaluation of the Court to a meaningful degree. The converse is not true. That is, those who believe themselves to be farther from the Supreme Court after hearing Clinton’s statements do not offer correspondingly less support. The same is true for the Trump sample. Those who believe themselves to be closer to the Supreme Court after hearing Trump’s statements offer more support. These findings are consistent with findings in Bartels and Johnston (2013) and Christenson and Glick (2015). Despite the substantively small and statistically mixed effects for individuals who perceive themselves to be closer to the Court following treatment, it is clear that the effect of affect is greater in both samples. That is, there is some evidence to support both the affective reasoning hypothesis and the ideological distance hypothesis, but the results for the affective reasoning hypothesis are much stronger. The expectation was that individuals who felt more distant from the Court after treatment would ascribe less legitimacy to the judiciary. This is not borne out in the data. It appears that the desire for cognitive-affective balance is greater than the effect of learning.

This finding directly confronts research regarding preexisting positivity toward the Supreme Court (e.g., Gibson and Caldeira, 2009a). On average, positivity toward the Supreme Court is high. For instance, average affect toward the Court in the survey used here is 11 points higher than toward Clinton and 12 points higher than toward Trump. However, when political figures discuss the Supreme Court – such as at presidential debates, campaign events, or the State of the Union address – individuals are not inundated with the positive and legitimating judicial symbols or accounts of the Court’s apolitical decision-making process that tend to fortify support for the Court (Baird and Gangl, 2006; Gibson and Caldeira, 2011; Scheb and Lyons, 2001). Moreover, positivity toward other political actors should insulate those actors in a similar fashion. Although the ability to reduce support for the Supreme Court is limited, the evidence presented here suggests that, when confronted with competing assessments of two political stimuli, affect outweighs what are generally perceived to be more calculated assessments. That is, in the absence of legitimating symbols, positivity (or negativity) toward other actors is capable of outweighing positivity toward the Court. This finding is normatively troublesome.
It would still be disconcerting to learn that extra-judicial actors could manipulate citizens into believing that the Supreme Court does or does not represent their policy wishes, but that would at least be a partially informed expression of actual preferences and of the Court’s ability to be an effective agent of one’s political will. To discover instead that alterations to one’s level of support – the support on which the Court relies to expect compliance with its rulings – are largely affective speaks ill of the mass public. However, there is some cause for optimism. For all four effects (i.e., change in ideological distance for (1) Clinton and (2) Trump and affect toward (3) Clinton and (4) Trump), the treatment is effective at increasing diffuse support. On the other hand, only for Clinton affect is the treatment consistently capable of reducing support. That being said, when individuals face conflicting assessments of political stimuli, it is not clear how long preexisting positivity toward the Court can withstand criticism from preferred political figures. Of course, in question is how frequently political leaders attack the Supreme Court. Given the effectiveness of appeals to emotion in politics (e.g., Brader, 2006), increasing partisan and ideological divisions (e.g., McCarty, Poole and Rosenthal, 2006), and the intense power of in-group preference and out-group disdain (e.g., Iyengar and Westwood, 2015; Mason, 2015) – not to mention protracted political battles regarding the Supreme Court, such as the refusal to act on President Obama’s nominee in an election year – it is plausible that such methods may become a tool in the separation of powers exchange. Couple this with the leverage that accompanies a rhetorically influential president (Tulis, 1988) – such as Trump – and the running tally that forms diffuse support may begin to accumulate ticks in different columns.

2.6 Discussion

This study set out to answer two questions. First, can an extra-judicial political figure alter the level of diffuse support for the Supreme Court? The evidence points to yes. Individuals who were told that Hillary Clinton had stated that the Court gets too mixed up in politics and too frequently rules in favor of certain groups revised how legitimate they believed the judiciary to be in a manner consistent with their feelings toward Clinton. Those who feel warmly (coldly) toward Clinton offered less (more) support after hearing her negative statements. The same was true of individuals who were led to believe Donald Trump made such remarks, although the Trump vignette was unable to reduce support for the Court to a meaningful degree. Again, I suspect this is due to differences in the timing of the surveys; while Clinton was a candidate, Trump was the president-elect. This subtle difference may have altered the grounds on which respondents considered the vignettes. Regardless, these findings suggest that there is some degree of volatility in individual attributions of legitimacy to the Supreme Court. This counters evidence that legitimacy tends to be durable (e.g., Caldeira and Gibson, 1992; Gibson and Nelson, 2015) but builds on recent evidence that support is sensitive to other political assessments (e.g., Bartels and Johnston, 2013; Christenson and Glick, 2015). Second, are these attitude changes a product of bringing one’s attitudes regarding the Court into alignment with one’s feelings toward the extra-judicial political figure?
Or did that figure offer some information as to the ideological location of the Supreme Court, which allowed one to reassess her perception of whether the Court’s rulings were aligned with her policy preferences? Here, there was evidence for both mechanisms of change, but support for the affective balance hypothesis outweighs that for the ideological updating hypothesis. That is, changes in diffuse support as a result of an extra-judicial political actor are largely due to affect toward that figure. This finding conforms to previous research regarding the power of elite cueing (e.g., Cohen, 2003; Zaller, 1992), particularly from a polarizing partisan figure (Dilliplane, 2014), as well as how various affective attachments can impact assessments of the Supreme Court and its decisions (Nicholson and Hansford, 2014). Concisely, cue-taking, at least in relation to the Supreme Court, is only somewhat informative but is closely related to affective political attachments.

These findings are sensible, given that individuals find locating the Court on the left-right policy continuum difficult (Hetherington and Smith, 2007, although see Malhotra and Jessee 2014) and that knowledge of the Court is low compared to other political stimuli. And, in conjunction with the well-established evidence that individuals heavily rely on cues when forming opinions (e.g., Arceneaux, 2008; Kam, 2005), even when capable of utilizing issue knowledge (e.g., Rahn, 1993), members of the mass public may be particularly reliant on cues in relation to the judiciary. This reliance may increase susceptibility to manipulations of judicial attitudes by members of the elected branches. Importantly, the experimental cue offered here was not issue specific. The conclusions would be different if respondents believed a political figure had lambasted the Court for a particular ruling on, say, abortion or gun rights. Instead, politicians can impact general orientations toward the Court.

Public orientations play a very important role in the Supreme Court’s ability to function properly. Mainly, support for the judiciary insulates the Court from institutional encroachments (Clark, 2009; Ura and Wohlfarth, 2010). However, the evidence presented here suggests that political actors are readily able to make adjustments to that necessary support. This power presents a problematic separation of powers issue. More specifically, it appears that members of the elected branches are capable of altering the public’s preferences regarding institutional arrangements, which may give those branches the public go-ahead to use their court-curbing authority. That is, political figures may manipulate public assessments in a manner that would free them to limit the power of the judiciary. While not technically extralegal, manipulating the public for political expediency is normatively worrying. Of course, various cues are capable of altering opinions without changing attitudes (e.g., Iyengar and Kinder, 1987). Nevertheless, should the elected branches be capable of reducing or only selectively increasing support for the Court, they may choose to sample public opinion after an attempt to do just that. That is, even if cueing does not permanently alter attitudes, politicians may be capable of turning the tide long enough to have license to act. Further, Court outcomes tend to be in step with public opinion (Epstein and Martin, 2010; Casillas, Enns and Wohlfarth, 2011).
If politicians impact support for the Court, they may similarly influence desired policies; if the Court responds accordingly, institutional outcomes may be neither majoritarian nor counter-majoritarian. In light of evidence that the Court occasionally leads public preferences (Ura, 2014), at the extreme, political manipulation may alter the set of outcomes the public finds acceptable.

Finally, there are limitations in assessing changes in diffuse support that may be cause to allay overwhelming concern regarding the normative implications of these findings, severe though those implications are. First, it is unclear how durable these effects might be; there is some skepticism of the external validity of experimental treatments in assessments of legitimacy (e.g., Gibson and Nelson, N.d.). As Gibson and Nelson (2014) note, “after a shock, diffuse support gradually increases, eventually returning to its equilibrium level, as democratic values regenerate support for the Court” (206). However, Ura (2014) argues that this legitimation effect is due to the Supreme Court heralding positions on policy. Assuming that the Court is able to lead public views in this manner, one must consider that, while the Court must await cases whose disposition is capable of producing the requisite shock that precedes legitimation, extra-judicial politicians could seemingly preempt support for the Court rhetorically. That is, it is not clear whether (a) democratic values actively regenerate support for the Court in the face of rhetorical criticism or (b) elite condemnations of the judiciary are sufficiently powerful to stave off legitimation. Stated differently, it appears that a salient political figure is capable of producing several tallies in either the satisfied or dissatisfied column at once, which goes on to impact one’s calculation of support. Future research should seek to uncover the degree to which these tallies persist. It is possible that source cues whose presence in the political sphere persists – such as Donald Trump – may have a more lasting effect. Again, given the “dominating impact” of major political groups and figures (Cohen, 2003), of import is how diffuse support is shaped by salient extra-judicial actors. Research into the power of source cues and affective polarization might suggest that such considerations are enduring. Future work of a longitudinal nature should examine to a greater degree whether these effects are durable. Despite these potential limitations, to the best of my knowledge, this is the first evidence to show that affective attachments to a particular political figure can impact feelings toward the Court.

APPENDIX

Table 2.3: Randomization Check for Clinton Sample

                                           Average
Attribute (coding/range)               Control   Treatment   Absolute Difference
Age (18-79)                             36.81      36.84           0.03
Female (%)                              52.56      50.43           2.13
Education (0-4)                          2.71       2.66           0.05
Income (0-11)                            4.65       4.79           0.14
Differential Media (0-1)                 0.52       0.53           0.01
Clinton Feeling Thermometer (0-100)     43.67      43.65           0.02
Court Feeling Thermometer (0-100)       55.22      54.73           0.49
Ideo. Distance (0-4)                     1.20       1.20           0.00
Job Performance Satisfaction (%)        59.34      60.50           1.16
Ideology (1-7)                           4.43       4.39           0.04
PID (1-7)                                3.56       3.61           0.05
Politicization (0-1)                     0.63       0.63           0.00
SC Knowledge (0-1)                       0.77       0.78           0.01
Support for Minority Liberty (0-1)       0.67       0.65           0.02
Support for Rule of Law (0-1)            0.64       0.65           0.01
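The balance statistics reported in this table, and in Table 2.4 below, amount to covariate means by experimental assignment and the absolute difference between them. A minimal sketch of that computation follows; the data frame and covariate names are hypothetical placeholders, and the data are simulated.

    # Sketch: a randomization/balance check in the style of Tables 2.3 and 2.4.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    sample = pd.DataFrame({
        "treated": rng.integers(0, 2, 700),
        "age": rng.normal(37, 10, 700),
        "education": rng.integers(0, 5, 700),
        "clinton_ft": rng.uniform(0, 100, 700),
    })

    balance = sample.groupby("treated").mean().T          # covariate means by condition
    balance.columns = ["Control", "Treatment"]
    balance["Absolute Difference"] = (balance["Control"] - balance["Treatment"]).abs()
    print(balance.round(2))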
Table 2.4: Randomization Check for Trump Sample

                                           Average
Attribute (coding/range)               Control   Treatment   Absolute Difference
Age (18-71)                             34.67      35.01           0.34
Female (%)                              35.57      41.60           6.03
Education (0-4)                          2.82       2.86           0.04
Income (0-11)                            3.21       3.27           0.06
Trump Feeling Thermometer (0-100)       47.59      44.82           2.77
Court Feeling Thermometer (0-100)       60.51      56.31           4.20
Ideo. Distance (0-4)                     1.11       1.05           0.06
Ideology (1-7)                           3.44       3.34           0.10
PID (1-7)                                3.72       3.62           0.10
Politicization (0-1)                     0.59       0.57           0.02

Table 2.5: Question Wording, Descriptive Statistics, and Psychometric Properties of Legitimacy Battery

                                                               % Disagree        Factor Loading
Question Wording                                             Clinton   Trump    Clinton   Trump
If the Supreme Court started making decisions that most
  people disagree with, it might be better to do away
  with the Court                                                53       40       0.75     0.75
The right of the Supreme Court to decide certain types
  of controversial issues should be reduced                     46       33       0.78     0.75
The U.S. Supreme Court gets too mixed up in politics            28       21       0.62     0.58
Justices who consistently make decisions at odds with
  what a majority of the people want should be removed          42       32       0.74     0.74
The U.S. Supreme Court ought to be made less independent
  so that it listens a lot more to what the people want         40       30       0.82     0.76
We ought to have a stronger means of controlling for
  actions of the U.S. Supreme Court                             35       25       0.80     0.80
The Court favors some groups more than others                   26       23       0.62     0.57

Table 2.6: OLS Regression on Change in Legitimacy

Variable                        Coefficient   Std. Err.
Clinton Affect                     0.00         0.00
Treatment                          0.10*        0.02
Clinton Affect x Treatment         0.00*        0.00
Constant                           0.03*        0.01
Sample Size                        698
Adjusted R²                        0.14

DV is ∆Legitimacy from t1 → t2.

Table 2.7: OLS Regression on Change in Legitimacy w/ Controls

Variable                          Coefficient (Std. Err.)       β
Clinton Affect                     −0.001 (0.000)            −0.25
Politicization                      0.165 (0.050)            −0.20
Job Performance Satisfaction       −0.033 (0.022)            −0.09
Court Affect                        0.001 (0.000)             0.10
Support for Minority Liberty        0.039 (0.034)             0.06
Support for Rule of Law             0.046 (0.040)             0.06
Ideological Distance (t1)           0.005 (0.008)             0.28
Differential Media Exposure        −0.080 (0.051)            −0.07
Party Identification               −0.003 (0.005)            −0.038
Female                              0.004 (0.015)             0.12
Age                                 0.001 (0.001)             0.10
Education                           0.016 (0.010)             0.80
Income                             −0.001 (0.003)            −0.02
Constant                           −0.147 (0.074)
Sample Size                         492
Adjusted R²                         0.13

DV is ∆Legitimacy from t1 → t2.
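For reference, the diffuse support measure used throughout these tables is a simple additive index of the battery in Table 2.5. The sketch below illustrates, with simulated (and therefore uncorrelated) responses, how such an index can be rescaled to the 0-1 interval and how Cronbach’s alpha can be computed for the scale. The item names and the assumption that items are coded so higher values indicate more support for the Court are placeholders made for the example; with real, correlated items the alpha would be substantially higher than for random data.

    # Sketch: additive legitimacy index and Cronbach's alpha for a 7-item battery.
    import numpy as np
    import pandas as pd

    def cronbach_alpha(items):
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    rng = np.random.default_rng(3)
    items = pd.DataFrame(rng.integers(1, 6, size=(600, 7)),   # hypothetical 5-point responses
                         columns=[f"item{i}" for i in range(1, 8)])

    legitimacy = (items.sum(axis=1) - 7) / (35 - 7)            # rescale the 7-35 sum to 0-1
    print(round(cronbach_alpha(items), 3))
    print(legitimacy.describe())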
Table 2.8: OLS Regression on Change in Legitimacy for Treatment w/ Controls

Variable                          Coefficient (Std. Err.)       β
Trump Affect                       −0.002 (0.000)            −0.40
∆Ideological Distance              −0.009 (0.015)            −0.03
Politicization                      0.215 (0.068)             0.23
Job Performance Satisfaction       −0.046 (0.033)            −0.11
Court Affect                        0.001 (0.001)             0.09
Support for Minority Liberty       −0.007 (0.048)            −0.10
Support for Rule of Law             0.066 (0.056)             0.08
Differential Media Exposure        −0.102 (0.074)            −0.08
Party Identification               −0.001 (0.007)            −0.16
Female                              0.011 (0.022)             0.27
Age                                 0.002 (0.001)             0.16
Education                           0.025 (0.014)             0.11
Income                             −0.002 (0.004)            −0.25
Constant                           −0.161 (0.104)
Sample Size                         248
Adjusted R²                         0.27

DV is ∆Legitimacy from t1 → t2.

Chapter 3: Politicized Nominations and Public Attitudes toward the Supreme Court in the Polarization Era

The unexpected death of long-serving Supreme Court Justice Antonin Scalia provided a unique opportunity to study the opinions of the public regarding the unelected branch during the filling of a vacancy in an era of intense ideological and partisan divisions. Understanding how such an event impacts perceptions of and attitudes toward an institution that relies on the public conferral of legitimacy carries exceedingly important implications (Caldeira and Gibson, 1992; Gibson, Caldeira and Spence, 2003b; Gibson and Caldeira, 2009a). Since the 1970s, Supreme Court justices have served for an average of 26 years; if a sudden vacancy – or the overt politicking involved in filling a vacant seat – can alter legitimacy, then these effects may have long-term implications for the Court’s ability to produce enforceable decisions. Researchers are traditionally unable to capture support attitudes directly before a Supreme Court vacancy, and certainly less able to do so directly after. The lone exception is Gibson and Caldeira (2009a), who were able to resample individuals after Justice Alito’s nomination. I was able to record attitudes toward the Supreme Court just two weeks prior to Scalia’s death and collect follow-up attitudes two weeks after his death but prior to Merrick Garland’s nomination. This produces a unique set of data capable of investigating if, and how, individuals’ attitudes toward the Court change following a major event not of the Court’s own making. This particular Court event, by being at the forefront of a political fracas, is an especially suitable place to seek alterations to public attitudes about the Court.

Legitimacy, or diffuse support – the belief that an institution is just and proper (Tyler, 2006) – is essential for the Court, as it relies on the elected branches to execute its decisions (Caldeira and Gibson, 1992). Without public support, the elected branches are unlikely to act. By utilizing several priming vignettes in the second survey wave, I probe how exposure to various conceptions of the importance of finding Scalia’s replacement (i.e., legal versus political importance), as well as exposure to legitimating judicial symbols, may have altered these orientations toward the Supreme Court. My results indicate the following: exposure to legitimating judicial symbols, when coupled with information regarding the legal importance of filling a vacancy, has a profound effect on diffuse support and on perceptions of how political the Court is. Viewing a photograph of the Supreme Court bench decorated to memorialize Scalia (i.e., judicial symbols) positively impacts attitudes toward the Court, but only for those who stand to benefit on policy grounds from the vacancy (i.e., “policy winners”). These symbols appear to enhance preexisting positive attitudes.
These findings uncover nuance in the theory of positivity bias, whereby existing predispositions and exposure to judicial imagery predict diffuse support. The context in which these data were collected – with overt partisan politicking characterizing the vacancy – and the changing nature of nomination and confirmation politics more generally serve to highlight the significance of these findings. First, this is a novel investigation into how a vacancy itself impacts attitudes toward the Court. More generally, it asks whether an event not of the Court’s own doing that places it in the public eye can affect its level of legitimacy. Most questions related to diffuse support focus on a case or the Court’s output more generally. Though useful, these efforts leave unanswered how extra-judicial political controversy impacts public support for the Court. Additionally, this particular vacancy produced circumstances ripe for observing change in attitudes regarding the Court. The politicization of the open seat, when coupled with the exuberance and polarizing nature of the justice being replaced, would reasonably produce shifts in opinions about the institution. While historically a routine political affair, the filling of a vacancy has become a politicized event (Farganis and Wedeking, 2014). And, not only have these proceedings become increasingly volatile, but vacancies – when they do occur – do not often occur when the Senate and president are of different parties. Indeed, the 1987 nomination of Anthony Kennedy and the 1991 nomination of Clarence Thomas mark the two most recent confirmations during which the Senate has been of a different party than the nominating president. More concisely, the confluence of factors – the death of a polarizing justice, the ability of the nominating president to shift the ideological tenor of the Court, and the manifest partisan opposition to this outcome that exposed the political nature of the proceedings – conceivably makes the 2016 vacancy the best opportunity to witness support for the Court stagger. Furthermore, even when nominations have occurred amid inter-institutional partisan splits, intra-institutional divisions now exist to an unprecedented degree; the Senate is roughly 50% more polarized today than it was in either 1987 or 1991 (Poole and Rosenthal, 2011).1 Simply, both politics in general and the politics of nominations to the Supreme Court are more contentious now than at any point in the modern era and, seemingly, will continue to be that way into the future. How these factors may impact people’s attitudes toward the Court is highly important for an institution that relies on public support. In other words, if a contentious vacancy – such as the one to replace Scalia – can fundamentally alter the amount of legitimacy one holds toward the Court, it may impact not just acceptance of individual cases that counter an individual’s political wants, but wholesale acceptance of the Court. Indeed, President Obama made the connection between the political nature of the vacancy and the potential for faltering public support for the Court. Lithwick (2016) writes, “President Obama warned against exactly this form of dangerous and destructive politics. When people ‘just view the courts as an extension of our political parties – polarized political parties’ he warned, public confidence in the justice system is eroded.
‘If confidence in the courts consistently breaks down, then you see our attitudes about democracy generally start to break down, and legitimacy breaking down in ways that are very dangerous.’”

Below, I detail the ways in which the vacancy created by Scalia represents the new normal in nomination politics. That is, blatant partisan use of the nomination as a means to a political end made apparent the openly political nature of nominations. This makes possible a direct investigation of the role of outside politicization of the Court on legitimacy attitudes. Following a description of the data collection and research design and a demonstration of the effect of the treatments, I investigate heterogeneous treatment effects. Given that one group of supporters comprises “policy winners (losers)” in the sense that the Court was expected to swing in (away from) their political favor, it may be the case that winners and losers react differently to the treatments. Finally, I discuss the implications of these findings and comment on the relationship between the Court, the public, and the other political branches in the new system of confirmation politics.

1 The difference in Senate party means, as calculated by DW-NOMINATE, was 0.60 in 1987 during Anthony Kennedy’s nomination and 0.63 in 1991 for Clarence Thomas’ nomination; the 2016 difference is 0.94.

3.1 A Political Vacancy and Salient Non-Case Events

The diffuse public support on which the Court relies is generally not impacted by immediate performance satisfaction (Caldeira and Gibson, 1992; Gibson, Caldeira and Baird, 1998). The theory of positivity bias – which suggests that “preexisting institutional loyalty shapes perceptions of and judgments about court decisions and events” (Gibson and Caldeira, 2009a) – may undergird the relative individual-level stability of these assessments. This theory also holds that judicial or legal symbols reinforce the goodwill the public holds toward the Court (Gibson, Lodge and Woodson, 2014; Gibson and Nelson, 2016). There are three important ways in which these data are uniquely suited to test and extend aspects of the theory of positivity bias: (1) they were collected pre-nomination, (2) they were collected during a highly salient Court event that the Court itself did not produce, and (3) they describe the new normal in confirmation politics. I detail each in turn below.

3.1.1 Pre-Nomination

Although there is evidence regarding public perceptions before and after a Court vacancy (Gibson and Caldeira, 2009c), those data only cover the period following a nomination; in this paper I explore other contexts, specifically the period between a vacancy and a nomination. Gibson and Caldeira (2009c) study public attitudes regarding the 2005 nomination and 2006 confirmation of Justice Alito. As is true here, they utilize a panel design to discover that long-standing attitudes toward the Court predict one’s beliefs about the rightfulness of Alito’s confirmation. Individuals who have high levels of diffuse support rely more on “judiciousness,” which refers to “judicial qualifications, temperament, and role orientations (e.g., judicial restraintism), typically making extensive use of potent symbols of judicial legitimacy” (Gibson and Caldeira, 2009c, 140).
They comment, “in a contentious confirmation, the American people confront two competing frames for evaluating nominees: the frame of judiciousness and that of ideology and partisanship.” However, focusing on the so-called “political theater” aspect of the nominations process – as opposed to on the nominee herself – is a fundamentally different question and may yield different results. Indeed, the frames Gibson and Caldeira reference are those that only appear after a nominee has been introduced to the public. Yet, in the aftermath of the death of Scalia, the public was inundated with two frames that preceded a nomination: (1) the legal importance of filling Scalia’s seat and (2) the political importance of the appointment. What is more, the pre-nomination nature of these data may invoke long-, as opposed to short-, term considerations regarding the outputs of the Court. As noted, Supreme Court justices now sit on the bench for an average of 26 years; filling a vacancy can produce a sea change in policy outputs. When considering how a vacancy, as opposed to a specific nominee, will impact future Court decisions, individuals may think more abstractly about the long-term effects of a change in Court demographics. And, while previous research has identified the mechanisms by which policy losers accept disagreeable decisions (e.g., Gibson, Lodge and Woodson, 2014), untested is whether those who expect long-term policy losses – such as those who supported the Court’s pre-vacancy policy outputs but expect to oppose its post-confirmation outputs – alter support for the Court.2

2 Of course, when these data were collected it was expected that, despite what was considered Republican posturing, President Obama would successfully seat a nominee on the Supreme Court. That this did not occur has no bearing on the results here presented. As such, Republicans are still “policy losers” in this context.

3.1.2 Non-Case Events

Recent evidence has demonstrated that highly salient cases can impact views toward the Court (Christenson and Glick, 2015). But, in the same way that a highly salient case causes individuals to check in on the Supreme Court, so too do vacancies on the bench, particularly given the changing media environment surrounding nominations proceedings (Epstein, Lindstadt, Segal and Westerland, 2006; Farganis and Wedeking, 2014). However, the influence of cases and the influence of vacancies are decidedly different questions. Vacancies provide a novel opportunity to study effects that may be absent or more difficult to discover following salient cases. And, although there is evidence regarding stability in diffuse support following a politicized Court decision (e.g., Bush v. Gore; see Gibson, Caldeira and Spence 2003b), less clear is what happens when the Court itself is politicized by external actors. In this way, this study differs greatly from those that came before it. Many studies record a person’s response when informed that the Court, a Justice, or the Justices behaved in a political manner or that a particular decision (political or not) may compromise the Court’s ability to dispense justice evenhandedly and legally (e.g., Baird and Gangl, 2006; Zink, Spriggs and Scott, 2009; Salamone, 2013; Nicholson and Hansford, 2014; Christenson and Glick, 2015). Less studied are the attitudes of the public when the Court is being politicized, as opposed to behaving politically.
For instance, individuals may differentiate between the Court making decisions using political motivations and presidents nominating an under-qualified ideologue to the bench. As I detail below, I expose people to the view that the Court can be a pawn in the political game or that the decisions (or non-decisions) of the elected branches can impact the Court’s ability to distribute justice.

3.1.3 The “New Normal”

Dahl (1957) remarked, “Americans are not quite willing to accept the fact that the Court is a political institution and not quite capable of denying it” (279). The conspicuous partisan politicking that characterized the 2016 Supreme Court vacancy may have left far less doubt on the matter. The obstructionist actions of Senate Republicans in refusing to consider any President Obama nominee exposed the openly political nature of Supreme Court nominations. As political commentator Paul Krugman (2016) writes, “Once upon a time, the death of a Supreme Court justice wouldn’t have brought America to the edge of constitutional crisis...In principle, losing a justice should cause at most a mild disturbance in the national scene.” Instead, this once routine political exercise was at the forefront of partisan politics. This style of confirmation politics, called by some “political paralysis,” is the “new normal” (O’Hehir, 2016; Perr, 2016). In light of the elite polarization evidence presented above, the stagnation of confirmations at all levels of the judicial hierarchy (Perr, 2016), and the changing nature of nominations themselves (Farganis and Wedeking, 2014), a return to a more congenial confirmations process seems unlikely. There are very serious repercussions to this shift. One commentator remarked, “How the Senate responds to Scalia’s vacancy...could decide whether the Supreme Court remains a viable player in our constitutional system. Why, after all, should a future president feel bound by the Court’s decisions if they know that every member of its bench was appointed via a partisan knife fight?” (Millhiser, 2016). Indeed, the precarious nature of the Supreme Court’s authority makes support from other institutions necessary. If we suspect that overtly political nominations can alter the views of other institutional actors, they may also affect public attitudes. Thus, it is important to test whether this “new normal” does indeed change the way the public views the Court. Succinctly, the “genie is out of the bottle” with regard to the openly political nature of Supreme Court nominations and confirmations; the process is unlikely to return to a harmonious political procedure. It is important to determine whether this new status quo will harm the Court and its ability to make decisions that are enforced.

3.1.4 Policy Losers and Political Perceptions

Rarely is a president presented with the opportunity to shift the ideological tenor of the Court. Indeed, not since 1969 have Democratic appointees comprised a majority of the seats on the Supreme Court. The particulars of this vacancy – a Democratic president provided the opportunity to replace a Republican appointee and staunch conservative – meant that the Court would suddenly have sat closer to one group’s policy preferences. That is, there were anticipated “policy losers” as a result of the vacancy. Explicitly, as macabre as it may be following a death, Democrats (Republicans) were the expected policy winners (losers).
Although there is evidence that judicial symbols help individuals accept decisions on which they lose on policy grounds (Gibson, Lodge and Woodson, 2014), decisions are short-term considerations. That is, while an individual may disagree with a decision, it does not affect her view of the Court altogether. And, although there is evidence that ideological disagreement decreases support (Bartels and Johnston 2013; but see Gibson and Nelson 2015), nominations have long-term implications for continued policy outputs. That is, immediate past dissatisfaction is distinct from expected future dissatisfaction. Those who are set to realize continued policy loss may alter their view of the Supreme Court. I am able to test this prospect by exploring changes for policy losers (Republicans) and policy winners (Democrats). The expectation is that only policy winners will be positively affected by news about the changing demographics of the Court and that policy losers will either decrease their level of support or display no changes. Finally, given the explicitly political nature of the 2016 vacancy, individuals may alter how political they believe the Court to be. Given that political perceptions of the Court have been shown to be related to diffuse support (Scheb and Lyons, 2001; Christenson and Glick, 2015), of import is to determine whether the elected branches can delegitimize the Court by making it appear political. Both survey waves collected data on perceptions of how political the Court is that can test this proposition empirically. Again, the particularities of the 2016 vacancy should make manipulating political perceptions of the Court rather trivial; individuals exposed to different experimental treatments may alter their perceptions of how political the Court is.

3.2 Research Design

This research is based on a sample of 238 undergraduates at a large, public university and was conducted from January 2016 to March 2016. The first wave took place from 20 January to 31 January 2016. Justice Antonin Scalia died on 13 February, only thirteen days after the completion of the first wave. The second wave began on 3 March and responses were collected until the nomination of Merrick Garland on 16 March. Undergraduate samples can provide a conservative test of a treatment relative to a representative sample (Baird and Gangl, 2006). Nevertheless, undergraduate samples are less than ideal. That said, these data are, to the best of my knowledge, the only source of information regarding orientations toward the Court before and after a vacancy but before a nomination. While findings are interpreted with caution, I believe the data are sufficiently unique to offer a first look at this phenomenon. Limitations to the findings here presented as a result of the sample are considered in the discussion section. In the first wave, respondents completed a survey with several political items. Importantly, subjects were asked the traditional battery of questions used to measure diffuse support developed by Gibson, Caldeira and Spence (2003a). In the second wave, experimental treatments – which are detailed below – were embedded within the survey. In order to determine if the competing treatments differentially impact diffuse support, the treatments used here prime attitudes regarding the filling of the Supreme Court vacancy in a way that mimics stories persistently disseminated in the media following the death of Scalia. That is, this research design allows for the isolation of effects that rivaled one another outside the experiment.
It is likely that respondents were exposed to myriad information in “real time”; these treatments prime the various considerations to which respondents may have been exposed prior to treatment.

3.2.1 Treatments

In this 2x2 experiment with a control group, participants were first randomly assigned to one of three groups: (1) a control group that received no prime, (2) a legal group that read a vignette on the problematic nature of 4-4 ties on the Supreme Court, their failure to create precedent, and the potential unequal application of the law that can result, or (3) a political group that read a vignette describing the relative ideological balance of the Court before Scalia’s death, his conservative voting behavior, Obama’s ability to shift the Court from conservative to liberal, conservative fear of this outcome, the obstructionist behavior of Senate Republicans, and an explicit reference to using the vacancy as a means to achieve a political end. Within both the legal and political groups, respondents were further randomly assigned to a judicial symbols condition that displayed a photograph of the Supreme Court bench with Justice Scalia’s chair and the area in front of his bench adorned with black cloth; no additional text accompanied this photograph.3 While the purposes of the legal and political treatments are straightforward (i.e., they explicitly mention the importance of filling the vacancy), the symbols treatment is less clear. As Gibson, Lodge and Woodson (2014) note, viewing such images can unconsciously trigger positive affect before conscious information processing takes over. They state: “...only at the tail end of the decision stream does one become consciously aware of the associated thoughts and feelings unconsciously generated moments earlier in response to an external stimulus...Whenever a person sees a judicial symbol [their subconscious information processing] automatically triggers learned associated thoughts, which for most people in the United States have become connected with these symbols...[these thoughts] are typically ones of legitimacy and positivity. This activation leads to more conscious legitimating and positive thoughts in [conscious information processing]” (842). Here, judicial symbols may prime more permanent – and positive – attitudes toward the Court that precede any affect caused by the political fight to fill the vacancy. Given the “in real time” nature of this experiment, participants may have been exposed to many external factors. First, randomization assuages the concern that different groups were exposed to different stimuli outside of the experiment. Second, the panel nature of the surveys allows for the examination of within-subject effects, meaning the treatments detailed above were intended to prime particular pieces of information to which individuals were likely exposed before treatment. Finally, the enormous amount of media content that spoke to both the legal and political importance of the vacancy helps increase the external validity of these treatments. For instance, similar to the political treatment, there were several articles detailing the potential for a swing in Court ideology following an appointment by President Obama (Hirshman, 2016), as well as the political nature of the obstructionist behavior of the Senate (Shear and Steinhauer, 2016; Parlapiano and Sanger-Katz, 2016).

3 The language of each treatment, as well as the photograph for the symbols treatment, can be found in the supplemental materials.
Consistent with the legal treatment, news snippets appeared only hours after Scalia’s death regarding the legal implications of a 4-4 tie on the Supreme Court (Victor, 2016). Finally, even the judicial symbols photograph that some respondents viewed appeared in a major news outlet (de Vogue and Scott, 2016). What is more, a representative sample of Americans indicated above-average exposure to the vacancy.4 After exposure to the treatment, subjects were asked to complete the Gibson, Caldeira and Spence (2003a) diffuse support battery. These questions ask respondents to indicate their level of agreement on a 5-point scale with statements such as “The U.S. Supreme Court gets too mixed up in politics” and “We ought to have a stronger means of controlling for actions of the U.S. Supreme Court.” The variable of interest – diffuse support, or legitimacy – is a multi-item additive index of these questions.

4 http://www.people-press.org/2016/02/22/majority-of-public-wants-senate-to-act-on-obamas-court-nominee/

The hypotheses stemming from these treatments are as follows:

Legal Importance: Exposure to the legal vignette will increase wave 2 legitimacy relative to wave 1.
Political Importance: Exposure to the political vignette will decrease wave 2 legitimacy relative to wave 1.
Judicial Symbols: Exposure to judicial symbols will increase wave 2 legitimacy relative to wave 1.

Of course, the legal and judicial symbols treatments are intended to prime positive attitudes consistent with positivity theory (Gibson and Caldeira, 2011; Gibson, Lodge and Woodson, 2014). Conversely, the political vignette is intended to conjure negative attitudes about a political Court and a perceived lack of procedural justice (Baird and Gangl, 2006; Christenson and Glick, 2015). Additionally, given that both the legal and symbols treatments are expected to increase legitimacy, there is an expectation that exposure to both will produce a larger effect than exposure to only one. Furthermore, regarding potential heterogeneity of treatment effects, hypotheses are as follows:

Policy Losers: Those expecting to lose on policy grounds (i.e., Republican identifiers) will decrease wave 2 legitimacy relative to wave 1.
Policy Winners: Those expecting to win on policy grounds (i.e., Democratic identifiers) will increase wave 2 legitimacy relative to wave 1.

Finally, regarding political perceptions, hypotheses are as follows:

Legal Importance: Exposure to the legal vignette will decrease wave 2 politicization relative to wave 1.
Political Importance: Exposure to the political vignette will increase wave 2 politicization relative to wave 1.
Judicial Symbols: Exposure to judicial symbols will decrease wave 2 politicization relative to wave 1.

Much like the diffuse support hypotheses above, the interaction of the legal and symbolic treatments is expected to impact politicization in a synergistic manner, meaning those exposed to both are expected to reduce perceived politicization to a greater degree than those exposed to just the legal vignette.

3.3 Experimental Evidence

Because the experimental treatments appear in a single cross-section of a panel study, and because a major Court event occurred naturally in between the two waves, I am able to exploit both the cross-sectional and longitudinal nature of these data and determine if
Figure 3.1 displays within-subjects difference in means tests for each of the experimental treatment conditions.5 Within each column, the closed circle to the left represents the value for the first survey wave and the closed square to the right represents the value for the second survey wave; vertical bars are 95% confidence intervals around those values and annotations at the bottom refer to significance values for the relationship above. Note that an overlap in confidence intervals does not necessarily denote the lack of a statistically significant relationship (see Bolsen and Thornton, 2014). 5 Shapiro-Wilk tests place normality into question. However, as is shown in the supple- mental materials, nonparametric testing yields similar statistical and identical substantive results. As such, parametric t-tests are presented due to ease of interpretation. 46 Wave 1 0.80 Wave2 (a) (b) (c) (d) (e) p = 0.64 p = 0.62 p = 0.17 p = 0.18 p = 0.02 Control Political w/o Symbols Political w/ Symbols Legal w/o Symbols Legal w Symbols Diffuse Support 0.75 0.70 0.65 0.60 0.55 Experimental Treatment Figure 3.1: Dotplot of paired difference in means tests across experimental treatment. Each column, separated by vertical dotted line, contains a pair of plotting symbols which represent mean diffuse support response (0-1 scale) for those who received the treatment listed on the x-axis; within each column, closed circle represents mean support for wave 1 & closed square represents mean support for wave 2. Vertical bars are 95% confidence intervals around mean estimates. Annotations at the bottom of each column are p-values for those relationships. Red annotation denotes p < 0.05 with respect to a two-tailed test. 47 Beginning with column (a) of Figure 3.1, perhaps the most noteworthy relationship is the stability of diffuse support for the control group. Succinctly, in the absence of treatment primes, a sudden and politicized vacancy does not appear to impact the amount of support one offers the Court. Despite ubiquitous media coverage of both the legal and political importance of the vacancy, support for the Court does indeed appear to be a diffuse, durable characteristic. Normatively, this is an encouraging finding. The Supreme Court, who relies on a bank of benevolence in order to expect compliance with its rulings, does not appear to lose purchase due to events outside of its control. This evidence, which extends previous findings in the Court decision context to the vacancy context, is decidedly consistent with positivity theory and corroborative of many previous findings (e.g., Gibson and Caldeira, 2011). However, stability in the control group does not serve as evidence that treatments were not present in nature; instead, treatments rivaled one another in nature. As such, priming certain considerations may provide insight into their effects. Moving to the political conditions in columns (b) and (c), there is no statistical effect of priming political considerations. Individuals who considered the Supreme Court vacancy in terms of the potential shift in Court policy outcomes following a President Obama nominee, and Senate Republican’s intense opposition to such a nomination, were steadfast in their ascriptions of legitimacy across both time points. Here, exposure to the idea that the elected branches are using the Court for political gain does not reduce individual levels of diffuse support. 
Finally, I turn to the legal conditions. First, countering expectations, those who were primed to contemplate the Supreme Court vacancy in terms of the legal importance of creating binding precedent and staving off unequal application of the law, but did not view judicial symbols (column d), were staunch in their ascription of diffuse support. Again, there was a small but insignificant effect. However, the legal treatment, when coupled with judicial symbols (column e), produces a statistically significant positive change in the stated level of diffuse support. The effect of symbols on those in the legal treatment is greater than the effect of the legal treatment alone. Exposure to these treatments moves individuals, on average, from legitimacy scores of 0.67 to 0.73, nearly an 8% change. In other words, not only do symbols matter, they can intensify already positive feelings toward the Supreme Court. Priming these considerations can cause individuals to increase their level of diffuse support. This is consistent with extant research showing that viewing judicial imagery has a powerful positive effect on the amount of diffuse support one has for the Court (Gibson, Lodge and Woodson, 2014; Gibson and Nelson, 2016). Much like the control and political treatment evidence presented above, the legal-with-symbols evidence extends previous findings to the vacancy context. In the event that the opportunity arises for people to reassess their support for the Court, and this opportunity is independent of the Court’s own actions, judicial symbols can thwart and even overpower outside attempts to paint the Court as political. While it cannot be said with certainty that there is no amount of external politicization of the Court that can reduce legitimacy, particularly in light of evidence presented in the preceding essay, it is clear that that amount is great. More pointedly, if the political hostilities characterizing the 2016 vacancy were insufficient to politicize the Court, what would be sufficient? When the Court is being used as a means to a political end, omnipresent judicial symbols are sufficient to maintain public support.

3.4 Policy Losers and Diffuse Support

While the findings above cast a positive light on the relationship between the public and the Supreme Court, the results may not be analogous across all political demographics. That is, these treatment effects may be heterogeneous. Again, I suspect that there will be heterogeneous treatment effects because Democrats were (supposed to be) “policy winners” in regard to the 2016 vacancy. Figure 3.2 examines movements in within-subject legitimacy scores for Democrats (closed circles) and Republicans (closed squares) for each experimental condition.6 This figure only displays the control group and the experimental conditions for which there were statistically significant results.

6 The number of independent identifiers within each experimental group was very small. Therefore, I only look at differences amongst Democrats and Republicans.
[Figure 3.2 about here. Column p-value annotations: (a) Control, 0.57 and 0.54; (b) Legal w/ Symbols, 0.00 and 0.51; (c) Political w/ Symbols, 0.02 and 0.24.]

Figure 3.2: Dotplot of paired difference in means tests across partisan self-identification. Each column, separated by a vertical dotted line, contains mean estimates for each group; closed circles represent Democrats and closed squares represent Republicans. Within each column, for each party identification, the symbol on the left is mean support for wave 1 and the symbol on the right is mean support for wave 2. Vertical bars are 95% confidence intervals around mean estimates. Annotations at the bottom of each column are p-values with respect to a two-tailed test for those relationships.

Many of the findings when stratifying by party identification are identical to those found above. For instance, there are no changes for the control group (column a). The results not displayed here – exposure to the legal treatment without symbols and the political treatment without symbols – are equally null across party identification. This indicates that party differences alone do not alter diffuse support attitudes. That partisan predispositions do not impact attitudes toward the Court, even when the contention surrounding the vacancy is partisan in nature, is encouraging evidence. However, there are two treatment categories for which there are differences across parties. I begin with the legal treatment with symbols exposure (column b). Recall that, above, these treatments resulted in nearly an 8% change. Here, there is no effect for Republicans. However, legitimacy scores for Democrats who received both treatments move from 0.64 in the first wave to 0.74 in the second, a 15.5% change. This is consistent with the policy winners hypothesis presented above; there is no support for the policy losers hypothesis. Next, I turn to the political treatment with symbols exposure (column c). Recall that, above, these treatments produced no significant changes. Here too, there are no changes for Republicans. However, there is now significant movement for Democrats. These legitimacy scores move from 0.62 to 0.69, nearly an 11% change. Simply, even when people are provoked to consider a political Supreme Court – which may summon negative attitudes in regard to access to procedural justice and fair dispensation of the law – they increase their support when they recognize that the Court is (or may soon be) in their favor politically. However, much like the legal treatment, these effects are not present in the absence of judicial symbols. Again, this suggests that judicial symbols have the ability to reinforce already positive feelings or, alternatively, provide baseline positive feelings onto which other positive attitudes build. These are important findings. While there is evidence that judicial symbols help policy losers acquiesce to disagreeable Court outputs (Gibson, Lodge and Woodson, 2014), that evidence refers to the decisions context. This is suggestive evidence that, when it comes to changing the demographics of the Court – and possibly decades of policy outputs – symbols may comfort policy losers, who do not decrease their support, and excite policy winners. While these findings are consistent with positivity bias – again, symbols do increase support and support never decreases – they offer nuance for its effects. We might expect policy losers to decrease their levels of support, but this is not borne out in the data. This speaks to the strong and important effect of preexisting support. What is more, given that the political treatment specifically invokes partisan cues (i.e., refers to Republican obstructionism), this evidence conforms to research identifying a relationship between partisan predispositions, explicit partisan cues, and support for the Court (Clark and Kastellec, 2015).
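The heterogeneous effects in Figure 3.2 simply repeat the paired comparison within each party-by-condition cell. The sketch below shows that stratified computation on a simulated panel; the data frame and column names (panel, party, condition, support_w1, support_w2) are hypothetical placeholders rather than the variables in the actual survey data.

    # Sketch: party-stratified paired tests in the style of Figure 3.2.
    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(5)
    panel = pd.DataFrame({
        "party": rng.choice(["Democrat", "Republican"], 240),
        "condition": rng.choice(["Control", "Legal w/ Symbols", "Political w/ Symbols"], 240),
        "support_w1": rng.uniform(0.4, 0.9, 240),
    })
    panel["support_w2"] = np.clip(panel["support_w1"] + rng.normal(0.02, 0.08, 240), 0, 1)

    for (party, condition), cell in panel.groupby(["party", "condition"]):
        t_stat, p_val = stats.ttest_rel(cell["support_w2"], cell["support_w1"])
        print(f"{party:10s} {condition:22s} n = {len(cell):3d}  p = {p_val:.2f}")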
That policy losers do not withdraw their support speaks to the strong and important effect of preexisting support. What is more, given that the political treatment specifically invokes partisan cues (i.e., refers to Republican obstructionism), this evidence conforms to research identifying a relationship between partisan predispositions, explicit partisan cues, and support for the Court (Clark and Kastellec, 2015).

3.5 Beyond Support: Investigating Political Perceptions of the Court

Above, I demonstrate how – and for whom – a sudden vacancy impacts attitudes regarding diffuse support toward the Supreme Court. A theme that has run throughout the evidence is that it does not appear that the elected branches can make the Court appear more political, but such an assertion is difficult to assess based on null findings alone. Both survey waves collected data that can further examine this proposition empirically. To measure perceptions of how political the Court and its justices are, I ask respondents to report their level of agreement – from "strongly disagree" to "strongly agree" – with three items: (1) "Supreme Court judges are little more than politicians in robes," (2) "The justices of the Supreme Court cannot be trusted to tell us why they actually decide the way they do, but hide some ulterior motives for their decisions," (3) "Judges may say that their decisions are based on the law and the Constitution, but in many cases, judges are really basing their decisions on their own personal beliefs." The variable is an additive index recoded from 0 to 1 (1 = high belief that the Court is political).7

As above, the question here asks whether a sudden vacancy – and the media portrayal thereof – can impact opinions regarding the Court. But, in this instance, it asks: can the partisan politicking of the elected branches succeed in making the Court appear more political in the minds of members of the mass public? If so, we would expect the Court politicization values for the second wave to be higher than the first. Again, the political context should make this outcome easily attainable. Figure 3.3 displays these results.8

7 This scale has nice psychometric properties: Cronbach's α = 0.7524 in the first wave and 0.7697 in the second.

8 Shapiro-Wilk tests indicate that these distributions are normal, satisfying an assumption of parametric difference in means tests. Therefore, no additional testing appears in the supplemental materials.

Figure 3.3: Dotplot of paired difference-in-means tests across experimental treatment: (a) Control, p = 0.24; (b) Political w/o Symbols, p = 0.30; (c) Political w/ Symbols, p = 0.14; (d) Legal w/o Symbols, p = 0.82; (e) Legal w/ Symbols, p = 0.01. Each column, separated by a vertical dotted line, contains a pair of symbols representing the mean politicization response (0-1 scale) for those who received the treatment listed on the x-axis; within each column, the symbol on the left is mean politicization for wave 1 and the symbol on the right is mean politicization for wave 2. Vertical bars are 95% confidence intervals around the mean estimates. Annotations at the bottom of each column are p-values with respect to a two-tailed test for those relationships.

Beginning with the control group in column (a), there is no change. And, perhaps most interestingly, no significant relationship exists for either political treatment category (columns b and c).
Simply, receiving information regarding the political nature of Supreme Court vacancies does not appear to politicize the Court after a sudden vacancy, even when that vacancy was as fiercely political as the one to replace Scalia. Again, this is cause for normative optimism. If extra-judicial actors could succeed in politicizing the Court – and, perhaps, thereby decreasing perceptions of procedural justice and legitimacy – there may be no recourse by which to replenish the reservoir of goodwill. That is, if perceptions of the Court's proper place in the political arena are not dictated by the Court itself, it is possible that it would experience difficulty in implementing public policy. It is not in question whether the vacancy was politicized; to find no movement as a result of that politicization speaks to the resilience of preexisting support. Moving to the legal treatments, once again there is a statistically significant effect of the legal treatment with judicial symbols exposure (column e), although in this instance in the negative direction. Those exposed to these treatments believed the Court was less political, moving, on average, from 0.44 to 0.38, a -12% change. Considering the vacancy in terms of its legal importance, coupled with judicial symbols, can cause individuals to reconsider their position on whether the Court behaves politically. Once again, judicial symbols are a potent and persuasive source of Supreme Court power.

3.6 Discussion

Using unique data collected via a fortuitously timed survey, I was able to answer questions regarding how a major non-case Court event – specifically, a sudden and highly political vacancy – and media portrayals thereof impacted public support for the Court. First, support begets support. Those exposed to no experimental treatments remained resolute in their apportionment of legitimacy. Those who read the legal prime – which detailed the importance of having a full complement of justices in order to avoid uneven dispensation of justice – also exhibited no changes in the allocation of legitimacy, except when also exposed to a photograph of the Supreme Court bench and the adornments honoring Justice Scalia. Individuals consistently attribute more support to the Court after exposure to the legal treatment coupled with judicial symbols than before. In other words, the effects of these treatments – particularly symbols – are persistent, powerful, and legitimating. These effects were not uniform across all political demographics, however. Democrats alone were likely to be affected by judicial symbols; they increased support when viewing symbols in both the legal and political treatment groups. These results suggest that policy winners are more highly susceptible to the legitimating power of symbols. I argue that those who anticipate repeated policy loss are indeed comforted by judicial symbols (they do not reduce support) and that symbols multiply the positive affect of those who anticipate repeated policy victory. Finally, exposure to the legal treatment with judicial symbols reduces how political one believes the Court to be. Despite obvious and undeniable politicization of the Court by the elected branches, people describe the Court as less political when encountering judicial symbols. Again, the evidence is clear: support precipitates support, even when taking into account the hyper-polarization and political gamesmanship that characterized the vacancy.
Perhaps more importantly, when confronted with the idea that the legislature and executive are using a vacancy for political gain – a circumstance that may cause individuals to perceive the Court as being unable to provide justice evenhandedly (e.g., Baird and Gangl, 2006) – the results here suggest that individuals are no more or less likely to deem the Court legitimate relative to their prior assessments. Justice Scalia – who, despite some uncouth celebration following his death (Sawyer, 2016), was memorialized as an "intellectual giant" (Blake, 2016) with a "remarkable legacy" (Washington Post Editorial Board, 2016) – was himself a polarizing figure. Indeed, his stature makes it even more surprising that his death could not spur reductions in legitimacy. At the outset, one may have conjectured that it should have been effortless to diminish legitimacy in light of the intense partisan and ideological divisions, the political one-upmanship between the Senate and President Obama, and Scalia's noteworthiness that characterized the 2016 vacancy. Yet, despite these indictments, the evidence presented here suggests that support is indeed diffuse. More colloquially, it should have been easy to prime negative attitudes toward the Court – and subsequently reduce diffuse support – because politics in general are now so polarized, a polarizing figure died, and a game of political cat-and-mouse began immediately following the vacancy; it is remarkable to observe stability under these conditions. Not only do these circumstances speak to the resilience of diffuse support, but they also speak to the conservativeness of the tests that produced this evidence.

There are a number of interesting normative implications of these findings. The Supreme Court is frequently constrained by uncertainty regarding reception to its decisions. The justices can never be certain how the public or other governmental actors will receive their decisions or whether those decisions will be respected and enforced. Although certain characteristics of case outcomes can alter legitimacy (Zink, Spriggs and Scott, 2009; Christenson and Glick, 2015), the Court has little recourse when events not of its own doing place it in the spotlight. Further, the Court has precisely zero influence regarding how the media chooses to portray these events. Taking this into consideration, the findings presented here are normatively encouraging and corroborate the tenets of positivity bias and legitimacy theory (Baird, 2001; Tyler, 2007; Gibson and Caldeira, 2009a). What is more, they expand the province to which these theories apply; positivity bias extends beyond the Court's outputs. Again, Gibson and Caldeira (2009a) comment, "preexisting institutional loyalty shapes perceptions of and judgments about court decisions and events" (emphasis added). Heretofore, the evidence showing this to be true has largely regarded court decisions; the evidence here regards court events, particularly events unrelated to Court activities. To wit, existing predispositions toward the Supreme Court are a robust source of continuing goodwill. These results indicate that little can be done to detrimentally impact the Court's cistern of support and that exposure to information that highlights judicial imagery and the Court's importance in deciding consequential legal questions can prove advantageous. The way that the public perceives the Supreme Court – a perception that is manipulable – can impact the legitimacy on which the Court relies to produce enforceable decisions.
The Supreme Court and its justices tend not to engage in public relations in a manner similar to the president or members of Congress. And while certain justices are more publicly outgoing than others (Black, Owens and Armaly, 2016), the Court is not institutionally equipped to frame salient events as it so chooses. As was the case following Scalia's death, the elected branches can politicize salient Court events. To find that politicization of the Court is not reflected in legitimacy, but that perceptions of legal procedure and judicial symbols are, provides an auspicious view of the relationship between the Court and the public. In other words, legitimacy appears to be institution specific. Thus, if delegitimation of the Court is a political strategy in the separation of powers exchange, citizens – and the justices – can take solace in the fact that it does not appear to be an effective tactic.

There are, of course, limitations to this study. First and foremost, the student sample calls into question generalizability. Many who participated in this survey experiment were born in the mid-1990s; they did not experience turnover on the Supreme Court for much of their youth. Furthermore, in their lifetime, the Scalia vacancy was the first where the presidency and Senate were controlled by different parties. Thus, responses may be a function of (1) witnessing the first vacancy as members of the political realm or (2) witnessing the first contested vacancy in their lifetime. I believe these concerns can be assuaged. First, the results are consistent with what research using nationally representative samples has shown (e.g., Gibson, Lodge and Woodson, 2014). Second, to the best of my knowledge, these are the only data that allow researchers to examine this phenomenon untainted by evaluation of a new nominee. While the reach of the data is limited, they provide the sole insight into this crucial time in the replacement of a Supreme Court justice. What these data cannot say, but future scholarship should build on, is the durability of these effects in regard to the Supreme Court. Are these top-of-the-head considerations, where the consideration most recently encountered influences support? Or is exposure to judicial symbols a running tally, where the more exposure one has to them, the greater one's level of support will be? Despite uncertainty regarding the durability of these effects, the results are clear: the public supports the Supreme Court, and that support is exclusively in the Court's own hands.

APPENDIX

Experimental Treatments

Legal Condition

Those randomly assigned to the legal condition read the following passage:

On February 13th, Supreme Court Justice Antonin Scalia died. As a result, only 8 justices remain to decide the rest of the cases this term (which ends in June). With an even number of justices there is a chance the Court could evenly split, 4-4, when voting on cases. When this happens, the Court's opinions fail to create legal precedents. It can also cause there to be differences in how the law is applied to citizens in different parts of the country. Allowing the president to quickly fill the vacancy created by Justice Scalia's death is therefore important to avoiding these negative outcomes.

Political Condition

Those randomly assigned to the political condition read the following passage:

On February 13th, Supreme Court Justice Antonin Scalia died.
Before his death, Scalia consistently voted in a conservative manner and the Court as a whole tended to vote in a moderate manner, with some liberal outcomes and some conservative outcomes. The opportunity to replace Scalia would provide President Obama with a chance to nominate a justice who might make the Court more consistently liberal than it has been in several decades. For this reason, the Republican-controlled Senate has stated they will block Obama's nominees until after the 2016 presidential election, which might result in a Republican president replacing Scalia. Both Obama and Senate Republicans see the vacancy as an opportunity to achieve their political goals.

Judicial Symbols Treatment

A random subset of respondents from both the political condition and the legal condition were also assigned to a judicial symbols treatment. They received one of the two vignettes above but also viewed the photograph below; the text of the vignette was unchanged.

Figure 3.4: Photograph showing the adornments of Scalia's chair and the bench in front of his chair following his passing. Respondents assigned to the judicial symbols treatment groups viewed this photograph. Photograph from the Supreme Court of the United States.

Diffuse Support Measurement and Psychometric Properties

As is traditional, diffuse support for the Court is measured as a multi-item summative scale; the six items used to construct this scale are listed below (Gibson, Caldeira and Spence, 2003a). Consistent with Bartels and Johnston (2013), the scale is then recoded from 0 to 1 (where 1 = high legitimacy). As is common (e.g., Gibson and Nelson, 2015), these items form a highly reliable (α = 0.82 for the first wave; 0.83 for the second) and unidimensional scale (eigenvalue for the first unrotated factor = 2.83, the next largest 0.30, for the first wave; 2.98 and 0.32 for the second).

1. If the Supreme Court started making decisions that most people disagree with, it might be better to do away with the Court
2. The right of the Supreme Court to decide certain types of controversial issues should be reduced
3. The U.S. Supreme Court gets too mixed up in politics
4. Justices who consistently make decisions at odds with what a majority of the people want should be removed
5. The U.S. Supreme Court ought to be made less independent so that it listens a lot more to what the people want
6. We ought to have a stronger means of controlling the actions of the U.S. Supreme Court

Media Exposure to Scalia's Death

Respondents were asked to indicate "how much [they had] heard about the following news events." Five questions asked respondents about real events that had received a great deal of media attention immediately preceding the survey, and a sixth question asked about a fabricated event. These items produce a reliable (Cronbach's alpha = 0.7237) and unidimensional scale (eigenvalue for the first unrotated factor = 2.08; second = 0.19). What is more, all of the items, save for the fabricated news event, correlate highly with the latent media exposure variable, and the item regarding Scalia's death correlates the most highly (0.754). There is some evidence that respondents honestly answered the questions regarding media exposure (i.e., the fabricated item does not correlate highly with the latent scale) and that the item regarding Scalia's death is highly related to this latent trait.
Current events include: (1) "Senate Republicans' opposition to any of Obama's Supreme Court nominees," (2) "The spread of the Zika virus," (3) "Donald Trump's primary victory in South Carolina," (4) "President Obama's pledge to close the military prison at Guantanamo Bay," (5) "The death of Supreme Court Justice Antonin Scalia," and (6) "Widespread protests in Canada in January 2016." The scale has nice psychometric properties: Cronbach's α = 0.6385; eigenvalue (wave 1) = 1.53; eigenvalue (wave 2) = 0.29. As shown in Figure 3.5, the levels of media exposure, which are recoded to range from 0-1, are quite high for the sample. The sample median = 0.702; ∼22% of respondents scored 0.50 or lower.

Figure 3.5: Histogram of media exposure. Larger values indicate greater exposure to news stories.

Non-parametric Testing

Table 3.1: Wilcoxon Signed-Rank Tests

Summary Statistics   Legal w/o Symbols   Legal w/ Symbols   Control   Political w/o Symbols   Political w/ Symbols
Wave 1 Median        0.625               0.686              0.625     0.583                   0.625
Wave 2 Median        0.708               0.75               0.66      0.625                   0.708
p-value              0.45                0.031*             0.672     0.476                   0.194
Sample Size          47                  42                 84        33                      31
* denotes p < .05 with respect to a two-tailed test.

Table 3.2: Wilcoxon Signed-Rank Tests for Partisan Self-Identification

Summary Statistics   Legal w/o Symbols   Legal w/ Symbols   Control   Political w/o Symbols   Political w/ Symbols
Democrats:
Wave 1 Median        0.666               0.646              0.625     0.583                   0.583
Wave 2 Median        0.687               0.75               0.666     0.666                   0.708
p-value              0.76                0.02*              0.54      0.72                    0.02*
Republicans:
Wave 1 Median        0.583               0.708              0.666     0.50                    0.75
Wave 2 Median        0.708               0.729              0.708     0.666                   0.625
p-value              0.68                0.38               0.58      0.21                    0.17
* denotes p < .05 with respect to a two-tailed test.

Chapter 4: Supreme Court Institutionalization and Congressional Appraisal of Public Support for the Judiciary

The arrangements that structure the relationships between the branches of federal government in the United States rest on public support toward each institution. Yet, the variant of public support on which these institutions rely is far from a settled question. Previous research implies that the attitudes upon which these relationships are contingent tend to be diffuse, enduring orientations toward at least one of the institutions (e.g., Clark, 2009). I argue that, when it comes to the institutional relations between Congress and the U.S. Supreme Court, the arrangement is structured by more ephemeral, transient public attitudes. Examining such an interaction between the Court and Congress is particularly enlightening, as different types of public support may motivate the behavior of each branch in different ways. Inasmuch as that behavior may impact the degree to which the judiciary can perform its function as an independent body, such considerations carry immense separation of powers implications. Diffuse public support for the judiciary – an important form of political capital (see Caldeira and Gibson, 1992) – protects the Court against encroachments from other branches. As Justice Frankfurter notes in Baker v. Carr (1962), Supreme Court decisions have no intrinsic authority, and the judiciary must rely on the elected branches "...for the efficacy of its judgments" (Hamilton, 1788). In the interest of reelection, Congress provides this efficacy when public support is in the Court's favor by exhibiting restraint in the inter-institutional realm. Specifically, Congress offers resources and deference to the Court when the public is supportive (Ura and Wohlfarth, 2010).
As Gibson and Caldeira (2007) remark, "...no institution depends more upon legitimacy than the judiciary" (2), and the public generally offers it in spades, making congressional extensions of resources politically expedient. However, the research that indicates that these extensions of the olive branch are effective and that legitimacy protects the judiciary from institutional intrusions is subject to empirical limitations, especially the work that explores these interactions longitudinally. Frequently, the items used to measure public support in over-time analyses come from a measure of public confidence in the Court (Grosskopf and Mondak, 1998). Gibson, Caldeira and Spence (2003a) discover that variation in confidence is related to specific support, or "satisfaction with the performance of a political institution" (Caldeira and Gibson, 1992, 1126). This is inconsistent with classical legitimacy theory, which argues that broad, long-term attitudes toward an institution should not be subject to political whims (Tyler, 2006). This means that it is unclear whether dynamics in things such as court curbing or the allocation of resources are due to changes in specific support, diffuse support, or both. Yet, researchers commonly conceptualize changes in confidence as something more akin to diffuse support. An example illustrates. Ura and Wohlfarth (2010) suggest confidence is "an institution's changing status in the public's mind as an effective agent for its political will as well as judgments about the essential legitimacy of courts...separate from more temporal political concerns" (974; emphasis added). When using this or a similar definition of confidence, it is clear that the theories being tested are related to diffuse support, but the dual nature of confidence prohibits exploring relationships in such terms. Conflating the two would be unproblematic if we did not have reason to think that Congress is responsive to short-term changes in public attitudes (Soroka and Wlezien, 2004; Wlezien, 1995) and that there are penalties for being out of step with mass opinions (Canes-Wrone, Brady and Cogan, 2002). With this responsiveness in mind, it is sensible that Congress will be attuned to short-run public attitudes regarding the Court's policy outputs as opposed to steadfast orientations toward the institution, such as diffuse support. In this paper, I attempt to overcome the challenges detailed above and to test the theory that institutional relations between Congress and the Court are contingent upon short-term, transient support as opposed to long-term, obdurate support. In order to do so, I argue that a question wording effect has pervaded this line of literature and propose a new measurement strategy capable of differentiating the types of support in longitudinal confidence data.
Secondly, I provide the means to study questions of a longitudinal nature pertaining to support for the judiciary by generating a new measure of confidence that approximates diffuse support. As Gibson and Nelson (2014) note, "The most pressing need for those seeking to understand judicial legitimacy is data capable of supporting dynamic analysis" (215). Thus, this paper offers data of a kind previously lacking in this field of research and more appropriate for studying questions relating to public support for the Court and how that support influences separation of powers interactions. Diffuse support is a collective judgment, and theories that rely on diffuse support should be tested with measures that reflect that enduring collective judgment, as opposed to collective whim. In order to demonstrate that the interplay between the Court and Congress rests on ephemeral support, I begin by detailing Congress's commitment to acting on behalf of the public and how this incentivizes gauging short-term public support. Then, I explore the problematic question wording and its ill effects prior to introducing and defending the appropriate confidence survey item. Next, I explain and list the advantages of the measurement approach I utilize and detail the specifications used to develop the final series. I go on to show that this series is indeed more strongly related to diffuse support. Finally, I substantiate my theory by refining a prominent set of results (i.e., Ura and Wohlfarth, 2010) and showing that the allocation of resources is unrelated to public sentiment when considering diffuse support.

4.1 Congressional Assessment of Public Support

In 2004, Senator Jon Kyl (R-AZ) wrote in a report by the Senate Republican Policy Committee, "the American people must have a remedy when they believe that federal courts have overreached and interpreted the Constitution in ways that are fundamentally at odds with the people's common constitutional understandings and expectations." The report goes on to suggest that the appropriate method by which to remedy this ill is to utilize congressional court-curbing powers, particularly the ability to determine the areas over which the federal judiciary has jurisdiction. Moreover, the report argues that these corrections should be performed on behalf of the people. The legislature recognizes and admits what research has determined: public support constrains Congress's relationship with the Court. Indeed, extant scholarship shows an empirical relationship between public attitudes toward the Supreme Court and the degree to which Congress empowers the judiciary, where increased positivity leads to increased empowerment (e.g., Clark, 2009; Ura and Wohlfarth, 2010). This literature, however, implicitly argues that this relationship is driven by enduring, diffuse attitudes toward the Supreme Court. I challenge this account on the grounds that resource-constrained members of Congress, who do not have access to enduring psychological attachments, instead rely on readily available short-term public assessments of the Supreme Court when fulfilling their commitment to act on behalf of the people in regard to the Court. There is an extensive literature showing that Congress is surprisingly attuned to short-run constituent preferences (e.g., Wlezien, 2004). For instance, public preferences both inform and are informed by policy decisions, indicating that Congresspersons are sensitive to alterations in short-run constituent preference (Wlezien, 1995).
Further, lawmakers face punishment when they are misaligned with public preferences (Ansolabehere, Snyder Jr and Stewart III, 2001; Erikson and Wright, 2000). Canes-Wrone, Brady and Cogan (2002) note that constituents hold their members of Congress accountable for their voting record, such that incumbents receive a smaller vote share when they behave in a strictly partisan, as opposed to representative, manner. Elected officials further demonstrate that they do heed public want when it comes to interactions with the judiciary. Senators adhere to constituent preference when voting to confirm judicial nominees (Kastellec, Lax and Phillips, 2010). Thus, legislators must balance their need to know constituent opinion regarding the Supreme Court with the high cost of gauging that opinion with acuity. Furthermore, there is evidence that members of Congress take opportunities to publicly laud the Court on its merits when they agree with a decision. For instance, Representative Andy Harris (R-MD) released a press statement on his website supporting the Court's decision in a religious liberty case.1 They also offer praise when it comes to providing the Court resources. As Representative Sanford Bishop Jr. (D-GA) stated to Justices Kennedy and Breyer, who appeared to testify on the federal judiciary's fiscal year 2016 budget:

We have to be sure also to provide the Supreme Court – as both the final authority of our constitution and the most visible symbol of our system of justice – with sufficient resources to undertake...your judicial functions...whatever we can do to make sure that we have a strong, independent, well-funded judiciary, we want to do that.

Given these findings and public statements, a Congress eager to exploit the Court's popularity among the public (or avoid unpopular positions on the Court) would ascertain public support using some readily available heuristic or signal. That is, I argue it is far more likely that legislators determine the level of positivity toward the Court using reactions to recent cases or immediate performance satisfaction instead of the degree to which the public perceives the institution to be just and to possess legitimate constitutional authority. And, there is a good deal of evidence from surveys of congressional offices and from statements and actions by congresspersons themselves to suggest that they do in fact try to determine specific constituent attitudes on various issues. The mechanisms and avenues through which they collect these data suggest clearly that they are gathering readily available, short-term attitudes. I detail a few of these below.

As Abernathy (2015) notes, how congressional offices gauge constituent opinion is varied, inconsistent, and scarcely measured. But there are several reasons that congresspersons are likely to assess opinion using transitory assessments of the institution, such as immediate performance satisfaction. For starters, such reactions are readily available. Beyond traditional news outlets, the popular press and social media frequently report public sentiment on salient issues and cases. A 2010 survey conducted by the Congressional Management Foundation (2011b) shows that 64% of congressional staff members surveyed believe Facebook is "an important way to understand constituents' views." 42% believed the same of Twitter, when that service was in its nascency.

1 https://harris.house.gov/press-release/congressman-harris-praises-supreme-court-decision-upholding-religious-liberty
More generally, this information suggests congressional offices find these sources of communication to be important for assessing public opinion. And, politicians themselves turn to social media to praise or condemn actions of the Supreme Court, as do many social media savvy citizens (Aslam, 2015). Second, members of Congress have repeatedly asserted their commitment to measuring constituent opinion. Historically, in the era before instantaneous public communication, legislators gleaned the preferences of their constituents via traditional methods, such as letters, telegrams, phone calls, and other forms of interpersonal communication. As one of Fenno's (1978) subjects noted regarding his constituents, "I listen to you, believe me" (161). And, members of Congress have long indicated that they listen in myriad ways. Representative Estes Kefauver (D-TN) wrote that the "chief reliance in 'feeling the pulse of the people' must be placed on the mail" (Kefauver and Levin, 1947). Some officials take a more active approach, as one told Tacheron and Udall (1966), "...one of the ways in which we...keep in touch [with constituent preference] is to...stimulate mail" (72; emphasis added). They further suggest that some offices prepare questionnaires with items on specific issues or "include in their newsletter an 'open-ended' request for the opinions of their constituents on any matters of concern to them" (72). Such methods of gauging constituent preference have adapted. 97% of senior congressional managers and communications staffers indicate that personal messages, including email, are important for understanding constituent opinions (Congressional Management Foundation, 2011b). Representative Brad Sherman (D-CA) has a federal issues questionnaire on his website.2 Representative Brad Wenstrup (R-OH) wrote, regarding surveying his constituents, "...we could measure trends and performance over time in order to make adjustments when constituents are clearly telling us something isn't working."3 Regardless of the actual method, it seems clear that congresspersons attempt to garner the immediate forethoughts or reactions of their constituents, as opposed to somehow obtaining unyielding orientations regarding the justness of the judiciary.4

I test the theory that the legislature uses short-term, impermanent support as a measure of the public's level of satisfaction with the Court, not institutional legitimacy, when making empowerment decisions. Specifically, I argue that there is indeed a relationship between the public, Congress, and the Supreme Court, and that changes in Congress' willingness to provide resources and deference to (i.e., to institutionalize) the Court are a product of external motivation. However, that motivation is driven by fleeting evaluations, not diffuse support. As noted in greater detail above, the dual nature of confidence data makes this theory difficult to test. Below, I detail a measurement strategy that allows the separation of the two traits – diffuse and more short-term support – in the confidence data and thereby allows the investigation of this theory.

2 https://sherman.house.gov/contact/federal-issues-questionnaire

3 http://wenstrup.house.gov/news/documentsingle.aspx?DocumentID=398634

4 One could argue that surveys are an effective way to gauge diffuse and durable attitudes. But this is unlikely to be the case or to be systematic. First, not all congressional offices conduct constituent surveys.
Representative Wenstrup’s office criticizes those surveys, stating, “...most [surveys] ask loaded questions designed to elicit only expected answers.” Further, as Avey and Desch (2014) note, although policymakers suggest they turn to scholarship to inform their views, only 12.6% of policymakers surveyed believe social science directly applies to their work. Moreover, policymakers find least convincing “...approaches that employ the discipline’s [political science’s] most sophisticated methodologies” (3). Measuring diffuse support for the Supreme Court is stuff of great academic scrutiny (e.g., Gibson, Caldeira and Spence, 2003a), making it unlikely that congressional offices are employing multi-item survey batteries to tap attitudes they could gauge in a much simpler manner. 73 4.2 Question Wording, Confidence, and Public Support Question wording effects are well established in the social sciences (e.g., Bishop, Oldendick and Tuchfarber, 1978; Krosnick, 1989; Pasek and Krosnick, 2010). Simply, the words or phrases in a question can substantially alter the answers survey respondents provide. It is crucial that the question convey the intent in the most straightforward way so that respondents can interpret the question the same way. Prompt ambiguities increase the chances of differential item functioning, which prohibits respondents from interpreting, and subsequently answering, a survey item similarly to one another, producing error when aggregating responses. Such effects impact disparate attitudes and perceptions in the social sciences, ranging from an individual’s support for government spending (Rasinski, 1989) to perceptions of inflation (Bruine de Bruin, Vanderklaauw, Downs, Fischhoff, Topa and Armantier, 2010). These studies, of course, analyze question wording effects on individuals, but these effects are likely to impact analyses of those questions when aggregated. Epstein, Segal, Spaeth and Walker (2003) caution, “...care must be taken in interpreting survey results, as small differences in question wording can lead to substantial differences in aggregate responses” (714). In one example, Abramson and Ostrom (1991) show that differently worded items asking about partisanship produce disparate results in the aggregate. Where Gallup asked “In politics, as of today...” and the National Election Study asked “Generally speaking...” the authors discover that the former question wording is impacted by short-term forces but the latter is not. A question wording effect contributed to imprecise interpretations in the aggregate. In addition to the infrequent and inconsistent measurement of attitudes (Durr, Martin and Wolbrecht, 2000), aggregate legitimacy research is plagued by a question that inappropriately measures public support. Many survey institutions utilize the question wording: “As far as the people running these institutions are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?” Several scholars question the appropriateness of the clause “As far as the peo- 74 ple running these institutions...” (or one similar in purpose but not language). Grosskopf and Mondak (1998) state: ...we see the confidence item as roughly comparable to familiar measures of presidential approval. 
Reference to the "people in charge of running the Supreme Court" likely encourages respondents to contemplate current events rather than institutional history when answering the question, and thus the item is not comparable to the measure of diffuse support (641).

Likewise, Gibson, Caldeira and Spence (2003a) show that much of the variance in the confidence measure is due to short-term (dis)satisfaction with performance and ask "...why it focuses on individuals instead of institutions. One wonders who the "people running" the Supreme Court are - do respondents understand the question to refer to the chief justice, for instance?" (355). This ambiguity, and the resulting inability to separate the type of support recorded in the confidence question, leads to a failure to capture institutional support for the Supreme Court. Empirically, the item cannot properly measure diffuse support, most notably because it varies too greatly with those things that should not impact an abstract, general level of support. Gibson, Caldeira and Spence (2003a) advocate for the abandonment of this particular question, arguing that it does not measure legitimacy. Yet, it is unclear whether the problem with the use of the confidence question lies with the particular survey item or with the very concept of confidence as a stand-in for abstract support. The arguments made above suggest that a confidence question that did not invoke current events might better capture institutional support. Fortunately, other pollsters and research institutions use questions free from this ambiguity. For instance, Gallup consistently queries respondents: "I am going to read you a list of institutions in American society. Please tell me how much confidence you, yourself, have in each one." Several other pollsters ask similar questions. Some simply ask "How much confidence do you have in the Supreme Court?" Others include a preamble that reminds respondents of the three branches of government and of what that branch is comprised or by whom it is headed. (See the supplemental materials for a full listing.) So too have political scientists used this version of the question. However, this work (e.g., Durr, Martin and Wolbrecht, 2000) includes the badly worded question as well. By using questions that do not contain the "people running" clause, I produce estimates of support for the Supreme Court that are free from short-term forces and capable of testing, among others, the theory relating public support to congressional willingness to offer the judiciary deference.

However, there is still the issue of what exactly "confidence" means. Gibson (2007) asks if confidence is "...the same as predictability, or is it instead equivalent to confidence that the leaders will do what is right, and if the latter, right for the country, me, my group, or my ideological preferences?" (514). For the present purpose, I adopt Ura and Wohlfarth's (2010) conception of confidence, where it is suggested that confidence in an institution reflects legitimacy and representative agency free from short-term political volatility. Most importantly, and as the analysis below will demonstrate, I argue that this operationalization of confidence can be used as a surrogate in testing aggregate theories of diffuse support.
4.3 Measurement Strategy: Methodology and Results

One major criticism of over-time analyses of Court support is that they contain "relatively few observations, confounding statistical inferences, and...require strong assumptions to justify linear interpolation," and that such approaches "fail to take advantage of the information provided by other, smaller series that tap similar attitudes" (Durr, Martin and Wolbrecht, 2000, 769). Indeed, many authors who did not have yearly estimates simply linearly interpolated missing values (e.g., Caldeira, 1986), which may miss important information or unnecessarily assume a particular trend. The Kalman filter approach taken here addresses many of these concerns. This method, long advocated as a way to accurately measure the dynamic attitudes or opinions of the mass public (see Beck, 1989; Green, Gerber and De Boef, 1999), uses a series of over-time observations, each of which contains noise and other inaccuracies (like measurement and survey error), and generates an estimate of a latent trait (θ) for each time frame (θt; here t = year) that is more precise than an estimate from any single measurement.5

4.3.1 Developing the Series

Below I develop two measures of confidence in the Supreme Court. The first omits the "people running" ambiguity and the second includes it. For ease of reference, I call them the "diffuse" and "ephemeral" series, respectively. The diffuse series is expected to behave as diffuse support and the ephemeral series like more short-term, temperamental support. That is, the former should reflect stable attitudes regarding the rightness of the institution, and the latter more temporally constrained sentiments toward outputs. For the diffuse series, I used the Roper Center for Public Opinion Research iPOLL database to locate 68 items that did not invoke the problematic "individuals/people running" wording.6 To determine the percentage of individuals who are supportive of or are confident in the Court, the "great deal" and "quite a lot" responses are combined (the confident series). Conversely, "very little" and "none" are combined to find the percentage of those who are not supportive or are not confident (the not confident series). When there are multiple surveys in a single time period, the observed percentage is a sample-weighted average.

5 Please see the supplemental materials for a more detailed exposition, as well as technical specifications such as starting values and the underlying model of the state space equation.

6 Data are available each year and are likely to continue to be, making the update of these estimates trivial. The analysis concludes at 2014 due to independent variable availability. Although many scholars consider the "modern Court" to have begun in 1947 (for example, Spaeth, Epstein, Ruger, Whittington, Segal and Martin 2010), and many judicial politics studies span back to the mid-twentieth century, support data typically only span to 1973, and the earliest systematic data appear only in the late 1960s (see Caldeira, 1986). While 37 years may not seem expansive by time series standards, the data produced here are nearly representative of the years typically analyzed when testing such theories.

I undertake the same procedure for the ephemeral series, using data
When the resulting series are produced, the final estimate of confidence in the Supreme Court is calculated:7 %Confident (%Confident + %Not Confident) (4.3.1) A visual representation of both series can be found in Figure 4.1; the displayed series were produced by the Kalman procedures. While they do, generally, trend together, the ephemeral series tends to look smoother, calling into question whether it accurately responds to changes in genuine support attitudes. The diffuse series varies to a greater degree than the ephemeral series, with the former’s variance 1.6 times as large as the latter’s. The ranges of these series are .537-.861 and .553-.770 for the diffuse and ephemeral series, respectively; the diffuse series has higher highs and lower lows than the ephemeral series. Importantly, both plots show that there is, generally, confidence, in the Supreme Court. However, both minimums are measured in 2014, the result of several consecutive years of decline. This suggests that we might be nearing the point where the public will, generally, not have confidence in the Court. As can be seen, confidence trends upward from the beginning of the series until the late 1980s, where a slight decline precedes a sharp decline in 1991. Confidence nearly returns to its early 1990s level in the early 2000s before beginning a steady downward trend that persists today. 7 An alternate calculation, as used by Durr, Martin and Wolbrecht (2000), is: 100 + (% Confident - % Not Confident). These two series are correlated at ρ=.99. Both calculations omit respondents who respond with the middle option. 78 Confidence in Supreme Court Diffuse Ephemeral 0.8 0.7 0.6 0.5 1980 1990 2000 2010 1980 1990 2000 2010 Figure 4.1: Support for the United States Supreme Court. Circles represent values from individual surveys. Line indicates estimated confidence. Left plot displays values for the diffuse series; right for ephemeral. Larger values indicate a larger percentage of survey respondents reporting that they are confident in the Court. Thus far the narrative has noted that the diffuse series should be more “stable” than the ephemeral series. To be clear, this does not mean that the diffuse series should display less movement than the ephemeral, nor does it suggest it should be flat. Again, movement in support over time is expected. That the diffuse series “moves” more in Figure 4.1 is consistent with expectations regarding legitimacy, provided that movement is reflective of true variance in diffuse support, as opposed to short-term volatility. Further, both series are subjected to the smoothing process; while we would expect less volatility in a series that approximates diffuse support (i.e., the diffuse series), variation has not been “smoothed” out by the Kalman smoother. Before moving on to test the institutionalization theory put forward above, I first determine whether observed movement in the two confidence series is a product of short-term volatility or actual changes in the level of confidence in the Court. 79 4.3.2 Variable Measurement & Coding In order to test if there are contemporaneous or short-term forces that impact Court support, which would be inconsistent with classical legitimacy theory, I move to autoregressive distributed lag models.8 First, I introduce the variables that may induce movement in support for the Supreme Court. A list of coding and data sources for all variables appears in the supplemental information. The two dependent variables, both termed Confidence, are described above. 
Because the visibility of the Court itself ebbs and flows, it is possible that changes in support follow these fluctuations (Caldeira, 1986). There are different types of media attention, such as focus on cases and on vacancies or other Court events. To account for various types of attention, I use multiple indicators; however, given the small number of observations, there are degrees of freedom concerns. As such, Media Attention is the values from a principal components analysis that includes two measures of media scrutiny. The first is the average number of newspapers in which each Court decision was featured in a given year, as calculated by the Case Salience Index (Collins and Cooper, 2012), which accounts for attention to Court outputs. The second is the amount of news coverage (in minutes) that the evening news programs of ABC, CBS, and NBC dedicated to the Court each year (Vanderbilt University Television News Archive). If diffuse support is indeed diffuse, we would not expect the public's evaluations of the Court to move with its conspicuousness. I also include Presidential Election, an indicator for years in which an election is held, as the Court is more frequently mentioned during the presidential election season as candidates discuss how they may act if there is a vacancy should they be elected, litmus tests, agreement with past Court decisions, etc.

It is also possible that when an individual indicates support for the Court, they are indicating support for American institutions generally (Caldeira, 1986). As Gibson, Caldeira and Spence (2003a) write, "...confidence in institutions is typically not institution specific - indicating instead more general attitudes toward institutions." Thus, Presidential Approval Index, which uses Gallup's approval ratings, is the difference between those who approve of the President and those who do not. It is well established that there is some relationship between presidential popularity and the state of the economy (Norpoth, Lewis-Beck and Lafay, 1991; Burden and Mughan, 2003); such a relationship may not be institution specific (Caldeira, 1986; Ura, 2014). Economic Performance is the predicted values from a principal components analysis that includes inflation (yearly average of the consumer price index) and unemployment (from the Bureau of Labor Statistics).9 Further, there are some political attitudes one should reasonably expect to covary with support. Specifically, orientations toward government in general should reflect upon the judiciary. This is particularly important in order to differentiate a series that reflects diffuse support from one that covaries with nothing. In light of the meaningful implications and powerful effect of political trust (e.g., Hetherington, 1998, 2005), I include Trust, which is the American National Election Studies' trust index. The general expectation is that as trust in government increases, so too will trust in the Supreme Court. Finally, many scholars show that the ideological distance between the Court and the public impacts the level of support expressed for the Court (Durr, Martin and Wolbrecht, 2000). More specifically, when the preferences of the Court diverge from those of the public, fewer people express support, at least in the short term (Ura, 2014). I remain agnostic on the effect of this variable.

8 Appropriate time series diagnostics appear in the supplemental materials. To address endogeneity and reverse causality, Granger causality tests also appear in the supplemental materials.
On the one hand, it might suggest ephemeral support, as opposed to diffuse support, if people are willing to change their attitudes quickly. Yet, it is an attitude explicitly related to the Court and a reasonable one by which the public may develop or alter their opinions. Court-Public Ideological Divergence measures the distance between Stimson's (1991) policy mood and Martin and Quinn's (2002) Court ideology score, as measured by each term's median justice.10

9 These results, as well as those for Media Attention – both of which suggest unidimensionality – appear in the supplemental information.

10 Consistent with the measurement strategy used by Durr, Martin and Wolbrecht (2000), Divergence = -100 × [Stimson's Mood - E(Stimson's Mood)] × [Median Ideology - E(Median Ideology)], where E indicates the expected value.

4.3.3 Measurement Strategy: Analysis and Results

Before continuing to test the theory that the congressional decision to institutionalize the judiciary relies on ephemeral, but not diffuse, public support, I must first determine whether the confidence series relate to temporal political considerations, which are theoretically at odds with diffuse support. To do so, I estimate autoregressive distributed lag models, where I regress, separately, the diffuse and ephemeral series onto both concurrent and lagged variables (Presidential Election is not lagged). All series containing a unit root are differenced. A lagged dependent variable is included to account for autocorrelated errors. Table 4.1 displays the results of these regressions; the results of the ephemeral series appear on the right and those of the diffuse series on the left.

Table 4.1: ADL Models of Effects on Confidence in the Supreme Court

Variable                            Without "people running"     With "people running"
                                    (diffuse series)             (ephemeral series)
ΔConfidence(t-1)                    −0.465* (0.160)              −0.044 (0.156)
Media Attention                      0.001 (0.006)                0.002 (0.005)
Media Attention(t-1)                 0.008 (0.006)                0.013* (0.005)
Election Year                       −0.006 (0.014)                0.010 (0.012)
ΔIdeological Divergence              0.000 (0.000)                0.000 (0.000)
ΔIdeological Divergence(t-1)         0.000* (0.000)               0.000 (0.000)
ΔEconomic Performance                0.025 (0.014)               −0.003 (0.011)
ΔEconomic Performance(t-1)           0.003 (0.015)                0.004 (0.011)
ΔApproval Index                      0.000 (0.000)                0.001* (0.000)
ΔApproval Index(t-1)                 0.000 (0.000)               −0.001* (0.000)
ΔTrust Index                         0.005 (0.003)               −0.005 (0.003)
ΔTrust Index(t-1)                    0.008* (0.003)               0.008* (0.003)
Constant                             0.001 (0.008)               −0.005 (0.007)
Adjusted R²                          0.35                         0.26
Portmanteau Test                     0.99                         0.63
Breusch-Godfrey Test                 0.13                         0.46
Standard errors in parentheses. * denotes p < 0.05 with respect to a two-tailed test.

I begin with the diffuse series on the left side of Table 4.1. No substantive variables appear as statistically significant, save for trust and ideological disagreement, the former a durable orientation toward government and the latter an evaluation of the public's position vis-à-vis the Supreme Court. In other words, media attention, presidential approval, economic performance, and election season have no short-term effect on confidence when measured with survey items free from the "people running" ambiguity. Finally, we would not expect individuals who distrust the government to be supportive of the Court, nor is it reasonable to expect a Court wildly divergent from public preferences to remain supported. This is precisely what is borne out in the results. These findings suggest that confidence, when properly measured, reflects support that is broad and rigid. In other words, this series is related to the things we might expect and unrelated to those things with which it should not share movement.11

Moving to the ephemeral series on the right, a different story unfolds. I discover that there are contemporaneous short-term forces that impact the level of confidence in the Supreme Court. As expected, when a larger portion of the public approves of the President, a larger portion of the public indicates support for the Supreme Court. As Gibson, Caldeira and Spence (2003a) warned, when measured by survey items with the problematic question wording, confidence reflects support for institutions, not an institution. Further, an increase in media attention to the Supreme Court in the preceding year is related to an increase in support for the Court. The more exposed the public is, the more they support the Court. While the effect here is positive (i.e., media attention leads to increased support), the opposite implication is damning. That is, it is possible that a lack of attention to the Court could lead to dwindling support. These findings suggest that confidence as measured by the survey item with the problematic clause does not accurately measure diffuse support. Again, for support to be diffuse, apolitical and non-Court-related political factors must not drain (or fill) the reservoir of goodwill. Instead, this survey item appears to measure, as Grosskopf and Mondak suggest, something closer to immediate approval or perhaps specific support. Scholars who have called into question the utility of this measure appear to be correct in their scrutiny. These results suggest that there is indeed some effect of question wording on the way survey respondents interpret confidence questions. It seems that the "people running" clause indicates to people that they should consider the current regime – perhaps the sitting Chief Justice or a few noteworthy or outspoken Justices – as opposed to the institution itself over a broader period of time.12

11 Diagnostic tests found in the supplemental materials show that multicollinearity is not problematic, as the largest variance inflation factor is 2.65, well below general rules of thumb (e.g., Fox, 2015).

12 One may argue that the smaller sample sizes in the GSS series bias these comparative results. I don't believe this is a concern for a few reasons. First, due to the National Opinion Research Center's resources, it is reasonable that the GSS survey error is likely to be lower in the first place. That is, I suspect that if the GSS used appropriate question wordings, their estimates would be closer to latent support. Second, the combined sample sizes of the diffuse series only begin to dwarf those in the GSS in the landline and cell phone sampling era. Finally, this criticism is still congruent with my argument – confidence, when properly measured, reflects long-term support and is not subject to economic, political, or social whims.

4.4 Supreme Court Institutionalization

Finally, having devised an appropriate measurement strategy capable of separating diffuse and ephemeral support in confidence data, I turn to testing the theory that only provisional support leads to congressional willingness to institutionalize the Court.
To recapitulate, in their investigation of the determinants of congressional support for the Supreme Court, Ura and Wohlfarth (2010) defend their use of the ambiguity-laden confidence measure on theoretical, empirical, and practical grounds (947). The authors' pragmatism is compelling (i.e., no other measures of over-time support are available); however, the evidence is clear that the measure is problematic (e.g., Gibson, Caldeira and Spence, 2003a). Their findings suggest that Congress' willingness to grant resources and discretion to the Supreme Court hinges upon public support both for the Court and for the legislature. They comment that "public confidence in the Supreme Court uniquely explains twelve percent of the observed variance in changes to the Court's level of" institutional capacity.

Below, I produce three models of Court institutionalization (i.e., institutional capacity): (1) a replication of Ura and Wohlfarth (2010) using their aggregate confidence series (i.e., the error-laden confidence question with an alternate method for interpolating missing data), (2) one using the ephemeral series operationalization of confidence, and (3) one using the diffuse operationalization of confidence. The dependent variable, Supreme Court Institutionalization, comes from Ura and Wohlfarth's (2010) augmentation of McGuire (2004); all additional variables are measured as detailed in Ura and Wohlfarth (2010).

The expectation is that the ephemeral series will replicate the findings in Ura and Wohlfarth (2010), but that the third column, using corrected confidence, will not. That is, I expect measures of pro tempore, impermanent support to predict institutional capacity. Again, I do not anticipate that diffuse support can reasonably be accessed by members of Congress when making funding decisions; they instead rely on readily available assessments of constituent satisfaction. Therefore, I expect corrected confidence to produce a null finding. In keeping with Ura and Wohlfarth (2010), I utilize error correction models to account for both short- and long-term effects and use Newey-West standard errors. Table 4.2 displays these results. Note that sample sizes differ because the error-laden confidence series extends back to 1973; see the supplemental materials for more information.

Table 4.2: Error Correction Models of Supreme Court Institutionalization

                                      Ura & Wohlfarth     'With' Series       'Without' Series
Variable                              Coef. (Std. Err.)   Coef. (Std. Err.)   Coef. (Std. Err.)
Long Run Effects
  Confidence in Court t−1             12.81* (4.92)       17.36* (6.34)        0.672 (5.50)
  Confidence in Congress t−1          −7.152* (2.95)      −7.07* (3.00)       −1.433 (2.42)
  Congress-Court Ideo. Distance t−1   −0.177 (1.04)       −0.065 (1.42)        0.051 (1.90)
  Docket Size (Thousands) t−1          0.000 (0.00)        0.000 (0.00)        0.000 (0.000)
Short Run Effects
  ∆Confidence in Court                 2.638 (3.64)       −4.129 (6.27)       −9.162 (5.84)
  ∆Confidence in Congress             −2.289 (2.41)        1.184 (3.44)        0.597 (3.81)
  ∆Congress-Court Ideo. Distance       0.067 (1.62)        2.006 (2.05)       −0.209 (2.09)
  ∆Docket Size (Thousands)             0.001* (0.000)      0.001 (0.00)        0.002* (0.000)
Constant                             −14.96* (5.67)        1.683 (2.85)        1.171 (4.16)
N                                      29                  29                  25
* denotes p < 0.05 with respect to a two-tailed test. Newey-West standard errors in parentheses.
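For concreteness, the sketch below shows one way a single-equation error correction model with Newey-West standard errors, of the sort reported in Table 4.2, might be estimated. The file name (institutionalization.csv) and all column names are hypothetical placeholders rather than the actual replication data, and the lag length for the HAC correction is an assumption.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical annual series: Court institutionalization, confidence in the
# Court and in Congress, Congress-Court ideological distance, and docket size.
df = pd.read_csv("institutionalization.csv")

dy = df["institutionalization"].diff()               # short-run change in the DV
X = pd.DataFrame({
    "ec_term":           df["institutionalization"].shift(1),  # error correction (lagged level)
    "conf_court_lag":    df["conf_court"].shift(1),            # long-run effects
    "conf_congress_lag": df["conf_congress"].shift(1),
    "ideo_distance_lag": df["ideo_distance"].shift(1),
    "docket_lag":        df["docket_thousands"].shift(1),
    "d_conf_court":      df["conf_court"].diff(),              # short-run effects
    "d_conf_congress":   df["conf_congress"].diff(),
    "d_ideo_distance":   df["ideo_distance"].diff(),
    "d_docket":          df["docket_thousands"].diff(),
})

# Newey-West (HAC) standard errors, as in the models reported above.
ecm = sm.OLS(dy, sm.add_constant(X), missing="drop").fit(
    cov_type="HAC", cov_kwds={"maxlags": 1})
print(ecm.summary())
```

The same specification is estimated three times, swapping in the Ura and Wohlfarth confidence series, the ephemeral series, and the diffuse series.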
The Ura and Wohlfarth (2010) model on the left and the model at center produce similar results and lend support to the implications found in Ura and Wohlfarth (2010). This is encouraging, as it shows that the ephemeral confidence series generated using the Kalman procedures is not meaningfully different from the confidence series produced by Ura and Wohlfarth (2010), who used a different interpolation method. This bolsters the claim that differences between the ephemeral and diffuse series derive from question wording. That is, Supreme Court institutionalization does indeed depend on public support – specifically, short-term support – for both Congress and the Court. However, the model on the right, which uses the diffuse operationalization of confidence – free from the question wording ambiguity and more reflective of diffuse support – tells a different story. Simply, confidence that reflects diffuse support is not a predictor of Court institutionalization.[13] This highlights the differences between the ephemeral and diffuse series.

Inasmuch as deeply held political orientations (such as diffuse support) are difficult to assess, current (dis)satisfaction with the Supreme Court (i.e., short-term support) is a more reasonable proxy by which Congress would judge public sentiment toward the Court. This is precisely what the model in Ura and Wohlfarth (2010) reveals, but the dual nature of confidence data makes it difficult to reach that conclusion. As noted above, they argue that confidence represents "an institution's changing status in the public's mind...separate from more temporal political concerns" (947). The analysis above demonstrates that the diffuse series clears this definition's bar, but the ephemeral series does not. This indicates that an operationalization of confidence that accounts exclusively for diffuse support orientations is unrelated to Supreme Court institutionalization. Because testing the same hypothesis (i.e., that support influences institutionalization) with enduring political attitudes – the diffuse series – versus short-term support yields opposing conclusions, it follows that the findings in Ura and Wohlfarth (2010) can be attributed to specific support. If Congress is assessing public attitudes toward the Court, it is tapping performance satisfaction. Bluntly, congressional institutionalization of the judiciary depends more on short-term attitudes toward the Court than on long-term orientations.

[13] Failing to find significance on the confidence in Congress variable is not likely due to the same question wording effects detailed above. There are indeed "people running" Congress (i.e., congressional leadership), making that question less error-laden.

4.5 Conclusion

My primary objective was to argue that the relationship between Congress and the mass public that underpins the theory of Supreme Court institutionalization has been misjudged. When testing this theory, previous research relied on a measure – confidence in the Court – that is subject to measurement problems and incorporates both short- and long-term assessments of the institution (see Gibson, Caldeira and Spence, 2003a). Although wary of explicitly stating that confidence measures diffuse support, researchers who utilized this measure still grounded their studies in the language of institutional legitimacy. Doing so implies that members of Congress are able to retrieve a very particular type of information from citizens when gauging the level of positivity to make Court empowerment decisions.
That is, suggesting that confidence in the Court reflects legitimacy in any meaningful way, and that members of Congress assess the level of public confidence in the judiciary when making resource decisions, necessarily argues that Congresspersons can gauge legitimacy. This task – a tall one even academically – is difficult to defend. As Tyler (2006) notes, "Legitimacy is a psychological property" that leads individuals to believe that an institution is "appropriate, proper, and just" (375). Simply, it is hard to imagine that a resource-constrained legislator is able to undertake the onerous task of determining her constituents' psychological assessments of the judiciary's propriety and justness.

Instead, I argue that Congress heeds only short-term attitudes when determining whether to fund and offer deference to the judiciary. Such attitudes regarding the judiciary are much more plausibly accessible to legislators. Consuming popular and social media, surveying constituents, and fielding personal communication are all methods by which legislators could determine how their constituents feel about the judiciary. Scaling multi-item survey batteries, or assessing a psychological orientation via other means, is far less likely.

But, in order to test the theory that Congress relies on ephemeral support when making resource decisions, I first had to construct a valid, differentiable over-time measure of support for the Supreme Court by salvaging what is available in the confidence data. By using exclusively survey items that avoid the ambiguous "people running the institution" clause, I produced estimates that appropriately measure confidence and that reflect more persistent levels of support. I demonstrate that the series generated using the Kalman procedures, with the unambiguous survey items used as observation data, does not vary with changes in the political, social, media, or economic environments. On the other hand, a series using ambiguous survey items – items used frequently in this line of research – was shown to vary with factors outside of the Court's own control, suggesting it does not properly measure more deep-rooted concepts like confidence or institutional support.

Using this new measurement strategy, I replicated and expanded upon Ura and Wohlfarth (2010), providing evidence for the hypothesis that confidence that reflects diffuse support is not appropriate for theories that invoke shorter-term evaluations of public attitudes. More specifically, Congresspersons are unlikely to assess the public's deeply held beliefs about a political institution and are more likely to rely on fleeting sentiments. This is borne out in the data, where the ephemeral confidence series containing short-term volatility is a predictor of congressional institutionalization of the Court, but diffuse confidence, free from that built-in volatility, is not.

There are two major implications of this work, the first substantive and the second empirical. First, it is normatively troublesome that Congress appears to make decisions regarding the independence of the United States judiciary based on the political caprice of the mass public. While it is difficult to fault Congress – again, measuring deeply held beliefs is challenging even for social scientists – the fact remains that determinations about the appropriate level of Supreme Court institutionalization rely on mutable and potentially turbulent evaluations of satisfaction with the judiciary.
Further, these results muddy the argument that public opinion provides information as to preferences regarding institutional alignments. That is, it is unclear whether the public intends for distaste with a particular decision or set of decisions to be a signal to Congress to de-institutionalize the Court. Stated differently, if performance dissatisfaction is not meant as a signal, but Congress interprets it as one, the Court suffers because Congress is out of step with the public.

The second implication of this work is empirical in nature. The results here exhibit the pressing need for an aggregate measure of diffuse support. Analyses that use error-laden confidence as a proxy for diffuse support fail to accurately test aggregate theories of legitimacy. Previously, this was done out of necessity, as no measures of aggregate support free from the specific-support error were in use. As shown, the confidence series produced here offers researchers a tool to use in testing theories of public-level diffuse support. Data capable of supporting over-time analyses are crucial for studies of Supreme Court legitimacy, as well as theories that suggest public opinion impacts Supreme Court behavior, other institutional decisions, and the interaction thereof.

Legitimacy theory stands among the most normatively important concepts in judicial politics. For the three branches of government to operate synchronously, each must have the authoritative right to make decisions. And while the legitimacy of all institutions waxes and wanes with their support amongst the public, only the Court suffers from an institutionally designed lack of legitimacy. Because the executive and Congress are elected, their offices are replete with legitimacy by virtue of free and fair elections. Further, elected officials' desire for reelection (Mayhew, 1974) leaves the Court in the unique and precarious position of playing the countermajoritarian role. Conversely, the Court must build and maintain its esteem. Thus, the reservoir of goodwill is necessary when the institution makes decisions that may counter the preferences of an individual or of the public and when perceptions of those decisions in turn impact the Court's ability to act. With such data in hand, researchers can begin to examine longitudinal support for the Court and its importance vis-à-vis other institutional actors free from error-laden measures.

APPENDIX

The Advantage of the Kalman Procedures

Kalman filtering and smoothing – procedures used when a model is set up in state space form – are methods long advocated by scholars as a way to accurately measure the dynamic attitudes or opinions of the mass public (Beck, 1989; Green, Gerber and De Boef, 1999). The Kalman filter uses a series of over-time observations, each of which contains noise and other inaccuracies (like measurement and survey error), and generates an estimate of a latent trait (θ) for each time frame (θt; here t = year) that is more precise than an estimate from any single measurement. Although the advantages of these procedures are plenty, most important to the current purpose is the ability to handle missing data, reduce potentially biasing survey effects, and produce more accurate measures of public attitudes. Creating a series from multiple questions across several survey groups, as opposed to one question from one organization, increases the number of over-time observations, which aids in the generation of more precise estimates, as does accounting for the errors associated with those individual measurements.
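As an illustration of the kind of input these procedures consume, the sketch below (with entirely hypothetical survey houses, percentages, and sample sizes) organizes several administrations of unambiguous confidence items into observations with their associated sampling-error variances, which is the error information the filter exploits when weighting each poll.

```python
import pandas as pd

# Hypothetical poll-level input: one row per administration of an unambiguous
# confidence item, with the survey house, year, percentage, and sample size.
polls = pd.DataFrame({
    "year":          [1995, 1995, 1996, 1998],
    "house":         ["Gallup", "PSRA", "Gallup", "CBS"],
    "pct_confident": [44.0, 41.0, 46.0, 43.0],
    "n":             [1012, 751, 1005, 889],
})

# Sampling-error variance for each observation (percentage scale); noisier,
# smaller-n polls receive correspondingly less weight in the filter.
polls["obs_var"] = polls["pct_confident"] * (100 - polls["pct_confident"]) / polls["n"]
print(polls)
```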
Indeed, Beck (1989) highlights some of these major advantages, stating that "The Kalman filter comes into its own when we actually care about the error process" and that when measurement issues "are central...then Kalman filtering of models in [state space form] can be invaluable" (147-148).

The Kalman procedures take observed values from the past, present, and future to construct estimates of the state vector, or latent trait (here, confidence). The transition equation describes how past and present values relate to one another (i.e., random walk, AR(1), etc.; here, a random walk), and generally assesses the speed at which opinion is changing. Finally, the measurement equation, comprised of observed input data, relates the latent state values to the observed values.

More intuitively, the Kalman filter is an adjustment process in the transition equation that improves final predictions. Specifically, when there are no observations at a particular time (t), the estimated value of the latent trait (θt) is the value predicted by the transition equation. When there is an observation at a particular time (t), the estimate of θt is the average of the predicted value and the observed value, weighted by both the observation and transition equations' error variances; the estimate approaches the observed value as the sample size increases. This process extends to future values via the Kalman smoother.[14]

[14] There are several sources to consult for technical expositions. To list only a few: Beck (1989); Harvey (1990); Green, Gerber and De Boef (1999); Harrison and West (1999); Commandeur and Koopman (2007); Shumway and Stoffer (2010).

Once the state space is initialized, the procedure uses observed percentages, as well as predicted percentages, to generate values for each year. The Kalman smoother then uses future values to fine-tune the estimates. Because the underlying model is a random walk, large changes across subsequent time periods are not expected; the smoothing feature utilizes the past and future values to build in certainty that movement in the estimated series is indeed a product of movement in the latent construct, not of sampling or measurement error. That is, assuming consistent sample sizes, unbiased surveys, and so on, if an observed value at time period t is 5%, at t+1 is 15%, and at t+2 is 6%, the estimated value at t+1 would be closer to those at t and t+2 than to its observed value.

There are a few technical specifications required. First, for starting values I use the arithmetic means, and the starting variance is set at 25. The observed variances range from 0.022 to 3.09; a variance of 25 is chosen as a very conservative estimate. See the supplemental materials for alternate specifications. Further, the underlying model of the state space equation is a random walk, which is chosen as a way to remain agnostic with regard to point predictions.

Kalman Filter Information

The basic form of the Kalman filter and smoother in state space form is as follows. yt represents a sample percentage y at time t from a particular survey; yt is a function of the true, unobserved percentage of interest, θt, plus random sampling error, εt. When the sample size (nt) is sufficiently large, we can assume that the error term is approximately normal:

yt = θt + εt, where εt ∼ N(0, yt(1 − yt)/nt)   (4.5.1)

Equation (1) is the observation equation. Also involved in the estimation of θt is a transition equation (2) that details how past values relate to present values (and likewise for present and future values). Here, θt is specified as a random walk, meaning the transition equation is:

θt = θt−1 + ωt, where ωt ∼ N(0, σ²ω)   (4.5.2)

After using the observed value, the observation equation (1), and Bayes' theorem, the procedure uses the Kalman filter to fine-tune the prediction. When there are no observations at a particular time (t), the estimated value of θt is the value predicted by the transition equation (2). When there is an observation at a particular time (t), the estimate of θt after the Kalman filter is the average of the predicted value and the observed value, weighted by both the observation and transition equations' error variances; the estimate approaches the observed value as the sample size increases. These estimates are then adjusted via backward smoothing, a procedure that considers the future observations along with the transition equation to tweak the past estimates of θ.
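The following is a minimal sketch, under the assumptions just described (a random-walk transition and independent sampling error), of these filtering and smoothing recursions for a single yearly series. It is illustrative rather than the exact estimation routine used here; the input numbers and the transition-error variance q are hypothetical.

```python
import numpy as np

def filter_smooth(y, obs_var, init_mean, init_var, q):
    """Random-walk Kalman filter and RTS smoother for one survey series.

    y        : yearly observed percentages (np.nan where no poll exists)
    obs_var  : observation-error variances, e.g. p*(100-p)/n (ignored when y is nan)
    init_mean, init_var : starting value and variance for the latent state
    q        : transition (random walk) error variance
    """
    T = len(y)
    m = np.empty(T); P = np.empty(T)            # filtered means and variances
    m_pred = np.empty(T); P_pred = np.empty(T)  # one-step-ahead predictions
    prev_m, prev_P = init_mean, init_var
    for t in range(T):
        # Prediction: under a random walk, the prediction is last period's estimate.
        m_pred[t], P_pred[t] = prev_m, prev_P + q
        if np.isnan(y[t]):
            # No poll this year: carry the prediction forward.
            m[t], P[t] = m_pred[t], P_pred[t]
        else:
            # Update: precision-weighted average of prediction and observation.
            k = P_pred[t] / (P_pred[t] + obs_var[t])   # Kalman gain
            m[t] = m_pred[t] + k * (y[t] - m_pred[t])
            P[t] = (1 - k) * P_pred[t]
        prev_m, prev_P = m[t], P[t]
    # Backward (RTS) smoothing: future observations refine the past estimates.
    s_m, s_P = m.copy(), P.copy()
    for t in range(T - 2, -1, -1):
        c = P[t] / P_pred[t + 1]
        s_m[t] = m[t] + c * (s_m[t + 1] - m_pred[t + 1])
        s_P[t] = P[t] + c ** 2 * (s_P[t + 1] - P_pred[t + 1])
    return s_m, s_P

# Hypothetical 'not confident' percentages with a missing year and varying n.
y = np.array([18.0, np.nan, 21.0, 30.0, 20.0])
n = np.array([1000, np.nan, 800, 600, 1500])
obs_var = y * (100.0 - y) / n                   # sampling variance, percentage scale
theta, theta_var = filter_smooth(y, obs_var,
                                 init_mean=np.nanmean(y), init_var=25.0, q=1.0)
print(np.round(theta, 2))
```

As in the text, the starting variance of 25 is a deliberately diffuse choice, and the smoothed estimate for the high, isolated observation is pulled back toward its neighbors.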
Now, turning to a general form of the linear Gaussian state space model, we can see how this procedure incorporates survey error and the possibility of exogenous regressors:

yt = Ht xt + At zt + εt, where εt ∼ N(0, Rt)   (4.5.3)
xt = Ft xt−1 + Gt ut + wt, where wt ∼ N(0, Qt)   (4.5.4)

where t = 1, 2, ..., T. The observation equation (3) tells us how yt, our observed survey data, relates to our latent parameter of interest, xt, exogenous regressors zt, and the error term εt. εt is each survey's observation error, whose variance matrix, Rt, is able to include estimates of the survey's sampling error. The transition equation (4) tells us how the state variables change temporally as a function of their previous values (xt−1) and exogenous regressors (ut). The researcher is able to specify the number of lags. The researcher also sets the initial values (x0) to either a probability distribution or a particular value, and the transition equation is initialized via the Kalman filter.

Question Wordings

Below, question wordings are listed by the organization that fielded the survey, not the institution for whom the questions were asked. For instance, in 2000 Hart and Teeter fielded a survey for NBC/Wall Street Journal; the question wording from that survey appears below under Hart and Teeter. These question wordings were used to create the diffuse series.

Table 4.3: Question wordings

Gallup
  "I am going to read you a list of institutions in American society. Would you tell me how much confidence you, yourself, have in each one?" (1977, 1981, 1983-1988, 1991, 1993-1994, 1996-2002, 2008)
  "How much confidence do you, yourself, have in these American institutions?" (1978)
  "Would you tell me how much confidence you, yourself, have in:" (1980-1981)
  "Now I am going to read you a list of institutions in American society. Please tell me how much confidence you, yourself, have in each one." (1995, 2003-2007, 2009-2014)

CBS/New York Times
  "As you know, our federal government is made up of three branches: an Executive branch, headed by the President, a Judicial branch, headed by the US Supreme Court, and a Legislative branch, made up of the US Senate and House of Representatives. Let me ask you how much trust and confidence you have at this time in the Judicial branch consisting of the US Supreme Court" (1998-2000, 2002-2014)
  "I am going to read you a list of institutions in American society. Would you tell me how much confidence you, yourself, have in each one?" (1981)
  "How much confidence do you yourself have in the United States Supreme Court?" (2000, 2004)

ABC/Washington Post
  "I'm going to mention the names of some institutions in American society. Would you tell me how much confidence you, yourself, have in each one?" (1981, 1991, 2000)

CBS News
  "How much confidence do you yourself have in the United States Supreme Court?" (2000-2001, 2005-2006, 2012)

Washington Post
  "Now, I'm going to mention the names of some institutions in American society. Would you tell me how much confidence you, yourself, have in each one?" (1991)
  "I'm going to read you the names of some institutions in American society. Please tell me how much confidence you, yourself, have in each one." (2002)

Princeton Survey Research Associates
  "I'm going to read you the names of some institutions in American society. Please tell me how much confidence you, yourself have in each one." (1995, 2000)
  "I am going to read you a list of institutions in American society. Please tell me how much confidence you, yourself, have in each one." (2012)

Hart and Teeter Research Companies
  "I am going to read you a list of institutions in American society. Would you tell me how much confidence you, yourself, have in each one?" (1997, 1999-2000)
  "I am going to read a list of institutions in American society, and I'd like you to tell me how much confidence you have in each one." (2000-2001, 2006)
  "How much confidence do you have in the Supreme Court?" (2005)
  "Now I'm going to list some institutions in American society, and I'd like you to tell me how much confidence you have in each one." (2009, 2012)
  "I'm going to list some institutions in American society, and I'd like you to tell me how much confidence you have in each one." (2009, 2012, 2014)

International Communications Research
  "I'm going to read you the names of some institutions in American society. Please tell me how much confidence you, yourself, have in each one." (2000)

Belden, Russonello & Stewart
  "I am going to read you a list of institutions and groups. Please tell me how much confidence you, yourself, have in each one." (2007)

Variable Coding and Sources

Table 4.4: Variable coding and sources

Diffuse Series: Product of the Kalman filter and smoother. Input data are from survey items administered 68 times from 1977-2014. Each survey question is free from the 'people running' ambiguity. "A Great Deal" and "Quite a Lot" are combined into Confident; "Very Little" and "None" are combined into Not Confident. Both the low and high confidence series are then input into the state space model. Once the resulting series are produced, the final series is calculated as Without = %Confident / (%Confident + %Not Confident).

Ephemeral Series: Product of the Kalman filter and smoother. Input data are from the National Opinion Research Center's General Social Survey. This survey item includes the 'people running' ambiguity. The GSS offers only three response options. The final series is calculated as With = %Confident / (%Confident + %Not Confident).

Media Attention: Predicted values from a principal components analysis of (1) the average number of citations per Supreme Court case in a given year per the Case Salience Index (Collins and Cooper, 2012) and (2) the number of minutes per year that the evening news programs of ABC, CBS, and NBC spent discussing the Supreme Court. Data from the Vanderbilt University Television News Archive.
Presidential Approval Index: The difference between the percentage of people saying they approve of the job the president is doing and the percentage saying they do not approve. Data from Gallup.

Court-Public Ideological Divergence: The difference between Stimson's (1991) Mood and the Martin-Quinn (2002) Supreme Court ideology scores. The Supreme Court ideology for 2005 is the average of 2005a and 2005b. Calculated as Divergence = −100 × [Stimson's Mood − E(Stimson's Mood)] × [Median Ideology − E(Median Ideology)], where E indicates the expected value.

Economic Performance: Predicted values from a principal components analysis of (1) the Consumer Price Index and (2) yearly unemployment. Data from the Bureau of Labor Statistics.

Presidential Election: Indicates a year in which there was a presidential election.

Trust Index: The American National Election Studies' trust index. Missing years imputed using a state space model and the Kalman filter. Correlation with linear imputation ρ = 0.9946.

Robustness to Alternate Starting Values

To demonstrate that the confidence series produced using the Kalman processes are not a product of the researcher-chosen starting values, I present the correlations between 'not confident' series – which was a constitutive part of the diffuse series – using alternate specifications.[15] Figure 4.2 displays the correlation between series using the different starting values along the axes and Figure 4.3 displays the correlation between series using different variances around the starting values. Finally, the correlation between the two series at the extremes (i.e., (σ = 15, µ = 25) & (σ = 35, µ = 5)) is 0.982. In other words, the final series are robust to various starting values and variances. Given the strength of the relationship between series using different values, I only present these alternate specifications for the 'not confident' series.

[15] Although Pearson correlations can occasionally produce spurious results using time series, these series are indistinguishable from one another when displayed graphically. In other words, the high correlation coefficients aptly describe how interrelated the series using different starting values are.

[Figure 4.2: Correlation between alternate starting values for the 'not confident' series (axes: starting values, 5 to 25; correlations range from roughly 0.980 to 1.000). 16.67 is the mean value and the value used to initialize in the main text.]

[Figure 4.3: Correlation between alternate starting variances for the 'not confident' series (axes: starting variances, 15 to 35; correlations all above 0.99995). 25 is the variance used in the main text.]

Time Series Diagnostics: Unit Root Tests

Before performing regressions, I first conduct unit root tests to determine whether these series are stationary. Table 4.5 displays the integration orders for all series. On the left are the series included in the diffuse analysis and, on the right, the ephemeral analysis. The differences between the confidence series are detailed in the main text; the series on the left are from 1973-2014, and those on the right are from 1977-2014. Confidence (as expected due to the random walk specification), Court-Public Ideological Divergence, and Inflation all contain a unit root.

Table 4.5: Augmented Dickey-Fuller Unit Root Tests

                                  Integration Order
Variable                          Diffuse   Ephemeral
Confidence                        1         1
Media Attention                   0         0
Pres. Approval Index              0         0
Court-Public Ideo. Divergence     1         1
Inflation                         1         1
Presidential Election             0         0
Trust Index                       1         1
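For reference, the sketch below shows one way the integration orders reported in Table 4.5 might be checked with augmented Dickey-Fuller tests. The file name and column names are hypothetical placeholders for the series described above, and the 0.05 cutoff is simply the conventional threshold.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical data frame holding the series used in the ADL models.
df = pd.read_csv("confidence_series.csv")

for col in ["confidence", "media_attention", "approval",
            "ideo_divergence", "econ_performance", "trust_index"]:
    series = df[col].dropna()
    stat, pvalue = adfuller(series)[:2]
    # Failing to reject the unit-root null means the series is differenced
    # before entering the regressions.
    order = 0 if pvalue < 0.05 else 1
    print(f"{col}: ADF p = {pvalue:.3f}, treat as I({order})")
```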
Regression and Time Series Tests

Table 4.6: Multicollinearity Diagnostics

                                  Variance Inflation Factor
Variable                          Diffuse   Ephemeral
∆Trust Index                      2.65      2.28
∆Trust Index t−1                  2.33      1.98
∆Approval Index                   2.13      1.35
∆Approval Index t−1               1.77      1.57
∆Ideological Divergence           1.36      1.71
∆Ideological Divergence t−1       1.39      1.48
∆Economic Performance             1.45      1.58
∆Economic Performance t−1         1.33      1.42
∆Confidence t−1                   1.43      1.32
Media Attention                   1.43      1.28
Media Attention t−1               1.24      1.29
Presidential Election             1.39      1.29

Table 4.7: Autocorrelation Tests for Various Lag Orders (Portmanteau Tests)

Lag Order   Diffuse   Ephemeral
1           0.4649    0.6631
2           0.7611    0.9031
3           0.9018    0.9736
4           0.9657    0.8892
5           0.9890    0.9380
6           0.9461    0.9523
7           0.9745    0.9694
8           0.9797    0.9858
9           0.9904    0.9895
10          0.9888    0.9615

Table 4.8: Effect of IVs with Alternative Lag Orders for Diffuse ECM

Variable                        Lag order:  2    3    4   1/2   1/3   1/4
Media Attention                             -    -    -    -     -     -
Media Attention t−1                         -    -    -    -     -     -
Election Year                               -    -    -    -     -     -
∆Ideological Divergence                     -    -    -    -     -     -
∆Ideological Divergence t−1                 -    -    -    -     -     -
∆Economic Performance                       -    -    -    -     -     -
∆Economic Performance t−1                   -    -    -    -     -     -
∆Approval Index                             -    -    -    -     -     -
∆Approval Index t−1                         -    -    -    -     -     -
∆Trust Index                                -    -    -    -     -     -
∆Trust Index t−1                            -    -    -    +     +     +
Constant                                    -    -    -    -     -     -
+: p < 0.05; -: p > 0.05, with respect to a two-tailed test.

Table 4.9: Granger Causality Tests for the Diffuse Series

Variable                     p-value
Confidence
  Media Attention            0.77
  Presidential Election      0.98
  Ideological Divergence*    0.01
  Economic Performance       0.48
  Approval Index             0.21
  Trust Index*               0.00
Media Attention
  Confidence                 0.29
  Presidential Election      0.81
  Ideological Divergence     0.73
  Economic Performance       0.33
  Approval Index             0.36
  Trust Index                0.74
Ideological Divergence
  Confidence                 0.85
  Media Attention            0.21
  Presidential Election      0.59
  Economic Performance       0.80
  Approval Index             0.68
  Trust Index                0.68
Economic Performance
  Confidence*                0.00
  Media Attention            0.34
  Presidential Election*     0.01
  Ideological Divergence*    0.00
  Approval Index             0.09
  Trust Index*               0.00
Approval Index
  Confidence*                0.06
  Media Attention            0.27
  Presidential Election*     0.04
  Ideological Divergence     0.26
  Economic Performance       0.37
  Trust Index                0.20
Trust Index
  Confidence                 0.27
  Media Attention            0.51
  Presidential Election      0.11
  Ideological Divergence     0.46
  Economic Performance       0.96
  Approval Index             0.91
* denotes p < 0.05 for a two-tailed test. Presidential Election is omitted because elections are scheduled.

Table 4.10: Replication of Ura & Wohlfarth for 1977-2004

Variable                             Coefficient   (Std. Err.)
∆Confidence in Court                  −0.082       (5.096)
Confidence in Court t−1               13.983*      (6.302)
∆Confidence in Congress               −0.396       (3.885)
Confidence in Congress t−1            −8.287*      (3.682)
∆Court-Congress Ideo. Distance         0.449       (1.871)
Court-Congress Ideo. Distance t−1     −0.591       (1.359)
∆Docket Size (thousands)               0.002*      (0.000)
Docket Size t−1                        0.000       (0.000)
Constant                             −14.926*      (6.811)
Error Correction                      −0.631*      (0.289)
* denotes p < 0.05 for a two-tailed test.

Table 4.11: Replication of Ura & Wohlfarth with Ephemeral Series for 1977-2004

Variable                             Coefficient   (Std. Err.)
∆Confidence in Court                  −4.129       (6.273)
Confidence in Court t−1               17.357*      (6.340)
∆Confidence in Congress                1.184       (3.440)
Confidence in Congress t−1            −7.708*      (2.997)
∆Court-Congress Ideo. Distance         2.006       (2.055)
Court-Congress Ideo. Distance t−1     −0.066       (1.422)
∆Docket Size (thousands)               0.001*      (0.001)
Docket Size t−1                        0.000       (0.000)
Constant                               1.683       (2.854)
Error Correction                      −0.653*      (0.278)
* denotes p < 0.05 for a two-tailed test.

BIBLIOGRAPHY

Abernathy, Claire Elizabeth. 2015.
Legislative Correspondence Management Practices: Congressional Offices and the Treatment of Constituent Opinion PhD thesis. Abramson, Paul R. and Charles W. Ostrom. 1991. “Macropartisanship: An Empirical Reassessment.” American Political Science Review 85(01):181–192. Ansolabehere, Stephen, James M. Snyder Jr and Charles Stewart III. 2001. “The Effects of Party and Preferences on Congressional Roll-Call Voting.” Legislative Studies Quarterly pp. 533–572. Arceneaux, Kevin. 2008. “Can Partisan Cues Diminish Democratic Accountability?” Political Behavior 30(2):139–160. Aslam, Yasmin. 2015. “#LoveWins on the Internet.” MSNBC.com. http://www.msnbc.com/msnbc/love-wins-the-internet (access 2/1/17). Online at: Avey, Paul C. and Michael C. Desch. 2014. “What Do Policymakers Want from Us? Results of a Survey of Current and Former Senior National Security Decision Makers.” International Studies Quarterly 58(2):227–246. Baird, Vanessa A. 2001. “Building Institutional Legitiimacy: The Role of Procedural Justice.” Political Research Quarterly 54(2):333–354. Baird, Vanessa and Amy Gangl. 2006. “Shattering the Myth of Legality: The Impact of the Media’s Framing of Supreme Court Procedures on Perceptions of Fairness.” Political Psychology 27(4):597–613. Bartels, Brandon L. and Christopher D. Johnston. 2013. “On the Ideological Foundations of Supreme Court Legitimacy in the American Public.” American Journal of Political Science 57(1):184–199. Beck, Nathaniel. 1989. “Estimating Dynamic Models using Kalman Filtering.” Political Analysis 1(1):121–156. Berinsky, Adam J, Gregory A. Huber and Gabriel S. Lenz. 2012. “Evaluating Online Labor Markets for Experimental Research: Amazon.com’s Mechanical Turk.” Political Analysis 20(3):351–368. Bishop, George F., Robert W. Oldendick and Alfred J. Tuchfarber. 1978. “Effects of Question Wording and Format on Political Attitude Consistency.” Public Opinion Quarterly 42(1):81–92. Black, Ryan C., Ryan J. Owens and Miles T. Armaly. 2016. “A Well-Traveled Lot: A Research Note on Judicial Travel by US Supreme Court Justices.” Justice System Journal pp. 1–18. 108 Blake, Meredith. 2016. “Stephen Colbert Pays Tribute to Supreme Court Justice Antonin Scalia.” The Los Angeles Times. 16 February 2016. Available: http://www.latimes.com/entertainment/tv/showtracker/ la-et-st-stephen-colbert-react-to-the-death-of-justice-antonin-scalia-20160216-story. html. Board, Washington Post Editorial. 2016. “Antonin Scalia’s Remarkable Legacy.” The Washington Post. 14 February 2016. Available: https://www. washingtonpost.com/opinions/antonin-scalias-remarkable-legacy/2016/02/14/ a845dfc2-d337-11e5-be55-2cc3c1e4b76b story.html. Bolsen, Toby and Judd R. Thornton. 2014. “Overlapping Confidence Intervals and Null Hypothesis Testing.” The Experimental Political Scientist 4(1):12–16. Brader, Ted. 2006. Campaigning for Hearts and Minds: How Emotional Appeals in Political Ads Work. University of Chicago Press. Brambor, Thomas, William Roberts Clark and Matt Golder. 2006. “Understanding Interaction Models: Improving Empirical Analysis.” Political Analysis 14(1):63–82. Bruine de Bruin, W¨andi, Wilbert Vanderklaauw, Julie S Downs, Baruch Fischhoff, Giorgio Topa and Olivier Armantier. 2010. “Expectations of Inflation: The role of Demographic Variables, Expectation Formation, and Financial Literacy.” Journal of Consumer Affairs 44(2):381–402. Bullock, John G. 2011. “Elite Influence on Public Opinion in an Informed Electorate.” American Political Science Review 105(03):496–515. Burden, Barry C. 
and Anthony Mughan. 2003. “The International Economy and Presidential Approval.” Public Opinion Quarterly 67(4):555–578. Caldeira, Gregory A. 1986. “Neither the Purse nor the Sword: Dynamics of Public Confidence in the Supreme Court.” American Political Science Review 80(04):1209– 1226. Caldeira, Gregory A. and James L. Gibson. 1992. “The Etiology of Public Support for the Supreme Court.” American Journal of Political Science 36:635–664. Campbell, Angus, Phillip E. Converse, Warren E. Miller and Donald E. Stokes. 1960. The American Voter. New York: Wiley. Canes-Wrone, Brandice, David W. Brady and John F. Cogan. 2002. “Out of Step, Out of Office: Electoral Accountability and House Members’ Voting.” American Political Science Review 96(01):127–140. Casillas, Christopher J., Peter K. Enns and Patrick C. Wohlfarth. 2011. “How Public Opinion Constrains the Supreme Court.” American Journal of Political Science 55(1):74–88. Christenson, Dino P. and David M. Glick. 2015. “Chief Justice Roberts’s Health Care Decision Disrobed: The Microfoundations of the Supreme Court’s Legitimacy.” American Journal of Political Science 59(2):403–418. 109 Clark, Tom S. 2009. “The Separation of Powers, Court-Curbing and Judicial Legitimacy.” American Journal of Political Science 53(4):971–989. Clark, Tom S. and Jonathan P. Kastellec. 2015. “Source Cues and Public Support for the Supreme Court.” American Politics Research . Clifford, Scott, Ryan M. Jewell and Philip D. Waggoner. 2015. “Are Samples Drawn from Mechanical Turk Valid for Research on Political Ideology?” Research & Politics 2(4):2053168015622072. Cohen, Geoffrey L. 2003. “Party Over Policy: The Dominating Impact of Group Influence on Political Beliefs.” Journal of Personality and Social Psychology 85(5):808. Collins, Todd A. and Christopher A. Cooper. 2012. “Case Salience and Media Coverage of Supreme Court Decisions Toward a New Measure.” Political Research Quarterly 65(2):396–407. Commandeur, Jacques J.F. and Siem Jan Koopman. 2007. An Introduction to State Space Time Series Analysis. Oxford University Press. Congressional Management Foundation. 2011b. “Communicating with Congress: Perceptions of Citizen Advocacy on Capitol Hill.”. Converse, Phillip E. 1964. The Nature of Belief Systems in the Mass Publics. In Ideology and Discontent, ed. David E. Apter. New York: Free Press pp. 206–261. Dahl, Robert A. 1957. “Decision-Making in a Democracy: The Supreme Court as a National Policy-Maker.” Journal of Public Law 6(2):279–295. de Vogue, Ariane and Eugene Scott. 2016. “Antonin Scalia to Lie in Repose at the Supreme Court on Friday.” The New York Times. 17 February 2016. Available: http: //www.cnn.com/2016/02/16/politics/antonin-scalia-bench-draped/. Dilliplane, Susanna. 2014. “Activation, Conversion, or Reinforcement? The Impact of Partisan News Exposure on Vote Choice.” American Journal of Political Science 58(1):79–94. Dolbeare, Kenneth M. and Phillip E. Hammond. 1968. “The Political Party Basis of Attitudes toward the Supreme Court.” Public Opinion Quarterly 32(1):16–30. Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper. Durr, Robert H., Andrew D. Martin and Christina Wolbrecht. 2000. “Ideological Divergence and Public Support for the Supreme Court.” American Journal of Political Science 44(4):768–776. Easton, David. 1965. A Systems Analysis of Political Life. John Wiley and Sons, Inc. Ellis, Christopher and James A. Stimson. 2012. Ideology in America. Cambridge University Press. Epstein, Lee and Andrew D Martin. 2010. 
“Does Public Opinion Influence the Supreme Court? Possibly Yes (But We’re Not Sure Why).” U. Pa. J. Const. L. 13:263. 110 Epstein, Lee, Jeffrey A Segal, Howard Spaeth and Thomas Walker. 2003. “The Supreme Court Compendium.”. Epstein, Lee., Rene Lindstadt, Jeffrey A. Segal Segal and Chad Westerland. 2006. “The Changing Dynamics of Senate Voting on Supreme Court Nominees.” Journal of Politics 68(2):296–307. Erikson, Robert S. and Gerald C. Wright. 2000. “Representation of Constituency Ideology in Congress.” Continuity and change in house elections 148. Farganis, Dion and Justin Wedeking. 2014. Supreme Court Confirmation Hearings in the US Senate: Reconsidering the Charade. University of Michigan Press. Fenno, Richard F. 1978. Home Style: House Members in their Districts. Pearson College Division. Fox, John. 2015. Applied Regression Analysis and Generalized Linear Models. Sage Publications. Free, Lloyd A and Hadley Cantril. 1967. “Political Beliefs of Americans; A Study of Public Opinion.”. Gibson, James L. 2007. “The Legitimacy of the US Supreme Court in a Polarized Polity.” Journal of empirical legal studies 4(3):507–538. Gibson, James L. and Gregory A. Caldeira. 2009a. Citizens, Courts, and Confirmations: Positivity Theory and the Judgments of the American People. Princeton University Press. Gibson, James L. and Gregory A. Caldeira. 2009b. “Confirmation Politics and the Legitimacy of the US Supreme Court: Institutional Loyalty, Positivity Bias, and the Alito Nomination.” American Journal of Political Science 53(1):139–155. Gibson, James L. and Gregory A. Caldeira. 2009c. “Knowing the Supreme Court? A Reconsideration of Public Ignorance of the High Court.” The Journal of Politics 71(02):429–441. Gibson, James L. and Gregory A. Caldeira. 2011. “Has Legal Realism Damaged the Legitimacy of the U.S. Supreme Court?” Law and Society Review 45(1):195–219. Gibson, James L., Gregory A. Caldeira and Lester Kenyatta Spence. 2003a. “Measuring Attitudes toward the United States Supreme Court.” American Journal of Political Science 47(2):354–367. Gibson, James L., Gregory A. Caldeira and Lester Kenyatta Spence. 2003b. “The Supreme Court and the U.S. Presidential Election of 2000: Wounds, Self-Inflicted or Otherwise?” British Journal of Political Science 33(4):535–556. Gibson, James L., Gregory A. Caldeira and Vanessa A. Baird. 1998. “On the Legitimacy of National High Courts.” American Political Science Review 92(2):343–358. 111 Gibson, James L. and Gregory Caldeira. 2007. “Supreme Court Nominations, Legitimacy Theory, and the American Public: A Dynamic Test of the Theory of Positivity Bias.” Legitimacy Theory, and the American Public: A Dynamic Test of the Theory of Positivity Bias (July 4, 2007) . Gibson, James L. and Michael J. Nelson. 2014. “The Legitimacy of the US Supreme Court: Conventional Wisdoms and Recent Challenges Thereto.” Annual Review of Law and Social Science 10:201–219. Gibson, James L. and Michael J. Nelson. 2015. “Is the US Supreme Court’s Legitimacy Grounded in Performance Satisfaction and Ideology?” American Journal of Political Science 59(1):162–174. Gibson, James L. and Michael J. Nelson. 2016. “Change in institutional Support for the SU Supreme Court: Is the Court’s Legitimacy Imperiled by the Decisions it Makes?” Public Opinion Quarterly . Gibson, James L. and Michael J. Nelson. N.d. “Too Liberal, Too Conservative, or About Right? The Implications of Ideological Satisfaction for Supreme Court Legitimacy.”. Gibson, James L., Milton Lodge and Benjamin Woodson. 2014. 
“Losing, but Accepting: Legitimacy, Positivity Theory, and the Symbols of Judicial Authority.” Law & Society Review 48(4):837–866. Goren, Paul, Christopher M. Federico and Miki Caul Kittilson. 2009. “Source Cues, Partisan Identities, and Political Value Expression.” American Journal of Political Science 53(4):805–820. Green, Donald P, Alan S Gerber and Suzanna L De Boef. 1999. “Tracking Opinion Over Time: A Method for Reducing Sampling Error.” Public Opinion Quarterly pp. 178–192. Grosskopf, Anke and Jeffery J. Mondak. 1998. “Do Attitudes toward Specific Supreme Court Decisions Matter? The Impact of Webster and Texas v. Johnson on Public Confidence in the Supreme Court.” Political Research Quarterly 51(3):633–654. Hamilton, Alexander. 1788. “Federalist Number 78.”. Harrison, Jeff and Mike West. 1999. Bayesian Forecasting & Dynamic Models. Springer. Harvey, Andrew C. 1990. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press. Hetherington, Marc J. 1998. “The Political Relevance of Political Trust.” American Political Science Review 92(04):791–808. Hetherington, Marc J. 2005. Why Trust Matters: Declining Political Trust and the Demise of American Liberalism. Princeton University Press. Hetherington, Marc J. and Joseph L. Smith. 2007. “Issue Preferences and Evaluations of the US Supreme Court.” Public Opinion Quarterly 71(1):40–66. 112 Hirshman, Linda. 2016. “If Republicans Block Obama’s Supreme Court Nomination, He Wins Anyway.” Washington Post. 13 February 2016. Available: https://www.washingtonpost.com/posteverything/wp/2016/02/13/ if-republicans-block-obamas-supreme-court-nomination-he-wins-anyway/. Horton, John J., David G. Rand and Richard J. Zeckhauser. 2011. “The Online Laboratory: Conducting Experiments in a Real Labor Market.” Experimental Economics 14(3):399–425. Iyengar, Shanto and Donald Kinder. 1987. “News that Matters: Television and Public Opinion.” Chicago: University of Chicago . Iyengar, Shanto and Nicholas A. Valentino. 2000. “Who Says What? Source Credibility as a Mediator of Campaign Advertising.” Elements of Reason: Cognition, Choice, and the Bounds of Rationality pp. 108–129. Iyengar, Shanto and Sean J. Westwood. 2015. “Fear and Loathing across Party Lines: New Evidence on Group Polarization.” American Journal of Political Science 59(3):690–707. Jacoby, William G. 1995. “The Structure of Ideological Thinking in the American Electorate.” American Journal of Political Science pp. 314–335. Kam, Cindy D. 2005. “Who Toes the Party Line? Cues, Values, and Individual Differences.” Political Behavior 27(2):163–182. Kang, Min Jeong, Antonio Rangel, Mickael Camus and Colin F. Camerer. 2011. “Hypothetical and Real Choice Differentially Activate Common Valuation Areas.” Journal of Neuroscience 31(2):461–468. Kastellec, Jonathan P., Jeffrey R. Lax and Justin H. Phillips. 2010. “Public Opinion and Senate Confirmation of Supreme Court Nominees.” The Journal of Politics 72(3):767– 784. Kefauver, Estes and Jack Levin. 1947. A Twentieth-century Congress. Vol. 1 Essential Books, Duell, Sloan and Pearce. Krosnick, Jon A. 1989. “A Review: Question Wording and Reports of Survey Results: The Case of Louis Harris and Associates and Aetna Life and Casualty.” Public Opinion Quarterly pp. 107–113. Krugman, Paul. 2016. “How America Was Lost.” The New York Times. 14 February 2016. Available: http://www.nytimes.com/2016/02/15/opinion/how-america-was-lost. 
html?action=click&pgtype=Homepage&clickSource=story-heading&module= span-abc-region®ion=span-abc-region&WT.nav=span-abc-region. Lenz, Gabriel S. 2009. “Learning and Opinion Change, Not Priming: Reconsidering the Priming Hypothesis.” American Journal of Political Science 53(4):821–837. Lithwick, Dahlia. 2016. “Chuck Grassleys Supreme Court Coup.” Slate. 7 April 2016. Available: http://www.slate.com/articles/news and politics/jurisprudence/2016/04/ sen chuck grassley attacks the supreme court john roberts.html. 113 Lupia, Arthur. 1994. “Shortcuts versus Encyclopedias: Information and Voting Behavior in California Insurance Reform Elections.” American Political Science Review 88(01):63–76. Lupton, Robert N., William M. Myers and Judd R Thornton. 2015. “Political Sophistication and the Dimensionality of Elite and Mass Attitudes, 1980- 2004.” The Journal of Politics 77(2):368–380. Malhotra, Neil and Stephen A. Jessee. 2014. “Ideological Proximity and Support for the Supreme Court.” Political Behavior 36(4):817–846. Martin, Andrew D. and Kevin M. Quinn. 2002. “Dynamic Ideal Point Estimation via Markov Chain Monte Carlo for the U.S. Supreme Court, 1953–1999.” Political Analysis 10(2):134–153. Mason, Lilliana. 2015. “I Disrespectfully Agree: The Differential Effects of Partisan Sorting on Social and Issue Polarization.” American Journal of Political Science 59(1):128– 145. Mayhew, David R. 1974. Congress: The Electoral Connection. Fredricksburg, VA: Yale University Press. McCarty, Nolan, Keith T. Poole and Howard Rosenthal. 2006. Polarized America: The Dance of Political Ideology and Unequal Riches. Cambridge, MA: MIT Press. McGuire, Kevin T. 2004. “The Institutionalization of the U.S. Supreme Court.” Political Analysis 12(2):128–142. Millhiser, Ian. 2016. “Senate GOP Can’t Play Politics in Confirming President Obama’s Pick for Justice Antonin Scalia’s Replacement.” New York Daily News. 14 February 2016. Available: http://www.nydailynews.com/news/politics/ senate-play-politics-scalia-successor-article-1.2531013. Mondak, Jeffery J. and Shannon Ishiyama Smithey. 1997. “The Dynamics of Public Support for the Supreme Court.” The Journal of Politics 59(04):1114–1142. Nicholson, Stephen P. 2011. “Dominating Cues and the Limits of Elite Influence.” The Journal of Politics 73(04):1165–1177. Nicholson, Stephen P. and Robert M. Howard. 2003. “Framing Support for the Supreme Court in the Aftermath of Bush v. Gore.” Journal of Politics 65(3):676–695. Nicholson, Stephen P. and Thomas G. Hansford. 2014. “Partisans in Robes: Party Cues and Public Acceptance of Supreme Court Decisions.” American Journal of Political Science 58(3):620–636. Norpoth, Helmut, Michael S. Lewis-Beck and Jean-Dominique Lafay. 1991. Economics and Politics: The Calculus of Support. University of Michigan Press. O’Hehir, Andrew. 2016. “Political Paralysis is the New Normal: The GOPs Scalia Gamble May be Suicidal, but Its Not Illogical.” Salon. 17 February 2016. Available: http://www.salon.com/2016/02/17/political paralysis is the new normal the gops scalia gamble may be suicidal but its not illogical/. 114 Parlapiano, Alicia and Margot Sanger-Katz. 2016. “A Supreme Court With Merrick Garland Would Be the Most Liberal in Decades.” The New York Times. 18 February 2016. Available: http://www.nytimes.com/interactive/2016/02/18/ upshot/potential-for-the-most-liberal-supreme-court-in-decades.html?hp&action= click&pgtype=Homepage&clickSource=image&module=photo-spot-region®ion= top-news&WT.nav=top-news. Pasek, Josh and Jon A. Krosnick. 2010. 
“Optimizing Survey Questionnaire Design in Ppolitical Science: Insights from Psychology.” Oxford handbook of American elections and political behavior pp. 27–50. Perr, Jon. 2016. “How Republicans Turned the Unprecedented into the New Normal.” Daily Kos. 21 February 2016. Available: http://www.dailykos.com/stories/2016/2/21/ 1486973/-How-Republicans-turned-the-unprecedented-into-the-new-normal. Poole, Keith T. and Howard L. Rosenthal. 2011. Ideology and Ccongress. Vol. 1 Transaction Publishers. Rahn, Wendy M. 1993. “The Role of Partisan Stereotypes in Information Processing about Political Candidates.” American Journal of Political Science pp. 472–496. Rasinski, Kenneth A. 1989. “The Effect of Question Wording on Public Support for Government Spending.” Public Opinion Quarterly 53(3):388–394. Salamone, Michael F. 2013. “Judicial Consensus and Public Opinion: Conditional Response to Supreme Court Majority Size.” Political Research Quarterly p. 1065912913497840. Sawyer, Mark. 2016. “”Hooray! Scalia’s Dead!” A Man who Seriously Injured the USA, the Country he Claimed to Love, is Gone.” https://www.tremr.com/. Available: https: //www.tremr.com/msawpro/hooray-scalias-dead-a-man-who-seriously. Scheb, John M. and William Lyons. 2001. “Judicial Behavior and Public Opinion: Popular Expectations Regarding the Factors That Influence Supreme Court Decisions.” Political Behavior 23(2):181–94. Shear, Michael D. and Christopher Drew. 2016. “‘Cancel Order!’ Donald Trump Attacks Plans for Upgraded Air Force One.” The New York Times. 6 December 2016. Available: https://www.nytimes.com/2016/12/06/us/politics/trump-air-force-one-boeing.html. Shear, Michael D. and Jennifer Steinhauer. 2016. “More Republicans Say They’ll Block Supreme Court Nominee.” The New York Times. 15 February 2016. Available: http://www.nytimes.com/2016/02/16/us/politics/ more-republicans-say-theyll-block-supreme-court-nomination.html? r=0. Shumway, Robert H and David S Stoffer. 2010. Time Series Analysis and its Applications: With R Examples. Springer Science & Business Media. Simonson, Michael R. 1995. “Instructional Technology and Attitude Change.”. Soroka, Stuart N. and Christopher Wlezien. 2004. “Opinion Representation and Policy Feedback: Canada in Comparative Perspective.” Canadian Journal of Political Science 37(03):531–559. 115 Spaeth, Harold J., Lee Epstein, Ted Ruger, Keith Whittington, Jeffrey A. Segal and Andrew D. Martin. 2010. The Supreme Court Database. Saint Louis, MO: Washington University in Saint Louis, http://scdb.wustl.edu/index.php. Sternthal, Brian, Ruby Dholakia and Clark Leavitt. 1978. “The Persuasive Effect of Source Credibility: Tests of Cognitive Response.” Journal of Consumer Research 4(4):252–260. Stimson, James A. 1991. Public Opinion in America: Moods, Cycles, and Swings. Westview Press. Tacheron, Donald G. and Morris K. Udall. 1966. “The Job of the Congressman.” Indianapolis: Bobbs-Merrill . Tesler, Michael. 2015. “Priming Predispositions and Changing Policy Positions: An Account of When Mass Opinion is Primed or Changed.” American Journal of Political Science 59(4):806–824. Tulis, Jeffrey K. 1988. The Rhetorical Presidency. Princeton University Press. Tyler, Tom R. 2006. “Psychological Perspectives on Legitimacy and Legitimation.” Annual Review of Psychology 57:375–400. Tyler, Tom R. 2007. “Procedural Justice and the Courts.” Court Review 44:26–164. Ura, Joseph Daniel. 2014. 
“Backlash and Legitimation: Macro Political Responses to Supreme Court Decisions.” American Journal of Political Science 58(1):110–126. Ura, Joseph Daniel and Patrick C. Wohlfarth. 2010. “An Appeal to the People: Public Opinion and Congressional Support for the Supreme Court.” The Journal of Politics 72(04):939–956. Victor, Daniel. 2016. “What Happens in a 4-4 Tie?” The New York Times. 13 February 2016. Available: http://www.nytimes.com/live/ supreme-court-justice-antonin-scalia-dies-at-79/what-happens-in-a-4-4-tie/. Wlezien, Christopher. 1995. “The Public as Thermostat: Dynamics of Preferences for Spending.” American Journal of Political Science pp. 981–1000. Wlezien, Christopher. 2004. “Patterns of Representation: Dynamics of Public Preferences and Policy.” Journal of Politics 66(1):1–24. Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge University Press. Zimbardo, Philip G. and Michael R. Leippe. 1991. The Psychology of Attitude Change and Social Influence. Mcgraw-Hill Book Company. Zink, James R., James F. Spriggs, II and John T. Scott. 2009. “Courting the Public: The Influence of Decision Attributes on Individuals’ Views of Court Opinions.” Journal of Politics 71(3):909–925. 116