CRITICAL ALGORITHMIC LITERACY: POWER, EPISTEMOLOGY, AND PLATFORMS

By

Kelley Marie Cotter

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Information and Media—Doctor of Philosophy

2020

ABSTRACT

CRITICAL ALGORITHMIC LITERACY: POWER, EPISTEMOLOGY, AND PLATFORMS

By

Kelley Marie Cotter

As algorithms increasingly mediate everyday life, they build the world. Algorithms shape the ways we understand and navigate our lived realities, and enact sociopolitical order as reflections of human values and assumptions. Recognition of this so-called "algorithmic power" has prompted various efforts to govern algorithms. As "algorithmic literacy" begins to enter into the vernacular of those involved in such efforts, the purpose of this dissertation is to interrogate what it means to know algorithms. In this, I advocate for an expanded horizon of "consequential" knowledge about algorithms, and treat algorithmic literacy as a bottom-up tool of governance that can be used to confront algorithmic power. I accomplished this task through an exploration of two case studies in which practices of learning about and making sense of algorithms are particularly salient. The first case focuses on Instagram influencers' pursuit of visibility; the second case focuses on "BreadTube," a leftist online community concerned with the visibility of far-right ideology on YouTube. Through investigating these cases, I developed the conceptual framework of critical algorithmic literacy, which recognizes knowledge as situated (Haraway, 1988), constructed within and in relation to the discursive landscape of social worlds (Clarke & Star, 2008), and involving the cultivation of a critical consciousness through recognizing and responding to algorithms as expressions of broader systems of power (Freire, 2000). This framework recognizes that there are multiple ways of knowing algorithms—that technical knowledge is merely one of multiple ways. It also recognizes that what people know about algorithms depends in large part on who they are. Further, the framework of critical algorithmic literacy recognizes that what people know about algorithms is the result of negotiations of power, particularly those that grant epistemic authority to some over others.

In focusing on practices of learning about and making sense of platform algorithms, I demonstrated platforms' centrality in these practices. I introduced the concept of platform epistemology in order to capture the ways that platforms set the conditions under which knowledge about algorithms may be constructed and legitimized. Through their design choices, in activating connections between users, and in connecting users to content based on inferred interests, platforms orchestrate the flow of information about algorithms online. Further, by shrouding their algorithms in secrecy and closing off access to user data from which insight could be drawn, platforms give rise to hierarchies of epistemic authority on algorithms. Thus, I show that platforms do not merely passively support learning about algorithms; they actively shape what is and can be known about algorithms. Ultimately, I argue that critical algorithmic literacy has the potential to be a means of increasing the involvement of private citizens in efforts to democratically govern algorithms in the public interest.
However, in order to fully realize this potentiality, we need to ensure that users and other stakeholders can effectively advocate for their interests based on their unique insight about what algorithms mean to and for them. This means reimagining algorithmic literacy as contextual, heterogeneous, always partial, and inseparable from questions of power.

Copyright by
KELLEY MARIE COTTER
2020

I dedicate this dissertation to my family, who have instilled in me curiosity and thoughtfulness and who have always believed in me. Dad, thank you for teaching me patience and perseverance by example. I wish you were here to see this.

ACKNOWLEDGEMENTS

I wish to thank the members of my dissertation committee—Bibi, Casey, and Stephanie. So much of this dissertation is a reflection of the guidance, feedback, and advice you have provided me since the very beginning of this project. I am incredibly appreciative of the care and patience you showed me when I needed it.

Kjerstin – Your generosity, thoughtfulness, and endless optimism have made all the difference in my path. Your feedback—on this and various other projects—has challenged me to be a better scholar and fundamentally shaped my work. Your voice is always in my head when I'm writing. I am endlessly appreciative of the ways you have kept me grounded, in even the most challenging moments, with your ability to remain clear-eyed and lighthearted. You have made this journey more enjoyable (or, at times, simply bearable). Thank you for everything.

To my friends and family – This was one of the most difficult things I've ever done, during a very challenging time. Your love, support, and encouragement made these last few years do-able. In particular, I could not have done this without you, Daniel, my best friend, indefatigable supporter, de facto consultant on Internet culture, my partner. Thank you for listening to me rant, rave, and cry, and for making me laugh throughout. Thank you for believing in me and urging me to believe in myself, too. I love you.
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER 1: Introduction
  Instagram Influencers: Playing the Visibility Game
  BreadTube: Fighting the Visibility War
  Dissertation Outline

CHAPTER 2: Background and Research Questions
  Algorithmic Power
  Reenacting Existing Order
  Ordering the World Anew
  Algorithmic Power as Epistemic
  Platform Power
  Framing Critical Algorithmic Literacy
  Situated Knowledges
  Knowledge-Building within Social Worlds
  Critical Literacy
  Challenges to Knowing Algorithms
  The Black-Boxing of Algorithms
  The Complexity of Algorithms
  The Dynamic Nature of Algorithms
  Ways of Knowing Algorithms
  Technical Knowledge
  Practical Knowledge
  Research Questions

CHAPTER 3: Methods
  Cases
  Instagram Influencers
  BreadTube
  Data Collection
  Online Discourse Materials
  Interviews
  Recruitment
  Interview Questions and Techniques
  Data Analysis
  Ethical Considerations
  Influencers
  BreadTube

CHAPTER 4: Assembling the Rules to the Visibility Game
  Learning by Doing
  Observation and Reflection
  Empirical Investigations
  Learning from Others
  Networking
  Connections at Facebook
  Communities of Practice
  Learning from Media
  Learning from Training
  Constructing Guru Authority
  Information Asymmetry Between Platforms and Users: Black-Box Gaslighting
  Conclusion

CHAPTER 5: Strategizing How to Play the Visibility Game
  Technical Knowledge
  Design Knowledge
  Design Goals
  Iterative Design
  Methods Knowledge
  Engagement
  Timeliness
  Relationships
  Penalties
  Practical Knowledge
  Data Signals for Algorithmic Ranking
  The Algorithm as Involuntary Adversary
  The Authenticity Ideal
  Relational influence
  The Entrepreneurial Ideal
  Simulated influence
  The Gendering of Relational and Simulated Influence
  Conclusion

CHAPTER 6: Raising Algorithmic Consciousness in the BreadTube Community
  Learning by Doing
  Observation and Reflection
  Empirical Investigations
  Learning from Others
  Online Discussion
  Learning from Content Creators
  Learning from those with Technical Expertise
  Learning from Media
  Media Coverage of Politics
  Media Coverage of Sociotechnical Issues Involving Algorithms
  Learning from Training
  Information Asymmetry Between Platforms and Users: Corporate Propaganda
  Conclusion

CHAPTER 7: Knowledge for the Greater Good
  Technical Knowledge
  Design Knowledge
  Design Goals
  Design Limitations
  Methods Knowledge
  Data Signals
  Algorithmic Recommendation Signals
  Algorithmic Moderation Signals
  Content-Based Filtering
  Collaborative Filtering
  Practical Knowledge
  BreadTube (Dis)Unity
  Rugged Individualism
  Survival Under Capitalism
  The Algorithm Demands a Sacrifice
  For the Greater-Good
  Conclusion

CHAPTER 8: Discussion and Conclusion
  Platform Epistemology: Public vs. Private Interests for Better Governance
  Manufacturing Uneven Epistemic Authority
  Automating Inequality in Algorithmic Literacy
  Platform Design for Algorithmic Literacy and Governance
  Situating the Situated Nature of Algorithmic Literacy in Broader Contexts
  Implications for Future Work
  Implications for Governance
  Implications for Educational Interventions
  Critical Algorithmic Literacy: Confronting Algorithmic Power
  Cultivating a Critical Consciousness
  Asserting Agency
  The Limits of Critical Algorithmic Literacy for Empowerment
  Conclusion

APPENDICES
  APPENDIX A: Interview Techniques
  APPENDIX B: Interview Questions
  APPENDIX C: Description of Interviewees
  APPENDIX D: Notes on Key Terms
  APPENDIX E: Analytical Maps
  APPENDIX F: "Whiteboarding" Notes

BIBLIOGRAPHY

LIST OF TABLES

Table 1. Sites and Dates of Data Collection
Table 2. Description of interview data

LIST OF FIGURES

Figure 1. Instagram Influencer Tiers by Audience Sizes
Figure 2. Chiara Ferragni's Instagram Profile
Figure 3. Nikki de Jager's Instagram Profile
Figure 4. Amanda Cerny's Instagram Profile
Figure 5. r/BreadTube Frontpage
Figure 6. BreadTube Ideologies
Figure 7. ContraPoints' YouTube Channel and 10 Most Popular Videos
Figure 8. Philosophy Tube's YouTube Channel and 10 Most Popular Videos
Figure 9. Hbomberguy's YouTube Channel and 10 Most Popular Videos
Figure 10. Example of an early response to a Facebook Group's "Request to Join" Questions
Figure 11. Later response to a Facebook Group's "Request to Join" Questions
Figure 12. Instagram Insights post metrics
Figure 13. Instagram Insights stories metrics
Figure 14. Meme created by Manu Muraro, founder of Your Social Team
Figure 15. Jarvee follow settings
Figure 16. Jarvee follow settings (cont.)
Figure 17. Tiny Pirate's network analysis of BreadTube
Figure 18. Tiny Pirate's network analysis of "RightTube"
Figure 19. Newspaper articles with keywords algorithm* and politic* by year
Figure 20. Instagram Influencers Messy Situational Map
Figure 21. Instagram Influencers Relational Map
Figure 22. BreadTube Messy Situational Map
Figure 23. BreadTube Relational Map 1
Figure 24. BreadTube Relational Map 2
Figure 25. Situated Platform Epistemology
Figure 26. Programmed (Platform) Epistemology
Figure 27. BreadTube Critical Algorithmic Literacy
Figure 28. Visibility Game vs. Visibility War

CHAPTER 1: Introduction

In June 2020, four black YouTube content creators—Kimberly Carleste Newman, Lisa Cabrera, Catherine Jones, and Denotra Nicole Lewis—filed a lawsuit against YouTube for "knowingly, intentionally, and systematically" employing algorithms to discriminate against them and other black creators by removing their content and limiting their ability to earn advertising revenue (Newman v. Google LLC). In the face of myriad similar criticisms over the years, YouTube quickly issued wholesale denials of any algorithmic bias against black creators. In response to the lawsuit, YouTube CEO Susan Wojcicki told The Washington Post, "It's not like our systems understand race or any of those different demographics" (Albergotti, 2020).
Similarly, YouTube spokesman Farshad Shadloo wrote, "We've gone to extraordinary lengths to build our systems and enforce our policies in a neutral, consistent way," and stated that the platform's algorithms do not discriminate based on race (Albergotti, 2020).

This lawsuit not only exemplifies how algorithms materially shape lives; it also illustrates the negotiations of power that underlie knowledge about algorithms. The entire lawsuit rests on the plaintiffs' ability to convince a court of law that YouTube's algorithms discriminate against them based on race, in contradiction to what YouTube itself has said. For these activists, filing this lawsuit was an outcome of years of building insight about YouTube's algorithms on the ground, of triangulating information from their direct experiences, friends and colleagues, and media coverage in order to make sense of why they seemed to be consistently targeted in algorithmic moderation. They further needed to situate this insight about YouTube's algorithms within a broader historical context of racial oppression, thus cultivating a critical stance toward them. Although this may not be how it is normally described, all of this work of learning about and making sense of YouTube's algorithms represents a process of building algorithmic literacy.

Algorithmic literacy is about understanding what algorithms do and why, but also about what they mean to and for individuals and communities. As such, building algorithmic literacy was the precondition for these activists to take action. It also gave them ground on which to stake their claims—in spite of YouTube's denials—which underscores the importance of algorithmic literacy for governing platforms and their algorithms. In this lawsuit, and in general, adjudicating what counts as "rational" and "legitimate" knowledge about algorithms is not separate from questions of power. In this dissertation, I foreground this point and offer a provocation to existing work on what it means to know and govern algorithms. In doing so, I advocate for an expanded horizon of consequential knowledge about algorithms, and investigate algorithmic literacy as a bottom-up tool of governance that can be used to confront power mediated by algorithms.

Algorithms mediate power by enacting social and political order as part of a broader assemblage of actors (Bucher, 2018; Noble, 2018). This phenomenon is referred to as "algorithmic power" (Beer, 2009). Algorithmic power tends to draw attention when it results in significant material harm—for example, racial discrimination like we saw in the aforementioned lawsuit—though algorithmic power also manifests as a subtler force—for example, by prescribing (and rewarding) desirable ways of being, doing, and thinking (e.g., Bucher, 2018). Further, while algorithmic power emerges from an assemblage of actors, the power asymmetry between platforms and users, particularly users who depend upon platforms for income, fundamentally shapes how and when algorithmic power manifests.

In the past several years, the prevailing response to algorithmic power has been calls for greater transparency around algorithms, which is said to effect accountability (e.g., Diakopoulos, 2016; Pasquale, 2015). The basic premise is: the more we know about the design and functioning of algorithms, the better we will be able to ensure they serve public values like privacy, fairness, and democratic control (Van Dijck et al., 2018).
However, more recent work casts doubt on the transparency ideal by pointing out that "opening up the black box" does not guarantee stakeholders will understand how, why, and with what effect algorithms do what they do (e.g., Ananny & Crawford, 2016; Burrell, 2016). Moreover, greater transparency does not completely remediate the power asymmetry between platforms and users, which has implications for what is and can be known about algorithms.

Recognizing the shortcomings of the transparency ideal, some have called for greater algorithmic literacy in order to support efforts to govern algorithms (e.g., Hargittai et al., 2020; Rainie & Anderson, 2017; Saurwein et al., 2015). These calls coincide with a growing body of research on what people know about algorithms. Yet, most of these calls and much of this research prioritize understandings of algorithms from a technical standpoint and emphasize cognitivist approaches to knowing algorithms. With some notable exceptions (e.g., Bishop, 2019; Bucher, 2018), less attention has been paid to other possible ways of knowing algorithms. Even less attention has been paid to the relationship between algorithmic knowledge and power.

In this dissertation, I fill these gaps by developing the conceptual framework of critical algorithmic literacy. This framework recognizes knowledge as situated (Haraway, 1988), constructed within and in relation to the discursive landscape of social worlds (Clarke & Star, 2008), and involving the cultivation of a critical consciousness through recognizing and responding to algorithms as expressions of broader systems of power (Freire, 2000). This framework recognizes that there are multiple ways of knowing algorithms—that technical knowledge is merely one of multiple ways. It also recognizes that what people know about algorithms depends in large part on who they are. Algorithms become consequential and gain meaning through the activities they mediate in people's social worlds. As algorithms become matters of concern for people, the interests, sites, and activities that define individuals' worlds motivate and direct how and which information about algorithms reaches them. What people come to know about algorithms grows around the prominent discourses that define membership in their social worlds. Further, critical algorithmic literacy recognizes that what we collectively know about algorithms is the result of systems of power that grant epistemic authority to some over others. In particular, in studying platform algorithms, I show that platforms prescribe the conditions under which knowledge about algorithms may be constructed and legitimized. As part of this, by withholding and obscuring information about algorithms and limiting access to user data, platforms solidify their epistemic authority in adjudicating "truths" about their algorithms. In this dissertation, I argue that ignoring the contextual, partial, heterogeneous, and power-laden nature of algorithmic literacy is precisely what permits presumptions of epistemic authority premised on systems of power.

This dissertation focuses on two case studies. The case studies feature communities who have suddenly been forced to confront algorithms as they have become a matter of concern for them. As a result, these cases render literacy practices around algorithms particularly salient.
The first case focuses on Instagram influencers; the second case focuses on a leftist online community known as "BreadTube." Drawing on multiple genres of data, I constructed thick description of critical algorithmic literacy practices—how Instagram influencers and BreadTubers learn about, make sense of, and respond to algorithms. Below, I provide background on the two case studies and describe the overarching analogies I used to frame them, and then present a roadmap for the rest of the dissertation.

Instagram Influencers: Playing the Visibility Game

In early 2016, Instagram announced that users' feeds would soon be "ordered to show the moments we believe you will care about the most" (Instagram, 2016, n.p.). The subtext of this announcement was that the company would be introducing algorithmic ranking to the platform's main feed. As is the case with other platforms, algorithmic ranking determines who and what gains visibility on Instagram. By establishing the conditions by which Instagram users are seen, algorithms serve as disciplinary apparatuses that prescribe participatory norms (Bucher, 2012). Through observing the content and users that attain visibility, users discern the participatory norms that algorithms "reward" with visibility (Bucher, 2012).

Chapters 4 and 5 of this dissertation explore conscious, instrumental interactions between Instagram influencers and algorithms on Instagram. In these chapters, I use the analogy of the visibility game as an overarching framework for explaining what critical algorithmic literacy looks like among Instagram influencers. The visibility game captures the core activity of the social world of Instagram influencers: influencers' pursuit of visibility on Instagram. I suggest this pursuit resembles a game constructed around rules embedded in algorithms that regulate visibility. Galloway's (2006) vision of video games as objects of algorithmic culture serves as a guiding analogy for the visibility game: "To play the game means to play the code of the game. To win means to know the system. And thus to interpret a game means to interpret its algorithm" (pp. 90–91).

The visibility game does theoretical work as an analogy for the power dynamic between influencers and platform owners, which algorithms mediate. Platform owners hold a significant degree of power in establishing the institutional conditions of influencers' labor within platforms (Burgess & Green, 2008; Hearn & Schoenhoff, 2015; Hearn, 2010; van Dijck, 2013). In particular, platform owners have the sole authority to define the technical specifications of a platform and delimit how the platform may be used in accordance with the affordances of those specifications (Andrejevic, 2014; Burgess & Green, 2018; Hearn & Schoenhoff, 2015; Hearn, 2010). Instagram, like other platform owners, defines appropriate user behavior and consequences for noncompliance through its Terms of Use and Community Guidelines. These documents serve as regulatory devices and a partial articulation of the "rules" of the visibility game. Instagram enforces these rules via algorithms (van Dijck, 2013). As Instagram makes decisions about which behavioral signals from users' trace data demonstrate relevance, interest, and/or importance, and which demonstrate spam or inappropriate content, the company establishes certain forms of participation as more desirable than others (Bucher, 2012).
Thus, as a material enactment of the rules of the visibility game, algorithms can be conceived of as instruments of governance—disciplinary apparatuses—that prescribe behavioral norms of the platform (Bucher, 2012; Just & Latzer, 2017).

Influencers emphasize the importance of gathering information about algorithms to learn the rules of the visibility game and how to play. Influencers learn the rules of the game by assembling layered accounts of what algorithms are to them and what they mean for them vis-à-vis digital influence. Simply put, influencers need to learn about algorithms in order to understand how to achieve success as an influencer. The goal of achieving visibility motivates a primary interest in information about how Instagram's algorithms work, as opposed to, say, their social impacts, though the latter is also an interest for some. Influencers tend to perceive Instagram's algorithms as both powerful and mysterious. For example, in a Facebook group for Instagram influencers, one user described influencers as "kneeling in worship of the algorithm." Influencers cope with their perceived powerlessness in the face of these seemingly omnipotent algorithms by attempting to build an arsenal of knowledge about them. This is the core motivation for influencers to learn about algorithms.

Instagram influencers' experiences render the power struggle between users and platforms highly visible and, thus, help illustrate how this struggle shapes critical algorithmic literacy. However, the scale of Instagram influencers' world makes it difficult to pinpoint and analyze how specific ideological commitments matter for critical algorithmic literacy. The second case in this dissertation zooms in to focus on the role of social worlds by looking at a relatively smaller, online leftist community, BreadTube. The BreadTube case study helps illuminate how people learn about and make sense of algorithms within particular contexts of their social worlds—the interests, ideologies, activities, and sites that bring people together. This case also provides the opportunity to explore how critical algorithmic literacy can facilitate more explicitly political action. In the next section, I give an introduction to the organizing analogy for the BreadTube case study: BreadTubers' participation in a visibility war. This analogy is a variation on the visibility game and captures some of the key differences between the two cases.

BreadTube: Fighting the Visibility War

The last decade has witnessed a rise of far-right ideology in the U.S., ignited by several years of economic crisis and fueled by a variety of sociopolitical factors (Mirrlees, 2019). This movement has coalesced around a core group known as the "alt-right," which the Southern Poverty Law Center defines as "a set of far right ideologies, groups and individuals whose core belief is that 'white identity' is under attack by multicultural forces using political correctness and social justice to undermine white people and their civilization" (Southern Poverty Law Center, 2017, n.p.). A flood of reports has raised red flags about the insidious propagation of far-right content on YouTube, and the "radicalization" (Tufekci, 2018) of users as a result of the platform's recommendation algorithm "coaxing viewers into the deeper depths of the alt-right by presenting them with ever more extreme content" (SPLC Hatewatch Staff, 2018, n.p.).
Some leftists have seen the reports of a growing alt-right presence on YouTube as an opportunity for them to take a stand by "trying to build a counterweight to YouTube's far-right flank" (Roose, 2019, n.p.). These leftists have formed a loosely connected network known as BreadTube, and are engaged in what I call a visibility war with the alt-right (and, increasingly, more centrist ideologues, as well). The battleground for this war is YouTube, the mission is to achieve ideological dominance by way of visibility, and—for BreadTubers—the stakes are the status of the broader cultural revolution. Entering into this visibility war requires BreadTubers to build knowledge about YouTube's algorithms and find ways to mobilize this knowledge in order to facilitate the collective visibility of the community. Below, I give a brief introduction to the BreadTube community and their visibility war.

The BreadTube community, a community of leftist YouTubers and their audiences, originally formed around a goal of undermining the so-called "alt-right pipeline." The alt-right pipeline refers to the psychosocial process by which individuals gradually come to embrace alt-right ideology as the result of exposure to increasingly more extreme right-wing ideologues, primarily via online media (Munn, 2019). Over time, BreadTubers have broadened their aims to educating the masses, more generally, about leftist perspectives on a variety of topics, in order to shift the Overton window, or the range of acceptable political ideas and policies in mainstream discourse. BreadTubers wish to lay an intellectual foundation from which to bring newcomers into the fold of leftist thought and activism. In this way, as Dmitry Kuznetsov and Milan Ismangil wrote, "BreadTube signifies a return to a classic, democratised socialist mass education programme" (Kuznetsov & Ismangil, 2020, p. 215). As BreadTube has grown and evolved, the means by which the community seeks to accomplish its goals have remained the same: fighting a visibility war.

A visibility war overlaps extensively with the visibility game introduced in the last section. Both the visibility game and the visibility war are strategic pursuits of visibility in order to persuade audiences to behave or think in certain ways. Instagram influencers pursue visibility as part of a spirit of self-enterprise: they seek to grow an online following to whom they can promote partnered brands in order to earn an income (and gain professional success) (Duffy, 2017). While individual BreadTube creators may similarly pursue visibility as part of career aspirations, the broader community collectively pursues visibility in order to reach would-be converts to the left. Both the visibility game and a visibility war also entail competition. Instagram influencers and BreadTube creators alike must compete for audiences' eyeballs in the attention economy. Like the visibility game, a visibility war requires following a set of rules determined by platforms and enforced via algorithms, which delimit the range of acceptable visibility tactics. Indeed, the rules of the visibility game are also the "rules of engagement" in the visibility war. Key contrasts in the contexts, concerns, and motivations that underlie knowledge building between the two cases help build out the framework of critical algorithmic literacy.
For each case, I consider two different kinds of critical algorithmic literacy practices: how Instagram influencers and BreadTubers each acquire information about algorithms, and how they make sense of and mobilize this information.1 In this analysis, I highlight the situated nature of and negotiations of power within these practices. I will now give an overview of the structure and findings of this dissertation.

1 While these aspects of critical algorithmic literacy bleed into one another, for the purposes of analysis, I consider them separately.

Dissertation Outline

This dissertation is organized into eight chapters, including this introductory chapter. Chapter 2 serves as an overview of relevant literature. Drawing on algorithm studies, platform studies, feminist epistemology, and critical literacy theories, chapter 2 demonstrates limitations of existing conceptualizations of algorithmic literacy. Using these literatures, I articulate a new framework for critical algorithmic literacy that pushes beyond preoccupations with technical insight, cognitivism, and industrial progress.

In chapter 3, I describe my methods. I relied on a multi-sited data collection approach and collected multiple genres of data. Data collected fall into two broad categories: online discourse materials, consisting primarily of social media discussions, images, videos, and articles; and semi-structured interviews. In addition to describing my approach to data collection and analysis, in this chapter I also provide additional background information about the two cases and an overview of ethical considerations.

In chapter 4, I describe influencers' modes of acquiring information about the rules of the visibility game, and how these practices comprise a unique epistemology of (platform) algorithms built around the social, technical, and economic structures asserted by platforms. I call this platform epistemology. Under the framework of platform epistemology, Instagram's secrecy about its algorithms, as well as the company's design choices and resultant affordances of the platform, shape how influencers acquire information about algorithms. Further, for influencers, the visibility game—which is an artifact of a series of design choices by Instagram (and other platforms) directed by its commercial interests—acts as a mechanism for validating information about algorithms. In this, knowledge, visibility, and capital intersect. Previous work on influencers has demonstrated how those more likely to achieve digital influence come from more privileged backgrounds (Duffy, 2017). I show that knowledge building among influencers extends the structural inequalities that govern opportunities for becoming an influencer.

In chapter 5, I focus on how influencers take information about algorithms and make sense of it in different ways, as well as how the different ways influencers make sense of algorithms impact the core activity of their social world: playing the visibility game. I describe different kinds of knowledge about algorithms—organized under the broad headings of technical and practical knowledge—and what these knowledges offer influencers. I show that influencers make sense of Instagram's algorithms—and the game more broadly—in line with preexisting discourses about what it means to be an influencer. Further, different dominant ideals of digital influence result in divergent understandings of Instagram's algorithms, which shape their adoption of different strategies in the visibility game.
Thus, algorithmic literacy allows influencers to consciously draw on their self-concept—as prescribed by the discourses of their social world—to reassert their own agency in the face of algorithmic prescriptions for how Instagram should be used. I argue, then, that while algorithms act as disciplinary apparatuses (Bucher, 2012), they do not unilaterally determine user behavior. Influencers ultimately direct and make sense of their own behavior through their understanding of themselves and the visibility game. This speaks to the "critical" nature of algorithmic literacy.

In chapter 6, I extend the conceptualization of platform epistemology in chapter 4 to highlight the situated nature of learning about algorithms, as exemplified among BreadTubers. In chapter 4, I focused on the role of platforms in laying the conditions for the acquisition and legitimization of information about algorithms. In this chapter, I acknowledge platforms' role while emphasizing how the interests, ideologies, sites, and activities of a social world further influence how people acquire information about algorithms. Specifically, I show that different features of BreadTube's social world—namely, their visibility war, ideological commitments, and organization around YouTube—shape how information about algorithms enters into and circulates within the community. Additionally, the community's anti-capitalist stance and emphasis on collectivism make the community averse to a fee-based training model seen among Instagram influencers. The community's anti-capitalist stance also ultimately repels BreadTubers from reliance on YouTube's disclosures for learning about the platform's algorithms.

In chapter 7, I focus on what the BreadTube community does with the information they acquire about algorithms. I describe what BreadTubers know about algorithms and illustrate the value of technical and practical algorithmic knowledge for how BreadTubers fight their visibility war. In particular, I demonstrate that algorithmic literacy involves reconciling information about algorithms with individuals' values, particularly those defined by their social worlds. BreadTubers read YouTube's algorithms as encouraging a neoliberal capitalist order and participation in conflict, prescriptions for behavior that are at odds with the community's core values of solidarity and equity. Rather than capitulate to what they think YouTube's algorithms "want," BreadTubers mobilize their knowledge to develop visibility tactics that reaffirm their values by prioritizing the integrity and sustainability of community above individual advantage. Here, we can see critical algorithmic literacy in action as BreadTubers attempt to subvert the order they see algorithms enacting.

Chapter 8 weaves together the themes and concepts from the four preceding chapters, as structured around three main contributions: the concept of platform epistemology, the situated nature of critical algorithmic literacy, and the inextricable relationship between critical algorithmic literacy and algorithmic power. In this, I export empirical findings from the two case studies to a broader context to outline implications for future work on algorithmic literacy and governance of algorithms. The core message of this concluding chapter is that "lay" users have unique and valuable insight about algorithms to which platforms, policymakers, and researchers often do not have access. As such, we should take care in how we conceive of and frame user understandings of algorithms.
Beginning with an assumption of ignorance on users' parts validates hierarchies of epistemic authority built around systems of power. Taking users' insight seriously facilitates efforts to democratically govern algorithms in the public interest.

CHAPTER 2: Background and Research Questions

Growing fascination with and concern over algorithms mark the rapid transition to an algorithmic society. As we have witnessed algorithms being deployed in all aspects of our daily lives (Willson, 2017), we have learned many lessons about the ways algorithms can alternately make our lives easier, but also, for many, harder (e.g., Eubanks, 2017; Noble, 2018). As knowledge objects, algorithms have some qualities that resemble and some qualities that set them apart from other information and communication technologies (ICTs) discussed in the context of digital literacy. Like other ICTs, while algorithms have been touted as powerful tools for advancing human life in the postindustrial age, many of the utopian claims about algorithms must be reconciled with the increasing number of social dilemmas they introduce. In contrast to other ICTs, algorithms are more opaque (Ananny & Crawford, 2016; Burrell, 2016; Kitchin, 2017; Pasquale, 2015), more explicitly encode values and assumptions about the world (Gillespie, 2014; Kitchin, 2017; Noble, 2018), and are intrinsically dynamic (Ananny & Crawford, 2016; Burrell, 2016; Kitchin, 2017). These characteristics present challenges for knowing algorithms (Ananny & Crawford, 2016; Bucher, 2018; Burrell, 2016; Kitchin, 2017).

Some previous work has attempted to grapple with the peculiarities of algorithms as knowledge objects (e.g., Bucher, 2018; DeVito et al., 2018; Eslami et al., 2015; Hargittai et al., 2020). However, much of this work positions algorithmic literacy in overly narrow terms, and overlooks important theoretical contributions to epistemology from critical theory, particularly from feminist epistemologies. There is an opportunity to more fully explicate what it means to know algorithms in a way that centers the relationship between knowledge and power. In this dissertation, I attempt to contribute to this goal by conceptualizing critical algorithmic literacy, as rooted in critical approaches to knowledge and based on empirical work.

In this chapter, I outline relevant work on the "powerful" nature of algorithms, which requires us to consider algorithmic literacy as inextricably entangled with systems of power. Because I focus specifically on platform algorithms, I also discuss what platforms are and the power they wield, which adds another layer of complexity. I then describe relevant theoretical work on epistemology and critical literacy, which lays the foundation for my approach to critical algorithmic literacy. After this, I enumerate the unique challenges to knowing algorithms. In light of these challenges, I construct a theoretical framework for reimagining what it means to know algorithms, and close the chapter by laying out three research questions guiding my research.

Algorithmic Power

Algorithmic power refers to the force that algorithms exert on the world as part of broader sociotechnical systems. In this view, power should be understood as decentralized and relational—as Foucault suggested, "it is the moving substrate of force relations" (Foucault, 2012, p. 93) that disciplines and molds both mind and body. Algorithms act as part of this moving substrate.
Algorithmic power lies in the ways algorithms participate in decision-making across a variety of domains, thereby making a difference in how we relate to and understand each other and the world around us. For example, algorithms assist in deciding how to rank information in social media feeds and search results, which job applicants best qualify for a position, which individuals make a good romantic match, who should be prioritized for public benefits, and so on. Algorithms corral behavior by articulating the conditions under which we can achieve or avoid certain outcomes. Moreover, through their output, algorithms influence the relative importance of aspects of our identities, behaviors, principles, and environments. Algorithms help make certain modes of acting and knowing (im)possible. In other words, algorithms, like other technologies, are political. As Langdon Winner (1980) famously argued, technologies have political properties by nature of the ways they "build[] order in our world" (p. 127). Taking up this point, Taina Bucher (2018) suggests, "algorithms are political in the sense that they help to make the world appear in certain ways rather than others" (p. 3). From this perspective, algorithmic power can be expressed by reenacting existing social or political order or by ordering the world anew. However, algorithmic power also has an epistemic dimension built on the foundation of a scientific tradition that values quantitative, positivist approaches. In what follows, I describe these elements of algorithmic power.

Reenacting Existing Order

Most discussions of algorithmic power speak to the ways algorithms act as instruments of government in the Foucauldian sense (Bucher, 2018). Processes of designing and developing algorithms entail human choices. Humans must select or construct datasets to train models on, specify which attributes and variables to measure, and choose how to analyze data and interpret the results of analyses (boyd & Crawford, 2012; Burrell, 2016; Rieder, 2017). All of these choices enact the values, principles, and commitments of those who design algorithms, and, thereby, have the potential to reproduce pre-existing social structures and relations (Beer, 2009; Kitchin, 2017; Noble, 2018). As Safiya Noble (2018) succinctly explains, algorithms "function as an expression of a series of social, political, and economic relations, relations often obscured and normalized in technological practices" (p. 98). Because those with power tend to be in control of design processes, hegemonic worldviews are often encoded into algorithms and reified in their outputs and outcomes (Noble, 2018). Thus, algorithms contribute to the materiality of control mechanisms by constraining and enabling certain worldviews via algorithms' design and outputs.

Algorithms also reenact existing order in the way they tend to prescribe and parametrize behavior. Algorithms classify objects and users in order to determine how they should be treated within broader systems. Traditional classification systems and classification algorithms alike exert a normalizing or hegemonic force on the lives of individuals (Bowker & Star, 1999). From this perspective, John Cheney-Lippold (2011) argues that algorithmic processes that categorize users "softly persuade users towards models of normalized behavior and identity through the constant redefinition of categories of identity" (p. 177).
He explains that the categorization of users shapes their lifeworlds by constraining the content and opportunities to which they are granted access online. Consequently, such algorithms exert a force over users' lives (Cheney-Lippold, 2011). Bucher (2012) similarly describes such a force, suggesting that because Facebook's news feed algorithm controls visibility such that not all published posts are guaranteed an audience, it imposes a "threat of invisibility." By imposing this threat, the algorithm urges users to attune themselves to its logic by interacting with others, thereby gaining popularity and increasing the odds of becoming visible (Bucher, 2012).

Ordering the World Anew

Algorithms do not merely reproduce and translate structure, but also act with agency to construct new ways of ordering the world. In this context, as Latour (2005) suggests, agency is distributed across a network of actors, such that "[a]ction is not done under the full control of consciousness; action should rather be felt as a node, a knot, and a conglomerate of many surprising sets of agencies that have to be slowly disentangled" (p. 44). Separating agency from intentionality accommodates the notion that nonhuman actors, like algorithms, may also exercise agency. Agency simply signifies "mak[ing] a difference in the course of some other agent's action" (Latour, 2005, p. 71). Algorithms act with agency in line with and in relation to other agencies, including the companies and individuals who produce them, the individuals who make use of the broader systems of which algorithms are a part, other technical elements of the broader systems, the regulatory and legislative bodies that create the conditions under which algorithms may operate, and so on. From within this set of agencies, algorithms order the world anew when, as Rob Kitchin (2017) writes, they "make the world" (Kitchin, 2017, p. 18), and when, as Bucher (2018) writes, they "produce[] certain capacities to do and sense things" (p. 94).

Machine learning algorithms, for example, produce categorizations of users not necessarily based on ground truth or the way users would self-identify, but based on endless streams of behavioral data (Cheney-Lippold, 2011). Thus, rather than reproducing extant understandings of different social categories like gender, class, or race, algorithms may also construct new meanings for these categories (Cheney-Lippold, 2011). Adrian Mackenzie (2015) makes a similar argument, pointing out that the performativity of machine learning algorithms is recursive. That is, machine learning algorithms that predict user desires tend to classify by assuming stable and distinct categories, as well as a stable relationship between inputs and outputs. However, algorithms have feedback loop effects, as they both define what actions users can take as well as the result of those actions (Bucher, 2018; Mackenzie, 2015; Rader & Gray, 2015). As Mackenzie (2015) puts it, "the [algorithmic] production of prediction changes the world that predictions inhabit" (p. 441). For example, recommender system algorithms predict what content will be relevant to users; as users interact with the content algorithmically curated for them, the data from these interactions are fed back to the algorithms for future predictions (Mackenzie, 2015). In some cases, algorithms may also detect spurious relationships in skewed datasets and enact self-fulfilling prophecies of what matters (Beer, 2016).
Virginia Eubanks (2017) demonstrates this in documenting a child welfare agency's adoption of a statistical model to predict the likelihood of a child experiencing abuse or neglect. One of the model's key predictors relied on reports of possible abuse or neglect, which are known to reflect racial biases—reporters report black and biracial families three and a half times more often than they report white families. As such, the model predicted greater risk among black and biracial children, although actual cases of abuse and neglect did not bear this out. Consequently, the more algorithms "operate in the world, the more they tend to normalize the situations in which they are entangled" (Mackenzie, 2015, p. 442). Moreover, when algorithms assign novel meaning to individuals and things through classification and prediction, they "make a difference" in individuals' experience of the world, determining which people, ideas, and opportunities individuals encounter (Beer, 2009).

Winner (1980) additionally argues that technologies can inherently require or strongly urge "particular institutionalized patterns of power and authority" (p. 134). That is, they do not work (properly) without the existence or establishment of certain social conditions. Algorithms are part of a long history of systems of measurement and "statistical thinking" (Beer, 2016; Rieder, 2017), which serve the ideological project of constructing hierarchies of value (Beer, 2016). Through processes of ranking and classification, algorithms assign a numerical value to people and objects that implicitly appraises them on continua of "good" vs. "bad," "right" vs. "wrong," "relevant" vs. "irrelevant," and so on. Eubanks (2017) offers an apt example of this in recounting instances of local governments in the United States using algorithmic models for the purpose of distributing public benefits. In this case, algorithms ultimately differentiate "deserving" poor from "undeserving" poor (Eubanks, 2017). In another example, ranking algorithms used by online search engines commonly seek to order results by quality. Google Search's algorithms, for example, rely on evaluations of webpages in terms of expertise, authoritativeness, and trustworthiness (Google, n.d.). These algorithms inherently quantify tiers of value in their output, made visible in the way the search results are ordered across ascending page numbers. In both of these examples, algorithms substantially facilitate institutional decision-making by handling data at great scale. Yet, with their core logics of quantifying individuals' and objects' qualities, algorithms invite a political order premised on stratification. Although this order is not necessarily new, algorithms' requirement of such an order rigidly authorizes it in a new way, obstructing alternatives.

Algorithmic Power as Epistemic

The ordering of the world that algorithms (re)produce is hardened through a "promise of objectivity" (Gillespie, 2014, p. 168). In the popular imagination, algorithms have a moral authority as they embody cold indifference to complex, and often sensitive, social matters (Gillespie, 2014; Introna, 2016). As Nick Seaver (2014) explains, "[a]lgorithms per se are supposed to be strictly rational concerns, marrying the certainties of mathematics with the objectivity of technology" (p. 2). Algorithms tend to be characterized as the polar opposite of biased, unreliable human decision-making (Introna, 2016; Rieder, 2017). The "procedural and technical practices" (Noble, 2018, p. 67) through which algorithms are evaluated belie the significant degree of human subjectivity that surrounds their application (boyd & Crawford, 2012; Burrell, 2016; Gillespie, 2014; Rieder, 2017).
Companies that rely on algorithms exploit this discursive positioning of algorithms as bias- and error-free to solidify their value (Burrell, 2016; Gillespie, 2014). Imagining algorithms as neutral not only obscures the subjectivity folded into their processes, but also appeals to desires for universal truths that transcend contingencies. In this way, visions of data science mirror beliefs about the hard sciences, which are seen as generating fundamental truths about the world. Like particle detectors in physics, mainstream discussions of algorithms often treat them as "scientific instruments which simply record nature, as transcription devices which themselves leave no trace" (Traweek, 1988, p. 160). Such figurations of algorithms grant special legitimacy to algorithmically produced knowledge by connecting it to broader debates about the scientific method, which tend to privilege quantitative, positivist approaches (boyd & Crawford, 2012).

With the privileged status afforded by objectivity narratives, uses of algorithms have proliferated in recent years, broadening their epistemic power to various spheres—for example, in the humanities (boyd & Crawford, 2012). The rising prominence of algorithms results not only from valuations of their merit as instruments of knowledge that connect to a long tradition of quantification, but likely also from a bandwagon effect (Fujimura, 1988). Joan Fujimura (1988) theorized a bandwagon effect in cancer research, describing it as a phenomenon in which multiple individuals and groups align with some solution to a problem and, thereby, the solution gains momentum and purchase. The bandwagon effect depends upon the existence of technical infrastructures. Additionally, a solution must provide standardized methods transportable across social worlds, which make them widely accessible by "reduc[ing] the amount of tacit knowledge, discretionary decision-making, or trial-and-error procedures needed to solve problems" (Fujimura, 1988, p. 278), and, thus, lowering entry costs in terms of time and money (Fujimura, 1988). Algorithms provide such standardized methods in the form of certain techniques, which package "a general rationale" with "formal calculative specifications" (Rieder, 2017, p. 102). Algorithmic techniques provide "a way of both looking at and acting in and on the world" (Rieder, 2017, p. 109), which facilitates software development by providing ready-made solutions to problems that need solving. Algorithms both construct social problems and make them "doable," which encourages their widespread adoption (Fujimura, 1988). Algorithms facilitated the rise of big data and allow us to answer questions with big data across a wide variety of contexts.

The fetishization of algorithms similarly contributes to their epistemic power. Fetishization refers to the attribution to algorithms of certain capabilities extrinsic to their properties and/or functions, which become a part of their "promise" and are generative of new possibilities that might not have otherwise come to pass (Thomas et al., 2018). Therefore, Suzanne Thomas and colleagues (2018) contend that "[a]lgorithms are powerful because we invest in them the power to do things" (p. 1).
That is, when people imagine algorithmic outputs or outcomes, they tend to imagine algorithms can do more than is actually possible. Such algorithmic imaginaries may be self-fulfilling prophecies as they exert force on the figuration of algorithms by way of the practices and processes surrounding their design and use (Bucher, 2018). In the next section, I will explain how platforms establish themselves as powerful fixtures in contemporary society, which gives greater context for concerns about power at the heart of my empirical investigations.

Platform Power

When platform algorithms enact order, they do so in large part by mediating the values and interests of the companies that create them—companies that wield great economic, social, and political power. We are now in an era of platformization, in which "platforms have penetrated the heart of societies—affecting institutions, economic transactions, and social and cultural practices—hence forcing governments and states to adjust their legal and democratic structures" (Van Dijck et al., 2018, p. 2). In other words, platforms—mainly, a few key platforms—profoundly impact society by reconfiguring ways of being and doing in the world, particularly via the algorithms that underlie their core services (van Dijck, 2013). The platforms I investigate in this dissertation, Instagram and YouTube, are owned by two of the "big five" tech companies: Facebook and Google, respectively. In 2019, Instagram and YouTube generated $20 billion (Price, 2020) and $15 billion (Sloane, 2020) in revenue, respectively, and both platforms hosted more than a billion active users worldwide (We Are Social et al., 2020). These numbers evidence both the extent of material resources at their disposal and their ubiquity in everyday life. These platforms have become essential to how we build and maintain relationships, receive news and information, find and perform labor, and participate in democracy (Duffy et al., 2019; Van Dijck et al., 2018). Indeed, withdrawing from platforms means facing not insignificant challenges to participating in social, cultural, and political life (Plantin & Punathambekar, 2019; Van Dijck et al., 2018). They have also become de facto monopolies due to network effects: "the more numerous the users who use a platform, the more valuable that platform becomes for everyone else" (Srnicek, 2017, p. 22). As a result of their exponentiating social and economic value, these platforms have been able to effectively evade efforts to curb their power.

The power of platforms has implications for what people know about platform algorithms. As I will show, platforms structure users' abilities to learn about algorithms, as well as which knowledge gains legitimacy. Thus, I summarize some relevant points about platform power below.

In understanding the power of platforms, we must first recognize what Gillespie calls the politics of the term "platform" (Gillespie, 2010). As Gillespie argues, this term does considerable discursive work in capturing a complex array of meanings. In technical terms, it refers to "web-based applications whose technical architecture emphasizes the provision of connection, programmability, and data exchange with applications developed by others" (Plantin et al., 2018, p. 296). "Platform" also captures a new kind of business model centered around data, which characterizes contemporary expressions of capitalism.
As Nick Srnicek writes, platforms have become an "efficient way to monopolise, extract, analyse, and use the increasingly large amounts of data that were being recorded" (Srnicek, 2017, p. 21). Yet, these definitions do little to capture the discursive heft of the term. The term "platform" connotes "a progressive and egalitarian arrangement, promising to support those who stand upon it" (Gillespie, 2010, p. 350). This framing helps tech companies position themselves as benevolent and neutral intermediaries for facilitating connections between people and delivering content, even as they repeatedly demonstrate that they are anything but neutral (Gillespie, 2010; Gillespie, 2018). In light of the above discussion, I find José van Dijck, Thomas Poell, and Martijn de Waal's definition of platforms apropos. The authors define "platform" as "a programmable digital architecture designed to organize interactions between users—not just end users but also corporate entities and public bodies" (Van Dijck et al., 2018, p. 4). Although the term "platform" urges an understanding of platforms as mere conduits for everyday life, in reality, they prescribe particular norms and values through their architecture (van Dijck, 2013). Such norms and values reflect the particular sociocultural roots of their creators. Alice Marwick explained the ethos of Silicon Valley, writing "a combination of radical activism and business culture created a strain of idealistic techno-determinism that, rather than rejecting mainstream capitalism, embraces it" (Marwick, 2013, p. 25). Indeed, as various scholars have argued, platforms have introduced a new kind of capitalism, which "records, modifies, and commodifies everyday experience" (Zuboff, 2015, p. 81). In this, as noted, companies define the conditions for participating on platforms and enact penalties for noncompliance—namely, bans and invisibility—via algorithms in order to ensure their continued success. Central to the social worlds under study in this dissertation, platforms have significantly transformed cultural production in terms of practices of labor, creativity, and citizenship (Duffy et al., 2019).

Platforms also derive power from the careful management of public perceptions. Through various communication channels, as Gillespie argues, platforms "establish the very criteria by which these technologies will be judged, built directly into the terms by which we know them" (Gillespie, 2010, p. 359). In addition to attempts to "control the narrative" about them, platforms use public statements to control user behavior. For example, Caitlin Petre and colleagues found that platforms rhetorically frame behavior that conflicts with their material interests as immoral, which places additional pressure on users to behave on platforms in prescribed ways (Petre et al., 2019). Platforms also strategically withhold and obfuscate information about their operations, which compounds asymmetries between themselves and users (Zuboff, 2015). Platforms know much more about their users than their users know about the platforms' algorithmic systems or business models. Pasquale (2015) describes this dynamic as a form of power: "To scrutinize others while avoiding scrutiny oneself is one of the most important forms of power" (p. 3). Shoshana Zuboff (2015) frames the dynamic also as an asymmetry in privacy rights. Platforms need—and are easily able to maintain—privacy (i.e., secrecy) to generate profit by harvesting user data, but users are required to forfeit their privacy to these companies (Zuboff, 2015).
secrecy) to generate profit by harvesting user data, but users are required to forfeit their privacy to these companies (Zuboff, 2015). As Zuboff summarizes, the companies “have extensive privacy rights and therefore many opportunities for secrets. These are increasingly used to deprive populations of choice in the matter of what about their lives remains secret” (2015, p. 83). I further discuss how platforms’ “privacy rights” specifically apply to algorithms in a later section. Although various stakeholders have increasingly raised red flags about platform power, as yet, little has been done to regulate these companies, nor have they been held to the same standards as traditional media companies (Hesmondhalgh, 2017). This is partly due to the 25 considerable political power platforms exercise as a result of their vast access to material resources. Indeed, platforms spend significant amounts of money on lobbying in order to retain their ability to self-regulate and continue to remain opaque in their operations (Pasquale, 2015; Van Dijck et al., 2018). There has been some more substantial movement on the regulatory front in the last few years. The European Union’s General Data Protection Regulation went into effect in May 2018, introducing new rules and regulations for the collection and use of data, which has done work to curb platform power (Solon, 2018). While the U.S. has yet to adopt similar regulation at the federal level, Facebook, Google, and other major tech companies face antitrust probes from the Justice Department. Still, these companies have quickly mobilized to fund stealth lobbying efforts in an attempt to continue to sway public opinion (Romm, 2020). As such, platforms continue to wield great power in public and private life, over users and institutions alike. In the next section, I will transition to addressing theories of epistemology that form the foundation of my conceptual framework of critical algorithmic literacy. Framing Critical Algorithmic Literacy Mounting concerns over algorithmic power have led to calls for greater attention to what people know about algorithms (Davidson, 2011; Klawitter & Hargittai, 2018; Rainie & Anderson, 2017). Yet, few studies have empirically explored this area thus far. As recognition of the value of knowledge building and meaning making around algorithms grows, a handful of studies have adopted the term “algorithmic literacy” (Davidson, 2011; Klawitter & Hargittai, 2018; Rainie & Anderson, 2017). The concept of literacy provides a useful model for work in this area as it connects to a centuries-long tradition interested in societal evolution, and conjures cross-disciplinary discussions of matters of broad social consequence. Historically, literacy has 26 been associated with progress, order, transformation, and control (Graff, 2010). Among other phenomena, literacy has factored into discussions around industrialization, migration, and political revolution (Kaestle, 1985). Literacy also connotes values of productive citizenship and self-fulfillment (Graff, 2010). Thus, from a sociological perspective, literacy acts as an important variable in the shaping and reshaping of society across time. Related to its entanglement with societal evolution, the classical approach to literacy has been associated with charting the advancement of individuals and nations. Typically, this endeavor is closely linked to schooling (Rowsell & Pahl, 2015). 
Here, literacy standards provide a means of assessing and remedying strengths and weaknesses in individuals’ skills, as well as benchmarking skills across social groups, cohorts, and populations. Notably, this approach has been usefully mobilized in recent decades by digital divide and inequality research that seeks to document and explain inequalities in digital skills (e.g., van Dijk & van Deursen, 2014). Although useful for planning and formulating interventions, the classical approach to literacy also invokes hierarchical dichotomies centered on deficits (e.g., "literate” vs. “pre-literate," "literate” vs. “illiterate," “haves” vs. “have-nots”) (Graff, 2010). Implicit in this deficit model is the naturalization of certain knowledge as the gateway to full participation in society. Research that calls attention to deficits across sociodemographic variables risks ignoring underlying social inequities and implying that certain knowledge universally translates to achievement (Graff, 2010). Classifying individuals or collectives as “illiterate” without considering economic, political, and cultural context often disadvantages those classified in this way by implying an individual failure or essentialist views. As the field of literacy studies has matured, scholars have productively critiqued the classical literacy approach in order to conceptualize a new paradigm. In general, contemporary 27 literacy studies have turned attention to contingency, complexity, and power (Graff, 2010; Rowsell & Pahl, 2015). In this, local contexts take center stage (Rowsell & Pahl, 2015). As Mark Warschauer (2003) summarizes, “there is no single construct of literacy that divides people into two cognitive camps. Rather, there are gradations and types of literacies, with a range of benefits closely related to the specific functions of literacy practices” (p. 43). This reconceptualization coincides with more critical perspectives that shift the outcomes of literacy from economic development and social advancement to collective action and inclusive social change. Along with this, literacy is positioned as “a sociocultural phenomenon, rather than a mental phenomenon” (Gee, 2015, p. 35). In other words, social practices are foregrounded over cognitive processes. As such, contemporary literacy studies also situate literacy beyond the institutional context of schooling and the modal context of text (Graff, 2010; Rowsell & Pahl, 2015). It is this contemporary rendering of literacy that I build on in my dissertation, as algorithmic literacy begins to feature in scholarly vernaculars. Further, in my approach to algorithmic literacy, I draw on feminist epistemology, science and technology studies, and critical literacy theories to construct a theoretical framework that addresses limitations in the existing work on knowledge-building and meaning-making around algorithms. In this body of work, three limitations are apparent. First, broadly speaking, existing research adopts an objectivist stance on knowing algorithms, which assumes a universal body of knowledge that can be identified, delimited, and used to assess levels of knowledge. Second, most studies treat knowledge-building and meaning-making as an individual pursuit independent of social context or relations. Third, relevant work generally focuses on cultivating skills for the individual to function better in the status quo, rather than cultivating skills for the individual to transform the 28 status quo so it functions better for them. 
In what follows, I address these shortcomings through a theoretical framework premised on situated knowledges, the social worlds framework, and critical literacy. Situated Knowledges Existing work on algorithmic knowledge and literacy predominantly revolves around what individuals know about the technical functioning of specific algorithms. Various studies probe users' understanding of social media algorithms, including basic awareness of algorithms, the factors and criteria certain algorithms consider in curating content, the motives underlying algorithms, and the detection of biases in certain algorithmic systems (DeVito et al., 2017; DeVito et al., 2018; Eslami et al., 2015; Eslami et al., 2017; Klawitter & Hargittai, 2018; Rader et al., 2018; Rader & Gray, 2015). These investigations also tend to bound knowledge to a domain implicitly assumed to be important (e.g., how algorithmic curation on social media works) and investigate the extent of participants' familiarity with this domain. Many studies offer judgments of whether people have "more" or "less" understanding of algorithms (e.g., Eslami et al., 2015; Eslami et al., 2016; Rader & Gray, 2015). Indeed, much of this work refers to participants' knowledge as "folk theories" (e.g., DeVito et al., 2017; DeVito et al., 2018; Eslami et al., 2016; Rader & Gray, 2015), which are considered "usually inaccurate in some way" (Rader & Gray, 2015, p. 176). This approach articulates an objectivist approach, which assumes there is a body of knowledge “out there” about algorithms that we all can and should understand in the same way. When we pose the question: "who gets to define this body of knowledge?", it becomes clear that this assumption implicitly establishes a dominant way of knowing algorithms. Postmodern perspectives—particularly feminist epistemologies—orient knowing around interpretivism, which destabilizes the idea of universal knowledge. Principally influential in this 29 perspective, feminist epistemologies disrupt commitments to essentialist renderings of identity, instead emphasizing heterogeneity and multiplicity. The most famous emblem of this notion is Donna Haraway's cyborg. The cyborg represents hybridity, the dissolution of boundaries between humans, machines, and animals. The cyborg resists dualisms (e.g., Male/female, mind/body, civilized/primitive) in favor of "partial identities and contradictory standpoints" (Haraway, 2000, p. 295). Susan Leigh Star (1990) builds upon this perspective to add that we maintain membership in multiple social worlds and, therefore, are "heterogeneous, split apart, multiple" (p. 29). The negotiation of these multiple memberships entails that "we are all marginal in some regard" (Star, 1990, p. 52), which requires near constant negotiation and mingling of repertoires from different worlds. As we do this, "we create metaphors—bridges between those different worlds" (Star, 1990, p. 52). However, some attempts to translate experience across worlds enact standards, and the question of whose translation gains traction is one of power. Bridging worlds reveals political commitments that "may heal or create, erase or violate, impose a voice or embody more than one voice" (Star, 1990, p. 52). This sense of fragmented or federated identity and worlds suggests a parallel sense of heterogeneity in ways of knowing—"Subjectivity is multidimensional; so, therefore, is vision" (Haraway, 1988, p. 586). Here, again, Haraway (1988) provides foundational framing in conceptualizing situated knowledges. 
The notion of situated knowledges rejects totalizing visions of the world, which present an "impartial" view of a presumed external, universal reality—a fiction she terms a "god trick" and "a conquering gaze from nowhere" (Haraway, 1988, p. 581). Totalizing visions ignore the variation in embodied human experiences that produces boundless knowledges. As such, assertions of the universality of knowledge are acts of power that prioritize some perspectives and marginalize others. Thus, Haraway contends that objective knowledge should be understood as representing an always partial, plural, and embodied view locatable in time and space. Such partial, plural sights are accountable in their embodiment and, so, may be contested, critiqued, interpreted, decoded. They may also come together in "webbed connections," though these are not meant to unify vision in totalizing narratives. Instead, Haraway concludes, "Rational knowledge is a process of ongoing critical interpretation among 'fields' of interpreters and decoders. Rational knowledge is power-sensitive conversation" (1988, p. 590). From this perspective, all knowledge claims must be evaluated in relation to whom and what is centered and marginalized by them. The field of literacy studies similarly understands literacy to be situated as it is heavily shaped by social and historical context (Rowsell & Pahl, 2015). As Jennifer Rowsell and Kate Pahl summarize, “literacy is ideologically and culturally situated, and…literacy practices are not fixed and static but complex, changing and contingent on identities, locality, time, space, context, culture and practice” (2015, p. 1). While classical approaches to literacy give the impression of a simple, straightforward story of learning to write and read, more contemporary work suggests a more complex picture. Classical approaches draw on positivist modes of inquiry that prioritize discrete technical skills (e.g., word identification skills, phonological awareness, vocabulary development, etc.) and view texts as independent of the social and political contexts from which they spring (Freire & Macedo, 1987; Gee, 1999). Yet, historicizing conceptions of literacy across a broad timeline demonstrates that social context informs the goals, purposes, and practices of reading and writing. As Warschauer (2003) explains, “Reading is a transitive verb; learning to read inevitably means learning to read something…And to read and understand that something involves bringing to bear a vast amount of background knowledge, or schemata” (pp. 44-45). Thus, in contrast to previous approaches, I assume indeterminacy in knowing algorithms, which dissolves predefined normative assumptions about “right”/“wrong,” “good”/“bad,” “important”/“unimportant” knowledge. Instead, literacy is conceptualized as dependent upon the individual’s social, cultural, and political contexts. 
Knowledge-Building within Social Worlds 
Most existing studies treat knowledge-building around algorithms as an individual pursuit independent of social context or relations. These studies focus on what individuals know without exploring knowledge building and meaning-making processes, and/or they focus on individuals’ interactions with different platforms—as opposed to with other people—as the source of their knowledge (e.g., DeVito et al., 2017; Eslami et al., 2015; Klawitter & Hargittai, 2018; Rader & Gray, 2015; Rader, 2017).
Yet, science and technology studies (STS) work has long argued that different social groups interpret technologies differently according to their respective values and interests (Bijker et al., 2012; Bijker & Law, 1992). Indeed, our membership in different social groups shapes our orientations towards technologies as nearly all learning is constructed within social worlds (Bowker & Star, 1999; Warschauer, 2003). Social worlds are "universes of discourse" (Clarke & Star, 2008, p. 113)—shared realities or "shared discursive spaces that are profoundly relational" (Clarke & Star, 2008, p. 113). Social worlds are understood to be “groups with shared commitments to certain activities, sharing resources of many kinds to achieve their goals and building shared ideologies about how to go about their business” (Clarke & Star, 2008, p. 113). This understanding of social worlds represents a wide variety of collectives, including recreation groups, occupations, and theoretical traditions. However, it also recognizes social worlds as “fluidly bounded” by collective action and thinking (Clarke & Star, 2008). The concept of social worlds captures the organization of life as people do things together and develop a shared language and ethos for their “doing.” Importantly, individuals who comprise social worlds are members of multiple worlds, some of which may intersect with one another. According to the social worlds framework, our beliefs, attitudes, and values related to knowledge objects are constructed in social practice and via social relations (Clarke & Star, 2008). While some knowledge may be written down or otherwise made explicit—as facts, rules, or heuristics (Collins, 2012)—much of literacy remains tacit. As such, literacy constitutes “socialization, not verbalization” (Collins, 2012, p. 322). From this view, learning primarily occurs informally “as learners and experts observe, imitate, experiment, model, appropriate, and provide and receive feedback” (Warschauer, 2003, p. 121). Learning may be derived from books and other inscriptions in media, but many of the significant orientations towards a knowledge object are constructed via social interactions. In turn, social worlds are largely defined by meaning-making practices and relations around shared objects (Clarke & Star, 2008). Membership in a social world is constituted in large part by an individual's familiarity with the social world's discourses and practices around shared objects—a sort of "learning-as-membership and participation" (Bowker & Star, 1999, p. 295). Consequently, literacy involves identifying (or coming to identify) with a social world—"learning how is intimately tied up with learning to be, in other words developing the disposition, demeanor, outlook, and identity of the practitioners" (Warschauer, 2003, p. 122). In short, from this perspective, becoming literate means learning how to think, act, and interact with a class of shared objects within a social world. In the proposed research, I acknowledge that knowing cannot be divorced from the discursive spaces we occupy. Therefore, in contrast to previous work, my dissertation takes social worlds as an essential element of algorithmic literacy. From this perspective, social worlds and algorithmic literacy are co-constituted. Next, I address how considerations of power alter theorizing around algorithmic literacy.
Critical Literacy Previous investigations of algorithmic knowledge and literacy generally revolve around a core goal of improving the functioning of algorithmic systems and/or helping users make better use of algorithmic systems (e.g., Eslami et al., 2015; Klawitter & Hargittai, 2018; Rader & Gray, 2015). In the former case, it is assumed that what individuals know about algorithms will affect how they use these systems, which in turn affects the functioning of the systems. In the latter case, it is assumed that the more individuals know about how algorithms work, the more effectively they will be able to use systems to accomplish their goals. This belief continues the aims of digital divide and digital literacy literature, which argues that we need to promote digital literacy because "[t]hose who function better in the digital realm and participate more fully in digitally mediated social life enjoy advantages over their digitally disadvantaged counterparts" (Robinson et al., 2015, p. 570). More broadly, these ideas encapsulate the historical goal of industrial progress in literacy initiatives (Freire & Macedo, 1987)—particularly those focusing on computational skills (Freire & Macedo, 1987; Pangrazio & Selwyn, 2019). Proclamations that, for example, computer science is “a 'new basic' skill necessary for economic opportunity and social mobility” (The White House, 2016), or “a fundamental skill—just like reading, writing, and arithmetic—used by everyone by the middle of the 21st Century” (Wing, 2014) are quite common and articulate the underlying goal of facilitating overall labor and economic growth by improving life chances at the individual level. This is undoubtedly a salutary aim. However, this positioning of literacy presumes a path-dependent future that may not be fully welcome by all. It ignores the important ways that literacy can not only entail cultivating skills 34 for the individual to function better in the status quo, but also cultivating skills for the individual to transform the status quo so it functions better for them. Critical literacy perspectives take up this point, suggesting that literacy can serve a goal of liberation (Freire & Macedo, 1987; Mosley & Tucker-Raymond, 2007). Paulo Freire's seminal work characterizes critical literacy as the process of cultivating a critical consciousness (Freire, 2000). From this perspective, texts do not contain universal truths, but inscribe and naturalize certain discourses (Mosley & Tucker-Raymond, 2007). Thus, literacy constitutes a means by which individuals come to understand the power relations and systems of oppression that shape their lives and are reified through discourses within texts (Mosley & Tucker-Raymond, 2007; Vasquez, 2012). Critical literacy recognizes both knowledge and reality as constructed and indeterminate (Freire, 2000; Mosley & Tucker-Raymond, 2007). Literacy, then, involves interpreting and therefore giving meaning to texts (Kinnucan-Welsch, 2010). Becoming literate signifies a process of freeing oneself from the dominant prescriptions for social order through critical intervention that transforms "the concrete situation which begets oppression" (Freire, 2000, p. 50). Literacy entails deconstructing the ideological values and assumptions of texts. 
It also entails reconstructing visions of the world to privilege diversity and difference, and to recognize the plurality of human experiences and commitments in opposition to how certain experiences and commitments are normatively made central (Mosley & Tucker-Raymond, 2007). As such, critical literacy centers on enabling individuals to question the status quo represented in texts and to take transformative action in pursuit of social justice and equity (Vasquez, 2012). With this view of literacy, Freire (2000) argues that literacy is not a gift to be bestowed “by those who consider themselves knowledgeable upon those whom they consider to know nothing” (p. 72). This so-called “banking” model of education ultimately seeks to assimilate students— 35 particularly those who have been made marginal—into mainstream society (Freire, 2000). By contrast, critical literacy imagines a horizontal relationship between teachers and students— teachers and students working collaboratively to reveal the systems of power structuring social reality and to reassemble reality anew. In this approach, teachers are also students and students are also teachers. Moreover, as Freire asserts, “[t]he solution is not to ‘integrate’ [marginalized students] into the structure of oppression, but to transform that structure so that they can become ‘beings for themselves’” (2000, p. 74). Although Freire principally addresses formal educational environments, we can extrapolate this perspective to understand teaching and learning as situated within a broader landscape of knowers. With the specter of algorithmic power, the principles of critical literacy perspectives lend themselves to algorithmic literacy. My dissertation draws on critical literacy to reorient algorithmic literacy around the core aim of enabling individuals to discern, challenge, and reconstruct dominant visions of the world inscribed in algorithms. Challenges to Knowing Algorithms Any exploration of algorithmic literacy must clarify what it means to know algorithms. This task begins with accounting for the ways that algorithms are not knowable. In what follows, I will enumerate challenges to knowing algorithms, which include the black boxing, complexity, and the contingent and dynamic nature of algorithms. The Black-Boxing of Algorithms Algorithms are often quietly deployed, such that many users are not aware of their presence (Eslami et al., 2015; Rader & Gray, 2015), let alone anything else about them. Algorithms are typically intentionally concealed as part of companies' efforts to protect their proprietary intellectual property or as corporate secrecy (Burrell, 2016; Kroll et al., 2016; 36 Pasquale, 2015). Companies argue that algorithms are the “secret sauce” by which they generate revenue and, thus, in order to maintain a competitive advantage, the inner workings of algorithms must be kept secret (Bucher, 2018; Burrell, 2016; Pasquale, 2015). Companies also argue that they must constrain flows of information about their algorithms in order to prevent users from “gaming” them (Burrell, 2016; Pasquale, 2015). Such gaming is said to harm the integrity of the system and advantage “gamers” unfairly, unethically, and/or illegally (Kroll et al., 2016). Beyond these concerns, Frank Pasquale (2015) argues that companies obscure their algorithms in order to evade regulation and public scrutiny, therefore, maintaining freedom and agility in their business models, specifically in how they deploy algorithms. 
Secrecy enables companies to develop their algorithmic systems and iterate in ways that benefit their bottom line, while avoiding being ensnared or slowed down by critical public inquiry or litigation (Pasquale, 2015; Zuboff, 2015). In other words, secrecy helps companies “move fast and break things” (Henry, 2014) while keeping stakeholders from seeing or casting doubt on companies’ methods. Still, companies like Facebook and Google, whose business models have long relied upon algorithms, have received criticism over the years for the ways their systems seem to privilege certain actors over others, produce discriminatory outcomes, manipulate users, and violate users' privacy (Nissenbaum & Introna, 2000; Noble, 2018; Pasquale, 2015; van Dijck, 2013). In particular, because algorithmic systems inherently depend upon data to function, many have criticized online platform owners for using secrecy to coercively urge users to use their platforms in specific ways and force dependency on them in order to predict and modify user behavior as a means of generating revenue and gaining market control (van Dijck, 2013; Zuboff, 2015). 
The Complexity of Algorithms 
Beyond efforts to obscure what algorithms do, how, and why, the complexity of algorithmic systems fundamentally limits what we can comprehend about algorithmic processes (Ananny & Crawford, 2016; Burrell, 2016; Seaver, 2014). As Nick Seaver (2014) writes, “A determined focus on revealing the operations of algorithms risks taking for granted that they operate clearly in the first place” (p. 8). Most fundamentally, algorithms commonly process enormous datasets with a large number of heterogeneous properties, and integrate a series of complex statistical and computational techniques in their code, which limits the capacity to comprehend what these algorithms do and how (Burrell, 2016). Relatedly, machine learning algorithms often function independently of human oversight—“unsupervised”—in identifying and weighting important variables. Thus, as Jenna Burrell (2016) explains: 
Where an algorithm does the ‘programming’ (i.e. optimally calculates its weights) then it logically follows that being intelligible to humans (part of the art of writing code) is no longer a concern, at least, not to the non-human ‘programmer.’ (Burrell, 2016) 
Further, even when companies can share how algorithms have weighted variables or which variables algorithms rely on, it still may not be possible to discern why algorithms produce particular models or outcomes (Burrell, 2016; Mackenzie, 2015). Randomness built into algorithmic processes additionally obscures how certain outputs or outcomes come to be (Kroll et al., 2016). Given all of these factors, to some extent, even engineers who work on algorithms may not be able to articulate how and why algorithms function the way that they do (Ananny & Crawford, 2016). 
The Dynamic Nature of Algorithms 
Randomness also points to the contingent and ontogenetic nature of algorithms (Kitchin, 2017). Some aspects of algorithmic systems “never take durable, observable forms” (Ananny & Crawford, 2016, p. 9), which means that some of algorithms’ work may not necessarily ever be captured or recorded. In the first place, software development typically relies on iterative methodologies in which insights gleaned during development, implementation, and maintenance can be used for improving the precision and efficacy of algorithms (Kroll, 2018).
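This iterative quality can be glimpsed in a deliberately simple, hypothetical sketch (written in Python; the posts, the scoring rule, and the update rule are all invented for illustration and drawn from no actual platform): a single tunable weight is nudged after each release cycle in response to observed engagement, and the ordering the rule produces shifts accordingly. 

# A minimal, purely hypothetical sketch: a toy "recency weight" is nudged after
# each release cycle based on observed engagement, so the ordering the rule
# produces is a moving target rather than a fixed, inspectable object.
posts = [
    {"id": "a", "likes": 90, "recency": 0.10},
    {"id": "b", "likes": 30, "recency": 0.95},
    {"id": "c", "likes": 60, "recency": 0.50},
]

def rank(posts, recency_weight):
    """Score each post as its like count plus a weighted recency bonus."""
    return sorted(
        posts,
        key=lambda p: p["likes"] + 100 * recency_weight * p["recency"],
        reverse=True,
    )

recency_weight = 0.3
for cycle, engagement_lift in enumerate([0.2, 0.3, 0.4], start=1):
    # Hypothetical update rule: raise the recency weight when engagement rises.
    recency_weight += engagement_lift
    ordering = [p["id"] for p in rank(posts, recency_weight)]
    print(f"cycle {cycle}: recency_weight={recency_weight:.1f} -> {ordering}")
    # The ordering shifts between cycles (a, c, b becomes b, c, a).

Even in this toy form, there is no stable object to point to; what the ranking rule "is" depends on the cycle in which one encounters it.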
Of course, algorithms also often rely on machine learning, which means their internal logic develops in situ in relation to training data and data produced by user activity (Burrell, 2016; Seaver, 2017). Machine learning algorithms also evolve alongside the broader system as interfaces, settings, capabilities, and user populations change (Ananny & Crawford, 2016). Technology companies rely on experimentation, or A/B testing, to continuously tweak their algorithms as they optimize for certain outcomes. At any given moment, a company may be running dozens of experiments, such that no single algorithm exists (Seaver, 2014). The different variations of an algorithm within A/B tests contribute to the dynamic quality of algorithms (Seaver, 2014). This dynamic quality makes it difficult to fix and extract details that can be known about algorithms. 
Ways of Knowing Algorithms 
The above-described challenges provide a starting point for understanding the ways that algorithms are knowable. Primarily, they complicate notions that the “truth” of any given algorithm exists objectively and divorced from context (Ananny & Crawford, 2016; Bucher, 2018; Kitchin, 2017; Seaver, 2017). Instead, as others have argued, algorithms should be understood as emergent from the practices through which people engage with them (Bucher, 2018; Seaver, 2017). In this, algorithms can be considered from many different angles—technically, materially, philosophically, and so on (Kitchin, 2017). Bucher (2018) pushes this multidimensional conception of algorithms further, proposing the manyfoldedness of algorithms: 
Whereas plurality assumes that there is one stable thing and different perspectives one can take on that thing, to speak of manyfoldedness implies the idea that algorithms exist on multiple levels as 'things done' in practice. In other words, algorithms can be magical and concrete, good and bad, technical and social. (p. 19) 
As such, there is no single algorithm to know, nor is meaning making stable (Seaver, 2017). From these ontological understandings of algorithms, I organize the epistemology of algorithms into two broad streams: a technical approach and a practical approach. 
Technical Knowledge 
Technical approaches to knowing algorithms focus on the design and functionality of algorithms—their processes, techniques, inputs, outputs, design goals, and so on. Technical knowledge may constitute knowledge about the functioning and design of specific algorithms or algorithms generally. It can involve knowing the “grammar” and “vocabulary” of a programming language used to accomplish a goal or solve a problem, though it need not. Technical knowledge rarely fully reveals algorithms or grants the ability to predict how algorithms will behave and what outcomes they will produce (Kroll et al., 2016; Kroll, 2018). However, algorithms can be understood at a higher level in terms of their underlying design goals and purpose (Kroll, 2018; Rieder, 2017). As such, technical knowledge does not usually “reveal some hidden truth about the exact workings of software or to unveil the precise formula of an algorithm” (Bucher, 2018, p. 61), but rather offers a general sense of what an algorithm “was designed to do, how it was designed to do that, and why it was designed in that particular way instead of some other way” (Kroll, 2018, p. 4). The extent to which individuals can understand algorithms from a technical standpoint is limited by their relative opacity, complexity, and ephemerality.
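A second hypothetical sketch, again in Python with invented signal names and weights, makes this limit concrete: even when a system's weights are fully visible, the outcome for any particular person still depends on the signals fed into the model at that moment, so technical knowledge of the weights offers a general sense of what the system favors rather than a prediction of what any given feed will contain. 

# Purely illustrative; the weights and signal names are invented, not any
# platform's actual model. Identical, fully known weights still yield
# different outcomes for different users, because the inputs differ.
weights = {"affinity": 0.6, "recency": 0.3, "popularity": 0.1}

def score(signals):
    """Weighted sum of one user's signals about a single post."""
    return sum(weights[k] * signals[k] for k in weights)

# Two users, the same candidate posts, the same known weights, but different
# interaction histories and therefore different input signals.
signals_by_user = {
    "user_1": {
        "news_clip":    {"affinity": 0.9, "recency": 0.4, "popularity": 0.70},
        "friend_photo": {"affinity": 0.2, "recency": 0.9, "popularity": 0.10},
        "viral_meme":   {"affinity": 0.3, "recency": 0.5, "popularity": 0.95},
    },
    "user_2": {
        "news_clip":    {"affinity": 0.1, "recency": 0.4, "popularity": 0.70},
        "friend_photo": {"affinity": 0.8, "recency": 0.9, "popularity": 0.10},
        "viral_meme":   {"affinity": 0.4, "recency": 0.5, "popularity": 0.95},
    },
}

for user, per_post in signals_by_user.items():
    ranked = sorted(per_post, key=lambda post: score(per_post[post]), reverse=True)
    print(user, "->", ranked)  # the same weights produce different orderings

The sketch does not describe how any real platform ranks content; it only dramatizes why inspecting weights or design documents falls short of predicting outcomes.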
Additionally, the relevance and salience of algorithms varies according to the lived experiences of different people in different social worlds. For some worlds, algorithms are central, and individuals interact extensively with 40 them for some purpose. For these worlds, algorithms can be considered naturalized (Bowker & Star, 1999), and, thus, technical knowledge will likely be more formalized. Consequently, individuals in these worlds will have greater opportunities for developing the skills to read algorithms for what they say. Here, computer scientists provide a paradigmatic example of a social world in which algorithms are naturalized as a result of their salience and relevance. Another example is the social world of digital advertising in which algorithms mediate such activities as bidding for advertising space, audience targeting, and search engine optimization. In both examples, members of these social worlds benefit from more readily available (formal and informal) learning resources about algorithms, as well as more frequent interaction with them. Practical Knowledge In contrast to technical approaches to knowing, algorithms are alternately knowable through phenomenological experiences. This approach does not locate “the algorithm” in source code awaiting discovery, but rather as emergent from situated encounters with algorithms. As Bucher (2018) writes, this approach also entails "a shift in attention away from questions of what algorithms are to what they do as part of specific situations" (p. 49). Knowing algorithms in this way rests on pragmatist and constructivist epistemology, which emphasizes meaning-making not in cognition, per se, but practice. Indeed, “practical” hearkens to practice theory—in particular, Pierre Bourdieu’s theory of practice, in which he argues individuals internalize principles and assumptions of broader systems and re-enact them in practice (Bourdieu, 1977). As Bourdieu writes of his conceptualization of the habitus: “subjects” are active and knowing agents endowed with a practical sense, that is, an acquired system of preferences, of principles of vision and division (what is usually called taste), and also a system of durable cognitive structures (which are essentially the 41 product of the internalization of objective structures) and of schemes of action which orient the perception of the situation and the appropriate response. The habitus is this kind of practical sense for what is to be done in a given situation—what is called in sport a “feel” for the game, that is, the art of anticipating the future of the gamer which is inscribed in the present state of play. (Bourdieu, 1998; emphasis in original) Thus, practice knowledge of algorithms captures this “practical sense,” a “’feel for the game.’” It is knowledge that is embodied, and often tacit (Polanyi, 2009). Importantly, individuals may not realize or consciously consider that they are interacting with “algorithms,” per se. Practical knowledge includes what Bucher refers to as the “algorithmic imaginary,” which addresses felt experiences with algorithms and attempts to “bring sense to what we do not know but know nonetheless” (Bucher, 2018, p. 61). For such knowledge, no rubric exists for evaluating its accuracy or adequacy, nor would such a rubric be useful. To the extent that individuals view practical knowledge as “real” or legitimate, it can be considered real in its effects (Thomas & Thomas, 1928). 
Understanding algorithms from a practical standpoint means understanding algorithms as part of participation in the social practices and relations within social worlds. It involves bringing to bear discourses that define a social world to understand what algorithms “want”—in other words, the particular prescriptions for behavior and thought algorithms issue. Further, practical knowledge materializes in how people consign meaning to algorithms through situated action (Suchman, 2007). As in the case of technical knowledge, the salience and relevance of algorithms to a social world matter. When algorithms are not highly salient or relevant, the practices around them may not yet (or ever) be routinized or homogeneous, which impedes the crystallization of meaning—algorithms are more ambiguous. Indeed, practical knowledge entails 42 having a sense of how algorithms should be approached in practice. In the next section, I will conclude this chapter by presenting the research questions guiding this dissertation. Research Questions Most existing work focuses on what individuals know about algorithms, rather than how they know what they know. A few studies identify some sources of knowledge (e.g., DeVito et al., 2017; Eslami et al., 2015; Eslami et al., 2017). However, these findings are either incidental and/or present simple bivariate relationships between different factors and awareness of algorithms, rather than deeply exploring practices by which users acquire, interpret, and assemble information. Moreover, most relevant work attends to sources of technical knowledge only, and has less to say about practical knowledge-building. Thus, the proposed research asks: what do algorithmic literacy practices look like—that is, how do individuals learn about, make sense of, and respond to algorithms? There is also little existing literature on the ways that learning and knowing algorithms happen within social worlds. Instead, existing studies focus on individuals' knowledge independent of their social context and relations. Although algorithms may be equally salient to different social worlds, the respective discourses of these worlds may shape constructions of algorithmic literacy in different ways (and vice versa). Therefore: How are social worlds and knowledge practices and processes around algorithms co-constituted? In other words, how does the discursive landscape of individuals' social worlds shape the way they make sense of algorithms, and how do the ways individuals make sense of algorithms shape the discursive landscape of their social worlds? Finally, the proposed research theorizes critical algorithmic literacy as a means of 43 confronting algorithmic power. Few studies have empirically addressed the potential relationship between algorithmic power and algorithmic knowledge. Thus, it is not clear whether, how, and under what circumstances algorithmic literacy may distort, attenuate, (re)configure, or otherwise transform power relations mediated by algorithms. Further, it is also not clear how different kinds of algorithmic power may condition variation in the (potential) relationship between algorithmic literacy and power. As such: How do knowledge practices and processes around algorithms intersect with power relations, particularly the material and phenomenological experience of algorithmic power? 
44 CHAPTER 3: Methods This dissertation centers on two case studies: 1) Instagram influencers’ pursuit of visibility on the platform, and 2) the leftist community known as “BreadTube” and their concern over the relative visibility of right-wing (and leftist) ideology on YouTube. I selected these cases for three principal reasons. First, I expected the communities’ respective pursuits of and concerns over visibility would foreground practices of learning about and making sense of algorithms. Second, membership in these communities does not depend upon technical expertise. Most Instagram influencers and BreadTubers are not computer scientists, data scientists, or information scientists. Consequently, I assumed algorithms would be somewhat new to them and would allow me to more clearly see how algorithms gain legibility and coherence for these communities via knowledge building practices. Third, both worlds are marked by power struggles mediated by algorithms, as they involve, for example, the dynamics of platform and surveillance capitalism (Srnicek, 2017; Zuboff, 2015; Zuboff, 2019); hierarchies of fame, and social and cultural capital (Duffy, 2017; Marwick, 2015); the political economy of media industries (Nieborg & Poell, 2018). As part of the backdrop for the two cases, I hoped these power struggles would facilitate an investigation of the relationship between algorithmic literacy and power. Despite their similarities, the two cases have notable contrasts in terms of the main preoccupations of the respective communities. While Instagram influencers principally seek professional success and financial rewards in their pursuit of visibility, BreadTubers principally seek to contribute to broader social and political change in their collective pursuit of visibility. I expected these differences to provide a degree of variation in how the two communities would learn about and make sense of algorithms. Additionally, the two communities primarily seek 45 visibility on different platforms (Instagram and YouTube), which provided the opportunity to consider any variation resulting from differences in the technical or institutional structures of the two platforms. To explore these cases, I used a multi-sited qualitative approach, which involved collecting data from online discourse materials and from semi-structured interviews with Instagram influencers and BreadTubers. I will now turn to detailing my specific methods. I begin by offering additional background and context about who Instagram influencers and BreadTubers are in order to better set the stage for my findings. I will then describe the data I collected and analyzed, as well as how I collected and analyzed this data. Finally, I will offer some reflections on my ethical approach across the two cases. Instagram Influencers Cases Digital influencers are a type of celebrity, usually micro-celebrities (Senft, 2008), who have accrued a large number of followers on social media and often mobilize this social capital for material rewards or social cachet (Abidin, 2015; Duffy, 2017). Though influencers typically maintain a presence across social channels—blogs, Twitter, Instagram, etc.—Instagram is considered the platform of choice among influencers and marketing professionals (MediaKix, 2019a; Newberry, 2019). Influencer marketing revolves around the idea that influencers can impact their followers’ purchasing patterns as a result of the relational intimacy they cultivate with their followers. 
Thus, influencers make their money mainly through sponsored posts and affiliate partnerships, wherein they earn commission from sharing links to products. Influencers can earn anywhere from $50 to tens of thousands of dollars per sponsored post. On average, influencers make roughly $1,000 per 100,000 followers, or one cent per follower (Carbone, 2019). Influencer marketing campaigns may feature influencers of all audience sizes (see Figure 1). While brands desire to reach a wide audience, they prioritize ROI, which makes “micro-influencers” attractive for their relatively higher engagement rates (MediaKix, 2019b). Influencers typically position themselves within certain niche interests (e.g., travel, lifestyle, cooking), which facilitates marketing campaigns. 
Figure 1. Instagram Influencer Tiers by Audience Sizes. Note. (MediaKix, 2019b) 
Some of the most popular influencers include Italian fashion influencer Chiara Ferragni, Dutch makeup artist influencer Nikkie de Jager, and American entertainer influencer Amanda Cerny. 33-year-old Ferragni started out as a fashion blogger during her time as a law student and quickly grew her audience to over 10 million followers on Instagram after a series of features in prominent magazines and critical acclaim. de Jager began to vlog at age 14 in 2008, posting makeup tutorials on YouTube (McMeekin, 2018). de Jager took makeup classes and worked professionally as a makeup artist, while slowly growing her audience to nearly 14 million Instagram followers (Krause, 2020). 29-year-old Cerny rose to fame after gaining a large following on Vine for her comedic skits. Cerny was a teen model and was a Playboy Playmate of the Month in October 2011 while attending college. She currently has more than 26 million followers on Instagram. 
Figure 2. Chiara Ferragni’s Instagram Profile 
Figure 3. Nikkie de Jager’s Instagram Profile 
Figure 4. Amanda Cerny’s Instagram Profile 
Who influencers are, culturally speaking, rests on a tension between a conceptualization of influencers as ordinary individuals who have achieved micro-celebrity by actively pursuing it (Abidin, 2015; Duffy & Hund, 2015; Duffy, 2017; Marwick, 2015) and the reality that most successful influencers tend to be those who come from privileged backgrounds (Duffy, 2017). For example, demonstrating the former understanding of influencers, for its list of “Top Influencers,” Forbes only counts influencers who create their own content and “who made it big by building their fame from the internet up, rather than celebrities who also happen to have large audiences online” (O'Connor, 2017, n.p.). Here, Forbes captures the common perception that digital influence is achieved purely through dedication, hard work, and skill. At the same time, there is a steep on-ramp to growing a monetizable following. Producing “engaging” and high-quality content requires a financial investment in travel, trendy products, photography equipment, editing software, social media management apps, paid services, and so on (Duffy, 2017). Thus, while those who achieve influence do typically earn it, stereotypes of influencers as privileged young people living a life of luxury have a grain of truth. Not just “young people,” Instagram influencers tend to be young women. The predominance of female Instagram influencers is due in large part to the feminization of the Internet, wherein girls and women are more likely to maintain a blog or social media presence (Duffy, 2017).
As Brooke Duffy explains, influencers’ labor “relies on historically constructed notions of femininity—particularly discourses of community, affect, and commodity-based self- expression” (Duffy, 2017, p. 8). Popular influencer niches—beauty, fashion, and parenting— reflect the gendered nature of Instagram influence as they represent domains traditionally associated with women. Moreover, lists of top influencers within these niches and others overwhelmingly feature women (e.g., O'Connor, 2017). The gendered nature of Instagram influence often invites sexist critiques of the influencers as “frivolous,” “vapid,” and “narcissistic” (Abidin, 2016; Duffy, 2017). Such critiques obscure or minimize the considerable skill and labor underpinning influencers’ 52 accomplishments (Abidin, 2016). Because influencers’ success depends upon the immaterial labor of relationship building, networking, and behind-the-scenes partnerships, many conclude that influencers simply gained their followers through sheer luck (Abidin, 2016). Yet, as many scholars have argued, in reality, influencers strategically capitalize on these narratives in creative, inventive ways (e.g., Abidin, 2016) BreadTube “BreadTube” refers to a loosely connected network of leftist YouTubers and their audiences. The name “BreadTube” comes from the 1892 book, The Conquest of Bread, written by left-wing anarchist, Peter Kropotkin ("The bread book", n.d.). The “bread book” became a meme among online leftists, coming to represent adherence to the book’s central commitment to creating “a world without poverty or repression” via leftist politics ("The bread book", n.d.). In an early post on r/BreadTube, a moderator explained the purpose of the "Read the bread book" meme: 1. To be a meme 2. To spread awareness and consciousness that alternatives exist to the standard US/Soviet model of organising societies where a group of people at the top tell control most things and tell everyone else what to do. You don't have to be a Capitalist or a Leninist, a Social Democrat or a Stalinist. You can be you. ([ -_-_-_-otalp-_-_-_- ], 2018) Turning theory into praxis (Kuznetsov & Ismangil, 2020), as mentioned, BreadTubers come together around the visibility war—a shared commitment to spreading leftist ideology and opposing the rise and indoctrination of far-right ideology online. As the BreadTube.tv website states: The goal of the community is to challenge the far-right content creators who have taken advantage of the profit-driven algorithms used by services like YouTube for the purpose of spreading hate. Instead we wish educate people on how their world operates, the alternative possible visions for our future, and how we organize ourselves to get there. (n.d.) 53 To pursue this goal, BreadTube comes together across various online venues, including YouTube, Twitter, and Reddit. Figure 5. r/BreadTube Frontpage Although leftist ideology defines the BreadTube community, as it has grown, it has become host to and has been influenced by a range of political movements, including socialism, progressivism, communism, and anarchism. Most BreadTubers actively agitate for the demise of capitalism, while pursuing social justice for marginalized groups. 
However, these pursuits sometimes splinter BreadTube into the “Dirtbag left,” which tends to primarily focus on class-based inequality and rejects “civility politics” ([u/Jethawk55], 2020), and the “Woke left” or “Social justice left,” which tends to primarily focus on multiculturalism and identity-based oppression (see Figure 6). 
Figure 6. BreadTube Ideologies 
The most well-known BreadTubers include Natalie Wynn, a former philosophy Ph.D. student turned YouTuber known as “ContraPoints,” who draws on left-wing theory to argue against prominent right-wing talking points; Oliver Thorn, who utilizes his degrees in philosophy to explain philosophy to the masses on his YouTube channel, “Philosophy Tube,” often addressing sociopolitical topics like mental health, data rights, and sexuality; and Harris Brewis, known as Hbomberguy, who produces video essays presenting left-wing arguments debunking flat earth conspiracy theories, antifeminism, and climate change denial, among other topics, as well as critical analyses of film, television shows, and video games. 
Figure 7. ContraPoints’ YouTube Channel and 10 Most Popular Videos 
Figure 8. Philosophy Tube’s YouTube Channel and 10 Most Popular Videos 
Figure 9. Hbomberguy’s YouTube Channel and 10 Most Popular Videos 
Although BreadTube has become more visible in mainstream culture over the last two years—each of the three aforementioned BreadTubers has more than 500,000 subscribers, and r/BreadTube has more than 100,000 subscribers—who constitutes “BreadTube” is highly contentious. Like other ICT-driven social movements, the boundaries of the BreadTube community remain “fuzzy and fluid” and should be considered “a moving target” (Donk et al., 2004, p. 2). An article published in The New York Times, which referred to “BreadTube” (Roose, 2019), ignited critiques about the “cliquish” nature of the community and gatekeeping according to social identity, political beliefs, topics, and aesthetics (e.g., Peterson, 2019a; Peterson, 2019b; Watanabe, 2019b; Wilkins, 2019a; [quangli], 2019). In particular, many have criticized BreadTube for being overwhelmingly White and even being hostile or unwelcoming to BreadTubers of color (e.g., Peterson, 2019b; Wilkins, 2019b; [Angie Speaks], 2019). Thus, while the “BreadTube community” represents a roughly cohesive community in terms of overarching ideology and interest in the visibility war, it is also subject to the same systems of power that animate social life in the U.S. more broadly. The multiplicity of “sub-ideologies,” as well as the very real prejudices of the community, fragment BreadTube and result in different interpretations of the causes and issues for which it stands. 
Data Collection 
To investigate these case studies, I used a multisite strategy (Clarke et al., 2018), drawing on online discourse materials and interview data. Such an approach acknowledges the mobility of empirical objects of study as they move about various social and virtual spaces (Caliandro & Gandini, 2016). In other words, in this research, I followed my object of study—knowledge of algorithms—across the web. This strategy led me to collect data from Facebook groups, Reddit, forums, Twitter, and YouTube (see Table 1). It also allowed for comparative analyses across genres of data (Clarke et al., 2018), thereby providing multiple lines of sight from which to observe critical algorithmic literacy practices. In addition to collecting data from online discourse materials, I conducted interviews with Instagram influencers and BreadTubers.
My dataset for the Instagram influencers case study consisted of more than 500 files. My dataset for the BreadTube case study consisted of more than 550 data files. These files consisted primarily of screenshots of social media posts and comments; PDFs of blog posts, news articles, and webpages; video files and transcripts of YouTube videos; and images. I detail my multi-faceted approach in the following sections. 

Table 1. Sites and Dates of Data Collection 
Case Study: Instagram influencers 
Main Sites: Facebook groups; r/Instagram; r/InstagramMarketing; BlackHatWorld; MP Social 
Data Collection: September 2017 – May 2020 
Case Study: BreadTube 
Main Sites: r/BreadTube; Facebook group; BreadTube Discord; YouTube; Twitter 
Data Collection: January 2020 – May 2020 

Online Discourse Materials 
For the Instagram influencers case, I primarily gathered data from discussions among influencers about algorithms within 13 Facebook groups for aspiring and established influencers. These groups ranged in size from just over one thousand members to tens of thousands. I identified most of these groups by searching Facebook for keywords like “Instagram” and “influencers.” I identified a few groups as a result of someone referencing and linking to the group from outside of Facebook (e.g., on the admin’s website). I additionally encountered and joined a few relevant groups suggested by Facebook, after initially joining a handful of groups. As a complement to these Facebook groups, I collected data from the subreddits /r/Instagram, /r/InstagramMarketing, and /r/socialmedia; from the Instagram subforum on BlackHatWorld, a forum for black-hat digital marketing techniques and services; and from MP Social, a forum for "Shar[ing] advice and ideas on how to succeed on social media” (MP Social, n.d.). From September 2017 to December 2017, I visited Facebook groups, Reddit, and forums at least once a week and collected data. After the initial intense data collection period, I continued to browse Facebook groups, Reddit, and forums more sporadically. In these venues, I occasionally browsed day-to-day discussions, but mainly actively searched them for combinations of relevant keywords (e.g., algorithms, bias, bots), gleaned iteratively, in order to identify pertinent discussions. I additionally “follow[ed] leads that emerge[d]” (Charmaz, 2014, p. 25) by consulting materials shared in Facebook groups, Reddit, and forums (e.g., guides, videos, articles, etc.), and by conducting targeted searches via various platforms (e.g., Google, Twitter, and YouTube) for unfamiliar terms, events, and people, as surfaced in interviews and/or online discussions. For the BreadTube case, I collected data primarily from r/BreadTube. However, I also collected data from one Facebook group for BreadTube fans, which I found in March 2020 by searching for “LeftTube” on Facebook. I additionally consulted r/BreadTube’s Discord channel for relevant discussions, which I found at the end of February 2020 via r/BreadTube. To collect data, I browsed and conducted targeted keyword searches (usually for “algorithm” and/or “YouTube”) in the subreddit about once every few days between February 2020 and May 2020. I similarly browsed and searched the BreadTube Facebook group and Discord channel about once a week between March 2020 and May 2020. Occasionally, I searched the subreddit, Facebook group, and Discord channel for posts and comments mentioning people, concepts, tools, etc., as they emerged in interviews and analyses.
Alongside data collection from the Facebook group and r/BreadTube, I spent significant time watching BreadTube content on YouTube. These videos helped illuminate discourses within the community, particularly those related to the visibility war and/or YouTube’s algorithms. I primarily watched videos I encountered in r/BreadTube, the Facebook group, and/or interviews that appeared to be relevant to my investigations. I occasionally visited specific BreadTube creators’ channels, as they were mentioned by subjects, and browsed their uploads for relevant content. Often BreadTube creators referenced other BreadTube creators and/or relevant content in their videos, which I watched. I also watched relevant videos as recommended by YouTube. In addition to consuming BreadTube content, as in the Instagram influencers case, I followed up on unfamiliar, but seemingly relevant events, people, terms, etc. on different platforms (e.g., Google, Twitter, and YouTube). In particular, I searched Twitter for relevant discussions tweeted by the BreadTube community about once every other week during the data collection period. In this case, I either reviewed specific BreadTubers’ timelines, or searched for “BreadTube” and/or more specific keywords emerging from my data. 
Interviews 
I complemented data from online discourse materials with data from semi-structured interviews with influencers (n = 17) and BreadTubers (n = 18) (see Appendix C for a general description of interviewees). All but one of the interviews with influencers took place between May 2018 and November 2018. One additional interview was conducted in March 2020, in order to clarify some findings. Interviews lasted an hour on average. Interviews with BreadTubers took place between February 2020 and April 2020, and lasted an hour and fifteen minutes on average. For both cases, the vast majority of interviews were conducted via video conference, though a handful were conducted in person. Interviews were recorded and transcribed. The influencers case study produced nearly 17 hours of audio and 386 pages of transcripts (see Table 2).² The BreadTube case study produced nearly 22 hours of audio and 426 pages of transcripts (see Table 2).³ 

Table 2. Description of interview data 
                                     Instagram Influencers   BreadTube 
Total hours of audio                 16 hours 42 minutes     21 hours 40 minutes 
Average duration of interviews       1 hour 3 minutes        1 hour 16 minutes 
Total number of transcript pages     386 pages               426 pages 
Average number of transcript pages   24.1 pages              23.7 pages 
Total word count of transcripts      167,973 words           194,717 words 
Average word count of transcripts    10,498.3 words          10,817.6 words 

² Due to an error, one interview was not recorded or transcribed. 
³ At a participant’s request, one interview was conducted via text chat on Discord. The transcript for this chat is included in the page and word counts. 

Recruitment 
Influencers were recruited mainly by contacting individuals who had produced a blog post or YouTube video about Instagram's algorithms. With this strategy, I reached out to influencers via email, explaining I had come across their blog post/video and was interested in what they had to say. In the email, I included a link to an informed consent document hosted on my personal website, which detailed the study background, procedures, and participants’ rights. Additional participants were recruited via snowball sampling. This recruitment strategy produced participants from all around the world, though the majority of participants were from the United States.
For the BreadTube case study, I primarily recruited participants by contacting Reddit users who had published posts or comments in r/BreadTube addressing social media algorithms. In this, I reached out to BreadTubers via direct message, using the same approach I used with influencer interview participants, referencing and linking to the post/comment of theirs I had found, and linking to an informed consent document. I also asked participants recruited in this manner to recommend others for participation, which resulted in additional interviews. While I primarily interviewed American BreadTubers, I interviewed four international BreadTubers as well.

Interview Questions and Techniques

Interviews were structured around the project’s research questions, informed by data collected from online discourse materials, and probed for practices. In particular, interviews focused on the practices by which individuals construct and abstract insight about algorithms, and how they understood themselves and their social worlds in relation to algorithms, particularly with regard to the experience of algorithmic power. Interviews provided opportunities to follow up on themes identified in online materials to analyze the extent to which these featured in individuals’ reflections on algorithms, as well as to add additional layers to analyses based on individuals’ firsthand reflections on their experiences. Interviews additionally allowed for comparative analyses across participants, which helped surface nuance in perspectives and experiences. Interview protocols (see Appendix B) for both studies were refined through pilot interviews.

Data Analysis

I employed a constructivist grounded approach (Charmaz, 2014) for analysis. While I did not formally follow grounded theory in my methods, I applied many of its principles. Grounded theory is an inductive approach in which the researcher draws insight by sticking close to data and using comparative methods of analysis (Charmaz, 2014). Constructivist grounded theory is rooted in interpretivist traditions, presuming the indeterminacy of reality and understanding meaning to be constructed dynamically through individuals’ actions and interactions with others (Charmaz, 2014). In my analyses, I used an iterative approach. For both cases, I started by collecting and coding online materials, particularly attending to prominent discourses I discerned in these. I then explored these discourses in interviews and refined and expanded coding as I learned more. In analyzing interview data, I focused on coding actions and beliefs within a participant’s complete thought, as communicated in response to a question. After I finished initial coding of interviews, I conducted focused coding by further condensing and refining frequent or significant codes I had identified, which laid the foundation for theoretical explanations (Charmaz, 2014). To further understand connections between codes produced in focused coding, I sketched and organized my thoughts visually using a (digital) whiteboard (see Appendix E). Whiteboarding resulted in some additional refinement of codes. At this point, I employed comparative analyses of the two cases to consider similarities and differences between the two social worlds with regard to learning and sensemaking practices. This work helped further hone theoretical concepts. As part of my grounded approach, I used some techniques of situational analysis (SA) for analyzing data (Clarke et al., 2018).
SA is an extension of grounded theory (GT), which accounts for paradigm shifts wrought by the interpretive turn (Clarke et al., 2018). Whereas GT focuses on basic social practices and processes, SA focuses on relationality and shifts the unit of analysis to the situation, broadly construed (Clarke et al., 2018). SA employs iterative mapping as its key analytical technique. In my analysis, I used situational and relational maps as tools for understanding the broader ecology of actors and discourses and connections between them (see Appendix E). Situational maps are used to identify “the major elements in the situation” (Clarke et al., 2018, p. 104), while relational maps assist the researcher in understanding “the nature of relations among the different elements on the maps” (Clarke et al., 2018, p. 105). Early in my analyses for both cases, I drew situational maps to take stock of all the actors and discourses I had encountered in interviews and online discourse materials. This allowed me to think about knowledge building around algorithms as a “situation.” Drawing relational maps as an extension of my situational maps helped me to organize my thoughts and reflect on the relationships between some actors and discourses therein.

Ethical Considerations

As more researchers have set their sights on the Internet as a new arena of social life, ethical principles guiding online data collection have evolved (Buchanan & Zimmer, 2016). Whereas initially many researchers viewed any public-facing materials online as beyond the scope of human subject protection and ethical considerations, more recent work has questioned this de facto rule (Buchanan & Zimmer, 2016). As with research ethics generally, multiple frameworks exist for conducting ethical Internet research (e.g., Utilitarianism, Deontology, Feminist Ethics) (franzke et al., 2020). The 2019 AoIR guidelines (Internet Research Ethics 3.0) adopt ethical pluralism and encourage researchers to engage in critical reflection throughout all research stages (franzke et al., 2020). This approach acknowledges the primacy of context for case-by-case assessments, which involves an “ongoing negotiation between researchers and their data sources or human subjects” (Buchanan & Zimmer, 2016, n.p.). This “process approach” rejects the notion that “one size fits all” and, rather, emphasizes continuous reflection that informs “judgment calls” (franzke et al., 2020). Importantly, as exemplified in the shift in perspectives around public-facing materials online, ethical principles of online research are constantly evolving. In this dissertation, the evolving nature of ethical principles matters because I collected data over the course of two and a half years. During this span of time, many notable events occurred that provoked shifts in methods. These included, for example, fallout from the Cambridge Analytica scandal and the implementation of the General Data Protection Regulation. AoIR also released its latest ethical guidelines in 2019. Developments like these have led to broader awareness of and changes in how people think about data privacy, which warrant corresponding changes to our approaches. In what follows, I capture my reflections on methodological choices and my resultant approach to data collection for each case study. In this, I highlight key differences between the two cases, which shaped my methods.
Influencers

As mentioned, for the Instagram influencers case study, I joined several Facebook groups in order to observe discussions among influencers about Instagram’s algorithms. When I began data collection, Facebook had three privacy settings for groups: Public, Closed, and Secret. The “Public” designation meant that any Facebook user could view the description of the group, posts, and list of members. The “Closed” designation meant that any Facebook user could view the description of the group and list of members, but only those who requested and were approved to join a closed group were able to view posts. The “Secret” designation meant that groups were completely invisible to non-members. In August 2019, Facebook simplified these settings to “Private” and “Public” (Lee, 2019). Closed groups are now “Private” groups; “Public” groups remain “Public” groups. While there is currently no “Secret” category for groups, privacy settings for groups can be adjusted so that they are not visible to nonmembers (Facebook, n.d.). When I began data collection, Closed group admins could screen potential new members by requesting that they answer one or more questions. After reviewing responses, admins would then approve or deny the request to join the group. This functionality remains for Private groups. Before the change in privacy settings, all of the Facebook groups I observed were either Public or Closed. After the change in privacy settings, I joined one Facebook group that was Private. I found this group after a Reddit user linked to the group in /r/InstagramMarketing. This group is visible in Facebook search and other places on the platform. For the Closed/Private Facebook groups I joined, when I encountered screening questions, I answered these honestly, disclosing that I was a researcher and graduate student researching influencers and algorithms. As I exemplify in Figures 10 and 11, after more thought and reflection, when I joined the last group consulted for my research, I slightly altered how I communicated my intention for joining the group to make it more explicit. In responding to my request to join their groups, admins would have been able to see details from the “About” section of my public Facebook profile, which I ensured included my status as a researcher and graduate student at Michigan State University. None of my requests to join groups were denied, nor did I receive any follow-up inquiries.

Figure 10. Example of an early response to a Facebook Group’s “Request to Join” Questions

Figure 11. Later response to a Facebook Group’s “Request to Join” Questions

I chose not to announce my presence in the Facebook groups (closed or public), as doing so would be nearly impossible without repeatedly posting to ensure the awareness of new members and could be considered disruptive to group norms and explicit rules for appropriate content. Most of the groups I joined had many thousands or tens of thousands of members (one has nearly 60,000 members to date), with members joining and leaving the group nearly constantly. As such, it did not seem practical or manageable to continually announce my presence and purpose in these groups, nor did it seem appropriate to ask admins to do work on my behalf to flag my presence. Such activities could easily amount to “research spam” (Buchanan & Zimmer, 2016).
While I chose to “lurk,” rather than draw attention to myself, as Spicker (2011) points out, such covert observation does not necessarily imply deception, which “occurs where the nature of a researcher’s action is misrepresented to the research subject” (p. 119). Further, remaining unobserved eliminated concerns about changes in behavior as a result of being observed, i.e., the Hawthorne effect (Spicker, 2011). In terms of perceived privacy, as mentioned, most groups I joined had many thousands of members, which suggests that “group members could ‘reasonably expect to be observed by strangers’” (Townsend & Wallace, 2016, p. 8). Further, most of the Closed/Private groups I encountered seemed designed to attract a large number of members. Many groups were advertised off-Facebook on public webpages. As will later be discussed, Facebook groups like those I observed are intended to be sites for communities of practice among influencers, but are also part of the knowledge infrastructure built by Instagram “gurus” seeking broader audiences. Thus, most group members do not know each other personally before joining these groups, and admins often intend for groups to gain visibility and grow. Setting a group to “Closed” or “Private” thus seems to be primarily a means of maintaining quality control—i.e., minimizing the possibility of spam-like posts by reserving the right to remove and block those who do not abide by group rules. Beyond privacy, the potential harm that disclosing information could pose for subjects represents an important consideration in Internet research ethics (Buchanan & Zimmer, 2016). In this, harm depends largely on the sensitivity of the information. The purpose of the groups observed for this study and the topics discussed therein—namely, matters related to growing and maintaining Instagram accounts—are discussed widely and publicly outside of the groups, frequently by influencers on their personal blogs/vlogs. Indeed, over the course of my investigations, I observed instances of group members mentioning group discussions in public blog posts. All of this suggests that group discussions, generally, would not be considered sensitive to those participating in them (Whiteman, 2012). In terms of vulnerability of users, members of these groups would not, categorically, be considered especially vulnerable or at risk. That the groups are for (aspiring) influencers, who are known to often come from privileged social groups (Duffy, 2017), and that the topics of discussion appear to be relatively innocuous (from group members’ standpoint, as far as I can tell), suggests that the members of the groups do not require any special protections. Nevertheless, in dealing with closed Facebook groups, I did feel it necessary to institute certain safeguards (Spicker, 2011) to protect the privacy of individuals within the groups I observed, as well as to do my best to otherwise protect them from any potential (though unlikely) harmful side effects of the research. As such, out of an abundance of caution, I altered all quotations (Markham, 2012) drawn from Closed/Private groups so that they would not be searchable (on the off chance a reader happens to be a member of one of the groups). In terms of using quotes for a purpose not intended by the speaker, I tried to draw quotes from public-facing blog posts as much as possible. I found that bloggers frequently would reiterate the same ideas and sentiments as members of the groups I observed.
Such public blog posts have publicity expectations (Richterich, 2018) and, thus, are quoted verbatim and cited. There are only two exceptions to this. First, I did not cite any public communications that reference behavior that violates Instagram’s Terms of Use (e.g., using bots to automate engagement, buying/selling followers, etc.). Second, I did not cite public communications produced by the admins of Closed/Private Facebook groups from which I collected data. In order to avoid directing a flood of outsiders to these Facebook groups, I have assigned Closed/Private Facebook groups pseudonyms. Additionally, when I discussed the admins of Facebook groups, including their communications and activities outside of the Facebook groups, I referred to them with pseudonyms and altered their statements.

BreadTube

As discussed above, most of influencers’ discussions of Instagram’s algorithms would largely not be considered sensitive by outside observers or influencers. By contrast, political discussions, such as those occurring within the BreadTube community, can be inflammatory and divisive, which makes people hesitant to share their views widely and vociferously offline (Noelle-Neumann, 1993). Discussions among BreadTubers often touch on highly sensitive topics, such as racism, sexism, and homophobia. Further, political discussions are often more personal than, say, marketing strategy, as people’s personal experiences often inform ideological beliefs, making these beliefs an essential component of who they are. In many cases, to engage in political discussion, people must reveal details about themselves that could put them at risk. For example, a discussion of transgender equality may prompt a transgender person to speak about their firsthand experiences, which could make them the target of harassment and violence. Further, those speaking about their experiences online in relative anonymity—as on Reddit—may not be “out” in other spheres of their life. Thus, for the above reasons, the BreadTube case study differs significantly from the influencers case study in terms of sensitivity of material. The BreadTube case study also differs from the influencers case study in that, while I drew heavily on Facebook for the latter, I drew heavily on Reddit for the former. This difference stems from following the communities to their dominant online hubs. However, Facebook and Reddit differ quite significantly, particularly in the fact that Reddit users, generally, remain anonymous, while Facebook users, generally, do not. This difference in anonymity matters when it comes to sharing potentially sensitive information. Those granted a degree of anonymity may feel empowered to share more about themselves than they would when personally identifiable. Additionally, although Reddit discussions are public, users may not take precautions to ensure that they do not share information across the site that could personally identify them in aggregate. While BreadTubers on Reddit, like influencers in Facebook groups, “could ‘reasonably expect to be observed by strangers’” (Townsend & Wallace, 2016, p. 8), they may not necessarily expect their statements to be used by strangers to determine who they are “in real life.” Many BreadTubers, by nature of the community’s political commitments, are members of or allies to the LGBTQ+ community. As such, BreadTubers are more likely to necessitate special protections (franzke et al., 2020) than influencers, as a population.
This is particularly the case given the above considerations that 1) BreadTube discussions often address sensitive topics, and 2) BreadTubers may share their personal experiences when engaging in these discussions. Relatedly, BreadTubers occasionally discuss acts of civil disobedience, which could open them up to legal sanction if their identities were to be unintentionally revealed. Beyond civil disobedience, given the United States’ history of surveilling, infiltrating, and otherwise disrupting leftist movements and figures (e.g., Davis, 1992; Theoharis, 1978), it is reasonable to expect that BreadTubers may feel a sense of vulnerability for merely openly embracing leftist ideology. For all of these reasons, I nearly exclusively collected data from public-facing sites. The only exception to this is one Facebook group I joined. For this, I contacted the admin of the group and gained their support for observing and collecting data from the group. When observing discussions on Reddit, I first contacted moderators of r/BreadTube to gain their feedback and support for data collection. Given publicity expectations similar to those of influencers (Richterich, 2018), more visible BreadTuber content creators are quoted verbatim and cited. However, I have altered statements (Markham, 2012) by community members out of an abundance of caution. By altering statements, I aimed to prevent them from being searched for online, which could lead to individuals being personally identified.

CHAPTER 4: Assembling the Rules to the Visibility Game

Instagram influencers’ practices for learning about algorithms roughly resemble those associated with developing other digital skills. As I will show in this chapter, the key means by which people learn about other digital technologies—learning by doing (Blank & Dutton, 2012; Dutton & Shepherd, 2006; Selwyn et al., 2006; van Dijk & van Deursen, 2014), from others (van Dijk & van Deursen, 2014; van Dijk, 2005; Warschauer, 2003), from media (van Dijk & van Deursen, 2014), and from training (Selwyn et al., 2006; Warschauer, 2003)—are also the means by which influencers learn about platform algorithms. What sets influencers’ learning practices apart from other kinds of digital literacy is the role platforms play throughout these practices. In this chapter, I use the Instagram influencers case to introduce the concept of platform epistemology, which captures how platforms structure how, when, and to what extent users learn about algorithms. As part of sociotechnical assemblages of actors and socioeconomic structures, platforms prescribe the conditions under which knowledge about algorithms may be constructed and legitimized. Platform epistemology involves five components: 1) experiential learning afforded by platforms’ design choices; 2) the configuration of certain users by platforms as attractive to the kind of information sources that address algorithms; 3) the creation of a market for information about algorithms, which results from platforms making relevant information scarce; 4) the positioning of the visibility game as a mechanism that grants more visible influencers epistemic authority; and 5) the consolidation of platforms’ epistemic authority, which results from the information asymmetry platforms establish between themselves and users.
In explaining platform epistemology through Instagram influencers’ experiences, I will further demonstrate that, just as playing the visibility game demands time and money that create an uneven playing field for influencers (Duffy, 2017), so, too, does learning about algorithms. Those with more social advantages benefit from greater time and resources that afford greater access to information about algorithms. In this way, learning about algorithms is entangled with the parallel pursuits of visibility and capital (social, cultural, and economic). Learning is further shaped by the validation of information by way of success in the visibility game, which is not equally distributed. Influencers perceive those who demonstrate success in the visibility game (e.g., those who have a large following, or a verified account) as more knowledgeable due to their success. In turn, being perceived as an authoritative source of information about algorithms supports “guru” influencers in further gaining access to financial rewards by monetizing their knowledge. Platforms play an integral role in gurus’ ability to monetize their knowledge by making information about algorithms scarce and, so, creating a demand for it. Moreover, platforms reward those who achieve success in the visibility game with special access to insider information, which reinforces the wealth of knowledge from which influencers from more privileged backgrounds benefit. The visibility game, thus, becomes a self-fulfilling process in reproducing socioeconomic hierarchies by consequence of the interdependency of knowledge, visibility, and capital. In short, the reproduction of offline inequalities in status hierarchies of digital influence shapes the bounding of legitimated information about algorithms among influencers. Importantly, as I will argue, platforms like Instagram maintain epistemic authority as a result of the information asymmetry they establish between themselves and users. By sharing few details about its algorithms, Instagram fosters uncertainty about what is and can be known about the platform’s algorithms and renders those outside of the company reliant on the company as the main adjudicator of knowledge about its algorithms. I will demonstrate that Instagram can and does mobilize its epistemic authority to refute knowledge claims about its algorithms, such that the company engenders confusion and self-doubt among those whose experiences contradict platform statements. Those whose experiences contradict Instagram’s statements also risk being positioned as irrational if they speak out about their experiences. I introduce the term black-box gaslighting to describe this phenomenon. Black-box gaslighting is both a cause and effect of the power asymmetry between platforms and users. In what follows, I will demonstrate the above points through the description of influencers’ learning practices. First, I will describe how influencers learn via observing and reflecting on their experiences with Instagram’s algorithms, as well as conducting more systematic empirical investigations. Then, I will describe how influencers learn through developing their networks online and via various offline events, which affords information exchanges. Part of networking also involves the occasional connection to platform representatives, who can share exclusive insider information about algorithms. I will also explain how much of influencers’ learning occurs within communities of practice.
Next, I will show that although Instagram’s public communications are seen as authoritative, for influencers, they typically provide too few details to be useful. Thus, influencers tend to seek out (and incidentally encounter) relevant information about algorithms via news reporting, blogs, YouTube videos, and other media. Finally, training—by way of professionalization gatherings and online courses, guides, and other services—offers a key source of insight about algorithms.

Learning by Doing

In her YouTube video, titled “Instagram Algorithm 2020: EXPLAINED,” Katie Steckly mused, “It’s really interesting what you can learn by observing, and experimenting, and paying attention to what Instagram tells you.” In reality, as Steckly knew well, Instagram offers limited information about its algorithms and updates them constantly, which influencers say makes it difficult to “prove” various claims about the platform’s algorithms. Consequently, influencers regularly express skepticism toward claims made about Instagram’s algorithms, demanding “factual” and “incontrovertible” “evidence.” Instagram’s secrecy and influencers’ skepticism motivate them to learn about Instagram’s algorithms, as Steckly said, by “observing” and “experimenting” on the platform, and through what Instagram “tells” influencers via visibility metrics. Such experiential learning—or “learning by doing,” as Jan A.G.M. van Dijk and Alexander J.A.M. van Deursen call it (van Dijk & van Deursen, 2014)—is a key element of platform epistemology, as it is shaped by Instagram’s design choices that enable and constrain learning in this manner. Like other platforms, Instagram provides influencers access to backend analytics, which allow them to study, and sometimes systematically investigate, the platform’s algorithms. In other words, Instagram’s suite of analytics, as well as visible metrics (e.g., “likes,” comments) in the public interface, afford learning via observation and empirical investigation. Perhaps because experiential learning is a relatively accessible means of building insight about algorithms, it is a widely used learning practice among Instagram influencers—as one interviewee put it: “We all learn something by doing something.” In the next two sections, I will explain what experiential learning looks like in its two essential forms: 1) observation and reflection and 2) empirical investigations.

Observation and Reflection

Influencers routinely discuss insights they have gleaned by paying attention to the behavior and composition of their feeds, including which posts become highly visible (or not) and the characteristics these posts share, particularly with regard to metrics. Metrics like how many “likes” or comments a post receives provide influencers with empirical data about how visible their post was and, thus, allow them to get a sense of what factors matter for algorithmic ranking on Instagram. Observational learning also occurs naturally as influencers (and other users) identify patterns in the kinds of content displayed in their own feeds. Algorithmic curation reveals itself to users via the patterns in the users’ behaviors that it reflects back to them. Observational learning depends upon careful attention to metrics over time. In her video, Steckly offered an example of this. She recounts how she had recently shared a post that was different from the kind of content she normally shared, but, as Instagram informed her, was performing 95% better than her average post.
Steckly explained how this led her to conclude Instagram’s main feed ranking algorithm had changed:

I found this super fascinating because this post had way less likes than my average post. The photo of me, as you can see here, is kind of a silhouette, more artistic photo, so it wasn’t instantly recognizable as me, which is why I think it got less likes. However, it had way more shares than my posts ever get. So, this is telling me that Instagram is valuing shares way more than likes, because they saw a post that had way less likes than I normally get, and way more shares and they told me that it was performing 95% better than average. So, clearly, likes are not weighted as heavily in the new Instagram algorithm as shares are. So I think if you wanna make the most of the Instagram algorithm, then you should listen to the clues that it gives you through little notices like this and try to make the most of it. (Steckly, 2020, n.p.)

As Steckly demonstrated here, tracking engagement over time helps signal changes to algorithmic ranking. In an interview, Christina offered another example of the importance of paying attention to these kinds of metrics for learning about Instagram’s algorithms:

KC: And then as the algorithm gets updated, how do you learn about those updates?

Christina: I wish there was an easier way, because you don't [learn] necessarily, at first. You... Oftentimes what happens is we start seeing changes on our end, in terms of our own analytics or our own user experience and our own engagement. I think that oftentimes, is one of the first indicators that something has changed or something that at least gets people thinking, "Hey, did something change?" Because you might see... Your Stories might be performing a certain way, and then the next day after updating your app, your Story views are all cut in half. Or something might happen, where you're like, “Wait, okay something changed, because this is now the normal...What was normal is now no longer a thing.”

As Christina explained, influencers learn of changes to Instagram’s algorithms as a result of constant assessment of “normality” in analytics. In fact, in interviews, many influencers talked about initially learning about Instagram’s transition to algorithmic ranking as a result of noticing drops in engagement. For example, Sofia and I had the following exchange:

KC: So, how did you first learn about the algorithm?

Sofia: Well, it happened and we noticed. We noticed right away.

KC: Oh, so did you see when Instagram made their announcement that they were transitioning to this algorithmic feed, did you see that post? Or was it just through talking to other people?

Sofia: No. It was just when it happened, when the change happened, we...everything dropped. So we were getting a certain amount of likes, and then we didn't get their likes anymore because people being…are worse. So, I don't know. If before I had 500 likes, the day the algorithm started I started having like 150 suddenly and was like, "Whoa, what's happening?" And when I realized, "Oh my god. Nobody's seeing my stuff anymore." I didn't know, but a lot less people are seeing my stuff…

Observing levels of engagement over time, Sofia, like other influencers, was alerted to the introduction of algorithmic ranking on the platform. Similar to tracking metrics, observing aberrations in the composition of their feeds can help influencers learn about Instagram’s algorithms.
As an example, in an interview, Cameron described his experience after Instagram’s transition to an algorithmic feed:

…I was like, "Okay, something's happening here. Why are some people's photos showing up and some people's not, and why am I seeing these people's photos and why am I not seeing these people's photos? Okay, it's because I haven't interacted with them in a long time, and I pretty religiously watch this person's Instagram stories, and I like their photos and I have post notifications on for these people."

Like Cameron, many influencers learn about Instagram’s algorithms—in this case the transition to algorithmic ranking on the platform—by observing the behavior of their feeds and comparing it to past experiences. Observing patterns allows influencers to understand the “normal” state of affairs, but also to learn from any disruptions to the norm. Through this practice, influencers can piece together details about the operational logics of Instagram’s algorithms. Cameron’s statement also demonstrates how influencers can draw on knowledge of their own behaviors, interests, and relationships to see themselves reflected back in algorithmically curated content in their feeds. Because algorithms are designed to prioritize the kind of content and accounts users routinely interact with, influencers can make connections between the degree to which they interact with specific users and kinds of posts and the frequency of their appearance in their feeds. Making these connections offers influencers information about how Instagram’s algorithms function. Beyond algorithmic ranking, influencers often discuss learning the boundaries of acceptable and unacceptable behavior, as algorithmically enforced, through observation. For example, Emily told me about a time when she discovered a series of accounts she liked, attempted to follow them and like their posts, and suddenly found her account temporarily restricted from performing certain actions:

I just kept following more people and then I was liking things. Then all of the sudden I went to like something, and you know how the heart fills in when you like it? It filled in and then immediately emptied. It was like I clicked it and nothing was happening. I was like, "This is weird." I logged out, logged back in. Still doing it, still doing it. Then I got a message being like, "You violated Instagram's terms of service. Blah, blah, blah." Yeah. I had to wait 24 hours before I could do anything on Instagram. I could scroll through, but I couldn't like anything; I couldn't leave a comment; I couldn't follow anyone.

Here, Emily drew connections between her behavior, limits placed on her account, and a message from Instagram to piece together that Instagram monitors user activity via automated processes and institutes certain parameters that signal unacceptable behavior. In this, the immediate feedback of being blocked and subsequently receiving a message from Instagram allowed Emily to reflect on her activity directly preceding the block to pinpoint the exact behavior that attracted algorithmic attention. As the above discussion exemplifies, Instagram’s design decisions afford opportunities for learning about the platform’s algorithms. “Data-driven” patterns synthesized through observation and reflection on things like content and users displayed in feeds, metrics, and platform messages guide influencers in building insight about Instagram’s algorithms. Any changes to the interface have the potential to similarly afford, or constrain, learning.
For example, the introduction of customization features like a button to indicate “I’m not interested in this” can signal to users that their feed is algorithmically curated (DeVito et al., 2018). Similarly, if Instagram decides to hide like counts on posts, as it has been experimenting with (Constine, 2019), this could eliminate a crucial data point used for learning about the platform’s algorithms. In this way, platforms have the capacity to delimit what users can and cannot learn through observation and reflection.

Empirical Investigations

The metrics Instagram makes available to users can be leveraged as empirical data in more systematic and deliberate investigations beyond mere observation and reflection. Influencers commonly conduct investigations that loosely follow the scientific method by iteratively formulating hypotheses and testing them. In this, influencers rely on metrics—for example, impressions or reach—as operationalizations of visibility. Such metrics, made available in Instagram Insights to those with business or creator accounts, enable influencers to gauge the visibility of individual posts in order to assess which post characteristics matter (and when) for algorithmic ranking on the platform.

Figure 12. Instagram Insights post metrics

Figure 13. Instagram Insights stories metrics

Empirical investigations are iterative and ongoing, as Cameron suggested in an interview:

And so, I was piecing together these tiny little bits and bobs. And I would say, right when this happened, this change, it was much easier to figure out, "Okay, if I get more people to turn on post notifications, then they'll see my photos. Okay, cool." And then the next step was like, "Okay, if I get more likes within the first little bit, then I'll get more people to see my photos. Okay, if I get more comments. Okay, if I get more this, if I do more this, if I do more this... " And it kinda just kept being this evolution of little tiny tricks that I would either figure out myself, or hear from other influencers…

Although hypothesis testing can be performed without extensive forethought, as Cameron’s statement suggests, in many cases, influencers conduct more methodical investigations. For example, in an interview, Mia described an audit she conducted:

What I did is I started testing three images side-by-side. They were all exactly the same image. And I would post different hashtags on each one of those three. And I basically did this for a month. And then I audited all of the posts. And I said, "Okay, what made this get more likes than the other one, than the one?" And I find that it was more localized hashtags. So if I did, #Sydney, #Melbourne, #Melbournelife, that that was attracting the most likes. And when I audited all of those likes, I measured up who they were. So were they a small business or were they an individual actually searching for my hashtag?

Here, Mia set up an experiment to compare how different hashtags impacted engagement, as a proxy for the visibility of her posts. Such “A/B testing” is quite common among influencers, who treat it as a means of getting close to a ground truth about Instagram’s algorithms. The metrics Instagram makes available prescribe the easiest and fastest empirical investigations that can be conducted. When influencers want to understand something about Instagram’s algorithms that “likes,” impressions, or other provided metrics cannot help them with, they must collect their own data.
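To give a rough, purely illustrative sense of what collecting and analyzing one’s own data can involve, the sketch below fits a simple linear model to invented observations of how far down the feed a post appears versus the hours elapsed since it was posted, and reports an R-squared value. The numbers, variable names, and the choice of a linear fit are hypothetical assumptions for illustration only; they do not reproduce any participant’s data or method.

# Hypothetical sketch only: invented data relating hours since posting to a
# post's observed position in the feed. Not any participant's actual data.
from statistics import mean

# Each observation: (hours elapsed since the post was published,
#                    position at which the post appeared in the feed)
observations = [(1, 3), (4, 9), (8, 15), (12, 22), (24, 38), (48, 61), (72, 80)]

hours = [h for h, _ in observations]
positions = [p for _, p in observations]
mean_h, mean_p = mean(hours), mean(positions)

# Ordinary least squares fit of feed position on hours elapsed
slope = (sum((h - mean_h) * (p - mean_p) for h, p in observations)
         / sum((h - mean_h) ** 2 for h in hours))
intercept = mean_p - slope * mean_h

# R-squared: share of variance in feed position explained by hours elapsed
predicted = [intercept + slope * h for h in hours]
ss_res = sum((p - y) ** 2 for p, y in zip(positions, predicted))
ss_tot = sum((p - mean_p) ** 2 for p in positions)
r_squared = 1 - ss_res / ss_tot

print(f"position ≈ {intercept:.1f} + {slope:.2f} x hours, R-squared = {r_squared:.2f}")

Even a toy analysis like this presupposes repeated, manual logging of one’s own feed, since data of this kind is not provided through Instagram Insights.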
This kind of empirical investigation requires the time and/or know-how to answer certain questions or test certain hypotheses. As an example of investigations that go beyond Instagram-provided metrics, Chelsea, an interviewee, gathered data over the course of a year and conducted a series of statistical analyses on her dataset. One research question she pursued concerned the relationship between post placement (i.e., the first post displaying in the main feed through the 100th post) and aggregate hours elapsed since a post was published. To address this question, she analyzed and graphed data she collected herself on her own account, even going so far as to provide an R-squared value in reporting her results in a YouTube video. Clearly, conducting an empirical investigation like this requires not only time and resources, but specialized skills. Because Instagram does not offer the kind of data at the heart of her questions, she had to look beyond Instagram Insights and visible metrics displayed in the public interface for drawing conclusions. Though Chelsea was able to synthesize important insights about the operational logics of Instagram’s algorithms, many influencers will not have the time or skills to conduct these kinds of empirical investigations. The easiest and most time-efficient empirical investigations are those that stick close to the metrics Instagram provides to influencers. This evidences another aspect of platform epistemology: the metrics the platform makes available to users alternately enable and constrain which empirical investigations influencers pursue in attempting to understand Instagram’s algorithms. Pursuing questions about Instagram’s algorithms that cannot be fully answered with metrics the platform makes available requires influencers to lean on other resources at their disposal, resources which may not be evenly distributed. In the next section, I will address how influencers learn from others as part of networking activities and communities of practice.

Learning from Others

Learning about algorithms, like digital influence, is rooted in “productive socialization” (Duffy, 2017)—the building of robust social networks. Beyond increasing their visibility (Duffy, 2017), broadening their social networks allows influencers to tap into the wisdom of the crowd and gain broader insight about Instagram’s algorithms. Yet, because influencers’ networking is typically organized around platforms, acquiring information in this way depends upon platforms facilitating influencers’ connections to the “right” people. To the extent that platforms like Instagram and Facebook configure the notion of “friendship” as “an equation geared towards maximizing engagement with the platform” (Bucher, 2018, p. 11), learning-via-networking depends upon influencers effectively performing their relationships with other influencers—particularly, well-connected influencers—on these platforms. Thus, we can see another aspect of platform epistemology in the way that platforms mediate learning via social connections. As Bucher argued, platforms like Facebook do not only facilitate existing social connections, but act as “friendship makers” as their algorithms “participate[] in creating, initiating, maintaining, shaping, and ordering the nature of connections between users and their networks” (Bucher, 2013, p. 489).
Platforms structure the formation of communities of practice among Instagram influencers across platforms by enabling them to find and build connections with each other, which then structures learning about algorithms via information exchanges. Further, through their “algorithmic identities,” influencers’ platform activity follows them across the web, opening them up to or closing them off from opportunities (Cheney-Lippold, 2011) for activating social connections that will help them build insight about algorithms. In the next three sections, I will explore the three key ways that influencers’ productive socialization facilitates learning about algorithms, which are: networking online and at events, making connections at Facebook, and organizing in communities of practice.

Networking

Influencers routinely affirm the importance of networking for learning about Instagram’s algorithms. As Mia told me in an interview, “you forever have to be talking to other people in the industry.” Being an influencer revolves around attending events, whether for the explicit purpose of networking, or as part of influencer marketing campaigns. Influencer marketing events, conferences and conventions, and more informal meetups afford opportunities for learning about Instagram’s algorithms as they bring influencers together in one space, thereby facilitating information exchanges. As an example, in talking about how she first learned about Instagram’s transition to an algorithmic feed, Olivia shared:

…I follow a lot of influencers who are based in London as well. And then in London, I meet them quite often because we go to the same events, so people would just talk about [changes to the platform] all the time, and then we would think what was the best strategy going forward and all that.

Gaining entry to some events means connecting with other influencers online, as Krissy explained in an interview:

In the beginning, [we just connected] through the [Instagram] app. You can direct message people and get to know people. And then there's other little groups that form. You get invited to Facebook groups and cliques and such. And then you start going to events and then you meet other bloggers and just connect with a couple of people.

As Krissy illustrated, to find events to attend, influencers need to first make connections with others, which typically happens on platforms—mainly, Instagram and Facebook. In turn, forming new connections can provide entry to other events that lead to more connections. Although some events influencers attend are casual happy hour-like networking events, many are designed to give the impression of exclusivity and, thus, are not accessible to all. These events are often invite-only and have capped guest lists. This is the case for some professionalization workshops and conferences, but is especially the case for influencer marketing events, wherein brands use influencers to promote their products and services. Frequently, influencers only receive invitations to these events after undergoing a vetting process, whether formally or informally, to demonstrate their status (Duffy, 2017). Making it through the vetting process usually entails achieving a certain level of followers (Duffy, 2017). For example, Maryam, a “foodie” influencer, talked about attending local “meet ups” organized by Zomato, a restaurant aggregator similar to Yelp, where she was offered a free restaurant tasting.
For these events, receiving an invitation depended upon being active on Zomato, having a visible presence that would help the restaurants raise their profile. Maryam also talked about restaurants paying bloggers like her (with an active following) to find other bloggers to attend similar kinds of influencer marketing events—for example, a restaurant launch. Learning by networking at events revolves around platforms. To attend events, influencers must demonstrate their value by achieving success in the visibility game, the rules of which Instagram alone decides. Because the visibility game acts as a vetting mechanism for authorizing event attendance, platforms set the conditions for gaining access to networking opportunities. Further, because they host virtual activity around events, to some extent, platforms also set the conditions for some of the networking that happens at these events. In both cases, visibility on platforms like Instagram aids successful networking, which offers one key means of learning about algorithms. I will now discuss another key networking benefit afforded to those who gain success in the visibility game: connections with platform insiders.

Connections at Facebook

Certain influencers benefit from connections to platform representatives, who offer support to them and often share exclusive insider information about Instagram’s algorithms. To be afforded this opportunity, influencers must demonstrate success in the visibility game. For example, Instagram offers support to high-profile influencers, as well as lesser-known influencers deemed destined to become stars, through its “talent partnerships team” (Carman, 2019). Which influencers become so-called “managed partners” depends in large part on visibility metrics, or “performance indicators,” like engagement rate. In some cases, influencers are alternatively recommended to the team by existing managed partners (Carman, 2019). Most platforms have similar programs that seek to assist “elite users,” since this helps facilitate the growth of an active user base (van Dijck, 2013). In an interview, Cameron offered insight into what gaining introduction to platform representatives can look like and entail. Cameron talked about making contact with Facebook representatives in 2016 at a premiere for a documentary he appeared in about influencers. Cameron explained:

…when we finally finished and we were up in Toronto at an event to premiere the movie, Facebook reached out to me and they said, "Hey Cameron, we would love to test some features on your account, could you test live streaming?" And I was like, "Sure. Could you verify my Facebook so that everyone knows it's me?" They're like, "Yeah, of course." And they did that, then I ended up building this connection with Facebook, and I grew and grew and grew to where I was communicating with them on a pretty consistent basis about things I was learning about the platform, and what they could do to help me, and things that they could tell me about what was going on. So yeah, it was a really cool partnership that I'd built with them, and they were super responsive to helping, and they verified my Instagram account, and then they gave me all kinds of extra user names when I would ask for them, or whenever new features would come out, they'd be on my account first, just 'cause I was communicating with them about what I was doing.
By making connections at Facebook, Cameron was able to open up a channel of communication that afforded information like algorithm updates in the works:

They would do stuff like, “Hey, we're gonna change the way that when you post on Facebook, the algorithm's gonna react to your video.” And so sometimes I would post a video and it would get seen by five people, and I had like 1,000 followers on Facebook at the time. So it would get seen by like five people, and then sometimes I would post and it would get seen by like 20,000 people. And I'd be like, “Okay, well what was the difference there?” They wouldn't really tell me what they were testing, if I'm honest, but they would just test stuff and they'd be like, “Hey, we know you're a small creator, Facebook isn't your main platform, so we're just gonna kind of keep using it to experiment.”

Clearly, this kind of “heads up” about design details and changes to platforms’ algorithms is invaluable and not easily acquired by other means. Cameron was not the only influencer I encountered who benefited from special access to insider information. In a Facebook group I joined, the admin for the group, Meg Robinson (a pseudonym), at one point shared that she was meeting with “the Instagram team” and wanted members of her group to send her questions she could ask on their behalf. After the meeting, she posted a video in the group sharing some answers. In part, she reported:

I just wanted to address a couple of questions I’m seeing over and over in the group, and tell you guys a little bit about some more of the information. We did sign an NDA, so there’s only a certain amount I can disclose now. And so, I will be very careful about some of the information. […] [A] question I’m hearing is: how long does a post last, because everybody’s complaining that they see old posts in their feeds, which, by the way, the [reverse chronological feed] is not coming back. […] So, your post lasts for seven days. And, so, you wanna make sure that you are still engaging with your audience and trying to bring them back to the post in any way that you can. It can still resurface even after the first 24 hours. I know you guys tend to see posts die off after 24 hours, but that’s not the case. So, there is still an algorithm that keeps the posts live for seven days. On another note is if you guys have done anything to your account that has messed it up in the algorithms—loop giveaways, bot followers, bot likes, anything along those lines—you have to try really, really hard to fix your account in the algorithms, but it can be done. It takes 30 days for Instagram to relearn your content and your information and re-shift it in the algorithms. So, don’t give up hope, but it can be done. […]

Through her meeting, Robinson was able to receive answers to burning questions within her group “straight from the horse’s mouth.” Although it is not clear how much of what Robinson shared came directly from Instagram employees and what might be extrapolation, whatever information they did share is more than what the vast majority of users will ever receive, and carries greater credibility than online rumors influencers routinely encounter. By making connections with platform representatives, influencers can negotiate a valuable stream of insight that helps them piece together how Instagram’s algorithms work. Access to those “on the inside” is not equally distributed among influencers. Contact with platform representatives is generally conditional on reaching a certain level of visibility on Instagram.
To platforms like Instagram, achieving a substantial audience and a high engagement rate signals that an influencer can attract and maintain the attention of other users, which makes them valuable. Further, because Instagram dictates the rules of the visibility game and Instagram employees determine which influencers will be chosen as partners, again we see that platforms intervene in influencers’ learning practices and, so, indirectly shape the networked information infrastructure influencers construct. The third key way influencers learn from others is in communities of practice online, which I will now describe.

Communities of Practice

As influencers connect with one another, collaborations and mutual support give rise to communities of practice, hosted by platforms, wherein influencers exchange information about algorithms. In their role as host to these communities of practice, platforms shape flows of information. Learning in communities of practice centers on the core activity of their social world: the pursuit of visibility. Through these practices, influencers collectively construct a body of knowledge about Instagram’s algorithms. Marcus described in an interview what learning in communities of practice looks like for influencers:

Maybe I'll try to test something out and see if I can discover any new tactics. Which I've done some different things, and try to share with people on YouTube of what I found out. So, it’s just basically a bunch of the community just working together and sharing what they find out about how Instagram works…

Influencers learn together by asking and answering questions, receiving guidance and critiques, researching and testing “expert” insights, troubleshooting problems, and triangulating information together. In this way, influencers engage in “participation as a way of learning—of both absorbing and being absorbed in—the ‘culture of practice’” (Lave & Wenger, 1991, p. 95). Learning about Instagram’s algorithms happens in large part through collaborative techniques for pursuing visibility, which will be discussed in the next chapter. Because influencers rely on platforms where visibility is not guaranteed—mainly, Facebook groups and Reddit—for building and sustaining their communities, the logic of the visibility game orders how they learn together. First, the rules of the visibility game make connecting with others online both the cause and effect of becoming visible. Being the kind of person others would want to be friends with, or at least connected to, contributes to success as an influencer. This renders it natural, if not compulsory, for influencers to reach out to others and establish personal connections—to be active in communities of practice as they emerge online. Second, the high demand for information about Instagram’s algorithms, as a result of Instagram disclosing little information, has made “social media strategy” a lucrative niche for influencers. The scarcity of information also obliges influencers to follow other influencers as a way to keep informed, as Lacey suggested in an interview:

I follow a couple bloggers. There's this one, her name is Julie Solomon, and she always is trying to keep people updated on the algorithm. I try to follow a lot of people that kind of have their noses in that stuff and then share it with their audience. So other people that are educational resources…

Yet, finding and learning from other influencers like Julie Solomon depends upon them being visible.
When influencers share what they know online, other influencers have an extra impetus to follow them—it is part of the way “expert” influencers “provide something of worth” to their followers, as influencers often advise. As influencers respond to the demand for information, for many, participation in the community elides distinctions between learning the trade and doing it. Influencers seek insight about Instagram’s algorithms in order to become more visible, as well as to share this insight with their communities, which may help them become more visible, both generally and within their communities. Thus, to some extent, the financial value of connecting with and supplying others with information about Instagram’s algorithms seems to underwrite the vitality of influencers’ communities of practice. Third, the quantity and quality of information influencers encounter within Facebook groups, in particular, partially depends upon the platform “activating” connections (Bucher, 2018) with knowledgeable others and recommending groups and individual posts accordingly. In this sense, who and what platform algorithms make visible to individual influencers dictates the information they receive. Put differently, it is quite possible that certain Facebook groups offer “better” access to insight than others, and these groups will only be algorithmically recommended to those whose social graphs align with existing members of these groups. Beyond the logic of the visibility game, the communicative affordances of platforms allow large collectives of influencers to exchange information, but also facilitate the evaluation of information. When sharing information, as previous work has shown, engagement metrics act as a social confirmation heuristic, providing a visible endorsement of insights (Metzger et al., 2010). If many people like or upvote a post, influencers likely conclude the insights are credible (Metzger et al., 2010). Comments that explicitly validate the insights (e.g., “Thank you for saying what so many of us think.”) offer additional affirmation. For influencers, sharing insights in Facebook groups or subreddits also allows for more complex collaborative knowledge building to occur across a larger number of people than other modes of communication would allow. Threaded comments allow influencers to easily seek clarification or elaboration (e.g., “When you say ‘Or you could figure out how to provide something of value to people,’ what do you mean exactly? Can you give us an example with your account?”) and to branch out into tangential discussions in order to address the particular information needs of individual influencers. Commenting also allows influencers to push back on or critique insights (e.g., “I agree, but some of us do create good content, don’t use pods or buy followers, and that’s not rewarded. I see terrible content that gets insane amounts of engagement.”).
In responding to critical comments like this, influencers have the opportunity to reflect and offer additional information to make their case (e.g., “Sure, a ton of garbage gets likes, but that’s cause people shell out thousands of dollars for it and end up with one viral post that fades and then they need to shell out another thousand.”). In sum, the communicative affordances of platforms—the nudge to participate in information sharing as part of playing the visibility game; the learning opportunities permitted via algorithmic curation of groups, people, and posts; the ability to host discussions among large groups of influencers; and the credentialing function of engagement metrics—establish the conditions for influencers to learn about algorithms in communities of practice. As influencers learn together in this way, these affordances influence what information becomes visible, and to whom it becomes visible. In the next section, I will shift focus to the secondary sources influencers consult for learning about algorithms and how platforms shape this.

Learning from Media

Most influencers glean information about algorithms from media content on platforms and, to a lesser degree, from Instagram itself. While Instagram might seem like the most obvious source of information about the platform’s algorithms, few influencers rely heavily on the company’s disclosures, as they find them unhelpful. Thus, influencers turn to news and industry coverage (in addition to each other) as the next best option. As I will discuss below, while Instagram’s efforts to conceal its algorithms place restrictions on what can be learned from the platform itself, the dynamics of algorithmic curation on platforms further restrict the flow of relevant information.

Many influencers perceive Instagram’s disclosures as authoritative sources of information about the platform’s algorithms, but, as many interviewees told me, the company’s public communications leave much to be desired. In order to learn from Instagram’s disclosures, influencers must seek them out, but this task is more difficult than it may seem. As I can attest, relevant information shared by Instagram can be quite challenging to find, even when one is actively looking for it. Instagram’s official communications are spread across various venues, including Instagram’s blogs, help pages, and even videos from industry events at which Instagram employees speak. In 2019, CEO Adam Mosseri also began hosting weekly “ask me anything” sessions via Instagram stories, where he occasionally shares relevant information. Speaking to the difficulty of relying on Instagram’s disclosures for learning, author and DIY influencer Peggy Dean lamented in an article: “Staying in the loop is harder than ever, as many changes are no longer being regularly announced on Instagram's blog, but rather just being updated in a buried folder in their Help Center” (Dean, 2018). Further, in its occasional updates about changes to the platform and policies, Instagram shares only limited information about its algorithms and often uses coded language. For example, Instagram’s original blog post announcing the transition to an algorithmic feed was titled “See Posts you Care About First in your Feed” and never referred explicitly to an “algorithm” (Instagram, 2016). Influencers recognize that Instagram often intentionally conceals and selectively shares information about its algorithms.
For example, Chelsea told me in an interview:

I don't think that [Facebook and Instagram] actually want people to know their secrets and how [the algorithm] works so much. They'll broadcast certain things, but at the same time there is a PR element. They're telling you the things that they want you to know.

Perhaps as a result of the limited and ambiguous nature of the information Instagram offers, few influencers interviewed said they sought out the company’s disclosures directly. Most influencers instead receive information from Instagram secondhand, either from news outlets reporting on Instagram’s official communications or from other members of the community. For example, when asked if she follows Instagram’s official updates, Carina replied:

Not really, sometimes when I'm checking in, or when I see that somebody has said they changed the algorithm again and they were supposedly going back to reverse chronological order, so I checked in on that and I would see just snippets of their press release through other articles. And it would just say, "We're not sharing that now, but this is gonna be...But we will tell you that this is gonna be better for users." So it's not that open.

As Carina suggested, when Instagram does share updates, influencers often find the information unhelpful. Moreover, as Christina pointed out in an interview, life on Instagram moves fast and, as a result of Instagram’s secrecy, information often comes too late:

…Instagram doesn't tell us anything. So eventually, a news article might come out "explaining the algorithm.” But oftentimes, those articles come out literally weeks, if not months, too late. By that point, they're old news, and everyone's like, "Yeah, honey, been there, done that."

Like other influencers, Christina felt frustrated by a lack of transparency on Instagram’s part, which constricts the flow of information about the platform’s algorithms. Consequently, as Christina hinted, Instagram’s reticence underscores the importance of learning by other means. Instagram’s secrecy creates a vacuum of information that others must fill. Mainly, influencers routinely discuss learning about Instagram’s algorithms through a variety of media—blog posts and articles, YouTube videos, and podcasts. Sometimes influencers actively look for information about Instagram’s algorithms via targeted online searches. For example, Marcus explained in an interview: “…every time something would go wrong, I would be like, ‘What's the new Instagram algorithm?’ So I would go look it up on YouTube to see what other people are talking about.” Like Marcus, most influencers interviewed mentioned googling “the Instagram algorithm” or searching for “how-to’s” on YouTube and Reddit. Aside from targeted searches, information about Instagram’s algorithms also sometimes surfaces “incidentally” while influencers seek other insight on platforms, as Marcus discussed:

So, I was looking into how people would grow their Instagram account. That's what I would just look up, "How to grow your Instagram account?" I looked that up on YouTube. And it would pop up, "2018 Instagram algorithm change," and I was like, "What's that?" So [laughter], I clicked on it and I was like, "Oh wow, this is crazy."

Here, Marcus shared that he first learned about Instagram’s algorithms as a result of a video (algorithmically) recommended to him by YouTube based on another video he had watched. News reports—for example, from technology websites like TechCrunch—offer crucial support to influencers in filling in the blanks left by Instagram.
In addition to sharing and interpreting information from Instagram’s official communications, such reports often piece together the latest details drawn from leaks and efforts to reverse-engineer Instagram’s algorithms. Instagram has also previously held press conferences where reporters receive exclusive information about the platform’s algorithms (Constine, 2018), although, as TechCrunch noted in its reporting on one such event, reporters cannot (always) verify the information they receive from Instagram. Thus, news reports help influencers in their quest for credible insight. For example, in an interview, Jessica described her process of evaluating information as follows:

I follow other bloggers who are bigger than me. And sometimes they'll blog about these things or they post about these things, so I will read all of their articles. And when a lot of people are saying similar things, then there has to be some truth to it. And then I will cross-reference with other things like TechCrunch or whatever comes up in Google search first, Mashable or whatever. And then look at those and kind of form my own opinion from the blogger side, and then from the tech non-blogger, more data analysis side of things.

Influencers, like Jessica, often lean on news reports as part of efforts to triangulate insights. As the above discussion suggests, platforms like Google and YouTube constitute key sites for influencers to find and be exposed to information about Instagram’s algorithms. Yet, platforms regulate such exposure via algorithms. As is the case for exposure to news, any “incidental-ness” of exposure to information about algorithms on platforms is “a matter of degrees” (Thorson, 2020, p. 11). Some influencers will be “stronger attractors” of relevant information as a result of a combination of their choices, the choices of their connections, and publishers’ choices, all filtered through datafication and algorithmic processes that serve content based on existing models of interest (Thorson, 2020, p. 11). Influencers’ particular interest in social media marketing sets them on the right track for encountering relevant information, as Marcus’ story suggested above—YouTube recommended Marcus a video about “the Instagram algorithm” after he searched for videos about growing an Instagram account. However, such an encounter with relevant information is not accidental. Influencers must effectively signal their interest in content likely to contain information about algorithms in order to increase the likelihood they will come into contact with such information. In particular, they must “actively” customize their experiences on platforms by following, subscribing to, and connecting with people who are likely to discuss Instagram’s algorithms, and “passively” customize their experiences by interacting with content and accounts addressing Instagram’s algorithms (Thorson et al., 2018; Thorson et al., 2019). Most influencers do seem to customize their platform experiences in these ways, though how much they vary in the degree to which they customize is unclear. In short, encountering relevant information depends upon fitting the profile of users interested in the kind of content likely to discuss algorithms. Although information about algorithms may seem ubiquitous online to some, for many others it may be scarcer. Instagram creates the first level of scarcity of information about algorithms by remaining tight-lipped about them.
Platforms (including Instagram) create a second level of selective scarcity by making encounters with relevant information conditional. In the next section, I will discuss how influencers learn about algorithms via training, as well as how the gurus who provide this training gain epistemic authority via the visibility game.

Learning from Training

Training for Instagram influencers consists primarily of conferences and conventions, courses, and guides. Such training is built around successful influencers, or gurus, whose credentials are built upon their visibility. Gurus’ visibility grants authority to the information they impart and makes it valuable. As the market for learning how to become an influencer grows, gurus reap substantial rewards from selling their insights. This incentivizes influencers to learn about algorithms, and it has engendered a cottage industry of social media marketing gurus who promise to set influencers on a path to success. The lure of simplifying one’s path to success has produced a lucrative—and diffuse—branch of the attention economy, which further enmeshes knowledge with visibility and capital. This situation forms the basis of an important pillar of platform epistemology, wherein gurus—manufactured under the conditions set by platforms in the visibility game—become venerated as algorithm experts. For many influencers, becoming a guru is the mark of success as an influencer. Carina explained this point in an interview, when I asked her what she thought about gurus:

I think that this whole way to make money in any of these new tech-based industries is tutorials. I'm just listening to a podcast about […] dropshipping [a supply chain management method used by many online sellers]. […] At the end of the podcast, it was kind of, the moral of the story was that you make all of your money by being the person that sells access to a tutorial, or an online class, or tips and tricks every month, and charge people $200 a month for access to your tips. And that's the end goal, rather than having a successful shop. So I think even...Yeah, I think that money lies in having succeeded and then leaving that and trying to convince other people that they can follow in your footsteps if they just pay you a little bit of money and listen what you have to say, but it really doesn't work that way.

As Carina suggested, becoming a guru who makes money from training others represents the highest rung of achievement among influencers. This is, in part, because the demand for insight about how to grow an Instagram account has made training others a lucrative way of monetizing knowledge. For example, travel blogger and social media entrepreneur Helene Sula reportedly makes $200,000 a year, of which 65% comes from online courses (Lawlor, 2018). By comparison, Sula makes 15% of her income from affiliate marketing (Lawlor, 2018). Gurus impart what they know about algorithms to other influencers in multiple ways. One key way is via conferences, workshops, and conventions, which frequently include sessions that explicitly address algorithms. For example, pitching the importance of attending conferences, a Sprout Social article stated: “We’ve already seen countless algorithm updates from the likes of Instagram and Facebook. Yet, as communication trends and preferences continue to evolve, so too will our social platforms. The question is, how do you get ahead of the curve?” (Carter, 2018).
Crucially, conferences often provide greater access to “insider” “tips and tricks” than influencers might receive by other means. As Bloggy Boot Camp’s website states, conferences are “hotspots for inside information to leak out, giving you access to the kind of inside information that you simply wouldn’t have been able to unlock any other way outside of grinding at blogging for hours and hours and making all kinds of mistakes yourself” (Bloggy Boot Camp, 2018, n.p.).

Alongside in-person professionalization gatherings, gurus typically offer a selection of online courses, guides, and coaching, which serve as an integral source of information about Instagram’s algorithms, but can be costly. These courses, which are ubiquitous online, come in different forms—text-based “guides,” webinars, a series of modules paired with demos and templates. Gurus charge varying fees for their training, depending on the market and their status. Most paid courses seem to range between $100 and $300, but some are upwards of $1,000. Myriad free courses exist, but these often serve as a gateway to paid training services. Downloading a free e-guide or signing up for a free course typically involves providing contact information, which allows gurus to advertise paid training materials. For example, Instagram entrepreneur Alex Tooby advertises a free course in the footer of her website, which directs visitors to a page offering a one-time deal of $200 off her signature course, which normally costs $597.

Courses and training services are usually paired with Facebook groups that gurus create, where they lead discussions and feedback sessions about maintaining and growing an Instagram account. Groups scaffold paid training materials by fulfilling the role of “study groups” and “office hours.” They also foster learning via networking and by providing a space for a community of practice. Yet, managing Facebook groups incurs a cost, which is often exacerbated by the constraints instituted by platforms. In May 2018, after two years of hosting and managing a free Facebook group, Meg Robinson, the admin of a Facebook group named Instagram Partners (a pseudonym), announced she would be shuttering the group in order to focus more on paid training services. Robinson cited a variety of reasons for archiving the group, including concerns about Facebook intervening in it:

Some of the things we’ve noticed lately with the algorithms on Facebook, censorship from Facebook (deleting posts AND groups without our approval - which they can do, it’s their platform) have been making us feel like we truly can’t express ourselves sometimes and have open conversations about all of the strategies without worrying about Facebook deleting posts (or worse, our own Facebook Groups like they did our Comment Pod Facebook Group on Friday) or comments that we put time into writing and creating.

Beyond concerns about Facebook intervening in the group, Robinson also explained that managing the group demanded a lot of time and resources and that they wanted to offer more quality control: “Partners members have told us they want more one on one time from us. They told us that they are tired of the newbie questions, duplicate questions, attitudes, etc. We’re honestly quite tired of all of this too.” Consequently, Robinson shared that Instagram Partners would now be offering a paid off-Facebook forum for $21.99/month that promised “many more perks, streamlined discussions, Q&As on each topic.”
Platforms intervene in the above-described training opportunities in multiple ways. All training opportunities must be marketed on platforms to effectively reach influencer-clients. As explained in the previous section, given the logic of algorithmic curation, exposure to posts advertising training is conditional on effectively performing one’s interest in these opportunities on platforms like Instagram, Facebook, and YouTube; being connected to those who have effectively performed this interest; and/or fitting existing models of this interest. Gurus’ choices about how to market their courses matter here as well, and their choices are typically informed by platform metrics and research based on those metrics. Algorithmic curation also impacts (aspiring) gurus’ attempts to acquire new clients. Successfully playing the visibility game facilitates success as a guru. Because many gurus complement their training with linked Facebook groups, the policies and design decisions the platform makes can shape the flow of information in these informal discussion spaces. As the example of Robinson’s Instagram Partners group illustrates, moving off Facebook may provide a way for gurus to avoid being shut down, as well as to regain control over the information flows and community-of-practice learning they foster. Gurus are seen as trusted experts on algorithms, but this is not solely based on the quality of their information. Indeed, as I will explain in the next section, gurus achieve epistemic authority via success in the visibility game.

Constructing Guru Authority

Just as follower counts and engagement rates “get[] coded as success” in the visibility game (Duffy, 2016, p. 450), so, too, do they help validate gurus’ insights by acting as credentialing mechanisms. The story told is: knowledge confers visibility, visibility confers knowledge, and both knowledge and visibility convert to capital. This axiom emerges from the structures platforms set in place via the visibility game, and serves platforms, as they benefit from the cyclical, intersecting pursuits of knowledge, visibility, and capital. This is another way that platform epistemology manifests: by keeping information about algorithms secret, platforms compel influencers to rely on the visibility game for authorizing information about algorithms, rather than, for example, source citations or other traditional signals of authority. We can see an example of a metric-based pitch in Tooby’s copy for her training services:

After quitting my job I went on to create an Instagram account […]. I grew the account to over 10,000 followers in 3 months and now it sits at nearly 400k (just 2 years since its inception). Growing this account […] ha[s] taught me many lessons and allowed me to really hone in on my gift (that's right, Instagram mastery is a gift).

Highlighting their number of followers, as in this example, acts as a testament to gurus’ expertise. In this way, metrics act as a heuristic for aspiring influencers to use in assessing the authority of gurus. Client testimonials also often highlight follower metrics as part of their assessment of the value of gurus’ offerings. For example, in a paid testimonial about the Instagram Partners course, one influencer wrote that the course helped her grow from 3,000 followers to over 100,000 (I do not include this individual’s name or cite her blog post, as she refers to the real name of Instagram Partners there). This influencer was now offering her own free five-day course, which she advertised in the same post.
Of Sula’s “Instagram for Success” course, one influencer similarly reported:

When I signed up in 2018, I had about 5,000 followers, decent likes, very few comments, no clue what I needed to fix or change, and no clue about working with brands. A year later, I have tripled my Instagram following, have a high engagement rate, get 200 – 400 comments per post, took my blog views from 20 a month to 4,000 a month, and signed travel contracts and worked with several clothing, high end jewelry, and accessory brands. (Wheeler, 2020)

The size of the Facebook groups paired with training can also signal quality. Having many Facebook group members demonstrates that gurus’ strategies have worked for them, in that their success in the visibility game has enabled them to attract many members, and gives the impression that many other influencers have an interest in what gurus know. Thus, gurus’ ability to make their groups visible on Facebook offers another avenue for increasing the value—or, at least the impression of value—of their paid training services.

Aside from metric-based assessments of value, connections at Facebook and Instagram can be mobilized to gain a competitive advantage in the marketplace of information about Instagram’s algorithms, as well as to demonstrate authority. For example, Meg Robinson’s status as a guru increased when she was invited to meet with Instagram employees, as previously discussed. Not only did the meeting signal that she was a “VIP,” but it allowed her to acquire valuable insider information she could share with her followers (and group members). As mentioned, Robinson posted a video in Instagram Partners’ Facebook group after her meeting, in which she shared information and stated:

I have some more information that I’ll provide that I know I can provide—again, I have to be careful because of the NDA. But, that information I’ll be slowly trickling out over at our Facebook page […], as well as through our email list, and then here and there in throughout the group, obviously, the group discussions.

Robinson made the strategic decision to hold back information, rather than release everything all at once, presumably because it incentivized members of her Facebook group to subscribe to her email list and remain in the group. By mentioning the NDA, Robinson again signaled that she had gained access to information that few people enjoy, which gave her an advantage over other gurus who do not have this access. Even if she cannot share everything she knows, her possession of insight she cannot share still informs the advice she offers. In her video, Robinson also announced that she had added members of “Instagram’s research team” to the Facebook group and that they would now be listening in to learn from the group. In a comment, Robinson elaborated that the Instagram employees added to the group were interested in "working with creators to try to better their tools and to motivate creators to use their tools more.” By sharing this, Robinson positioned her Facebook group as a rare commodity, one that promised members the ability to directly communicate their needs to Instagram—“…if you guys have any feature requests, or anything like that, please post them here.
[…] That’s a really great place to start if you guys have anything that you want them to hear.” (Robinson does acknowledge: “I can’t guarantee that they’re gonna read through every comment and everything, but they are reading.”) Having this “direct line to Instagram” presumably made the Facebook group even more valuable to influencers than before. The more influencers value Robinson’s insight and her Facebook group, the more money she is likely to make from the paid training she offers.

Similar to Robinson, Cameron discussed in an interview how his connections at Facebook allowed him to achieve the impression of authority. He said:

Once my Instagram account was verified—and that was really through a Facebook connection that they just did that; I really have no qualification for it—people would just add me because they'd be like, "Oh, this kid knows how to get that done," or "this kid knows a lot about the algorithm, we gotta invite him in and just talk to him about what he knows," because at this point too, other influencers were reaching out to me and they'd be like, "Hey, I need help with this," and I'd be like "Oh, sure, I can help with that," and either email my connection or add them to a group or... I just kinda became this knowledgeable person that, when people would find out stuff, they would reach out to me to either confirm it or deny it, and I'd be like, "Oh yes, I have heard about this, but tell me everything you know." And that's really when I would learn about what was going on in some different vein of the internet.

Here, Cameron shared that his relationship with Facebook employees allowed him to get his account verified. With this verified status, as Cameron alluded, he did not necessarily need to be an expert to be considered one. Because he was verified (and had a large number of followers), others assumed he would be a good source of information about Instagram’s algorithms, and, indeed, having connections at Facebook meant he might be able to provide other influencers with credible answers to their questions, gaining him cachet. As he explained, his verified status also led other influencers to reach out to him to confirm what they knew, which allowed him to acquire even more information.

As we have seen, to some extent, gurus’ wisdom is measured by how well they play the visibility game. Success in the visibility game also opens up opportunities for being (and being perceived as) a conduit for insider information about Instagram’s algorithms, which increases the value of gurus’ training. In turn, achieving status as an authoritative source of information contributes to success in the visibility game, due to the demand for authoritative information. Knowledge about algorithms among influencers is a sort of cultural capital: gurus’ wealth of knowledge, relative to other influencers, allows them to distinguish themselves and, thereby, “yield[] profits of distinction” (Bourdieu, p. 245) in the attention economy. As such, information about algorithms among influencers gets filtered through the logic of the visibility game. While gurus rank fairly high up in the echelons of epistemic authority on algorithms, for most influencers, Instagram itself holds the highest degree of authority. As I will explain in the next section, Instagram uses this authority to shape the legitimization of information about algorithms among influencers (and other users) via what I term black-box gaslighting.
Information Asymmetry Between Platforms and Users: Black-Box Gaslighting

Platforms like Instagram clinch their epistemic authority by making information about algorithms scarce (Pasquale, 2015), as discussed, and also by retaining exclusive access to user data for studying algorithms (Andrejevic, 2014). With these strategic decisions, platforms create an information asymmetry between platform owners and users (Andrejevic, 2014). Although the technical complexity of algorithms often places a limit on what even platform owners can know about their own algorithms (Ananny & Crawford, 2016; Burrell, 2016), through the information asymmetry they establish, platform owners create ambiguity in what can be known about their algorithms. This ambiguity allows them to create the impression that they are the sole authority on all knowledge claims about algorithms. As Gillespie wrote: platforms “are in a distinctly privileged position to rewrite our understanding of [algorithms], or to engender a lingering uncertainty about their criteria…” (Gillespie, 2014, p. 187). Occasionally, as I will discuss below, platforms use their epistemic authority to assert their own understanding of algorithms over what users know to be true about them, as rooted in users’ embodied experiences with them. In doing so, platforms sow confusion and self-doubt among those whose knowledge conflicts with statements made by platforms, which undermines the process of learning about algorithms. In these situations, the issue is not necessarily what is or is not true about algorithms. Instead, the issue is that users are compelled to rewrite their understanding of algorithms, as Gillespie put it, without receiving any insight to help them make sense of why their experiences led them to certain conclusions and without any acknowledgement that some elements of their knowledge claims may correlate with “the truth.” When platforms cause users to question their own memory, perception, or judgment of information they have acquired about algorithms, it constitutes what I call black-box gaslighting.

“Gaslighting” is a term used in popular discourse, as well as by psychotherapists, to describe a manipulation technique in which an individual attempts to “undermine their victim’s belief system and replace it with another” (Dorpat, 1996, p. 33). The goal of gaslighting is to prompt someone to question their reality and conform to the gaslighter’s will. Those who gaslight may do so consciously or unconsciously, and gaslighting is “not explicitly or directly hostile, abusive, coercive, or intimidating” (Dorpat, 1996, p. 31). Moreover, those who are the target of gaslighting may or may not realize they are being gaslit (Dorpat, 1996). Because black-box gaslighting represents one way platforms control knowledge building around algorithms, it constitutes a key element of platform epistemology. In what follows, I explicate this concept through the example of influencers’ debates over “shadowbanning.” First, I provide an overview of this debate; then I explain how, in refuting the existence of shadowbanning, Instagram (black-box) gaslit the countless influencers who said (and continue to say) they had experienced shadowbanning themselves.

The debate about whether or not shadowbanning exists is ubiquitous in influencers’ discussions. Shadowbanning refers to the suppression of an account’s post(s), such that the account becomes virtually invisible to non-followers.
Typically, the concern over shadowbanning is that posts will not show up on the Explore page or when users search for specific hashtags, which makes it difficult for influencers to grow their accounts. Given that “shadowbanning” is not explicitly announced to users (hence “shadow”), it is difficult to know for sure if and when an account has been shadowbanned. Still, about half of the influencers I interviewed believed shadowbanning was real, and many talked about experiencing shadowbanning themselves. For example, Emily and I had the following exchange:

KC: Have you experienced a shadowban?

Emily: I'm not sure. I feel like I have based on... Especially when it was going on a lot. I always use the same four or five hashtags. Like #[CITY]blogger or #[CITY]girl, or whatever they were. #outfitoftheday, stuff like that. And then all of a sudden it was like I would have no engagement whatsoever on a post.

KC: Just a really significant drop that you were like, "Something is off."

Emily: Right, a really significant drop. And then part of the reason I created this personal account, A, was so that I could see my friend's posts that I was missing on my blogging account. But also I started using it just to see, "Hey, is my post coming up?" You know like, "Hey if I go into this hashtag will I see what I think I'm gonna see or not?" And a lot of... Sometimes I would and a lot of the time I wouldn't.

Like Emily, influencers typically determine they have been shadowbanned by attending to visibility metrics—particularly impressions gained through hashtags, which are a key way influencers’ posts reach non-followers. Such metrics help to empirically ground influencers’ conclusions. For example, a post in r/InstagramMarketing stated:

Recently I've made a post and used the same hashtag set I use when posting about that sport (sports page so I have different hashtag sets) and I got 0 hashtag impressions. It shows nothing when I go into my insight. It's like I have not put any hashtags.

Influencers often look to Instagram to verify facts about the platform’s algorithms, which positions the company as a key arbiter in the debate over shadowbanning. For example, Jessica, a social media guru, said in an interview: “I would say I get all of the information, before I speak to my audience about it, from Instagram. They come out with little blog posts about the updates and things of that nature. So, I always check there.” Indeed, influencers like Jessica tend to trust Instagram’s disclosures more than other sources and check their information against details shared by the platform. As a result of the authority most influencers grant to Instagram, when Instagram explicitly denied the existence of shadowbanning in 2018 (Constine, 2018), it resolved the debate for many influencers. For example, Lacey told me in an interview:

I remember when shadowbanning first came about, of course, I was like, "Oh, my gosh. It's a real thing." And we were, even at work [a social marketing company], we were trying to figure it out, and we kind of thought, "Oh, it's because we're using the same set of hashtags on all of our photos. So Instagram is flagging that, and shadowbanning you," and whatever it might be, but now, as time has gone on and then Instagram even said that, I don't think it's a real thing.

Lacey and other influencers accepted Instagram’s denial because they generally perceive Instagram—and Instagram’s parent company, Facebook—as the most authoritative source of information about the platform’s algorithms.
They recognize, pragmatically, that Instagram has access to more information about the design of its algorithms than they ever will, so it makes sense to trust Instagram’s statements. Nevertheless, there are limits to even what those who work for Instagram can know about the platform’s algorithms (Ananny & Crawford, 2016). For example, machine learning algorithms function semi-autonomously, which means that those who design them cannot always explain the particulars of what they do or why they produce certain outcomes (Ananny & Crawford, 2016). Moreover, platform employees and users have different access to knowledge about algorithms, which pertains to the difference between knowledge of algorithms’ capabilities versus knowledge of their outcomes. Algorithmic capabilities are fixed: algorithms make use of specific techniques, operate on specific datasets, serve specific goals platforms dictate, and so on. Knowledge of algorithmic capabilities is, thus, more firmly rooted in a stable, universal ground truth. In contrast, algorithmic outcomes depend upon data inputs, which are “inextricable from contexts” (Neff et al., 2017). Algorithmic outcomes vary according to variance in trace data inputs from user to user, or from segments of users to segments of users. In this way, knowledge of algorithmic outcomes is more contingent upon the who, what, where, why, and how of data inputs.

Shadowbanning involves both knowledge of algorithmic capabilities and knowledge of algorithmic outcomes. The “outcome” represents influencers’ experience of stark drops in engagement (e.g., reach, likes, clicks), which are anomalous relative to their average level of engagement. In this sense, influencers have unique insight, through their situated encounters with algorithms, which may go undetected by platforms. Indeed, as Sophie Bishop argued, influencers’ (and users’, more broadly) knowledge claims “present a valuable resource for revealing fragments of information about algorithms during particular algorithmic moments” (Bishop, 2019, p. 14). The “capabilities” related to shadowbanning mainly involve the signals that could lead to drops in engagement and Instagram’s intent behind suppressing certain posts. Based on their experiences, influencers suggest various kinds of user behavior on the platform that could serve as data inputs. For example, Marcus suggested in an interview that shadowbanning resulted from engaging in undesirable activity and/or posting undesirable content:

So if you're spamming people and you have content that Instagram deems against the rules or something like that, or against their terms of agreement, they're gonna stop pushing that because they feel like that's gonna keep people off the app, I guess. […] So if they think there's some suspicious activity going on in your account that's not authentic or in line with their rules, then they're going to repress your reach…

Influencers have many theories like this about the kind of behavioral data inputs that will lead to shadowbanning. These theories generally refer to behavior that could be seen as “spamming,” as Marcus said, and, thus, often reason that Instagram uses shadowbanning to maintain a good user experience. Theories about the algorithmic capabilities that could give rise to shadowbanning are difficult to “definitively prove” with data for only an individual account. Nevertheless, theories usually draw on kernels of (objectively verifiable) truth.
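To make the empirical reasoning described above concrete, the sketch below illustrates how an influencer might flag a shadowban-like outcome from their own insights data: a post whose hashtag impressions fall far below the account's recent norm. It is purely illustrative; the data structure, threshold, and function name are hypothetical, and the sketch captures only the detection of an anomalous outcome, not the capability or policy that produced it.

```python
# Illustrative sketch only: flags posts whose hashtag impressions fall far
# below an account's recent average, the kind of anomaly influencers read
# as a possible "shadowban." All figures and names are hypothetical, and
# nothing here reflects how Instagram actually ranks or moderates content.

def flag_possible_shadowban(posts, threshold=0.2):
    """Return posts whose hashtag impressions fall below `threshold` times
    the account's average hashtag impressions across the supplied posts."""
    if not posts:
        return []
    average = sum(p["hashtag_impressions"] for p in posts) / len(posts)
    return [p for p in posts if p["hashtag_impressions"] < threshold * average]

if __name__ == "__main__":
    # Hypothetical insights pulled manually from an account's recent posts.
    recent_posts = [
        {"id": "a", "hashtag_impressions": 1200},
        {"id": "b", "hashtag_impressions": 980},
        {"id": "c", "hashtag_impressions": 0},     # sudden, anomalous drop
        {"id": "d", "hashtag_impressions": 1105},
    ]
    for post in flag_possible_shadowban(recent_posts):
        print(f"Post {post['id']}: hashtag impressions far below this account's norm.")
```

As the distinction between capabilities and outcomes suggests, such a flag is evidence of an outcome only; on its own, it cannot distinguish deliberate suppression from ordinary fluctuations in algorithmic curation.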
Platforms—Instagram included—do use AI in their moderation efforts, including image recognition AI (Statt, 2018), such that content deemed unacceptable, undesirable, or inappropriate may be hidden from all users or from certain users (Gillespie, 2018). Instagram also does restrict certain hashtags (e.g., #tagsforlikes, #snowstorm, #sunbathing, #ass), so that a search for such hashtags displays only the “Top posts” and a message stating “Recent posts from [hashtag] are currently hidden because the community has reported some content that may not meet Instagram’s community guidelines.” Many influencers conclude, logically, that using such restricted (or shadowbanned) hashtags will torpedo the reach of posts. The verifiable elements of influencers’ understandings of shadowbanning, coupled with their firsthand empirical observations and investigations and their pooling of insight in the community, suggest that the debate over shadowbanning may be, in some sense, a debate over semantics—the experience of shadowbanning could simply be the result of algorithmic curation and/or algorithmic moderation.

However, Instagram does not acknowledge this point. In its initial refutation of the existence of shadowbanning, all Instagram said, as conveyed by TechCrunch, was: “Shadowbanning is not a real thing, and Instagram says it doesn’t hide people’s content for posting too many hashtags or taking other actions” (Constine, 2018, n.p.). This statement is highly misleading, as Instagram does, indeed, filter out posts when users post content or behave in ways that violate the company’s policies. (In June 2020, Instagram CEO Adam Mosseri published a blog post (Mosseri, 2020) that included a slightly more nuanced statement about shadowbanning, though it still positioned shadowbanning as an unfounded concern.) Further, with this statement, the company rhetorically directs attention to design intent, rather than outcomes, which ignores the main thrust of influencers’ complaints. As an additional example of the statements the company has made about shadowbanning, Instagram CEO Adam Mosseri said in an Instagram story:

Shadowbanning is not a thing. If someone follows you on Instagram, your photos and videos can show up in their feed if they keep using their feed. And being in Explore is not guaranteed for anyone. Sometimes you get lucky, sometimes you won’t.

Certainly, this is all true. However, it does not rule out the possibility that, for example, engaging in repetitive behavior that resembles bot activity might cause a user to be demoted in ranking on the platform. If an influencer’s post is displayed at the bottom of her followers’ feeds, the post is effectively invisible. Additionally, Mosseri did not acknowledge that using restricted hashtags will limit the audience that will see a post. The fact that “being in Explore is not guaranteed” is beside the point. Even before these statements, Instagram indirectly responded to complaints about shadowbanning in a Facebook post, ascribing the problem to a glitch and claiming the company did not have the resources to fix the issue (Instagram for Business, 2017). Most of this post was dedicated to providing advice about how to create “good content” as a “growth strategy,” which some (indignantly) read as the company blaming influencers for the problem they were experiencing. Instagram’s decisions about how to respond to “the shadowbanning question” impacted knowledge building among influencers.
Instead of the wholesale, succinct denials Instagram has issued, the company could have recognized the various reasons why influencers might experience shadowbanning-like outcomes as the result of algorithmic processes functioning as they are designed, though (perhaps) sometimes with unintended outcomes. However, the company had little incentive to do this. Thus, Instagram capitalized on the information asymmetry between the company and users by black-box gaslighting influencers. Although it is impossible to know for sure what Instagram’s intentions were with its different responses, the effect remains the same. In explicitly stating shadowbanning is not real without offering any additional insight, Instagram planted the seeds of self-doubt among many influencers and, thus, helped quell conversations about shadowbanning in the community. This is because influencers who make claims that counter Instagram’s statements risk being seen as irrational or uninformed in the face of the company’s epistemic authority (Bishop, 2019). This risk shapes the knowledge influencers construct, particularly that which gurus construct, as their income depends upon maintaining credibility. In this way, black-box gaslighting reinforces the positioning of platforms as the sole—or, at least, principal—arbiter of knowledge about algorithms. It grants platforms epistemic authority that no one else can claim, even though platforms’ knowledge about algorithms has limits. In fact, because access to data and information about algorithms is restricted, black-box gaslighting allows platforms to blur the boundaries of verifiable knowledge, which helps them control their public image.

At the same time, not all influencers take Instagram’s words at face value, and many do recognize themselves as being gaslit (though I have not seen them use this term). For example, in r/Instagram, a user commented: “the official line from Instagram is that shadowbans don't even exist. We all know they do.” Likewise, in response to a post that simply read “Instagram confirmed today they don't shadowban,” a user in a Facebook group commented: “Omg then wtf happened to me last year? I didn't show up in hashtags for like 3-4 months and my engagement was wrecked lmao.” With these comments, we can see the acknowledgement that Instagram’s statements do not reflect influencers’ experiences on the ground, demonstrating that influencers are not necessarily unwitting targets for gaslighting. Further, for many, Instagram’s response to the shadowbanning debate only prompted distrust towards Instagram, which encouraged influencers to increase their reliance on each other for “the truth.” For example, in a Facebook group post about one influencer’s experience of being shadowbanned, a commenter asked how the poster knew she was shadowbanned, noting “Also Instagram said on Twitter shadowbanning doesn’t exist, however many have experienced it.” In response, the poster wrote (in part): “I wish Instagram would just be honest about this because they obviously do shadowban people.” As this exchange demonstrates, black-box gaslighting may help platforms control perceptions and beliefs about algorithms, but it also chips away at users’ confidence in platforms’ statements. Black-box gaslighting creates uncertainty, which can undermine knowledge building around algorithms as it causes some influencers to question what they believe; it also reduces the degree to which many influencers will look to platforms for learning about algorithms.
Conclusion

In this chapter, I described the means by which influencers learn about algorithms, while noting how these learning practices illustrate the concept of platform epistemology. Platform epistemology captures the ways that the social, technical, and economic structures of platforms set the conditions for acquiring information about algorithms, as well as the process by which information achieves coherence within a social world. Influencers learn about algorithms via observation, reflection, and empirical investigation; networking and becoming members of communities of practice; news reporting and other secondary sources; and the training developed by gurus. In the absence of a central, comprehensive, and authoritative body of information about Instagram’s algorithms, the visibility game acts as a means by which relevant information is distributed, circulated, and validated among influencers. Likewise, the information Instagram chooses to publicly confirm or deny about its algorithms both influences the information infrastructure influencers construct and bounds legitimate information. I summarize these elements of platform epistemology in the Instagram influencers case study more extensively below.

In this chapter, we saw platform epistemology emerging in a number of ways. Platforms play a role in activating connections between influencers via algorithmic processes, which permits learning via networking. Success in the visibility game opens up opportunities for making connections with Instagram (or Facebook) employees, who can supply insider information. The communicative affordances of platforms foster communities of practice, where influencers learn together as they pursue visibility. The underlying logic of the visibility game incentivizes participation in communities of practice, particularly sharing information. Influencers also gain insight from different media on platforms like Google and YouTube. Because these platforms rely on algorithmic curation, exposure to information about algorithms is contingent upon influencers effectively fitting the profile of users interested in the kind of content likely to discuss algorithms. Through its secrecy, Instagram makes information about its algorithms scarce, which engenders demand for credible information that gurus step in to meet with paid training events and services. Gurus establish their epistemic authority by documenting their success in the visibility game via visibility metrics.

The findings shared in this chapter suggest that influencers acquire information about algorithms unevenly, according to structural inequalities. This is partly because the learning practices described all demand time and money. Those with the money to invest in multiple up-to-date devices, a fast home Internet connection, unlimited mobile data plans, paid apps, and so on will be better equipped for experiential learning and learning by way of media (Robinson, 2009). Networking at events and online demands considerable time (Duffy, 2017). Attending professionalization meetings and procuring training services demands money in addition to time (Duffy, 2017). Moreover, achieving guru status, and, so, epistemic authority, depends upon achieving success in the visibility game. Yet, those who achieve success in the visibility game tend to be those from more privileged backgrounds, who benefit from greater access to material resources and time (Duffy, 2017).
Because influencers’ twin pursuits of visibility and knowledge demand time and money, those who achieve guru status are more likely to be those who begin with greater socioeconomic advantages. The same line of reasoning helps explain why those with more social advantages tend to exhibit greater levels of knowledge about algorithms than those with fewer social advantages (Cotter & Reisdorf, 2020). The inequitable foundation of learning about algorithms has implications for the authorization of knowledge among influencers. Due to the epistemic authority granted to them, how gurus perceive and value Instagram’s algorithms may act as a filter through which new insight produced by members of the influencer community must pass. Gurus’ knowledge is one important basis for what is and can be known about algorithms. In some cases, information gurus endorse may discredit novel information other influencers produce or acquire. Thus, to some extent, influencers are obliged to understand algorithms within the framework of gurus’ knowledge. In Bourdieu’s terms, some aspiring influencers share with gurus a similar habitus, or social ecology of dispositions, experiences, perceptions, and appreciations (Bourdieu, 1977). Given gurus’ centrality in the information infrastructure around algorithms in the influencer community, a “harmony of habitus” (Bourdieu, 1977) with gurus allows community-authorized knowledge to be more readily accessible and/or legible to aspiring influencers. In other words, sharing a similar social position as gurus—and, so, a similar ethos—may make it easier for influencers to make sense of algorithms in the same way gurus do. Conversely, if gurus’ social position shapes the knowledge they produce and authorize within the influencer community, those whose social location differs significantly from gurus’ may face resistance in fitting their knowledge within existing frameworks. For example, influencers of color who understand their visibility as being undermined by racial biases that algorithmic curation reproduces may face resistance from White gurus who lack firsthand knowledge of this and may come to different conclusions about patterns of visibility (Bishop, 2019).

Gurus are not the only figures who hold epistemic authority. Instagram asserts an additional level of epistemic authority in the decisions it makes about what to disclose and the knowledge claims it refutes (or endorses). I introduced the concept of black-box gaslighting to describe a phenomenon in which platforms refute users’ knowledge of algorithms based on firsthand experiences, without recognizing or offering insight into the reasons why users might draw the conclusions they did. With black-box gaslighting, platforms capitalize on the information asymmetry they have established. In the context of the present study, Instagram benefits from greater access to data and information that could be used to help influencers better understand their experiences of being shadowbanned. Yet, instead, Instagram issued wholesale denials without offering any additional insight. In doing so, Instagram effected a situation in which influencers who continue to claim shadowbanning is real risk being seen as irrational. This risk imposes a prescriptive force on the knowledge claims influencers choose to put forth. This is particularly the case for gurus, who need to be seen as credible in order to continue generating an income from the insights they impart.
Through black-box gaslighting, platforms like Instagram reserve a great deal of power in arbitrating knowledge about algorithms. In the next chapter, I will turn attention to what happens after influencers acquire information about algorithms—how they make sense of algorithms in different ways, and the impact this has on how they play the visibility game.

CHAPTER 5: Strategizing How to Play the Visibility Game

Algorithmic literacy entails building knowledge, but also putting knowledge to work, as it factors into the interests and activities of individuals’ social worlds. For influencers, this means that, as they learn about algorithms, they must determine how to translate the insight they glean into an actionable strategy in the visibility game in order to be successful. In this chapter, I detail how different kinds of knowledge guide influencers in identifying possible visibility tactics and determining which tactics to implement and how. In exploring what influencers know about Instagram’s algorithms, I do two things: 1) distinguish between and demonstrate the value of technical and practical knowledge about algorithms, and 2) demonstrate that what people know about algorithms permits them to consciously have a say in how algorithms make their social worlds. I summarize these findings below.

When we talk about what people know about algorithms, discussions usually center on technical knowledge. As discussed earlier, technical knowledge refers to knowledge about the technical design and functioning of algorithms. In this chapter, I discuss two broad types of technical knowledge evident in influencers’ reflections on algorithms: design knowledge and methods knowledge. While the former addresses what goal(s) an algorithm serves and how developers actualize that goal through the design of the algorithm, the latter addresses the means by which an algorithm actually accomplishes the goal(s) it was designed to serve. Both kinds of technical knowledge matter for influencers. They give rise to the rules of the visibility game around which influencers must design tactics, and they provide a framework for assessing the soundness of new information acquired about Instagram’s algorithms, which can, in turn, be put to use in developing or modifying tactics. Technical knowledge is, thus, one tool influencers draw on in their pursuit of visibility.

Technical knowledge is interconnected with practical knowledge, a kind of knowledge about algorithms that, as yet, has not been well addressed in scholarly and popular discussions. As previously explained, practical knowledge refers to knowledge about algorithms that is praxeological. In other words, practical knowledge captures knowledge mobilized in situations of action (Suchman, 2007) and constructed in relation to the “universes of discourses” that define an individual’s social world (Strauss, 1978) and social identity. For influencers, practical knowledge is the knowledge expressed through their pursuit of visibility, as guided by discourses on what it means to be an influencer. Later in this chapter, I describe two particular discourses, on authenticity and entrepreneurship, which correspond to two different approaches to playing the visibility game that I call relational and simulation tactics. These discourses exhibit a relationship with the gendering of labor (Weeks, 2007), which also seems to impact influencers’ strategizing.
Thus, practical knowledge is not just knowing “what works” in the visibility game, but knowing how visibility should be achieved, which extends beyond the question of how algorithms function to the embeddedness of algorithms in complex systems of social relations. Platforms exert a force over influencers’ behavior by making them subject to “regimes of visibility that hinge on and operate through algorithmic architectures” (Bucher, 2012, p. 6). Yet, with practical knowledge, influencers become self-reflectively conscious of this disciplinary apparatus. Locating themselves within this field of control constitutes the first step towards intervening in the ways algorithms make their world—in other words, it is the foundation for critical algorithmic literacy. In the rest of this chapter, I will first describe how influencers know Instagram’s algorithms technically. Then, I will describe how influencers know algorithms practically. Throughout, I demonstrate how these two kinds of knowledge direct influencers’ pursuit of visibility.

Technical Knowledge

Technical knowledge comprises two knowledge domains. First, technical knowledge includes knowledge about the design of algorithms. For influencers, design knowledge mainly consists of insight about the goals Instagram’s algorithms serve and their iterative design. Understanding the goals algorithms serve gives influencers a framework to use in assessing whether details they acquire about the functioning of algorithms make sense. Understanding the iterative design of algorithms allows influencers to keep vigilant for changes that will affect their strategies in the visibility game. Second, technical knowledge includes knowledge about the methodology of algorithms—the means by which algorithms fulfill their goal(s). For influencers, methods knowledge mainly consists of knowledge of the signals an algorithm draws upon in sorting and filtering content, as well as in sanctioning behaviors. Knowing these signals is what enables influencers to devise particular visibility tactics. Below I describe what both design knowledge and methods knowledge look like for influencers, as well as the utility of each for those engaged in the visibility game.

Design Knowledge

Design Goals

Perhaps the most fundamental insight we can have about an algorithm is the goal(s) it serves. An algorithm’s goal directs what it does, such that an algorithm is, essentially, “a device for the automated production of interested readings of empirical reality” (Rieder, 2017). Understanding the goals of Instagram’s algorithms provides an overarching framework, which influencers can use to assess details they encounter about what the algorithms do and what that might mean for increasing their visibility. Put differently, it helps influencers assess, for example, “Would it make sense for X to be a signal used by Instagram’s algorithm in ranking content?” In general, influencers recognize a duality in the “interested readings” of Instagram’s algorithms, which Chelsea succinctly articulated in an interview:

[The algorithm] is a platform’s way of providing a user with the most relevant content to them in order to keep that individual on the platform for longer periods of time and coming back to the platform more often, so that they can earn money off of advertising.
As Chelsea suggested, influencers understand that Instagram’s algorithms aim to 1) improve user experience on the platform, and 2) keep users on the platform for as long as possible in order to ensure advertising revenue. In terms of the former goal, influencers refer to a range of ways algorithms are used to enhance user experience, including helping users manage information overload, displaying more relevant content, minimizing spam-like content, and ensuring users see posts from the people and accounts they want to see. For example, in explaining the goal of Instagram’s feed ranking algorithm, Adyn told me: “You have a thousand people you follow and you see everything they post, it'll just be exhausting,” and “if you're getting the most relevant content in your feed, you're having a great time. Whether or not that's ads. If you're getting ads that are irrelevant to you. Facebook...Facebook, the parent company of Instagram, they don't want that.” While influencers like Adyn recognize user experience as a key goal of Instagram’s algorithms, many also recognize that Instagram does not completely or always fulfill this goal. Indeed, many influencers I interviewed and observed online complained about seeing posts that did not interest them or spammy posts they did not want to see. In this sense, knowing the goals algorithms serve helps influencers (and other users) assess how well an algorithm is functioning. In terms of the second goal, influencers recognize Instagram uses its algorithms to ensure ample streams of advertising revenue. For example, Jessica said in an interview: 122 …Instagram is a business and just as we are growing and I'm growing my business, they're trying to grow theirs. And like I said before, they want to make money, and I hope that they put the consumer and app user first, but I think at the same time, a really close second would be making money. And how they do that is through advertising, and how they pitch advertising to people saying, "Listen, we have 300 million viewers on Insta Stories and then they spend three hours a day on Insta Stories and here's this ad and you can be put in front of the eyes of 3,000 people." As savvy marketers themselves, influencers like Jessica recognize that Instagram needs to demonstrate value to those who generate the most revenue for the platform: advertisers. Knowing this helps influencers reason about and assess the validity of information they encounter about the functioning of the platform’s algorithms. In sum, knowing the goals an algorithm serves gives influencers a framework with which to judge the soundness of other kinds of knowledge about an algorithm (e.g., which signals it relies on) and, secondarily, helps influencers assess the performance of the algorithm. Iterative Design There are few subjects that receive as much attention among influencers as changes to Instagram’s algorithm. This is because knowing that (and when) Instagram’s algorithms change is crucial for keeping abreast of the rules of the visibility game and effectively playing. Without some understanding that the rules can change, influencers would be working with outdated information that may limit the utility of their tactics. For example, Lacey told me in an interview: I think with Instagram, your strategy always has to be changing, so you can't just keep doing the same thing. You have to stay in touch with what's working and what's not. 
So I think maybe when things were being valued differently [by the algorithm], certain people were realizing like, "Oh, my gosh. I feel like my engagement is lower and people aren't seeing my content." But based on that, I think it's just important to kind of shift your focus accordingly and maybe being open to adjusting your strategy.

Similarly, Marcus told me in an interview: "…it sucks when the algorithm changes, because you're so used to one strategy and then you've got like, 'Oh man, now I gotta start a new one and change up my strategy,' and things like that." As these quotes exemplify, influencers commonly articulate the idea that Instagram's algorithm can change at any given moment, which requires constant vigilance because changes often require them to adjust their visibility tactics. In the next section, I will describe the second kind of technical knowledge: methods knowledge, and what this kind of knowledge looks like among Instagram influencers.

Methods Knowledge

Data Signals for Algorithmic Ranking

Methods knowledge provides more actionable information from which influencers can pursue visibility by offering them some insight about what algorithms do. For influencers, the most important methods knowledge is the input data that inform the sorting and filtering of content on Instagram, particularly the discrete signals drawn from input data, which form the "ABCs" of the rules of the visibility game. In other words, knowing the data signals algorithms operate on allows influencers to formulate visibility tactics that urge other users to interact with their accounts and content in ways that will boost these signals.

Influencers know that Instagram's algorithms rely on user data to fulfill the dual goals they serve. For example, Carina told me in an interview, "[Instagram] use[s] data about you. What you like, who you follow, who you comment on and engage with, and who you message privately, and whose stories you watch, to give you more of the same content." They also have insight about key kinds of input data that matter for their visibility. Primarily, influencers focus on three kinds of input data: engagement, timeliness, and relationships. Notably, these are also the main signals Instagram refers to publicly (e.g., Dimson, 2017; Instagram [@creators], 2020). In the next three sections, I will describe how influencers understand these three key kinds of input data and how influencers use them in the visibility game.

Engagement. Influencers recognize engagement—"likes," comments, shares, views, and so on—as the most important signal used by Instagram's algorithms. Influencers view engagement as both a measure of their success and a means of increasing visibility. This multifaceted understanding of engagement among influencers leads to the mantra expressed succinctly by one influencer: "engagement comes from engagement." In other words, engagement is the means to an end, as well as the end itself. Thus, the utility of influencers' knowledge of engagement as a data signal used by Instagram's algorithms is 1) being able to measure, track, and evidence their visibility and, thus, success in the visibility game, and 2) devising tactics that encourage engagement and, so, visibility. In particular, influencers most commonly discuss engagement rate, or the ratio of engagement to followers, as a key signal used in the algorithmic ranking of content on Instagram.
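The arithmetic behind this signal is simple, and a brief sketch can make it concrete. The sketch below is purely illustrative: the function, the example accounts, and the choice to count likes and comments in the numerator are my own hypothetical assumptions, not a description of Instagram's actual (and inaccessible) ranking code.

```python
# Illustrative only: a hypothetical formalization of the "engagement rate"
# influencers describe, not Instagram's actual ranking logic.

def engagement_rate(likes, comments, followers):
    """Engagement expressed as a share of followers."""
    return (likes + comments) / followers if followers else 0.0

# Two hypothetical accounts of different sizes.
mid_sized_and_active = engagement_rate(likes=1_000, comments=150, followers=15_000)
large_but_quiet = engagement_rate(likes=1_200, comments=100, followers=80_000)

print(f"{mid_sized_and_active:.1%}")  # ~7.7%: likes and comments kept pace with followers
print(f"{large_but_quiet:.1%}")       # ~1.6%: the account grew, but engagement did not
```

On this logic, an account whose following grows faster than its engagement sees its ratio fall, while removing inactive followers shrinks the denominator and raises it. This is precisely the reasoning influencers invoke when they elaborate on relative engagement and "dead weight" followers below.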
Influencers understand that engagement includes not only absolute degree of likes, comments, and so on, but also the relative degree of engagement. This has implications for how influencers formulate tactics, as one influencer expressed in a forum: “in order to maximize visibility, you need to get as many of your followers to regularly engage with your posts as you can.” In an interview, Olivia elaborated on this principle: Well, I've just heard that the algorithm favors accounts that have high engagement. So for example, if you have 10,000 or 20,000 followers and then you have over 1,000 likes on your picture, then the algorithm thinks like, “Oh, this account is super popular,” because, like in relation to their followers, they get this and that many likes. So, it's a popular account, let's boost it. Whereas, if you then grow and your likes don't grow, then Instagram's algorithm is not gonna put you on their explore page as much. As Olivia explained, influencers say Instagram’s feed ranking algorithm prioritizes users that have actively engaged followers, not just “dead weight” followers (what some influencers call followers who do not regularly engage with their content). This understanding leads many influencers to believe they need to periodically purge dead weight (i.e. remove followers). Although influencers know a variety of forms of engagement matter, they tend to 125 emphasize the importance of communication-centric engagement—i.e. comments and direct messages—above all. Adyn explained this point in an interview: What I've seen work the most [for increasing visibility], and based on what also the reps that I've worked with from Facebook have explained, it's things that create conversation get the most weight. So, comments is by far the number one intent signal that they prioritize. If people... If something's getting people to talk, it's getting them to connect. If it's getting them to connect, we're giving value to people. It's as simple as that. Influencers like Adyn tend to view comments and direct messages as the most important form of engagement, because these are believed to be weighted more heavily as they require greater effort on the part of users. Timeliness. Aside from engagement, influencers also frequently mention timeliness as another important signal drawn on in algorithmic ranking on Instagram. In the first place, timeliness refers to the recency of posts. However, influencers understand that, although Instagram algorithmically sorts posts roughly in terms of recency, the timing of posts intersects with other data signals that are more important. For example, Chelsea said in an interview:I do try to post at certain times during the day just because I know that's when my followers will be much more active and I have a better chance of appearing higher in the feed, especially for people who return to the platform a lot. If you haven't returned to the platform in a while, what happens is you get maybe shown something from three, four days ago, and it's going to be someone that you have a lot of historical interaction with. Here, Chelsea recognized that timeliness does not supersede other data signals in algorithmic ranking. In fact, she suggested that relationship with an account matters more, as I will discuss in the next section. As a data input, influencers more commonly discuss timeliness in relation to how much engagement posts generate in a short period of time after they are published. 
Explaining this point, Sofia said in an interview:

If you post, and within the first half an hour you post your picture receive a certain ... a big amount of likes and comments, within that first ... let's say 10, 20 minutes. Really, I think that's ... from the minute one to minute 20, I think that's the real crucial part that determines how well your post will do.

Similarly, Chelsea told me:

If you post something, you have a rush of good engagement, of people liking and commenting back and forth on a given photo, and that raises it in the algorithm, because the algorithm wants to see things that cause engagements. And so, it's more likely to be seen in feeds.

This emphasis on early engagement shapes how influencers design and deploy visibility tactics.10 Influencers spend substantial time figuring out the best time to post, according to what they know about when their followers are typically online. They also often coordinate engagement-generating tactics to directly follow publishing posts in order to augment their visibility.

Relationships. Alongside engagement and timeliness, relationships constitute the third major factor influencers (and Instagram) reference in relation to algorithmic ranking on the platform. Typically, "relationships" gets operationalized as interactions. For example, Christina said in an interview:

So, say, if you are liking one person's post every single time they post, the next time you log onto Instagram, chances are that person's latest post will be at the top of your feed. Because, basically Instagram is saying, "We know you really like this person," or, "We know that you have a personal relationship with this person and seeing this person is important to you."

The importance of interaction for establishing connections to other users on Instagram provides another reason why influencers prioritize communication-centric engagement, as discussed previously: interacting with others on the platform helps influencers evidence relationships with their followers for the ranking algorithm, which they hope will help their posts rank higher in their followers' feeds.

10 Although Instagram has refuted the idea that posts will rank higher if they generate more engagement directly after being published (Warren, 2020), based on their experiences on the platform, influencers like Sofia and Chelsea—as well as many third-party companies—nevertheless frequently suggest that initial engagement does, indeed, matter. Again, as in the case of shadowbanning, this could be a matter of confusing algorithmic outcomes with capabilities. Even if Instagram's ranking algorithm is not designed to take into account "rushes of engagement" directly after a post has been published, it is still possible that posts that receive engagement sooner rather than later end up being ranked higher. For influencers, the main concern is understanding which data signals, in practice, correlate with visibility, and these factors need not necessarily be hard-coded into the algorithms, particularly as Instagram's algorithms rely on machine learning.

Influencers also talked about network connectedness that they believed mattered for achieving visibility. For example, Emily said in an interview:

Emily: I've been thinking a lot more about definitely tagging other accounts in the picture [on Instagram], all of these feature accounts, or things that are vaguely related to it. For some reason that seems to be helping boost things at the moment. I don't know if that'll change. It probably will.
KC: Just tagging other accounts, but not necessarily... Not like them reposting it or anything like that, just simply tagging the other post is helpful?

Emily: Yeah, just tagging another account, but like you think the followers of that account might be into whatever you're posting.

Like Emily, many influencers emphasize the importance of explicitly establishing connections with related accounts as a way of gaining visibility. The logic here seems to be: by tagging similar accounts, influencers build links to these accounts for Instagram's ranking algorithm, such that the followers of those accounts are more likely to see their content. These influencers gather that, like other platforms (e.g., Wagner, 2014; Zhou & Moreels, 2014), Instagram's algorithm infers users' interests by extrapolating from their location in the platform's social graph.

Relatedly, many influencers report that receiving engagement from more visible accounts positively impacts algorithmic ranking on Instagram. As Sofia explained: "if the likes and comments come from big accounts—so like 20,000, 50,000, 100,000, 500,000 [followers]—then [the post] will go even more up and it will end up in the Explore page." Likes from larger accounts are referred to as "power likes" and are thought to be, as the name implies, powerful for boosting the visibility of posts. As Instagram guru Biaheza explained in a YouTube video: "the theory here is that Instagram values the likes of large accounts—accounts with large followings—more than it does the likes of regular people and [an] influx of large accounts liking your post is gonna make it go viral and get on the Explore page" ([Biaheza], 2019). This understanding inspires influencers to devote energy to building relationships with more visible influencers, pursuing collaborations with other influencers, and/or paying more visible influencers for "likes" and shout outs.

While influencers spend considerable time determining which signals will positively impact their visibility on Instagram, they also attend to actions that will negatively impact their visibility—namely, actions that will be met with algorithmically enforced sanctions. I describe knowledge of penalties and how it shapes influencers' strategizing in the next section.

Penalties

As a kind of methods knowledge, knowledge of penalties and the data signals that provoke them pertains to the functionality of Instagram's algorithms. Instagram, like other platforms, uses algorithms to identify and respond to spam- and bot-like activity on the platform. Influencers have reported a variety of sanctions instituted by Instagram, mainly bans and shadowbans. Shadowbans, as discussed, result in posts being demoted in algorithmic ranking or otherwise made less widely visible. Bans prevent influencers from using the platform as they normally would by disabling user accounts from liking, commenting, posting, and/or (un)following for a period of time, most commonly 24-48 hours. Maryam described her understanding of bans:

Instagram I think it records your...Every time you log onto the app, it does observe how much time you spend on the app and what does the typical interaction that you do with the app on a daily basis. So, after a point of time, if you exceed the number of likes or something, if you do say a hundred likes per login, or say fifty likes per login, and if you exceed that at one point Instagram will stop you from commenting or like anything for awhile because it will suspect that it's a bot.
Like Maryam, influencers usually cite repetitive behavior in a short period of time as the trigger for bans. Other reasons include using software that violates Instagram’s Terms of Use, using a restricted hashtag, and being reported by other users. With insight about penalized actions, 129 influencers craft their visibility tactics to avoid actions they know (or think) will be algorithmically sanctioned. In other words, knowledge of penalties and the data signals that invite them allows influencers to work around penalized activities in their visibility tactics. Working around penalized activities applies to both those who use automation services, which perform actions in more rapid succession, and those that do not. Now, I will transition from discussing technical knowledge to discussing practical knowledge, wherein influencers draw on the discourses of their social world to make sense of algorithms in practice. Practical Knowledge While technical knowledge provides influencers with a series of technical specifications that illuminate and delimit the full range of viable tactics, practical knowledge emerges as influencers reconcile technical knowledge with their sense of self. In this, influencers bring together knowledge of the role algorithms play in the core activity of their social world: the pursuit of visibility, and their understanding of what it means to be an influencer. Practical knowledge is more fundamental than the logic of action; it is the result of the knower giving meaning to her action. It is the tacit knowledge influencers deploy in devising and mobilizing of strategies, and it is a knowledge derived (in part) from their membership in the social world of influencers and, to a lesser extent, from gendered self-conceptualizations. Practical knowledge entails making sense of how Instagram’s algorithms ensnare them in a particular arrangement of power, but also making sense of who they are as influencers. In what follows, I will discuss multiple expressions of practical knowledge. First, I will take a step back to describe how influencers recognize themselves as subject to the disciplinary control of Instagram’s algorithms. This recognition is what makes knowledge valuable for them 130 in the first place, and plants the seeds of a critical consciousness. Then, I will explain how influencers translate the rules of the visibility game into different strategies, as guided by discursive ideals of digital influence. In this, I first describe the discursive ideals of authenticity and entrepreneurship among influencers, which align, respectively, with what I call relational and simulation tactics deployed in the visibility game. Influencers who adopt relational tactics believe that Instagram’s algorithms can accurately detect strategies that circumvent true connectivity, and, as such, influencers should focus on developing “real” relationships with other users. Influencers who adopt simulation tactics believe that “real” relationships are easily simulated in algorithmically undetectable ways, with certain tactics signaling popularity to the algorithm even in the absence of abundant ties. Further, I suggest that the development and implementation of relational and simulation tactics seem to correspond to gendered notions of labor. Ultimately, I argue that while Instagram’s algorithms determine the viability of tactics that may be adopted in the visibility game, they do not determine which tactics influencers will adopt. 
Instead, influencers' broader sense of themselves—namely, what it means to be an influencer and the gendering of labor—drives strategizing. Thus, I suggest that practical knowledge matters alongside technical knowledge for playing the visibility game.

The Algorithm as Involuntary Adversary

Instagram's algorithms act as the material enactment of the rules of the visibility game, and influencers recognize their regulatory role. Exemplifying this, influencers frequently refer to Instagram's algorithms with language reminiscent of government authorities or law enforcement. For example, one user wrote in a forum that the algorithm "decides the 'punishment' for those 'found guilty' (by the algorithm) of violating the Community Guidelines." Other influencers similarly described Instagram as "actively tracking" certain behaviors, "doing a sweep," or "cleaning up their platform," as well as "cracking down on" or "coming down hard" on certain behaviors. In line with such language, influencers also describe ways of evading algorithms as they engage in strategies they believe Instagram might not like, even if these strategies are not explicitly forbidden. Their sense of being surveilled and governed by Instagram and its algorithms extends to every aspect of their labor, including the production of content. For example, one influencer who lamented algorithmic ranking on Instagram and envisioned a return to a chronological feed wrote, "It would be about creating cool and interesting content again, as opposed to the same 5 unoriginal images you routinely see now because we're all so afraid of no longer being relevant and doing something the algorithm doesn't like." Influencers like this individual believe that algorithms enforce a particular order by which they must abide.

In recognizing algorithms' regulatory role, influencers understand their relationship with "the algorithm" as both adversarial and involuntary. I describe this dual understanding of Instagram's algorithm through interviewees' responses to a two-part question: "If Instagram's algorithm were a person, how would you describe them, and what would your relationship with them be like?" In characterizing the algorithm as an adversary, influencers used descriptors like "manipulative," "abusive," "bully," and "enemy." For example, Maryam described her relationship with the algorithm as "a love-hate relationship which is kind of toxic." Similarly, Krissy described the algorithm as "a mysterious person who you can never really quite be sure if they are out to help you or hurt you." Christina told me: "[The algorithm doesn't] feel warm and fuzzy to me. They feel very much like the cool girl's club that we're just not deemed worthy to be in their presence." These kinds of responses communicate feelings of anxiety, vulnerability, frustration, anger, and sadness provoked by the platform's algorithm. These sentiments relate to a sense of being at the mercy of Instagram and its algorithm. Influencers recognize, at once, the algorithm's power and their inability to change it. While influencers have some latitude in the tactics they deploy to gain visibility, if they wish to be successful, they must still follow the rules. An exchange in a Facebook group about a collective exodus from Instagram in protest of its algorithm encapsulated this point:

User 1: I doubt people will leave Instagram in the kind of numbers it would take for them to reverse course and stop using the algorithm. Their growth projections continue to be huge and they've invested significantly.
They’ll continue in this direction and keep refining the algorithm in ways that make sense to them. Like it or not, we need to figure out how to make it work for us User 2: This, it's simply a case of if you can’t beat ‘em you might as well join ‘em ... It sucks, but we can’t change it, so you gotta just adapt or get left behind.” As these statements exemplify, influencers feel forced to abide by Instagram’s algorithm. In acknowledging the complexity of their relationship with the algorithm—for example, describing it as “a love-hate relationship”—they reveal a sense of dependency, rather than affection. Related to such a dependency, influencers commonly characterize their relationship as compulsory. For influencers, there is no recourse for their lack of control over the algorithm, aside from learning the rules of the game and finding ways to work within them. They have to be on Instagram to do what they do and, so, working with the algorithm is non-negotiable. Carina expressed this sentiment when likening the algorithm to her aunt, a non-optional relationship. She said her relationship with the algorithm would be: “Probably like an aunt. Because I can't get rid of her, she's in my life and I like them, but yeah, just kinda never considered not engaging with that person.” Similarly, Emily described her relationship with the algorithm as a kind of obligatory professional relationship: Emily: I think it be a very fraught working relationship. You know? Somebody that you work with that you're like, "Okay, I have to be civil, I have to get along with." Keep a certain professional standard. But not someone that you're like, "Oh, let's hang out after work." KC: Yeah, "Let's go get drinks." [chuckle] 133 Emily: Right. Probably the person in the office that isn't in the group text message. As Emily’s description demonstrates, influencers know that in order to do their job, they must work with the algorithm. Krissy similarly characterized her relationship with the algorithm as a one she maintained out of necessity, rather than fondness. She said: “Kinda like a frenemy, like you want to be close to them so you can get info or get the inside scoop but you would never quite really trust them with your deep-down secrets.” Influencers’ characterization of the algorithm as a sort of obligatory adversary captures influencers’ understanding of themselves as (reluctantly) subject to the algorithm’s will. This understanding motivates them to learn the rules of the game in the first place. For example, in response to the question about her hypothetical relationship with the algorithm-as-person, Jessica joked “Oh, I would kiss ass all day. [chuckle] I would be like teacher's pet, we're gonna be besties, yeah.” This response captures influencers’ knowledge in action: knowing that the algorithm enacts control creates opportunities for subverting that control. In this way, influencers sometimes see their job as finding ways to make the algorithm their ally. 134 Figure 14. Meme created by Manu Muraro, founder of Your Social Team In the next several sections, I will describe the different ways influencers go about attempting to assert control over their pursuit of visibility via different strategies. First, I will introduce the authenticity ideal and corresponding relational tactics. Second, I will introduce the entrepreneurial ideal and corresponding simulation tactics. After this, I will explain how relational and simulation tactics mirror gendered conceptions of labor. 
The Authenticity Ideal In mainstream culture, digital influencers are seen as ordinary individuals who have achieved micro-celebrity through their labor and sheer will (Abidin, 2015; Duffy & Hund, 2015; Duffy, 2017). Influencers, thus, are defined and define themselves through an ideal of authenticity (Abidin, 2015; Duffy, 2017). In this way, influencers are a feature of an era marked by brand cultures, wherein “realness” has become a pervasive and animating force (Banet- Weiser, 2012). Strategically capitalizing on impressions of their “realness,” influencers cultivate 135 a sense of intimacy, accessibility, and relatability, which forms the basis of affective relationships with followers (Abidin, 2015; Duffy, 2017; Marwick, 2013; Marwick, 2015). An air of authenticity also differentiates influencers from traditional media and celebrities, who often serve audiences carefully crafted fantasies that stand in stark contrast to lived experiences of “real” people (Duffy, 2017). In other words, influencers give the impression that their lives are truly no different than their audiences’. Furthermore, whereas traditional celebrities tend to maintain distance from and build hierarchical relationships with their fans, influencers use their “realness” to create a sense of proximity to and parity with their followers (Abidin, 2015). Of course, influencers’ authenticity is often performative, as they are obliged to delicately manage their audiences’ perceptions of them in order to maintain and grow their following, as well as secure brand partnerships (Abidin, 2015). The authenticity ideal corresponds with one prominent reading of Instagram’s algorithm, which I refer to as relational influence and which I will now explain. Relational influence Relational influence entails broad, yet intimate, social relationships and expertise. Relational tactics entail relational labor, which Nancy Baym defines as: “regular, ongoing communication with audiences over time to build social relationships that foster paid work” (Baym, 2015, p. 16). Relational influencers believe in the “social” element of social media, which they maintain necessitates “real” or “human” relationships. Supporting this perspective, many relational influencers echo statements like the following: “Influencers will be rewarded by the algorithm if they can build close, human relationships with their followers” (Guthrie, 2016, p. n.p.). Similarly, home DIY and design influencer Julia Marcum wrote, “I’ve been doing some research and testing and thinking in the last few weeks and this is the conclusion I’ve come to 136 about the new algorithm: Instagram just wants us all to act like human beings” (Chris Loves Julia, 2018, p. n.p.). In this way, relational influencers believe that Instagram’s algorithms can accurately detect depth of relationships and, thus, influencers must seek authentic connectivity. In line with the authenticity ideal, relational influencers primarily emphasize active and genuine communication with current and potential followers, as well as creating “good” content that provides value to audiences. Relational influencers emphasize honesty and ethics in their approach, which they often discuss when contrasting themselves to simulation influencers. I elaborate on these principles of relational influence below. Relational influencers heed the authenticity ideal by emphasizing the importance of being genuine in order to build relationships with their followers. 
Emily captured this in an interview, explaining how she tries to strike the right tone in her posts:

Emily: I've been trying to keep it a little more chatty and not like a business type caption. I think sometimes I get... Sometimes I feel like my voice can be a little impersonal on Instagram, and I feel like those posts actually don't do very well. So I've been trying to be more chatty, and more asking questions.

KC: Why are questions helpful?

Emily: On the one hand, certainly, you hope that people will respond to these questions and leave comments. Definitely, that's a thing. But I also think that asking questions almost seems more organic to the algorithm.

Relational influencers see tactics like asking questions as a way to encourage current and potential followers to feel more connected to them because it communicates that they are listening. In this way, relational influencers cultivate perceived interconnectedness among their followers through communicative intimacies (Abidin, 2015). Many influencers are, in reality, conducting business on Instagram. Yet, as Emily's reflections suggest, relational influencers feel that taking a more personal tone encourages a sense of intimacy with followers, which leads to greater engagement with their posts, which leads Instagram's ranking algorithm to display their posts higher up in followers' feeds and on the Explore page.

Alongside building relationships with their followers, relational influencers also stick close to the authenticity ideal in their emphasis on creating "good" original content that will resonate with other users. Many relational influencers pride themselves on their creativity and use Instagram as a creative outlet and a means of connecting with other aesthetically-minded individuals. When relational influencers witness others succeeding with mediocre content, they often express indignation and frustration. For example, lifestyle influencer Lauren McPhillips wrote in a blog post: "I saw people with generic photos and boring captions get 800 likes and 300+ comments on every single post, a ratio that doesn't add up" (McPhillips, 2017, n.p.). By contrast, relational influencers believe that good original content leads to engagement by "providing value" to followers. In response to a request for tips on growing an account, one influencer wrote in a Facebook comment, "When posting I always make sure I ask myself 'what am I giving back to my followers'—It has been a successful strategy for me in the past." In this, influencers strive to understand their followers and respond accordingly. For example, Mia told me in an interview:

…if you want to be a savvy influencer, and you want to make a living from it, you need to understand what works, what doesn't work, what resonates with your audience. Again, it's the more people that engage with your content, and the more compelling your content is, the more reach it's going to get, more engagement it's going to get, the more ROI, so [you have] an influence. You really need to understand that, otherwise you're not going to make it very far.

As Mia's comments evidence, relational influencers emphasize a primary principle of authentic, reciprocal relationships. Relational influencers often position themselves in contrast to simulation influencers, whose methods they view as disingenuous, dishonest, and selfish. This positioning aligns with the idealization of authenticity within influencer communities.
Demonstrating this, one 138 influencer commented in a Facebook group that using simulation tactics encourages influencers “to not be their authentic selves in their online presence. It’s like everyone’s trying to follow a one-size-fits-all recipe but don’t realize that different ingredients result in different outcomes.” Further, many influencers rebuke simulation influencers’ prioritization of “business” above genuine, reciprocal relationships. As lifestyle influencer Summer Telban argued in a blog post, the simulation approach “reflects your intentions to only gain a number, not a friend or colleague or collaboration partner” (Telban, 2017, n.p.). Krissy expressed a similar sentiment in an interview: There's people have come in two, three years after me, and have surpassed my numbers within months. I know it's fake. I know what it takes. I did it genuinely and I'm at 14k, genuinely, you know authentically. So I'm like, "If you're at 30k in six months, I'm calling bullshit." It's just not it. I know what it takes. So it's hard now, it's hard to make new friends because I look at your numbers, I know you're fake […] I know it's a fake person and how can I get along with you when I know you're not doing this the right way? I don't know, I'm very ethical and moral and so this really bothers me, but some people are like, “Whatever. This is just a business. This is what you do to move forward.” Part of relational influencers’ issue with prioritizing business over “real” relationships is a pragmatic concern: relational influencers believe that using simulation tactics invites too much “dead weight” or “ghost followers.” Recall that dead weight refers to followers that never engage with influencers’ posts. Relational influencers argue that ghost followers hurt their engagement rate, which they believe factors into how Instagram’s algorithm prioritizes posts. In this sense, relational influencers believe that Instagram’s algorithm discerns “influence,” in part, through engagement rate. Thus, they believe a low engagement rate decreases the likelihood that their posts will appear at the top of their followers’ feeds or the Explore page. In some cases, former simulators-turned-relational influencers have found that they have accrued so much dead weight via simulation tactics that they cannot remediate the algorithm’s appraisal of their account. Marcus recounted in an interview: 139 So I was growing my account... I botted with engagement, doing follow/unfollow, and, I think, liking my people's pictures on my feed and followers for about four months, I think, this summer. And I got to 5,000 followers, and then my engagement dropped. So I was really upset and I was like, “Well, I don't want my account to be shadowbanned, and ruined, and never really grow past that.” So I deleted that account, restarted. […] it was just gonna be a lot harder 'cause I had 5,000 followers. Not all of them truly were interested in what I was doing. So the percentage that I was reaching was just very small, and I didn't wanna... […] It was gonna be really hard to dig myself out of the hole, I felt. So I was like, “I'm just gonna start fresh, and not do any automation, and just grow organically. Because Instagram is very, like pushing organic growth.” So I was like, “I need to post high-quality content, put a lot of work into it, talk to a lot of different people, comment, engage within the community. And just give as much value as possible,” because the more value you give, the more people are willing to follow you. 
So, that's what I started focusing my new page on, and now I'm just in the process of learning everything, so I can give that value back to people. Here, Marcus surmised that his use of bots led to his account being shadowbanned. From this realization, he determined that Instagram’s algorithm could fairly accurately discern real relationships, which lead him to decide to grow his account with relational tactics. Relational influencers also label simulation methods unethical and, sometimes, illegal. Beauty influencer Chloe Morello explained that those who use simulation methods defraud the brands they work with because their followers and engagement are “fake”: “The second a brand sends you a product, takes you on a trip, or pays you for posts ... you’re essentially committing a crime” (Morello, 2017, n.p.). Jessica explained in an interview the logic behind this claim: KC: So what's bad about buying followers? Jessica: It just looks... The conversion rate isn't there. So you have someone who buys followers and they work with Topshop on a campaign, and then the conversion rate isn't there. They have a 1% conversion rate, meaning 1% of your followers bought the jeans, which is nothing. So let's say two people bought the jeans, but you have a million followers. It should be like a 100,000 people bought the jeans, or at least 20,000. It should be somewhat relatable. But then the person doesn't buy the jeans. Then Topshop has a bad taste in their mouth and they work with this influencer who bought followers and then they don't wanna work with influencers again. So it creates an imbalance in the blogger ecosystem and it just puts some people out of work and it's cheating, it's just all wrong. KC: Yeah, I've seen other people who have called these kinds of activities like buying followers and using bots... They've called that actually fraud. Because you're not— 140 Jessica: Yeah, 100%. Yeah, it's false advertising. You're saying that you have an influence of 500,000 or one million people, but that's not really your influence. Your influence is 10,000 maybe. As Jessica explained, relational influencers view followers and engagement gained through simulation tactics as “false advertising” because they do not translate into actual purchases or clicks for brand partners. Moreover, time and again, influencers communicate that authenticity defines their pursuit of influence and fundamentally conflicts with the deception entailed by simulation tactics. For example, Sofia talked about how the profit-oriented nature of simulation influencers “felt wrong” and “dishonest”: Well, you're tricking people in thinking you're popular when you're not. At the very beginning, it wasn't too big of a deal because it was the new thing and people used it as, like, "Oh, the algorithm is hiding us. It's hiding our work from being seen, so we need to help each other." And it started like that. Then, it very quickly escalated to a, "Oh, no. Now I just wanna look popular and now I just want big numbers." Because I get a series of ... how do you say? If I have big numbers, I can get things. So I can get money, I can get products, I can get all this. So, like, fuck honesty. I'm just gonna get big to get the things that help me. And I don't have that mentality, 'cause I'm not that interest in ... 
I mean, interested in truth and that's not truth, so I don't care about acting like I'm popular and having like 100 comments under one of my posts if those comments are coming from people that are leaving me comments just so I'll leave them a comment to show that we are followed when we're not, you know?

Relational influencers believe that focusing on relationship building results in a greater return on investment than simulation methods. They view the approach as more productive because they believe it prioritizes quality over quantity—that is, the depth of relationships with followers over the number of followers and engagement. Influencers know they need others to engage with their posts in order to gain visibility on the platform. They also know that the purpose of the algorithm is to show users content that will captivate their attention. They further recognize that Instagram's algorithm enacts various penalties for engaging in inauthentic behavior. Relational influencers interpret this series of insights to mean that they must focus on cultivating organic engagement and "real" followers in order for their posts to be algorithmically prioritized above others in a sustainable way. In short, they assert that the best way to be algorithmically perceived as having authentic connections is to simply build authentic connections. The authenticity ideal, which gives rise to relational influence, coincides with a second ideal of entrepreneurship among influencers. I introduce the entrepreneurial ideal in the next section.

The Entrepreneurial Ideal

Given their self-presentation as ordinary individuals, influencers premise their influence on "entrepreneurial gumption" (Banet-Weiser, 2012). As with the authenticity ideal, contemporary brand culture fosters a pervasive logic of entrepreneurialism—the individualistic idea that "everyone is entrepreneurial" (Banet-Weiser, 2012, p. 217). The pervasiveness of entrepreneurialism also symptomizes the economic instability of the past decade and represents a shift in economic well-being as the responsibility of the individual, rather than a shared responsibility among organizations and government (Hearn & Schoenhoff, 2015; Neff et al., 2014). As Brooke Duffy and Jefferson Pooley put it, a "roving, anxious precarity gets channeled into compulsory self-improvement. Swimming ahead, or merely treading water, requires an entrepreneurial subjectivity: you are your own product, and its salesman too" (Duffy & Pooley, 2019, p. 42). This shift toward individual responsibility inspires a project of self-production and self-improvement that emphasizes ingenuity and hard work (Marwick, 2013). Social media plays an important role in the entrepreneurial ideal by promising a means of independently supporting and promoting oneself (Duffy & Hund, 2015; Marwick, 2013). Entrepreneurialism drives creative and media industries and encourages the idea that anyone can succeed in this realm with a little bit of smarts, perseverance, and grit (Duffy & Hund, 2015; Marwick, 2013). In reality, the path to success is not straightforward or easy, and influencers often achieve success by way of pre-existing social privilege (Banet-Weiser, 2012; Duffy, 2017). The entrepreneurial ideal corresponds with a second prominent reading of Instagram's algorithm, which inspires some influencers to pursue simulated influence.
Simulated influence

Simulation influencers, or "simulators," argue that high degrees of visibility can be achieved by seeking engagement and followers beyond or without efforts to build authentic relationships. Simulators prioritize metrics over intimacy, treating comments, "likes," and shares as "social currency" (Marwick, 2015). Similar to hackers, simulators identify possibilities based on their understanding of the logics of the underlying code (Galloway, 2004). In discussing the simulation approach, simulators echo the entrepreneurial ideal, emphasizing innovation, the ability to problem solve, independent achievement, and accrual of status and capital.

Many influencers acknowledge that the easiest way to simulate connectivity is through using automation services, or "bots," to engage with posts or follow accounts. Efforts by Instagram to crack down on bot activity—what one influencer even referred to as "The End of the Bot Era" (Decaillet, 2017)—have led to increasingly sophisticated attempts to mimic human behavior with bots, which demonstrate simulators' entrepreneurial commitment to innovation and problem solving. Bots help influencers maximize their time by automating activities they would normally do manually. For example, Cameron rationalized his use of bots in an interview, saying: "I just wrote code so that you didn't have to go in and spend 20 minutes liking other random people's post during the day." While bots permit efficiency, as Cameron discussed, they also often invite action blocks, or limits placed on actions an account can take. Figuring out how to evade action blocks is an ongoing process for simulators, as Instagram regularly updates its algorithms. Recognizing this point, in response to concerns about Facebook instituting machine learning for detecting bots, a user in a forum thread commented: "It's always gonna be like this. They build a wall, we make a new hole." To make such "holes," simulators who use bots typically recommend a "warming up" period, wherein follow or engagement bots will incrementally increase their actions over time. For example, a user on a forum described a strategy where they would increase the number of automated follows by five each week over the course of four weeks. This, they said, "in my mind, resembles the process of addiction to Instagram." Like this user, most simulators who use bots also take care to vary their bot activity in human-like ways. Another user explained the rationale behind this in a forum: "The more you vary and randomize actions, the harder it is for them to detect 'bot-activity.'" To accommodate simulators in their attempt to mimic human behavior, automation software like Jarvee offers sophisticated settings that allow simulators to direct their accounts to perform activities in intervals, frequencies, and times of day that resemble "normal" user activity. Jarvee settings also include options like avoiding following non-English speaking users, prioritizing following users with "high follow back ratios," avoiding liking posts without captions, only following "males" or "females," and so on (see Figures 15 and 16). The complexity of settings helps to drive home just how much time, thought, and energy simulators have put into bot strategizing.

Figure 15. Jarvee follow settings

Figure 16. Jarvee follow settings (cont.)
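The "warming up" and randomization logics simulators describe can be made concrete with a short sketch. Everything below is hypothetical and purely illustrative: the weekly increment of five follows echoes the forum post quoted above, but the function names, default values, and jitter range are my own assumptions rather than the settings of Jarvee or any other real automation tool, and the code performs no platform actions.

```python
import random

# Hypothetical sketch of the "warm-up" and randomization logic simulators describe:
# ramp activity up slowly and vary the timing so it looks less bot-like.
# No accounts, APIs, or platform actions are involved.

def warmup_schedule(start=5, weekly_increase=5, weeks=4):
    """Daily follow targets that grow by a small increment each week."""
    return [start + week * weekly_increase for week in range(weeks)]

def jittered_delays(actions, base_minutes=12.0, spread=0.5):
    """Randomized waits (in minutes) between actions, so no two intervals are identical."""
    return [round(base_minutes * random.uniform(1 - spread, 1 + spread), 1) for _ in range(actions)]

print(warmup_schedule())                      # [5, 10, 15, 20] follows per day, week by week
print(jittered_delays(warmup_schedule()[0]))  # e.g., five waits between roughly 6 and 18 minutes
```

The point of the sketch is simply that the "human-like" variation simulators describe is itself a small engineering problem, which helps explain why they narrate these tactics in the entrepreneurial register of innovation and problem solving.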
Use of bots also demonstrates simulators' emphasis on independent achievement. In one common technique for using bots, known as the Master/Slave or Mother/Child method, simulators will set up multiple secondary, or slave/children, accounts to drive users to a main, or master/mother, account. Isaac Tinashe, owner of the digital marketing agency The Insta Hustle, describes this technique, saying:

The account to be grown is called the Mother account – its job is to post quality content, sit tight and look pretty, and not worry about a thing. The heavy lifting is done by the Slave/Child accounts, which are a series of accounts created with the primary purpose of promoting the Mother account. (Tinashe, 2019)

Tinashe goes on to explain that child accounts can promote the mother account by inviting followers via direct message to follow the mother account and by tagging the mother account in bios, pictures, and captions. As compared to other tactics—particularly relational tactics—the mother/child method demonstrates a firm commitment to self-sufficiency in pursuits of visibility. Rather than team up with other influencers, simulators choose to "go it alone."

Although some influencers still use bots, since Instagram's crackdown on bots, influencers have devised other tactics to boost engagement in ways that, as one influencer put it, "looks legit" (to algorithms). A popular method is the use of reciprocal engagement groups known as "pods." Pods are private group messages where influencers assemble to share newly published posts so others in the pod can "like" or comment on them. Pods depend upon reciprocity: members are expected to engage with others' posts before sharing their own with the group. As one influencer described it in a Facebook group, the purpose of pods "is to accelerate the rate of engagement and growth of an account by going viral." That is, simulators use pods to seed engagement, thereby increasing the odds that followers and others will see the post. In this way, pods essentially replicate the logic underlying the use of bots for leaving likes or comments, but instead rely on "real" accounts.

Most often, pods are comprised of strangers, although strangers may become friends or collaborators as time goes on. Some pods begin with groups of friends and expand as network connections are added. These kinds of pods veer closer to a relational tactic, given the emphasis on "real" connections. Other kinds of pods will have many more people, typically influencers within a particular niche, which simulators believe helps the algorithm serve their posts to the right audience, based on an understanding of the importance of relationships as a data signal used in algorithmic ranking. Pods, like bots, demonstrate how the simulation approach involves devoting energy to innovating and solving problems that crop up in the practice of simulating connectivity. Evidencing this point, one influencer posted in a Facebook group, "After lots of trial and error and many bans under multiple accounts, I've discovered the threshold between getting flagged and going viral by using pods." For simulators, such dedication marks the key to success. This kind of dedication is important, as simulators must constantly adjust their tactics as Instagram learns of them and updates its algorithm accordingly. For example, early on simulators realized that comments with only a few words were being detected as spam. As a result, many pods now have rules about the number of words that must be used in a comment so that comments appear "real" to algorithms.
Similarly, in the beginning, simulators formed pods via group messages on Instagram, but eventually many simulators migrated to Telegram and other messaging apps. Marcus recalled the reason for this shift in an interview: Then Instagram started being able to detect that, “Oh, these are engagement groups and the DM groups.” They [influencers] took it to Telegram or WhatsApp, and they started making groups on there. And it's just so... It's always something new. They always try to find a work around to start building things up like that. As Marcus captures, constant innovation and problem solving are essential to the ethos of simulation influence. 147 In addition to bots and pods, another prominent simulation tactic, which helps demonstrate simulators’ entrepreneurial emphasis on status, is the follow/unfollow (F/UF). With F/UF, an influencer will follow users (manually or with a bot) to encourage them to reciprocate and follow them back. Then, the influencer will unfollow many (or all) of the accounts, particularly those who did not follow back. The tactic relies on a transactional logic in which simulators follow others to get something in return (i.e. engagement or a follow-back). Strategic unfollowing helps maintain a high ratio of followers to following, which connotes status (Moss, 2014), and which influencers believe Instagram’s algorithms considers when regulating visibility. By strategically unfollowing, simulators keep the number of accounts they follow low and composed only of users of potential value (e.g. active users, users with similar interests, and “smaller” accounts). As one influencer wrote in a Facebook group after “cleaning” their account with strategic unfollows: “Now my number of followers is greater than my number of following and it’s awesome! More people are seeing my posts and these are people I want and need to see my stuff.” F/UF, like other simulation tactics, prioritizes the goal of accruing—or, at least giving the impression of—status. With simulation tactics, entrepreneurial gumption is a driving force. Simulators do not negate the value of relationship building; they merely have different priorities. Simulators rationalize their tactics by pointing to the need to remain competitive, as well as conversant with the rules: “pods resulted from Instagram slashing the engagement people are used to seeing. Users have to ‘pod-up’ to get their engagement rates up and it isn’t (to my knowledge) against any IG ToS ... yet.” Of F/UF, another influencer wrote, Just because people have strong opinions about how the algorithm SHOULD work doesn’t mean it will change which methods work and which don’t according to how IG coded their algo. People who say unfollowing ghost followers is bad for you have a superficial understanding of the algo. 148 In the same vein, another influencer pointed out that the game demands a significant time commitment and defended pods by comparing them to hiring a nanny: “You get an extra pair of hands to help you out. I don’t have time (nor should anyone) to spend half my day on Instagram.” With these views, simulators underscore to the importance of generating engagement as a visible form of social currency, a means of documenting one’s status and success. Moreover, simulated influence is built upon the notion that Instagram’s algorithms can be easily duped. They believe the algorithm can be tricked into reading their activity as indicative of “true” connectedness. 
Thus, they valorize the resourcefulness and industriousness of discovering the most efficient, compliant tactics. In doing so, they affirm individualistic narratives of personal prowess and taking the initiative to augment their influence on their own. Even with pods, where reciprocity is key, cooperation often resembles a business agreement, rather than friendly support. In short, decisions to adopt simulation tactics appear to be motivated by the entrepreneurial imperative of nurturing one’s own financial and/or professional success. The Gendering of Relational and Simulated Influence Before concluding this chapter, I want to heed Brooke Duffy’s call to “remove the gender-neutral lens [of work on the digital economy] by foregrounding digital labour’s reliance on socially constructed notions of gender” (Duffy, 2016, p. 453). In this, I suggest that divisions between relational and simulated influence seem to map to gendered conceptions of labor, reanimated in the context of the visibility game. Whereas simulators principally seek to produce capitalist value, relational influencers prioritize relationships and caring labor (e.g., “making a difference” in others’ lives). Thus, part of the story of reading the Instagram algorithm for what it means involves the consideration of gender performance. Influencers’ gender identities and how they interpret Instagram’s algorithm in their praxis seem to be interdependent. 149 Traditionally, women’s labor has been associated with the private sphere. Historically, women have been socialized to perform “reproductive labor” (i.e., unwaged domestic work and caring labor) (Weeks, 2007) through the cultivation of “soft” interpersonal skills, which do not directly contribute to material gain, but are nevertheless vital for upholding the broader economy. By contrast, men have been socialized to perform “productive labor”—or, “waged production of material goods” (Weeks, 2007, p. 235)—and to avoid relational and affective work, which require embodying feminized traits like care, empathy, and affection. Although it is widely acknowledged that gender has no core essence that would guarantee certain traits or skills, the gendering of labor continues due to the performativity of gender and labor (Weeks, 2007). In the realm of the visibility game, men steeped in discourses of “affective labor as feminine” may view simulated influence as a more resonant interpretation of the rules Instagram’s algorithm articulates, as it more closely aligns with laboring practices historically associated with masculinity. Further, men may feel greater self-efficacy in their technical and problem-solving skills than in their relational skills (as a result of social discourses around masculinity), which may further nudge them towards a reading of what the algorithm means that better fits their sense of self. Additionally, simulated influence’s emphasis on bots depends upon technical skills that have been historically coded as masculine. In this way, if men do tend to lean towards simulated influence, it would mirror the overrepresentation of men in the tech industry (National Science Foundation, 2019). The gendered undercurrents of relational vs. simulated readings of Instagram’s algorithm should not necessarily be understood as consciously enacted. Influencers make sense of Instagram’s algorithm by bringing together latent knowledge of various discourses, including those related to authenticity, entrepreneurship, and gender. 
Moreover, the use of relational and simulation tactics does not perfectly correlate with gender identity. Many masculine-presenting influencers I encountered preferred relational tactics and many feminine-presenting influencers I encountered preferred simulation tactics. In bringing up discourses on the gendering of labor, I only suggest that, for some influencers, in some ways, practices of reading Instagram’s algorithm may intersect with and/or extend broader constructions of social identities. In this case, readings of Instagram’s algorithm may be performative of gender. Conclusion In this chapter, I have explored different kinds of knowledge about algorithms evident among Instagram influencers. I distinguished between technical and practical knowledge of algorithms and evidenced the significance of each for influencers’ pursuit of visibility. In this, I explained how each kind of knowledge factors into the tactics influencers develop and deploy in the visibility game, though in distinct ways. While technical knowledge primarily assists influencers in determining viable tactics, practical knowledge helps influencers determine which tactics resonate with their sense of self, as heavily shaped by their membership in the social world of influencers. I described two subtypes of technical knowledge: knowledge of algorithms’ design and knowledge of their methods. For influencers, the most salient design knowledge concerned the goals Instagram’s algorithms serve and the iterative nature of their design. Understanding the goals Instagram’s algorithms serve gives influencers an overarching framework with which to assess the soundness of other technical details they learn about the algorithms. Knowing that (and when) the algorithms change prompts influencers to anticipate, attend to, and address changes that would require them to adjust their tactics. Turning to methods knowledge, influencers primarily focus on data signals used in algorithmic ranking and penalties enforced by algorithms. Such insight helps influencers grasp which actions matter (and when) for maintaining and increasing visibility, as well as which actions may undermine their pursuit of visibility. While technical knowledge helps influencers inventory the full range of possible tactics, practical knowledge allows influencers to decide which tactics to adopt. I have shown that preexisting discourses on authenticity and entrepreneurship within the influencer community shape responses to algorithms at least as much as technical knowledge. From these discourses we see two divergent readings. Relational influencers believe that Instagram’s algorithms can accurately discern “authentic” connectivity, in line with the authenticity ideal, which involves building relationships, producing good content, and offering something of value to their followers. Simulators believe that Instagram’s algorithms cannot fully discern authentic connectivity and view influence as built, at least in part, on boosting algorithmically recognizable signals of popularity. These influencers characterize their tactics in terms of the targeting of status markers, emphasizing individualistic narratives of innovation and problem solving. Further, I have suggested that the respective expressions of relational and simulated tactics map to gendered notions of labor.
Thus, for influencers, practical knowledge represents the instrumentalization of the rules of the visibility game in line with self-defining discourses of their social world, as well as the (gender) discourses beyond it. Although we often focus on technical knowledge when discussing user understandings of algorithmic systems, practical knowledge matters for how people use systems. In focusing on technical knowledge, previous work often includes statements about the accuracy or soundness of users’ knowledge. However, as we can see in this chapter, assessing the “accuracy” of people’s knowledge about algorithms is not always possible or productive. Algorithmic literacy is situated in action, steeped in social discourses, imbricated with social identities and corresponding boundaries. Further, when we attempt to measure algorithmic literacy, we must keep in mind that we are making value judgments about what kind of knowledge is important. This chapter’s findings suggest that the contextual nature of algorithmic literacy renders its value contingent upon the interests, activities, and sites in which individuals and algorithms are located. These matters are not external to algorithmic literacy, they are constitutive of it. In order to determine what people need to know about algorithms, we have to understand their circumstances and goals. Lastly, in acknowledging the close relationship between social discourses and algorithmic literacy, as evidenced in this chapter, we must also acknowledge that these discourses have the potential to be reinforced through the feedback loop of algorithms (Carah & Shaul, 2016). Just as influencers and other users actively make sense of algorithms through a prism of identity, algorithms may recursively construct the meaning of their social identities through their role in structuring praxis. CHAPTER 6: Raising Algorithmic Consciousness in the BreadTube Community In my investigations of BreadTube, I routinely encountered talk about a particular experience on YouTube, which partially underwrites interest in the visibility war. One interviewee named Heather talked about this experience in order to lend support to claims about YouTube’s right-wing (algorithmic) bias. She told me: “I think I remember, I watched one Prager video to see what that was about, and the next thing I'm getting a Sargon of Akkad. I'm like I... What?” Here, Heather explained that she felt she had made her (leftist) views clear to YouTube through her browsing history. Yet, after watching a video from Prager University, an organization that produces videos from a far right perspective, she then began receiving recommendations for the far right YouTuber Sargon of Akkad. This story demonstrates the importance of cultural knowledge learned via membership in a social world—in this case, BreadTube—for making sense of algorithms. Without knowing who PragerU and Sargon of Akkad were, or without having had her curiosity piqued enough to watch a PragerU video in the first place, Heather’s experience of being recommended a Sargon of Akkad video would not have occurred or likely would have gone unnoticed. Yet, it was a salient memory for Heather, which offered insight about YouTube’s recommendation algorithm. In order to make sense of this experience, Heather needed to draw on her cultural understanding of the connection between PragerU and Sargon of Akkad, as allies on the opposing side of the visibility war.
For BreadTubers, the connection between these two channels is common knowledge, as a result of the channels’ routine appearances in BreadTube discussions. Presumably, Heather watched the PragerU video in the first place in order to understand, as she said, “what [the channel] was about,” having seen it referenced. In this chapter, I explore learning experiences like this, which illustrate how BreadTubers learn about algorithms within the particular context of the system of social relations and practices that make up their social world (Lave & Wenger, 1991). In chapter four, I introduced the concept of platform epistemology to explain how platforms’ social, technical, and economic structures shape the construction and legitimization of information about algorithms. BreadTubers’ learning practices offer additional exemplars of platform epistemology, but also illustrate how these practices are situated in the shared interests, ideologies, sites, and activities that define their social world. The features that define membership in the BreadTube community provide the perfect level of idiosyncrasy for viewing the situatedness of learning about algorithms. The more narrowly bounded nature of BreadTube, as compared to that of Instagram influencers, helps illuminate specific discourses and “palpable matters”—the material features of the world (Lave & Wenger, 1991)—with which to more clearly exemplify “learning as a dimension of social practice” (Lave & Wenger, 1991, p. 47). As I will show, BreadTubers’ learning practices closely intersect with their participation in the visibility war, as the central activity of the community. Participating in the visibility war gives essential context for parsing and synthesizing information about algorithms. Drawing forth insight about YouTube’s algorithms entails bringing to bear an understanding of the key figures, discourses, and concerns implicated in the visibility war. In short, in this chapter, I extend platform epistemology to emphasize how algorithmic literacy is situated—participation in social worlds matters for the cultivation of critical algorithmic literacy. Previously, I described four ways Instagram influencers learn about algorithms: learning by doing, learning from others, learning from media, and learning from training. BreadTubers also learn in these four ways, though with some variations. First, while BreadTubers do not emphasize networking like influencers do, they do seek each other out for political discussion, through which information about algorithms often passes hands. Second, Instagram influencers organize in communities of practice around the pursuit of visibility. In contrast, not all members of the BreadTube community are pursuing personal visibility. While some BreadTubers are YouTube creators, many are not. BreadTube creators help infuse information about algorithms into the community via algorithm talk with each other and via meta-commentary in the content they produce. Third, BreadTubers’ anti-capitalist ideology makes them averse to for-profit models of training, like we saw with gurus among Instagram influencers. This ideology also engenders a cynical attitude towards platforms as capitalist corporations, which leads BreadTubers to rely less on YouTube’s disclosures about its algorithms than Instagram influencers do on Instagram’s disclosures. In what follows, I describe the four ways BreadTubers learn about YouTube’s algorithms, while noting these contrasts between the two cases.
Learning by Doing Observation and Reflection Similar to influencers, many BreadTubers commonly learn about algorithms experientially by observing and reflecting on content they see in their feeds. For Instagram influencers this kind of learning often zeros in on relationships—whose content they typically see in their feeds—in order to draw conclusions about algorithmic curation. For BreadTubers, learning via observation and reflection centers more squarely on the topical and ideological composition of content made visible to them. Of course, Instagram influencers also learn by observing and reflecting on “what” they typically see in their feeds, and conversely BreadTubers also learn by observing and reflecting on “who” they typically see on YouTube. However, BreadTubers’ experiences make it easier to see how this kind of learning implicates participation in a social world. BreadTubers, in particular, routinely mention learning about YouTube’s recommendation algorithm as a result of seeing right-wing content crop up in their recommendations. Nearly all interviewees told me stories similar to Heather’s, as described at the beginning of the chapter. For example, Luka told me in an interview: …there's a sub-genre of a very... I don't even know what this culture phenomenon is... […] It's people like feminist and so-called SJWs [social justice warriors] getting “owned" by these “smarty” men on Internet. Usually with a cartoon avatar. It gets made fun of on Twitter a lot, and I used to see that and I'm like, "Oh, what is that?" I wondered. Then, it's suddenly started getting recommended to me like crazy. Later in the interview, Luka described himself as “committing the sin” of clicking on this kind of video and subsequently being “haunted” by it ever since. This genre of right-wing content Luka describes may not be immediately recognizable to all, but it is immediately recognizable to BreadTubers. For example, in the following quote, Sarah described an experience similar to Luka’s, referencing the same genre of videos in slightly different terms: I have not watched any kind of "feminist destroyed by facts and logic" video, but I get them recommended to me all the time. I pretty much watch BreadTube videos and cute animals and nothing that would suggest that I like those things, but I still get recommended those all the time. And people have done tests where if you watch, say, Steven Crowder, if you watch a video against him, you'll get recommended his videos and other right-wing videos. But if you do the same with watching an anti-left BreadTube video, you won't get recommended their videos, you'll get more of the same "debunking the left" or other right-wing talking points. So it definitely seems like there's more to it than just an accidental algorithm. This experience is a common touchstone in the BreadTube community because it suggests the propagation of right-wing content on YouTube, which is a key factor in motivating the visibility war. In fact, this experience is so common among BreadTubers that, as Luka and Sarah demonstrated, they have a shared understanding of the particular genre of right-wing content—i.e. “SJWs getting ‘owned’ by these ‘smarty’ men on the Internet” with “facts and logic”—typically recommended. Familiarity with this genre permits knowledge building from this experience and speaks to their embeddedness in the social world of BreadTube. Indeed, as Luka suggested, BreadTubers often joke about and lampoon this genre in discussions.
BreadTubers’ 157 familiarity with this kind of content makes the “misunderstanding” (by the algorithm) of BreadTubers’ political preferences salient, and, thus, particularly generative of learning. This “mistake” draws BreadTubers’ attention to clues about the logic of YouTube’s recommendation algorithm and what it “knows” about them. Thus, the experience captures a key way many BreadTubers learn about the algorithm and also shows how participating in the BreadTube community actuates learning therein. Occasionally, BreadTubers graduate from casual observation and reflection to more deliberate empirical investigations, which I will now describe. Empirical Investigations The visibility war has occasionally motivated BreadTubers to engage in different kinds of informal empirical investigations similar to those conducted by Instagram influencers. In these investigations, BreadTubers’ understanding of key players in the visibility war enables them to make sense of YouTube’s recommendation algorithm by providing crucial context for algorithmic outputs (e.g., recommended videos). For example, in an interview, Anthony recounted a time when he experimented with YouTube to understand how the platform decided which videos to algorithmically recommend to him: So, it's just something I noticed, just because at the time I was watching a lot of political videos, so I would occasionally see just Blaze TV or Daily Wire or [Steven] Crowder showing up to my feed, but I would chalk it up to it's just showing me something else, showing me politically-related videos. But then I logged into a private browser to open up a link, and when I opened it, I noticed that right away it gave me a bunch of [Ben] Shapiro, a bunch of Crowder, a bunch of other stuff, as suggestions. I don't watch the content at all, so I don't see why this would even pop up, if it's not even cached in my regular browser. So I tried it again, and this time I used the Microsoft Edge. I cleared the cache. I cleared everything out. I cleared all the cookies. So it was a clean browser, never used this at all. And you open up a new private tab, and again, I clicked on a random video, like in this case I clicked on [a Billie Eilish] video and then I saw those things popping up. Recognizing Blaze TV, The Daily Wire, and Steven Crowder as right-wing YouTube channels, 158 Anthony was able to use knowledge he cultivated as a member of the BreadTube community to see “misjudgments” of his preferences in the content algorithmically recommended to him. This initial observation also inspired him to employ different techniques to mask his identity on YouTube in order to see what content the platform might recommend to him. Anthony’s participation in the BreadTube community likely motivated him, at least in part, to experiment in this way. Conversations about YouTube’s right-wing bias abound within the community and these conversations were likely top of mind as Anthony recognized right-wing content in his recommendations. In testing the system, Anthony found that even when he did his best to mimic a new user with a clean slate, YouTube recommended him right-wing content. This led him to conclude that, indeed, as many BreadTubers suggest, the platform has a right-wing bias. Occasionally, BreadTubers conduct more systematic investigations than the kind that Anthony described. Typically, these in-depth empirical investigations are also motivated by their commitment to the visibility war. 
In chapter 4, I suggested that platform features that make algorithmic processes more visible enable and constrain empirical investigations. One empirical investigation posted in r/BreadTube illustrates this point. In this post, a gaming YouTuber known as Tiny Pirate described an informal network analysis he conducted of right-wing versus left-wing creators (see Figures 17 and 18). Figure 17. Tiny Pirate’s network analysis of BreadTube. Figure 18. Tiny Pirate’s network analysis of “RightTube.” Explaining his methodology and findings, Tiny Pirate wrote: In both diagrams I started with a seed channel (Contrapoints and Sargon, respectively). I then looked at who was in the "related channels" list for the seeds (it's usually between 4 and 6 channels, by the way). With those noted down I then jumped to the related channels and looked at who was related to these channels, listing who they were linked to. The results are interesting. In Lefttube there were a total of 27 entities, and 6 mutual (green) linkages made by YouTube. Let's call that a 22% interconnectedness factor. There's a proper technical term for this and anyone who has done graph theory - feel free to chip in - I've forgotten and this is just a bit of quick and dirty analysis. On Righttube there is a much denser relationship between channels. Only 11 channels featured in the graph and yet there are 6 mutual connections, leading to a interconnectedness factor of 54%. Righttube is TWICE as interconnected as lefttube. Also interesting, righttubers were linked to Fox News (lol), which, I ponder, has an impact on their ability to be suggested alongside more "mainstream" content (ie, you can go from a Fox News article to Sargon [of Akkad] talking about the news, to Nazis doing Nazism pretty easily via recommended/suggested videos). As he explained, YouTube’s “Related Channels” feature allowed Tiny Pirate to inquire about the connectedness of “RightTube” and BreadTube. This feature, which was discontinued in May 2019, provided autogenerated suggestions of similar channels and, thus, offered Tiny Pirate some visibility into algorithmic recommendation on the platform. From his network analysis, Tiny Pirate surmised that the greater degree of connectedness of RightTube he found meant that “The algorithm has assessed righttube as very highly related to each other and so once you're in that sphere you're going to find your recommended videos endlessly bounce around the echo chamber every time you open YouTube.” By contrast, Tiny Pirate wrote, “…lefttube channels are more loosely related and aren't driving the algorithm to reliably recommend one left tube channel's videos to another lefttube channel's audience (well, not as reliably as my little sample of righttube).” By gathering data from YouTube and mapping networks of YouTubers on the right and left, Tiny Pirate produced new empirical evidence that he and others in the BreadTube community could use to explain why, in their estimation, YouTube tended to recommend right-leaning content more than leftist content. Notably, this investigation would not have been possible without the related channels feature. Thus, corroborating what we saw with Instagram influencers, platform features provide users with important data about algorithmic processes, which can be put to work in empirical investigations.
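To make the arithmetic in Tiny Pirate’s post concrete, the sketch below reproduces his percentages under one plausible reading of his description—that the “interconnectedness factor” is simply the number of mutual related-channel links divided by the number of channels in the graph. This is my reconstruction for illustration, not Tiny Pirate’s code, and it differs from the standard graph-theoretic notion of density, which normalizes by the number of possible links rather than the number of channels.

```python
# A minimal sketch (not Tiny Pirate's code) of the "interconnectedness factor"
# described in the quoted post, assuming the factor is mutual "related channels"
# links divided by the total number of channels in the graph.

def interconnectedness(num_channels: int, mutual_links: int) -> float:
    """Mutual related-channel links per channel, as a percentage."""
    return 100 * mutual_links / num_channels

# Figures taken from the quoted post, not re-collected from YouTube.
print(round(interconnectedness(27, 6), 1))  # 22.2 -> the ~22% "LeftTube" figure
print(round(interconnectedness(11, 6), 1))  # 54.5 -> the ~54% "RightTube" figure
```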
Moreover, the discontinuation of the related channels feature demonstrates how platforms’ design choices impact what knowledge can or cannot be constructed about algorithms. Tiny Pirate’s investigation also exemplifies how individuals’ embeddedness in social worlds motivates and informs inquiries like this. Like other BreadTubers, Tiny Pirate wanted to know why right-wing YouTubers seem to be more visible than BreadTube creators, because this grants insight that can be mobilized in the visibility war. If the community were not engaged in a visibility war, Tiny Pirate likely would not have felt as compelled to conduct his investigation. Indeed, conducting the investigation, in itself, should be considered a form of participation in the visibility war. Further, for the investigation, Tiny Pirate needed to identify “seed channels,” representative of the respective ideological camps in the visibility war. For his analysis, he also needed to be able to accurately identify subsequent channels as definitively (or not) RightTube or BreadTube. Thus, this exemplifies how building knowledge about algorithms often requires drawing on knowledge cultivated via participation in a social world, in addition to data derived from platform features that afford such investigations. Aside from experiential learning, as I will now explain, BreadTubers also learn by way of online discussions with others. Learning from Others As we saw in chapter 4, Instagram influencers learn from others as they network and participate in communities of practice. While Instagram influencers are obliged to build their social networks in order to be successful, this is not strictly the case for BreadTube, as a community made up of both content creators and audience members. However, BreadTubers do still learn from others, which happens as they build their own community of practice. For BreadTubers, this community of practice revolves around two key activities of their social world: 1) discussing politics, which often refers back to the visibility war; and 2) consuming BreadTube content. I explain these below. Online Discussion BreadTube discussions unfold across the web—in r/BreadTube, Twitter, Facebook groups, YouTube comments sections—and reflect members’ shared interest in politics and the visibility war. BreadTubers are highly politically engaged and find their way to the community out of an interest in discussing electoral politics, public affairs, and social and political issues with others. These discussions feature matters related to strategizing in the war—how to amplify the visibility of leftist ideology on YouTube while redirecting audiences away from right-wing content—and, thus, have necessarily invoked algorithms, as algorithms regulate visibility. The constitution of the community—who BreadTube is—matters for these discussions. As I will explain, BreadTube includes constituencies of content creators and those with technical expertise. These constituencies impact the content and discussions in the community. Suggesting the prominence of visibility war discussions among BreadTubers, Natalie told me in an interview that being recommended right-wing videos on YouTube is “a well-documented phenomenon.
And I know there’s tons of videos and Twitter threads and Reddit threads that have documented that.” Similarly, in relation to how he learned about YouTube’s recommendation algorithm, David told me: 163 Discussions on Reddit is one, not just BreadTube but other subs [subreddits], some even non-political generally, I've seen YouTubers and YouTube comments mention these too. As for IRL [in real life], there isn't much anecdotes there from me, most of my friends are apolitical, except a few… As Natalie and David exemplify, BreadTubers actively seek out political discussions, which often address matters related to the visibility war. Not all Internet users will find or be exposed to such discussions in which exchanges of information, questions, and reflections about YouTube’s algorithms are customary. As Natalie suggested, there may be “tons” of online discussion about the right-wing bias of YouTube’s recommendation algorithm, or as David suggested, multiple subreddits that address YouTube’s algorithms. However, Natalie and David encountered these discussions as part of becoming members of the BreadTube community. Because BreadTube is an online community, most of its discussion takes place online, as BreadTubers follow each other on Twitter, subscribe to r/BreadTube, and otherwise actively customize their experiences online to invite in BreadTube discussion. In this sense, it is no surprise that Natalie and David encountered the discussions they did. Participating—or, at least lurking—in these discussions defines participation in the BreadTube community. As platforms play host to discussion, like we saw with Instagram influencers, they play an important role in activating connections between BreadTubers and shaping communication via affordances. Indeed, as BreadTubers perform their interest in the visibility war by engaging with and/or participating in discussions about algorithms, this presumably increases the visibility of algorithm talk within BreadTube spaces, as well as the likelihood that individual BreadTubers will see such discussions. As we saw in chapter 4, the role platforms play as content curators is an important factor in shaping the flow of information about algorithms. In the case of the BreadTube community, the expertise of BreadTube creators and members of the community with technical backgrounds are important factors in shaping the content of discussions about algorithms. 164 Learning from Content Creators BreadTube emerged from the fandom of a network of YouTubers, which makes following BreadTube creators a defining characteristic of participation in the community. Additionally, this means that many BreadTubers are content creators. In short, the BreadTube community is a community of practice for producing and consuming leftist YouTube content, particularly as the visibility war subsumes these practices. BreadTube creators engage in algorithm talk publicly as it relates to their role in the visibility war, including in their own videos. This algorithm talk among and metacommentary by BreadTube creators aid the BreadTube community in building knowledge about YouTube’s algorithms. Just like Instagram influencers, BreadTube creators rely on other creators for insight about optimizing content for YouTube’s algorithms and effectively garnering views. 
For example, a BreadTube content creator known as Thought Slime told me in an interview that he picked up a particular detail about YouTube’s recommendation algorithm partly by talking to other content creators: “I think it's all speculation. So if you kinda keep up with what other YouTubers are saying, and we talk to each other, it's just what you hear, it's very much a rumor mill situation.” Likewise, in his post in r/BreadTube about the networks of left-wing and right-wing content creators, Tiny Pirate explained he learned what he knew about YouTube’s recommendation algorithm, in part, by talking to other creators: I also talk daily to multiple channels in the 500k-5m sub range. These are primarily gaming channels but they have contacts and talk about industry trends so I hear some insider baseball now and then from people whose livelihoods depend on the algorithm. Here, Tiny Pirate seems to be describing figures akin to the gurus discussed in chapter 4 among Instagram influencers. Relatedly, in a BreadTube Facebook group I joined, occasionally creators will ask for advice on their YouTube channels, which is reminiscent of influencers’ communities of practice also formed via Facebook groups, though the primary purpose of the BreadTube Facebook group is for discussing and sharing BreadTube content. Thus, as in other BreadTube spaces, creators and audiences mingle. When creators engage in algorithm talk publicly within such BreadTube spaces, they spread relevant information to others in the community, namely those who are not actively trying to become visible online for a living. This information can then be used for collective strategizing in the visibility war. Creators also talk about algorithms in their own videos as a sort of meta-commentary, which further transmits relevant information to the BreadTube community. I use the term “meta-commentary” to refer to commentary in which creators self-reflexively discuss the content or visibility of their videos or channels. For example, notoriously, YouTubers regularly encourage their viewers to “like, comment, and subscribe” at the end of their videos to help them become more visible on the platform. Many BreadTube creators use this practice to explicitly talk about YouTube’s algorithms. For example, BreadTube creator Step Back History lamented at the end of his video: Subscribe, like, share, etc. because Step Back does not get a lot of algorithm love and I would love to have more people learn about responsibility with platforms. So, please, share with your friends. It really really helps. It’s really the only way this channel grows, because the algorithm hates me. (Step Back History, 2019) As another example, Gutian glibly said at the end of one of her videos: If you enjoyed this video or felt like you learned something, please help support me by liking, sharing, commenting, subscribing—all the stuff people do. I'm pretty sure it all means a lot, especially because I don't really know how the algorithm works so that could be it… (Gutian, 2019b; Gutian deleted her YouTube channel on May 14, 2020) Time and again, interviewees mentioned acquiring details about YouTube’s algorithm from meta-commentary like this.
For example, when I asked Will how he learned about changes to algorithmic moderation wrought by the so-called Adpocalypse, he told me: “But the Adpocalypse stuff, I don't think it was mainstream media as much as I saw it from YouTubers themselves, very loudly complaining about the effect it had.” Similarly, when I asked Luka how he learned that YouTube’s recommendation algorithm treats “dislikes” the same as “likes,” he told me: It may have been a LeftTuber actually, I think maybe like Shaun, is a channel that may have spoken about this, it just seems to be like an age-old YouTube wisdom at this point where like channels will often also gloat about getting a lot of dislikes, where they're like, "Oh, that drives up my popularity only." So, I've just come to accept this as truth. BreadTube creators, as Will suggested, often discuss—publicly and vociferously—their experiences with having videos taken down or demonetized. Both Will’s and Luka’s reflections exemplify how BreadTube creators’ meta-commentary helps the community learn about how the platform’s algorithms function and what that means for the visibility war. As BreadTube content commonly features language and imagery deemed “not advertiser friendly,” meta-commentary often serves to criticize algorithmic moderation on YouTube. For example, in her video “YouTube's Restricted Mode is Anti-LGBTQ,” BreadTube creator Marina Watanabe talked about her videos being deemed not “family friendly.” She said: I found out that YouTube has created this age restricted mode that basically censors any kind of LGBT related content. This is regardless of age appropriateness, or audience. It could be the most kid friendly, young adult, young kid-oriented content and if it has the word 'gay' or 'lesbian' or 'LGBT' in the title, it's gonna be removed from restricted mode (Watanabe, 2017). In some cases, creators extend meta-commentary like this to commentary in venues like Twitter and Reddit. One high profile example occurred when BreadTube creators The Serfs spoke out about YouTube deleting their channel. The Serfs created a YouTube video (on a new channel) documenting what happened and also tweeted about it. After The Serfs shared their experience publicly, the BreadTube community collectively pieced together that YouTube’s algorithms had received signals from a coordinated mass-flagging attack by alt-right users on the video in question, which led to the channel being automatically taken down (a toy sketch of how such report-driven takedowns might operate appears below). This event served as an educational moment for many BreadTubers as they learned about YouTube’s use of algorithms to moderate content based on user reports, as well as the policies and process for appealing automated decisions like this. This was the case for Tyler, who explained in an interview: That one I learned about because people in, I think the BreadTube subreddits were talking about it and on Twitter. Yeah, I think the people who run the channel brought it up on Twitter. So I learned about it from them, but they presented pretty clear evidence if I remember correctly. Thus, we can see that (meta-)commentary like The Serfs issued about their experience with algorithmic moderation helps BreadTubers learn about the functionality and impacts of YouTube’s algorithms and seems to heighten the relevance of algorithms in the visibility war.
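The takedown BreadTubers reconstructed suggests a report-driven trigger: enough flags arriving in a short window tips an automated decision, and the burden then falls on the creator’s appeal. The sketch below is a toy illustration of that inferred logic only; YouTube’s actual thresholds, signals, and review pipeline are not public, and the function and parameters here are hypothetical.

```python
# A toy illustration of report-driven takedown logic as BreadTubers inferred it,
# not a description of YouTube's actual system. The threshold and window are
# hypothetical parameters chosen purely for the example.

from datetime import datetime, timedelta

def auto_takedown(flag_times: list[datetime],
                  threshold: int = 50,
                  window: timedelta = timedelta(hours=1)) -> bool:
    """Return True if any sliding window of `window` length contains at least
    `threshold` flags, i.e. a burst consistent with a coordinated attack."""
    times = sorted(flag_times)
    start = 0
    for end, t in enumerate(times):
        while t - times[start] > window:
            start += 1
        if end - start + 1 >= threshold:
            return True
    return False

# Hypothetical burst: 60 flags arriving within ten minutes trips the toy rule.
burst = [datetime(2019, 6, 1, 12, 0) + timedelta(seconds=10 * i) for i in range(60)]
print(auto_takedown(burst))  # True
```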
To some extent, by making user engagement essential to creators’ visibility, by way of algorithmic processes, YouTube compels creators to “educate” their audiences via meta-commentary. Algorithmic recommendation and moderation on YouTube creates a threat of invisibility (Bucher, 2012), and, thus, urges creators to inform their audiences about how to help them remain visible on the platform. In this sense, receiving information about algorithms via meta-commentary is a part of the ways platforms shape learning under platform epistemology. However, differences between the social worlds of Instagram influencers and BreadTube result in different levels of meta-commentary. Instagram influencers engage in some meta-commentary. Yet, because their content often focuses on their personal lives and they feel pressure to achieve a degree of intimacy with their followers (Abidin, 2015), they generally make fewer explicit references to the labor of playing the visibility game (of course, gurus are a notable exception to this avoidance of explicit talk about the pursuit of visibility). By contrast, BreadTube content is designed to inform and/or entertain audiences, such that there is no pretense about the fact that it is the product of creators’ creative labor and that creators need audiences’ support in order to be successful. In this sense, meta-commentary is par for the course of being a YouTuber. BreadTubers’ learning via meta-commentary also speaks to the particular context of BreadTube in which consuming BreadTube content and following BreadTube creators online fundamentally define participation in the community. Further, BreadTube creators use meta-commentary as a means of recruiting allies to contribute to the community’s collective enterprise of the visibility war. Through meta-commentary, BreadTube creators provide kernels of insight about algorithms to their audiences, which audiences can use in finding ways to increase the visibility of BreadTube content from their end. Given that the BreadTube community grew from the fandom of a network of YouTubers, and because of the community’s shared commitment to the visibility war, acquiring information about algorithms through meta-commentary has much to do with who the community is. Learning from those with Technical Expertise In addition to creators engaging in algorithm talk, BreadTube discussions routinely involve relatively complex technical details about algorithms, suggesting the presence of (or proximity to) those with technical backgrounds and training. Quite possibly BreadTube attracts many socially conscious programmers, data scientists, and so on because they have an interest in the social and/or political impact of algorithms. Certainly, this seemed to be the case for two interviewees who held degrees in computer science. However, two other interviewees also mentioned having friends with technical expertise. For example, Nick told me: “I have a few friends who work in the data space, so they build algorithms and stuff, so they dumbed down the essence of it for me.” BreadTubers with technical expertise often convey technical knowledge to the community—namely, as previously described in chapter 5, details about the design of algorithms and methods they use to perform their function. For example, when a video titled “'ContraPoints' host says YouTube algorithm isn't ‘sophisticated’ enough to combat extremist content” was posted in r/BreadTube, several BreadTubers turned out to discuss the capabilities and limitations of machine learning algorithms.
Excerpts from one exchange read as follows: User A: She's completely right. To think that any current AI/ML system is capable of knowing the context in which words are spoken, and how to take that into account is absolutely laughable. User B: The technology to do it is there. AI/ML systems are sophisticated enough to do this. The real problem is that YouTube would have to make ideological decisions to inform the AI on properly identifying extremist content, and they currently profit from that content. User C: That's almost entirely untrue - we have a hard time with much of fair use/context as humans, and there are no machines capable of understanding what fair use, satire or critical context is if it's not very, very blatant. At best, an ML system can see that the person's videos are generally tagged as leftist, so this one must be, too, instead of Nazi content. This exchange is typical of technical discussions among BreadTubers. These discussions enable BreadTubers who do not have technical training to learn about algorithms from those who do. In particular, BreadTubers with technical training can help ground discussions, as Dylan, a software developer, demonstrated. Dylan and I had the following exchange: KC: How much do you think [your degree] plays into your process of learning about these different platform algorithms and maybe particularly YouTube? Dylan: I would say it helps a lot. I would say that knowing something about AI and the state of AI research is really, really helpful for reading articles about AI, because a lot of them have the sort of problem that any mainstream science article has, which is that they're like... They're a little bit sensationalized because the writer doesn't really know enough about the field to know what claims are plausible and what aren't. So, yeah. So for example, it really helps to know a little bit about how AI works and what the state of AI research is to be able to dissect a claim about, “YouTube has a right-wing bias,” and be like, “Woah. Yeah, it does.” [laughter] But also... But also there's this whole more complicated situation with like, “This is... This is the part that the AI is doing, and this is the part that is just the right wing being more talented at YouTube.” As Dylan’s reflections suggest, BreadTubers with technical expertise have a firmer grasp of the 170 capabilities and limits of algorithms, which they can then impart to the community. In imparting this insight, they can help minimize the kind of fetishization of algorithms that invites conspiracy theorizing. BreadTubers’ interest in the visibility war subsidizes algorithm talk within the community across r/BreadTube, Facebook groups, Twitter, and YouTube comment sections. To some degree, participating in the BreadTube community inherently involves participating in—or, at least consuming—conversations about the visibility war. Put differently, BreadTubers’ shared commitment to the visibility war brings the community together online to engage in discussion that necessarily involves algorithms. Engaging with such discussion, in turn, enables connections between BreadTubers to be activated (by algorithms) across platforms. As discussed in chapter 4 in relation to Instagram influencers, platforms structure the formation of communities by predicting and prompting user connections within networks (Bucher, 2013). Trace data produced from BreadTubers’ algorithm talk helps direct algorithms in determining BreadTubers’ affiliation with one another across and within platforms. 
Thus, learning about algorithms via algorithm talk within the community represents the product of both platforms’ technical infrastructure and the community’s activities. In this sense, the collective identity of the BreadTube community shapes the information BreadTubers acquire via algorithm talk, and algorithm talk shapes the collective identity of the BreadTube community. BreadTubers’ participation in online discussion coincides with exposure to media content addressing algorithms on platforms. Below, I address how BreadTubers acquire information about algorithms from their media repertoires. Learning from Media Alongside discussing politics and matters related to the visibility war, BreadTubers also actively consume media that reflect the community’s particular blend of shared interests. The visibility war unites BreadTubers, but certain interests seem to draw them to the visibility war in the first place—namely, interests in politics, the rise of the alt-right, social issues, and digital culture and technology. These interests make members of the BreadTube community particularly attractive of information about algorithms as they are all domains in which algorithms increasingly feature. As I suggested in chapter 4, algorithmic curation across platforms configures information about algorithms as mainly relevant to those who have signaled their interest in—and those whose trace data fits the profile of users interested in—topics likely to address algorithms. For Instagram influencers, this means performing their interest in digital marketing. BreadTubers, like Instagram influencers, become particularly attractive of online media addressing algorithms within particular contexts by telegraphing their interests via data to algorithms. The BreadTube community, thus, helps exemplify how a social world’s shared interests render members more (or less) likely to be exposed to information about algorithms than other social worlds, depending on how relevant (or not) algorithms are to those interests. In what follows, I describe how the BreadTube community’s interests in 1) politics, and 2) the intersection of social issues and technology, as well as their participation in the visibility war, usher members of the community into the orbit of information about algorithms. Media Coverage of Politics As algorithms increasingly crop up in political reporting, people like BreadTubers, who closely follow such reporting, learn about algorithms. This mode of acquiring insight begins with the BreadTube community’s defining interest in politics. As one user said in an r/BreadTube comment, “the idea of BreadTube was political in origin,” suggesting that political engagement unites the community. Just as Instagram influencers’ interest in Instagram marketing strategy opens them up to algorithmically curated flows of information about algorithms in a marketing context, BreadTubers’ interest in politics opens them up to algorithmically curated flows of information about algorithms in a political context. Below I establish the prominence of algorithms in political reporting of recent years, as well as describe how BreadTube’s interest in politics directs them to such reporting and what that looks like. Media reporting on platform algorithms in political contexts has increased dramatically in recent years.
Between the years 2000 and 2019, the number of newspaper articles mentioning “algorithm*” and “politic*” increased from 34 to 19,799 (see Figure 19; this was calculated by performing a keyword search in LexisNexis for “algorithm* AND politic*”, limited to English language newspapers, with duplicates eliminated). In particular, news reporting on social media algorithms proliferated in the wake of the 2016 U.S. presidential election. Since the election, the number of newspaper articles discussing algorithms and politics has more than doubled (see Figure 19). Such reporting covered, among other topics, “fake news” (e.g., The Washington Post Editorial Board, 2016), filter bubbles (e.g., Solon, 2016a), and censorship (e.g., Shahani, 2016), and included coverage of congressional testimonies from Google, Facebook and Twitter (e.g., Madrigal, 2017). Consequently, algorithms have become a fixture of mainstream conversations about politics. Figure 19. Newspaper articles with keywords algorithm* and politic* by year. Given BreadTubers’ interest in politics, the prevalence of political reporting referencing algorithms, and the curated flows of news on platforms (Thorson & Wells, 2016), it is perhaps unsurprising that many BreadTubers I interviewed mentioned news reports when explaining how they learned about algorithms. In particular, multiple people mentioned the 2016 U.S. presidential election as an entry point for becoming aware of algorithms. For example, when I asked Melissa how she learned about YouTube’s algorithm, she replied: “I think a combination of watching the news and talking to people and also getting news from Internet sources after the 2016 election, made a lot of people focused on the algorithms when they really didn't know about them before.” As Melissa suggested, the entanglement of algorithms with the dynamics of the 2016 election has made it difficult for those interested in politics to avoid exposure to information about algorithms in political news reporting. Even BreadTubers outside of the U.S. mentioned reporting on the election as a key factor in how they learned about algorithms. For example, when I asked Will, who is not American, about how he learned about the role of algorithms in fake news, he told me: I think I learned about [fake news and algorithms] like most people in the world. I think it was fairly widely reported in mainstream outlets after the '16 election, that fake news sites, means of misleading information had spread fairly widely on Facebook. And that a lot of older people, particularly who might have voted for Trump, would have been influenced by these things. And that's when Facebook began to implement the fake news tag. And then obviously there was the Russian troll farm aspect of that. So, I was aware of that from there… As Will alluded, the discourse on fake news featured prominently in news coverage of the 2016 election, but also acted as a touchpoint for social media algorithms to enter into his—and other BreadTubers’—understanding of the present moment in electoral politics. For BreadTubers, post-mortem analyses of the 2016 election also drew attention to the rising tide of alt-right ideology. In an interview, Dylan mentioned that one key way he learned about YouTube’s algorithm was by reading a New York Times article about ContraPoints and a YouTuber known as Destiny “de-radicalizing” a young man who had gone down the “alt-right rabbit hole” (otherwise known as the “alt-right pipeline”) on YouTube.
Though it was not the first, nor the last, article of its kind, the article was particularly visible as it was published on the front page of The New York Times in June 2019 and widely shared online. Beyond the article’s high profile nature, it is easy to understand why Dylan would have been exposed to and chosen to read this particular article. The article appealed to various interests he communicated to me in our interview—namely, an interest in ContraPoints, politics, the rise of alt-right ideology, and Internet culture. These are all interests that define BreadTube. Indeed, other interviewees mentioned reading similar-style articles. For example, Emma told me “I've read some of the... Or I read at least one article that talked a lot about [the YouTube algorithm contributing to the alt-right pipeline] and how it can happen.” Part of the reporting on the rise of the alt-right included relevant public scholarship, namely a high profile report produced by Data & Society researcher Rebecca Lewis, titled Alternative Influence: Broadcasting the Reactionary Right on YouTube (Lewis, 2018). This report made the rounds in the BreadTube community, with several BreadTube content creators, including The Serfs, The Majority Report, Innuendo Studios, and Peter Coffin, addressing and/or citing it in videos. As Tyler shared in an interview, the report was also actively discussed in r/BreadTube: So at the time I was involved in a lot of discussions, or at least reading a lot of discussions about the bias on YouTube toward this new young right-wing group. I think I discovered it from this paper written by someone named Becca Lewis. It was basically about how right-wing people on YouTube have cross-pollinated, where they will feature each other just all the time, far more than BreadTube does. And this has amplified their popularity to create this giant bubble where people will look at a video from one of them, and then they'll just immediately be recommended videos from someone who has a similar belief system and was just featured on the video you were just watching. So I saw some people discussing this on BreadTube… This report cited YouTube, its recommendation algorithm, and the 2016 election as vectors in the formation of an “alternative influence network,” or “a network of controversial academics, media pundits, and internet celebrities who use YouTube to promote a range of political positions from mainstream versions of libertarianism and conservatism to overt white nationalism” (Lewis, 2018, p. 10). On the ground, the interconnectedness of the growing alternative influence network, the 2016 election, YouTube and its algorithm was personally felt by many BreadTubers, as one user articulated in a post on r/BreadTube: So...shit has been super fucking difficult lately. I think everyone can agree on that. Since 2016, I've been following politics in America closely and I'm completely obsessed with keeping tabs on the alt right. This has been gnawing at me for a long time now and I just feel like I have to do something. It seems like no one around me has any idea what goes on online and how it's spilling into the real world. However, I do know multiple people who are getting pulled into the algorithms, even a few family members. (This quote has been altered to protect the individual’s privacy, as they disclose sensitive information in the original post.) Stories like this demonstrate the often intimate nature of BreadTubers’ relationship with the
phenomenon of alt-right radicalization on YouTube, which presumably makes them more attentive to reporting referencing it, including news coverage and research reports like that of Data & Society. Growing awareness of algorithms among BreadTubers is the byproduct of such attentiveness, as they follow, engage with, and share political reporting that involves algorithms. Actively and passively customizing their platform feeds in this way signals BreadTubers’ interest in politics to platform algorithms, thereby inviting additional political reporting by way of algorithmic curation. In this way, platforms configure users like BreadTubers as “strong attractors” of such reporting and, thus, incidentally, of information about algorithms in the political context. This process provides yet another example of how platforms shape patterns of learning about algorithms, similar to what we saw in chapter 4 among Instagram influencers. The process also exemplifies how the interests that define and bring a social world together matter for learning about algorithms. More than interest, concern over the rise of alt-right ideology has prompted many BreadTubers to educate themselves about the alt-right pipeline, which reinforces their interest in algorithms. As mentioned, many BreadTubers feel very close to this situation and, as a result, become heavily invested in learning about algorithms. Thus, BreadTubers’ acquisition of information about algorithms is shaped by the community’s shared interests—mainly, those pertinent to the visibility war—that bring the community together in the first place. For BreadTubers, interest in politics, membership in the community, and learning about algorithms all mutually constitute each other. As I will now discuss, BreadTubers’ interest in social issues and digital technology similarly invites information about algorithms into BreadTubers’ media repertoires. Media Coverage of Sociotechnical Issues Involving Algorithms The BreadTube community has an “activist mindset”—an interest in building a more just and equitable world. This interest, like their interest in politics, factors into how they acquire information about algorithms through the media they consume. BreadTubers also exhibit a sociological interest in the Internet, social media, digital culture and the like, which often intersects with their interest in building a better world. This intersection makes sense, as so much of social life now plays out on platforms and algorithms are at the heart of the services platforms provide. As I will demonstrate below, BreadTube’s intersecting interests in addressing social problems and digital technology invite information about algorithms into their media repertoires. BreadTubers’ interest in social concerns about digital technology is evident in the various podcasts, YouTube channels, and news series to which they refer. Such media content routinely comes up in the context of BreadTubers discussing what they know about algorithms. For example, Dylan talked about learning about YouTube’s algorithm by listening to an episode of the tech podcast Reply All, which addressed the idea that YouTube’s algorithm tends to promote content with extreme viewpoints (Reply All, 2019). A user commenting on a post in r/BreadTube about YouTube’s algorithm promoting extremist content also mentioned this episode of Reply All.
In the same vein, Luka told me in an interview that a YouTube video prompted him to reflect on platform algorithms for the first time: KC: So you are familiar, you're aware of the fact that YouTube and these other platforms use algorithms to sort and filter content. Do you remember how you came to learn about this happening? Or these algorithms existing in the world? Luka: I think it was one of those several "edutainment" channels, I think there was one called CGP Grey that I think was discussing algorithms that work on Reddit, that I vaguely think I remember as being the first case of me really thinking about what made this whole social media machinery function. Luka also mentioned further learning via a TED video, which documented: 178 …how social media algorithms, as we've come to expect them to be…What companies look for in the social media algorithms tend to create situations where radicalization is as easy as possible since it rewards videos that cause people to watch massive amounts of uninterrupted footage which…And then recommending videos that people who watch that also watched eagerly As the above examples attest, BreadTubers seem to have a particular curiosity about the social impacts of digital technology that motivates them to seek out media that speaks to this. This curiosity, as the above examples also evidence, intersects with the community’s concern over the alt-right pipeline and related participation in the visibility war. Although BreadTubers seem to seek out media that addresses the alt-right pipeline and other matters related to the visibility war, this interest is not always the impetus for media exposure to information about algorithms. For example, a user in r/BreadTube commented that watching James Bridle’s TED talk, “The nightmare videos of childrens' YouTube — and what's wrong with the internet today” informed their “views on creator vs. algorithm.” Similarly, in an interview, Will talked about watching a video about how YouTube’s algorithm may facilitate pedophile rings on the platform: Well actually one thing I should add […] that's probably informed my knowledge a bit…I know it was a big deal in 2018, when that man had done this video about finding the sort of the pedophile ring on YouTube […] His understanding was certainly that the reason if you clicked on one of these videos you'd suddenly get so many of them in your recommendations was because there was enough of a small little native audience for them that would just probably purely watch those videos there, that if you're into that algorithm, suddenly you were in that sphere of those videos, into the kind of the algorithmic kind of sphere that they all exist in. So that's informed my beliefs… Videos like these are not made with the sole intention of explaining YouTube’s algorithm, but end up doing so in order to tell a broader story about the ways platforms impact society. Thus, BreadTubers need not come to a video with an explicit interest in algorithms or the alt-right radicalization. Rather, BreadTubers like Will seem to come into contact with information about algorithms as a result of a more fundamental interest in addressing a social problem of concern to 179 them. Media coverage of one particular controversy has been influential in conveying information about YouTube’s algorithms into the BreadTube community. This controversy centers on the systematic demonetization of LGBTQ+ content on YouTube and has been documented by various YouTubers. 
For example, one such YouTube video titled “Youtube's Biggest Lie” was shared in various BreadTube spaces (e.g., r/BreadTube, the BreadTube.tv Facebook page, BreadTube Facebook groups). This video was produced by a YouTuber known as Nerd City, who describes his channel on Twitter as a “Youtube Channel dedicated to Internet Culture” (Nerd City, 2019). Nerd City describes the findings as follows: If we took a demonetized video and change the words “gay” or “lesbian” to “happy” or “friend,” every single time, the status of the video changed to advertiser friendly. Most of the words are ones we might expect YouTube to censor [on screen words like “asshole,” “boobs,” “cocaine” are displayed]. Many others put YouTube directly at odds with its progressive messaging [on screen words like “female,” “gay,” “LGBT” are displayed], and some of them only make sense to robots [on screen words like “destination,” “decade,” “cameras” are displayed]. (Nerd City, 2019, n.p.) BreadTubers’ interest in the content of this video offers a clear example of the community’s interrelated interests in social justice, the visibility war, and technology. Thus, it is not terribly surprising that this video and related media covering the controversy has been so instructive for the community in the insight it offered about algorithms. In a few cases, BreadTubers’ intersecting interests in social problems and technology has drawn them to stories about big tech whistleblowers, who offer insider insights about algorithms. For example, Heather mentioned acquiring insight about Facebook’s algorithms from a Slate interview with Yael Eisenstat, Facebook’s former head of Global Elections Integrity Operations. Heather told me: One of her [Eisenstat] ideas was Facebook rewards... And I think this is true for any platform... But Facebook in particular really rewards content that people really interact with, and the content that people really interact with is stuff that makes people emotional. 180 And so whether that's cute, funny animal videos or a friend's baby picture, but also I think in particular a lot of sort of inflammatory news. As Heather’s experience suggests, stories of big tech whistleblowers can aid BreadTubers in providing new insight about algorithms, including both technical and social aspects. Similarly, Ana also talked about encountering interviews with a former YouTube engineer, Guillaume Chaslot, who formerly worked on the platform’s recommendation algorithm and started the project AlgoTransparency.org. Chaslot has cropped up in online discussions in r/BreadTube as well. KC: Yeah. And so that website, AlgoTransparency.org, and the former engineer, how did you come across him and interview...You said you...He's done a bunch of interviews and he's on podcasts and things, how did you find out? Ana: Yeah. I think I found him through a podcast, it's called Your Undivided Attention. It is by the Center for Humane Technology, which is a nonprofit that specifically works to influence Silicon Valley into thinking more humanely in their design, because their argument is that a lot of the things that we use every day don't... Are not actually thought out in a way that is healthy for us, the humans using them. […] I know they did [an episode] about YouTube and they talked to this guy [Chaslot], and from there I googled him and I read that he did other interviews and whatnot, and I landed on his... On that website on algotransparency.org, which it's still operating, it's still updated, it's still very interesting. 
In referencing the podcast created by the Center for Humane Technology, Ana revealed her interest in the social impacts of digital technology. Heather shared this interest, as she made evident in the fact that she found the interview with Eisenstat as a result of following Slate's Future Tense series, which "examines emerging technologies, public policy, and society" (Slate, n.d.). Ana's and Heather's experiences, thus, stand as examples of how BreadTubers learn about algorithms, in part, as a result of interests fundamental to the community, which lead them to encounter, attend to, and consume media addressing algorithms. In this case, both women's interests led them to interviews with big tech whistleblowers, which played a key role in their process of acquiring information about algorithms.

In the above discussion, we can see how the BreadTube community's particular interests make their media repertoires more likely to feature information about algorithms than those of the average Internet user. In this way, BreadTubers are similar to Instagram influencers, whose interests in digital marketing render them likely to encounter information about algorithms. In the contrasts between the media repertoires of BreadTubers and Instagram influencers, we can see how the particular preoccupations and activities of a social world contour the kind of information members of a world receive. BreadTubers report acquiring information about algorithms as they emerge in coverage of politics, as discussed in the last section, but also in media content documenting different social issues—for example, pedophile rings and the suppression of LGBTQ+ content on YouTube. In interviews, BreadTubers routinely mentioned following technology-focused YouTube channels, podcasts, and blog series, which frequently touch on algorithms. These preferences evidence that an interest in solving social issues, as well as a tangential sociological interest in digital technology and culture, seems to drive many of BreadTubers' mediated encounters with information about algorithms. Importantly, as discussed in chapter 4, such encounters are made possible by algorithms, as most BreadTubers primarily consume media on platforms like YouTube and Reddit. The interdependency of the interests that define BreadTube and platforms' technical infrastructure for tailoring content helps to illuminate the situated nature of learning under platform epistemology. Tangential to learning through media repertoires, BreadTube creators also acquire information about algorithms from instructional videos, which I will now discuss.

Learning from Training

In chapter 4, I discussed Instagram influencers' reliance on courses and guides from Instagram gurus, which act as important sources of information about algorithms. In my conversations with BreadTube creators and observations of the BreadTube community online, I encountered some references to a similar use of YouTube tutorials produced by YouTube gurus. Yet, unlike Instagram influencers, the smaller niche community of BreadTube does not appear to be interested in monetizing its insight. The phenomenon of Instagram influencers evolving into Instagram influencer-gurus has no straightforward analog in the BreadTube community. I did not encounter any paid guides, courses, tutorials, consulting, and so on being recommended or advertised among the community online. However, BreadTube's commitment to the visibility war does necessitate providing instructional support to creators just getting started.
For this reason, some more established BreadTube creators have created how-to videos for others to learn how to grow their channels and avoid being demonetized. I describe how such tutorials factor into BreadTubers’ acquisition of information about algorithms. Most how-to videos are produced by creators outside of the BreadTube community, and, thus, these seem to be what BreadTube creators predominantly rely on. Like Instagram influencers, some BreadTube creators actively search for such instructional videos in order to ensure their videos reach a wide audience. For example, Lance told me in an interview: KC: And in... So I watched the one video where you're talking about different strategies for growing an account, and you mentioned a bunch of different SEO hacks and things like that. So how did you learn about all of those different things, like tweaking the year and the title of the video or using hashtags strategically, things like that. Lance: All from YouTube. KC: YouTube. [laughter] Lance: I mean, both from YouTube and trial and error, because I was able to find out pretty quickly that it... Which strategies works and which ones didn't. But for the large part, a lot of the YouTube tutorials were pretty spot on for people figuring out how to game the algorithm, so to speak. KC: Yeah. So would you just sort of search around for... What would you search for to find those videos? 183 Lance: Everything from SEO optimization, how the algorithm works, algorithm tutorial, stuff like that. In a separate interview, Thought Slime similarly discussed using how-to videos from YouTube to inform himself about the factors that augment the reach of videos. Both Lance and Thought Slime have each produced videos to share what they learned from other YouTubers with the BreadTube community. For example, Thought Slime made a video for other BreadTubers in which he discussed creating good titles, descriptions, and thumbnails that would attract potential viewers’ attention—“It doesn't matter how good your video is if nobody clicks on it. So give people a reason to click!”—as well as choosing language thoughtfully, so as to not be algorithmically suppressed—“Swear words dunk up the YouTube algorithm. I wouldn't get pushed in the algorithm if I had a swear word in the title.” (In the latter quote, Thought Slime self-reflexively, and didactically, followed his own advice by using “dunk” instead of a swear word.) As another example of instructional videos produced by BreadTubers, in one of Lance’s videos—jokingly titled “How To Rise To The Top of the Algorithm (in YouTube and Twitter) - ONLY WORKS FOR LEFTISTS”—he offered the following advice: …so you have a captivating thumbnail […] and when people click on your thumbnail, is that person going to watch your video for long? That's the second most important thing. That's how YouTube's algorithm determines whether or not to keep bumping your video up and to keep suggesting it to other people. Do you have a high click-through rate, and once they click through, do they stay? That second part is entirely based on your content. You can have the best thumbnail in the world, the greatest title in the world, but if your video is boring and people click away from it in a matter of seconds, then you will plummet in the algorithms of YouTube. 
If they stay hooked to it, then you will be skyrocketing in the algorithms on YouTube, so that's why, in your videos, the first 10 to 15 seconds are the most important of the entire video… (The Serfs, 2019)

Through videos like these, BreadTube content creators take insight they learn experientially, from other BreadTubers, secondary sources, and gurus who devote their time to actively researching and reverse engineering YouTube's algorithms, and redistribute it to the broader BreadTube community. Importantly, BreadTube creators like Lance and Thought Slime do not produce instructional videos for the explicit purpose of increasing subscriptions or raising their profiles. In fact, Lance's tutorials are unlisted on his YouTube channel, meaning that the videos do not show up in his channel's "Videos" tab or in search results on YouTube. Instead of serving a profit motive, in the handful of BreadTube instructional videos I encountered, content creators framed their advice as a contribution to the visibility war—namely, helping lesser-known leftist YouTubers gain visibility on the platform. I will return to this point in the next chapter. While profit is not a key motivation for BreadTube creators to provide how-to videos to the community, as discussed, profit is a key motivation for Instagram influencers to produce guides, courses, tutorials, and the like. The different motivations of Instagram influencers and BreadTubers suggest the relevance of a social world's interests, goals, and ideology in learning about algorithms. For Instagram influencers, the cottage industry of paid training services subsidizes knowledge building around algorithms and, further, legitimizes the knowledge claims of visibility elites. This makes perfect sense given Instagram influencers' entrepreneurial drive and the neoliberal tenets underwriting the influencer marketing industry, which prioritize financial success (Duffy, 2017). For BreadTube content creators, the monetization of knowledge conflicts with the community's collectivist ideology and opposition to capitalism and neoliberalism. BreadTube creators feel that information about algorithms should be equally accessible to all, and that class, capital, and/or any other material conditions should not pose a barrier to learning. In the contrasts between the use of training among Instagram influencers and BreadTubers, we can see that while platforms provide the infrastructure for organizing and implementing such training, the particular ideologies and interests defining social worlds matter for how this infrastructure will be used. Instagram influencers and BreadTubers also differ slightly in their perceptions of platforms' credibility and their willingness to accept platform statements about algorithms.

Information Asymmetry Between Platforms and Users: Corporate Propaganda

BreadTubers infrequently discuss acquiring information about YouTube's algorithms directly from YouTube. However, they do frequently complain about the way the company withholds or presents information. In chapter four, I discussed the ways that Instagram capitalizes on an information asymmetry the company establishes in relation to users. In this, although most influencers are critical of Instagram's disclosures, many also continue to accept much of the information Instagram provides. This is not the case for BreadTubers.
Most BreadTubers understand the information asymmetry YouTube establishes through the community’s anti-capitalist lens, which limits their reliance on YouTube for information about its algorithms. That said, BreadTubers do sometimes grant platform owners—YouTube and its parent company Google, in this case—epistemic authority. For example, Thought Slime told me in an interview: “Only Google can say for certain whether or not anything [YouTube how-to videos are] saying is right.” At the same time, the vast majority of BreadTubers hold a fundamentally skeptical attitude towards platforms like YouTube, which was not as widespread among Instagram influencers. In fact, BreadTubers’ cynical view of YouTube inspires the belief that the company strategically withholds information and/or makes statements in bad faith. In what follows, I discuss how BreadTube’s anti-capitalist ideology instills in members of the community a deep distrust of YouTube, which prompts them to read the company’s public communications as corporate propaganda. BreadTubers routinely brought up YouTube’s lack of transparency around its algorithms in interviews. For example, Justin told me: “It seems like YouTube is... Or the whole way that 186 the company operates along those lines isn't terribly transparent as far as I can tell.” Likewise, Ana said in an interview: …we don't actually know how these websites operate, we don't know how the algorithm operates. We can guess, we can see the results of it and sort of infer from that, but we don't exactly know what it's looking for right now. We don't know the things that it flags or that it doesn't, or the markers of videos that will be recommended, we don't know any of that. And YouTube has no incentive to let anyone know how it works, same with Facebook, because it will screw with their revenue model. Like Ana, time and again, BreadTubers explain YouTube’s secrecy as a business decision. As another example of this, Heather said in an interview: …YouTube probably isn't too keen on telling it's content creators how they promote stuff, because again, YouTube relies, YouTube exists on people putting things on their website. That if tomorrow everyone said, "Nope," and took their stuff down and stopped going to YouTube, then YouTube would no longer be relevant. This is probably not gonna happen, but a social media platform only is useful as long as people are using it, and I think that YouTube probably doesn't want a lot of its content creators to know exactly how channels are monetized or demonetized or recommended or other things… As Heather exemplified, BreadTubers often acknowledge that by withholding information, YouTube attempts to manage content creators’ impressions of the platform in order to encourage use, which helps their bottom-line. BreadTubers’ anti-capitalist stance establishes a baseline distrust of and cynicism about large corporations. However, various controversies have done further damage to the community’s confidence in YouTube’s public statements. BreadTubers’ distrust of YouTube has increased over time, most notably through the community’s ongoing complaints about the platform’s treatment of the LGBTQ+ community. Many BreadTubers are aware of empirical investigations by YouTubers—mentioned in a previous section—which have documented the systematic demonetization of LGBTQ+ videos on the platform. Many BreadTubers also recognize that, while the company publicly embraces inclusiveness, this does not always gel with its actions. 
In one particularly infamous example of this, conservative YouTuber Steven Crowder 187 produced a series of videos in which he made racist and homophobic comments about then Vox reporter (now full-time BreadTube creator) Carlos Maza. Crowder’s videos led to Maza being doxxed and harassed by Crowder’s fans online. After Maza spoke out, arguing that Crowder had violated YouTube's policy on harassment and bullying, YouTube conducted an investigation, but determined Crowder had not violated any platform policies. After public backlash, and internal complaints from employees, the company changed course and announced it had demonetized Crowder’s channel (Goggin, 2019). This unfortunate episode served as a flashpoint for BreadTubers’ myriad grievances against YouTube, deepening distrust of the company. Perceptions of the company’s cynical prioritization of profit over social responsibility galvanized this distrust. For example, in response, Shaun, a prominent BreadTube creator, tweeted “youtube CEO @SusanWojcicki recently said her ‘#1 priority is responsibility, even if that comes at the expenses of growth.’ this whole incident shows that to be a fairly blatant lie” (Shaun [shaun_vids], 2019). Many viewed this incident as confirmation that YouTube was not an ally to BreadTube. For example, Thought Slime stated in a YouTube video about the event: [YouTube’s] policies and Terms of Service aren't put in place to protect vulnerable people; they’re public relations memos that disguise their complete disinterest in their users' safety behind hand-waving nice ideas that make them sound like an institution that gives even a single solitary shit about you. They do not. They are our enemies, and I don't mean simply the enemies of LGBTQ+ people and their allies--though to be clear, they are, and that alone would be bad enough. I mean they, like any other business under capitalism, are not designed to provide a service or to enrich the lives of their customers, they are designed with a singular focus of extracting the maximum profit humanly possible and will pursue that goal no matter the cost to anyone else. YouTube is a plundering machine. I don't expect them to look out for our best interests or to behave according to any sort of coherent ethical standard beyond their immediate self-interest. YouTube is not our ally in this or any fight against oppression because, like any power structure in capitalism, it benefits from that oppression and always will. They can monetize that oppression. They can use that oppression to generate value for themselves. When people resist that oppression, they can monetize that too--ain't life grand for them. As Thought Slime’s comments illustrate, many BreadTubers pointed to YouTube’s corporate 188 agenda as the driving force behind the company’s restrained and clumsy response. As another example, commenting on a video posted in r/BreadTube titled “YouTube's Total Incompetence: Maza Vs. Crowder Debacle,” one user wrote: “You keep saying Google (YouTube) is incompetent. That's incorrect. They know exactly what they're doing in regard to absorbing power and profit.” In short, the “Maza v. Crowder debacle,” as one high profile example among many other similar controversies, demonstrates how the BreadTube community has concluded over time that YouTube is not a reliable source of information, and that the information the company does share cannot be taken at face value. 
Thus, YouTube’s public communications offer limited utility to BreadTubers in learning about the platform’s algorithms. In a video, BreadTube creator Big Joel articulated how many in the community see YouTube’s public statements as corporate propaganda. This video took aim at YouTube’s Creator Academy, a series of how-to courses for creators covering a range of topics. Big Joel suggested that YouTube’s Creator Academy series "is not offering instructions and insights into how YouTube actually functions, what it wants from us, how it works.” Instead, he argued: YouTube […] does work to control its workers [content creators] and their output in various ways, and the YouTube Creator Academy exists as an expression of that control. It is not recommending; it is not advising. No. It is doing two things. Number one: it expresses to creators what YouTube wants or requires them to do. Number two, and probably the more important one: it provides the illusion that YouTube is not doing number one. Here, Big Joel characterized the Creator Academy as corporate propaganda, in which YouTube “defers” and “disguises” its power. He further contended: YouTube is premised off the idea that YouTube doesn't really exist. See, YouTube's power over digital media is unparalleled. They are not just a place that produces videos; they are the place. And the only way to legitimize and maintain that kind of power is through obfuscation: [speaking as YouTube] “No, we don't control production through our algorithm. If we did that, we'd be a business like any other business. Instead we are simply a vessel through which the human imagination occurs--a tool so basic, so versatile that only one needs to exist. And, yeah, we might give you some hot little tips—like, you could make longer videos, if you wanted to. Like, it would help you succeed, but it's not a 189 big deal. Make the videos you want to make, and, honestly, don't worry too much about any of this, because we won't ever give you enough information about how we work, about how our algorithm works, that it starts to look like we're telling you what to do. Like we're your boss and you're our workers or something like that…” This understanding of YouTube’s disclosures as corporate propaganda signals the BreadTube community’s recognition of the power asymmetry—which includes an information asymmetry— between YouTube and users. This recognition, in turn, deepens BreadTubers’ skepticism towards YouTube as they see YouTube’s public communications as a devious attempt to control their behavior. BreadTubers recognize YouTube’s public communications—disclosures about algorithms included—as, in Thought Slime’s words, “public relations memos,” and, in Big Joel’s words, “corporate propaganda.” These perceptions illustrate that, even if YouTube were more forthcoming with information about its algorithms, BreadTubers likely would not perceive the information as reliable. This forecloses on the possibility of learning about algorithms from platforms. BreadTubers’ resolute distrust of YouTube’s statements sets them apart from Instagram influencers, who are generally more willing to accept Instagram’s statements. With this contrast, we can see how learning takes shape within the context of social worlds, in which a world’s ideologies and accompanying beliefs make a difference in how people receive information about algorithms. 
BreadTube's anti-capitalist stance factors into their evaluation of information provided by YouTube and, consequently, into which information will be accepted from the company within the community.

Conclusion

In this chapter, I have described the various means by which BreadTubers acquire information about algorithms. BreadTubers learn from empirical observation, reflection, and investigation; discussing algorithms with others online; encounters with media coverage that addresses algorithms; and (for some) instructional videos. In describing how BreadTubers acquire information about algorithms, I followed up on findings presented in chapter 4 to touch on how platforms structure learning practices, but also emphasized the embeddedness of these practices in the social world of BreadTube. Platforms largely lay the conditions for what can be learned about algorithms and how, but BreadTubers' interests, ideologies, sites, and activities further motivate and direct how they acquire information via social practices and relations. By offering examples of how the particularities of BreadTube matter for learning—and contrasting this with learning among Instagram influencers—I demonstrated the situatedness of algorithmic literacy "in the trajectories of participation in which it takes on meaning" (Lave & Wenger, 1991, p. 121). I summarize this chapter's findings below.

BreadTubers acquire information about algorithms experientially, like Instagram influencers, and this kind of learning is made possible by platform features. BreadTubers learn by observing the datafied versions of their preferences reflected back to them in algorithmic outputs. Being recommended right-wing content, in particular, stimulates learning among BreadTubers, as informed by the community's knowledge of their "enemies" in the visibility war. The visibility war also motivates BreadTubers to conduct informal empirical investigations, which offer additional information about YouTube's algorithms. For these investigations, BreadTubers depend on platform features—for example, related channels listed on YouTubers' profiles, visibility metrics, and interpretations of those metrics. Such investigations, like more casual observation and reflection, require knowledge related to the rise of the alt-right and the visibility war. Further, by conducting these investigations, BreadTubers offer the community important insight that can be used to increase the collective visibility of BreadTube content.

BreadTubers also learn through online discussion, which constitutes a key form of participating in the community. In order to find and keep abreast of such discussion, platforms must infer and activate connections within the community, as premised on the community's interests. In this sense, BreadTubers must effectively perform their connections to others in the community through discussion. Content creators feature in BreadTube discussions, and members of the community commonly learn about algorithms as BreadTube creators engage in algorithm talk, particularly meta-commentary. With meta-commentary, BreadTubers seek to recruit and mobilize allies in the visibility war by highlighting the ways that algorithms impact the visibility of leftist content on YouTube. BreadTube discussions also exhibit the presence of those with technical backgrounds, which infuses technical knowledge into the community.
The community's interests in keeping abreast of politics, social issues, and digital technology draw members to BreadTube, and provide additional opportunities for learning about algorithms. These interests are further cultivated as members of the community become more invested and involved in the visibility war. As BreadTubers consume YouTube videos, news reporting, and podcasts related to these interests, they encounter information about algorithms, because algorithms increasingly feature in political processes and events, as well as social issues (particularly those playing out on platforms). Thus, BreadTubers' particular blend of interests makes them more likely to encounter information about algorithms than the average individual, as they tend to follow relevant sources and engage with relevant media online.

Finally, BreadTube creators additionally learn about algorithms via how-to videos that provide information about how to grow a YouTube channel. BreadTubers' production of how-to videos exemplifies the community's commitment to the visibility war. Those who create these videos typically do so with the explicit intention of enabling others to join the ranks or more effectively do their part in the visibility war. The collective enterprise of the visibility war and BreadTubers' collectivist ideology urge models of instructional support in the community premised on equitable access to information. This notably contrasts with models of instructional support among Instagram influencers, wherein training is typically paid and part of entrepreneurial pursuits.

Besides shaping how they acquire information about algorithms, BreadTube's beliefs and attitudes dictate members' reception of information. Specifically, BreadTubers' anti-capitalist stance makes them skeptical of YouTube's public communications, including disclosures about its algorithms. Past controversies have made the BreadTube community wary of information YouTube shares. While members of the community know that YouTube has the most extensive insight about the design and functionality of the platform's algorithms, they do not trust the company to be forthcoming and honest with what it knows. Further, some believe YouTube actually capitalizes on this information asymmetry in order to manipulate user behavior. This leads many to read information disclosed about algorithms as corporate propaganda. These perceptions make YouTube's disclosures an unlikely source of insight about algorithms for the BreadTube community.

CHAPTER 7

Knowledge for the Greater Good

Those pursuing visibility require an understanding of the goals algorithms serve and how they accomplish them in order to identify viable visibility tactics. In discussions of what people know about algorithms, we tend to focus on this kind of technical knowledge as part of an adherence to a "planning model of human action" (Suchman, 2007, p. 31) rooted in cognitivism, which conceives of humans as fundamentally rational beings, disconnected from the context, locatedness, and embodiment entailed in people's experiences with algorithms. In this chapter, I respond to this prioritization of technical knowledge by demonstrating the importance of another kind of knowledge about algorithms: practical knowledge. Practical knowledge, as previously described, refers to the intimate knowledge people construct about algorithms as informed by their sense of self, mobilized in situations of action (Suchman, 2007).
Algorithmic literacy, then, involves not just how people use what they know about how algorithms work to respond to algorithms. It also involves how people draw on knowledge of the values and interests of their social worlds to respond to algorithms. Practical knowledge is about both understanding how algorithms fit into our worlds and mobilizing knowledge in ways that preserve those worlds and ensure continued membership in them. While I elaborate on technical knowledge in this chapter, my principal intent is to give practical knowledge the empirical and analytical attention it warrants.

In this chapter, I focus on how the BreadTube community's anxiety about its own unstable existence shapes how members understand and respond to YouTube's algorithms in the visibility war. After describing BreadTube's existential anxieties, I describe BreadTubers' reading of YouTube's algorithms as urging "rugged individualism" by encouraging competition and conflict. Then, I explain how, in the face of this algorithmic prescription for behavior, BreadTubers attempt to reassert (in the visibility war) their commitment to collectivism—particularly through the values of solidarity and equity—by prioritizing the sustainability and integrity of BreadTube above individual interests. In the next several sections, I provide an overview of BreadTubers' technical knowledge, before turning attention to their practical knowledge.

Technical Knowledge

To effectively mobilize in the visibility war, BreadTubers require both design knowledge and methods knowledge. Recall that design knowledge provides a framework with which to understand the principles that govern and bound how algorithms function. Design knowledge provides an overarching logic against which individuals can assess their understandings of algorithms, as well as the soundness of any new information acquired. In terms of design knowledge, BreadTubers come to understand the goals YouTube's algorithms pursue, but also that they are logic-based and probabilistic, which limits what they can "know." By contrast, building methods knowledge helps individuals spec out the full range of tactics possible for increasing visibility. In terms of methods knowledge, BreadTubers build knowledge about the data signals drawn on by YouTube's algorithms, as well as knowledge about algorithmic techniques that specify what these algorithms do at a higher level. Data signals and algorithmic techniques form the basis of the rules of the game (or engagement). Thus, methods knowledge helps BreadTubers identify possible courses of action that fit within the logics of how YouTube's algorithms function. Below I summarize BreadTubers' design knowledge and methods knowledge.

Design Knowledge

Design Goals

Most BreadTubers describe the goal of YouTube's recommendation algorithm as Sarah did in an interview. She told me the goal of the algorithm is "To keep you on the site longer, and to make money for YouTube." Nearly all BreadTubers I interviewed offered some variation on this explanation when asked what they thought the purpose of YouTube's algorithm was. Ana, for example, similarly stated "…the goal is usually to maximize revenue. And the way to do that is to keep people on the platform as much as possible, be it YouTube, Facebook or whatever." Like Instagram influencers, BreadTubers also recognize the interrelatedness of this goal of ensuring the platform's profitability with ensuring user satisfaction.
However, most BreadTubers emphasize the former goal as part of their general cynicism directed towards YouTube and other “big tech” corporations. Thus, BreadTubers tend to understand what YouTube’s algorithms do as tethered to what YouTube wants as a corporation. In the next section, I explain the BreadTube community’s understanding of the limits inherent to the design of YouTube’s algorithms. Design Limitations Alongside understanding the goals guiding YouTube’s algorithms, for BreadTubers, another commonly referenced form of design knowledge is the understanding that algorithmic decision-making differs from human decision-making. This knowledge principally manifests within the BreadTube community in conversations about algorithms’ inability to fully understand the nuances and complexity of human expression. For example, BreadTube content creator Jim Sterling argued in a video: Scaremongering, personal attacks, seething hatred of individuals, racism, sexism, transphobia, homophobia, all are exceptionally popular, all pushed by an algorithm that 196 has all the personal and professional data in the world, and not one tiny shred of understanding, of any of it. Because it's a fucking algorithm! It's a set of numerical rules. It doesn't grasp the concept of context. It can't determine the difference between fact and fiction, and it doesn't respect diversity in content. It's just a process. (Sterling, 2020, p. n.p.) Like Sterling, BreadTubers routinely acknowledge that algorithms operate as a set of rules and perform statistical computations, and that this does not permit them to independently discern things like “facts” or “diversity.” Dylan, who studied computational linguistics in college, expanded on this thought in an interview: …one of the things that I learned at that time [in college] is that there are parts of analyzing human language with a computer that are relatively easy, and there are parts that are relatively hard, and one of the things that is hard is anything that has to do with actually figuring out what the human meant, because computers don't know what a human means. Computers don't know how the world is. They just know... They just see this as, “This is a string of tokens,” and they... When a computer sees, “Perhaps Ben Shapiro shouldn't be taken seriously by anyone about anything,” that computer has no idea who Ben Shapiro is, or what any of those words mean. It can just tell you, “Oh, a human has told me that the word ‘shouldn't’ is slightly negative, so I should make... So this is probably a slightly negative thing. Or, ‘seriously’ is positive, so this is a positive thing and...” Otherwise, it doesn't really know what the human is talking about, it... Anything that requires actual knowledge of the world is way beyond what AI is capable of right now. But pattern matching is, it's pretty easy. Just say... It's not that hard to say, for a computer to say, "Oh, this video is about Ben Shapiro," because it contains the words 'Ben Shapiro'. That's pretty obvious. Certainly, Dylan’s knowledge of algorithm design far exceeds most average individuals’ understanding of algorithms. However, the core idea he communicated about algorithms being incapable of discerning meaning repeatedly surfaces in the BreadTube community. For example, a user commenting in r/BreadTube wrote: …we have a hard time with much of fair use/context as humans, and there are no machines capable of understanding what fair use, satire or critical context is if it’s not very, very blatant. 
At best, an ML system can see that the person's videos are generally tagged as leftist, so this one must be, too, instead of Nazi content

As this statement demonstrates, many BreadTubers are alert to the imperfect nature of algorithms' ability to understand human language. Many BreadTubers particularly lament the fact that YouTube's algorithm does not understand context enough to be able to differentiate between videos' favorable and unfavorable treatment of hate speech in demonetization decisions. For example, YouTube previously demonetized a video by prominent BreadTube creator Three Arrows, who creates videos debunking far-right talking points, particularly Holocaust denialism. Demonetization of Three Arrows' video, titled "Jordan Peterson doesn't understand Nazism," happened automatically as a result of an algorithmic determination that the video included hate speech. On Twitter, Three Arrows appealed to YouTube, writing: ". @TeamYouTube the video in question is co-written with a phd historian who wrote mutiple books on various aspects of the Third Reich and is educational in nature. Dont see how this qualifies for hatespeech" (Dan Arrows [@_DanArrows], 2019). Evidencing the community's familiarity with such algorithmic "errors," one user sarcastically commented on the incident, tweeting "Welcome to the wonderful world of algorithms that don't understand context!" Recognition of algorithmic limitations is crucial for helping BreadTubers make sense of algorithmic decision-making on YouTube as it relates to the visibility war. Knowing that algorithms cannot understand linguistic nuance or context informs BreadTube creators about what they can and cannot say (or depict) in their videos. In the next section, I will re-introduce methods knowledge, the second kind of technical knowledge, and explain what this knowledge looks like among BreadTubers.

Methods Knowledge

Methods knowledge constitutes information about the functioning of algorithms, which can be acted upon by BreadTube creators and audiences alike for increasing or ensuring the visibility of content. In this section, I describe four different domains of methods knowledge among BreadTubers. First, I address BreadTubers' knowledge of data signals impacting algorithmic recommendation on YouTube. Recall that "data signals" refer to the data points algorithms take into account in performing their functions. Second, I address BreadTubers' knowledge of data signals used in algorithmic moderation, which differ somewhat from those used in algorithmic recommendation. Next, I show how BreadTubers' methods knowledge extends beyond data signals to include knowledge of two different kinds of algorithmic techniques. As Bernhard Rieder explained, algorithmic techniques "provide[] a general rationale and formal calculative specifications" (Rieder, 2017, p. 101). Specifically, the two algorithmic techniques of which BreadTubers exhibit knowledge are content-based filtering and collaborative filtering.

Data Signals

Algorithmic Recommendation Signals. Chief among the data signals BreadTubers discuss in relation to algorithmic recommendation is watch time. Watch time refers to the amount of time viewers have spent watching a video. In an interview, Justin explained the role of watch time: I would say that the algorithm does the work of evaluating what you spend your time watching and it tries to give you relevant...
It tries it's best to recommend things that it believes you will click on and either then click through in the case of an advertisement or click on and watch all the way through in the case of another YouTube video. Because BreadTubers like Justin recognize increasing watch time as the main goal YouTube’s recommendation algorithm serves, they also recognize it as the main factor that contributes to the visibility of content on the platform. Watch time may also be salient to BreadTubers because it is easy to connect with recommendations they personally receive—if you watch a lot of content from a particular channel and subsequently receive recommendations for this channel, the role of watch time in algorithmic recommendation can be easily inferred. In addition to watch time, BreadTubers also mentioned other kinds of engagement: likes, 199 dislikes, shares, and comments. For example, Emma said in an interview: …if you have videos that are getting a lot of likes or I suppose, a lot of dislikes as well, or a lot of comments, then maybe those videos are also sort of rising to the top as well and being just generally suggested for viewership. Knowing that engagement signals impact algorithmic recommendation guides BreadTubers in formulating practices that amplify these signals. For example, one commonly deployed practice among BreadTubers—one that content creators have actively encouraged via metacommentary in their videos—is leaving an “algorithm comment” on videos. With the algorithm comment, users will leave a comment solely for the purpose of algorithmically boosting a video’s visibility. Comments like this typically state something like “commenting for the algorithm” or “one for the algorithm.” BreadTube audiences’ use of algorithm comments loosely resembles comment pods among Instagram influencers, as the comments are a visibility tactic performed on creators’ behalf, rather than by creators themselves. However, in this case, audiences leave comments of their own accord as a kind of “endorsement” or “vote” they attempt to communicate to YouTube’s recommendation algorithm, not as the result of mutual reciprocation, as in pods. Aside from comments and likes, BreadTubers also routinely talk about clicks, a conversation that often surfaces alongside references to clickbait content. For example, Sarah explained in an interview: I would say to that [YouTube’s recommendation algorithm is] not necessarily going to recommend you videos that you like or that are good, it's just trying to maximize your engagement with the website, even if it just makes you angry or makes you disgusted at what you just saw, or even if it doesn't have anything to do with what you just saw, basically like clickbait. Awareness of the role of clicks has led some BreadTubers to build and implement strategies around giving videos interesting titles that will pique viewers’ interest. For example, in a tutorial video for other BreadTube creators, Thought Slime offered advice on crafting an algorithm- friendly title: 200 Let's say you wanna make a video on, I don't know, the labor theory of value. You could just call it "The labor theory of value, explained". Nobody will watch that, unless they're specifically looking for a video on the labor theory of value. And even then, what are the odds that your video is the one that's served to them first? Now, you could title it "A Marxist view of how value is generated." But that's boring! I'M BOOORED! Why not call it "Why Your Boss Is Stealing From You." 
Now, is that as accurate to the subject matter you're covering? No, but it's not completely irrelevant. And it's actual useful information in people's day-to-day lives. It's something that they might want to know, phrased in a way that they can understand without already being familiar with the subject you're trying to explain the basics of to them.

Here, Thought Slime coached other BreadTube content creators on how to entice clicks in order to be ranked more favorably in search results and the "up next" queue on YouTube.

Algorithmic Moderation Signals. BreadTubers also commonly discuss and act on data signals used in algorithmic moderation. In this, the signals most commonly discussed are those that play a role in high profile incidents of YouTube moderating BreadTube content. For example, on multiple occasions, right-wing users have coordinated to report, en masse, BreadTube content, thereby triggering (algorithmic) demonetization on and/or removal from YouTube. (Notably, in general, the BreadTube community disavows this practice of "brigading," and I have not come across any evidence of any analogous flagging attack by BreadTubers on alt-right content.) In these situations, when BreadTube creators have spoken out about their experiences, others in the community learn that YouTube uses reports (or flags) by users as a key data signal for algorithmic moderation. On occasions that do not involve a coordinated attack, BreadTubers have learned, through insight shared by creators and via discussion with others, that including the "wrong" keywords in metadata can trigger algorithmic sanctions. For example, when a BreadTube creator posted in r/BreadTube that their "anti-Nazi" video was deleted for hate speech, users commented with various ideas about what could have caused the removal, including visual depictions of Nazi symbols, tags, titles, and descriptions. In particular, the use of keywords as data signals in algorithmic moderation has been made most widely apparent via BreadTube discussions about the systematic demonetization of LGBTQ+ content. For example, Ana said in an interview: …from what I have seen around this conversation is people are assuming that, yes, that the algorithm just listens for certain words or reads certain words in the description and whatnot and flags those videos without human intervention, because a human person would be able to tell the nuances between, "Oh, this is a video talking about sexually explicit things that should be flagged, and this is a video of a kid coming out, which shouldn't." Using the example of a video about coming out (as gay, trans, etc.), Ana said YouTube's algorithm could "listen" for certain words in the video's metadata, which are deemed not advertiser friendly. BreadTube creator Lance explained further in an interview: …[YouTube] seem[s] to be taking quite a big stance against a lot of words, which I guess they consider their "naughty list," and unfortunately those words can include a lot of stuff related to the LGBTQ community. So words like "gay" or "lesbian" will get you flagged. Lance, like many other creators, has used the knowledge that keywords factor into algorithmic moderation to alter the language he uses in his videos. In this way, similar to Instagram influencers, BreadTube creators often have to find ways to evade algorithms in order to remain visible.
As an example of this, in a video, Thought Slime talked about substituting the word "bastard" for "bad" in the title of one of his videos, in order to avoid demonetization. I have personally observed that the transcripts for BreadTube videos will often omit or use asterisks to alter words from the "naughty list," which could be a similar proactive measure to avoid demonetization. In other words, because BreadTubers know that YouTube's algorithms scan the transcripts for videos when assessing the suitability of content, they might delete or alter words that they think might be flagged. At a higher level than data signals, BreadTubers also have knowledge of certain algorithmic techniques YouTube uses for recommending and moderating content. I touch on these in the next two sections.

Content-Based Filtering

Aside from data signals drawn on by YouTube's recommendation algorithm, many BreadTubers know that algorithmic recommendation on YouTube depends upon content-based filtering (although they do not use this term). Content-based filtering is an algorithmic technique wherein the properties of content inform recommendations (Rieder, 2017), and YouTube has publicly acknowledged making use of this technique (Covington et al., 2016). BreadTubers often discuss content-based filtering in relation to details established in the visible metadata of uploaded videos. For example, a user commented on a post in r/BreadTube: "Stuff that will [help the algorithm make connections between videos] is similarity (up to and including the same words) in titles, video descriptions, tags, hashtags, playlist names, spoken text (YouTube does text to speech)." Here, the user pointed to different kinds of content metadata (e.g., video descriptions, hashtags, and transcript text) as input data operated on by an algorithm for recommending content. In this, BreadTubers often discuss the inclusion of names in video metadata, mainly the names of other YouTubers. For example, Nick and I had the following exchange in an interview: KC: What do you think the algorithm knows, I guess? And how is it able to make these connections? Nick: It would probably be partly off of keywords, partly off of... Again, if you've enjoyed watching this video, which has this guest on, then knowing that this guest also appears in these other videos, again, through the keywords of just the guest's name. Nick and other BreadTubers' insight about content-based filtering based on names seems to stem from familiarity with Data & Society's "Alternative Influence" report, discussed in the last chapter, which emphasized the importance of cross-over videos for gaining visibility. Other conversations about the alt-right pipeline inform BreadTubers' understanding of content-based filtering as well. Recall that the alt-right pipeline refers to the phenomenon of users being recommended increasingly more extreme right-wing content. In this chain of recommendations, BreadTubers understand recommendations as resulting from common threads in the content of videos. For example, BreadTubers often talk about intervening in the alt-right pipeline by creating content that explicitly responds to alt-right figures or talking points. BreadTubers do this, in part, because they know that their videos have a better chance of being algorithmically recommended to alt-right audiences due to an overlap in topics (a minimal sketch of this kind of metadata-based matching follows below). In the next section, I will address another algorithmic technique BreadTubers discuss known as collaborative filtering.
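Before turning to that technique, it may help to make the general idea of content-based filtering concrete. The short Python sketch below illustrates one toy way such a technique could work, matching videos by the overlap of words in their visible metadata, much as the r/BreadTube commenter above described. The function names, example videos, and similarity measure are my own illustrative assumptions; the sketch is not a description of YouTube's actual system.

# A deliberately simplified, hypothetical illustration of content-based filtering:
# videos count as "related" when their visible metadata (title, description, tags,
# transcript) share many of the same words. Toy sketch for exposition only.

def metadata_terms(video: dict) -> set:
    """Collect the lowercase words appearing in a video's visible metadata."""
    text = " ".join([video["title"], video["description"],
                     " ".join(video["tags"]), video["transcript"]])
    return set(text.lower().split())

def overlap(a: set, b: set) -> float:
    """Jaccard similarity: the share of terms two videos have in common."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def related_videos(seed: dict, catalog: list, top_n: int = 3) -> list:
    """Rank the other videos in the catalog by metadata overlap with the seed video."""
    seed_terms = metadata_terms(seed)
    scored = [(overlap(seed_terms, metadata_terms(v)), v["title"])
              for v in catalog if v is not seed]
    return sorted(scored, reverse=True)[:top_n]

# Toy usage: a response video that names the same figure as a right-wing video
# scores as "related" to it purely through shared keywords.
videos = [
    {"title": "Debunking Jordan Peterson on Nazism", "description": "a response video",
     "tags": ["jordan peterson", "history"], "transcript": "jordan peterson nazism history"},
    {"title": "Jordan Peterson on free speech", "description": "lecture clip",
     "tags": ["jordan peterson"], "transcript": "jordan peterson free speech"},
    {"title": "Baking sourdough bread", "description": "a recipe",
     "tags": ["baking", "bread"], "transcript": "flour water salt yeast"},
]
print(related_videos(videos[1], videos))
# -> the "Debunking" video ranks far above the baking video for this seed

Even this crude sketch captures why the tactic of naming alt-right figures in response videos can place leftist content in the same recommendation neighborhood: the matching operates on shared terms, not on what the videos mean.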
Collaborative Filtering

Beyond content-based filtering, BreadTubers know YouTube's recommendation algorithm recommends videos based on the extent to which viewers of one video resemble viewers of other videos; YouTube has also publicly discussed the use of collaborative filtering on the platform (e.g., Covington et al., 2016). Tyler explained his understanding of collaborative filtering in an interview: "It's just like you watched a Ben Shapiro video and a Joe Rogan video, and people who watched those videos tend to watch, I don't know, a Stefan Molyneux video. So it'll show you one of his." As another example, a user on r/BreadTube explained the technique, writing: [The algorithm] builds a profile of you and the videos you're watching and it goes based on my math, people like you who watch this video were more likely to click on this other one. And if you don't, it notes that down and it gets added to its data set. In a few cases, interviewees talked about learning about collaborative filtering by extrapolating from textual cues and experiences across different sites. For example, Emma told me in an interview: Emma: If I scroll down on my [YouTube] homepage, yeah, there used to be a section that was like "ContraPoints viewers also watched these." If they were doing that to show you publicly like here, other people who watched this video that you like also liked this, then they could definitely be doing that behind the scenes. You get that with clothing companies. If I go on H&M and I pick out a swimsuit and then I scroll down, then there's a section with like, people who purchased this also purchased this and maybe you would like it as well. It was sort of that same idea. And I noticed that it's not here anymore, but it used to be, definitely used to be. KC: Do you think that could be happening with the recommendations too? Emma: I think so, yeah. I think that could be happening with the recommendations that, yeah, that other people who... Yeah, that it's just basing on sort of what other people are doing as well. Here, Emma found commonalities in the textual cues in the tailoring of content on different websites to make sense of how YouTube uses algorithms to curate content for her. While her explanation was general, she understood enough about the general principles behind collaborative filtering to easily recognize it across different websites. In understanding the basic principles of collaborative filtering, BreadTubers are able to recognize failures in its application. For example, David shared with me: I noticed people who were progressive and liked Bernie [Sanders] also watched those specific channels, so I realized this wasn't YouTube personalizing to my preferences, but to the preferences of a wide group. I would think the demographic looks something like 16-40 years of age, working/middle class, at minimum college educated or in college and of course left leaning. Some BreadTubers bristle at being profiled in this way according to certain features like demographics and location. Notably, David went on to tell me that YouTube's algorithm does not always know users' "true" demographics, but infers them from "external causes." As another example, when comparing collaborative filtering on Pinterest to that on YouTube, Emma told me: [Algorithmic recommendation on Pinterest is] similar to the YouTube thing where it's like, "Oh, you liked this?
Well, these other pinners from this community, they also like that, but then they also like this other stuff and maybe you will like it too, if we show it to you. […] I just got an image in my head of like, okay, you know a Venn diagram, right? So it's sort of like YouTube really wants to put you in the middle of a bunch of circles making a Venn diagram, but what you, or what I actually want on YouTube, is my own damn circle, all to myself, with my own stuff in it. And I don't necessarily want to be a part of 65 different, very sharply delineated communities. I just wanna have my own circle with some videos from each community that I like without necessarily having to be associated with or connected with all of the other people who are in those communities as well.

As Emma's reflections viscerally illustrate, many BreadTubers understand collaborative filtering on YouTube as fairly primitive, capturing only a fuzzy understanding of who they are and what they like. For BreadTubers, the primitive nature of collaborative filtering frequently becomes a plot point in a broader story about how platforms like YouTube do not care about individual users like them; instead, BreadTubers maintain, they only care about what will keep them profitable. In the next section, I will turn attention to practical knowledge, wherein BreadTubers understand algorithms as filtered through the discursive lens of their social world.

Practical Knowledge

Practical knowledge results from BreadTubers giving algorithms meaning based on the interests and values that define their social world. As we saw in chapter 5 with Instagram influencers, practical knowledge emerges when people fit technical knowledge with their sense of self, particularly as based on social identity and membership in different social worlds. Practical knowledge is the local, embodied significance people give to algorithms through their responses to them. BreadTubers' practical knowledge of algorithms—like that of Instagram influencers—grows from their collective self-conceptualization—their understanding of who and what the community is and what it stands for. This kind of knowledge involves understanding what algorithms "want"—what kinds of participation are deemed valuable and rewarded—and deciding whether and how to appease them. Specifically, in this section, I focus on how BreadTubers read algorithms through self-reflective anxieties about who the community fundamentally is and should be. In this reading, they come to wage their visibility war with one key strategy (among many), which seeks to reaffirm broadly shared values within the community. First, I will describe BreadTube's existential crisis that simmers beneath their collective pursuit of visibility.

As a community, BreadTube is highly fragmented. The community brings together a (relatively) wide range of political ideologies, though all subsumed by the umbrella category of "leftist." This heterogeneity invites infighting as the community hashes out their particular positions and attendant actions. In addition to infighting, many BreadTubers recognize that the community skews heavily white and is not immune to racial biases that result in the marginalization of BreadTubers of color in the community. This racial inequity factors into discussions of who constitutes "BreadTube." These schisms within BreadTube leave the community in a constant state of existential crisis.
On a practical level, this crisis concerns the sustainability of BreadTube—whether the community can continue to exist. Yet, this crisis also concerns the community's self-conscious struggle to actualize the values it endorses—namely, solidarity and equity. The community's existential crisis becomes salient for many BreadTubers as they understand algorithms as urging "rugged individualism," which conflicts with the community's collectivist values of solidarity and equity. BreadTubers believe that algorithms urge individualism by enacting a neoliberal capitalist order that compels competition and by rewarding those who participate in online conflicts with visibility. As I will show, in waging their visibility war, BreadTubers seek to embody their core collectivist values, in part, through a commitment to communal support via visibility tactics that prioritize the community over the individual. In this discussion, I suggest, the community attempts to keep itself accountable to the shared values that bring BreadTubers together.

BreadTube (Dis)Unity

The BreadTube community is notorious for existing in a constant state of existential crisis. This crisis surfaces in two prominent discourses among BreadTubers. First, BreadTubers frequently note the community's sectarian rifts, which result from ideological heterogeneity. Second, many in the community acknowledge that BreadTubers of color have been marginalized in dominant conversations about who constitutes "BreadTube," reflecting additional divisions. I summarize these key discourses below.

The community hosts a wide spectrum of beliefs, ranging from Social Democratic to Maoist, which has engendered considerable infighting. There is regular commentary within (and outside) BreadTube of the community "cannibalizing" itself (e.g., RE-EDUCATION, 2019). As one person tweeted in relation to a BreadTube clash: "The left eat their own. They are incapable of banding together for more than a few minutes before looking for the next person to cancel" (VITO the Ancestor of Men, 2019). Many BreadTubers share this opinion that the community's infighting interferes with its ability to come together for tactical unity in the visibility war (or any other form of action). More fundamentally, many recognize that BreadTube exists in a state of ontological ambiguity: who is or is not a part of "BreadTube," and what "BreadTube" stands for in relation to particular issues, is the subject of an ongoing, often heated, debate. In many ways, BreadTube's infighting and gatekeeping are the legacy of past leftist movements, which also struggled to resolve ideological differences (Diggins, 1992; Heideman, 2017). Like those movements, BreadTube has been beleaguered by disagreements over doctrine.

Alongside sectarian rifts, many BreadTubers have raised red flags about the community's overwhelming Whiteness and the marginalization of content creators of color in BreadTube spaces. For example, in her video "Why is "LeftTube" so White?," BreadTuber Kat Blaque, a Black woman, explained,

…what I've found, personally, is all too often, what I say doesn't matter as much as what a white man has said that I said. […] [O]ne of the things that I definitely have observed is that people are more willing to give my work a chance if a white person says that it's okay. Right? On the flip side, if a white person says the opposite, if a white person says that my work isn't valuable, that it doesn't have any merit, they're willing to believe that as well.
(Wilkins, 2019b)

Elaborating on Blaque's points about being ignored and discredited, BreadTube creator T1J noted that BreadTube creators of color are often treated as external to BreadTube because, frequently, "videos made by people of color, aren't really made for White people. And so they're thought of as a completely different genre" (Peterson, 2019b). Here, T1J argued that the majority white community views leftist content through a white gaze, which means that BreadTube content that does not address a white audience is often not considered a part of "BreadTube." He further suggested that the community's implicit biases, as reflected in their content preferences and selections, lead to the relative invisibility of creators of color in the BreadTube community. Acknowledging this point, a user commented on a post in r/BreadTube titled "Where are the Black Breadtubers?":

it's hard for me to go on YouTube and hear from all these intersectional people- and yet the Tubers of color have maybe 10,000 subs, at most. It's a huge disparity between the two that needs to be addressed somehow- and not ignored.

Moreover, many BreadTubers of color have talked publicly about being harassed by members of the community. For example, Marina Watanabe wrote in a Bitch article:

The next time white YouTubers are held up as bastions of leftist thought and innovation, remember that before there was LeftTube, there were women of color on the site providing essential resources to a community that was largely hostile to us. (Watanabe, 2019a)

This kind of racial prejudice and harassment in the BreadTube community has resulted in content creators of color receiving significantly less financial support than their White counterparts, which materially impacts their ability to produce content and engage with their fans (Watanabe, 2019a). Similar to the ideological schisms, BreadTube's centering of whiteness also has historical analogs in various leftist movements (Heideman, 2018).

Rugged Individualism

One principle that does unite the BreadTube community is its commitment to collectivist modes of governance, wherein individuals work together to serve a common good. Ironically, in spite of this commitment, the community remains plagued by disunity. The community's anxiety over this contradiction materializes in their reading of algorithms as encouraging rugged individualism. Drawing on technical knowledge of the goals that algorithms serve, BreadTubers see algorithms as tools platforms use to encourage behavior that financially benefits them. Two particularly profitable behaviors BreadTubers zero in on are competition and conflict. The community has come to read algorithms as compelling individualism through these behaviors, which works against the community's core values of solidarity and equity. I explain this reading of algorithms below, first addressing BreadTubers' belief that algorithms encourage competition, then addressing BreadTubers' belief that algorithms encourage conflict.

Survival Under Capitalism

Many in the BreadTube community read YouTube's algorithms as enforcing a neoliberal capitalist order, which encourages competition. This reading inspires inner turmoil among the community as a result of the incongruity of using YouTube while also speaking out against capitalism.
Yet, with no viable alternatives to YouTube, as one BreadTuber told me in an interview, many in the community see themselves as coping with “survival under capitalism.” In making sense of algorithms in this way, in relation to the community’s understanding of broader power structures, BreadTubers construct practical knowledge. BreadTubers advocate for social and economic equity. Yet, this position conflicts with their simultaneous awareness that YouTube’s recommendation algorithm revolves around engagement metrics that distinguish between creators by assigning value to them. In this, 210 BreadTubers read visibility metrics (e.g., followers, subscribers, likes, etc.) as signifiers of status, imposed onto the community by platforms. As BreadTube creator Peter Coffin explained in a video, visibility metrics “tell you how popular something is. They tell you how credible the person saying something is and they provide an incentive to try to create content that will get those metrics” (Coffin, 2019). By emphasizing visibility metrics, BreadTubers believe YouTube creates different “classes” of BreadTube creators. Mo Black explained this in a Medium article: There are people who have an incredible amount of social capital like Contrapoints, Hbomberguy, Oliver Thorn [Philosophy Tube], and the like, who control the fluctuations of the market through what they decide to Tweet or talk about. There’s a “middle class” of content creators who are associated with them who have enough social capital so sway an audience, but not enough to be immune from pressure at the top. And then there’s everyone else. No sway whatsoever. No sizeable control over what gets brought in the spotlight or how conversations unfold. We have to climb on top of other people’s work and ride drama and controversy to be seen or heard. (Black, 2019) This statement encapsulates a common view within the BreadTube community that the BreadTube creators with many subscribers, many patrons, many views have been chosen as “elites” by algorithms that reward visibility with more visibility and, so, allows them to accumulate financial, social, and cultural capital. Some BreadTubers see this hierarchization according to visibility metrics as dividing the community into “haves” and “have nots.” BreadTubers further see the enactment of a hierarchical order as essential to the design of YouTube, as BreadTube content creator Emerican Johnson explained in a video: YouTube will never give us tools to work together as creators, because they want us all to have this capitalist mindset. If you watch--I don't know if you ever watched those little training videos that YouTube gives you. They give you little videos on like how to start a YouTube channel and how to be successful. They very much want you to be in the mindset that you were a capitalist, you are out here to compete, you were out here to fight with everyone else elbow to elbow to push yourself up through the algorithm. They are not going to give us communal tools to operate… (Johnson, 2019) Feeling obliged to adopt this “capitalist mindset” provokes unease, as content creators feel forced 211 into competition with one another. As Johnson also exemplified, many BreadTubers believe platforms require individuals to compete independently, in the absence of platform-provided tools that would permit collectivism. Thus, as Peter Coffin put it in a video, BreadTubers see the attention economy structured by algorithms as “a pure expression of neoliberal capitalism” (Angie Speaks, 2019). 
The Algorithm Demands a Sacrifice Alongside compelling competition, many BreadTubers also believe that algorithms discourage unity among the community by rewarding participation in BreadTube’s “drama.” In this, they construct practical knowledge by reading algorithms through the discursive lens of BreadTube (dis)unity. These BreadTubers believe that infighting among the community tends to grab people’s attention, which leads to increased engagement, which leads to increased visibility. Thus, members of the BreadTube community refer to “algorithmically-driven bouts of conflict” (Khaled, 2019), reading algorithms as inflaming existing rifts and stymieing attempts to achieve unity. BreadTube creator Re-Education addressed this point in a video: My point is that we're doing [inciting discord] to ourselves what the state has been doing to us for years, and it makes things even more complicated because it's all a commodity now. It's all being sold. They're profiting off of our drama. So, not only are we cannibalizing ourselves, but we're doing it in a way that destroys our movement and also feeds the capitalist system at the same time. Drama always gets the most views. Drama always gets the most attention. Who gives a fuck about history? Nobody. I want to know more about the drama. You can say something good about the drama and then you'll get positive social capital, or you could say something bad about the drama and you'll get negative social capital, but none of it really matters, because at the end of the day, the capitalists are the ones that make all of the money. Every single tweet, every single repost, every single like that you make online is all a transaction. It's all a way for them to gain more eyes, gain more attention, and thusly more ad revenue money. But every single time, it chips away a little part of our movement. We can still criticize each other. Of course we can. But that doesn't mean that we have to cannibalize ourselves every time the algorithm demands another sacrifice. (RE-EDUCATION, 2019) Similarly, speaking of “Twitter pile-ons,” Tyler told me in an interview: You'll have literally thousands or tens of thousands of people calling you a terrible 212 person. And sometimes it may even be justified, but sometimes it can be overblown. And then people who receive that think that's authentic, but really, the algorithm encouraged it. As these comments suggest, many BreadTubers believe YouTube’s algorithms give those who participate in “drama” a leg up in terms of visibility, thus encouraging others to join in the fray in order to gain cultural capital. By encouraging this behavior, BreadTubers believe algorithms fan the flames of existing tensions. Some BreadTubers additionally read algorithms as hardening BreadTube’s divisions. For example, one user commented on a BreadTube video: I reckon "lines in the digital sand" [about who is or is not part of BreadTube] are getting drawn partly just because of human tribalism, but this is massively amplified by the recommendation algorithms. They're clustering people based on their viewing habits, and this is accelerating and fragmenting ideological polarisation. Which I think is even now affecting politics in wider society. Thus, as a side effect of encouraging conflict, some BreadTubers see algorithms as deepening existing rifts in the community by siloing audiences on YouTube. In a video, T1J took up this reading and linked it to BreadTube’s racial divides. 
As referenced earlier, he argued that BreadTube’s white bias resulted in YouTubers of color being “thought of as a completely different genre,” not part of the BreadTube “canon.” Later, he continued, “YouTube’s algorithm no doubt considers them a different genre as well,” suggesting that BreadTube-flavored channels by creators of color would be seen as separate from other BreadTube content by YouTube’s algorithm. Making a similar point, a BreadTuber tweeted: While I don't doubt that it's not the whole problem, it does seem that YouTube's algorithm is partly to blame for the whiteness of LeftTube's canon. It never showed me Kat Blaque until the LeftTube is white video, and I had to find out about TJ1 from a link Jack Saint put up. As T1J’s comments and this tweet suggest, some BreadTubers believe the algorithms contributed to BreadTube’s racial divide, by reproducing implicit biases in viewing behavior among the 213 community. In short, BreadTubers understand algorithms as encouraging both competition and conflict. In both cases, they see algorithms as ultimately exacerbating existing divisions in the community and pitting members of the community against each other. Thus, the community sees algorithms as a counterforce to their commitment to collectivism. In this way, BreadTubers understand algorithms as undermining the community’s attempts to fully realize solidarity and equity. In the next section, I will demonstrate how BreadTubers respond to these readings of algorithms to reaffirm their core values of solidarity and equity in their pursuits of visibility, in spite of pressures imposed by YouTube’s algorithms to adopt rugged individualism. For the Greater-Good The BreadTube community recognizes itself as a community divided by conflicting ideologies and implicit racial bias. Believing that algorithms encourage competition and participation in “drama,” many in the community view YouTube’s algorithms as exacerbating the community’s already fractured state by emphasizing individualism. Yet, as BreadTubers self- consciously read the community’s disunity into their understanding of YouTube’s algorithms, it shapes their collective pursuit of visibility in ways that, from an instrumental perspective, may seem illogical. For many BreadTubers, reflecting on their values of solidarity and equity results in the adoption of visibility tactics that may not be the most beneficial, easiest, or cost-effective tactics for individuals, but serve the greater good of upholding the sustainability and integrity of the community. Such “greater-good tactics” require individual BreadTubers to make sacrifices and/or assume risks that they might not otherwise if they did not believe that doing so would contribute to the core cause the unites BreadTube: achieving ideological dominance on YouTube 214 via success in the visibility war. In other words, as BreadTube creator Emerican Johnson advised in a video: “You know, ‘the rising tide will lift all the boats’ and we have to mindfully, thoughtfully apply ourselves to these algorithms, to these spaces.” Through their pursuit of visibility, the BreadTube community reaffirms its own ideological commitment to collectivist systems of governance, thereby resisting the disciplinary force they feel from YouTube’s algorithms. As the community marshals technical knowledge of algorithms with discourses on BreadTube values to formulate a sense of how to act, they construct practical knowledge. 
This is particularly on display in the ways BreadTube has built an infrastructure of communal support for lesser-known creators. In spite of algorithmic prescriptions for individualistic pursuits of visibility, the community devotes attention to finding tactics for making lesser-known creators visible and giving them the resources to succeed. Calls for “bigger” BreadTube creators to “platform” or “signal boost” lesser-known creators have resounded in the BreadTube community (e.g., Gutian, 2019a). Yet, in a system that prioritizes competition and conflict, supporting lesser-known creators offers few benefits for individuals and, indeed, incurs significant costs. Mainly, the costs revolve around BreadTube’s tendency to be self-critical and turn against anyone who expresses a “bad take.” Creators worry that giving a lesser-known creator a “shout out” could sully their reputation if the lesser-known creator turns out to be “problematic” or “controversial” in some way. As Hbomberguy, a prominent BreadTube creator, explained on Twitter why he and other creators are often reluctant to platform others: …if you share a video you like and the creator turns out to suck in some capacity people will remember and throw it in your face later. Or vice versa, they get associated with me and i basically fucked up their career. (Brewis, 2019) More visible creators—even the ones with the most clout—face significant backlash if the creators they support end up provoking the rancor of the community. Consequently, BreadTube 215 creators’ fear of being judged guilty by association makes them wary of collaborating with lesser-known creators, or even giving them a shout out. The risks associated with supporting lesser-known creators can be minimized by vetting creators. However, vetting takes time and energy and often interferes with creators’ ability to maintain the momentum of their own channel. BreadTube creators are well aware of the demands of the visibility game, and feel considerable pressure to churn out content on a regular basis in order to remain visible. In this, they say that YouTube’s recommendation algorithm rewards regular and frequent publishing and when they reduce their productivity, the algorithm penalizes the reach of their subsequent videos. Emerican Johnson discussed this in a video: …we're on this treadmill where we work and we work and we work. If we take three or four days off, because, you know, maybe sometimes I get sick, sometimes Luna [the other half of his YouTube channel] gets sick, sometimes we have to go out of town for whatever reason, you know. I'm about to go do a visa run to get my visa renewed, so we're gonna be in Cambodia for a few days, and that stresses me the hell out, because if we miss a week or two on our YouTube channels, the algorithm hits us so hard, and then the monetization drops. And, you know, we don't really have any savings right now, okay. We're living paycheck to paycheck and the pressure to continually pump out content for ourselves, for our own individual channels is extremely high. […] you have to realize that that many many many of the content creators who seem successful from the outside are in the same kind of treadmill situation and our pressure to keep our channels stable and keep our income stable, it's incredibly high. As a result of algorithmic recommendation on YouTube, BreadTube creators like Johnson feel forced to compete for visibility by continually producing new content. 
Any activities that divert attention away from this can undermine their ability to maintain visibility. For these reasons, if BreadTube creators only cared about their own visibility, it would make little sense to devote time, energy, and resources to supporting other creators. Yet, supporting other creators is widely championed by the BreadTube community. Johnson, in a continuation of his above comments, exemplified this in pledging his support in spite of acknowledged risks: 216 I want to say this very clearly: we're not going to stop platforming these people. We're not going to stop platforming people who suffer from oppression and have righteous anger. We're going to continue to try to find small leftist creators to support. We're going to continue to interview activists around the world. However, I want you to understand that that's a big risk that we're taking onto our shoulders when we do that. There are always effects and you can see on the algorithm, you'll see the videos where we lose the most subscribers, a lot of those videos are videos where I'm interviewing smaller content creators or people who, for whatever reason, you know, the audience doesn't respond to well, and I'll lose subscribers and I'll lose patrons every single time. That is just a fact of life. And I can kind of understand why bigger leftist YouTubers would be concerned with that. As Johnson’s comments hint, BreadTubers’ commitment to supporting other creators is a matter of embodying the community’s values—namely, prioritizing solidarity with the rest of the community, particularly those facing structural barriers. This commitment is also a pragmatic attempt to increase the overall visibility of the community. BreadTubers know that YouTube’s recommendation algorithm uses collaborative filtering, as discussed above. Thus, many BreadTubers believe that for individual creators to increase their visibility, they must establish links to more (or equivalently) successful content creators, who have a substantial following. Many prominent creators’ decision to share the “wealth” of their visibility with others hinges on this understanding. They believe doing so strengthens the connection of lesser-known creators with the rest of the BreadTube network, and, thus, increases the visibility of BreadTube as a whole. Creators share their “wealth” in different ways. The most basic tactic is to link to other creators in video descriptions, reference them on YouTube and elsewhere, share their videos on different platforms, and so on. Thought Slime, a well-known BreadTube creator, offered an example of one creative way he platforms lesser-known creators, which he calls “the eyeball zone.” In an interview, Thought Slime described how it works: But I think one thing that does [help BreadTube’s visibility], and this is something that I take very seriously and work hard on, is we need to shout out one another and show each other smaller channels and help them grow. And so that's something...In every video, I 217 shout out a smaller channel. I made it a whole bit. And I purposefully added all of these theatrical elements to it because I knew if I didn't, people would fast forward. People ask why I makes such a production out of... I call it “the eyeball zone.” I have a new, unique intro every video where I... Something spooky happens. It's a whole thing that I do. I do that because I know that if I didn't, people would be like, "Oh, this isn't what I came here for." 
They would press that little 30-second, skip forward button until I was done talking about it. Similarly, the BreadTube subreddit has spent time strategizing how to make lesser-known content creators more visible on YouTube. For example, one tactic the subreddit has implemented is the use of “flair” to highlight lesser-known content creators. Flair is an identifier, or tag, attached to users’ names when they comment or post. Many subreddits use flair to organize content. This tactic of highlighting lesser-known content creators with a special flair grew from a suggestion by a user in r/BreadTube. The user wrote: In the current state of the subreddit, posts of new creators are often difficult to find among videos of more well known creators. The more well known a creator is, the more likely the audience is to comment and engage with the post linking to the video. This leads to reddit's algorithm making the content of already known creators more visible, since it strives to maximise engagement. However, I think the subreddit can and should support smaller creators. The first step in that is making those creators more visible. Let's hack Reddit's algorithm with one simple (and legal) trick. This can simply be circumvented by enabling a flair for new creators. This would enable interested members of the subreddit to search for videos specifically of creators that are just starting out, while not hindering the normal enjoyment of the subreddit. Once new channels are more visible, promoting them through other methods is also easier. If you can search for new creators more easily, you can look for people to do crossovers with more easily. Anyone wanting to link to a new creator could find the post could search for this creator more quickly. One final example of a greater-good tactic is the LeftTube Diversity Incubator (LDI), launched by prominent BreadTube creators, Peter Coffin and Angie Speaks. As Coffin and Angie Speaks explain, LDI is a “‘find and fund’ operation,” wherein “the point and purpose is finding would- be creators and arming them with what is necessary to become a self-sufficient leftist YouTuber, alleviating the over-representation of white people in the leftist YouTube space…” (LeftTube Diversity Incubator, 2020). In practical terms, the operation involves providing financial and professional support to BreadTube creators of color—for example, via live stream fundraisers, 218 volunteer labor, and promotion. This initiative was designed with the explicit goal of subverting YouTube’s capitalist structure in order to encourage a more diverse BreadTube than they believe the platforms’ algorithms naturally permit. As Coffin and Angie Speaks explained: The more people we are able to put the message of what capitalism really is in front of (as well as general calls for collectivization and cooperation), the more potential for total systemic change exists. However, we wish to do what is possible to subvert the actual system at play, namely, we want to do what is possible to address the material conditions of people who want to be uploading leftist YouTube content but can not. As this statement suggests, with the LDI, Coffin and Angie Speaks seek systemic change by working outside of the system in place, namely the capitalist order enacted via YouTube’s algorithms. 
While supporting lesser-known creators can be risky, time-consuming, and resource-intensive, the BreadTube community sees it as a sort of redistribution of (visibility) wealth, uplifting those facing material constraints, and generally increasing the collective visibility of the community. The above-described tactics demonstrate the BreadTube community's firm belief that supporting other BreadTube creators is something they "should" do. In formulating these tactics, we can see practical knowledge in action. How BreadTubers mobilize their knowledge about algorithms extends beyond an instrumental understanding of what tactics will effectively increase visibility, as based on technical knowledge. The discursive ideals of individuals' social worlds guide responses to algorithms, as we also saw in chapter 5 for Instagram influencers. For the BreadTube community, the pursuit of visibility must square with the community's shared values of solidarity and equity. Supporting lesser-known creators passes this test by invoking BreadTube's ideological commitment to dismantling systems of power and promoting solidarity among leftists. For the BreadTube community, the value of visibility tactics is assessed by how well they support the community's visibility, as well as its integrity and sustainability. In orienting themselves within this calculus, BreadTubers construct practical knowledge.

Conclusion

BreadTubers require different kinds of knowledge to meet their instrumental needs and participate in their social world. Different kinds of knowledge serve different purposes. Technical knowledge about the design of YouTube's algorithms—the goals algorithms serve and the limitations inherent to their design—and about the methods they use to perform their functions allows BreadTubers to devise responses that increase the community's visibility. Technical knowledge gives BreadTubers a structure within which to imagine possible tactics for contributing to BreadTube's success in the visibility war. This kind of knowledge has been the main focus of most scholarly and popular discussions about algorithmic literacy and has, thereby, been the most valued. Yet, in this chapter, I have offered evidence that, while technical knowledge has value for users like BreadTubers, other kinds of knowledge matter as well. In particular, I have demonstrated that practical knowledge also shapes pursuits of visibility beyond technical knowledge. Practical knowledge captures how people grasp algorithms as elements of their social worlds. With practical knowledge, people understand that their responses to algorithms define them, which means their responses must be cast in the shadow of the values of those worlds. Thus, practical knowledge often begins with an understanding of what algorithms "want." The BreadTube community understands YouTube's algorithms as requiring individualistic pursuits of visibility. BreadTubers see the algorithms as enacting a neoliberal capitalist order that encourages competition and as rewarding participation in conflict. As a community struggling to gain and maintain coherence and stability, BreadTubers are careful to preserve the values of their worlds that unite them—namely, solidarity and equity. The tactics they deploy in the visibility war, as the core activity of their social world, present an opportunity to reaffirm their shared values.
Thus, while technical knowledge gives BreadTubers important insight about possible visibility tactics they could implement, their practical knowledge acts as a lodestar for a pursuit of visibility that serves the community’s collectivist values and, thereby, upholds its sustainability and integrity. In chapter 5, I demonstrated how Instagram influencers’ knowledge of algorithms represents a means by which they perform their identity. In this chapter, I followed up on this thread, adding that knowledge of algorithms also represents a means by which people subversively reassert who they are in the face of algorithms’ disciplinary force. In this case, BreadTubers read YouTube’s algorithms as enacting an individualistic order that pits people against one another in order to determine their worth. This order fundamentally conflicts with BreadTube’s commitment to social systems defined by comradery and cooperation. In this way, BreadTubers’ responses to YouTube’s algorithms demonstrate that as we define algorithms, we simultaneously define ourselves with practical knowledge—our conscious decisions about how we respond to algorithms. Here, we see critical algorithmic literacy most clearly on display: as BreadTubers have exemplified, understanding how algorithms compel certain behaviors and impose a particular sociopolitical order gives people space to intervene in these algorithmic attempts to make our worlds. In closing this chapter, it is worth noting that other social worlds may find it more challenging to oppose algorithmic directives than the BreadTuber community has. While many BreadTube creators have publicly discussed their financial struggles, and many others have talked about challenges they face as members of different marginalized groups (e.g., disabled, LGBTQ+), it is still possible that the community benefits from social advantages that allow them to resist algorithmic power in ways that would not be possible or sensible for others. Building 221 communal support for other BreadTubers, as discussed, requires time and money. Further, as discussed earlier in relation to Instagram influencers, pursuing visibility, itself, similarly demands time and money, which has meant that the most successful influencers tend to come from privileged backgrounds (Duffy, 2017). Possibly, the ability to subvert platform algorithms as BreadTube has requires a degree of social advantage that makes it less accessible to those actively struggling against other structural barriers—for example, poverty. Further, as we have seen, while BreadTube is conscious of its racial biases and has made attempts to correct for them, BreadTubers of color may nevertheless be marginalized in efforts to build an infrastructure of communal support. Thus, while some elements of the community’s response to YouTube’s algorithms are explicitly self-critical—for example, recognitions of failure to fully embody a commitment to racial justice, as seen with the Leftist Diversity Incubator—it remains to be seen how effective such actions will be in actualizing solidarity and equity in the community. 222 CHAPTER 8: Discussion and Conclusion Algorithmic literacy has been hailed as a vital skill of the 21st century by various stakeholders (e.g., Cantrell, 2019; Davidson, 2011; Hargittai et al., 2020; Parisi, 2020; Rainie & Anderson, 2017). Yet, these calls have been issued in the absence of a clear understanding of what “algorithmic literacy” really means. 
The body of research exploring what people know about algorithms is not yet well-developed. The body of research exploring the outcomes of algorithmic literacy is even less well-developed. Calls for greater algorithmic literacy use the term to gesture towards a range of different skills, though mainly prioritizing technical understanding of algorithms. Sometimes this means learning to code or think computationally (e.g., Davidson, 2011; Wing, 2014); sometimes it means understanding the mechanics of algorithms and how they operate (e.g., Hargittai et al., 2020; Parisi, 2020). Many of these calls set the goal of algorithmic literacy as preparing people for the full realization of an algorithmic society, as part of the utopian promise of a post-industrial life. Yet, many of these calls fail to recognize algorithmic literacy as a matter of concern not only for industrial progress, but also for individual and collective empowerment. In this dissertation, my intent was to help contribute empirical insights to the nascent body of work on algorithmic literacy, but, more importantly, to interrogate early assumptions about what this concept means. In particular, I wanted to draw attention to what has, thus far, been an under-studied aspect of algorithmic literacy: its relationship with power. By investigating what people know about platform algorithms across two case studies, I have developed the conceptual framework of critical algorithmic literacy, offering three defining characteristics that stand as provocations to past and present discussions of algorithmic literacy: 1) platforms structure knowledge building around (platform) algorithms, 2) algorithmic literacy is situated within the particular contexts of people's social worlds, and 3) algorithmic literacy allows individuals to reassert their own agency in the face of algorithmic power. I summarize these points below.

At every point in the process of building knowledge about algorithms, I found that platforms play a central role in how Instagram influencers and BreadTubers learn about and make sense of algorithms. I introduced the concept of platform epistemology in order to capture the ways that platforms shape the construction and legitimization of knowledge about algorithms. Through their design choices, in activating connections between users, and in connecting users to content based on inferred interests, platforms orchestrate the flow of information about algorithms. By shrouding their algorithms in secrecy and closing off access to user data from which insight could be drawn, platforms also set the conditions for uneven distribution of epistemic authority on algorithms. Platform epistemology symptomizes platforms' growing power in the "platform society" (Van Dijck et al., 2018), particularly the power asymmetry between platforms and users (Andrejevic, 2014; Zuboff, 2019). Although this power is not always exercised consciously or deliberately, it ultimately allows platforms to structure what, how, when, and to what extent people learn about algorithms. Platforms do not merely passively support learning about algorithms; they actively shape what is and can be known about them.

Although platforms shape learning practices, I found that the interests, ideologies, sites, and activities of Instagram influencers' and BreadTubers' social worlds are fundamental to what they come to know about algorithms. In other words, critical algorithmic literacy is situated.
It is inextricably intertwined with the growth and development of identity, as contoured by people’s participation in social worlds. People learn about algorithms only as they become relevant to the social relations and practices of their social worlds. Once they gain relevancy, algorithms gain 224 meaning in relation to the discourses that make up these social worlds. Thus, learning about algorithms helps people move from peripheral to full participation in a social world (Lave & Wenger, 1991). The situatedness of critical algorithmic literacy is evident in all the ways Instagram influencers’ pursuit of knowledge intersects with their pursuit of influence by way of visibility, and in all the ways BreadTubers’ shared enterprise of waging a visibility war acts as a centripetal force in the community’s knowledge building. Last but not least, I have argued that critical algorithmic literacy should be thought of as a tool for confronting power relations that algorithms mediate. Like all forms of knowledge (Collins, 2002; Foucault, 1980; Haraway, 1988), critical algorithmic literacy fundamentally involves negotiations of power. Platform algorithms mediate structural power, but also the more local power asymmetry between platform owners and users. This power asymmetry is particularly pronounced for those wishing to earn an income and/or effect social or political change via platforms, as platforms use algorithms to enact and enforce parameters for (un)desirable participation. I showed that both Instagram influencers and BreadTubers recognize the force of algorithms as they impose a particular politic-economic order on pursuits of visibility. What both communities know about algorithms has given them leverage for asserting their own agency and reaffirming the values that define them. These three prominent features of critical algorithmic literacy have many implications for the governance of algorithms. While mainstream discussions of governance tend to emphasize government regulation, governance includes a broader spectrum of interventions taken by individuals and institutions to “reinforce benefits and minimize risks” of algorithms (Saurwein et al., 2015, p. 37). I suggest that critical algorithmic literacy constitutes one of many “ingredients” that make up effective governance. The features of critical algorithmic literacy also have 225 implications for future work on algorithms, particularly the growing domain of work investigating what people know about algorithms. Such implications concern not only approaches to investigating and analyzing, but also framing and communicating what people know about algorithms. I explore these implications below, first discussing platform epistemology, then the situated nature of algorithmic literacy, and finally configuring critical algorithmic literacy around a goal of empowerment. Platform Epistemology: Public vs. Private Interests for Better Governance Platform epistemology—the ways platforms prescribe the conditions under which knowledge about algorithms may be constructed and legitimized—involves the ongoing negotiation of private interests (of platforms) and public interests in the platform society (Van Dijck et al., 2018). Van Dijck, Poell, and de Waal (2018) describe the platform society as the result of a process by which platforms are “gradually infiltrating in, and converging with, the (offline, legacy) institutions and practices through which democratic societies are organized” (Van Dijck et al., 2018, p. 2). 
Just as platforms shape how we produce and distribute news, keep healthy and fit, and learn in formal educational settings (Van Dijck et al., 2018), so too do they shape how we learn about platform algorithms. Yet, what benefits platforms does not necessarily benefit the public. Platform epistemology evinces various dilemmas with regard to promoting algorithmic literacy as a public good. As I will discuss, these dilemmas indicate that governing platforms in ways that will support algorithmic literacy requires a joint sense of responsibility across multiple actors and sectors (Van Dijck et al., 2018). In addressing these dilemmas, we can think more deeply about how to achieve more widespread and richer understandings of algorithms. In what follows, I will first discuss how platforms foster unevenness in epistemic 226 authority as a result of their lack of transparency and openness about their algorithms. Next, I will discuss how platforms exacerbate inequalities embedded in the online circulation of information about algorithms. Finally, I will discuss how platforms’ design choices can enable and constrain learning about algorithms, which makes them relevant to governance efforts. In discussing these three dilemmas, I affirm the need for greater regulatory oversight of platforms (and their algorithms). Manufacturing Uneven Epistemic Authority What qualifies as legitimate knowledge about algorithms rests on negotiations of power, which creates an uneven terrain of epistemic authority. In particular, platforms have an outsized role in authorizing knowledge. By making information about their algorithms scarce, as I have demonstrated, platforms create a market for this information and urge the validation of it through the logics of the visibility game. Thus, as evident in the case study of Instagram influencers, so- called gurus can profit substantially from selling their insight about algorithms to others. This marketization of insight about algorithms closes off access to one key source of information for those who cannot afford to pay for it. Further, the coding of success in the visibility game as epistemic authority on algorithms has important implications. First, because it further incentivizes pursuits of visibility, it materially benefits platforms by subsidizing user activity. This is especially the case since gurus must rely on platforms to market their training services. Second, making epistemic authority among users conditional on success in the visibility game introduces barriers to achieving epistemic authority since those with greater social advantages have a leg up in the visibility game (Duffy, 2017). Both playing the visibility game and learning about algorithms require time and resources, which, of course, are not equally distributed. This ultimately allows epistemic authority to be concentrated among those with greater social 227 advantages. This “hegemonization” of epistemic authority may make it difficult for those disadvantaged by structural inequalities to fit new and/or contradictory knowledge claims within the prevailing wisdom. In other words, the visibility game acts as a credentialing mechanism that determines who can speak authoritatively about algorithms, and which ultimately reenacts existing structural inequalities. In turn, being able to speak authoritatively about algorithms helps individuals gain visibility due to the marketization of information about algorithms. 
Second, platforms also assert their own epistemic authority by sharing few details about their algorithms and restricting access to user data from which insight could be drawn. In doing this, platforms offer users and others little ground upon which to stake claims about algorithms that conflict with what platforms themselves affirm. While there are limits to what even platforms can know about their algorithms (Ananny & Crawford, 2016; Burrell, 2016), their lack of transparency and openness fosters ambiguity in what can or cannot be known about algorithms. This ambiguity ultimately allows platforms to refute claims about their algorithms that they may not actually be able to disprove, or that would take empirical investigations to invalidate. In short, platforms clinch their epistemic authority through secrecy. In this way, platforms can use public statements and other communications to inspire doubt among and urge the silence of those who suggest algorithms are operating problematically. To put this in context, recall the racial discrimination lawsuit brought by black YouTubers mentioned at the beginning of this dissertation. Perceptions of YouTube as the epistemic authority on its algorithms will inevitably make it difficult for the plaintiffs to make their case. By retaining extensive epistemic authority over their algorithms, to the exclusion of most others, platforms can exert pressure on public understandings of algorithms—both what the public can and does know about algorithms. This gives platforms the ability to influence governance efforts by undermining the credibility of critical claims about algorithms when it suits them. Thus, ultimately, platforms' semi-monopoly on epistemic authority allows them to substantively impact the public's ability to effectively govern algorithms in its interests.

Both the hegemonization of epistemic authority and platforms' semi-monopoly on epistemic authority can be minimized with greater transparency on the part of platforms, but this will require regulation. Greater transparency—particularly, transparency in which information about algorithms is made equally accessible and intelligible to all—would decrease the value of information about algorithms, which would decrease incentives for individuals to sell their insights, which, in turn, would help lessen informational divides. Greater transparency would also allow stakeholders outside of platforms to learn about algorithms more easily and comprehensively, which would limit platforms' semi-exclusive claim on epistemic authority. However, years of scholarship on algorithmic transparency have not given any indication that platforms will endeavor to be more transparent of their own accord. Thus, the problems related to uneven epistemic authority under platform epistemology lend further support to the chorus of calls for government regulation that would compel greater transparency on platforms' part (e.g., Citron, 2007; Diakopoulos, 2016; Pasquale, 2015; Zarsky, 2016).

In the absence of regulation, the black box need not be completely opened for researchers to shed some light on algorithms. As previous work suggests, empirical investigations of algorithms—for example, audits, reverse-engineering, interviews with developers (Bucher, 2018; Kitchin, 2017)—can reveal important insights about algorithms that can be conveyed to the public. This underscores the particular need for ongoing public scholarship in this area.
It is not enough for us to share what we learn about what platform algorithms do and with what effect with policymakers and other researchers. This information needs to meet people where they are 229 and must be translated into the contexts of their social worlds in order to compensate for the unevenness of epistemic authority engendered under platform epistemology. One lesson—for all stakeholders—essential to remediating uneven epistemic authority is understanding epistemic flexibility—that algorithmic outcomes, and experiences with algorithms, differ depending on who you are and different experiences permit the cultivation of different knowledge claims. In other words, recognizing that knowledge about algorithms is situated can help counteract absolute claims to epistemic authority. I will come back to this point in a later section. Automating Inequality in Algorithmic Literacy Another important element of platform epistemology concerns the ways that platforms orchestrate flows of information about algorithms with algorithms. Platforms deploy algorithms to shape whom users connect with and which information they see in their feeds. As I have shown across the two case studies, acquiring information about algorithms via discussion with others and via exposure to media content on platforms constitute two key ways people learn about algorithms. Yet, in subjecting flows of information to algorithmic processes, platforms make access to information about algorithms conditional—some users will be stronger attractors of mediated discussions about algorithms than others. Algorithms have become increasingly central—or, at least less peripheral—to certain interest domains that are stratified across populations due to patterns of socialization. This suggests baseline inequalities in access to opportunities for learning about algorithms. Under platform epistemology, these inequalities may be exacerbated. In this dissertation, I addressed multiple interest domains that feature algorithms. Instagram influencers have an interest in digital marketing, where algorithms have become a part of longstanding efforts to ensure strategic communications reach the right audiences (Turow, 2011). The BreadTube community’s 230 interest in politics brings them into contact with discussions about the role of algorithms in electoral politics and the rise of alt-right ideology, among other matters. Taking BreadTubers as an example, an interest in politics correlates with socioeconomic background (Alfred et al., 2019; Prior, 2007; Strömbäck et al., 2013), which seems to be borne out by what we can extrapolate from characteristics of the BreadTube community.20 BreadTubers’ political interest motivates them to seek out and engage with relevant discussions and content on different platforms, as discussed in chapter 6. These practices, as well as the practices of BreadTubers’ connections and choices made by content publishers all get translated into algorithmic models of political interest on platforms, which ultimately silo political content (Thorson et al., 2019; Thorson & Wells, 2016) and, so, information about algorithms in this domain. In this way, algorithmic models may reinforce inequalities in flows of information about algorithms and, thereby, widen algorithmic knowledge gaps (Cotter & Reisdorf, 2020). In Virginia Eubanks’ terms (Eubanks, 2017), platforms may automate existing inequalities in opportunities for learning about algorithms online. 
20 If BreadTubers resemble other young leftists, they are more likely to be highly educated (Thompson, 2019). As active YouTube and Reddit users, BreadTubers are doubly likely to be highly educated, as well as from higher income households and white (MarketingCharts, 2019).

The connection between interest domains, exposure to information about algorithms, and algorithmic curation on platforms has two principal implications. First, "ignorance" about algorithms should not be treated as the result of an "irrational" indifference—an individual failure in motivation or failure to see the value of knowing about algorithms. Interest in domains likely to feature algorithms depends upon the social, cultural, and historical context of individuals' lives. Second, it suggests that efforts to promote algorithmic literacy cannot guarantee an equitable distribution of information about algorithms. Without any intervention into the design of platforms, formal support for learning about algorithms will not level the playing field. Yet, as I will discuss further in the next section, redesigning platforms to create more equitable access to information does not easily reconcile with platforms' commercial interests. Still, as a first step to address this issue, more research is needed to assess the extent to which algorithmic processes on platforms deepen algorithmic knowledge gaps. With such investigations, we can then have more informed discussions about what would need to be done—in other words, how platforms and their algorithms could be designed in ways that minimize or eliminate inequities in exposure to relevant information and, so, literacy outcomes. In the next section, I turn to addressing how platform epistemology also suggests that efforts to promote algorithmic literacy must include considerations of platform design.

Platform Design for Algorithmic Literacy and Governance

Platform epistemology includes the recognition that platforms' design choices directly impact opportunities for learning about algorithms. This is because people learn about algorithms, in great part, by observing, reflecting on, and empirically investigating platform feeds, queues, textual cues, visibility metrics, and other platform features. Experiential learning was evident across members of the two social worlds under study in this dissertation, and has been shown more broadly as well (Cotter & Reisdorf, 2020). Consequently, platform decisions to introduce, remove, or tweak platform features can afford or constrain learning, even when this was not consciously intended in the design phase. For example, as suggested in earlier chapters, design choices like eliminating like counts on Instagram or removing the related channels feature on YouTube constrain what users can learn about algorithms by eliminating the visibility of key details about algorithmic processes from user interfaces. The importance of such features for learning corresponds with the well-established importance of feedback—"communicating the results of an action"—in the domain of interaction design for how people make sense of technologies (Norman, 2013, p. 23). My findings suggest that design choices can introduce or eliminate feedback mechanisms that support algorithmic literacy in sometimes unanticipated ways.
Thus, I suggest that the scope of longstanding calls for making algorithmic systems more visible can be broadened beyond questions of explanation (Association, 2017; Pasquale, 2015; Rader et al., 2018) or the intelligibility (Bellotti & Edwards, 2001; de Laat, 2017; Pasquale, 2015) to also include attention to design choices. Viewing design choices as relevant to algorithmic literacy means introducing a new evaluative criterion to the design process—alongside other criteria like auditability, fairness, and interpretability (e.g., de Laat, 2017; Diakopoulos et al.) —which entails friction between the interests of platforms and public interests. Though design choices can support learning among users, this is not the principal concern (perhaps not a concern at all) for platforms. Companies’ principal concern in making changes to platforms is whether such changes improve key performance indicators—for example, increased time spent on platforms, increased clicks, or user satisfaction. Design choices that benefit platforms—even those that also benefit individual users—may hinder algorithmic literacy. For example, Instagram’s decision to hide likes on the platform might improve individual users’ wellbeing and, so, their experiences on the platform, as the company has suggested (Sanchez, 2019). This is good news for both individual users and Instagram. However, my findings suggest this decision will also weaken a key source of insight by which users learn about algorithms on the platform. More generally, enabling stakeholders to acquire information about algorithms is not always in platforms’ best interest. In fact, secrecy and obfuscation grant platforms power. Such opacity allows platforms to move agilely in iteratively designing algorithms by evading public scrutiny, litigation, and regulation (Pasquale, 233 2015). Further, platforms’ interest in preventing users from “gaming the system” limits the extent to which they will be willing to make algorithmic processes more legible to users (de Laat, 2017). In general, platforms have historically prioritized commercial interests over transparency (Pasquale, 2015), which makes them unlikely to prioritize algorithmic literacy above profitability in design choices. Considering the impact of design choices on users’ ability to learn about algorithms represents an ethical responsibility. However, given the tension between this ethical responsibility and platforms’ private interests, we will need complementary tools of governance to compel platforms to do their due diligence and hold them accountable in this regard. Specifically, we will need governance in the form of what Florian Saurwein, Natascha Just, and Michael Latzer call “self-regulation,” or “collective efforts of an industry/branch that takes measures of self-restriction to pursue public interest objectives” (Saurwein et al., 2015, p. 40), and state interventions, which can include regulation, incentives by subsidies or funding, and so on (Saurwein et al., 2015). These forms of governance might establish standards and principles about when and how to evaluate the impacts of design choices on users’ learning practices. They might urge platforms to conduct user studies or implement audits to evaluate opportunities for learning about algorithms via platform features. These are only a few examples. 
The bottom line is that individuals and institutions involved in governing algorithms for the public good must recognize platform design choices as important for the cultivation of algorithmic literacy as a tool of governance. Now, I will turn to addressing the implications of another major feature of algorithmic literacy: its situatedness.

Situating the Situated Nature of Algorithmic Literacy in Broader Contexts

While I have highlighted commonalities throughout this dissertation in how Instagram influencers and BreadTubers learn about and make sense of algorithms, I have also shown how algorithmic literacy grows from the particular ways that algorithms matter for different social worlds. Algorithmic literacy is actively constructed through social world-defining practices, in the sites where those practices take place, and directed by a world’s shared interests and ideologies. In what follows, I explain what the situated nature of algorithmic literacy means for future work in this area, governance, and educational interventions. First, I draw together findings from both cases to summarize the situatedness of algorithmic literacy. Next, I explain how this situatedness complicates previous approaches to studying what people know about algorithms, which have prioritized approaches rooted in cognitive science. I then examine what implications we can draw for promoting algorithmic literacy via educational interventions.

Implications for Future Work

Early work investigating what people know about algorithms has leaned heavily on cognitive science, defining algorithms narrowly as technical objects that can be fully known via mental representations and accompanying beliefs. While this early work has significantly progressed our understanding of what people know about algorithms (and how), based on the findings of this dissertation, I suggest that it also dramatically reduces the complexity of algorithmic literacy by focusing narrowly on cognitive reasoning (e.g., rules, logics, and schema). Below, I describe the two prevailing approaches to algorithmic literacy—mental models and folk theories—while noting how my findings complicate them. I then address how current approaches can do epistemic harm. Finally, I offer some implications for methodology.

In the present context, “mental models” refer to users’ cognitive schema of algorithms—their mental representations of how algorithms operate (e.g., Eslami et al., 2015). Mental models have been mainly studied in HCI research, because they help explain user behavior on platforms, which has direct implications for design, although the basic premise of mental models has been exported to other domains studying algorithmic literacy as well (e.g., Hargittai et al., 2020). In line with previous work in the STS tradition that questions strictly cognitivist approaches to knowledge (e.g., Collins, 2012; Suchman, 2007), my findings suggest that such mental “blueprints” of algorithmic processes are only the tip of the iceberg of algorithmic literacy. What people know about algorithms extends much further than cognitive schema. Moreover, other kinds of knowledge beyond technical structure and functionality directly impact platform behavior. Users’ practical knowledge—how users make sense of algorithms through the discursive lenses of their social worlds and social identities—plays an instrumental role in how users respond to algorithms.
In fact, as I showed in the Instagram influencers case study, different practical understandings of algorithms lead users to behave in different ways. Some influencers’ alignment with the discursive ideal of authenticity draws them to relational tactics, while other influencers’ alignment with the discursive ideal of entrepreneurship draws them to simulation tactics. Further, as I showed with the BreadTube case study, practical knowledge sometimes directs users’ behavior in ways that do not logically follow from technical knowledge. BreadTubers’ greater-good tactics do not completely make sense according to their technical understandings of YouTube’s algorithms, but they make sense in relation to discourses in the community on solidarity and equity. In other words, what users know about algorithms from a technical standpoint only partially explains their behavior.

Another way previous work has addressed algorithmic literacy is through the concept of “folk theories” (e.g., DeVito et al., 2017; DeVito et al., 2018; Eslami et al., 2016; Rader & Gray, 2015). Folk theories are intuitive causal frameworks people use to explain a variety of phenomena (Gelman & Legare, 2011). Folk theory approaches to algorithmic literacy sometimes push beyond a singular focus on technical understandings of algorithms to include opinions and attitudes (DeVito et al., 2017). However, the use of folk theories still tends to rely on a conceptualization of algorithmic literacy as strictly a cognitive process, which follows a series of steps and focuses on rational evaluations of information about algorithms for utilitarian ends (e.g., DeVito et al., 2017). My findings suggest limitations of this approach by demonstrating the highly contingent, contextual nature of algorithmic literacy, its rootedness in social relations, activities, and discourses. Folk theory approaches take for granted the tacit knowledge and cultural competence we gain through socialization. As H.M. Collins argued, discrete facts and formal rules “can convey knowledge only because they rest on the indefinitely large foundation of taken-for-granted and shared reality” (Collins, 2012, p. 335). In this dissertation, I have illustrated how this “taken-for-granted and shared reality” is at the heart of how people make sense of algorithms, and is inextricable from people’s technical knowledge.

In light of my findings, I argue that the understandings captured by existing cognitivist approaches are merely one kind of “ordering device” (Suchman, 2007) that guides, but does not wholly determine, people’s behavior. Assuming technical knowledge predominantly directs user behavior misses the culturally embedded, embodied nature of algorithmic literacy. Knowing what users know about how algorithms operate cannot fully predict how users will behave on platforms. Technical knowledge of algorithms is inseparable from practical knowledge of algorithms, and both kinds of knowledge impact behavior in concert. Thus, people do not form mental representations of how algorithms function that then bind their behavior to “logical” trajectories. Instead, people draw on a deep well of local, contextual insight that grants coherence and legibility to what algorithms “want,” what they do, how, and with what effect. Consequently, people’s responses to what has been called “algorithmic power” are not programmable or straightforward.
While I join others in affirming the value of technical knowledge as one tool for understanding how users respond to algorithms, I also suggest that we need to recognize the role of other kinds of knowledge in directing user behavior. Beyond an interest in understanding user behavior, approaches to studying algorithmic literacy that focus on technical knowledge without attending to other kinds of knowledge have the potential to do epistemic harm. Talking to people about their understanding of algorithms can reveal perfectly reasonable rationales for the conclusions they draw. Framing people’s knowledge as inaccurate or incorrect without recognizing the often reasonable rationales behind their knowledge contributes to a perception that we should not take users’ understanding of algorithms seriously. This treatment of users’ knowledge is a close cousin to black-box gaslighting. Recall that I conceptualized black-box gaslighting to describe how platform owners sometimes use their epistemic authority to provoke confusion and self-doubt among users with wholesale denials of their knowledge claims. When we characterize users’ knowledge as “wrong” without recognizing the partial truths their knowledge rests on, we both diminish the importance of the unique insight they have to offer and assert our own epistemic authority over theirs. Yes, we, as researchers, often know things about algorithms that “average” users do not know, but it is also true that average users often know things about algorithms that we do not know. My findings suggest that people have insight about algorithmic outcomes that cannot always be predicted by technical knowledge of algorithms’ design and methods. In this way, people’s situated knowledge of algorithms serves as a kind of canary in the coal mine to alert us when algorithmic processes go awry. In other words, “lay” insight can be indicative of system-wide patterns. Studying algorithmic literacy should not only be a matter of seeking to understand what users believe to be true about algorithms. I join Sophie Bishop in arguing that studying what people know about algorithms should also be a matter of taking their knowledge seriously (Bishop, 2019). We need to trust that people’s situated encounters with algorithms give them insight that we lack, and this matters.

Finally, the situatedness of algorithmic literacy has methodological implications. While it may not make sense for all studies, the approach I have used in this dissertation helps illuminate the context-contingent nature of algorithmic literacy. Case studies bring to the fore the local circumstances that contour algorithmic literacy. Further, interviews with people about the activities of their social worlds offer a powerful way of probing the unconscious ways people deal with algorithms in their daily lives. Asking people about algorithms directly sometimes evokes mainstream normative discourses about algorithms—for example, talk about filter bubbles—and I certainly experienced this. However, asking people about activities that define their social worlds and involve algorithms—like the pursuit of visibility—can be a means of drawing forth people’s tacit conceptualizations and skills that lie beneath the surface of discursive constructions of algorithms.

Implications for Governance

In evidencing the situatedness of algorithmic literacy, I lend support to others’ arguments that governing algorithms must involve a diverse coalition of stakeholders (e.g., Gillespie, 2018; Noble, 2018).
The situated nature of algorithmic literacy requires acknowledging the essential heterogeneity and partialness of knowledge claims about algorithms. Governance for fairness, justice, and equity must, then, ensure that stakeholders from all walks of life are involved and made to feel welcome, supported, seen, and heard in governance initiatives. This would allow all stakeholders to speak their truths about algorithms from their experiences on the ground, in context. Diversity means representing a multiplicity of different social worlds, but, more importantly, a multiplicity of people from different sociodemographic backgrounds—first and foremost, those from marginalized groups underrepresented in the tech industry (e.g., black people and white women) (Costanza-Chock, 2020; Noble, 2018). As Tarleton Gillespie wrote of governing content moderation on platforms:

[T]o truly diverse teams, the landscape will look different, the problems will surface differently, the goals will sound different. Teams that are truly diverse might be able to better stand for their diverse users, might recognize how platforms are being turned against users in ways that are antithetical to the aims and spirit of the platform, and might have the political nerve to intervene. (Gillespie, 2018, p. 202)

The two social worlds investigated in this dissertation were distinct in terms of ideologies, goals, sites, and activities. However, they were also both predominantly made up of relatively privileged populations—namely, white and middle-class people. Social identities broader than social worlds also matter for algorithmic literacy in the same way that membership in different social worlds does. Thus, communities composed predominantly of, for example, working-class people and/or Indigenous people will know algorithms in ways that others will not and cannot. The unique insight such groups have for justice-oriented governance is paramount (Costanza-Chock, 2020). As Haraway argued, the knowledge of members of oppressed groups should be privileged because:

They are knowledgeable of modes of denial through repression, forgetting, and disappearing acts—ways of being nowhere while claiming to see comprehensively. The subjugated have a decent chance to be on to the god trick and all its dazzling—and, therefore, blinding—illuminations. 'Subjugated' standpoints are preferred because they seem to promise more adequate, sustained, objective, transforming accounts of the world. (Haraway, 1988, p. 584)

For this same reason, marginalized people bring unique insight about algorithms to governance that can be valuable for ensuring individual and collective empowerment through design and other interventions (Costanza-Chock, 2020). Next, I will plant some seeds for future work interested in the pedagogy of algorithmic literacy.

Implications for Educational Interventions

The situated nature of algorithmic literacy suggests that we must approach “algorithmic knowledge gaps” (Cotter & Reisdorf, 2020) as being relative and contingent. My findings suggest that different people will have different needs and interests to satisfy when building knowledge about algorithms. Instagram influencers need to know about algorithms in order to pursue visibility and, therefore, professional success in creative and media industries. BreadTubers need to know about algorithms in order to win the visibility war and, therefore, achieve (leftist) ideological dominance on YouTube as part of a broader war of ideas.
Other social worlds will have other needs and interests that determine the value of different kinds of knowledge. The starting point for addressing any inequalities in algorithmic literacy should be an assessment of how knowledge about algorithms can support individuals in achieving outcomes meaningful to them and their social worlds (Warschauer, 2003, p. 32). Indeed, a need for knowledge precedes effective learning. As John Seely Brown and Paul Duguid explain, “[w]hen people cannot see the need for what’s being taught, they ignore it, reject it, or fail to assimilate it in any meaningful way” (Brown & Duguid, 2017, p. 127). Rather than presupposing a universal, predefined need for algorithmic literacy, the need for algorithmic literacy should be defined by learners in their local contexts. Further, the value of algorithmic literacy for learners may vary, which means that some people will be more motivated than others to learn. Thus, it is important not to ascribe perceived indifference to algorithms to irrationality (Brock et al., 2010). Algorithms will be more or less relevant to people based on the extent to which they make a difference in people’s worlds. Beyond this, the way that algorithms are addressed in mainstream discourses may be alienating to some or simply may not spark interest. For example, mainstream discourse about filter bubbles may not activate interest in algorithms for those in the LGBTQ+ community as much as the systematic suppression of LGBTQ+ voices via (algorithmic) content moderation. The relevance of public discussions about algorithms to people’s lived experiences matters for the degree to which they become aware of and interested in algorithms. This, in turn, impacts any attempts to implement educational interventions.

The inseparability of the knower from the context of knowing requires situated approaches to algorithmic literacy interventions. While some aspects of algorithms are fixed and stable and can only be known in one way, I have evidenced that much of what can be known about algorithms is not fixed or stable. Moreover, the situated nature of algorithmic literacy means acknowledging the partialness of all knowledge about algorithms. This acknowledgement requires a particular approach to educational interventions. Heeding Brown, Collins, and Duguid, the effective promotion of algorithmic literacy, in light of its situatedness, entails careful consideration of “what should be made explicit in teaching and what should be left implicit” (Brown et al., 1989, p. 40). As the authors explain,

Whatever the domain, explication often lifts implicit and possibly even nonconceptual constraints (Cussins, 1988) out of the embedding world and tries to make them explicit or conceptual. These now take a place in our ontology and become something more to learn about rather than simply something useful in learning. But indexical representations gain their efficiency by leaving much of the context underrepresented or implicit. (Brown et al., 1989, p. 41)

As the authors suggest, rather than asserting a conceptual representation of algorithms first, around which learners must construct their understanding, instruction should begin with activities meaningful to learners, through which they can develop their own representations authentic to them (Brown et al., 1989). This kind of approach crucially centers the interpretive flexibility (Bijker et al., 2012) of algorithms, which allows people to understand algorithms in a way that is natural to them.
Approaches that presuppose the value and importance of algorithms for learners will fail to promote critical algorithmic literacy, which allows learners to critically reflect on algorithms in order to imagine an algorithmically mediated world that works better for them. This “critical” component of critical algorithmic literacy—how algorithmic literacy reconfigures algorithmic power—is the third major contribution of this dissertation, which I will now discuss.

Critical Algorithmic Literacy: Confronting Algorithmic Power

A goal of this dissertation was to investigate the possibility of a critical algorithmic literacy: the possibility that learning about algorithms could involve developing a critical consciousness that would guide people in reasserting their own agency in the face of algorithmic power. I wondered in what ways critical algorithmic literacy could support governance of algorithms under fast-growing conditions of governance by algorithms (Just & Latzer, 2017). I found that, across the two case studies, critical algorithmic literacy exhibited two essential components. First, Instagram influencers and BreadTubers recognize themselves as ensnared in a system of power enforced by algorithms. Second, their understanding of algorithms affords them the opportunity to direct their own behavior, as guided by their identification with their social worlds. Contrasts between the two case studies help illuminate the limits of critical algorithmic literacy. Specifically, cultivating a critical consciousness about algorithms does not necessarily engender a critical consciousness about broader systems of power beyond the “walled gardens” of platforms that algorithms merely mediate. Cultivating a critical consciousness about algorithms also does not mean that people have the means of dismantling those systems. As I will argue, while critical algorithmic literacy is a requisite component of effective governance of algorithms, it cannot, on its own, effect systemic change. In what follows, I draw together findings from the two cases to discuss developing a critical consciousness about algorithms through literacy, then how literacy permits reassertion of individual agency. I conclude this section by suggesting that critical algorithmic literacy is only a partial antidote to algorithmic power.

Cultivating a Critical Consciousness

Instagram influencers and BreadTubers alike recognize themselves as obliged to algorithms, as enactments of the rules of the visibility game that govern both their labor and the product of that labor (i.e., the content they produce). Members of both social worlds also understand algorithms as mediators of diffuse power, nodes in a broader sociotechnical system. Thus, while they express frustration about algorithms, much of their frustration is also directed at the platforms themselves, which they recognize as the hand that guides algorithms. They also know that algorithms function as reflections of user preferences, as expressed through their activity on platforms, which serve platform owners’ intersecting goals of providing a good user experience and increasing ad revenue via an active user base. Many Instagram influencers and BreadTubers express great resentment towards Instagram and YouTube. Based on their experiences, few view platform owners as allies in their pursuits of visibility. Instead, they see them as autocrats who assert their power via algorithms.
At the core of Instagram influencers’ and BreadTubers’ critical consciousness is an understanding of what algorithms “want.” Members of both worlds have come to understand algorithms as conduits of power for platforms, which prescribe a particular politico-economic order for their labor. Members of both social worlds share the understanding that what algorithms want is for them to compete for visibility. Indeed, they know competition is paramount to the visibility game, since visibility is not guaranteed to all, only to those who play the game well (and by the rules). I agree with BreadTubers, who explicitly call out this emphasis on competition as a re-enactment of neoliberal capitalism, of the status quo politico-economic system in the U.S., enforced by algorithms. As Alice Marwick argued, “the tech industry’s strong faith in deregulated capitalism and entrepreneurship” (Marwick, 2013, p. 11) is imprinted in the DNA of social media platforms. As such, “Web 2.0 models ideal neoliberal selves, and rewards those who adopt such subjectivities” (Marwick, 2013, pp. 5-6). Although I did not observe Instagram influencers using this language, I believe they, too, recognize Instagram’s algorithms as urging a neoliberal capitalist subjectivity. Instagram influencers, like BreadTube creators, understand themselves to be, as scholars have suggested, under the disciplinary control of algorithms (Bucher, 2012), which positions them as valuable “free labor” for platforms (Fuchs & Sevignani, 2013; Terranova, 2000). Instagram influencers understand this power dynamic implicitly, as part of their broader awareness of the requirements of working in creative and media industries, which demand individualistic self-management and assumption of risk (characteristic of neoliberalism) and make their labor demanding and precarious. Indeed, this awareness coincides with the knowledge that they must constantly prove themselves, above others, in a system that defines success through visibility, as regulated by algorithms. In the next section, I will unpack how members of the two social worlds differently assert their agency in relation to algorithms, in spite of commonalities in their critical consciousness about algorithms.

Asserting Agency

Across the two case studies, I saw that Instagram influencers and BreadTubers both find ways to consciously exert control over their individual responses to algorithms. With Instagram influencers, we see that their knowledge of Instagram’s algorithms allows them to decide for themselves how to instrumentalize the rules of the visibility game that algorithms enact. Although influencers all recognize that visibility depends upon generating engagement and followers, they diverge in their approaches to accomplishing this. In this, they draw on discursive ideals about what it means to be an influencer in order to assert, for themselves, how they believe influence should be achieved. Relational influencers view authenticity as foundational to influence and draw on the discursive ideal of authenticity to understand Instagram’s algorithms as accurately discerning “authentic” connectivity. Thus, as they respond to Instagram’s algorithms, they are able to direct their behavior in ways that prioritize authenticity—primarily, via proactive, personal interaction with current and potential followers. By contrast, simulation influencers view influence as built, at least in part, on boosting algorithmically recognizable signals of popularity—that is, they view influence as representing entrepreneurial success.
Thus, they understand Instagram’s algorithms as unable to effectively discern authentic connectivity, and respond to algorithms with tactics that target status markers and emphasize individualistic narratives of ingenuity and personal prowess. Through Instagram influencers’ divergent understandings of algorithms, we can see that as they fit their knowledge of algorithms with their sense of self, they lessen the experience of self-alienation. In other words, knowledge of algorithms can provide a space in which people can reunite their behavior, beyond any algorithmic directives, with their core self-concept. Moreover, the choice to lean on relational vs. simulation tactics—as based on their own understanding of what influence means—helps influencers achieve the promise of what Brooke Duffy calls “aspirational labor,” or fulfilling and self-actualizing self-employment—“work that doesn’t seem like work” (Duffy, 2017, p. 225; emphasis in original). The ability to instrumentalize the rules of the visibility game according to how they see themselves brings them one step closer to doing work that feels right to and for them. More fundamentally, regardless of the approach influencers take, all of the visibility tactics they adopt should be considered attempts to exert control over their labor and success as influencers. Critical algorithmic literacy allows influencers to formulate, as Victoria O’Meara argued of engagement pods,21 “a communal response whereby these cultural producers seek a measure of stability. They manufacture their own consistent popularity despite shrouded and ever-changing systems of evaluation” (O’Meara, 2019, p. 8). As O’Meara rightfully suggests, visibility tactics,22 which are the product of algorithmic literacy, should be considered a means by which influencers recoup a sense of agency by actively fighting back against the threat of invisibility imposed by platforms via algorithms. In this sense, they face their precarity head on and assert their agency by attempting to leverage some stability.

BreadTubers also use their understanding of algorithms to subvert algorithmic directives. However, in contrast to Instagram influencers, BreadTubers frequently and explicitly decry YouTube’s algorithms for upholding a neoliberal capitalist order on the platform. They understand the platform’s algorithms as compelling them to compete with one another, which conflicts with the collectivist roots of their core ideology. As anti-capitalists, they believe in flat social structures, a communal spirit, and political and economic systems founded on social ownership. The BreadTube community unites around shared collectivist values of solidarity and equity. Thus, in confronting YouTube’s algorithms, which urge a neoliberal capitalist subjectivity, BreadTubers feel a strong sense of dissonance. While BreadTubers oppose this particular subjectivization, they also understand that waging their visibility war requires them to be on YouTube, and, thus, describe their continued presence on YouTube as “survival under capitalism.” Yet, while individual BreadTube creators must heed the rules of the visibility game, the community as a whole also actively attempts to counteract algorithmic prescriptions for individualistic pursuits of visibility.

21 A tactic that entails reciprocal engagement organized in group messages, described in chapter 5.
22 Although O’Meara addresses pods, specifically, I suggest that the argument applies to all visibility tactics.
Specifically, BreadTubers have devised ways to reaffirm their collectivist values of solidarity and equity by supporting each other’s pursuits of visibility, particularly by building communal support for lesser-known creators. In some cases, BreadTubers explicitly attempt to subvert the neoliberal capitalist system they see YouTube enforcing with its algorithms. One example of this is seen in the “Leftist Diversity Incubator” discussed in chapter 7. Another example is the “YouTube Walkout” in December 2019 that some BreadTubers participated in (and helped organize), which entailed refraining from using the platform for three days. The walkout was in response to a change in the platform’s Terms of Service that included language that was read as allowing YouTube the ability to “terminate” channels deemed “not commercially viable,” which BreadTubers believed would have a disproportionate impact on LGBTQ+ creators, who are regularly (algorithmically) demonetized (Albarrán, 2019). The differences in how Instagram influencers and BreadTubers assert their own agency in relation to algorithms help illuminate limits to critical algorithmic literacy related to 1) how their social worlds further shape how members respond to algorithms, and 2) the power of platforms. I will now address these.

The Limits of Critical Algorithmic Literacy for Empowerment

The less overtly subversive nature of Instagram influencers’ response to algorithms suggests that a critical consciousness about algorithms does not necessarily translate into agitating for changes to platforms, let alone extend beyond platforms. This difference rests on differences between the two social worlds. The social world of Instagram influencers exists within industries premised on a “neoliberal form of governmentality” (Gill, 2010, p. 260). Being successful in creative and media industries means working within this structure. Influencers are quite used to the requirements of this structure, even if they do not always love it. Certainly, they feel the pressure to compete and make frequent comparisons between their own visibility and others’. Many also dream of a return to a reverse-chronological feed on Instagram, which they see as a return to a fairer, less competitive way of life as an influencer. Yet, few influencers call for a total dismantlement of the broader politico-economic system Instagram’s algorithms reify like BreadTubers do. While, as argued in the last section, much of influencers’ instrumentalization of the rules of the visibility game is subversive, the prevailing attitude in the community is, as one influencer said in a discussion about a collective exodus from Instagram: “it's simply a case of if you can’t beat ‘em you might as well join ‘em ... It sucks, but we can’t change it, so you gotta just adapt or get left behind.” This attitude speaks to the requirements of working in creative and media industries. As Rosalind Gill argues, new media work revolves around a “thoroughgoing, wholesale management of the self, which requires the radical remaking of subjectivity. It may not even be experienced as such—indeed, it’s much more likely to be understood as simply ‘the way things are’” (Gill, 2010, p. 260). Indeed, influencers have found ways to cope with the politico-economic order Instagram’s algorithms impose. They “adapt,” resiliently and (somewhat subversively), with few advocating for a total upheaval of the power structure entailed by this politico-economic order.
BreadTubers similarly find ways to cope with the politico-economic order YouTube’s algorithms impose. However, BreadTubers also actively and explicitly advocate for the undoing of capitalism, including as instantiated on YouTube via algorithms. This key difference in how BreadTubers and Instagram influencers act on their critical consciousness implicates the differences between their social worlds. BreadTubers’ calls for the undoing of capitalism are primarily founded on their familiarity with critical theory, which inspires their politics. The ideas of critical theorists like Karl Marx, Noam Chomsky, and Peter Kropotkin enact a centripetal force on the community’s shared identity and discourses and heavily shape their critical consciousness about YouTube’s algorithms. Learning critical theory is part of becoming a member of the community. The critical consciousness they develop in learning about algorithms might be considered an extension of the critical consciousness cultivated from learning critical theory. In particular, BreadTube’s critical stance grows around dismantling capitalism, which clears the way for their overtly subversive responses to the neoliberal capitalist order YouTube’s algorithms encourage. Instagram influencers, by contrast, are not defined by the same critical stance as BreadTubers. Their world does not require them to learn and align with critical theory. Moreover, BreadTube audiences, whose patronage makes up the vast majority of BreadTube creators’ income, expect creators to engage in anticapitalist activism. Yet, for Instagram influencers to similarly agitate for change, they would need to work against the structures and institutions that give rise to their industries and enable their livelihood. In comparison to Instagram influencers, BreadTube creators stand to reap substantially more benefits from and risk less in agitating for change, both locally on YouTube and beyond.

In studying BreadTubers’ efforts to agitate for change, we can also head off any utopian mythologizing of critical algorithmic literacy. From BreadTubers’ efforts, we can see that critical algorithmic literacy does not necessarily guarantee substantive change. As BreadTubers themselves recognize, their acts of resistance have yet to do much to chip away at YouTube’s power, or to change, in any meaningful way, how the platform deploys algorithms. This seems to be due to two main reasons. First, the BreadTube community, like other participants of connective action (Bennett & Segerberg, 2012), is diffuse and decentralized, which makes it difficult to organize collectively.23 Second, the community is relatively small and not nearly as visible as other online communities, which brings us full circle back to the original problem: YouTube has the sole authority to regulate visibility via algorithms. The community’s current levels of visibility do not permit them much bargaining power with YouTube or influence over public sentiment. Their reach is relatively limited. Thus, like other YouTube communities, BreadTubers may be able to “negotiat[e] and shap[e] the social norms of the platform” (Burgess & Green, 2018, p. 119), but they still lack any direct control over the platform. Like many other protestations YouTubers have issued over the last decade (van Dijck, 2013), the BreadTube community’s complaints about the politico-economic order YouTube’s algorithms oblige have not yet gained any significant traction.
The action critical algorithmic literacy gives rise to within these two social worlds illustrates the limits of critical algorithmic literacy. As I have shown, critical algorithmic literacy has helped Instagram influencers and BreadTubers resist algorithmic power, not in earth-shattering, complete upheavals of the algorithmic order, but in small acts of subversion. Indeed, this is largely because the power of individual users, and even user communities, pales in comparison to the power of platforms like Instagram and YouTube, which have billions of dollars in resources at their disposal and have become deeply imbricated with the technical and financial infrastructure of major institutions. While Instagram influencers and BreadTubers can assert their own agency, they cannot singlehandedly change the systems in place. What individuals can accomplish through critical algorithmic literacy seems to be limited to more conscious control over how they individually respond to algorithms. It is possible that broader impacts require a more widespread critical consciousness about algorithms—a mass movement. Perhaps the strength of critical algorithmic literacy lies in reaching a critical mass of critical consciousness. As it stands, changing what algorithms do or how requires broader governance efforts than any single individual or community can accomplish. Indeed, as the activists suing YouTube introduced in the introduction to this dissertation seemed to know, effecting substantive change likely requires tapping into institutional and legislative powers.

23 For example, the YouTube Walkout previously mentioned was widely acknowledged by many in the community as a failure.

Conclusion

In order to effectively govern algorithms, we need to hear from a variety of stakeholders and to recognize that users have valuable insight to bring to such efforts. Critical algorithmic literacy is a tool for fostering bottom-up governance of algorithms—a means of increasing the involvement of private citizens in governing algorithms in order to ensure governance serves public interests. Granted, as I have discussed, being able to know and, so, speak truths about algorithms does not guarantee any meaningful change to or control over algorithms. Only platforms can alter their algorithms and, thus, their impacts. Yet, what the public knows about algorithms lays a foundation from which people can petition platforms for updates to algorithms that serve their and their communities’ needs and interests. Critical algorithmic literacy further helps users petition elected officials for policies that will put a check on platforms’ power and demand greater accountability on platforms’ part. Although critical algorithmic literacy is not an endpoint for governance, it is an important point in a broader system of actions that can be taken. In short, critical algorithmic literacy serves a public good of facilitating stakeholders in more effectively governing algorithms. With this important role of critical algorithmic literacy in mind, I offer some additional thoughts on how researchers—and any other individuals who contribute to our collective understanding of algorithms—can help facilitate the effective mobilization of critical algorithmic literacy.
Specifically, in light of the previous discussion about platforms’ near-monopoly on epistemic authority, I argue that in order for critical algorithmic literacy to be an effective tool of governance, we need to leave room for user understandings of algorithms to differ from platforms’ understandings of algorithms. While platforms do benefit from unparalleled access to information about the design of their algorithms and their capabilities, and to user data from which empirically grounded conclusions could be drawn about algorithmic outcomes, as discussed, this does not make them the sole authority on all knowledge claims about algorithms. While platforms’ efforts to black-box their algorithms impact how people learn about algorithms, this does not mean people cannot or do not nevertheless build knowledge about algorithms, or that they always unwittingly accept platform-sanctioned explanations. Because of critical algorithmic literacy, people are better able to recognize when platforms issue statements in bad faith, or that are not strictly objective, or that leave out important information. In other words, critical algorithmic literacy can allow users to recognize and critically reflect on platforms’ attempts (whether intentional or not) to rewrite what we know about algorithms. At the same time, we do need to recognize that platforms’ epistemic authority allows them to control (to a degree) narratives about their algorithms, as I have shown. Platforms’ private interests often diverge from the public’s interests (Van Dijck et al., 2018), and they have a variety of good reasons—at least from their perspective—for withholding, obfuscating, and manipulating information about their algorithms (Pasquale, 2015). Platforms cannot be trusted to act as neutral purveyors of information about algorithms. As discussed, platforms lean on their epistemic authority to nudge user behavior in ways that suit them, but simultaneously cause users to question their own understandings of algorithms. They also actively urge certain understandings of their algorithms via rhetorical framing in their public communications that serves their material interests (Petre et al., 2019). In light of the potential for platforms to capitalize on their epistemic authority to shape public understandings of algorithms, I suggest that we need to treat user understandings of algorithms much differently than we have up until this point. By “we” here, I mean researchers, journalists, and other figures who the public perceives as authoritative sources when they speak about algorithms. We need to protect users’ capacity to draw conclusions about algorithms that conflict with information communicated or endorsed by platforms in order to govern algorithms in the interest of the public. We need to ensure users can effectively advocate for their interests based on their unique insight about what algorithms mean to and for them. This cannot happen if we only trust platforms to inform us about their algorithms. The shift I am calling for means two things. First, we need to be alert to moments when platforms attempt to assert their epistemic authority over that of users. In this, we need to think critically about how platforms respond to users’ knowledge claims and interrogate any points at which users’ and platforms’ understandings of algorithms diverge. Certainly, users can be wrong about algorithms, but we need to recognize that platforms can also be wrong (intentionally or unintentionally).
For example, recent news reports revealed that Facebook’s internal research suggested the platform’s algorithms were contributing to political polarization (Horwitz & Seetharaman, 2020), even after the company spent years suggesting via research (e.g., Bakshy et al., 2015) and other public statements (e.g., Solon, 2016b) that this was not a legitimate problem. Past work has also exemplified how platforms discursively position themselves as benevolent moral authorities in public statements about algorithms while obscuring the ways they materially benefit from this positioning (Hoffmann et al., 2018; Petre et al., 2019). For these reasons, we should be deeply skeptical of what platforms tell us about their algorithms. Second, we need to approach users’ knowledge claims with care. As Puig de la Bellacasa reminds us, "[w]ays of studying and representing things can have world-making effects" (Puig de la Bellacasa, 2011, p. 86). We should be mindful of this point in the way we treat user understandings of algorithms. This means taking user understandings seriously. This does not, of course, mean we cannot question the accuracy of users’ knowledge claims. It simply means we should not begin with the assumption that what users know is wrong or ignorant. As I have shown, users have much to tell us about algorithmic outcomes, insights that platforms cannot or do not always see. This point is important because if we begin with the assumption that what users know about algorithms is invalid or irrational, we end up transmitting this assumption when we speak about algorithms publicly, which only serves to harden the epistemic authority of platforms. Taking users’ knowledge claims seriously erodes platforms’ ability to claim exclusive epistemic authority on all questions surrounding algorithms. It allows us to more fully recognize the black-boxing of algorithms as a red herring, as Bucher argued (Bucher, 2018). In fact, we should recognize that users have unique, important insight from their experiences on the ground, in situ. This is particularly the case for understandings of algorithms among members of marginalized groups, whose knowledge, as mentioned, is already suppressed and delegitimized, and, consequently, who are best able to recognize attempts to rewrite our understandings of algorithms. Because we cannot trust platforms to always and objectively share what they know about their algorithms, users’ insights provide a resource for triangulating a broader understanding of the social impacts of algorithms, as I and others have argued (e.g., Bishop, 2019; Bucher, 2018; Kitchin, 2017). In short, we need to recognize the value of user understandings in order to effectively mobilize critical algorithmic literacy for governing algorithms in the public’s interests.

APPENDICES

APPENDIX A: Interview Techniques

Given that participants' assumptions, beliefs, and practices around algorithms are tacit and/or difficult to convey, I used a few different elicitation techniques in interviews for probing what participants knew and how they knew what they knew. First, taking previous researchers' approach (DeVito et al., 2018; Eslami et al., 2016), I assured participants that my interest was in their ideas and perceptions, not how their responses measure up to a more "accurate" model of algorithms. Second, I asked participants to describe algorithms in multiple ways. For example, I asked participants how they would describe a particular algorithm to a friend who didn't know anything about it.
I also asked participants to reflect on the two-part question: “If Instagram’s algorithm were a person, how would you describe them, and what would your relationship with them be like?” Asking participants to describe algorithms in these ways helped me draw forth their understandings of algorithms beyond technical details about the design or functioning of algorithms. Third, some of my questions referred to discourses I observed among Instagram influencers and the BreadTube community online. For example, I asked many Instagram influencers their thoughts on the “shadowbanning debate”—whether they believed shadowbanning was real or not. As another example, I asked many BreadTubers some variation on the question “Some in the BreadTube community think that YouTube has a right-wing bias. What are your thoughts on this?” With this technique of making observed discourses explicit, I intended to elicit taken-for-granted practices and knowledge. It allowed participants to take a stand on concrete knowledge claims I observed within their social worlds and encouraged them to marshal relevant personal experiences and knowledge for on-the-spot reflection and interpretation.

APPENDIX B: Interview Questions

Note: Divergence in questions asked is characteristic of semi-structured interviews. The questions presented below were sometimes phrased differently in interviews. Further, not all of the questions listed below were asked in all interviews, and some questions asked in interviews emerged from conversations with individual participants and, thus, do not appear below.

Instagram Influencers Interview Questions

1. So, I’ve visited your blog/website, but I would like to begin the interview with you sharing more about who you are, what you do, and how you came to be where you are today. Tell me about how you got into Instagram.
2. What does a typical day look like for you?
3.
4. What do you like about Instagram?
5. As you know, some people use Instagram for fun, some use it to earn some extra money, and some use it to earn a living. Where do you fit?
6. So, you wrote this blog post that references/is about the Instagram algorithm. What prompted you to write the blog post?
7. How did you first learn about Instagram’s algorithm?
8. Once you were aware of the algorithm, how did you go about learning more?
9. How would you describe the algorithm to someone who didn’t know anything about it?
10. How do you think the Instagram algorithm works? How do you know?
11. In your opinion, what is the purpose of the algorithm?
    a. What do you think Instagram would say the purpose of the algorithm is?
    b. Do you think the algorithm successfully fulfills its purpose? Why/why not?
12. Do you think the algorithm changes? If so, how do you learn about the changes?
13. Tell me more about the tricks for gaining visibility you described in your blog post.
    a. How did you learn these tricks? Do they work?
    b. Why do you think people need these tricks?
14. Tell me about how you build your account.
    a. What would you say have been the most successful strategies you’ve adopted for building your account? Why do you think they have been so successful?
    b. What strategies were least successful? Why do you think they weren’t successful?
    c. Are there any strategies you’ve adopted that used to be successful, but stopped working at some point? Why do you think they stopped working?
    d. What advice would you give others who are just starting out and are trying to build their accounts?
15. When you’re working on your next Instagram post, do you think about the algorithm at any point in the process (from coming up with the idea for what you post, to crafting the post, to actually posting it and beyond)?
16. How else does Instagram’s algorithm influence your strategies for building your account? (If it doesn’t, why?)
17. Are there any other ways the algorithm impacts how you use Instagram?
18. In what ways would it be easier for you to build your account if Instagram stopped using the algorithm?
19. In what ways would it be harder for you to build your account if Instagram stopped using the algorithm?
20. If the algorithm were a person, how would you describe them, and what would your relationship with them be like?
21. What does “influence” mean to you?
22. Do you consider yourself an influencer? Why or why not?
23. Can influence be achieved without knowledge of the algorithm? Why do you think that?
24. If Instagram stopped using the algorithm, would the meaning of influence change at all? Would influencers as a population change at all?
25. How, if at all, does the algorithm impact your understanding of influence and what you do as an influencer?
26. How do you think the algorithm impacts others’ understanding of influence and what influencers do?
27. Those are all the questions I wanted to ask. But, before we wrap up, were there any questions or issues you thought might come up during the interview, that didn’t? Anything you would like to share that I didn’t ask about or anything that you want to elaborate on that I missed while we were talking?

BreadTube Interview Questions

1. Why don’t we start with you telling me a little more about who you are, what you do, and how you came to be where you are today.
2. What does a typical day look like for you?
3. I found you by way of the BreadTube subreddit. Do you consider yourself a member of the BreadTube community? Why/why not?
4. What got you interested in BreadTube? How did you find BreadTube?
5. How did you become interested in politics?
6. How would you describe your political views?
7. How did you develop these political views? What has had an influence on your political views?
    a. What people, organizations, or publications influenced your views?
8. Tell me about how you keep up with political news. How do you ensure you know all you need to know about the important news of the day?
    a. What sources of news do you use? (e.g., Facebook, Fox News, Reddit, CNN, NPR, local TV news?)
9. On a typical day, which online sites do you visit for news and information?
    a. (Particularly, which platforms)—do you use any platforms like Google, Facebook, YouTube for information? Which do you use the most?
10. I reached out because I came across a comment/post you posted in r/BreadTube. Your comment/post said: [COMMENT/POST]. Can you tell me about what you were thinking about when you wrote this?
    a. Follow up on knowledge claims—how did you learn X? What makes you think Y?
11. How did you first learn about YouTube’s/[FAVORITE PLATFORM’S] algorithm?
12. How would you describe YouTube’s/[FAVORITE PLATFORM’S] algorithm to someone who didn’t know anything about it?
13. In your opinion, does YouTube’s/[FAVORITE PLATFORM’S] algorithm do a good job at sorting and filtering news and political information for you and other users? Why/why not?
    a. In what ways could they improve?
14. In your opinion, what is the purpose of YouTube’s/[FAVORITE PLATFORM’S] algorithm?
    a. Do you think the algorithm successfully fulfills this purpose? Why/why not?
15. Do you think the algorithm changes? If so, how do you learn about the changes?
16. If YouTube’s/[FAVORITE PLATFORM’S] algorithm were a person, how would you describe them, and what would your relationship with them be like?
17. Some people in the BreadTube community think that social media algorithms fuel political extremism. What do you make of this idea?
    a. If they say they do: in your opinion, how do algorithms fuel political extremism?
    b. If they say they do not: in your opinion, why do other people think that algorithms fuel political extremism?
18. Some in the BreadTube community think that YouTube has a right-wing bias. What are your thoughts on this?
    a. If they say it’s biased: what makes you think it’s biased? Where does this bias come from? Do you think this bias is intentional on YouTube’s part?
    b. If they say it’s NOT biased: what makes you think that it’s NOT biased?
19. In your opinion, what kind of role (if any) are social media algorithms playing in the 2020 presidential election? Or what role do you think they will play? How?
20. In your opinion, what kind of role (if any) do social media algorithms play in politics in terms of things like struggles over basic rights and freedoms? Voice and representation? How?
21. Does the sorting and filtering of content and information by social media algorithms impact your own engagement with political and social issues? Others’ engagement with political and social issues?
22. Can BreadTubers successfully promote their ideas and advocate for the issues that are important to them without knowledge of social media algorithms? Why do you think that? How about other creators, politicians, activists, and advocacy organizations?
23. How well (or not) do you feel your views and values on political and social issues are reflected in the way YouTube/[FAVORITE SITE] sorts and filters content?
24. In your opinion, what role (if any) do people like you have to play in making sure social media platforms like YouTube sort and filter content and information in the public’s best interest?
    a. If they think they have a role to play: what does this role look like?
    b. If they do NOT think they have a role to play: whose job should it be, if anybody’s?
25. How much power do you feel you and others like you have to shape the ways YouTube/[FAVORITE SITE] sorts and filters content?
    a. What makes you say that?
    b. What helps or hinders you in having an influence?
    c. What would have to change for you to have a stronger influence on the way YouTube/[FAVORITE SITE] sorts and filters content?
26. In your opinion, what role (if any) does the government have in making sure social media platforms like YouTube sort and filter content and information in the public’s best interest?
    a. If they think the government has a role to play: what does this role look like?
    b. If they think the government should NOT be involved: why? Whose job should it be, if anybody’s?

APPENDIX C: Description of Interviewees

Instagram influencers

All but five of the 17 Instagram influencers I interviewed were women. The majority of all interviewees were in their 20s. All interviewees had completed at least some college. All but four interviewees were white. Interviewees represented a range of niches, including fashion, pets, photography, lifestyle, digital marketing strategy, beauty, art, and travel. All but five interviewees were American. The other five were from (and lived in) Italy, the United Kingdom, Australia, the United Arab Emirates, and Malaysia.
BreadTubers

Six BreadTubers I interviewed identified as women. Two identified as genderfluid, one as non-binary. The rest identified as men. All but three interviewees were in their 20s. Three interviewees had a high school diploma; the rest had completed at least some college. Four of the interviewees were YouTube content creators. All but five interviewees were American. The other five were from the United Kingdom, Australia, Georgia, Canada, and Argentina.

APPENDIX D: Notes on Key Terms

Three choices I made with regard to key terms used in this dissertation warrant some explanation. First, in the Instagram influencers case study, I investigated both influencers with a few hundred Instagram followers and those with a few hundred thousand followers. As such, while some might conclude that my study population consists of “aspiring” and “established” influencers, for simplicity’s sake, I refer to them collectively as “influencers.” Second, YouTube “content creators” are central to the BreadTube case study. To some degree, “influencer” and “content creator” refer to the same class of actors, though there are slight nuances between the terms. I made the decision to refer to members of the first case study as Instagram “influencers” and not “content creators,” because this is the term typically used to describe people who pursue visibility on Instagram as income-generating labor. By contrast, I made the decision to refer to the group of BreadTubers who maintain YouTube channels as “content creators,” because this is the term typically used to describe this group of people. Additionally, while BreadTube creators typically pursue visibility and make money from their channels, only a (relatively) small percentage of them make money as brand affiliates, whereas this is the main way Instagram influencers make money. Thus, I use the term “influencer” to signal a connection to the influencer marketing industry, wherein brands work with creators to sell products and services. Still, I want to acknowledge that Instagram “influencers” are, indeed, also “content creators,” and some BreadTube “content creators” are also “influencers.” I call this out because my sense is that “influencer” and “content creator” are gendered terms. As discussed, most Instagram influencers are women and, consequently, dominant discourses tend to minimize their labor and the skills it requires. Thus, I want to acknowledge that what Instagram influencers do requires a unique skillset that includes, but is not limited to, creating content. Third, both Instagram and YouTube rely upon systems of interconnected algorithms to sort and filter content, among other tasks. However, most influencers and BreadTubers refer to a singular algorithm—i.e., “the Instagram algorithm” and “the YouTube algorithm”—as a stand-in for the overarching system. At times, in my findings, I adopt this rhetorical framing to capture how my subjects address algorithms.

APPENDIX E: Analytical Maps

Figure 20. Instagram Influencers Messy Situational Map
Figure 21. Instagram Influencers Relational Map
Figure 22. BreadTube Messy Situational Map
Figure 23. BreadTube Relational Map 1
Figure 24. BreadTube Relational Map 2

APPENDIX E: “Whiteboarding” Notes

Figure 25. Situated Platform Epistemology
Figure 26. Programmed (Platform) Epistemology
Figure 27. BreadTube Critical Algorithmic Literacy
Figure 28. Visibility Game vs. Visibility War

BIBLIOGRAPHY