
YouTube as a Radicalizing Force: The Promotion of the Alt-Right Pipeline



⚠️ CW – Discussion surrounding:

  • Extremist ideologies; alt-right and white nationalist movements
  • The Christchurch mosque shootings
  • Racist, xenophobic, and misogynistic rhetoric
  • Hate-based violence and discriminatory content

Introduction

When a broadcast of white nationalists debating ‘race realism’ was briefly the most popular live-streamed video on YouTube, concern grew regarding YouTube’s role in disseminating extremist content and encouraging political radicalisation (Lewis, 2018). With 2.5 billion monthly active users, YouTube is among the most popular social media platforms, and it has been at the forefront of concerns that social media platforms promote increasingly extreme and divisive content to users (Haroon et al., 2020). One particularly divisive group that YouTube hosts is the alternative-right (alt-right) network, a group whose exact beliefs vary significantly, from libertarianism to white nationalism, but which is unified by its contempt for current social justice movements and progressive politics (Lewis, 2018). Recognising that “…media also shapes identity” (Pariser, 2011, p.23) and that over 70% of the content viewed on YouTube is recommended by YouTube’s recommendation algorithm (Rodriguez, 2018, as cited in Haroon et al., 2023), algorithmic bias can play a significant role in shaping users’ worldviews and self-contextualisation. This paper argues that YouTube’s recommendation algorithm disproportionately recommends alt-right political ideologies and intensifies users’ selective exposure through filter bubbles, increasing the likelihood of affective polarisation and political radicalisation. It does so through algorithmic biases, the incentivisation of commercial populism, and the creation of filter bubbles that encourage in-group thinking and “othering”, as exemplified in ex-alt-righter Caleb Cain’s testimony and Christchurch shooter Brenton Tarrant’s actions.


Packaging Radicalisation

A key reason behind the recommendation algorithm’s disproportionate proliferation of increasingly extreme content is the alt-right’s ability to understand YouTube’s culture and affordances. Alt-right creators package content in a manner that makes it easy for YouTube’s recommendation algorithm to promote. YouTube’s audio-visual content delivery is well suited to populist content, which relies predominantly on ethos rather than evidence (Wurst, 2022). The format allows a more personalised delivery and a captivating, satirical tone, which aligns with YouTube’s culture and entertainment purposes (Evans, 2018). The alt-right network’s ability to present its ideology in a light-hearted manner, through memes and degrading jokes, further helps advertise an identity that is ‘edgy’ and ‘counter-cultural’, providing a ‘punk-like’ appeal. By packaging extremist ideas in this digestible, comedic manner, audiences can become gradually desensitised to the attitudes presented (Lewis, 2018). This presentation is aided by YouTube’s audio-visual affordances and its alignment with YouTube’s culture of memes and satire, allowing increased engagement and better ranking within the platform.
Another way alt-right ideology becomes algorithmically favoured is through creators’ ability to platform themselves as an interconnected network. Alt-right content creators frequently collaborate and appear as guests on one another’s channels; the scale and interconnected nature of this network is illustrated in Figure 1. By hosting other right-wing influencers as guests, channels facilitate an easier flow from one video to another, often through cross-promotion and direct links to content, increasing their audience and exposure (Lewis, 2018). This web of guest appearances creates increasing overlap between conservative and more extremist influencers, forming a “radicalisation pipeline” (Ribeiro et al., 2019, p.1) along which users can travel to find increasingly extremist content (a toy illustration of this structure follows Figure 1). These collaboration strategies provide a comprehensive network from which the algorithm can draw, contributing to algorithmic overrepresentation.


Figure 1

Documenting the Alternative Influence Network on YouTube

Note. Adapted from “Alternative Influence: Broadcasting the reactionary right on YouTube,” R. Lewis, 2018, Data & Society, p.11 (https://datasociety.net/wp-content/uploads/2018/09/DS_Alternative_Influence.pdf). Copyright 2018 by Data & Society.
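
To make the ‘pipeline’ structure described above concrete, the sketch below represents channels as nodes and guest appearances as edges, then finds the shortest chain of cross-appearances between a mainstream channel and an extremist one. This is a purely hypothetical illustration: the channel names are invented placeholders rather than real channels from Lewis (2018), and the graph is not drawn from any dataset.

```python
# Hypothetical sketch of the "radicalisation pipeline" as a co-appearance graph.
# Channel names are invented placeholders; the only point is that guest
# appearances create a short path from mainstream to extreme content.
from collections import deque

guest_appearances = {
    "mainstream_commentary": ["anti_sjw_reactor"],
    "anti_sjw_reactor": ["mainstream_commentary", "race_realism_debater"],
    "race_realism_debater": ["anti_sjw_reactor", "white_nationalist_stream"],
    "white_nationalist_stream": ["race_realism_debater"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search over the co-appearance graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(shortest_path(guest_appearances, "mainstream_commentary", "white_nationalist_stream"))
# ['mainstream_commentary', 'anti_sjw_reactor', 'race_realism_debater', 'white_nationalist_stream']
```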

Profit-Maximisation

Because it operates within the attention economy, YouTube’s recommendations algorithm optimises user engagement to maximise its advertising value and potential revenue from third parties. To track the algorithm’s success, YouTube measures engagement using “captivation metrics” (Seaver, 2018, as cited in Milano et al., 2020, p. 962), such as video views, watch time, likes, shares and comments. Before 2012, YouTube’s recommendations algorithm used views as its primary indicator of video quality and popularity (Goodrow, 2021). However, this cultivated a mass of ‘click-bait’ videos, and to combat this, YouTube amended the algorithm to centre on watch time, ranking videos by their ability to sustain user engagement (The YouTube Team, 2019; Covington et al., 2016). Elements of the algorithm remain relatively opaque, as it has integrated a “neural network” (Bryant, 2020, p.3), a machine-learning system that identifies more obscure patterns between content and adjusts recommendations automatically (Bryant, 2020). These amendments were made to maximise user engagement and the platform’s profitability.
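
The shift described above, from ranking by raw views to ranking by expected watch time, can be illustrated with a minimal, hypothetical sketch. This is not YouTube’s actual system; the Video class, the candidate list, and the predicted_watch_time_min figures are all invented for the example, and the point is only that the two orderings surface different kinds of content.

```python
# Illustrative only: compares view-count ranking with watch-time ranking.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    views: int
    predicted_watch_time_min: float  # stand-in for an engagement model's output

candidates = [
    Video("Clickbait compilation", views=2_000_000, predicted_watch_time_min=1.5),
    Video("Three-hour political livestream", views=150_000, predicted_watch_time_min=42.0),
    Video("Short news clip", views=800_000, predicted_watch_time_min=3.0),
]

# Pre-2012-style ranking: popularity proxied by views.
by_views = sorted(candidates, key=lambda v: v.views, reverse=True)

# Watch-time-style ranking: popularity proxied by expected sustained engagement.
by_watch_time = sorted(candidates, key=lambda v: v.predicted_watch_time_min, reverse=True)

print([v.title for v in by_views])       # the clickbait video surfaces first
print([v.title for v in by_watch_time])  # the long-form livestream surfaces first
```

Under the second ordering, long-form debate and livestream formats rise, which is the dynamic the next section links to alt-right content styles.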

Over 70% of the content viewed on YouTube is recommended by YouTube’s recommendation algorithm (Rodriguez, 2018, as cited in Haroon et al., 2023).

Building the Pipeline

By prioritising user engagement and surfacing these underlying connections, many of these algorithmic adjustments built the pipeline to the alt-right. Bryant (2020) found strong biases in the recommendations algorithm towards right-leaning content, most notably that of the alt-right. This research is supported by Haroon et al. (2023), who found that content was increasingly recommended from “problematic” (p.1) channels, whether alt-right, populist or conspiratorial, with 36.1% of users encountering them. One explanation is that the shift towards watch time suited the alt-right’s creation style of predominantly long-form content: debates, podcasts and video essays. And because alt-right talking points were often interwoven into pop-culture commentary or game reviews, the incorporation of machine learning to link content through otherwise indiscernible parallels meant that this content acted as a gateway to the more explicitly political and radical alt-right ideologies.
In service of optimising users’ time spent on the platform, YouTube financially incentivises content creators through the YouTube Partner Program and ‘super-chats’, providing avenues for full-time careers for alt-right creators and rewarding populist content. Because YouTube’s algorithm is engagement-oriented, sensationalised and controversial content receives a financial reward, monetising and incentivising more extreme content because it is more likely to increase views and profitability. The YouTube Partner Program allowed all creators to receive a portion of ad revenue where, previously, this was restricted to certain channels that met set moderation requirements (Munger & Phillips, 2020). This amendment created more lucrative career opportunities for far-right content creators and incentivised the creation of more ‘rage-bait’, extreme and populist content, or “commercial populism”, to retain users’ attention (Volcic & Andrejevic, 2022, p.1, as cited in Wurst, 2022). In 2019, YouTube demonetised political channels to prevent advertising brands from being associated with extremist channels (Munger & Phillips, 2020). However, creators continued to have access to monetisation through ‘super-chats’ on live streams, crowdfunding sites like Patreon, and direct product promotion and advertising in their content (Munger & Phillips, 2020). This enabled full-time careers producing controversial content, resulting in commercial populism and an even more saturated market of extremist content.

38% of fascists who credited the internet explicitly credited YouTube for their ‘red-pilling’

(Evans, 2018)

An Algorithmic Push

As YouTube’s recommendations algorithm amplifies echo chambers into filter bubbles and constricts the diversity of opinion and information that alt-right users encounter, the abundance of extremist content supplies an alt-right tunnel. When engaging with political news and information, audiences tend to build their own echo chambers through selective exposure and confirmation bias, but when political information is sourced from YouTube’s recommendations algorithm, the agency over that exposure is transferred to the algorithm and a filter bubble is created (Selvanathan & Leidner, 2022). Audiences are most prone to selective exposure on political topics, seeking material that reinforces the pre-existing political beliefs through which they then process political information (Papacharissi & Trevey, 2018; Stroud, 2008, as cited in Leung & Lee, 2014). For this reason, a recommendations algorithm designed to categorise users’ interests and feed them political information that similar users have engaged with can create a bias-affirming feedback loop that shelters users from rebuttals (Haroon et al., 2023). It creates an environment wherein each user lives in their own “unique universe of information” (Pariser, 2011, p.10); messages reaffirming reactionary politics are pushed, and counter-arguments are routinely sidelined (Jamieson & Cappella, 2008, as cited in Arguedas et al., 2022). This diminishes the common ground between parties, and the limited exposure to alternate views lowers tolerance for alternate opinions and reinforces the more extreme worldview (Arguedas et al., 2022).
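
The bias-affirming feedback loop described above can be illustrated with a toy simulation. The following sketch is purely illustrative and makes strong assumptions that are not drawn from the cited studies: each item and user has a single ideological ‘lean’ score, the user clicks items near their own lean more often, and the recommender updates its estimate of the user from clicks alone.

```python
# Toy filter-bubble simulation; not a model of YouTube's actual recommender.
import random

random.seed(42)

user_lean = 0.2          # the user's mild initial preference on a [-1, 1] scale
estimated_lean = 0.0     # what the recommender currently believes about the user
catalogue = [random.uniform(-1, 1) for _ in range(500)]  # item leans

print(f"catalogue spread: {max(catalogue) - min(catalogue):.2f}")

for round_ in range(10):
    # Recommend the 20 items whose lean is closest to the current estimate.
    slate = sorted(catalogue, key=lambda lean: abs(lean - estimated_lean))[:20]

    # The user clicks items near their own lean with higher probability.
    clicks = [lean for lean in slate
              if random.random() < max(0.05, 1 - abs(lean - user_lean))]

    if clicks:
        # The estimate drifts toward clicked content, narrowing future exposure.
        estimated_lean = 0.7 * estimated_lean + 0.3 * (sum(clicks) / len(clicks))

    spread = max(slate) - min(slate)
    print(f"round {round_}: estimate={estimated_lean:+.2f}, slate spread={spread:.2f}")
```

Even in this crude model, the recommended slate quickly spans only a fraction of the catalogue’s ideological range, which is the ‘unique universe of information’ dynamic Pariser describes.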

One of Us

The alt-right network is an affective public connected by the solidarity of anti-progressive sentiment; it offers belonging, a solidification of social and political identity, and, coupled with YouTube’s algorithmically induced filter bubbles, bolsters an in-group mentality. The alt-right is an affective public in that it is connected through expressions of anti-progressive sentiment, supports connective action, and disrupts dominant political narratives (Papacharissi & Trevey, 2018). By centring the alt-right identity on contempt for ‘progressives’, it defines the in-group identity through supremacy over an outgroup of ‘social justice warriors’ and feminists. The alt-right presents itself as having a fundamental understanding of truth, and by accepting it, the user is admitted into an exclusive network. By identifying and engaging with the alt-right as a networked public, users can adopt and refine their social identities as much as their political identities; it grants a sense of belonging and acquired knowledge (Papacharissi & Trevey, 2018). A popular term the alt-right community uses to describe its first embrace of alt-right ideology is ‘red-pilling’. The term references The Matrix and likens the absorption of alt-right beliefs to enlightenment, gaining access to forbidden truths and seeing the world for ‘what it is’ (Botto & Gottzén, 2023). However, holding this ‘superior’ knowledge also distinguishes that individual from others who ‘lack’ it. The ‘us’ and ‘them’ mentality encouraged by the alt-right network is exacerbated in the filter bubble environment, allowing for easier ‘othering’ of, and affective polarisation towards, progressives.

One of Them

As each user operates on different information within their personalised filter bubble, the resulting lack of mutual understanding enhances the alt-right’s ability to ‘other’ progressives, who may appear ignorant or uninformed, encouraging their classification as ‘lesser’ than the alt-right in-group (Holliday et al., 2004). This foundation can create a culture wherein arguments opposing the alt-right are more easily dismissed, as they appear ill-informed and beneath consideration (Holliday et al., 2004; Papacharissi & Trevey, 2018). The populist framing of ‘social justice warriors’ as threatening Western culture and restricting freedom of speech further encourages affective polarisation, wherein animosity is perpetuated between the alt-right and progressives. This is due to the increased homogeneity of online networks, which filter bubbles tend to induce and which can result in increased isolation from other partisans (Iyengar et al., 2019).

Gradual Radicalisation

YouTube’s algorithmic ability to amplify filter bubbles, ‘other’ alternate beliefs and induce affective polarisation can produce a compelling environment for user radicalisation, as illustrated in interviews and accounts such as that of ex-alt-righter Caleb Cain. In 2014, Mr Cain watched YouTube-recommended self-help videos by men’s rights advocate Stefan Molyneux. While Mr Molyneux did not explicitly endorse far-right ideologies, he acted as a “gateway to the far right” within YouTube’s recommendations algorithm (Hosseinmardi et al., 2021, p. 1; Wurst, 2022). YouTube linked Mr Cain’s interests and viewing patterns with those of similar right-leaning users, and as he gradually consumed more extreme content, the algorithm fed him more of it to sustain his engagement. Over the next two years, Mr Cain watched increasing amounts of content from right-wing creators, and even explicitly racist videos, despite having originally identified as a liberal (Roose, 2019). While the radicalisation process can halt for some users, as it eventually did for Mr Cain, others can become further engulfed and radicalised in their filter bubbles.

…”YouTube [is] a powerful recruiting tool for Neo-Nazis and the alt-right” (Bryant, 2020, p.1)

The Common Thread

Mr Cain is not a standalone case; YouTube has been identified as “… a powerful recruiting tool for Neo-Nazis and the alt-right” (Bryant, 2020, p.1). This is supported by Evans’ (2018) research, which found that 38% of fascists who credited the internet explicitly credited YouTube for their ‘red-pilling’. It is further illustrated by Brenton Tarrant, the shooter responsible for the Christchurch massacre in 2019, who had previously donated to Mr Molyneux’s channel and, while live streaming the shooting, explicitly endorsed the famous YouTuber ‘PewDiePie’. This direction of the audience towards YouTube suggests that he may have garnered some of his ideologies there and that YouTube can play a facilitative role in the radicalisation process (RCIACM, 2020). The shooter’s manifesto was further broadcast by popular white ethnonationalist YouTuber Joseph Cox to his 600,000 followers (Macklin, 2019). This demonstrates the networked nature of the alt-right and how, within these filter bubbles, alt-right content creators inspire extremism amongst viewers while viewers encourage extremity amongst creators, resulting in feedback loops of mutual radicalisation (Lewis, 2018). These conclusions are supported by research into YouTube’s recommendation system, which suggests that the algorithm recommends content aligned with users’ partisan beliefs, that content deeper in the recommendation trail becomes increasingly extreme, and that this trend is most evident for right-leaning users (Haroon et al., 2023; Haroon et al., 2022).

Conclusion

Users are shaped by the content they consume. YouTube’s algorithm solidifies echo chambers, and the resulting filter bubble impedes users’ ability to empathise with those who have become ‘othered’, increasing affective polarisation. This contributes to an environment in which users are more susceptible to radicalisation, as Mr Cain and Mr Tarrant illustrate. While the most visible radicalising force is often the alt-right community itself, the YouTube recommendations algorithm is responsible for steering users’ understanding of their political landscape and identity towards more extreme views through the interlinked alt-right network (Munger & Phillips, 2020). This network benefits from the algorithm’s amplification of the filter bubble phenomenon and its bias towards controversial content, which together create an environment in which audiences and alt-right influencers alike are encouraged to adopt more extreme views, raising the risk of political radicalisation.

References


Arguedas, A. R., Robertson, C., Fletcher, R., & Nielsen, R. (2022). Echo chambers, filter bubbles, and polarisation: a literature review. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2022-01/Echo_Chambers_Filter_Bubbles_and_Polarisation_A_Literature_Review.pdf

Botto, M., & Gottzén, L. (2023). Swallowing and spitting out the red pill: young men, vulnerability, and radicalization pathways in the manosphere. Journal of Gender Studies, 33(5), 596–608. https://doi.org/10.1080/09589236.2023.2260318

Bryant, L. V. (2020). The YouTube algorithm and the Alt-Right filter bubble. Open Information Science, 4(1), 85–90. https://doi.org/10.1515/opis-2020-0007

Covington, P., Adams, J., & Sargin, E. (2016). Deep Neural Networks for YouTube Recommendations. Proceedings of the 10th ACM Conference on Recommender Systems, 191–198. https://doi.org/10.1145/2959100.2959190

Evans, R. (2018). From memes to Infowars: How 75 fascist activists were “red-pilled”. Bellingcat. https://www.bellingcat.com/news/americas/2018/10/11/memes-infowars-75-fascist-activists-red-pilled/

Goodrow, C. (2021, September 15). On YouTube’s recommendation system. YouTube Official Blog. https://blog.youtube/inside-youtube/on-youtubes-recommendation-system/

Haroon, M., Chhabra, A., Liu, X., Mohapatra, P., Shafiq, Z., & Wojcieszak, M. (2022). YouTube, the great radicalizer? Auditing and mitigating ideological biases in YouTube recommendations. https://doi.org/10.48550/arXiv.2203.10666

Haroon, M., Wojcieszak, M., Chhabra, A., Liu, X., Mohapatra, P., & Shafiq, Z. (2023). Auditing YouTube’s recommendation system for ideologically congenial, extreme, and problematic recommendations. Proceedings of the National Academy of Sciences – PNAS, 120(50). https://doi.org/10.1073/pnas.2213020120

Holliday, A., Hyde, M., & Kullman, J. (2004). Otherization. Intercultural communication: An advanced resource book (1st ed, pp. 21-35). Routledge. https://ebookcentral.proquest.com/lib/curtin/detail.action?docID=182287

Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D. M., & Watts, D. J. (2021). Examining the consumption of radical content on YouTube. Proceedings of the National Academy of Sciences – PNAS, 118(32), 1–8. https://doi.org/10.1073/pnas.2101967118

Iyengar, S., Lelkes, Y., Levendusky, M., Malhotra, N., & Westwood, S. J. (2019). The Origins and Consequences of Affective Polarization in the United States. Annual Review of Political Science, 22(1), 129–146. https://doi.org/10.1146/annurev-polisci-051117-073034

Leung, D. K. K., & Lee, F. L. F. (2014). Cultivating an active online counter public: Examining usage and political impact of internet alternative media. The International Journal of Press/Politics, 19(3), 340-359.
https://doi.org/10.1177/1940161214530787

Lewis, R. (2018). Alternative Influence: Broadcasting the reactionary right on YouTube. Data & Society. https://datasociety.net/wp-content/uploads/2018/09/DS_Alternative_Influence.pdf

Macklin, G. (2019). The Christchurch Attacks: Livestream Terror in the Viral Video Age. CTC Sentinel, 12(6), 18-29. https://ctc.westpoint.edu/christchurch-attacks-livestream-terror-viral-video-age/

Milano, S., Taddeo, M. & Floridi, L. (2020). Recommender systems and their ethical challenges. AI & Society, 35(6), 957-967. https://doi.org/10.1007/s00146-020-00950-y

Munger, K., & Phillips, J. (2020). Right-Wing YouTube: A supply and demand perspective. The International Journal of Press/Politics, 27(1), 186-219. https://doi.org/10.1177/1940161220964767

Papacharissi, Z., & Trevey, M. T. (2018). Affective Publics and Windows of Opportunity: Social media and the potential for social change. In M. Graham (Ed.) The Routledge Companion to Media and Activism, (87-96). Routledge.

Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin.

Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A. F., & Meira, W. (2019). Auditing radicalization pathways on YouTube. https://doi.org/10.48550/arxiv.1908.08313

Roose, K. (2019). The making of a YouTube radical. The New York Times.

Royal Commission of Inquiry into the Attack on Christchurch Mosques on 15 March 2019 (2020). Ko tō tātou kāinga tēnei. https://christchurchattack.royalcommission.nz/the-report/part-2-context/harmful-behaviours-right-wing-extremism-and-radicalisation

Selvanathan, H. P., & Leidner, B. (2022). Normalization of the Alt-Right: How perceived prevalence and acceptability of the Alt-Right is linked to public attitudes. Group Processes & Intergroup Relations, 25(6), 1594–1615. https://doi.org/10.1177/13684302211017633

The YouTube Team. (2019, January 25). Continuing our work to improve recommendations on YouTube. YouTube Official Blog. https://blog.youtube/news-and-events/continuing-our-work-to-improve/

Wurst, C. (2022). Bread and plots: Conspiracy theories and the rhetorical style of political influencer communities on YouTube. Media and Communication, 10(4), 213-223. https://doi.org/10.17645/mac.v10i4.5807

YouTube Creators. (2023, October 11). YouTube Partner Programme: How to make money on YouTube. [Video]. YouTube. https://www.youtube.com/watch?v=om2WiDsHLso


Comments

20 responses to “YouTube as a Radicalizing Force: The Promotion of the Alt-Right Pipeline”

  1. Lily

    Hi Eva,
    Very interesting discussion of how social media platforms like YouTube are breeding grounds for red-pilled content and an easy pathway to the alt-right pipeline. I’ve noticed that more and more viral right-wing channels seem to be cropping up, and it’s particularly worrying when you consider YouTube’s audience is made up largely of children.

    This is a connection I made in my own paper, where I discussed the spread of conservative ideology on YouTube (facilitated in this instance by religious family vlogging channels). I’m curious to get your thoughts on whether you think it is the responsibility of the platform to stop the radical politicisation of the content being posted? Do you think there should be processes put in place to filter out controversial content, or is it up to the viewer to discern bias and ‘rage-bait’?

    Thanks for the great read!

    1. Eva

      Hi Lily,

      I could not agree more; the scale and demographics of the platform increase the severity of the issue. And the rise in anti-progressive sentiment and affective polarization between spheres is increasingly concerning.

      I’ll be sure to read your paper and see your take on the issue! I believe YouTube holds a responsibility to properly disincentivize harmful content. While, to an extent, it’s understandable that it struggles to moderate all the uploads it encounters, YouTube needs to become more transparent as a beneficiary in this process, rather than continuously portraying itself as a mere mediator. While the broader issues of hate speech and disinformation are not constrained to YouTube alone, YouTube’s monetization directly incentivized such content for a long period of time. I believe it’s this active profiteering from harmful content where YouTube needs to bear greater accountability. Greater content moderation processes are to be expected of such a large-scale platform. However, this is more readily arguable for the far right than for, say, conservative influencers who aren’t as explicit in their messaging.

      I think it is increasingly unrealistic to expect the user to discern rage-bait videos when parodies are becoming increasingly indiscernible from genuine hate speech or rage-bait. I believe there should be greater processes established for filtering out controversial content, however, logistically I am not sure what that would look like. But I imagine normalizing content warnings systematically would be a good starting point to help viewers discern some biases. Another area which would alleviate the weight of the issue is increased media-literacy and recognition of biases, however, ultimately I think it unrealistic for this burden to be on the user considering the magnitude of media they encounter daily.

      Thanks for your comment and thought-provoking questions!

      1. John Lim

        Hi Lily & Eva,

        Sorry to insert myself into your discussion, but I was intrigued by the subject matter concerning filtering out controversial content. During my research I did come across efforts at filtering, but they only seemed to push people engaging in alt-right rhetoric to other platforms that do not censor or filter, leading to a further concentration of toxic individuals and ideologies on obscure platforms, which amplifies such ideas even further.

        I was curious to hear what you guys think of this? How effective can filtering and censoring really be when people will just hop on to another platform that allows such content? What other alternative can we pursue? I really think education is the way to go, or facilitating a platform where both sides can engage without enraging each other and really discuss the core issues behind their grievances, such as gaps in power, wealth and status, or the loss thereof, which fuel racist, sexist and xenophobic movements.

        If you guys are interested in exploring how toxic technocultures form in other platforms here is a link to my paper.

        https://networkconference.netstudies.org/2025/onsc/5420/social-media-affordances-donald-trump-politics-and-social-change/

        If you’re also interested in how the alt-right has been growing in influence check out Wendling’s (2018) book. Very interesting read!

        Wendling, M. (2018). Alt-Right : From 4chan to the White House. Pluto Press.
        https://ebookcentral.proquest.com/lib/curtin/reader.action?docID=5391114&ppg=8

  2. Joel Bourland

    Great work on this piece, Eva, and thanks for your insightful comments, Lily. I think the point you raise, Lily, is a significant one. How can platforms strike a balance between protecting freedom of speech while also ensuring that radical viewpoints are not given to us so disproportionately? And where should the responsibility fall for determining which kinds of content are ‘dangerous’ or ‘radical’?

    There are many competing interests always present between governments, platforms, advertisers and stakeholders, and even lobbying groups, political parties, and special interest groups—each of which influence the accessibility of certain kinds of information. How can these interests be mediated in an impartial manner? These are big questions!

  3. pangi

    Hi Eva,

    Your paper was a solid read. I liked how clearly you explained how the alt-right uses YouTube to spread their content. The part about how they make it entertaining or meme-like to get around detection and gain more reach really stood out to me. It’s crazy how something so serious can be hidden behind humour and satire.

    It connected a lot with what I looked at in my own paper about filter bubbles and how they trap young people into only seeing one kind of content. I also talked about how algorithms play a huge part in slowly changing how people think, kind of like what you showed with the Caleb Cain example.

    One thing I was wondering is whether you think YouTube will ever change its algorithm to stop this kind of content from being promoted or do you think the system benefits too much from the views and engagement to make any real change?

    Really enjoyed your paper.

    1. Eva

      Hi Pangi,

      Thank you so much for your comment! I think their understanding of how to appeal to a wider audience by turning their content into memes is so crucial to its spread, and to how it can infiltrate other non-political areas like gaming culture and assist the algorithm in drawing connections to their content.

      I agree, our papers really complement one another in that regard.

      I think YouTube may feel more pressure to make changes, but that they will always be superficial – like its adjustment to demonetize explicitly political content. Even that change was made to appeal to third-party advertisers, as they were reluctant to be associated with more extreme political views. To me, each of YouTube’s previous algorithmic adjustments only demonstrates that they revolve around its own self-interest. So unless extreme outside pressure is placed by third parties, or government intervention is threatened, I do not believe it has any true incentive to change. One thing I noticed during my research is YouTube’s tendency to position itself as a mediator or supplier of content rather than a profiteer, and this facade gives more hope that it will implement fair changes. But at the end of the day, it is a for-profit business, and unless outside factors demand change, it has no reason to fix something that brings in more revenue.

      Thanks again for taking the time to read and comment ☺️

  4. Jiahao

    Hi Eva
    The paper clearly explains how YouTube’s algorithm can lead people to more extreme political views and promote certain types of content. I found it very interesting to read, and it enabled me to learn about the power of algorithms to manipulate people in this day and age. I liked how the paper described the role of filter bubbles, commercial interests and in-group thinking in this process. The paper was well organised, and it was easy to understand how YouTube has the power to influence people’s views. To what extent do you think algorithms push people towards more extreme political views, compared to other social or psychological influences?

    1. Eva

      Hi Jiahao,

      Thank you for your feedback!
      Excellent question – I think while it’s important to recognize the algorithmic bias towards more extreme content, the algorithm is centered more around long-term engagement, and as such, if the user is not interested in conspiratorial or right-wing content, the algorithm will take note of the disinterest and promote it less. For this reason, I believe other social and psychological factors are much more influential. I consider these outside factors to be what primes users and increases their susceptibility to radicalization, and it is there that YouTube’s algorithm becomes more problematic. YouTube acts as a booster to users’ pre-existing beliefs. While the algorithm and social and psychological influences each play a role, I believe the psychological to be the most influential and the algorithm the least.

      Thank you for your thought-provoking question! 😊

  5. Tilly

    Hi Eva, What an interesting topic! I was intrigued when I saw that your article was focused on YouTube, as I feel as though it’s a platform that has fallen under the radar nowadays, with the focus mainly being on TikTok, would you agree? When I think of filter bubbles, my mind thinks of TikTok straight away, potentially because it’s all that is ever spoken about now, particularly among younger generations. This is why I found your article so interesting, as it has widened my perspective! Just from my own experience on YouTube, I feel as though creators can avoid the consequences of posting controversial content, and there is less risk of a video being taken down, as opposed to TikTok, which is very strict when it comes to sensitive content.

    If someone is exposed to this type of content all the time, I can see how it could impact their views, as we are so easily convinced by what we see online. I remember when Logan Paul was one of the most famous people on YouTube, with most of his users being primary school age (myself included). I recall him posting humorous and comedic content, such as pranks and vlogs; however, there was one particular video that exposed these viewers to an awful topic that should not have been seen by young people. I have learnt that since then, YouTube has increased its censorship guidelines, particularly with young-oriented content. How do you think this could have possibly impacted the viewers’ identities?

    Well-done on such an interesting and engaging paper!

    1. Eva

      Hi Tilly,

      I agree, people tend to underestimate YouTube’s continued popularity and significance. I think since TikTok has been more topical, its moderators are more under the public eye, and as YouTube has more long-form content, it is perhaps more tedious for moderators to look through. However, I think it is still important to have better moderation on YouTube, as the video-essay formats that political influencers gravitate to can definitely give the impression of more researched and reliable information than they actually provide.

      That is an interesting connection, and I absolutely agree. Influencers can have such a major impact on users’ identities, especially in more formative years. I believe censorship of content aimed at younger audiences is a good intervention tactic. The impact on users’ identities is hard to pinpoint; however, I can imagine seeing such inappropriate content could normalize it in a way that undermines its severity. While it is difficult to comment on the impact of the inappropriate video without knowing its exact nature, I think the moderation introduced later could mean younger audiences are more aware of what might be inappropriate for comedic commentary, and develop a greater respect for serious issues. However, that is all my personal opinion and speculative.

      I really appreciate your comment and question and I am very glad you enjoyed my paper!

  6. 22068297

    Hi Eva,
    Your paper was an interesting read. I never knew the extent of the influence alt-right content creators have on the platform.
    YouTube is typically my go-to platform for how-to videos and educational content such as TED Talks. I don’t feel like I have ever come across any of the alt-right content mentioned in your paper, which might be because I have never gone looking for it. But based on your research, is there a way to identify these creators?
    I tend to agree that the YouTube algorithm can create filter bubbles of recommended content, but I have found this is typically for content that has already been searched for or consumed in the past. For example, I recently searched for a video on how to renovate a lawn, and then ten other recommendations came up with similar content. The algorithm didn’t just serve these videos; it was because I went looking for them. Similarly, if people are inclined towards alt-right radical content, won’t they find it anyway? I would argue users cannot just blame the algorithm for recommending alt-right content; they need to take some responsibility for feeding it in the first place.
    Thanks again for enlightening me with a different perspective.
    Cheers, Greg

    1. Eva

      Hi Greg,

      It can be easy to be sheltered from their influence, especially as YouTube’s recommendations algorithm learns that you are using YouTube for predominantly educational purposes; as you show disinterest in more populist content, it will likely adjust its recommendations to show less of it. There is quite a pool of research on this topic; I would highly recommend Rebecca Lewis’ Alternative Influence: Broadcasting the Reactionary Right on YouTube (2018) – her mapping of these influencers is the first figure in my essay. Lewis attempted to map out these influencers; however, it can be difficult to identify alt-right creators, particularly when many do not identify themselves as alt-right, and many only host more far-right influencers without adequately debunking their ideas. For these reasons, the exact influencers are difficult to list.

      I think you make an excellent point and I tend to agree – the algorithm recommends what users interact with and search for, and as such, users definitely have some responsibility there. I do not mean to imply that YouTube steers each user directly to the right regardless of their political ideologies, but I think it is important to highlight that the algorithm has a bias in that direction. If users are engaging with slightly right-leaning political material, it is easier for that interest to be amplified, and while the majority of the user base is not alt-right, the recommendation algorithm over-represents this content in a way that is disproportionate to the population. To answer your question, I think people inclined towards alt-right radical content will definitely find it, and, as in the case of Caleb Cain, people who are not directly searching for it but engage with these political influencers’ content in other spheres may be led to more recommendations of their more radical content.

      No worries, and thank you for taking the time to check out my paper!

  7. Jake

    Hi Eva,

    Interesting read and very thought-provoking. It’s a concerning idea that the algorithm can unintentionally lead people down a politically extreme path, simply because it gets more engagement. I wonder if this issue is equally as prominent on other popular social media platforms? Maybe these types of extremist videos should have an age restriction? Do you think this phenomenon also occurs on the other (left) end of the political spectrum, with virtue signalling being a driver? It’s a tricky balance maintaining freedom of speech whilst attempting to regulate the intrinsic nature of these content algorithms.

    1. Eva

      Hi Jake,

      You definitely raise some important questions. While this issue is definitely not restricted to YouTube, it is most evident there; I chose YouTube as my research topic as it has been highly cited in people’s radicalisation journeys. However, there is little research on platforms such as X, which could be similarly damaging. The filter bubble phenomenon is a prominent feature of most social networking sites that algorithmically curate a feed, and I believe its severity is largely determined by the algorithm’s success in moderating content. Each platform also often has a different purpose: the early days of Facebook prioritised showing content from your friends, whereas TikTok is a discovery platform that shows you content based on what is most popular, what you have historically engaged with, and the behaviour of similar users. But to comment on the extent to which each platform shares this issue, without having studied each algorithm, is outside my expertise.

      I agree that an age restriction would be a good intervention; however, it is only so effective, as determining what counts as “extremist” is murky. This issue would also exist on the left side of the political spectrum; however, as the right uses more outrage tactics and exhibits a more interconnected and expansive network, studies suggest this bias is more pronounced on the right.

      Thank you for your thought-provoking comment!

  8. Pip Foster

    This is an excellent analysis. The diagram showing the “spiderweb” of the online alt-right is fascinating, but more important is the point you make about the self-selection of the algorithm increasingly pushing people further to the right. Scary stuff, and it raises so many questions about how we fight it because, as you say, the algorithm is opaque. I think it’s really shocking just how many people credit YouTube with part of their far-right behaviour. I wonder if as many would say that Twitter (X) radicalized them one way or another, and whether Twitter’s reputation now as just a right-wing cesspit gives it less credibility in terms of influencing people’s beliefs, compared to the “objective” and reliable YouTube. There is clearly a question to be asked about the influence YouTube has, but then again it is something like the second most visited website.

    I really feel legislation is behind in terms of online content and I hope things catch up. The amount of harm some people on YouTube have done is awful, and yet they still make money from the platform.

    Again excellent piece , I found it very thought provoking.

    1. Eva

      Hi Pip,

      I think you raise an excellent point as to what role X is now playing in the radicalization landscape; it serves as a fantastic example of how a lack of moderation will reconstruct a platform’s user base and can become a gathering point for the alt-right community. But you’re right that YouTube may seem more credible by comparison. I think the longer-form content also aids the illusion of better quality or more credible journalism on YouTube.
      I agree – external intervention is key, as otherwise YouTube has no incentive to make any meaningful changes because they simply won’t be profitable. It is definitely saddening to see more incidents like the Christchurch shooting occur and watch investigations unpack these issues only to have their recommendations ignored.

      Thank you for your comment!

  9. Rebecca Tracey

    Hi Eva,
    I really enjoyed your essay and also felt it resonated with my own (disinfo with the Voice). “YouTube’s algorithmic ability to amplify filter bubbles and ‘other’ alternate beliefs and induce assertive polarisation can produce a compelling environment for user radicalisation.” In light of this statement and your findings within your essay, how do you feel about YouTube being given a free-pass with the Aus government’s under 16 social media ban? What do you think are going to be the implications for singling out YouTube and allowing access despite it being an algorithmic nightmare for alt-right content?

    https://www.abc.net.au/news/2025-04-23/youtube-social-media-ban-exemption-decided-government-says/105201088

  10. Isabelle Service

    Hi Eva,

    This is such an insightful piece and your analysis of how the alt-right uses YouTube’s algorithm and culture is really compelling.

    One thing I kept thinking about was whether we sometimes overemphasise the algorithm’s role. Are users being radicalised by the platform, or are they already seeking out that content—and the algorithm just speeds things up?

    Also, how do we address this without risking censorship or suppressing valid, if controversial, viewpoints? Would love to hear your take on where the line should be drawn.

    Isabelle

  11. JordanUhe

    Hi Eva,
    It is frightening how well the pipeline works; I personally experienced it. In a depressive stage of high school around 2017, I got drawn into Jordan Peterson tearing apart silly comments from SJWs. It started off as watching his compilations, then listening to longer talks, and finally I started spreading out into other creators. I started watching MGTOW content and Milo Yiannopoulos; I agreed with a lot of their beliefs, and the ones I didn’t I just ignored, until I had gotten caught in the cult of their personalities.
    I am happy to say I started seeing cracks in their content and managed to break away, but it is a very scary thing to look back upon.

    This was a very insightful read into the strategies which I fell for.

    Do you think we can educate children into seeing propaganda for what it is? Should we be banning people spouting ideas and remove freedom of speech from the internet?
    Thank you
    From Jordan

  12. John Lim

    Hi Eva,

    I really enjoyed reading your paper and discovering how deeply YouTube feeds into echo chambers and filter bubbles, prioritizing attention and engagement over the ethical circulation of content. Your paper was really detailed and explained things clearly in a way that was easy to understand, especially when encountering terminology I’d never heard before – great job!

    I also wrote about similar things in my paper, particularly how alt-right movements form, especially with the rise of Donald Trump and how he has acted like a lightning rod drawing alt-right rhetoric to his cause. If you think you might be interested, the link to my paper is below. In regards to the alt-right, social media and the polarization effect, do you think demonetization and censorship on social media platforms should equally apply to movements that are considered progressive? I’m just thinking that allowing one ideology but not the other also feeds into polarization, makes the gap even bigger, and further fuels the alt-right’s narrative that they are being silenced while the rest of society follows ‘progress’ like sheep. What are your thoughts, or perhaps you have another way to go about it?

    Thanks for the fantastic read!

    Link to my paper: https://networkconference.netstudies.org/2025/onsc/5420/social-media-affordances-donald-trump-politics-and-social-change/