
Political Polarisation and TikTok


Abstract

This paper examines how algorithm-driven social media platforms, especially TikTok, have exacerbated political polarisation by fostering echo chambers, normalising hate speech and enabling self-radicalisation. The way people consume political content has changed drastically as TikTok has displaced traditional media, especially among younger generations. This shift has given hate speech and extremist ideologies a global reach, as new users are rapidly funnelled into ideologically homogenous filter bubbles. Studies demonstrate that exposure to combative content intensifies users' convictions and nudges them towards increasingly extreme viewpoints, eroding trust in political opponents and deepening social divides. Despite superficial efforts, TikTok's reliance on automated moderation fails to address hate speech on the platform, while governments struggle to regulate it without infringing on users' free speech. The paper argues that meaningful human-led oversight and transparent regulation are essential to mitigate the polarisation caused by engagement-driven algorithms.

Introduction

The digital age has transformed how people consume media. The traditional media gatekeepers that once dictated what people consumed have been replaced by algorithms designed to maximise engagement, often at the expense of accuracy. For many younger people, platforms like TikTok have entirely replaced television as the primary source of news and entertainment. Hate speech and harmful ideologies that once struggled to find an audience have been given global platforms. TikTok's profit-driven algorithm has sorted users into highly segregated digital communities: echo chambers and filter bubbles in which the recommendation algorithm presents only information that reinforces a user's existing beliefs. Problematic and combative content has thrived under TikTok's algorithm, drastically transforming how people consume and interact with political content. Social media platforms like TikTok, which rely on algorithms with limited human oversight, have helped create a more polarised political climate in which extremism and hate speech thrive. Using TikTok as the primary example, this paper argues that social media platforms have fostered this polarised political climate by platforming hate speech, nudging users towards self-radicalisation, isolating them in filter bubbles, and relying on inadequate algorithmic self-regulation.

Platforming Hate

TikTok has rapidly replaced traditional media, cementing itself as a dominant platform that fuels political polarisation by platforming hate speech and legitimising extremism. Unlike traditional media, TikTok's algorithmic architecture disrupts the distribution of news and entertainment, prioritising engagement over accuracy. For younger generations, algorithm-based platforms have not merely supplemented traditional media; in most cases, they have supplanted it entirely, becoming the primary content distributor (Faltesek et al., 2023). This shift in media consumption has raised critical concerns about the unchecked material on platforms like TikTok. Far-right groups have flocked to TikTok because of the platform's accessibility compared to traditional media (Cuevas-Calderón et al., 2023). Studies have shown that, through coded language and dog-whistle tactics, extremists have grown sizeable and profitable audiences on TikTok (Shin, 2024). One problem with platforming extremist ideologies is that consuming such content has been shown to erode people's trust in the alternative content they encounter (Grandinetti & Bruinsma, 2022). Another is that, while extremist creators may have to filter their language on TikTok, they are not locked into a single platform: creators often build an audience on TikTok and then migrate viewers to less regulated parts of the internet (Mamié et al., 2021). The issues caused by platforming harmful content are also new and unpredictable because of the interactive dynamics social media introduces. The interactive nature of TikTok means users are no longer passive in their interactions with news and politics, and this personal investment can normalise harmful ideologies and language. For example, incel communities thrive on TikTok, where sexist language is trivialised and amplified, reshaping communication norms both online and offline (Solea & Sugiura, 2023). Features like comments, "duets" and "stitches" further entrench this normalisation, enabling users to interact with harmful content personally. The blending of extremist content with mundane entertainment erodes the boundary between acceptable debate and hate. This personal investment, and the blending of hate into everyday entertainment, have contributed to a more politically polarised environment that has normalised what were once fringe views.

The Algorithm and New Accounts

TikTok's algorithm-driven content distribution pushes new accounts towards politically combative and extremist content, fuelling political polarisation. As soon as a new account is created, the recommendation algorithm curates a personalised feed based on very limited interaction data. Divisive content drives engagement, which leads the algorithm to prioritise it when recommending content to new accounts (Shin, 2024). Crucially, TikTok bypasses filters like the search terms or tags used by platforms such as YouTube; instead, the algorithm relies on demographic signals to funnel users almost instantly into nationalistic, racial and ideological echo chambers (Fichman & Akter, 2024; Shin & Jitkajornwanich, 2024). Because the platform does not rely on search to drive engagement, its algorithm reacts aggressively to any sign of user interest, and users have little autonomy over what is initially presented to them. Research confirms that TikTok disproportionately amplifies political content (Grandinetti & Bruinsma, 2022), and once users interact with such material, the algorithm escalates recommendations toward political extremes (Shin, 2024). This gradual push towards increasingly extreme content also shifts what users consider "normal" (Shin & Jitkajornwanich, 2024), creating a rabbit hole unique to the digital age: an algorithmic nudge that was not possible before algorithmic content distribution and that can drive users to self-radicalise. Studies have shown that just three months of TikTok usage intensifies users' convictions on divisive issues such as politics, abortion and religion (Shin, 2024). This outcome is by design: as users become more politically invested, they generate more revenue for creators and for TikTok itself (Shin & Jitkajornwanich, 2024). With no incentive to change, maintaining a politically combative and polarised environment remains in TikTok's commercial interest. While some initial interaction is needed before users are fed increasingly extreme ideological content, the algorithm contributes directly to political polarisation by reinforcing existing prejudices. Serving users similar or more extreme content on the basis of limited initial engagement facilitates a self-radicalisation process that had no equivalent in a pre-digital world, and this process has changed how people consume and discuss political content online and in everyday life.

TikTok and Filter Bubbles

Unlike traditional media, which exposes audiences to a shared narrative, TikTok's algorithms isolate users in clusters of narrow ideological similarity. Unlike television, where viewers can simply change the channel, TikTok's search results are themselves influenced by the content users consume on the platform (Grandinetti & Bruinsma, 2022). If users who consume far-right content then search for left-leaning talking points, they may be presented with that information from a far-right perspective. This systematic sorting of people into categories, with minimal control over the process, is a fundamental shift in the consumption of news and entertainment. The combativeness the recommendation algorithm thrives on has created a binary clash of "us vs them", which directly contributes to a hate-fuelled political polarisation. There are concerns that if filter bubbles make political discourse too combative, its quality will deteriorate to the point of hampering the ability of democracies to function (Garimella et al., 2018). Part of the danger of filter bubbles and echo chambers on social media is that the more fragmented communities become online, the greater the risk of exposure to misinformation (Rhodes, 2021). TikTok and other social media platforms have exposed people to more news than ever before, yet filter bubbles show that the quality of that information has never mattered less: the algorithm that sorts users into separate communities is concerned only with maximising engagement, and information quality is not considered (Rhodes, 2021). The fragmented way communities receive information inside their own bubbles has created a disconnect in how people communicate about politics, stifling mediation between communities and fostering an even more politically polarised environment (Garimella et al., 2018). While people have broadly divided into left and right camps throughout modern democracy, algorithm-driven platforms such as TikTok have seen people become less compromising in their beliefs (Shin & Jitkajornwanich, 2024). Not only have filter bubbles made people less compromising; a trend has also seen people dehumanise their political opponents (Shin & Jitkajornwanich, 2024). While it has been argued that pre-existing beliefs are needed for people to be sorted into these filter bubbles, studies have also shown that it is extremely difficult for users to escape them (Shin, 2024). Users must actively retrain their recommendation algorithm to be shown content from the opposite side of the political divide; without this active effort, they remain funnelled into narrow ideological groups. Without a clear and transparent shift to favour information quality over user engagement, the algorithmic distribution of content will only continue to create an even more polarised political climate.

Algorithmic Moderation

Hate speech and extremism have thrived on TikTok because of the lack of human oversight of its algorithms and the inability of governments to hold the platform accountable for infringements of users' rights; these failures have fostered a more polarised and combative political environment. TikTok has minimal human oversight of its moderation, often relying solely on algorithms to police the platform. The primary moderation algorithm flags banned words and phrases, but it often lacks the nuance to detect when users use code words to mask harmful ideologies. Multiple countries have banned or limited the use of the platform, although their main concerns are data security and the involvement of the Chinese government (Maftei & Duică, 2025). Several countries have also fined TikTok for privacy violations, including the USA, which fined the platform USD$5.7 million for breaches of child safety laws (Gamito, 2023). For a company as large as TikTok, fines of this size are symbolic rather than deterrents: in 2024, TikTok's parent company, ByteDance, reported annual revenue of around USD$155 billion (Tech in Asia, n.d.). The EU has tried to combat harmful content and foreign interference on TikTok (Bernot et al., 2024), but policymakers face many challenges, and their reluctance to impede freedom of expression has left significant gaps in moderation and regulation (Gamito, 2023). TikTok users looking to spread hate have shown adaptability in circumventing algorithmic moderation and in turning the platform's algorithms to their advantage. Studies have shown that creators grow audiences by avoiding words that trigger the moderation algorithm and using algorithm-friendly terms instead (Sykes & Hopner, 2024). This growth demonstrates that only the words used, not the message being spread, are moderated; moderation that targets words rather than messages has proven ineffective, and effective moderation must be proactive rather than reactive and requires human intervention. TikTok's self-regulation is inadequate and performative, characterised by loosely defined guidelines that are inconsistently enforced and that allow the platform to claim compliance without impeding user engagement. Without global coordination to mandate transparent algorithms and more human-led moderation, TikTok will continue to play a key role in fostering political polarisation globally, and the platform's self-regulation will remain performative.

Conclusion

TikTok's algorithms have actively deepened social divides, even as those same algorithms have driven the platform's success. By privileging engagement over truth, they have normalised hate speech and extremist ideologies and embedded combativeness into the fabric of political discourse. The relentless push for user engagement has nudged new and unsuspecting users down a rabbit hole of self-radicalisation. The filter bubbles created by TikTok's algorithms isolate users within narrow ideological communities that reinforce and normalise hateful ideologies and directly contribute to the combative nature of political discourse. Together, these issues highlight the rapidly changing nature of the digital age. While people have always been divided politically, algorithmic news distribution has created challenges that platforms and governments were unprepared to handle. Addressing these issues will require meaningful human oversight, the antithesis of what has allowed these platforms to thrive. Until they are addressed, platforms like TikTok will remain complicit in a detrimental drive for user engagement that comes at the cost of meaningful political discourse.

References

Bernot, A., Cooney-O’Donoghue, D., & Mann, M. (2024). Governing Chinese technologies: TikTok, foreign interference, and technological sovereignty. Internet Policy Review, 13(1). https://doi.org/10.14763/2024.1.1741

Cuevas-Calderón, E., Dongo, E. Y., & Kanashiro, L. (2023). Spreadability and hate speech of radical conservatism: The Peruvian case on TikTok. Punctum International Journal of Semiotics, 9(2), 27–53. https://doi.org/10.18680/hss.2023.0018

Faltesek, D., Graalum, E., Breving, B., Knudsen, E., Lucas, J., Young, S., & Zambrano, F. E. V. (2023). TikTok as television. Social Media + Society, 9(3). https://doi.org/10.1177/20563051231194576

Fichman, P., & Akter, S. (2024). Political trolling on TikTok. Telematics and Informatics, 96, 102226. https://doi.org/10.1016/j.tele.2024.102226

Gamito, M. C. (2023). Do too many cooks spoil the broth? How EU law underenforcement allows TikTok’s violations of minors’ rights. Journal of Consumer Policy, 46(3), 281–305. https://doi.org/10.1007/s10603-023-09545-8

Garimella, K., De Francisci Morales, G., Gionis, A., & Mathioudakis, M. (2018). Political discourse on social media: Echo chambers, gatekeepers, and the price of bipartisanship. In Proceedings of the 2018 World Wide Web Conference (WWW '18) (pp. 913–922). https://doi.org/10.1145/3178876.3186139

Grandinetti, J., & Bruinsma, J. (2022). The affective algorithms of conspiracy TikTok. Journal of Broadcasting & Electronic Media, 67(3), 274–293. https://doi.org/10.1080/08838151.2022.2140806

Maftei, D., & Duică, L. N. B. (2025). Risks, threats, and vulnerabilities related to social media platforms and search engines. Regulations and national legal frameworks. Bulletin of Carol I National Defence University, 13(4), 249–265. https://doi.org/10.53477/2284-9378-24-62

Mamié, R., Ribeiro, M. H., & West, R. (2021). Are anti-feminist communities gateways to the far right? Evidence from Reddit and YouTube. In Proceedings of the 13th ACM Web Science Conference 2021 (WebSci '21) (pp. 139–147). https://doi.org/10.1145/3447535.3462504

Rhodes, S. C. (2021). Filter bubbles, echo chambers, and fake news: How social media conditions individuals to be less critical of political misinformation. Political Communication, 39(1), 1–22. https://doi.org/10.1080/10584609.2021.1910887

Shin, D. (2024). Artificial misinformation: Exploring human-algorithm interaction online. Springer Nature.

Shin, D., & Jitkajornwanich, K. (2024). How algorithms promote self-radicalization: Audit of TikTok's algorithm using a reverse engineering method. Social Science Computer Review, 42(4), 1020–1040. https://doi.org/10.1177/08944393231225547

Solea, A. I., & Sugiura, L. (2023). Mainstreaming the Blackpill: Understanding the Incel community on TikTok. European Journal on Criminal Policy and Research, 29(3), 311–336. https://doi.org/10.1007/s10610-023-09559-5

Sykes, S., & Hopner, V. (2024). TradWives: Right-wing social media influencers. Journal of Contemporary Ethnography, 53(4), 453–487. https://doi.org/10.1177/08912416241246273

Tech in Asia. (n.d.). TikTok fuels ByteDance's revenue to $155b. https://www.techinasia.com/news/tiktok-fuels-bytedances-revenue-to-155b


Comments

8 responses to “Political Polarisation and TikTok”

  1. Lily

    Hi Max,
    This is a really relevant and interesting read! I myself have definitely noticed this ‘gradual push’ that the algorithm gives you when engaging with particular content and accounts. It’s troubling how quickly seemingly innocuous trends and videos can lead to more extreme political content, and this sort of ‘rabbit hole’ is something I explore at length in my own paper.

    I'm curious to get your thoughts: do you think the popularity of some of this more outlandish political content is due to actual viewer interest/radicalisation, or more so a result of the divisiveness and controversy attached? For example, clips of the Fresh&Fit podcast ('red-pilled' YouTube duo) get millions of views, but most of the comments are negative. Do you think viewers are really agreeing with the content, or simply 'hate-watching'?

    Thanks for the great read.

    1. maxf

      Hi Lily,
      Thanks for replying to my paper. I enjoyed reading your paper and comparing the similarities.

      One thing that makes it hard to get a grasp on audience opinions is that TikTok filters the comments section based on the viewer. So if the algorithm thinks you will agree with the content, the comment section will have mostly positive comments at the top. While I'd expect all political content to have a large share of hate-watchers, I don't think any of the research I mentioned looked at ratios or anything like that, and it's hard to come to conclusions without deeper research into the topic. Personally, though, I wouldn't be surprised if it was well north of 50% for some creators, especially on TikTok with features like stitches, etc.

      Thanks again for your reply.

  2. Mathew

    Hey Max!

    This is a very good and close look at something I wrote about myself, and I’m happy to see other people interested in it too!
    Your variety of sources and the depth of your research really make your arguments highly credible. I'm sure you have quite a bit of knowledge on the subject matter after all that research!

    In the course of your research, did you come across anything that demonstrates the differences (or lack thereof) between TikTok and other social media platforms in their contributions to political polarization?

    Mat

    1. maxf

      Hi Mathew,

      Thanks for the reply. I think social media's role in the current political space is fascinating, and it seems like a few people have written about it for this conference.

      The main difference between TikTok and other platforms, as a lot of the research I was reading mentioned, was the lack of user input into how their personal FYP algorithms are curated. Other platforms like X or Facebook will ask users to like content or follow accounts first to give the platform an indication of what they want. This, combined with the TikTok algorithm's bias towards political content, can send users down rabbit holes pretty fast.

      Features like stitches and duets also seem to be big contributors, while things like ratioing on X and reacting to others' content on YouTube also add to the combativeness. The accessibility and personal touch of the TikTok features seem to get users more personally invested.

  3. DanielAnderson

    Hello Max

    Very interesting read about TikTok. I don't use it myself, so learning how it works and how it is affecting people and the broader culture was very enlightening.

    I note you suggested more human moderation as a key method to combat harmful/extreme content, but wonder if you encountered any suggestions on how it can be addressed from the user side? I have found studies indicating that critical media literacy could be used to better negotiate information on social media (Higdon, 2022). I do wonder about this in the context of TikTok, though, as its algorithm seems far more aggressive in steering you into an extreme content bubble than the examples studied in the paper.

    Full paper reference if you are interested: Higdon, N. (2022). The critical effect: Exploring the influence of critical media literacy pedagogy on college students' social media behaviors and attitudes. Journal of Media Literacy Education, 14(1), 1–13. https://doi.org/10.23860/JMLE-2022-14-1-1

    Regards

    Daniel

    1. maxf

      Hi Daniel,

      Thanks for your reply, I think you raise a good point.

      Sometimes I take for granted that people just understand how the algorithms on platforms like TikTok work. I have had conversations with people who try to talk about something they have seen and are legitimately confused about “how haven’t you seen this?”. While that is more digital literacy than critical media literacy, I think it is important at the very least to teach both in schools. And in a way that fits how people are consuming content nowadays. A lot of the studies I looked at did talk about the importance of users being aware of how the algorithms they rely on work, but in the context of TikTok, most of the discussion was around how difficult it is to untrain an algorithm once it has set you down a path.

      Thank you for linking that study; I found it fairly insightful. After reading that study and my paper, I'm interested to hear your thoughts on the topic.

      Thanks,
      Max.

  4. Maxim Lullfitz

    Hello Max,

    As a frequent user of Instagram and, to a lesser extent, Facebook, it is interesting to hear about TikTok and the differences in its approach to delivering content to its users.

    It is clear to see the extreme rise in popularity of this platform, as well as the rise of filter bubbles and extreme content pushed onto its users. Similar changes are happening on other platforms such as YouTube, which clearly pushes 'rage bait' videos onto its users as they guarantee greater engagement. Do you think that platforms will continue to follow this trend in the future, or will there be a tipping point where governments outlaw or reduce the use of harmful algorithms, as they have with other harmful products like alcohol and cigarettes?

    Kind regards,
    Max

    1. maxf

      Hi Max,

      Thank you for reading my paper and for your reply!

      Just from an Australian perspective, I think outside of outright bans on platforms it will be hard to get much done. And since that would be extremely unpopular (probably rightly so), I don't see it happening. I think it will be interesting to see how the EU vs TikTok saga plays out; when I wrote the paper it seemed like TikTok was mostly just ignoring what the EU was saying, and now they have been fined USD$600 million. But to more directly answer your question, I think there will be a tipping point at some stage, but it will probably need a global push to solve any of the issues.

      Now I just have the image in my head of government-enforced pre-rolls on TikTok videos in the style of the cigarette-packaging warnings. TIKTOK CAUSES CANCER. Anyway.

      Thanks for your reply,
      Max