Abstract
This paper examines how algorithm-driven social media platforms, especially TikTok, have exacerbated political polarisation by fostering echo chambers, normalising hate speech and enabling self-radicalisation. The way people consume political content has changed drastically as TikTok has replaced traditional media, especially for younger generations. This shift in consumption has given hate speech and extremist ideologies a global reach, as users are rapidly funnelled into ideologically homogenous filter bubbles once they start using the platform. Studies demonstrate that users become more entrenched in their beliefs and that exposure to combative content nudges them towards increasingly extreme viewpoints, eroding trust in political opponents and deepening social divides. Despite superficial efforts, TikTok’s reliance on automated moderation fails to address hate speech on the platform, while governments struggle to regulate it without infringing on users’ free speech. The paper argues that meaningful human-led oversight and transparent regulation are essential to mitigate the polarisation caused by the engagement-driven algorithms used by social media platforms.
Introduction
The digital age has transformed how people consume media. The traditional media gatekeepers that once dictated what people consumed have been replaced by algorithms designed to maximise engagement, often at the expense of accuracy. For younger generations, platforms like TikTok have largely replaced television as the primary source of news and entertainment. Hate speech and harmful ideologies that once struggled to find an audience have been given global platforms. TikTok’s profit-driven algorithm has separated users into highly segregated digital communities. These echo chambers create filter bubbles in which the recommendation algorithm presents only information that reinforces a user’s beliefs. Problematic and combative content has thrived under TikTok’s algorithm, drastically transforming how people consume and interact with political content. Social media platforms like TikTok, which rely on algorithms with limited human oversight, have helped to create a more polarised political climate in which extremism and hate speech thrive. Using TikTok as its primary example, this paper argues that social media platforms have produced this polarised political climate by platforming hate speech, nudging users towards self-radicalisation, isolating them in problematic filter bubbles, and relying on inadequate algorithmic self-regulation.
Platforming Hate
TikTok has rapidly replaced traditional media, cementing itself as a dominant platform that fuels political polarisation by platforming hate speech and legitimising extremism. Unlike traditional media, TikTok’s algorithmic architecture restructures news and entertainment around engagement rather than information accuracy. For younger generations, algorithm-based platforms have not merely supplemented traditional media; in most cases, they have supplanted it entirely, becoming the primary content distributor (Faltesek et al., 2023). This shift in media consumption has raised critical concerns about unchecked material on platforms like TikTok. Far-right groups have flocked to TikTok because of the platform’s accessibility compared to traditional media (Cuevas-Calderón et al., 2023). Studies have shown that, through coded language and dog-whistle tactics, extremists have grown sizable and profitable audiences on TikTok (Shin, 2024). One problem with platforming extremist ideologies is that consuming such content has been shown to erode people’s trust in any alternative content they consume (Grandinetti & Bruinsma, 2022). Another key issue is that, while extremist creators may have to filter their language on TikTok, they are not locked into a single platform: creators have often used platforms like TikTok to build an audience and then migrate viewers to less regulated parts of the internet (Mamié et al., 2021). The issues caused by platforming harmful content are also new and unpredictable because of the interactive dynamics of social media. The interactive nature of TikTok means users are no longer passive in their interactions with news and politics, and this personal investment can normalise harmful ideologies and language. For example, incel communities thrive on TikTok, where sexist language is trivialised and amplified, reshaping communication norms both online and offline (Solea & Sugiura, 2023). Features like comments, “duets” and “stitches” further entrench this normalisation, enabling users to interact with harmful content personally. The blending of extremist content with mundane entertainment erodes the boundary between acceptable debate and hate. The personal investment of users and the blend of hate into everyday entertainment have contributed to a more politically polarised environment that has normalised what were once fringe views.
The Algorithm and New Accounts
TikTok’s algorithm-driven content distribution pushes new accounts towards politically combative and extremist content, fuelling political polarisation. As soon as a new account is created, the recommendation algorithm curates a personalised feed based on limited interaction data. Divisive content drives engagement, which leads the algorithm to prioritise it when recommending content to new accounts (Shin, 2024). Crucially, TikTok bypasses user-driven filters such as the search terms or tags relied on by platforms like YouTube; instead, the algorithm uses demographic signals to instantly funnel users into nationalistic, racial and ideological echo chambers (Fichman & Akter, 2024; Shin & Jitkajornwanich, 2024). Because engagement does not depend on search features, the algorithm must respond aggressively to any sign of user interest, and users have little autonomy over what is initially presented to them. Research confirms that TikTok disproportionately amplifies political content (Grandinetti & Bruinsma, 2022), and once users interact with such material, the algorithm escalates recommendations toward political extremes (Shin, 2024). This gradual push towards increasingly extreme content also shifts what users consider “normal” (Shin & Jitkajornwanich, 2024), creating a rabbit hole unique to the digital age. This algorithmic nudge was not possible before algorithmic content distribution and can drive users to self-radicalise. Studies have shown that just three months of TikTok usage intensifies users’ convictions on divisive issues such as politics, abortion and religion (Shin, 2024). This result is by design: as users become more politically invested, they drive revenue for creators and for TikTok itself (Shin & Jitkajornwanich, 2024). This lack of incentive for change means that creating a politically combative and polarised environment is in TikTok’s best interest. While initial interaction is needed before users are fed increasingly extreme ideological content, the algorithm contributes directly to political polarisation by reinforcing existing prejudices. Providing users with similar or more extreme content based on limited initial engagement facilitates a self-radicalisation process that was not present in a pre-digital world and that has changed how people consume and discuss political content online and in everyday life.
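To make this escalation dynamic concrete, the following Python sketch is a minimal toy model of an engagement-maximising recommender that nudges recommendations towards more extreme content after limited initial interaction. The catalogue, the assumption that divisive content engages best, and the escalation step are all hypothetical simplifications for illustration; this is not TikTok’s actual system.

# Illustrative sketch only: a toy engagement-maximising recommender, not TikTok's
# actual ranking system. All names and scoring rules are hypothetical.
import random

# Each hypothetical item has an "extremity" score from 0 (neutral) to 1 (extreme).
CATALOGUE = [{"id": i, "extremity": i / 99} for i in range(100)]

def recommend(history, n=5):
    """Rank items by expected engagement given a user's limited interaction history."""
    if not history:
        # Brand-new account: this toy model assumes divisive content engages best,
        # so the cold-start feed is centred on moderately combative items.
        target = 0.5
    else:
        # Afterwards, nudge recommendations slightly past the most extreme item engaged with.
        peak = max(item["extremity"] for item in history)
        target = min(1.0, peak + 0.1)
    return sorted(CATALOGUE, key=lambda x: abs(x["extremity"] - target))[:n]

# Simulate a user who engages with one recommended item per session.
history = []
for session in range(5):
    feed = recommend(history)
    engaged = random.choice(feed)
    history.append(engaged)
    print(f"Session {session + 1}: engaged with extremity {engaged['extremity']:.2f}")

Even in this minimal model, each session’s engagement pulls the next feed slightly further towards the extreme end of the catalogue, mirroring the escalation reported in the audits cited above.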
TikTok and Filter Bubbles
Unlike traditional media, which exposes audiences to a shared narrative, TikTok’s algorithms isolate users in narrow ideological clusters. Unlike television, where viewers can simply change the channel, even TikTok’s search results are shaped by the content users consume on the platform (Grandinetti & Bruinsma, 2022). If users consume far-right content on TikTok and then search for left-leaning talking points, they may be presented with that information from a far-right perspective. This systematic sorting of people into categories, with minimal control over the process, is a fundamental shift in the consumption of news and entertainment. The combativeness on which the recommendation algorithm thrives has created a binary clash of “us vs them,” which has directly contributed to hate-fuelled political polarisation. There are concerns that if filter bubbles make political discourse too combative, its quality will deteriorate to the point of hampering the ability of democracies to function (Garimella et al., 2018). Part of the danger of filter bubbles and echo chambers on social media is that the more fragmented communities are online, the greater the risk of exposure to misinformation (Rhodes, 2021). TikTok and other social media platforms have exposed people to more news than ever before, yet filter bubbles show that the quality of that information has never mattered less: the algorithm that separates users into distinct communities is concerned only with maximising engagement, not with information quality (Rhodes, 2021). The fragmented way communities receive information inside their own bubbles has created a disconnect in how people communicate about politics, stifling mediation between communities and fostering an even more politically polarised environment (Garimella et al., 2018). While people have been divided into broadly left and right camps for most of modern democratic history, algorithm-driven platforms such as TikTok have made people less willing to compromise on their beliefs (Shin & Jitkajornwanich, 2024), and there is also a trend towards dehumanising political opponents (Shin & Jitkajornwanich, 2024). While it has been argued that pre-existing beliefs are needed for people to be sorted into these filter bubbles, studies have also shown that it is extremely difficult for users to escape them (Shin, 2024). Users must actively retrain their recommendation algorithm to show them content from the opposite side of the political divide; without this active effort, they are funnelled into narrow ideological groups. Without a clear and transparent shift to favour information quality over user engagement, the algorithmic distribution of content will only continue to create an even more polarised political climate.
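The narrowing effect described above can be illustrated with a minimal sketch, assuming a feed built purely on similarity to past engagement. The “leaning” scores and selection rule below are hypothetical and do not reflect TikTok’s real ranking signals; the point is only that two users whose first interactions differ quickly end up with no overlapping content.

# Illustrative sketch only: a toy model of engagement-similarity filtering.
# Hypothetical catalogue where each item has a political leaning from -1 (left) to +1 (right).
CATALOGUE = [{"id": i, "leaning": -1 + 2 * i / 99} for i in range(100)]

def feed_for(user_history, n=10):
    """Serve the items closest to the average leaning of what the user already engaged with."""
    if not user_history:
        return CATALOGUE[:n]
    centre = sum(item["leaning"] for item in user_history) / len(user_history)
    return sorted(CATALOGUE, key=lambda x: abs(x["leaning"] - centre))[:n]

# Two users whose first interactions differ end up with almost no shared content.
left_user = [CATALOGUE[5]]    # engaged once with a strongly left-leaning item
right_user = [CATALOGUE[94]]  # engaged once with a strongly right-leaning item

left_feed = {item["id"] for item in feed_for(left_user)}
right_feed = {item["id"] for item in feed_for(right_user)}
print("Shared items between the two feeds:", left_feed & right_feed)  # typically empty

In this simplified model, neither user ever sees the other’s content again unless they deliberately seek it out, which is the “active retraining” burden discussed above.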
Algorithmic Moderation
Hate speech and extremism have thrived on TikTok because of the lack of human oversight of its algorithms and the inability of governments to hold the platform accountable for infringements of users’ rights, fostering a more polarised and combative political environment. TikTok relies on minimal human oversight, often leaving moderation to algorithms alone. The primary moderation algorithm flags banned words and phrases, but it often lacks the nuance to detect when users substitute code words to mask harmful ideologies. Multiple countries have banned or limited use of the platform; however, their main concerns are data security and the involvement of the Chinese government (Maftei & Duică, 2025). Multiple countries have also fined TikTok for privacy violations, including the USA, which fined the platform US$5.7 million for breaches of child safety laws (Gamito, 2023). For a company as large as TikTok, fines of this size are symbolic rather than deterrent: in 2024, TikTok’s parent company ByteDance reported annual revenue of roughly US$155 billion (Tech in Asia, n.d.). The EU has tried to combat harmful content and foreign interference on TikTok (Bernot et al., 2024), but policymakers have faced many challenges, and the desire not to impede freedom of expression has left many gaps in moderation and regulation (Gamito, 2023). TikTok users looking to spread hate have shown adaptability in circumventing algorithmic moderation and turning the platform’s algorithms to their advantage. Studies have shown that creators have grown audiences by avoiding words that trigger the moderation algorithm and substituting algorithm-friendly terms (Sykes & Hopner, 2024). This growth demonstrates that only the words used, and not the message published, are being moderated, and highlights that moderation needs to be proactive rather than reactive and requires human intervention. Moderation that targets only the words used and not the message being spread has proven ineffective. TikTok’s self-regulation is inadequate and performative: the platform’s rules are loosely defined, inconsistently enforced, and allow it to claim compliance without impeding user engagement. Without global coordination to mandate transparent algorithms and more human-led moderation, TikTok will continue to play a key role in fostering political polarisation globally, and its self-regulation will remain performative.
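A minimal sketch of the kind of keyword-only moderation described above shows why it fails against coded language. The banned term and the coded substitution below are placeholders, not TikTok’s actual moderation rules or word lists.

# Illustrative sketch only: a simplified keyword-list moderator, showing why matching
# banned words alone misses coded substitutions. Terms are placeholders.
BANNED_TERMS = {"bannedword"}  # placeholder for a platform's list of prohibited terms

def keyword_moderate(post: str) -> bool:
    """Return True if the post should be removed under a pure keyword filter."""
    words = post.lower().split()
    return any(word in BANNED_TERMS for word in words)

print(keyword_moderate("this post uses a bannedword directly"))      # True: caught
print(keyword_moderate("this post uses b4nnedw0rd as a code word"))  # False: slips through
# The second post carries the same message, but only the surface form of the word is
# checked, not the meaning being communicated, which is the gap human review would close.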
Conclusion
TikTok’s algorithms have actively deepened social divides, even as those same algorithms have driven the platform’s success. By privileging engagement over truth, they have normalised hate speech and extremist ideologies and embedded combativeness into the fabric of political discourse. The relentless push for user engagement has nudged new and unsuspecting users down a rabbit hole of self-radicalisation. The filter bubbles created by TikTok’s algorithms isolate users within narrow ideological communities that reinforce and normalise hateful ideologies and contribute directly to the combative nature of political discourse. Together, these issues highlight the rapidly changing nature of the digital age. While people have always been divided politically, algorithmic news distribution has created challenges that platforms and governments were unprepared to handle. Addressing these challenges will require meaningful human oversight, the antithesis of what has allowed these platforms to thrive. Until then, platforms like TikTok will remain complicit in a detrimental drive for user engagement that comes at the cost of meaningful political discourse.
References
Bernot, A., Cooney-O’Donoghue, D., & Mann, M. (2024). Governing Chinese technologies: TikTok, foreign interference, and technological sovereignty. Internet Policy Review, 13(1). https://doi.org/10.14763/2024.1.1741
Cuevas-Calderón, E., Dongo, E. Y., & Kanashiro, L. (2023). Spreadability and hate speech of radical conservatism: The Peruvian case on TikTok. Punctum International Journal of Semiotics, 9(2), 27–53. https://doi.org/10.18680/hss.2023.0018
Faltesek, D., Graalum, E., Breving, B., Knudsen, E., Lucas, J., Young, S., & Zambrano, F. E. V. (2023). TikTok as television. Social Media + Society, 9(3). https://doi.org/10.1177/20563051231194576
Fichman, P., & Akter, S. (2024). Political trolling on TikTok. Telematics and Informatics, 96, 102226. https://doi.org/10.1016/j.tele.2024.102226
Gamito, M. C. (2023). Do too many cooks spoil the broth? How EU law underenforcement allows TikTok’s violations of minors’ rights. Journal of Consumer Policy, 46(3), 281–305. https://doi.org/10.1007/s10603-023-09545-8
Garimella, K., De Francisci Morales, G., Gionis, A., & Mathioudakis, M. (2018). Political discourse on social media: Echo chambers, gatekeepers, and the price of bipartisanship. In Proceedings of the 2018 World Wide Web Conference (WWW ’18) (pp. 913–922). https://doi.org/10.1145/3178876.3186139
Grandinetti, J., & Bruinsma, J. (2022). The affective algorithms of conspiracy TikTok. Journal of Broadcasting & Electronic Media, 67(3), 274–293. https://doi.org/10.1080/08838151.2022.2140806
Maftei, D., & Duică, L. N. B. (2025). Risks, threats, and vulnerabilities related to social media platforms and search engines. Regulations and national legal frameworks. Bulletin of Carol I National Defence University, 13(4), 249–265. https://doi.org/10.53477/2284-9378-24-62
Mamié, R., Ribeiro, M. H., & West, R. (2021). Are anti-feminist communities gateways to the far right? Evidence from Reddit and YouTube. In Proceedings of the 13th ACM Web Science Conference 2021 (WebSci ’21) (pp. 139–147). https://doi.org/10.1145/3447535.3462504
Rhodes, S. C. (2021). Filter bubbles, echo chambers, and fake news: How social media conditions individuals to be less critical of political misinformation. Political Communication, 39(1), 1–22. https://doi.org/10.1080/10584609.2021.1910887
Shin, D. (2024). Artificial misinformation: Exploring human–algorithm interaction online. Springer Nature.
Shin, D., & Jitkajornwanich, K. (2024). How algorithms promote self-radicalization: Audit of TikTok’s algorithm using a reverse engineering method. Social Science Computer Review, 42(4), 1020–1040. https://doi.org/10.1177/08944393231225547
Solea, A. I., & Sugiura, L. (2023). Mainstreaming the Blackpill: Understanding the Incel community on TikTok. European Journal on Criminal Policy and Research, 29(3), 311–336. https://doi.org/10.1007/s10610-023-09559-5
Sykes, S., & Hopner, V. (2024). TradWives: Right-wing social media influencers. Journal of Contemporary Ethnography, 53(4), 453–487. https://doi.org/10.1177/08912416241246273
Tech in Asia. (n.d.). TikTok fuels ByteDance’s revenue to $155b. https://www.techinasia.com/news/tiktok-fuels-bytedances-revenue-to-155b