Introduction:
TikTok (formerly known as Musical.ly) is an online community where users can watch, create, and share short-form videos from their mobile devices (D’Souza, 2025). The app stands out for its high levels of engagement and captivating nature thanks to its customised feeds of quirky short-form videos set to sound effects and music (D’Souza, 2025). However, due to its widespread use, the platform has witnessed a flood of misinformation, which has the potential to deceive and confuse users (Capitol Technology University, 2023). This essay examines how TikTok has become a haven for false information through its algorithm, echo chambers and filter bubbles, and dangerous trends. I will then consider how this misinformation has affected communities and society, focusing on public health, younger audiences, and political division. Lastly, I will delve into the steps TikTok is taking to combat the spread: partnering with experts, enforcing stricter policies on artificially generated content, and providing communities with reliable information.
1. How Misinformation Proliferates on TikTok
1.1 TikTok’s Algorithm
Busto (2022) states that individuals have the power to decide what content they would like to view on TikTok and what they would rather not. A particularly important factor in the app’s ability to hold users’ attention is its algorithm. The For You Page (FYP), the first thing users see when they open the app, provides carefully curated content. By displaying an endless stream of short-form videos that are popular with other users on the platform, TikTok surpasses its competition (Busto, 2022). TikTok’s algorithm encourages the spread of dramatised or emotionally driven content, which can contain misinformation and may spread through online communities more quickly than educational, factual content. An example of this was a news article by Mouse Trap News, a satire website, claiming that Disney World had reduced the legal drinking age to eighteen (VanSistine-Yost, 2022). On the same day, 20 August 2022, a video repeating this misinformation about the park was uploaded to TikTok, where the algorithm quickly amplified it, racking up millions of views in just a few days (Brown, 2023).
Figure 1: TikTok image misinforming the public by stating that Disney World is reducing the legal drinking age to 18

1.2 Echo Chambers and Filter Bubbles Are Formed
GCF Global (2019) defines an echo chamber as a situation in which an individual only encounters facts or viewpoints that support and mirror their own. Echo chambers have the potential to spread misinformation and skew public perspectives, making it harder for individuals to debate challenging topics and consider other points of view. TikTok’s algorithm is so effective that it creates echo chambers in which many users only see viewpoints that support their own (Scott, 2023).
GCF Global (2019) states that the term ‘filter bubble’ was first coined by internet advocate Eli Pariser. They define a filter bubble as a state in which a user is cut off from viewpoints and information they have not previously shown interest in, which can cause them to miss out on crucial information. TikTok’s algorithm can filter out content it deems irrelevant to the user, which may result in a limited selection of viewpoints (Altered State, 2023). As a result, users may be less exposed to other ideas and points of view, which can hinder their ability to think critically and make informed decisions.
1.3 Dangerous Trends Arise
Just as echo chambers and filter bubbles can form, dangerous trends also spread misinformation on TikTok. These trends largely gain popularity through “influencers” and their large followings, which allows misinformation to spread. One trend that grew in popularity was the ‘Benadryl challenge’. Users claimed that taking an excessive dose of Benadryl at one time can cause intoxication and hallucinations (Vanderbilt University Medical Centre, 2025). What participants were not told is that this trend is not only dangerous but life-threatening. A 15-year-old teenager from Oklahoma, USA, allegedly overdosed on the drug, and a doctor from the Oklahoma Centre for Poison and Drug Information stated that the dose the teen took was life-threatening and ultimately led to their death (Vanderbilt University Medical Centre, 2025). This case reinforces how misinformation spread via TikTok trends can have dangerous and potentially deadly consequences.
2. The Impact Misinformation Has on Society and Communities
2.1 Public Health
TikTok’s link to public health during the COVID-19 outbreak has been heavily scrutinised, particularly the platform’s role in spreading misinformation. This has had significant consequences for public health initiatives, influencing habits and attitudes in ways that hindered the public response to the coronavirus. Lundy (2023) noted that TikTok’s short-form video format makes misinformation tougher to identify. A video that emerged in 2020, during the peak of the pandemic, falsely claimed that eating garlic could prevent COVID-19, which sparked outrage within communities (Capitol Technology University, 2023). Another video from 2021 incorrectly stated that COVID-19 vaccines were magnetic, leading some people to question the effectiveness of vaccines (Capitol Technology University, 2023). Not only can these claims induce moral panic, but they can also weaken trust in medical experts and public health efforts, making it harder to stop the virus’s spread.
Misinformation regarding mental health conditions can be detrimental, even though it might not appear as harmful as the spread of anti-vaccine ideology (Woods, 2024). A TikTok phenomenon known as “self-diagnosis” has become rampant: users typically dance to a TikTok sound while listing traits or signs of conditions or illnesses, after which they declare themselves “self-diagnosed” (Quirk, 2024). This phenomenon spreads misinformation and reinforces the mental health community’s call for medical conditions to be taken seriously online.
Figure 2: Image of TikTok displaying “7 signs someone might have ADHD”

2.2 Younger Audiences
Sackville (2022) explains that although every social media platform can spread misinformation, TikTok is especially popular among younger audiences, which can have detrimental effects on their early brain development. In the chapter “Virtual Community”, Gerard Delanty writes that if “people experience belonging in virtual forms, then community does have a reality for those individuals” (Delanty, 2018). In other words, young people on TikTok can gain a sense of belonging and create connections within virtual communities when participating in viral trends and challenges, which can act as a form of social validation. However, when misinformation circulates within these online communities, this sense of belonging may have harmful repercussions. For example, content promoting unrealistic beauty standards spreads misinformation to all users, but it can have especially detrimental effects on younger ones.
The skincare community on TikTok has seen an influx of pre-teens spending hundreds of dollars on skincare products that can do more damage to their skin than good (Sandlin, 2024). The pressure to conform to these norms can influence the actions and choices of young users by making misinformation seem as credible as factual information. As Delanty points out, these virtual communities on TikTok can be considered real, and when misinformation spreads, it becomes even harder for younger audiences to separate the sense of belonging they feel online from the harmful effects it can have in the real world.
Figure 3: Image of TikTok showing a young girl using skincare

2.3 Political Divide Occurs
On TikTok, misinformation contributes significantly to political divides, as the app’s algorithm often only shows videos on a user’s FYP that support their pre-existing opinions. This misinformation has the potential to skew the public’s views on politics, increase political polarisation, and manipulate emotions within communities. A clear example of how misinformation can cause political division is the fake AI-generated videos of party leaders shown to voters during the United Kingdom general election (Spring, 2024). In one video, Rishi Sunak, the leader of the Conservative Party, appears to say, “Please don’t vote us out; we would be proper gutted!”, while unsubstantiated allegations are made about his use of public funds, including that he will be sending his friends in parliament “loads of dosh” (Spring, 2024). The comments on these videos indicate that some users are unsure whether the claims are true, even though some of the videos are labelled as sarcasm or parody in their captions.
3. Steps TikTok has Taken to Combat Misinformation
3.1 Collaborating with Experts
Under scrutiny from governments across the globe, TikTok has taken action to reduce the spread of misinformation after openly acknowledging the extent of its involvement in disseminating it (Capitol Technology University, 2023). The TikTok Transparency Centre (2023) states that TikTok has partnered with experts worldwide to deliver accurate and consistent moderation, understand local context, and provide communities with reliable information. To find and remove inaccurate or deceptive content from the app, TikTok has partnered with reputable fact-checking organisations such as PolitiFact and Snopes (Capitol Technology University, 2023). Tools to help users recognise and report false information have been introduced, and the company has started educational initiatives that use texts, videos, and quizzes to help users identify and steer clear of misinformation (Capitol Technology University, 2023). Another partner, Irrational Labs, introduced an approach called “behavioural design”, which entails creating a detailed user experience map, examining pertinent research, and combining the two (Irrational Labs, n.d.).
3.2 Stricter Policies on Artificially Generated Content
TikTok acknowledges that although artificial intelligence (AI) opens up amazing creative outlets for communities and creators, people who are unaware that content was created or altered with AI may become confused or misled (TikTok Transparency Centre, 2023). As more creators use AI to boost their creativity, TikTok encourages open and accountable content creation by funding media literacy programmes that foster creativity and provide context for the content viewers are consuming (TikTok Transparency Centre, 2023). One of its policies, “Synthetic and Manipulated Media”, requires users to label AI-generated content that depicts realistic scenes. TikTok also forbids AI-generated content featuring the likeness of anyone under the age of eighteen, and it forbids artificially generated depictions of real public figures when they are used for endorsements or otherwise violate its policies (TikTok Transparency Centre, 2023). In addition, every TikTok effect that is heavily altered with artificial intelligence must include “AI” in its description and the associated effect label in order to improve transparency around AI-powered TikTok tools and mitigate the spread of misinformation within the app (TikTok Transparency Centre, 2023).
3.3 Providing Communities with Accurate Information
In addition to addressing the content itself, TikTok strives to prevent misinformation by providing its community, both online and offline, with media literacy tools that enable users to identify false information, evaluate content critically, and report offensive content (TikTok Transparency Centre, 2023). These include:
- encouraging people to think twice before sharing unproven and potentially dangerous content (TikTok Transparency Centre, 2023)
- offering blue “verified” checks to confirm the legitimacy of popular accounts (TikTok Transparency Centre, 2023)
- adding informative banners to LIVE videos (TikTok Transparency Centre, 2023)
- labelling state-affiliated media to help viewers understand the sources of content (TikTok Transparency Centre, 2023)
Conclusion:
Evidently, TikTok has allowed misinformation to spread rampantly within the app. The app’s highly engaging algorithmic structure has made it an ideal platform for the spread of misinformation and has made room for the formation of echo chambers, filter bubbles, and dangerous trends. As discussed in this essay, these have all contributed to the proliferation of misinformation and have directly impacted society and communities, through misinformation about public health, the influence on younger audiences, and political division. However, TikTok is not oblivious to these impacts, and it has taken measures to combat the spread of misinformation: collaborating with experts, applying stricter policies on artificially generated content, and actively providing communities, both online and offline, with accurate information. While these efforts are a step in the right direction, continued awareness is still essential to making TikTok a safer place, free of misinformation.
References:
Altered State. (2023, March 29). TikTok’s Algorithm’s Dark Viral Trends and Echo Chambers. Altered State Prod. https://www.alteredstateprod.com/post/tiktok-algorithm
Brown, E. (2023, June 27). LibGuides: Fake News & Disinformation: Case Studies in Fake News. CWU Libraries. https://libguides.lib.cwu.edu/c.php?g=625394&p=4391900
Busto, R. (2022). TikTok and Misinformation: Which Factors Contribute to Spreading Misinformation? Degree Programme in Media and Arts, Interactive Media. https://www.theseus.fi/bitstream/handle/10024/786353/Kivijarvi%20Busto_Rebeca.pdf?sequence=2&isAllowed=y
Capitol Technology University. (2023, November 13). TikTok and the War on Misinformation. Capitol Technology University. https://www.captechu.edu/blog/tiktok-and-war-misinformation
D’Souza, D. (2025, January 20). TikTok: What It Is, How It Works, and Why It’s Popular. Investopedia. https://www.investopedia.com/what-is-tiktok-4588933
Delanty, G. (2018). Virtual Community (pp. 200–224). https://doi.org/10.4324/9781315158259-10
GCF Global. (2019). Digital Media Literacy: What Is an Echo Chamber? GCF Global. https://edu.gcfglobal.org/en/digital-media-literacy/what-is-an-echo-chamber/1/
Irrational Labs. (n.d.). TikTok: How behavioural science reduced the spread of misinformation. Irrational Labs. https://irrationallabs.com/case-studies/tiktok-how-behavioral-science-reduced-the-spread-of-misinformation/
Quirk, M. (2024, February 29). TikTok’s Growing Self-Diagnosis Culture. Psychology Today Australia. https://www.psychologytoday.com/au/blog/living-psyched/202402/tiktoks-growing-self-diagnosis-culture
Sandlin, C. (2024, November 13). Teen TikTok skincare craze: What every parent needs to know. WISN. https://www.wisn.com/article/teen-tiktok-skincare-craze-what-every-parent-needs-to-know/62888058
Scott, S. (2023, October 25). Echo Chambers & The “Wrong Side” of TikTok. SXU Student Media. https://sxustudentmedia.com/echo-chambers-the-wrong-side-of-tiktok/
TikTok. (2023, January 19). Combating harmful misinformation. TikTok Transparency Centre. https://www.tiktok.com/transparency/en-us/combating-misinformation
Vanderbilt University Medical Centre. (2025). Tik Tok Dangerous Challenges: Parents be watchful. Paediatric Trauma Injury Prevention Program. www.vumc.org
VanSistine-Yost, L. (2020). LibGuides: Fake News & Fake Facts: Media Literacy Awareness: Four Moves & A Habit and AI in the News. Western Technical College. https://westerntc.libguides.com/FakeNews/fourmoves
Woods, T. (2024, December 1). Debunking TikTok’s Mental Health Misinformation. Psychology Today. https://www.psychologytoday.com/au/blog/denying-to-the-grave/202412/debunking-tiktoks-mental-health-misinformation