Abstract
This essay explores the paradox of digital activism in the age of social media, where platforms designed to empower global movements often marginalise the very voices they aim to amplify. It adopts a qualitative, critical approach through discourse analysis and a case study of the #GirlsLikeUs movement. The findings show that trans activists actively resist digital erasure by strategically mobilising hashtags, using digital platforms to assert identity and build community in the face of algorithmic suppression. The essay concludes that although social media enables new forms of activism, it remains governed by commercial and algorithmic logics that reinforce existing hierarchies. These insights suggest a need for platform reform and greater algorithmic accountability to support genuinely inclusive digital activism.
Introduction
Since the creation of social media, platforms such as Instagram and YouTube have increased opportunities for political engagement, identity expression, and global connection (Noble, 2018). From hashtags to online protests and petitions, online participation has become a normalised and necessary aspect of activism. However, the promise of inclusion is at times undermined by the biases embedded within these same platforms (Singh, 2023).
This essay begins by examining critiques of symbolic online activism and the structural forces that constrain digital participation. It then explores how digital platforms shape identity and belonging, especially for marginalised users. The role of algorithms in reproducing dominant narratives is then examined, before the focus shifts to the concept of ‘refracted publics,’ which shows how systemic inequalities determine access to visibility and safety online. Finally, an analysis of the #GirlsLikeUs movement demonstrates how underrepresented individuals fight back when their accounts are shadowbanned by social media algorithms. Through each of these topics, this essay aims to show that while digital technology and social media provide more opportunities for digital activism, dominant narratives are replicated and prioritised algorithmically, which exacerbates prejudice against marginalised identities.
Symbolic Action and Structural Constraints
Online activism is often dismissed as failing to establish meaningful engagement with political issues, with critics using terms like ‘slacktivism’ to describe low-effort actions such as using hashtags or updating profile pictures with political overlays. In 2023, 46% of users in the United States reported that they had been politically active on social media in the previous twelve months, yet 76% believed that social media makes people think they are helping when they are not, and 82% said social media distracts people from more important issues (Smith, 2023). However, these critiques overlook the traits that distinguish digital activism from other forms of political engagement. As Barassi (2018) explains, activism on social media is not only highly nuanced, shaped by personal networks and users’ identities, but is also built around a new form of visibility. Rather than being merely ostentatious, these digital practices reflect how individuals navigate digital spaces to engage politically and construct collective identities. This is especially true in situations where physical forms of protest may not be possible due to safety and accessibility barriers for members of marginalised groups (Younes, 2023).
The Routledge Companion to Media and Activism highlights the perspective that capitalism is entrenched in digital platforms, and that visibility is therefore commodified: users are encouraged to engage in ways that generate advertising revenue, modelled on previous viral outputs, rather than to highlight systemic inequalities (Barassi, 2018). Under these conditions, symbolic acts such as hashtag activism may be the only available form of expression for those who lack institutional power or access to safer spaces for protest (Mueller et al., 2021). Newer platforms such as TikTok have created unprecedented opportunities for marginalised people, those often excluded from economic, social, and political life (Walsh, 2006), allowing them to raise their voices and make their grievances visible (Ortiz et al., 2024). Traditional media, by contrast, has historically underrepresented or misrepresented these groups, favouring dominant cultural narratives and perspectives (Miranda et al., 2016).
For marginalised users, digital platforms become tools for strategic visibility, offering acts of protest in environments where traditional avenues for participation remain inaccessible or hostile. Hashtags such as #BlackLivesMatter or #MeToo have evolved from trends into sustained global movements because they amplify perspectives rarely shared by mainstream media (Jackson, 2016). Accordingly, symbolic digital activities need to be interpreted within the socio-technical contexts in which they arise, since they are intricately linked to larger frameworks of power, identity, and representation (Ortiz et al., 2024). Reading digital activism in context allows for a more complex understanding of identity formation and performance in these settings, especially for those who cannot practise identity presentation safely (Ortiz et al., 2024).
Networked Identity and Digital Belonging
Social media platforms are now important locations for people to construct and share their identity. Zizi Papacharissi (2010) argues in ‘A Networked Self’ that self-presentation is performative, shaped by platform interfaces and produced through individual encounters with online communities. The online identity portrayed is thus fluid rather than fixed or authentic, continuously curated through posts, profiles, and interactions with other people (Papacharissi, 2010).
Social media platforms provide many marginalised people with a community in which to establish their identities and build relationships. LGBTQ+ users may utilise these platforms to experiment with different gender expressions or to find support from people with similar experiences. Yet while users may try to create meaningful identities, platforms thwart these efforts by using algorithmic and community moderation tools to control which kinds of identities are promoted, silenced, or flagged. In these situations, digital technologies can offer visibility and validation that might not exist offline (Younes, 2023), but identity construction online is shaped by what the platform deems legible. Trans users have reported being penalised for sharing photographs that violate unclear “community standards,” or banned for using their preferred identities, while identical content from cisgender users has gone unnoticed (Aldridge et al., 2024).
This uneven enforcement of identity legitimacy shows that platforms privilege normative, commercialised expressions of self while marginalising others. What appears to be a space for personal expression is, in fact, highly regulated and policed (Younes, 2023). Therefore, while digital spaces offer possibilities for belonging, they also present profound risks, exclusions, and disciplinary mechanisms, especially for those whose identities do not conform to dominant norms. This systemic exclusion is not coincidental but algorithmically constructed, which leads into the next section: how algorithmic logics reinforce social hierarchies under the guise of neutrality.
Algorithms and the Reproduction of Dominance
Algorithms are considered by some as neutral tools that customise the user experience according to individual interests, but in practice they have a significant impact on what social media users see and interact with. Brusseau (2019) makes the case that algorithms push content to users based on probabilistic identities and data correlations. These correlations then shape the material that is presented according to presumptions and financial interests embedded in the platform’s architecture (Brusseau, 2019). Algorithms reinforce popular material and dominant norms on the basis of prior platform and user engagement metrics (Ortiz et al., 2024). Content that supports popular identities, sensationalism, or mainstream ideas is more likely to appear. Conversely, content from or about marginalised groups is often under-represented, especially when it challenges dominant narratives or violates advertiser-friendly norms (Ortiz et al., 2024).
Safiya Noble (2018), in ‘Algorithms of Oppression’, demonstrates how Black and Brown populations are marginalised by search engines and recommendation systems. Searches for phrases like ‘Black girls’ yielded stereotyped or pornographic results, showing how problematic depictions of marginalised groups can become embedded in algorithms (Noble, 2018). Similar dynamics apply to social media platforms (Bucher, 2018). Users whose content is thought to pose a risk to brand safety, such as sex workers, Indigenous land defenders, or racial justice activists, are frequently shadowbanned or deliberately made less visible, without either the creator or their audience being alerted (Nicholas, 2023). At the same time, material that supports materialism, nationalism, or beauty standards is promoted. Platforms thereby reinforce existing inequalities through code, even as they profess to democratise discourse. This algorithmic gatekeeping reinforces dominant cultural narratives while suppressing dissent, producing ‘refracted’ rather than ‘networked’ publics, an idea explored in the next section.
Refracted Publics and Marginalised Participation
Online communities created by social media affordances have long been referred to as the ‘networked public.’ However, this framework presumes equitable access to speech, safety, and visibility (Abidin, 2021). In actuality, algorithmic surveillance, commercialisation, and cultural gatekeeping shape digital environments, producing uneven participation, particularly for marginalised users (Abidin, 2021). In ‘refracted publics,’ online communities and cultures deliberately manage how they are portrayed, dodging weaponised misinformation campaigns, harassment, censorship, and de-platforming. For instance, trans influencers and sex workers frequently negotiate ever-evolving platform norms to keep their audiences engaged while avoiding content removals and shadowbans (Aldridge et al., 2024). Bearing the dual duty of protecting oneself from algorithmic or user-driven harm while needing to remain visible for the community’s survival is emotionally and materially taxing.
Furthermore, visibility is not necessarily empowering. It may invite violence, doxxing, or trolling. Coordinated harassment campaigns against trans activists and women of colour are common, demonstrating how online “participation” may become a site of pain rather than empowerment (Aldridge et al., 2024). Refracted publics are therefore not inherently safe or easily legible. On social media platforms, marginalised users must strike a balance between self-defence and identity expression (Younes, 2023). Despite these limitations, however, acts of solidarity and resistance continue to occur; this is best illustrated by the #GirlsLikeUs campaign, discussed next.
Case Study: #GirlsLikeUs and Trans Digital Activism
The #GirlsLikeUs hashtag was launched in 2012 by author and activist Janet Mock to raise awareness of trans women of colour, after a series of high-profile murders and suicides (Jackson et al., 2018). The movement swiftly expanded into a community where trans and queer adolescents found representation and empowerment, in a world where trans women are frequently harassed or ostracised on social media (Aldridge et al., 2024). Through interpersonal connection, #GirlsLikeUs formed as people supported one another, shared their experiences, and affirmed their identities. The hashtag challenged popular perceptions of trans women as tragic, dangerous, or deceitful. Instead, it highlighted their contentment, resilience, and complexity, often in ways that departed from media clichés (Aldridge et al., 2024).
The way this campaign navigated refracted publics is what makes it noteworthy. By adopting the hashtag, trans women made themselves visible while also exposing themselves to algorithmic suppression and harassment. Many encountered shadowbanning or content flagging (Nicholas, 2023). To fight this erasure, the community persevered, developing solidarity tactics and platform literacies. #GirlsLikeUs was successful because it served two purposes: it defended identity in a hostile online space and challenged algorithmic systems that aimed to hide transgender existence. It powerfully illustrates how marginalised people use digital tools not only to express themselves but also to disrupt the structures that would otherwise silence them.
Conclusion
In conclusion, digital platforms have become vital arenas for identity expression, political engagement, and community formation (Jiang et al., 2022). Yet, despite their democratic promise, these spaces remain deeply shaped by systemic inequalities and algorithmic power. For marginalised identities, slacktivism is more than low-effort activism—it is often a necessary form of digital presence, resistance, and connection in an environment that privileges dominant narratives. This essay has argued that although social media and digital technologies offer greater opportunities for activism, dominant narratives are algorithmically prioritised and reproduced, resulting in compounding discrimination for marginalised identities. Recognising this tension helps us better understand the political and ethical dimensions of digital engagement and move beyond reductive critiques, toward a more equitable digital public sphere.
References
Abidin, C. (2021). From “Networked Publics” to “Refracted Publics”: A Companion Framework for Researching “Below the Radar” Studies. Social Media + Society, 7(1), 2056305120984458. https://doi.org/10.1177/2056305120984458
Aldridge, Z., McDermott, H., Thorne, N., Arcelus, J., & Witcomb, G. L. (2024). Social Media Creations of Community and Gender Minority Stress in Transgender and Gender-Diverse Adults. Social Sciences, 13(9), Article 9. https://doi.org/10.3390/socsci13090483
Barassi, V. (2018). Social media activism, self-representation and the construction of political biographies. In The Routledge Companion to Media and Activism. Routledge. https://research.gold.ac.uk/id/eprint/23736/3/V.Barassi%20Media%20Activism_Routledge%20Companion.pdf
Brusseau, J. (2019). Ethics of identity in the time of big data. First Monday. https://doi.org/10.5210/fm.v24i5.9624
Bucher, T. (2018). If… Then: Algorithmic Power and Politics. Oxford University Press, Incorporated. http://ebookcentral.proquest.com/lib/curtin/detail.action?docID=5401028
Jackson, S. J., Bailey, M., & Foucault Welles, B. (2018). #GirlsLikeUs: Trans advocacy and community building online. New Media & Society, 20(5), 1868–1888. https://doi.org/10.1177/1461444817709276
Jiang, Y., Jin, X., & Deng, Q. (2022). Short Video Uprising: How #BlackLivesMatter Content on TikTok Challenges the Protest Paradigm. https://doi.org/10.36190/2022.42
Miranda, S., Young, A., & Yetgin, E. (2016). Are Social Media Emancipatory or Hegemonic? Societal Effects of Mass Media Digitization in the Case of the SOPA Discourse. Management Information Systems Quarterly, 40(2), 303–329.
Mueller, A., Wood-Doughty, Z., Amir, S., Dredze, M., & Lynn Nobles, A. (2021). Demographic Representation and Collective Storytelling in the Me Too Twitter Hashtag Activism Movement. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 107. https://doi.org/10.1145/3449181
Nicholas, G. (2023). Sunsetting “Shadowbanning” (SSRN Scholarly Paper No. 4522168). Social Science Research Network. https://doi.org/10.2139/ssrn.4522168
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. https://safiyaunoble.com/wp-content/uploads/2020/09/Algorithms_Oppression_Introduction_Intro.pdf
Ortiz, J., Young, A., Myers, M. D., Bedley, R., & Carbaugh, D. (2024). Giving voice to the voiceless: The use of digital technologies by marginalized groups. https://www.researchgate.net/publication/333034416_Giving_voice_to_the_voiceless_The_use_of_digital_technologies_by_marginalized_groups
Papacharissi, Z. (2010). A Networked Self: Identity, Community, and Culture on Social Network Sites. Taylor & Francis Group. http://ebookcentral.proquest.com/lib/curtin/detail.action?docID=574608
Singh, D. P. (2023). The Algorithmic Bias of Social Media. The Motley Undergraduate Journal, 1(2), Article 2. https://doi.org/10.55016/ojs/muj.v1i2.77457
Smith, A., Gelles-Watnick, R., Odabaş, M., & Anderson, M. (2023, June 29). Americans’ views of and experiences with activism on social media. Pew Research Center. https://www.pewresearch.org/internet/2023/06/29/americans-views-of-and-experiences-with-activism-on-social-media/
Walsh, T. (2006). A right to inclusion? Homelessness, human rights and social exclusion. Australian Journal of Human Rights, 12(1), 185–204. https://doi.org/10.1080/1323238X.2006.11910818
Younes, R. (2023). “All This Terror Because of a Photo”: Digital Targeting and Its Offline Consequences for LGBT People in the Middle East and North Africa. Human Rights Watch. https://www.hrw.org/report/2023/02/21/all-terror-because-photo/digital-targeting-and-its-offline-consequences-lgbt