
Digital Activism: Amplifying Social Justice Efforts, or Perpetuating Inequality?


Abstract

This essay explores the paradox of digital activism in the age of social media, where platforms designed to empower global movements often marginalise the very voices they aim to amplify. It adopts a qualitative, critical approach through discourse analysis and a case study of the #GirlsLikeUs movement. The findings show that trans activists actively resist digital erasure by strategically mobilising hashtags, using digital platforms to assert identity and build community in the face of algorithmic suppression. The essay concludes that although social media enables new forms of activism, it remains governed by commercial and algorithmic logics that reinforce existing hierarchies. These insights suggest a need for platform reform and greater algorithmic accountability to support genuinely inclusive digital activism.

Introduction

Since the creation of social media, platforms such as Instagram and YouTube have increased opportunities for political engagement, identity expression, and global connection (Noble, 2018). From hashtags to online protests and petitions, online participation has become a normalised and necessary aspect of activism. However, the promise of inclusion is at times undermined by the biases embedded within these same platforms (Singh, 2023).

This essay begins by examining critiques of symbolic online activism and the structural forces that constrain digital participation. It then explores how digital platforms shape identity and belonging, especially for marginalised users. The role of algorithms in reproducing dominant narratives will then be explored, before the focus shifts to the concept of ‘refracted publics,’ which shows how systemic inequalities shape access to visibility and safety online. Finally, an analysis of the #GirlsLikeUs movement demonstrates how underrepresented individuals fight back against the shadowbanning of their accounts by social media algorithms. By examining each of these topics, this essay argues that while digital technology and social media provide more opportunities for digital activism, dominant narratives are replicated and prioritised algorithmically, which exacerbates prejudice against marginalised identities.

Symbolic Action and Structural Constraints

Online activism is often dismissed as failing to establish a meaningful connection with political topics, with critics using terms like ‘slacktivism’ to describe low-effort actions such as the use of hashtags or updating profile pictures with political overlays. In 2023, 46% of users in the United States reported that they had been politically active on social media in the previous twelve months, while 76% believed that social media makes people think they are helping when they are not, and 82% said social media distracts people from more important issues (Smith, 2023). However, these critiques overlook the underlying traits that distinguish digital activism from other forms of political engagement. As Barassi (2018) explains, activism on social media is not only highly nuanced, shaped by personal networks and users’ identities, but is also based around a new form of visibility. Rather than being solely ostentatious, these digital practices reflect how individuals navigate digital spaces to engage politically and construct collective identities. This is especially true in situations where physical forms of protest may not be possible due to safety and accessibility issues for individuals in marginalised groups (Younes, 2023).

The Routledge Companion to Media and Activism highlights the perspective that capitalism is entrenched in digital platforms, and that visibility is therefore commodified. Users are encouraged to engage in ways that generate advertising revenue based on previous viral outputs, rather than in ways that highlight systemic inequalities (Barassi, 2018). Under these conditions, symbolic acts such as hashtag activism may be the only available form of expression for those who lack institutional power or access to safer spaces for protest (Mueller et al., 2021). Newer social media platforms such as TikTok have created unprecedented opportunities for marginalised people, those often excluded from economic, social, and political life (Walsh, 2006). These platforms allow those with marginalised identities to raise their voices and make their grievances visible (Ortiz et al., 2024). Traditional media, on the other hand, has historically underrepresented or misrepresented these groups, favouring dominant cultural narratives and perspectives (Miranda et al., 2016).

For marginalised users, digital platforms become tools for strategic visibility, offering acts of protest in environments where traditional avenues for participation remain inaccessible or hostile. Hashtags such as #BlackLivesMatter or #MeToo have evolved from trends into sustained global movements because they amplify perspectives that were rarely shared by mainstream media (Jackson, 2016). Accordingly, symbolic digital activities need to be interpreted within the socio-technical contexts in which they arise, since they are intricately linked to larger frameworks of power, identity, and representation (Ortiz et al., 2024). Digital activism cannot be understood without considering the context in which it occurs; this understanding allows for a more nuanced account of identity formation and performance in these settings, especially for those who cannot present their identities safely (Ortiz et al., 2024).

Networked Identity and Digital Belonging

Social media platforms are now important sites for people to construct and share their identities. Zizi Papacharissi (2010) argues in ‘A Networked Self’ that self-presentation is performative, shaped by platform interfaces and created through individual encounters with online communities. Thus, the online identity portrayed is fluid rather than fixed or authentic, continuously curated through posts, profiles, and interactions with other people (Papacharissi, 2010).

Social media platforms provide a community in which many marginalised people can establish their identities and build relationships. LGBTQ+ users may use these platforms to test out different gender expressions or to seek support from people who have had similar experiences. Yet while users may try to create meaningful identities, platforms can thwart these efforts by using algorithmic and community moderation tools to control which kinds of identities are promoted, silenced, or flagged. In these situations, digital technologies can offer visibility and validation that might not exist offline (Younes, 2023), but identity construction online is shaped by what the platform deems legible. Trans users have reported being penalised for sharing photographs that violate unclear “community standards”, or banned for using their preferred identities, while identical content from cisgender users has gone unnoticed (Aldridge et al., 2024).

This uneven enforcement of identity legitimacy shows that platforms privilege normative, commercialised expressions of self while marginalising others. What appears to be a space for personal expression is, in fact, highly regulated and policed (Younes, 2023). Therefore, while digital spaces offer possibilities for belonging, they also present profound risks, exclusions, and disciplinary mechanisms, especially for those whose identities do not conform to dominant norms. This systemic exclusion is not coincidental but algorithmically constructed, which leads into the next section: how algorithmic logics reinforce social hierarchies under the guise of neutrality.

Algorithms and the Reproduction of Dominance

Algorithms are considered by some to be neutral tools that customise the user experience based on individual interests, but in practice they have a significant impact on what social media users see and interact with. Brusseau (2019) makes the case that algorithms push content to users based on probabilistic identities and data correlations. These correlations then affect the material that is presented, based on presumptions and financial interests built into the platform’s architecture (Brusseau, 2019). Algorithms reinforce popular material and dominant norms using data from previous platform and user engagement metrics (Ortiz et al., 2024). Content that supports popular identities, sensationalism, or dominant ideas is more likely to appear. Conversely, content from or about marginalised groups is often under-represented, especially when it challenges dominant narratives or violates advertiser-friendly norms (Ortiz et al., 2024).

Safiya Noble (2018), in ‘Algorithms of Oppression’, demonstrates how Black and Brown populations are marginalised by search engines and recommendation systems. Searches for phrases like ‘Black girls’ yielded stereotyped or pornographic results, demonstrating how problematic depictions of marginalised groups can infiltrate algorithms (Noble, 2018). Similar dynamics apply to social media platforms (Bucher, 2018). Users who post content thought to pose a risk to brand safety, such as sex workers, Indigenous land defenders, or racial justice activists, are frequently shadowbanned or purposefully made less visible, often without the content creator or end users being notified (Nicholas, 2023). At the same time, material that supports materialism, nationalism, or beauty standards is promoted. Platforms thereby reinforce existing inequalities through code, even as they profess to democratise discourse. This algorithmic gatekeeping reinforces dominant cultural narratives while suppressing dissent, leading to the creation of ‘refracted’ rather than ‘networked’ publics, an idea explored in the next section.
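
To illustrate the kind of feedback loop described above, the short sketch below is a purely hypothetical example: the scoring rule, the weights, and the ‘brand risk’ flag are illustrative assumptions, not any platform’s actual code. It shows how ranking content solely on prior engagement, combined with quiet demotion of flagged posts, compounds the visibility of already-popular accounts while burying marginalised ones.

    # Hypothetical illustration only: a toy engagement-based ranker, not any
    # platform's real algorithm. It shows how scoring on past engagement and
    # quietly demoting 'brand risk' posts entrenches already-visible content.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        likes: int
        shares: int
        flagged_brand_risk: bool  # assumed moderation label, for illustration

    def engagement_score(post: Post) -> float:
        score = post.likes + 2.0 * post.shares   # popular posts score highest
        if post.flagged_brand_risk:
            score *= 0.1                          # silent demotion ('shadowban')
        return score

    def rank_feed(posts: list[Post]) -> list[Post]:
        # The feed is sorted purely by past engagement, so yesterday's most
        # visible content surfaces first again today: a self-reinforcing loop.
        return sorted(posts, key=engagement_score, reverse=True)

    if __name__ == "__main__":
        feed = rank_feed([
            Post("mainstream_creator", likes=5000, shares=800, flagged_brand_risk=False),
            Post("grassroots_activist", likes=300, shares=120, flagged_brand_risk=True),
        ])
        for post in feed:
            print(post.author, round(engagement_score(post), 1))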

Refracted Publics and Marginalised Participation

Online communities created by social media affordances have long been referred to as the ‘networked public.’ However, this framework presumes equitable access to speech, safety, and visibility (Abidin, 2021). In actuality, algorithmic surveillance, commercialisation, and cultural gatekeeping shape digital environments, leading to uneven participation, particularly for marginalised users (Abidin, 2021). The concept of ‘refracted publics’ describes how online communities and cultures shape the way they are portrayed and stay below the radar to avoid weaponised misinformation campaigns, harassment, censorship, or de-platforming. For instance, trans influencers and sex workers frequently negotiate ever-evolving social norms to keep their audiences interested while avoiding content removals and shadowbans (Aldridge et al., 2024). It is emotionally and materially taxing to bear the dual duty of protecting oneself from algorithmic or user-driven harm while also needing to be visible for the community’s survival.

Furthermore, visibility isn’t necessarily empowering. It may incite violence, doxxing, or trolling. Coordinated harassment campaigns against trans activists and women of colour are common, demonstrating how online “participation” may become a site of harm rather than empowerment (Aldridge et al., 2024). Refracted publics are therefore not inherently safe or easily legible. On social media platforms, marginalised users must strike a balance between self-defence and identity expression (Younes, 2023). However, despite these limitations, acts of solidarity and resistance continue to occur; this is best illustrated by the #GirlsLikeUs campaign, discussed next.

Case Study: #GirlsLikeUs and Trans Digital Activism

The #GirlsLikeUs hashtag was launched in 2012 by author and activist Janet Mock to raise awareness of trans women of colour after a series of high-profile murders and suicides (Jackson et al., 2018). The movement swiftly expanded into a community where trans and queer adolescents found representation and empowerment, in a world where trans women are frequently harassed or ostracised on social media (Aldridge et al., 2024). Through interpersonal connection, #GirlsLikeUs grew as people supported one another, shared their experiences, and affirmed their identities. The hashtag challenged popular perceptions of trans women as tragic, dangerous, or deceitful. Instead, it highlighted their contentment, resilience, and complexity, often in ways that departed from media clichés (Aldridge et al., 2024).

The way this campaign navigated refracted publics is what makes it noteworthy. By adopting the hashtag, trans women made themselves visible but also put themselves at risk of algorithmic suppression and harassment. Many encountered shadowbanning or content flagging (Nicholas, 2023). To fight erasure, the community persevered, developing solidarity tactics and platform literacies. #GirlsLikeUs was successful because it served two purposes: it defended identity in a hostile online space and challenged algorithmic systems that worked to hide transgender existence. It powerfully illustrates how marginalised people use digital tools not only to express themselves but also to disrupt the structures that would otherwise silence them.

Conclusion

In conclusion, digital platforms have become vital arenas for identity expression, political engagement, and community formation (Jiang et al., 2022). Yet, despite their democratic promise, these spaces remain deeply shaped by systemic inequalities and algorithmic power. For marginalised identities, so-called slacktivism is more than low-effort activism; it is often a necessary form of digital presence, resistance, and connection in an environment that privileges dominant narratives. This essay has argued that although social media and digital technologies offer greater opportunities for activism, dominant narratives are algorithmically prioritised and reproduced, resulting in compounding discrimination for marginalised identities. Recognising this tension helps us better understand the political and ethical dimensions of digital engagement and move beyond reductive critiques towards a more equitable digital public sphere.

References

Abidin, C. (2021). From “Networked Publics” to “Refracted Publics”: A Companion Framework for Researching “Below the Radar” Studies. Social Media + Society, 7(1), 2056305120984458. https://doi.org/10.1177/2056305120984458


Aldridge, Z., McDermott, H., Thorne, N., Arcelus, J., & Witcomb, G. L. (2024). Social Media Creations of Community and Gender Minority Stress in Transgender and Gender-Diverse Adults. Social Sciences, 13(9), Article 9. https://doi.org/10.3390/socsci13090483


Barassi, V. (2018). Social media activism, self-representation and the construction of political biographies. In The Routledge Companion to Media and Activism. Routledge. https://research.gold.ac.uk/id/eprint/23736/3/V.Barassi%20Media%20Activism_Routledge%20Companion.pdf


Brusseau, J. (2019). Ethics of identity in the time of big data. First Monday. https://doi.org/10.5210/fm.v24i5.9624


Bucher, T. (2018). If… Then: Algorithmic Power and Politics. Oxford University Press. http://ebookcentral.proquest.com/lib/curtin/detail.action?docID=5401028


Jackson, S. J., Bailey, M., & Foucault Welles, B. (2018). #GirlsLikeUs: Trans advocacy and community building online. New Media & Society, 20(5), 1868–1888. https://doi.org/10.1177/1461444817709276


Jiang, Y., Jin, X., & Deng, Q. (2022). Short Video Uprising: How #BlackLivesMatter Content on TikTok Challenges the Protest Paradigm. https://doi.org/10.36190/2022.42


Miranda, S., Young, A., & Yetgin, E. (2016). Are Social Media Emancipatory or Hegemonic? Societal Effects of Mass Media Digitization in the Case of the SOPA Discourse. Management Information Systems Quarterly, 40(2), 303–329.


Mueller, A., Wood-Doughty, Z., Amir, S., Dredze, M., & Nobles, A. L. (2021). Demographic Representation and Collective Storytelling in the Me Too Twitter Hashtag Activism Movement. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 107. https://doi.org/10.1145/3449181


Nicholas, G. (2023). Sunsetting “Shadowbanning” (SSRN Scholarly Paper No. 4522168). Social Science Research Network. https://doi.org/10.2139/ssrn.4522168


Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press. https://safiyaunoble.com/wp-content/uploads/2020/09/Algorithms_Oppression_Introduction_Intro.pdf


Ortiz, J., Young, A., Myers, M. D., Bedley, R., & Carbaugh, D. (2024). Giving voice to the voiceless: The use of digital technologies by marginalized groups. https://www.researchgate.net/publication/333034416_Giving_voice_to_the_voiceless_The_use_of_digital_technologies_by_marginalized_groups


Papacharissi, Z. (2010). A Networked Self: Identity, Community, and Culture on Social Network Sites. Taylor & Francis Group. http://ebookcentral.proquest.com/lib/curtin/detail.action?docID=574608


Singh, D. P. (2023). The Algorithmic Bias of Social Media. The Motley Undergraduate Journal, 1(2), Article 2. https://doi.org/10.55016/ojs/muj.v1i2.77457


Smith, A., Gelles-Watnick, R., Odabaş, M., & Anderson, M. (2023, June 29). Americans’ views of and experiences with activism on social media. Pew Research Center. https://www.pewresearch.org/internet/2023/06/29/americans-views-of-and-experiences-with-activism-on-social-media/


Walsh, T. (2006). A right to inclusion? Homelessness, human rights and social exclusion. Australian Journal of Human Rights, 12(1), 185–204. https://doi.org/10.1080/1323238X.2006.11910818


Younes, R. (2023). “All This Terror Because of a Photo”: Digital Targeting and Its Offline Consequences for LGBT People in the Middle East and North Africa. Human Rights Watch. https://www.hrw.org/report/2023/02/21/all-terror-because-photo/digital-targeting-and-its-offline-consequences-lgbt

Images:

RDNE Stock Project. (2025). “Close-Up Shot of a Person Taking Photo Using a Mobile Phone”. https://www.pexels.com/photo/close-up-shot-of-a-person-taking-photo-using-a-mobile-phone-6257630/


Comments

19 responses to “Digital Activism: Amplifying Social Justice Efforts, or Perpetuating Inequality?”

  1. Jayne

    Hi Maxim,

    I found your paper very interesting and enlightening, particularly around the delicate and difficult balance within social media of both presenting yourself and at the same time staying below the radar, so that you are not blocked by platforms. It is another way that some communities have extra stress put on them just to present themselves. I found your example of #GirlsLikeUs a good way of demonstrating this, so I was interested whether, in your research, you found any other examples of this ‘seen and not seen’ way of communicating, as it makes me think this would be a tricky balance sometimes.

    Thank you for your thoughtful paper, it has made me reflect further about the pressure and constraints of algorithms.

    Many thanks

    Jayne

    1. Maxim Lullfitz

      Hi Jayne,

      Many thanks for reading my paper and engaging with thoughtful feedback. I appreciate your response!

      Regarding your question on further examples of users utilising social media to be ‘seen and not seen’, a good example is a report published by Rasha Younes titled ‘All This Terror Because of a Photo’. I did not discuss this report in depth due to the word limit constraints of the essay, but it was cited throughout my paper to support some of the arguments made. It is a great example of the visibility work which marginalised identities have been forced to participate in.

      Specifically, this report gives various accounts of LGBT individuals in Middle Eastern and North African countries such as Egypt, Iraq, Jordan, Lebanon, and Tunisia who have been persecuted based on images and engagement on social media. Some examples include security forces masquerading as fellow LGBT individuals on platforms such as Grindr or Instagram in order to meet in person, leading to online extortion, outing, arrests and detainment for the victims. Within the article, which is included in the paper references above, is this YouTube video, which includes interviews with some of the people affected by the discrimination discussed above – https://www.youtube.com/watch?v=ujW562tYcKA . As a result of these persecutions, affected users must balance censorship strategies while presenting their identity in a safe manner, otherwise they are forced to avoid social media altogether.

      Please let me know if you have any more questions and I would love to discuss further. I am also looking forward to reading your paper shortly.

      Thanks,
      Max

      1. Jayne

        Hi Max,

        Thank you for sharing and highlighting those excellent examples. The video really showed the terrible persecution that can happen when sharing your identity online, and therefore how careful users have to be about how much of themselves they reveal. Indeed, it can be a matter of life and death.

        In light of such stark and life-changing stories, it feels like there are no protections for people who want to engage online, and that platforms should both promote and protect communities.

        Thanks again for sharing those helpful links.

        All the best

        Jayne

        1. Maxim Lullfitz

          Hi Jayne,

          Justin asked some interesting questions below, and I am curious about your thoughts – would you use an application which is specifically tailored towards a balanced viewpoint or online safety?

          I think this idea reads extremely well on paper, but I’m not sure it would attract the same user base as some of the massive social media applications, e.g. YouTube, Facebook, Instagram, etc. I could be wrong though!

          Thanks,
          Max

  2. Kai_Armstrong

    Hey Max!

    Thanks for reading my essay and engaging, it led me to give yours a read.

    Your essay raises some compelling points about the role of algorithms in shaping digital activism and identity. What struck me most was the idea that algorithmic systems (often assumed to be neutral) actually reproduce and reinforce dominant social norms. The example of trans users facing shadow banning or content suppression highlights how these supposedly “automated” systems can actively silence marginalised voices, even as platforms market themselves as inclusive and empowering.

    Given the scale and influence of these platforms, it raises a serious ethical concern: if algorithms are consistently marginalising certain identities, isn’t that a form of systemic discrimination? And if so, shouldn’t there be greater legal or regulatory oversight?

    We have accountability mechanisms for media bias in traditional journalism, so why not the same for algorithmic bias, especially when these digital platforms serve as our new place of connection and interaction?

    This leads me to my question: Do you think social media companies should be held legally accountable for how their algorithms affect visibility and participation, especially for marginalised communities? Or would that risk compromising the openness and flexibility that makes digital activism so powerful in the first place?

    Would love to hear your thoughts.

    Thanks,
    Kai

    1. Maxim Lullfitz

      Hey Kai,

      You’re absolutely right to highlight the ethical concerns around algorithmic discrimination and the lack of regulation. Social media platforms have positioned themselves as champions of inclusivity, yet their algorithms often reproduce dominant social norms, leading to the suppression of marginalised voices. This is particularly concerning given the scale and influence of these platforms.

      We’re seeing this tension play out in real-time. Since Elon Musk’s acquisition of X, for example, the platform has seen a rise in racism, misogyny, and misinformation (The Guardian, 2024 https://www.theguardian.com/technology/article/2024/sep/05/racism-misogyny-lies-how-did-x-become-so-full-of-hatred-and-is-it-ethical-to-keep-using-it). Similarly, Meta has begun rolling back algorithmic fact-checking in the name of “free expression,” just ahead of the 2025 U.S. election (Alligator, 2025 https://www.alligator.org/article/2025/02/mark-zuckerberg-free-speech). Other papers in this conference, like Mathew’s, also touch on how the far right strategically exploits these platforms to build filter bubbles and spread disinformation (Phillips, 2025 https://networkconference.netstudies.org/2025/csm/5402/right-wing-media-disinformation-understanding-the-polarizing-power-of-right-wing-media/).

      Given these patterns, I do believe social media platforms should be held legally accountable if their algorithms amplify harmful narratives. However, it’s a complex issue. In today’s polarised climate, any move to uplift marginalised voices can be mischaracterised as “woke,” potentially driving more users toward unregulated platforms like X. The goal shouldn’t be to limit expression, but to ensure algorithms don’t reduce visibility of vulnerable communities or promote violence and hate.

      Legal accountability shouldn’t stifle digital activism—it should protect the spaces that make it possible.

      Thanks for reading and engaging with my paper, and all the best through the rest of the conference,
      Max

      1. John Lim

        Hi Maxim and Kai,

        Sorry for intruding on your discussion, but I found the conversation quite interesting. I especially liked and resonated with the statement “The goal shouldn’t be to limit expression, but to ensure algorithms don’t reduce visibility…” and the point about potentially pushing users toward unregulated platforms like X. I hold this opinion strongly as well and agree that censorship of any type or degree exacerbates toxic rhetoric, pushing those who spread it to concentrate into even stronger echo chambers. This is also explored in my own paper if you are interested, especially with regard to how Donald Trump acted as a rallying point for these toxic rhetorics to gather around. https://networkconference.netstudies.org/2025/onsc/5420/social-media-affordances-donald-trump-politics-and-social-change/

        Which brings me to my question. During my research I came across a book by Wendling (2018) about the formation of alt-right communities and what drives them, and one of the possible answers he delved into was that the alt-right are usually white men who are confused or lost by sexual politics, feel they have lost the power and social standing of their predecessors in patriarchal America, and are also threatened by the success of ethnic minorities and women in the workplace.

        Do you think this holds true when we consider how the trans community is constantly targeted by trolling, doxxing and shadow banning? Is it a matter of education? How do you think we should enable conversations, especially in dominant channels, that would allow the algorithm to notice that most of progressive society values these types of discourses and allow their visibility?

        Would really love to know your thoughts. Here is also a link to my paper: https://networkconference.netstudies.org/2025/onsc/5420/social-media-affordances-donald-trump-politics-and-social-change/

  3. Greg

    Hey Max,

    Your paper was well written and made some interesting points about how digital activism is not always as empowering as it’s made out to be. The concept of refracted publics stood out to me; I hadn’t really thought about how algorithms could affect people’s visibility online.

    The #GirlsLikeUs example highlighted just how much effort is required to simply stay visible, especially for marginalised users who are impacted by algorithmic bias.

    Do you think platform reform could actually fix this issue, or are grassroots movements like these the only real way to push for change?

    Thanks,
    Greg

    1. Maxim Lullfitz

      Hi Greg,

      Thanks for engaging with this post.

      I do believe the platforms could fix these issues by algorithmically suggesting content similar to what the user has engaged with previously, as opposed to what is ‘popular’ amongst most of its users. That being said, grassroots movements are a viable way to make political changes and can absolutely help in any minor or major movement.

      Ultimately, social media platforms have a complex role whereby they can potentially create ‘filter bubbles’ which only echo a user’s political beliefs and fail to present a diverse range of content. Some of the complexities are touched on in this essay written by Hannah Metzler – https://journals.sagepub.com/doi/full/10.1177/17456916231185057, including fuelling negative mental health via body dissatisfaction and suicides, and other negative impacts on society such as hate speech and polarisation. On the other side of the coin, if the content is too diverse then users may become less engaged, as what is shown to them feels less relevant. It is a deeply complex issue and one that will take constant monitoring and continual improvement over time to see what works best for the platform and its users.

      Thanks again for reading,
      Max

  4. Mathew

    Hi Maxim!

    I really enjoyed your paper; the inclusion of the #GirlsLikeUs movement as a case study was a very good example of how marginalized communities resist digital erasure, and that made your arguments that much more impactful.

    What do you think the users themselves could do to challenge algorithmic biases in these digital spaces?

    Mat

    1. Maxim Lullfitz

      Hi Mat,

      Thanks for engaging with this post, I really enjoyed reading your essay on how the Right Wing uses misinformation to sway users online.

      I think social media users can advocate for better transparency on social media platforms, and if this is not successful then they can choose to interact with other platforms entirely. At the end of the day, if social media giants see fewer users globally then they will be forced to comply with their users’ requests or forgo commercial rewards and advertising revenue. This of course requires users to be aware of the unseen algorithms, and politically motivated enough to stop using platforms and make a change. I do believe however that the platforms share responsibility in this case and that continual improvement to the user experience should always be considered first and foremost.

      Thanks,
      Max

  5. Justin

    Hi Maxim,
    Like a lot of the papers in this conference you have made me think, and in your case about visibility and safety for marginalized communities online. I was interested in your discussion of ‘refracted publics’ and how algorithms end up connecting and isolating communities. What are your thoughts on potential solutions, do you think transparency in the use of algorithms would help? Or would that just give people tools to game the system and target vulnerable communities even further? Also, I wonder if there are examples of platforms that have successfully balanced visibility and safety for marginalized users? Are there any platforms that have built better systems from the ground up? Looking forward to your thoughts!

    1. Maxim Lullfitz

      Hi Justin,

      I think the use of transparency in algorithms would be a great idea. Having a level of explainability regarding the content being shown to users, coupled with interaction from the user to ask if they would like to see similar content again (yes/no), would be ideal. This would allow the user to continue to actively adjust what they are shown and work with algorithms to have an enjoyable feed of content.

      I haven’t come across any platforms that are noted as being successfully balanced per se, but I will get back to you if I find one! Ultimately, though, I think people will gravitate to the social media applications that their friends are using, or that are the most engaging and have enjoyable content. While it is a great thought, an app that is specifically balanced and promotes safety may not attract the same quantity of users.

      Would you use an application which is specifically tailored towards a balanced viewpoint or online safety? I suppose if I had children I would definitely want them to use an application like this once they reached a certain age.

      Thanks Justin,
      Max

      1. Justin

        The yes/no option, like all good ideas, is simple and therefore more likely to be adopted, as users are less likely to be overwhelmed.
        Regarding your question, yes, a more balanced, safety-conscious platform has merits as long as the valuable social connectivity isn’t lost. I did some digging around and wondered if the concept of a user-owned platform would work? This explains it quite well:

        “Platform co-ops can be a way for businesses to use a member-ownership model to serve their members better. Rather than be at the mercy of investors and having no say in how the company is run and how profits are disbursed, a cooperative is owned by the members for the benefit of the members and their customers and is accountable to users/members rather than outside investors or venture capitalists.”

        https://blog.tsl.io/platform-cooperatives-what-are-they-and-who-is-a-good-fit

        1. Maxim Lullfitz

          Hi Justin,

          Platform cooperatives are an interesting concept, allowing organisations/users to unite and present web content based on a democratic system. From what I am reading in the link you provided, it seems to align more closely with Web 3.0 concepts than Web 2.0, given that it is moving away from centralised platforms such as Meta, X, TikTok etc. This could indeed be a great way to promote content safely without algorithms having ultimate governance over what users will see. I wonder if this cooperative nature could create filter bubbles though, and ultimately only present one side of an argument/one perspective on a topic. It’s great food for thought!

          Thanks for your comments, it has been great having this conversation with you!

          Max

  6. maxf

    Hi Max!

    I found your paper very insightful and enjoyed going through it. I hadn’t heard of #GirlsLikeUs before, so that sent me down a rabbit hole for a while. I thought your use of the case study illustrated the struggles with combating algorithmic suppression.

    While going over your paper, the main question that came to mind was a similar question to what you asked me about my paper. Do you think digital activism is inherently compromised by its reliance on profit-driven platforms, and do you think it is inevitable that there will be a need for external government interventions?
    Thanks,
    Max

    1. Maxim Lullfitz

      Hello Max 🙂

      I think that the value social media brings to activism will always ensure it is a useful tool for raising awareness, and thus it should be used for awareness campaigns and bringing attention to issues online. Unfortunately, most social media platforms are started with good intentions, but over time bringing more people onto the platform and generating more advertising revenue becomes more fruitful for the company than serving the needs of its users. For example, the ‘big four’ platforms, Twitter, Instagram, Facebook and WhatsApp, are more effective for advertising than traditional media. Ads can be delivered in ways where you don’t realise they are an ad at first, for instance ‘Get ready with me’ videos from influencers that aim to sell makeup products.
      https://www.researchgate.net/publication/332312053_Is_advertising_on_social_media_effective_An_empirical_study_on_the_growth_of_advertisements_on_the_Big_Four_Facebook_Twitter_Instagram_WhatsAppInternational_Journal_of_Procurement_Management_2020_Vol1

      I don’t believe there is a need for external government intervention at the end of the day; users who are more politically active can choose to engage over different platforms and means of communication, and social media still has its uses.

      I hope that answers your questions 🙂

      Max

  7. Shannon Kate

    I think the idea of the neutral algorithm (perpetuated by both platforms themselves and users who aren’t affected by shadowbanning etc.) is something that really needs to be examined by every user on social media.
    My wife has a sexual education account on Instagram – no pornographic content, just everyday information about sex topics aimed at adults. She is regularly shadowbanned and has content blocked as ‘sexually explicit’, which is just ridiculous. Meanwhile, there are thirst traps that get pushed out to millions of viewers. I believe it’s not just content from or about marginalised groups that is under-represented, but also content that provides education around topics that could threaten the commercial models of social media.

    Your idea that ‘visibility isn’t necessarily empowering. It may incite violence, doxxing, or trolling’ is one I hadn’t considered, but it echoes the issues examined in https://networkconference.netstudies.org/2025/icodsm/5042/social-medias-advocacy-in-preserving-indigenous-australians-cultural-heritage/. The fact that some minority users have to fight to be seen and THEN fight to have their privacy protected, or against racism, sexism, transphobia, etc., is so heartbreaking.

  8. John Lim

    Hi Maxim,

    I really loved and enjoyed reading your paper! It is so well structured, the flow was easy to read and understand, and the research was backed with strong evidence. I never knew algorithms had such a strong sway in promoting or silencing particular discourses or topics; it is eye-opening, a little bit threatening, and also disheartening to fully understand how much of a hold capitalism has on society even when we think we are engaging in democratic practices.

    Do you think that users are aware of how much social media platforms are driven by capitalist agendas? Do you think it would make a difference in how they engage with social media, or maybe even lead them to demand more transparency and unfiltered recommendations, if they were more aware of social media companies’ revenue motivations? Or do you think the dominant narratives in society would still outweigh minorities and actively silence them even if social media companies allowed more transparency?

    Looking forward to your response!