Communities and Social Media

Social Media Free Speech Policies are a Myth

Abstract

Social media giants like Facebook and Twitter claim in their community guidelines that users have the right to free speech on their platforms. However, hate speech on social media has become a major issue in recent years, and in response these platforms have had to find ways to regulate what people are and are not allowed to say. This has been done by setting up moderators, flagging systems, and algorithms that are supposed to automatically detect hate speech and censor it. Yet these methods have been shown to fail due to inaccurate detection mechanisms and abuse of the flagging system. The users of social media are so socially, culturally and ethnically diverse that hate speech detection systems are an inadequate means of recognizing language used in a particular cultural context. The result of this policing has been the direct or indirect censorship of certain individuals or communities. This paper argues that the so-called freedom of expression on social media platforms is actually a myth because of moderation, flagging systems and failing algorithms. Although certain guidelines are necessary to protect vulnerable communities from abusive language, ultimately a better solution would be to allow free speech to counter hate speech. Keeping speech genuinely “free” will preserve open communication between online global communities.

Keywords

#Hatespeech #Censorship #Freespeech #Moderation #Algorithms #Communities

Social media platforms such as Facebook or Twitter are widely regarded as public spheres where people can freely discuss and share their views, opinions and beliefs on a variety of subjects. In fact, these social networks are today a key component of social life and politics, “shaping how individuals interact with each other and how political movements organize and communicate with the public at large” (Jackson, 2014). Facebook and Twitter claim to defend the right to free speech and to encourage public conversation on their platforms, as long as their users adhere to a set of community guidelines. These community guidelines “announce the platform’s principles and list(s) prohibitions, with varying degrees of explanation and justification” (Gillespie, 2018). These big social networking sites “present themselves as democratizing forces”; however, in recent years, “increased attention has been given to their role in mediating and amplifying old and new forms of abuse, hate and discrimination” (Matamoros-Fernández & Farkas, 2021). With these issues steadily rising on their platforms, social networking sites have had to “face a number of internal and external pressures to censor content” (Jackson, 2014). In response, policing systems of moderation and surveillance algorithms have been set up to censor content that the corporate entities of these platforms deem offensive, racially abusive, misogynistic or homophobic. However, these restrictions have proven to fail not only by increasing hate speech, but in collateral have prevented people from freedom of expression. This paper will argue that moderation, censorship and abuse of the flagging system can be detrimental to the discussion and freedom of speech of individuals and communities on social media. It will also discuss how communities on social media are affected by these restrictions and how these communities fight against hate speech on social platforms.

Freedom of speech and expression is the notion that any person has a natural right to express themselves freely across any form of media, without outside interference such as censorship, and without fear of being threatened or persecuted for doing so. Social Networking Sites (SNS) are ideal media platforms for people to express themselves freely; indeed, these platforms expressly state that their purpose is to “serve the public conversation” (Twitter, 2021) and to allow people “to share diverse views, experiences, ideas and information…even if some may disagree or find them objectionable” (Facebook, 2021). However, social media platforms have also facilitated the dissemination of hate speech (Strossen, 2001) because comments can be posted on any post and seen by many users at one time. To remedy this, major social media corporations such as Facebook and Twitter have established community guidelines that “limit expression in service of their values” (Facebook, 2021), such as authenticity, safety, privacy and dignity, by censoring content that they believe breaches the platform’s code of conduct. Social media companies are not bound by the First Amendment, which applies only to government; they therefore have the right to decide that certain voices and messages should be removed from their platforms.

Recently, social networking sites have been policing content posted on their platforms by means of moderation and surveillance algorithms. These content moderators and surveillance teams have the power to “censor content or block particular users’ access to the site” (Jackson, 2014). There are a few reasons why these moderation systems fail and are a threat to discussion and free speech on these social platforms. Content moderation and surveillance are carried out by human regulation and Artificial Intelligence (AI). Social media platforms today generate “thousands of apps, millions of videos, and billions of search results”, and these platforms present themselves as worldwide services suited to everybody. However, “the rules of propriety are crafted by small teams of people that share a particular worldview, and are not well suited to those with different experiences, cultures, and value systems” (Gillespie, 2018). When the corporate entities of these major social media platforms dictate what can and cannot be posted on their sites, they can inevitably be biased towards certain forms of expression. For example, they may censor people who post on matters of public concern “that might interfere with their business interests and filter(ing) out viewpoints that are critical of business partners that could be of offence to them…or statements that wrongly attribute the social platform itself” (Jackson, 2014). Another reason could be to “censor content in order to promote specific political agendas or in response to political pressure from certain sets of users” (Jackson, 2014). The Wall Street Journal gives an account of censorship on Twitter in which Meghan Murphy, a gender-politics blogger, was banned from the platform for her criticism of transgender rights after she wrote that “transgender women are the same as men, as part of her argument that gender is determined at birth.” This opinion was viewed by some lesbian, gay, bisexual and transgender activists as inciting hate speech against transgender people, and Twitter therefore banned her account. Murphy filed a lawsuit against Twitter for violating her right to free speech, complaining that her account was banned for not aligning with Twitter’s political views, but Twitter stated that its policies did not take political views into account (Wells, 2019). This example shows that pressure from communities or political figures can affect a social media company’s acts of censorship and deny individuals free expression on its platform. The next part of this argument will explore social media moderation algorithms and how they may fail to recognise the difference between hate speech and criticism or offensive language, which ultimately hinders free speech on social platforms.

To continue with this argument it is important to understand what “hate speech” is and what algorithms classify as hate speech. “Hate speech” is not a recognised legal concept in the United States, but what is considered “hate speech” is speech that conveys a hateful message and may, in a particular context, be punished if it directly causes specific, imminent, or serious harm. Although there is no formal definition, there is consensus that hate speech targets disadvantaged social groups in a manner that is potentially harmful to them (Jacobs & Potter, 2000; Walker, 1994). However, Strossen views the term “hate speech” as a means “to demonize views people find offensive” (Wilson, 2019). Human moderators cannot monitor the huge amount of user-generated text on social networks. With this knowledge comes the challenge for automatic hate speech detection algorithms on social media to separate hate speech from offensive language. It is true that these algorithms have been created with good intentions, like “preventing hate speech based on characteristics like race, ethnicity, gender, and sexual orientation, or threats of violence towards others” (Davidson et al., 2017). However, these hate speech detection algorithms have their flaws. Researchers from Cornell University (Davidson et al., 2017) used crowdsourcing to gather tweets containing hate speech keywords. The results showed that some tweets fitted their definition of hate speech but were misclassified by algorithms because “they did not contain the terms most strongly associated as hate speech” (Davidson et al., 2017). These algorithms performed well at classifying prevalent forms of hate speech, particularly anti-black racism and homophobia, but were less reliable at detecting types of hate speech that occur less frequently, such as those that are more subtle and do not contain curse words (Nobata et al., 2016). With these results we can confirm that hate speech detection algorithms are inaccurate, and risk censoring users’ posts and hindering free expression on social media platforms. These flaws even risk missing genuine hate speech, and if they are not addressed they “can lead to users abandoning an online community due to harassment” (Nobata et al., 2016).
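
To make these failure modes concrete, here is a deliberately simplified, purely illustrative Python sketch of a keyword-driven filter. It is not any platform’s actual system; the flagged-term list and matching rule are invented placeholders, but it reproduces the two failure modes reported above: subtle hate speech without flagged terms slips through, while benign posts that merely mention a flagged term are censored.

    # Illustrative sketch only: a naive keyword-based filter of the kind
    # critiqued above. FLAGGED_TERMS and the matching rule are hypothetical;
    # real platform classifiers are more complex but share these failure modes.
    FLAGGED_TERMS = {"slur_a", "slur_b"}  # placeholder tokens, not real data

    def flags_as_hate_speech(post: str) -> bool:
        """Flag a post if any word in it appears in the keyword list."""
        words = {word.strip(".,!?\"'").lower() for word in post.split()}
        return bool(words & FLAGGED_TERMS)

    # False negative: subtle hate speech containing none of the flagged terms.
    print(flags_as_hate_speech("people like that don't belong here"))   # False
    # False positive: quoting or discussing a flagged term gets censored,
    # because the filter has no notion of context.
    print(flags_as_hate_speech("the word slur_a has an ugly history"))  # True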

Social media platforms also contain reporting or flagging systems that involve individuals and communities in deciding what content should be removed. These reporting systems work alongside algorithms, such as Facebook’s, to rid platforms of hate speech. However, flagging systems can be abused by individuals or communities who flag content that they merely consider offensive or disagree with. As mentioned previously, social media platforms are home to many communities, some of which disagree with each other, and “only a small but quite devoted percentage of users are committed enough…to police bad behavior and educate new users” (Gillespie, 2018). Communities can abuse flagging systems through organized flagging, because “community flagging offers a mechanism for some users to hijack the very procedures of governance” (Gillespie, 2018). One example is when communities or citizens flag genuine news reports containing authentic information as fake news because the news does not align with their political views. Such posts are taken down on the strength of the number of flags before they are even fact-checked, and these issues arise especially during national election periods. This violates the right to free speech of news corporations on social media (Gaozhao, 2021). Flagging systems can be a “tactical and convenient tool by which to silence others…because flagging mechanisms do not reveal those who use it and they use these complaints as legitimate data for their platform” (Gillespie, 2018). These measures can be detrimental to free speech on social media and show that flagging systems and algorithms cannot be relied upon to remain unbiased towards a particular community or group.
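
As a hypothetical illustration of how organized flagging can hijack such a system, the Python sketch below implements a simple flag-count takedown rule. The threshold, data model and function names are assumptions made for the sake of the example, not any platform’s documented policy; the point is that a rule which counts flags without checking facts lets a coordinated group remove accurate content.

    # Illustrative sketch only: a hypothetical flag-count takedown rule.
    from collections import defaultdict

    FLAG_THRESHOLD = 50       # assumed auto-hide threshold, chosen for the example
    flags = defaultdict(set)  # post_id -> ids of the users who flagged it

    def flag_post(post_id: str, user_id: str) -> bool:
        """Record a flag; return True once the post crosses the threshold."""
        flags[post_id].add(user_id)
        return len(flags[post_id]) >= FLAG_THRESHOLD

    # A brigade of 50 coordinated accounts hides a genuine news report
    # before any fact-checking can happen.
    hidden = [flag_post("genuine-news-report", f"brigade_user_{i}") for i in range(50)]
    print(hidden[-1])  # True: the accurate report is removed by flag volume alone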

Historically, censorship has not been effective and has, ironically, often been used to suppress the rights of marginalized individuals and communities. Because the term “hate speech” is so broad, civil rights organizations have often complained to social media giants like Facebook and Twitter that their advocacy of equality has been punished as hate speech. Social media companies claim that their platforms allow people to express themselves freely, but it seems that free speech is only acceptable as long as users abide by the list of regulations set up by the powerful corporate entities of these platforms, which claim to be “unbiased”. Nadine Strossen suggests that censorship of “hate speech” comments on social media is not the ideal course of action, as this can cause more hate to arise (Wilson, 2019). The best course of action against hate speech would be for people to refute it in a compassionate approach instead of a hostile approach. Although social media makes it easier for hate speech to be spread, counter speech may be just as easily disseminated. It would be a better solution for social media companies to trust its users and communities to ignore and refute hateful messages on their platforms instead of trying to manage speech on their platforms using fallible algorithms that end up unfairly censoring those who express their ideas and opinions. Moderation, flagging systems, and failing algorithms are all reasons as to why social media free speech policies are a myth.

References

Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017, May). Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 11, No. 1).

Facebook. (2021). Community Standards. Facebook. https://www.facebook.com/communitystandards/.

Gaozhao, D. (2021). Flagging fake news on social media: An experimental study of media consumers’ identification of fake news. Government Information Quarterly. https://doi.org/10.1016/j.giq.2021.101591

Gibson, A. (2019). Free speech and safe spaces: How moderation policies shape online discussion spaces. Social Media + Society, 5(1). https://doi.org/10.1177/2056305119832588

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Jackson, B. F. (2014). Censorship and freedom of expression in the age of Facebook. New Mexico Law Review, 44(1), 121-168.

Jacobs, J. B., & Potter, K. (2000). Hate crimes: Criminal law and identity politics. Oxford University Press.

Matamoros-Fernández, A., & Farkas, J. (2021). Racism, Hate Speech, and Social Media: A Systematic Review and Critique. Television & New Media, 22(2), 205–224. https://doi.org/10.1177/1527476420982230

Mokhtar, M. F., Sukeri, W. A. E. D. W., & Latiff, Z. A. (2019). Social Media Roles in Spreading LGBT Movements in Malaysia. Asian Journal of Media and Communication, 3(2).

Nobata, C., Tetreault, J., Thomas, A., Mehdad, Y., & Chang, Y. (2016). Abusive Language Detection in Online User Content. Proceedings of the 25th International Conference on World Wide Web. https://doi.org/10.1145/2872427.2883062

Saez-Trumper, D., Castillo, C., & Lalmas, M. (2013). Social media news communities: Gatekeeping, coverage, and statement bias. Proceedings of the 22nd ACM International Conference on Information & Knowledge Management (CIKM ’13). https://doi.org/10.1145/2505515.2505623

Trottier, D. (2011). A research agenda for social media surveillance. Fast Capitalism, 8(1).

Twitter. (2021). The Twitter rules: safety, privacy, authenticity, and more. Twitter. https://help.twitter.com/en/rules-and-policies/twitter-rules.

Vaccaro, K., Sandvig, C., & Karahalios, K. (2020). “At the End of the Day Facebook Does What It Wants” – How Users Experience Contesting Algorithmic Content Moderation. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1–22. https://doi.org/10.1145/3415238

Walker, S. (1994). Hate speech: The history of an American controversy. University of Nebraska Press.

Wells, G. (2019, February 12). Writer Sues Twitter Over Ban for Criticizing Transgender People. The Wall Street Journal. https://www.wsj.com/articles/writer-sues-twitter-over-ban-for-mocking-transgender-people-11549946725.

Wilson, R. A. (2019). HATE: Why We Should Resist it with Free Speech, Not Censorship by Nadine Strossen. Human Rights Quarterly, 41(1), 213–217. https://doi.org/10.1353/hrq.2019.0015

31 thoughts on “Social Media Free Speech Policies are a Myth”

  1. Hey Luc!

    Well done on this paper! It was quite intriguing and thought-provoking! There are many good points in here that really show that despite many social media platforms claiming to have ‘freedom of speech’ policies, they can truly be quite biased in nature.

    All this talk about censorship algorithms is quite interesting; in theory they are a terrific idea. Do you think they will ever advance to a level where they could be used practically? Would love to hear your thoughts.

    Regards, Jacob.

    1. Hi Jacob,
      I agree with you that algorithms are an ingenious idea and, as I replied to a similar comment, I do believe they can be a great tool once they are perfected to the point that they can detect real “hate speech”. Unfortunately, this is not yet the case, and very often people are unfairly censored because the algorithm did not detect in what context the offensive word was expressed.

      Cheers

  2. Hi Luc, this was a really interesting read. It made me think of an experience I’ve had on Facebook. I play a game called Stardew Valley, which is a farming simulation game. One of the items you’re equipped with is a hoe, to till soil. I’m in a Facebook group for Stardew Valley players, and when we refer to the hoe we have to censor it as “h*e” because the group admins have received warnings over the use of the word in the group. I think that’s a pretty good example of hate speech censorship using AI gone wrong in a harmless and amusing context.

    On the flip side though, I do understand the concept behind algorithms used to detect potential incidences of hate speech, I just think it can definitely miss the mark. People who want to spread hate speech will easily dodge the algorithms in the same way my Stardew Valley group dodges them over an incorrect flagging of hate speech.

    1. Hi Silas,
      I have heard of this game, it looks quite relaxing! That was a great example of Facebook’s erroneous hate speech detection algorithms. In the same way I wanted to point out how these algorithms prevent free speech and discussions on social media. Imagine if the word “Nazi” is considered hate speech, then there would be no discussion on the Second World War because the word would be censored as it is considered offensive! That was just an example, but the way the world is going what is being considered as “hate speech” is getting quite ridiculous!

      I can understand that people can deliberately be hateful towards a particular individual or group, but a lot of the time we don’t realize what the “hater” is going through and being unkind online is a way of letting out their true feelings on certain things. That is why I believe that censorship isn’t the best way forward because it fuels more hate, instead I believe that people could respond to said hate in a more passive manner and maybe help that person.

      Thanks for your comment 🙂

  3. Hello Luc,
    Firstly, I would like to congratulate you on your awesome paper. It is a well-thought-out paper; I can sense how much effort you have put into supporting your claim. Thanks to your argument, I got the chance to learn something new, out of my ‘comfort’ zone really. Bravo!
    As I have understood it, online platforms have started to exercise new forms of power via established moderation and regulation that violate free speech. You have shown their impacts through clear reasons for how they have raised serious concerns. However, as an active online user, I would like to push back on your argument: social media is actually a brave space. I agree that, to some extent, social media appears much like a comfortable home in regard to limiting hate speech and content moderation, away from risk and conflict = a ‘safe space’. On social media, individuals are supported in being brave, which means being authentic and speaking our truth while acknowledging our social oppressiveness (Arao & Clemens, 2013). Being brave means being open to others who are also bravely sharing their experiences with media. Brave space and ‘safe space’ can be used interchangeably.
    Imagine living in an environment where there is only hate, an unsafe place, and where social media has none of the regulations you have mentioned in your work, because remember, nobody has a perfect system, Luc. Moreover, through these regulations online, one is able to participate with diversified identities while being respectful. What are your views on my take?

    Best,
    Lovee

    Reference list:
    Arao, B., & Clemens, K. (2013). From safe spaces to brave spaces. The art of effective facilitation: Reflections from social justice educators, 135-150.

    1. Hi Lovee,
      I understand that regulations need to be put in place to create a “safe space” for people to converse and express their opinions without being abused or hated for their views. However, I don’t believe that the online environment is only hate, as you suggested. I feel it is wrong for an algorithm to decide what you can and can’t say. Just because someone makes a comment that offends someone doesn’t mean it is hate speech, and I believe that in this day and age people shy away from intense civil discussions that could potentially be beneficial to both parties, because they are afraid of offending someone for fear of being ganged up on or cancelled. One great example would be the sacking of an English teacher at Eton College because of a controversial educational video that he posted on YouTube that offended just one staff member. If you’re interested in knowing more about this story, here is the YouTube link: https://www.youtube.com/watch?v=o0SXWEQMsgM&t=1811s

      Thank you for your comment, I really appreciate it 🙂

  4. Hi Luc! I enjoyed reading your paper and I agree with your point of view on this topic, especially right now, when you see what is happening in Gaza: content is being banned, videos are taken down, and influencers supporting or voicing this matter are getting low views or having their accounts temporarily disabled. From my point of view, social media platforms have political aspects; at the end of the day, they are businesses and they are often caught in between. They might advocate on some social topics, but again their platforms are very limited in the extent to which they can expose facts and issues that disturb their homogenous audience.

    Concerning ‘trust’, well, that’s another debate; in fact, a study done in Thailand showed that a lot of misinformation was spread in the name of freedom of speech, resulting in the State establishing laws requiring social media platforms such as Facebook or Instagram to set boundaries. Therefore, to put control of information and knowledge back in the hands of social media users and civil society, a crucial step the state must take is to impose transparency and accountability obligations requiring technology companies to disclose a certain degree of their curation practices and be more transparent about how their platforms function.
    Presently, mechanisms to enhance efficiency and transparency in online discourse provide a useful normative foundation for addressing fake news without infringing on free speech. This is a first step in the right direction. Most major platforms have already developed some form of solution to deal with fake news and established their own standards for moderating content. However, those features remain somewhat obscure, which makes it difficult to assess whether competing interests are being balanced. What are your thoughts about it?

    Ref:
    Anansaringkarn, P., & Neo, R. (2021). How can state regulations over the online sphere continue to respect the freedom of expression? A case study of contemporary ‘fake news’ regulations in Thailand. Information & Communications Technology Law, 1-21. https://doi.org/10.1080/13600834.2020.1857789

    Also do check my paper: https://networkconference.netstudies.org/2021/2021/04/26/from-new-york-streets-to-instagram-community-the-chronicles-of-body-positivism-movement-of-curvy-women-and-its-transition-to-social-media/

    1. Hi Ruby,
      I agree with you that fake news is quite the problem nowadays on social media. I also feel that if news were hand-picked by the social media companies and they decided what was real or fake news, there might be some bias. It is quite a tough situation for these social networking sites, as there is so much data and information going around that it must be very difficult to sift out misinformation without mistakenly taking out what may be real. I can’t think of a solution to this issue right now, but it is definitely something I would do more research on.

      Thank you for your insightful comment 🙂

  5. Hey there Luc,

    That was a good read, I enjoyed gaining insight into your view towards social media moderation and how it impacts our freedom of speech.

    My paper focused on cancel culture (you can check it out here – https://networkconference.netstudies.org/2021/2021/05/01/cancel-culture/). I also discussed social media and the limits of freedom of speech, supporting your argument. Social platforms no longer allow for complete freedom of speech, both due to moderation and due to the pressures of society surrounding judgment and the fear that you might “offend” somebody without any intent to do so. It is so easy to misinterpret someone’s tone or the meaning of a message in an online environment. This is dangerous and can lead to negative consequences.

    While moderation techniques are not perfect, I do believe it is more beneficial to have them in place rather than a free-for-all space where comments fly without monitoring.

    1. Hi Matthew,
      Thanks for your comment; I’m glad you chose to write about the topic of cancel culture. In fact, I wanted to elaborate a bit more on that particular subject but I was afraid it would not fit within the word count! Cancel culture is definitely an issue that needs to be talked about more. I think that it is wrong to “cancel” someone just because he/she has ideas or opinions that may be contrary to the “norm”, especially when being cancelled can cost one’s job and reputation.

      Cheers

  6. Hi Luc!

    Very fascinating as it is an ongoing controversy and something I have been looking into these days.

    I think the biggest challenge we face when it comes to social media and free speech is the legal one, where social media sites can simply get away with restricting speech by defining themselves as “private companies” (which they very much are), rather than global platforms (which also happens to be somewhat true) that have a near-monopoly on civic engagement online.

    What kind of solutions do you think could be applicable in this context? Anti-trust ones or governments of a country simply compelling the platforms to allow free speech? The latter seems much less practical, considering most governments don’t have absolute free speech anyway, and already compel the platforms to do the opposite in some cases (like China). How do you think we can move forward?

    1. Hi Anurag,
      I agree with you that the mix of social media and legal aspects is quite tricky, especially as you mentioned that each country has different laws. Honestly, if social media companies are able to create algorithms that can detect what is hate speech and what is being used for debate or discussion, then I would say that algorithms would be a good way to move forward. Although my argument is against the current algorithms, it is only because this technology has yet to be improved.

      Thanks for your comment

  7. Hi Luc,

    this was a really insightful read! I agree with your point about how free speech on social media doesn’t seem real at this point.

    I have noticed these days the term ‘hate speech’ can be thrown around at will online and get users into trouble. Community guidelines seem to be applicable to some but not to others. Shadow banning is also more common and users can’t seem to understand why they are banned in the first place.

    What is your opinion on shadow banning? Do you think it is fair for creators to be unaware that they have been banned in the first place?

    In my paper as well, I question the credibility of community guidelines and how they seem to work in synchrony with a hidden agenda as part of my debate about Social Commentary YouTube and its role in online activism

    https://networkconference.netstudies.org/2021/2021/04/27/social-commentary-youtube-performance-of-civic-agency-in-the-21st-century/

    I would love to have your take on the points I made.

    1. Hi Elodie,
      Thanks for your comment. About shadow banning, to be plain, I think it is wrong. If someone is shadow banned and has not been given a reason for the ban, it can be very unfair to that person and shows that the social media company has an element of bias. If someone is banned just because they might have commented something that offends someone, then there will be no room for discussion or debate. As I mentioned in my paper, Facebook, for example, claims that everyone has the right to free speech regardless of whether people agree with each other or not. If Facebook shadow bans users, then the company is not staying true to its own “Community Guidelines”.

      Hope this answered your question!

  8. Hi Luc,
    Overall I found your paper interesting, as the algorithms and flagging systems you discuss interest me, although personally my knowledge of the two comes more from the perspective of influencers, where they are utilised heavily in the process of cancel culture.

    Firstly, I agree with your statement that “Moderation, flagging systems, and failing algorithms are all reasons as to why social media free speech policies are a myth.”, as these systems’ inherent function is to remove posts, therefore hindering users’ ability to exercise free speech. I also mostly agree with your argument that flagging systems can be abused by groups of users to silence other users, a practice which is again common within cancel culture.

    On the other hand, I disagree with your argument that “It would be a better solution for social media companies to trust its users and communities to ignore and refute hateful messages on their platforms instead of trying to manage speech on their platforms using fallible algorithms that end up unfairly censoring those who express their ideas and opinions.”, as overall it is safe to assume that these algorithms remove a much larger quantity of actual hate speech in comparison to posts that are mistakenly taken down. But I understand that this point can come down to a difference in individual opinion on preferring the many over the few. I personally see the use of these algorithms as a better alternative, as while mistakenly taking some posts down is a problem, it is a small cost for the communal benefit to the large majority of users on the platform.

    1. Hi Brodie,
      I don’t have anything against algorithms, and I agree with you that they have proven to be quite effective in removing comments that can be abusive towards others. What I am trying to convey is that social media platforms are nothing without their users; therefore the users should play a greater role in the fight against “hate speech”, with more speech.

      Thanks for reading my paper and your comment
      Cheers
      Luc

  9. Hey Luc! Brilliant paper, very thought-provoking! I particularly liked this quote of yours: “With this knowledge comes the challenge for automatic hate speech detection algorithms on social media to separate hate speech from offensive language.” It can become such a blurred field, muddied further by the moderation of biased moderators. I was cautioned for using the word ‘delusional’ in a reply to a comment. The reported comment was not one of malice; rather, it was made within a debate and reported out of malice. The irony is that the only recourse was algorithmic, with no recourse through human interaction. It highlighted to me exactly what you are explaining here! I love that you suggest to “refute it in a compassionate approach”. I do fear, however, we may need much more than that. Thank you for such an insightful paper.

    1. Thanks Emma!
      You’re right, I agree that abuse online, like racial abuse, should have further consequences, e.g. a fine. It is quite alarming how racism online has affected many people, particularly in the sporting world, and this abuse can surely affect one’s performance. However, I also believe that “a more compassionate approach” would be a good start in the fight against online abuse, because being hostile just adds more fuel to the flames.

      Thank you for taking the time to read my paper and your comment!

  10. Hi Luc,

    Your paper was very well written and your references were great sources! I loved hearing your side to this argument as it was very interesting and informed and definitely something I have been aware of but never explored on a deeper level.
    I found the part of your argument interesting where you talked about some online communities clashing with each other and the idea that some people will ‘police’ bad behaviour to help inform what’s wrong or right. I have seen this first-hand on my social media platforms, and I find that it is usually blown out of proportion, with people quick to argue back. I suppose this adds to the argument about the lack of freedom of speech online, but it is interesting how these days trying to correct someone can be viewed as rude and nasty even if that was not the intention.
    I thoroughly enjoyed this article and can see how this toxic side of social media is always there amongst the good.

    Great job!
    Jasmine

    1. Hi Jasmine!
      Thanks for taking the time to read my paper and commenting! I agree with the fact that many times on social media constructive criticism can be viewed as rude. Imagine a world where everybody agreed with everyone…we would get nowhere! That’s why I believe that people should be able to debate and discuss subjects that may be sensitive on social media platforms. Surely through debate one could be inspired to do further research on a particular subject. If people are censored just because their opinion contradicts the majority, then there would only be one side to a story. Even if a person makes a comment that is ill-informed, it sparks debate and gets people to do research and reply to such comments with facts to back them up, which may lead to changing that person’s mind!

      Cheers!
      Luc

  11. Hi Luc,

    Thanks for sharing this paper and your insights on this topic, it was a very interesting read and not something I can say I have thought a lot about. While I agree that there are clearly some issues with the algorithm and the way it filters data and key words, I do think there is value in hate speech detection especially when it comes to racism, as this should not be tolerated on any platform.
    I can see in your conclusion you have offered an alternative solution where social media companies can “trust its users and communities to ignore and refute hateful messages”. Although I agree it would be nice if all users could be trusted on the internet, there have been too many examples of incidents which prove that users cannot always be trusted. Some countries, like Germany, actually impose financial penalties on social media companies if they fail to reduce the volume of abusive content on their platforms. See more information about this here if you are interested: https://www.latrobe.edu.au/nest/can-combat-rise-online-hate-speech/

    You have used the example of Meghan Murphy being banned from Twitter for her comments on transgender rights, which I perceive as hate speech by definition, and I completely agree with the actions taken to remove her from the platform. I am curious as to whether you came across any other examples where people from minority groups were silenced by social media platforms because of the algorithm?

    Again, thank you for an insightful read Luc, I look forward to your response.

    Meg

    1. Hi Megan,
      Thank you for your comment. I completely agree with you about racism being a major issue on social media platforms, and I do not object to the use of hate speech detection algorithms as a means to help prevent racial discrimination. My point is that these algorithms could potentially jeopardize formal discussion or debate on various issues if they end up censoring people who want to express their opinions. It is true that it may seem like a daft idea to trust a platform’s users to ignore and refute such comments, but it seems the lesser of two evils compared to trusting government officials or powerful corporate entities to decide what can and cannot be said on these platforms.

      About the banning of Meghan Murphy from Twitter, I find that it wasn’t fair to ban her, because she had strongly expressed her opinion about transgenderism and it was her right to do so. If you look at a few posts she made on Facebook concerning her ban, you can see that her motive for voicing her opinions was to be able to have a discussion with those who do not share the same opinion as her.

      Thank you again for your comment and I’m glad you offered a different perspective on my paper.

      Cheers

  12. Hello Luc!

    I really enjoyed reading your paper and it was very informative. I can really understand this when I look at the way hate speech in Mauritius cannot be regulated due to the language: the algorithms do not detect all the foul language and hate speech in Creole! And I think that in other countries with other languages and dialects it is also the case.
    Those hate speech detection algorithms can, as you mentioned, be a disadvantage for freedom of speech when people try debating taboo subjects such as rape, murder, or simply sex education online.
    Well, after writing your paper, do you have any suggestions for overcoming this situation?

    I also encourage you to check out my paper on, “Black Natural Hair Vloggers on YouTube Are Empowering Their Audiences’ by Encouraging Them to Embrace Their Black Identity.”
    https://networkconference.netstudies.org/2021/2021/04/26/black-natural-hair-vloggers-on-youtube-are-empowering-their-audiences-by-encouraging-them-to-embrace-their-black-identity/#comment-532

    I hope to hear from you soon.

  13. Hi Luc,

    First off, I agree with your notion that ‘hate speech’ is a hard thing to define and that leaving it up to AI is something of a recipe for disaster. I also agree with your discussion around the dangers of algorithms and flagging when they are used for the oppression of opposing views. However, I had some thoughts in regard to some of the more hard-lined opinions shared here.

    Firstly, you mention that: “With these results we can confirm that hate speech detection algorithms are inaccurate, and risk censoring users’ posts and hindering free expression on social media platforms.” I think this is potentially quite a large step to take. You mention that these algorithms miss subtleties of hate speech as they are only focused on key words that are associated with such hate speech. To say then that this can lead to the propagation of censorship of non-hate-speech posts would be a contradiction, wouldn’t it? How can the algorithm block posts containing no hate speech if it functions by finding words that are explicitly connected to hate speech?

    This notion of such algorithms contributing to the problem is continued further when you say “… these restrictions have proven to fail not only by increasing hate speech, but in collateral have prevented people from freedom of expression”. I understand the collateral part of this statement, as you mention that opposing parties can use flagging systems to bring down posts of their opposition. However, the notion that hate speech has been ‘proven’ to be increased via these algorithms strikes me as being quite fraught with bias. ‘Proven’ being the key word here, but not followed by any reference, nor any statement which supports the actual mechanism that would make this true. How exactly do such algorithms ‘increase’ hate speech?

    Thanks for the paper, was very thought provoking!

    1. Hi Jordan!
      Thank you for your insightful comment. You are right, I should have added a scholarly source to support my point about censorship algorithms increasing hate speech. I guess what I was trying to say was that these hate speech detection algorithms can possibly prevent people from debating taboo subjects on the platform because of the selection of words that are considered “hate”. However, if these algorithms are improved to detect in which context these “hate” words are being used, e.g. in debate or discussion, then I call it a win for these social media platforms.

      Cheers

  14. Hey Luc

    Really enjoyed reading your paper!

    I just learned that the First Amendment has no ties to social media and only applies to the government. I completely agree with your statement at the end where you mentioned that social media should put more trust in their users.

    If you have time, feel free to check my paper out: https://networkconference.netstudies.org/2021/2021/04/28/how-social-media-such-as-twitter-and-discord-can-help-individuals-with-mental-illness-and-build-communities-online/

    Thanks a lot Luc!

  15. Hello Luc,

    When you wrote that social media should trust their users more, I did agree that there is a certain level of trust some content creators deserve, but having a structured framework is a good way to maintain a certain amount of stability too.

    GREAT WORK THOUGH!

    Cheers!

  16. Excellent paper Luc Samuel! Congratulations

    It certainly is worrying to see all the major digital platforms implement viewpoint-based censorship while at the same time preaching about inclusion and diversity.

    Clearly, social media sites now exercise an unhealthy amount of power over the global psyche, being able to curate information and censor debate about public interest topics.

    I found your paragraph about hate speech very insightful and agree with you when you wrote that social media sites need to trust their users more.

    1. Thanks David! Yes, and as mentioned in the paper, social media platforms claim to be “unbiased”, but sometimes bias is inevitable, especially under pressure! Thanks for reading my paper and for your comment!
