This paper discusses how identity, in an online sphere, can be attacked and how that can flow into real life, going into such instances as the Christchurch shooting. It explores how different groups interact with and continue to function on social media sites. It also explores the actions taken by Facebook and Twitter to address the flood of hate speech online.
How online Islamophobia goes unmonitored and the toll it takes in reality.
9 thoughts on “How online Islamophobia goes unmonitored and the toll it takes in reality.”
Hi Anika 🙂
Thank you for your beautifully written conference paper, I really enjoyed this!
I liked how you mentioned Facebook wanting the “real identity” of people, because this is the scary truth of social media, and especially of Facebook.
Facebook has recently become my least favourite platform due to some of the decisions it has made, and you can see exactly the power Facebook has over individuals’ lives.
Thank you for this well written paper, I really liked it and found it interesting!
Georgia Wiley :))
I loved reading your paper, this is a very interesting and important topic. The internet may sometimes be a safe space for people to explore their identity, but it can just as much be the opposite. Thus, it is a tricky place to navigate.
I agree that online media has made attacks on minorities more prevalent. As you mentioned in your paper, social media provides a platform for those who are racist, homophobic or misogynistic to express their offensive views. These views are then only further reinforced by like-minded individuals.
On the other hand, I feel that this aspect of social media can be utilised in a positive way. A study done by Gupta and Ariefdjohan on trends in antidepressant use found that the increase in antidepressant use matched the growth of posts about antidepressant use published on Instagram from 2010 to 2017 (Gupta & Ariefdjohan, 2020). This trend, along with the study results, suggests that people are using social media to share their experiences with mental illness and proper treatment, encouraging others in similar situations to get the help they need (Gupta & Ariefdjohan, 2020).
Widespread access to social media also gives minorities the chance to express their views and call out such behaviour. More and more people of colour are coming online to talk about their struggles and to encourage those with certain privileges to advocate for and support them. They are able to make others see their perspective and become more aware and tolerant.
In a sense, these racist/homophobic/misogynistic people are being exposed for their harmful behaviour and being called out and held responsible for their actions. In your opinion, is it then a good thing that these awful people are exposing themselves and getting caught?
However, I absolutely agree that social media sites such as Facebook and Twitter should get better at managing hate speech online. It is so scary to know that many Islamophobic posts are being created on Facebook and sparking real violence. I was shocked to see that 349 instances of hate speech directed towards Muslims were found by The Online Hate Prevention Centre after looking over more than fifty Facebook pages. The fact that this is allowed is a real cause for concern.
Looking forward to hearing back from you Anika, you did great on this paper! If you’re interested in looking at a more positive view on social media, particularly how it can benefit mental health, please check out my paper here:
Hi Anika! I found your paper fascinating!
I have personally never seen any hate speech or racism on any of my social media platforms before. Do you think this is because these groups are more private on social media, or does it depend on your friendship circles in real life? While reading your paper, I also found myself wanting to think of a solution to the problem. The only thing I could really think of is governments stepping in and forcing social media platforms like Facebook and Twitter to police their sites better. Do you think that could be a possible solution to the problem?
An interesting read which raises some great points and looks at the bad side of social media rather than the good. In regards to your argument being negatively positioned towards social media, do you think that while there has been such a large amount of Islamophobia, people are beginning to become more aware of the situation? I would like to think that online networks and social media have led more people to awareness of social issues such as Islamophobia, BLM etc., which has then created more action against those speaking or acting out on these issues. Would love to hear your opinion!
Within my research, I did find instances of people becoming more aware of online hate speech, although alongside this it is usually those being targeted who are doing the advocating. After the Christchurch shooting, there was a much higher proportion of people and public figures saying that this is unacceptable in today’s day and age. Although there are many bad things happening online, there has been a surge of minorities coming forward and sharing their experiences to further validate the issue and make it more obvious that it is happening. I believe it has become much harder for this hate speech to be validated online, but because of the echo chambers, these groups have adapted so that they are able to continue their crimes.
I found your paper particularly interesting, as the regulation, or lack thereof, of hate speech online is a subject that users often have to grapple with daily. I was really intrigued by your discussion of “Alternative for Germany”, as the correlation between outages and the lowering of hate crimes was staggering. To see the direct result of social media on real-life crimes was rather confronting, as it demonstrates just how intertwined these issues are.
The lack of effective monitoring done by these platforms themselves was something that really struck me. The fact that these goliath companies rely on users to report content as their primary ‘awareness’ method was shocking but not surprising. It is as though these platforms do not want to be blamed for not having better mechanisms in place to regulate the content, so they place the responsibility with the users, which morally, I do not agree with. If these companies are providing a space where such speech can occur, it is up to them to find ways to deal with it. It feels lazy of these platforms to take this position of ignorance and say that “they cannot stop what isn’t reported”. Especially given that, as you mention, relying on other users to notify the platform of such content will have only limited success given the echo chambers that these users exist in online.
Something you also discuss is how Facebook is heavily focused on one’s ‘real’ identity, which is particularly interesting regarding this issue, as even being identifiable is not a deterrent for users posting hate speech or joining hate groups. It would have also been interesting if you had explored how platforms that are not so widely adopted by the mainstream and instead encourage anonymity, like 8chan as you mention briefly, are used by hate groups, and whether monitoring that content presents the same struggles, or even more.
There are several aspects of social media that these platforms tend to leave up to other users, such as digital death. These platforms cannot tell if someone has died, thus a profile will continue to remain available unless someone goes out of their way to inform the platform, again, putting the responsibility with the users. This is something I wrote about in my paper if you would be interested in reading it too! https://networkconference.netstudies.org/2021/2021/04/26/when-are-we-truly-dead-online-the-complexities-of-finalising-death-in-the-online-context/
I’ll definitely give it a read, and thank you for your comment. I found the amount of reliance these companies place on users to ensure hate crimes aren’t happening highly concerning. Most concerning, though, was the fact that these echo chambers can essentially block such groups from the rest of the world, which undermines the effectiveness of the reporting algorithm.
This is an interesting topic for a paper, but I found it very hard to read and comment on in the absence of an actual online post with a comment button below. It took me quite a while to hunt for this and figure it out (I clicked on your name in the end), and so you might want to edit this to be more visible and accessible.
The paper itself highlights a contemporary issue that I found intriguing to read about. I noticed that you mention “social media” very broadly as an element that facilitates hate crimes and hate speech, giving Facebook and Twitter as examples. I wondered if there are other platforms that hate groups use to promote their message and also whether there are also online activist groups that similarly promote tolerance to counter such movements. Do you know?
I am not actually sure about the different groups. I did mention 8chan, a popular site used by hate groups, although mostly what they do is make pages on social media sites such as Facebook and Twitter to form communities of like-minded people; these areas go unmonitored, as algorithms and reporting can be faulty. I didn’t do much research on the communities of people who advocate against hate crimes, but my assumption would be that they do the same, forming groups in protest, and that to combat the amount of hate crime online, they would be the ones reporting the content to the social media platforms.