Communities and Social Media

“Social networking sites like Facebook and Reddit have amplified the voice of alt-right extremists and allowed for faster spread of misinformation.”

Social networking sites have changed the way we view and access information on the internet. It is now easier than ever to communicate, share opinions and forge relationships with like-minded people across the world. The internet has become the most powerful tool available to organisations of all kinds for sharing their message and expanding their reach. Unfortunately, this has also meant a surge of hate speech broadcast across social networking platforms. It is now easier than ever for extremist groups to spread hate speech and recruit new members through propaganda posted online. Arguably the biggest change in recent years has been the rise of the ‘Alt-Right’. Two key platforms that have made this change possible are Facebook and Reddit. Both sites have distinct features and content policies that make them effective platforms for users to spread and consume extremist content. This essay analyses the key features and policies of Facebook and Reddit that have allowed them to amplify the voice of the Alt-Right and enable the rapid spread of misinformation online.

To better understand how the Alt-Right use social networking sites to promote their message, it is essential to understand how the movement came to exist in the first place. Essentially, it was born after the election of Barack Obama as US President in 2008 – at the time, a number of conservative representatives were unhappy about the result, and blamed the election of the USA’s first black President on the failure of so-called “mainstream conservatives” to prevent it. These conservatives represented groups such as the Ku Klux Klan, Nazis and Neo-Confederates, all of which are regarded as ‘extremist’ in their political stance. They essentially suggested that if the existing conservative (right-wing) politicians were unable to prevent the left from dominating politics, then there was a need for an ‘alternative right’ with a stronger conservative message (Sangillo, 2019). It was not until 2012, after a few key events, that the Alt-Right movement really started to gain traction online. One key event was the 2012 shooting of Trayvon Martin by George Zimmerman. This event is widely credited as the beginning of the ‘Black Lives Matter’ (BLM) movement, which saw thousands rally for change after Zimmerman was acquitted. The counter-argument to the BLM message drew huge support from the conservative, right-wing side of politics, which in turn gave exposure to Alt-Right leaders and ultra-conservative activists who shared their opinions online. Between 2012 and 2015, the number of Alt-Right posts across Facebook, Twitter and Reddit increased every year. In 2016, however, there was an explosion of Alt-Right, ultra-conservative content posted on social networking sites as a result of the election of Donald Trump (Shahin & Ng, 2020). This explosion of content is largely what led to the Alt-Right being defined the way it is today.

In today’s context, the Alt-Right manifests itself primarily online, which makes estimating its strength difficult (Cook, 2016). Alt-Right leaders use social networking platforms to organise rallies in private groups, which means they are often hard to keep track of. However, as the movement became more and more prevalent online, particularly in the US, it became abundantly clear that social networking sites were the foundation on which the entire movement was built – particularly after the 2016 election of Donald Trump as US President. There is no doubt that Trump became an icon of intolerance and almost normalised white nationalism. Numerous Alt-Right forums across Facebook and Reddit are centred around Trump and use his style of politics as justification for their own brash, often offensive content (Sangillo, 2019). Essentially, he became a figurehead for a movement that had previously lacked a real leader. Shortly after his election, and the coinciding increase in conservative content being posted on social networking sites, Facebook began to record that millions of young people were either leaving Facebook altogether or simply deleting the app from their phones. To begin with, Facebook theorised that this was because other platforms were becoming more popular with young people and Facebook was falling behind with the younger audience, but after a short while it became clear that young people were leaving Facebook because it was becoming far too right-wing, whereas other platforms such as Instagram and Twitter leaned further left (Bilton, 2020). Hard evidence of this divide became clear in early 2021, when Twitter and Snapchat banned Donald Trump from their platforms altogether, as they believed his posts incited violence and division. Meanwhile, Mark Zuckerberg, CEO of Facebook, publicly refused to censor or fact-check any of Trump’s posts, claiming that he did not feel comfortable doing so because he is “not the arbiter of free speech” (Cook, 2016). Further to this, posts from Alt-Right activists and other ultra-conservative figures and groups such as Ben Shapiro, ForAmerica and Dan Bongino regularly feature in the daily top 10 most-shared Facebook posts in the USA (Facebook’s Top 10, 2021). This is a clear indication of the dominant political persuasion present on Facebook. What this essentially means is that Facebook have a vested interest in pleasing conservative groups, as these are the groups giving Facebook the most financial support (Timberg, 2021). It becomes especially clear that Facebook, as an organisation, makes far more allowances for the Alt-Right to post and consume content, which amplifies the voice of Alt-Right extremism and allows for the faster spread of misinformation.

Facebook have long struggled to draw a line in the sand between what is and isn’t acceptable on their platform – to do so, according to them, would violate users’ right to free speech (Sangillo, 2019). Facebook and its policies have been questioned numerous times – sex workers are regularly banned for posting ‘offensive’ content, yet self-proclaimed Nazis are allowed to continue spreading hate (Are, 2021). Many believe this is a fundamental failure of Facebook’s censorship policies – how can one user be allowed to regularly post antisemitic images and content with the sole purpose of causing offence, while another is banned for posting a topless image of themselves? There has been plenty of debate about why this is the case, and the simple answer could be that nudity is far easier to define than hate speech. One prime example of where Facebook has failed to draw the line is its policy toward Holocaust denial. Anyone who denies that the Holocaust happened is clearly spreading misinformation that is harmful and offensive, and that should therefore be censored by Facebook – however, this has not been the case. Facebook has long refused to censor posts on this topic, as they generally do not include any specific insults or slurs that can be deemed offensive; rather, they simply suggest that what is written in history, and the first-hand accounts of events that took place during the Holocaust, are inaccurate (University Wire, 2020). Further to this, Facebook decided to include far-right news publications in its ‘Facebook Watch’ section – a section of Facebook dedicated to what it describes as “high quality journalism”. These sources include sites known for blatantly including false information and offensive language in their reports and articles, such as Breitbart, Red Ice TV and VDare (Wong, 2019). Facebook spokespeople are directly quoted as saying that in order to create a reliable, unbiased news source, Facebook should include “content from ideological publishers on the left and right” (Wong, 2019). This is despite the fact that a large portion of these so-called ‘news sources’ continually post content containing factually incorrect statements for the pure purpose of causing offence. Facebook’s hesitance to ban users and censor publishers that are openly extremist in their political views thus amplifies the voice of alt-right extremists and allows for the faster spread of misinformation.

Reddit also plays a significant role in allowing extremist ideology to spread. In general, Reddit takes a much more hands-off approach to moderation than other platforms such as Facebook, Twitter or Snapchat (Zakrzewski, 2020). Part of the reason for this is the way the platform is structured, which differs somewhat from other social networking sites. It is much more difficult to find Reddit users who are posting extremist content and block their activity than it is for Facebook or Twitter users (Lagorio-Chafkin, 2018). This is because Reddit is a platform where users create anonymous profiles and use aliases to access content. Further to this, users accessing extremist content have created their own dialect of words and phrases designed to make it even more difficult for algorithms or moderators to pick up their activity (Sonnad & Squirrell, 2017). The platform also uses ‘subreddits’, which are essentially focus groups for single subjects. This means that anyone with a similar view can easily access content about that specific topic and engage with like-minded users. There is no limit to how many of these groups can be created, and each subreddit is self-moderated, meaning that the Reddit community is essentially meant to police itself. This, obviously, opens the door to a lot of questionable content. In 2015, Reddit was called out by the Southern Poverty Law Center as being home to “the most violently racist” content on the internet, citing a “constellation of antiblack forums” (Tiffany, 2020) as the reason for this assertion. This, combined with a recent study which found that right-leaning forums contained three times as much hate speech – racist, sexist, religious and homophobic attacks – as left-leaning groups (Zakrzewski, 2020), makes it clear that Reddit can be, and has been, used to spread extremist content. Although Reddit has taken steps in recent months to reduce the amount of Alt-Right content visible on its platform, its lack of effort to do so in the past is what has essentially allowed violent, racist content to fester and grow out of control for years. This is a clear indication of the way Reddit has amplified the voice of alt-right extremists and allowed for the faster spread of misinformation.

In summary, there is no question that Facebook in particular hold a level of responsibility for the drastic increase in extremist content seen online, given their refusal to draw a line in the sand and define the point at which content becomes classified as inappropriate or inaccurate. Reddit likewise hold a significant level of responsibility for providing an anonymous platform for users to spread hate and incite violence. The very layout of their platform invites this kind of use by providing a space for like-minded individuals to share ideas and opinions. As a result, it can clearly be argued that social networking sites, in particular Facebook and Reddit, have amplified the voice of Alt-Right extremists and allowed for the faster spread of misinformation.

References:

Are, C. (2021). Facebook’s free speech myth is dead – and regulators should take notice. The Conversation. Retrieved April 3, 2021, from http://theconversation.com/facebooks-free-speech-myth-is-dead-and-regulators-should-take-notice-153119

Bilton, N. (2020). How Facebook Became the Social Media Home of the Right. Vanity Fair. Retrieved April 3, 2021, from https://www.vanityfair.com/news/2020/06/how-facebook-became-the-social-media-home-of-the-right

Cook, J. (2016). US election: Trump and the rise of the alt-right. BBC News. Retrieved April 4, 2021, from https://www.bbc.com/news/election-us-2016-37899026

Facebook’s Top 10. (2021, March 30). The top-performing link posts by U.S. Facebook pages in the last 24 hours are from: 1. ForAmerica 2. Franklin Graham 3. Ben Shapiro 4. Dan Bongino 5. Dr. Sasa 6. Dan Bongino 7. Fox News 8. ForAmerica 9. Ben Shapiro 10. Sean Hannity [Tweet]. @FacebooksTop10. https://twitter.com/FacebooksTop10/status/1376921674424053760

Free speech or hate speech: New Facebook policy serves as step in right direction. (2020, October 23). University Wire. https://link.library.curtin.edu.au/gw?url=https://www-proquest-com.dbgw.lis.curtin.edu.au/wire-feeds/free-speech-hate-new-facebook-policy-serves-as/docview/2453828628/se-2?accountid=10382

Shahin, S., & Ng, Y. M. M. (2020, January 7). White Twitter: Tracing the Evolution of the alt-right in Retweets, 2009-2016. https://doi.org/10.24251/HICSS.2020.296

Sonnad, N., & Squirrell, T. (2017). The alt-right is creating its own dialect. Here’s the dictionary. Quartz. Retrieved April 4, 2021, from https://qz.com/1092037/the-alt-right-is-creating-its-own-dialect-heres-a-complete-guide/

Tiffany, K. (2020, June 12). Reddit Is Finally Facing Its Legacy of Racism. The Atlantic. https://www.theatlantic.com/technology/archive/2020/06/reddit-racism-open-letter/612958/

Timberg, C. (2021). How conservatives learned to wield power inside Facebook. Washington Post. Retrieved April 4, 2021, from https://www.washingtonpost.com/technology/2020/02/20/facebook-republican-shift/

Zakrzewski, C. (2020). Analysis | The Technology 202: New study reveals extent of hate speech on Reddit in right-leaning forums. Washington Post. Retrieved April 4, 2021, from https://www.washingtonpost.com/news/powerpost/paloma/the-technology-202/2020/06/15/the-technology-202-new-study-reveals-extent-of-hate-speech-on-reddit-in-right-leaning-forums/5ee6ab4c602ff12947e8c19a/

17 thoughts on “Social networking sites like Facebook and Reddit have amplified the voice of alt-right extremists and allowed for faster spread of misinformation.”

  1. Good morning Taj,

    Thank you for sharing your paper; its message seems to be somewhat the opposite of mine. My paper on ‘Virtual Vs Traditional Communities’ focuses on the benefits of Facebook, so maybe they complement each other in a way by highlighting the pros and cons of the platform?

    It is interesting that your research has found that Facebook almost seems to favour alt-right groups and extremist speech. I have certainly come across a good deal of racism and alt-right speech on the platform. It is my social network of choice because of its popularity, and as a small business owner, it’s essential for advertising and reaching the greatest number of clients. It is also home to my neighborhood watch and community groups for my small town. While I don’t mix with groups that share and spread alt-right and extremist content, when I do see it pop up in my groups I find that if Facebook has not removed the content themselves, moderators will, or the more left-leaning voices will drown out the offender.

    If Facebook and Reddit will not police their platforms, then I wonder if user-moderation might be the key here? I might be considered a bit of an optimist, but I would like to think that the majority of people are not alt-right leaning and would report and stand against alt-right rhetoric when they see it. Due to the sheer size and ubiquity of Facebook, I highly doubt that users jumping ship en masse is likely. Admittedly, I was under the impression that much of the alt-right movement migrated to Parler in protest of tightening restrictions on Facebook and other platforms. Have you found much of an alt-right presence on Facebook over the course of your research?

    Thank you again for sharing, your paper was a very informative read.

    Here’s a link to my paper that you might find interesting:
    https://networkconference.netstudies.org/2021/2021/04/26/virtual-vs-traditional-communities-the-benefits-of-facebooks-virtual-communities-and-how-they-differ-from-smaller-traditional-communities/

  2. Hey Taj! What a fascinating and insightful read! There is a real dilemma when it comes to regulating social platforms. It makes me ponder whether we are too reliant on computers and algorithms to monitor and manage such networks, though I admit, such is the enormity of social networks that anything else would be virtually impossible. Sadly with it comes the spread of misinformation that seems to be permeating society. In the real world, we are policed by real people; social networks rely on businesses setting community guidelines. Should these platforms shoulder more responsibility? Are we expecting too much, and do we need a more defined policing system for all public platforms, a level playing field which they all should be following? There have been a few papers questioning the negatives of social networks, given the extremes we have seen come of them. I agree with your assessment in regard to Alt-right groups. The days of the KKK had seemed to be fading, but now there is a resurgence of such political factions. Social media has sadly allowed for a much faster spread of misinformation, including that of the alt-right.

  3. Hello Taj,

    This is a great conference paper which enlightened me regarding the ‘Alt-right’ movement and what they are really about. Like many others who have commented on your paper, I had heard of the Alternative Right but didn’t fully understand why they were formed or what they really stood for. Your paper made it abundantly clear what they are about, and that the election of Trump gave this political faction a far greater foothold in the community, which I believe is to the detriment of the public as a whole. I agree with you that social media platforms such as Facebook and Reddit have a lot to answer for when it comes to the spreading of misinformation and hate speech, and I believe the only way to slow their influence is for users to opt out of these platforms and possibly find more suitable social media sites, which will then hit their bottom lines.
    I must admit to never being a big user of social media and the points made in your paper are just some of the reasons that reinforce my reluctance to change that fact anytime soon.
    Thanks for a well written and well researched paper Taj.

    Regards,
    Bernie

  4. Hey Taj,

    Interesting insight into the alt-right movement online. I have been pondering the ramifications of censoring free speech with ill-defined policies, and how doing so may alienate and disenfranchise certain unstable elements of our society. What may happen if a large group of people feel like they aren’t being heard, and would that be as bad as allowing expressions of intolerance to be broadcast to everyone?
    My other thought is: who should moderate this legally? The government, which could have its own agenda (like: https://www.wsj.com/articles/india-accused-of-censorship-for-blocking-social-media-criticism-amid-covid-surge-11619435006);
    The platform itself, which is likely to be more liberal or self-serving;
    Or the other users, with some sort of democratic voting process, which may be subject to dislike mobs or review bombs like those seen on YouTube videos (see the video disliked 19 million times: https://www.youtube.com/watch?v=YbJOTdZBX1g&t=2s).

    Thank you for allowing me to read your paper,

  5. Hi Taj, great essay, very insightful and interesting. I mostly agree with the discussion as a whole and that both social network platforms have allowed the alt-right movements to flourish. Regarding Facebook’s unclear stance, though, I would have to disagree somewhat. The reason for this is that big tech companies need to adhere to various international laws, which also cover the distribution of pornographic pictures and sexual transactions. Having lived in a few different countries, I could immediately see a problem with both facts: that the user is a sex worker, who would then be immediately flagged for content moderation because of possible transactions, and that the user posts nude pictures, which are illegal to show to minors under 18 in most places. So you are correct that sexual themes are more clearly defined than hate speech, but the reason sex workers aren’t allowed to post nude pictures might actually have some layers for discussion and interpretation. The Washington Post did an article on Google’s stance on showing political boundaries – another instance of a big tech company having to adhere to the current political climate. Do have a read when you have a little bit of time! It’s fascinating if nothing else – https://www.washingtonpost.com/technology/2020/02/14/google-maps-political-borders/

  6. An interesting read Taj and very relevant to current political practices. I know you focused on the alt-right and how it came to utilise social media for political influence, but I wonder whether we can assert that it is only the alt-right which does so successfully? Certainly Instagram in recent years has felt dominated by left-wing hashtags and socio-political movements, with creators/organisations who do not get involved often being called out as racist/homophobic etc. Perhaps this has to do with better human rights standards and more awareness around social issues, but I wonder whether different platforms have become associated with either the right or the left because of their tendency to become echo chambers once dominated by one side of the coin. In this way, I wonder how it is that both Facebook and Instagram are owned by Facebook, and yet the political spectrum on each is so different and seemingly governed by different ideals and regulations?

    Thank you for sharing your paper it was well written and researched and certainly made me think!

    Jessica

    1. Hi Jessica,

      Thanks for taking the time to read my paper! I think there is no denying that the Alt-Right aren’t the only group to utilise social media/networks to spread their political message; they just seem to be the most influential in recent years (Trump MAY have something to do with it). There is definitely a counter-movement of left-leaning extremists, though. The ‘antifa’ movement is essentially the counter-movement to the Alt-Right, and definitely has a growing presence online. The difference between the two, from my very brief research into both movements, seems to be the sheer violence of Alt-Right content when compared to antifa. There are huge differences in the level of violence and downright offensive content, which is probably why we hear more about the Alt-Right.

      As for your question about why the political spectrum is so different between platforms, despite them being owned by the same entity, I think it’s purely a reflection of the users. Instagram tends to attract younger people, and Facebook tends to have an older demographic. Each platform then has a vested interest in keeping its users, and therefore employs an algorithm that caters to what they want to see. I don’t think there’s a lot more to it than that!

  7. Hi Taj,

    I thoroughly enjoyed your paper. In fact, I had considered this myself, even going so far as to consider using Reddit as a case study for how the Alt-Right uses social media to perpetuate their ideologies. One thing I had not considered, however, was how the partisan divide was being played out through different platforms due to their respective viewership, e.g. Facebook and Reddit (right) vs Twitter and Instagram (left). I also like your inclusion of the BLM movement as the catalyst of Alt-Right activity, as it situates the movement’s activity within a timeline.

    My question is, is the rise of Antifa (radical left) mirroring that of the Alt-Right?

    I think there is a great amount of crossover in our work, and your research fills the gaps of how the alt-right community interacts in an online setting. Give it a read https://networkconference.netstudies.org/2021/2021/04/28/post-truth-in-colour-online-networks-and-the-breakdown-of-objective-standards-of-truth/#comment-1063

    Cheers,

    Danny!

  8. Hey Taj,

    This is a really interesting paper you’ve written here, and I think it deals with the important question of how much responsibility social media sites have in regulating their content. Correct me if I’m wrong, but you seem to be arguing that Facebook and Reddit need to be doing more to censor content on their sites. While I tend to agree for the most part that some regulation is needed on these sites, I do get nervous about allowing private companies to have too much power in controlling public conversation. Like you say in your essay, hate speech is hard to define, and I think in some cases bad ideas should be debated against and not just blocked. Do you think there may be a problem with giving private companies too much control over the public conversation?

    It was also interesting to read your discussion about how Reddit acts as a popular platform for the alt-right to spread their ideas. In my essay, I discuss how the incel (involuntary celibate) community radicalise their members with misogynistic ideologies. Reddit was one of the main platforms they used to engage with each other until their thread was banned. Whilst you can find them in some threads on Reddit, they now mainly use 4chan and their own website to communicate. If you get a chance have a look at my paper, I think it relates to your essay a lot in the sense that it discusses how the internet is being used to spread toxic ideologies.

    Here’s the link to my paper: https://networkconference.netstudies.org/2021/2021/04/27/misogynistic-radicalization-of-users-in-the-online-incel-community/

    Cheers.

    1. Hi Cameron,

      Thanks for taking the time to read my paper! You raise some interesting points. I think your question about whether or not there is a problem with giving private companies too much control over the public conversation is one that doesn’t have a definitive answer. Facebook clearly have a vested interest in keeping content on their platform, as removing too much content can drive users away to other platforms, but on the other side of the coin, allowing offensive content to stay on their platform can lead to users being driven away as well, so they are faced with something of a conundrum in that sense.

      Of course, issues arise if Facebook exert power by censoring all those that oppose their core beliefs, but issues also arise if they do nothing as well. There is no singular answer to what should and shouldn’t be allowed to be posted – at the end of the day, everyone is entitled to their opinion. I agree with you that some topics should be discussed and debated – I personally think there is no issue with a constructive conversation about potentially offensive topics. I think Facebook recognise this and have gone down the route of slapping a warning on posts that may contain potentially inaccurate information, with the intention of allowing conversation to take place, yet still making clear that what’s written/said is potentially not true. Again, it’s a tricky problem for Facebook to handle, and one that doesn’t have a singular answer, but doing something is better than doing nothing.

      I read your paper, and can definitely see the similarities in regard to the way ideology spreads in both the Alt-Right and Incel communities, and the overarching theme of violence. I found your paper insightful and well researched. It is disturbing to think and learn about the way online behaviour can translate into real-world violence, as it has with both of our topic groups.

      1. Hey Taj,

        It really is a tricky question of how much power we give to social media companies isn’t it? They are similar to traditional media companies in the way they can steer public discussion yet they don’t have to claim the same responsibility as they rely on user-generated content. I tend to agree that some regulation is definitely needed, it’s just hard to say exactly how much.

        Thanks for reading my paper as well, I thought you might find it relevant to your discussion.

        Cheers.

  9. Hi Taj,
    It was great to read something that aligns with some of the information within my paper “Radicalisation and Social Media”.

    I really enjoyed reading your paper, as it gave me an entirely different spin on the approach I had taken in my own paper. It was also very interesting to read about the research on the political placement of these various social media platforms. My paper tends to describe the influence the likes of Twitter had on Trump’s presidential campaign in 2016, and also how social media platforms have the ability to be used for the incubation and recruitment of radicalised individuals to extremist organisations. But my paper lacks detail as to why these platforms are predominantly targeted by Alt-Right radicals, other than their ability to disseminate hate speech and misinformation on an influential scale. Your paper has given me some more evidence and clarity on why Alt-Right radicals choose certain platforms.
    What do you think of anonymity within social media platforms? Many experts say anonymity is quickly becoming a thing of the past. In fact, it is Mark Zuckerberg who is leading the charge in bringing real-time identity to social media.
    Let me know what you think.

    Cheers,
    Nathan

  10. Hey Taj,

    Great job on your paper. I thought it was fascinating and really well written.

    You made a great point that social media platforms like Facebook find it hard to distinguish between genuine news and misinformation. In 2021 the line between news and propaganda is as blurred as the line between censorship and free speech. It is overwhelming to think about how we could stop the spread of misinformation, but I agree that the solution must be closely linked to the economic benefits of these sites.

    Do you think that this community was formed authentically, where users seek out membership, or do you see Facebook’s algorithm practices reinforcing these communities? I must admit that I occasionally see Ben Shapiro videos when scrolling on Facebook, even though I do not share any of his ideology, don’t follow his page and have never liked any of his videos. I wonder if the platform is as much to blame for radicalization as the people who create the content.

    Well done on your paper. I really enjoyed it.
    If you’re interested in the formation of the ‘virtual loungeroom’ by reality television audience on Twitter as a third place, head over to my paper and let me know what you think https://networkconference.netstudies.org/2021/2021/04/26/turningtvonline/

  11. Hi Taj,

    This is an interesting read and well-researched. I had heard about the alt-right movements but hadn’t realised that Facebook and Reddit made it easier for their hatred to be spread. I had realised how Donald Trump had used Twitter to fan the flames of the group, but your paper showed another side to the issue.

    A question I thought of while reading this is, do you think Facebook will ever do anything to stop these groups? And if they do, what other platforms do you think they would use?

  12. Hi Taj,

    This was a very insightful paper about how social platforms like Facebook have become a place for alt-right extremists. It is easy to forget that not all information found on social media is correct, as there is a lot of hate speech and misinformation. I liked how you gave a bit of background information on how the Alt-Right started and how the term is used today. The examples you used were great points, as a lot of people who may not use social media were aware of these events.

    I do not use Facebook frequently, as there is a lot of hate speech found throughout different types of posts, and sometimes it’s a bit concerning. I do wonder, with the increase of hate speech and misinformation, whether Facebook can do anything to stop this?

    Great work Taj.

  13. Hi Taj,

    An interesting read about the alt-right, which is something many of us hear about often but don’t really know too much about.

    I think you may be correct in your comment that the alt-right being more prevalent online makes it harder to track the movements or strength of these social networking groups. However, I think in some cases it can also make it easier for governments and police in particular to identify members of a particular alt-right group – both those who have acted on their threats/comments and those who are yet to, thus preventing something that has not yet been done. This could be achieved by simply looking at people who may have shared an extremist post or may be using a hashtag such as “#whitepower”.

    Thanks!

  14. Very interesting read. I have always been under the assumption that Facebook was trying to appeal to the widest audience through their policies and the way in which they enforce them; however, I was not aware how reluctant they were to remove certain posts (i.e. Holocaust denial posts). I think Facebook’s attempt to be “everyone’s” social media has made it an environment that sometimes fosters negative activity, whether unintentionally or not, by trying to play fair for both sides when one side is statistically more negative in regard to hate speech, racism, etc.

    I have not regularly used Facebook in quite a while, partly due to the level of intolerance present on almost any post that has to do with politics or any news-related story. However, Twitter, while not as infamously known for racism and intolerance as Facebook, still has pockets of communities where the spreading of hate speech occurs, although Twitter appears more timely in its responses to these accounts. I definitely agree that Facebook and Reddit have some level of responsibility, as do all social media platforms. Facebook and Reddit aren’t the only guilty ones, but they are copping the most flak due to their larger followings of alt-right supporters, and policing the volume and degree to which these intolerant messages are being spread might prove somewhat difficult. Level of difficulty, though, doesn’t appear to be the reason why they have not tried to take action; as you’ve mentioned in your essay, they are making large financial gains from allowing these communities to stay on the platforms.

    With the rate at which social media and communities on the internet operate, I am unsure as to what the future holds for groups like the alt-right. Will the spreading of misinformation ever be stopped now that it is travelling at the pace it does? Facebook doesn’t appear to be too concerned, and they frame it as if they are just offering two different perspectives on news and trending topics. The Silicon Valley social media company operates in a very forward-thinking state and hires young employees who are likely similar in their political stances, so the risk of those employees becoming radicalised is low and their ability to find multiple sources of information is much greater. I think Facebook may be underestimating just how detrimental this could become (as if it already hasn’t caused large amounts of problems).

    Very insightful read! I enjoyed it and hope you have time to read my paper https://networkconference.netstudies.org/2021/2021/04/27/instagram-influencers-and-their-complicated-relationship-with-fast-fashion-james-von-kelaita/?fbclid=IwAR3X__v56avxo4O9X-qPkOWlKh3SzajLb3I7cQQ_MvSTCpCtlRhjXZKaTq0
