Navigating the Minefield: The Proliferation of Misinformation on Facebook


Abstract:

The paper examines the transformation of Facebook into a breeding ground for misinformation, analysing its impacts on public discourse, democracy, and individual beliefs. Through a combination of personalised algorithms, lack of content moderation, and the psychological underpinnings of social media interaction, this study highlights the dangerous trajectory of misinformation spread on the platform.

 

Introduction:

Facebook’s evolution as a social media giant is a story of remarkable growth, innovation, and controversy. Founded in 2004 by Mark Zuckerberg and his four college roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes at Harvard University, “The Facebook” initially served as a social network exclusively for Harvard students. Its popularity quickly surged, and it expanded to other Ivy League universities and eventually to colleges nationwide (Brügger, 2015).

Having evolved constantly over the years, Facebook now plays a pivotal role in modern communication and information dissemination, shaping how individuals, communities, and organisations interact and share information. As a digital platform that makes content sharing and communication easy, it has become integral to daily life, influencing everything from personal relationships to global events.

Despite its origins as a platform designed to connect friends and family, and despite its potential for positive community building and information sharing, Facebook has faced increasing scrutiny as it has become a dangerous vehicle for the spread of misinformation, affecting not only individual users but also public perceptions of complex issues around the world.

 

Section 1: The Mechanisms of Misinformation Spread on Facebook

Facebook’s algorithms, designed to maximize user engagement, play a crucial role in shaping the information landscape on the platform. By prioritizing content that is likely to generate interaction, such as likes, shares, and comments, these algorithms can inadvertently favour sensationalist, divisive, or even inaccurate information over more nuanced or accurate reporting. This prioritization of engagement over accuracy has significant implications: the rise of sensationalism, the formation of echo chambers, and the psychological manipulation of users.

 

Engagement-Based Algorithms:

Facebook’s algorithms are sophisticated systems that determine what content appears in a user’s News Feed. Historically, these algorithms have been optimized to keep users on the platform as long as possible by showing them content that is most likely to capture their attention and provoke a reaction. While this approach maximizes engagement, it does not necessarily align with the goal of promoting accurate or reliable information.
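
To make this dynamic concrete, here is a minimal sketch of what a purely engagement-driven objective looks like. This is a toy model with invented feature weights, not Facebook’s actual ranking system, which is proprietary and far more complex:

```python
# Illustrative toy ranker: scores posts only by predicted interaction.
# The weights below are assumptions for the example, not real Facebook values.
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    predicted_likes: float     # model's estimate of likes if shown
    predicted_shares: float    # shares spread content the furthest
    predicted_comments: float  # comments signal strong reactions

def engagement_score(post: Post) -> float:
    """Score a post purely by predicted interaction; accuracy never appears."""
    return (1.0 * post.predicted_likes
            + 5.0 * post.predicted_shares
            + 3.0 * post.predicted_comments)

candidates = [
    Post("Measured, accurate policy analysis", 40, 2, 5),
    Post("Outrageous (and false) claim!", 30, 25, 40),
]

# The sensational post wins the feed slot despite being inaccurate,
# because nothing in the objective function rewards accuracy.
for post in sorted(candidates, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.headline}")
```

Because accuracy never enters the scoring function, a false but provocative post can outrank careful reporting every time.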

 

The Rise of Sensationalism:

Sensationalism has long been embedded in media. Its definition has evolved over the years, but its purpose has always been to appeal to human emotion, thought to inspire “unwholesome emotional responses, stimulate empathy and evoke curiosity” (Brown et al., 2016). With the digital space as vast as it is, access to sensationalist content is easier than ever. Content that evokes strong emotional reactions, be it outrage, fear, or amusement, is more likely to be shared and commented on. Sensationalist or misleading content can therefore receive outsized visibility on Facebook regardless of its factual accuracy, and this dynamic encourages content creators to produce more of it, further exacerbating the problem.

 

Formation of Echo Chambers (Groups):

“An echo chamber can act as a mechanism to reinforce an existing opinion within a group and, as a result, move the entire group toward more extreme positions” (Cinelli et al., 2021). Facebook’s algorithms contribute to this concept in two ways:

  1. Personalization: The platform personalizes each user’s News Feed based on their past interactions, showing them content similar to what they have liked or engaged with before. This personalization means users are more likely to see content that reinforces their preconceptions (a toy sketch of this filtering follows the list).
  2. Group and Page Recommendations: Many Facebook groups are either private or have selective membership criteria, creating a homogeneous user base. Similarly, pages often cater to specific interests or viewpoints. This selectivity ensures that members or followers are likely to share similar perspectives, reducing the diversity of opinions and information presented.
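
As a concrete illustration of how interaction-based personalization narrows what a user sees, here is a hypothetical sketch; the topics, post counts, and weighting rule are all invented for the example, and Meta’s real recommender is far more sophisticated:

```python
# Hypothetical personalization: feed slots are weighted by past engagement.
# Topics, counts, and the weighting rule are invented for illustration.
from collections import Counter

past_interactions = ["anti-vax", "anti-vax", "election-fraud", "cooking"]
candidate_posts = {  # topic -> number of available posts
    "anti-vax": 10, "election-fraud": 6, "fact-check": 8,
    "cooking": 4, "science-news": 9,
}

def personalised_feed(history: list[str], candidates: dict[str, int]) -> dict[str, int]:
    """Allocate each topic's feed slots in proportion to past engagement."""
    counts = Counter(history)
    total = sum(counts.values())
    return {topic: round(n * counts[topic] / total) for topic, n in candidates.items()}

print(personalised_feed(past_interactions, candidate_posts))
# {'anti-vax': 5, 'election-fraud': 2, 'fact-check': 0, 'cooking': 1, 'science-news': 0}
```

Even this crude weighting exposes the mechanism: topics the user has never engaged with receive no slots at all, so corrective or disconfirming information simply never arrives.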

 

The Impact of Echo Chambers (Groups):

Echo chambers can have several harmful effects on society:

Division: The rampant spread of misinformation can incite societal divides, entrench users in their beliefs, and make them more resistant to alternative views.

Real-world Actions: Misinformation circulated in echo chambers can lead to real-world consequences, including public health risks (e.g., vaccine hesitancy), political unrest, or even violence.

Distrust in Institutions: Persistent exposure to misinformation can erode trust in established institutions, including the media, scientific community, and government, as these entities’ narratives often conflict with those prevalent within echo chambers.

 

Cognitive Biases (Psychological aspects):

The spread and acceptance of misinformation on platforms like Facebook are aided by psychological aspects such as cognitive biases. These biases are systematic patterns of deviation from norm or rationality in judgment, where individuals create their own “subjective reality” from their perception of the input.

The following examples discuss some common cognitive biases we see in the spread of misinformation (Mitchell, 2021):

  • Confirmation bias – the tendency to find or notice only information that confirms one’s preconceived notions. On social media, users are more likely to engage with and share information that aligns with their views, regardless of its accuracy. This bias is particularly potent in echo chambers, where users are exposed to a similar set of opinions and facts that reinforce their existing beliefs.
  • Semmelweis effect – the tendency to reject evidence that contradicts a current paradigm. This explains why certain individuals or groups may resist factual information that contradicts their preexisting beliefs or misinformation they have accepted as truth.
  • Bandwagon effect – the tendency to do or think things because other people are doing or thinking the same. Misinformation can spread rapidly on social media as users see others sharing a piece of information and decide to share it themselves, assuming that the popularity of the content is an indicator of its truthfulness (a toy simulation of this cascade follows the list).
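
A tiny simulation can make the bandwagon dynamic visible. The premise is simple: if each share a viewer observes slightly raises that viewer’s own probability of sharing, false content can snowball. Every parameter below is an assumption chosen for illustration, not an empirical estimate:

```python
# Toy simulation of the bandwagon effect: all probabilities and user counts
# are invented for illustration, not empirical estimates.
import random

random.seed(1)

def share_probability(shares_seen: int, base: float = 0.02, boost: float = 0.015) -> float:
    """Each share a user observes nudges their own share probability upward."""
    return min(1.0, base + boost * shares_seen)

shares = 1  # one initial (false) post
for step in range(10):
    exposed_users = 100  # users who see the post this round
    new_shares = sum(
        random.random() < share_probability(shares) for _ in range(exposed_users)
    )
    shares += new_shares
    print(f"round {step + 1:2d}: total shares = {shares}")
```

Run it and the share count creeps along for a round or two, then takes off sharply, the same viral cascade pattern described above.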

 

While Facebook has the potential to serve as a powerful tool for community building and information sharing, its role in facilitating the spread of misinformation represents a significant threat to society. Tackling these issues is essential for ensuring the platform contributes positively to society and fosters informed public discourse. The path forward involves not only technological and policy solutions but also a collective effort to promote critical media literacy and a shared commitment to truth and accuracy in the digital age.

 

Section 2: The Impact of Misinformation on Society

Political Division

The impact of misinformation on society is profound and multifaceted, with one of the most significant effects being political polarization. Misinformation can deepen divisions within societies, heighten tensions, and undermine the democratic process, often involving exaggerated, sensationalized, or entirely fabricated narratives that appeal to emotions rather than facts. Such content is more likely to be shared and can amplify extremist views, making them appear more mainstream than they actually are.

President Donald Trump’s presidency and electoral campaigns are a prime example of how spreading misinformation can create division and radicalisation. Trump has been accused of amplifying misinformation through his social media platforms: The Washington Post (2021) alleges that he made over thirty thousand false or misleading claims during his presidency. These included unfounded claims, conspiracy theories, and misleading information on topics ranging from political opponents to public health measures. His position as President gave his statements significant weight, leading to widespread dissemination and acceptance of misinformation among his supporters.

Roberts-Ingleson & Mcann (2003) claim “Misinformation is designed to elicit strong emotions and legitimise extreme beliefs, including propaganda and conspiracy theories.” They go on to explain what propaganda is, being “False or Misleading information” designed to exploit and influence its “consumer’s beliefs and preferences to achieve a political goal” (BBC, 2021). This quote can be seen in a majority of President Trumps presidency.

Among his most dangerous misleading claims was his reaction to losing the 2020 election: statements such as “We won this election, and we won it by a landslide” put doubt into his supporters’ minds. Paired with the proclamation “We will never give up. We will never concede. It doesn’t happen” and the call to action “We are going to the Capitol”, these claims spread online like wildfire.

Although these statements were not made on Facebook, they were very prominent on the platform, which had over 2.9 billion users at the end of 2021 (Datareportal, 2023). Welcomed by echo chambers that supported Trump, the statements are widely believed to have contributed to the Capitol riots.

 

Public Health

Facebook’s relationship with public health during the COVID-19 pandemic has faced significant scrutiny, particularly concerning the platform’s role in the spread of misinformation. This negative aspect has had profound implications for public health efforts, influencing behaviours and attitudes in ways that have sometimes undermined the public response to the pandemic.

A study from 2022 analysed Facebook data comprising posts concerning COVID-19 from Facebook pages or groups with at least 95,000 members or 100,000 likes. The study collected data from March 8 to May 1, 2020, and in that time found “13,437,700 posts to Facebook pages, and 6,577,307 posts to Facebook groups, containing keywords pertaining to COVID-19” (Broniatowski et al., 2022).

Within these posts, content related to COVID-19 was more likely overall to cite ‘more credible’ sources; however, ‘not credible’ sources appeared more often within Facebook groups, which were “3.67 times more likely to share misinformation” (Broniatowski et al., 2022).
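
To unpack what a figure like “3.67 times more likely” means, the sketch below shows how such a rate ratio is computed. The per-category misinformation counts are invented purely to illustrate the arithmetic; the real breakdown is in Broniatowski et al. (2022):

```python
# Hypothetical illustration of deriving a rate ratio like "3.67x".
# The not_credible counts are invented; only the totals come from the study.

def misinformation_rate(not_credible: int, total: int) -> float:
    """Fraction of a category's posts that cite 'not credible' sources."""
    return not_credible / total

pages_rate = misinformation_rate(not_credible=30_000, total=13_437_700)
groups_rate = misinformation_rate(not_credible=53_900, total=6_577_307)

print(f"pages rate:  {pages_rate:.6f}")    # ~0.002233
print(f"groups rate: {groups_rate:.6f}")   # ~0.008195
print(f"ratio: {groups_rate / pages_rate:.2f}x")  # ~3.67x
```

The comparison is a rate, not a raw count: groups hosted fewer COVID-19 posts overall, yet each group post was several times more likely to point to a not-credible source.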

Despite efforts to control it, Facebook has been a hotbed for the spread of inaccurate ‘evidence & statistics’ about the pandemic. False information about the virus’s origins, prevention methods, treatments, and vaccines has proliferated on the platform. Misleading posts have included dangerous health advice, conspiracy theories about the pandemic being a hoax, and misinformation about the safety and efficacy of vaccines. This widespread dissemination of incorrect information has posed a significant challenge to public health authorities trying to manage the pandemic response effectively.

Although Facebook’s role in spreading pandemic misinformation has been in the spotlight in recent years, its relationship with public health has faced challenges beyond the specific context of COVID-19, highlighting broader issues with misinformation that can negatively impact public health outcomes. One example is the platform’s contribution to mental health problems, particularly among teenagers and young adults. The spread of unrealistic body image standards and the intense social comparison facilitated by curated posts can lead to issues like depression, anxiety, and body dysmorphia. Although not misinformation in the traditional sense, the presentation and promotion of unattainable lifestyles can be misleading and have a tangible negative impact on mental health.

 

Section 3: Final Thoughts

The mechanisms of misinformation spread on Facebook are complex and multifaceted, leveraging the platform’s vast reach and the natural tendencies of users to engage with sensational content. Sensationalism, echo chambers, and cognitive biases significantly accelerate the spread of false information.

As misinformation proliferates, its impact on society becomes increasingly detrimental, eroding trust in institutions, exacerbating divisions, and compromising public health and safety. Sensational content grabs attention and encourages sharing, while echo chambers reinforce existing beliefs and insulate users from contradictory viewpoints. Cognitive biases skew perception and judgment, making misinformation more appealing and harder to counteract.

Addressing these challenges is crucial for maintaining societal cohesion and ensuring the health and safety of the public. Tackling the spread of misinformation on Facebook requires a joint effort from all involved: the platforms themselves, users, organisations, and educators. The situation demands a comprehensive approach involving stricter regulatory frameworks, improved digital literacy education, and healthy community engagement strategies.

 

References:

 



Comments

19 responses to “Navigating the Minefield: The Proliferation of Misinformation on Facebook”

  1. ezra.kaye

    Hi,
    Thanks for your article! I thought it was really well fleshed out and covered all bases. Really well done.

    I particularly liked your use of examples such as the Trump election/presidency, and the COVID 19 pandemic as spaces which facilitated lots of misinformation on Facebook.

    I was wondering what your thoughts are on the new recommended settings from Meta which filter out political content. Posts about laws, elections, and social topics are now automatically filtered out unless you go and physically change the setting. Do you think this has any impact on the spread of misinformation?

    Cheers,
    Ezra

    1. Zaneampho

      Hi Ezra,
      Thank you for your kind words and for engaging thoughtfully with the article!

      Regarding the new settings from Meta to filter out political content, I believe it is a double-edged sword in a sense.
      This approach could potentially impact the spread of misinformation in several ways. On one hand, by reducing the visibility of political content, these settings might decrease the spread of politically charged misinformation, which can often be polarizing and divisive. This could help create a more neutral and less contentious environment on the platform.

      On the other hand, filtering out political content could also limit users’ exposure to important civic and political information, potentially reducing overall awareness and engagement. It’s crucial that users have access to reliable and factual information about laws, elections, and social issues to make informed decisions. The challenge lies in balancing the reduction of misinformation while ensuring that vital political content reaches the public.

      Ultimately, the effectiveness of these settings in combating misinformation will depend on how they are implemented and how users interact with them. It will be important for Meta to monitor these changes closely and adjust their strategies based on their impact on user behavior and misinformation spread.

      Cheers, Zane

  2. Kim Cousins

    Algorithms are fascinating, aren’t they! They play such a big, largely unnoticed part in our day-to-day lives. What do you think about the upcoming changes to Meta (Facebook and Instagram/Threads), where political news will no longer be suggested to users by default? People will have to search for their own news on these platforms, which does feel like it would increase the potential of echo chambers and the spread of misinformation.

    I noticed recently that a regional ABC outlet, in the town I work, is now going back to email newsletters to try and keep the connection with its followers. Do you think this will help address the problem, even on a small scale?

    1. Zaneampho

      Hey Kim,
      Absolutely, algorithms really do have a subtle yet profound impact on our daily lives! Regarding Meta’s upcoming changes, it’s a double-edged sword. On one hand, it might reduce some of the noise and unsolicited political content, which can be a relief for many. On the other hand, as you mentioned, it could indeed intensify the formation of echo chambers. People might end up only seeking out and engaging with viewpoints they already agree with, potentially missing out on broader perspectives.

      As for the regional ABC outlet turning back to email newsletters, I think it’s a smart move. It’s like going back to basics, but with a modern twist. Newsletters can provide a more direct and personal way to deliver news, potentially cutting through the clutter of social media feeds. While it might be on a smaller scale, it’s definitely a step in the right direction towards maintaining a trustworthy connection with their audience and could help in mitigating misinformation by providing reliable and curated content directly to readers. Every little bit helps!

      Thanks for reading my Article!
      Zane

  3. Jess Wilson

    Hi, I enjoyed reading your article and found your overall contribution to the knowledge of misinformation on Facebook very interesting. While reading your paper, I could not help but wonder if misinformation on the platform is one reason for its slow downfall in recent years? I feel as if Facebook is not as popular as it once was. As you pointed out, while it may not be misinformation in the traditional sense, these thoughts could be attributed to my recent feelings when I see the lifestyles some individuals present on the platform. Regardless of whether Facebook is or is not losing popularity because of misinformation, I agree with you. I, too, believe that something needs to be done to counteract the spread of misinformation on Facebook; otherwise, I do fear for the platform’s complete downfall. What do you think? Is Facebook’s slow decline in recent years due to misinformation? Or am I just reading too much into this? Anyway, great job on your paper, it made for an interesting read. Jess

  4. Neelen Murday

    One interesting aspect of the article is its exploration of the relationship between misinformation on Facebook and its effects on societal cohesion and public health. The examples provided, such as the case of former U.S. President Donald Trump’s use of social media to amplify misinformation, offer a compelling illustration of the issue at hand.
    To enhance the article further, it could delve deeper into specific case studies or examples beyond those mentioned. Additionally, offering practical solutions or strategies for addressing the spread of misinformation on Facebook could provide readers with actionable steps to combat the problem effectively.

    1. Zaneampho

      Thank you for your thoughts and constructive help in further enhancing my article. I wrote a template of the topics I was going to talk about, one of which was to examine further how misinformation affected events such as the No campaign, but I might have detailed other topics more than needed. I do agree with your criticism and think that if I were to write something like this again, I would focus more attention on offering practical solutions as you said.
      Thank you for your input; it has definitely been reflected upon and understood.

  5. El Ashcroft

    Interesting read. Your section about engagement-based algorithms got me thinking about my experience on Facebook during the Indigenous Voice to Parliament campaign. As someone who actively disputed misinformation, I commented on a lot of posts from the no camp, which led to me seeing more posts from the no camp in my feed rather than from the yes camp. Because of the algorithm I found myself in an echo chamber of “no” information, and while I didn’t fall into the trap of believing the information that was circulating, I wonder how many people who were on the fence may have been influenced by what the algorithm fed them.

    While the algorithms are sophisticated systems, I wonder if they need to become more “intelligent” to be able to decipher the types of comments people are making so people who don’t agree with things aren’t sucked into echo chambers that might influence their opinion negatively. What are your thoughts?

    1. Zaneampho

      Thank you for your comment, I am glad you found this interesting. I agree with what you said about the algorithms needing to become more “intelligent”. I was in a similar boat: I didn’t understand the logic behind the No campaign, so I googled it once, and even though I was voting Yes, my single Google search flooded my Facebook with No campaign propaganda. I can see how these algorithms can play a huge part in important subjects like the Indigenous Voice to Parliament campaign. If I hadn’t done my own research on the topic, being uneducated in the matter, I could have been persuaded to vote No. It is a very scary thought that these platforms can genuinely have a negative impact on very important affairs.

  6. Chris May

    It’s intriguing to me just how much Web 2.0 platforms, such as Facebook, influence us both as individuals and as a collective. While early social media may have been intentionally designed to be an online public square, multiple factors including online anonymity and company CEOs chasing engagement (read: profit) have led to rampant misinformation that has negatively shaped our society and our politics.

    Do you believe that social media platforms should have more of a responsibility to guard against misinformation? If so, how do you think this could realistically be achieved?

    If you have the time my paper touches on a similar topic, albeit focusing on Twitter (or X now, I guess). I’d be interested in hearing your thoughts.

    https://networkconference.netstudies.org/2024/onsc/3488/twubbling-tweets-how-twitter-influences-modern-political-discourse-and-its-impact-on-society/

  7. Mohamed Ali

    Hi Zaneampho,

    You have made a lot of valid points. As noted in the article you referenced earlier, to tackle misinformation spread on Facebook, a coordinated response from all stakeholders, ranging from the platforms themselves, users, organisations, and educators, is required. This kind of situation demands a multifaceted approach, including a more stringent regulatory framework, enhanced digital literacy education, and sound community engagement strategies.

    These are all great steps, and we are one step closer to a healthier social media environment. Nonetheless, poor information is bound to keep springing up on Facebook, with or without these mechanisms in place. There are events where information spreads quickly on Facebook, given the viral aspect of social media. For example, a post asserting that a drug under trial is a miracle treatment for a specific disease could gain momentum, potentially leading to fatal consequences for patients before fact-checking could intervene. Ultimately, it is almost impossible to entirely get rid of misinformation, even with a regulatory framework, digital literacy education, and community engagement strategies in place. I agree that these solutions will bring about some positive change, but they will not completely eradicate the underlying issue.

  8. Haoyu Wang

    Hello Zaneampho,
    Your article is a very thorough and in-depth discussion of how Facebook’s algorithms prioritize content that triggers user interaction, and how this practice facilitates the spread of misinformation. I love how you effectively reveal the psychological and technical mechanisms of information distortion on social media through your detailed analysis of sensationalism, echo chamber effects, and cognitive bias.
    I think the most valuable part of the article is the emphasis on the social consequences of the spread of misinformation by social media, such as political polarization, declining trust in authority institutions, etc. These analyses show the potential threat to social stability and democratic health of the spread of misinformation.

  9. Tiaan

    Hey there

    I loved reading your paper, it drew a lot of similarities with my own. I really liked how you explained echo chambers (I wish I had thought to use that term!). This is such a relevant topic in today’s day and age, and it only continues to become more pressing as social media grows.

    It’s true that Facebook needs to address the issue of misinformation, as does every social media platform for that matter. I suggested in my own research that access to social media needs to be revised, as I believe the age of the user contributes heavily to the spread of misinformation. Do you have any other ideas?

    Thanks again for the great and thoughtful read

  10. Mathew.C

    Hey Zaneampho,

    I enjoyed reading your paper and how it explored the efficacy and impact of spreading and engaging with misinformation and deceitful content on Facebook. Despite the plethora of information released over the years relating to the deceptive and disingenuous motives practiced by the tech giants and social platforms, it never ceases to amaze me how far they will go to drive engagement metrics on their platform, in doing so further supporting and fuelling engagement in content known to be misleading and disingenuous.

    As you have so rightly pointed out, despite the content being distributed causing a range of issues on the platform itself, these systems help fuel division and sensationalism in communities throughout the globe, supporting the further erosion and fracturing of families and friendships through sensationalised rhetoric and content (and is unfortunately witnessed and promoted across other social platforms as well, not just Facebook).

    With regards to algorithmically curated and disseminated misinformation, and false and misleading content, I can recommend reading the conference paper titled “The Psychological Impact of Social Media on Introverts” by Alan (https://networkconference.netstudies.org/2024/csm/3489/the-psychological-impact-of-social-media-on-introverts/). This paper examines the approaches used by big tech companies and social media platforms and the lasting damage and psychological effects it can have on users during their engagement with the platforms, neatly complementing points you raise while exploring the negligent practices of Facebook and its specialised algorithms.

    With regards to the current tools and practices used to curate and disseminate misinformation on social media, what are your thoughts on how this problem may evolve alongside the rapid advancements in new AI models (such as Sora https://openai.com/index/sora), which can at present create high-quality, ultra-realistic photos and videos based on text prompts, still image(s) and audio recordings, enabling the mass production of very convincing fake video and image content?

    Interested to hear your thoughts on how AI will further influence this problem 😊

    On an unrelated note and across a different stream, I’d appreciate if you could find time to check out and contribute my paper’s discussion – “Neurodiversity in a digital age: How the internet helped advance neurodivergent advocacy through self-representation, community, and identity”
    (https://networkconference.netstudies.org/2024/ioa/3551/neurodiversity-in-a-digital-age-how-the-internet-helped-advance-neurodivergent-advocacy-through-self-representation-community-and-identity/).

    I might have to workshop the title, little lengthy I know!

    All the best,

    Mat

  11. Alan Donovan

    Hi Zane,

    Thank you for your discussion of misinformation on Facebook. Your paper does a commendable job of delineating how Facebook’s algorithms and social dynamics contribute to the spread of misinformation, impacting public discourse and democracy profoundly.

    Your analysis of cognitive biases supports the results of Pennycook and Rand’s study (2019), which show how analytical thinking might lessen a person’s sensitivity to false information. They suggest that encouraging critical thinking in users could help them recognise misleading material on the internet.

    Regarding the mitigating techniques you might provide, I have a question: Which technological or governmental measures, in your opinion, are most successful in halting the spread of false information on websites like Facebook?

    In addition, I would really appreciate it if you could fill out the short survey that is provided in my paper’s comments section – “The Psychological Impact of Social Media on Introverts”. It only takes a minute or so to complete, and I would like to discuss with you about your findings and opinions regarding the issues raised.

    Alan.

    https://networkconference.netstudies.org/2024/csm/3489/the-psychological-impact-of-social-media-on-introverts/

    Pennycook, G., & Cannon, T. (2018). Prior Exposure Increases Perceived Accuracy of Fake News. Journal of Experimental Psychology: General, 147. https://doi.org/10.1037/xge0000465

  12. 21742082

    Hi Zane,

    Your paper, from my comprehension, explores one of the critical issues in today’s digital landscape: misinformation on Facebook. I appreciate your insightful analysis of the mechanisms driving the spread of false information on the platform, particularly the discussion on Facebook’s algorithms, echo chambers, and cognitive biases.

    What struck me the most about your paper was your exploration of the psychological factors at play in the dissemination of misinformation. By researching cognitive biases like confirmation bias and the bandwagon effect, you highlighted the intricate ways in which individuals perceive and interact with information online, contributing to the rapid spread of false narratives.

    I’m curious to hear your thoughts on potential strategies that Facebook could implement to combat misinformation effectively. Given the platform’s immense influence, do you see feasible solutions that Facebook could adopt to address this issue? Additionally, how do you envision the roles of users, regulatory bodies, and civil society organisations in combating misinformation on social media platforms like Facebook?

    Overall, I really enjoyed your paper and it was a great read! Feel free to check out my paper: https://networkconference.netstudies.org/2024/csm/3889/beyond-filters-the-evolution-of-online-identity-through-instagram/#comments

    Regards,
    Maddison

  13. dale_b

    Hello Zane,

    You have written a terrific and very pertinent paper.

    Stand-out texts for me include:

    “Breeding ground for misinformation.”

    “Lack of content moderation.”

    “Misinformation can spread rapidly on social media as users see others sharing a piece of information and decide to share it themselves, assuming that the popularity of the content is an indicator of its truthfulness.”

    Your examples of Trump and COVID-19 were very apt for this paper, and in the example of COVID-19, Gallagher (2021) on ABC News reported: “According to the research, Facebook is fact-checking some of the COVID-19 misinformation posted by the group but failing to take appropriate action and in many other cases failing to detect the misinformation altogether. The data also suggests that Facebook’s fact-checking in languages other than English is insufficient and almost nonexistent in some languages.”

    As for the Trump example, according to Time Magazine’s Ghosh and Scott (2018), “Cambridge Analytica used that data to create a tool of psychological warfare to manipulate American voters with targeted Facebook ads and social media campaigns. Highly sensitive personal data was taken from Facebook users without their knowledge to manipulate them into supporting Donald Trump.” This misinformation influenced voters.

    Now these are large-scale examples of misinformation, and we saw Zuckerberg face a hearing over this breach of trust and be made to look into the situation; however, every day we see misinformation on Facebook that, because it is small scale, will in my opinion never be acted upon. In years gone by, all journalists who published information had to “fact check”. This is something not done on social media. By the time a journalist checks the facts, it has already been posted on social media.

    Dwoskin (2021) in her Washington Post article shared, “The forthcoming peer-reviewed study by researchers at New York University and the Université Grenoble Alpes in France has found that from August 2020 to January 2021, news publishers known for putting out misinformation got six times the amount of likes, shares, and interactions on the platform as did trustworthy news sources, such as CNN or the World Health Organization” and this is related to what you have said above, “By prioritizing content that is likely to generate interaction, such as likes, shares, and comments, these algorithms can inadvertently favour sensationalist, divisive, or even inaccurate information over more nuanced or accurate reporting. This prioritization of engagement over accuracy has significant implications, such as, the rise of sensationalism, formation of echo chambers and the unnecessary psychological manipulations.”

    Have a look at Meta’s page “About fact-checking on Facebook, Instagram and threads” – https://www.facebook.com/business/help/2593586717571940 where the first line says “We’re committed to fighting the spread of misinformation”, yet I am still seeing fake news and clickbait posts regularly (especially about famous people who have died, yet they are still alive!)

    What are some other societal implications of misinformation spread on Facebook, apart from health and political as discussed in the paper?

    Thank you for a wonderful read.

    Kind Regards,

    Dale.

    Dwoskin, E. (2021). Misinformation on Facebook got six times more clicks than factual news during the 2020 election, study says. The Washington Post. https://www.washingtonpost.com/technology/2021/09/03/facebook-misinformation-nyu-study/

    Gallagher, F. (2021). Facebook ‘failing’ to tackle COVID-19 misinformation posted by prominent anti-vaccine group, study claims.
    ABC News. https://abcnews.go.com/Technology/facebook-failing-tackle-covid-19-misinformation-posted-prominent/story?id=81451479

    Ghosh, D., & Scott. B. (2018). Facebook’s new controversy shows how easily online political ads can manipulate you. Time. https://time.com/5197255/facebook-cambridge-analytica-donald-trump-ads-data/

    Meta. (2024). About fact-checking on Facebook, Instagram and Threads. Meta. https://www.facebook.com/business/help/2593586717571940
