Twitter and Mental Health: the effect of Twitter on those suffering from Eating Disorders and Self-Harm


Abstract

The social media platform Twitter provides a space for users to form niche communities that can help those suffering from eating disorders or self-harm by promoting recovery, sharing beneficial mental health resources, and giving individuals a sense of belonging. Despite this, an overwhelming amount of negative content encourages users in these communities to continue practices surrounding eating disorders and self-harm, seemingly overriding the benefits these communities bring. This paper reviews studies investigating the benefits and drawbacks of these communities, discusses the potential positive impact these Twitter communities can have, and identifies the areas of concern that require moderation for that potential to be realised.

Introduction

The social media site Twitter (recently rebranded as ‘X’) is a microblogging platform that allows for rapid dissemination of information on a public forum and connects users with shared topics of interest. While most of these communities are thematically harmless, discussing topics surrounding fandoms, political issues, pop culture, and other similar content, the app allows mentally ill youths to encourage each other’s harmful behaviours by facilitating communities that enable eating disorders and self-harm. SHtwt (self-harm Twitter) and EDtwt (eating disorder Twitter) are communities that have been formed on Twitter by individuals suffering from the respective disorders to communicate about their struggles and experiences. While mentally ill youths can benefit from online friendships with those who have faced similar struggles, it is important to weigh the pros and cons and consider whether the inherent normalisation and even encouragement of harmful behaviour within these communities are detrimental enough to negate the benefit that can be gained from a sense of belonging.  

Ramifications of Content on Self-Harm Twitter

Self-harm, or non-suicidal self-injury (NSSI), is defined as “a person’s self-inflicted damage to body tissues, caused consciously and deliberately, to modify their mood and without suicidal intent” (Pérez-Elizondo, 2020, as cited in Atauri-Mezquida et al., 2025, p. 1). It is an increasingly significant health problem, particularly in girls aged 13-16 (Morgan et al., 2017, as cited in Lavis & Winter, 2020, p. 842). With almost twice as many self-harmers reporting higher internet usage compared to those who do not self-harm (Mitchell & Ybarra, 2007, as cited in Hilton, 2017, p. 1692), internet use may be a significant contributor to their condition. SHtwt content consists of users publishing their acts of self-harm, typically through photos or videos. The majority of the media published shows open wounds and the tools used to self-harm, and this exposure can be detrimental to young people’s mental health (Atauri-Mezquida et al., 2025), more so if they are already struggling with mental health issues. Regular consumption of self-harm imagery can “encourage or even cause acts such as self-cutting through mechanisms of contagion” (Lavis & Winter, 2020, p. 842), and when paired with captions or comments describing acts of self-harm as an everyday task, the behaviour becomes normalised and even romanticised to those viewing the content. Users on SHtwt often provide positive reinforcement on posts containing self-harm material through likes and comments, strengthening the sense of community in a harmful way: it encourages users to self-harm in order to post about it and receive social reinforcement in return (Atauri-Mezquida et al., 2025). This cycle of publishing and consuming self-harm content has a dangerous impact on the mental health of users within these communities, fostering toxic echo chambers and normalising and promoting self-harming behaviours.

The Benefit of Community for Self-Harming Individuals

It is important to acknowledge that the SHtwt community can also positively impact these individuals. A prominent reason for self-harming individuals to join these online forums is to be validated by others as they may feel defined by their self-harm to their peers in real life (Adams et al., 2005, as cited in Hilton, 2017). The sense of community can help reduce feelings of isolation and create meaningful friendships between those who struggle to do so in real life. Due to its ability to allow anonymity, Twitter provides a space where users can express their emotions more honestly, creating trust between individuals and allowing users to feel more comfortable exchanging support or advice (Hilton, 2017). There is evidence that the internet can help to facilitate “wellness and empowerment and [reduce] social isolation for those with poor health” (Hilton, 2017, p. 1700), with SHtwt users often providing each other with messages of encouragement and alternatives to self-harming when others express the need for help (Hilton, 2017). The real-time nature of Twitter allows those going through a mental health episode to receive instantaneous support to alleviate their distress (Lavis & Winter, 2020). Users support each other and spread information within the community on how to seek professional help offline, giving tips on how to be taken seriously or move up waiting lists within the health industry. Harm reduction is also prominent within the self-harm community on Twitter, with threads being posted on how to self-harm safely, “such as how to sterilise a blade or what to put in a first aid kit” (Lavis & Winter, 2020, p. 848). Users will also respond to extremely graphic imagery with “comments such as ‘that’s too deep’ or ‘that needs stitches’” (Lavis & Winter, 2020, p. 848) and encourage users whose self-harm may be putting their lives at risk to go to the hospital. 
The sense of belonging and the wealth of advice and support that SHtwt provides are a valuable way for youths to alleviate struggles that may be difficult to address offline. However, peer-to-peer support, while beneficial to those receiving help, can harm those providing it. Listening to others’ stories of distress can have an emotional impact on individuals and potentially trigger them to self-harm (Lavis & Winter, 2020). Usage of the hashtag “#shtwt”, which categorises tweets belonging to the community, increased by 500% between October 2021 and August 2022, reaching an average of 20,000 tweets a month (Atauri-Mezquida et al., 2025; Goldenberg et al., 2022). This indicates that while Twitter has policies stating that content featuring self-harm is not tolerated, there appears to be little actual content moderation. This lack of moderation is what causes the self-harm Twitter community to have a negative overall impact on its users: while the community provides a valuable space to seek help and a sense of belonging, the abundance of triggering content ultimately normalises and encourages acts of self-harm.

The Role of EDtwt in Encouraging Disordered Eating Behaviours

Eating disorders (EDs) are complex psychophysiological illnesses with serious health consequences and the highest mortality rate of any psychiatric disorder (Arseniev-Koehler et al., 2016), claiming the lives of “3.3 million people globally every year, a number that has doubled over the last 10 years” (Sukunesan et al., 2021, p. 2). EDtwt, also known as pro-ana (pro-anorexia) Twitter, is a community that takes the normalisation of eating disorders to extremes, posting about the disorder as if it were a lifestyle choice rather than a mental illness. The nature of social media, particularly Twitter, makes it easier for eating disorder content to spread, potentially having a negative effect “on vulnerable individuals ranging from healthy individuals who may be influenced to engage in ED behaviours, through to the ‘triggering’ of individuals who may already have an ED” (Branley & Covey, 2017, p. 2). Much of the content on eating disorder Twitter revolves around users’ desire to continue with their disordered behaviour, and it encourages others to do the same. Rather than being just passive members of these forums, many users centre their whole accounts around disordered eating, indicating their current and goal weights in their account biographies. These users document their ‘weight loss journey’ by posting what they eat, tracking calories, and sharing exercise routines and images of their bodies to sustain motivation (Branley & Covey, 2017, p. 3). Exposure to this type of content is linked to negative body image (Rodgers et al., 2016, as cited in Branley & Covey, 2017), and the competitive nature of eating disorders can result in those consuming the content feeling obligated to follow this lifestyle.
EDtwt, specifically pro-eating disorder or pro-ana communities, are centred around users posting ‘tips’ for others, such as how to prolong a fast, and ‘motivational’ materials, such as “‘thinspo’ images of extremely thin women displaying extremely protruding collarbones, hipbones and ribs, or thigh gaps” (Branley & Covey, 2017, p. 3). Starvation is seen as a sign that users are ‘successful’ in their behaviour, with sayings such as “the sound of a stomach rumbling is equivalent to the sound of applause” and “nothing tastes as good as skinny feels” being popular within the community. Encouraging other users to engage in disordered eating behaviours allows these individuals to feel as if their lifestyle is “acceptable, justifiable, and sometimes even desirable” (Schroeder, 2010, as cited in Branley & Covey, 2017, p. 2).

The Pro-Recovery Side of Eating Disorder Twitter

Similarly to SHtwt, the eating disorder communities have some users who are on the pro-recovery side, encouraging recovery in other users and sharing their own recovery process and struggles (Branley & Covey, 2017). Users in these communities are often in a vulnerable state of mind and are easily influenced by the content they are viewing; however, this has the potential to be used positively to direct those struggling towards support, advice and help. Health professionals can also use the content posted on eating disorder Twitter to identify issues that those with eating disorders struggle with and determine how to approach them correctly (Branley & Covey, 2017). While there are benefits to having support communities and insights into the struggles of individuals with eating disorders, careful moderation is required to reap the benefits without the consequences. Trigger warnings can alert users that the content could potentially trigger those with eating disorders, “however, it has been suggested that the use of trigger warnings may help users to purposefully search for pro-ana content (Borzekowski et al., 2010) and the use of hashtags is likely to facilitate this” (Branley & Covey, 2017, p. 5). Hashtags are the most prominent tool on Twitter used to find posts with specific content, with #proana being used to find content with users promoting eating disorders as a lifestyle rather than a psychophysiological illness. Unlike other social media platforms, Twitter currently lacks policies regarding blocking hashtags (Sukunesan et al., 2021). Complete censorship of eating disorder content is not a suitable option because although the majority of content is harmful towards individuals with this disorder, there are posts dedicated to support and recovery, which are particularly important due to ED sufferers rarely seeking professional help offline (Branley & Covey, 2017). 
Censorship may also “push these communities further into secrecy and stigmatise individuals already feeling alienated because of their ED symptoms” (Arseniev-Koehler, 2016, p. 664). Finding a middle ground where triggering content is monitored, but the sense of community and pro-recovery posts are still maintained, is something that needs to be done in order for users in these communities to be impacted in a positive rather than negative way.

Conclusion

Social media has the potential to detrimentally impact young people’s mental health (Lavis & Winter, 2020). Twitter, in particular, hosts a variety of online communities, including self-harm Twitter and eating disorder Twitter. While both of these communities have positive impacts on youths suffering from mental health issues, such as providing a safe space in which they can feel less isolated and gain a sense of belonging, the negative impacts of consuming content surrounding these disorders are dangerous and potentially life-threatening. Both EDtwt and SHtwt are flooded with content that normalises, encourages and glamorises harmful behaviours and generally goes unmoderated. While Twitter has the potential to positively impact these communities, in its current state the consequences of young and impressionable users consuming content that normalises self-inflicted and dangerous behaviours are too damaging to argue that the existence of EDtwt or SHtwt has a positive impact on mentally ill youths.

References

Arseniev-Koehler, A., Lee, H., McCormick, T., & Moreno, M. A. (2016). Proana: Pro-eating disorder socialization on Twitter. Journal of Adolescent Health, 58(6), 659–664. https://doi.org/10.1016/j.jadohealth.2016.02.012

Atauri-Mezquida, D., Nogales-González, C., & Martínez-Pastor, E. (2025). Exploring self-harm on Twitter (X): Content moderation and its psychological effects on adolescents. Online Journal of Communication and Media Technologies, 15(1), 1–12. https://doi.org/10.30935/ojcmt/15867

Branley, D. B., & Covey, J. (2017). Pro-ana versus pro-recovery: A content analytic comparison of social media users’ communication about eating disorders on Twitter and Tumblr. Frontiers in Psychology, 8, Article 1356. https://doi.org/10.3389/fpsyg.2017.01356

Hilton, C. E. (2017). Unveiling self‐harm behaviour: What can social media site Twitter tell us about self‐harm? A qualitative exploration. Journal of Clinical Nursing, 26(11–12), 1690–1704. https://doi.org/10.1111/jocn.13575

Lavis, A., & Winter, R. (2020). Online harms or benefits? An ethnographic analysis of the positives and negatives of peer‐support around self‐harm on social media. Journal of Child Psychology and Psychiatry, 61(8), 842–854. https://doi.org/10.1111/jcpp.13245

Sukunesan, S., Huynh, M., & Sharp, G. (2021). Examining the pro-eating disorders community on Twitter via the hashtag #proana: Statistical modeling approach. JMIR Mental Health, 8(7), 1–9. https://doi.org/10.2196/24340


Comments

17 responses to “Twitter and Mental Health: the effect of Twitter on those suffering from Eating Disorders and Self-Harm”

  1. juliannebanares

    I went into this with the mindset of debating; however, I agree with almost everything that was said in this paper. I do believe platforms like Twitter (or X), especially when moderation is below standard, are extremely harmful and can perpetuate harmful behaviours like eating disorders or self-harming.

    That being said, I wonder if the issue runs deeper than the platform itself. Is it really just about X, or should we focus our attention on the broader societal problem where vulnerable individuals are already seeking out these communities?

    In other words, is the platform amplifying and shedding light to an existing issue rather than actually being the one to create it?

    1. Tayla Black

      Hi! Thanks so much for taking the time to read my paper!

      There is definitely a broader societal problem with individuals seeking out these communities; however, in my opinion X facilitates these communities in a more harmful way than other platforms (although Tumblr is a good contender as well). In my experience, platforms like Instagram and TikTok are much more strict on the type of content posted, with graphic content (both images and text) being taken down quite rapidly or not even being uploaded at all, whereas on Twitter I have seen some pretty nasty content that sometimes doesn’t get taken down at any point. I would also argue that the nature of Twitter (as in the way people can ‘rapid fire’ post, in ways that generally aren’t done on other platforms) means that these communities build faster and more easily.

      Other platforms facilitate harmful behaviours for sure, especially eating disorders, but in a less straight forward way. For example, certain diets or exercise trends on TikTok that are actually just disordered eating. On Twitter the content is more straightforward – people will outwardly say they have an eating disorder but still promote it – and the content can be extremely graphic.

      Though I can’t argue that I’m completely correct in saying these points without doing further research – this is mostly based on my personal experience. I think a good course of action would be to double down on content moderation and have mental health experts utilise the posts from these communities as informative sources that could help learn more about these disorders and help the people suffering from them.

      1. Kyle Vasquez

        I’d like to discuss this as well,

        Do you think that the state of Twitter/X when it comes to allowing posts like this is a moderation issue or is it more of a community issue? Regardless of how strict or loose the guidelines are for the platform, the actual people engaging and participating in these trends or posts are still promoting/sharing them regardless.

        This is definitely prevalent in the gym community, where the rising trend among gymgoers seems to be the more casual and accepted use of anabolic steroids in everyday life. Despite the adverse health effects, gym influencers still promote this sort of lifestyle as the be-all and end-all (“You’ll never be big/perfect unless you take steroids”).

        But the biggest problem in terms of moderating these types of unhealthy lifestyle promotions is how do you categorise/classify it as harmful or improper for a social media platform?

        I really liked your article and I found it very interesting how it correlates to other posts or trends that are harmful, yet are allowed and promoted on Twitter.

        1. Tayla Black

          Hi Kyle! Thanks for your comment.

          I think it’s both a moderation and a community issue. It’s important to remember that the people participating in these trends and posts are mentally ill. Eating disorders and self-harm in particular are competitive by nature, so when given the platform to do so, people will be driven by their disorders to make these posts. Although it is an issue caused by the community itself, I don’t think we can really blame mentally ill people for participating in behaviours that are consistent with their mental illness. I also think that people don’t necessarily join these communities for the purpose of posting harmful content but rather to feel less alone, and are consequently influenced by the type of content they see. It also appears that the people posting this content don’t really view it as ‘wrong’ in the same way people outside of these communities would.

          I definitely see a lot of toxic content from gym influencers, on TikTok especially, but I think the difference is that gym influencers typically have a large following of people that are influenced by the content they post and see them as a source of information. When your job is to literally influence people by telling them what to do I think you have more responsibility to spread healthy information and should be held more accountable.

          I think a good and easy starting point would be to block images and videos containing triggering content; as I mentioned in my article, regular consumption of graphic imagery can be extremely triggering. There are lots of key phrases used by these communities, such as ‘thinspo’, which should be blocked as well, but obviously people can just come up with new phrases, so it would probably be something that needs regular moderation.

          Sorry for such a long reply hahaha, thanks for reading my paper I’m glad you found it interesting!

      2. juliannebanares

        Well said Tayla. Doubling down on content moderation would definitely be the best course of action, especially for X. Your comparison to other platforms like TikTok and Instagram painted a clearer picture of how toxic and harmful content slips through on Twitter.

        I still do feel like the platform is more of a reflection than a root cause. If you think about it, if X disappeared tomorrow, do you think these communities would too? Sadly, the reality is that they would most likely just find new digital spaces and continue perpetuating these harmful communities, because the desire to connect, even around harmful behaviours, comes from somewhere deeper: unmet needs, mental health struggles, lack of early intervention, etc.

        To me, that’s where it gets really complicated. It’s not either/or. Platforms like X do make it easier for these behaviours to happen, but they’re happening because of something already broken outside of the platform itself. Maybe the real solution sits somewhere between better platform accountability and better offline support systems…

        Anyway, great paper! Thoroughly enjoyed reading this!

        1. Tayla Black

          I agree, at the end of the day the platforms only perpetuate the behaviour rather than cause it. It’s definitely a complicated issue, but I think proper moderation on all platforms is a good start to tackling it. If we can make digital platforms more of a safe space for these individuals maybe they can be used as a resource for mental health professionals to learn more about these disorders and tackle the root cause.

          Thanks so much for reading!

  2. stellapearse

    Hey Tayla,

    I found your take on how Twitter is contributing to eating disorders and self-harm really interesting, and I definitely agree. Your analysis of how Twitter communities are normalizing eating disorders to extremes, posting about the disorder as if it is a lifestyle choice rather than a mental illness, I found particularly fascinating, yet disturbing.

    It’s upsetting to see how people in these communities are clearly suffering but are also spreading content that is harmful and causing more and more people to suffer. Similarly, in my paper I discussed how platforms are facilitating the radicalization of adolescent boys by extremist groups.

    Since reading your paper, and with the understanding gained from my own research, I am recognizing more and more how these communities are formed by people with mental health issues who are struggling themselves, which is what drives these toxic environments to spread misinformation and amplify toxic behaviors.

    A question I have for you: what platforms do you think are the biggest facilitators of these communities? In my research, Reddit was home to most of these communities, while eating disorder content is mostly seen on Twitter. Other platforms will already be experiencing the formation of these communities, but which one do you think will be the next to enable these harmful communities?

    1. Tayla Black

      Hi Stella, thank you for reading my paper!

      In my opinion the biggest facilitators are Twitter, Reddit and Tumblr (I know eating disorders especially were very big on Tumblr although I’m not sure if it’s changed or not). I think these platforms all have a lower level of moderation compared to others which makes it easier for these communities to post triggering content.

      I think the next platform will be TikTok. While the content may not be as graphic and straightforward as on others, there has been a rise of ED content on there disguised as “health” related content. The trend of ‘what I eat in a day’ videos in particular has a lot of users and influencers posting an extremely restricted diet, which can influence people into thinking that’s what they need to do to lose weight or be skinny. There are also a lot of ‘body checking’ videos of people showing off how skinny they are, disguised as showing off their outfit.

      In some ways this can be even more harmful: because people are promoting disordered eating as healthy, viewers (particularly young girls) won’t recognise that content as disordered eating but as a healthy diet they should maintain to lose weight, which can subsequently trigger an eating disorder.

      I’ll definitely give your paper a read!

  3. ben.merendino

    Hi Tayla,

    Thanks so much for this powerful and deeply important paper. I really appreciated how you balanced the discussion — acknowledging the sense of belonging and support these communities can offer, while still highlighting the serious harm caused by unmoderated and triggering content.

    I did have a question: do you think platforms like Twitter should partner with mental health organisations to create safer versions of these communities, perhaps using verified moderators or AI to distinguish between pro-recovery and harmful content? I’d love to hear your thoughts on whether moderation alone is enough, or if we need platform-level redesigns to truly make these spaces safe.

    You could also bring in examples from other platforms (like TikTok’s content warning systems or Instagram’s ban on certain hashtags) to compare how different platforms are attempting to handle similar problems. This might give your argument more range and show what’s working (or not working) elsewhere.

    Your topic actually overlaps with mine in some ways — my paper explores how algorithm-driven filter bubbles shape the beliefs and behaviours of Australian adolescents. While I focus on echo chambers and limited worldviews, your work looks at how those same bubbles can reinforce dangerous behaviours in mental health communities. I think they really complement each other in showing how vulnerable young people are to algorithmic influence. Feel free to check it out here: https://networkconference.netstudies.org/2025/csm/5190/the-harmful-impacts-of-social-media-on-australian-adolescents/

    Really great work Tayla — this was eye-opening and incredibly well argued!

    1. Tayla Black

      Hi Ben, thanks so much for reading my paper and leaving such an insightful comment!

      You raise a really interesting point; I never thought about platform redesigns instead of just moderation, but that’s a great idea! Human moderation is probably best as some posts might slip past algorithms or AI. Your comment made me think that maybe when users go to post something against the guidelines, the post could not only be blocked but the user could also receive a notification that directs them to a free online mental health service or something like that?

      It would definitely be good to compare Twitter against other platforms, as I have noticed the content moderation seems to be more strict on other platforms but I haven’t researched it much.

      I’ll absolutely give your paper a read!

  4. danielle

    Hi Tayla,

    I was intrigued by your paper topic and I think you have written it all very well and informatively. I agree that Twitter/X has a strong influence on mental health and eating disorders in society, and your paper certainly brought new perspectives to light as well – like how you wrote about users posting about their mental illness like it’s a choice, not a diagnosed disorder. It’s unsettling how that could not be more true, yet not many people actually admit it.

    In terms of Twitter’s role, I believe they could do more to increase their positive impact on the EDtwt and SHtwt communities. So my question is: do you believe there are more ways that Twitter/X could moderate posts effectively to help with minimizing Twitter’s negative impact on the mental health communities? How do you think they would be able to do that?

    1. Tayla Black

      Hi Danielle,

      Thanks for your comment!

      I think the best starting point for Twitter to moderate posts effectively would be to block posts with graphic content (e.g. pictures/videos of self-harm, pictures of extremely underweight bodies that are clearly meant to be thinspo). I’m no expert on algorithms, but I believe that because of how much of this content is already on the platform, it would be easy to train an algorithm, especially with AI, to quickly recognise this content and take it down. As I mentioned in my paper, this content can be very triggering.

  5. Xing Bai Lai

    Hi Tayla,

    Your analysis of Twitter’s mental health impacts on people experiencing eating disorders and self-harm behaviors completely resonates with me. Through your analysis, you have shown how the platform presents hazards alongside positive effects. How important do you think stronger content moderation will be compared to broad educational campaigns for creating safer Twitter use? Looking forward to hearing what you think.

    Thanks!

  6. Yana_Chua

    Hey Tayla,
    Really insightful paper! You’ve done an excellent job highlighting the dual nature of Twitter communities like SHtwt and EDtwt, both the dangerous normalization of harmful behaviors and the potential for peer support and recovery. I especially appreciated the balanced tone and strong use of academic sources to support your arguments. The section on harm reduction within these communities was eye-opening. Great work! You mentioned that SHtwt and EDtwt can offer a sense of belonging and even harm reduction for some users, and a question came to mind: do you think platforms like Twitter should work with mental health professionals to moderate or guide these spaces rather than removing them altogether?

    1. Tayla Black

      Hi Yana,

      Thank you for taking the time to read my paper! Yes, I think collaborating with mental health professionals would be a great idea. I think the removal of these communities would do more harm than good, and they would probably end up migrating to a different platform anyway. Moderation is one good idea, but I also thought that if the platform’s algorithm recognised a harmful post, it could block the post but also provide the user with access to a mental health helpline or some other free mental health resource.

  7. 21483789

    Hi! This was such an insightful and important paper. You did a great job highlighting the dual nature of Twitter communities like SHtwt and EDtwt—how they can both offer a sense of belonging and support, while also dangerously normalising self-harm and disordered eating.

    I also appreciated your discussion of the pro-recovery side of these communities. It’s clear these spaces can help reduce isolation, but as you point out, they also risk triggering others or reinforcing harmful behaviours.

    Do you think Twitter could realistically moderate harmful content through tools like AI flagging or improved reporting, while still protecting the supportive, recovery-focused aspects of these communities?

    I would love to see your opinion on this

    Thanks again for the thought-provoking read. If you’re interested, I’d love your thoughts on my paper too—it looks at how TikTok is normalising adult content for young users without proper safeguards: https://networkconference.netstudies.org/2025/csm/5477/are-influencers-in-adult-content-impacting-minors-negatively-the-impact-of-tiktoks-strong-online-communities-on-young-people/

    1. Tayla Black

      Hi,

      Thank you for reading my paper! While I’m not an expert on algorithms, I have noticed that other platforms such as Instagram and TikTok have stricter content guidelines compared to Twitter; it’s very rare that graphic content is allowed on these sites, and even when posts do slip through they are generally taken down quite quickly. I think it’s absolutely realistic for Twitter to moderate the content better, but the difficulty would come with recognising which content is harmful and which content is just people venting or asking for advice. AI flagging could be useful here as it could be trained to recognise the difference.

      I’ll definitely give your paper a read!