Risks Posed by AI and Language Models to Online Social Platforms


Abstract: Rapid advancements in Generative AI and Large Language Models have created the potential for robust semi-independent AI agents to interact with and influence online social networks. By drawing upon vast amounts of data from across the internet, new Generative AI and Large Language Models are able to convincingly mimic human interaction, reproduce biases, spread misinformation and create coherent politically radical content. From past examples of AI misuse in the form of automated ‘bot’ accounts on Twitter, we can infer that the misuse of Generative AI on online social platforms could increase political polarisation, lower the standard of discourse, and spread propaganda, misinformation and conspiracy theories. Further research into the potential harms of Generative AI is required in order to develop mitigation strategies before Generative AI use becomes widespread within online social platforms.

Introduction

Misuse of ever-improving Generative Artificial Intelligence (GEN AI) and Large Language Models (LLMs) has the potential to wreak havoc on online social platforms by infiltrating social networks and communities, spreading misinformation, undermining trust and furthering dubious political goals. The ongoing proliferation of GEN AI and LLMs such as ChatGPT raises questions regarding the role that these technologies will play in online life. This paper focuses on the ability of GEN AI to interact with online communities, and the potential harms and consequences thereof.

For the purposes of this paper, the term ‘GEN AI’ also encompasses ‘LLMs’ and Machine Learning (ML) systems, although it should be noted that not all GEN AI is necessarily an LLM.

The increasingly advanced ability of LLMs to generate coherent and dynamic content in response to user input allows misused GEN AI and LLMs to influence, disenfranchise and radicalise users of online communities and social networks. Central to this paper is the concept that AI can be made to act as a semi-independent agent able to pursue goals within the context of a social platform, such as undermining the authenticity of a community or influencing political sentiment. Without any clear way for users to identify GEN AI agents in online social communities, these agents are free to ceaselessly build and influence their own network of social relations within a platform in order to maximise outcomes indicated by the actors that deploy and direct them. This is particularly relevant to social platforms, as they are currently the main vector through which AI is able to interact with the general public in an unplanned manner.

We can extrapolate harms from past examples of the use of more primitive AI ‘bot’ accounts to influence political outcomes. From these we can determine that the use of automated tools to influence popular opinion is linked with disenfranchised and polarised communities. Other possible negative outcomes of the use of GEN AI agents within social media communities include the intentional or unintentional proliferation of misinformation, the erosion of democratic values, a decline in the integrity of public debate, and even abuse directed at users of online social platforms, with relevant social media services actively facilitating the ability of bots to interact with users (Spitale et al., 2023).

In order to respond to these dangers, it is vitally important to conduct sober research into the capabilities of GEN AI, with the goal of developing a robust legislative framework to identify and mitigate the potential harms that generative AI may cause to online social platforms. Otherwise, we may find that the next generation of online influencers of the virtual world have no counterpart in the physical world.

Increasing and Emerging Capabilities

The ability of AI actors to infiltrate online communities has been a long-standing concern for researchers and community members alike. To develop an understanding of the impact that LLMs will have on social platforms, we should consider their emerging potential to act as semi-independent agents that are able to “take on increasingly open-ended tasks that involve making and executing novel plans to optimize for outcomes in the world” (Bowman, 2023). Chan et al. (2023) characterise LLMs as having the ability to complete digital tasks with multiple steps, interact with web APIs independently of human guidance, and simulate specific human interactions, such as a conversation with Albert Einstein.

A study by Murthy et al. (2016) indicated that the risk of primitive automated bots infiltrating online social communities is potentially lower than typically estimated, due to the need to accumulate social capital in order to gain visibility to users of the social platform; the researchers also noted that the simple functions available to their bots did not afford them the ability to generate significant social capital without some degree of human intervention. However, the aforementioned capabilities could lead to bot accounts supported by LLMs being used to accumulate social capital on social networks by simulating successful influencers. This possibility is made more alarming by evidence suggesting that content generated by LLMs can be very difficult for users to distinguish from content created by a human user.

A study conducted by Brown et al. (2020) shows that humans have a mean accuracy of only 52% when attempting to identify whether a news article was produced by the 175 billion parameter GPT-3 model. Although models with lower parameter counts were more easily identified, content produced by the non-control model with the lowest parameter count could be identified with a mean accuracy of only 76%, indicating that LLM parameter count is closely related to the degree of credibility that the model output is able to simulate.

One potential barrier to predicting the risks and capabilities of GEN AI is the emergent nature of some LLM capabilities. Many features that increase the ability of LLMs to act as agents appear to emerge as a result of the development of specific user-inputted prompts and the increasing computational scale of LLMs (Chan et al., 2023). In a work attempting to predict the future economic impacts of LLMs, Eloundou et al. (2023) note that the emergent ability of LLMs to manipulate digital tools suggests that they may eventually be capable of “executing any task typically performed at a computer”. This is further exacerbated by the inability of experts to understand the internal processes of LLMs as they are used, although this is notably a current research goal (Bowman, 2023). With these capabilities in mind, we can begin to define the risks that GEN AI poses to online social platforms and their communities. As we have mostly past examples to infer from, it is important not to underestimate the degree to which previous harms from AI and bot use can be augmented by the abilities of GEN AI discussed above.

(Mis)Use Cases

The potential impacts on online social platforms should not be underestimated. Using simple zero-shot learning techniques, GPT-3 can demonstrably be manipulated into creating and reproducing politically extreme content, such as far-right manifestos and conspiracy theories (McGuffie and Newhouse, 2020). In response to this danger, OpenAI has developed a sophisticated form of content filtration to prevent such output from being surfaced. However, it is important to note that content filtering takes place after the prompt response is completed, so data that may lead to the creation of extremist content is still present in the LLM and can potentially influence the output in other ways, even if outright offensive content is rejected before it is presented to the community (Microsoft, 2023). This demonstrates that LLMs have the capacity to generate content that may fuel radicalisation, an ongoing consideration that may have further impact as more GEN AI technologies and LLMs are developed.
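
To make the sequencing concrete, the following minimal sketch (the blocklist and the placeholder generation function are invented for illustration, and this is not OpenAI's or Microsoft's actual filtering service) shows a completion being produced in full before a separate post-hoc check decides whether to surface it:

```python
# Post-hoc content filtering sketch: generation happens first, filtering second.
# BLOCKLIST and generate_response are hypothetical stand-ins, not a real API.

BLOCKLIST = {"example banned phrase"}  # hypothetical filter criteria


def generate_response(prompt: str) -> str:
    # Stand-in for an LLM completion call; no real model is invoked here.
    return f"Model output for: {prompt}"


def is_acceptable(text: str) -> bool:
    # The completion already exists, in full, by the time this check runs.
    return not any(phrase in text.lower() for phrase in BLOCKLIST)


def respond(prompt: str) -> str | None:
    completion = generate_response(prompt)  # generation happens first
    if is_acceptable(completion):           # filtering happens afterwards
        return completion
    return None  # flagged output is withheld, but the underlying model is unchanged


print(respond("Summarise today's news"))
```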

Although research into the applications and effects of newer GEN AI and LLM technologies is immature (Bowman, 2023), we can anticipate some of the potential effects on social networks and communities by extrapolating from previous examples of AI misuse and their consequences. During the 2014 Brazilian general election campaign, interlinked networks of bots (known as ‘botnets’) were involved in spreading political propaganda throughout social networks for both major presidential candidates (Arnaudo, 2017). A 2022 study of Spanish social media networks in the wake of the COVID-19 pandemic found that bot accounts tended to polarise the community more than non-bot accounts, and observed that the bot accounts tended toward aggressive and emotional attacks on the character of political figures, rather than debates about specific economic or health policy around the handling of the pandemic (Robles et al., 2022). Robles et al. (2022) go on to speculate that the bots were likely deployed with these goals in mind, noting the following impression regarding the effect on Spanish social networks:

…the polarisation-negativisation binomial is the ammunition chosen by these types of accounts to alienate and confront the parties involved in this public debate, as well as to create an environment of tension, lack of civility and attacks on those who think differently. (Robles et al., 2022)

This could lead to a “spiral of silence” effect, wherein participants in a community gain the impression that a proportion of community members will disagree with their views, causing them to withhold their opinions or withdraw altogether (Hampton, 2016). Whether or not the bots discussed by Robles et al. are augmented by GEN AI is unclear; however, it is clear that bots deployed to discuss politically charged issues in social networks do not support robust debate, and we can speculate that, coupled with the rapid advancements in GEN AI and LLMs discussed above, the negative effects of this kind of bot misuse on public debate in social network forums could be greatly amplified.

In addition to partisan propaganda, a report written for the NATO Strategic Communications Centre of Excellence identifies a further concern: strategic bot infiltration into social media communities in order to foment controversy and spread misinformation and conspiracy theories as a form of deliberate and strategic destabilisation (Nissen, 2016). As Nissen elaborates: “These effects are often associated with uncertainty and mistrust towards the existing establishment (media and political elite) and fear for the future” (Nissen, 2016). Spitale et al. (2023) show that LLMs such as GPT-3 are capable of creating more persuasive information and misinformation than human users, and that humans and GPT-3 alike are unable to reliably identify this content as LLM-generated.

Algorithmic Bias

Irrespective of GEN AI’s ability to act as an independent agent, we should also be concerned about the potential for GEN AI to influence our view of the world as it is incorporated into the social network and community streams that we view and interact with. In this way, inaccuracies and biases reproduced by GEN AI may influence our beliefs even when offline, as it begins to be incorporated into content produced by peers, or even as GEN AI agents begin to form part of our ostensible peer group (Papacharissi, 2010). It therefore becomes necessary to consider the methods used for creating and training GEN AI. LLMs, for instance, require vast amounts of language data in order to achieve their functionality. One example of a relevant dataset is ‘Common Crawl’, petabytes of language data scraped from 8 years of web content, which was notably used to train GPT-3. Although a filtered version of the Common Crawl dataset was used for this training, concerns have still been raised about biases that may be present in the remaining training data, including but not limited to content that represents white supremacist, misogynistic and other discriminatory viewpoints (Bender et al., 2021). Bender et al. (2021) argue that more research and thought must be applied to the subject in order to understand and mitigate the effects of potentially reproducing hegemonic forms through GEN AI prior to widespread use: “Thus what is also needed is scholarship on the benefits, harms, and risks of mimicking humans and thoughtful design of target tasks grounded in use cases sufficiently concrete to allow collaborative design with affected communities” (Bender et al., 2021).
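
As a simplified illustration of why filtering the training data does not fully resolve this concern, the toy sketch below (the blocklist, example documents and filtering rule are invented, and this is not the actual GPT-3 data pipeline) shows a keyword filter removing overtly offensive documents while retaining text that still expresses a discriminatory viewpoint:

```python
# Toy document-level keyword filter; not any real corpus-cleaning pipeline.
BLOCKLIST = {"slur_a", "slur_b"}  # hypothetical banned terms

documents = [
    "A neutral news report about local weather.",
    "A post using slur_a to attack a minority group.",
    "A post arguing, in superficially polite language, that one group is inherently inferior.",
]


def passes_filter(doc: str) -> bool:
    # Keep a document only if it contains none of the blocklisted terms.
    return not any(term in doc.lower() for term in BLOCKLIST)


kept = [doc for doc in documents if passes_filter(doc)]
print(kept)  # the third document survives despite expressing a discriminatory viewpoint
```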

Further to this, given the inability of humans to consistently identify content produced by GEN AI, casual analysis on the part of affected users of social media platforms may be difficult or misleading. Thus it is all the more important for swift scholarly engagement with the subject, as without this, the exact effects of GEN AI proliferation may not be known until they are already diffuse within online social platforms and communities.

Conclusion

Advancements in GEN AI fields should be of great concern for users of online social platforms. As GEN AI, and LLMs in particular, become more advanced, we run the risk of filling our social networks and communities with simulacra rather than human beings. Based on current trends, these simulacra may eventually become nearly indistinguishable from humans, and have the potential to manipulate unsuspecting users, polarise discussion, perpetuate untruths and inaccurate models of reality, or, perhaps most worryingly, accomplish all of the aforementioned outcomes while in robust pursuit of clandestine political or economic goals. As the use of GEN AI is adopted by more companies, strategic government or social bodies and even lone actors, people can begin to interact unknowingly with AI agents in their social networks and communities of preference. Unfortunately, the capabilities of GEN AI are difficult to predict. Robust regulation, supported by sober research into the capabilities and implications of GEN AI, is therefore vital to protecting the health and integrity of online social networks and communities.

 

References

Arnaudo, D. (2017). Computational propaganda in Brazil: Social bots during elections. Computational Propaganda Research Project.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623.

Bowman, S. R. (2023). Eight Things to Know about Large Language Models. https://arxiv.org/abs/2304.00612

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., & Askell, A. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901.

Chan, A., Salganik, R., Markelius, A., Pang, C., Rajkumar, N., Krasheninnikov, D., Langosco, L., He, Z., Duan, Y., Carroll, M., Lin, M., Mayhew, A., Collins, K., Molamohammadi, M., Burden, J., Zhao, W., Rismani, S., Voudouris, K., Bhatt, U., . . . Maharaj, T. (2023). Harms from Increasingly Agentic Algorithmic Systems. arXiv preprint arXiv:2302.10329. https://arxiv.org/abs/2302.10329

Murthy, D., Powell, A. B., Tinati, R., Anstead, N., Carr, L., Halford, S. J., & Weal, M. (2016). Automation, Algorithms, and Politics | Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital. International Journal of Communication, 10, 4953-4971.

Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130.

Hampton, K. N. (2016). Persistent and Pervasive Community: New Communication Technologies and the Future of Community. American Behavioral Scientist, 60(1), 101-124. https://doi.org/10.1177/0002764215601714

McGuffie, K., & Newhouse, A. (2020). The Radicalization Risks of GPT-3 and Advanced Neural Language Models. arXiv preprint arXiv:2009.06807. https://arxiv.org/abs/2009.06807

Microsoft. (2023). Content filtering. https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/content-filter

Nissen, T. E. (2016). Social Media’s Role in ‘Hybrid Strategies’. NATO Strategic Communications Centre of Excellence.

Papacharissi, Z. (2010). A Networked Self: Identity, Community, and Culture on Social Network Sites. Taylor & Francis Group. http://ebookcentral.proquest.com/lib/curtin/detail.action?docID=574608

Robles, J.-M., Guevara, J.-A., Casas-Mas, B., & Gómez, D. (2022). When negativity is the fuel. Bots and Political Polarization in the COVID-19 debate. Comunicar, 30(71), 63-75.

Spitale, G., Biller-Andorno, N., & Germani, F. (2023). AI model GPT-3 (dis)informs us better than humans. arXiv preprint arXiv:2301.11924.

 


Comments


  1. Amit.Munjal

    Hi Bysimon,

    Great topic! Intelligent, adaptive machines and robots can support us humans, make life easier and solve complicated problems, and artificial intelligence is evolving with every passing day (Sanjana De, 2021).

    The theme of “Video Killed the Radio Star” was nostalgic, with the lyrics referring to a period of technological change in the 1960s, the desire to remember the past and the disappointment that children of the current generation would not appreciate the past (Warner, 2003). I believe that this is what is happening again in the 2020s, and it would not be wrong if there was a song that read like “AI killed the language model”, with lyrics that would include ”The model tried to keep up, but could not keep up with the pace……..”. Is it the same fear that is eluding us?

    We humans have evolved by adapting quickly to changes, especially in technical environments, and hence have survived for centuries. Currently, Gen Z and beyond will find good use cases for this technology, for innovations that will assist humankind to lead better lives.

    I’d rather embrace this technology than be a dinosaur. What are your thoughts?

    Regards

    Amit

    1. simon.roberts-carroll

      Hi Amit,

      Thank you for reading my paper and commenting. I agree that AI, and LLMs in particular, have a lot of positive use cases and are evolving very quickly.

      I highly recommend reading the 2023 paper by Eloundou et al (https://arxiv.org/abs/2303.10130) on the topic. They make a very solid case for LLMs being a ‘general purpose technology’, much like printing or the steam engine. Despite noting that it has a lot of potential uses, they also recommend “societal and policy preparedness to the potential economic disruption posed by LLMs and the complementary technologies that they spawn”. The authors take for granted that generative AI is going to continue to improve and proliferate and recommend that we be prepared for it as a society.

      With this in mind, I think we have no choice but to embrace it. But at the same time we should also encourage research and legislation to identify and mitigate the negative impacts where possible.

      As you say, Gen Z is well placed to find uses for this technology. Although listening to “Video Killed the Radio Star”, I can’t help but wonder if, in a decade or so, Gen Z will feel a similar sense of nostalgia for online platforms as they are today, prior to generative AI use becoming presumably more widespread.

      Thanks,

      Simon

      1. Amit.Munjal

        Hi Simon,

        Thanks for your response and for sharing the link from Eloundou et al. (https://arxiv.org/abs/2303.10130) on the topic. I find their numbers interesting, especially that around 15% of all worker tasks in the US could be completed significantly faster at the same level of quality, and that when incorporating software and tooling built on top of LLMs, this share increases to between 47% and 56% of all tasks. Perhaps this is what is needed to get economies out of the post-COVID recession. As productivity increases, output will increase at a lower cost, thus enabling cheaper products.

        Is there a business case for AI to assist economies in coming out of the post-COVID recession?

        Regards

        Amit

        1. simon.roberts-carroll

          Hi Amit,

          Apologies for the delayed response and thanks for commenting!

          Admittedly, I don’t really think I have the economic expertise to make a meaningful comment on how LLMs might affect a recession (the topic of whether or not we are currently in a recession seems to be highly debated at the moment), but based on the research by Eloundou et al. (2023) I do think that there’s definitely a business case for adopting LLM use to increase worker efficiency.

          That said, Eloundou et al. do seem to also be concerned at the potential for lost employment and expertise due to LLM use, in their words: “Further research is necessary to explore the broader implications of LLM advancements, including their potential to augment or displace human labor, their impact on job quality, impacts on inequality, skill development, and numerous other outcomes” (Eloundou et al., 2023). As their research also demonstrates, higher wage employment seems to be more at risk than low wage employment.

          In conclusion, while the use of LLMs might increase worker efficiency, this could be offset by a loss of employee expertise and gainful employment, specifically loss of higher-wage employment, which could in turn lead to greater wealth inequality.

          Thanks again for reading and commenting on my paper!

          Cheers,
          Simon

          References:
          Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). Gpts are gpts: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130. https://arxiv.org/abs/2303.10130

        2. Amit.Munjal

          Ref:

          Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. ArXiv:2303.10130 [Cs, Econ, Q-Fin]. https://arxiv.org/abs/2303.10130

  2. Danny.Y.Chan

    Hi Simon,
    This is a really interesting and hot topic. The thought of AI infiltrating social networks and communities and spreading misinformation is scary, and it definitely got me thinking about the potential consequences of AI misuse.
    I love how you have provided examples in your paper to support and strengthen your arguments.
    Overall, a very interesting read.

    BTW, I was wondering if you have any recommendations for how individuals or organisations can protect themselves from the potential harms of AI infiltration on social platforms?

    1. simon.roberts-carroll

      Hi Danny,

      Thanks for reading my essay, I’m glad you found it to be an interesting read!

      That’s a very good question you’ve raised. In retrospect I don’t think I defined AI infiltration in the paper, but to provide a very simple definition: I would say AI infiltration in online communities involves an AI actor participating in and interacting with a community in such a way that it is mistaken for an authentic (human) member of the community. The actor can then use the community and context to influence onlookers or other community members.

      I’m fairly skeptical of the capacity for individuals to protect themselves from the effects of AI infiltration. Based on the research by Brown et al. (https://dl.acm.org/doi/abs/10.5555/3495724.3495883), humans don’t seem to be able to consistently identify content generated by more powerful LLMs (only 2% better than a coinflip for a 175B GPT model), so this effectively rules out being able to identify generative AI content by eye. So if we can’t spot AI actors, what can we do?

      Unfortunately, the tools that can be used to govern AI interaction on social media platforms are only really available to the owners of the social media platforms themselves. On an individual level, I think awareness of the likelihood and effects of AI infiltration is really the only protective measure one can take at the moment. For example, as LLMs become more prolific, I think it’s important for us to understand how and to what extent web APIs can facilitate bot interaction on social media platforms. With a solid understanding of how web APIs function and what they allow, we can begin to build personal ‘risk profiles’ to understand how vulnerable we and our communities are to AI infiltration, and advocate for changes to APIs to mitigate risks where possible, such as specifically identifying content and posts generated by leveraging APIs.
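
      To give a rough idea of what such a ‘risk profile’ might measure (the sample posts and the “source” field below are purely hypothetical, since platforms differ in what metadata, if any, they expose), something like the following could estimate how much of a feed arrives via API clients rather than the normal web or app interface:

```python
from collections import Counter

# Made-up sample of collected posts; real platforms expose different metadata.
posts = [
    {"author": "user_a", "source": "web_client", "text": "Morning everyone"},
    {"author": "acct_123", "source": "third_party_api", "text": "Breaking: ..."},
    {"author": "acct_456", "source": "third_party_api", "text": "Breaking: ..."},
]


def api_post_share(posts: list[dict]) -> float:
    # Fraction of posts created via an API client rather than the web/app UI.
    sources = Counter(p["source"] for p in posts)
    return sources["third_party_api"] / len(posts) if posts else 0.0


print(f"Share of API-originated posts: {api_post_share(posts):.0%}")
```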

      Twitter’s web API for example has undergone some rapid changes in the last few months, with the purported reason being to limit bot interaction to mostly ‘good content’ – granted these changes weren’t community-driven, but instead made in order to monetise bot access to the API. Whether or not this will have a net positive effect on bot interaction is as yet unclear. (https://www.forbes.com/sites/jenaebarnes/2023/02/03/twitter-ends-its-free-api-heres-who-will-be-affected/?sh=4aeb8c4b6266)

      This might be a slightly naive view, but I think the more aware we are of the possibility and consequences of AI infiltration, the more we can generate consensus toward online social platform policies or legislation that mitigate the negative impacts it might entail. This is also why I think it’s important for more research to be done on the topic in this era of LLM proliferation, and for this information to filter through to the general public.

      Hope this answers the question and thanks again for giving the paper a read!

      Cheers,

      Simon

      1. Deepti Azariah

        Hi Simon,

        I don’t think this is a naive view, although it is optimistic to hope that awareness of the consequences of AI infiltration will generate consensus towards regulation. Unfortunately there are always going to be those who will be happy to manipulate AI tools to their own ends. It is important that there is more research though.

        Deepti

        1. simon.roberts-carroll

          Hi Deepti,

          Thanks for responding, I agree with you there. It’s hard to imagine there being much financial incentive to regulate LLMs as well.

          That said, I’d like to add that I also think it’s important for research to filter through to the public so that most users of social platforms are aware to what degree they’re actually affected. That way we can at least avoid a situation in which most people have a vague sense that their networks and communities are rife with AI accounts, but don’t have any concrete understanding of how this might affect them or whether it’s really the case at all.

          Cheers,
          Simon

  3. Sarah.Bailey

    Hi Shane,

    This was a fantastic read, especially given the recent rise of Chat-GPT. Thank you for writing such an interesting paper!

    Your point that “the next generation of online influencers of the virtual world [may] have no counterpart in the physical world” actually reminded me of a pop-culture “controversy” (for lack of a better word) from 2016. I’m not sure if you’re familiar, but at the time, a profile had been created for a fictional character named “Lil Miquela”, whose photographs looked heavily edited (or completely generated). I remember there being a lot of discussion/investigation into whether or not the profile was a bot or a real person (and whether the person in the photos existed in any capacity). There was so much attention given to this profile due to the perception that it may be some form of AI, that the fake profile gained a substantial following and got nominated for a “Best Influencer” Streamy (an award for influencers).

    Your point has me wondering if the same thing would happen today (albeit, with an actual bot, not a fake one run by humans), given the huge amount of growth (especially on a public scale) that has happened for GEN AI. Do you think that people would even investigate potential bot accounts in this capacity anymore? Or do you think we’re already so desensitised to the prospect that we wouldn’t particularly care whether an account was real or not? Do you think AI could build an audience simply based on the fact that they weren’t human, as the people behind Lil Miquela managed to do?

    I’d love to hear your thoughts!
    Sarah

    1. simon.roberts-carroll

      Hi Sarah,

      Thanks for reading, glad you enjoyed it!

      These are some very intriguing questions. I haven’t heard of Lil’ Miquela before (despite her apparently being named one of Time’s 25 most influential people at one point). To be honest, I hadn’t even heard the term ‘Virtual Influencer’ prior to researching Lil’ Miquela either.

      Research on the topic is sparse and seems to primarily focus on the potential utility of Virtual Influencers as marketing vectors. A paper by Conti et al. (2022) seems to indicate that virtual influencers have the potential to develop and engage with large audiences, although the results on the latter metric seem to vary wildly. The opportunities and threats listed by Conti et al. (2022) I think also broadly apply to GEN AI, with the notable exception of ‘brand safety’, as despite the best efforts of OpenAI and others, GEN AI can still be fairly easily manipulated into outputting questionable content. That said, I think this indicates that there will likely be some degree of financial impetus to investigate whether or not GEN AI bot accounts have this potential.

      I wouldn’t quite say we’re desensitised to the possibility of bot accounts just yet. As GEN AI becomes more common I think we may actually become more ‘sensitised’ to it as researchers and (non-academic) investigators reveal the extent to which we could be interacting with or taking into account the opinions of autonomous AI accounts. For example, a Twitter user recently discovered that searching for tweets containing the phrase “As an AI language model” shows countless results from accounts that otherwise appear unremarkable, tweeting responses that seem to have been generated by ChatGPT and refused as a result of breaking OpenAI’s terms of service. Here is a link to an analysis of the bot network believed to be behind it. That said, please note that there may be offensive content linked or in the thread, as the reason OpenAI LLMs generally generate this error message is that they have been prompted to produce offensive or explicit content (e.g. hate speech, racism, sexism and so forth): https://twitter.com/conspirator0/status/1647672160373071877
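
      As a toy sketch of the kind of heuristic that user relied on (the sample posts below are made up; in practice they would come from a platform search or data export), you could simply scan collected posts for the telltale boilerplate phrase:

```python
# Heuristic scan for unedited LLM refusal boilerplate in collected posts.
TELLTALE = "as an ai language model"

# Made-up timeline; real data would come from a platform search or export.
timeline = [
    "As an AI language model, I cannot express personal opinions on this topic.",
    "Had a great time at the beach today!",
    "as an AI language model I am unable to comply with that request.",
]

suspect_posts = [post for post in timeline if TELLTALE in post.lower()]

for post in suspect_posts:
    print("Possible unedited LLM output:", post)
```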

      I think it’s possible for AI to build an audience based on the fact of not being human. But if we assume that the results of Virtual Influencers give us an indication of how GEN AI might perform, I think this is actually a diminishing possibility as GEN AI becomes more commonplace. Survey results on attitudes toward Virtual Influencers seem to show that novelty is a large contributing factor to their popularity, and that the majority of participants would trust a Virtual Influencer less than a ‘Real’ Influencer (Conti et al. 2022). In the authors’ words: “This section highlighted that people would follow a VI mainly for curiosity and fun, rather than to learn something or feel closer to them.” (Conti et al. 2022). That said, it’s important to consider that attitudes may change, particularly as GEN AI and prompt engineering become ever more capable. As Bowman (2023) notes, AI is not necessarily limited to human standards of ability, and therefore may eventually become better at producing novelty than most humans.

      Thanks again for reading my paper and asking a very interesting question!

      Cheers,
      Simon

      References:

      Bowman, S. R. (2023). Eight Things to Know about Large Language Models. https://arxiv.org/abs/2304.00612

      Conti, M., Gathani, J. and Tricomi, P. P. “Virtual Influencers in Online Social Media,” in IEEE Communications Magazine, vol. 60, no. 8, pp. 86-91, August 2022, doi: 10.1109/MCOM.001.2100786. https://ieeexplore.ieee.org/document/9772313

      1. Sarah.Bailey

        Hi Simon,

        Thank you for such an in-depth response! (Also, I’ve just realised in my initial message I called you the wrong name! So sorry! I think I’d just read Shane Bundoo’s paper right before yours and my wires got crossed!)

        I found it particularly interesting that you feel people will become more sensitised to autonomous AI accounts, but less interested/invested in AI influencers. There is certainly a fascinating dichotomy there, perhaps because people will be engaging more directly with bot accounts, and (as you assert) the novelty of AI will wear off after a while. I wonder what will happen when AI develops to the extent where (at least textually) it is indistinguishable from humans. ChatGPT is already so impressive, and as your paper suggests, humans are already iffy on identifying AI-generated content as is! Do you think ultimately the development of GEN AI will be a positive or negative thing?

        Sarah

        1. simon.roberts-carroll

          Hi Sarah,

          No worries, I hadn’t even realised you had called me by the wrong name before you pointed it out, to be honest!

          That’s a very good question. The paper probably gives the impression that I think it will be a negative thing, but I actually have an optimistic view of the future of GEN AI – I think it will ultimately have more positive effects than negative. If we follow the logic of Eloundou et al. (2023) and think of GEN AI as a general purpose technology in the vein of printing or the steam engine, there is considerable potential for beneficial long term economic and social outcomes. Even if it isn’t AI in the truest sense of the word, a perfect example of the utility of LLMs is the potential for them to act as ‘language calculators’ with a wide variety of applications (Willison, 2023) (https://simonwillison.net/2023/Apr/2/calculator-for-words/).

          However, I also believe it will take a lot of time and trial and error to get to that point, and in the meantime there could be significant social and economic upsets along the way. In my opinion, the key factor that modulates the overall risk profile that GEN AI presents is the speed and oftentimes unpredictable results of GEN AI development – this is why legislation, research and communication are crucial, and I hope that with these things we may be able to take the longer and hopefully safer road toward new GEN AI technologies.

          Thanks again for an interesting discussion and for reading my paper!

          Cheers,
          Simon

          References:

          Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). Gpts are gpts: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130. https://arxiv.org/abs/2303.10130

          Willison, S. (2023). Think of language models like ChatGPT as a “calculator for words”. Simon Willison’s Weblog. https://simonwillison.net/2023/Apr/2/calculator-for-words/

  4. Shane.Bundoo

    Hi Simon,

    I really enjoyed reading this, especially in light of ChatGPT’s recent growth. I appreciate you writing such a fascinating paper.

    The study underlines the necessity of cooperation among several stakeholders, including businesses, governmental organizations, and researchers, in order to address the hazards connected with generative AI. How these stakeholders can collaborate productively to create mitigation measures that protect the well-being and integrity of online social networks remains to be seen.

    A concentrated effort is needed to tackle the problems that Generative AI presents. By instituting ethical policies and norms for AI development, usage, and content control, businesses may play a significant role. Government agencies may help by developing laws and rules that support openness, responsibility, and moral application of AI technology. Researchers may actively look into the effects of generative artificial intelligence and create methods for spotting and minimizing such risks.

    The key is effective cooperation between these parties. Regular interaction, information exchange, and multidisciplinary study can aid in the creation of thorough mitigation plans. For the inclusive administration of online social networks to be ensured, open communication with user groups and civil society organizations is also crucial.

    Stakeholders may work together to address the threats posed by generative AI, advance ethical behavior, and maintain the strength and integrity of online social networks through promoting collaboration.

    I’d be interested in hearing your thoughts on this.

    Regards,
    Shane

    1. simon.roberts-carroll

      Hi Shane,

      Thank you for reading my paper. It’s great to hear that you found it to be enjoyable.

      I broadly agree with all of the points that you raise in the above comment. The risks represented by generative AI development require collaboration from business, government and academia to ensure that risks are mitigated as they arise.

      I think one of the issues at play that might be preventing mitigation measures from being considered is the sheer pace that generative AI seems to be developing at. It’s understandably quite difficult to draft legislation for technology that is so poorly understood. Thus it’s important for constant research and collaboration between stakeholder parties, as you say.

      Thankfully there have been some recent developments that indicate that risk mitigation measures are being considered by the powers that be, such as the ‘Blueprint for an AI Bill of Rights’ released by the Biden administration: https://www.whitehouse.gov/ostp/ai-bill-of-rights/

      With that being said, I think there is definitely a danger that online communities are left to fend for themselves as legislation focuses on certain more overtly harmful aspects of generative AI (e.g. LLMs being able to provide users with recipes for chemical weapons, among other such things) and less on the more open-ended effects of allowing generative AI to interact with users of social networks in various ways. With that in mind, it’s also important for users of social media networks to push for developers to implement methods to control and monitor bot activity on their networks.

      In the end, I believe the social media landscape will likely be reshaped by generative AI to some extent even with a cautious approach to generative AI development and legislation.

      Thanks again for reading my paper. All the best!

      Cheers,
      Simon

  5. Avinash Assonne

    Hi Simon,

    Your paper was an interesting read. GenAI definitely comes with many risks which need to be addressed and confronted. Your paper did a very good job of elaborating on such risks, providing some concrete examples as well. I liked your discussion with Danny, and it’s true that sometimes it’s not an easy task to distinguish between real content (created or written by real people) and AI-generated content. Do you think that a way of moderation should be implemented to moderate such content? But then it would involve human intervention somehow, in which case it would not be fully AI-generated content anymore. I think there is an urgent need to more carefully control the spread of these approaches and their effects on society and the economy. The development of generative AI technology and solutions can only proceed with greater consideration and benefit when reliable checks and balances are in place.

    Regards,
    Avinash

    1. simon.roberts-carroll

      Hi Avinash,

      Thanks for reading and commenting on my paper – much appreciated!

      Absolutely agree with the points that you raise. I think one key point that I didn’t actually mention in the paper is that when gauging the degree to which users are able to identify LLM-created content, Chan et al. (2023) also measured the confidence that people had that they could identify LLM-created content before and after the study. Unsurprisingly, the participants were a lot less confident after actually taking part in the study (Chan et al., 2023). Most people seem to underestimate how difficult it can be to spot content produced by LLMs.

      I think moderation is definitely something that should be considered, although as you say, it may not be viable for humans to be that involved in the output of LLMs, partially also due to sheer quantity of content that will likely be produced by LLMs in the future. It has been suggested that LLMs also be used to moderate LLM output. This seems somewhat dubious to me, but I can’t really support that opinion with any evidence.

      At the very least I think posts created by AI need to be able to be identified within the web apps themselves (for example by identifying posts that are created through API calls). To that end, I believe developers of social media platforms would need to take a much more active role in monitoring and controlling GEN AI interaction on their platforms. At least then we can determine the need and viability of mitigation strategies such as moderation. We can’t solve a problem that we can’t measure.
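
      As a toy example of what that might look like (the field names and the ‘api_key’ check below are invented, not any real platform’s design), a post could be tagged at ingestion time based on how the request was authenticated, with the tag surfaced to users as an ‘automated’ label:

```python
def ingest_post(text: str, auth_method: str) -> dict:
    # Store a post and record whether it arrived via an API key
    # or an interactive user session. Purely illustrative.
    return {
        "text": text,
        "automated": auth_method == "api_key",  # could be shown to users as a label
    }


print(ingest_post("Hello world", auth_method="api_key"))
print(ingest_post("Hello world", auth_method="session_cookie"))
```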

      Once again, thanks for reading and commenting on my paper – all the best!

      Cheers,
      Simon

      Reference(s):

      Chan, A., Salganik, R., Markelius, A., Pang, C., Rajkumar, N., Krasheninnikov, D., Langosco, L., He, Z., Duan, Y., Carroll, M., Lin, M., Mayhew, A., Collins, K., Molamohammadi, M., Burden, J., Zhao, W., Rismani, S., Voudouris, K., Bhatt, U., . . . Maharaj, T. (2023). Harms from Increasingly Agentic Algorithmic Systems. arXiv pre-print server. https://doi.org/None arxiv:2302.10329

  6. Avinash Assonne

    Hello Simon,

    Thank you for this detailed and prompt response. You are right! As you correctly stated, given the increasing amount of content (which will keep on increasing anyway) produced by LLMs, it will indeed be quite a complicated task to monitor and moderate such an enormous amount of generated content. Maybe this might actually provide even more job opportunities for humans? haha who knows! It is unclear how LLMs will affect human labor.

    Do you believe that understanding the best practices for applying LLMs in our line of work would likely be a good idea in the near future? We might discover that they aren’t very useful and that our jobs are safe, or we might discover that these models and the myriad of apps that will soon be built on top of them enable us to perform our jobs more effectively. LLMs still cannot model the judgements of real people and they lack emotional intelligence as well. Regarding the part where you stated, “It has been suggested that LLMs also be used to moderate LLM output”: yes, this does seem a bit dubious to me as well.

    Regards,
    Avinash

    1. simon.roberts-carroll

      Hi Avinash,

      Thanks for the quick response – agree with your points here as well, particularly in relation to the effects that LLMs might have on the employment market.

      If there’s one thing that’s clear about LLMs at the moment, it’s that they have very broad applications across some potentially very well paid fields. Eloundou et al. (2023) demonstrate that GPT-4 in particular is able to produce very competent content across a wide variety of disciplines.

      I definitely agree that it’s a good idea to learn how LLMs might be applied to our various lines of work. Would highly recommend Simon Willison’s weblog for assisting with this – he provides a lot of in-depth information on his process of learning how to use LLMs. Although most of his work with LLMs relates to his role in software development, the ‘prompt engineering’ techniques that he discusses can be broadly applied for use in other fields. Link: https://simonwillison.net/2023/Feb/21/in-defense-of-prompt-engineering/

      Thanks again for reading the paper and commenting!

      Cheers,
      Simon

      Reference(s)
      Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). Gpts are gpts: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130. https://arxiv.org/abs/2303.10130
