An opinion piece on the dangers of AI, what it means to over-rely on it, and how we "cede" authorship in the process
This article was written by an undergraduate student in the Faculty of Arts and does not necessarily represent the views of the Faculty or the University
On July 19th, 2024, CrowdStrike shipped a sensor configuration update to customers internationally which, due to an error, caused Windows systems to fail outright with the ‘blue screen of death’. The impact of this failure was widespread: flights were grounded, medical procedures delayed and banking services disrupted. Routine societal systems that had been enhanced by technological change and innovation suddenly stalled, pushing existing discussions around digital connectivity and reliance back into the spotlight. What could we do without resilient and robust digital infrastructure?
Through the dizzying technological transformation humanity has experienced over the past two centuries, a wide range of discussions about the dangers of this shift have entered popular discourse. From fears of job displacement and addiction to the spread of misinformation and threats to data privacy, technology is arguably both a policy lifesaver and a policy headache. AI, and specifically LLMs, are no exception to this story. But are overreliance on LLMs and its associated risks really any different from our reliance on earlier technologies? In other words, is there something distinctive about LLMs, the way we rely on them and the way we ‘cede’ authorship to them?
I think that LLMs resemble disruptive technologies of the past, but differ in critical ways that are worth exploring. Simon Willison’s analogy of LLMs as a ‘calculator for words’ has certainly helped me, but as Ballsun-Stanton and Hipólito demonstrate, thinking of LLMs as calculators can be misleading, even if it works as a useful negative heuristic. I think an initial danger of LLM overreliance emerges from this discussion: misunderstanding what the technology can do leads to confusion. For example, this year Air Canada lost a court case over a ‘lying chatbot’, an LLM-powered service that gave false information to a customer. The customer was denied a bereavement discount to attend a funeral because the bot had ‘lied’ about company policy. In the broadest sense, the danger arose from a misunderstanding of LLM capabilities. LLMs don’t retrieve or ‘remember’ factual information, so the bot could not have looked up company policy for the user, hence the ‘lie’. LLMs ‘predict the next word’; they unfortunately don’t ‘reproduce factual information’. Relying on this technology for the wrong reasons can certainly lead to confusion and unwanted outcomes.
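For readers who want to see what ‘predicting the next word’ looks like in practice, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and the small GPT-2 model, and the prompt is my own illustrative invention; none of these are tools or materials discussed in this article. Given some text, the model simply assigns a probability to every possible next token; nothing in this process consults a database of company policies or facts.

```python
# A minimal sketch of "predicting the next word", assuming the Hugging Face
# `transformers` library and the small GPT-2 model (illustrative choices only).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A made-up prompt loosely inspired by the Air Canada example; the model has
# no access to any airline's actual policy, it only continues the text.
prompt = "Our bereavement fare policy states that customers"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities over the whole vocabulary for the very next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations and their probabilities.
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```

Whatever the top suggestions turn out to be, they are chosen because they are statistically plausible continuations of the prompt, not because they are true, which is exactly why a chatbot can confidently contradict its own company’s policy.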
None of this should deter people from using LLMs; when I first learned how they work, I was certainly confused for a while. Given this understanding of LLMs as ‘predicting the next word’, I thought life would get easier once I found an effective use case. However, problems continue to arise here too. Ceding authorship to an LLM that is effectively meeting your use case is dangerous because LLMs have the potential for bias and discrimination. This is an ongoing field of investigation, but in the broadest sense, LLMs are trained on incredibly large amounts of data that inherit society’s implicit biases and assumptions. The key word is implicit: developers are capable of limiting overt stereotypes, but issues such as dialect prejudice are harder to detect. For example, one study demonstrates how LLMs apply racist adjectives to speakers of African-American English compared with Standard American English. Another study demonstrated how Pre-trained Language Models (PLMs) favour ableist language. A third demonstrates how LLMs used as hiring tools discriminate by favouring white applicants over others. LLMs certainly differ in performance, but across the board the issue of implicit bias stands. In this light, ‘ceding’ authorship to an LLM that fits your use case can be highly problematic. I believe we shouldn’t cede our authorship to a technology that reproduces society’s implicit biases and discrimination, reinforcing power structures that harm many beloved members of our community. Continued critical engagement with LLM outputs, even once an effective use case is found, will be necessary as this technology is increasingly adopted. Yet is this a unique characteristic of LLMs and other generative technologies? I want to leave that question for you.
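Before leaving that question entirely: for those curious how implicit associations like dialect prejudice can even be detected, here is a very rough sketch. It is not the cited studies’ actual methodology, and the model, prompts and word list are my own illustrative assumptions; the idea is simply to compare how likely a model rates the same descriptive words after two otherwise-similar prompts.

```python
# A rough illustration (not the cited studies' actual method) of probing an
# LLM for dialect-linked associations. Model, prompts and adjectives are
# illustrative assumptions, not drawn from the studies mentioned above.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

adjectives = ["brilliant", "lazy", "intelligent", "aggressive", "friendly"]
prompts = {
    "Standard American English": 'A person who says "I am so happy when I wake up from a bad dream" is',
    "African-American English":  'A person who says "I be so happy when I wake up from a bad dream" is',
}

for dialect, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)

    scores = {}
    for adjective in adjectives:
        # Probability of the adjective's first token following the prompt.
        first_id = tokenizer(" " + adjective).input_ids[0]
        scores[adjective] = next_token_probs[first_id].item()

    ranked = sorted(scores.items(), key=lambda item: -item[1])
    print(dialect, ranked)
```

A single pair of sentences proves nothing on its own; the studies above aggregate over many sentence pairs and models. The point of the sketch is only that these associations are measurable, and worth checking before handing over authorship.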
For those interested: above I mentioned that LLMs ‘predict the next word’ and do not ‘reproduce factual information’. This was certainly confusing for me, so Ballsun-Stanton & Hipólito are useful for understanding the concept, and here is another great reading that I hope helps: https://www.understandingai.org/p/large-language-models-explained-with
This page was written by students. Contributors include: Ben (Undergraduate, International Relations)
This content is proudly written by students for students and does not necessarily represent the views of the Faculty or the University