Abstract
The development of Large Language Models (LLMs) as a component of systems such as ChatGPT foregrounds a range of issues which can only be analysed through novel interdisciplinary approaches. Our pilot project ‘Exploring novel figurative language to conceptualise Large Language Models’, funded by Cambridge Language Sciences, aims to help both specialists and non-specialists gain a more precise understanding of the technology and its implications. In this poster, we use ‘slop’ as a metaphor to highlight one aspect of LLMs, but situate the issue in a broader context.
We use ‘slop’ to mean text delivered to a reader which is of little or no value to them (or is even harmful), or which is so verbose or convoluted that any value is hidden. Examples of slop include over-general instructions, unnecessary terms and conditions, and spam email. The term ‘slop’ is sometimes used specifically for AI-generated content, but in our usage it predates machine-generated text. Slop arises when desiderata other than communication with readers determine text production or delivery.
Systems incorporating LLMs may become ‘supersloppers’: tools for the creation and delivery of ever more pointless text. Because so much slop already exists, and because it is often repetitious, maximising the quantity of text on which LLMs are trained results in systems which excel at the production of slop.
It is useful to think of slop as a category because it draws attention to specific ways in which the arenas we are examining are far removed from the basic setting of human conversation.