Abstract
Contemporary AI technologies offer a remarkable simulacrum of human language, to the extent that they are now in regular use in contexts ranging from computer programming to essay writing to virtual dating. Given their general effectiveness in many settings, it seems reasonable to ask whether these systems "have concepts": do they "understand" their language input and the responses that they generate? Do they "know" about the world? If they do, how would we tell? These questions have also been posed of other intelligently behaving systems, including animals, infants and children, people with atypical patterns of neuro-cognitive development, and patients suffering from cognitive and language impairments following brain injury. Over the years, scientists have devised clever ways of assessing which language and cognitive behaviors such populations can and cannot perform, and these methods provide a toolbox for evaluating contemporary AI capabilities. In this talk I will suggest that current models show patterns of success and failure unlike those of any naturally intelligent system, and that such patterns highlight aspects of human cognition and language that, though largely unrecognized in prior work, may be essential to what we mean by the word "concept."


