Abstract
Humans learn language by building on more basic conceptual and computational resources, precursors of which are already visible in infancy. These include capacities for causal reasoning, symbolic rule formation, rapid abstraction, and common-sense representations of events in terms of objects, agents, and their interactions. I will talk about steps towards capturing these abilities in engineering terms, using tools from hierarchical Bayesian models, probabilistic programs, program induction, and neuro-symbolic architectures. I will show examples of how these tools have been applied in both cognitive science and AI contexts, and point to ways in which they might be useful for building more human-like language, learning, and reasoning in machines.
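As a concrete illustration of the kind of tools named above, the following is a minimal Bayesian concept-learning sketch in the spirit of probabilistic-program and hierarchical Bayesian approaches to rapid abstraction: hypotheses are symbolic rules over the numbers 1-100, and a size-principle likelihood concentrates belief on the tightest rule consistent with a few examples. The hypothesis set, prior, and data are illustrative assumptions for this sketch, not material from the talk.

```python
# Toy Bayesian concept learning over symbolic rule hypotheses.
# Illustrative sketch only; the hypotheses, uniform prior, and data are assumptions.

from fractions import Fraction

# Each hypothesis is a named symbolic rule picking out a subset of 1..100.
UNIVERSE = range(1, 101)
HYPOTHESES = {
    "even": [n for n in UNIVERSE if n % 2 == 0],
    "odd": [n for n in UNIVERSE if n % 2 == 1],
    "multiple_of_4": [n for n in UNIVERSE if n % 4 == 0],
    "power_of_2": [n for n in UNIVERSE if n & (n - 1) == 0],
    "any_number_1_to_100": list(UNIVERSE),
}

def posterior(data, hypotheses=HYPOTHESES):
    """Return P(h | data) under a uniform prior and the size-principle
    likelihood P(data | h) = (1/|h|)^len(data) if data is a subset of h, else 0."""
    scores = {}
    for name, extension in hypotheses.items():
        members = set(extension)
        if all(x in members for x in data):
            scores[name] = Fraction(1, len(members)) ** len(data)
        else:
            scores[name] = Fraction(0)
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

if __name__ == "__main__":
    # A handful of consistent examples quickly favours the tightest covering
    # rule -- rapid abstraction from sparse data.
    for data in ([16], [16, 8, 2], [16, 8, 2, 64]):
        post = posterior(data)
        best = max(post, key=post.get)
        print(data, "->", best, float(post[best]))
```

Running the script shows the posterior shifting towards the narrow "power_of_2" rule as more consistent examples arrive, even though broader rules such as "even" also cover the data.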
Supplementary weblinks
Title
Cognitive and computational building blocks for more human-like language in machines - talk video
Description
Online talk at the 2020 Cambridge Language Sciences Annual Symposium by Professor Josh Tenenbaum (Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology)