Anders Søgaard

SHORT BIO

Anders Søgaard holds a dual position as Professor of Computer Science and Professor of Philosophy at the University of Copenhagen. He is a recipient of an ERC Starting Grant, a Google Focused Research Award, and a Carlsberg Semper Ardens Advance grant, and has won eight best paper awards. He has written more than 300 articles and five academic books.

TITLE

Why Most Are Wrong About LLM Understanding

ABSTRACT

I identify a fallacy common to many LLM no-go theorems of the form: LLMs cannot do X because they were designed, or trained, to do Y. I present observations that seem to challenge stochastic-parrot and database views of LLMs, as well as arguments for why, contrary to popular belief, structural similarity may be sufficient for grounding.


Alexander Koller

SHORT BIO

Alexander Koller is a Professor of Computational Linguistics at Saarland University in Saarbrücken, Germany. His research interests include planning and reasoning with LLMs, syntactic and semantic processing, natural language generation, and dialogue systems. He is particularly interested in neurosymbolic models that bring together principled linguistic modeling and correctness guarantees with the coverage and robustness of neural approaches. Alexander received his PhD from Saarland University and was previously a postdoc at Columbia University and the University of Edinburgh, faculty at the University of Potsdam, and Visiting Senior Research Scientist at the Allen Institute for AI.

TITLE

Untrustworthy and still revolutionary: Some thoughts on how LLMs are changing NLP

ABSTRACT

There is no doubt that large language models (LLMs) are revolutionizing the field of natural language processing (NLP) in many ways. There are, however, many doubts about whether this is a good thing, whether we will ever be able to overcome their inability to reliably distinguish truth from falsehood, whether there is any place left for pre-LLM models, and how to do good science anymore.

I do not have definitive answers to any of these questions, and am personally torn on many of them. In this talk, I will first discuss some recent research on the limitations of LLMs for semantic parsing and on overcoming them through the use of neurosymbolic models. I will then discuss recent work on the extent to which LLMs can capture world knowledge and apply it to planning and reasoning tasks. I will conclude with some general thoughts on science and engineering in NLP in the era of LLMs.