Page of the monthly sessions: [here]
On June 1st, 2021, GdR LIFT is organising a one-day online seminar on the interactions between formal and computational linguistics.
In particular, the position of symbolic methods in natural language processing systems and the contribution of computational methods to theoretical linguistics will be discussed.
The seminar is intended to bring together members of diverse scientific communities from around the world and to let them share their different perspectives.
Attendance is free; the seminar will be held on Zoom and Gather.Town.
Please register as soon as possible using the following form so we can send you your login information: [here]
Program:
Times are given in Central European Summer Time (CEST, UTC+2).
- 10:30-11:30 Juan Luis Gastaldi, ETH Zürich: “What do neural models tell us about the nature of language?”
[abstract]
[slides]
[video]
- 11:30-12:30 Koji Mineshima, Keio University (18:30-19:30 UTC+9): “Natural Language Inference: A View from Logic and Formal Semantics”
[abstract]
[video]
- 12:30-14:00 Lunch break & meetup on Gather.Town
- 14:00-15:00 Maud Pironneau, Druide informatique (8:00-9:00 UTC-4): “Once Upon a Time, Linguists, Computer Scientists and Disruptive Technologies”
[abstract]
[slides]
- 15:00-16:00 Marie-Catherine de Marneffe, Ohio State University (9:00-10:00 UTC-4): “Can neural networks identify speaker commitment?”
[abstract]
[slides]
[video]
- 16:00-16:30 Meetup on Gather.Town
- 16:30-17:30 Jacob Andreas, MIT (10:30-11:30 UTC-4): “Language models as world models”
[abstract]
[slides]
[video]
- 17:30-18:30 Olga Zamaraeva, University of Washington (8:30-9:30 UTC-7): “Assembling Syntax: Modeling wh-Questions in a Grammar Engineering Framework”
[abstract]
[slides]
[video]
- 18:30-19:30 Meetup on Gather.Town
Abstracts:
- Jacob Andreas (MIT, MA, USA)
Title: Language models as world models
Abstract: Neural language models, which place probability distributions over sequences of words, produce vector representations of words and sentences that are useful for language processing tasks as diverse as machine translation, question answering, and image captioning. These models’ usefulness is partially explained by the fact that their representations robustly encode lexical and syntactic information. But the extent to which language model training also induces representations of *meaning* remains a topic of ongoing debate. I will describe recent work showing that language models—trained on text alone, without any kind of grounded supervision—build structured meaning representations that are used to simulate entities and situations as they evolve over the course of a discourse. These representations can be linearly decoded into logical representations of world state (e.g. discourse representation structures). They can also be directly manipulated to produce predictable changes in generated output. Together, these results suggest that (some) highly structured aspects of meaning can be recovered by relatively unstructured models trained on corpus data.
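For readers curious what “linearly decoded” means in practice, here is a minimal probing sketch of my own (not the speaker’s actual experimental setup): toy texts, a hypothetical “world state” label, and a linear classifier fit on a pretrained language model’s hidden states.

```python
# Minimal linear-probe sketch: extract hidden states from a pretrained language
# model and fit a linear classifier that tries to recover a toy "world state"
# label (hypothetical data; not the setup used in the talk).
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

# Toy discourses paired with a made-up label:
# 1 = "the box is open" holds at the end of the text, 0 = it does not.
texts = [
    "You open the box.",
    "You open the box, then close it.",
    "The box stays shut.",
    "You pry the lid off the box.",
]
labels = [1, 0, 0, 1]

def text_vector(text):
    """Mean-pool the final-layer hidden states of the language model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze(0).numpy()

features = [text_vector(t) for t in texts]

# If even a linear map recovers the state, the information is encoded
# fairly explicitly in the representations.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.predict(features))
```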
- Juan Luis Gastaldi (ETH, Zürich, Switzerland)
Title: What do neural models tell us about the nature of language?
Abstract: Recent advances in deep neural networks are having a considerable impact on the practice and methods of linguistics. But to what extent does what appears to be a mainly technical feat also concern our understanding of what language is? Although this is chiefly a philosophical question, the way we understand the nature of language can have critical consequences for both the conceptual groundings and the technical developments of the field. Moreover, a clear assessment of this question can provide critical tools against ungrounded claims, as witnessed within the linguistic scientific community by the renewed interest in the relation between linguistic form and meaning. In addition, such a philosophical perspective can be relevant for extending the field’s theoretical capabilities to the study of languages other than natural language. Adopting a philosophical perspective, I will try to identify the philosophical consequences of the recent success of neural computational methods in NLP and suggest possible conceptual and technical orientations for the study of language deriving from them. I will start by discussing the problem of linguistic meaning through a reassessment of the philosophical significance of the so-called “distributional hypothesis”. In particular, against a cognitivist interpretation, I will propose to reconsider it as a weak version of the structuralist hypothesis, in which formal and empirical methods do not conflict. Then, I will draw some conceptual implications, namely by proposing the notion of “formal content” to characterize the semantic capabilities of distributional models and by highlighting the importance of the explicit derivation of paradigmatic units. Finally, I will suggest how the last decade’s most relevant technical advances in NLP can contribute to elaborating an analytic framework in line with the principles outlined above.
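As a toy illustration of the distributional hypothesis discussed above (my own example, not drawn from the talk): words that occur in similar contexts end up with similar co-occurrence profiles.

```python
# Toy co-occurrence model: words are characterized by the contexts they appear
# in, and similarity of those context profiles stands in for similarity of use.
import numpy as np

corpus = [
    "the cat drinks milk",
    "the dog drinks water",
    "the cat chases the dog",
    "the dog chases the cat",
    "a child drinks milk",
]

vocab = sorted({w for sentence in corpus for w in sentence.split()})
index = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences of word pairs within each sentence.
counts = np.zeros((len(vocab), len(vocab)))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                counts[index[w], index[c]] += 1

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "cat" and "dog" share contexts (the, drinks, chases), so their rows are close;
# "cat" and "milk" share fewer contexts, so their rows are farther apart.
print(cosine(counts[index["cat"]], counts[index["dog"]]))
print(cosine(counts[index["cat"]], counts[index["milk"]]))
```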
- Marie-Catherine de Marneffe (Ohio State University, OH, USA)
Title: Can neural networks identify speaker commitment?
Abstract: When we communicate, we infer a lot beyond the literal meaning of the words we hear or read. In particular, our understanding of an utterance depends on assessing the extent to which the speaker stands by the event they describe. An unadorned declarative like “The cancer has spread” conveys firm speaker commitment to the cancer having spread, whereas “There are some indicators that the cancer has spread” imbues the claim with uncertainty. It is not only the absence vs. presence of embedding material that determines whether or not a speaker is committed to the event described: from (1) we will infer that the speaker is committed to there being war, whereas in (2) we will infer the speaker is committed to relocating species not being a panacea, even though the clauses that describe the events in (1) and (2) are both embedded under “(s)he doesn’t believe”.
(1) The problem, I’m afraid, with my colleague here, he really doesn’t believe that it’s war.
(2) Transplanting an ecosystem can be risky, as history shows. Hellmann doesn’t believe that relocating species threatened by climate change is a panacea.
In this talk, I will present the CommitmentBank, a dataset of naturally occurring discourses developed to deepen our understanding of the factors at play in identifying speaker commitment, from both a theoretical and a computational linguistics perspective. I will show that current neural language models fail on items that require pragmatic knowledge, highlighting directions for improvement.
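The CommitmentBank is publicly available; assuming the Hugging Face datasets library, where it is distributed as the “cb” task of SuperGLUE, one can inspect it roughly as follows.

```python
# Peek at the CommitmentBank (the "cb" task of SuperGLUE); assumes the
# Hugging Face `datasets` library is installed.
from datasets import load_dataset

cb = load_dataset("super_glue", "cb", split="train")
example = cb[0]
print(example["premise"])     # a discourse whose final clause is embedded (e.g. under "believe")
print(example["hypothesis"])  # the content of that embedded clause
print(example["label"])       # 0 = entailment, 1 = contradiction, 2 = neutral
```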
- Koji Mineshima (Keio University, Tokyo, Japan)
Title: Natural Language Inference: A View from Logic and Formal Semantics
Abstract: While deep neural networks (DNNs) have shown remarkable performance on a variety of tasks in natural language processing (NLP), there is still much work to be done to understand the extent to which they have the abilities to talk and reason like humans — abilities that have traditionally been described and elucidated in theoretical linguistics and logic. I will present recent work on probing the “systematicity” of DNN models in the domain of Natural Language Inference (NLI), focusing on the transitivity of entailment relations (inferring “A entails C” from “A entails B” and “B entails C”), one of the fundamental properties of logical inference (known as the “cut” rule). In particular, this work focuses on transitivity inferences composed of veridical inferences (those triggered by clause-embedding verbs), showing that current NLI models lack the generalization capacity to perform well on such transitivity inferences. I will also present recent work that tries to fill the gap between NLP and theoretical linguistics (in particular, formal semantics) by developing a semantic parsing and logical inference system based on Categorial Grammar and Automated Theorem Proving, and I will discuss some open problems that arise for logic-based approaches to NLI in the contemporary context.
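To make the notion of a transitivity inference concrete, here is a schematic example of my own (not taken from the talk) chaining a veridical inference with an ordinary entailment via the “cut” rule.

```python
# A transitivity (cut) test case built by chaining two entailments:
# a veridical step ("know" entails its complement) and a lexical/structural step.
premise_ab = ("John knows that Mary left.", "Mary left.")  # A entails B (veridicality of "know")
premise_bc = ("Mary left.", "Someone left.")               # B entails C
# Cut rule: if A entails B and B entails C, then A entails C.
assert premise_ab[1] == premise_bc[0]
derived_ac = (premise_ab[0], premise_bc[1])
print(derived_ac)  # ('John knows that Mary left.', 'Someone left.')
# An NLI model that handles each step in isolation but fails on the composed
# pair (A, C) lacks the kind of generalization the talk probes.
```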
- Maud Pironneau (Druide informatique, Québec, Canada)
Title: Once Upon a Time, Linguists, Computer Scientists and Disruptive Technologies
Abstract: At Druide informatique, we have been devising writing assistance software for over 25 years. We create text correctors, dictionaries, and writing guides for everyone and every type of written document, available first in French and, more recently, in English. As of 2021, more than 1 million people use Antidote, our flagship product. Consequently, we possess extensive experience in language technologies and we know how to make linguists and computer scientists work together. This experience can be seen as both historical and paradigm-shifting: historical in that Antidote for French was created back in 1993, at that time using symbolic rules; paradigm-shifting through the use of disruptive technologies and applications for different languages. Add to this constant societal evolution, a dash of language politics (rational or not), and an inherent linguistic conservatism, and you have a portrait of the important themes in our work. This presentation will cover our successes as well as our failures across this field of possibilities.
- Olga Zamaraeva (University of Washington, WA, USA)
Title: Assembling Syntax: Modeling wh-Questions in a Grammar Engineering Framework
Abstract: Studying syntactic structure is one of the ways to learn about the range of variation in human languages. But without computational aid, assembling the complex and fragmented hypotheses about different syntactic phenomena quickly becomes intractable. Fully explicit formalisms like HPSG allow us to encode our hypotheses about syntax and the associated compositional semantics on the computer. We can then test these hypotheses rigorously, showing a clear area of their applicability, which can grow over time. In this talk, I will present my recent work on modeling the syntactic structure of constituent (wh-)questions for an HPSG-based grammar engineering framework called the Grammar Matrix. The Matrix includes implemented syntactic analyses which are automatically tested as a system on test suites from diverse languages. The framework helps speed up grammar development and is intended to make implemented grammar artifacts possible for many of the world’s languages, particularly endangered ones. In computational linguistics, formalized syntactic representations produced by such grammars play a crucial role in creating annotations which are then used for evaluating NLP system performance. Such grammars have also been shown to be useful in applications such as grammar coaching, and advancing this line of research can contribute to educational and language revitalization efforts.
Registration form: [here]
Contact: Timothée BERNARD (firstname.lastname@u-paris.fr) and Grégoire WINTERSTEIN (firstname.lastname@uqam.ca)