ILFC seminar: interactions between formal and computational linguistics, online, 1 June 2021

[English version here]

Monthly sessions page: [here]

On 1 June 2021, the GdR LIFT is organizing a one-day online seminar on the interactions between formal and computational linguistics.
The seminar will focus in particular on the place of symbolic methods in current natural language processing systems and on the contribution of computational methods to theoretical linguistics.
The day aims to bring together members of different scientific communities from around the world and to foster cross-fertilization between their approaches.

The seminar is entirely free and will take place online via the Zoom and Gather.Town platforms.
We invite you to register as early as possible using the following form, which will allow us to send you the connection details: [here]

Program:

All times are given in Central European Summer Time (UTC+2).

  • 10:30-11:30 Juan Luis Gastaldi, ETH Zürich: "What do neural models tell us about the nature of language?"
    [abstract]
    [slides]
    [video]
  • 11:30-12:30 Koji Mineshima, Keio University: "Natural Language Inference: A View from Logic and Formal Semantics"
    [abstract]
    [video]
  • 12:30-14:00 Lunch break and Gather.Town meetup
  • 14:00-15:00 Maud Pironneau, Druide informatique: "Once Upon a Time, Linguists, Computer Scientists and Disruptive Technologies"
    [abstract]
    [slides]
  • 15:00-16:00 Marie-Catherine de Marneffe, Ohio State University: "Can neural networks identify speaker commitment?"
    [abstract]
    [slides]
    [video]
  • 16:00-16:30 Gather.Town meetup
  • 16:30-17:30 Jacob Andreas, MIT: "Language models as world models"
    [abstract]
    [slides]
    [video]
  • 17:30-18:30 Olga Zamaraeva, University of Washington: "Assembling Syntax: Modeling wh-Questions in a Grammar Engineering Framework"
    [abstract]
    [slides]
    [video]
  • 18:30-19:30 Gather.Town meetup

Abstracts:

  • Jacob Andreas (MIT, MA, USA)
    Title: Language models as world models
    Abstract: Neural language models, which place probability distributions over sequences of words, produce vector representations of words and sentences that are useful for language processing tasks as diverse as machine translation, question answering, and image captioning. These models’ usefulness is partially explained by the fact that their representations robustly encode lexical and syntactic information. But the extent to which language model training also induces representations of *meaning* remains a topic of ongoing debate. I will describe recent work showing that language models, trained on text alone, without any kind of grounded supervision, build structured meaning representations that are used to simulate entities and situations as they evolve over the course of a discourse. These representations can be linearly decoded into logical representations of world state (e.g. discourse representation structures). They can also be directly manipulated to produce predictable changes in generated output. Together, these results suggest that (some) highly structured aspects of meaning can be recovered by relatively unstructured models trained on corpus data.
    (A minimal linear-probe sketch illustrating the decoding idea appears after the abstracts.)
  • Juan Luis Gastaldi (ETH Zürich, Switzerland)
    Title: What do neural models tell us about the nature of language?
    Abstract: Recent advances in deep neural networks are having a considerable impact on the practice and methods of linguistics. But to what extent does what appears to be a mainly technical feat also concern our understanding of what language is? Although this is chiefly a philosophical question, the way we understand the nature of language can have critical consequences for both the conceptual groundings and the technical development of the field. Moreover, a clear assessment of this question can provide critical tools against ungrounded claims, as witnessed within the linguistic scientific community by the renewed interest in the relation between linguistic form and meaning. In addition, such a philosophical perspective can be relevant for extending the field’s theoretical capabilities to the study of languages other than natural language. Adopting this perspective, I will try to identify the philosophical consequences of the recent success of neural computational methods in NLP and suggest possible conceptual and technical orientations for the study of language deriving from them. I will start by discussing the problem of linguistic meaning through a reassessment of the philosophical significance of the so-called “distributional hypothesis”. In particular, against a cognitivist interpretation, I will propose to reconsider it as a weak version of the structuralist hypothesis, in which formal and empirical methods do not conflict. I will then draw some conceptual implications, namely by proposing the notion of “formal content” to characterize the semantic capabilities of distributional models and by highlighting the importance of the explicit derivation of paradigmatic units. Finally, I will suggest how the most relevant NLP advances of the last decade can contribute to elaborating an analytic framework in line with these principles.
    (A toy distributional sketch appears after the abstracts.)
  • Marie-Catherine de Marneffe (Ohio State University, OH, USA)
    Title: Can neural networks identify speaker commitment?
    Abstract: When we communicate, we infer a lot beyond the literal meaning of the words we hear or read. In particular, our understanding of an utterance depends on assessing the extent to which the speaker stands by the event they describe. An unadorned declarative like “The cancer has spread” conveys firm speaker commitment to the cancer having spread, whereas “There are some indicators that the cancer has spread” imbues the claim with uncertainty. It is not only the absence vs. presence of embedding material that determines whether or not a speaker is committed to the event described: from (1) we will infer that the speaker is committed to there being war, whereas from (2) we will infer the speaker is committed to relocating species not being a panacea, even though the clauses that describe the events in (1) and (2) are both embedded under “(s)he doesn’t believe”.
        (1) The problem, I’m afraid, with my colleague here, he really doesn’t believe that it’s war.
        (2) Transplanting an ecosystem can be risky, as history shows. Hellmann doesn’t believe that relocating species threatened by climate change is a panacea.
    In this talk, I will present the CommitmentBank, a dataset of naturally occurring discourses developed to deepen our understanding of the factors at play in identifying speaker commitment, both from a theoretical and computational linguistics perspective. I will show that current neural language models fail on items that necessitate pragmatic knowledge, highlighting directions for improvement.
  • Koji Mineshima (Keio University, Tokyo, Japan)
    Title: Natural Language Inference: A View from Logic and Formal Semantics
    Abstract: While deep neural networks (DNNs) have shown remarkable performance on a variety of tasks in natural language processing (NLP), much work remains to be done to understand the extent to which they have the ability to talk and reason like humans: abilities that have traditionally been described and elucidated in theoretical linguistics and logic. I will present recent work on probing the “systematicity” of DNN models in the domain of Natural Language Inference (NLI), focusing on the transitivity of entailment relations (inferring “A entails C” from “A entails B” and “B entails C”), one of the fundamental properties of logical inference (the “cut” rule). In particular, the work focuses on transitivity inferences composed with veridical inferences (those triggered by clause-embedding verbs), showing that current NLI models lack the generalization capacity to perform well on such transitivity inferences. I will also present recent work that tries to bridge the gap between NLP and theoretical linguistics (in particular, formal semantics) by developing a semantic parsing and logical inference system based on Categorial Grammar and Automated Theorem Proving, and I will discuss some open problems that arise for logic-based approaches to NLI in the contemporary context.
    (A toy worked example of the transitivity property appears after the abstracts.)
  • Maud Pironneau (Druide informatique, Québec, Canada)
    Title: Once Upon a Time, Linguists, Computer Scientists and Disruptive Technologies
    Abstract: At Druide informatique, we have been devising writing assistance software for over 25 years. We create text correctors, dictionaries, and writing guides, for everyone and every type of written document, available first in French and, more recently, in English. As of 2021, more than 1 million people use Antidote, our flagship product. We therefore have extensive experience in language technologies, and we know how to make linguists and computer scientists work together. This experience can be seen as both historical and paradigm-shifting: historical in that Antidote for French was created back in 1993, at that time using symbolic rules; paradigm-shifting through the use of disruptive technologies and applications for different languages. Add to this complexity constant societal evolution, a dash of language politics, rational or not, and an inherent linguistic conservatism, and you have a portrait of the important themes in our work. This talk will cover our successes as well as our failures across this field of possibilities.
  • Olga Zamaraeva (University of Washington, WA, USA)
    Title: Assembling Syntax: Modeling wh-Questions in a Grammar Engineering Framework
    Abstract: Studying syntactic structure is one way to learn about the range of variation in human languages. But without computational aid, assembling the complex and fragmented hypotheses about different syntactic phenomena quickly becomes intractable. Fully explicit formalisms like HPSG allow us to encode our hypotheses about syntax and the associated compositional semantics on a computer. We can then test these hypotheses rigorously, delimiting a clear area of applicability that can grow over time. In this talk, I will present my recent work on modeling the syntactic structure of constituent (wh-)questions for an HPSG-based grammar engineering framework called the Grammar Matrix. The Matrix includes implemented syntactic analyses which are automatically tested as a system on test suites from diverse languages. The framework helps speed up grammar development and is intended to make implemented grammar artifacts possible for many languages of the world, particularly endangered languages. In computational linguistics, the formalized syntactic representations produced by such grammars play a crucial role in creating annotations which are then used to evaluate NLP system performance. Such grammars have also been shown to be useful in applications such as grammar coaching, and advancing this line of research can contribute to educational and revitalization efforts.
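
To make the linear-decoding claim in Jacob Andreas's abstract concrete, here is a minimal sketch of a linear probe. Everything in it is an invented stand-in: the "hidden states" are random vectors rather than real language-model states, and the binary world-state fact is linearly decodable by construction.

    # Minimal linear-probe sketch (illustration only; all data are synthetic).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Stand-ins for LM hidden states: 1,000 vectors of dimension 256.
    states = rng.normal(size=(1000, 256))
    # A hypothetical binary world-state fact (e.g. "the box is open"),
    # made linearly decodable from the states by construction.
    direction = rng.normal(size=256)
    labels = (states @ direction > 0).astype(int)
    # The probe itself: a linear classifier from hidden state to fact.
    probe = LogisticRegression(max_iter=1000).fit(states[:800], labels[:800])
    print("held-out probe accuracy:", probe.score(states[800:], labels[800:]))

High held-out accuracy for such a probe is the kind of evidence the abstract appeals to: it suggests the fact is encoded linearly in the representations (here trivially so, by construction).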
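
As a toy illustration of the "distributional hypothesis" discussed in Juan Luis Gastaldi's abstract, the sketch below characterizes each word by the counts of its neighboring words; the miniature corpus is invented for the example.

    # Toy distributional sketch: a word is characterized by its contexts
    # (illustration only; the corpus and window size are invented).
    from collections import Counter

    corpus = ("the cat drinks milk . the dog drinks water . "
              "the cat chases the dog").split()
    window = 1
    contexts = {}
    for i, word in enumerate(corpus):
        neighbors = corpus[max(0, i - window):i] + corpus[i + 1:i + 1 + window]
        contexts.setdefault(word, Counter()).update(neighbors)

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u)
        norm = lambda c: sum(x * x for x in c.values()) ** 0.5
        return dot / (norm(u) * norm(v))

    # Words with similar contexts get similar vectors:
    print("cat ~ dog:   ", round(cosine(contexts["cat"], contexts["dog"]), 2))
    print("cat ~ drinks:", round(cosine(contexts["cat"], contexts["drinks"]), 2))

On this tiny corpus, "cat" and "dog" come out distributionally similar while "cat" and "drinks" do not, which is the intuition that neural distributional models scale up.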
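
As a toy worked example of the transitivity ("cut") property in Koji Mineshima's abstract, the sketch below composes entailment pairs by transitive closure. The sentences and the veridicality of "know" are textbook examples chosen for illustration, not items from the talk's data.

    # Toy transitive closure of entailment pairs (illustration only).
    def transitive_closure(pairs):
        closed = set(pairs)
        changed = True
        while changed:
            changed = False
            for a, b in list(closed):
                for c, d in list(closed):
                    if b == c and (a, d) not in closed:
                        closed.add((a, d))
                        changed = True
        return closed

    # "know" is veridical: "X knows that P" entails P.
    entailments = {
        ("Ann knows that Bob left", "Bob left"),
        ("Bob left", "Someone left"),
    }
    for premise, conclusion in sorted(transitive_closure(entailments)):
        print(premise, "=>", conclusion)

The composed pair ("Ann knows that Bob left" => "Someone left") is the kind of inference the probing work tests: a model that handles each step in isolation may still fail their composition.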

Registration form: [here]

Contact: Timothée BERNARD (prenom.nom@u-paris.fr) and Grégoire WINTERSTEIN (prenom.nom@uqam.ca)