ILFC monthly online seminar

The GdR LIFT organizes a monthly online seminar on the interactions between formal and computational linguistics.

One of the goals of the seminar is to bring together members of different scientific communities from around the world and to foster cross-fertilization between their approaches.

The seminar is completely free of charge and takes place online via the Zoom platform.
To attend the seminar and receive information about upcoming sessions, please subscribe to the mailing list: [here]

Upcoming sessions:

  • 2022/01/18 17:00-18:00 UTC+1: Johan Bos (University of Groningen)
    Title: Variable-free Meaning Representations
    Abstract: Most formal meaning representations use variables to represent entities and the relations between them. But variables can be bothersome for people annotating texts with meanings, and for algorithms that work with meaning representations, in particular the recent machine learning methods based on neural network technology.
    Hence the question I am interested in is: can we replace the currently popular meaning representations with representations that do not use variables, without giving up any expressive power? My starting point is the representations of Discourse Representation Theory. I will show that these can be replaced by a simple language based on indices instead of variables, assuming a neo-Davidsonian event semantics. (A toy illustration of index-based clauses is sketched after this list.)
    The resulting formalism has several interesting consequences. Apart from being beneficial to human annotators and machine learning algorithms, it also offers straightforward visualisation possibilities and potential for modelling information packaging.
  • 2022/02/15 17:00-18:00 UTC+1: Najoung Kim (New York University; 11:00-12:00 UTC-5)
    Title: Compositional Linguistic Generalization in Artificial Neural Networks
    Abstract: Compositionality is considered a central property of human language. One key benefit of compositionality is the generalization it enables: the production and comprehension of novel expressions analyzed as new compositions of familiar parts. I construct a test of compositional generalization for artificial neural networks based on human generalization patterns discussed in existing linguistic and developmental studies, and test several instantiations of Transformer (Vaswani et al. 2017) and Long Short-Term Memory (Hochreiter & Schmidhuber 1997) models. The models evaluated exhibit only limited degrees of compositional generalization, implying that the inductive biases they use to fill gaps in the training data differ from those of human learners. An error analysis reveals that all models tested lack a bias towards faithfulness (à la Prince & Smolensky 1993/2002). Adding a glossing task (word-by-word translation), a task that requires maximally faithful input-output mappings, as an auxiliary training objective to the Transformer model substantially improves generalization, showing that the auxiliary training successfully modified the model’s inductive bias. However, the improvement is limited to generalization to novel compositions of known lexical items and known structures; all models still struggled with generalization to novel structures, regardless of auxiliary training. The challenge of structural generalization leaves open exciting avenues for future research for both human and machine learners. (A toy example of such a generalization split is sketched after this list.)
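
To make the first abstract above concrete: below is a minimal sketch, in Python, of what index-based (variable-free) meaning clauses might look like for “a dog barks”, under a neo-Davidsonian event semantics. The notation is a hypothetical illustration of the general idea, not the formalism presented in the talk: each clause introduces a concept, and role arguments point to other clauses by relative offset rather than by variable name.

    # Hypothetical index-based encoding of "a dog barks": roles refer to
    # other clauses by relative offset instead of by variable.
    clauses = [
        ("dog.n.01", []),                # clause 0: a dog
        ("bark.v.01", [("Agent", -1)]),  # clause 1: a barking event whose
                                         #           Agent is one clause back
    ]

    def resolve(clauses):
        """Expand relative offsets into explicit (head, role, dependent) triples."""
        return [
            (concept, role, clauses[i + offset][0])
            for i, (concept, roles) in enumerate(clauses)
            for role, offset in roles
        ]

    print(resolve(clauses))  # [('bark.v.01', 'Agent', 'dog.n.01')]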
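
Likewise, for the second abstract: here is a toy illustration of a lexical compositional-generalization split. A noun observed only in subject position during training is tested in object position; the vocabulary and templates are invented for the example, and the actual test suite is far richer.

    # Toy split (invented data): "the hedgehog" occurs only as a subject in
    # training, and only as an object in the generalization test.
    subjects = ["the dog", "the cat", "the hedgehog"]
    objects = ["the dog", "the cat"]  # "the hedgehog" withheld from object position
    verbs = ["saw", "chased"]

    train = [f"{s} {v} {o}" for s in subjects for v in verbs for o in objects]

    # Probes: a familiar noun in a structural position it was never seen in.
    test = [f"{s} {v} the hedgehog" for s in ["the dog", "the cat"] for v in verbs]

    assert not set(train) & set(test)
    print(len(train), "training sentences;", len(test), "generalization probes")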

Past sessions:

  • 2021/12/14 17:00-18:00 UTC+1: Lisa Bylinina (Bookarang, Netherlands)
    Title: Polarity in multilingual language models
    [video]
    [slides]
    Abstract: The space of natural languages is constrained by various interactions between linguistic phenomena. In this talk, I will focus on one particular type of such interaction, in which logical properties of a context constrain the distribution of negative polarity items (NPIs), like English ‘any’. A correlational (and possibly causal) interaction between logical monotonicity and NPI distribution has been observed for some NPIs in some languages in some contexts, with the help of theoretical, psycholinguistic and computational tools. How general is this relation across languages? How inferable is it from textual data alone? What kind of generalization (if any) about NPI distribution would a massively multilingual speaker form, and what kind of causal structure would guide such a speaker’s intuition? Humans speaking 100+ languages natively are hard to find, but we do have multilingual language models. I will report experiments in which we study NPIs in four languages (English, French, Russian and Turkish) in two pre-trained models: multilingual BERT and XLM-RoBERTa. We evaluate the models’ recognition of polarity sensitivity and its cross-lingual generality. Further, using the artificial language learning paradigm, we look for a connection between the semantic profiles of tokens and their ability to license NPIs. We find partial evidence for such a connection. (A minimal sketch of this kind of probe is given after this list.)
    Joint work with Alexey Tikhonov (Yandex).
  • 2021/11/16 17:00-18:00 UTC+1: Alex Lascarides (University of Edinburgh; 16:00-17:00 UTC+0)
    Title: Situated Communication
    [video]
    [slides]
    Abstract: This talk focuses on how to represent and reason about the content of conversation when it takes place in an embodied, dynamic environment. I will argue that speakers can, and do, appropriate non-linguistic events into their communicative intents, even when those events weren’t produced with the intention of being part of a discourse. Indeed, non-linguistic events can contribute (an instance of) a proposition to the content of the speaker’s message, even when her verbal signal contains no demonstratives or anaphora of any kind.
    I will argue that representing and reasoning about discourse coherence is essential to capturing these features of situated conversation. I will make two claims: first, non-linguistic events affect rhetorical structure in non-trivial ways; and second, rhetorical structure guides the conceptualisation of non-linguistic events. I will support the first claim via empirical observations from the STAC corpus (www.irit.fr/STAC/corpus.html), a corpus of dialogues between players of the board game Settlers of Catan. I will support the second claim via experiments in Interactive Task Learning: a software agent jointly learns how to conceptualise the domain, ground previously unknown words in the embodied environment, and solve its planning problem, using the evidence of an expert’s corrective (verbal) feedback on its physical actions.
  • 2021/10/12 17:00-18:00 UTC+2: Christopher Potts (Stanford University; 8:00-9:00 UTC-7)
    Title: Causal Abstractions of Neural Natural Language Inference Models
    [video]
    [slides]
    Abstract: Neural networks have a reputation for being "black boxes": complex, opaque systems that can be studied only through purely behavioral evaluations. However, much recent work on structural analysis methods (e.g., probing and feature attribution) is allowing us to peer inside these models and deeply understand their internal dynamics. In this talk, I’ll describe a new structural analysis method we’ve developed that is grounded in a formal theory of causal abstraction. In this method, neural representations are aligned with variables in interpretable causal models, and then *interchange interventions* are used to experimentally verify that the neural representations have the causal properties of their aligned variables. I’ll use these methods to explore problems in Natural Language Inference, focusing in particular on compositional interactions between lexical entailment and negation. Recent Transformer-based models can solve hard generalization tasks involving these phenomena, and our causal analysis method helps explain why: the models have learned modular representations that closely approximate the high-level compositional theory. Finally, I will show how to bring interchange interventions into the training process, which allows us to push our models to acquire desired modular internal structures like this. (A toy illustration of an interchange intervention is sketched after this list.)
    Joint work with Atticus Geiger, Hanson Lu, Noah Goodman, and Thomas Icard.
  • 2021/06/01 10:30-18:30 UTC+2: a full-day event with 6 speakers
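
As an illustration of how a polarity-sensitivity probe like the one in Lisa Bylinina’s talk can be set up: the sketch below compares the probability that a multilingual masked language model assigns to the NPI “any” in a negated (licensing) context versus an affirmative (non-licensing) one. The sentence pair is invented and the setup is greatly simplified relative to the actual experiments; it assumes the torch and HuggingFace transformers packages.

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    name = "bert-base-multilingual-cased"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForMaskedLM.from_pretrained(name).eval()

    def npi_prob(sentence):
        """Probability assigned to 'any' at the [MASK] position."""
        inputs = tok(sentence, return_tensors="pt")
        mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
        with torch.no_grad():
            logits = model(**inputs).logits
        probs = logits[0, mask_pos].softmax(-1)
        return probs[tok.convert_tokens_to_ids("any")].item()

    licensed = npi_prob("She didn't buy [MASK] books.")  # negation licenses "any"
    unlicensed = npi_prob("She bought [MASK] books.")    # no licensor present
    print(f"negated: {licensed:.4f}  affirmative: {unlicensed:.4f}")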
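
And for Christopher Potts’s talk, here is a bare-bones sketch of the mechanics of an interchange intervention, on a toy PyTorch network rather than a Transformer NLI model: cache an intermediate representation computed on a “source” input, then re-run the model on a “base” input with that representation swapped in. Because the whole layer is swapped here, the output provably tracks the source input; the actual method aligns sub-regions of neural representations with the variables of an interpretable causal model.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    base, source = torch.randn(1, 4), torch.randn(1, 4)

    cache = {}

    def save_hook(module, inputs, output):
        cache["h"] = output.detach()  # remember the source's hidden state

    def patch_hook(module, inputs, output):
        return cache["h"]  # a non-None return value replaces the output

    handle = model[1].register_forward_hook(save_hook)
    model(source)  # fills the cache with the source's ReLU activations
    handle.remove()

    handle = model[1].register_forward_hook(patch_hook)
    patched = model(base)  # base input, but with the source's hidden state
    handle.remove()

    # Downstream computation depends only on the intervened layer, so the
    # patched run reproduces the source output exactly.
    assert torch.allclose(patched, model(source))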

Contact: Timothée BERNARD (prenom.nom@u-paris.fr) and Grégoire WINTERSTEIN (prenom.nom@uqam.ca)