neuralnoise.com


Homepage of Dr Pasquale Minervini
Researcher at University College London
London, United Kingdom


Call for PhD Students!

From September 2022, I will join the Institute for Language, Cognition and Computation (ILCC) at the School of Informatics, University of Edinburgh as a faculty member!

There are several fully-funded scholarships available: if you would like to work with me, make sure to apply to both the UKRI CDT in Natural Language Processing (12 4-year PhD scholarships) and the ILCC PhD Programme (10 3-year PhD scholarships). The application deadline is January 28th, 2022: it mainly applies to UK applicants, but international applicants will be considered as well. If you have any further questions, feel free to reach out! And feel free to share this ad with friends and connections who may be interested, especially if they come from under-represented groups!

I mainly work on learning from graph-structured and natural language data, hybrid neuro-symbolic models, compositional generalisation and, in general, on making Deep Learning models more data-efficient, statistically robust, and explainable. As Artificial Intelligence and Machine Learning systems become more pervasive in high-risk areas like education and healthcare, there is an increasing need for AI-based systems that we can trust.

My research focuses on filling this gap by developing Deep Learning systems that can produce explanations, learn from fewer examples, and handle out-of-distribution data such as adversarial inputs.

You may want to know a bit more about my research in these directions so far – here are a few pointers to some of my recent work. Let me know if any of it clicks with you!

Bridging Neural and Symbolic Computation

One way I am trying to address some of the limitations of modern Deep Learning models is by designing hybrid approaches that inherit the strengths of both neural and symbolic systems.

For example, let’s consider the problem of answering complex symbolic queries over potentially very large Knowledge Graphs. In our paper Complex Query Answering with Neural Link Predictors, we propose a hybrid approach that combines symbolic and neural computation. Using orders of magnitude less training data, our approach obtains significant improvements over purely neural state-of-the-art models, while also being able to produce faithful explanations for its users. This paper received an Outstanding Paper Award at ICLR 2021.
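To give a flavour of the computation involved, here is a minimal toy sketch (not the code from the paper): a conjunctive query is broken into its atoms, each atom is scored by a link predictor, and the atom scores are aggregated with a t-norm, maximising over the existentially quantified variable. The random score tensor below is just a stand-in for a trained link predictor; the full method also handles disjunctions and uses beam search or continuous optimisation rather than brute-force enumeration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_entities, n_relations = 5, 2

    # Stand-in for a trained neural link predictor:
    # scores[r, s, o] is the predicted plausibility of the triple (s, r, o).
    scores = rng.uniform(size=(n_relations, n_entities, n_entities))

    def answer_2hop(anchor, r1, r2):
        """Score every candidate answer Y for the query
        ?Y : exists X . r1(anchor, X) AND r2(X, Y)
        using the product t-norm for the conjunction and a max over
        the existentially quantified variable X."""
        hop1 = scores[r1, anchor, :]        # plausibility of each X, shape (n_entities,)
        hop2 = scores[r2, :, :]             # plausibility of each (X, Y) pair
        conjunction = hop1[:, None] * hop2  # product t-norm, shape (n_entities, n_entities)
        return conjunction.max(axis=0)      # best supporting X for every candidate Y

    answer_scores = answer_2hop(anchor=0, r1=0, r2=1)
    print("answers ranked by plausibility:", np.argsort(-answer_scores))

Note that, in a sketch like this, the explanation comes almost for free: the intermediate variable assignment that maximises the score tells you why each answer was returned.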

Or let’s consider tasks that require some form of logical deductive reasoning. Previous research shows that even BERT-based models may not generalise properly when they are required to perform new reasoning tasks. We proposed several approaches for addressing this problem by designing neural models whose behaviour mimics that of logical deductive reasoners. Our approaches enable neural models to answer multi-hop questions and to jointly learn logic rules and reasoning policies, even over massive Knowledge Bases.

More recently, we wondered whether it would be possible to incorporate black-box algorithmic components, like Dijkstra’s shortest-path algorithm or an ILP solver, into a neural model. In our paper Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions, we propose a very general (and extremely simple!) method for back-propagating through a wide variety of such components, effectively allowing neural models to use them off the shelf. See our NeurIPS 2021 presentation of this paper, as well as Yannic Kilcher’s explanation.
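To sketch the shape of the idea – with a toy top-k "solver", a made-up objective, and arbitrary hyperparameters standing in for the real thing – in the forward pass the discrete component is simply executed, possibly on perturbed inputs, while in the backward pass the gradient is estimated from the difference between the solver's output under the current parameters and under parameters nudged against the incoming gradient.

    import numpy as np

    rng = np.random.default_rng(0)

    def map_solver(theta, k=3):
        """Black-box discrete component: the MAP state of a k-subset
        distribution, i.e. the 0/1 indicator of the top-k entries of theta.
        In general, this could be a shortest-path or ILP solver instead."""
        z = np.zeros_like(theta)
        z[np.argsort(-theta)[:k]] = 1.0
        return z

    def imle_gradient(theta, dL_dz, lam=10.0, k=3):
        """Perturbation-based gradient estimate in the spirit of I-MLE:
        the difference between the solver's output under the current
        parameters and under parameters nudged against the incoming
        gradient (a 'target distribution')."""
        noise = rng.gumbel(size=theta.shape)        # perturb-and-MAP noise
        z = map_solver(theta + noise, k)
        z_target = map_solver(theta + noise - lam * dL_dz, k)
        return z - z_target

    # Toy downstream objective: make the solver select the first three items.
    target = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
    theta = rng.normal(size=6)

    for _ in range(500):
        z = map_solver(theta)
        dL_dz = 2.0 * (z - target)                  # gradient of ||z - target||^2 w.r.t. z
        theta -= 0.1 * imle_gradient(theta, dL_dz)

    print(map_solver(theta))                        # should now select the first three items

The appeal of this scheme is that the solver is only ever called, never differentiated, so any combinatorial routine with the right input/output shape can be dropped in.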

Incorporating Constraints in Neural Models

At other times, we would like a neural model to comply with a given set of constraints, coming for example from domain experts or from existing laws.

In Adversarial Sets for Regularising Neural Link Predictors, we propose the first framework for incorporating a wide family of (First-Order!) logic constraints into neural models. Our framework can also be used to produce formal robustness guarantees: in many interesting cases, we can mathematically prove that, for any possible input, the model will never violate a given set of constraints!
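As a rough illustration of how a constraint can be turned into a training signal, consider the implication "for all X, Y: hyponym_of(X, Y) implies is_a(X, Y)" together with a DistMult-style scoring function – both are made-up stand-ins here, not the paper's exact setup. One can define an inconsistency loss measuring how much the body can be scored above the head, search for the entity embeddings that maximise it (the "adversarial set"), and add the resulting worst-case violation to the training objective as a regulariser.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 10

    # Toy DistMult-style scoring function: score(s, r, o) = <e_s, w_r, e_o>.
    # The relations and the rule are illustrative stand-ins:
    #   for all X, Y :  hyponym_of(X, Y)  implies  is_a(X, Y)
    w_hyponym = rng.normal(size=dim)
    w_is_a = rng.normal(size=dim)

    def score(e_s, w_r, e_o):
        return float(np.sum(e_s * w_r * e_o))

    def violation(e_s, e_o):
        """Body score minus head score: positive values mean the pair
        (e_s, e_o) violates the implication above."""
        return score(e_s, w_hyponym, e_o) - score(e_s, w_is_a, e_o)

    # Inner loop: search for an 'adversarial set' of entity embeddings that
    # maximises the violation, via projected gradient ascent.
    e_s, e_o = rng.normal(size=dim), rng.normal(size=dim)
    for _ in range(100):
        diff = w_hyponym - w_is_a
        grad_s, grad_o = diff * e_o, diff * e_s     # gradients of the violation
        e_s, e_o = e_s + 0.1 * grad_s, e_o + 0.1 * grad_o
        e_s, e_o = e_s / np.linalg.norm(e_s), e_o / np.linalg.norm(e_o)

    inconsistency_loss = max(0.0, violation(e_s, e_o))
    # Outer loop (not shown): add this worst-case violation to the link
    # prediction loss, so that training reduces it.
    print("worst-case violation found:", inconsistency_loss)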

We explored further applications of these ideas in several settings. For example, in Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge, we show that some common-sense reasoning patterns can also be represented as constraints, and that incorporating these into neural Natural Language Inference (NLI) models yields improvements on both in-distribution and out-of-distribution data. In Gone At Last: Removing the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training, we propose a method for effectively de-biasing neural NLI models. In Undersensitivity in Neural Reading Comprehension, we find that neural Question Answering (QA) models can often ignore semantically meaningful variations in their inputs, and we analyse different ways of correcting such behaviour. In Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations, we show that models for producing natural language explanations can easily contradict themselves!

If you want to connect, feel free to come and chat with me at NeurIPS if you are attending, or just send me an e-mail!
