Luc De Raedt
Luc De Raedt's homepage
Title: Probabilistic (Logic) Programming
The tutorial will provide a motivation for, an overview of, and an introduction to the fields of statistical relational learning and probabilistic programming. These combine rich, expressive relational representations with the ability to learn, represent and reason about uncertainty. The tutorial will introduce a number of core concepts concerning representation and inference. It will focus on probabilistic extensions of logic programming languages, such as CLP(BN), BLPs, ICL, PRISM, ProbLog, LPADs, CP-logic, SLPs and DYNA, but will also discuss relations to alternative probabilistic programming languages such as Church, IBAL and BLOG and, to some extent, to statistical relational learning models such as RBNs, MLNs, and PRMs.
The concepts will be illustrated on a wide variety of tasks, including models representing Bayesian networks, probabilistic graphs, stochastic grammars, etc. This should allow participants to start writing their own probabilistic programs. We further provide an overview of the different inference mechanisms developed in the field and discuss which of these concepts each is suited to. We also touch upon approaches to learning the parameters of probabilistic programs, and mention a number of applications in areas such as robotics, vision, natural language processing, web mining, and bioinformatics.
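For a first taste of such a program, the following minimal sketch encodes a two-cause alarm Bayesian network in ProbLog, run through the open-source problog Python package; the probabilities, and the use of that package rather than the command-line tool, are illustrative assumptions, not part of the tutorial material.

    # A minimal ProbLog sketch of a two-cause alarm Bayesian network.
    # Assumes the open-source `problog` Python package (pip install problog);
    # all probabilities are made up for illustration.
    from problog.program import PrologString
    from problog import get_evaluatable

    model = """
    0.1::burglary.
    0.2::earthquake.

    % the alarm depends probabilistically on each of its causes
    0.9::alarm :- burglary.
    0.8::alarm :- earthquake.

    evidence(alarm, true).
    query(burglary).
    query(earthquake).
    """

    # Compile the program and compute the posterior probability of each query
    # given the evidence that the alarm went off.
    result = get_evaluatable().create_from(PrologString(model)).evaluate()
    for query_term, probability in result.items():
        print(query_term, probability)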
This tutorial is based on joint work and previous tutorials with Angelika Kimmig.
Paolo Frasconi
Paolo Frasconi's homepage
Title: Kernels and Deep Networks for Structured Data
Relational data occurs naturally in many domains, including natural language processing, vision, and computational chemistry and biology. In this lecture, I will give an overview of kernels and neural networks for learning with graphs and other structured data types such as sequences and trees. I will review some classic and more advanced graph kernels (including kernels capable of handling graphs labeled with continuous attributes) and different types of neural networks for relational data, including recent work on shift-aggregate-extract networks and networks for multi-multi-instance learning.
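As a concrete instance of the classic kernels mentioned above, here is a toy Python sketch of the Weisfeiler-Lehman subtree kernel; it is a sketch for illustration, not the lecture's code, and the graphs, labels, and iteration count are made up.

    # Toy Weisfeiler-Lehman subtree kernel: iteratively refine node labels by
    # hashing each label together with the sorted labels of its neighbours,
    # then compare two graphs via the dot product of their label histograms.
    from collections import Counter

    def wl_kernel(adj1, labels1, adj2, labels2, iterations=2):
        def refine(adj, labels):
            return {v: hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
                    for v in adj}

        counts1, counts2 = Counter(labels1.values()), Counter(labels2.values())
        l1, l2 = dict(labels1), dict(labels2)
        for _ in range(iterations):
            l1, l2 = refine(adj1, l1), refine(adj2, l2)
            counts1.update(l1.values())
            counts2.update(l2.values())
        # kernel value: dot product of label counts accumulated over all rounds
        return sum(counts1[lab] * counts2[lab] for lab in counts1 if lab in counts2)

    # Two small molecule-like graphs as adjacency lists with node labels.
    adj_a = {0: [1], 1: [0, 2], 2: [1]}
    lab_a = {0: 'C', 1: 'O', 2: 'C'}
    adj_b = {0: [1], 1: [0, 2], 2: [1]}
    lab_b = {0: 'C', 1: 'O', 2: 'H'}
    print(wl_kernel(adj_a, lab_a, adj_b, lab_b))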
Sebastian Riedel and Pasquale Minervini
Sebastian Riedel's homepage
Pasquale Minervini's homepage
Title: Differentiable Program Interpreters
Can computers learn how to program themselves? There is a long history of efforts in the area of program induction aimed at this goal. In this lecture, I will discuss recent developments in this area that centre around the idea of learning programs via stochastic gradient descent, in a manner that integrates seamlessly with upstream/downstream neural networks, handles noisy input, and scales well with data. At the heart of these approaches are differentiable program interpreters: neural networks that behave like symbolic execution engines and are differentiable with respect to continuous representations of their source code. We will present such interpreters, from Neural Turing Machines to differentiable theorem provers, mechanisms for training them, and their pros and cons with respect to more traditional methods.
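As a toy illustration of what "differentiable with respect to source code" means in practice, the sketch below mimics the soft unification used by differentiable theorem provers: symbols match by embedding similarity rather than exactly, so a proof score becomes differentiable in the symbol representations. The embeddings, similarity function, and symbol names are illustrative assumptions, not the lecture's code.

    # Toy soft unification in the spirit of differentiable theorem provers:
    # symbols unify with a soft score based on embedding distance, so a proof
    # score is differentiable w.r.t. the (learnable) symbol representations.
    import numpy as np

    rng = np.random.default_rng(0)
    # Embeddings for predicate symbols; in a real system these are learned.
    embedding = {s: rng.normal(size=8) for s in
                 ['grandpaOf', 'grandfatherOf', 'fatherOf', 'parentOf']}

    def soft_unify(sym1, sym2):
        # RBF similarity in (0, 1]; an exact match gives 1.
        d = np.linalg.norm(embedding[sym1] - embedding[sym2])
        return np.exp(-d ** 2)

    # The score of answering a query with a given rule head is the soft
    # unification score (here a single proof step, for simplicity).
    def proof_score(query_pred, rule_head_pred):
        return soft_unify(query_pred, rule_head_pred)

    print(proof_score('grandpaOf', 'grandfatherOf'))  # a soft match, not 0/1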
Artur d’Avila Garcez
Artur d’Avila Garcez's homepage
Title: Neural-symbolic learning
Deep learning has achieved great success at image and audio analysis, language translation and multimodal learning. Recent results, however, indicate that deep networks are susceptible to adversarial examples, are not as robust as expected, and struggle to extrapolate, especially in video and language understanding tasks. To address this problem, much research has turned to neural artificial intelligence (AI) systems capable of harnessing knowledge as well as learning from large amounts of data, including relational reasoning and rich forms of knowledge representation and memory in recurrent networks.
Neural-symbolic computing has sought to benefit from the integration of symbolic AI and neural computation for many years. In a neural-symbolic system, neural networks offer the machinery for efficient learning and computation, while symbolic knowledge representation and reasoning offer the ability to benefit from prior knowledge, transfer learning and extrapolation, and to produce explainable neural models. Neural-symbolic computing has found application in many areas, including software specification evolution, training and assessment in simulators, and the prediction and explanation of harm in gambling.
I will introduce the principles of neural-symbolic computing and exemplify its use with logic programming and with defeasible and nonmonotonic knowledge, with a specific emphasis on first-order languages and relational learning, including connectionist inductive logic programming and the combination of deep networks and full first-order logic in Logic Tensor Networks. I will conclude by outlining examples of applications where the neural-symbolic approach has been successful and by discussing the main challenges of research on neural-symbolic AI for the next decade.
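As a minimal numeric sketch of the Logic Tensor Network idea (under simplified assumptions, not the lecture's code), the snippet below grounds the first-order rule "forall x: smokes(x) -> cancer(x)": each predicate is a small differentiable model outputting a truth degree in [0, 1], the connectives are fuzzy operators, and the truth of the rule can then be maximized as a training objective.

    # A toy LTN-style grounding of: forall x, smokes(x) -> cancer(x).
    # Predicates are tiny differentiable models with truth degrees in [0, 1];
    # connectives are fuzzy operators, so rule satisfaction is differentiable.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    w_smokes, w_cancer = rng.normal(size=4), rng.normal(size=4)

    def smokes(x):   # truth degree of smokes(x)
        return sigmoid(w_smokes @ x)

    def cancer(x):   # truth degree of cancer(x)
        return sigmoid(w_cancer @ x)

    def implies(a, b):          # Reichenbach fuzzy implication: 1 - a + a*b
        return 1.0 - a + a * b

    people = rng.normal(size=(5, 4))   # feature vectors for five individuals
    # Truth degree of the universally quantified rule: an aggregate (here the
    # mean) of the implication over all groundings; maximizing it with
    # gradient ascent would train the predicate weights.
    rule_truth = np.mean([implies(smokes(x), cancer(x)) for x in people])
    print(rule_truth)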
Marco Lippi
Marco Lippi's homepage
Title: Applications of Statistical Relational Artificial Intelligence
In this hands-on lecture I will introduce a few software packages that can be used to apply Statistical Relational Learning in practice. In particular, I will focus on ProbLog, pracMLN, and cplint, providing several examples of problems that can be modeled with these tools.
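As a flavour of the hands-on material, here is a sketch of the classic probabilistic-graph example modeled in ProbLog and run through its Python package; the graph, the edge probabilities, and the use of the Python embedding are illustrative assumptions.

    # Probabilistic graph reachability in ProbLog via the `problog` package
    # (pip install problog); edges and probabilities are made up.
    from problog.program import PrologString
    from problog import get_evaluatable

    model = """
    0.6::edge(a, b).
    0.4::edge(b, c).
    0.7::edge(a, c).

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).

    query(path(a, c)).
    """

    # P(path(a, c)) sums over all ways a can reach c in the random graph,
    # without double-counting worlds where several routes exist.
    print(get_evaluatable().create_from(PrologString(model)).evaluate())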
Sriraam Natarajan
Sriraam Natarajan's homepage
Title: Human Allied Statistical Relational AI
Statistical Relational AI (StaRAI) models combine the powerful formalisms of probability theory and first-order logic to handle uncertainty in large, complex problems. While they provide a very effective representation paradigm due to their succinctness and parameter sharing, efficient learning remains a significant problem in these models. First, I will discuss state-of-the-art, representation-independent learning methods based on boosting. Our results demonstrate that learning multiple weak models can lead to a dramatic improvement in accuracy and efficiency.
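To convey the boosting idea in its simplest form, here is a propositional sketch of functional-gradient boosting for probability estimation; relational versions lift the weak models to regression trees over logical features, and the data, learning rate, and tree depth below are made up for illustration.

    # Propositional sketch of functional-gradient boosting for P(y=1|x):
    # keep an additive score F(x), and at each round fit a weak regression
    # tree to the pointwise gradient (y - p) of the log-likelihood.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    F = np.zeros(len(y))            # additive model, initialised to 0
    trees, lr = [], 0.5
    for _ in range(20):             # each round adds one weak model
        gradient = y - sigmoid(F)   # functional gradient of the log-likelihood
        tree = DecisionTreeRegressor(max_depth=2).fit(X, gradient)
        trees.append(tree)
        F += lr * tree.predict(X)

    print('train accuracy:', np.mean((sigmoid(F) > 0.5) == y))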
One of the key attractive properties of StaRAI models is that they use a rich representation for modeling the domain that potentially allows for seamless human interaction. In current StaRAI research, however, the human is restricted to either being a mere labeler or being an oracle who provides the entire model. I will present recent progress that allows for more reasonable human interaction, where the human input is taken as “advice” and the learning algorithm combines this advice with data. Finally, I will discuss more recent work on soliciting advice from humans as needed, which allows for seamless interaction with the human expert.
Mathias Niepert and Alberto García Durán
Mathias Niepert's homepage
Alberto García Durán's homepage
Title: Neural Link Prediction for Multi-Modal Knowledge Graphs
The importance of knowledge bases (KBs) for AI systems has been demonstrated numerous times. KBs provide ways to organize, manage, and retrieve structured data and allow AI systems to perform reasoning in various domains. The lecture will provide the necessary background and an introduction to knowledge graph embedding (KGE) methods for link prediction in knowledge graphs. We will also cover methods that aim at integrating other data modalities (images, numerical features, etc.) into these KGE methods. Finally, we will explore methods for link prediction in temporal knowledge graphs, where relations between entities may only hold for a time interval or a specific point in time.
Download slides of the first part | Download slides of the second part
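For concreteness, here is a toy sketch of one widely used KGE scoring function, DistMult, which scores a triple (h, r, t) by a trilinear product of its embeddings; the entities, embedding dimension, and tiny training loop are illustrative assumptions, not the lecture's code.

    # Toy DistMult link prediction: score(h, r, t) = sum(e_h * w_r * e_t),
    # trained with a logistic loss on observed vs. corrupted triples.
    import numpy as np

    rng = np.random.default_rng(0)
    entities = ['paris', 'france', 'berlin', 'germany']
    relations = ['capital_of']
    dim = 16
    E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
    R = {r: rng.normal(scale=0.1, size=dim) for r in relations}

    def score(h, r, t):
        return np.sum(E[h] * R[r] * E[t])   # trilinear product

    positives = [('paris', 'capital_of', 'france'),
                 ('berlin', 'capital_of', 'germany')]
    lr = 0.1
    for _ in range(200):
        for (h, r, t) in positives:
            t_neg = rng.choice([e for e in entities if e != t])  # corrupted tail
            for (hh, rr, tt), label in [((h, r, t), 1.0), ((h, r, t_neg), 0.0)]:
                p = 1.0 / (1.0 + np.exp(-score(hh, rr, tt)))
                g = p - label                  # gradient of the logistic loss
                gh, gr, gt = g * R[rr] * E[tt], g * E[hh] * E[tt], g * E[hh] * R[rr]
                E[hh] -= lr * gh; R[rr] -= lr * gr; E[tt] -= lr * gt

    # After training, the true tail should outrank a corrupted one.
    print(score('paris', 'capital_of', 'france') >
          score('paris', 'capital_of', 'germany'))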
Kristian Kersting
Kristian Kersting's homepage
Title: Lifted Statistical Machine Learning
Our minds make inferences that appear to go far beyond standard data science approaches. Whereas people can learn rich representations and use them for a wide range of data science tasks, data science algorithms have mainly been employed in a stand-alone context, constructing a single function from a table of training examples. In this seminar, I shall touch upon an approach to data science that can capture these human learning aspects by combining graphs, databases, and relational logic in general with statistical learning and optimization. Here, high-level (logical) features such as individuals, relations, functions, and connectives provide declarative clarity and succinct characterizations of the data science problem. While attractive from a modeling viewpoint, this declarative data science programming often considerably complicates the underlying model, potentially making it very slow to solve. Hence, I shall also touch upon ways to reduce the solver costs. One promising direction for speeding up inference is to cache local structures in the computational models. I shall illustrate this for probabilistic inference, linear programs, and convex quadratic programs, all workhorses of data science.
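A minimal illustration of the kind of saving lifted inference targets (a sketch with made-up numbers): when n ground causes of an effect are interchangeable, a counting argument replaces the enumeration of all 2^n propositional worlds.

    # Lifted vs. ground inference on n exchangeable causes of an effect:
    # P(effect) = P(at least one cause active), each active independently
    # with probability p.
    from itertools import product

    p, n = 0.1, 15

    # Ground (propositional) inference: enumerate all 2^n worlds.
    ground = sum(
        (1.0 if any(world) else 0.0)
        * (p ** sum(world)) * ((1 - p) ** (n - sum(world)))
        for world in product([0, 1], repeat=n)
    )

    # Lifted inference: the causes are interchangeable, so one counting
    # argument collapses the whole enumeration.
    lifted = 1 - (1 - p) ** n

    print(ground, lifted)   # same value; the second takes constant work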
Fabrizio Riguzzi
Fabrizio Riguzzi's homepage
Title: Probabilistic Inductive Logic Programming
Probabilistic Logic Programming is a form of Probabilistic Programming that is receiving increasing attention for its ability to combine powerful knowledge representation with Turing completeness. The lecture will give an overview of the main systems for learning probabilistic logic programs, in terms of both parameters and structure. For parameter learning, it will illustrate the systems PRISM, LeProbLog, EMBLEM and ProbLog2. For structure learning, it will present the systems ProbFOIL+ and SLIPCOVER.
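As a toy analogue of gradient-based parameter learning in the style of LeProbLog (a sketch, not any system's actual code): given a target probability for a query, the probabilities of the underlying facts can be fitted by gradient descent on the squared error between the predicted and target query probability.

    # Toy parameter learning for the program:  q :- f1.  q :- f2.
    # with probabilistic facts p1::f1 and p2::f2, so that
    # P(q) = 1 - (1 - p1) * (1 - p2). We fit p1, p2 to a target P(q).
    p1, p2 = 0.2, 0.2
    target, lr = 0.7, 0.5

    for _ in range(500):
        pq = 1 - (1 - p1) * (1 - p2)
        err = pq - target            # d(squared error)/d(pq), up to a factor 2
        # dP(q)/dp1 = (1 - p2) and dP(q)/dp2 = (1 - p1)
        p1 -= lr * err * (1 - p2)
        p2 -= lr * err * (1 - p1)
        p1 = min(max(p1, 0.0), 1.0)  # keep the parameters valid probabilities
        p2 = min(max(p2, 0.0), 1.0)

    print(p1, p2, 1 - (1 - p1) * (1 - p2))   # P(q) converges to the target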
Vaishak Belle
Vaishak Belle's homepage
Title: Effective Probabilistic Logical Reasoning and Learning in Continuous Domains
Weighted model counting (WMC) is the problem of computing the mass of a weight function over the set of models of a propositional theory, and it lies at the heart of probabilistic artificial intelligence, where a core issue is to quantify uncertainty over logically structured worlds. Many state-of-the-art algorithms dealing with discrete Bayesian networks, factor graphs, probabilistic programs, and probabilistic databases reduce their inference problem to a WMC computation. While a typical WMC inference task is to compute the partition function and marginals of a factored probability distribution, it has also been used as a subroutine for more general tasks such as automated planning.
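For concreteness, here is a brute-force sketch of WMC on a tiny made-up theory: the weighted model count sums, over every satisfying assignment, the product of the weights of the literals true in it.

    # Brute-force weighted model counting for the theory (a or b), with
    # per-literal weights; the WMC is the sum over models of the product
    # of the weights of the literals true in that model.
    from itertools import product

    weight = {('a', True): 0.3, ('a', False): 0.7,
              ('b', True): 0.6, ('b', False): 0.4}

    def theory(a, b):       # the propositional theory: a or b
        return a or b

    wmc = 0.0
    for a, b in product([True, False], repeat=2):
        if theory(a, b):
            wmc += weight[('a', a)] * weight[('b', b)]

    print(wmc)   # 1 - 0.7*0.4 = 0.72, i.e. P(a or b) under these weights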
In this talk, we report on a new computational abstraction called weighted model integration (WMI) that extends WMC to continuous and mixed discrete-continuous domains. We discuss various strategies for solving WMI effectively. We then report on ongoing work on parameter and structure learning for WMI.