RuleML+RR 2017 Conference

Online Springer Proceedings

Online CEUR Proceedings

Accepted Papers

  • Abdelraouf Hecham, Pierre Bisquert and Madalina Croitoru. On the Chase for All Provenance Paths With Existential Rules
  • Christian Meilicke, Daniel Ruffinelli, Andreas Nolle, Heiko Paulheim and Heiner Stuckenschmidt. Fast ABox Consistency Checking using Incomplete Reasoning and Caching
  • Daniela Briola and Viviana Mascardi. Can My Test Case Run on Your Test Plant? A Logic-Based Compliance Check and its Evaluation on Real Data
  • Enrique Matos Alfonso and Giorgos Stamou. Rewriting Queries with Negated Atoms
  • Gines Moreno and José Antonio Riaza Valverde. An Online Tool for Tuning Fuzzy Logic Programs
  • Domenico Cantone, Marianna Nicolosi-Asmundo and Daniele Francesco Santamaria. A set-theoretic approach to ABox reasoning services
  • Rafael Peñaloza. Inconsistency-tolerant Instance Checking in Tractable Description Logics
  • Sebastian Binnewies, Zhiqiang Zhuang and Kewen Wang. Three Methods for Revising Hybrid Knowledge Bases
  • Daniel Gall and Thom Fruehwirth. A Decidable Confluence Test for Cognitive Models in ACT-R
  • Sergey Paramonov, Daria Stepanova and Pauli Miettinen. Hybrid ASP-based Approach to Pattern Mining
  • Doerthe Arndt, Ben De Meester, Anastasia Dimou, Ruben Verborgh and Erik Mannens. Using Rule Based Reasoning for RDF Validation
  • Emanuele De Angelis, Fabio Fioravanti, Maria Chiara Meo, Alberto Pettorossi and Maurizio Proietti. Verifying Controllability of Time-Aware Business Processes
  • Diego Calvanese, Marlon Dumas, Fabrizio Maria Maggi and Marco Montali. Semantic DMN: Formalizing Decision Models with Domain Knowledge
  • Marco Alberti, Marco Gavanelli, Evelina Lamma, Fabrizio Riguzzi and Riccardo Zese. Dischargeable Obligations in Abductive Logic Programming
  • Dimitra Zografistou. ArgQL: A Declarative Language for Querying Argumentative Dialogues


Keynotes and speeches

  
Bob Kowalski (Imperial College London)

Logic and AI – The Last 50 Years
Fifty years ago, in 1967, the research community working in Logic and AI was tiny. Everyone knew everyone else, and most communication between researchers was by word of mouth or correspondence. If you didn’t work at Stanford, MIT, Carnegie Mellon or Edinburgh, you were seriously disadvantaged.

In 1967, Pat Hayes and I started our PhDs in Edinburgh. Bernard Meltzer was our PhD supervisor, and John Alan Robinson, who developed resolution logic, was in Edinburgh on sabbatical. Bernard started the Journal of Artificial Intelligence in 1970, and co-edited the influential series of Machine Intelligence Workshop Proceedings with Donald Michie. The workshops attracted a wide range of researchers, mainly from the UK and USA, including Cordell Green and John McCarthy from Stanford. There was much excitement about Cordell’s application of resolution to a wide range of AI applications.

The high hopes for resolution logic were challenged by researchers at MIT, including Marvin Minsky, Carl Hewitt, Gerald Sussman and Terry Winograd. The resulting intellectual skirmishes between advocates of logical and procedural approaches to knowledge representation led to the development of logic programming and Prolog in 1972. My contributions to this development were greatly assisted by visits to Alain Colmerauer in Marseille.

Over the subsequent years, we have seen the Fifth Generation Project with its focus on applying logic programming to AI, the rise of the internet, and now big data and deep learning. But the fundamental challenge to Computer Science of the relationship between declarative (logical) and imperative (procedural) representations is still unresolved.

Motivated by this challenge, Fariba Sadri and I are developing LPS (Logic-based Production System) as an imperative language with a logical interpretation. Programs include logic programs, interpreted as beliefs, and a logical reconstruction of production rules, interpreted as goals. Computation executes actions to satisfy a global imperative of making the goals true in a model of the world determined by the beliefs. There is an online prototype of LPS, developed in the CLOUT (Computational Logic for Use in Teaching) project, to teach logic and computing to children.
  
Stephen Muggleton (Imperial College London)

Meta-Interpretive Learning: Achievements and Challenges
Meta-Interpretive Learning (MIL) is a recent Inductive Logic Programming technique aimed at supporting the learning of recursive definitions. A powerful and novel aspect of MIL is that when learning a predicate definition it automatically introduces sub-definitions, allowing decomposition into a hierarchy of reusable parts. MIL is based on an adapted version of a Prolog meta-interpreter. Normally such a meta-interpreter derives a proof by repeatedly fetching first-order Prolog clauses whose heads unify with a given goal. By contrast, a meta-interpretive learner additionally fetches higher-order meta-rules whose heads unify with the goal, and saves the resulting meta-substitutions to form a program. This talk will give an overview of theoretical and implementational advances in this new area, including the ability to learn Turing-computable functions within a constrained subset of logic programs, the use of probabilistic representations within Bayesian meta-interpretive learning, and techniques for minimising the number of meta-rules employed. The talk will also summarise applications of MIL, including the learning of regular and context-free grammars, learning from visual representations with repeated patterns, learning string transformations for spreadsheet applications, learning and optimising recursive robot strategies, and learning tactics for proving correctness of programs. The talk will conclude by pointing to the many challenges which remain to be addressed within this new area.
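The contrast described above — fetching clauses versus fetching meta-rules and saving meta-substitutions — can be sketched in a toy Python setting. This is a minimal illustration, not MIL's actual implementation: the single "chain" meta-rule, the family facts, and all names are invented assumptions.

```python
# A toy illustration of the meta-interpretive learning idea: search for
# meta-substitutions into a higher-order meta-rule that make a positive
# example derivable, and save each as a learned first-order clause.
# The "chain" meta-rule and the family data are invented for illustration.
from itertools import product

# Background knowledge: ground facts, keyed by predicate name.
facts = {
    "parent": {("ann", "bob"), ("bob", "carol")},
}

def holds_chain(q, r, x, y):
    """Does some z exist with q(x, z) and r(z, y) among the facts?"""
    constants = {c for pair in facts[q] for c in pair}
    return any((x, z) in facts[q] and (z, y) in facts[r] for z in constants)

def learn(target, example):
    """Search meta-substitutions for the chain meta-rule
    P(X,Y) :- Q(X,Z), R(Z,Y), keeping those that prove the example."""
    x, y = example
    learned = []
    for q, r in product(facts, repeat=2):
        if holds_chain(q, r, x, y):
            # Save the meta-substitution as a first-order clause.
            learned.append(f"{target}(X,Y) :- {q}(X,Z), {r}(Z,Y)")
    return learned

print(learn("grandparent", ("ann", "carol")))
# A chain of two parent facts covers the example.
```

A real MIL system searches over many meta-rules and recursively invents auxiliary predicates; this sketch only shows the core move of turning a meta-substitution into a saved clause.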
  
Jordi Cabot (IN3-UOC, Barcelona)

The Secret Life of Rules in Software Engineering (sponsored by EurAI)
Explicit definition and management of rules is largely ignored in most software development projects. While the most “popular” software modeling language (UML) enjoys some measure of success in real-world software projects, its companion, the Object Constraint Language (the OMG standard to complement UML models with textual constraints and derivation rules), is largely ignored. As a result, rules live hidden in the code, implemented in an ad-hoc manner.

This somehow worked when data was mostly stored in relational databases and DBAs could at least enforce some checks on that data. But now data lives in the open (e.g., data as a service, big data), accessible in a variety of formats (NoSQL, APIs, CSVs, …). This evolution facilitates the consumption and production of data but puts at risk any piece of software accessing it, at least when no proper knowledge of the structure, quality and content of that data is available. And with the emergence of open data, it is not only software that accesses the data but people as well.

In this talk, I will argue that rules must become first-class citizens in any software development project and describe our initiatives in discovering, representing and enforcing rules on (open and/or semi-structured) data.
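The kind of hidden, ad-hoc rule the talk refers to can instead be made explicit and declarative. A minimal Python sketch of rules as first-class objects over semi-structured records (the field names and the concrete rules are invented for illustration):

```python
# A minimal sketch of treating data rules as first-class, declarative
# objects rather than burying them in application code. The record
# fields and the concrete rules are invented for illustration.

# Each rule: (name, predicate over a record).
rules = [
    ("age is non-negative", lambda r: r.get("age", 0) >= 0),
    ("email has a domain",  lambda r: "@" in r.get("email", "")),
]

def validate(record):
    """Return the names of the rules the record violates."""
    return [name for name, check in rules if not check(record)]

print(validate({"age": -3, "email": "bob"}))
# Both rules fire on this record.
```

Because the rules are data, not code paths, they can be listed, audited, and enforced uniformly across every consumer of the records.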
  
Elena Baralis (Politecnico di Torino)

Opening the Black Box: Deriving Rules from Data
A huge amount of data is currently being made available for exploration and analysis in many application domains. Patterns and models are extracted from data to describe their characteristics and predict variable values. Unfortunately, many high-quality models are hard to interpret. Rules mined from data may provide easily interpretable knowledge, both for exploration and for classification (or prediction) purposes. In this talk I will introduce different types of rules (e.g., several variations on association rules, classification rules) and will discuss their capability of describing phenomena and highlighting interesting correlations in data.
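To make "rules mined from data" concrete, a brute-force association-rule extractor over tiny market-basket data can be sketched in Python. The transactions and thresholds are invented for illustration; real miners use algorithms such as Apriori or FP-growth and handle itemsets of any size.

```python
# Toy association-rule mining: enumerate X -> Y rules over item pairs
# and keep those meeting minimum support and confidence thresholds.
# Transactions and thresholds are invented for illustration.
from itertools import combinations

transactions = [
    {"bread", "butter"},
    {"bread", "butter", "milk"},
    {"bread", "milk"},
    {"butter", "milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def rules(min_support=0.5, min_confidence=0.6):
    """Enumerate lhs -> rhs rules over item pairs meeting both thresholds."""
    items = sorted(set().union(*transactions))
    out = []
    for x, y in combinations(items, 2):
        for lhs, rhs in ((x, y), (y, x)):
            s = support({lhs, rhs})
            if s >= min_support and s / support({lhs}) >= min_confidence:
                out.append((lhs, rhs, round(s, 2)))
    return out

print(rules())
```

Each emitted rule reads "customers who buy lhs also tend to buy rhs", with its support — exactly the kind of directly interpretable knowledge the talk contrasts with black-box models.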
 
  
Eric Mazeran (IBM)

Machine Learning, Optimization and Rules: Time for Agility and Convergences
Dr. Eric Mazeran heads Decision Optimization R&D within IBM, developing CPLEX, CP Optimizer, Decision Optimization Center, etc. He is responsible for delivering innovative market-leading products based on Optimization & AI technologies in the area of Prescriptive Analytics.
Besides several executive roles in software development within IBM and ILOG, including leading the R&D of Business Rules some years ago, Eric has a strong technical background in Artificial Intelligence (author of expert systems, expert system shells, knowledge acquisition systems, and rules engines) and software architectures (author of object-oriented languages, discrete-event simulation software, network management system software, etc.), and has accrued 25+ years of experience applying AI and decision automation with many different companies and industries.
Eric holds a PhD in Artificial Intelligence from the French National Institute of Applied Sciences (INSA), a Master’s in Robotics and Computer Science (INSA) and a Master’s/Engineer’s Degree in Civil Engineering (ENTPE). Eric is passionate about applying findings in architecture and epistemology to AI technology and software.

 
 

Tutorials

 
Decision Modeling with DMN and OpenRules  (1h45m)
Jacob Feldman
This tutorial will introduce the major business decision modeling concepts in the Decision Model and Notation (DMN) standard – see http://www.omg.org/spec/DMN/Current/. We will demonstrate the practical use of DMN by implementing various decision models using a popular open source Business Rules and Decision Management system, “OpenRules” (http://openrules.com). We will start with the creation and testing of a simple decision model oriented to business people only. Then we will explain how the tested decision models can be integrated with IT systems, and develop several more complex decision models demonstrating the power and applicability of different decision modeling constructs. We will end with the development of custom decisioning constructs that go beyond the DMN standard but support real-world decision modeling needs. All demonstrated decision models will be executed and analyzed with the audience during the presentation.
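The central DMN construct the tutorial builds on, the decision table, can be mimicked in plain Python. The loan-eligibility columns and thresholds below are invented for illustration and are not taken from the DMN standard or OpenRules; the evaluation order corresponds to DMN's "first" hit policy.

```python
# A sketch of a DMN-style decision table: ordered rows of conditions
# mapping inputs to an output, evaluated top to bottom ("first" hit
# policy). The loan-eligibility rules are invented for illustration.

decision_table = [
    # (condition over (age, salary), output)
    (lambda a, s: a < 18,             "rejected: underage"),
    (lambda a, s: s >= 50000,         "approved"),
    (lambda a, s: 30000 <= s < 50000, "manual review"),
    (lambda a, s: True,               "rejected: low income"),  # default row
]

def decide(age, salary):
    """Return the output of the first matching row."""
    for condition, outcome in decision_table:
        if condition(age, salary):
            return outcome

print(decide(25, 60000))  # approved
print(decide(16, 60000))  # rejected: underage
```

In DMN the same table would be drawn with input columns, an output column, and a declared hit policy, so business users can read and edit it without touching code.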
 
How to do it with LPS (Logic-Based Production System)  (3h)
Robert Kowalski, Fariba Sadri, Miguel Calejo
CLOUT is an open-source, web-based prototype of the computer language LPS (Logic-based Production System), implemented in SWISH. LPS includes both logic programming, which underpins the computer language Prolog, and a logical reconstruction of production systems, which are, arguably, the most popular computational model of human thinking. LPS fills the gap between imperative and logical languages by viewing computation as generating state-transforming actions to make goals, represented in logical form, true. This combination of logic and change of state makes LPS not only a programming, database, and AI knowledge representation and problem-solving language, but also a scaled-down model of human thinking. The tutorial will present LPS by means of the web-based implementation, CLOUT, using examples from programming, databases and AI such as sorting, the dining philosophers, bank account maintenance, map colouring, the blocks world, and the prisoner’s dilemma. It will demonstrate the relationship between LPS and other approaches to computing such as production systems, reactive systems, abstract state machines and BDI agent languages. Moreover, it will show the close relationship between LPS, MetaTem, Transaction Logic and Abductive Logic Programming (ALP).
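The LPS view of computation — generating state-transforming actions to make logically represented goals true — can be caricatured in a few lines of Python. This is only a sketch of the execution cycle; the fluents, goals, and actions are invented, and real LPS/CLOUT programs are written in a logic-based syntax, not Python.

```python
# A caricature of the LPS execution cycle: on each step, compare goals
# (conditions that should hold) against the current state, and execute
# state-transforming actions for the goals that do not yet hold.
# The fluents and actions are invented for illustration.

def cycle(state, goals, actions, max_steps=5):
    """Repeatedly fire actions for unsatisfied goals until all hold."""
    trace = []
    for _ in range(max_steps):
        unsatisfied = [g for g in goals if state.get(g[0]) != g[1]]
        if not unsatisfied:
            break                       # every goal is true in the state
        for goal in unsatisfied:
            actions[goal](state)        # execute the state-transforming action
            trace.append(goal)
    return state, trace

state = {"light": "red"}
goals = [("light", "green")]            # imperative: make the light green
actions = {("light", "green"): lambda s: s.update(light="green")}

print(cycle(state, goals, actions))
```

The point of the sketch is the direction of fit: goals stay declarative, and the operational layer exists only to make them true in the evolving state, which is the gap-filling idea between imperative and logical languages described above.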
 
Logic-based Rule Learning for the Web of Data  (1h45m)
Francesca A. Lisi
The tutorial introduces Inductive Logic Programming (ILP), a major logic-based approach to rule learning, and surveys extensions and applications of ILP to the Web of Data.
 
Rulelog: Highly Expressive Semantic Rules with Scalable Deep Reasoning  (3h30m)
Benjamin Grosof, Michael Kifer, Paul Fodor
In this half-day tutorial, we cover the fundamental concepts, key technologies, emerging applications, recent progress, and outstanding research issues in the area of Rulelog, a leading approach to fully semantic rule-based knowledge representation and reasoning (KRR). Rulelog matches well many of the requirements of cognitive computing. It combines deep logical/probabilistic reasoning tightly with natural language processing (NLP), and complements machine learning (ML). Rulelog interoperates and composes well with graph databases, relational databases, spreadsheets, XML, and expressively simpler rule/ontology systems – and can orchestrate overall hybrid KRR. Developed mainly since 2005, Rulelog is much more expressively powerful than the previous state-of-the-art practical KRR approaches, yet is computationally affordable. It is fully semantic and has capable efficient implementations that leverage methods from logic programming and databases, including dependency-aware smart caching and a dynamic compilation stack architecture.
 
Probabilistic Description Logics: Reasoning and Learning  (1h30m)
Riccardo Zese
The increasing popularity of the Semantic Web has led to the widespread adoption of Description Logics (DLs) for modeling real-world domains. Given the nature of such domains and the origin of the available data, the capability of managing probabilistic and uncertain information is of foremost importance. As a result, the last decade has seen an exponential increase in interest in the development of methods for combining probability with DLs.

This tutorial will present various probabilistic semantics for knowledge bases (KBs). It will then concentrate on one of them, DISPONTE, which is inspired by the distribution semantics of Probabilistic Logic Programming. The tutorial will also describe approaches and algorithms for reasoning upon probabilistic knowledge bases. An overview of the major systems will be provided, and three reasoners and two learning algorithms for the DISPONTE semantics will be presented in more detail. BUNDLE and TRILL are able to find explanations for queries and compute their probability w.r.t. DISPONTE KBs, while TRILL^P compactly represents explanations using a Boolean formula and likewise computes the probability of queries. The system EDGE learns the parameters of the axioms of DISPONTE KBs, while LEAP learns both the structure and parameters of KBs.
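Under the distribution semantics that DISPONTE adapts, each probabilistic axiom is independently included in or excluded from a world, and the probability of a query is the sum of the probabilities of the worlds that entail it. A brute-force Python sketch of this idea (the tiny knowledge base is invented; real systems such as BUNDLE and TRILL work from explanations rather than enumerating worlds):

```python
# Brute-force distribution semantics: each probabilistic axiom is
# independently included in a world or not; P(query) is the total
# probability of the worlds entailing the query.
# The tiny KB and the query rule are invented for illustration.
from itertools import product

# Probabilistic facts: (fact, probability of being included in a world).
prob_facts = [("bird(tweety)", 0.9), ("broken_wing(tweety)", 0.2)]

def entails_flies(world):
    """flies(tweety) :- bird(tweety), not broken_wing(tweety)."""
    return "bird(tweety)" in world and "broken_wing(tweety)" not in world

def query_probability(entails):
    """Sum the probabilities of all worlds in which the query holds."""
    total = 0.0
    for bits in product([True, False], repeat=len(prob_facts)):
        world = {f for (f, _), b in zip(prob_facts, bits) if b}
        p = 1.0
        for (_, pr), b in zip(prob_facts, bits):
            p *= pr if b else 1 - pr
        if entails(world):
            total += p
    return total

print(round(query_probability(entails_flies), 3))  # 0.9 * 0.8 = 0.72
```

World enumeration is exponential in the number of probabilistic axioms, which is why the reasoners discussed above compute explanations and compile them into Boolean formulas (e.g., the compact representation used by TRILL^P) instead.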