Keynotes and speeches
|Bob Kowalski (Imperial College London)|
Logic and AI – The Last 50 Years
Fifty years ago, in 1967, the research community working in Logic and AI was tiny. Everyone knew everyone else, and most communication between researchers was by word of mouth or correspondence. If you didn’t work at Stanford, MIT, Carnegie Mellon or Edinburgh, you were seriously disadvantaged.
In 1967, Pat Hayes and I started our PhDs in Edinburgh. Bernard Meltzer was our PhD supervisor, and John Alan Robinson, who developed resolution logic, was in Edinburgh on sabbatical. Bernard founded the journal Artificial Intelligence in 1970, and co-edited the influential series of Machine Intelligence Workshop Proceedings with Donald Michie. The workshops attracted a wide range of researchers, mainly from the UK and USA, including Cordell Green and John McCarthy from Stanford. There was much excitement about Cordell’s application of resolution to a wide range of AI applications.
The high hopes for resolution logic were challenged by researchers, including Marvin Minsky, Carl Hewitt, Gerald Sussman and Terry Winograd, at MIT. The resulting intellectual skirmishes between advocates of logical and procedural approaches to knowledge representation led to the development of logic programming and Prolog in 1972. My contributions to this development were greatly assisted by visits to Alain Colmerauer in Marseille.
Over the subsequent years, we have seen the Fifth Generation Project with its focus on the application of logic programming to AI applications, the rise of the internet, and now big data and deep learning. But the fundamental challenge to Computer Science of the relationship between declarative (logical) and imperative (procedural) representations is still unresolved.
Motivated by this challenge, Fariba Sadri and I are developing LPS (Logic-based Production System), as an imperative language with a logical interpretation. Programs include logic programs, interpreted as beliefs, and a logical reconstruction of production rules, interpreted as goals. Computation executes actions, to satisfy a global imperative of making the goals true in a model of the world determined by the beliefs. There is an online prototype of LPS, developed in the CLOUT (Computational Logic for Use in Teaching) project, to teach logic and computing to children.
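The abstract's description of LPS computation — beliefs as a logic program, production rules reinterpreted as goals, and actions executed to make those goals true — can be caricatured in a few lines of Python. This is a hypothetical, drastically simplified sketch, not the actual LPS engine (which handles time, events, and first-order rules); the names and the toy fire-fighting example are illustrative only.

```python
# Toy sketch of an LPS-style cycle (NOT the real LPS engine): beliefs are
# ground facts, reactive rules pair a condition with a goal to be made true,
# and the loop executes actions to satisfy goals in the evolving world model.

def lps_cycle(beliefs, reactive_rules, actions, steps):
    """beliefs: set of ground facts; reactive_rules: (condition, goal) pairs,
    where condition is a set of facts; actions: maps a goal atom to the set
    of facts that executing the corresponding action adds to the world."""
    trace = []
    for _ in range(steps):
        for condition, goal in reactive_rules:
            # Rule fires when its condition holds but its goal is not yet true.
            if condition <= beliefs and goal not in beliefs:
                beliefs |= actions[goal]   # execute an action to make the goal true
                trace.append(goal)
    return beliefs, trace

# Illustrative example: observing fire creates the goal of having dealt with it.
beliefs = {"fire"}
rules = [({"fire"}, "dealt_with_fire")]
acts = {"dealt_with_fire": {"dealt_with_fire", "water_poured"}}
final, trace = lps_cycle(beliefs, rules, acts, steps=2)
```

On the second cycle the goal already holds in the model, so the rule does not fire again — mirroring the "global imperative of making the goals true" described above.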
|Stephen Muggleton (Imperial College London)|
Meta-Interpretive Learning: Achievements and Challenges
Meta-Interpretive Learning (MIL) is a recent Inductive Logic Programming technique aimed at supporting learning of recursive definitions. A powerful and novel aspect of MIL is that when learning a predicate definition it automatically introduces sub-definitions, allowing decomposition into a hierarchy of reusable parts. MIL is based on an adapted version of a Prolog meta-interpreter. Normally such a meta-interpreter derives a proof by repeatedly fetching first-order Prolog clauses whose heads unify with a given goal. By contrast, a meta-interpretive learner additionally fetches higher-order meta-rules whose heads unify with the goal, and saves the resulting meta-substitutions to form a program. This talk will overview theoretical and implementational advances in this new area, including the ability to learn Turing computable functions within a constrained subset of logic programs, the use of probabilistic representations within Bayesian meta-interpretive learning, and techniques for minimising the number of meta-rules employed. The talk will also summarise applications of MIL, including the learning of regular and context-free grammars, learning from visual representations with repeated patterns, learning string transformations for spreadsheet applications, learning and optimising recursive robot strategies, and learning tactics for proving correctness of programs. The talk will conclude by pointing to the many challenges which remain to be addressed within this new area.
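The "vanilla" meta-interpreter that the abstract builds on — a solver that repeatedly fetches clauses whose heads match the current goal — can be sketched for the propositional case in a few lines of Python. This is only an illustration of the standard meta-interpretive mechanism; the distinctive MIL step (fetching higher-order meta-rules and saving the meta-substitutions to induce a program) is deliberately omitted, and the encoding of the toy example is my own.

```python
# Propositional sketch of a vanilla Prolog-style meta-interpreter:
# a program maps each head atom to a list of alternative bodies (clauses),
# and solve/2 proves a goal by fetching a clause for it and recursively
# proving every subgoal in the chosen body.

def solve(goal, program):
    """True iff goal is derivable from the program (definite clauses)."""
    for body in program.get(goal, []):        # fetch clauses whose head matches
        if all(solve(subgoal, program) for subgoal in body):
            return True                       # a body with all subgoals proved
    return False

# Toy program: a grandparent relation encoded propositionally.
# Facts are clauses with an empty body.
program = {
    "parent_a_b": [[]],
    "parent_b_c": [[]],
    "grandparent_a_c": [["parent_a_b", "parent_b_c"]],
}
solve("grandparent_a_c", program)   # provable via the two parent facts
solve("grandparent_a_d", program)   # not provable: no matching clause
```

A meta-interpretive learner, by contrast, would be given the goal and the facts but not the `grandparent` clause, and would construct it by instantiating a higher-order meta-rule such as chaining two body predicates.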
|Jordi Cabot (IN3-UOC, Barcelona)|
Explicit definition and management of rules are largely ignored in most software development projects. While the most “popular” software modeling language (UML) enjoys some measure of success in real-world software projects, its companion, the Object Constraint Language (the OMG standard to complement UML models with textual constraints and derivation rules), is largely ignored. As a result, rules live hidden in the code, implemented in an ad-hoc manner.
This somehow worked when data was mostly stored in relational databases and DBAs could at least enforce some checks on that data. But now data lives in the open (e.g., data as a service, big data), accessible in a variety of formats (NoSQL, APIs, CSVs, …). This evolution facilitates the consumption and production of data, but puts at risk any piece of software accessing it, at least when no proper knowledge of the structure, quality and content of that data is available. And with the emergence of open data, it is not only software that accesses the data but people as well.
In this talk, I will argue that rules must become first-class citizens in any software development project and describe our initiatives in discovering, representing and enforcing rules on (open and/or semi-structured) data.
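The idea of rules as first-class citizens — named, inspectable objects enforced on data, rather than checks scattered through application code — can be sketched minimally in Python. This is a generic illustration under my own assumptions, not Jordi Cabot's tooling or the OCL; the rule names and record fields are invented for the example.

```python
# Sketch: rules as first-class, named objects that can be listed, reused,
# and enforced uniformly on semi-structured records (e.g. JSON objects),
# instead of living hidden in ad-hoc application code.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]   # predicate over a record

# A rule catalogue is data: it can be inspected, documented, and evolved.
RULES = [
    Rule("age_is_non_negative", lambda r: r.get("age", 0) >= 0),
    Rule("email_present", lambda r: "email" in r),
]

def violations(record: dict) -> list:
    """Names of all catalogue rules the record violates."""
    return [rule.name for rule in RULES if not rule.check(record)]

violations({"age": -1})                     # both rules violated
violations({"age": 30, "email": "a@b.c"})   # no violations
```

Because the rules are explicit data rather than buried `if` statements, the same catalogue can drive validation, documentation generation, or discovery of which rules a given dataset actually satisfies.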
|Elena Baralis (Politecnico di Torino)|
Opening the Black Box: Deriving Rules from Data