Research Topics


Argumentation is one of the major fields in Artificial Intelligence (AI). Numerous applications in diverse domains such as legal reasoning, multi-agent systems, social networks, e-government, and decision support make the topic highly interdisciplinary and have led to a wide range of formalizations. Among them, abstract Argumentation Frameworks (AFs) are one of the most popular approaches to capturing certain aspects of argumentation. This simple yet expressive model was introduced by Phan Minh Dung in 1995. Arguments and a binary attack relation between them, denoting conflicts, are the only components needed to represent a wide range of problems and to reason about them. Nowadays, numerous semantics exist that resolve the inherent conflicts between arguments by selecting sets of “acceptable” arguments. Depending on the application, acceptability is defined in different ways.
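
As a purely illustrative sketch (the three-argument framework and all names below are invented for this example and not taken from any actual system), the following Python snippet spells out two standard semantics: a set of arguments is admissible if it is conflict-free and defends each of its members, and stable if it is conflict-free and attacks every argument outside of it.

from itertools import combinations

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}   # "a attacks b", "b attacks c"

def conflict_free(s):
    return not any((x, y) in attacks for x in s for y in s)

def set_attacks(s, x):
    return any((y, x) in attacks for y in s)

def defends(s, x):
    # every attacker of x is itself attacked by some member of s
    return all(set_attacks(s, y) for (y, z) in attacks if z == x)

def admissible(s):
    return conflict_free(s) and all(defends(s, x) for x in s)

def stable(s):
    return conflict_free(s) and all(set_attacks(s, x) for x in args - s)

subsets = [set(c) for r in range(len(args) + 1) for c in combinations(sorted(args), r)]
print("admissible:", [sorted(s) for s in subsets if admissible(s)])
print("stable:", [sorted(s) for s in subsets if stable(s)])

On this toy framework, the only stable extension is {a, c}.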

Answer Set Programming (ASP) is a modern approach to truly declarative programming and declarative problem solving. Based on non-monotonic reasoning, it represents an important logic-programming paradigm. A program consists of (logical) rules that describe the problem of interest. Programs are then evaluated under the answer-set semantics (also called the stable model semantics), so that each of the program's answer sets is a solution. One therefore concentrates on describing the problem by means of appropriate rules rather than on specifying how to solve it.


We are mainly interested in using ASP as an efficient computational approach to problems from our other research areas. For example, we compute different semantics of Argumentation Frameworks with the ASP-based system ASPARTIX; a sketch of this idea is given below.
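
For illustration, here is a small sketch in the same spirit (it is not the actual ASPARTIX encoding, and it assumes the clingo Python package): the answer sets of the program correspond to the stable extensions of the framework given by the arg/1 and att/2 facts.

import clingo   # assumes the clingo Python package is installed

program = """
arg(a). arg(b). arg(c).
att(a,b). att(b,c).

in(X)  :- arg(X), not out(X).        % guess a set of accepted arguments
out(X) :- arg(X), not in(X).
:- in(X), in(Y), att(X,Y).           % the set must be conflict-free
defeated(X) :- in(Y), att(Y,X).
:- out(X), not defeated(X).          % every excluded argument must be attacked
#show in/1.
"""

ctl = clingo.Control(["0"])          # "0": enumerate all answer sets
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("stable extension:", m))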


Description Logics represent one of today's major paradigms in Knowledge Representation. They developed out of semantic networks and frame-based approaches.


Existential Rules are a knowledge representation formalism used in artificial intelligence and database theory. Their syntactic flexibility enables an easy integration of both semantic knowledge and databases. While syntactically close to Datalog rules, an important distinguishing feature is the ability to describe individuals whose existence was not originally known, which is of great help for modeling purposes. All these features have motivated an intense research effort on how to query data under such rules. Typical questions in this respect are the following: Which query languages achieve a good trade-off between expressivity and complexity? What ensures the decidability of the query answering problem? What is the complexity of specific classes of rules? How can efficient reasoners be built?
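
As a textbook-style example (the predicate names are invented for illustration), the following existential rule states that every person has a parent who is also a person; unlike in a Datalog rule, the variable y in the head is existentially quantified and may refer to an individual that does not occur in the database:

\forall x\,\bigl(\mathit{Person}(x) \rightarrow \exists y\,(\mathit{hasParent}(x,y) \wedge \mathit{Person}(y))\bigr)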


Formal Concept Analysis is a mathematical theory that describes concepts and conceptual hierarchies by means of lattice theory. It is based on formal contexts, elementary data structures that describe objects and their attributes. A formal concept is then composed of a set of objects and a set of attributes, which, in line with philosophical and linguistic traditions, are called the concept extent and the concept intent, respectively. Formal concepts can be put into relation to each other, forming a hierarchy of sub- and superconcepts. Formal Concept Analysis was founded in the 1980s in Darmstadt in the group of Rudolf Wille and has become a discipline with numerous practical applications in diverse areas, including data and text mining, knowledge management, the Semantic Web, software engineering, economics, and biology.
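
A minimal Python sketch (the toy context below is invented for illustration) shows the two derivation operators and a brute-force enumeration of all formal concepts, i.e. all pairs of an extent and an intent that are closed under these operators.

from itertools import combinations

objects = {"duck", "frog", "reed"}
attributes = {"can_fly", "lives_in_water", "is_a_plant"}
incidence = {("duck", "can_fly"), ("duck", "lives_in_water"),
             ("frog", "lives_in_water"),
             ("reed", "lives_in_water"), ("reed", "is_a_plant")}

def common_attributes(objs):                # derivation: objects -> shared attributes
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def common_objects(attrs):                  # derivation: attributes -> shared objects
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(sorted(objects), r):
        intent = common_attributes(set(objs))
        extent = common_objects(intent)     # closing the object set yields a concept
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))

For instance, the pair with extent {duck, frog, reed} and intent {lives_in_water} is one of the concepts printed for this toy context.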


Knowledge representation and reasoning (KRR) is concerned with the encoding of human knowledge in computer systems such that it can serve as a basis for drawing logical conclusions. By means of logical reasoning, KRR systems derive implicit knowledge from the given information. KRR plays an important role in constructing more intelligent computer systems, but it is also used in applications where large knowledge bases need to be constructed, since reasoning can help to reduce redundancy and to detect errors.


Most modern KRR approaches are based on formal logics, with description logics and rule-based formalisms constituting the most prominent language families. In applied fields, knowledge bases are used to construct ontologies, which relates KRR to other research topics, such as semantic technologies. While KRR is based on a large body of theoretical research, it also includes many applied topics, such as the design and implementation of tools for automated reasoning.

Satisfiability testing (SAT) is a very generic decision problem that can be used to solve many industrial problems, such as hardware verification, configuration, planning, or tasks in bioinformatics. Hence, improvements in SAT technology have a direct impact on all fields that use SAT solvers in their tool chains.

Sequential and Parallel SAT Solving

We develop a generic SAT solver that serves as a basis for various solver configurations obtained by combining solver components. It is designed to offer great flexibility in deriving different solver instances from the available components and parameters. This is important in many respects, e.g. for optimizing solvers for a particular domain by automatically tuning solver parameters and for constructing portfolio-based solvers. We consider components from classical DPLL-based to CDCL-based solvers, from principal components (e.g. search, unit propagation, decision heuristics, backtracking) to secondary components (e.g. learning, clause forgetting heuristics, restart heuristics). A special focus is put on simplification and formula rewriting techniques.
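
As a heavily simplified, purely illustrative sketch of the principal components mentioned above (it is not our solver), the following Python snippet implements plain DPLL search with unit propagation and a trivial decision heuristic, omitting everything from clause learning to restarts.

def unit_propagate(clauses, assignment):
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                              # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None                           # conflict: clause falsified
            if len(unassigned) == 1:                  # unit clause forces its literal
                assignment = assignment | {unassigned[0]}
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None
    free = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
    if not free:
        return assignment                             # satisfying assignment found
    var = min(free)                                   # trivial decision heuristic
    return dpll(clauses, assignment | {var}) or dpll(clauses, assignment | {-var})

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))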


Given a good sequential SAT solver, we investigate parallel solvers to improve the performance of SAT technology on modern multi-core architectures. Here, the focus is on partitioning the search space recursively in order to benefit from the increasing number of available computing resources.
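
A hypothetical sketch of this idea: fixing the first k variables in every possible way yields 2^k disjoint subproblems ("cubes"), each of which could be handed as assumptions to an independent sequential solver.

from itertools import product

def cubes(variables, k):
    # one cube (partial assignment) per sign combination of the first k variables
    chosen = variables[:k]
    return [[sign * v for sign, v in zip(signs, chosen)]
            for signs in product([1, -1], repeat=len(chosen))]

# 2**2 = 4 disjoint subproblems over variables 1 and 2; each cube would be
# passed as assumptions to an independent sequential solver instance
for cube in cubes([1, 2, 3, 4], 2):
    print(cube)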

As SAT systems become more and more complex, verifiable output is required. Hence, we study how to produce unsatisfiability proofs for existing SAT techniques and for parallel SAT solvers, so that as many existing techniques as possible can be used while the result of the SAT solver can still be verified.
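
As an illustration of the underlying idea (a toy checker, not our tooling), the sketch below checks a clausal proof in the DRUP style: every proof clause must be derivable by reverse unit propagation from the formula and the previously checked clauses, and the proof must end with the empty clause.

def propagates_to_conflict(clauses, assumed):
    assignment = set(assumed)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                              # clause already satisfied
            pending = [l for l in clause if -l not in assignment]
            if not pending:
                return True                           # conflict reached
            if len(pending) == 1:
                assignment.add(pending[0])            # unit propagation
                changed = True
    return False

def check_proof(formula, proof):
    clauses = list(formula)
    for step in proof:
        # assume the negation of the proof clause; propagation must yield a conflict
        if not propagates_to_conflict(clauses, {-l for l in step}):
            return False
        clauses.append(step)
    return bool(proof) and proof[-1] == []            # proof must end with the empty clause

# (x1 or x2), (x1 or not x2), (not x1 or x2), (not x1 or not x2) is unsatisfiable
formula = [[1, 2], [1, -2], [-1, 2], [-1, -2]]
print(check_proof(formula, [[1], []]))                # True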

Encoding Real-World Problems

Furthermore, encoding real-world problems into the language of SAT is often non-trivial. Hence, encodings for commonly used constraints are studied and new ways to describe these constraints in propositional logic are developed. We currently focus on cardinality constraints and pseudo-Boolean constraints, which are frequently used in configuration, car sequencing, or planning.
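
As a minimal illustration (the naive textbook encoding rather than the encodings we study), the following sketch produces the quadratic pairwise CNF encoding of the cardinality constraint "at most one of the given variables is true".

from itertools import combinations

def at_most_one(variables):
    # for every pair of variables, forbid that both are true: (not xi or not xj)
    return [[-a, -b] for a, b in combinations(variables, 2)]

def exactly_one(variables):
    # "at least one" is a single clause; "at most one" adds the pairwise clauses
    return [list(variables)] + at_most_one(variables)

print(exactly_one([1, 2, 3]))
# [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3]]

The pairwise encoding needs a quadratic number of clauses; more compact encodings, such as sequential counters, are one reason why encoding research matters.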

Optimization and Enumerating all Models

Besides solving a single decision problem with a SAT solver, finding an optimal solution and finding all solutions are two further common tasks. In our group, we extend SAT solvers to enumerate all solutions of a formula, or of the underlying real-world problem. Furthermore, we develop an optimization solver.
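
The standard trick for enumerating all models is to add, after each model found, a blocking clause that excludes exactly that model. The sketch below illustrates this with a brute-force stand-in for a real SAT solver.

from itertools import product

def brute_force_solve(clauses, variables):
    # stand-in for a real SAT solver: try every assignment
    for bits in product([True, False], repeat=len(variables)):
        model = {v if b else -v for v, b in zip(variables, bits)}
        if all(any(l in model for l in c) for c in clauses):
            return model
    return None

def all_models(clauses, variables):
    models = []
    while True:
        model = brute_force_solve(clauses, variables)
        if model is None:
            return models
        models.append(model)
        clauses = clauses + [[-l for l in model]]     # blocking clause

print(all_models([[1, 2]], [1, 2]))                   # the three models of (x1 or x2)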


Human Reasoning, Computational Logic, and Connectionism
The project aims at developing non-classical logics as well as connectionist realizations thereof that are adequate for human reasoning.
For further information, please contact Christoph Wernhard.

Human Reasoning in Computational Logic

Until now, there are no widely accepted theories that give formal representations of sophisticated human reasoning. In our project we address this issue by exploring current results from relevant research areas. The goal is to formalize a framework that allows the expression of reasoning processes which are fundamental for explaining human behaviour. Conventional formal approaches such as classical logic are not appropriate for this purpose, as they cannot deal with the most elementary situations humans are confronted with, e.g. incomplete information. Alternatives can be found in formalizations of weaker logics, such as non-monotonic logics or three-valued logics. Furthermore, in the fields of neuroscience (e.g. connectionist networks) and cognitive science (e.g. studies of psychological reasoning such as the selection and suppression tasks), a lot of research is going on in understanding and simulating human reasoning processes. These results are important for our purpose, as they deal with similar fundamental questions about human behaviour.
For further information, please contact Emmanuelle Dietz.


Semantic Technologies comprise a wide field of methods and technologies used to represent, manage, exchange, and process semantic data. At the heart of this are machine-readable formats for representing data and schema information (ontologies). A number of related technology standards have been created by the World Wide Web Consortium (W3C) in the context of its Semantic Web and Web of Data activities. The most prominent of these standards are the Resource Description Framework RDF (a graph-based data-representation format), the OWL Web Ontology Language (an ontology language based on Description Logics), and the SPARQL Protocol and RDF Query Language (a data management and query language). However, semantic technologies also comprise other approaches, such as the annotation approach of schema.org, as well as aspects of methodology and infrastructure.
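
As a small illustration (assuming the third-party rdflib Python library, which is not mentioned above), a few RDF triples in Turtle syntax can be loaded and queried with SPARQL as follows.

from rdflib import Graph   # third-party library, used here only for illustration

turtle_data = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?x ?y WHERE { ?x ex:knows ?y . }
"""
for x, y in g.query(query):
    print(x, "knows", y)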


Research in semantic technologies is concerned with a variety of applied questions. The specification and processing of ontologies is closely related to the field of knowledge representation and reasoning, while other questions touch upon issues of databases, information extraction, or Web systems. The latter aspect comes to the fore in applications of Linked Open Data, but there are also many important uses of OWL and RDF that have little or nothing to do with the World Wide Web.