Attribut:Beschreibung DE
From the International Center for Computational Logic
This is a property of type Text.
The AGM postulates by Alchourrón, Gärdenfors, and Makinson continue to represent a cornerstone in research related to belief change. We generalize the approach of Katsuno and Mendelzon (KM) for characterizing AGM base revision from propositional logic to the setting of (multiple) base revision in arbitrary monotonic logics. Our core result is a representation theorem using the assignment of total – yet not transitive – “preference” relations to belief bases. We also provide a characterization of all logics for which our result can be strengthened to preorder assignments (as in KM’s original work).
This talk will take place online via BigBlueButton. To access the room, use one of the following links:
with ZIH-login:
https://selfservice.zih.tu-dresden.de/l/link.php?m=103268&p=3fab0d1a
without ZIH-login:
https://selfservice.zih.tu-dresden.de/link.php?m=103268&p=c61bdf2f +
Linear temporal logic (LTL) is a modal logic used to study temporal properties in computer science, while standpoint logic (SL) is a recently introduced multi-agent logic that captures differing semantic commitments and allows reasoning with low complexity. Standpoint linear temporal logic (SLTL) combines the temporal reasoning of LTL with the multi-perspective capabilities of SL, thereby allowing the modelling of both system evolution and changing standpoints over time.
<br>
A previously introduced tableau calculus for SLTL is structurally complex and hard to handle. This project aims to develop a refined, sound, complete, and terminating tableau calculus for SLTL, oriented towards Reynolds' simpler tableau method for LTL. This new approach is intended to simplify reasoning in SLTL without compromising its expressive power.
<br>
Please click on the download link on the left-hand side to obtain a PDF document describing the goals and expectations of the project. +
<b>Abstract:</b> Existential rules are a knowledge representation and reasoning formalism that extends both positive rules à la Datalog and most description logics used in ontology-based query answering. The chase is a fundamental tool for reasoning on knowledge bases composed of an instance and a set of existential rules. It is well known that deciding, given a chase variant and a set of existential rules, whether the chase will halt for any instance is an undecidable problem. Hence, a crucial issue is whether it becomes decidable for known classes of existential rules. We focus on three main chase variants, namely the semi-oblivious, restricted, and core chase, and consider linear existential rules, a simple yet important subclass of existential rules. We show that the termination problem is decidable in these three variants with a novel unified approach based on a single graph and a single notion of forbidden pattern.
Joint work with M. Leclère, M.-L. Mugnier and F. Ulliana.
<b>Speaker Bio:</b> Michaël Thomazo is an INRIA researcher (CEDAR team), currently working on ontology-based query answering, and especially efficient evaluation of reformulated queries. He was previously a postdoc at TU Dresden, working with S. Rudolph, and received his Ph.D. from the University of Montpellier, supervised by J.-F. Baget and M.-L. Mugnier. You can find more details at http://pages.saclay.inria.fr/michael.thomazo/. +
Column stores have been a 'neglected child' relative to traditional, row-oriented, relation-focused database management systems: the systems people came up with them, and the theoreticians did not really give them the time of day. This talk will discuss what happens when we pick up the slack and formalize a model for analytic computation with columns. In addition to sound conceptual grounding being its own aesthetic reward, we will touch on some examples of how such a formalization enables architectural and performance improvements in real-life systems:
Seamless integration of decompression and query execution; removal of special-case handling of different column features (such as nullability and variable-length elements); closure of query execution plans to partial execution; et cetera. Central to achieving such benefits will be the discussion of what constitutes a column, how columns are to be represented, and what they can represent. +
In this talk we will take an introductory glance at the notion of "partition width", first conceived by Achim Blumensath. As partition width is also closely related to a notion of decomposition of an arbitrary structure into a tree-like shape, the so-called "partition refinement", we will also take a look at the relation of both of these notions to more established notions of decomposition and width measures (namely tree decompositions, treewidth, and clique width). +
A pragmatic approach to translation: Vocabulary alignment through Multiagent Interaction and Observation +
Talk by: Paula Chocrón
Institute: Artificial Intelligence Research Institute (IIIA-CSIC), Barcelona, Spain
Abstract:
"Every speaker that has been abroad knows that understanding a foreign language is easier when performing simple, well-defined interactions. For example, it is easier to ask for a coffee than to discuss politics in a language we do not master. In this talk I will discuss how this idea can be applied to help achieve meaningful communication in artificial multi-agent systems. In open, heterogeneous environments, it is likely that interlocutors with different backgrounds use different languages. Can the contextual information about the tasks being performed be used to learn a translation that allows agents to interact?"
I will start by presenting a notion of context that is based on the formal specifications of the tasks performed by agents. I will then show how this context can be used by agents to align their vocabularies dynamically, by learning mappings from the experience of previous interactions. In doing so, we will also rethink the traditional approach to semantic matching and its evaluation, tackling the following questions: What does it mean for agents to "understand each other"? When is an alignment good for a particular application? Finally, I will present an application to agents that interact using procedural protocols obtained from the WikiHow website, showing how they can infer a translation between English and Spanish without using external resources. +
Many natural combinatorial problems can be expressed as constraint satisfaction problems. This class of problems is known to be NP-complete in general, but certain restrictions on the form of the constraints can ensure tractability. The standard way to parameterize interesting subclasses of the constraint satisfaction problem is via finite constraint languages. The main problem is to classify those subclasses that are solvable in polynomial time and those that are NP-complete. It was conjectured that if a constraint language has a weak near-unanimity polymorphism then the corresponding constraint satisfaction problem is tractable, and otherwise it is NP-complete. The hardness result has been known since 2001. We present an algorithm that solves the constraint satisfaction problem in polynomial time for constraint languages having a weak near-unanimity polymorphism, which proves the remaining part of the conjecture. +
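As a concrete (if naive) illustration of the notions above, graph 3-colouring is a CSP over the constraint language containing only the disequality relation on three colours. The following Python sketch is purely illustrative brute force, not the polynomial-time algorithm from the talk; the function and variable names are my own.

```python
from itertools import product

def solve_csp(variables, domain, constraints):
    """Brute-force CSP solver: try every assignment of domain values to
    variables and return the first one satisfying all constraints.
    Exponential in general, which is why tractable subclasses matter."""
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(check(assignment) for check in constraints):
            return assignment
    return None

# Graph 3-colouring as a CSP: one variable per vertex; the constraint
# language contains only the binary disequality relation on colours.
edges = [("v1", "v2"), ("v2", "v3"), ("v1", "v3")]
constraints = [lambda a, e=e: a[e[0]] != a[e[1]] for e in edges]
solution = solve_csp(["v1", "v2", "v3"], ["red", "green", "blue"], constraints)
# the triangle is 3-colourable, so a proper colouring is found
```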
'''Abstract:'''
When writing today’s distributed programs, which frequently span both devices and cloud services, programmers are faced with complex decisions and coding tasks around coping with failure, especially when these distributed components are stateful. If their application can be cast as pure data processing, they benefit from the past 40-50 years of work from the database community, which has shown how declarative database systems can completely isolate the developer from the possibility of failure in a performant manner. Unfortunately, while there have been some attempts at bringing similar functionality into the more general distributed programming space, a compelling general-purpose system must handle non-determinism, be performant, support a variety of machine types with varying resiliency goals, and be language agnostic, allowing distributed components written in different languages to communicate. This talk describes the first system, publicly available on GitHub, called Ambrosia, to satisfy all these requirements. We coin the term “virtual resiliency”, analogous to virtual memory, for the platform feature which allows failure oblivious code to run in a failure resilient manner. We also introduce a programming construct, the “impulse”, which resiliently handles non-deterministic information originating from outside the resilient component. Of further interest to our community is the effective reapplication of much database performance optimization technology to make Ambrosia more performant than many of today’s non-resilient cloud solutions.
'''Bio:'''
Over the last 20 years, I have worked at Microsoft in a combination of research and product roles. In particular, I’ve spent about 15 years as a researcher at MSR, doing fundamental research in streaming, big data processing, databases, and distributed computing. My style of working is to attack difficult problems, and through fundamental understanding and insight, create new artifacts that enable important problems to be solved in vastly better ways. For instance, my work on streaming data processing enabled people with real time data processing problems to specify their processing logic in new, powerful ways, and also resulted in an artifact called Trill, which was orders of magnitude more performant than anything which preceded it. Within the academic community, I have published many papers, some with best paper awards (e.g. Best Paper Award at ICDE 2012), and two with test of time awards (e.g. SIGMOD 2011 Test of Time award and ICDT 2018 Test of Time award), and have also taken many organizational roles in database conferences. My research has also had significant impact on many Microsoft products, including SQL Server, Office, Windows, Bing, and Halo, as well as leading to the creation of entirely new products like Microsoft StreamInsight, Azure Stream Analytics, Trill, and most recently, Ambrosia. I spent 5 years building Microsoft StreamInsight, serving as a founder and architect for the product. Trill has become the de-facto standard for temporal and stream data processing within Microsoft, and years after creation, is still the most expressive and performant general purpose stream data processor in the world. I am also an inventor of 30+ patents.
Amalgamation SNP (ASNP) is a fragment of existential second-order logic that strictly contains binary connected MMSNP of Feder and Vardi and binary connected guarded monotone SNP of Bienvenu, ten Cate, Lutz, and Wolter; it is a promising candidate for an expressive subclass of NP that exhibits a complexity dichotomy. We show that ASNP has a complexity dichotomy if and only if the infinite-domain dichotomy conjecture holds for constraint satisfaction problems for first-order reducts of binary finitely bounded homogeneous structures. For such CSPs, powerful universal-algebraic hardness conditions are known that are conjectured to describe the border between NP-hard and polynomial-time tractable CSPs. The connection to CSPs also implies that every ASNP sentence can be evaluated in polynomial time on classes of finite structures of bounded treewidth. We show that the syntax of ASNP is decidable. The proof relies on the fact that for classes of finite binary structures given by finitely many forbidden substructures, the amalgamation property is decidable.
This will be a test talk for the presentation of an eponymous paper, consisting of the 20-minute prerecorded talk (as it will be presented at the conference), followed by a Q&A session in which questions will be answered by Simon Knäuer, one of the authors. Feedback and suggestions in preparation for the conference talk are strongly encouraged.
This talk will be held digitally. If there is any interest in attending, please write an e-mail to thomas.feller@tu-dresden.de. +
ASPARTIX-D is a system designed to evaluate abstract argumentation frameworks. It consists of a collection of answer-set programming (ASP) encodings together with an optimal ASP (resp. SAT) solver configuration for each reasoning problem. In this talk we will give an overview of the modifications and the evaluation performed to make ASPARTIX-D
ready for the first International Competition on Computational Models of Argumentation (ICCMA 2015). +
Abstract:
Examining the influence of human behavior on road safety is a matter of considerable interest. As
studies in this field have demonstrated a correlation between driver behavior and road safety over
the past decades, there is a compelling need to develop a software system capable of impartially
identifying the typical driving behavior of motorists. The objective of this research is to create one
or more algorithms capable of recognizing and classifying a driver's typical driving behavior by
integrating various data sources. Since different data sources may present conflicting information
due to noise errors or failures, our goal is to resolve such ambiguities by employing artificial
intelligence paradigms. We are currently exploring the potential use of a well-known AI framework,
namely Dung’s Abstract Argumentation Framework (AAF). Argumentation Frameworks, indeed,
provide a valuable tool for analyzing and assessing conflicting pieces of information, enabling us to
draw more accurate and dependable conclusions. Our argumentation-based system provides the
user with a driving certificate describing their driving behavior. In the certificate, we outline, for
each predefined class C of driving behavior (e.g., calm driving, normal driving, aggressive driving),
the minimum and maximum percentage of time points where the driving behavior falls in C. The
minimum percentage represents the time points where driver behavior has been classified in C
without uncertainty, whereas the maximum percentage encodes the time points where some other
classes in addition to C have been identified, allowing for multiple interpretations.
The experimental evaluation backed up the need for intervals, as for several time points ambiguity in identifying classes occurred, due to different measures collected by different sensors leading to conflicting information. Therefore, assigning a single class is not always possible; instead, multiple possible interpretations need to be considered.
The talk will take place in a hybrid fashion, physically in the APB room 2026, and online through the link:
https://bbb.tu-dresden.de/b/pio-zwt-smp-aus
Abstract Dialectical Frameworks. An Analysis of their Properties and Role in Knowledge Representation and Reasoning +
Abstract dialectical frameworks (ADFs) are a formalism for representing knowledge about abstract arguments and various logical relationships between them. This talk presents an overview of some of our results on ADFs.
Firstly, we use the framework of approximation fixpoint theory to define various semantics that are known from related knowledge representation formalisms also for ADFs.
We then analyse the computational complexity of a variety of reasoning problems related to ADFs.
Afterwards, we also analyse the formal expressiveness in terms of realisable sets of interpretations and show how ADFs fare in comparison to other formalisms.
Finally, we show how ADFs can be put to use in instantiated argumentation, where researchers try to assign meaning to sets of defeasible and strict rules.
The main outcomes of our work show that in particular the sublanguage of *bipolar* ADFs is a useful knowledge representation formalism with meaningful representational capabilities and acceptable computational properties.
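To make the notion of acceptance conditions concrete, here is a minimal Python sketch (my own illustration, not tooling from the talk) that enumerates the two-valued models of a small bipolar ADF, i.e., interpretations in which each argument's truth value coincides with the value of its acceptance condition:

```python
from itertools import product

def two_valued_models(args, conditions):
    """Enumerate all two-valued models of an ADF: interpretations v such
    that every argument's truth value equals the value of its acceptance
    condition evaluated under v."""
    models = []
    for bits in product([False, True], repeat=len(args)):
        v = dict(zip(args, bits))
        if all(conditions[a](v) == v[a] for a in args):
            models.append(v)
    return models

# A tiny bipolar ADF: every link is either supporting or attacking.
conditions = {
    "a": lambda v: True,        # a is unconditionally accepted
    "b": lambda v: v["a"],      # a supports b
    "c": lambda v: not v["b"],  # b attacks c
}
models = two_valued_models(["a", "b", "c"], conditions)
# single model: a and b accepted, c rejected
```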
'''Short Bio:''' The speaker is a research associate in the Computational Logic Group. He obtained his BSc in Computer Science in 2006, his (European) MSc in Computational Logic in 2008, both from TUD; his PhD about Non-monotonic Reasoning in Action Theories in 2012, his Habilitation about the topic of this talk in 2017, both from Leipzig University. Most recently, he worked at compl3te GmbH in Leipzig dealing with automated problem solving, before he joined TUD again.
This talk will take place online via BigBlueButton. To access the room, use one of the following links:
with ZIH-login:
https://selfservice.zih.tu-dresden.de/l/link.php?m=139679&p=9e3a0a4a
without ZIH-login:
https://selfservice.zih.tu-dresden.de/link.php?m=139679&p=7cb3769c +
Database manipulating systems (DMS) formalize operations on relational databases like adding new tuples or deleting existing ones. To ensure sufficient expressiveness for capturing practical database systems, DMS operations incorporate guarding expressions, i.e., first-order formulas over countable value domains. These features impose infinite-state, infinitely branching processes, thus making automated reasoning about properties like the reachability of states intractable. Most recent approaches, therefore, restrict DMS to obtain decidable fragments. Nevertheless, a comprehensive semantic framework capturing full DMS, yet incorporating effective notions of data abstraction and process equivalence, is an open issue. In this paper, we propose DMS process semantics based on principles of abstract interpretation. The concrete domain consists of all valid databases, whereas the abstract domain employs different constructions for unifying sets of databases being semantically equivalent up to particular fragments of the DMS guard language. The connection between abstract and concrete domains is effectively established by homomorphic mappings whose properties and restrictions depend on the expressiveness of the DMS fragment under consideration. We instantiate our framework for canonical DMS fragments and investigate semantic preservation of abstractions up to bisimilarity, being one of the strongest equivalence notions for operational process semantics.
The talk will take place in a hybrid fashion, physically in the APB room 3027, and online through the link:
https://bbb.tu-dresden.de/b/pio-zwt-smp-aus +
Argumentation is an important research field within artificial intelligence and non-monotonic reasoning, with applications in areas such as legal reasoning, social networks, e-government, multi-agent systems, and decision support.
The concept of abstract argumentation frameworks (AFs) has become one of the most popular approaches. This relatively simple yet very expressive model was introduced by Phan Minh Dung in 1995 and consists, in short, of a set of arguments and a binary attack relation between the arguments, by which conflicts are expressed.
Here, "abstract" means that the internal structure of the arguments is not considered, only the relationships between the arguments given by the attacks.
To resolve the conflicts, different semantics are employed, which select admissible sets of arguments. Depending on the application at hand, these semantics satisfy different requirements. +
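One such semantics, Dung's grounded semantics, can be computed by iterating the characteristic function from the empty set; the following Python sketch is a minimal illustration (not part of the talk), with hypothetical names of my own choosing:

```python
def grounded_extension(args, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (args, attacks) by iterating Dung's characteristic
    function from the empty set until a fixed point is reached."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    s = set()
    while True:
        # an argument is defended by s if s attacks all of its attackers
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

# a attacks b, b attacks c: a is unattacked and defends c against b
ext = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})
# ext == {"a", "c"}
```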
This talk is part of Dresden Talks on Interaction & Visualization:
<https://imld.de/research/dresden-talks/2018-demberg/>
Language-based interaction with digital agents (e.g. Siri, Alexa) has become ubiquitous, and is used in various situations and by an increasingly large variety of different users. Research shows, however, that a dialog system should not just be able to understand and generate language correctly, but that it should also adapt the way it formulates its messages to fit the user and the situation (for instance, it should use simpler formulations to avoid distraction during driving).
In this talk, I will start out by presenting an information-theoretic measure, surprisal, as a way of quantifying linguistically induced cognitive load on a word-by-word basis. I will then proceed to talk about neural network models that we have recently developed to estimate semantic surprisal, i.e. the amount of cognitive load that will be caused by an unexpected word like "bathtub" in a context such as "I did the dishes in the bathtub".
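The core quantity is simple to state: the surprisal of a word is the negative log of the probability a language model assigns to it in its context. A toy Python sketch with made-up probabilities (illustrative only, not the neural models from the talk):

```python
import math

def surprisal(prob):
    """Surprisal in bits: -log2 of the probability assigned to a word
    in its context. Low-probability words carry high surprisal, which
    correlates with processing difficulty."""
    return -math.log2(prob)

# Toy conditional probabilities for the next word after
# "I did the dishes in the ..." (illustrative numbers, not model output)
p_next = {"sink": 0.5, "kitchen": 0.3, "bathtub": 0.01}
load = {w: surprisal(p) for w, p in p_next.items()}
# "bathtub" is far more surprising than "sink":
# surprisal("sink") = 1 bit, surprisal("bathtub") ≈ 6.64 bits
```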
Finally, I will report on our recent work using a novel pupillometry-based measure of cognitive load, the Index of Cognitive Activity (ICA), which allows us to assess cognitive load in dual task settings such as driving a car. +
We study extensions of expressive decidable fragments of first-order logic with circumscription, in particular the two-variable fragment FO2, its extension C2 with counting quantifiers, and the guarded fragment GF. We prove that if only unary predicates are minimized (or fixed) during circumscription, then decidability of logical consequence is preserved. For FO2 the complexity increases from coNExp to coNExp^NP-complete, for GF it (remarkably!) increases from 2Exp to Tower-complete, and for C2 the complexity remains open. We also consider querying circumscribed knowledge bases whose ontology is a GF sentence, showing that the problem is decidable for unions of conjunctive queries, Tower-complete in combined complexity, and Elementary in data complexity. Already for atomic queries and ontologies that are sets of guarded existential rules, however, for every k ≥ 0 there is an ontology and query that are k-Exp-hard in data complexity. +
We introduce a family of logics extending the lightweight Description Logic EL, that allows us to define concepts in an approximate way. The main idea is to use a graded membership function m, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C~t for ~ in {<,<=,>,>=} then collect all the individuals that belong to C with degree ~t. We further study this framework in two particular directions. First, we define a specific graded membership function deg and investigate the complexity of reasoning in the resulting Description Logic tEL(deg) w.r.t. both the empty terminology and acyclic TBoxes. Second, we show how to use the relaxed instance query approach of Ecke et al. to turn concept similarity measures into membership degree functions. It turns out that under certain conditions such functions are well-defined, and therefore induce a wide range of threshold logics. Last, we present preliminary results on the computational complexity landscape of reasoning in such a big family of threshold logics. +
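The threshold concepts C~t described above have a straightforward reading that a small Python sketch can illustrate (toy membership degrees of my own invention, not the deg function studied in the paper):

```python
def threshold_instances(deg, individuals, concept, op, t):
    """Collect the instances of a threshold concept C~t: all individuals
    whose membership degree in C satisfies the comparison ~ t."""
    compare = {
        "<":  lambda d: d < t,
        "<=": lambda d: d <= t,
        ">":  lambda d: d > t,
        ">=": lambda d: d >= t,
    }[op]
    return {x for x in individuals if compare(deg(x, concept))}

# Toy graded membership function: degrees in [0,1] per individual/concept
degrees = {("ann", "Tall"): 0.9, ("bob", "Tall"): 0.55, ("eve", "Tall"): 0.2}
deg = lambda x, c: degrees[(x, c)]

# instances of the threshold concept Tall>=0.5
tall_mostly = threshold_instances(deg, ["ann", "bob", "eve"], "Tall", ">=", 0.5)
# tall_mostly == {"ann", "bob"}
```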
Abstract argumentation is a prominent reasoning framework. It comes with a variety of semantics and has lately been enhanced by probabilities to enable a quantitative treatment of argumentation. While admissibility is a fundamental notion in the classical setting, it has been merely reflected so far in the probabilistic setting. In this paper, we address the quantitative treatment of argumentation based on probabilistic notions of admissibility in a way that they form fully conservative extensions of classical notions. In particular, our building blocks are not the beliefs regarding single arguments. Instead, we start from the fairly natural idea that whatever argumentation semantics is to be considered, semantics systematically induces constraints on the joint probability distribution on the sets of arguments. In some cases there might be many such distributions, even infinitely many ones, in other cases, there may be one or none. Standard semantic notions are shown to induce such sets of constraints, and so do their probabilistic extensions. This allows them to be tackled by SMT solvers, as we demonstrate by a proof-of-concept implementation. We present a taxonomy of semantic notions, also in relation to published work, together with a running example illustrating our achievements.
This is a talk regarding the paper at KR 2021. The talk will be given online via BigBlueButton. To access the room, use the following link:
Room Link:
https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n +
In recent years the research field of argumentation has become a major
topic in the study of artificial intelligence (AI). Within the
argumentation process, the focus of this work is on the evaluation of
the acceptability of conflicting arguments using the formal models of
abstract argumentation frameworks (AFs) and abstract dialectical
frameworks (ADFs). While an AF is a directed graph where nodes
represent arguments and directed edges represent conflicts between
arguments, ADFs constitute a very powerful generalization of AFs by
additionally assigning to each argument an acceptance condition. In
this work we contribute to the advancement of the study of abstract
argumentation by addressing aspects of expressiveness and dynamics of
argumentation semantics. In terms of expressiveness we complement
recent work on realizability in AFs, investigate the role of rejected
arguments, and study the induced class of compact argumentation
frameworks. Then, we lift the study of expressiveness to the concept
of input-output AFs. Finally, we present a unifying algorithmic
approach to realizability capturing AFs and ADFs as well as
intermediate formalisms in a modular way, which is also implemented in
ASP. Taking into account the dynamic nature of argumentation, we
study two central issues therein: revision and splitting. For
revision we apply the seminal AGM theory of belief change to
argumentation by presenting representation theorems for revision
operators which guarantee to result in a single framework. We also
present concrete belief change operators and analyze their
computational complexity. Finally, we study splitting of ADFs, aiming
for optimization of computation by incremental computation of
semantics. +
Have you sorted your toys at home by color or by size? Have you ever looked for this one particular pencil? Have you asked yourself why this search took so long, or how it could be faster? The "World of Algorithms" (German: Die Welt der Algorithmen) provides some useful tools for you to achieve your goals as quickly as possible. This includes problems that have never crossed your mind before. We are going to look at certain problems and their (algorithmic) solutions. Our goal is to learn about the strategies leading to these solutions. We will also face problems that are so hard that a quick solution is out of reach, even for a modern computer.
This is a translation of the advertisement I provided for pupils at the 39th Primary School here in Dresden for a so-called "Ganztagsangebot" (GTA for short), which is basically a one-hour-per-week course institutionalized as an afternoon activity. My target group has been children at the level of 1st and 2nd grade (ages 6 to 8). In this talk, I will briefly explain what I planned to do, what I did, and what I am planning to do in this GTA. I will mainly elaborate on the experiences I gathered teaching 6-to-8-year-olds over the course of this school year.
The talk will take place in a hybrid fashion, physically in the APB room 3027, and online through the link:
https://bbb.tu-dresden.de/b/pio-zwt-smp-aus +