BEGIN:VCALENDAR
PRODID:-//SMW Project//Semantic Result Formats
VERSION:2.0
METHOD:PUBLISH
X-WR-CALNAME:ICCL Events
BEGIN:VEVENT
SUMMARY:Exploiting forwardness: Satisfiability and Query-Entailment in Forward Guarded Fragment
URL://iccl.inf.tu-dresden.de/web/Exploiting_forwardness:_Satisfiability_and_Query-Entailment_in_Forward_Guarded_Fragment
UID://iccl.inf.tu-dresden.de/web/Exploiting_forwardness:_Satisfiability_and_Query-Entailment_in_Forward_Guarded_Fragment
DTSTART:20210506T130000
DTEND:20210506T143000
LOCATION:Online
DESCRIPTION:We study the complexity of two standard reasoning problems for Forward Guarded Logic (FGF)\, obtained as a restriction of the Guarded Fragment in which variables appear in atoms in a specific order. We show that FGF enjoys the higher-arity-forest-model property\, which results in ExpTime-completeness of its (finite) knowledge-base satisfiability problem. Moreover\, we show that FGF is well-suited for knowledge representation. By employing a generalisation of the spoiler technique\, we prove that the conjunctive query entailment problem for FGF remains in ExpTime. We believe that our results are quite unusual as FGF is\, to the best of our knowledge\, the first decidable fragment of First-Order Logic\, extending standard description logics\, that offers unboundedly many variables and higher-arity relations while keeping its complexity surprisingly low.\n\n\nThis talk is based on an accepted paper submission to JELIA 2021 and will take approximately 30 minutes. It will take place online via BigBlueButton. To access the room\, take one of the following links: \n\n\nwith ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=86974&p=57992a96\n\n\nwithout ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=86974&p=f38fa613
DTSTAMP:20210223T085257
SEQUENCE:33120
END:VEVENT
BEGIN:VEVENT
SUMMARY:From Data to Knowledge: Extending Database Techniques for Knowledge Graphs
URL://iccl.inf.tu-dresden.de/web/Seminar_29.04.2021
UID://iccl.inf.tu-dresden.de/web/Seminar_29.04.2021
DTSTART:20210429T143000
DTEND:20210429T160000
LOCATION:Online
DESCRIPTION:Modern applications like recommender systems and question answering systems can leverage models beyond traditional data representations. These novel applications build upon knowledge\, which cannot be easily captured with relational data models used in databases. Instead\, Knowledge Graphs (KGs) allow for modeling\, in a semi-structured way\, inter-connected facts or statements annotated with semantics. In KGs\, concepts and entities correspond to nodes while their connections are modeled as directed and labeled edges\, creating a graph.\nWhile the models for representing relational data and KGs differ considerably\, the architecture for querying databases has served as a foundation for querying KGs. However\, not all the advancements in databases can be directly applied to KGs. This lecture will present some necessary extensions as well as successful applications of database techniques to efficiently execute queries over KGs. First\, I will introduce the problem of query optimization and present extensions to traditional optimizers to cope with the semi-structured nature of KGs. Then\, I will present the application of adaptive execution techniques to handle unexpected conditions when querying decentralized KGs. I will conclude with an outlook on future research directions\, which include preliminary results on applying Deep Learning to the problem of query optimization for KGs.\n\n\nThis talk will take place online via BigBlueButton. To access the room\, take one of the following links: \n\nwith ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=94316&p=76ddab55\n\n\nwithout ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=94316&p=7b2c73ef
DTSTAMP:20210315T043942
SEQUENCE:33453
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Semantic Characterization of Belief Base Revision for Arbitrary Monotonic Logics
URL://iccl.inf.tu-dresden.de/web/A_Semantic_Characterization_of_Belief_Base_Revision_for_Arbitrary_Monotonic_Logics
UID://iccl.inf.tu-dresden.de/web/A_Semantic_Characterization_of_Belief_Base_Revision_for_Arbitrary_Monotonic_Logics
DTSTART:20210422T130000
DTEND:20210422T143000
LOCATION:Online
DESCRIPTION:The AGM postulates by Alchourrón\, Gärdenfors\, and Makinson continue to represent a cornerstone in research related to belief change. We generalize the approach of Katsuno and Mendelzon (KM) for characterizing AGM base revision from propositional logic to the setting of (multiple) base revision in arbitrary monotonic logics. Our core result is a representation theorem using the assignment of total – yet not transitive – “preference” relations to belief bases. We also provide a characterization of all logics for which our result can be strengthened to preorder assignments (as in KM’s original work).\n\n\nThis talk will take place online via BigBlueButton. To access the room\, take one of the following links:\n\nwith ZIH-login: \n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=103268&p=3fab0d1a\n\nwithout ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=103268&p=c61bdf2f
DTSTAMP:20210407T151457
SEQUENCE:33607
END:VEVENT
BEGIN:VEVENT
SUMMARY:An Introduction to Proof Theory I: Sequent Calculus
URL://iccl.inf.tu-dresden.de/web/Seminar_15.04.2021
UID://iccl.inf.tu-dresden.de/web/Seminar_15.04.2021
DTSTART:20210415T130000
DTEND:20210415T143000
LOCATION:Online
DESCRIPTION:Proof theory is an important branch of mathematical logic that is primarily concerned with the study and application of mathematical proofs. One of the preferred formalisms in proof theory for building deductive systems is Gerhard Gentzen's sequent calculus formalism. This talk is intended as an introduction to proof theory and will introduce a variant of Gentzen's sequent calculus for classical propositional logic. Moreover\, the sequent calculus will be used to demonstrate and define fundamental proof-theoretic notions such as sequents\, types of inference rules\, methods for proving soundness and completeness\, and properties of calculi.\n\n\nThis lecture will take place online via BigBlueButton. To access the room\, take one of the following links:\n\nwith ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=88852&p=57980ee1\n\nwithout ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=88852&p=7788afd1
DTSTAMP:20210407T194406
SEQUENCE:33610
END:VEVENT
BEGIN:VEVENT
SUMMARY:Detecting Non-Existence of Finite Universal Models for Existential Rules
URL://iccl.inf.tu-dresden.de/web/Seminar_08.04.2021
UID://iccl.inf.tu-dresden.de/web/Seminar_08.04.2021
DTSTART:20210408T130000
DTEND:20210408T143000
LOCATION:Online
DESCRIPTION:For reasoning over ontologies\, (finite) universal models play an important role in tasks like conjunctive query answering\, which is undecidable. The (restricted) chase is a sound and complete algorithm for computing (finite) universal models of ontologies featuring existential rules. Termination of the chase is undecidable and various sufficient conditions for termination and non-termination have been studied. If the chase terminates\, we obtain a finite universal model. However\, if the chase does not terminate\, a finite universal model may still exist. In recent work\, it has been shown that for certain ontologies for which the chase terminates\, there exists a chase sequence that yields a universal model that is a core and therefore is the smallest universal model for the given rule set up to isomorphism. We extend this result to non-terminating chase sequences. By that\, we are able to introduce a sufficient condition for the existence of an infinite universal model that is a core which in turn implies that no finite universal model exists.\n\nNote that this talk acts as a colloquium/examination for the module INF-PM-FPG. It will take place online via BigBlueButton. To access the room\, take one of the following links: \n\nwith ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=86971&p=30896df0\n\nwithout ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=86971&p=83001746
DTSTAMP:20210301T092106
SEQUENCE:33385
END:VEVENT
BEGIN:VEVENT
SUMMARY:Using Datalog to Ground ASP Programs
URL://iccl.inf.tu-dresden.de/web/Using_Datalog_to_Ground_ASP_Programs
UID://iccl.inf.tu-dresden.de/web/Using_Datalog_to_Ground_ASP_Programs
DTSTART:20210401T130000
DTEND:20210401T143000
LOCATION:Online
DESCRIPTION:Answer Set Programming (ASP) is a powerful tool in Knowledge Representation and Reasoning. The typical approach to find the answer sets of an ASP program is a ground-and-solve approach\, where a grounder transforms an input program with variables into a ground program without variables and a solver computes the answer sets of the ground program. To be feasible\, this requires an intelligent grounding phase to keep the grounding relatively small.\nThe relational query language Datalog is related to ASP and has some similarities to it. Since they arise from the same origin\, recent progress in the development of fast Datalog reasoners might be beneficial for grounding ASP programs. Thus\, we present an approach for an ASP grounder that uses a Datalog reasoner. We then implement this approach by using the Datalog reasoner VLog\, and we show that our implementation is competitive with state-of-the-art grounders. \n\n\nNote that this talk is a defence of a "Belegarbeit". It will take place online via BigBlueButton. To access the room\, take one of the following links: \n\n\nwith ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=86969&p=6e64180f\n\n\nwithout ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=86969&p=94b73c2d
DTSTAMP:20210223T085215
SEQUENCE:33118
END:VEVENT
BEGIN:VEVENT
SUMMARY:"Most of" leads to undecidability: Failure of adding frequencies to LTL
URL://iccl.inf.tu-dresden.de/web/%22Most_of%22_leads_to_undecidability:_Failure_of_adding_frequencies_to_LTL.
UID://iccl.inf.tu-dresden.de/web/%22Most_of%22_leads_to_undecidability:_Failure_of_adding_frequencies_to_LTL.
DTSTART:20210325T130000
DTEND:20210325T143000
LOCATION:Online
DESCRIPTION:(joint work with Jakub Michaliszyn from University of Wrocław)\n\nLinear Temporal Logic (LTL) interpreted on finite traces is a robust specification framework popular in formal verification. However\, despite the high interest in the logic in recent years\, the topic of its quantitative extensions is not yet fully explored. The main goal of this work is to study the effect of adding weak forms of percentage constraints (e.g. that most of the positions in the past satisfy a given condition\, or that sigma is the most-frequent letter occurring in the past) to fragments of LTL. Such extensions could potentially be used for the verification of influence networks or statistical reasoning. Unfortunately\, as we prove in the paper\, it turns out that percentage extensions of even tiny fragments of LTL have undecidable satisfiability and model-checking problems. Our undecidability proofs not only sharpen most of the undecidability results on logics with arithmetics interpreted on words known from the literature\, but also are fairly simple. We also show that the undecidability can be avoided by restricting the allowed usage of negation\, and briefly discuss how the undecidability results transfer to first-order logic on words.\n\nThe talk will take approximately 30 minutes and will be held online via BigBlueButton. To access the room\, take one of the following links: \n\n\nwith ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=67110&p=dbb695ef\n\nwithout ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=67110&p=cf01a562
DTSTAMP:20210319T114641
SEQUENCE:33518
END:VEVENT
BEGIN:VEVENT
SUMMARY:Evaluating the Generality of Disjunctive Model Faithful Acyclicity on OWL ontologies
URL://iccl.inf.tu-dresden.de/web/Seminar_18.03.2021
UID://iccl.inf.tu-dresden.de/web/Seminar_18.03.2021
DTSTART:20210318T130000
DTEND:20210318T143000
LOCATION:Online
DESCRIPTION:The chase is a well-studied\, sound and complete algorithm that is used in different variants as a basis for reasoning tasks over (disjunctive) existential rules. Since termination of the chase is undecidable\, acyclicity notions\, i.e. sufficient conditions for termination\, like model faithful acyclicity (MFA) for the skolem chase and restricted model faithful acyclicity (RMFA) for the restricted chase have been introduced. The recently developed acyclicity notion disjunctive model faithful acyclicity (DMFA) for the disjunctive skolem chase promises theoretical improvements over existing notions like MFA for detecting termination. We further know that RMFA captures DMFA while RMFA itself is not sound for the disjunctive skolem chase. We evaluate the generality of DMFA in practice compared to MFA and RMFA on rule sets that we obtain from real-world OWL ontologies in the Oxford ontology repository (OXFD) and the dataset of the OWL reasoner evaluation 2015 (ORE15). Our results show that DMFA achieves practical improvements over MFA that narrow the gap towards RMFA. Our findings motivate further research regarding the disjunctive skolem chase in general and the development of sufficient conditions for non-termination of the disjunctive skolem chase in particular.\n\n\nNote that this talk acts as a colloquium/examination for the module INF-PM-FPG. It will take place online via BigBlueButton. To access the room\, take one of the following links: \n\nwith ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=86967&p=b14dca92\n\nwithout ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=86967&p=00922683
DTSTAMP:20210301T091808
SEQUENCE:33383
END:VEVENT
BEGIN:VEVENT
SUMMARY:Solving Datalog(S) Problems with Lazy-Grounding ASP Solving
URL://iccl.inf.tu-dresden.de/web/Solving_Datalog(S)_Problems_with_Lazy-Grounding_ASP_Solving
UID://iccl.inf.tu-dresden.de/web/Solving_Datalog(S)_Problems_with_Lazy-Grounding_ASP_Solving
DTSTART:20210304T130000
DTEND:20210304T143000
LOCATION:Online
DESCRIPTION:Datalog(S) extends Datalog with support for sets\, and it can be captured in a decidable fragment of existential rules. We show that it can also be captured by Answer Set Programming (ASP) with function symbols. The traditional ground-and-solve approach for ASP\, however\, is insufficient\, as it computes the grounding in a first step before solving. Thus\, the introduction of function symbols can lead to infinite groundings\, even though a finite one would be sufficient\, since grounding and solving are separated and the obtained grounding is an over-approximation of the smallest possible one. We therefore propose the use of ASP systems with lazy-grounding\, i.e.\, systems that interleave grounding and solving\, to solve Datalog(S) programs as they guarantee termination. We show empirically that the lazy-grounding ASP solver Alpha can solve practical Datalog(S) problems. \n\n\nNote that this talk acts as a colloquium/examination for the module INF-PM-FPG.\nIt will take place online via BigBlueButton. To access the room\, take one of the following links: \n\n\nwith ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=86965&p=58febf13\n\n\nwithout ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=86965&p=35a69ec8
DTSTAMP:20210223T085148
SEQUENCE:33117
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Graphs for AI: Wikidata and Beyond
URL://iccl.inf.tu-dresden.de/web/Knowledge_Graphs_for_AI:_Wikidata_and_Beyond
UID://iccl.inf.tu-dresden.de/web/Knowledge_Graphs_for_AI:_Wikidata_and_Beyond
DTSTART:20210211T130000
DTEND:20210211T143000
LOCATION:Online
DESCRIPTION:Abstract:\n"Wikidata\, the knowledge graph of Wikimedia\, has successfully grown from an experimental “data wiki” to a well-organized reference knowledge base with a large and active editor community as well as many academic and industrial uses. It is also a key ingredient of popular AI applications\, most prominently of intelligent agents such as Apple's Siri or Amazon's Alexa. Of course\, human knowledge is fully expected to be in high demand in this time of rapidly advancing AI. And yet\, the fact that modern AI relies on the manual labor of thousands of human knowledge modelers is in stark contrast to the common narrative of AI in popular media\, which tells us that methods of pattern recognition and statistical function approximation can produce intelligent behavior from unstructured data without much human intervention. However\, Wikidata is not a singular exception to the trend but rather a specific solution to a general need of AI: the need for knowledge that is understandable to humans and accessible to computers. Almost every major AI application incorporates such knowledge\, and organizations have long realized the need to acquire and develop knowledge resources as part of their AI strategy. The next frontier in AI is the ability of systems to explain and justify their behavior. There\, too\, we can see the need for knowledge-based technologies as a bridge between human understanding and computational mechanisms\, but the task goes far beyond the realms of knowledge representation or machine learning\, and will require the effort of all of AI and maybe all of computer science.\n\nIn my talk\, I will give an overview of Wikidata and other knowledge-based technologies in AI\, and outline some ongoing research efforts that combine knowledge representation with other methods towards the creation of (more) understandable and accountable AI."\n\n\n\nThis talk will have a duration of approximately 45 minutes and takes place via BigBlueButton. To access the room\, use one of the following links: \n\nwith ZIH-Login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=82158&p=3d86e9c8\n\nwithout ZIH-Login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=82158&p=506714c8
DTSTAMP:20210205T091100
SEQUENCE:32692
END:VEVENT
BEGIN:VEVENT
SUMMARY:Codifying Logical Fragments in ASP - A General Knowledge Base Approach
URL://iccl.inf.tu-dresden.de/web/Codifying_Logical_Fragments_in_ASP_-_A_General_Knowledge_Base_Approach
UID://iccl.inf.tu-dresden.de/web/Codifying_Logical_Fragments_in_ASP_-_A_General_Knowledge_Base_Approach
DTSTART:20210204T130000
DTEND:20210204T143000
LOCATION:Online
DESCRIPTION:The study of logic fragments\, their limitations\, and their expressive properties has long been an important field for topics such as Knowledge Representation\, Model Checking and even Human Reasoning. Many different research fields have established properties of different logic fragments and drawn links between these fragments. In this project\, we have aimed to build a general framework for representing these different fragments and their properties\, and for inferring new results from their relative expressivity. Using Answer Set Programming (ASP) for its useful computational properties\, we have implemented this knowledge base and provide ways to query it or find explanations for any inferred results.\n\nThis talk will be a project defence and takes place via BigBlueButton. To access the room\, take one of the following links:\n\nwith ZIH-Login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=79924&p=e5e8bd36\n\nwithout ZIH-Login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=79924&p=e66e7dfe
DTSTAMP:20210203T093942
SEQUENCE:32628
END:VEVENT
BEGIN:VEVENT
SUMMARY:Integration and Evaluation of an ASP-Solver as an Alternative Reasoning Backend in the Rulewerk Toolkit
URL://iccl.inf.tu-dresden.de/web/Integration_and_Evaluation_of_an_ASP-Solver_as_an_Alternative_Reasoning_Backend_in_the_Rulewerk_Toolkit
UID://iccl.inf.tu-dresden.de/web/Integration_and_Evaluation_of_an_ASP-Solver_as_an_Alternative_Reasoning_Backend_in_the_Rulewerk_Toolkit
DTSTART:20201217T130000
DTEND:20201217T143000
LOCATION:Online
DESCRIPTION:This talk will be a BA thesis defence.\n\nRulewerk is a Java API that is built to abstract the access to data sources for the rule-based reasoner VLog. Since the VLog reasoner is actively developed\, there is always a need to evaluate potential improvements and to compare VLog's performance to other systems. A good way to make comparing multiple reasoners easier is to provide them easy access to the same\, diverse data sources.\nA potential first alternative reasoning backend is the ASP system Clingo. In this thesis I present a translation from the Datalog-based input language of Rulewerk into the ASP-based input language of Clingo. I also show a direct comparison of the performance of both backends.\n\n\nThis talk will take place via BigBlueButton. To access the room\, take one of the following links:\n\nwith ZIH-Login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=56216&p=bb0e03e2\n\n\nwithout ZIH-Login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?m=56216&p=6efb2684
DTSTAMP:20201216T070540
SEQUENCE:32258
END:VEVENT
BEGIN:VEVENT
SUMMARY:Positivity-Hardness and Saturation Points in Markov Decision Processes
URL://iccl.inf.tu-dresden.de/web/Positivity-Hardness_and_Saturation_Points_in_Markov_Decision_Processes
UID://iccl.inf.tu-dresden.de/web/Positivity-Hardness_and_Saturation_Points_in_Markov_Decision_Processes
DTSTART:20201105T130000
DTEND:20201105T143000
LOCATION:Online
DESCRIPTION:The Positivity problem for linear recurrence sequences is a number-theoretic decision problem whose decidability has been open for many decades. We show that this Positivity problem can be encoded into several optimization problems on Markov decision processes\, a standard model combining non-deterministic and probabilistic behavior. This reduction demonstrates the inherent mathematical difficulty of various problems in the formal verification of probabilistic systems.\nFor naturally restricted versions of most of these problems\, we can nevertheless establish decidability. The key ingredient is the existence of so-called saturation points which can be seen as bounds on the memory required for an optimal strategy in the Markov decision process.\n\n\nThe talk will be approximately 45 minutes long and takes place online. To access the talk use the following links:\n\nIf you have a ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?meeting_id=42224&pin=4c120949\n\n\nIf you do not have a ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?meeting_id=42224&pin=d3219e91
DTSTAMP:20201030T155351
SEQUENCE:31781
END:VEVENT
BEGIN:VEVENT
SUMMARY:Multi-Context Stream Reasoning
URL://iccl.inf.tu-dresden.de/web/TBA6
UID://iccl.inf.tu-dresden.de/web/TBA6
DTSTART:20201029T130000
DTEND:20201029T143000
LOCATION:Online
DESCRIPTION:The field of artificial intelligence and knowledge representation (KR) has produced a wide variety of formalisms\, notions\, languages\, and formats during the past decades. Each approach has been motivated and designed with specific applications in mind. Nowadays\, in the era of Industry 4.0\, the Internet of Things\, and smart devices\, we are interested in ways to connect the various approaches and allow them to distribute and exchange their knowledge and beliefs in a uniform way. In addition\, most information is provided continuously over time in streams of data. This leads to the problem that these sophisticated knowledge representation approaches cannot understand each other's points of view\, their positions on semantics are not necessarily compatible\, and most formalisms are designed as one-shot computations. These three problems between different KR formalisms have been tackled by the concept of stream-aware Multi-Context Systems\, which allow methods to transfer information under a strong and generalised notion of semantics over time.\n\n\nStefan Ellmauthaler is a post-doctoral researcher in the group of Gerhard Brewka in Leipzig. He received his B.Sc. in 'Medicine and Computer Science' and his diploma in 'Computational Intelligence' at the Vienna University of Technology. His doctoral thesis 'Multi-Context Reasoning in Continuous Data-Flow Environments' was supervised by Gerhard Brewka at Leipzig University. Currently his research fields are abstract argumentation\, logic programming\, multi-context systems\, and hybrid reasoning approaches like stream reasoning and reactive reasoning.\n\n\nThe talk will take place online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20201020T122226
SEQUENCE:31570
END:VEVENT
BEGIN:VEVENT
SUMMARY:Forgetting Atoms and Arguments
URL://iccl.inf.tu-dresden.de/web/TBA5
UID://iccl.inf.tu-dresden.de/web/TBA5
DTSTART:20201027T111500
DTEND:20201027T124500
LOCATION:Online
DESCRIPTION:Whereas the operation of forgetting has recently seen a considerable amount of attention in the context of Answer Set Programming (ASP)\, most of it has focused on theoretical aspects\, leaving the practical issues largely untouched. Recent studies include results about what sets of properties operators should satisfy\, as well as the abstract characterization of several operators and their theoretical limits. However\, no concrete operators have been investigated.\nWe address this issue by presenting the first concrete operator that satisfies strong persistence (SP) - a property that seems to best capture the essence of forgetting in the context of ASP - whenever this is possible\, and many other important properties. The operator is syntactic\, limiting the computation of the forgetting result to manipulating the rules in which the atoms to be forgotten occur\, naturally yielding a forgetting result that is close to the original program. \nWhenever (SP) cannot be satisfied\, it is inevitable to break some relations between the atoms not to be forgotten.\nTherefore\, an alternative to satisfying (SP) only when possible is to enhance the logical language so that all such relations can be maintained after the forgetting operation.\nWe borrow from the recently introduced notion of fork – a conservative extension of Equilibrium Logic and its monotonic basis\, the logic of Here-and-There – which has been shown to be sufficient to overcome the problems related to satisfying (SP). We map this notion into the language of logic programs\, enhancing it with so-called anonymous cycles\, and we introduce a concrete syntactical forgetting operator over this enhanced language that we show to always obey strong persistence.\nResearch on forgetting in the context of Dung-style argumentation frameworks (AFs) is much less developed. For example\, properties that forgetting operators over AFs should satisfy have only been compiled this year. We provide a summary of relevant results and preliminary thoughts on how to proceed in the field.\n\n\nShort-Bio:\nMatti Berthold received his B.Sc. in Computer Science at the University of Leipzig\, before studying Computational Logic at TU Dresden and FCT NOVA Lisbon. During his studies he had the opportunity to assist with and conduct teaching\, and to publish and present the results of both his theses.\n\n\nThe talk will take place online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20201020T122044
SEQUENCE:31569
END:VEVENT
BEGIN:VEVENT
SUMMARY:Inconsistency Values for Logic Programs under Answer Set Semantics
URL://iccl.inf.tu-dresden.de/web/Inconsistency_Values_for_Logic_Programs_under_Answer_Set_Semantics
UID://iccl.inf.tu-dresden.de/web/Inconsistency_Values_for_Logic_Programs_under_Answer_Set_Semantics
DTSTART:20201022T130000
DTEND:20201022T143000
LOCATION:Online
DESCRIPTION:The proposition that conflicting pieces of information cannot exist together appears to be too simplistic for developing intelligent systems. Knowledge representation tools such as ASP (Answer Set Programming) rely on non-monotonic reasoning\, i.e.\, programs may contain inconsistent information without collapsing. While measuring inconsistency in monotonic formalisms has been investigated for some time now\, my master thesis addresses the issue of measuring inconsistency in ASP in order to provide an approach for analysing and resolving inconsistency in logic-based knowledge representation. Taking nonmonotonicity into account demands a generalisation of the notion of inconsistency\, namely Strong Inconsistency [2]. Aiming to generalise A. Hunter's and S. Konieczny's approach [1] to non-monotonic formalisms\, the key idea is to characterise and use Minimal Strong Inconsistency Values for inducing a game in coalitional form on a knowledge base\, and to then use the Shapley Value in order to obtain a solution to this sort of blame game\, which reflects the impact of each element on the overall inconsistency in the base. Based on the latter I characterise the Minimal Strong Inconsistency Shapley Strong Inconsistency (MSISSI) value and the MSISSI measure\, which provide an approach to describing inconsistency in ASP and allow for repairing inconsistent knowledge bases.\n\n[1] Anthony Hunter and Sébastien Konieczny. "On the measure of conflicts: Shapley inconsistency values". In: Artificial Intelligence 174.14 (2010)\, pp. 1007-1026.\n\n[2] Markus Ulbricht\, Matthias Thimm\, and Gerhard Brewka. "Measuring Strong Inconsistency." In: AAAI. 2018\, pp. 1989-1996.\n\n\nDominik Rusovac was born in Vienna and studied Philosophy at the University of Vienna. He then studied Logic at the University of Leipzig\, focussing on knowledge representation and formal systems. After graduating in 2019\, he has been working as a software developer since January 2020.\n\n\nThe talk will take place online and last around 45 minutes\, after which there will be an opportunity to ask questions. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20201016T090429
SEQUENCE:31550
END:VEVENT
BEGIN:VEVENT
SUMMARY:Don’t Repeat Yourself: Termination of the Skolem Chase on Disjunctive Existential Rules
URL://iccl.inf.tu-dresden.de/web/Termination_of_the_Skolem_Chase_on_Disjunctive_Existential_Rules
UID://iccl.inf.tu-dresden.de/web/Termination_of_the_Skolem_Chase_on_Disjunctive_Existential_Rules
DTSTART:20201008T130000
DTEND:20201008T143000
LOCATION:Online
DESCRIPTION:Disjunctive existential rules are a fragment of first-order logic that is already expressive enough to capture many description logics. Conjunctive query answering is an important reasoning task over such rules. Although this problem is undecidable\, different variants of the chase provide sound and complete algorithms that can be used as a reasoning basis. Since it is undecidable whether these algorithms terminate for a specific rule set\, sufficient conditions for termination\, called acyclicity notions\, are introduced. We develop Disjunctive Model Faithful Acyclicity (DMFA) as a novel acyclicity notion for the disjunctive skolem chase variant by using ideas from Model Faithful Acyclicity (MFA) for the non-disjunctive skolem chase and Restricted Model Faithful Acyclicity (RMFA) for the restricted chase. Our research shows that our notion captures MFA and is able to give a better approximation of termination on disjunctive existential rules. Acyclicity notions for the restricted chase like RMFA still capture DMFA\, but these notions are not sound for the disjunctive skolem chase. Our results encourage the use of the disjunctive skolem chase in practical applications\, as it is implementable using well-optimized ASP solvers.\n\n\nThis talk is a defence of the speaker's "Großer Beleg".\nFurthermore\, it will be held online and take approximately 45 minutes\, consisting of an uninterrupted talk followed by a question-and-answer part. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20201002T175430
SEQUENCE:31420
END:VEVENT
BEGIN:VEVENT
SUMMARY:Smoke Test Planning using Answer Set Programming
URL://iccl.inf.tu-dresden.de/web/Smoke_Test_Planning_using_Answer_Set_Programming
UID://iccl.inf.tu-dresden.de/web/Smoke_Test_Planning_using_Answer_Set_Programming
DTSTART:20200910T130000
DTEND:20200910T143000
LOCATION:Online
DESCRIPTION:Smoke testing is an important method to increase the stability and reliability of hardware-dependent systems. Due to concurrent access to the same physical resource and the impracticality of using virtualization\, smoke testing requires some form of planning. In this paper\, we propose to decompose test cases into atomic actions consisting of preconditions and effects. We present a solution based on answer set programming with multi-shot solving that automatically generates short parallel test plans. Experiments suggest that the approach is feasible for non-inherently sequential test cases and scales up to thousands of test cases.\n\n\nThis talk will be held online and take approximately 30 minutes\, after which questions can be asked. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200910T121605
SEQUENCE:31266
END:VEVENT
BEGIN:VEVENT
SUMMARY:Notation3 Logic: From informal to formal semantics
URL://iccl.inf.tu-dresden.de/web/Notation3_Logic:_From_informal_to_formal_semantics
UID://iccl.inf.tu-dresden.de/web/Notation3_Logic:_From_informal_to_formal_semantics
DTSTART:20200730T130000
DTEND:20200730T143000
LOCATION:Online
DESCRIPTION:Notation3 Logic is a rule-based extension of RDF. Since its invention\, the logic has been refined and applied in several reasoning engines such as EYE\, Cwm\, and FuXi. But despite these developments\, a clear formal definition of Notation3’s semantics is still missing\, and the details of the logic are only defined informally. This lack of formalisation does not only cause theoretical problems (the relationship to other logics cannot be investigated)\, it also has practical consequences: in many cases\, the interpretations of the same formula differ between reasoning engines. In this talk\, I will explain these differences and discuss how the formal semantics of the logic can be defined based on the informal specifications and the implementations.\n\n\nThis talk will be held online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200727T072445
SEQUENCE:31071
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Complexity of Synthesis of nop-Free Boolean Petri Nets
URL://iccl.inf.tu-dresden.de/web/On_the_Complexity_of_Synthesis_of_nop-Free_Boolean_Petri_Nets.
UID://iccl.inf.tu-dresden.de/web/On_the_Complexity_of_Synthesis_of_nop-Free_Boolean_Petri_Nets.
DTSTART:20200724T130000
DTEND:20200724T143000
LOCATION:Online
DESCRIPTION:In a Boolean Petri net\, the interaction nop allows places and transitions to be independent\, so that the firing of a transition does not affect the marking of a place\, and vice versa. Recently\, the complexity of synthesis of nets equipped with nop has been investigated thoroughly\, while the question for the remaining 128 types of nets remains open. This work tackles the synthesis of nop-free nets\, that is\, Boolean nets where places and transitions are always related via interactions that are able to modify the marking of a place. In this paper\, we show that\, for nop-free nets\, the absence of swap always leads to a polynomial-time synthesis procedure. Moreover\, we give a first hint that the presence of swap might make synthesis for these types NP-complete.\n\n\nThis talk will be held online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200722T203026
SEQUENCE:31066
END:VEVENT
BEGIN:VEVENT
SUMMARY:Imprecise Probabilities in Decision-making
URL://iccl.inf.tu-dresden.de/web/Imprecise_Probabilities_in_Decision-making
UID://iccl.inf.tu-dresden.de/web/Imprecise_Probabilities_in_Decision-making
DTSTART:20200709T130000
DTEND:20200709T143000
LOCATION:Online
DESCRIPTION:An agent’s beliefs come in different strengths. We understand her degree of belief in a proposition as a measure of the strength of her belief in that proposition. According to the orthodox Bayesian picture\, an agent's degree of belief is best represented by a single probability function. In particular\, the Bayesian claims that agents must assign numerically precise probabilities to every proposition that they can entertain. On an alternative account\, an agent’s beliefs ought to be modeled based on imprecise probabilities\; imprecise degrees of belief can then be represented by a set of probability functions.\nRecently\, however\, imprecise probabilities have come under attack. Adam Elga (2010) claims that there is no adequate account of the way they can be manifested in decision-making. In response to Elga\, more elaborate accounts of the imprecise framework have been developed. One of them is based on Supervaluationism\, originally a semantic approach to vague predicates. Still\, Seamus Bradley (2019) shows that those accounts that solve Elga’s problem have a more severe defect: they undermine a central motivation for introducing imprecise probabilities in the first place.\nThe aim of my presentation is to modify the supervaluationist approach in such a way that it accounts for both Elga’s and Bradley’s challenges to the imprecise framework.\n\n\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200706T044025
SEQUENCE:30938
END:VEVENT
BEGIN:VEVENT
SUMMARY:ASNP: a tame fragment of existential second-order logic
URL://iccl.inf.tu-dresden.de/web/ASNP:_a_tame_fragment_of_existential_second-order_logic
UID://iccl.inf.tu-dresden.de/web/ASNP:_a_tame_fragment_of_existential_second-order_logic
DTSTART:20200625T130000
DTEND:20200625T143000
LOCATION:Digital
DESCRIPTION:Amalgamation SNP (ASNP) is a fragment of existential second-order logic that strictly contains binary connected MMSNP of Feder and Vardi and binary connected guarded monotone SNP of Bienvenu\, ten Cate\, Lutz\, and Wolter\; it is a promising candidate for an expressive subclass of NP that exhibits a complexity dichotomy. We show that ASNP has a complexity dichotomy if and only if the infinite-domain dichotomy conjecture holds for constraint satisfaction problems for first-order reducts of binary finitely bounded homogeneous structures. For such CSPs\, powerful universal-algebraic hardness conditions are known that are conjectured to describe the border between NP-hard and polynomial-time tractable CSPs. The connection to CSPs also implies that every ASNP sentence can be evaluated in polynomial time on classes of finite structures of bounded treewidth. We show that the syntax of ASNP is decidable. The proof relies on the fact that for classes of finite binary structures given by finitely many forbidden substructures\, the amalgamation property is decidable.\n\n\nThis will be a test run for the presentation of the eponymous paper: the 20-minute prerecorded talk (as it will be presented at the conference)\, followed by a Q&A session in which questions will be answered by Simon Knäuer\, one of the authors. Feedback and suggestions in preparation for the conference talk are strongly encouraged.\n\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200622T054325
SEQUENCE:30849
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Method of Refinement: Deriving Proof-Calculi from Semantics for Multi-Modal Logics
URL://iccl.inf.tu-dresden.de/web/The_Method_of_Refinement:_Deriving_Proof-Calculi_from_Semantics_for_Multi-Modal_Logics
UID://iccl.inf.tu-dresden.de/web/The_Method_of_Refinement:_Deriving_Proof-Calculi_from_Semantics_for_Multi-Modal_Logics
DTSTART:20200611T130000
DTEND:20200611T143000
LOCATION:Digital
DESCRIPTION:In this talk\, we look at how to derive nested calculi from labelled calculi for multi-modal logics\, thus connecting the general results for labelled calculi with the more refined formalism of nested sequents. Since labelled calculi are constructed by transforming the semantics of a logic into a proof system\, the method of refinement shows how one can derive simplified calculi from the semantics of a logic. The extraction of nested calculi from labelled calculi is obtained via considerations pertaining to the elimination of structural rules in labelled derivations. As a consequence of the extraction process\, each nested calculus inherits fundamental proof-theoretic properties from its associated labelled calculus and is suitable for applications such as proving decidability and interpolation.\n\nTim is currently working at the Faculty of Informatics of TU Wien in Vienna as a PreDoc Researcher in the research unit "Theory and Logic".\n\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200520T082459
SEQUENCE:30588
END:VEVENT
BEGIN:VEVENT
SUMMARY:Compositional Matrix-Space Models: Learning Methods and Evaluation
URL://iccl.inf.tu-dresden.de/web/Compositional_Matrix-Space_Models:_Learning_Methods_and_Evaluation
UID://iccl.inf.tu-dresden.de/web/Compositional_Matrix-Space_Models:_Learning_Methods_and_Evaluation
DTSTART:20200528T130000
DTEND:20200528T143000
LOCATION:Digital
DESCRIPTION:There has been a lot of research on machine-readable representations of words for natural language processing (NLP). One mainstream paradigm for word meaning representation comprises vector-space models obtained from the distributional information of words in text. Machine learning techniques have been proposed to produce such word representations for computational linguistic tasks. Moreover\, the representation of multi-word structures\, such as phrases\, in vector space can arguably be achieved by composing the distributional representations of the constituent words. To this end\, mathematical operations have been introduced as composition methods in vector space. An alternative approach to word representation and semantic compositionality in natural language has been compositional matrix-space models. In this thesis\, two research directions are considered. In the first\, concerning compositional matrix-space models\, we explore word meaning representations and the semantic composition of multi-word structures in matrix space. The main motivation for working on these models is that they have shown superiority over vector-space models regarding several properties. The most important property is that the composition operation in matrix-space models can be defined as standard matrix multiplication\; in contrast to common vector-space composition operations\, this is sensitive to word order in language. We design and develop machine learning techniques that induce continuous and numeric representations of natural language in matrix space. The main goal in introducing representation models is enabling NLP systems to understand natural language in order to solve multiple related tasks. Therefore\, first\, different supervised machine learning approaches are proposed to train word meaning representations and capture the compositionality of multi-word structures using the matrix multiplication of words. The performance of matrix representation models learned by machine learning techniques is investigated in solving two NLP tasks\, namely sentiment analysis and compositionality detection. Then\, learning techniques for matrix-space models are proposed that introduce generic\, task-agnostic representation models\, also called word matrix embeddings. In these techniques\, word matrices are trained using the distributional information of words in a given text corpus. We show the effectiveness of these models in the compositional representation of multi-word structures in natural language.\nThe second research direction in this thesis explores effective approaches for evaluating the capability of semantic composition methods to capture the meaning representation of compositional multi-word structures\, such as phrases. A common evaluation approach is to examine the ability of the methods to capture the semantic relatedness between linguistic units. The underlying assumption is that the more accurately a method of semantic composition can determine the representation of a phrase\, the more accurately it can determine the relatedness of that phrase with other phrases. To apply the semantic relatedness approach\, gold-standard datasets have been introduced. In this thesis\, we identify the limitations of the existing datasets and develop a new gold-standard semantic relatedness dataset\, which addresses their issues. The proposed dataset allows us to evaluate meaning composition in vector- and matrix-space models.\n\nThe presentation will take 45 minutes without questions. Afterwards\, there will be a Q&A.\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200508T121859
SEQUENCE:30495
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tracking False Information Online
URL://iccl.inf.tu-dresden.de/web/Tracking_False_Information_Online
UID://iccl.inf.tu-dresden.de/web/Tracking_False_Information_Online
DTSTART:20200227T161500
DTEND:20200227T174500
LOCATION:APB 1004
DESCRIPTION:Digital media enables fast sharing of information and discussions among users. While this comes with many benefits to today’s society\, such as broadening information access\, the manner in which information is disseminated also has obvious downsides. Since fast access to information is expected by many users and news outlets are often under financial pressure\, speedy access often comes at the expense of accuracy\, which leads to misinformation. Moreover\, digital media can be misused by campaigns to intentionally spread false information\, i.e. disinformation\, about events\, individuals or governments. In this talk\, I will present different ways false information is spread online\, including misinformation and disinformation. I will then report findings from our recent and ongoing work on automatic fact checking\, stance detection and framing attitudes.
DTSTAMP:20200220T161257
SEQUENCE:30135
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Distributed Blockchain Model of Selfish Mining
URL://iccl.inf.tu-dresden.de/web/A_Distributed_Blockchain_Model_of_Selfish_Mining
UID://iccl.inf.tu-dresden.de/web/A_Distributed_Blockchain_Model_of_Selfish_Mining
DTSTART:20200226T123000
DTEND:20200226T140000
LOCATION:APB 3027
DESCRIPTION:Abstract:\n Bitcoin is currently still the most widely used cryptocurrency\, judging from its market cap and trade volume. A big part of Bitcoin's appeal is the mining process\, which makes sure that new transactions are validated and processed in a distributed fashion\, maintaining a distributed ledger known as the blockchain. Miners receive a fee for every block of transactions that they mine\, as an incentive for their computational effort. In the long run\, they can expect a reward proportional to the computational power they provide to the network.\nHowever\, Eyal and Sirer introduced a strategy for a miner to time the publishing of blocks in a way that gives them a significant edge in profits.\nThis talk will present a model of the behaviour of honest and selfish mining pools in UPPAAL that can be used for analysing properties of the mining process in the presence of network delay\, taking into account the distributed nature of the process. The analysis shows what effects selfish mining would have on the share of profits\, but also on the number of orphaned blocks in the blockchain. This analysis also allows us to compare those results to real-world data\, to establish whether there is evidence for the presence of selfish miners.\n\nBio:\n Ansgar Fehnker is Associate Professor for Programming Education at the Faculty of Electrical Engineering\, Mathematics\, and Computer Science at the University of Twente. He is part of the Formal Methods and Tools group and has previously been a professor at the University of the South Pacific and a researcher at Australia's ICT research centre NICTA\, working on static analysis for C/C++ and on the analysis of wireless network protocols with model checking. Prior to joining NICTA\, he was a PostDoc in the model-checking teams at Carnegie Mellon University\, US. He received his PhD from Radboud University Nijmegen for work on the verification of real-time and hybrid systems.
DTSTAMP:20200217T091734
SEQUENCE:30129
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Graph Curation and Reasoning using the Example of the Scholarly Domain
URL://iccl.inf.tu-dresden.de/web/Knowledge_Graph_Curation_and_Reasoning_using_the_Example_of_the_Scholarly_Domain
UID://iccl.inf.tu-dresden.de/web/Knowledge_Graph_Curation_and_Reasoning_using_the_Example_of_the_Scholarly_Domain
DTSTART:20200130T130000
DTEND:20200130T143000
LOCATION:APB 3027
DESCRIPTION:Knowledge graphs allow organisations and enterprises to integrate their internal and external heterogeneous sources of information into a unified form and enable analytics and the discovery of unknown knowledge. To exploit the information encoded in knowledge graphs\, analysis of the graph structure as well as of the semantics of the represented relations is required. I will show this using the scholarly domain as an example. The heterogeneity of scholarly artifacts and their metadata\, spread over different Web data sources\, serves as a great use-case platform for data analytics and reasoning methods. In this talk\, I will first have a look into major challenges of this domain regarding KG creation and curation leveraging Semantic Web technologies. I will further showcase the application of knowledge graph embedding models for link prediction scenarios in this domain.
DTSTAMP:20200123T134125
SEQUENCE:30034
END:VEVENT
BEGIN:VEVENT
SUMMARY:Modeling Computational Properties of Description Logics in ASP
URL://iccl.inf.tu-dresden.de/web/Modeling_Computational_Properties_of_Description_Logics_in_ASP
UID://iccl.inf.tu-dresden.de/web/Modeling_Computational_Properties_of_Description_Logics_in_ASP
DTSTART:20200129T090000
DTEND:20200129T103000
LOCATION:APB 3027
DESCRIPTION:Tracking the increasing volume of research on description logics is getting harder. Moreover\, these results interact with each other and can be combined to deduce new results. That is why we need a knowledge base that encodes these results in a smart way and infers further results based on what we currently know. This talk presents an approach for encoding such information with the help of Answer Set Programming (ASP). In addition\, we show how such a system can be integrated into a website that visualizes the current research results and the inferences made from them. We end by analyzing this approach and suggesting some future work.
DTSTAMP:20200129T094156
SEQUENCE:30083
END:VEVENT
BEGIN:VEVENT
SUMMARY:Checking Chase Termination over Ontologies of Existential Rules with Equality
URL://iccl.inf.tu-dresden.de/web/Checking_Chase_Termination_over_Ontologies_of_Existential_Rules_with_Equality
UID://iccl.inf.tu-dresden.de/web/Checking_Chase_Termination_over_Ontologies_of_Existential_Rules_with_Equality
DTSTART:20200123T130000
DTEND:20200123T143000
LOCATION:APB 3027
DESCRIPTION:The chase is a sound and complete algorithm for conjunctive query answering over ontologies of existential rules with equality. To enable its effective use\, we can apply acyclicity notions\; that is\, sufficient conditions that guarantee chase termination. Unfortunately\, most of these notions have only been defined for existential rule sets without equality. A proposed solution to circumvent this issue is to treat equality as an ordinary predicate with an explicit axiomatisation. We empirically show that this solution is not efficient in practice and propose an alternative approach. More precisely\, we show that\, if the chase terminates for any equality axiomatisation of an ontology\, then it terminates for the original ontology (which may contain equality). Therefore\, one can apply existing acyclicity notions to check chase termination over an axiomatisation of an ontology and then use the original ontology for reasoning. We show that\, in practice\, doing so results in a more efficient reasoning procedure. Furthermore\, we present equality model-faithful acyclicity\, a general acyclicity notion that can be directly applied to ontologies with equality.\n\n\nThis talk is a rehearsal for AAAI 2020. \nJoint work with Jacopo Urbani.
DTSTAMP:20200107T165627
SEQUENCE:29972
END:VEVENT
BEGIN:VEVENT
SUMMARY:Musings on the Semantics of SPARQL
URL://iccl.inf.tu-dresden.de/web/Musings_on_the_Semantics_of_SPARQL
UID://iccl.inf.tu-dresden.de/web/Musings_on_the_Semantics_of_SPARQL
DTSTART:20200109T130000
DTEND:20200109T143000
LOCATION:APB 3027
DESCRIPTION:Graph simulations have found their way into different graph database management (GDBM) tasks\, e.g.\, in the shape of offline indexing structures\, as theoretical models for graph schemas\, or as viable alternatives to matching patterns up to graph homomorphisms. Among other advantages\, it is often the tractability of the simulation problem that is exploited in emerging applications. However\, when it comes to evaluating the approaches\, only basic graph patterns (BGPs) and rather small data instances\, compared to today's large data instances like Wikidata or DBpedia\, are considered. In the first part of this talk\, I give some insights on how far graph simulations may be incorporated into full-fledged graph query processing. To this end\, we analyze different semantic interpretations of SPARQL\, based on graph simulation\, w.r.t. correctness\, complexity\, and effectiveness. Second\, I briefly sketch why state-of-the-art simulation algorithms do not scale well in the graph query/data setting. I further show the effects of a devised solution that even integrates well with the SPARQL semantics we envisioned in the first part.
DTSTAMP:20200108T113625
SEQUENCE:29973
END:VEVENT
BEGIN:VEVENT
SUMMARY:SCF2 - an Argumentation Semantics for Rational Human Judgments on Argument Acceptability
URL://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
UID://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
DTSTART:20191219T130000
DTEND:20191219T143000
LOCATION:APB 3027
DESCRIPTION:In abstract argumentation theory\, many argumentation semantics have been proposed for evaluating argumentation frameworks. This paper is based on the following research question: Which semantics corresponds well to what humans consider a rational judgment on the acceptability of arguments? There are two systematic ways to approach this research question: A normative perspective is provided by the principle-based approach\, in which semantics are evaluated based on their satisfaction of various normatively desirable principles. A descriptive perspective is provided by the empirical approach\, in which cognitive studies are conducted to determine which semantics best predicts human judgments about arguments. In this paper\, we combine both approaches to motivate a new argumentation semantics called SCF2. For this purpose\, we introduce and motivate two new principles and show that no semantics from the literature satisfies both of them. We define SCF2 and prove that it satisfies both new principles. Furthermore\, we discuss findings of a recent empirical cognitive study that provide additional support to SCF2.
DTSTAMP:20191107T084228
SEQUENCE:29685
END:VEVENT
BEGIN:VEVENT
SUMMARY:Justifying All Differences Using Pseudo-Boolean Reasoning
URL://iccl.inf.tu-dresden.de/web/Justifying_All_Differences_Using_Pseudo-Boolean_Reasoning
UID://iccl.inf.tu-dresden.de/web/Justifying_All_Differences_Using_Pseudo-Boolean_Reasoning
DTSTART:20191217T150000
DTEND:20191217T160000
LOCATION:APB 2028
DESCRIPTION:Constraint programming solvers support rich global constraints and propagators\, which make them both powerful and hard to debug. In the Boolean satisfiability community\, proof logging is the standard solution for generating trustworthy outputs\, and this has become key to the social acceptability of computer-generated proofs. However\, reusing this technology for constraint programming requires either much weaker propagation or an impractical blowup in proof length.\nThis paper demonstrates that simple\, clean\, and efficient proof logging is still possible for the all-different constraint\, through pseudo-Boolean reasoning. We explain how such proofs can be expressed and verified mechanistically\, describe an implementation\, and discuss the broader implications for proof logging in constraint programming.
DTSTAMP:20191211T111954
SEQUENCE:29883
END:VEVENT
BEGIN:VEVENT
SUMMARY:Standpoint logic: a multi-modal logic for reasoning within semantic indeterminacy
URL://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
UID://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
DTSTART:20191212T130000
DTEND:20191212T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Standpoint logic is a multi-modal logic intended for reasoning with different interpretations of semantically heterogeneous terms. The framework offers an alternative to “fuzzy” approaches to the representation of meaning and allows for the specification of “semantic commitments” and “penumbral connections”.\n\n\nIn this talk\, I will introduce the logic and provide an overview of its proof theory and semantics. I will demonstrate its expressivity in an application scenario in the forestry domain\, using data schemas from the Global Forest Watch repository and concepts from the ENVO ontology. I will finally discuss the complexity of the logic and some restrictions that could make implementations viable.
DTSTAMP:20191028T083947
SEQUENCE:29474
END:VEVENT
BEGIN:VEVENT
SUMMARY:What makes a variant of query determinacy (un)decidable?
URL://iccl.inf.tu-dresden.de/web/TBA3
UID://iccl.inf.tu-dresden.de/web/TBA3
DTSTART:20191205T130000
DTEND:20191205T143000
LOCATION:APB 3027
DESCRIPTION:Suppose there is a database we have no direct access to\, but there are views of this database available to us\, defined by some queries Q_1\, Q_2\, ...\, Q_k. And we are given another query Q. Will we be able to compute Q using only the available views?\n\n\n\nThe above question\, call it "the question of determinacy"\, sounds almost philosophical. One can easily imagine a bearded man in himation chained to the wall of a cave\, watching the views projected on the wall and pondering whether\, from what he is able to see\, the reality can be faithfully reconstructed.\n\nFor us\, it is a database theory question though. And a really well-motivated one\, with motivations ranging from query evaluation plan optimization (where we prefer a positive answer) to privacy issues (where the preferred answer is negative).\n\nQuery determinacy is a broad topic\, with literally hundreds of papers published since the late 1980s. This talk is not going to be a "survey" (which would be impossible within a one-hour time frame\, and with this speaker)\, but rather a personal perspective of a person somehow involved in the recent developments in the area.\n\nFirst\, I will explain how\, in the last 30+ years\, the question of determinacy was formalized. There are many parameters here: obviously\, one needs to choose the query language of the queries Q_i and the query language of Q. But -- surprisingly -- there is also some choice regarding what the word "to compute" actually means in this context.\n\nThen I will concentrate on the variants of the decision problem of determinacy (for each choice of parameters there is one such problem -- Q_1\, Q_2\, ...\, Q_k and Q constitute the instance\, and the question is whether Q_1\, Q_2\, ...\, Q_k determine Q) and I will talk about how I understand the mechanisms rendering different variants of determinacy decidable or undecidable. This will be on a slightly informal level. 
No new theorems will be presented\, but I think I will be able to show simplified proofs of some of the earlier results.\n\nThis is a preview of the [https://diku-dk.github.io/edbticdt2020/?contents=invited_ICDT_talk.html invited talk at ICDT 2020].
DTSTAMP:20191202T153202
SEQUENCE:29823
END:VEVENT
BEGIN:VEVENT
SUMMARY:TE-ETH: Lower Bounds for QBFs of Bounded Treewidth
URL://iccl.inf.tu-dresden.de/web/TBA4
UID://iccl.inf.tu-dresden.de/web/TBA4
DTSTART:20191128T130000
DTEND:20191128T143000
LOCATION:APB 3027
DESCRIPTION:The problem of deciding the validity (QSAT) of quantified Boolean formulas (QBF) is an active research area in both theory and practice. In the field of parameterized algorithmics\, the well-studied graph measure treewidth turned out to be a successful parameter. A well-known result by Chen in parameterized complexity is that QSAT\, when parameterized by the treewidth of the primal graph of the input formula together with the quantifier depth of the formula\, is fixed-parameter tractable. More precisely\, the runtime of such an algorithm is polynomial in the formula size and exponential in the treewidth\, where the exponential function in the treewidth is a tower whose height is the quantifier depth.\n\n\nA natural question is whether one can significantly improve these results and decrease the tower while assuming the Exponential Time Hypothesis (ETH). In the last years\, there has been a growing interest in the quest of establishing lower bounds under ETH\, showing mostly problem-specific lower bounds up to the third level of the polynomial hierarchy. Still\, an important question is to settle this as generally as possible and to cover the whole polynomial hierarchy.\n\nIn this work\, we show lower bounds based on the ETH for arbitrary QBFs parameterized by treewidth (and quantifier depth). More formally\, we establish lower bounds for QSAT and treewidth\, namely\, that under ETH there cannot be an algorithm that solves QSAT of quantifier depth i in runtime significantly better than i-fold exponential in the treewidth and polynomial in the input size. In doing so\, we provide a versatile reduction technique to compress treewidth that encodes the essence of dynamic programming on arbitrary tree decompositions. Further\, we describe a general methodology for a more fine-grained analysis of problems parameterized by treewidth that are at higher levels of the polynomial hierarchy.\n\n'''Authors:''' Johannes Klaus Fichte\, Markus Hecher\, Andreas Pfandler
DTSTAMP:20191128T115745
SEQUENCE:29795
END:VEVENT
BEGIN:VEVENT
SUMMARY:Efficiently Solving Unbounded Integer Programs in the context of SMT Solvers
URL://iccl.inf.tu-dresden.de/web/Efficiently_Solving_Unbounded_Integer_Programs_in_the_context_of_SMT_Solvers
UID://iccl.inf.tu-dresden.de/web/Efficiently_Solving_Unbounded_Integer_Programs_in_the_context_of_SMT_Solvers
DTSTART:20191122T100000
DTEND:20191122T113000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:'''\nSatisfiability modulo theories (SMT) solvers are automated theorem provers for logical formulas that range over combinations of various first-order theories. These theories typically correspond to domains found in programming languages\, e.g.\, the theories of bit vectors\, integers\, and arrays. This is intentional because SMT solvers were initially developed as back-end reasoning tools for automated software verification. These days\, SMT solvers are also used as back-end reasoning tools for various other applications\, e.g.\, verification of hybrid systems\, program synthesis\, and as brute-force tactics in various interactive theorem provers.\n\nIn this talk\, I will present two new techniques for the theory of linear integer arithmetic in the context of SMT solvers:\n\n1) The unit cube test [2\,3]\, a sound (although incomplete) test that finds solutions for integer programs (i.e.\, systems of linear inequalities) in polynomial time. The test is especially efficient on absolutely unbounded integer programs\, which are difficult to handle for many other decision procedures.\n\n2) A bounding transformation [1] that reduces any integer program in polynomial time to an equisatisfiable integer program that is bounded. The transformation is beneficial because it turns branch and bound into a complete and efficient decision procedure for integer programs.\n\n'''References:'''\n\n[1] A Reduction from Unbounded Linear Mixed Arithmetic Problems into Bounded Problems\, Martin Bromberger. IJCAR 2018\, volume 10900 of LNCS\, pages 329–345. Springer\, 2018.\n\n[2] New Techniques for Linear Arithmetic: Cubes and Equalities\, Martin Bromberger and Christoph Weidenbach. In FMSD volume 51(3)\, pages 433–461. Springer\, 2017.\n\n[3] Fast Cube Tests for LIA Constraint Solving\, Martin Bromberger and Christoph Weidenbach. In IJCAR 2016\, volume 9706 of LNCS\, pages 116–132. Springer\, 2016.
DTSTAMP:20191121T135506
SEQUENCE:29758
END:VEVENT
BEGIN:VEVENT
SUMMARY:A diamond in the rough: Theorizing column stores
URL://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
UID://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
DTSTART:20191121T130000
DTEND:20191121T143000
LOCATION:APB 3105
DESCRIPTION:Column stores have been a 'neglected child' relative to traditional\, row-oriented\, relation-focused database management systems: The systems people came up with them\, and the theoreticians did not really give them the time of day. This talk will discuss what happens when we pick up the slack and formalize a model for analytic computation with columns. In addition to sound conceptual grounding being its own aesthetic reward\, we will touch on some examples of how such a formalization enables architectural and performance improvements in real-life systems:\n\n\nSeamless integration of decompression and query execution\; removal of special-case handling of different column features (such as nullability and variable-length elements)\; closure of query execution plans under partial execution\; et cetera. Central to achieving such benefits will be the discussion of what constitutes a column\, how columns are to be represented\, and what they can represent.
DTSTAMP:20191107T103354
SEQUENCE:29693
END:VEVENT
BEGIN:VEVENT
SUMMARY:Can A.I. Provably Explain Itself? A gentle Introduction to Description Logics
URL://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
UID://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
DTSTART:20191114T130000
DTEND:20191114T143000
LOCATION:APB 3027
DESCRIPTION:The emergence of intelligent systems in self-driving cars\, planes\, medical diagnosis\, insurance and financial services\, among others\, has shown that when decisions are taken or suggested by automated systems\, it is essential that an explanation can be provided. The disconnect between how we make decisions and how machines make them\, and the fact that machines are making more and more decisions for us\, has given a new push for transparency in A.I. However\, the inner workings of machine learning algorithms remain difficult to understand\, and the methods of making these models explainable still require expensive human evaluation.\n\n\nOn the other hand\, knowledge representation based on description logics allows for describing the environment\, specifying constraints on the system states and detecting inconsistencies\, as well as integrating information from heterogeneous (possibly incomplete) data sources and reasoning about the knowledge of an application domain. Because of this conceptual difference from machine learning algorithms\, the description logics formalism is much more similar to human reasoning and can be adapted to supply the user with the necessary explanations for a decision made. Additionally\, in order to model dynamic systems\, description logics have extensions that additionally enable temporal and probabilistic reasoning.\n\nIn this talk I will outline the key pillars and basic principles of (onto)logical reasoning as well as its limitations. Finally\, I will say a few words about a joint initiative of TU Dresden and the Saarland Informatics Campus\, which together are developing the concept of perspicuous computing and laying the scientific foundations for computerised systems that are able to clearly explain their functioning.
DTSTAMP:20191030T152752
SEQUENCE:29562
END:VEVENT
BEGIN:VEVENT
SUMMARY:Interface between Logical Analysis of Data and Formal Concept Analysis
URL://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
UID://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
DTSTART:20191024T130000
DTEND:20191024T143000
LOCATION:APB 3027
DESCRIPTION:Logical Analysis of Data and Formal Concept Analysis are separately developed methodologies based on different mathematical foundations. We show that the two methodologies utilize the same basic building blocks\, which enables us to develop an interface between them. We present some preliminary benefits of the interface\; most notably\, efficient algorithms for computing spanned patterns in Logical Analysis of Data using algorithms of Formal Concept Analysis.
DTSTAMP:20191021T094239
SEQUENCE:29357
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Dynamics in Social Environments
URL://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
UID://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
DTSTART:20190926T130000
DTEND:20190926T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:'''\nSocial media platforms\, taken in conjunction\, can be seen as complex networks\; in this context\, understanding how agents react to sentiments expressed by their connections is of great interest. Here\, we show how Network Knowledge Bases help represent the integration of multiple social networks\, and explore how information flow can be handled via belief revision operators for local (agent-specific) knowledge bases. We report on preliminary experiments on Twitter data showing that different agent types react differently to the same information — this is a first step toward developing tools to predict how agents behave as information flows in their social environment.\n\n'''Bio:'''\nMaria Vanina Martinez\, University of Buenos Aires\, Argentina.
DTSTAMP:20190923T154215
SEQUENCE:29160
END:VEVENT
BEGIN:VEVENT
SUMMARY:A.M.B.R.O.S.I.A. - Conferring Immortality on Distributed Applications
URL://iccl.inf.tu-dresden.de/web/A.M.B.R.O.S.I.A._-_Conferring_Immortality_on_Distributed_Applications
UID://iccl.inf.tu-dresden.de/web/A.M.B.R.O.S.I.A._-_Conferring_Immortality_on_Distributed_Applications
DTSTART:20190920T130000
DTEND:20190920T140000
LOCATION:APB 3105
DESCRIPTION:'''Abstract:'''\n\nWhen writing today’s distributed programs\, which frequently span both devices and cloud services\, programmers are faced with complex decisions and coding tasks around coping with failure\, especially when these distributed components are stateful. If their application can be cast as pure data processing\, they benefit from the past 40-50 years of work from the database community\, which has shown how declarative database systems can completely isolate the developer from the possibility of failure in a performant manner. Unfortunately\, while there have been some attempts at bringing similar functionality into the more general distributed programming space\, a compelling general-purpose system must handle non-determinism\, be performant\, support a variety of machine types with varying resiliency goals\, and be language agnostic\, allowing distributed components written in different languages to communicate. This talk describes Ambrosia\, publicly available on GitHub\, the first system to satisfy all these requirements. We coin the term “virtual resiliency”\, analogous to virtual memory\, for the platform feature that allows failure-oblivious code to run in a failure-resilient manner. We also introduce a programming construct\, the “impulse”\, which resiliently handles non-deterministic information originating from outside the resilient component. Of further interest to our community is the effective reapplication of much database performance optimization technology to make Ambrosia more performant than many of today’s non-resilient cloud solutions.\n\n\n'''Bio:'''\n\nOver the last 20 years\, I have worked at Microsoft in a combination of research and product roles. In particular\, I’ve spent about 15 years as a researcher at MSR\, doing fundamental research in streaming\, big data processing\, databases\, and distributed computing.
My style of working is to attack difficult problems\, and through fundamental understanding and insight\, create new artifacts that enable important problems to be solved in vastly better ways. For instance\, my work on streaming data processing enabled people with real time data processing problems to specify their processing logic in new\, powerful ways\, and also resulted in an artifact called Trill\, which was orders of magnitude more performant than anything which preceded it. Within the academic community\, I have published many papers\, some with best paper awards (e.g. Best Paper Award at ICDE 2012)\, and two with test of time awards (e.g. SIGMOD 2011 Test of Time award and ICDT 2018 Test of Time award)\, and have also taken many organizational roles in database conferences. My research has also had significant impact on many Microsoft products\, including SQL Server\, Office\, Windows\, Bing\, and Halo\, as well as leading to the creation of entirely new products like Microsoft StreamInsight\, Azure Stream Analytics\, Trill\, and most recently\, Ambrosia. I spent 5 years building Microsoft StreamInsight\, serving as a founder and architect for the product. Trill has become the de-facto standard for temporal and stream data processing within Microsoft\, and years after creation\, is still the most expressive and performant general purpose stream data processor in the world. I am also an inventor of 30+ patents.
DTSTAMP:20190918T094958
SEQUENCE:29122
END:VEVENT
BEGIN:VEVENT
SUMMARY:Young Scientist's Third International Workshop on Trends in Information Processing (YSIP3)
URL://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
UID://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
DTSTART:20190917T000000
DTEND:20190920T000000
LOCATION:Stavropol and Arkhyz\, Russian Federation
DESCRIPTION:We invite young scientists – typically master's or PhD students – to submit new scientific results – as obtained\, for example\, in their bachelor's thesis\, project work\, master's thesis\, or PhD project – in all areas of Information Processing.
DTSTAMP:20190117T101503
SEQUENCE:27542
END:VEVENT
BEGIN:VEVENT
SUMMARY:Automatic translation of clinical trial eligibility criteria into formal queries
URL://iccl.inf.tu-dresden.de/web/Automatic_translation_of_clinical_trial_eligibility_criteria_into_formal_queries
UID://iccl.inf.tu-dresden.de/web/Automatic_translation_of_clinical_trial_eligibility_criteria_into_formal_queries
DTSTART:20190916T130000
DTEND:20190916T140000
LOCATION:APB 3027
DESCRIPTION:Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal\, logic-based queries. First\, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then\, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec\, Stanford NLP tools\, and the MetaMap Tagger\, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov.
DTSTAMP:20190913T120845
SEQUENCE:29095
END:VEVENT
BEGIN:VEVENT
SUMMARY:A gentle introduction to partition width
URL://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
UID://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
DTSTART:20190912T130000
DTEND:20190912T143000
LOCATION:APB 3027
DESCRIPTION:In this talk we will take an introductory glance at the notion of "partition width"\, first conceived by Achim Blumensath. As partition width is closely related to a notion of decomposition of an arbitrary structure into a tree-like shape\, the so-called "partition refinement"\, we will also look at how both of these notions relate to more established notions of decomposition and width measures (namely tree decompositions\, tree width\, and clique width).
DTSTAMP:20190802T123628
SEQUENCE:28943
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mixing Description Logics in Privacy-Preserving Ontology Publishing
URL://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
UID://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
DTSTART:20190905T130000
DTEND:20190905T140000
LOCATION:APB 2026
DESCRIPTION:In previous work\, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an instance store\, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL . We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy\, and have shown how\, in the context mentioned above\, optimal compliant (safe) generalizations of a given concept can be computed. In the present paper\, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular\, we investigate the situations where the attacker’s knowledge is given by an or an concept. In both cases\, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of \, it turns out to be actually lower (polynomial) for the more expressive DL .\n\n\nJoint work with Franz Baader. This is also a test-talk for a presentation at KI 2019.
DTSTAMP:20190902T110018
SEQUENCE:29045
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Expressive Power of Description Logics with Cardinality Constraints on Finite and Infinite Sets
URL://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
UID://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
DTSTART:20190829T130000
DTEND:20190829T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' In recent work we have extended the description logic (DL) ALCQ by means of more expressive number restrictions using numerical and set constraints stated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). It has been shown that reasoning in the resulting DL\, called ALCSCC\, is PSpace-complete without a TBox and ExpTime-complete w.r.t. a general TBox. The semantics of ALCSCC is defined in terms of finitely branching interpretations\, that is\, interpretations where every element has only finitely many role successors. This condition was needed since QFBAPA considers only finite sets. In this paper\, we first introduce a variant of ALCSCC\, called ALCSCC∞\, in which we lift this requirement (inexpressible in first-order logic) and show that the complexity results for ALCSCC mentioned above are preserved. Nevertheless\, like ALCSCC\, ALCSCC∞ is not a fragment of first-order logic. The main contribution of this paper is to give a characterization of the first-order fragment of ALCSCC∞. The most important tool used in the proof of this result is a notion of bisimulation that characterizes this fragment.\n\n\nJoint work with Franz Baader.\nThis talk is a rehearsal for a presentation at FroCoS 2019.\nDuration: 25 minutes without questions.
DTSTAMP:20190820T085458
SEQUENCE:28999
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning
URL://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
DTSTART:20190801T130000
DTEND:20190801T140000
LOCATION:APB 3027
DESCRIPTION:Abstract: We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a presentation (15 minutes including questions) at IJCAI 2019.
DTSTAMP:20190731T095825
SEQUENCE:28893
END:VEVENT
BEGIN:VEVENT
SUMMARY:Worst-Case Optimal Querying of Very Expressive Description Logics with Path Expressions and Succinct Counting
URL://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
UID://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
DTSTART:20190730T130000
DTEND:20190730T133000
LOCATION:APB 3027
DESCRIPTION:'''Abstract.''' Among the most expressive knowledge representation formalisms are the description logics of the Z family. For well-behaved fragments of ZOIQ\, entailment of positive two-way regular path queries is well known to be 2EXPTIME-complete under the proviso of unary encoding of numbers in cardinality constraints. We show that this assumption can be dropped without an increase in complexity\, and that EXPTIME-completeness can be achieved when bounding the number of query atoms\, using a novel reduction from query entailment to knowledge base satisfiability. These findings allow us to strengthen other results regarding query entailment and query containment problems in very expressive description logics. Our results also carry over to GC2\, the two-variable guarded fragment of first-order logic with counting quantifiers\, for which hitherto only conjunctive query entailment has been investigated.
DTSTAMP:20190718T124219
SEQUENCE:28841
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending EL++ with Linear Constraints on the Probability of Axioms
URL://iccl.inf.tu-dresden.de/web/Extending_EL%2B%2B_with_Linear_Constraints_on_the_Probability_of_Axioms
UID://iccl.inf.tu-dresden.de/web/Extending_EL%2B%2B_with_Linear_Constraints_on_the_Probability_of_Axioms
DTSTART:20190723T133000
DTEND:20190723T150000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' One of the main reasons to employ a description logic such as EL++ is the fact that it has efficient\, polynomial-time algorithmic properties such as deciding consistency and inferring subsumption. However\, simply by adding negation of concepts to it\, we obtain the expressivity of description logics whose decision procedure is ExpTime-complete. A similar complexity explosion occurs if we add probability assignments on concepts. To lower the resulting complexity\, we instead concentrate on assigning probabilities to axioms (GCIs). We show that the consistency detection problem for such a probabilistic description logic is NP-complete\, and present a linear algebraic deterministic algorithm to solve it\, using the column generation technique. We also examine algorithms for the probabilistic extension problem\, which consists of inferring the minimum and maximum probabilities for a new axiom\, given a consistent probabilistic knowledge base.\n\n\nFuture work aims at finding fragments of probabilistic EL++ which are tractable.
DTSTAMP:20190717T120135
SEQUENCE:28813
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reasoning about disclosure in data integration in the presence of source constraints
URL://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
UID://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
DTSTART:20190718T130000
DTEND:20190718T140000
LOCATION:APB 3027
DESCRIPTION:Joint work with M. Benedikt\, P. Bourhis\, L. Jachiet\n\n\n'''Abstract:''' Data integration systems allow users to access data sitting in multiple sources by means of queries over a global schema\, related to the sources via mappings. Data sources often contain sensitive information\, and thus an analysis is needed to verify that a schema satisfies a privacy policy\, given as a set of queries whose answers should not be accessible to users. Such an analysis should take into account not only knowledge that an attacker may have about the mappings\, but also what they may know about the semantics of the sources. In this talk\, I'll discuss the impact that source constraints can have on disclosure analysis.\n\n'''Speaker bio:''' Michaël Thomazo (Inria\, DI ENS\, ENS\, CNRS\, PSL University)
DTSTAMP:20190712T163358
SEQUENCE:28796
END:VEVENT
END:VCALENDAR