BEGIN:VCALENDAR
PRODID:-//SMW Project//Semantic Result Formats
VERSION:2.0
METHOD:PUBLISH
X-WR-CALNAME:ICCL-Veranstaltungen
X-WR-CALDESC:
BEGIN:VEVENT
SUMMARY:Adventures in Computer Science -- Grades 1 & 2
URL://iccl.inf.tu-dresden.de/web/Adventures_in_Computer_Science_--_Grades_1_%26_2
UID://iccl.inf.tu-dresden.de/web/Adventures_in_Computer_Science_--_Grades_1_%26_2
DTSTART:20230330T110000
DTEND:20230330T120000
LOCATION:APB room 3027
DESCRIPTION:Have you sorted your toys at home by color or by size? Have you\never looked for this one particular pencil? Have you asked yourself why\nthis search took so long\, or how it could be faster? The "World of\nAlgorithms" (German: Die Welt der Algorithmen) provides some useful\ntools for you to achieve your goals as quickly as possible. This includes\nproblems that never crossed your mind before. We are going to look at\ncertain problems and their (algorithmic) solutions. It is our goal to\nlearn about the strategies leading to the solutions. We will also face\nproblems that are so hard that a quick solution is out of reach\, even\nfor a modern computer.\n\nThis is a translation of the advertisement I provided for pupils at the\n39th Primary School here in Dresden for a so-called "Ganztagsangebot"\n(GTA for short)\, which is basically a 1 hour/week course\ninstitutionalized as an afternoon activity. My target group has been\nchildren at the level of 1st and 2nd grade (ages 6 to 8). In this talk\,\nI will briefly explain what I planned to do\, what I did\, and what I am\nplanning to do in this GTA. I will mainly elaborate on the experiences I\ngathered in teaching 6-8 year-olds over the course of this school year.\n\nThe talk will take place in a hybrid fashion\, physically in the APB room 3027\, and online through the link:\nhttps://bbb.tu-dresden.de/b/pio-zwt-smp-aus
DTSTAMP:20230327T211414
SEQUENCE:37949
END:VEVENT
BEGIN:VEVENT
SUMMARY:Static Program Analysis in Datalog
URL://iccl.inf.tu-dresden.de/web/Static_Program_Analysis_in_Datalog
UID://iccl.inf.tu-dresden.de/web/Static_Program_Analysis_in_Datalog
DTSTART:20230323T110000
DTEND:20230323T120000
LOCATION:APB room 3027
DESCRIPTION:Abstract: Doop is a framework for Program Analysis of Java code. It utilizes the Datalog engine Souffle to run high-speed analyses on an Intermediate Code Representation. Existing analyses mainly deal with larger-scope Pointer Analysis\, but not with local optimization possibilities. In this talk\, I focus on the implementation of local static analyses like Constant Folding\, Liveness and Available Expressions in Doop. I will give an introduction to the related frameworks\, and point out benefits and limitations of Doop and Souffle regarding each analysis. Furthermore\, I will compare my results to existing tools for static code analysis and give an outlook on whether Doop is suitable for local static analysis.\n\nThe talk will take place in a hybrid fashion\, physically in the APB room 3027\, and online through the link:\nhttps://bbb.tu-dresden.de/b/pio-zwt-smp-aus
DTSTAMP:20230320T112131
SEQUENCE:37925
END:VEVENT
BEGIN:VEVENT
SUMMARY:Relating Description Complexity to Entropy
URL://iccl.inf.tu-dresden.de/web/Relating_Description_Complexity_to_Entropy
UID://iccl.inf.tu-dresden.de/web/Relating_Description_Complexity_to_Entropy
DTSTART:20230316T110000
DTEND:20230316T120000
LOCATION:APB room 3027
DESCRIPTION:We demonstrate some novel links between entropy and description complexity\, a notion referring to the minimal formula length for specifying given properties. Let MLU be the logic obtained by extending propositional logic with the universal modality\, and let GMLU be the corresponding extension with the ability to count. In the finite\, MLU is expressively complete for specifying sets of variable assignments\, while GMLU is expressively complete for multisets. We show that for MLU\, the model classes with maximal Boltzmann entropy are the ones with maximal description complexity. Concerning GMLU\, we show that expected Boltzmann entropy is asymptotically equivalent to expected description complexity multiplied by the number of proposition symbols considered. To contrast these results\, we prove that this link breaks when we move to considering first-order logic FO over vocabularies with higher-arity relations. To establish the aforementioned result\, we show that almost all finite models require relatively large FO-formulas to define them. Our results relate to links between Kolmogorov complexity and entropy\, demonstrating a way to conceive such results in the logic-based scenario where relational structures are classified by formulas of different sizes.\n\nThe talk will take place in a hybrid fashion\, physically in the APB room 3027\, and online through the link:\nhttps://bbb.tu-dresden.de/b/pio-zwt-smp-aus
DTSTAMP:20230313T182807
SEQUENCE:37902
END:VEVENT
BEGIN:VEVENT
SUMMARY:From Provenance Polynomials to Provenance Patterns: More Mileage for Mere Mortals
URL://iccl.inf.tu-dresden.de/web/From_Provenance_Polynomials_to_Provenance_Patterns:_More_Mileage_for_Mere_Mortals
UID://iccl.inf.tu-dresden.de/web/From_Provenance_Polynomials_to_Provenance_Patterns:_More_Mileage_for_Mere_Mortals
DTSTART:20230309T110000
DTEND:20230309T120000
LOCATION:APB room 3027
DESCRIPTION:Abstract: \nIt is well known that the commutative semiring of integer polynomials N[X] provides an elegant unifying framework (a.k.a. how-provenance) for a number of earlier provenance approaches in databases. A provenance-annotated answer A′ to a query Q reveals how outputs depend on inputs from the annotated database D′ and thus it can be used to answer strictly more questions than the original answer A = Q(D) without provenance. We propose to use provenance patterns\, a related notion implicit in earlier work\, to support answering additional questions\, not answerable by A′ alone. We show that the pattern-enhanced answer A* can be easily obtained as a by-product of computing A or A′. Using a detailed running example\, we illustrate questions that can and cannot be answered using A\, A′\, and A*\, respectively. Time allowing\, I’d also like to touch upon related research in answering why-not provenance questions. \n\nAbout the speaker:\nBertram Ludäscher is a professor at the School of Information Sciences at the University of Illinois\, Urbana-Champaign\, where he is also a faculty affiliate with the National Center for Supercomputing Applications (NCSA) and the CS department. Prior to joining UIUC\, he was a professor at the CS department and the UC Davis Genome Center at the University of California\, Davis. His research interests range from practical questions in scientific data management and workflow automation\, to database theory\, and knowledge representation & reasoning. Before his faculty appointments\, he was a research scientist at the San Diego Supercomputer Center (SDSC) and an adjunct faculty at the CSE department at UCSD. He holds an M.S. (Dipl.-Inform.) in CS from the University of Karlsruhe (K.I.T.) and a PhD (Dr. rer. nat.) from the University of Freiburg.\n\nThe talk will take place in a hybrid fashion\, physically in the APB room 3027\, and online through the link:\nhttps://bbb.tu-dresden.de/b/pio-zwt-smp-aus
DTSTAMP:20230303T202039
SEQUENCE:37892
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tunas - Fishing for Diverse Answer Sets: A Multi-Shot Trade up Strategy
URL://iccl.inf.tu-dresden.de/web/Tunas_-_Fishing_for_Diverse_Answer_Sets:_A_Multi-Shot_Trade_up_Strategy
UID://iccl.inf.tu-dresden.de/web/Tunas_-_Fishing_for_Diverse_Answer_Sets:_A_Multi-Shot_Trade_up_Strategy
DTSTART:20230302T110000
DTEND:20230302T120000
LOCATION:APB room 2026
DESCRIPTION:Abstract: Answer set programming (ASP) solvers have advanced in recent years\, with a variety of specialisations and overall development. Thus\, ever more complex and detailed programs can be solved. A side effect of this development is growing solution spaces and the problem of how to find those answer sets one is interested in. One general approach is to give an overview in the form of a small number of highly diverse answer sets. By choosing a favourite and repeating the process\, the user is able to leap through the solution space. But finding highly diverse answer sets is computationally expensive. In this paper we introduce a new approach called Tunas\, for Trade Up Navigation for Answer Sets\, to find diverse answer sets by reworking existing solution collections. The core idea is to collect diverse answer sets one after another by iteratively solving and updating the program. Once no more answer sets can be added to the collection\, the program is allowed to trade answer sets from the collection for different answer sets\, as long as the collection grows and stays diverse. Elaboration of the approach is possible in three variations\, which we implemented and compared to established methods in an empirical evaluation. The evaluation shows that the Tunas approach is competitive with existing methods\, and that the efficiency of the approach is highly connected to the underlying logic program.\n\nThe talk will take place in a hybrid fashion\, physically in the APB room 3027\, and online through the link:\nhttps://bbb.tu-dresden.de/b/pio-zwt-smp-aus
DTSTAMP:20230221T172612
SEQUENCE:37858
END:VEVENT
BEGIN:VEVENT
SUMMARY:Multilinguality in Knowledge Graphs
URL://iccl.inf.tu-dresden.de/web/Multilinguality_in_Knowledge_Graphs
UID://iccl.inf.tu-dresden.de/web/Multilinguality_in_Knowledge_Graphs
DTSTART:20230216T110000
DTEND:20230216T120000
LOCATION:APB 3027
DESCRIPTION:Abstract: In this talk\, I will give an overview of my work in the domain of\nmultilingual knowledge graphs with a focus on Wikidata.\nI will speak about how well knowledge graphs are covered in terms of\nmultilingual information\, how to measure and increase their\nmultilinguality\, and finally about the reuse of the multilingual\ninformation.\nIn particular\, I will introduce a framework to measure knowledge graphs'\nlabels and language information for reuse in applications. Then\, I will\ngive an overview of the topic of machine translation of knowledge graph\nlabels. Further\, we explore the generation of multilingual text from\ntriples.\nI will tie in my most recent work on language models with regard to\nmultilingual and cross-cultural understanding.\n\nThe talk will take place in a hybrid fashion\, physically in the APB room 3027\, and online through the link:\nhttps://bbb.tu-dresden.de/b/pio-zwt-smp-aus
DTSTAMP:20230208T210700
SEQUENCE:37838
END:VEVENT
BEGIN:VEVENT
SUMMARY:Strong Equivalence in Non-monotonic Reasoning
URL://iccl.inf.tu-dresden.de/web/Strong_Equivalence_in_Non-monotonic_Reasoning
UID://iccl.inf.tu-dresden.de/web/Strong_Equivalence_in_Non-monotonic_Reasoning
DTSTART:20230119T110000
DTEND:20230119T120000
LOCATION:APB room 3027
DESCRIPTION:Abstract: In classical logic\, when two theories have the same models\, they are\nmutually replaceable in arbitrary contexts. This is not the case in\nvarious non-monotonic formalisms\, e.g.\, logic programs under the stable\nmodel semantics\, or abstract argumentation frameworks. There\, two\ntheories (programs/frameworks) can have the same models (stable\nmodels/extensions)\, yet differ in their semantics when both are\nextended with the same third theory. To obtain mutual replaceability\, a\nstronger notion is needed – this equivalence across all possible\nextensions is known as strong equivalence.\n\nWe address the question of how non-monotonicity is related to this\ndivergence of standard and strong equivalence\, show examples of how\nstrong equivalence for concrete formalisms is analysed in the\nliterature\, and finally preview an abstract\, algebraic approach to\ncharacterising strong equivalence model-theoretically.
DTSTAMP:20230116T210853
SEQUENCE:37756
END:VEVENT
BEGIN:VEVENT
SUMMARY:Efficient Dependency Analysis for Rule-Based Ontologies
URL://iccl.inf.tu-dresden.de/web/Efficient_Dependency_Analysis_for_Rule-Based_Ontologies
UID://iccl.inf.tu-dresden.de/web/Efficient_Dependency_Analysis_for_Rule-Based_Ontologies
DTSTART:20230112T110000
DTEND:20230112T120000
LOCATION:APB room 3027
DESCRIPTION:Several types of dependencies have been proposed for the static analysis of existential rule ontologies\, promising insights about computational properties and possible practical uses of a given set of rules\, e.g.\, in ontology-based query answering. Unfortunately\, these dependencies are rarely implemented\, so their potential is hardly realised in practice. We focus on two kinds of rule dependencies - positive reliances and restraints - and design and implement optimised algorithms for their efficient computation. Experiments on real-world ontologies of up to more than 100\,000 rules show the scalability of our approach\, which lets us realise several previously proposed applications as practical case studies. In particular\, we can analyse to what extent rule-based bottom-up approaches of reasoning can be guaranteed to yield redundancy-free "lean" knowledge graphs (so-called cores) on practical ontologies.
DTSTAMP:20230106T113352
SEQUENCE:37707
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nested Sequents for First-Order Modal Logics via Reachability Rules
URL://iccl.inf.tu-dresden.de/web/Nested_Sequents_for_First-Order_Modal_Logics_via_Reachability_Rules
UID://iccl.inf.tu-dresden.de/web/Nested_Sequents_for_First-Order_Modal_Logics_via_Reachability_Rules
DTSTART:20221215T110000
DTEND:20221215T120000
LOCATION:APB room 3027
DESCRIPTION:In this talk\, we present cut-free nested sequent calculi for first-order modal logics. We consider a diverse class of first-order modal logics obtained by imposing certain Horn conditions on the relational and domain structure of QK models. Labeled sequent calculi are then obtained via standard techniques from the semantics of these logics. We show how to derive nested calculi from labeled via the 'structural refinement' technique\, which consists of eliminating structural rules\, introducing reachability rules\, and translating notation. The reachability rules obtained via this method possess a unique functionality\, consisting of propagating and/or checking if data exists along certain paths within a nested sequent. We demonstrate how such rules operate and touch on the significance of the nested sequent formulation (in the first-order setting) which emerges naturally from the structural refinement methodology.
DTSTAMP:20221211T170759
SEQUENCE:37645
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Relevance of Formal Logics for Cognitive Logics\, and Vice Versa
URL://iccl.inf.tu-dresden.de/web/The_Relevance_of_Formal_Logics_for_Cognitive_Logics,_and_Vice_Versa
UID://iccl.inf.tu-dresden.de/web/The_Relevance_of_Formal_Logics_for_Cognitive_Logics,_and_Vice_Versa
DTSTART:20221214T110000
DTEND:20221214T120000
LOCATION:APB room 2026
DESCRIPTION:Classical logics like propositional or predicate logic have been considered the gold standard for rational human reasoning\, and hence a solid\, desirable norm on which all human knowledge and decision making should ideally be based. For instance\, Boolean logic was set up as a kind of arithmetic framework that should help make rational reasoning computable in an objective way\, similar to the arithmetic of numbers. Computer scientists adopted this view to (literally) implement objective knowledge and rational deduction\, in particular for AI applications. Psychologists have used classical logics as norms to assess the rationality of human commonsense reasoning. However\, both disciplines could not ignore the severe limitations of classical logics\, e.g.\, computational complexity and undecidability\, failures of logic-based AI systems in practice\, and lots of psychological paradoxes. Many of these problems are caused by the inability of classical logics to deal with uncertainty in an adequate way. Both disciplines have used probabilities as a way out of this dilemma\, hoping that numbers and the Kolmogorov axioms can do the job (somehow). However\, psychologists have also observed lots of paradoxes here (maybe even more).\n\nSo then\, are humans hopelessly irrational? Is human reasoning incompatible with formal\, axiomatic logics? In the end\, should computer-based knowledge and information processing be considered superior in terms of objectivity and rationality?\n\nCognitive logics aim at overcoming the limitations of classical logics and resolving the observed paradoxes by proposing logic-based approaches that can model human reasoning consistently and coherently in benchmark examples. The basic idea is to reverse the normative way of assessing human reasoning in terms of logics and probabilities\, and to use typical human reasoning patterns as norms for assessing the cognitive quality of logics. 
Cognitive logics explore the broad field of logic-based approaches between the extreme points marked by classical logics and probability theory\, with the goal of finding more suitable logics for AI applications\, on the one hand\, and gaining more insights into the rational structures of human reasoning\, on the other.
DTSTAMP:20221211T104845
SEQUENCE:37643
END:VEVENT
BEGIN:VEVENT
SUMMARY:IASCAR: Incremental Answer Set Counting by Anytime Refinement
URL://iccl.inf.tu-dresden.de/web/IASCAR:_Incremental_Answer_Set_Counting_by_Anytime_Refinement
UID://iccl.inf.tu-dresden.de/web/IASCAR:_Incremental_Answer_Set_Counting_by_Anytime_Refinement
DTSTART:20221124T110000
DTEND:20221124T120000
LOCATION:APB room 3027
DESCRIPTION:Answer set programming (ASP) is a popular declarative programming paradigm with various applications. Programs can easily have so many answer sets that they cannot be enumerated in practice\, but counting still allows one to quantify solution spaces. If one counts under assumptions on literals\, one obtains a tool to comprehend parts of the solution space\, so-called answer set navigation. But navigating through parts of the solution space requires counting many times\, which is expensive in theory. Here\, knowledge compilation helps by compiling instances into representations on which counting works in polynomial time. However\, these techniques exist only for CNF formulas\, and compiling ASP programs into CNF formulas can introduce an exponential overhead. In this paper\, we introduce a technique to iteratively count answer sets under assumptions on knowledge compilations of CNFs that encode supported models. Our anytime technique uses the principle of inclusion-exclusion to systematically improve bounds by over- and undercounting. In a preliminary empirical analysis\, we demonstrate promising results. After compiling the input (offline phase)\, our approach quickly (re)counts.
DTSTAMP:20221119T115255
SEQUENCE:37564
END:VEVENT
BEGIN:VEVENT
SUMMARY:Normalisations of Existential Rules: Not so Innocuous!
URL://iccl.inf.tu-dresden.de/web/Normalisations_of_Existential_Rules:_Not_so_Innocuous!
UID://iccl.inf.tu-dresden.de/web/Normalisations_of_Existential_Rules:_Not_so_Innocuous!
DTSTART:20221117T110000
DTEND:20221117T120000
LOCATION:APB room 3027
DESCRIPTION:Existential rules are an expressive knowledge representation language mainly developed to query data. In the literature\, they are often supposed to be in some normal form that simplifies technical developments. For instance\, a common assumption is that rule heads are atomic\, i.e.\, restricted to a single atom. Such assumptions are considered to be made without loss of generality as long as all sets of rules can be normalised while preserving entailment. However\, an important question is whether the properties that ensure the decidability of reasoning are preserved as well. We provide a systematic study of the impact of these procedures on the different chase variants with respect to chase (non-)termination and FO-rewritability. This also leads us to study open problems related to chase termination of independent interest.
DTSTAMP:20221114T103916
SEQUENCE:37538
END:VEVENT
BEGIN:VEVENT
SUMMARY:General Acyclicity and Cyclicity Notions for the Disjunctive Skolem Chase
URL://iccl.inf.tu-dresden.de/web/General_Acyclicity_and_Cyclicity_Notions_for_the_Disjunctive_Skolem_Chase
UID://iccl.inf.tu-dresden.de/web/General_Acyclicity_and_Cyclicity_Notions_for_the_Disjunctive_Skolem_Chase
DTSTART:20221110T110000
DTEND:20221110T120000
LOCATION:APB room 3027
DESCRIPTION:The disjunctive Skolem chase is a sound\, complete\, and potentially non-terminating procedure for solving Boolean conjunctive query entailment over knowledge bases of disjunctive existential rules. We develop novel acyclicity and cyclicity notions for this procedure\; that is\, we develop sufficient conditions to determine chase termination and non-termination. Our empirical evaluation shows that our novel notions are significantly more general than existing criteria.
DTSTAMP:20221107T205147
SEQUENCE:37513
END:VEVENT
BEGIN:VEVENT
SUMMARY:Finite-Cliquewidth Sets of Existential Rules: Toward a General Criterion for Decidable yet Highly Expressive Querying
URL://iccl.inf.tu-dresden.de/web/Finite-Cliquewidth_Sets_of_Existential_Rules:_Toward_a_General_Criterion_for_Decidable_yet_Highly_Expressive_Querying
UID://iccl.inf.tu-dresden.de/web/Finite-Cliquewidth_Sets_of_Existential_Rules:_Toward_a_General_Criterion_for_Decidable_yet_Highly_Expressive_Querying
DTSTART:20221103T110000
DTEND:20221103T120000
LOCATION:APB room 3027
DESCRIPTION:In our pursuit of generic criteria for decidable ontology-based querying\, we introduce 'finite-cliquewidth sets' (FCS) of existential rules\, a model-theoretically defined class of rule sets\, inspired by the cliquewidth measure from graph theory. By a generic argument\, we show that FCS ensures decidability of entailment for a sizable class of queries (dubbed 'DaMSOQs') subsuming conjunctive queries (CQs). The FCS class properly generalizes the class of finite-expansion sets (FES)\, and for signatures of arity at most 2\, the class of bounded-treewidth sets (BTS). For higher arities\, BTS is only indirectly subsumed by FCS by means of reification. Despite the generality of FCS\, we provide a rule set with decidable CQ entailment (by virtue of first-order-rewritability) that falls outside FCS\, thus demonstrating the incomparability of FCS and the class of finite-unification sets (FUS). In spite of this\, we show that if we restrict ourselves to single-headed rule sets over signatures of arity at most 2\, then FCS subsumes FUS.
DTSTAMP:20221031T140937
SEQUENCE:37469
END:VEVENT
BEGIN:VEVENT
SUMMARY:How to Agree to Disagree: Managing Ontological Perspectives using Standpoint Logic
URL://iccl.inf.tu-dresden.de/web/How_to_Agree_to_Disagree:_Managing_Ontological_Perspectives_using_Standpoint_Logic
UID://iccl.inf.tu-dresden.de/web/How_to_Agree_to_Disagree:_Managing_Ontological_Perspectives_using_Standpoint_Logic
DTSTART:20221020T110000
DTEND:20221020T120000
LOCATION:APB room 3027
DESCRIPTION:The importance of taking individual\, potentially conflicting perspectives into account when dealing with knowledge has been widely recognised. Many existing ontology management approaches fully merge knowledge perspectives\, which may require weakening in order to maintain consistency\; others represent the distinct views in an entirely detached way. As an alternative\, we propose Standpoint Logic\, a simple\, yet versatile multi-modal logic "add-on" for existing KR languages intended for the integrated representation of domain knowledge relative to diverse\, possibly conflicting standpoints\, which can be hierarchically organised\, combined\, and put in relation with each other.\nStarting from the generic framework of First-Order Standpoint Logic (FOSL)\, we subsequently focus our attention on the fragment of sentential formulas\, for which we provide a polytime translation into the standpoint-free version. This result yields decidability and favourable complexities for a variety of highly expressive decidable fragments of first-order logic. Using some encoding tricks\, we then establish a similar translation for the very expressive description logic SROIQb_s underlying the OWL 2 DL ontology language. By virtue of this result\, existing highly optimised OWL reasoners can be used to provide practical reasoning support for ontology languages extended by standpoint modelling.
DTSTAMP:20221017T094452
SEQUENCE:37385
END:VEVENT
BEGIN:VEVENT
SUMMARY:Discovering Fine-Grained Semantics in Knowledge Graph Relations
URL://iccl.inf.tu-dresden.de/web/Discovering_Fine-Grained_Semantics_in_Knowledge_Graph_Relations
UID://iccl.inf.tu-dresden.de/web/Discovering_Fine-Grained_Semantics_in_Knowledge_Graph_Relations
DTSTART:20221006T110000
DTEND:20221006T120000
LOCATION:APB room 3027
DESCRIPTION:Knowledge graphs (KGs) provide structured representation of data in the\nform of relations between different entities. The semantics of relations\nbetween words and entities are often ambiguous\, where it is common to\nfind polysemous relations that represent multiple semantics based on the\ncontext. This ambiguity in relation semantics also proliferates KG\ntriples. While the guidance from custom-designed ontologies addresses\nthis issue to some extent\, our analysis shows that the heterogeneity and\ncomplexity of real-world data still results in substantial relation\npolysemy within popular KGs. The correct semantic interpretation of KG\nrelations is necessary for many downstream applications such as entity\nclassification and question answering.\n\nWe present the problem of fine-grained relation discovery and a\ndata-driven method towards this task that leverages the vector\nrepresentations of the knowledge graph entities and relations available\nfrom relational learning models.\n\nWe show that by performing clustering over these vectors\, our method is\nable to not only identify the polysemous relations in knowledge graphs\,\nbut also discover the different semantics associated with them.\nExtensive empirical evaluation shows that fine-grained relations\ndiscovered by the proposed approach lead to substantial improvement in\nthe semantics in the Yago and NELL datasets\, as compared to baselines.\n\nThe talk will take place in a hybrid fashion\, physically in the seminar room\, and online through the link:\nhttps://bbb.tu-dresden.de/b/pio-zwt-smp-aus
DTSTAMP:20220930T084721
SEQUENCE:37209
END:VEVENT
BEGIN:VEVENT
SUMMARY:Computability and Complexity of (Iterated) Belief Revision
URL://iccl.inf.tu-dresden.de/web/Computability_and_Complexity_of_(Iterated)_Belief_Revision
UID://iccl.inf.tu-dresden.de/web/Computability_and_Complexity_of_(Iterated)_Belief_Revision
DTSTART:20221005T140000
DTEND:20221005T150000
LOCATION:APB 3027
DESCRIPTION:Belief revision denotes the process of the incorporation of new information while maintaining consistency whenever possible. In this talk\, at first\, we consider some known complexity results for belief revision. Second\, we consider the recent insight that the typical framework for iterated belief revision is computationally universal.
DTSTAMP:20221004T112955
SEQUENCE:37237
END:VEVENT
BEGIN:VEVENT
SUMMARY:An Existential Rule Framework for Computing Why-Provenance On-Demand for Datalog
URL://iccl.inf.tu-dresden.de/web/An_Existential_Rule_Framework_for_Computing_Why-Provenance_On-Demand_for_Datalog
UID://iccl.inf.tu-dresden.de/web/An_Existential_Rule_Framework_for_Computing_Why-Provenance_On-Demand_for_Datalog
DTSTART:20220922T110000
DTEND:20220922T120000
LOCATION:APB room 3027
DESCRIPTION:Why-provenance\, explaining why a query result is obtained\, is an essential asset for reaching the goal of Explainable AI. For instance\, recursive (Datalog) queries may show unexpected derivations due to complex entanglement of database atoms inside recursive rule applications. Provenance\, and why-provenance in particular\, helps debugging rule sets to eventually obtain the desired set of rules. There are three kinds of approaches to computing why-provenance for Datalog in the literature: (1) the complete ones\, (2) the approximate ones\, and (3) the theoretical ones. What all these approaches have in common is that they aim at computing provenance for all IDB atoms\, while only a few atoms might be requested to be explained. We contribute an on-demand approach: After deriving all entailed facts of a Datalog program\, we allow for querying for the provenance of particular IDB atoms\, and the structures involved in deriving provenance are computed only then. Our framework is based on terminating existential rules\, recording the different rule applications. We present two implementations of the framework\, one based on the semiring solver FPsolve\, the other one based on Datalog(S)\, a recent extension of Datalog by set terms. We perform experiments on benchmark rule sets using both implementations and discuss the feasibility of provenance on-demand.\n\nThe talk is based on the upcoming conference paper in RuleML+RR 2022 with Markus Krötzsch and Stephan Mennicke\; the paper can be found through the following link: https://iccl.inf.tu-dresden.de/web/Inproceedings3342\n\nThe talk will take place in a hybrid fashion\, physically in the seminar room\, and online through the link:\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220909T131327
SEQUENCE:37062
END:VEVENT
BEGIN:VEVENT
SUMMARY:Finite Base Contraction and Expansion via Models - The ALC case
URL://iccl.inf.tu-dresden.de/web/Finite_Base_Contraction_and_Expansion_via_Models_-_The_ALC_case
UID://iccl.inf.tu-dresden.de/web/Finite_Base_Contraction_and_Expansion_via_Models_-_The_ALC_case
DTSTART:20220908T110000
DTEND:20220908T120000
LOCATION:APB room 3027
DESCRIPTION:We propose a new paradigm for Belief Change in which the new information is represented as sets of models\, while the agent’s body of knowledge is represented as a finite set of formulae\, that is\, a finite base. The focus on finiteness is crucial\nwhen we consider limited agents and reasoning algorithms. Moreover\, having the input as an arbitrary set of models is more general than the usual treatment of formulae as input. In this setting\, we define new Belief Change operations akin to traditional expansion and contraction\, and we identify the rationality postulates that emerge due to the finite representability requirement. We also analyse different logics concerning compatibility with our framework. In this talk\, I will focus on the ALC description logic.\n\nThe talk will take place in a hybrid fashion\, physically in the seminar room\, and online through the link:\n\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220906T094832
SEQUENCE:37037
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tunas - Fishing for Diverse Answer Sets: A Multi-shot Trade up Strategy
URL://iccl.inf.tu-dresden.de/web/Tunas_-_Fishing_for_Diverse_Answer_Sets:_A_Multi-shot_Trade_up_Strateg
UID://iccl.inf.tu-dresden.de/web/Tunas_-_Fishing_for_Diverse_Answer_Sets:_A_Multi-shot_Trade_up_Strateg
DTSTART:20220901T113000
DTEND:20220901T120000
LOCATION:APB room 3027
DESCRIPTION:Answer set programming (ASP) solvers have advanced in recent years\, with a variety of specialisations and overall development. Thus\, ever more complex and detailed programs can be solved. A side effect of this development is growing solution spaces and the problem of how to find those answer sets one is interested in. One general approach is to give an overview in the form of a small number of highly diverse answer sets. By choosing a favourite and repeating the process\, the user is able to leap through the solution space. But finding highly diverse answer sets is computationally expensive. In this paper we introduce a new approach called Tunas\, for Trade Up Navigation for Answer Sets\, to find diverse answer sets by reworking existing solution collections. The core idea is to collect diverse answer sets one after another by iteratively solving and updating the program. Once no more answer sets can be added to the collection\, the program is allowed to trade answer sets from the collection for different answer sets\, as long as the collection grows and stays diverse. Elaboration of the approach is possible in three variations\, which we implemented and compared to established methods in an empirical evaluation. The evaluation shows that the Tunas approach is competitive with existing methods\, and that the efficiency of the approach is highly connected to the underlying logic program.\n\nThe talk will take place in a hybrid fashion\, physically in the seminar room\, and online through the link:\n\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220829T142934
SEQUENCE:36979
END:VEVENT
BEGIN:VEVENT
SUMMARY:Representing Abstract Dialectical Frameworks with Binary Decision Diagrams
URL://iccl.inf.tu-dresden.de/web/Representing_Abstract_Dialectical_Frameworks_with_Binary_Decision_Diagrams
UID://iccl.inf.tu-dresden.de/web/Representing_Abstract_Dialectical_Frameworks_with_Binary_Decision_Diagrams
DTSTART:20220901T110000
DTEND:20220901T113000
LOCATION:APB room 3027
DESCRIPTION:Abstract dialectical frameworks (ADFs) are a well-studied generalisation of the prominent argumentation frameworks due to Phan Minh Dung. In this paper we propose to use reduced ordered binary decision diagrams (roBDDs) as a suitable representation of the acceptance conditions of arguments within ADFs. We first show that computational complexity of reasoning on ADFs represented by roBDDs is milder than in the general case\, with a drop of one level in the polynomial hierarchy. Furthermore\, we present a framework to systematically define heuristics for search space exploitation\, based on easily retrievable properties of roBDDs and the recently proposed approach of weighted faceted navigation for answer set programming. Finally\, we present preliminary experiments of an implementation of our approach showing promise both when compared to state-of-the-art solvers and when developing heuristics for reasoning.\n\nThe talk will take place in a hybrid fashion\, physically in the seminar room\, and online through the link:\n\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220829T142659
SEQUENCE:36977
END:VEVENT
BEGIN:VEVENT
SUMMARY:The More the Worst-Case-Merrier: A Generalized Condorcet Jury Theorem for Belief Fusion
URL://iccl.inf.tu-dresden.de/web/The_More_the_Worst-Case-Merrier:_A_Generalized_Condorcet_Jury_Theorem_for_Belief_Fusion
UID://iccl.inf.tu-dresden.de/web/The_More_the_Worst-Case-Merrier:_A_Generalized_Condorcet_Jury_Theorem_for_Belief_Fusion
DTSTART:20220728T110000
DTEND:20220728T120000
LOCATION:APB room 3027
DESCRIPTION:In multi-agent belief fusion\, there is increasing interest in results and methods from social choice theory. As a theoretical cornerstone\, the Condorcet Jury Theorem (CJT) states that given a number of equally competent\, independent agents\, each of which is more likely than not to guess the true one of two alternatives\, the chances of determining this objective truth by majority voting increase with the number of participating agents\, approaching certainty. Past generalizations of the CJT have shown that some of its underlying assumptions can be weakened. Motivated by requirements from practical belief fusion scenarios\, we provide a significant further generalization that subsumes several of the previous ones. Our considered setting simultaneously allows for heterogeneous competence levels across the agents (even tolerating entirely incompetent or even malicious voters)\, and voting for any number of alternatives from a finite set. We derive practical lower bounds for the numbers of agents needed to give probabilistic guarantees for determining the true state through approval voting. We also demonstrate that the non-asymptotic part of the CJT fails in our setting for arbitrarily high numbers of voters.\n\nThe talk will take place in a hybrid fashion\, physically in the seminar room\, and online through the link:\n\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220726T080754
SEQUENCE:36870
END:VEVENT
BEGIN:VEVENT
SUMMARY:KLM-Style Defeasible Reasoning
URL://iccl.inf.tu-dresden.de/web/KLM-Style_Defeasible_Reasoning
UID://iccl.inf.tu-dresden.de/web/KLM-Style_Defeasible_Reasoning
DTSTART:20220721T110000
DTEND:20220721T120000
LOCATION:APB room 3027
DESCRIPTION:This talk will give an overview of KLM-style defeasible reasoning\, a form of non-monotonic reasoning initially introduced by Kraus\, Lehmann\, and Magidor. The overview will start with the propositional roots of defeasible reasoning and its links with the theory of Belief Revision. I will then discuss its application to description logics and other fragments of first-order logic\, and the challenges inherent in lifting the intuition underlying propositional versions of defeasible reasoning to more expressive logics.\n\nThe talk will take place in a hybrid fashion\, physically in our seminar room\, and online through the link:\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220704T081604
SEQUENCE:36809
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Glimpse into Propositional Model Counting
URL://iccl.inf.tu-dresden.de/web/A_Glimpse_into_Propositional_Model_Counting
UID://iccl.inf.tu-dresden.de/web/A_Glimpse_into_Propositional_Model_Counting
DTSTART:20220707T110000
DTEND:20220707T120000
LOCATION:Online
DESCRIPTION:Model counting (#SAT) asks to compute the number of satisfying assignments for a propositional formula. The decision version (SAT) has received widespread interest in computational complexity\, found many applications in modern combinatorial problem solving\, and can be solved effectively for millions of variables on structured instances. #SAT is much harder than SAT and requires more elaborate solving techniques. In this talk\, we revisit the problem and its complexity\, and explain its connection to quantitative AI. We give a brief overview of solving techniques and illustrate a parameterized algorithm and implementation to tackle the problem. While purely parameterized approaches from theory often suffer practical limitations\, we elaborate that a parameterized algorithm can be successful when combined with modern hardware that takes advantage of parallelism.
DTSTAMP:20220630T122522
SEQUENCE:36779
END:VEVENT
BEGIN:VEVENT
SUMMARY:Efficient Emulation of Datalog(S) in the Existential Rules Engine VLog
URL://iccl.inf.tu-dresden.de/web/Efficient_Emulation_of_Datalog(S)_in_the_Existential_Rules_Engine_VLog
UID://iccl.inf.tu-dresden.de/web/Efficient_Emulation_of_Datalog(S)_in_the_Existential_Rules_Engine_VLog
DTSTART:20220630T110000
DTEND:20220630T120000
LOCATION:APB room 3027
DESCRIPTION:We propose a translation of the expressive logic Datalog(S)\, a subset of Datalog^CV\, into optimised existential rules. We look into applications of the language\, in particular regarding description logic and graph theory\, as well as into expressivity. We explain the translation from earlier work in detail\, identify issues and propose solutions to increase performance. We explain the need for singularisation\, propose partial congruence as an alternative and define criteria for when this handling is not necessary. Furthermore\, we propose an alternative translation for which the standard chase universally terminates. To measure the impact of each optimisation\, we conduct an empirical evaluation on real-world data.\n\nThe talk is a defence\, with a hybrid attendance.\n\nlink: https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220630T132340
SEQUENCE:36784
END:VEVENT
BEGIN:VEVENT
SUMMARY:Statistics for Knowledge Graph Modelling
URL://iccl.inf.tu-dresden.de/web/Statistics_for_Knowledge_Graph_Modelling
UID://iccl.inf.tu-dresden.de/web/Statistics_for_Knowledge_Graph_Modelling
DTSTART:20220623T110000
DTEND:20220623T120000
LOCATION:APB room 3027
DESCRIPTION:In this talk I will provide a high-level overview of how statistical approaches can be used to model knowledge graphs. The talk is intended for everyone\, regardless of statistics background\, and will cover the basic vocabulary and concepts before moving on to a high-level view of inference and how it can be used to approximate joint distributions. Finally\, I will tie these ideas to my current work.\n\nThe talk will take place in a hybrid fashion\; physical attendance is in our seminar room.\nLink: https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220620T110446
SEQUENCE:36671
END:VEVENT
BEGIN:VEVENT
SUMMARY:Satisfiability Testing -- Recent Developments and Open Problems
URL://iccl.inf.tu-dresden.de/web/Satisfiability_Testing_--_Recent_Developments_and_Open_Problems
UID://iccl.inf.tu-dresden.de/web/Satisfiability_Testing_--_Recent_Developments_and_Open_Problems
DTSTART:20220616T110000
DTEND:20220616T120000
LOCATION:APB room 3027
DESCRIPTION:With the introduction of clause learning in 1996\, satisfiability testing (SAT) solvers have become effective enough to replace other state-of-the-art algorithms in applications. Shortly after\, efficient data structures and heuristics were introduced\, boosting the performance of these solvers further. Recent developments shift SAT solvers towards anytime algorithms\, attempting to achieve a good average solving time across a wide range of input problems. Consequently\, SAT solvers have become the core tool for solving many industrial applications. However\, there are still limitations and open research questions whose resolution could make SAT solving even more effective. This presentation will shed some light on various past and recent developments\, and point out currently known limitations as well as open research questions.\n\nThe talk is hybrid\, with a maximum capacity of 20 people in seminar room APB 3027.\n\nOnline link: https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220630T132015
SEQUENCE:36782
END:VEVENT
BEGIN:VEVENT
SUMMARY:The World Avatar project – a universal digital twin
URL://iccl.inf.tu-dresden.de/web/The_World_Avatar_project_%E2%80%93_a_universal_digital_twin
UID://iccl.inf.tu-dresden.de/web/The_World_Avatar_project_%E2%80%93_a_universal_digital_twin
DTSTART:20220615T100000
DTEND:20220615T110000
LOCATION:APB room 3027
DESCRIPTION:“The World Avatar” (TWA) project aims to create a digital ‘avatar’ of the real world. The digital world is represented using a dynamic knowledge graph that contains concepts and data that describe the world\, and an ecosystem of autonomous computational agents that simulate the behaviour of the world and that update the concepts and data so that the digital world remains current in time. In this manner\, it is proposed that it is possible to create digital twins that can describe the behaviour of any complex system\, and by doing so provide the ability to make data-driven decisions about how to optimise the systems. Examples will be presented from TWA including chemical knowledge creation\, laboratory automation\, building management\, smart city operation\, climate resilience and national energy scenarios.\n\nThe talk is hybrid\, with a maximum capacity of 20 people in seminar room APB 3027.\n\nOnline link: https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220610T112318
SEQUENCE:36602
END:VEVENT
BEGIN:VEVENT
SUMMARY:Blockmodelling Knowledge Graphs
URL://iccl.inf.tu-dresden.de/web/Blockmodelling_Knowledge_Graphs
UID://iccl.inf.tu-dresden.de/web/Blockmodelling_Knowledge_Graphs
DTSTART:20220602T110000
DTEND:20220602T120000
LOCATION:APB room 3027
DESCRIPTION:In this talk\, I provide an introductory overview of knowledge graphs\, stochastic blockmodelling\, and the area where these two domains intersect. Furthermore\, I will highlight some of the work I have contributed to and am currently working on. The aim of the talk is to provide a gentle introduction to my work that can be a starting point for future discussion and collaboration.\n\nThe talk is hybrid\, with a maximum capacity of 20 people in seminar room APB 3027.\nOnline link: https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220530T091347
SEQUENCE:36552
END:VEVENT
BEGIN:VEVENT
SUMMARY:Communication with Automata
URL://iccl.inf.tu-dresden.de/web/Communication_with_Automata
UID://iccl.inf.tu-dresden.de/web/Communication_with_Automata
DTSTART:20220519T110000
DTEND:20220519T120000
LOCATION:Online
DESCRIPTION:In this talk I will give an overview of the basic notions of Petri nets\, in order to derive two results and their proofs. The particular results were selected because (1) I personally like them and (2) they are taught in the course "Models of Concurrent Systems" (apparently also known as "Concurrency Theory") this semester.\n\nThe talk is online.\nLink: https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220516T214636
SEQUENCE:36485
END:VEVENT
BEGIN:VEVENT
SUMMARY:Weighted Automata with Storage
URL://iccl.inf.tu-dresden.de/web/Weighted_Automata_with_Storage
UID://iccl.inf.tu-dresden.de/web/Weighted_Automata_with_Storage
DTSTART:20220512T110000
DTEND:20220512T120000
LOCATION:https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DESCRIPTION:This talk is an excerpt of the PhD defense talk by Luisa Hermann.\n\nIn this thesis\, we investigate weighted tree automata with storage from a theoretical perspective. This model generalises finite state automata in three dimensions: (i) from words to trees\, (ii) by using an arbitrary storage type in addition to a finite-state control\, and (iii) by considering languages in a quantitative setting using a weight structure.
DTSTAMP:20220427T201150
SEQUENCE:36324
END:VEVENT
BEGIN:VEVENT
SUMMARY:Causality meets Configurable Systems
URL://iccl.inf.tu-dresden.de/web/Causality_meets_Configurable_Systems
UID://iccl.inf.tu-dresden.de/web/Causality_meets_Configurable_Systems
DTSTART:20220505T110000
DTEND:20220505T120000
LOCATION:https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DESCRIPTION:Detecting and understanding reasons for defects and inadvertent behavior in computing systems is challenging due to the increasing complexity of such systems. Configurability adds a further layer of complexity\, as the number of system variants can be exponential in the number of configuration options. Determining the reasons for configuration decisions that yield an effect\, e.g.\, ensuring quality of service\, is hence a crucial and difficult task. In this talk\, I will present a notion of causality at the level of configuration options and algorithms to compute such causes from (functional and non-functional) analysis results. The presented notion of causality is based on counterfactual reasoning and inspired by the seminal definition of actual causality by Halpern and Pearl. I will present various methods to explicate such reasons for effects\, e.g.\, based on well-established notions of responsibility and blame. By means of an evaluation on a wide range of configurable software systems\, including community benchmarks and real-world systems\, the feasibility of this approach is demonstrated by identifying root causes\, estimating the effects of configuration options\, and detecting feature interactions.
DTSTAMP:20220427T201132
SEQUENCE:36323
END:VEVENT
BEGIN:VEVENT
SUMMARY:Expressivity of Planning with Horn Description Logic Ontologies
URL://iccl.inf.tu-dresden.de/web/Expressivity_of_Planning_with_Horn_Description_Logic_Ontologies
UID://iccl.inf.tu-dresden.de/web/Expressivity_of_Planning_with_Horn_Description_Logic_Ontologies
DTSTART:20220428T110000
DTEND:20220428T120000
LOCATION:https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DESCRIPTION:State constraints in AI Planning globally restrict the legal environment states. Standard planning languages make closed-domain and closed-world assumptions. Here we address open-world state constraints formalized by planning over a description logic (DL) ontology. Previously\, this combination of DL and planning has been investigated for the light-weight DL DL-Lite. Here we propose a novel compilation scheme into standard PDDL with derived predicates\, which applies to more expressive DLs and is based on the rewritability of DL queries into Datalog with stratified negation. We also provide a new rewritability result for the DL Horn-ALCHOIQ\, which allows us to apply our compilation scheme to quite expressive ontologies. In contrast\, we show that in the slight extension Horn-SROIQ no such compilation is possible unless the weak exponential hierarchy collapses. Finally\, we show that our approach can outperform previous work on existing benchmarks for planning with DL ontologies\, and is feasible on new benchmarks taking advantage of more expressive ontologies.\n\nThis talk is about joint work with Jörg Hoffmann\, Alisa Kovtunova\, [[Markus Krötzsch]]\, Bernhard Nebel\, and Marcel Steinmetz. It has been presented at the [https://aaai.org/Conferences/AAAI-22/ AAAI conference] 2022.\n\nAccess the [https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n BBB Room] for the seminar room on April 28\, 11am.
DTSTAMP:20220427T195720
SEQUENCE:36319
END:VEVENT
BEGIN:VEVENT
SUMMARY:Computing Generalizations of Temporal EL Concepts with Next and Global
URL://iccl.inf.tu-dresden.de/web/Computing_Generalizations_of_Temporal_EL_Concepts_with_Next_and_Global
UID://iccl.inf.tu-dresden.de/web/Computing_Generalizations_of_Temporal_EL_Concepts_with_Next_and_Global
DTSTART:20220414T110000
DTEND:20220414T120000
LOCATION:Online
DESCRIPTION:In ontology-based applications\, the authoring of complex concepts or queries written in a description logic (DL) is a difficult task. An established approach to generate complex expressions from examples provided by a user is the bottom-up approach. This approach employs two inferences: the most specific concept (MSC)\, which generalizes an ABox individual into a concept\, and the least common subsumer (LCS)\, which generalizes a collection of concepts into a single concept. In ontology-based situation recognition\, the situation to be recognized is formalized by a DL query that uses temporal operators and that is to be answered over a sequence of ABoxes. Now\, while the bottom-up approach is well-investigated for the DL EL\, there are so far no methods for temporalized DLs. We consider here the temporalized DL that extends the DL EL with the LTL operators next (X) and global (G)\, and we present an approach that extends the LCS and the MSC to the temporalized setting.
DTSTAMP:20220401T123703
SEQUENCE:35908
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Price of Selfishness: Conjunctive Query Entailment for ALCSelf is 2EXPTIME-hard
URL://iccl.inf.tu-dresden.de/web/The_Price_of_Selfishness:_Conjunctive_Query_Entailment_for_ALCSelf_is_2EXPTIME-hard
UID://iccl.inf.tu-dresden.de/web/The_Price_of_Selfishness:_Conjunctive_Query_Entailment_for_ALCSelf_is_2EXPTIME-hard
DTSTART:20220407T110000
DTEND:20220407T120000
LOCATION:Online
DESCRIPTION:In logic-based knowledge representation\, query answering has essentially replaced mere satisfiability checking as the inferencing problem of primary interest. For knowledge bases in the basic description logic ALC\, the computational complexity of conjunctive query (CQ) answering is well known to be EXPTIME-complete and hence not harder than satisfiability. This does not change when the logic is extended by certain features (such as counting or role hierarchies)\, whereas adding others (inverses\, nominals\, or transitivity together with role hierarchies) makes CQ answering exponentially harder. We contribute to this line of results by showing the surprising fact that even extending ALC by just the Self operator – which proved innocuous in many other contexts – increases the complexity of CQ entailment to 2EXPTIME. As common for this type of problem\, our proof establishes a reduction from alternating Turing machines running in exponential space\, but several novel ideas and encoding tricks are required to make the approach work in that specific\, restricted setting.
DTSTAMP:20220401T123114
SEQUENCE:35906
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Datalog hammer for supervisor verification conditions modulo simple linear arithmetic
URL://iccl.inf.tu-dresden.de/web/A_Datalog_hammer_for_supervisor_verification_conditions_modulo_simple_linear_arithmetic
UID://iccl.inf.tu-dresden.de/web/A_Datalog_hammer_for_supervisor_verification_conditions_modulo_simple_linear_arithmetic
DTSTART:20220331T110000
DTEND:20220331T120000
LOCATION:Online
DESCRIPTION:In a previous paper [1]\, we have shown that clause sets belonging to the Horn Bernays-Schönfinkel fragment over simple linear arithmetic (HBS(SLA)) can be translated into HBS clause sets over a finite set of first-order constants.\nHBS clause sets are also called Datalog programs and HBS(SLA) can be seen as an extension of Datalog programs that allows simple arithmetic inequalities in the bodies of rules. The translation from HBS(SLA) to HBS preserves validity and satisfiability and it is still applicable if we extend our input with positive universally or existentially quantified verification conditions (conjectures). We call this translation a Datalog hammer. The combination of its implementation in SPASS-SPL with the Datalog reasoner VLog establishes an effective way of deciding verification conditions in the Horn fragment. As a result\, we were able to verify supervisor code for two examples: a lane change assistant in a car and an electronic control unit of a supercharged combustion engine.\n\nIn this talk\, we present several new improvements to our Datalog hammer: we have generalized it to mixed real-integer arithmetic and finite first-order sorts\; we extended the class of acceptable inequalities beyond variable bounds and positively grounded inequalities\, and we significantly reduced the size of the hammer output by a soft typing discipline. We call the result the sorted Datalog hammer. It not only allows us to handle more complex supervisor code and to model already considered supervisor code more concisely\, but it also improves our performance on real-world benchmark examples.\n\nReferences:\n[1] Bromberger\, M.\, Dragoste\, I.\, Faqeh\, R.\, Fetzer\, C.\, Krötzsch\, M.\, Weidenbach\, C.: A Datalog hammer for supervisor verification conditions modulo simple linear arithmetic. In: Reger\, G.\, Konev\, B. (eds.) Frontiers of Combining Systems - 13th International Symposium\, FroCoS 2021\, Birmingham\, United Kingdom\, September 8-10\, 2021. Proceedings. Lecture Notes in Computer Science\, vol. 12941\, pp. 3–24. Springer (2021)\n\nOnline talk link:\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220316T135330
SEQUENCE:35816
END:VEVENT
BEGIN:VEVENT
SUMMARY:Agile Project Management - An overview on techniques and the agile mindset
URL://iccl.inf.tu-dresden.de/web/Agile_Project_Management_-_An_overview_on_techniques_and_the_agile_mindset
UID://iccl.inf.tu-dresden.de/web/Agile_Project_Management_-_An_overview_on_techniques_and_the_agile_mindset
DTSTART:20220217T110000
DTEND:20220217T120000
LOCATION:Online
DESCRIPTION:"Being agile" has become a fashionable term for project management and working methods in companies in recent years.\nIt is treated as synonymous with modern structures\, effective organization\, and working patterns.\nA number of concepts such as "design thinking"\, "lean startup"\, or "scrum" are known at least by name.\nIn the academic community\, these ideas are rarely applied and can seem a bit overburdening.\nThe aim of this presentation is to provide an overview of the agile mindset and a general introduction to the methodologies used.\nWe will focus on the aspects that might be interesting for academics\, as not all roles and artifacts can and should be realized in a reasonable way in our current environment.\n\nFirst\, we address the basic question of what agility is and why we need to be agile.\nFollowing this\, we will examine the paradigms and methods used in the agile mindset.\nFinally\, we will look at Scrum\, which is a method for carrying out complex tasks in small agile teams.\nThis is particularly interesting as we typically work in small teams in the academic environment.\n\nThis is a condensed overview of the experience from a two-day skill workshop on agile methods.\n\nThe talk is online\, link:\n\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220211T095206
SEQUENCE:35779
END:VEVENT
BEGIN:VEVENT
SUMMARY:Approximation Fixpoint Theory – A Unifying Framework for Non-monotonic Semantics Part 2
URL://iccl.inf.tu-dresden.de/web/Approximation_Fixpoint_Theory_%E2%80%93_A_Unifying_Framework_for_Non-monotonic_Semantics_Part_2
UID://iccl.inf.tu-dresden.de/web/Approximation_Fixpoint_Theory_%E2%80%93_A_Unifying_Framework_for_Non-monotonic_Semantics_Part_2
DTSTART:20220210T110000
DTEND:20220210T120000
LOCATION:Online
DESCRIPTION:Motivated by structural similarities in the semantics of Reiter's default logic and the stable model semantics of logic programming (among others)\, Denecker\, Marek\, and Truszczyński set out to isolate these similarities in a purely algebraic setting. The result is now known as approximation fixpoint theory\, and allows one to study the semantics of the major non-monotonic knowledge representation formalisms in an abstract\, uniform framework. After briefly recalling some lattice theory\, we will present the main concepts of approximation fixpoint theory\, applying it to the case of logic programming as a running example.\n\nThis is the second part of the talk.\n\nLink: https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220127T133128
SEQUENCE:35728
END:VEVENT
BEGIN:VEVENT
SUMMARY:Approximation Fixpoint Theory – A Unifying Framework for Non-monotonic Semantics
URL://iccl.inf.tu-dresden.de/web/Approximation_Fixpoint_Theory_%E2%80%93_A_Unifying_Framework_for_Non-monotonic_Semantics
UID://iccl.inf.tu-dresden.de/web/Approximation_Fixpoint_Theory_%E2%80%93_A_Unifying_Framework_for_Non-monotonic_Semantics
DTSTART:20220203T110000
DTEND:20220203T120000
LOCATION:Online
DESCRIPTION:Motivated by structural similarities in the semantics of Reiter's default logic and the stable model semantics of logic programming (among others)\, Denecker\, Marek\, and Truszczyński set out to isolate these similarities in a purely algebraic setting. The result is now known as approximation fixpoint theory\, and allows one to study the semantics of the major non-monotonic knowledge representation formalisms in an abstract\, uniform framework.\nAfter briefly recalling some lattice theory\, we will present the main concepts of approximation fixpoint theory\, applying it to the case of logic programming as a running example.
DTSTAMP:20220127T103830
SEQUENCE:35724
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dynamic Workflow Composition with OSLO-steps: Data Re-use and Simplification of Automated Administration
URL://iccl.inf.tu-dresden.de/web/Dynamic_Workflow_Composition_with_OSLO-steps:_Data_Re-use_and_Simplification_of_Automated_Administration
UID://iccl.inf.tu-dresden.de/web/Dynamic_Workflow_Composition_with_OSLO-steps:_Data_Re-use_and_Simplification_of_Automated_Administration
DTSTART:20220127T110000
DTEND:20220127T120000
LOCATION:Online
DESCRIPTION:e-Government applications have hard-coded and non-personalized user journeys with high maintenance costs to keep up with\, e.g.\, changing legislation. Automatic administrative workflows are needed. We present the OSLO-steps vocabulary and the workflow composer: combined\, they are a means to create cross-organizational interoperable user journeys\, adapted to the user's needs. We identify the requirements for automating administrative workflows and present an architecture and its implemented components. By using Linked Data principles to decentrally describe independent steps using states as pre- and postconditions\, and composing workflows on-the-fly whilst matching a user's state to those preconditions\, we automatically generate next steps to reach the user's goal. The validated solution shows its feasibility\, and the upcoming interest around interoperable personal data pods (e.g.\, via Solid) can further increase its potential.\n\nThe talk is about this paper: https://dl.acm.org/doi/pdf/10.1145/3460210.3493559?casa_token=fT0Z86qmc-wAAAAA:bp0nt6XO3jXbZmFewjEWK5UlBvmzQv5xch8QAC_PTnz6LVQYGOQ7TSO-jEwxVehYA7eshdZc1wT-\n\nThe talk is online\, link:\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220125T101415
SEQUENCE:35715
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tuple-Generating Dependencies Capture Complex Values
URL://iccl.inf.tu-dresden.de/web/Tuple-Generating_Dependencies_Capture_Complex_Values
UID://iccl.inf.tu-dresden.de/web/Tuple-Generating_Dependencies_Capture_Complex_Values
DTSTART:20220120T110000
DTEND:20220120T120000
LOCATION:Online
DESCRIPTION:We formalise a variant of Datalog that allows complex values constructed by nesting elements of\nthe input database in sets and tuples. We study its complexity and give a translation into sets\nof tuple-generating dependencies (TGDs) for which the standard chase terminates on any input\ndatabase. We identify a fragment for which reasoning is tractable. As membership is undecidable\nfor this fragment\, we develop decidable sufficient conditions. \n\nThe talk is about an accepted paper with Markus in ICDT 2022.\n\nLink to the online talk:\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220106T094428
SEQUENCE:35517
END:VEVENT
BEGIN:VEVENT
SUMMARY:An Introduction to Proof Theory II: More on the Sequent Calculus
URL://iccl.inf.tu-dresden.de/web/An_Introduction_to_Proof_Theory_II:_More_on_the_Sequent_Calculus
UID://iccl.inf.tu-dresden.de/web/An_Introduction_to_Proof_Theory_II:_More_on_the_Sequent_Calculus
DTSTART:20220113T110000
DTEND:20220113T120000
LOCATION:Online
DESCRIPTION:This talk is a sequel to the first lecture "An Introduction to Proof Theory I: Sequent Calculus\," which can be found at the following link: https://www.youtube.com/watch?v=grjMRgmjddE. In this talk\, we will briefly review notions introduced in the first talk such as what a sequent is\, types of inference rules\, and types of rule redundancy. Building off these notions\, we will then look at invertible rules\, cut-elimination\, and touch on how logics can be shown decidable via proof-search. All such concepts will be introduced within the framework of a sequent calculus for classical propositional logic. \n\nThe talk is online via BigBlueButton.\n\nLink:\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220107T142732
SEQUENCE:35524
END:VEVENT
BEGIN:VEVENT
SUMMARY:Answering Queries with Negation over Existential Rules
URL://iccl.inf.tu-dresden.de/web/Answering_Queries_with_Negation_over_Existential_Rules
UID://iccl.inf.tu-dresden.de/web/Answering_Queries_with_Negation_over_Existential_Rules
DTSTART:20220106T110000
DTEND:20220106T120000
LOCATION:Online
DESCRIPTION:Ontology-based query answering with existential rules is well understood and implemented for positive queries\, in particular conjunctive queries. The situation changes drastically for queries with negation\, where there is no agreed-upon semantics or standard implementation. Stratification\, as used for Datalog\, is not enough for existential rules\, since the latter still admit multiple universal models that can differ on negative queries. We\, therefore\, propose universal core models as a basis for meaningful (non-monotonic) semantics for queries with negation. Since cores are hard to compute\, we identify syntactic descriptions of queries that can equivalently be answered over other types of models. This leads to fragments of queries with negation that can safely be evaluated by current chase implementations. We establish new techniques to estimate how the core model differs from other universal models\, and we incorporate our findings into a new reasoning approach for existential rules with negation.\n\nThe talk is about an upcoming publication in AAAI 2022 by Stefan Ellmauthaler\, Markus Krötzsch\, Stephan Mennicke\n\nThe talk is online\,\nLink: https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20220103T134220
SEQUENCE:35513
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reliance-Based Optimization of Existential Rule Reasoning
URL://iccl.inf.tu-dresden.de/web/Reliance-Based_Optimization_of_Existential_Rule_Reasoning
UID://iccl.inf.tu-dresden.de/web/Reliance-Based_Optimization_of_Existential_Rule_Reasoning
DTSTART:20211209T094500
DTEND:20211209T104500
LOCATION:Online
DESCRIPTION:Existential rules is a powerful formalism for describing implicit knowledge contained within a dataset. Extracting such knowledge can be achieved with the chase\, which is a well-known algorithm for computing universal models. This is done by exhaustively calculating the consequences of each given rule. However\, the order in which the rules are selected can have a significant impact on the run time of the procedure as well as the number of derived facts. It was discovered in recent work that core-stratified rule sets allow for a selection strategy that is guaranteed to produce so-called core models\, which\, in the finite case\, correspond to the smallest possible universal models. The strategy is based on considering certain syntactic relationships between the rules called reliances. These indicate whether it is possible for a rule to produce facts used by another or if selecting a rule in the wrong order may introduce redundant facts into the result.\n\nIn this work\, we utilize these reliances to devise rule application strategies that optimize the chase procedure based on the following criteria: First\, we try to minimize the number of times rules are applied during the chase\, aiming to improve run times. Second\, we want to avoid applying rules in a way which introduces redundant facts. The goal here is to minimize the size of the resulting model\, ideally producing a core. While it is always possible to derive a core model in core-stratified rule sets\, we show situations where our approach is guaranteed to produce cores even if the rule set is not stratified. Moreover\, we give a detailed description of the algorithms necessary for detecting a reliance relationship between two given rules as well as prove their correctness. We implement our approach into the rule reasoning engine VLog and evaluate its effectiveness on several knowledge bases used for benchmarking as well as some real-world datasets. 
We find a significant improvement in the run times for a small portion of the considered knowledge bases and are able to match VLog in the remaining ones.\n\nThis is a diploma defense\; the talk will take place online.\nLink: https://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20211202T142824
SEQUENCE:35364
END:VEVENT
BEGIN:VEVENT
SUMMARY:Capturing Homomorphism-Closed Decidable Queries with Existential Rules
URL://iccl.inf.tu-dresden.de/web/Capturing_Homomorphism-Closed_Decidable_Queries_with_Existential_Rules
UID://iccl.inf.tu-dresden.de/web/Capturing_Homomorphism-Closed_Decidable_Queries_with_Existential_Rules
DTSTART:20211202T110000
DTEND:20211202T120000
LOCATION:Online
DESCRIPTION:Existential rules are a very popular ontology-mediated query language for which the chase represents a generic computational approach for query answering. It is straightforward that existential rule queries exhibiting chase termination are decidable and can only recognize properties that are preserved under homomorphisms. In this paper\, we show the converse: every decidable query that is closed under homomorphism can be expressed by an existential rule set for which the standard chase universally terminates. Membership in this fragment is not decidable\, but we show via a diagonalization argument that this is unavoidable.\nThis work has been presented in a relatively short presentation at KR 2021. This extended talk will have room for more in-depth explanations and discussions.\n\nThe talk is via BigBlueButton\, link:\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20211125T120927
SEQUENCE:35320
END:VEVENT
BEGIN:VEVENT
SUMMARY:On Classical Decidable Logics extended with Percentage Quantifiers and Arithmetics
URL://iccl.inf.tu-dresden.de/web/On_Classical_Decidable_Logics_extended_with_Percentage_Quantifiers_and_Arithmetics
UID://iccl.inf.tu-dresden.de/web/On_Classical_Decidable_Logics_extended_with_Percentage_Quantifiers_and_Arithmetics
DTSTART:20211125T110000
DTEND:20211125T120000
LOCATION:Online
DESCRIPTION:The talk is based on my forthcoming FSTTCS'21 paper (https://iccl.inf.tu-dresden.de/web/Inproceedings3301/en) of the same title\, co-authored with my two bachelor students (Anna Pacanowska and Maja Orłowska) and my colleague Tony Tan from Taiwan.\nIn this work\, we investigated extensions of classical decidable logics with percentage quantifiers\, which specify how frequently a formula is satisfied in the intended model. We showed\, surprisingly\, that both the two-variable logic and the guarded fragment become undecidable under such extensions\, sharpening the existing results in the literature. Our negative results are supplemented by the decidability of the two-variable guarded fragment with even more expressive counting\, namely Presburger constraints. Our results can be applied to infer the decidability of various modal and description logics\, e.g. Presburger Modal Logics with Converse or ALCI\, with expressive cardinality constraints.\n\nThe talk is online. Link to the talk:\n\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20211122T135932
SEQUENCE:35311
END:VEVENT
BEGIN:VEVENT
SUMMARY:Flexible Dispute Derivations with Forward and Backward Arguments for Assumption-Based Argumentation
URL://iccl.inf.tu-dresden.de/web/Flexible_Dispute_Derivations_with_Forward_and_Backward_Arguments_for_Assumption-Based_Argumentation
UID://iccl.inf.tu-dresden.de/web/Flexible_Dispute_Derivations_with_Forward_and_Backward_Arguments_for_Assumption-Based_Argumentation
DTSTART:20211118T110000
DTEND:20211118T120000
LOCATION:Online
DESCRIPTION:Assumption-based argumentation (ABA) is one of the main general frameworks for structured argumentation. Dispute derivations for ABA allow for evaluating claims in a dialectical manner\, i.e. on the basis of an exchange of arguments and counter-arguments for a claim between a proponent and an opponent of the claim. Current versions of dispute derivations are geared towards determining (credulous) acceptance of claims w.r.t. the admissibility-based semantics that ABA inherits from abstract argumentation. Relatedly\, they make use of backward or top-down reasoning for constructing arguments. In this work\, we define flexible dispute derivations with forward as well as backward reasoning\, allowing us\, in particular\, to also have dispute derivations for finding admissible\, complete\, and stable assumption sets rather than only determining the acceptability of claims. We give an argumentation-based definition of such dispute derivations and a more implementation-friendly alternative representation in which disputes involve the exchange of claims and rules rather than arguments. These can be seen as elaborations on\, in particular\, existing graph-based dispute derivations on two fronts: first\, in also allowing for forward reasoning\; second\, in that all arguments put forward in the dispute are represented by a graph\, and not only the proponent's.\n\nFor further details\, see:\nhttps://link.springer.com/chapter/10.1007/978-3-030-89391-0_9\n\nThe talk is online\; the room link for the talk:\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20211115T093925
SEQUENCE:35280
END:VEVENT
BEGIN:VEVENT
SUMMARY:Admissibility in Probabilistic Argumentation
URL://iccl.inf.tu-dresden.de/web/Admissibility_in_Probabilistic_Argumentation
UID://iccl.inf.tu-dresden.de/web/Admissibility_in_Probabilistic_Argumentation
DTSTART:20211104T110000
DTEND:20211104T120000
LOCATION:Online
DESCRIPTION:Abstract argumentation is a prominent reasoning framework. It comes with a variety of semantics and has lately been enhanced by probabilities to enable a quantitative treatment of argumentation. While admissibility is a fundamental notion in the classical setting\, it has so far been scarcely reflected in the probabilistic setting. In this paper\, we address the quantitative treatment of argumentation based on probabilistic notions of admissibility in such a way that they form fully conservative extensions of classical notions. In particular\, our building blocks are not the beliefs regarding single arguments. Instead\, we start from the fairly natural idea that whatever argumentation semantics is considered\, it systematically induces constraints on the joint probability distribution on the sets of arguments. In some cases there might be many such distributions\, even infinitely many\; in other cases\, there may be one or none. Standard semantic notions are shown to induce such sets of constraints\, and so do their probabilistic extensions. This allows them to be tackled by SMT solvers\, as we demonstrate by a proof-of-concept implementation. We present a taxonomy of semantic notions\, also in relation to published work\, together with a running example illustrating our achievements.\n\nThis is a talk regarding the paper at KR 2021. The talk will be given online via BigBlueButton. To access the room\, use the following link:\nhttps://bbb.tu-dresden.de/b/ali-zgz-l8d-52n
DTSTAMP:20211029T233155
SEQUENCE:35133
END:VEVENT
BEGIN:VEVENT
SUMMARY:Indexing for Datalog Materialisation with Leapfrog Triejoin
URL://iccl.inf.tu-dresden.de/web/Indexing_for_Datalog_Materialisation_with_Leapfrog_Triejoin
UID://iccl.inf.tu-dresden.de/web/Indexing_for_Datalog_Materialisation_with_Leapfrog_Triejoin
DTSTART:20211028T110000
DTEND:20211028T120000
LOCATION:Online
DESCRIPTION:Datalog is a well-understood relational query language and there are efficient reasoners based on different concepts and technologies. Nevertheless\, the search for faster and more efficient reasoners continues. A promising approach for further performance improvements is the so-called leapfrog triejoin by Veldhuizen. Leapfrog triejoin is a variable-oriented join algorithm\, similar to an ordered merge join\, and it computes the matches of a Datalog rule as a result trie following a previously defined variable order. The literature on leapfrog triejoin focuses on its application to a single join or\, respectively\, a single Datalog rule. Unfortunately\, applying leapfrog triejoin to whole Datalog programs during materialisation yields new challenges. We concentrate on these challenges as well as on potential solutions.\n\nAs leapfrog triejoin requires suitable data structures for each rule\, it is prone to be inefficient when considering the rules of a program in isolation\, as this might introduce avoidable redundancy. Moreover\, the choice of 'good' variable orders is crucial for the performance of leapfrog triejoin. Hence\, we propose criteria for the quality of variable orders. For finding good variable orders\, we propose an optimal approach based on Answer Set Programming as well as a heuristic approach. Furthermore\, we show the trade-off between the optimality of the ASP approach and the required time in an evaluation.\n\nMoreover\, we generalize leapfrog triejoin to use partial variable orders. We discuss the resulting generalization of the data structures\, i.e.\, f-representations instead of tries\, and we show how to efficiently prepare them for leapfrog triejoin. To realize the potential of considering independent variables separately and of the more succinct representation of relations\, we adapt leapfrog triejoin for partial variable orders.
We show that\, on a set of benchmarks\, partial variable orders are of higher quality than total ones w.r.t. our optimization criteria.\n\nThis is a Diploma project defense and will be given online via BigBlueButton. To access the room\, use one of the following links:\n\nWith ZIH-Login:\nhttps://selfservice.zih.tu-dresden.de/l/link.php?m=154204&p=b7e638fd\n\nWithout ZIH-Login:\nhttps://selfservice.zih.tu-dresden.de/link.php?m=154204&p=19a754b0
DTSTAMP:20211022T141110
SEQUENCE:34970
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ongoing Research in the NAVAS Project
URL://iccl.inf.tu-dresden.de/web/Ongoing_Research_in_the_NAVAS_Project
UID://iccl.inf.tu-dresden.de/web/Ongoing_Research_in_the_NAVAS_Project
DTSTART:20211021T110000
DTEND:20211021T120000
LOCATION:Online
DESCRIPTION:This talk provides insight into the latest research on answer set navigation methods.
DTSTAMP:20211015T152537
SEQUENCE:34885
END:VEVENT
END:VCALENDAR