BEGIN:VCALENDAR
PRODID:-//SMW Project//Semantic Result Formats
VERSION:2.0
METHOD:PUBLISH
X-WR-CALNAME:ICCL Events
BEGIN:VEVENT
SUMMARY:Tracking False Information Online
URL://iccl.inf.tu-dresden.de/web/Tracking_False_Information_Online
UID://iccl.inf.tu-dresden.de/web/Tracking_False_Information_Online
DTSTART:20200227T130000
DTEND:20200227T143000
LOCATION:APB 3027
DESCRIPTION:Digital media enables fast sharing of information and discussions among users. While this brings many benefits to today’s society\, such as broadening access to information\, the manner in which information is disseminated also has obvious downsides. Since many users expect fast access to information and news outlets are often under financial pressure\, speed often comes at the expense of accuracy\, which leads to misinformation. Moreover\, digital media can be misused by campaigns that intentionally spread false information\, i.e. disinformation\, about events\, individuals or governments. In this talk\, I will present different ways in which false information is spread online\, including misinformation and disinformation. I will then report findings from our recent and ongoing work on automatic fact checking\, stance detection and framing attitudes.
DTSTAMP:20200123T180331
SEQUENCE:30043
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Graph Curation and Reasoning using the Example of the Scholarly Domain
URL://iccl.inf.tu-dresden.de/web/Knowledge_Graph_Curation_and_Reasoning_using_the_Example_of_the_Scholarly_Domain
UID://iccl.inf.tu-dresden.de/web/Knowledge_Graph_Curation_and_Reasoning_using_the_Example_of_the_Scholarly_Domain
DTSTART:20200130T130000
DTEND:20200130T143000
LOCATION:APB 3027
DESCRIPTION:Knowledge graphs allow organisations and enterprises to integrate their internal and external heterogeneous sources of information into a unified form\, enabling analytics and the discovery of previously unknown knowledge. To exploit the information encoded in knowledge graphs\, analysis of the graph structure as well as of the semantics of the represented relations is required. I will show this using the scholarly domain as an example. The heterogeneity of scholarly artifacts and their metadata\, spread over different Web data sources\, makes this domain a great use case for data analytics and reasoning methods. In this talk\, I will first look at the major challenges of this domain in creating and curating KGs with Semantic Web technologies. I will further showcase the application of Knowledge Graph Embedding models for link prediction scenarios in this domain.
DTSTAMP:20200123T134125
SEQUENCE:30034
END:VEVENT
BEGIN:VEVENT
SUMMARY:Modeling Computational Properties of Description Logics in ASP
URL://iccl.inf.tu-dresden.de/web/Modeling_Computational_Properties_of_Description_Logics_in_ASP
UID://iccl.inf.tu-dresden.de/web/Modeling_Computational_Properties_of_Description_Logics_in_ASP
DTSTART:20200129T090000
DTEND:20200129T103000
LOCATION:APB 3027
DESCRIPTION:Keeping track of the growing volume of research results about description logics is getting harder. Moreover\, these results interact with each other and can be combined to derive new results. That is why we need a knowledge base that encodes these results in a smart way and infers further results from what we currently know. This talk presents an approach to encoding such information with the help of Answer Set Programming (ASP). In addition\, we show how such a system can be integrated into a website that visualizes the current research results and the inferences made based on them. We end by analyzing this approach and suggesting some future work.
DTSTAMP:20200128T165250
SEQUENCE:30078
END:VEVENT
BEGIN:VEVENT
SUMMARY:Checking Chase Termination over Ontologies of Existential Rules with Equality
URL://iccl.inf.tu-dresden.de/web/Checking_Chase_Termination_over_Ontologies_of_Existential_Rules_with_Equality
UID://iccl.inf.tu-dresden.de/web/Checking_Chase_Termination_over_Ontologies_of_Existential_Rules_with_Equality
DTSTART:20200123T130000
DTEND:20200123T143000
LOCATION:APB 3027
DESCRIPTION:The chase is a sound and complete algorithm for conjunctive query answering over ontologies of existential rules with equality. To enable its effective use\, we can apply acyclicity notions\; that is\, sufficient conditions that guarantee chase termination. Unfortunately\, most of these notions have only been defined for existential rule sets without equality. A proposed solution to circumvent this issue is to treat equality as an ordinary predicate with an explicit axiomatisation. We empirically show that this solution is not efficient in practice and propose an alternative approach. More precisely\, we show that\, if the chase terminates for any equality axiomatisation of an ontology\, then it terminates for the original ontology (which may contain equality). Therefore\, one can apply existing acyclicity notions to check chase termination over an axiomatisation of an ontology and then use the original ontology for reasoning. We show that\, in practice\, doing so results in a more efficient reasoning procedure. Furthermore\, we present equality model-faithful acyclicity\, a general acyclicity notion that can be directly applied to ontologies with equality.\n\n\nThis talk is a rehearsal for AAAI 2020. \nJoint work with Jacopo Urbani.
DTSTAMP:20200107T165627
SEQUENCE:29972
END:VEVENT
BEGIN:VEVENT
SUMMARY:Musings on the Semantics of SPARQL
URL://iccl.inf.tu-dresden.de/web/Musings_on_the_Semantics_of_SPARQL
UID://iccl.inf.tu-dresden.de/web/Musings_on_the_Semantics_of_SPARQL
DTSTART:20200109T130000
DTEND:20200109T143000
LOCATION:APB 3027
DESCRIPTION:Graph simulations have found their way into different graph database management (GDBM) tasks\, e.g.\, in the shape of offline indexing structures\, as theoretical models for graph schemas\, or as viable alternatives to matching patterns up to graph homomorphisms. Among other advantages\, it is often the tractability of the simulation problem that is exploited in emerging applications. However\, when it comes to evaluating the approaches\, only basic graph patterns (BGPs) and rather small data instances\, compared to today's large data instances like Wikidata or DBpedia\, are considered. In the first part of this talk\, I give some insights into how far graph simulations may be incorporated into full-fledged graph query processing. To this end\, we analyze different simulation-based semantic interpretations of SPARQL w.r.t. correctness\, complexity\, and effectiveness. Second\, I briefly sketch why state-of-the-art simulation algorithms do not scale well in the graph query/data setting. I further show the effects of a devised solution that even integrates well with the SPARQL semantics we envisioned in the first part.
DTSTAMP:20200108T113625
SEQUENCE:29973
END:VEVENT
BEGIN:VEVENT
SUMMARY:SCF2 - an Argumentation Semantics for Rational Human Judgments on Argument Acceptability
URL://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
UID://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
DTSTART:20191219T130000
DTEND:20191219T143000
LOCATION:APB 3027
DESCRIPTION:In abstract argumentation theory\, many argumentation semantics have been proposed for evaluating argumentation frameworks. This paper is based on the following research question: Which semantics corresponds well to what humans consider a rational judgment on the acceptability of arguments? There are two systematic ways to approach this research question: A normative perspective is provided by the principle-based approach\, in which semantics are evaluated based on their satisfaction of various normatively desirable principles. A descriptive perspective is provided by the empirical approach\, in which cognitive studies are conducted to determine which semantics best predicts human judgments about arguments. In this paper\, we combine both approaches to motivate a new argumentation semantics called SCF2. For this purpose\, we introduce and motivate two new principles and show that no semantics from the literature satisfies both of them. We define SCF2 and prove that it satisfies both new principles. Furthermore\, we discuss findings of a recent empirical cognitive study that provide additional support to SCF2.
DTSTAMP:20191107T084228
SEQUENCE:29685
END:VEVENT
BEGIN:VEVENT
SUMMARY:Justifying All Differences Using Pseudo-Boolean Reasoning
URL://iccl.inf.tu-dresden.de/web/Justifying_All_Differences_Using_Pseudo-Boolean_Reasoning
UID://iccl.inf.tu-dresden.de/web/Justifying_All_Differences_Using_Pseudo-Boolean_Reasoning
DTSTART:20191217T150000
DTEND:20191217T160000
LOCATION:APB 2028
DESCRIPTION:Constraint programming solvers support rich global constraints and propagators\, which make them both powerful and hard to debug. In the Boolean satisfiability community\, proof logging is the standard solution for generating trustworthy outputs\, and this has become key to the social acceptability of computer-generated proofs. However\, reusing this technology for constraint programming requires either much weaker propagation\, or an impractical blowup in proof length.\nThis paper demonstrates that simple\, clean\, and efficient proof logging is still possible for the all-different constraint\, through pseudo-Boolean reasoning. We explain how such proofs can be expressed and verified mechanistically\, describe an implementation\, and discuss the broader implications for proof logging in constraint programming.
DTSTAMP:20191211T111954
SEQUENCE:29883
END:VEVENT
BEGIN:VEVENT
SUMMARY:Standpoint logic: a multi-modal logic for reasoning within semantic indeterminacy
URL://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
UID://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
DTSTART:20191212T130000
DTEND:20191212T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Standpoint logic is a multi-modal logic intended for reasoning with different interpretations of semantically heterogeneous terms. The framework offers an alternative to “fuzzy” approaches to the representation of meaning and allows for the specification of “semantic commitments” and “penumbral connections”.\n\n\nIn this talk\, I will introduce the logic and provide an overview of its proof theory and semantics. I will demonstrate its expressivity in an application scenario in the forestry domain\, using data schemas from the repository Global Forest Watch and concepts from the ENVO ontology. I will finally discuss the complexity of the logic and some restrictions that could make implementations viable.
DTSTAMP:20191028T083947
SEQUENCE:29474
END:VEVENT
BEGIN:VEVENT
SUMMARY:What makes a variant of query determinacy (un)decidable?
URL://iccl.inf.tu-dresden.de/web/TBA3
UID://iccl.inf.tu-dresden.de/web/TBA3
DTSTART:20191205T130000
DTEND:20191205T143000
LOCATION:APB 3027
DESCRIPTION:Suppose there is a database we have no direct access to\, but there are views of this database available to us\, defined by some queries Q_1\, Q_2\, ...\, Q_k. And we are given another query Q. Will we be able to compute Q using only the available views?\n\n\n\nThe above question\, call it "the question of determinacy"\, sounds almost philosophical. One can easily imagine a bearded man in himation chained to the wall of a cave\, watching the views projected on the wall and pondering whether\, from what he is able to see\, the reality can be faithfully reconstructed.\n\nFor us it is a database theory question though. And a really well motivated one\, with motivations ranging from query evaluation plan optimization (where we prefer a positive answer) to privacy issues (where the preferred answer is negative).\n\nQuery determinacy is a broad topic\, with literally hundreds of papers published since the late 1980s. This talk is not going to be a "survey" (which would be impossible within a one-hour time frame\, and with this speaker)\, but rather the personal perspective of a person somehow involved in the recent developments in the area.\n\nFirst I will explain how\, in the last 30+ years\, the question of determinacy was formalized. There are many parameters here: obviously one needs to choose the query language of the queries Q_i and the query language of Q. But -- surprisingly -- there is also some choice regarding what the word "to compute" actually means in this context.\n\nThen I will concentrate on the variants of the decision problem of determinacy (for each choice of parameters there is one such problem -- Q_1\, Q_2\, ...\, Q_k and Q constitute the instance\, and the question is whether Q_1\, Q_2\, ...\, Q_k determine Q)\, and I will talk about how I understand the mechanisms rendering different variants of determinacy decidable or undecidable. This will be on a slightly informal level. No new theorems will be presented\, but I think I will be able to show simplified proofs of some of the earlier results.\n\nThis is a preview of the [https://diku-dk.github.io/edbticdt2020/?contents=invited_ICDT_talk.html invited talk at ICDT 2020].
DTSTAMP:20191202T153202
SEQUENCE:29823
END:VEVENT
BEGIN:VEVENT
SUMMARY:TE-ETH: Lower Bounds for QBFs of Bounded Treewidth
URL://iccl.inf.tu-dresden.de/web/TBA4
UID://iccl.inf.tu-dresden.de/web/TBA4
DTSTART:20191128T130000
DTEND:20191128T143000
LOCATION:APB 3027
DESCRIPTION:The problem of deciding the validity of quantified Boolean formulas (QBFs)\, known as QSAT\, is a vivid research area in both theory and practice. In the field of parameterized algorithmics\, the well-studied graph measure treewidth has turned out to be a successful parameter. A well-known result by Chen in parameterized complexity is that QSAT\, when parameterized by the treewidth of the primal graph of the input formula together with the quantifier depth of the formula\, is fixed-parameter tractable. More precisely\, the runtime of such an algorithm is polynomial in the formula size and exponential in the treewidth\, where the exponential function in the treewidth is a tower whose height is the quantifier depth.\n\n\nA natural question is whether one can significantly improve these results and decrease the tower while assuming the Exponential Time Hypothesis (ETH). In recent years\, there has been a growing interest in the quest of establishing lower bounds under the ETH\, showing mostly problem-specific lower bounds up to the third level of the polynomial hierarchy. Still\, an important question is to settle this as generally as possible and to cover the whole polynomial hierarchy.\n\nIn this work\, we show ETH-based lower bounds for arbitrary QBFs parameterized by treewidth (and quantifier depth). More formally\, we establish lower bounds for QSAT and treewidth\, namely that under the ETH there cannot be an algorithm that solves QSAT of quantifier depth i in runtime significantly better than i-fold exponential in the treewidth and polynomial in the input size. In doing so\, we provide a versatile reduction technique to compress treewidth that encodes the essence of dynamic programming on arbitrary tree decompositions. Further\, we describe a general methodology for a more fine-grained analysis of problems parameterized by treewidth that lie at higher levels of the polynomial hierarchy.\n\n'''Authors:''' Johannes Klaus Fichte\, Markus Hecher\, Andreas Pfandler
DTSTAMP:20191128T115745
SEQUENCE:29795
END:VEVENT
BEGIN:VEVENT
SUMMARY:Efficiently Solving Unbounded Integer Programs in the context of SMT Solvers
URL://iccl.inf.tu-dresden.de/web/Efficiently_Solving_Unbounded_Integer_Programs_in_the_context_of_SMT_Solvers
UID://iccl.inf.tu-dresden.de/web/Efficiently_Solving_Unbounded_Integer_Programs_in_the_context_of_SMT_Solvers
DTSTART:20191122T100000
DTEND:20191122T113000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:'''\nSatisfiability modulo theories (SMT) solvers are automated theorem provers for logical formulas that range over combinations of various first-order theories. These theories typically correspond to domains found in programming languages\, e.g.\, the theories of bit vectors\, integers\, and arrays. This is intentional because SMT solvers were initially developed as back-end reasoning tools for automated software verification. These days\, SMT solvers are also used as back-end reasoning tools for various other applications\, e.g.\, verification of hybrid systems\, program synthesis\, and as brute-force tactics in various interactive theorem provers.\n\nIn this talk\, I will present two new techniques for the theory of linear integer arithmetic in the context of SMT solvers:\n\n1) The unit cube test [2\,3]\, a sound (although incomplete) test that finds solutions for integer programs (i.e.\, systems of linear inequalities) in polynomial time. The test is especially efficient on absolutely unbounded integer programs\, which are difficult to handle for many other decision procedures.\n\n2) A bounding transformation [1] that reduces any integer program in polynomial time to an equisatisfiable integer program that is bounded. The transformation is beneficial because it turns branch and bound into a complete and efficient decision procedure for integer programs.\n\n'''References:'''\n\n[1] A Reduction from Unbounded Linear Mixed Arithmetic Problems into Bounded Problems\, Martin Bromberger. IJCAR 2018\, volume 10900 of LNCS\, pages 329–345. Springer\, 2018.\n\n[2] New Techniques for Linear Arithmetic: Cubes and Equalities\, Martin Bromberger and Christoph Weidenbach. In FMSD volume 51(3)\, pages 433–461. Springer\, 2017.\n\n[3] Fast Cube Tests for LIA Constraint Solving\, Martin Bromberger and Christoph Weidenbach. In IJCAR 2016\, volume 9706 of LNCS\, pages 116–132. Springer\, 2016.
DTSTAMP:20191121T135506
SEQUENCE:29758
END:VEVENT
BEGIN:VEVENT
SUMMARY:A diamond in the rough: Theorizing column stores
URL://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
UID://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
DTSTART:20191121T130000
DTEND:20191121T143000
LOCATION:APB 3105
DESCRIPTION:Column stores have been a 'neglected child' relative to traditional\, row-oriented\, relation-focused database management systems: The systems people came up with them\, and the theoreticians did not really give them the time of day. This talk will discuss what happens when we pick up the slack and formalize a model for analytic computation with columns. In addition to sound conceptual grounding being its own aesthetic reward\, we will touch on some of the examples of how such a formalization enables architectural and performance improvements in real-life systems:\n\n\nSeamless integration of decompression and query execution\; removal of special-case handling of different column features (such as nullability and variable-length elements)\; closure of query execution plans to partial execution\; et cetera. Central to achieving such benefits will be the discussion of what constitutes a column\, how columns are to be represented\, and what they can represent.
DTSTAMP:20191107T103354
SEQUENCE:29693
END:VEVENT
BEGIN:VEVENT
SUMMARY:Can A.I. Provably Explain Itself? A gentle Introduction to Description Logics
URL://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
UID://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
DTSTART:20191114T130000
DTEND:20191114T143000
LOCATION:APB 3027
DESCRIPTION:The emergence of intelligent systems in self-driving cars\, planes\, medical diagnosis\, insurance and financial services\, among others\, has shown that when decisions are taken or suggested by automated systems\, it is essential that an explanation can be provided. The disconnect between how we make decisions and how machines make them\, and the fact that machines are making more and more decisions for us\, has given a new push for transparency in A.I. However\, the inner workings of machine learning algorithms remain difficult to understand\, and the methods for making these models explainable still require expensive human evaluation.\n\n\nOn the other hand\, knowledge representation based on description logics allows for describing the environment\, specifying constraints on the system states and detecting inconsistencies\, as well as for operating on information from heterogeneous (possibly incomplete) data sources and reasoning about the knowledge of an application domain. Because of this conceptual difference from machine learning algorithms\, the description logics formalism is much closer to human reasoning and can be adapted to supply the user with the necessary explanations for a decision made. Additionally\, in order to model dynamic systems\, description logics have extensions that additionally enable temporal and probabilistic reasoning.\n\nIn this talk I will outline the key pillars and basic principles of (onto)logical reasoning as well as its limitations. Finally\, I will say a few words about a joint initiative of TU Dresden and the Saarland Informatics Campus to develop the concept of perspicuous computing and lay the scientific foundations for computerised systems that can clearly explain their functioning.
DTSTAMP:20191030T152752
SEQUENCE:29562
END:VEVENT
BEGIN:VEVENT
SUMMARY:Interface between Logical Analysis of Data and Formal Concept Analysis
URL://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
UID://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
DTSTART:20191024T130000
DTEND:20191024T143000
LOCATION:APB 3027
DESCRIPTION:Logical Analysis of Data and Formal Concept Analysis are separately developed methodologies based on different mathematical foundations. We show that the two methodologies utilize the same basic building blocks. That enables us to develop an interface between the two methodologies. We provide some preliminary benefits of the interface\; most notably efficient algorithms for computing spanned patterns in Logical Analysis of Data using algorithms of Formal Concept Analysis.
DTSTAMP:20191021T094239
SEQUENCE:29357
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Dynamics in Social Environments
URL://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
UID://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
DTSTART:20190926T130000
DTEND:20190926T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:'''\nSocial media platforms\, taken in conjunction\, can be seen as complex networks\; in this context\, understanding how agents react to sentiments expressed by their connections is of great interest. Here\, we show how Network Knowledge Bases help represent the integration of multiple social networks\, and explore how information flow can be handled via belief revision operators for local (agent-specific) knowledge bases. We report on preliminary experiments on Twitter data showing that different agent types react differently to the same information — this is a first step toward developing tools to predict how agents behave as information flows in their social environment.\n\n'''Bio:'''\nMaria Vanina Martinez\, University of Buenos Aires\, Argentina.
DTSTAMP:20190923T154215
SEQUENCE:29160
END:VEVENT
BEGIN:VEVENT
SUMMARY:A.M.B.R.O.S.I.A. - Conferring Immortality on Distributed Applications
URL://iccl.inf.tu-dresden.de/web/A.M.B.R.O.S.I.A._-_Conferring_Immortality_on_Distributed_Applications
UID://iccl.inf.tu-dresden.de/web/A.M.B.R.O.S.I.A._-_Conferring_Immortality_on_Distributed_Applications
DTSTART:20190920T130000
DTEND:20190920T140000
LOCATION:APB 3105
DESCRIPTION:'''Abstract:'''\n\nWhen writing today’s distributed programs\, which frequently span both devices and cloud services\, programmers are faced with complex decisions and coding tasks around coping with failure\, especially when these distributed components are stateful. If their application can be cast as pure data processing\, they benefit from the past 40-50 years of work from the database community\, which has shown how declarative database systems can completely isolate the developer from the possibility of failure in a performant manner. Unfortunately\, while there have been some attempts at bringing similar functionality into the more general distributed programming space\, a compelling general-purpose system must handle non-determinism\, be performant\, support a variety of machine types with varying resiliency goals\, and be language agnostic\, allowing distributed components written in different languages to communicate. This talk describes the first system to satisfy all these requirements\, called Ambrosia\, which is publicly available on GitHub. We coin the term “virtual resiliency”\, analogous to virtual memory\, for the platform feature which allows failure-oblivious code to run in a failure-resilient manner. We also introduce a programming construct\, the “impulse”\, which resiliently handles non-deterministic information originating from outside the resilient component. Of further interest to our community is the effective reapplication of much database performance optimization technology to make Ambrosia more performant than many of today’s non-resilient cloud solutions.\n\n\n'''Bio:'''\n\nOver the last 20 years\, I have worked at Microsoft in a combination of research and product roles. In particular\, I’ve spent about 15 years as a researcher at MSR\, doing fundamental research in streaming\, big data processing\, databases\, and distributed computing. My style of working is to attack difficult problems and\, through fundamental understanding and insight\, create new artifacts that enable important problems to be solved in vastly better ways. For instance\, my work on streaming data processing enabled people with real-time data processing problems to specify their processing logic in new\, powerful ways\, and also resulted in an artifact called Trill\, which was orders of magnitude more performant than anything that preceded it. Within the academic community\, I have published many papers\, some with best paper awards (e.g. the Best Paper Award at ICDE 2012) and two with test of time awards (the SIGMOD 2011 Test of Time Award and the ICDT 2018 Test of Time Award)\, and have also taken on many organizational roles in database conferences. My research has also had significant impact on many Microsoft products\, including SQL Server\, Office\, Windows\, Bing\, and Halo\, as well as leading to the creation of entirely new products like Microsoft StreamInsight\, Azure Stream Analytics\, Trill\, and most recently\, Ambrosia. I spent 5 years building Microsoft StreamInsight\, serving as a founder and architect for the product. Trill has become the de facto standard for temporal and stream data processing within Microsoft and\, years after its creation\, is still the most expressive and performant general-purpose stream data processor in the world. I am also an inventor on 30+ patents.
DTSTAMP:20190918T094958
SEQUENCE:29122
END:VEVENT
BEGIN:VEVENT
SUMMARY:Young Scientist's Third International Workshop on Trends in Information Processing (YSIP3)
URL://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
UID://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
DTSTART:20190917T000000
DTEND:20190920T000000
LOCATION:Stavropol and Arkhyz\, Russian Federation
DESCRIPTION:We invite young scientists – typically master or PhD students – to submit new scientific results – as\, for example\, obtained in their bachelor thesis\, project work\, master thesis\, or PhD project – in all areas of Information Processing.
DTSTAMP:20190117T101503
SEQUENCE:27542
END:VEVENT
BEGIN:VEVENT
SUMMARY:Automatic translation of clinical trial eligibility criteria into formal queries
URL://iccl.inf.tu-dresden.de/web/Automatic_translation_of_clinical_trial_eligibility_criteria_into_formal_queries
UID://iccl.inf.tu-dresden.de/web/Automatic_translation_of_clinical_trial_eligibility_criteria_into_formal_queries
DTSTART:20190916T130000
DTEND:20190916T140000
LOCATION:APB 3027
DESCRIPTION:Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal\, logic-based queries. First\, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then\, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec\, Stanford NLP tools\, and the MetaMap Tagger\, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov.
DTSTAMP:20190913T120845
SEQUENCE:29095
END:VEVENT
BEGIN:VEVENT
SUMMARY:A gentle introduction to partition width
URL://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
UID://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
DTSTART:20190912T130000
DTEND:20190912T143000
LOCATION:APB 3027
DESCRIPTION:In this talk we will take an introductory glance at the notion of "partition width"\, first conceived by Achim Blumensath. As partition width is closely related to a notion of decomposition of an arbitrary structure into a tree-like shape\, the so-called "partition refinement"\, we will also take a look at how both of these notions relate to more established notions of decomposition and width measures (namely tree decompositions\, tree width\, and clique width).
DTSTAMP:20190802T123628
SEQUENCE:28943
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mixing Description Logics in Privacy-Preserving Ontology Publishing
URL://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
UID://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
DTSTART:20190905T130000
DTEND:20190905T140000
LOCATION:APB 2026
DESCRIPTION:In previous work\, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an instance store\, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL . We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy\, and have shown how\, in the context mentioned above\, optimal compliant (safe) generalizations of a given concept can be computed. In the present paper\, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular\, we investigate the situations where the attacker’s knowledge is given by an or an concept. In both cases\, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of \, it turns out to be actually lower (polynomial) for the more expressive DL .\n\n\nJoint work with Franz Baader. This is also a test-talk for a presentation at KI 2019.
DTSTAMP:20190902T110018
SEQUENCE:29045
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Expressive Power of Description Logics with Cardinality Constraints on Finite and Infinite Sets
URL://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
UID://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
DTSTART:20190829T130000
DTEND:20190829T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' In recent work we have extended the description logic (DL) ALCQ by means of more expressive number restrictions using numerical and set constraints stated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). It has been shown that reasoning in the resulting DL\, called ALCSCC\, is PSpace-complete without a TBox and ExpTime-complete w.r.t. a general TBox. The semantics of ALCSCC is defined in terms of finitely branching interpretations\, that is\, interpretations where every element has only finitely many role successors. This condition was needed since QFBAPA considers only finite sets. In this paper\, we first introduce a variant of ALCSCC\, called ALCSCC∞\, in which we lift this requirement (inexpressible in first-order logic) and show that the complexity results for ALCSCC mentioned above are preserved. Nevertheless\, like ALCSCC\, ALCSCC∞ is not a fragment of first-order logic. The main contribution of this paper is to give a characterization of the first-order fragment of ALCSCC∞. The most important tool used in the proof of this result is a notion of bisimulation that characterizes this fragment.\n\n\nJoint work with Franz Baader.\nThis talk is a rehearsal for a presentation at FroCoS 2019.\nDuration: 25 minutes without questions.
DTSTAMP:20190820T085458
SEQUENCE:28999
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning
URL://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
DTSTART:20190801T130000
DTEND:20190801T140000
LOCATION:APB 3027
DESCRIPTION:Abstract: We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a presentation (15 minutes including questions) at IJCAI 2019.
DTSTAMP:20190731T095825
SEQUENCE:28893
END:VEVENT
BEGIN:VEVENT
SUMMARY:Worst-Case Optimal Querying of Very Expressive Description Logics with Path Expressions and Succinct Counting
URL://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
UID://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
DTSTART:20190730T130000
DTEND:20190730T133000
LOCATION:APB 3027
DESCRIPTION:'''Abstract.''' Among the most expressive knowledge representation formalisms are the description logics of the Z family. For well-behaved fragments of ZOIQ\, entailment of positive two-way regular path queries is well known to be 2EXPTIME-complete under the proviso of unary encoding of numbers in cardinality constraints. We show that this assumption can be dropped without an increase in complexity\, and that EXPTIME-completeness can be achieved when bounding the number of query atoms\, using a novel reduction from query entailment to knowledge base satisfiability. These findings allow us to strengthen other results regarding query entailment and query containment problems in very expressive description logics. Our results also carry over to GC2\, the two-variable guarded fragment of first-order logic with counting quantifiers\, for which hitherto only conjunctive query entailment has been investigated.
DTSTAMP:20190718T124219
SEQUENCE:28841
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending EL++ with Linear Constraints on the Probability of Axioms
URL://iccl.inf.tu-dresden.de/web/Extending_EL%2B%2B_with_Linear_Constraints_on_the_Probability_of_Axioms
UID://iccl.inf.tu-dresden.de/web/Extending_EL%2B%2B_with_Linear_Constraints_on_the_Probability_of_Axioms
DTSTART:20190723T133000
DTEND:20190723T150000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' One of the main reasons to employ a description logic such as EL++ is the fact that it has efficient\, polynomial-time algorithmic properties such as deciding consistency and inferring subsumption. However\, simply by adding negation of concepts to it\, we obtain the expressivity of description logics whose decision procedure is ExpTime-complete. A similar complexity explosion occurs if we add probability assignments on concepts. To lower the resulting complexity\, we instead concentrate on assigning probabilities to axioms/GCIs. We show that the consistency detection problem for such a probabilistic description logic is NP-complete\, and present a linear algebraic deterministic algorithm to solve it\, using the column generation technique. We also examine algorithms for the probabilistic extension problem\, which consists of inferring the minimum and maximum probabilities for a new axiom\, given a consistent probabilistic knowledge base. \n\n\nFuture work aims at finding fragments of probabilistic EL++ which are tractable.
DTSTAMP:20190717T120135
SEQUENCE:28813
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reasoning about disclosure in data integration in the presence of source constraints
URL://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
UID://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
DTSTART:20190718T130000
DTEND:20190718T140000
LOCATION:APB 3027
DESCRIPTION:Joint work with M. Benedikt\, P. Bourhis\, L. Jachiet\n\n\n'''Abstract:''' Data integration systems allow users to access data sitting in multiple sources by means of queries over a global schema\, related to the sources via mappings. Data sources often contain sensitive information\, and thus an analysis is needed to verify that a schema satisfies a privacy policy\, given as a set of queries whose answers should not be accessible to users. Such an analysis should take into account not only knowledge that an attacker may have about the mappings\, but also what they may know about the semantics of the sources. In this talk\, I'll discuss the impact that source constraints can have on disclosure analysis.\n\n'''Speaker bio:''' Michaël Thomazo (Inria\, DI ENS\, ENS\, CNRS\, PSL University)
DTSTAMP:20190712T163358
SEQUENCE:28796
END:VEVENT
BEGIN:VEVENT
SUMMARY:Automatic Extraction of Compositional Matrix-Space Models of Language
URL://iccl.inf.tu-dresden.de/web/Automatic_Extraction_of_Compositional_Matrix-Space_Models_of_Language
UID://iccl.inf.tu-dresden.de/web/Automatic_Extraction_of_Compositional_Matrix-Space_Models_of_Language
DTSTART:20190715T110000
DTEND:20190715T123000
LOCATION:APB 3027
DESCRIPTION:Learning word representations in distributional semantic models to capture the semantics and compositionality of natural language is a central research area of computational linguistics. Compositional Matrix-Space Models (CMSMs) introduce a novel word representation alternative to Vector Space Models (VSMs). This talk presents the results of learning Compositional Matrix-Space Models to capture the semantics and compositionality in natural language processing tasks\, including sentiment analysis and compositionality detection of short phrases. Then\, a new dataset for examining compositional distributional semantic models is introduced\, and benchmark experiments are presented that use the developed dataset as a testbed to evaluate semantic composition in distributional semantic models.
DTSTAMP:20190708T112532
SEQUENCE:28772
END:VEVENT
BEGIN:VEVENT
SUMMARY:Epistemic Answer Set Programming
URL://iccl.inf.tu-dresden.de/web/Epistemic_Answer_Set_Programming
UID://iccl.inf.tu-dresden.de/web/Epistemic_Answer_Set_Programming
DTSTART:20190711T130000
DTEND:20190711T143000
LOCATION:APB 3027
DESCRIPTION:Today it is widely accepted by the logic programming community that answer set programming (ASP) requires a more powerful introspective reasoning with the use of modalities. Although there has been a long-lasting debate among researchers about how to correctly extend ASP with epistemic modal operators\, there is still no agreement on a fully satisfactory semantics that is able to offer intuitive results for epistemic logic programs. In this talk\, we introduce a recent epistemic extension of ASP called epistemic ASP (EASP)\, endowed with the epistemic answer set semantics: minimal (with respect to truth) models which are maximal under two different orderings that minimise knowledge. Then we compare EASP with existing successful (to some extent) approaches in the literature\, showing the advantages and the novelties of the new semantics: compared to Gelfond's epistemic specifications (ES)\, EASP defines a sufficiently strong language of a simpler syntactic character. Its semantics through a minimality criterion with respect to truth and knowledge is a natural and conservative generalisation of ASP's original answer set semantics. Moreover\, compared to all other semantics proposals for ES\, the epistemic answer set semantics provides a comprehensive solution to unintended results for epistemic logic programs including constraints. Finally\, we briefly discuss some formal properties of EASP such as epistemic splitting\, strong equivalence and foundedness. \n\n\n'''Speaker info:''' Ezgi Iraz Su\, IRIT (Lilac)\, Université de Toulouse 3 (Université Paul Sabatier)
DTSTAMP:20190625T121430
SEQUENCE:28697
END:VEVENT
BEGIN:VEVENT
SUMMARY:Introduction to p-adic numbers and analysis
URL://iccl.inf.tu-dresden.de/web/Introduction_to_p-adic_numbers_and_analysis
UID://iccl.inf.tu-dresden.de/web/Introduction_to_p-adic_numbers_and_analysis
DTSTART:20190710T133000
DTEND:20190710T150000
LOCATION:APB 3027
DESCRIPTION:The p-adic numbers (where p is a prime number) can be seen as one possible link between number theory and analysis. They therefore play an important role in various mathematical areas. The so-called strong triangle inequality of the p-adic absolute value (|a+b| ≤ max(|a|\,|b|) for p-adic numbers a and b) has many strange and surprising consequences.\nIn our talk\, we give the definition of p-adic numbers\, several elementary results on p-adic functional analysis and a short overview of possible applications.
DTSTAMP:20190707T075702
SEQUENCE:28768
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning
URL://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
DTSTART:20190613T133000
DTEND:20190613T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190606T082710
SEQUENCE:28585
END:VEVENT
BEGIN:VEVENT
SUMMARY:Discovering Implicational Knowledge in Wikidata
URL://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
UID://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
DTSTART:20190613T130000
DTEND:20190613T133000
LOCATION:APB 3027
DESCRIPTION:Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Among the freely available knowledge graphs\, Wikidata stands out by being collaboratively edited and curated. Amidst the vast numbers of facts\, complex knowledge is just waiting to be discovered\, but the sheer size of Wikidata makes this infeasible for human editors. We apply Formal Concept Analysis to efficiently identify and succinctly represent comprehensible implications that are implicitly present in the data. As a first step\, we describe a systematic process to extract conceptual knowledge from Wikidata's complex data model\, thus providing a method for obtaining large real-world data sets for FCA. We conduct experiments that show the principal feasibility of the approach\, yet also illuminate some of the limitations\, and give examples of interesting knowledge discovered.\n\n\nThis will be a rehearsal talk for ICFCA-2019 (20 minutes including questions).
DTSTAMP:20190607T170543
SEQUENCE:28613
END:VEVENT
BEGIN:VEVENT
SUMMARY:Projection in a Description Logic of Context with Actions
URL://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
UID://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
DTSTART:20190606T130000
DTEND:20190606T134500
LOCATION:APB 3027
DESCRIPTION:Satyadharma Tirtarasa and Benjamin Zarrieß. '''Projection in a Description Logic of Context with Actions'''. In Proceedings of the 32nd International Workshop on Description Logics (DL'19)\, Oslo\, Norway\, June 2019. Springer. To appear.\n\n\n'''Abstract:''' Projection is the problem of checking whether the execution of a given sequence of actions will achieve its goal starting from some initial state. In this paper\, we study a setting where we combine a two-dimensional Description Logic of context (ConDL) with an action formalism. We choose a well-studied ConDL where both: the possible states of a dynamical system itself (object level) and also different context-dependent views on this system state (context level) are organised in relational structures and can be described using usual DL constructs. To represent how such a system and its views evolve we introduce a suitable action formalism. It allows one to describe change on both levels. Furthermore\, the observable changes on the object level due to an action execution can also be context-dependent. We show that the formalism is well-behaved in the sense that projection has the same complexity as standard reasoning tasks in case ALCO is the underlying DL. \n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190603T115201
SEQUENCE:28557
END:VEVENT
BEGIN:VEVENT
SUMMARY:Explorations into Belief State Compression
URL://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
UID://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
DTSTART:20190528T133000
DTEND:20190528T143000
LOCATION:APB 3027
DESCRIPTION:A knowledge base is an integral part of a logic-based artificial intelligence system. The size of the knowledge base has a great effect on the derivation time of a logic-based agent. In this thesis\, I present a variety of algorithms for a particular variant of knowledge base size reduction referred to as “Belief State Compression”. Each proposed algorithm can be “lossy” or “lossless” depending on the (in)ability to recover the removed information\; and “redundant” or “irredundant” with respect to the necessity of the remaining information in order to remain lossless. Belief state compression differs from previous approaches in at least three aspects. First\, it takes its objects to be support-structured sets of unconstrained\, rather than flat sets of syntactically constrained\, logical formulas\, which we refer to as belief states. Second\, classical notions of minimality and redundancy are replaced by weaker\, resource-bounded alternatives based on the support structure. Third\, in “lossy” variants of compression\, the compressed knowledge base logically implies only a practically-relevant subset of the original knowledge base. Six variants of belief state compression\, falling into three major classes\, are presented. Experimental results show that a combination of five of them results in mostly irredundant\, lossless compressions\, while maintaining reasonable run times.
DTSTAMP:20190523T202953
SEQUENCE:28517
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quine's Fluted Fragment
URL://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
UID://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
DTSTART:20190509T130000
DTEND:20190509T143000
DESCRIPTION:We consider the fluted fragment\, a decidable fragment of first-order logic with an unbounded number of variables\, originally identified in 1968 by W. V. Quine. We show that the satisfiability problem for this fragment has non-elementary complexity\, thus refuting an earlier published claim by W.C. Purdy that it is in NExpTime. More precisely\, we consider the intersection of the fluted fragment and the m-variable fragment of first-order logic\, for all non-negative m. We obtain upper and lower complexity bounds for this fragment that coincide for all m up to the value 4.\n\n\n'''Short bio:''' Ian Pratt-Hartmann studied mathematics and philosophy at [http://www.bnc.ox.ac.uk Brasenose College\, Oxford]\, and philosophy at [http://www.princeton.edu/main/ Princeton] and [http://www.stanford.edu/ Stanford] Universities\, gaining his PhD from Princeton. He is currently Senior Lecturer in the Department of Computer Science at the [http://www.manchester.ac.uk/ University of Manchester]. Since February 2014\, Dr. Pratt-Hartmann has held a joint appointment in the [http://informatyka.wmfi.uni.opole.pl/ Institute of Computer Science] at the [http://www.uni.opole.pl/ University of Opole]. His academic interests range widely over computational logic\, natural language semantics and artificial intelligence.
DTSTAMP:20190503T071228
SEQUENCE:28293
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data Science Use Cases for Lifestyle Banking
URL://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
UID://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
DTSTART:20190425T130000
DTEND:20190425T143000
LOCATION:APB 3027
DESCRIPTION:Abstract:\nTraditional banking concerns itself with risk understanding\, credit underwriting\, cash need\, liquidity\, etc. Data science and machine learning have proved useful in banking businesses to mitigate risk while targeting the right customers with cash need. In this talk\, we will explore a lifestyle side of banking that goes beyond the traditional realm and delves more into alternative signals of customers' needs. We will see example data science use cases that have been successfully implemented in the banking business at Siam Commercial Bank.
DTSTAMP:20190411T134324
SEQUENCE:28212
END:VEVENT
BEGIN:VEVENT
SUMMARY:Closed-World Semantics for Conjunctive Queries with Negation over ELH_bottom Ontologies
URL://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
UID://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
DTSTART:20190418T130000
DTEND:20190418T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information\, however\, there exist only a few ontology-based approaches. Moreover\, these proposals conflate the closed-domain and closed-world assumptions\, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic ELH-bottom\, which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms\, which shows that query answering is possible in polynomial time in data complexity.\n\n\nThis work was awarded Best Paper at JELIA 2019: https://jelia2019.mat.unical.it/awards.
DTSTAMP:20190418T100913
SEQUENCE:28243
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Power of the Terminating Chase
URL://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
UID://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
DTSTART:20190411T130000
DTEND:20190411T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''':\nThe chase has become a staple of modern database theory with applications in data integration\, query optimisation\, data exchange\, ontology-based query answering\, and many other areas. Most application scenarios and implementations require the chase to terminate and produce a finite universal model\, and a large arsenal of sufficient termination criteria is available to guarantee this (generally undecidable) condition. In this invited tutorial\, we therefore ask about the expressive power of logical theories for which the chase terminates. Specifically\, which database properties can be recognised by such theories\, i.e.\, which Boolean queries can they realise? For the skolem (semi-oblivious) chase\, and almost any known termination criterion\, this expressivity is just that of plain Datalog. Surprisingly\, this limitation of most prior research does not apply to the chase in general. Indeed\, we show that standard-chase terminating theories can realise queries with data complexities ranging from PTime to non-elementary that are out of reach for the terminating skolem chase. A “Datalog-first” standard chase that prioritises applications of rules without existential quantifiers makes modelling simpler – and we conjecture: computationally more efficient. This is one of the many open questions raised by our insights\, and we conclude with an outlook on the research opportunities in this area.\n\nThis work has been published and presented at ICDT 2019\, Lisbon\, Portugal.
DTSTAMP:20190327T143510
SEQUENCE:27953
END:VEVENT
BEGIN:VEVENT
SUMMARY:Beyond NP Revolution
URL://iccl.inf.tu-dresden.de/web/Beyond_NP_Revolution
UID://iccl.inf.tu-dresden.de/web/Beyond_NP_Revolution
DTSTART:20190409T133000
DTEND:20190409T150000
LOCATION:APB 2026
DESCRIPTION:'''Abstract:''' The paradigmatic NP-complete problem of Boolean satisfiability (SAT) solving is a central problem in Computer Science. While the mention of SAT can be traced to the early 19th century\, efforts to develop practically successful SAT solvers go back to the 1950s. The past 20 years have witnessed an "NP revolution" with the development of conflict-driven clause-learning (CDCL) SAT solvers. Such solvers combine a classical backtracking search with a rich set of effective heuristics. While 20 years ago SAT solvers were able to solve instances with at most a few hundred variables\, modern SAT solvers solve instances with up to millions of variables in a reasonable time. The "NP revolution" opens up opportunities to design practical algorithms with rigorous guarantees for problems in complexity classes beyond NP by replacing an NP oracle with a SAT solver. In this talk\, we will discuss how we use the NP revolution to design practical algorithms for two fundamental problems in artificial intelligence and formal methods: Constrained Counting and Sampling.\n\n\n'''Bio:''' Kuldeep Meel is an Assistant Professor of Computer Science in the School of Computing at the National University of Singapore\, where he holds the Sung Kah Kay Assistant Professorship. He received his Ph.D. (2017) and M.S. (2014) degrees in Computer Science from Rice University. He holds a B.Tech. (with Honors) degree (2012) in Computer Science and Engineering from the Indian Institute of Technology\, Bombay. His research interests lie at the intersection of Artificial Intelligence and Formal Methods. Meel has co-presented tutorials at top-tier AI conferences: IJCAI 2018\, AAAI 2017\, and UAI 2016. His work received the 2018 Ralph Budd Award for Best PhD Thesis in Engineering\, the 2014 Outstanding Masters Thesis Award from the Vienna Center of Logic and Algorithms\, and the Best Student Paper Award at CP 2015. He received the IBM Ph.D. Fellowship and the 2016-17 Lodieska Stockbridge Vaughn Fellowship for his work on constrained sampling and counting.
DTSTAMP:20190404T131214
SEQUENCE:28083
END:VEVENT
BEGIN:VEVENT
SUMMARY:Making sense of conflicting defeasible rules in the controlled natural language ACE: design of a system with support for existential quantification using skolemization
URL://iccl.inf.tu-dresden.de/web/Making_sense_of_conflicting_defeasible_rules_in_the_controlled_natural_language_ACE:_design_of_a_system_with_support_for_existential_quantification_using_skolemization
UID://iccl.inf.tu-dresden.de/web/Making_sense_of_conflicting_defeasible_rules_in_the_controlled_natural_language_ACE:_design_of_a_system_with_support_for_existential_quantification_using_skolemization
DTSTART:20190404T130000
DTEND:20190404T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We motivate and present the design of a system implementing what we (joint work with Hannes Strass\, previously at the University of Leipzig\, as well as Adam Z. Wyner at Swansea University) have dubbed the "EMIL" (acronym for "extracting meaning out of inconsistent language") pipeline. The pipeline in question takes potentially conflicting rules expressed in a fragment of a prominent controlled natural language\, ACE\, extended with means of expressing defeasible rules in the form of normality assumptions. It makes sense of such rules using a recently formulated argumentation-inspired semantics\, verbalising possible points of view that can plausibly be held based on the rules in ACE. The approach we describe is ultimately based on reductions to answer-set-programming (ASP)\, simulating existential quantification by using skolemization in a manner resembling a translation for ASP formalized in the context of ∃-ASP. We discuss the advantages of this approach over building on the existing ACE interface to rule-systems\, ACERules.\n\n\n'''Bio:''' Martin Diller is finishing his PhD at TU Wien (Austria). The focus of his PhD has mainly been on implementing problems in (abstract and structured) argumentation via complexity-sensitive translations to logical formalisms (quantified boolean formulas and answer-set-programming). He holds a joint MSc degree in computational logic from TU Dresden\, FU Bozen-Bolzano\, and TU Wien. Before that he studied Philosophy & Computer Science at Universidad Nacional de Córdoba\, Argentina\, and was also briefly part of research groups in Epistemology & Computer Science there. He has also been at several other institutes and universities for internships and research stays working on applied argumentation & automated reasoning: UCL in London (England)\, University of Aberdeen (Scotland)\, and NICTA-Canberra (Australia).
DTSTAMP:20190328T120428
SEQUENCE:27961
END:VEVENT
BEGIN:VEVENT
SUMMARY:Third Workshop on Human Reasoning and Computational Logic
URL://iccl.inf.tu-dresden.de/web/Third_Workshop_on_Human_Reasoning_and_Computational_Logic
UID://iccl.inf.tu-dresden.de/web/Third_Workshop_on_Human_Reasoning_and_Computational_Logic
DTSTART:20190404T090000
DTEND:20190405T170000
LOCATION:APB 2026
DESCRIPTION:From the 4th to the 5th of April 2019\, we organize the third workshop on Human Reasoning and Computational Logic at TU Dresden\, Germany. The goal of this workshop is to provide a platform for the scientific exchange with respect to Human Reasoning between the areas of Cognitive Science and Computational Logic.
DTSTAMP:20190130T094758
SEQUENCE:27640
END:VEVENT
BEGIN:VEVENT
SUMMARY:Temporal Logics with Probabilistic Distributions
URL://iccl.inf.tu-dresden.de/web/Temporal_Logics_with_Probabilistic_Distributions2
UID://iccl.inf.tu-dresden.de/web/Temporal_Logics_with_Probabilistic_Distributions2
DTSTART:20190328T130000
DTEND:20190328T143000
LOCATION:APB 3027
DESCRIPTION:In many applications such as monitoring of dynamical systems\, the data are actually time-dependent\, e.g.\, describing the states of a dynamical system at different points in time. Moreover\, events are more likely or less likely to happen at certain time points\, depending on the type of event. There are many well-studied distributions that can characterise the nature of events around us. For example\, according to a Pareto distribution\, a.k.a. a power-law distribution\, the longer something has gone on\, the longer we expect it to continue going on. New companies or start-ups\, for instance\, either fail (with high probability) during their first year of existence\, or\, if they manage to survive for decades\, their chances of collapse become extremely small.\n\n\nIn order to capture dynamic time-dependent and probabilistic patterns of knowledge\, using basic notions from probability theory and statistics\, we have introduced temporal logics of expectation\, where we can speak about statements not only occurring eventually in the future\, but also giving additional information on when they are likely to happen. The resulting combination of a temporal DL-Lite fragment (with a two-dimensional semantics\, where one dimension is for time and the other for the DL domain) and an additional probabilistic constructor "distribution eventuality" is interpreted over multiple weighted worlds\, viz.\, temporal DL interpretations.
DTSTAMP:20190321T163823
SEQUENCE:27910
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Graph Embedding
URL://iccl.inf.tu-dresden.de/web/TBA2
UID://iccl.inf.tu-dresden.de/web/TBA2
DTSTART:20190207T130000
DTEND:20190207T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''': Recently graph embeddings have been taken up by the community as a tool to solve various tasks in machine learning and the general AI community. In this talk I will give a gentle introduction to the topic and also give some pointers to currently ongoing research. We start from looking at why graph embeddings are needed in the first place and how they could be used. We will then focus on graphs containing a large variety of information\, typically called knowledge graphs\, often represented in RDF. These graphs are hard to embed compared to e.g.\, uniform simple networks) because they contain multiple edge and vertex types\, relation directionality\, literals\, etc. What we will cover are a few basic techniques on how these embeddings can be computed. We plan to look into at least one example of translational based methods\, one from matrix decomposition\, and methods based on co-occurrence and statistical information. Finally we will discuss about a couple of open problems and some of the topics currently worked on.\n\n\n'''Short biography''': Michael Cochez is a postdoctoral researcher at the Fraunhofer Institute for Applied Information Technology FIT in Germany. In this position Michael is working on transferring research results from the academic world to the industry. Besides the industry exposure\, he conducts research in areas related to data analysis and knowledge representation\, like knowledge graph embedding\, scalable clustering\, frequent itemset mining\, stream sampling\, prototype-based ontologies\, ontology matching\, and knowledge evolution. This research is currently mainly conducted at the RWTH Aachen university\, Germany. Before joining Fraunhofer\, he obtained his Ph.D. degree from the University of Jyväskyä\, Finland under the supervision of Vagan Terziyan and Ferrante Neri (De Montfort University - Leicester). 
He obtained his master's degree from the same university and his bachelor's degree from the University of Antwerp\, Belgium. Michael Cochez is currently on partial leave from a postdoc position at the University of Jyväskylä and is also a scientific advisor for WE-OPT-IT Oy (formerly MyOpt Oy) in Finland.
DTSTAMP:20190201T103020
SEQUENCE:27647
END:VEVENT
BEGIN:VEVENT
SUMMARY:Learning Ontologies with Epistemic Reasoning: The EL Case
URL://iccl.inf.tu-dresden.de/web/Learning_Ontologies_with_Epistemic_Reasoning:_The_EL_Case
UID://iccl.inf.tu-dresden.de/web/Learning_Ontologies_with_Epistemic_Reasoning:_The_EL_Case
DTSTART:20190123T093000
DTEND:20190123T110000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''': We investigate the problem of learning description logic ontologies from entailments via queries\, using epistemic reasoning. We introduce a new learning model consisting of epistemic membership and example queries and show that polynomial learnability in this model coincides with polynomial learnability in Angluin’s exact learning model with membership and equivalence queries. We then instantiate our learning framework to EL and show some complexity results for an epistemic extension of EL where epistemic operators can be applied over the axioms. Finally\, we transfer known results for EL and its fragments to our learning model based on epistemic reasoning.
DTSTAMP:20190115T143531
SEQUENCE:27525
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ontology-Based Query Answering for Probabilistic Temporal Data
URL://iccl.inf.tu-dresden.de/web/Ontology-Based_Query_Answering_for_Probabilistic_Temporal_Data
UID://iccl.inf.tu-dresden.de/web/Ontology-Based_Query_Answering_for_Probabilistic_Temporal_Data
DTSTART:20190117T130000
DTEND:20190117T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We investigate ontology-based query answering for data that are both temporal and probabilistic\, which might occur in contexts such as stream reasoning or situation recognition with uncertain data. We present a framework that allows us to represent temporal probabilistic data\, and introduce a query language with which complex temporal and probabilistic patterns can be described. Specifically\, this language combines conjunctive queries with operators from linear time logic as well as probability operators. We analyse the complexities of evaluating queries in this language in various settings. While in some cases combining the temporal and the probabilistic dimension in this way comes at the cost of increased complexity\, we also determine cases in which this increase can be avoided.\n\nThis is a talk based on a paper that will be presented at this year’s AAAI conference.
DTSTAMP:20190114T122705
SEQUENCE:27507
END:VEVENT
BEGIN:VEVENT
SUMMARY:Privacy-Preserving Ontology Publishing for EL Instance Stores
URL://iccl.inf.tu-dresden.de/web/Privacy-Preserving_Ontology_Publishing_for_EL_Instance_Stores
UID://iccl.inf.tu-dresden.de/web/Privacy-Preserving_Ontology_Publishing_for_EL_Instance_Stores
DTSTART:20190110T130000
DTEND:20190110T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL EL\, which corresponds to the setting where the ontology is an EL instance store. We introduce the notions of compliance of a concept with a policy and of the safety of a concept for a policy\, and show how optimal compliant (safe) generalizations of a given EL concept can be computed. In addition\, we investigate the complexity of the optimality problem.\n\nThis is joint work with Franz Baader and Francesco Kriegel.
DTSTAMP:20190109T151057
SEQUENCE:27479
END:VEVENT
BEGIN:VEVENT
SUMMARY:Embodied Terminology: Language\, Knowledge\, and Cognition
URL://iccl.inf.tu-dresden.de/web/Embodied_Terminology:_Language,_Knowledge,_and_Cognition
UID://iccl.inf.tu-dresden.de/web/Embodied_Terminology:_Language,_Knowledge,_and_Cognition
DTSTART:20181206T130000
DTEND:20181206T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Meaning formation in specialized language is still an open puzzle\, and several methods have been proposed to piece it together. This talk looks to the paradigm of embodied cognition for such a method; this paradigm holds that cognitive processes\, including language production and understanding\, are deeply rooted in physical interactions with the world. More specifically\, it looks at image schemas\, which capture recurrent sensorimotor patterns that give coherence and structure to our experiences and shape our language and knowledge. Potential theoretical contributions of embodied cognition to meaning formation in specialized languages are discussed alongside automated methods for the identification of image schemas in natural language. A coherent\, robust\, and language-agnostic theory of and method for embodied terminology holds the promise to boost socio-economically effective\, cognitively grounded\, and technologically powerful terminology management and translation technologies.\n\n\n'''Title of the lecture:''' Translatorische Terminologiewissenschaft und Übersetzungstechnologien (German)\n\n'''Lecture description:''' This demonstration lesson will be held in German\, since this is required for the hearing\, and will represent the second lesson of the Master-level lecture on translational terminology science and translation technologies. Technologies of specialized communication will be discussed\, with a particular focus on multilingual\, systematic\, and onomasiological terminology management.\n\n'''Details:''' This '''research talk''' and the following '''demonstration lesson''' represent a trial run for a hearing within the application procedure for a '''tenure-track professorship''' for terminology science and translation technologies. Please join and ask many challenging questions.
DTSTAMP:20181203T100011
SEQUENCE:27219
END:VEVENT
BEGIN:VEVENT
SUMMARY:Satisfiability in the Triguarded Fragment of First-Order Logic
URL://iccl.inf.tu-dresden.de/web/Satisfiability_in_the_Triguarded_Fragment_of_First-Order_Logic
UID://iccl.inf.tu-dresden.de/web/Satisfiability_in_the_Triguarded_Fragment_of_First-Order_Logic
DTSTART:20181129T130000
DTEND:20181129T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Most Description Logics (DLs) can be translated into well-known decidable fragments of first-order logic (FO)\, including the guarded fragment GF and the two-variable fragment FO2. Given their prominence in DL research\, we take a closer look at GF and FO2\, and present a new fragment that subsumes both. This fragment\, called the triguarded fragment (denoted TGF)\, is obtained by relaxing the standard definition of GF: quantification is required to be guarded only for subformulae with three or more free variables. We show that satisfiability of equality-free TGF is N2ExpTime-complete\, but becomes NExpTime-complete if we bound the arity of predicates by a constant (a natural assumption in the context of DLs). Finally\, we observe that many natural extensions of TGF\, including the addition of equality\, lead to undecidability.\n\n\nThis talk is a presentation given at the 31st International Workshop on Description Logics\, 2018.
DTSTAMP:20181104T132422
SEQUENCE:27062
END:VEVENT
BEGIN:VEVENT
SUMMARY:Big Data Variety: On-Demand Data Integration
URL://iccl.inf.tu-dresden.de/web/Big_Data_Variety:_On-Demand_Data_Integration
UID://iccl.inf.tu-dresden.de/web/Big_Data_Variety:_On-Demand_Data_Integration
DTSTART:20181126T145000
DTEND:20181126T162000
LOCATION:APB 3105
DESCRIPTION:'''Abstract.''' As big data systems get more complex\, the data variety challenge has become the driving factor in current big data projects. From a technical perspective\, data variety mainly boils down to data integration\, which\, unfortunately\, is far from being a resolved problem. Current efforts highlight the need to broaden the perspective beyond the data community and use semantic-aware formalisms\, such as knowledge graphs\, to tackle this problem. In this talk\, we will review the current state of the art of the data variety challenge and present recent solutions to manage the problem.\n\n\n'''Bio.''' I’m currently a tenure-track 2 lecturer at the Departament d’Enginyeria de Serveis i Sistemes d’Informació (ESSI)\, which belongs to the Universitat Politècnica de Catalunya (UPC-BarcelonaTech). I also coordinate the IT4BI Erasmus Mundus Master at UPC and the Big Data Management and Analytics postgraduate course at UPC School. Although my hometown is Lleida\, I have already lived for more than 10 years in Barcelona. In March 2004 I obtained my bachelor’s degree in Informatics Engineering at the Facultat d’Informàtica de Barcelona (FIB). Later\, in February 2010\, I obtained my doctoral degree in Computing. My PhD thesis\, directed by Dr. Alberto Abelló and entitled “Automating the Multidimensional Design of Data Warehouses”\, can be found here. My main topics of interest are business intelligence\, Big Data and the semantic web. My PhD thesis focused on data warehousing\, but since then I have been working on many other topics such as NOSQL (and any technology beyond relational databases)\, bridging Big Data management and analytics\, open data platforms (mostly at the database level)\, recommendation systems and semantic-aware systems (based on or exploiting semantic formalisms such as ontology languages or RDF).
I am also interested in agile methodologies / formalisms to incorporate non-technical people in the design\, maintenance and evolution of database systems.
DTSTAMP:20181121T124047
SEQUENCE:27171
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending Datalog with Sets Using an Encoding in Existential Rules
URL://iccl.inf.tu-dresden.de/web/Extending_Datalog_with_Sets_Using_an_Encoding_in_Existential_Rules
UID://iccl.inf.tu-dresden.de/web/Extending_Datalog_with_Sets_Using_an_Encoding_in_Existential_Rules
DTSTART:20181126T130000
DTEND:20181126T143000
LOCATION:APB 3027
DESCRIPTION:We extend Datalog with sets in order to facilitate modelling with this logic\, and define the resulting extended logic. To allow practical reasoning over this logic\, we show a translation algorithm into first-order logic and prove its correctness. Furthermore\, we show that this translation exhibits optimal runtimes during the restricted chase compared to Datalog\, which means that reasoning over the extended logic is practically viable. Lastly\, we explore possible applications of this logic in practice.\n\n\nThis presentation and the subsequent question session constitute a defence for acquiring a Bachelor of Science degree in computer science.
DTSTAMP:20181110T102350
SEQUENCE:27105
END:VEVENT
BEGIN:VEVENT
SUMMARY:Standard and Non-Standard Inferences in the Description Logic FL0 Using Tree Automata
URL://iccl.inf.tu-dresden.de/web/TBA
UID://iccl.inf.tu-dresden.de/web/TBA
DTSTART:20181115T130000
DTEND:20181115T143000
LOCATION:APB 3027
DESCRIPTION:Although quite inexpressive\, the description logic (DL) FL0\, which provides only conjunction\, value restriction and the top concept as concept constructors\, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete\, and becomes even ExpTime-complete in case general TBoxes are used. In this talk\, I will describe an approach that uses automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. I will start by sketching an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes\, based on the use of looping tree automata. Afterwards\, I will explain how to employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer w.r.t. general TBoxes.
DTSTAMP:20181120T122106
SEQUENCE:27159
END:VEVENT
BEGIN:VEVENT
SUMMARY:From Horn-SRIQ to Datalog: A Data-Independent Transformation that Preserves Assertion Entailment
URL://iccl.inf.tu-dresden.de/web/From_Horn-SRIQ_to_Datalog:_A_Data-Independent_Transformation_that_Preserves_Assertion_Entailment
UID://iccl.inf.tu-dresden.de/web/From_Horn-SRIQ_to_Datalog:_A_Data-Independent_Transformation_that_Preserves_Assertion_Entailment
DTSTART:20181108T130000
DTEND:20181108T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Ontology-based access to large data sets has recently gained a lot of attention. To access data efficiently\, one approach is to rewrite the ontology into Datalog\, and then use powerful Datalog engines to compute implicit entailments. Existing rewriting techniques support Description Logics (DLs) from ELH to Horn-SHIQ. We go one step further and present one such data-independent rewriting technique for Horn-SRIQ\, the extension of Horn-SHIQ that supports non-transitive complex roles---an expressive feature prominently used in many real-world ontologies. We evaluated our rewriting technique on a large\, well-known corpus of ontologies. Our experiments show that the resulting rewritings are of moderate size and that our approach is more efficient than state-of-the-art DL reasoners when reasoning with data-intensive ontologies.\n\n\nThis is joint work with Larry González and Patrick Koopman. It has been accepted at AAAI 2019.
DTSTAMP:20181104T111047
SEQUENCE:27061
END:VEVENT
END:VCALENDAR