BEGIN:VCALENDAR
PRODID:-//SMW Project//Semantic Result Formats
VERSION:2.0
METHOD:PUBLISH
X-WR-CALNAME:ICCL KBS_Seminar
BEGIN:VEVENT
SUMMARY:Notation3 Logic: From informal to formal semantics
URL://iccl.inf.tu-dresden.de/web/Notation3_Logic:_From_informal_to_formal_semantics
UID://iccl.inf.tu-dresden.de/web/Notation3_Logic:_From_informal_to_formal_semantics
DTSTART:20200730T130000
DTEND:20200730T143000
LOCATION:Online
DESCRIPTION:Notation3 Logic is a rule-based extension of RDF. Since its invention\, the logic has been refined and applied in several reasoning engines such as EYE\, Cwm and FuXi. Despite these developments\, a clear formal definition of Notation3’s semantics is still missing\, and the details of the logic are only defined informally. This lack of formalisation not only causes theoretical problems - the relationship to other logics cannot be investigated - it also has practical consequences: in many cases\, the interpretations of the same formula differ between reasoning engines. In this talk\, I will explain these differences and discuss how the formal semantics of the logic can be defined based on the informal specifications and the implementations.\n\n\nThis talk will be held online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200727T072445
SEQUENCE:31071
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Complexity of Synthesis of nop-Free Boolean Petri Nets.
URL://iccl.inf.tu-dresden.de/web/On_the_Complexity_of_Synthesis_of_nop-Free_Boolean_Petri_Nets.
UID://iccl.inf.tu-dresden.de/web/On_the_Complexity_of_Synthesis_of_nop-Free_Boolean_Petri_Nets.
DTSTART:20200724T130000
DTEND:20200724T143000
LOCATION:Online
DESCRIPTION:In a Boolean Petri net\, the interaction nop allows places and transitions to be independent\, so that the firing of a transition does not affect the marking of a place\, and vice versa. Recently\, the complexity of synthesis of nets equipped with nop has been investigated thoroughly\, while the question for the remaining 128 types of nets is still open. This work tackles the synthesis of nop-free nets\, that is\, the Boolean nets where places and transitions are always related via interactions that are able to modify the marking of a place. In this paper\, we show that\, for nop-free nets\, the absence of swap always leads to a polynomial-time synthesis procedure. Moreover\, we give a first hint that the presence of swap might make synthesis for these types NP-complete.\n\n\nThis talk will be held online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200722T203026
SEQUENCE:31066
END:VEVENT
BEGIN:VEVENT
SUMMARY:Imprecise Probabilities in Decision-making
URL://iccl.inf.tu-dresden.de/web/Imprecise_Probabilities_in_Decision-making
UID://iccl.inf.tu-dresden.de/web/Imprecise_Probabilities_in_Decision-making
DTSTART:20200709T130000
DTEND:20200709T143000
LOCATION:Online
DESCRIPTION:An agent’s beliefs come in different strengths. We understand her degree of belief in a proposition as a measure of the strength of her belief in that proposition. According to the orthodox Bayesian picture\, an agent's degree of belief is best represented by a single probability function. In particular\, the Bayesian claims that agents must assign numerically precise probabilities to every proposition that they can entertain. On an alternative account\, an agent’s beliefs ought to be modeled using imprecise probabilities: imprecise degrees of belief can be represented by a set of probability functions.\n\nRecently\, however\, imprecise probabilities have come under attack. Adam Elga (2010) claims that there is no adequate account of the way they can be manifested in decision-making. In response to Elga\, more elaborate accounts of the imprecise framework have been developed. One of them is based on Supervaluationism\, originally a semantic approach to vague predicates. Still\, Seamus Bradley (2019) shows that those accounts that solve Elga’s problem have a more severe defect: they undermine a central motivation for introducing imprecise probabilities in the first place. The aim of my presentation is to modify the supervaluationist approach in such a way that it accounts for both Elga’s and Bradley’s challenges to the imprecise framework.\n\n\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200706T044025
SEQUENCE:30938
END:VEVENT
BEGIN:VEVENT
SUMMARY:ASNP: a tame fragment of existential second-order logic
URL://iccl.inf.tu-dresden.de/web/ASNP:_a_tame_fragment_of_existential_second-order_logic
UID://iccl.inf.tu-dresden.de/web/ASNP:_a_tame_fragment_of_existential_second-order_logic
DTSTART:20200625T130000
DTEND:20200625T143000
LOCATION:Digital
DESCRIPTION:Amalgamation SNP (ASNP) is a fragment of existential second-order logic that strictly contains binary connected MMSNP of Feder and Vardi and binary connected guarded monotone SNP of Bienvenu\, ten Cate\, Lutz\, and Wolter\; it is a promising candidate for an expressive subclass of NP that exhibits a complexity dichotomy. We show that ASNP has a complexity dichotomy if and only if the infinite-domain dichotomy conjecture holds for constraint satisfaction problems for first-order reducts of binary finitely bounded homogeneous structures. For such CSPs\, powerful universal-algebraic hardness conditions are known that are conjectured to describe the border between NP-hard and polynomial-time tractable CSPs. The connection to CSPs also implies that every ASNP sentence can be evaluated in polynomial time on classes of finite structures of bounded treewidth. We show that the syntax of ASNP is decidable. The proof relies on the fact that for classes of finite binary structures given by finitely many forbidden substructures\, the amalgamation property is decidable.\n\n\nThis will be a test talk for the presentation of the eponymous paper: a 20-minute prerecorded talk (as it will be presented at the conference)\, followed by a Q&A session in which questions will be answered by Simon Knäuer\, one of the authors. Feedback and suggestions in preparation for the conference talk are strongly encouraged.\n\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200622T054325
SEQUENCE:30849
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Method of Refinement: Deriving Proof-Calculi from Semantics for Multi-Modal Logics
URL://iccl.inf.tu-dresden.de/web/The_Method_of_Refinement:_Deriving_Proof-Calculi_from_Semantics_for_Multi-Modal_Logics
UID://iccl.inf.tu-dresden.de/web/The_Method_of_Refinement:_Deriving_Proof-Calculi_from_Semantics_for_Multi-Modal_Logics
DTSTART:20200611T130000
DTEND:20200611T143000
LOCATION:Digital
DESCRIPTION:In this talk\, we look at how to derive nested calculi from labelled calculi for multi-modal logics\, thus connecting the general results for labelled calculi with the more refined formalism of nested sequents. Since labelled calculi are constructed by transforming the semantics of a logic into a proof system\, the method of refinement shows how one can derive simplified calculi from the semantics of a logic. The extraction of nested calculi from labelled calculi is obtained via considerations pertaining to the elimination of structural rules in labelled derivations. As a consequence of the extraction process\, each nested calculus inherits fundamental proof-theoretic properties from its associated labelled calculus and is suitable for applications such as proving decidability and interpolation.\n\nTim is currently working at the Faculty of Informatics of TU Wien in Vienna as a PreDoc Researcher in the research unit "Theory and Logic".\n\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200520T082459
SEQUENCE:30588
END:VEVENT
BEGIN:VEVENT
SUMMARY:Compositional Matrix-Space Models: Learning Methods and Evaluation
URL://iccl.inf.tu-dresden.de/web/Compositional_Matrix-Space_Models:_Learning_Methods_and_Evaluation
UID://iccl.inf.tu-dresden.de/web/Compositional_Matrix-Space_Models:_Learning_Methods_and_Evaluation
DTSTART:20200528T130000
DTEND:20200528T143000
LOCATION:Digital
DESCRIPTION:There has been a lot of research on machine-readable representations of words for natural language processing (NLP). One mainstream paradigm for word meaning representation comprises vector-space models obtained from the distributional information of words in text. Machine learning techniques have been proposed to produce such word representations for computational linguistic tasks. Moreover\, the representation of multi-word structures\, such as phrases\, in vector space can arguably be achieved by composing the distributional representations of the constituent words. To this end\, mathematical operations have been introduced as composition methods in vector space. An alternative approach to word representation and semantic compositionality in natural language has been compositional matrix-space models. In this thesis\, two research directions are considered.\n\nIn the first\, considering compositional matrix-space models\, we explore word meaning representations and semantic composition of multi-word structures in matrix space. The main motivation for working on these models is that they have shown superiority over vector-space models regarding several properties. The most important property is that the composition operation in matrix-space models can be defined as standard matrix multiplication\; in contrast to common vector-space composition operations\, this is sensitive to word order in language. We design and develop machine learning techniques that induce continuous and numeric representations of natural language in matrix space. The main goal in introducing representation models is enabling NLP systems to understand natural language in order to solve multiple related tasks. Therefore\, first\, different supervised machine learning approaches are proposed to train word meaning representations and capture the compositionality of multi-word structures using the matrix multiplication of words. The performance of matrix representation models learned by machine learning techniques is investigated in solving two NLP tasks\, namely sentiment analysis and compositionality detection. Then\, learning techniques for matrix-space models are proposed that introduce generic\, task-agnostic representation models\, also called word matrix embeddings. In these techniques\, word matrices are trained using the distributional information of words in a given text corpus. We show the effectiveness of these models in the compositional representation of multi-word structures in natural language.\n\nThe second research direction in this thesis explores effective approaches for evaluating the capability of semantic composition methods in capturing the meaning representation of compositional multi-word structures\, such as phrases. A common evaluation approach is examining the ability of the methods to capture the semantic relatedness between linguistic units. The underlying assumption is that the more accurately a method of semantic composition can determine the representation of a phrase\, the more accurately it can determine the relatedness of that phrase to other phrases. To apply the semantic relatedness approach\, gold standard datasets have been introduced. In this thesis\, we identify the limitations of the existing datasets and develop a new gold standard semantic relatedness dataset\, which addresses the issues of the existing datasets. The proposed dataset allows us to evaluate meaning composition in vector- and matrix-space models.\n\nThe presentation will take 45 minutes without questions. Afterwards there will be a Q&A.\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200508T121859
SEQUENCE:30495
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tracking False Information Online
URL://iccl.inf.tu-dresden.de/web/Tracking_False_Information_Online
UID://iccl.inf.tu-dresden.de/web/Tracking_False_Information_Online
DTSTART:20200227T161500
DTEND:20200227T174500
LOCATION:APB 1004
DESCRIPTION:Digital media enables fast sharing of information and discussions among users. While this comes with many benefits to today’s society\, such as broadening information access\, the manner in which information is disseminated also has obvious downsides. Since fast access to information is expected by many users and news outlets are often under financial pressure\, speedy access often comes at the expense of accuracy\, which leads to misinformation. Moreover\, digital media can be misused by campaigns to intentionally spread false information\, i.e. disinformation\, about events\, individuals or governments. In this talk\, I will present different ways false information is spread online\, including misinformation and disinformation. I will then report findings from our recent and ongoing work on automatic fact checking\, stance detection and framing attitudes.
DTSTAMP:20200220T161257
SEQUENCE:30135
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Graph Curation and Reasoning using the Example of the Scholarly Domain
URL://iccl.inf.tu-dresden.de/web/Knowledge_Graph_Curation_and_Reasoning_using_the_Example_of_the_Scholarly_Domain
UID://iccl.inf.tu-dresden.de/web/Knowledge_Graph_Curation_and_Reasoning_using_the_Example_of_the_Scholarly_Domain
DTSTART:20200130T130000
DTEND:20200130T143000
LOCATION:APB 3027
DESCRIPTION:Knowledge graphs allow organisations and enterprises to integrate their internal and external heterogeneous sources of information into a unified form and enable analytics and the discovery of unknown knowledge. To exploit the information encoded in knowledge graphs\, analysis of the graph structure as well as of the semantics of the represented relations is required. I will show this using the scholarly domain as an example. The heterogeneity of scholarly artifacts and their metadata\, spread over different Web data sources\, serves as a great use-case platform for data analytics and reasoning methods. In this talk\, I will first look at major challenges of this domain concerning KG creation and curation leveraging Semantic Web technologies. I will further showcase the application of knowledge graph embedding models for link prediction scenarios in this domain.
DTSTAMP:20200123T134125
SEQUENCE:30034
END:VEVENT
BEGIN:VEVENT
SUMMARY:Modeling Computational Properties of Description Logics in ASP
URL://iccl.inf.tu-dresden.de/web/Modeling_Computational_Properties_of_Description_Logics_in_ASP
UID://iccl.inf.tu-dresden.de/web/Modeling_Computational_Properties_of_Description_Logics_in_ASP
DTSTART:20200129T090000
DTEND:20200129T103000
LOCATION:APB 3027
DESCRIPTION:Tracking the increasing volume of research results about Description Logics is getting harder. Moreover\, those results interact\, and new results can be deduced from them. That is why we need a knowledge base that encodes those results in a smart way and infers further results based on what we currently know. This talk presents an approach to encoding such information with the help of Answer Set Programming (ASP). In addition\, we show how such a system can be integrated into a website that visualizes the current research results and the inferences made based on them. We end by analyzing this approach and suggesting some future work.
DTSTAMP:20200129T094156
SEQUENCE:30083
END:VEVENT
BEGIN:VEVENT
SUMMARY:Checking Chase Termination over Ontologies of Existential Rules with Equality
URL://iccl.inf.tu-dresden.de/web/Checking_Chase_Termination_over_Ontologies_of_Existential_Rules_with_Equality
UID://iccl.inf.tu-dresden.de/web/Checking_Chase_Termination_over_Ontologies_of_Existential_Rules_with_Equality
DTSTART:20200123T130000
DTEND:20200123T143000
LOCATION:APB 3027
DESCRIPTION:The chase is a sound and complete algorithm for conjunctive query answering over ontologies of existential rules with equality. To enable its effective use\, we can apply acyclicity notions\; that is\, sufficient conditions that guarantee chase termination. Unfortunately\, most of these notions have only been defined for existential rule sets without equality. A proposed solution to circumvent this issue is to treat equality as an ordinary predicate with an explicit axiomatisation. We empirically show that this solution is not efficient in practice and propose an alternative approach. More precisely\, we show that\, if the chase terminates for any equality axiomatisation of an ontology\, then it terminates for the original ontology (which may contain equality). Therefore\, one can apply existing acyclicity notions to check chase termination over an axiomatisation of an ontology and then use the original ontology for reasoning. We show that\, in practice\, doing so results in a more efficient reasoning procedure. Furthermore\, we present equality model-faithful acyclicity\, a general acyclicity notion that can be directly applied to ontologies with equality.\n\n\nThis talk is a rehearsal for AAAI 2020. \nJoint work with Jacopo Urbani.
DTSTAMP:20200107T165627
SEQUENCE:29972
END:VEVENT
BEGIN:VEVENT
SUMMARY:Musings on the Semantics of SPARQL
URL://iccl.inf.tu-dresden.de/web/Musings_on_the_Semantics_of_SPARQL
UID://iccl.inf.tu-dresden.de/web/Musings_on_the_Semantics_of_SPARQL
DTSTART:20200109T130000
DTEND:20200109T143000
LOCATION:APB 3027
DESCRIPTION:Graph simulations have found their way into different graph database management (GDBM) tasks\, e.g.\, in the shape of offline indexing structures\, as theoretical models for graph schemas\, or as viable alternatives to matching patterns up to graph homomorphisms. Among other advantages\, it is often the tractability of the simulation problem that is exploited in emerging applications. However\, when it comes to evaluating the approaches\, only basic graph patterns (BGPs) and rather small data instances\, compared to today's large data instances like Wikidata or DBpedia\, are considered. In the first part of this talk\, I give some insights into how far graph simulations may be incorporated into full-fledged graph query processing. To this end\, we analyze different semantic interpretations of SPARQL\, based on graph simulation\, w.r.t. correctness\, complexity\, and effectiveness. Second\, I briefly sketch why state-of-the-art simulation algorithms do not scale well in the graph query/data setting. I further show the effects of a devised solution that even integrates well with the SPARQL semantics we envisioned in the first part.
DTSTAMP:20200108T113625
SEQUENCE:29973
END:VEVENT
BEGIN:VEVENT
SUMMARY:SCF2 - an Argumentation Semantics for Rational Human Judgments on Argument Acceptability
URL://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
UID://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
DTSTART:20191219T130000
DTEND:20191219T143000
LOCATION:APB 3027
DESCRIPTION:In abstract argumentation theory\, many argumentation semantics have been proposed for evaluating argumentation frameworks. This paper is based on the following research question: Which semantics corresponds well to what humans consider a rational judgment on the acceptability of arguments? There are two systematic ways to approach this research question: A normative perspective is provided by the principle-based approach\, in which semantics are evaluated based on their satisfaction of various normatively desirable principles. A descriptive perspective is provided by the empirical approach\, in which cognitive studies are conducted to determine which semantics best predicts human judgments about arguments. In this paper\, we combine both approaches to motivate a new argumentation semantics called SCF2. For this purpose\, we introduce and motivate two new principles and show that no semantics from the literature satisfies both of them. We define SCF2 and prove that it satisfies both new principles. Furthermore\, we discuss findings of a recent empirical cognitive study that provide additional support to SCF2.
DTSTAMP:20191107T084228
SEQUENCE:29685
END:VEVENT
BEGIN:VEVENT
SUMMARY:Standpoint logic: a multi-modal logic for reasoning within semantic indeterminacy
URL://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
UID://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
DTSTART:20191212T130000
DTEND:20191212T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Standpoint logic is a multi-modal logic intended for reasoning with different interpretations of semantically heterogeneous terms. The framework offers an alternative to “fuzzy” approaches to the representation of meaning and allows for the specification of “semantic commitments” and “penumbral connections”.\n\n\nIn this talk\, I will introduce the logic and provide an overview of its proof theory and semantics. I will demonstrate its expressivity in an application scenario in the forestry domain\, using data schemas from the repository Global Forest Watch and concepts from the ENVO ontology. I will finally discuss the complexity of the logic and some restrictions that could make implementations viable.
DTSTAMP:20191028T083947
SEQUENCE:29474
END:VEVENT
BEGIN:VEVENT
SUMMARY:TBA3
URL://iccl.inf.tu-dresden.de/web/TBA3
UID://iccl.inf.tu-dresden.de/web/TBA3
DTSTART:20191205T130000
DTEND:20191205T143000
LOCATION:APB 3027
DESCRIPTION:Suppose there is a database we have no direct access to\, but there are views of this database available to us\, defined by some queries Q_1\, Q_2\, ...\, Q_k. And we are given another query Q. Will we be able to compute Q using only the available views?\n\n\n\nThe above question\, call it "the question of determinacy"\, sounds almost philosophical. One can easily imagine a bearded man in himation chained to the wall of a cave\, watching the views projected on the wall and pondering whether\, from what he is able to see\, reality can be faithfully reconstructed.\n\nFor us\, though\, it is a database theory question. And a really well motivated one\, with motivations ranging from query evaluation plan optimization (where we prefer a positive answer) to privacy issues (where the preferred answer is negative).\n\nQuery determinacy is a broad topic\, with literally hundreds of papers published since the late 1980s. This talk is not going to be a "survey" (which would be impossible within a one-hour time frame\, and with this speaker)\, but rather the personal perspective of a person somewhat involved in the recent developments in the area.\n\nFirst I will explain how\, in the last 30+ years\, the question of determinacy was formalized. There are many parameters here: obviously one needs to choose the query language of the queries Q_i and the query language of Q. But -- surprisingly -- there is also some choice regarding what the word "to compute" actually means in this context.\n\nThen I will concentrate on the variants of the decision problem of determinacy (for each choice of parameters there is one such problem -- Q_1\, Q_2\, ...\, Q_k and Q constitute the instance\, and the question is whether Q_1\, Q_2\, ...\, Q_k determine Q)\, and I will talk about how I understand the mechanisms rendering different variants of determinacy decidable or undecidable. This will be at a slightly informal level. No new theorems will be presented\, but I think I will be able to show simplified proofs of some of the earlier results.\n\nThis is a preview of the [https://diku-dk.github.io/edbticdt2020/?contents=invited_ICDT_talk.html invited talk at ICDT 2020].
DTSTAMP:20191202T153202
SEQUENCE:29823
END:VEVENT
BEGIN:VEVENT
SUMMARY:TBA4
URL://iccl.inf.tu-dresden.de/web/TBA4
UID://iccl.inf.tu-dresden.de/web/TBA4
DTSTART:20191128T130000
DTEND:20191128T143000
LOCATION:APB 3027
DESCRIPTION:The problem of deciding the validity of quantified Boolean formulas (QBF)\, known as QSAT\, is a lively research area in both theory and practice. In the field of parameterized algorithmics\, the well-studied graph measure treewidth has turned out to be a successful parameter. A well-known result by Chen in parameterized complexity is that QSAT\, when parameterized by the treewidth of the primal graph of the input formula together with the quantifier depth of the formula\, is fixed-parameter tractable. More precisely\, the runtime of such an algorithm is polynomial in the formula size and exponential in the treewidth\, where the exponential function in the treewidth is a tower whose height is the quantifier depth.\n\n\nA natural question is whether one can significantly improve these results and decrease the tower while assuming the Exponential Time Hypothesis (ETH). In recent years\, there has been growing interest in establishing lower bounds under ETH\, showing mostly problem-specific lower bounds up to the third level of the polynomial hierarchy. Still\, an important question is to settle this as generally as possible and to cover the whole polynomial hierarchy.\n\nIn this work\, we show lower bounds based on the ETH for arbitrary QBFs parameterized by treewidth (and quantifier depth). More formally\, we establish lower bounds for QSAT and treewidth\, namely\, that under ETH there cannot be an algorithm that solves QSAT of quantifier depth i in runtime significantly better than i-fold exponential in the treewidth and polynomial in the input size. In doing so\, we provide a versatile reduction technique to compress treewidth that encodes the essence of dynamic programming on arbitrary tree decompositions. Further\, we describe a general methodology for a more fine-grained analysis of problems parameterized by treewidth that lie at higher levels of the polynomial hierarchy.\n\n'''Authors:''' Johannes Klaus Fichte\, Markus Hecher\, Andreas Pfandler
DTSTAMP:20191128T115745
SEQUENCE:29795
END:VEVENT
BEGIN:VEVENT
SUMMARY:A diamond in the rough: Theorizing column stores
URL://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
UID://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
DTSTART:20191121T130000
DTEND:20191121T143000
LOCATION:APB 3105
DESCRIPTION:Column stores have been a 'neglected child' relative to traditional\, row-oriented\, relation-focused database management systems: the systems people came up with them\, and the theoreticians did not really give them the time of day. This talk will discuss what happens when we pick up the slack and formalize a model for analytic computation with columns. In addition to sound conceptual grounding being its own aesthetic reward\, we will touch on some examples of how such a formalization enables architectural and performance improvements in real-life systems: seamless integration of decompression and query execution\; removal of special-case handling of different column features (such as nullability and variable-length elements)\; closure of query execution plans to partial execution\; et cetera.\n\nCentral to achieving such benefits will be the discussion of what constitutes a column\, how columns are to be represented\, and what they can represent.
DTSTAMP:20191107T103354
SEQUENCE:29693
END:VEVENT
BEGIN:VEVENT
SUMMARY:Can A.I. Provably Explain Itself? A gentle Introduction to Description Logics
URL://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
UID://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
DTSTART:20191114T130000
DTEND:20191114T143000
LOCATION:APB 3027
DESCRIPTION:The emergence of intelligent systems in self-driving cars\, planes\, medical diagnosis\, insurance and financial services\, among others\, has shown that when decisions are taken or suggested by automated systems\, it is essential that an explanation can be provided. The disconnect between how we make decisions and how machines make them\, and the fact that machines are making more and more decisions for us\, has given a new push for transparency in A.I. However\, the inner workings of machine learning algorithms remain difficult to understand\, and the methods of making these models explainable still require expensive human evaluation.\n\n\nOn the other hand\, knowledge representation based on description logics allows for describing the environment\, specifying constraints on the system states and detecting inconsistencies\, as well as operating on information from heterogeneous (possibly incomplete) data sources and reasoning about the knowledge of an application domain. Because of the conceptual difference from machine learning algorithms\, the description logic formalism is much closer to human reasoning and can be adapted to supply the user with the necessary explanations for a decision made. Additionally\, in order to model dynamic systems\, description logics have extensions that additionally enable temporal and probabilistic reasoning.\n\nIn this talk I will outline key pillars and basic principles of (onto)logical reasoning as well as its limitations. Finally\, I will say a few words about an initiative of TU Dresden and the Saarland Informatics Campus to jointly develop the concept of perspicuous computing and lay the scientific foundations for computerised systems that are able to explain their functioning clearly.
DTSTAMP:20191030T152752
SEQUENCE:29562
END:VEVENT
BEGIN:VEVENT
SUMMARY:Interface between Logical Analysis of Data and Formal Concept Analysis
URL://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
UID://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
DTSTART:20191024T130000
DTEND:20191024T143000
LOCATION:APB 3027
DESCRIPTION:Logical Analysis of Data and Formal Concept Analysis are separately developed methodologies based on different mathematical foundations. We show that the two methodologies utilize the same basic building blocks\, which enables us to develop an interface between them. We present some preliminary benefits of the interface\, most notably efficient algorithms for computing spanned patterns in Logical Analysis of Data using algorithms of Formal Concept Analysis.
DTSTAMP:20191021T094239
SEQUENCE:29357
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Dynamics in Social Environments
URL://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
UID://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
DTSTART:20190926T130000
DTEND:20190926T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:'''\nSocial media platforms\, taken in conjunction\, can be seen as complex networks\; in this context\, understanding how agents react to sentiments expressed by their connections is of great interest. Here\, we show how Network Knowledge Bases help represent the integration of multiple social networks\, and explore how information flow can be handled via belief revision operators for local (agent-specific) knowledge bases. We report on preliminary experiments on Twitter data showing that different agent types react differently to the same information — this is a first step toward developing tools to predict how agents behave as information flows in their social environment.\n\n'''Bio:'''\nMaria Vanina Martinez\, University of Buenos Aires\, Argentina.
DTSTAMP:20190923T154215
SEQUENCE:29160
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lecture on Partition Width
URL://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
UID://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
DTSTART:20190912T130000
DTEND:20190912T143000
LOCATION:APB 3027
DESCRIPTION:In this talk we will take an introductory glance at the notion of "partition width"\, first conceived by Achim Blumensath. As partition width is also closely related to a notion of decomposition of an arbitrary structure into a tree-like shape\, the so-called "partition refinement"\, we will also look at how both of these notions relate to more established notions of decomposition and width measures (namely tree decompositions\, tree width\, and clique width).
DTSTAMP:20190802T123628
SEQUENCE:28943
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mixing Description Logics in Privacy-Preserving Ontology Publishing
URL://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
UID://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
DTSTART:20190905T130000
DTEND:20190905T140000
LOCATION:APB 2026
DESCRIPTION:In previous work\, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an instance store\, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL . We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy\, and have shown how\, in the context mentioned above\, optimal compliant (safe) generalizations of a given concept can be computed. In the present paper\, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular\, we investigate the situations where the attacker’s knowledge is given by an or an concept. In both cases\, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of \, it turns out to be actually lower (polynomial) for the more expressive DL .\n\n\nJoint work with Franz Baader. This is also a test-talk for a presentation at KI 2019.
DTSTAMP:20190902T110018
SEQUENCE:29045
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Expressive Power of Description Logics with Cardinality Constraints on Finite and Infinite Sets
URL://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
UID://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
DTSTART:20190829T130000
DTEND:20190829T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' In recent work we have extended the description logic (DL) ALCQ by means of more expressive number restrictions using numerical and set constraints stated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). It has been shown that reasoning in the resulting DL\, called ALCSCC\, is PSpace-complete without a TBox and ExpTime-complete w.r.t. a general TBox. The semantics of ALCSCC is defined in terms of finitely branching interpretations\, that is\, interpretations where every element has only finitely many role successors. This condition was needed since QFBAPA considers only finite sets. In this paper\, we first introduce a variant of ALCSCC\, called ALCSCC∞\, in which we lift this requirement (inexpressible in first-order logic) and show that the complexity results for ALCSCC mentioned above are preserved. Nevertheless\, like ALCSCC\, ALCSCC∞ is not a fragment of first-order logic. The main contribution of this paper is to give a characterization of the first-order fragment of ALCSCC∞. The most important tool used in the proof of this result is a notion of bisimulation that characterizes this fragment.\n\n\nJoint work with Franz Baader.\nThis talk is a rehearsal for a presentation at FroCoS 2019.\nDuration: 25 minutes without questions.
DTSTAMP:20190820T085458
SEQUENCE:28999
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning
URL://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
DTSTART:20190801T130000
DTEND:20190801T140000
LOCATION:APB 3027
DESCRIPTION:Abstract: We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a presentation (15 minutes including questions) at IJCAI 2019.
DTSTAMP:20190731T095825
SEQUENCE:28893
END:VEVENT
BEGIN:VEVENT
SUMMARY:Worst-Case Optimal Querying of Very Expressive Description Logics with Path Expressions and Succinct Counting
URL://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
UID://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
DTSTART:20190730T130000
DTEND:20190730T133000
LOCATION:APB 3027
DESCRIPTION:'''Abstract.''' Among the most expressive knowledge representation formalisms are the description logics of the Z family. For well-behaved fragments of ZOIQ\, entailment of positive two-way regular path queries is well known to be 2EXPTIME-complete under the proviso of unary encoding of numbers in cardinality constraints. We show that this assumption can be dropped without an increase in complexity and EXPTIME-completeness can be achieved when bounding the number of query atoms\, using a novel reduction from query entailment to knowledge base satisfiability. These findings allow us to strengthen other results regarding query entailment and query containment problems in very expressive description logics. Our results also carry over to GC2\, the two-variable guarded fragment of first-order logic with counting quantifiers\, for which hitherto only conjunctive query entailment has been investigated.
DTSTAMP:20190718T124219
SEQUENCE:28841
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reasoning about disclosure in data integration in the presence of source constraints
URL://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
UID://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
DTSTART:20190718T130000
DTEND:20190718T140000
LOCATION:APB 3027
DESCRIPTION:Joint work with M. Benedikt\, P. Bourhis\, L. Jachiet\n\n\n'''Abstract:''' Data integration systems allow users to access data sitting in multiple sources by means of queries over a global schema\, related to the sources via mappings. Data sources often contain sensitive information\, and thus an analysis is needed to verify that a schema satisfies a privacy policy\, given as a set of queries whose answers should not be accessible to users. Such an analysis should take into account not only knowledge that an attacker may have about the mappings\, but also what they may know about the semantics of the sources. In this talk\, I'll discuss the impact that source constraints can have on disclosure analysis.\n\n'''Speaker bio:''' Michaël Thomazo (Inria\, DI ENS\, ENS\, CNRS\, PSL University)
DTSTAMP:20190712T163358
SEQUENCE:28796
END:VEVENT
BEGIN:VEVENT
SUMMARY:Epistemic Answer Set Programming
URL://iccl.inf.tu-dresden.de/web/Epistemic_Answer_Set_Programming
UID://iccl.inf.tu-dresden.de/web/Epistemic_Answer_Set_Programming
DTSTART:20190711T130000
DTEND:20190711T143000
LOCATION:APB 3027
DESCRIPTION:Today it is widely accepted by the logic programming community that answer set programming (ASP) requires more powerful introspective reasoning using modalities. Although there has been a long-lasting debate among researchers about how to correctly extend ASP with epistemic modal operators\, there is still no agreement on a fully satisfactory semantics that is able to offer intuitive results for epistemic logic programs. In this talk\, we introduce a recent epistemic extension of ASP called epistemic ASP (EASP)\, endowed with the epistemic answer set semantics: minimal (with respect to truth) models that are maximal under two different orderings that minimise knowledge. We then compare EASP with existing\, partly successful approaches in the literature\, showing the advantages and the novelties of the new semantics: compared to Gelfond's epistemic specifications (ES)\, EASP defines a sufficiently strong language of a simpler syntactic character. Its semantics\, based on a minimality criterion with respect to truth and knowledge\, is a natural and conservative generalisation of ASP's original answer set semantics. Moreover\, compared to all other semantic proposals for ES\, the epistemic answer set semantics provides a comprehensive solution to unintended results for epistemic logic programs including constraints. Finally\, we briefly discuss some formal properties of EASP such as epistemic splitting\, strong equivalence and foundedness.\n\n\n'''Speaker info:''' Ezgi Iraz Su\, IRIT (Lilac)\, Université de Toulouse 3 (Université Paul Sabatier)
DTSTAMP:20190625T121430
SEQUENCE:28697
END:VEVENT
BEGIN:VEVENT
SUMMARY:Introduction to p-adic numbers and analysis
URL://iccl.inf.tu-dresden.de/web/Introduction_to_p-adic_numbers_and_analysis
UID://iccl.inf.tu-dresden.de/web/Introduction_to_p-adic_numbers_and_analysis
DTSTART:20190710T133000
DTEND:20190710T150000
LOCATION:APB 3027
DESCRIPTION:The p-adic numbers (where p is a prime number) can be seen as one possible link between number theory and analysis. They therefore play an important role in various mathematical areas. The so-called strong triangle inequality of the p-adic absolute value (|\;a+b|\;≤max(|\;a|\;\,|\;b|\;) for p-adic numbers a and b) has many strange and surprising consequences.\nIn our talk\, we give the definition of p-adic numbers\, several elementary results on p-adic functional analysis\, and a short overview of possible applications.
DTSTAMP:20190707T075702
SEQUENCE:28768
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning (Extended Abstract)
URL://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
DTSTART:20190613T133000
DTEND:20190613T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190606T082710
SEQUENCE:28585
END:VEVENT
BEGIN:VEVENT
SUMMARY:Discovering Implicational Knowledge in Wikidata
URL://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
UID://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
DTSTART:20190613T130000
DTEND:20190613T133000
LOCATION:APB 3027
DESCRIPTION:Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Among the freely available knowledge graphs\, Wikidata stands out by being collaboratively edited and curated. Amidst the vast numbers of facts\, complex knowledge is just waiting to be discovered\, but the sheer size of Wikidata makes this infeasible for human editors. We apply Formal Concept Analysis to efficiently identify and succinctly represent comprehensible implications that are implicitly present in the data. As a first step\, we describe a systematic process to extract conceptual knowledge from Wikidata's complex data model\, thus providing a method for obtaining large real-world data sets for FCA. We conduct experiments that show the principal feasibility of the approach\, yet also illuminate some of the limitations\, and give examples of interesting knowledge discovered.\n\n\nThis will be a rehearsal talk for ICFCA-2019 (20 minutes including questions).
DTSTAMP:20190607T170543
SEQUENCE:28613
END:VEVENT
BEGIN:VEVENT
SUMMARY:Projection in a Description Logic of Context with Actions
URL://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
UID://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
DTSTART:20190606T130000
DTEND:20190606T134500
LOCATION:APB 3027
DESCRIPTION:Satyadharma Tirtarasa and Benjamin Zarrieß. '''Projection in a Description Logic of Context with Actions'''. In Proceedings of the 32nd International Workshop on Description Logics (DL'19)\, Oslo\, Norway\, June 2019. Springer. To appear.\n\n\n'''Abstract:''' Projection is the problem of checking whether the execution of a given sequence of actions will achieve its goal starting from some initial state. In this paper\, we study a setting that combines a two-dimensional Description Logic of context (ConDL) with an action formalism. We choose a well-studied ConDL in which both the possible states of a dynamical system itself (object level) and different context-dependent views on this system state (context level) are organised in relational structures and can be described using the usual DL constructs. To represent how such a system and its views evolve\, we introduce a suitable action formalism. It allows one to describe change on both levels. Furthermore\, the observable changes on the object level due to an action execution can also be context-dependent. We show that the formalism is well-behaved in the sense that projection has the same complexity as standard reasoning tasks when ALCO is the underlying DL.\n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190603T115201
SEQUENCE:28557
END:VEVENT
BEGIN:VEVENT
SUMMARY:Explorations into Belief State Compression
URL://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
UID://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
DTSTART:20190528T133000
DTEND:20190528T143000
LOCATION:APB 3027
DESCRIPTION:A knowledge base is an integral part of a logic-based artificial intelligence system. The size of the knowledge base has a great effect on the derivation time of a logic-based agent. In this thesis\, I present a variety of algorithms for a particular variant of knowledge base size reduction referred to as “Belief State Compression”. Each proposed algorithm can be “lossy” or “lossless” depending on the (in)ability to recover the removed information\; and “redundant” or “irredundant” with respect to the necessity of the remaining information in order to remain lossless. Belief state compression differs from previous approaches in at least three aspects. First\, it takes its objects to be support-structured sets of unconstrained\, rather than flat sets of syntactically constrained\, logical formulas\, which we refer to as belief states. Second\, classical notions of minimality and redundancy are replaced by weaker\, resource-bounded alternatives based on the support structure. Third\, in “lossy” variants of compression\, the compressed knowledge base logically implies only a practically-relevant subset of the original knowledge base. Six variants of belief state compression\, falling into three major classes\, are presented. Experimental results show that a combination of five of them results in mostly irredundant\, lossless compressions\, while maintaining reasonable run times.
DTSTAMP:20190523T202953
SEQUENCE:28517
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quine's Fluted Fragment
URL://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
UID://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
DTSTART:20190509T130000
DTEND:20190509T143000
DESCRIPTION:We consider the fluted fragment\, a decidable fragment of first-order logic with an unbounded number of variables\, originally identified in 1968 by W. V. Quine. We show that the satisfiability problem for this fragment has non-elementary complexity\, thus refuting an earlier published claim by W.C. Purdy that it is in NExpTime. More precisely\, we consider the intersection of the fluted fragment and the m-variable fragment of first-order logic\, for all non-negative m. We obtain upper and lower complexity bounds for this fragment that coincide for all m up to the value 4.\n\n\n'''Short bio:''' Ian Pratt-Hartmann studied mathematics and philosophy at [http://www.bnc.ox.ac.uk Brasenose College\, Oxford]\, and philosophy at [http://www.princeton.edu/main/ Princeton] and [http://www.stanford.edu/ Stanford] Universities\, gaining his PhD from Princeton. He is currently Senior Lecturer in the Department of Computer Science at the [http://www.manchester.ac.uk/ University of Manchester]. Since February 2014\, Dr. Pratt-Hartmann has held a joint appointment in the [http://informatyka.wmfi.uni.opole.pl/ Institute of Computer Science] at the [http://www.uni.opole.pl/ University of Opole]. His academic interests range widely over computational logic\, natural language semantics and artificial intelligence.
DTSTAMP:20190503T071228
SEQUENCE:28293
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data Science Use Cases for Lifestyle Banking
URL://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
UID://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
DTSTART:20190425T130000
DTEND:20190425T143000
LOCATION:APB 3027
DESCRIPTION:Abstract:\nTraditional banking concerns itself with risk understanding\, credit underwriting\, cash needs\, liquidity\, etc. Data science and machine learning have proved useful in the banking business for mitigating risk while targeting the right customers with cash needs. In this talk\, we will explore a lifestyle side of banking that goes beyond the traditional realm and delves into alternative signals of customers' needs. We will see example data science use cases that have been successfully implemented in the banking business at Siam Commercial Bank.
DTSTAMP:20190411T134324
SEQUENCE:28212
END:VEVENT
BEGIN:VEVENT
SUMMARY:Closed-World Semantics for Conjunctive Queries with Negation over ELH bottom Ontologies
URL://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
UID://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
DTSTART:20190418T130000
DTEND:20190418T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information\, however\, there exist only a few ontology-based approaches. Moreover\, these proposals conflate the closed-domain and closed-world assumptions\, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic ELH-bottom\, which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms\, which shows that query answering is possible in polynomial time in data complexity.\n\n\nThis work was awarded Best Paper at JELIA 2019: https://jelia2019.mat.unical.it/awards.
DTSTAMP:20190418T100913
SEQUENCE:28243
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Power of the Terminating Chase
URL://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
UID://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
DTSTART:20190411T130000
DTEND:20190411T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''':\nThe chase has become a staple of modern database theory with applications in data integration\, query optimisation\, data exchange\, ontology-based query answering\, and many other areas. Most application scenarios and implementations require the chase to terminate and produce a finite universal model\, and a large arsenal of sufficient termination criteria is available to guarantee this (generally undecidable) condition. In this invited tutorial\, we therefore ask about the expressive power of logical theories for which the chase terminates. Specifically\, which database properties can be recognised by such theories\, i.e.\, which Boolean queries can they realise? For the skolem (semi-oblivious) chase\, and almost any known termination criterion\, this expressivity is just that of plain Datalog. Surprisingly\, this limitation of most prior research does not apply to the chase in general. Indeed\, we show that standard-chase-terminating theories can realise queries with data complexities ranging from PTime to non-elementary that are out of reach for the terminating skolem chase. A "Datalog-first" standard chase that prioritises applications of rules without existential quantifiers makes modelling simpler and\, we conjecture\, computationally more efficient. This is one of the many open questions raised by our insights\, and we conclude with an outlook on the research opportunities in this area.\n\nThis work has been published and presented at ICDT 2019\, Lisbon\, Portugal.
DTSTAMP:20190327T143510
SEQUENCE:27953
END:VEVENT
BEGIN:VEVENT
SUMMARY:Making sense of conflicting defeasible rules in the controlled natural language ACE: design of a system with support for existential quantification using skolemization
URL://iccl.inf.tu-dresden.de/web/Making_sense_of_conflicting_defeasible_rules_in_the_controlled_natural_language_ACE:_design_of_a_system_with_support_for_existential_quantification_using_skolemization
UID://iccl.inf.tu-dresden.de/web/Making_sense_of_conflicting_defeasible_rules_in_the_controlled_natural_language_ACE:_design_of_a_system_with_support_for_existential_quantification_using_skolemization
DTSTART:20190404T130000
DTEND:20190404T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We motivate and present the design of a system implementing what we (joint work with Hannes Strass\, previously at the University of Leipzig\, as well as Adam Z. Wyner at Swansea University) have dubbed the "EMIL" (acronym for "extracting meaning out of inconsistent language") pipeline. The pipeline in question takes potentially conflicting rules expressed in a fragment of a prominent controlled natural language\, ACE\, extended with means of expressing defeasible rules in the form of normality assumptions. It makes sense of such rules using a recently formulated argumentation-inspired semantics\, verbalising possible points of view that can plausibly be held based on the rules in ACE. The approach we describe is ultimately based on reductions to answer-set programming (ASP)\, simulating existential quantification via skolemization in a manner resembling a translation for ASP formalized in the context of ∃-ASP. We discuss the advantages of this approach over building on the existing ACE interface to rule systems\, ACERules.\n\n\n'''Bio:''' Martin Diller is finishing his PhD at TU Wien (Austria). The focus of his PhD has mainly been on implementing problems in (abstract and structured) argumentation via complexity-sensitive translations to logical formalisms (quantified boolean formulas and answer-set programming). He holds a joint MSc degree in computational logic from TU Dresden\, FU Bozen-Bolzano\, and TU Wien. Before that he studied Philosophy & Computer Science at Universidad Nacional de Córdoba\, Argentina\, and was also briefly part of research groups in Epistemology & Computer Science there. He has also been at several other institutes and universities for internships and research stays working on applied argumentation & automated reasoning: UCL in London (England)\, University of Aberdeen (Scotland)\, and NICTA-Canberra (Australia).
DTSTAMP:20190328T120428
SEQUENCE:27961
END:VEVENT
BEGIN:VEVENT
SUMMARY:Temporal Logics with Probabilistic Distributions
URL://iccl.inf.tu-dresden.de/web/Temporal_Logics_with_Probabilistic_Distributions2
UID://iccl.inf.tu-dresden.de/web/Temporal_Logics_with_Probabilistic_Distributions2
DTSTART:20190328T130000
DTEND:20190328T143000
LOCATION:APB 3027
DESCRIPTION:In many applications such as monitoring of dynamical systems\, the data are actually time-dependent\, e.g.\, describing the states of a dynamical system at different points in time. Moreover\, events are more or less likely to happen at certain time points\, depending on the type of event. There exist many well-studied distributions that characterise the nature of such events. For example\, according to a Pareto distribution\, a.k.a. a power-law distribution\, the longer something has gone on\, the longer we expect it to continue going on. Consider new companies or start-ups: either they fail (with high probability) during their first year of existence\, or\, if they manage to survive for decades\, their chances of collapse are extremely small.\n\n\nIn order to capture dynamic time-dependent and probabilistic patterns of knowledge\, using basic notions from probability theory and statistics\, we have introduced temporal logics of expectation\, where we can speak about statements not only eventually occurring in the future\, but also about when they are likely to happen. The resulting combination of a temporal DL-Lite fragment (with a two-dimensional semantics\, where one dimension is for time and the other for the DL domain) and an additional probabilistic constructor "distribution eventuality" is interpreted over multiple weighted worlds\, viz.\, temporal DL interpretations.
DTSTAMP:20190321T163823
SEQUENCE:27910
END:VEVENT
BEGIN:VEVENT
SUMMARY:TBA2
URL://iccl.inf.tu-dresden.de/web/TBA2
UID://iccl.inf.tu-dresden.de/web/TBA2
DTSTART:20190207T130000
DTEND:20190207T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''': Recently\, graph embeddings have been taken up by the community as a tool to solve various tasks in machine learning and the general AI community. In this talk\, I will give a gentle introduction to the topic and also give some pointers to currently ongoing research. We start by looking at why graph embeddings are needed in the first place and how they could be used. We will then focus on graphs containing a large variety of information\, typically called knowledge graphs\, often represented in RDF. These graphs are hard to embed (compared to\, e.g.\, uniform simple networks) because they contain multiple edge and vertex types\, relation directionality\, literals\, etc. We will cover a few basic techniques for computing such embeddings. We plan to look into at least one example of translation-based methods\, one from matrix decomposition\, and methods based on co-occurrence and statistical information. Finally\, we will discuss a couple of open problems and some of the topics currently being worked on.\n\n\n'''Short biography''': Michael Cochez is a postdoctoral researcher at the Fraunhofer Institute for Applied Information Technology FIT in Germany. In this position Michael is working on transferring research results from the academic world to industry. Besides the industry exposure\, he conducts research in areas related to data analysis and knowledge representation\, like knowledge graph embedding\, scalable clustering\, frequent itemset mining\, stream sampling\, prototype-based ontologies\, ontology matching\, and knowledge evolution. This research is currently mainly conducted at RWTH Aachen University\, Germany. Before joining Fraunhofer\, he obtained his Ph.D. degree from the University of Jyväskylä\, Finland\, under the supervision of Vagan Terziyan and Ferrante Neri (De Montfort University - Leicester). He obtained his master's degree from the same university and his bachelor's degree from the University of Antwerp\, Belgium. Michael Cochez is currently on a partial leave from a postdoc at the University of Jyväskylä and is also a scientific advisor for WE-OPT-IT Oy (former MyOpt Oy) in Finland.
DTSTAMP:20190201T103020
SEQUENCE:27647
END:VEVENT
BEGIN:VEVENT
SUMMARY:Learning Ontologies with Epistemic Reasoning: The EL Case
URL://iccl.inf.tu-dresden.de/web/Learning_Ontologies_with_Epistemic_Reasoning:_The_EL_Case
UID://iccl.inf.tu-dresden.de/web/Learning_Ontologies_with_Epistemic_Reasoning:_The_EL_Case
DTSTART:20190123T093000
DTEND:20190123T110000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''': We investigate the problem of learning description logic ontologies from entailments via queries\, using epistemic reasoning. We introduce a new learning model consisting of epistemic membership and example queries and show that polynomial learnability in this model coincides with polynomial learnability in Angluin’s exact learning model with membership and equivalence queries. We then instantiate our learning framework to EL and show some complexity results for an epistemic extension of EL where epistemic operators can be applied over the axioms. Finally\, we transfer known results for EL ontologies and its fragments to our learning model based on epistemic reasoning.
DTSTAMP:20190115T143531
SEQUENCE:27525
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ontology-Based Query Answering for Probabilistic Temporal Data
URL://iccl.inf.tu-dresden.de/web/Ontology-Based_Query_Answering_for_Probabilistic_Temporal_Data
UID://iccl.inf.tu-dresden.de/web/Ontology-Based_Query_Answering_for_Probabilistic_Temporal_Data
DTSTART:20190117T130000
DTEND:20190117T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We investigate ontology-based query answering for data that are both temporal and probabilistic\, which might occur in contexts such as stream reasoning or situation recognition with uncertain data. We present a framework that allows one to represent temporal probabilistic data\, and introduce a query language with which complex temporal and probabilistic patterns can be described. Specifically\, this language combines conjunctive queries with operators from linear time logic as well as probability operators. We analyse the complexities of evaluating queries in this language in various settings. While in some cases\, combining the temporal and the probabilistic dimension in such a way comes at the cost of increased complexity\, we also determine cases for which this increase can be avoided.\n\nThis is a talk based on a paper that will be presented at this year’s AAAI conference.
DTSTAMP:20190114T122705
SEQUENCE:27507
END:VEVENT
BEGIN:VEVENT
SUMMARY:Privacy-Preserving Ontology Publishing for EL Instance Stores
URL://iccl.inf.tu-dresden.de/web/Privacy-Preserving_Ontology_Publishing_for_EL_Instance_Stores
UID://iccl.inf.tu-dresden.de/web/Privacy-Preserving_Ontology_Publishing_for_EL_Instance_Stores
DTSTART:20190110T130000
DTEND:20190110T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL EL\, which corresponds to the setting where the ontology is an EL instance store. We introduce the notions of compliance of a concept with a policy and of the safety of a concept for a policy and show how optimal compliant (safe) generalizations of a given EL concept can be computed. In addition\, we investigate the complexity of the optimality problem.\n\n \nThis is joint work with Franz Baader and Francesco Kriegel.
DTSTAMP:20190109T151057
SEQUENCE:27479
END:VEVENT
BEGIN:VEVENT
SUMMARY:Embodied Terminology: Language\, Knowledge\, and Cognition
URL://iccl.inf.tu-dresden.de/web/Embodied_Terminology:_Language,_Knowledge,_and_Cognition
UID://iccl.inf.tu-dresden.de/web/Embodied_Terminology:_Language,_Knowledge,_and_Cognition
DTSTART:20181206T130000
DTEND:20181206T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Meaning formation in specialized language is still an open puzzle\, and several methods have been proposed to piece it together. For such a method\, this talk looks to the paradigm of embodied cognition\, which holds that cognitive processes\, including language production and understanding\, are deeply rooted in physical interactions with the world. More specifically\, it looks at image schemas\, which capture recurrent sensorimotor patterns that give coherence and structure to our experiences and shape our language and knowledge. Potential theoretical contributions of embodied cognition to meaning formation in specialized languages are discussed alongside automated methods for the identification of image schemas in natural languages. A coherent\, robust\, and language-agnostic theory of and method for embodied terminology holds the promise to boost socio-economically effective\, cognitively grounded\, and technologically powerful terminology management and translation technologies.\n\n\n'''Title of the lecture:''' Translatorische Terminologiewissenschaft und Übersetzungstechnologien (Translational Terminology Science and Translation Technologies\; held in German)\n\n'''Lecture description:''' This demonstration lesson will be held in German\, as required for the hearing\, and will represent the second lesson of the Master-level lecture on translational terminology science and translation technologies. Technologies of specialized communication will be discussed\, with a particular focus on multilingual\, systematic\, and onomasiological terminology management.\n\n'''Details:''' This '''research talk''' and the following '''demonstration lesson''' represent a trial run for a hearing within the application procedure for a '''tenure-track professorship''' for terminology science and translation technologies. Please join and ask many challenging questions.
DTSTAMP:20181203T100011
SEQUENCE:27219
END:VEVENT
BEGIN:VEVENT
SUMMARY:Satisfiability in the Triguarded Fragment of First-Order Logic
URL://iccl.inf.tu-dresden.de/web/Satisfiability_in_the_Triguarded_Fragment_of_First-Order_Logic
UID://iccl.inf.tu-dresden.de/web/Satisfiability_in_the_Triguarded_Fragment_of_First-Order_Logic
DTSTART:20181129T130000
DTEND:20181129T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Most Description Logics (DLs) can be translated into well-known decidable fragments of first-order logic FO\, including the guarded fragment GF and the two-variable fragment FO2. Given their prominence in DL research\, we take a closer look at GF and FO2\, and present a new fragment that subsumes both. This fragment\, called the triguarded fragment (denoted TGF)\, is obtained by relaxing the standard definition of GF: quantification is required to be guarded only for subformulae with three or more free variables. We show that satisfiability of equality-free TGF is N2ExpTime-complete\, but becomes NExpTime-complete if we bound the arity of predicates by a constant (a natural assumption in the context of DLs). Finally\, we observe that many natural extensions of TGF\, including the addition of equality\, lead to undecidability.\n\n\nThis talk is a presentation given at the 31st International Workshop on Description Logics\, 2018.
DTSTAMP:20181104T132422
SEQUENCE:27062
END:VEVENT
BEGIN:VEVENT
SUMMARY:TBA
URL://iccl.inf.tu-dresden.de/web/TBA
UID://iccl.inf.tu-dresden.de/web/TBA
DTSTART:20181115T130000
DTEND:20181115T143000
LOCATION:APB 3027
DESCRIPTION:Although quite inexpressive\, the description logic (DL) FL0\, which provides only conjunction\, value restriction and the top concept as concept constructors\, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete\, and becomes even ExpTime-complete if general TBoxes are used. In this talk\, I will describe an approach that uses automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. I will start by sketching an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes based on the use of looping tree automata. Afterwards\, I will explain how to employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer w.r.t. general TBoxes.
DTSTAMP:20181120T122106
SEQUENCE:27159
END:VEVENT
BEGIN:VEVENT
SUMMARY:From Horn-SRIQ to Datalog: A Data-Independent Transformation that Preserves Assertion Entailment
URL://iccl.inf.tu-dresden.de/web/From_Horn-SRIQ_to_Datalog:_A_Data-Independent_Transformation_that_Preserves_Assertion_Entailment
UID://iccl.inf.tu-dresden.de/web/From_Horn-SRIQ_to_Datalog:_A_Data-Independent_Transformation_that_Preserves_Assertion_Entailment
DTSTART:20181108T130000
DTEND:20181108T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Ontology-based access to large data-sets has recently gained a lot of attention. To access data efficiently\, one approach is to rewrite the ontology into Datalog\, and then use powerful Datalog engines to compute implicit entailments. Existing rewriting techniques support Description Logics (DLs) from ELH to Horn-SHIQ. We go one step further and present one such data-independent rewriting technique for Horn-SRIQ\, the extension of Horn-SHIQ that supports non-transitive\, complex roles\, an expressive feature prominently used in many real-world ontologies. We evaluated our rewriting technique on a large corpus of known ontologies. Our experiments show that the resulting rewritings are of moderate size and that our approach is more efficient than state-of-the-art DL reasoners when reasoning with data-intensive ontologies.\n\n\nThis is joint work with Larry González and Patrick Koopman. It has been accepted at AAAI 2019.
DTSTAMP:20181104T111047
SEQUENCE:27061
END:VEVENT
BEGIN:VEVENT
SUMMARY:Temporal constraint satisfaction problems in least fixed point logic
URL://iccl.inf.tu-dresden.de/web/Temporal_constraint_satisfaction_problems_in_least_fixed_point_logic
UID://iccl.inf.tu-dresden.de/web/Temporal_constraint_satisfaction_problems_in_least_fixed_point_logic
DTSTART:20181101T130000
DTEND:20181101T143000
LOCATION:APB 3027
DESCRIPTION:The constraint satisfaction problem (CSP) for a fixed structure L with finite relational signature is the computational problem of deciding whether a given finite structure of the same signature homomorphically maps to L. A temporal constraint language is a structure over the rational numbers Q whose relations are first-order definable in (Q\;<). In 2009\, Bodirsky and Kara presented a complete classification of the computational complexity of CSPs for temporal constraint languages. In contrast to finite domain structures\, there are temporal constraint languages whose CSP cannot be solved by any Datalog program but can be expressed in least fixed point logic (LFP). An example is CSP(Q\; { (x\,y\,z)\, where x>y or x>z} )\, known as the and/or scheduling problem. I will give a proof of a dichotomy for LFP expressibility of CSPs of temporal constraint languages. For a temporal constraint language L\, exactly one of the following is true: either L interprets all finite structures primitively positively with parameters and CSP(L) is inexpressible in LFP\, or CSP(L) is inexpressible in LFP if and only if L admits a primitive positive definition of the relation X:={ (x\,y\,z)\, where x>y=z or y>z=x or z>x=y}.
DTSTAMP:20181029T100450
SEQUENCE:27007
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ontological Modelling in Wikidata
URL://iccl.inf.tu-dresden.de/web/Ontological_Modelling_in_Wikidata
UID://iccl.inf.tu-dresden.de/web/Ontological_Modelling_in_Wikidata
DTSTART:20181025T130000
DTEND:20181025T143000
LOCATION:APB 3027
DESCRIPTION:This was an invited keynote talk at the 9th Workshop on Ontology Design and Patterns (WOP'18)\, 2018.\n\n\n'''Abstract:''' Wikidata\, the knowledge base of Wikimedia\, has been extremely successful in building and sustaining new communities of editors and users. Since its inception in 2012\, it has developed from an experimental “data wiki” into a well-organised reference knowledge base with an amazing array of applications. Developing an ontological schema for such an open and rapidly expanding project is a huge undertaking\, and difficult challenges arise on many levels. The community has directed significant efforts towards vocabulary development\, many guidelines and rules have been created\, and tools are used for helping editors to avoid and correct modelling errors. Nevertheless\, the distributed nature of Wikidata editing often means that ontology design\, too\, is distributed\, and a coherent global view is only worked on once significant amounts of data have been added. The result is a knowledge graph with a widely varying modelling quality across different sub-domains.\n\nThe big question for researchers is how their insights and methods can help here. The Wikidata community is widely aware of semantic web activities and existing standards\, and academic publications play a role in many discussions. Yet\, there seems to be only little direct exchange between the communities. In this talk\, I will review the current state of Wikidata and its connection to semantic web standards such as RDF and SPARQL. I will try to raise awareness of the particular requirements of Wikidata\, and argue that these are of general interest for the data-driven curation of knowledge graphs.
DTSTAMP:20181019T105041
SEQUENCE:26935
END:VEVENT
BEGIN:VEVENT
SUMMARY:Making Repairs in Description Logics More Gentle
URL://iccl.inf.tu-dresden.de/web/Making_Repairs_in_Description_Logics_More_Gentle
UID://iccl.inf.tu-dresden.de/web/Making_Repairs_in_Description_Logics_More_Gentle
DTSTART:20181018T133000
DTEND:20181018T143000
LOCATION:APB 3027
DESCRIPTION:Abstract:\n"The classical approach for repairing a Description Logic ontology O in the sense of removing an unwanted consequence α is to delete a minimal number of axioms from O such that the resulting ontology O′ does not have the consequence α. However\, the complete deletion of axioms may be too rough\, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem\, we propose a more gentle notion of repair in which axioms are not deleted but only weakened. On the one hand\, we investigate the general properties of this gentle repair method. On the other hand\, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic EL."\n\nThis is a rehearsal talk for KR 2018.
DTSTAMP:20181018T083442
SEQUENCE:26904
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Combined Approach to Query Answering in Horn-ALCHOIQ2
URL://iccl.inf.tu-dresden.de/web/The_Combined_Approach_to_Query_Answering_in_Horn-ALCHOIQ2
UID://iccl.inf.tu-dresden.de/web/The_Combined_Approach_to_Query_Answering_in_Horn-ALCHOIQ2
DTSTART:20181018T130000
DTEND:20181018T133000
LOCATION:APB 3027
DESCRIPTION:Abstract:\n"Combined approaches have become a successful technique for solving conjunctive query (CQ) answering over description logic (DL) ontologies. Nevertheless\, existing approaches are restricted to tractable DL languages. In this work\, we extend the combined method to the more expressive DL Horn-ALCHOIQ\, a language for which CQ answering is ExpTime-complete\, in order to develop an efficient and scalable CQ answering procedure which is worst-case optimal for Horn-ALCHOIQ and ELHO ontologies. We implement and study the feasibility of our algorithm\, and compare its performance to the DL reasoner Konclude."\n\nThis is a rehearsal talk for KR 2018.
DTSTAMP:20181018T083532
SEQUENCE:26905
END:VEVENT
BEGIN:VEVENT
SUMMARY:Efficient Model Construction for Horn Logic with VLog: Extended Abstract
URL://iccl.inf.tu-dresden.de/web/Efficient_Model_Construction_for_Horn_Logic_with_VLog:_Extended_Abstract
UID://iccl.inf.tu-dresden.de/web/Efficient_Model_Construction_for_Horn_Logic_with_VLog:_Extended_Abstract
DTSTART:20181011T130000
DTEND:20181011T143000
LOCATION:APB 3027
DESCRIPTION:Abstract: "We extend the Datalog engine VLog to develop a column-oriented implementation of the Skolem and the restricted chase – two variants of a sound and complete algorithm used for model construction over theories of existential rules. We conduct an extensive evaluation over several data-intensive theories with millions of facts and thousands of rules\, and show that VLog can compete with the state of the art regarding runtime\, scalability\, and memory efficiency."\n\n\nThis is a rehearsal talk for the DL Workshop 2018.
DTSTAMP:20181018T013010
SEQUENCE:26721
END:VEVENT
END:VCALENDAR