BEGIN:VCALENDAR
PRODID:-//SMW Project//Semantic Result Formats
VERSION:2.0
METHOD:PUBLISH
X-WR-CALNAME:ICCL Events
BEGIN:VEVENT
SUMMARY:SCF2 - an Argumentation Semantics for Rational Human Judgments on Argument Acceptability
URL:https://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
UID://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
DTSTART:20191219T130000
DTEND:20191219T143000
LOCATION:APB 3027
DESCRIPTION:In abstract argumentation theory\, many argumentation semantics have been proposed for evaluating argumentation frameworks. This paper is based on the following research question: Which semantics corresponds well to what humans consider a rational judgment on the acceptability of arguments? There are two systematic ways to approach this research question: A normative perspective is provided by the principle-based approach\, in which semantics are evaluated based on their satisfaction of various normatively desirable principles. A descriptive perspective is provided by the empirical approach\, in which cognitive studies are conducted to determine which semantics best predicts human judgments about arguments. In this paper\, we combine both approaches to motivate a new argumentation semantics called SCF2. For this purpose\, we introduce and motivate two new principles and show that no semantics from the literature satisfies both of them. We define SCF2 and prove that it satisfies both new principles. Furthermore\, we discuss findings of a recent empirical cognitive study that provide additional support to SCF2.
DTSTAMP:20191107T084228
SEQUENCE:29685
END:VEVENT
BEGIN:VEVENT
SUMMARY:Standpoint logic: a multi-modal logic for reasoning within semantic indeterminacy
URL:https://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
UID://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
DTSTART:20191212T130000
DTEND:20191212T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Standpoint logic is a multi-modal logic intended for reasoning with different interpretations of semantically heterogeneous terms. The framework offers an alternative to “fuzzy” approaches to the representation of meaning and allows for the specification of “semantic commitments” and “penumbral connections”.\n\n\nIn this talk\, I will introduce the logic and provide an overview of its proof theory and semantics. I will demonstrate its expressivity in an application scenario in the forestry domain\, using data schemas from the repository Global Forest Watch and concepts from the ENVO ontology. I will finally discuss the complexity of the logic and some restrictions that could make implementations viable.
DTSTAMP:20191028T083947
SEQUENCE:29474
END:VEVENT
BEGIN:VEVENT
SUMMARY:TBA
URL:https://iccl.inf.tu-dresden.de/web/TBA3
UID://iccl.inf.tu-dresden.de/web/TBA3
DTSTART:20191205T130000
DTEND:20191205T143000
LOCATION:APB 3027
DESCRIPTION:TBA
DTSTAMP:20191017T123352
SEQUENCE:29338
END:VEVENT
BEGIN:VEVENT
SUMMARY:TE-ETH: Lower Bounds for QBFs of Bounded Treewidth
URL:https://iccl.inf.tu-dresden.de/web/TBA4
UID://iccl.inf.tu-dresden.de/web/TBA4
DTSTART:20191128T130000
DTEND:20191128T143000
LOCATION:APB 3027
DESCRIPTION:The problem of deciding the validity (QSAT) of quantified Boolean formulas (QBF) is a vivid research area in both theory and practice. In the field of parameterized algorithmics\, the well-studied graph measure treewidth has turned out to be a successful parameter. A well-known result by Chen in parameterized complexity is that QSAT\, parameterized by the treewidth of the primal graph of the input formula together with the quantifier depth of the formula\, is fixed-parameter tractable. More precisely\, the runtime of such an algorithm is polynomial in the formula size and exponential in the treewidth\, where the exponential function in the treewidth is a tower whose height is the quantifier depth. A natural question is whether one can significantly improve these results and decrease the tower while assuming the Exponential Time Hypothesis (ETH). In recent years\, there has been a growing interest in establishing lower bounds under ETH\, showing mostly problem-specific lower bounds up to the third level of the polynomial hierarchy. Still\, an important question is to settle this as generally as possible and to cover the whole polynomial hierarchy. In this work\, we show lower bounds based on the ETH for arbitrary QBFs parameterized by treewidth (and quantifier depth). More formally\, we establish lower bounds for QSAT and treewidth\, namely\, that under ETH there cannot be an algorithm that solves QSAT of quantifier depth i in runtime significantly better than i-fold exponential in the treewidth and polynomial in the input size. In doing so\, we provide a versatile reduction technique to compress treewidth that encodes the essence of dynamic programming on arbitrary tree decompositions. Further\, we describe a general methodology for a more fine-grained analysis of problems parameterized by treewidth that are at higher levels of the polynomial hierarchy.\n\n'''Authors:''' Johannes Klaus Fichte\, Markus Hecher\, Andreas Pfandler
DTSTAMP:20191030T103043
SEQUENCE:29546
END:VEVENT
BEGIN:VEVENT
SUMMARY:A diamond in the rough: Theorizing column stores
URL:https://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
UID://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
DTSTART:20191121T130000
DTEND:20191121T143000
LOCATION:APB 3105
DESCRIPTION:Column stores have been a 'neglected child' relative to traditional\, row-oriented\, relation-focused database management systems: Systems people came up with them\, and theoreticians did not really give them the time of day. This talk will discuss what happens when we pick up the slack and formalize a model for analytic computation with columns. In addition to sound conceptual grounding being its own aesthetic reward\, we will touch on some examples of how such a formalization enables architectural and performance improvements in real-life systems:\n\n\nSeamless integration of decompression and query execution\; removal of special-case handling of different column features (such as nullability and variable-length elements)\; closure of query execution plans under partial execution\; et cetera. Central to achieving such benefits will be the discussion of what constitutes a column\, how columns are to be represented\, and what they can represent.
DTSTAMP:20191107T103354
SEQUENCE:29693
END:VEVENT
BEGIN:VEVENT
SUMMARY:Can A.I. Provably Explain Itself? A gentle Introduction to Description Logics
URL:https://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
UID://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
DTSTART:20191114T130000
DTEND:20191114T143000
LOCATION:APB 3027
DESCRIPTION:The emergence of intelligent systems in self-driving cars\, planes\, medical diagnosis\, insurance and financial services\, among others\, has shown that when decisions are taken or suggested by automated systems\, it is essential that an explanation can be provided. The disconnect between how we make decisions and how machines make them\, and the fact that machines are making more and more decisions for us\, has given a new push for transparency in A.I. However\, the inner workings of machine learning algorithms remain difficult to understand\, and the methods of making these models explainable still require expensive human evaluation.\n\n\nOn the other hand\, knowledge representation based on description logics allows for providing descriptions of the environment\, specifying constraints for the system states and detecting inconsistencies\, as well as operating on information from heterogeneous (possibly incomplete) data sources\, and reasoning about the knowledge of an application domain. Because of this conceptual difference from machine learning algorithms\, the description logics formalism is much closer to human reasoning and can be adapted to supply the user with the necessary explanations for a decision made. Additionally\, in order to model dynamic systems\, description logics have extensions that additionally enable temporal and probabilistic reasoning.\n\nIn this talk I will outline key pillars and basic principles of (onto)logical reasoning as well as its limitations. Finally\, I will say a few words about an initiative of TU Dresden and the Saarland Informatics Campus\, which are jointly developing the concept of perspicuous computing and laying the scientific foundations for computerised systems that are able to express their functioning clearly.
DTSTAMP:20191030T152752
SEQUENCE:29562
END:VEVENT
BEGIN:VEVENT
SUMMARY:Interface between Logical Analysis of Data and Formal Concept Analysis
URL:https://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
UID://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
DTSTART:20191024T130000
DTEND:20191024T143000
LOCATION:APB 3027
DESCRIPTION:Logical Analysis of Data and Formal Concept Analysis are separately developed methodologies based on different mathematical foundations. We show that the two methodologies utilize the same basic building blocks. That enables us to develop an interface between the two methodologies. We demonstrate some preliminary benefits of the interface\; most notably\, efficient algorithms for computing spanned patterns in Logical Analysis of Data using algorithms of Formal Concept Analysis.
DTSTAMP:20191021T094239
SEQUENCE:29357
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Dynamics in Social Environments
URL:https://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
UID://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
DTSTART:20190926T130000
DTEND:20190926T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:'''\nSocial media platforms\, taken in conjunction\, can be seen as complex networks\; in this context\, understanding how agents react to sentiments expressed by their connections is of great interest. Here\, we show how Network Knowledge Bases help represent the integration of multiple social networks\, and explore how information flow can be handled via belief revision operators for local (agent-specific) knowledge bases. We report on preliminary experiments on Twitter data showing that different agent types react differently to the same information — this is a first step toward developing tools to predict how agents behave as information flows in their social environment.\n\n'''Bio:'''\nMaria Vanina Martinez\, University of Buenos Aires\, Argentina.
DTSTAMP:20190923T154215
SEQUENCE:29160
END:VEVENT
BEGIN:VEVENT
SUMMARY:A.M.B.R.O.S.I.A. - Conferring Immortality on Distributed Applications
URL:https://iccl.inf.tu-dresden.de/web/A.M.B.R.O.S.I.A._-_Conferring_Immortality_on_Distributed_Applications
UID://iccl.inf.tu-dresden.de/web/A.M.B.R.O.S.I.A._-_Conferring_Immortality_on_Distributed_Applications
DTSTART:20190920T130000
DTEND:20190920T140000
LOCATION:APB 3105
DESCRIPTION:'''Abstract:'''\n\nWhen writing today’s distributed programs\, which frequently span both devices and cloud services\, programmers are faced with complex decisions and coding tasks around coping with failure\, especially when these distributed components are stateful. If their application can be cast as pure data processing\, they benefit from the past 40-50 years of work from the database community\, which has shown how declarative database systems can completely isolate the developer from the possibility of failure in a performant manner. Unfortunately\, while there have been some attempts at bringing similar functionality into the more general distributed programming space\, a compelling general-purpose system must handle non-determinism\, be performant\, support a variety of machine types with varying resiliency goals\, and be language agnostic\, allowing distributed components written in different languages to communicate. This talk describes the first system to satisfy all these requirements\, called Ambrosia and publicly available on GitHub. We coin the term “virtual resiliency”\, analogous to virtual memory\, for the platform feature which allows failure-oblivious code to run in a failure-resilient manner. We also introduce a programming construct\, the “impulse”\, which resiliently handles non-deterministic information originating from outside the resilient component. Of further interest to our community is the effective reapplication of much database performance optimization technology to make Ambrosia more performant than many of today’s non-resilient cloud solutions.\n\n\n'''Bio:'''\n\nOver the last 20 years\, I have worked at Microsoft in a combination of research and product roles. In particular\, I’ve spent about 15 years as a researcher at MSR\, doing fundamental research in streaming\, big data processing\, databases\, and distributed computing. My style of working is to attack difficult problems and\, through fundamental understanding and insight\, create new artifacts that enable important problems to be solved in vastly better ways. For instance\, my work on streaming data processing enabled people with real-time data processing problems to specify their processing logic in new\, powerful ways\, and also resulted in an artifact called Trill\, which was orders of magnitude more performant than anything that preceded it. Within the academic community\, I have published many papers\, some with best paper awards (e.g. Best Paper Award at ICDE 2012)\, two with test of time awards (e.g. SIGMOD 2011 Test of Time award and ICDT 2018 Test of Time award)\, and have also taken many organizational roles in database conferences. My research has also had significant impact on many Microsoft products\, including SQL Server\, Office\, Windows\, Bing\, and Halo\, as well as leading to the creation of entirely new products like Microsoft StreamInsight\, Azure Stream Analytics\, Trill\, and most recently\, Ambrosia. I spent 5 years building Microsoft StreamInsight\, serving as a founder and architect for the product. Trill has become the de facto standard for temporal and stream data processing within Microsoft\, and years after creation\, is still the most expressive and performant general-purpose stream data processor in the world. I am also an inventor of 30+ patents.
DTSTAMP:20190918T094958
SEQUENCE:29122
END:VEVENT
BEGIN:VEVENT
SUMMARY:Young Scientist's Third International Workshop on Trends in Information Processing (YSIP3)
URL:https://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
UID://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
DTSTART:20190917T000000
DTEND:20190920T000000
LOCATION:Stavropol and Arkhyz\, Russian Federation
DESCRIPTION:We invite young scientists – typically master's or PhD students – to submit new scientific results – as\, for example\, obtained in their bachelor thesis\, project work\, master thesis\, or PhD project – in all areas of Information Processing.
DTSTAMP:20190117T101503
SEQUENCE:27542
END:VEVENT
BEGIN:VEVENT
SUMMARY:Automatic translation of clinical trial eligibility criteria into formal queries
URL:https://iccl.inf.tu-dresden.de/web/Automatic_translation_of_clinical_trial_eligibility_criteria_into_formal_queries
UID://iccl.inf.tu-dresden.de/web/Automatic_translation_of_clinical_trial_eligibility_criteria_into_formal_queries
DTSTART:20190916T130000
DTEND:20190916T140000
LOCATION:APB 3027
DESCRIPTION:Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal\, logic-based queries. First\, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then\, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec\, Stanford NLP tools\, and the MetaMap Tagger\, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov.
DTSTAMP:20190913T120845
SEQUENCE:29095
END:VEVENT
BEGIN:VEVENT
SUMMARY:A gentle introduction to partition width
URL:https://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
UID://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
DTSTART:20190912T130000
DTEND:20190912T143000
LOCATION:APB 3027
DESCRIPTION:In this talk we will take an introductory glance at the notion of "partition width"\, first conceived by Achim Blumensath. As partition width is also closely related to a notion of decomposition of an arbitrary structure into a tree-like shape\, the so-called "partition refinement"\, we will also take a look at the relation of both these notions to more established notions of decomposition and width measures (namely tree decompositions\, tree width\, and clique width).
DTSTAMP:20190802T123628
SEQUENCE:28943
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mixing Description Logics in Privacy-Preserving Ontology Publishing
URL:https://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
UID://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
DTSTART:20190905T130000
DTEND:20190905T140000
LOCATION:APB 2026
DESCRIPTION:In previous work\, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an instance store\, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL . We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy\, and have shown how\, in the context mentioned above\, optimal compliant (safe) generalizations of a given concept can be computed. In the present paper\, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular\, we investigate the situations where the attacker’s knowledge is given by an or an concept. In both cases\, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of \, it turns out to be actually lower (polynomial) for the more expressive DL .\n\n\nJoint work with Franz Baader. This is also a test-talk for a presentation at KI 2019.
DTSTAMP:20190902T110018
SEQUENCE:29045
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Expressive Power of Description Logics with Cardinality Constraints on Finite and Infinite Sets
URL:https://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
UID://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
DTSTART:20190829T130000
DTEND:20190829T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' In recent work we have extended the description logic (DL) ALCQ by means of more expressive number restrictions using numerical and set constraints stated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). It has been shown that reasoning in the resulting DL\, called ALCSCC\, is PSpace-complete without a TBox and ExpTime-complete w.r.t. a general TBox. The semantics of ALCSCC is defined in terms of finitely branching interpretations\, that is\, interpretations where every element has only finitely many role successors. This condition was needed since QFBAPA considers only finite sets. In this paper\, we first introduce a variant of ALCSCC\, called ALCSCC∞\, in which we lift this requirement (inexpressible in first-order logic) and show that the complexity results for ALCSCC mentioned above are preserved. Nevertheless\, like ALCSCC\, ALCSCC∞ is not a fragment of first-order logic. The main contribution of this paper is to give a characterization of the first-order fragment of ALCSCC∞. The most important tool used in the proof of this result is a notion of bisimulation that characterizes this fragment.\n\n\nJoint work with Franz Baader.\nThis talk is a rehearsal for a presentation at FroCoS 2019.\nDuration: 25 minutes without questions.
DTSTAMP:20190820T085458
SEQUENCE:28999
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning
URL:https://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
DTSTART:20190801T130000
DTEND:20190801T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a presentation (15 minutes including questions) at IJCAI 2019.
DTSTAMP:20190731T095825
SEQUENCE:28893
END:VEVENT
BEGIN:VEVENT
SUMMARY:Worst-Case Optimal Querying of Very Expressive Description Logics with Path Expressions and Succinct Counting
URL:https://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
UID://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
DTSTART:20190730T130000
DTEND:20190730T133000
LOCATION:APB 3027
DESCRIPTION:'''Abstract.''' Among the most expressive knowledge representation formalisms are the description logics of the Z family. For well-behaved fragments of ZOIQ\, entailment of positive two-way regular path queries is well known to be 2EXPTIME-complete under the proviso of unary encoding of numbers in cardinality constraints. We show that this assumption can be dropped without an increase in complexity\, and that EXPTIME-completeness can be achieved when bounding the number of query atoms\, using a novel reduction from query entailment to knowledge base satisfiability. These findings allow us to strengthen other results regarding query entailment and query containment problems in very expressive description logics. Our results also carry over to GC2\, the two-variable guarded fragment of first-order logic with counting quantifiers\, for which hitherto only conjunctive query entailment has been investigated.
DTSTAMP:20190718T124219
SEQUENCE:28841
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending EL++ with Linear Constraints on the Probability of Axioms
URL:https://iccl.inf.tu-dresden.de/web/Extending_EL%2B%2B_with_Linear_Constraints_on_the_Probability_of_Axioms
UID://iccl.inf.tu-dresden.de/web/Extending_EL%2B%2B_with_Linear_Constraints_on_the_Probability_of_Axioms
DTSTART:20190723T133000
DTEND:20190723T150000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' One of the main reasons to employ a description logic such as EL++ is the fact that it has efficient\, polynomial-time algorithmic properties such as deciding consistency and inferring subsumption. However\, simply by adding negation of concepts to it\, we obtain the expressivity of description logics whose decision procedure is ExpTime-complete. A similar complexity explosion occurs if we add probability assignments on concepts. To lower the resulting complexity\, we instead concentrate on assigning probabilities to axioms/GCIs. We show that the consistency detection problem for such a probabilistic description logic is NP-complete\, and present a linear algebraic deterministic algorithm to solve it\, using the column generation technique. We also examine algorithms for the probabilistic extension problem\, which consists of inferring the minimum and maximum probabilities for a new axiom\, given a consistent probabilistic knowledge base.\n\n\nFuture work aims at finding fragments of probabilistic EL++ which are tractable.
DTSTAMP:20190717T120135
SEQUENCE:28813
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reasoning about disclosure in data integration in the presence of source constraints
URL:https://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
UID://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
DTSTART:20190718T130000
DTEND:20190718T140000
LOCATION:APB 3027
DESCRIPTION:Joint work with M. Benedikt\, P. Bourhis\, L. Jachiet\n\n\n'''Abstract:''' Data integration systems allow users to access data sitting in multiple sources by means of queries over a global schema\, related to the sources via mappings. Data sources often contain sensitive information\, and thus an analysis is needed to verify that a schema satisfies a privacy policy\, given as a set of queries whose answers should not be accessible to users. Such an analysis should take into account not only knowledge that an attacker may have about the mappings\, but also what they may know about the semantics of the sources. In this talk\, I'll discuss the impact that source constraints can have on disclosure analysis.\n\n'''Speaker bio:''' Michaël Thomazo (Inria\, DI ENS\, ENS\, CNRS\, PSL University)
DTSTAMP:20190712T163358
SEQUENCE:28796
END:VEVENT
BEGIN:VEVENT
SUMMARY:Automatic Extraction of Compositional Matrix-Space Models of Language
URL:https://iccl.inf.tu-dresden.de/web/Automatic_Extraction_of_Compositional_Matrix-Space_Models_of_Language
UID://iccl.inf.tu-dresden.de/web/Automatic_Extraction_of_Compositional_Matrix-Space_Models_of_Language
DTSTART:20190715T110000
DTEND:20190715T123000
LOCATION:APB 3027
DESCRIPTION:Learning word representations in distributional semantic models to capture the semantics and compositionality of natural language is a central research area of computational linguistics. Compositional Matrix-Space Models (CMSMs) introduce a novel word representation alternative to Vector Space Models (VSMs). This talk presents the results of learning Compositional Matrix-Space Models to capture the semantics and compositionality in natural language processing tasks\, including sentiment analysis and compositionality detection of short phrases. Then\, a new dataset is introduced for examining compositional distributional semantic models\, and benchmark experiments are presented that use the developed dataset as a testbed to evaluate semantic composition in distributional semantic models.
DTSTAMP:20190708T112532
SEQUENCE:28772
END:VEVENT
BEGIN:VEVENT
SUMMARY:Epistemic Answer Set Programming
URL:https://iccl.inf.tu-dresden.de/web/Epistemic_Answer_Set_Programming
UID://iccl.inf.tu-dresden.de/web/Epistemic_Answer_Set_Programming
DTSTART:20190711T130000
DTEND:20190711T143000
LOCATION:APB 3027
DESCRIPTION:Today it is widely accepted by the logic programming community that answer set programming (ASP) requires more powerful introspective reasoning with the use of modalities. Although there has been a long-lasting debate among researchers about how to correctly extend ASP with epistemic modal operators\, there is still no agreement on a fully satisfactory semantics that is able to offer intuitive results for epistemic logic programs. In this talk\, we introduce a recent epistemic extension of ASP called epistemic ASP (EASP)\, endowed with the epistemic answer set semantics: minimal (with respect to truth) models which are maximal under two different orderings that minimise knowledge. Then we compare EASP with existing (to some extent successful) approaches in the literature\, showing the advantages and the novelties of the new semantics: compared to Gelfond's epistemic specifications (ES)\, EASP defines a sufficiently strong language of a simpler syntactic character. Its semantics\, based on a minimality criterion with respect to truth and knowledge\, is a natural and conservative generalisation of ASP's original answer set semantics. Moreover\, compared to all other semantics proposals for ES\, the epistemic answer set semantics provides a comprehensive solution to unintended results for epistemic logic programs\, including constraints. Finally\, we briefly discuss some formal properties of EASP such as epistemic splitting\, strong equivalence and foundedness.\n\n\n'''Speaker info:''' Ezgi Iraz Su\, IRIT (Lilac)\, Université de Toulouse 3 (Université Paul Sabatier)
DTSTAMP:20190625T121430
SEQUENCE:28697
END:VEVENT
BEGIN:VEVENT
SUMMARY:Introduction to p-adic numbers and analysis
URL:https://iccl.inf.tu-dresden.de/web/Introduction_to_p-adic_numbers_and_analysis
UID://iccl.inf.tu-dresden.de/web/Introduction_to_p-adic_numbers_and_analysis
DTSTART:20190710T133000
DTEND:20190710T150000
LOCATION:APB 3027
DESCRIPTION:The p-adic numbers (where p is a prime number) can be seen as one possible link between number theory and analysis. They therefore play an important role in various mathematical areas. The so-called strong triangle inequality of the p-adic absolute value (|\;a+b|\;≤max(|\;a|\;\,|\;b|\;) for p-adic numbers a and b) has many strange and surprising consequences.\nIn our talk\, we give the definition of p-adic numbers\, several elementary results on p-adic functional analysis and a short overview of possible applications.
DTSTAMP:20190707T075702
SEQUENCE:28768
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning
URL:https://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
DTSTART:20190613T133000
DTEND:20190613T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190606T082710
SEQUENCE:28585
END:VEVENT
BEGIN:VEVENT
SUMMARY:Discovering Implicational Knowledge in Wikidata
URL:https://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
UID://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
DTSTART:20190613T130000
DTEND:20190613T133000
LOCATION:APB 3027
DESCRIPTION:Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Among the freely available knowledge graphs\, Wikidata stands out by being collaboratively edited and curated. Amidst the vast numbers of facts\, complex knowledge is just waiting to be discovered\, but the sheer size of Wikidata makes this infeasible for human editors. We apply Formal Concept Analysis to efficiently identify and succinctly represent comprehensible implications that are implicitly present in the data. As a first step\, we describe a systematic process to extract conceptual knowledge from Wikidata's complex data model\, thus providing a method for obtaining large real-world data sets for FCA. We conduct experiments that show the principal feasibility of the approach\, yet also illuminate some of the limitations\, and give examples of interesting knowledge discovered.\n\n\nThis will be a rehearsal talk for ICFCA-2019 (20 minutes including questions).
DTSTAMP:20190607T170543
SEQUENCE:28613
END:VEVENT
BEGIN:VEVENT
SUMMARY:Projection in a Description Logic of Context with Actions
URL://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
UID://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
DTSTART:20190606T130000
DTEND:20190606T134500
LOCATION:APB 3027
DESCRIPTION:Satyadharma Tirtarasa and Benjamin Zarrieß. '''Projection in a Description Logic of Context with Actions'''. In Proceedings of the 32nd International Workshop on Description Logics (DL'19)\, Oslo\, Norway\, June 2019. Springer. To appear.\n\n\n'''Abstract:''' Projection is the problem of checking whether the execution of a given sequence of actions will achieve its goal starting from some initial state. In this paper\, we study a setting where we combine a two-dimensional Description Logic of context (ConDL) with an action formalism. We choose a well-studied ConDL where both the possible states of the dynamical system itself (object level) and the different context-dependent views on this system state (context level) are organised in relational structures and can be described using the usual DL constructs. To represent how such a system and its views evolve\, we introduce a suitable action formalism. It allows one to describe change on both levels. Furthermore\, the observable changes on the object level due to an action execution can also be context-dependent. We show that the formalism is well-behaved in the sense that projection has the same complexity as standard reasoning tasks when ALCO is the underlying DL. \n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190603T115201
SEQUENCE:28557
END:VEVENT
BEGIN:VEVENT
SUMMARY:Explorations into Belief State Compression
URL://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
UID://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
DTSTART:20190528T133000
DTEND:20190528T143000
LOCATION:APB 3027
DESCRIPTION:A knowledge base is an integral part of a logic-based artificial intelligence system. The size of the knowledge base has a great effect on the derivation time of a logic-based agent. In this thesis\, I present a variety of algorithms for a particular variant of knowledge base size reduction referred to as “Belief State Compression”. Each proposed algorithm can be “lossy” or “lossless” depending on the (in)ability to recover the removed information\; and “redundant” or “irredundant” with respect to the necessity of the remaining information in order to remain lossless. Belief state compression differs from previous approaches in at least three aspects. First\, it takes its objects to be support-structured sets of unconstrained\, rather than flat sets of syntactically constrained\, logical formulas\, which we refer to as belief states. Second\, classical notions of minimality and redundancy are replaced by weaker\, resource-bounded alternatives based on the support structure. Third\, in “lossy” variants of compression\, the compressed knowledge base logically implies only a practically-relevant subset of the original knowledge base. Six variants of belief state compression\, falling into three major classes\, are presented. Experimental results show that a combination of five of them results in mostly irredundant\, lossless compressions\, while maintaining reasonable run times.
DTSTAMP:20190523T202953
SEQUENCE:28517
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quine's Fluted Fragment
URL://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
UID://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
DTSTART:20190509T130000
DTEND:20190509T143000
DESCRIPTION:We consider the fluted fragment\, a decidable fragment of first-order logic with an unbounded number of variables\, originally identified in 1968 by W. V. Quine. We show that the satisfiability problem for this fragment has non-elementary complexity\, thus refuting an earlier published claim by W.C. Purdy that it is in NExpTime. More precisely\, we consider the intersection of the fluted fragment and the m-variable fragment of first-order logic\, for all non-negative m. We obtain upper and lower complexity bounds for this fragment that coincide for all m up to the value 4.\n\n\n'''Short bio:''' Ian Pratt-Hartmann studied mathematics and philosophy at [http://www.bnc.ox.ac.uk Brasenose College\, Oxford]\, and philosophy at [http://www.princeton.edu/main/ Princeton] and [http://www.stanford.edu/ Stanford] Universities\, gaining his PhD from Princeton. He is currently Senior Lecturer in the Department of Computer Science at the [http://www.manchester.ac.uk/ University of Manchester]. Since February 2014\, Dr. Pratt-Hartmann has held a joint appointment in the [http://informatyka.wmfi.uni.opole.pl/ Institute of Computer Science] at the [http://www.uni.opole.pl/ University of Opole]. His academic interests range widely over computational logic\, natural language semantics and artificial intelligence.
DTSTAMP:20190503T071228
SEQUENCE:28293
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data Science Use Cases for Lifestyle Banking
URL://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
UID://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
DTSTART:20190425T130000
DTEND:20190425T143000
LOCATION:APB 3027
DESCRIPTION:Abstract:\nTraditional banking concerns itself with risk understanding\, credit underwriting\, cash needs\, liquidity\, etc. Data science and machine learning have proved useful in banking businesses to mitigate risk while targeting the right customers with cash needs. In this talk\, we will explore a lifestyle side of banking that goes beyond the traditional realm and delves more into alternative signals of customers' needs. We will see example data science use cases that have been successfully implemented in the banking business at Siam Commercial Bank.
DTSTAMP:20190411T134324
SEQUENCE:28212
END:VEVENT
BEGIN:VEVENT
SUMMARY:Closed-World Semantics for Conjunctive Queries with Negation over ELH_bottom Ontologies
URL://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
UID://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
DTSTART:20190418T130000
DTEND:20190418T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information\, however\, there exist only a few ontology-based approaches. Moreover\, these proposals conflate the closed-domain and closed-world assumptions\, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic ELH-bottom\, which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms\, which shows that query answering is possible in polynomial time in data complexity.\n\n\nThis work was awarded Best Paper at JELIA 2019: https://jelia2019.mat.unical.it/awards.
DTSTAMP:20190418T100913
SEQUENCE:28243
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Power of the Terminating Chase
URL://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
UID://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
DTSTART:20190411T130000
DTEND:20190411T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''':\nThe chase has become a staple of modern database theory with applications in data integration\, query optimisation\, data exchange\, ontology-based query answering\, and many other areas. Most application scenarios and implementations require the chase to terminate and produce a finite universal model\, and a large arsenal of sufficient termination criteria is available to guarantee this (generally undecidable) condition. In this invited tutorial\, we therefore ask about the expressive power of logical theories for which the chase terminates. Specifically\, which database properties can be recognised by such theories\, i.e.\, which Boolean queries can they realise? For the skolem (semi-oblivious) chase\, and almost any known termination criterion\, this expressivity is just that of plain Datalog. Surprisingly\, this limitation of most prior research does not apply to the chase in general. Indeed\, we show that standard-chase-terminating theories can realise queries with data complexities ranging from PTime to non-elementary that are out of reach for the terminating skolem chase. A “Datalog-first” standard chase that prioritises applications of rules without existential quantifiers makes modelling simpler and\, we conjecture\, computationally more efficient. This is one of the many open questions raised by our insights\, and we conclude with an outlook on the research opportunities in this area.\n\nThis work has been published and presented at ICDT 2019\, Lisbon\, Portugal.
DTSTAMP:20190327T143510
SEQUENCE:27953
END:VEVENT
BEGIN:VEVENT
SUMMARY:Beyond NP Revolution
URL://iccl.inf.tu-dresden.de/web/Beyond_NP_Revolution
UID://iccl.inf.tu-dresden.de/web/Beyond_NP_Revolution
DTSTART:20190409T133000
DTEND:20190409T150000
LOCATION:APB 2026
DESCRIPTION:'''Abstract:''' The paradigmatic NP-complete problem of Boolean satisfiability (SAT) solving is a central problem in Computer Science. While mentions of SAT can be traced to the early 19th century\, efforts to develop practically successful SAT solvers go back to the 1950s. The past 20 years have witnessed an "NP revolution" with the development of conflict-driven clause-learning (CDCL) SAT solvers. Such solvers combine a classical backtracking search with a rich set of effective heuristics. While 20 years ago SAT solvers were able to solve instances with at most a few hundred variables\, modern SAT solvers solve instances with up to millions of variables in a reasonable time. The "NP revolution" opens up opportunities to design practical algorithms with rigorous guarantees for problems in complexity classes beyond NP by replacing an NP oracle with a SAT solver. In this talk\, we will discuss how we use the NP revolution to design practical algorithms for two fundamental problems in artificial intelligence and formal methods: Constrained Counting and Sampling.\n\n\n'''Bio:''' Kuldeep Meel is an Assistant Professor of Computer Science in the School of Computing at the National University of Singapore\, where he holds the Sung Kah Kay Assistant Professorship. He received his Ph.D. (2017) and M.S. (2014) degrees in Computer Science from Rice University. He holds a B.Tech. (with Honors) degree (2012) in Computer Science and Engineering from the Indian Institute of Technology\, Bombay. His research interests lie at the intersection of Artificial Intelligence and Formal Methods. Meel has co-presented tutorials at top-tier AI conferences: IJCAI 2018\, AAAI 2017\, and UAI 2016. His work received the 2018 Ralph Budd Award for Best PhD Thesis in Engineering\, the 2014 Outstanding Masters Thesis Award from the Vienna Center for Logic and Algorithms\, and the Best Student Paper Award at CP 2015. He received the IBM Ph.D. Fellowship and the 2016-17 Lodieska Stockbridge Vaughn Fellowship for his work on constrained sampling and counting.
DTSTAMP:20190404T131214
SEQUENCE:28083
END:VEVENT
BEGIN:VEVENT
SUMMARY:Making sense of conflicting defeasible rules in the controlled natural language ACE: design of a system with support for existential quantification using skolemization
URL://iccl.inf.tu-dresden.de/web/Making_sense_of_conflicting_defeasible_rules_in_the_controlled_natural_language_ACE:_design_of_a_system_with_support_for_existential_quantification_using_skolemization
UID://iccl.inf.tu-dresden.de/web/Making_sense_of_conflicting_defeasible_rules_in_the_controlled_natural_language_ACE:_design_of_a_system_with_support_for_existential_quantification_using_skolemization
DTSTART:20190404T130000
DTEND:20190404T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We motivate and present the design of a system implementing what we (joint work with Hannes Strass\, previously at the University of Leipzig\, as well as Adam Z. Wyner at Swansea University) have dubbed the "EMIL" (acronym for "extracting meaning out of inconsistent language") pipeline. The pipeline takes potentially conflicting rules expressed in a fragment of a prominent controlled natural language\, ACE\, extended with means of expressing defeasible rules in the form of normality assumptions. It makes sense of such rules using a recently formulated argumentation-inspired semantics\, verbalising the possible points of view that can plausibly be held based on the rules in ACE. The approach we describe is ultimately based on reductions to answer set programming (ASP)\, simulating existential quantification by using skolemization in a manner resembling a translation for ASP formalized in the context of ∃-ASP. We discuss the advantages of this approach over building on the existing ACE interface to rule systems\, ACERules.\n\n\n'''Bio:''' Martin Diller is finishing his PhD at TU Wien (Austria). The focus of his PhD has mainly been on implementing problems in (abstract and structured) argumentation via complexity-sensitive translations to logical formalisms (quantified Boolean formulas and answer set programming). He holds a joint MSc degree in computational logic from TU Dresden\, FU Bozen-Bolzano\, and TU Wien. Before that\, he studied Philosophy & Computer Science at the Universidad Nacional de Córdoba\, Argentina\, and was also briefly part of research groups in Epistemology & Computer Science there. He has also been at several other institutes and universities for internships and research stays\, working on applied argumentation & automated reasoning: UCL in London (England)\, the University of Aberdeen (Scotland)\, and NICTA-Canberra (Australia).
DTSTAMP:20190328T120428
SEQUENCE:27961
END:VEVENT
BEGIN:VEVENT
SUMMARY:Third Workshop on Human Reasoning and Computational Logic
URL://iccl.inf.tu-dresden.de/web/Third_Workshop_on_Human_Reasoning_and_Computational_Logic
UID://iccl.inf.tu-dresden.de/web/Third_Workshop_on_Human_Reasoning_and_Computational_Logic
DTSTART:20190404T090000
DTEND:20190405T170000
LOCATION:APB 2026
DESCRIPTION:From the 4th to the 5th of April 2019\, we are organizing the third Workshop on Human Reasoning and Computational Logic at TU Dresden\, Germany. The goal of this workshop is to provide a platform for scientific exchange on Human Reasoning between the areas of Cognitive Science and Computational Logic.
DTSTAMP:20190130T094758
SEQUENCE:27640
END:VEVENT
BEGIN:VEVENT
SUMMARY:Temporal Logics with Probabilistic Distributions
URL://iccl.inf.tu-dresden.de/web/Temporal_Logics_with_Probabilistic_Distributions2
UID://iccl.inf.tu-dresden.de/web/Temporal_Logics_with_Probabilistic_Distributions2
DTSTART:20190328T130000
DTEND:20190328T143000
LOCATION:APB 3027
DESCRIPTION:In many applications\, such as the monitoring of dynamical systems\, the data are time-dependent\, e.g.\, describing the states of a dynamical system at different points in time. Moreover\, events are more or less likely to happen at certain time points\, depending on the type of the event. There exist many well-studied distributions that can characterise the nature of such events. For example\, according to a Pareto distribution\, a.k.a. a power-law distribution\, the longer something has gone on\, the longer we expect it to continue going on. New companies or start-ups\, for instance\, either fail with high probability during their first year of existence\, or\, if they manage to survive for decades\, their chances of collapse become extremely small.\n\n\nIn order to capture such dynamic\, time-dependent\, and probabilistic patterns of knowledge\, we have introduced\, using basic notions from probability theory and statistics\, temporal logics of expectation\, in which we can state not only that something will eventually occur in the future\, but also when it is likely to happen. The resulting combination of a temporal DL-Lite fragment (with a two-dimensional semantics\, where one dimension is for time and the other for the DL domain) and an additional probabilistic constructor "distribution eventuality" is interpreted over multiple weighted worlds\, viz.\, temporal DL interpretations.
DTSTAMP:20190321T163823
SEQUENCE:27910
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Graph Embedding
URL://iccl.inf.tu-dresden.de/web/TBA2
UID://iccl.inf.tu-dresden.de/web/TBA2
DTSTART:20190207T130000
DTEND:20190207T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''': Recently\, graph embeddings have been taken up by the machine learning and general AI communities as a tool to solve various tasks. In this talk I will give a gentle introduction to the topic and also give some pointers to currently ongoing research. We start by looking at why graph embeddings are needed in the first place and how they could be used. We will then focus on graphs containing a large variety of information\, typically called knowledge graphs\, often represented in RDF. These graphs are hard to embed compared to\, e.g.\, uniform simple networks\, because they contain multiple edge and vertex types\, relation directionality\, literals\, etc. We will cover a few basic techniques for computing these embeddings. We plan to look into at least one example of translation-based methods\, one based on matrix decomposition\, and methods based on co-occurrence and statistical information. Finally\, we will discuss a couple of open problems and some of the topics currently being worked on.\n\n\n'''Short biography''': Michael Cochez is a postdoctoral researcher at the Fraunhofer Institute for Applied Information Technology FIT in Germany. In this position\, Michael works on transferring research results from the academic world to industry. Besides this industry exposure\, he conducts research in areas related to data analysis and knowledge representation\, such as knowledge graph embedding\, scalable clustering\, frequent itemset mining\, stream sampling\, prototype-based ontologies\, ontology matching\, and knowledge evolution. This research is currently conducted mainly at RWTH Aachen University\, Germany. Before joining Fraunhofer\, he obtained his Ph.D. degree from the University of Jyväskylä\, Finland\, under the supervision of Vagan Terziyan and Ferrante Neri (De Montfort University - Leicester). 
He obtained his master's degree from the same university and his bachelor's degree from the University of Antwerp\, Belgium. Michael Cochez is currently on partial leave from a postdoc at the University of Jyväskylä and is also a scientific advisor for WE-OPT-IT Oy (formerly MyOpt Oy) in Finland.
DTSTAMP:20190201T103020
SEQUENCE:27647
END:VEVENT
BEGIN:VEVENT
SUMMARY:Learning Ontologies with Epistemic Reasoning: The EL Case
URL://iccl.inf.tu-dresden.de/web/Learning_Ontologies_with_Epistemic_Reasoning:_The_EL_Case
UID://iccl.inf.tu-dresden.de/web/Learning_Ontologies_with_Epistemic_Reasoning:_The_EL_Case
DTSTART:20190123T093000
DTEND:20190123T110000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''': We investigate the problem of learning description logic ontologies from entailments via queries\, using epistemic reasoning. We introduce a new learning model consisting of epistemic membership and example queries and show that polynomial learnability in this model coincides with polynomial learnability in Angluin’s exact learning model with membership and equivalence queries. We then instantiate our learning framework to EL and show some complexity results for an epistemic extension of EL where epistemic operators can be applied over the axioms. Finally\, we transfer known results for ontologies in EL and its fragments to our learning model based on epistemic reasoning.
DTSTAMP:20190115T143531
SEQUENCE:27525
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ontology-Based Query Answering for Probabilistic Temporal Data
URL://iccl.inf.tu-dresden.de/web/Ontology-Based_Query_Answering_for_Probabilistic_Temporal_Data
UID://iccl.inf.tu-dresden.de/web/Ontology-Based_Query_Answering_for_Probabilistic_Temporal_Data
DTSTART:20190117T130000
DTEND:20190117T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We investigate ontology-based query answering for data that are both temporal and probabilistic\, which might occur in contexts such as stream reasoning or situation recognition with uncertain data. We present a framework that allows representing temporal probabilistic data\, and introduce a query language in which complex temporal and probabilistic patterns can be described. Specifically\, this language combines conjunctive queries with operators from linear time logic as well as probability operators. We analyse the complexity of evaluating queries in this language in various settings. While in some cases combining the temporal and the probabilistic dimension in this way comes at the cost of increased complexity\, we also determine cases in which this increase can be avoided. \n\nThis is a talk based on a paper that will be presented at this year’s AAAI conference.
DTSTAMP:20190114T122705
SEQUENCE:27507
END:VEVENT
BEGIN:VEVENT
SUMMARY:Privacy-Preserving Ontology Publishing for EL Instance Stores
URL://iccl.inf.tu-dresden.de/web/Privacy-Preserving_Ontology_Publishing_for_EL_Instance_Stores
UID://iccl.inf.tu-dresden.de/web/Privacy-Preserving_Ontology_Publishing_for_EL_Instance_Stores
DTSTART:20190110T130000
DTEND:20190110T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL EL\, which corresponds to the setting where the ontology is an EL instance store. We introduce the notions of compliance of a concept with a policy and of the safety of a concept for a policy and show how optimal compliant (safe) generalizations of a given EL concept can be computed. In addition\, we investigate the complexity of the optimality problem.\n\n \nThis is joint work with Franz Baader and Francesco Kriegel.
DTSTAMP:20190109T151057
SEQUENCE:27479
END:VEVENT
BEGIN:VEVENT
SUMMARY:Embodied Terminology: Language\, Knowledge\, and Cognition
URL://iccl.inf.tu-dresden.de/web/Embodied_Terminology:_Language,_Knowledge,_and_Cognition
UID://iccl.inf.tu-dresden.de/web/Embodied_Terminology:_Language,_Knowledge,_and_Cognition
DTSTART:20181206T130000
DTEND:20181206T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Meaning formation in specialized language is still an open puzzle\, and several methods have been proposed to piece it together. For such a method\, this talk looks to the paradigm of embodied cognition\, which holds that cognitive processes\, including language production and understanding\, are deeply rooted in physical interactions with the world. More specifically\, it looks at image schemas\, which capture recurrent sensorimotor patterns that give coherence and structure to our experiences and shape our language and knowledge. Potential theoretical contributions of embodied cognition to meaning formation in specialized languages are discussed\, alongside automated methods for the identification of image schemas in natural language. A coherent\, robust\, and language-agnostic theory of and method for embodied terminology holds the promise of boosting socio-economically effective\, cognitively grounded\, and technologically powerful terminology management and translation technologies. \n\n\n'''Title of the lecture:''' Translatorische Terminologiewissenschaft und Übersetzungstechnologien (German: Translational Terminology Science and Translation Technologies)\n\n'''Lecture description:''' This demonstration lesson will be held in German\, as required for the hearing\, and will be the second lesson of the Master-level lecture on translational terminology science and translation technologies. Technologies of specialized communication will be discussed\, with a particular focus on multilingual\, systematic\, and onomasiological terminology management.\n\n'''Details:''' Appointment training: this '''research talk''' and the following '''demonstration lesson''' are a trial run for a hearing within the application procedure for a '''tenure-track professorship''' for terminology science and translation technologies. Please join and ask many challenging questions.
DTSTAMP:20181203T100011
SEQUENCE:27219
END:VEVENT
BEGIN:VEVENT
SUMMARY:Satisfiability in the Triguarded Fragment of First-Order Logic
URL://iccl.inf.tu-dresden.de/web/Satisfiability_in_the_Triguarded_Fragment_of_First-Order_Logic
UID://iccl.inf.tu-dresden.de/web/Satisfiability_in_the_Triguarded_Fragment_of_First-Order_Logic
DTSTART:20181129T130000
DTEND:20181129T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Most Description Logics (DLs) can be translated into well-known decidable fragments of first-order logic (FO)\, including the guarded fragment GF and the two-variable fragment FO2. Given their prominence in DL research\, we take a closer look at GF and FO2\, and present a new fragment that subsumes both. This fragment\, called the triguarded fragment (denoted TGF)\, is obtained by relaxing the standard definition of GF: quantification is required to be guarded only for subformulae with three or more free variables. We show that satisfiability of equality-free TGF is N2ExpTime-complete\, but becomes NExpTime-complete if we bound the arity of predicates by a constant (a natural assumption in the context of DLs). Finally\, we observe that many natural extensions of TGF\, including the addition of equality\, lead to undecidability. \n\n\nThis talk is a presentation given at the 31st International Workshop on Description Logics\, 2018.
DTSTAMP:20181104T132422
SEQUENCE:27062
END:VEVENT
BEGIN:VEVENT
SUMMARY:Big Data Variety: On-Demand Data Integration
URL://iccl.inf.tu-dresden.de/web/Big_Data_Variety:_On-Demand_Data_Integration
UID://iccl.inf.tu-dresden.de/web/Big_Data_Variety:_On-Demand_Data_Integration
DTSTART:20181126T145000
DTEND:20181126T162000
LOCATION:APB 3105
DESCRIPTION:'''Abstract.''' As big data systems get more complex\, the data variety challenge has become the driving factor in current big data projects. From a technical perspective\, data variety mainly boils down to data integration\, which\, unfortunately\, is far from being a solved problem. Current efforts highlight the need to broaden the perspective beyond the data community and use semantic-aware formalisms\, such as knowledge graphs\, to tackle this problem. In this talk\, we will review the current state of the art of the data variety challenge and present recent solutions to manage the problem.\n\n\n'''Bio.''' I’m currently a tenure-track 2 lecturer at the Departament d’Enginyeria de Serveis i Sistemes d’Informació (ESSI)\, which belongs to the Universitat Politècnica de Catalunya (UPC-BarcelonaTech). I also coordinate the IT4BI Erasmus Mundus Master at UPC and the Big Data Management and Analytics postgraduate course at UPC School. Although my hometown is Lleida\, I have already lived for more than 10 years in Barcelona. In March 2004 I obtained my bachelor’s degree in Informatics Engineering at the Facultat d’Informàtica de Barcelona (FIB). Later\, in February 2010\, I obtained my doctoral degree in Computing. My PhD thesis\, directed by Dr. Alberto Abelló and entitled “Automating the Multidimensional Design of Data Warehouses”\, can be found here. My main topics of interest are business intelligence\, Big Data\, and the semantic web. My PhD thesis focused on data warehousing\, but since then I have been working on many other topics\, such as NOSQL (and any technology beyond relational databases)\, bridging Big Data management and analytics\, open data platforms (mostly at the database level)\, recommendation systems\, and semantic-aware systems (based on or exploiting semantic formalisms such as ontology languages or RDF). 
I am also interested in agile methodologies / formalisms to incorporate non-technical people in the design\, maintenance and evolution of database systems.
DTSTAMP:20181121T124047
SEQUENCE:27171
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending Datalog with Sets Using an Encoding in Existential Rules
URL://iccl.inf.tu-dresden.de/web/Extending_Datalog_with_Sets_Using_an_Encoding_in_Existential_Rules
UID://iccl.inf.tu-dresden.de/web/Extending_Datalog_with_Sets_Using_an_Encoding_in_Existential_Rules
DTSTART:20181126T130000
DTEND:20181126T143000
LOCATION:APB 3027
DESCRIPTION:We extend Datalog with sets in order to facilitate modelling with this logic\, and define the resulting extended logic. To allow practical reasoning over this logic\, we give a translation algorithm into first-order logic and prove its correctness. Furthermore\, we show that this translation exhibits optimal runtimes during the restricted chase compared to Datalog\, which means that reasoning over the extended logic is practically viable. Lastly\, we explore possible applications of this logic in practice.\n\n\nThis presentation and the subsequent question session constitute a defence for acquiring a Bachelor of Science degree in computer science.
DTSTAMP:20181110T102350
SEQUENCE:27105
END:VEVENT
BEGIN:VEVENT
SUMMARY:Standard and Non-Standard Inferences in the Description Logic FL0 Using Tree Automata
URL://iccl.inf.tu-dresden.de/web/TBA
UID://iccl.inf.tu-dresden.de/web/TBA
DTSTART:20181115T130000
DTEND:20181115T143000
LOCATION:APB 3027
DESCRIPTION:Although quite inexpressive\, the description logic (DL) FL0\, which provides only conjunction\, value restriction\, and the top concept as concept constructors\, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete\, and becomes even ExpTime-complete in case general TBoxes are used. In this talk\, I will describe an approach that uses automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. I will start by sketching an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes\, based on the use of looping tree automata. Afterwards\, I will explain how to employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer w.r.t. general TBoxes.
DTSTAMP:20181120T122106
SEQUENCE:27159
END:VEVENT
BEGIN:VEVENT
SUMMARY:From Horn-SRIQ to Datalog: A Data-Independent Transformation that Preserves Assertion Entailment
URL://iccl.inf.tu-dresden.de/web/From_Horn-SRIQ_to_Datalog:_A_Data-Independent_Transformation_that_Preserves_Assertion_Entailment
UID://iccl.inf.tu-dresden.de/web/From_Horn-SRIQ_to_Datalog:_A_Data-Independent_Transformation_that_Preserves_Assertion_Entailment
DTSTART:20181108T130000
DTEND:20181108T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Ontology-based access to large data-sets has recently gained a lot of attention. To access data efficiently\, one approach is to rewrite the ontology into Datalog\, and then use powerful Datalog engines to compute implicit entailments. Existing rewriting techniques support Description Logics (DLs) from ELH to Horn-SHIQ. We go one step further and present one such data-independent rewriting technique for Horn-SRIQ\, the extension of Horn-SHIQ that supports non-transitive\, complex roles---an expressive feature prominently used in many real-world ontologies. We evaluated our rewriting technique on a large corpus of well-known ontologies. Our experiments show that the resulting rewritings are of moderate size and that our approach is more efficient than state-of-the-art DL reasoners when reasoning with data-intensive ontologies. \n\n\nThis is joint work with Larry González and Patrick Koopman. It has been accepted at AAAI 2019.
DTSTAMP:20181104T111047
SEQUENCE:27061
END:VEVENT
BEGIN:VEVENT
SUMMARY:Temporal constraint satisfaction problems in least fixed point logic
URL://iccl.inf.tu-dresden.de/web/Temporal_constraint_satisfaction_problems_in_least_fixed_point_logic
UID://iccl.inf.tu-dresden.de/web/Temporal_constraint_satisfaction_problems_in_least_fixed_point_logic
DTSTART:20181101T130000
DTEND:20181101T143000
LOCATION:APB 3027
DESCRIPTION:The constraint satisfaction problem (CSP) for a fixed structure L with finite relational signature is the computational problem of deciding whether a given finite structure of the same signature homomorphically maps to L. A temporal constraint language is a structure over the rational numbers Q whose relations are first-order definable in (Q\;<). In 2009\, Bodirsky and Kara presented a complete classification of the computational complexity of CSPs for temporal constraint languages. In contrast to finite domain structures\, there are temporal constraint languages whose CSP cannot be solved by any Datalog program but can be expressed in least fixed point logic (LFP). An example is CSP(Q\; { (x\,y\,z)\, where x>y or x>z} )\, known as the and/or scheduling problem. I will give a proof of a dichotomy for LFP expressibility of CSPs of temporal constraint languages. For a temporal constraint language L\, exactly one of the following is true: either L interprets all finite structures primitively positively with parameters and CSP(L) is inexpressible in LFP\, or CSP(L) is inexpressible in LFP if and only if L admits a primitive positive definition of the relation X:={ (x\,y\,z)\, where x>y=z or y>z=x or z>x=y}.
DTSTAMP:20181029T100450
SEQUENCE:27007
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ontological Modelling in Wikidata
URL://iccl.inf.tu-dresden.de/web/Ontological_Modelling_in_Wikidata
UID://iccl.inf.tu-dresden.de/web/Ontological_Modelling_in_Wikidata
DTSTART:20181025T130000
DTEND:20181025T143000
LOCATION:APB 3027
DESCRIPTION:This was an invited keynote talk at the 9th Workshop on Ontology Design and Patterns (WOP'18)\, 2018.\n\n\n'''Abstract:''' Wikidata\, the knowledge base of Wikimedia\, has been extremely successful in building and sustaining new communities of editors and users. Since its inception in 2012\, it has developed from an experimental “data wiki” into a well-organised reference knowledge base with an amazing array of applications. Developing an ontological schema for such an open and rapidly expanding project is a huge undertaking\, and difficult challenges arise on many levels. The community has directed significant efforts towards vocabulary development\, many guidelines and rules have been created\, and tools are used for helping editors to avoid and correct modelling errors. Nevertheless\, the distributed nature of Wikidata editing often means that ontology design\, too\, is distributed\, and a coherent global view is only being worked on once significant amounts of data have been added. The result is a knowledge graph with a widely varying modelling quality across different sub-domains.\n\nThe big question for researchers is how their insights and methods can help here. The Wikidata community is widely aware of semantic web activities and existing standards and academic publications play a role in many discussions. Yet\, there seems to be only little direct exchange between the communities. In this talk\, I will review the current state of Wikidata and its connection to semantic web standards such as RDF and SPARQL. I will try to raise awareness of the particular requirements of Wikidata\, and argue that these are of general interest for the data-driven curation of knowledge graphs.
DTSTAMP:20181019T105041
SEQUENCE:26935
END:VEVENT
BEGIN:VEVENT
SUMMARY:Making Repairs in Description Logics More Gentle
URL://iccl.inf.tu-dresden.de/web/Making_Repairs_in_Description_Logics_More_Gentle
UID://iccl.inf.tu-dresden.de/web/Making_Repairs_in_Description_Logics_More_Gentle
DTSTART:20181018T133000
DTEND:20181018T143000
LOCATION:APB 3027
DESCRIPTION:Abstract:\n"The classical approach for repairing a Description Logic ontology O in the sense of removing an unwanted consequence α is to delete a minimal number of axioms from O such that the resulting ontology O′ does not have the consequence α. However\, the complete deletion of axioms may be too rough\, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem\, we propose a more gentle notion of repair in which axioms are not deleted but only weakened. On the one hand\, we investigate the general properties of this gentle repair method. On the other hand\, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic EL."\n\nThis is a rehearsal talk for KR 2018.
DTSTAMP:20181018T083442
SEQUENCE:26904
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Combined Approach to Query Answering in Horn-ALCHOIQ
URL://iccl.inf.tu-dresden.de/web/The_Combined_Approach_to_Query_Answering_in_Horn-ALCHOIQ2
UID://iccl.inf.tu-dresden.de/web/The_Combined_Approach_to_Query_Answering_in_Horn-ALCHOIQ2
DTSTART:20181018T130000
DTEND:20181018T133000
LOCATION:APB 3027
DESCRIPTION:Abstract:\n"Combined approaches have become a successful technique for solving conjunctive query (CQ) answering over description logic (DL) ontologies. Nevertheless\, existing approaches are restricted to tractable DL languages. In this work\, we extend the combined method to the more expressive DL Horn-ALCHOIQ---a language for which CQ answering is ExpTime-complete---in order to develop an efficient and scalable CQ answering procedure which is worst-case optimal for Horn-ALCHOIQ and ELHO ontologies. We implement and study the feasibility of our algorithm\, and compare its performance to the DL reasoner Konclude."\n\nThis is a rehearsal talk for KR 2018.
DTSTAMP:20181018T083532
SEQUENCE:26905
END:VEVENT
BEGIN:VEVENT
SUMMARY:Efficient Model Construction for Horn Logic with VLog: Extended Abstract
URL://iccl.inf.tu-dresden.de/web/Efficient_Model_Construction_for_Horn_Logic_with_VLog:_Extended_Abstract
UID://iccl.inf.tu-dresden.de/web/Efficient_Model_Construction_for_Horn_Logic_with_VLog:_Extended_Abstract
DTSTART:20181011T130000
DTEND:20181011T143000
LOCATION:APB 3027
DESCRIPTION:Abstract: "We extend the Datalog engine VLog to develop a column-oriented implementation of the Skolem and the restricted chase – two variants of a sound and complete algorithm used for model construction over theories of existential rules. We conduct an extensive evaluation over several data-intensive theories with millions of facts and thousands of rules\, and show that VLog can compete with the state of the art regarding runtime\, scalability\, and memory efficiency."\n\n\nThis is a rehearsal talk for the DL Workshop 2018.
DTSTAMP:20181018T013010
SEQUENCE:26721
END:VEVENT
BEGIN:VEVENT
SUMMARY:Report on research visit to NRC
URL://iccl.inf.tu-dresden.de/web/Report_on_research_visit_to_NRC
UID://iccl.inf.tu-dresden.de/web/Report_on_research_visit_to_NRC
DTSTART:20181004T130000
DTEND:20181004T143000
LOCATION:APB 3027
DESCRIPTION:Report on research visit to NRC
DTSTAMP:20181210T125955
SEQUENCE:27274
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending Matching in Description Logics
URL://iccl.inf.tu-dresden.de/web/Extending_Matching_in_Description_Logics
UID://iccl.inf.tu-dresden.de/web/Extending_Matching_in_Description_Logics
DTSTART:20180927T130000
DTEND:20180927T143000
LOCATION:APB 2026
DESCRIPTION:Matching concept descriptions against concept patterns was introduced as a non-standard inference task in Description Logics motivated by applications in knowledge bases. It was first investigated for the DL FL0\, for which a polynomial-time algorithm was obtained. In this talk\, we will present recent results that extend matching in FL0 in two respects. First\, we investigate approximate matching\, which increases the recall of the method by checking whether the input description and pattern can “almost” match\, rather than match exactly. The meaning of almost is formalised using distance measures between concepts. Afterwards\, we examine matching in the presence of TBoxes\, i.e.\, background knowledge of concept inclusions. It is well-known that subsumption reasoning in FL0 rises from P to ExpTime when TBoxes are considered\, and we prove that the same is true for matching. Finally\, we present some preliminary ideas on approximate matching in the presence of TBoxes.
DTSTAMP:20181018T013053
SEQUENCE:26723
END:VEVENT
END:VCALENDAR