BEGIN:VCALENDAR
PRODID:-//SMW Project//Semantic Result Formats
VERSION:2.0
METHOD:PUBLISH
X-WR-CALNAME:ICCL Events
BEGIN:VEVENT
SUMMARY:Young Scientist's Third International Workshop on Trends in Information Processing (YSIP3)
URL://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
UID://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
DTSTART:20190917T000000
DTEND:20190920T000000
LOCATION:Stavropol and Arkhyz\, Russian Federation
DESCRIPTION:We invite young scientists – typically master's or PhD students – to submit new scientific results – as obtained\, for example\, in their bachelor's thesis\, project work\, master's thesis\, or PhD project – in all areas of Information Processing.
DTSTAMP:20190117T101503
SEQUENCE:27542
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning
URL://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
DTSTART:20190613T133000
DTEND:20190613T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190606T082710
SEQUENCE:28585
END:VEVENT
BEGIN:VEVENT
SUMMARY:Discovering Implicational Knowledge in Wikidata
URL://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
UID://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
DTSTART:20190613T130000
DTEND:20190613T133000
LOCATION:APB 3027
DESCRIPTION:Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Among the freely available knowledge graphs\, Wikidata stands out by being collaboratively edited and curated. Amidst the vast numbers of facts\, complex knowledge is just waiting to be discovered\, but the sheer size of Wikidata makes this infeasible for human editors. We apply Formal Concept Analysis (FCA) to efficiently identify and succinctly represent comprehensible implications that are implicitly present in the data. As a first step\, we describe a systematic process to extract conceptual knowledge from Wikidata's complex data model\, thus providing a method for obtaining large real-world data sets for FCA. We conduct experiments that show the principal feasibility of the approach\, yet also illuminate some of its limitations\, and give examples of interesting knowledge discovered.\n\n\nThis will be a rehearsal talk for ICFCA 2019 (20 minutes including questions).
DTSTAMP:20190607T170543
SEQUENCE:28613
END:VEVENT
BEGIN:VEVENT
SUMMARY:Projection in a Description Logic of Context with Actions
URL://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
UID://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
DTSTART:20190606T130000
DTEND:20190606T134500
LOCATION:APB 3027
DESCRIPTION:Satyadharma Tirtarasa and Benjamin Zarrieß. '''Projection in a Description Logic of Context with Actions'''. In Proceedings of the 32nd International Workshop on Description Logics (DL'19)\, Oslo\, Norway\, June 2019. Springer. To appear.\n\n\n'''Abstract:''' Projection is the problem of checking whether the execution of a given sequence of actions will achieve its goal starting from some initial state. In this paper\, we study a setting that combines a two-dimensional Description Logic of context (ConDL) with an action formalism. We choose a well-studied ConDL in which both the possible states of a dynamical system itself (object level) and the different context-dependent views on this system state (context level) are organised in relational structures and can be described using the usual DL constructs. To represent how such a system and its views evolve\, we introduce a suitable action formalism that allows one to describe change on both levels. Furthermore\, the observable changes on the object level due to an action execution can also be context-dependent. We show that the formalism is well-behaved in the sense that projection has the same complexity as standard reasoning tasks when ALCO is the underlying DL.\n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190603T115201
SEQUENCE:28557
END:VEVENT
BEGIN:VEVENT
SUMMARY:Explorations into Belief State Compression
URL://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
UID://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
DTSTART:20190528T133000
DTEND:20190528T143000
LOCATION:APB 3027
DESCRIPTION:A knowledge base is an integral part of a logic-based artificial intelligence system. The size of the knowledge base has a great effect on the derivation time of a logic-based agent. In this thesis\, I present a variety of algorithms for a particular variant of knowledge base size reduction referred to as “Belief State Compression”. Each proposed algorithm can be “lossy” or “lossless” depending on the (in)ability to recover the removed information\; and “redundant” or “irredundant” with respect to the necessity of the remaining information in order to remain lossless. Belief state compression differs from previous approaches in at least three aspects. First\, it takes its objects to be support-structured sets of unconstrained\, rather than flat sets of syntactically constrained\, logical formulas\, which we refer to as belief states. Second\, classical notions of minimality and redundancy are replaced by weaker\, resource-bounded alternatives based on the support structure. Third\, in “lossy” variants of compression\, the compressed knowledge base logically implies only a practically-relevant subset of the original knowledge base. Six variants of belief state compression\, falling into three major classes\, are presented. Experimental results show that a combination of five of them results in mostly irredundant\, lossless compressions\, while maintaining reasonable run times.
DTSTAMP:20190523T202953
SEQUENCE:28517
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quine's Fluted Fragment
URL://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
UID://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
DTSTART:20190509T130000
DTEND:20190509T143000
DESCRIPTION:We consider the fluted fragment\, a decidable fragment of first-order logic with an unbounded number of variables\, originally identified in 1968 by W. V. Quine. We show that the satisfiability problem for this fragment has non-elementary complexity\, thus refuting an earlier published claim by W. C. Purdy that it is in NExpTime. More precisely\, we consider the intersection of the fluted fragment and the m-variable fragment of first-order logic\, for all non-negative m. We obtain upper and lower complexity bounds for this fragment that coincide for all m up to the value 4.\n\n\n'''Short bio:''' Ian Pratt-Hartmann studied mathematics and philosophy at [http://www.bnc.ox.ac.uk Brasenose College\, Oxford]\, and philosophy at [http://www.princeton.edu/main/ Princeton] and [http://www.stanford.edu/ Stanford] Universities\, gaining his PhD from Princeton. He is currently Senior Lecturer in the Department of Computer Science at the [http://www.manchester.ac.uk/ University of Manchester]. Since February 2014\, Dr. Pratt-Hartmann has held a joint appointment in the [http://informatyka.wmfi.uni.opole.pl/ Institute of Computer Science] at the [http://www.uni.opole.pl/ University of Opole]. His academic interests range widely over computational logic\, natural language semantics\, and artificial intelligence.
DTSTAMP:20190503T071228
SEQUENCE:28293
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data Science Use Cases for Lifestyle Banking
URL://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
UID://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
DTSTART:20190425T130000
DTEND:20190425T143000
LOCATION:APB 3027
DESCRIPTION:Abstract:\nTraditional banking concerns itself with risk understanding\, credit underwriting\, cash needs\, liquidity\, etc. Data science and machine learning have proved useful in banking businesses for mitigating risk while targeting the right customers with cash needs. In this talk\, we will explore a lifestyle side of banking that goes beyond the traditional realm and delves into alternative signals of customers' needs. We will see example data science use cases that have been successfully implemented in the banking business at Siam Commercial Bank.
DTSTAMP:20190411T134324
SEQUENCE:28212
END:VEVENT
BEGIN:VEVENT
SUMMARY:Closed-World Semantics for Conjunctive Queries with Negation over ELH_bottom Ontologies
URL://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
UID://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
DTSTART:20190418T130000
DTEND:20190418T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information\, however\, there exist only a few ontology-based approaches. Moreover\, these proposals conflate the closed-domain and closed-world assumptions\, and are therefore not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic ELH-bottom\, which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms\, which shows that query answering is possible in polynomial time in data complexity.\n\n\nThis work was awarded Best Paper at JELIA 2019: https://jelia2019.mat.unical.it/awards.
DTSTAMP:20190418T100913
SEQUENCE:28243
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Power of the Terminating Chase
URL://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
UID://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
DTSTART:20190411T130000
DTEND:20190411T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''':\nThe chase has become a staple of modern database theory with applications in data integration\, query optimisation\, data exchange\, ontology-based query answering\, and many other areas. Most application scenarios and implementations require the chase to terminate and produce a finite universal model\, and a large arsenal of sufficient termination criteria is available to guarantee this (generally undecidable) condition. In this invited tutorial\, we therefore ask about the expressive power of logical theories for which the chase terminates. Specifically\, which database properties can be recognised by such theories\, i.e.\, which Boolean queries can they realise? For the skolem (semi-oblivious) chase\, and almost any known termination criterion\, this expressivity is just that of plain Datalog. Surprisingly\, this limitation of most prior research does not apply to the chase in general. Indeed\, we show that theories for which the standard chase terminates can realise queries with data complexities ranging from PTime to non-elementary that are out of reach for the terminating skolem chase. A “Datalog-first” standard chase that prioritises applications of rules without existential quantifiers makes modelling simpler and\, we conjecture\, computationally more efficient. This is one of the many open questions raised by our insights\, and we conclude with an outlook on the research opportunities in this area.\n\nThis work has been published and presented at ICDT 2019\, Lisbon\, Portugal.
DTSTAMP:20190327T143510
SEQUENCE:27953
END:VEVENT
BEGIN:VEVENT
SUMMARY:Beyond NP Revolution
URL://iccl.inf.tu-dresden.de/web/Beyond_NP_Revolution
UID://iccl.inf.tu-dresden.de/web/Beyond_NP_Revolution
DTSTART:20190409T133000
DTEND:20190409T150000
LOCATION:APB 2026
DESCRIPTION:'''Abstract:''' The paradigmatic NP-complete problem of Boolean satisfiability (SAT) is a central problem in Computer Science. While mentions of SAT can be traced to the early 19th century\, efforts to develop practically successful SAT solvers go back to the 1950s. The past 20 years have witnessed an "NP revolution" with the development of conflict-driven clause-learning (CDCL) SAT solvers. Such solvers combine a classical backtracking search with a rich set of effective heuristics. While 20 years ago SAT solvers were able to solve instances with at most a few hundred variables\, modern SAT solvers solve instances with up to millions of variables in a reasonable time. The "NP revolution" opens up opportunities to design practical algorithms with rigorous guarantees for problems in complexity classes beyond NP by replacing an NP oracle with a SAT solver. In this talk\, we will discuss how we use the NP revolution to design practical algorithms for two fundamental problems in artificial intelligence and formal methods: constrained counting and sampling.\n\n\n'''Bio:''' Kuldeep Meel is an Assistant Professor of Computer Science in the School of Computing at the National University of Singapore\, where he holds the Sung Kah Kay Assistant Professorship. He received his Ph.D. (2017) and M.S. (2014) degrees in Computer Science from Rice University. He holds a B.Tech. (with Honors) degree (2012) in Computer Science and Engineering from the Indian Institute of Technology\, Bombay. His research interests lie at the intersection of Artificial Intelligence and Formal Methods. Meel has co-presented tutorials at top-tier AI conferences: IJCAI 2018\, AAAI 2017\, and UAI 2016. His work received the 2018 Ralph Budd Award for Best PhD Thesis in Engineering\, the 2014 Outstanding Masters Thesis Award from the Vienna Center for Logic and Algorithms\, and the Best Student Paper Award at CP 2015. He received the IBM Ph.D. Fellowship and the 2016-17 Lodieska Stockbridge Vaughn Fellowship for his work on constrained sampling and counting.
DTSTAMP:20190404T131214
SEQUENCE:28083
END:VEVENT
BEGIN:VEVENT
SUMMARY:Making sense of conflicting defeasible rules in the controlled natural language ACE: design of a system with support for existential quantification using skolemization
URL://iccl.inf.tu-dresden.de/web/Making_sense_of_conflicting_defeasible_rules_in_the_controlled_natural_language_ACE:_design_of_a_system_with_support_for_existential_quantification_using_skolemization
UID://iccl.inf.tu-dresden.de/web/Making_sense_of_conflicting_defeasible_rules_in_the_controlled_natural_language_ACE:_design_of_a_system_with_support_for_existential_quantification_using_skolemization
DTSTART:20190404T130000
DTEND:20190404T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We motivate and present the design of a system implementing what we (in joint work with Hannes Strass\, previously at the University of Leipzig\, and Adam Z. Wyner at Swansea University) have dubbed the "EMIL" (an acronym for "extracting meaning out of inconsistent language") pipeline. The pipeline takes potentially conflicting rules expressed in a fragment of a prominent controlled natural language\, ACE\, extended with means of expressing defeasible rules in the form of normality assumptions. It makes sense of such rules using a recently formulated argumentation-inspired semantics\, verbalising possible points of view that can plausibly be held based on the rules in ACE. The approach we describe is ultimately based on reductions to answer-set programming (ASP)\, simulating existential quantification via skolemization in a manner resembling a translation formalized in the context of ∃-ASP. We discuss the advantages of this approach over building on the existing ACE interface to rule systems\, ACERules.\n\n\n'''Bio:''' Martin Diller is finishing his PhD at TU Wien (Austria). The focus of his PhD has mainly been on implementing problems in (abstract and structured) argumentation via complexity-sensitive translations to logical formalisms (quantified Boolean formulas and answer-set programming). He holds a joint MSc degree in computational logic from TU Dresden\, FU Bozen-Bolzano\, and TU Wien. Before that\, he studied Philosophy & Computer Science at the Universidad Nacional de Córdoba\, Argentina\, and was also briefly part of research groups in Epistemology & Computer Science there. He has also been at several other institutes and universities for internships and research stays working on applied argumentation & automated reasoning: UCL in London (England)\, the University of Aberdeen (Scotland)\, and NICTA-Canberra (Australia).
DTSTAMP:20190328T120428
SEQUENCE:27961
END:VEVENT
BEGIN:VEVENT
SUMMARY:Third Workshop on Human Reasoning and Computational Logic
URL://iccl.inf.tu-dresden.de/web/Third_Workshop_on_Human_Reasoning_and_Computational_Logic
UID://iccl.inf.tu-dresden.de/web/Third_Workshop_on_Human_Reasoning_and_Computational_Logic
DTSTART:20190404T090000
DTEND:20190405T170000
LOCATION:APB 2026
DESCRIPTION:From the 4th to the 5th of April 2019\, we are organizing the Third Workshop on Human Reasoning and Computational Logic at TU Dresden\, Germany. The goal of this workshop is to provide a platform for scientific exchange on Human Reasoning between the areas of Cognitive Science and Computational Logic.
DTSTAMP:20190130T094758
SEQUENCE:27640
END:VEVENT
BEGIN:VEVENT
SUMMARY:Temporal Logics with Probabilistic Distributions
URL://iccl.inf.tu-dresden.de/web/Temporal_Logics_with_Probabilistic_Distributions2
UID://iccl.inf.tu-dresden.de/web/Temporal_Logics_with_Probabilistic_Distributions2
DTSTART:20190328T130000
DTEND:20190328T143000
LOCATION:APB 3027
DESCRIPTION:In many applications\, such as the monitoring of dynamical systems\, the data are time-dependent\, e.g.\, describing the states of a dynamical system at different points in time. Moreover\, events are more or less likely to happen at certain time points\, depending on the type of the event. There exist many well-studied distributions that characterise the nature of events around us. For example\, according to a Pareto distribution\, a.k.a. a power-law distribution\, the longer something has gone on\, the longer we expect it to continue. New companies or start-ups\, for instance\, either fail with high probability during their first year of existence\, or\, if they manage to survive for decades\, their chances of collapse are extremely small.\n\n\nIn order to capture dynamic time-dependent and probabilistic patterns of knowledge\, using basic notions from probability theory and statistics\, we have introduced temporal logics of expectation\, in which we can speak about statements not only occurring eventually in the future\, but also give additional information on when they are likely to happen. The resulting combination of a temporal DL-Lite fragment (with a two-dimensional semantics\, where one dimension is for time and the other for the DL domain) and an additional probabilistic constructor "distribution eventuality" is interpreted over multiple weighted worlds\, viz.\, temporal DL interpretations.
DTSTAMP:20190321T163823
SEQUENCE:27910
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Graph Embedding
URL://iccl.inf.tu-dresden.de/web/TBA2
UID://iccl.inf.tu-dresden.de/web/TBA2
DTSTART:20190207T130000
DTEND:20190207T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''': Recently\, graph embeddings have been taken up by the machine learning and general AI communities as a tool to solve various tasks. In this talk\, I will give a gentle introduction to the topic and some pointers to currently ongoing research. We start by looking at why graph embeddings are needed in the first place and how they can be used. We will then focus on graphs containing a large variety of information\, typically called knowledge graphs\, often represented in RDF. These graphs are hard to embed (compared to\, e.g.\, uniform simple networks) because they contain multiple edge and vertex types\, relation directionality\, literals\, etc. We will cover a few basic techniques for computing these embeddings. We plan to look into at least one example of translation-based methods\, one based on matrix decomposition\, and methods based on co-occurrence and statistical information. Finally\, we will discuss a couple of open problems and some of the topics currently being worked on.\n\n\n'''Short biography''': Michael Cochez is a postdoctoral researcher at the Fraunhofer Institute for Applied Information Technology FIT in Germany. In this position\, Michael is working on transferring research results from the academic world to industry. Besides the industry exposure\, he conducts research in areas related to data analysis and knowledge representation\, such as knowledge graph embedding\, scalable clustering\, frequent itemset mining\, stream sampling\, prototype-based ontologies\, ontology matching\, and knowledge evolution. This research is currently mainly conducted at RWTH Aachen University\, Germany. Before joining Fraunhofer\, he obtained his Ph.D. degree from the University of Jyväskylä\, Finland\, under the supervision of Vagan Terziyan and Ferrante Neri (De Montfort University\, Leicester). He obtained his master's degree from the same university and his bachelor's degree from the University of Antwerp\, Belgium. Michael Cochez is currently on partial leave from a postdoc position at the University of Jyväskylä and is also a scientific advisor for WE-OPT-IT Oy (formerly MyOpt Oy) in Finland.
DTSTAMP:20190201T103020
SEQUENCE:27647
END:VEVENT
BEGIN:VEVENT
SUMMARY:Learning Ontologies with Epistemic Reasoning: The EL Case
URL://iccl.inf.tu-dresden.de/web/Learning_Ontologies_with_Epistemic_Reasoning:_The_EL_Case
UID://iccl.inf.tu-dresden.de/web/Learning_Ontologies_with_Epistemic_Reasoning:_The_EL_Case
DTSTART:20190123T093000
DTEND:20190123T110000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''': We investigate the problem of learning description logic ontologies from entailments via queries\, using epistemic reasoning. We introduce a new learning model consisting of epistemic membership and example queries and show that polynomial learnability in this model coincides with polynomial learnability in Angluin’s exact learning model with membership and equivalence queries. We then instantiate our learning framework to EL and show some complexity results for an epistemic extension of EL where epistemic operators can be applied over the axioms. Finally\, we transfer known results for EL ontologies and its fragments to our learning model based on epistemic reasoning.
DTSTAMP:20190115T143531
SEQUENCE:27525
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ontology-Based Query Answering for Probabilistic Temporal Data
URL://iccl.inf.tu-dresden.de/web/Ontology-Based_Query_Answering_for_Probabilistic_Temporal_Data
UID://iccl.inf.tu-dresden.de/web/Ontology-Based_Query_Answering_for_Probabilistic_Temporal_Data
DTSTART:20190117T130000
DTEND:20190117T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We investigate ontology-based query answering for data that are both temporal and probabilistic\, which might occur in contexts such as stream reasoning or situation recognition with uncertain data. We present a framework that allows one to represent temporal probabilistic data\, and introduce a query language with which complex temporal and probabilistic patterns can be described. Specifically\, this language combines conjunctive queries with operators from linear time logic as well as probability operators. We analyse the complexity of evaluating queries in this language in various settings. While in some cases combining the temporal and the probabilistic dimension in this way comes at the cost of increased complexity\, we also determine cases in which this increase can be avoided.\n\nThis is a talk based on a paper that will be presented at this year's AAAI conference.
DTSTAMP:20190114T122705
SEQUENCE:27507
END:VEVENT
BEGIN:VEVENT
SUMMARY:Privacy-Preserving Ontology Publishing for EL Instance Stores
URL://iccl.inf.tu-dresden.de/web/Privacy-Preserving_Ontology_Publishing_for_EL_Instance_Stores
UID://iccl.inf.tu-dresden.de/web/Privacy-Preserving_Ontology_Publishing_for_EL_Instance_Stores
DTSTART:20190110T130000
DTEND:20190110T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL EL\, which corresponds to the setting where the ontology is an EL instance store. We introduce the notions of compliance of a concept with a policy and of the safety of a concept for a policy and show how optimal compliant (safe) generalizations of a given EL concept can be computed. In addition\, we investigate the complexity of the optimality problem.\n\n \nThis is joint work with Franz Baader and Francesco Kriegel.
DTSTAMP:20190109T151057
SEQUENCE:27479
END:VEVENT
BEGIN:VEVENT
SUMMARY:Embodied Terminology: Language\, Knowledge\, and Cognition
URL://iccl.inf.tu-dresden.de/web/Embodied_Terminology:_Language,_Knowledge,_and_Cognition
UID://iccl.inf.tu-dresden.de/web/Embodied_Terminology:_Language,_Knowledge,_and_Cognition
DTSTART:20181206T130000
DTEND:20181206T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Meaning formation in specialized language is still an open puzzle\, and several methods have been proposed to piece it together. This talk looks for such a method to the paradigm of embodied cognition\, which holds that cognitive processes\, including language production and understanding\, are deeply rooted in physical interactions with the world. More specifically\, it looks at image schemas\, which capture recurrent sensorimotor patterns that give coherence and structure to our experiences and shape our language and knowledge. Potential theoretical contributions of embodied cognition to meaning formation in specialized languages are discussed\, alongside automated methods for the identification of image schemas in natural languages. A coherent\, robust\, and language-agnostic theory of and method for embodied terminology holds the promise to boost socio-economically effective\, cognitively grounded\, and technologically powerful terminology management and translation technologies.\n\n\n'''Title of the lecture:''' Translational Terminology Science and Translation Technologies (held in German: "Translatorische Terminologiewissenschaft und Übersetzungstechnologien")\n\n'''Lecture description:''' This demonstration lesson will be held in German\, since this is required for the hearing\, and will represent the second lesson of the Master-level lecture on translational terminology science and translation technologies. Technologies of specialized communication will be discussed\, with a particular focus on multilingual\, systematic\, and onomasiological terminology management.\n\n'''Details:''' This '''research talk''' and the following '''demonstration lesson''' represent a trial run for a hearing within the application procedure for a '''tenure-track professorship''' in terminology science and translation technologies. Please join and ask many challenging questions.
DTSTAMP:20181203T100011
SEQUENCE:27219
END:VEVENT
BEGIN:VEVENT
SUMMARY:Satisfiability in the Triguarded Fragment of First-Order Logic
URL://iccl.inf.tu-dresden.de/web/Satisfiability_in_the_Triguarded_Fragment_of_First-Order_Logic
UID://iccl.inf.tu-dresden.de/web/Satisfiability_in_the_Triguarded_Fragment_of_First-Order_Logic
DTSTART:20181129T130000
DTEND:20181129T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Most Description Logics (DLs) can be translated into well-known decidable fragments of first-order logic (FO)\, including the guarded fragment GF and the two-variable fragment FO2. Given their prominence in DL research\, we take a closer look at GF and FO2\, and present a new fragment that subsumes both. This fragment\, called the triguarded fragment (denoted TGF)\, is obtained by relaxing the standard definition of GF: quantification is required to be guarded only for subformulae with three or more free variables. We show that satisfiability of equality-free TGF is N2ExpTime-complete\, but becomes NExpTime-complete if we bound the arity of predicates by a constant (a natural assumption in the context of DLs). Finally\, we observe that many natural extensions of TGF\, including the addition of equality\, lead to undecidability.\n\n\nThis talk is a presentation given at the 31st International Workshop on Description Logics\, 2018.
DTSTAMP:20181104T132422
SEQUENCE:27062
END:VEVENT
BEGIN:VEVENT
SUMMARY:Big Data Variety: On-Demand Data Integration
URL://iccl.inf.tu-dresden.de/web/Big_Data_Variety:_On-Demand_Data_Integration
UID://iccl.inf.tu-dresden.de/web/Big_Data_Variety:_On-Demand_Data_Integration
DTSTART:20181126T145000
DTEND:20181126T162000
LOCATION:APB 3105
DESCRIPTION:'''Abstract.''' As big data systems get more complex\, the data variety challenge has become the driving factor in current big data projects. From a technical perspective\, data variety mainly boils down to data integration\, which\, unfortunately\, is far from being a solved problem. Current efforts highlight the need to broaden the perspective beyond the data community and use semantic-aware formalisms\, such as knowledge graphs\, to tackle this problem. In this talk\, we will review the current state of the art of the data variety challenge and present recent solutions to manage the problem.\n\n\n'''Bio.''' I’m currently a tenure-track 2 lecturer at the Departament d’Enginyeria de Serveis i Sistemes d’Informació (ESSI)\, which belongs to the Universitat Politècnica de Catalunya (UPC-BarcelonaTech). I also coordinate the IT4BI Erasmus Mundus Master at UPC and the Big Data Management and Analytics postgraduate course at UPC School. Although my hometown is Lleida\, I have lived in Barcelona for more than 10 years. In March 2004 I obtained my bachelor’s degree in Informatics Engineering at the Facultat d’Informàtica de Barcelona (FIB). Later\, in February 2010\, I obtained my doctoral degree in Computing with a PhD thesis directed by Dr. Alberto Abelló and entitled “Automating the Multidimensional Design of Data Warehouses”. My main topics of interest are business intelligence\, Big Data\, and the semantic web. My PhD thesis focused on data warehousing\, but since then I have been working on many other topics\, such as NOSQL (and any technology beyond relational databases)\, bridging Big Data management and analytics\, open data platforms (mostly at the database level)\, recommendation systems\, and semantic-aware systems (based on or exploiting semantic formalisms such as ontology languages or RDF). I am also interested in agile methodologies and formalisms to incorporate non-technical people in the design\, maintenance\, and evolution of database systems.
DTSTAMP:20181121T124047
SEQUENCE:27171
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending Datalog with Sets Using an Encoding in Existential Rules
URL://iccl.inf.tu-dresden.de/web/Extending_Datalog_with_Sets_Using_an_Encoding_in_Existential_Rules
UID://iccl.inf.tu-dresden.de/web/Extending_Datalog_with_Sets_Using_an_Encoding_in_Existential_Rules
DTSTART:20181126T130000
DTEND:20181126T143000
LOCATION:APB 3027
DESCRIPTION:We extend Datalog with sets in order to facilitate modelling with this logic\, and define the resulting extended logic. To allow practical reasoning in this logic\, we present a translation algorithm into first-order logic and prove its correctness. Furthermore\, we show that this translation exhibits optimal runtimes during the restricted chase compared to Datalog\, which means that reasoning in the extended logic is practically viable. Lastly\, we explore possible applications of this logic in practice.\n\n\nThis presentation and the subsequent question session constitute a defence for acquiring a Bachelor of Science degree in computer science.
DTSTAMP:20181110T102350
SEQUENCE:27105
END:VEVENT
BEGIN:VEVENT
SUMMARY:Standard and Non-Standard Inferences in the Description Logic FL0 Using Tree Automata
URL://iccl.inf.tu-dresden.de/web/TBA
UID://iccl.inf.tu-dresden.de/web/TBA
DTSTART:20181115T130000
DTEND:20181115T143000
LOCATION:APB 3027
DESCRIPTION:Although quite inexpressive\, the description logic (DL) FL0\, which provides only conjunction\, value restriction and the top concept as concept constructors\, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL0 TBoxes is coNP-complete\, and even becomes ExpTime-complete when general TBoxes are used. In this talk\, I will describe an approach that uses automata working on infinite trees to solve both standard and non-standard inferences in FL0 w.r.t. general TBoxes. I will start by sketching an alternative proof of the ExpTime upper bound for subsumption in FL0 w.r.t. general TBoxes based on the use of looping tree automata. Afterwards\, I will explain how to employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer w.r.t. general TBoxes.
DTSTAMP:20181120T122106
SEQUENCE:27159
END:VEVENT
BEGIN:VEVENT
SUMMARY:From Horn-SRIQ to Datalog: A Data-Independent Transformation that Preserves Assertion Entailment
URL://iccl.inf.tu-dresden.de/web/From_Horn-SRIQ_to_Datalog:_A_Data-Independent_Transformation_that_Preserves_Assertion_Entailment
UID://iccl.inf.tu-dresden.de/web/From_Horn-SRIQ_to_Datalog:_A_Data-Independent_Transformation_that_Preserves_Assertion_Entailment
DTSTART:20181108T130000
DTEND:20181108T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Ontology-based access to large data-sets has recently gained a lot of attention. To access data efficiently\, one approach is to rewrite the ontology into Datalog\, and then use powerful Datalog engines to compute implicit entailments. Existing rewriting techniques support Description Logics (DLs) from ELH to Horn-SHIQ. We go one step further and present one such data-independent rewriting technique for Horn-SRIQ\, the extension of Horn-SHIQ that supports non-transitive\, complex roles---an expressive feature prominently used in many real-world ontologies. We evaluated our rewriting technique on a large known corpus of ontologies. Our experiments show that the resulting rewritings are of moderate size and that our approach is more efficient than state-of-the-art DL reasoners when reasoning with data-intensive ontologies.\n\n\nThis is joint work with Larry González and Patrick Koopman. It has been accepted at AAAI 2019.
DTSTAMP:20181104T111047
SEQUENCE:27061
END:VEVENT
BEGIN:VEVENT
SUMMARY:Temporal constraint satisfaction problems in least fixed point logic
URL://iccl.inf.tu-dresden.de/web/Temporal_constraint_satisfaction_problems_in_least_fixed_point_logic
UID://iccl.inf.tu-dresden.de/web/Temporal_constraint_satisfaction_problems_in_least_fixed_point_logic
DTSTART:20181101T130000
DTEND:20181101T143000
LOCATION:APB 3027
DESCRIPTION:The constraint satisfaction problem (CSP) for a fixed structure L with finite relational signature is the computational problem of deciding whether a given finite structure of the same signature homomorphically maps to L. A temporal constraint language is a structure over the rational numbers Q whose relations are first-order definable in (Q\;<). In 2009\, Bodirsky and Kara presented a complete classification of the computational complexity of CSPs for temporal constraint languages. In contrast to finite domain structures\, there are temporal constraint languages whose CSP cannot be solved by any Datalog program but can be expressed in least fixed point logic (LFP). An example is CSP(Q\; {(x\,y\,z) | x>y or x>z})\, known as the and/or scheduling problem. I will give a proof of a dichotomy for LFP expressibility of CSPs of temporal constraint languages. For a temporal constraint language L\, exactly one of the following is true: either L interprets all finite structures primitively positively with parameters and CSP(L) is inexpressible in LFP\, or CSP(L) is inexpressible in LFP if and only if L admits a primitive positive definition of the relation X := {(x\,y\,z) | x>y=z or y>z=x or z>x=y}.
DTSTAMP:20181029T100450
SEQUENCE:27007
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ontological Modelling in Wikidata
URL://iccl.inf.tu-dresden.de/web/Ontological_Modelling_in_Wikidata
UID://iccl.inf.tu-dresden.de/web/Ontological_Modelling_in_Wikidata
DTSTART:20181025T130000
DTEND:20181025T143000
LOCATION:APB 3027
DESCRIPTION:This was an invited keynote talk at the 9th Workshop on Ontology Design and Patterns (WOP'18)\, 2018.\n\n\n'''Abstract:''' Wikidata\, the knowledge base of Wikimedia\, has been extremely successful in building and sustaining new communities of editors and users. Since its inception in 2012\, it has developed from an experimental “data wiki” into a well-organised reference knowledge base with an amazing array of applications. Developing an ontological schema for such an open and rapidly expanding project is a huge undertaking\, and difficult challenges arise on many levels. The community has directed significant efforts towards vocabulary development\, many guidelines and rules have been created\, and tools are used for helping editors to avoid and correct modelling errors. Nevertheless\, the distributed nature of Wikidata editing often means that ontology design\, too\, is distributed\, and a coherent global view is only being worked on once significant amounts of data have been added. The result is a knowledge graph with a widely varying modelling quality across different sub-domains.\n\nThe big question for researchers is how their insights and methods can help here. The Wikidata community is widely aware of semantic web activities and existing standards and academic publications play a role in many discussions. Yet\, there seems to be only little direct exchange between the communities. In this talk\, I will review the current state of Wikidata and its connection to semantic web standards such as RDF and SPARQL. I will try to raise awareness of the particular requirements of Wikidata\, and argue that these are of general interest for the data-driven curation of knowledge graphs.
DTSTAMP:20181019T105041
SEQUENCE:26935
END:VEVENT
BEGIN:VEVENT
SUMMARY:Making Repairs in Description Logics More Gentle
URL://iccl.inf.tu-dresden.de/web/Making_Repairs_in_Description_Logics_More_Gentle
UID://iccl.inf.tu-dresden.de/web/Making_Repairs_in_Description_Logics_More_Gentle
DTSTART:20181018T133000
DTEND:20181018T143000
LOCATION:APB 3027
DESCRIPTION:Abstract:\n"The classical approach for repairing a Description Logic ontology O in the sense of removing an unwanted consequence α is to delete a minimal number of axioms from O such that the resulting ontology O′ does not have the consequence α. However\, the complete deletion of axioms may be too rough\, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem\, we propose a more gentle notion of repair in which axioms are not deleted but only weakened. On the one hand\, we investigate the general properties of this gentle repair method. On the other hand\, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic EL."\n\nThis is a rehearsal talk for KR 2018.
DTSTAMP:20181018T083442
SEQUENCE:26904
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Combined Approach to Query Answering in Horn-ALCHOIQ
URL://iccl.inf.tu-dresden.de/web/The_Combined_Approach_to_Query_Answering_in_Horn-ALCHOIQ2
UID://iccl.inf.tu-dresden.de/web/The_Combined_Approach_to_Query_Answering_in_Horn-ALCHOIQ2
DTSTART:20181018T130000
DTEND:20181018T133000
LOCATION:APB 3027
DESCRIPTION:Abstract:\n"Combined approaches have become a successful technique for solving conjunctive query (CQ) answering over description logics (DL) ontologies. Nevertheless\, existing approaches are restricted to tractable DL languages. In this work\, we extend the combined method to the more expressive DL Horn-ALCHOIQ---a language for which CQ answering is ExpTime-complete---in order to develop an efficient and scalable CQ answering procedure which is worst-case optimal for Horn-ALCHOIQ and ELHO ontologies. We implement and study the feasibility of our algorithm\, and compare its performance to the DL reasoner Konclude."\n\nThis is a rehearsal talk for KR 2018.
DTSTAMP:20181018T083532
SEQUENCE:26905
END:VEVENT
BEGIN:VEVENT
SUMMARY:Efficient Model Construction for Horn Logic with VLog: Extended Abstract
URL://iccl.inf.tu-dresden.de/web/Efficient_Model_Construction_for_Horn_Logic_with_VLog:_Extended_Abstract
UID://iccl.inf.tu-dresden.de/web/Efficient_Model_Construction_for_Horn_Logic_with_VLog:_Extended_Abstract
DTSTART:20181011T130000
DTEND:20181011T143000
LOCATION:APB 3027
DESCRIPTION:Abstract: "We extend the Datalog engine VLog to develop a column-oriented implementation of the Skolem and the restricted chase – two variants of a sound and complete algorithm used for model construction over theories of existential rules. We conduct an extensive evaluation over several data-intensive theories with millions of facts and thousands of rules\, and show that VLog can compete with the state of the art\, regarding runtime\, scalability\, and memory efficiency."\n\n\nThis is a rehearsal talk for the DL Workshop 2018.
DTSTAMP:20181018T013010
SEQUENCE:26721
END:VEVENT
BEGIN:VEVENT
SUMMARY:Report on research visit to NRC
URL://iccl.inf.tu-dresden.de/web/Report_on_research_visit_to_NRC
UID://iccl.inf.tu-dresden.de/web/Report_on_research_visit_to_NRC
DTSTART:20181004T130000
DTEND:20181004T143000
LOCATION:APB 3027
DESCRIPTION:Report on research visit to NRC
DTSTAMP:20181210T125955
SEQUENCE:27274
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending Matching in Description Logics
URL://iccl.inf.tu-dresden.de/web/Extending_Matching_in_Description_Logics
UID://iccl.inf.tu-dresden.de/web/Extending_Matching_in_Description_Logics
DTSTART:20180927T130000
DTEND:20180927T143000
LOCATION:APB 2026
DESCRIPTION:Matching concept descriptions against concept patterns was introduced as a non-standard inference task in Description Logics\, motivated by applications in knowledge bases. It was first investigated for the DL FL0\, for which a polynomial-time algorithm was obtained. In this talk\, we will present recent results that extend matching in FL0 in two directions. First\, we investigate approximate matching\, which increases the recall of the method by checking whether the input description and pattern can “almost” match\, rather than requiring them to be equivalent. The meaning of “almost” is formalised using distance measures between concepts. Second\, we examine matching in the presence of TBoxes\, i.e.\, background knowledge in the form of concept inclusions. It is well known that the complexity of subsumption reasoning in FL0 rises from P to ExpTime when TBoxes are considered\, and we prove that the same is true for matching. Finally\, we present some preliminary ideas on approximate matching in the presence of TBoxes.
DTSTAMP:20181018T013053
SEQUENCE:26723
END:VEVENT
BEGIN:VEVENT
SUMMARY:Complexity of Projection with Stochastic Actions in a Probabilistic Description Logic
URL://iccl.inf.tu-dresden.de/web/Complexity_of_Projection_with_Stochastic_Actions_in_a_Probabilistic_Description_Logic
UID://iccl.inf.tu-dresden.de/web/Complexity_of_Projection_with_Stochastic_Actions_in_a_Probabilistic_Description_Logic
DTSTART:20180920T130000
DTEND:20180920T143000
LOCATION:APB 3027
DESCRIPTION:Integrating probabilistic notions of uncertainty into languages for reasoning about actions is a popular approach to deal adequately with\, for instance\, possibly fallible acting and sensing. In this talk\, an action language extended with quantitative notions of uncertainty is considered. In our setting\, the initial beliefs of an agent are represented as a probabilistic knowledge base with axioms formulated in the Description Logic ALCO. Action descriptions describe the possibly context-sensitive and nondeterministic effects of actions and provide likelihood distributions over the different possible outcomes. A decidability result for the probabilistic projection problem\, which is the basic reasoning task needed for predicting the outcome of action sequences\, is presented. Furthermore\, I will talk about how the nondeterminism in the action model affects the complexity of the projection problem.
DTSTAMP:20181018T013111
SEQUENCE:26724
END:VEVENT
BEGIN:VEVENT
SUMMARY:Faceted Answer-Set Navigation
URL://iccl.inf.tu-dresden.de/web/Faceted_Answer-Set_Navigation
UID://iccl.inf.tu-dresden.de/web/Faceted_Answer-Set_Navigation
DTSTART:20180906T130000
DTEND:20180906T143000
LOCATION:APB 3027
DESCRIPTION:Abstract: Even for small logic programs\, the number of resulting answer-sets can be tremendous. In such cases\, users might be incapable of comprehending the space of answer-sets as a whole or of identifying a specific answer-set according to their needs. To overcome this difficulty\, we propose a general formal framework that takes an arbitrary logic program as input and allows for navigating the space of answer-sets in a systematic\, interactive way analogous to faceted browsing. The navigation is carried out stepwise\, where each step narrows down the remaining solutions\, eventually arriving at a single one. We formulate two navigation modes: a stringent\, conflict-avoiding mode\, and a “free” mode\, in which conflicting selections of facets might occur. For the latter mode\, we provide efficient algorithms for resolving the conflicts. We provide an implementation of our approach and demonstrate that our framework is able to handle logic programs for which it is currently infeasible to retrieve all answer-sets.\n\n\nThis is joint work with Sebastian Rudolph and Lukas Schweizer\, to appear in the Proceedings of the 2nd International Joint Conference on Rules and Reasoning\, 2018.
DTSTAMP:20181018T013124
SEQUENCE:26725
END:VEVENT
BEGIN:VEVENT
SUMMARY:Weighted Model Counting on the GPU by Exploiting Small Treewidth
URL://iccl.inf.tu-dresden.de/web/Weighted_Model_Counting_on_the_GPU_by_Exploiting_Small_Treewidth
UID://iccl.inf.tu-dresden.de/web/Weighted_Model_Counting_on_the_GPU_by_Exploiting_Small_Treewidth
DTSTART:20180830T130000
DTEND:20180830T143000
LOCATION:APB 3027
DESCRIPTION:We propose a novel solver that efficiently finds almost the exact number of solutions of a Boolean formula (#Sat) and the weighted model count of a weighted Boolean formula (WMC) if the treewidth of the given formula is sufficiently small. The basis of our approach is dynamic programming on tree decompositions\, which we engineered towards efficient parallel execution on the GPU. We provide thorough experiments and compare the runtime of our system with state-of-the-art #Sat and WMC solvers. Our results are encouraging in the sense that even complex reasoning problems can be tackled by parameterized algorithms executed on the GPU if instances have treewidth at most 30\, which is the case for more than half of the counting and weighted counting benchmark instances.
DTSTAMP:20181018T013201
SEQUENCE:26727
END:VEVENT
BEGIN:VEVENT
SUMMARY:Classification of the finite polymorphism-homogeneous tournaments with loops
URL://iccl.inf.tu-dresden.de/web/Classification_of_the_finite_polymorphism-homogeneous_tournaments_with_loops
UID://iccl.inf.tu-dresden.de/web/Classification_of_the_finite_polymorphism-homogeneous_tournaments_with_loops
DTSTART:20180823T130000
DTEND:20180823T143000
LOCATION:APB 3027
DESCRIPTION:The classical notion of "homogeneity" plays a part in model theory. In this talk we look at a variant of this property\, the so-called "polymorphism-homogeneity"\, first explicitly examined by Christian and Maja Pech\, by classifying the finite tournaments (loops allowed) that are polymorphism-homogeneous. For this\, we utilize the classification of the finite homomorphism-homogeneous tournaments with loops due to A. Ilić\, D. Mašulović and U. Rajković.
DTSTAMP:20181018T013335
SEQUENCE:26728
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Single Approach to Decide Chase Termination on Linear Existential Rules
URL://iccl.inf.tu-dresden.de/web/A_Single_Approach_to_Decide_Chase_Termination_on_Linear_Existential_Rules
UID://iccl.inf.tu-dresden.de/web/A_Single_Approach_to_Decide_Chase_Termination_on_Linear_Existential_Rules
DTSTART:20180809T130000
DTEND:20180809T143000
LOCATION:APB 3027
DESCRIPTION:**Abstract:** Existential rules are a knowledge representation and reasoning formalism that extends both positive rules à la Datalog and most description logics used in ontology-based query answering. The chase is a fundamental tool for reasoning on knowledge bases composed of an instance and a set of existential rules. It is well known that deciding\, given a chase variant and a set of existential rules\, whether the chase will halt for any instance is an undecidable problem. Hence\, a crucial issue is whether it becomes decidable for known classes of existential rules. We focus on three main chase variants\, namely the semi-oblivious\, restricted and core chase\, and consider linear existential rules\, a simple yet important subclass of existential rules. We show that the termination problem is decidable for these three variants with a novel unified approach based on a single graph and a single notion of forbidden pattern.\n\nJoint work with M. Leclère\, M.-L. Mugnier and F. Ulliana.\n\n**Speaker Bio:** Michaël Thomazo is an INRIA researcher (CEDAR team)\, currently working on ontology-based query answering\, and especially the efficient evaluation of reformulated queries. He was previously a post-doc at TU Dresden\, working with S. Rudolph\, and obtained his Ph.D. from the University of Montpellier\, supervised by J.-F. Baget and M.-L. Mugnier. You can find more details at http://pages.saclay.inria.fr/michael.thomazo/.
DTSTAMP:20181018T013523
SEQUENCE:26730
END:VEVENT
BEGIN:VEVENT
SUMMARY:Modal Separation Logic: an ongoing quest for elementary complexity
URL://iccl.inf.tu-dresden.de/web/Modal_Separation_Logic:_an_ongoing_quest_for_elementary_complexity
UID://iccl.inf.tu-dresden.de/web/Modal_Separation_Logic:_an_ongoing_quest_for_elementary_complexity
DTSTART:20180808T130000
DTEND:20180808T140000
LOCATION:TBA
DESCRIPTION:The talk will be dedicated to my recent work on Quantified Computation Tree Logic (QCTL) with Stephane Demri (LSV\, ENS Paris-Saclay\, France) and my ongoing work with Stephane and Raul Fervari (Universidad Nacional de Córdoba\, Argentina) on Modal Separation Logic (MSL).\n\n'''Abstract:''' Modal separation logic (MSL) is a logic introduced recently by Demri and Fervari in [DF-aiml18]\, whose models are memory states as in separation logic and whose logical connectives include modal operators as well as separating conjunction and magic wand. With these operators\, some fragments of MSL can be seen as genuine modal logics\, whereas others capture standard separation logics\, leading to an original language to speak about memory states. The decidability status and the computational complexity of several fragments of MSL were established in [DF-aiml18]\, leading to surprising hardness or easiness results\, obtained by designing proof methods that take into account the modal and separation features of MSL.\n\nWe will focus on a fragment of MSL in which only the <>^{-1} (predecessor) and * (separating conjunction) operators are allowed. It is not hard to see that this fragment is decidable\, by employing a theorem about the decidability of MSO with one functional symbol. However\, the exact complexity of the logic remains unknown. During the talk I will present an approach to solve this problem by translating MSL into QCTL_EX\, an extension of the well-known CTL (where only the EX operator is allowed) with propositional quantification. We will see how this approach failed\, but as a by-product we obtain several nice complexity and expressivity results for QCTL_EX. We will also see what makes QCTL_EX hard and why there is still a chance that the mentioned fragment of MSL could have elementary complexity.\n\n[DF-aiml18] S. Demri and R. Fervari. On the complexity of modal separation logics. In AiML'18. College Publications\, August 2018. To appear.\n\n'''Bio:''' Bartosz Bednarczyk - University of Wrocław & LSV\, ENS Paris-Saclay & University of Oxford
DTSTAMP:20181019T105240
SEQUENCE:26936
END:VEVENT
BEGIN:VEVENT
SUMMARY:Towards Conjunctive Query Answering in Defeasible EL_bot
URL://iccl.inf.tu-dresden.de/web/Towards_Conjunctive_Query_Answering_in_Defeasible_EL_bot
UID://iccl.inf.tu-dresden.de/web/Towards_Conjunctive_Query_Answering_in_Defeasible_EL_bot
DTSTART:20180726T130000
DTEND:20180726T143000
LOCATION:APB 3027
DESCRIPTION:In Defeasible Description Logics (DDLs)\, typical or default information about concepts can be stated in the form of defeasible concept inclusions (DCIs) in a DBox. As opposed to general concept inclusions (GCIs)\, which are strict in the sense of\, e.g.\, “every element of C must be an element of D”\, defeasible statements are vaguer and allow one to state that “elements of C are usually elements of D”\, where the term “usually” indicates that DCIs are satisfied individually for elements not satisfying properties contradicting the DCI. In 2017\, we discovered and resolved a severe shortcoming of earlier approaches to defining semantics for DDLs rooted in the preferential approach of KLM. We introduce a new kind of canonical model in order to determine consequences in defeasible EL_bot. These so-called typicality models extend classical canonical models by additional concept representatives\, which are individually forced to satisfy certain DCIs. Typicality models allow us to reconstruct the consequences obtained by previous approaches and to extend them appropriately to alleviate the discovered insufficiency. More recently\, we have extended this formalism beyond the reasoning service of subsumption\, so that it is also capable of deciding defeasible instance checks.\n\nThe natural next step is to strengthen the deductive power of our formalism to enable conjunctive query answering (CQA) in defeasible EL_bot\, a reasoning service that\, to the best of our knowledge\, has not been studied for DDLs. We conjecture that the query rewriting approach by Lutz et al. (2009)\, which allows the use of classical canonical models to answer conjunctive queries\, can be adapted to answer defeasible conjunctive queries by using typicality models\, due to their canonical nature.\n\nThis endeavour is still in its very early stages\, and the aspects of this talk concerning CQA are preliminary ideas and working theories. Input and feedback from the audience is greatly appreciated.
DTSTAMP:20181018T013556
SEQUENCE:26731
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Dichotomy for Evaluating Simple Regular Path Queries
URL://iccl.inf.tu-dresden.de/web/A_Dichotomy_for_Evaluating_Simple_Regular_Path_Queries
UID://iccl.inf.tu-dresden.de/web/A_Dichotomy_for_Evaluating_Simple_Regular_Path_Queries
DTSTART:20180705T130000
DTEND:20180705T143000
LOCATION:APB 3027
DESCRIPTION:Regular path queries (RPQs) are a central component of graph databases. We investigate decision and enumeration problems concerning the evaluation of RPQs under several semantics that have recently been considered: arbitrary paths\, shortest paths\, and simple paths.\nWhereas arbitrary and shortest paths can be enumerated with polynomial delay\, the situation is much more intricate for simple paths. For instance\, already the question of whether a given graph contains a simple path of a certain length has cases with highly non-trivial solutions and cases that are long-standing open problems. We study RPQ evaluation for simple paths from a parameterized complexity perspective and define a class of simple transitive expressions that is prominent in practice and for which we can prove a dichotomy for the evaluation problem. We observe that\, even though simple-path semantics is intractable for RPQs in general\, it is feasible for the vast majority of RPQs that are used in practice. At the heart of our study on simple paths is a result of independent interest: the two disjoint paths problem in directed graphs is W[1]-hard if parameterized by the length of one of the two paths.\n\n'''Bio:''' Wim Martens is a professor for Theoretical Computer Science at the University of Bayreuth. He is interested in theoretical aspects of data management\, formal language theory\, logic\, and complexity. He was an invited speaker at STOC 2017\, and his research has received several awards\, including the ACM SIGMOD Research Highlight Award and the Dissertation Award for Computer Science\, Belgium. Currently\, he is on the editorial board of ACM TODS and is chairing the ICDT Council. His talk reports on research for which he recently received the Best Paper Award of the International Conference on Database Theory 2018.
DTSTAMP:20181019T105441
SEQUENCE:26938
END:VEVENT
BEGIN:VEVENT
SUMMARY:Provenance and Probabilities in Relational Databases
URL://iccl.inf.tu-dresden.de/web/Provenance_and_Probabilities_in_Relational_Databases
UID://iccl.inf.tu-dresden.de/web/Provenance_and_Probabilities_in_Relational_Databases
DTSTART:20180622T093000
DTEND:20180622T110000
LOCATION:APB 3027
DESCRIPTION:We review the basics of data provenance in relational databases. We describe different provenance formalisms\, from Boolean provenance to provenance semirings and beyond\, that can be used for a wide variety of purposes to obtain additional information on the output of a query. We discuss representation systems for data provenance\, circuits in particular\, with a focus on practical implementation. Finally\, we explain how provenance is used in practice for probabilistic query evaluation in probabilistic databases.\n\n\n'''Bio:''' Pierre Senellart is a Professor in the Computer Science Department at the École normale supérieure (ENS\, PSL University) in Paris\, France\, and an Adjunct Professor at Télécom ParisTech. He is an alumnus of ENS and obtained his M.Sc. (2003) and Ph.D. (2007) in computer science from Université Paris-Sud\, studying under the supervision of Serge Abiteboul. Before joining ENS\, he was an Associate Professor (2008–2013) and then a Professor (2013–2016) at Télécom ParisTech. He also held secondary appointments as Lecturer at the University of Hong Kong in 2012–2013\, and as Senior Research Fellow at the National University of Singapore from 2014 to 2016. His research interests focus on practical and theoretical aspects of Web data management\, including Web crawling and archiving\, Web information extraction\, uncertainty management\, Web mining\, and intensional data management.
DTSTAMP:20181019T105352
SEQUENCE:26937
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Combined Approach to Query Answering in Horn-ALCHOIQ
URL://iccl.inf.tu-dresden.de/web/The_Combined_Approach_to_Query_Answering_in_Horn-ALCHOIQ
UID://iccl.inf.tu-dresden.de/web/The_Combined_Approach_to_Query_Answering_in_Horn-ALCHOIQ
DTSTART:20180607T130000
DTEND:20180607T143000
LOCATION:APB 3027
DESCRIPTION:Abstract: "Combined approaches have become a successful technique for solving conjunctive query (CQ) answering over description logics (DL) ontologies. Nevertheless\, existing approaches are restricted to tractable DL languages. In this work\, we extend the combined method to the more expressive DL Horn-ALCHOIQ—a language for which CQ answering is EXPTIME-complete—in order to develop an efficient and scalable CQ answering procedure which is worst-case optimal for Horn-ALCHOIQ and ELHO ontologies. We implement and study the feasibility of our algorithm\, and compare its performance to the DL reasoner Konclude."\n\n\nThis paper has been submitted to KR 2018. The extended technical report can be found at https://iccl.inf.tu-dresden.de/web/Techreport3024/en.
DTSTAMP:20181018T013645
SEQUENCE:26734
END:VEVENT
BEGIN:VEVENT
SUMMARY:Efficient Model Construction for Horn Logic with VLog
URL://iccl.inf.tu-dresden.de/web/Efficient_Model_Construction_for_Horn_Logic_with_VLog
UID://iccl.inf.tu-dresden.de/web/Efficient_Model_Construction_for_Horn_Logic_with_VLog
DTSTART:20180531T130000
DTEND:20180531T143000
LOCATION:APB 3027
DESCRIPTION:Abstract: \n "We extend the Datalog engine VLog to develop a column-oriented implementation of the Skolem and the restricted chase – two variants of a sound and complete algorithm used for model construction over theories of existential rules. We conduct an extensive evaluation over several data-intensive theories with millions of facts and thousands of rules\, and show that VLog can compete with the state of the art\, regarding runtime\, scalability\, and memory efficiency."\n\nThis work was developed in collaboration with Jacopo Urbani and Ceriel Jacobs from Vrije Universiteit Amsterdam. The paper will be presented and published at IJCAR 2018. It is available at https://iccl.inf.tu-dresden.de/web/Article3046/en.
DTSTAMP:20181018T013657
SEQUENCE:26735
END:VEVENT
BEGIN:VEVENT
SUMMARY:A proof of CSP Dichotomy conjecture
URL://iccl.inf.tu-dresden.de/web/A_proof_of_CSP_Dichotomy_conjecture
UID://iccl.inf.tu-dresden.de/web/A_proof_of_CSP_Dichotomy_conjecture
DTSTART:20180525T131500
DTEND:20180525T144500
LOCATION:WIL/C115\, or in WIL/C204 if more space is needed
DESCRIPTION:Many natural combinatorial problems can be expressed as constraint satisfaction problems. This class of problems is known to be NP-complete in general\, but certain restrictions on the form of the constraints can ensure tractability. The standard way to parameterize interesting subclasses of the constraint satisfaction problem is via finite constraint languages. The main problem is to classify those subclasses that are solvable in polynomial time and those that are NP-complete. It was conjectured that if a constraint language has a weak near unanimity polymorphism then the corresponding constraint satisfaction problem is tractable\, and otherwise it is NP-complete. The hardness result has been known since 2001. We present an algorithm that solves the constraint satisfaction problem in polynomial time for constraint languages having a weak near unanimity polymorphism\, which proves the remaining part of the conjecture.
DTSTAMP:20180517T104618
SEQUENCE:25823
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adaptive Language Interaction
URL://iccl.inf.tu-dresden.de/web/Adaptive_Language_Interaction
UID://iccl.inf.tu-dresden.de/web/Adaptive_Language_Interaction
DTSTART:20180503T110000
DTEND:20180503T123000
LOCATION:APB 1004
DESCRIPTION:This talk is part of Dresden Talks on Interaction & Visualization: \n\n\nLanguage-based interaction with digital agents (e.g. Siri\, Alexa) has become ubiquitous\, and is used in various situations and by an increasingly large variety of different users. Research shows\, however\, that a dialog system should not just be able to understand and generate language correctly\, but that it should also adapt the way it formulates its messages to fit the user and the situation (for instance\, it should use simpler formulations to avoid distraction during driving).\n\nIn this talk\, I will start out by presenting an information-theoretic measure\, surprisal\, as a way of quantifying linguistically induced cognitive load on a word-by-word basis. I will then proceed to talk about neural network models that we have recently developed to estimate semantic surprisal\, i.e. the amount of cognitive load that will be caused by an unexpected word like "bathtub" in a context such as "I did the dishes in the bathtub."\n\nFinally\, I will report on our recent work using a novel pupillometry-based measure of cognitive load\, the Index of Cognitive Activity (ICA)\, which allows us to assess cognitive load in dual-task settings such as driving a car.
DTSTAMP:20180418T103555
SEQUENCE:25660
END:VEVENT
BEGIN:VEVENT
SUMMARY:A pragmatic approach to translation: Vocabulary alignment through Multiagent Interaction and Observation
URL://iccl.inf.tu-dresden.de/web/A_pragmatic_approach_to_translation:_Vocabulary_alignment_through_Multiagent_Interaction_and_Observation
UID://iccl.inf.tu-dresden.de/web/A_pragmatic_approach_to_translation:_Vocabulary_alignment_through_Multiagent_Interaction_and_Observation
DTSTART:20180426T130000
DTEND:20180426T143000
LOCATION:APB 3027
DESCRIPTION:Talk by: Paula Chocrón \nInstitute: Artificial Intelligence Research Institute (IIIA-CSIC)\, Barcelona\, Spain \n\nAbstract: \n"Every speaker that has been abroad knows that understanding a foreign language is easier when performing simple\, well-defined interactions. For example\, it is easier to ask for a coffee than to discuss politics in a language we do not master. In this talk I will discuss how this idea can be applied to help achieve meaningful communication in artificial multi-agent systems. In open\, heterogeneous environments\, it is likely that interlocutors with different backgrounds use different languages. Can the contextual information about the tasks being performed be used to learn a translation that allows agents to interact?"\n\nI will start by presenting a notion of context that is based on the formal specifications of the tasks performed by agents. I will then show how this context can be used by agents to align their vocabularies dynamically\, by learning mappings from the experience of previous interactions. In doing so\, we will also rethink the traditional approach to semantic matching and its evaluation\, tackling the following questions: What does it mean for agents to "understand each other"? When is an alignment good for a particular application? Finally\, I will present an application to agents that interact using procedural protocols obtained from the WikiHow website\, showing how they can infer a translation between English and Spanish without using external resources.
DTSTAMP:20181018T013833
SEQUENCE:26736
END:VEVENT
BEGIN:VEVENT
SUMMARY:Getting the most out of Wikidata: Semantic Technology Usage in Wikipedia’s Knowledge Graph
URL://iccl.inf.tu-dresden.de/web/Seminar_talk,_title:_%22Practical_Linked_Data_Access_via_SPARQL:_The_Case_of_Wikidata%22
UID://iccl.inf.tu-dresden.de/web/Seminar_talk,_title:_%22Practical_Linked_Data_Access_via_SPARQL:_The_Case_of_Wikidata%22
DTSTART:20180420T095000
DTEND:20180420T110000
LOCATION:APB 3105
DESCRIPTION:Abstract:\n"Wikidata\, the knowledge base of Wikimedia\, has been extremely successful in building and sustaining new communities of editors and users. Since its inception in 2012\, it has developed from an experimental “data wiki” into a well-organised reference knowledge base with an amazing array of applications. You can now come across Wikidata content in your mobile phone\, in newspaper articles\, and even in aeroplanes. A key towards this success has been the development of a whole ecosystem of tools and APIs for accessing\, querying\, and analysing Wikidata content. The most prominent such feature is Wikidata's powerful query service\, which is based on the Semantic Web data exchange standard RDF and its related query language SPARQL. The talk will give an overview of these developments and focus specifically on the possibilities and current usage of SPARQL on Wikidata."\n\nThis is joint work with Adrian Bielefeldt and Julius Gonsior. The paper is at https://iccl.inf.tu-dresden.de/web/Inproceedings3196. This talk is a preparation for the presentation at WWW.
DTSTAMP:20180420T062015
SEQUENCE:25678
END:VEVENT
BEGIN:VEVENT
SUMMARY:Integrating Semantic Web in the Real World: A journey between two cities
URL://iccl.inf.tu-dresden.de/web/Integrating_Semantic_Web_in_the_Real_World:_A_journey_between_two_cities
UID://iccl.inf.tu-dresden.de/web/Integrating_Semantic_Web_in_the_Real_World:_A_journey_between_two_cities
DTSTART:20180420T090000
DTEND:20180420T095000
LOCATION:APB 3105
DESCRIPTION:Abstract: \n"An early vision in Computer Science has been to create intelligent systems capable of reasoning on large amounts of data. Today\, this vision can be delivered by integrating Relational Databases with the Semantic Web using the W3C standards: a graph data model (RDF)\, ontology language (OWL)\, mapping language (R2RML) and query language (SPARQL). The research community has successfully been showing how intelligent systems can be created with Semantic Web technologies\, now dubbed Knowledge Graphs. However\, where is the mainstream industry adoption? What are the barriers to adoption? Are these engineering and social barriers or are they open scientific problems that need to be addressed? This talk will chronicle our journey of deploying Semantic Web technologies with real world users to address Business Intelligence and Data Integration needs\, describe technical and social obstacles that are present in large organizations\, and scientific and engineering challenges that require attention."\n\n\nSpeaker's Biography:\nJuan F. Sequeda is the co-founder of Capsenta\, a spin-off from his research\, and the Senior Director of Capsenta Labs. He holds a PhD in Computer Science from the University of Texas at Austin. His research interests lie at the intersection of Logic and Data\, in particular between the Semantic Web and Relational Databases\, covering data integration\, ontology-based data access\, and semantic/graph data management. Juan is the recipient of the NSF Graduate Research Fellowship\, received 2nd Place in the 2013 Semantic Web Challenge for his work on ConstituteProject.org\, Best Student Research Paper at the 2014 International Semantic Web Conference\, and the 2015 Best Transfer and Innovation Project awarded by the Institute for Applied Informatics.
Juan is the General Chair of 2018 Alberto Mendelzon Workshop on Foundations of Databases (AMW2018)\, was the PC chair of the ISWC 2017 In-Use track\, is on the Editorial Board of the Journal of Web Semantics and member of multiple program committees (ISWC\, ESWC\, WWW\, AAAI\, IJCAI). Juan is a member of the Linked Data Benchmark Council (LDBC) Graph Query Languages task force and has also been an invited expert and standards editor at the World Wide Web Consortium (W3C).
DTSTAMP:20181018T013853
SEQUENCE:26737
END:VEVENT
BEGIN:VEVENT
SUMMARY:Learning Word Representation in Compositional Matrix-Space Models
URL://iccl.inf.tu-dresden.de/web/Learning_Word_Representation_in_Compositional_Matrix-Space_Models
UID://iccl.inf.tu-dresden.de/web/Learning_Word_Representation_in_Compositional_Matrix-Space_Models
DTSTART:20180412T130000
DTEND:20180412T143000
LOCATION:APB 3027
DESCRIPTION:Learning word representations to capture the semantics and compositionality of language has received much research interest in natural language processing. Beyond the popular vector space models for word representations and compositionality\, Compositional Matrix-Space Models (CMSMs) have been proposed. In this talk\, I introduce the principal idea of CMSMs and their application in NLP tasks. Then\, I address the problem of learning matrix representations of words in CMSMs for the task of fine-grained sentiment analysis. I show that our approach\, which learns the word matrices gradually in two steps\, outperforms other CMSM-based approaches in terms of quality and computational cost.
DTSTAMP:20181018T013913
SEQUENCE:26738
END:VEVENT
BEGIN:VEVENT
SUMMARY:Second Workshop on Human Reasoning and Computational Logic
URL://iccl.inf.tu-dresden.de/web/Second_Workshop_on_Human_Reasoning_and_Computational_Logic
UID://iccl.inf.tu-dresden.de/web/Second_Workshop_on_Human_Reasoning_and_Computational_Logic
DTSTART:20180410T090000
DTEND:20180412T180000
LOCATION:APB 2026
DESCRIPTION:From the 10th to the 12th of April 2018\, we are organizing the second Workshop on Human Reasoning and Computational Logic at TU Dresden\, Germany. The goal of this workshop is to provide a platform for scientific exchange on Human Reasoning between the areas of Cognitive Science and Computational Logic.
DTSTAMP:20180306T115908
SEQUENCE:25240
END:VEVENT
BEGIN:VEVENT
SUMMARY:Modelling Dynamics in Semantic Web Knowledge Graphs with Formal Concept Analysis
URL://iccl.inf.tu-dresden.de/web/Modelling_Dynamics_in_Semantic_Web_Knowledge_Graphs_with_Formal_Concept_Analysis
UID://iccl.inf.tu-dresden.de/web/Modelling_Dynamics_in_Semantic_Web_Knowledge_Graphs_with_Formal_Concept_Analysis
DTSTART:20180405T130000
DTEND:20180405T143000
LOCATION:APB 3027
DESCRIPTION:In this paper\, we propose a novel data-driven schema for large-scale heterogeneous knowledge graphs inspired by Formal Concept Analysis (FCA). We first extract the sets of properties associated with individual entities\; these property sets (also known as characteristic sets) are annotated with cardinalities and used to induce a lattice based on set-containment relations\, forming a natural hierarchical structure describing the knowledge graph. We then propose an algebra over such schema lattices\, which allows one to compute diffs between lattices (for example\, to summarise the changes from one version of a knowledge graph to another)\, to add lattices (for example\, to project future changes)\, and so forth. While we argue that this lattice structure (and associated algebra) may have various applications\, we currently focus on the use-case of modelling and predicting the dynamic behaviour of knowledge graphs. We instantiate and evaluate our methods for analysing how versions of the Wikidata knowledge graph have changed over a period of 11 weeks. We propose algorithms for constructing the lattice-based schema from Wikidata\, and evaluate their efficiency and scalability. We then evaluate use of the resulting schema(ta) for predicting how the knowledge graph will evolve in future versions.\n\n\nAuthors: Larry Gonzalez and Aidan Hogan
DTSTAMP:20181018T013940
SEQUENCE:26739
END:VEVENT
BEGIN:VEVENT
SUMMARY:Functional models and Data Complexity for FL0
URL://iccl.inf.tu-dresden.de/web/Functional_models_and_Data_Complexity_for_FL0
UID://iccl.inf.tu-dresden.de/web/Functional_models_and_Data_Complexity_for_FL0
DTSTART:20180315T092000
DTEND:20180315T105000
LOCATION:APB 3027
DESCRIPTION:In the early days of Description Logic research\, the inexpressive DL FL0 was considered to be the smallest possible DL. When it was later shown that subsumption reasoning w.r.t. general TBoxes is as hard as for the more expressive ALC\, the community lost interest in this DL. However\, the recent approach of reasoning with so-called functional models has rekindled interest\, since on the one hand it has the potential to outperform ALC reasoners (even though it stays ExpTime-hard in the worst case)\, and on the other hand it has already helped in answering open problems about non-standard inferences for FL0\, such as the existence and computation of the least common subsumer and the most specific concept\, and the exact data complexity of query answering.\nIn this talk\, we will describe the approach of functional models\, and demonstrate how it has been used to show that answering instance queries for FL0 is tractable with respect to data complexity.
DTSTAMP:20181018T014002
SEQUENCE:26740
END:VEVENT
END:VCALENDAR