BEGIN:VCALENDAR
PRODID:-//SMW Project//Semantic Result Formats
VERSION:2.0
METHOD:PUBLISH
X-WR-CALNAME:ICCL-Veranstaltungen
BEGIN:VEVENT
SUMMARY:Positivity-Hardness and Saturation Points in Markov Decision Processes
URL://iccl.inf.tu-dresden.de/web/Positivity-Hardness_and_Saturation_Points_in_Markov_Decision_Processes
UID://iccl.inf.tu-dresden.de/web/Positivity-Hardness_and_Saturation_Points_in_Markov_Decision_Processes
DTSTART:20201105T130000
DTEND:20201105T143000
LOCATION:Online
DESCRIPTION:The Positivity problem for linear recurrence sequences is a number-theoretic decision problem whose decidability has been open for many decades. We show that this Positivity problem can be encoded into several optimization problems on Markov decision processes\, a standard model combining non-deterministic and probabilistic behavior. This reduction demonstrates the inherent mathematical difficulty of various problems in the formal verification of probabilistic systems.\nFor naturally restricted versions of most of these problems\, we can nevertheless establish decidability. The key ingredient is the existence of so-called saturation points\, which can be seen as bounds on the memory required for an optimal strategy in the Markov decision process.\n\n\nThe talk will last approximately 45 minutes and take place online. To access the talk\, use the following links:\n\nIf you have a ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/l/link.php?meeting_id=42224&pin=4c120949\n\n\nIf you do not have a ZIH-login:\n\nhttps://selfservice.zih.tu-dresden.de/link.php?meeting_id=42224&pin=d3219e91
DTSTAMP:20201030T155351
SEQUENCE:31781
END:VEVENT
BEGIN:VEVENT
SUMMARY:Multi-Context Stream Reasoning
URL://iccl.inf.tu-dresden.de/web/TBA6
UID://iccl.inf.tu-dresden.de/web/TBA6
DTSTART:20201029T130000
DTEND:20201029T143000
LOCATION:Online
DESCRIPTION:The field of artificial intelligence and knowledge representation (KR) has produced a wide variety of formalisms\, notions\, languages\, and formats over the past decades. Each approach has been motivated and designed with specific applications in mind. Nowadays\, in the era of Industry 4.0\, the Internet of Things\, and smart devices\, we are interested in ways to connect the various approaches and allow them to distribute and exchange their knowledge and beliefs in a uniform way. In addition\, most information is provided continuously over time in streams of data. This leads to the problem that these sophisticated knowledge representation approaches cannot understand each other's points of view\, their positions on semantics are not necessarily compatible\, and most formalisms are designed as one-shot computations. These three problems between different KR formalisms have been tackled by the concept of stream-aware Multi-Context Systems\, which provide methods to transfer information under a strong and generalised notion of semantics over time.\n\n\nStefan Ellmauthaler is a post-doctoral researcher in the group of Gerhard Brewka in Leipzig. He received his B.Sc. in 'Medicine and Computer Science' and his diploma in 'Computational Intelligence' at the Vienna University of Technology. His doctoral thesis 'Multi-Context Reasoning in Continuous Data-Flow Environments' was supervised by Gerhard Brewka at Leipzig University. His current research fields are abstract argumentation\, logic programming\, multi-context systems\, and hybrid reasoning approaches such as stream reasoning and reactive reasoning.\n\n\nThe talk will take place online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20201020T122226
SEQUENCE:31570
END:VEVENT
BEGIN:VEVENT
SUMMARY:Forgetting Atoms and Arguments
URL://iccl.inf.tu-dresden.de/web/TBA5
UID://iccl.inf.tu-dresden.de/web/TBA5
DTSTART:20201027T111500
DTEND:20201027T124500
LOCATION:Online
DESCRIPTION:Whereas the operation of forgetting has recently seen a considerable amount of attention in the context of Answer Set Programming (ASP)\, most of it has focused on theoretical aspects\, leaving the practical issues largely untouched. Recent studies include results about which sets of properties operators should satisfy\, as well as the abstract characterization of several operators and their theoretical limits. However\, no concrete operators have been investigated.\nWe address this issue by presenting the first concrete operator that satisfies strong persistence (SP) - a property that seems to best capture the essence of forgetting in the context of ASP - whenever this is possible\, along with many other important properties. The operator is syntactic\, limiting the computation of the forgetting result to manipulating the rules in which the atoms to be forgotten occur\, naturally yielding a forgetting result that is close to the original program.\nWhenever (SP) cannot be satisfied\, breaking some relations between the atoms not to be forgotten is inevitable.\nTherefore\, an alternative to satisfying (SP) only when possible is to enhance the logical language so that all such relations can be maintained after the forgetting operation.\nWe borrow from the recently introduced notion of fork – a conservative extension of Equilibrium Logic and its monotonic basis\, the logic of Here-and-There – which has been shown to be sufficient to overcome the problems related to satisfying (SP). We map this notion into the language of logic programs\, enhancing it with so-called anonymous cycles\, and we introduce a concrete syntactical forgetting operator over this enhanced language that we show to always obey strong persistence.\nResearch on forgetting in the context of Dung-style argumentation frameworks (AFs) is much less developed. For example\, properties that forgetting operators over AFs should satisfy were only compiled this year. We provide a summary of relevant results and preliminary thoughts on how to proceed in the field.\n\n\nShort bio:\nMatti Berthold received his B.Sc. in Computer Science at the University of Leipzig\, before studying Computational Logic at TU Dresden and FCT NOVA Lisbon. During his studies he had the chance to assist with and conduct teaching\, and to publish and present the results of both his theses.\n\n\nThe talk will take place online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20201020T122044
SEQUENCE:31569
END:VEVENT
BEGIN:VEVENT
SUMMARY:Inconsistency Values for Logic Programs under Answer Set Semantics
URL://iccl.inf.tu-dresden.de/web/Inconsistency_Values_for_Logic_Programs_under_Answer_Set_Semantics
UID://iccl.inf.tu-dresden.de/web/Inconsistency_Values_for_Logic_Programs_under_Answer_Set_Semantics
DTSTART:20201022T130000
DTEND:20201022T143000
LOCATION:Online
DESCRIPTION:The proposition that conflicting pieces of information cannot exist together appears too simplistic for developing intelligent systems. Knowledge representation tools such as ASP (Answer Set Programming) rely on non-monotonic reasoning\, i.e.\, programs may contain inconsistent information without collapsing. While measuring inconsistency in monotonic formalisms has been investigated for some time now\, my master thesis addresses the issue of measuring inconsistency in ASP in order to provide an approach for analysing and resolving inconsistency in logic-based knowledge representation. Taking non-monotonicity into account demands a generalisation of the notion of inconsistency\, namely Strong Inconsistency [2]. Aiming to generalise A. Hunter's and S. Konieczny's approach [1] to non-monotonic formalisms\, the key idea is to characterise and use Minimal Strong Inconsistency Values for inducing a game in coalitional form on a knowledge base\, and to then use the Shapley Value in order to obtain a solution to this sort of blame game\, which reflects the impact of each element on the overall inconsistency in the base. Based on the latter\, I characterise the Minimal Strong Inconsistency Shapley Strong Inconsistency (MSISSI) value and the MSISSI measure\, which provide an approach to describing inconsistency in ASP and allow for repairing inconsistent knowledge bases.\n\n[1] Anthony Hunter and Sébastien Konieczny. "On the measure of conflicts: Shapley inconsistency values". In: Artificial Intelligence 174.14 (2010)\, pp. 1007-1026.\n\n[2] Markus Ulbricht\, Matthias Thimm\, and Gerhard Brewka. "Measuring Strong Inconsistency". In: AAAI. 2018\, pp. 1989-1996.\n\n\nDominik Rusovac was born in Vienna and studied Philosophy at the University of Vienna. He then studied Logic at the University of Leipzig\, focussing on knowledge representation and formal systems. After graduating in 2019\, he has been working as a software developer since January 2020.\n\n\nThe talk will take place online and last around 45 minutes\, after which there will be an opportunity to ask questions. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20201016T090429
SEQUENCE:31550
END:VEVENT
BEGIN:VEVENT
SUMMARY:Don’t Repeat Yourself: Termination of the Skolem Chase on Disjunctive Existential Rules
URL://iccl.inf.tu-dresden.de/web/Termination_of_the_Skolem_Chase_on_Disjunctive_Existential_Rules
UID://iccl.inf.tu-dresden.de/web/Termination_of_the_Skolem_Chase_on_Disjunctive_Existential_Rules
DTSTART:20201008T130000
DTEND:20201008T143000
LOCATION:Online
DESCRIPTION:Disjunctive existential rules are a fragment of first-order logic that is already expressive enough to capture many description logics. Conjunctive query answering is an important reasoning task over such rules. Although this problem is undecidable\, different variants of the chase provide sound and complete algorithms that can be used as a reasoning basis. Since it is undecidable whether these algorithms terminate for a specific rule set\, sufficient conditions for termination\, called acyclicity notions\, are introduced. We develop Disjunctive Model Faithful Acyclicity (DMFA) as a novel acyclicity notion for the disjunctive skolem chase variant by using ideas from Model Faithful Acyclicity (MFA) for the non-disjunctive skolem chase and Restricted Model Faithful Acyclicity (RMFA) for the restricted chase. Our research shows that our notion captures MFA and is able to give a better approximation for termination on disjunctive existential rules. Acyclicity notions for the restricted chase\, such as RMFA\, still capture DMFA\, but these notions are not sound for the disjunctive skolem chase. Our results encourage the use of the disjunctive skolem chase in practical applications\, as it can be implemented using well-optimized ASP solvers.\n\n\nThis talk is a defence of the speaker's "Großer Beleg".\nIt will be held online and take approximately 45 minutes\, consisting of an uninterrupted talk followed by a question-and-answer part. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20201002T175430
SEQUENCE:31420
END:VEVENT
BEGIN:VEVENT
SUMMARY:Smoke Test Planning using Answer Set Programming
URL://iccl.inf.tu-dresden.de/web/Smoke_Test_Planning_using_Answer_Set_Programming
UID://iccl.inf.tu-dresden.de/web/Smoke_Test_Planning_using_Answer_Set_Programming
DTSTART:20200910T130000
DTEND:20200910T143000
LOCATION:Online
DESCRIPTION:Smoke testing is an important method to increase the stability and reliability of hardware-dependent systems. Due to concurrent access to the same physical resource and the impracticality of using virtualization\, smoke testing requires some form of planning. In this paper\, we propose to decompose test cases in terms of atomic actions consisting of preconditions and effects. We present a solution based on answer set programming with multi-shot solving that automatically generates short parallel test plans. Experiments suggest that the approach is feasible for non-inherently sequential test cases and scales up to thousands of test cases.\n\n\nThis talk will be held online and take approximately 30 minutes\, after which questions can be asked. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200910T121605
SEQUENCE:31266
END:VEVENT
BEGIN:VEVENT
SUMMARY:Notation3 Logic: From informal to formal semantics
URL://iccl.inf.tu-dresden.de/web/Notation3_Logic:_From_informal_to_formal_semantics
UID://iccl.inf.tu-dresden.de/web/Notation3_Logic:_From_informal_to_formal_semantics
DTSTART:20200730T130000
DTEND:20200730T143000
LOCATION:Online
DESCRIPTION:Notation3 Logic is a rule-based extension of RDF. Since its invention\, the logic has been refined and applied in several reasoning engines\, such as EYE\, Cwm\, and FuXi. But despite these developments\, a clear formal definition of Notation3’s semantics is still missing\, and the details of the logic are only defined in an informal way. This lack of formalisation not only causes theoretical problems - the relationship to other logics cannot be investigated - it also has practical consequences: in many cases\, the interpretations of the same formula differ between reasoning engines. In this talk\, I will explain these differences and discuss how the formal semantics of the logic can be defined based on the informal specifications and the implementations.\n\n\nThis talk will be held online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200727T072445
SEQUENCE:31071
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Complexity of Synthesis of nop-Free Boolean Petri Nets
URL://iccl.inf.tu-dresden.de/web/On_the_Complexity_of_Synthesis_of_nop-Free_Boolean_Petri_Nets.
UID://iccl.inf.tu-dresden.de/web/On_the_Complexity_of_Synthesis_of_nop-Free_Boolean_Petri_Nets.
DTSTART:20200724T130000
DTEND:20200724T143000
LOCATION:Online
DESCRIPTION:In a Boolean Petri net\, the interaction nop allows places and transitions to be independent\, so that the firing of a transition does not affect the marking of a place\, and vice versa. Recently\, the complexity of synthesis of nets equipped with nop has been investigated thoroughly\, while the question for the remaining 128 types of nets remains open. This work tackles the case of synthesis of nop-free nets\, that is\, the Boolean nets where places and transitions are always related via interactions that are able to modify the marking of a place. In this paper\, we show that\, for nop-free nets\, the absence of swap always leads to a polynomial-time synthesis procedure. Moreover\, we give a first hint that the presence of swap might make synthesis for these types NP-complete.\n\n\nThis talk will be held online. If there is any interest in attending\, please send an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200722T203026
SEQUENCE:31066
END:VEVENT
BEGIN:VEVENT
SUMMARY:Imprecise Probabilities in Decision-making
URL://iccl.inf.tu-dresden.de/web/Imprecise_Probabilities_in_Decision-making
UID://iccl.inf.tu-dresden.de/web/Imprecise_Probabilities_in_Decision-making
DTSTART:20200709T130000
DTEND:20200709T143000
LOCATION:Online
DESCRIPTION:An agent’s beliefs come in different strengths. We understand her degree of belief in a proposition as a measure of the strength of her belief in that proposition. According to the orthodox Bayesian picture\, an agent’s degree of belief is best represented by a single probability function. In particular\, the Bayesian claims that agents must assign numerically precise probabilities to every proposition that they can entertain. On an alternative account\, an agent’s beliefs ought to be modeled based on imprecise probabilities. With that\, imprecise degrees of belief can be represented by a set of probability functions.\nRecently\, however\, imprecise probabilities have come under attack. Adam Elga (2010) claims that there is no adequate account of the way they can be manifested in decision-making. In response to Elga\, more elaborate accounts of the imprecise framework have been developed. One of them is based on Supervaluationism\, originally a semantic approach to vague predicates. Still\, Seamus Bradley (2019) shows that those accounts that solve Elga’s problem have a more severe defect: they undermine a central motivation for introducing imprecise probabilities in the first place.\nThe aim of my presentation is to modify the supervaluationist approach in such a way that it accounts for both Elga’s and Bradley’s challenges to the imprecise framework.\n\n\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200706T044025
SEQUENCE:30938
END:VEVENT
BEGIN:VEVENT
SUMMARY:ASNP: a tame fragment of existential second-order logic
URL://iccl.inf.tu-dresden.de/web/ASNP:_a_tame_fragment_of_existential_second-order_logic
UID://iccl.inf.tu-dresden.de/web/ASNP:_a_tame_fragment_of_existential_second-order_logic
DTSTART:20200625T130000
DTEND:20200625T143000
LOCATION:Digital
DESCRIPTION:Amalgamation SNP (ASNP) is a fragment of existential second-order logic that strictly contains binary connected MMSNP of Feder and Vardi and binary connected guarded monotone SNP of Bienvenu\, ten Cate\, Lutz\, and Wolter\; it is a promising candidate for an expressive subclass of NP that exhibits a complexity dichotomy. We show that ASNP has a complexity dichotomy if and only if the infinite-domain dichotomy conjecture holds for constraint satisfaction problems for first-order reducts of binary finitely bounded homogeneous structures. For such CSPs\, powerful universal-algebraic hardness conditions are known that are conjectured to describe the border between NP-hard and polynomial-time tractable CSPs. The connection to CSPs also implies that every ASNP sentence can be evaluated in polynomial time on classes of finite structures of bounded treewidth. We show that the syntax of ASNP is decidable. The proof relies on the fact that for classes of finite binary structures given by finitely many forbidden substructures\, the amalgamation property is decidable.\n\n\nThis will be a test talk for the presentation of an eponymous paper\, consisting of a 20-minute prerecorded talk (as it will be presented at the conference)\, followed by a Q&A session in which questions will be answered by Simon Knäuer\, one of the authors. Feedback and suggestions in preparation for the conference talk are strongly encouraged.\n\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200622T054325
SEQUENCE:30849
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Method of Refinement: Deriving Proof-Calculi from Semantics for Multi-Modal Logics
URL://iccl.inf.tu-dresden.de/web/The_Method_of_Refinement:_Deriving_Proof-Calculi_from_Semantics_for_Multi-Modal_Logics
UID://iccl.inf.tu-dresden.de/web/The_Method_of_Refinement:_Deriving_Proof-Calculi_from_Semantics_for_Multi-Modal_Logics
DTSTART:20200611T130000
DTEND:20200611T143000
LOCATION:Digital
DESCRIPTION:In this talk\, we look at how to derive nested calculi from labelled calculi for multi-modal logics\, thus connecting the general results for labelled calculi with the more refined formalism of nested sequents. Since labelled calculi are constructed by transforming the semantics of a logic into a proof system\, the method of refinement shows how one can derive simplified calculi from the semantics of a logic. The extraction of nested calculi from labelled calculi is achieved via considerations pertaining to the elimination of structural rules in labelled derivations. As a consequence of the extraction process\, each nested calculus inherits fundamental proof-theoretic properties from its associated labelled calculus and is suitable for applications such as proving decidability and interpolation.\n\nTim is currently working at the informatics faculty of TU Wien in Vienna as a PreDoc Researcher in the research unit "Theory and Logic".\n\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200520T082459
SEQUENCE:30588
END:VEVENT
BEGIN:VEVENT
SUMMARY:Compositional Matrix-Space Models: Learning Methods and Evaluation
URL://iccl.inf.tu-dresden.de/web/Compositional_Matrix-Space_Models:_Learning_Methods_and_Evaluation
UID://iccl.inf.tu-dresden.de/web/Compositional_Matrix-Space_Models:_Learning_Methods_and_Evaluation
DTSTART:20200528T130000
DTEND:20200528T143000
LOCATION:Digital
DESCRIPTION:There has been a lot of research on machine-readable representations of words for natural language processing (NLP). One mainstream paradigm for word meaning representation comprises vector-space models obtained from the distributional information of words in text. Machine learning techniques have been proposed to produce such word representations for computational linguistic tasks. Moreover\, the representation of multi-word structures\, such as phrases\, in vector space can arguably be achieved by composing the distributional representation of the constituent words. To this end\, mathematical operations have been introduced as composition methods in vector space. An alternative approach to word representation and semantic compositionality in natural language has been compositional matrix-space models. In this thesis\, two research directions are considered. In the first\, considering compositional matrix-space models\, we explore word meaning representations and semantic composition of multi-word structures in matrix space. The main motivation for working on these models is that they have shown superiority over vector-space models regarding several properties. The most important property is that the composition operation in matrix-space models can be defined as standard matrix multiplication\; in contrast to common vector-space composition operations\, this is sensitive to word order in language. We design and develop machine learning techniques that induce continuous and numeric representations of natural language in matrix space. The main goal in introducing representation models is enabling NLP systems to understand natural language to solve multiple related tasks. Therefore\, first\, different supervised machine learning approaches to train word meaning representations and capture the compositionality of multi-word structures using the matrix multiplication of words are proposed. The performance of matrix representation models learned by machine learning techniques is investigated in solving two NLP tasks\, namely\, sentiment analysis and compositionality detection. Then\, techniques for learning matrix-space models are proposed that introduce generic task-agnostic representation models\, also called word matrix embeddings. In these techniques\, word matrices are trained using the distributional information of words in a given text corpus. We show the effectiveness of these models in the compositional representation of multi-word structures in natural language.\nThe second research direction in this thesis explores effective approaches for evaluating the capability of semantic composition methods in capturing the meaning representation of compositional multi-word structures\, such as phrases. A common evaluation approach is examining the ability of the methods in capturing the semantic relatedness between linguistic units. The underlying assumption is that the more accurately a method of semantic composition can determine the representation of a phrase\, the more accurately it can determine the relatedness of that phrase with other phrases. To apply the semantic relatedness approach\, gold standard datasets have been introduced. In this thesis\, we identify the limitations of the existing datasets and develop a new gold standard semantic relatedness dataset\, which addresses the issues of the existing datasets. The proposed dataset allows us to evaluate meaning composition in vector- and matrix-space models.\n\nThe presentation will take 45 minutes without questions. Afterwards there will be a Q&A.\nThis talk will be held digitally. If there is any interest in attending\, please write an e-mail to thomas.feller@tu-dresden.de.
DTSTAMP:20200508T121859
SEQUENCE:30495
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tracking False Information Online
URL://iccl.inf.tu-dresden.de/web/Tracking_False_Information_Online
UID://iccl.inf.tu-dresden.de/web/Tracking_False_Information_Online
DTSTART:20200227T161500
DTEND:20200227T174500
LOCATION:APB 1004
DESCRIPTION:Digital media enables fast sharing of information and discussions among users. While this comes with many benefits to today’s society\, such as broadening information access\, the manner in which information is disseminated also has obvious downsides. Since fast access to information is expected by many users and news outlets are often under financial pressure\, speedy access often comes at the expense of accuracy\, which leads to misinformation. Moreover\, digital media can be misused by campaigns to intentionally spread false information\, i.e. disinformation\, about events\, individuals\, or governments. In this talk\, I will present different ways false information is spread online\, including misinformation and disinformation. I will then report findings from our recent and ongoing work on automatic fact checking\, stance detection\, and framing attitudes.
DTSTAMP:20200220T161257
SEQUENCE:30135
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Distributed Blockchain Model of Selfish Mining
URL://iccl.inf.tu-dresden.de/web/A_Distributed_Blockchain_Model_of_Selfish_Mining
UID://iccl.inf.tu-dresden.de/web/A_Distributed_Blockchain_Model_of_Selfish_Mining
DTSTART:20200226T123000
DTEND:20200226T140000
LOCATION:APB 3027
DESCRIPTION:Abstract:\nBitcoin is currently still the most widely used cryptocurrency\, judging from its market cap and trade volume. A big part of Bitcoin's appeal is the mining process\, which makes sure that new transactions are validated and processed in a distributed fashion\, maintaining a distributed ledger known as the blockchain. Miners receive a fee for every block of transactions that they mine\, as an incentive for their computational effort. In the long run\, they can expect a reward proportional to the computational power they provide to the network.\nHowever\, Eyal and Sirer introduced a strategy by which a miner can time the publishing of blocks to gain a significant edge in profits.\nThis talk will present a model of the behaviour of honest and selfish mining pools in UPPAAL that can be used for analysing properties of the mining process in the presence of network delay\, taking into account the distributed nature of the process. The analysis shows what effects selfish mining would have on the share of profits\, but also on the number of orphaned blocks in the blockchain. It also allows us to compare these results to real-world data\, to establish whether there is evidence for the presence of selfish miners.\n\nBio:\nAnsgar Fehnker is Associate Professor for Programming Education at the Faculty of Electrical Engineering\, Mathematics\, and Computer Science at the University of Twente. He is part of the Formal Methods and Tools group and was previously a professor at the University of the South Pacific and a researcher at Australia's ICT research centre NICTA\, working on static analysis for C/C++ and on the analysis of wireless network protocols with model checking. Prior to joining NICTA\, he was a PostDoc in the model-checking teams at Carnegie Mellon University\, US. He received his PhD from Radboud University Nijmegen for work on the verification of real-time and hybrid systems.
DTSTAMP:20200217T091734
SEQUENCE:30129
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Graph Curation and Reasoning using the Example of the Scholarly Domain
URL://iccl.inf.tu-dresden.de/web/Knowledge_Graph_Curation_and_Reasoning_using_the_Example_of_the_Scholarly_Domain
UID://iccl.inf.tu-dresden.de/web/Knowledge_Graph_Curation_and_Reasoning_using_the_Example_of_the_Scholarly_Domain
DTSTART:20200130T130000
DTEND:20200130T143000
LOCATION:APB 3027
DESCRIPTION:Knowledge graphs allow organisations and enterprises to integrate their internal and external heterogeneous sources of information into a unified form\, enabling analytics and the discovery of unknown knowledge. To exploit the information encoded in knowledge graphs\, analysis of both the graph structure and the semantics of the represented relations is required. I will illustrate this using the scholarly domain as an example. The heterogeneity of scholarly artifacts and their metadata\, spread over different Web data sources\, serves as a great use case platform for data analytics and reasoning methods. In this talk\, I will first look at major challenges of this domain concerning KG creation and curation\, leveraging Semantic Web technologies. I will then showcase the application of knowledge graph embedding models for link prediction scenarios in this domain.
DTSTAMP:20200123T134125
SEQUENCE:30034
END:VEVENT
BEGIN:VEVENT
SUMMARY:Modeling Computational Properties of Description Logics in ASP
URL://iccl.inf.tu-dresden.de/web/Modeling_Computational_Properties_of_Description_Logics_in_ASP
UID://iccl.inf.tu-dresden.de/web/Modeling_Computational_Properties_of_Description_Logics_in_ASP
DTSTART:20200129T090000
DTEND:20200129T103000
LOCATION:APB 3027
DESCRIPTION:Tracking the increasing volume of research on description logics is getting harder. Moreover\, these results interact and can be combined to derive new results. That is why we need a knowledge base that encodes these results in a smart way and infers further results based on what we currently know. This talk presents an approach for encoding such information with the help of Answer Set Programming (ASP). In addition\, we show how such a system can be integrated into a website that visualizes the current research results and the inferences made based on them. We end by analyzing this approach and suggesting some future work.
DTSTAMP:20200129T094156
SEQUENCE:30083
END:VEVENT
BEGIN:VEVENT
SUMMARY:Checking Chase Termination over Ontologies of Existential Rules with Equality
URL://iccl.inf.tu-dresden.de/web/Checking_Chase_Termination_over_Ontologies_of_Existential_Rules_with_Equality
UID://iccl.inf.tu-dresden.de/web/Checking_Chase_Termination_over_Ontologies_of_Existential_Rules_with_Equality
DTSTART:20200123T130000
DTEND:20200123T143000
LOCATION:APB 3027
DESCRIPTION:The chase is a sound and complete algorithm for conjunctive query answering over ontologies of existential rules with equality. To enable its effective use\, we can apply acyclicity notions\; that is\, sufficient conditions that guarantee chase termination. Unfortunately\, most of these notions have only been defined for existential rule sets without equality. A proposed solution to circumvent this issue is to treat equality as an ordinary predicate with an explicit axiomatisation. We empirically show that this solution is not efficient in practice and propose an alternative approach. More precisely\, we show that\, if the chase terminates for any equality axiomatisation of an ontology\, then it terminates for the original ontology (which may contain equality). Therefore\, one can apply existing acyclicity notions to check chase termination over an axiomatisation of an ontology and then use the original ontology for reasoning. We show that\, in practice\, doing so results in a more efficient reasoning procedure. Furthermore\, we present equality model-faithful acyclicity\, a general acyclicity notion that can be directly applied to ontologies with equality.\n\n\nThis talk is a rehearsal for AAAI 2020. \nJoint work with Jacopo Urbani.
DTSTAMP:20200107T165627
SEQUENCE:29972
END:VEVENT
BEGIN:VEVENT
SUMMARY:Musings on the Semantics of SPARQL
URL://iccl.inf.tu-dresden.de/web/Musings_on_the_Semantics_of_SPARQL
UID://iccl.inf.tu-dresden.de/web/Musings_on_the_Semantics_of_SPARQL
DTSTART:20200109T130000
DTEND:20200109T143000
LOCATION:APB 3027
DESCRIPTION:Graph simulations have found their way into different graph database management (GDBM) tasks\, e.g.\, in the shape of offline indexing structures\, as theoretical models for graph schemas\, or as viable alternatives to matching patterns up to graph homomorphisms. Among other advantages\, it is often the tractability of the simulation problem that is exploited in emerging applications. However\, when it comes to evaluating the approaches\, only basic graph patterns (BGPs) and rather small data instances\, compared to today's large data instances like Wikidata or DBpedia\, are considered. In the first part of this talk\, I give some insights on how far graph simulations may be incorporated into full-fledged graph query processing. To this end\, we analyze different semantic interpretations of SPARQL\, based on graph simulation\, w.r.t. correctness\, complexity\, and effectiveness. Second\, I briefly sketch why state-of-the-art simulation algorithms do not scale well in the graph query/data setting. I further show the effects of a devised solution that even integrates well with the SPARQL semantics we envisioned in the first part.
DTSTAMP:20200108T113625
SEQUENCE:29973
END:VEVENT
BEGIN:VEVENT
SUMMARY:SCF2 - an Argumentation Semantics for Rational Human Judgments on Argument Acceptability
URL://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
UID://iccl.inf.tu-dresden.de/web/SCF2_-_an_Argumentation_Semantics_for_Rational_Human_Judgments_on_Argument_Acceptability
DTSTART:20191219T130000
DTEND:20191219T143000
LOCATION:APB 3027
DESCRIPTION:In abstract argumentation theory\, many argumentation semantics have been proposed for evaluating argumentation frameworks. This paper is based on the following research question: Which semantics corresponds well to what humans consider a rational judgment on the acceptability of arguments? There are two systematic ways to approach this research question: A normative perspective is provided by the principle-based approach\, in which semantics are evaluated based on their satisfaction of various normatively desirable principles. A descriptive perspective is provided by the empirical approach\, in which cognitive studies are conducted to determine which semantics best predicts human judgments about arguments. In this paper\, we combine both approaches to motivate a new argumentation semantics called SCF2. For this purpose\, we introduce and motivate two new principles and show that no semantics from the literature satisfies both of them. We define SCF2 and prove that it satisfies both new principles. Furthermore\, we discuss findings of a recent empirical cognitive study that provide additional support to SCF2.
DTSTAMP:20191107T084228
SEQUENCE:29685
END:VEVENT
BEGIN:VEVENT
SUMMARY:Justifying All Differences Using Pseudo-Boolean Reasoning
URL://iccl.inf.tu-dresden.de/web/Justifying_All_Differences_Using_Pseudo-Boolean_Reasoning
UID://iccl.inf.tu-dresden.de/web/Justifying_All_Differences_Using_Pseudo-Boolean_Reasoning
DTSTART:20191217T150000
DTEND:20191217T160000
LOCATION:APB 2028
DESCRIPTION:Constraint programming solvers support rich global constraints and propagators\, which make them both powerful and hard to debug. In the Boolean satisfiability community\, proof logging is the standard solution for generating trustworthy outputs\, and this has become key to the social acceptability of computer-generated proofs. However\, reusing this technology for constraint programming requires either much weaker propagation or an impractical blowup in proof length.\nThis paper demonstrates that simple\, clean\, and efficient proof logging is still possible for the all-different constraint\, through pseudo-Boolean reasoning. We explain how such proofs can be expressed and verified mechanistically\, describe an implementation\, and discuss the broader implications for proof logging in constraint programming.
DTSTAMP:20191211T111954
SEQUENCE:29883
END:VEVENT
BEGIN:VEVENT
SUMMARY:Standpoint logic: a multi-modal logic for reasoning within semantic indeterminacy
URL://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
UID://iccl.inf.tu-dresden.de/web/Standpoint_logic:_a_multi-modal_logic_for_reasoning_within_semantic_indeterminacy
DTSTART:20191212T130000
DTEND:20191212T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Standpoint logic is a multi-modal logic intended for reasoning with different interpretations of semantically heterogeneous terms. The framework offers an alternative to “fuzzy” approaches to the representation of meaning and allows for the specification of “semantic commitments” and “penumbral connections”.\n\n\nIn this talk\, I will introduce the logic and provide an overview of its proof theory and semantics. I will demonstrate its expressivity in an application scenario in the forestry domain\, using data schemas from the repository Global Forest Watch and concepts from the ENVO ontology. I will finally discuss the complexity of the logic and some restrictions that could make implementations viable.
DTSTAMP:20191028T083947
SEQUENCE:29474
END:VEVENT
BEGIN:VEVENT
SUMMARY:What makes a variant of query determinacy (un)decidable?
URL://iccl.inf.tu-dresden.de/web/TBA3
UID://iccl.inf.tu-dresden.de/web/TBA3
DTSTART:20191205T130000
DTEND:20191205T143000
LOCATION:APB 3027
DESCRIPTION:Suppose there is a database we have no direct access to\, but there are views of this database available to us\, defined by some queries Q_1 \, Q_2 \, . . . Q_k. And we are given another query Q. Will we be able to compute Q only using the available views?\n\n\n\nThe above question\, call it "the question of determinacy"\, sounds almost philosophical. One can easily imagine a bearded man in himation chained to the wall of a cave watching the views projected on the wall and pondering whether\, from what he is able to see\, the reality can be faithfully reconstructed.\n\nFor us it is a database theory question though. And a really well motivated one\, with motivations ranging from query evaluation plan optimization (where we prefer a positive answer) to privacy issues (where the preferred answer is negative).\n\nQuery determinacy is a broad topic\, with literally hundreds of papers published since the late 1980s. This talk is not going to be a "survey" (which would be impossible\, within a one hour time frame\, and with this speaker)\, but rather a personal perspective of a person somehow involved in the recent developments in the area.\n\nFirst I will explain how\, in the last 30+ years\, the question of determinacy was formalized. There are many parameters here: obviously one needs to choose the query language of the queries Q_i and the query language of Q. But -- surprisingly -- there is also some choice regarding what the word "to compute" actually means in this context.\n\nThen I will concentrate on the variants of the decision problem of determinacy (for each choice of parameters there is one such problem -- Q_1 \, Q_2 \, . . . Q_k and Q constitute the instance\, and the question is whether Q_1 \, Q_2 \, . . . Q_k determine Q) and I will talk about how I understand the mechanisms rendering different variants of determinacy decidable or undecidable. This will be on a slightly informal level. No new theorems will be presented\, but I think I will be able to show simplified proofs of some of the earlier results.\n\nThis is a preview of the [https://diku-dk.github.io/edbticdt2020/?contents=invited_ICDT_talk.html invited talk at ICDT 2020].
DTSTAMP:20191202T153202
SEQUENCE:29823
END:VEVENT
BEGIN:VEVENT
SUMMARY:TE-ETH: Lower Bounds for QBFs of Bounded Treewidth
URL://iccl.inf.tu-dresden.de/web/TBA4
UID://iccl.inf.tu-dresden.de/web/TBA4
DTSTART:20191128T130000
DTEND:20191128T143000
LOCATION:APB 3027
DESCRIPTION:The problem of deciding the validity (QSAT) of quantified Boolean formulas (QBF) is a vivid research area in both theory and practice. In the field of parameterized algorithmics\, the well-studied graph measure treewidth has turned out to be a successful parameter. A well-known result by Chen in parameterized complexity is that QSAT\, when parameterized by the treewidth of the primal graph of the input formula together with the quantifier depth of the formula\, is fixed-parameter tractable. More precisely\, the runtime of such an algorithm is polynomial in the formula size and exponential in the treewidth\, where the exponential function in the treewidth is a tower whose height is the quantifier depth. \n\n\nA natural question is whether one can significantly improve these results and decrease the tower while assuming the Exponential Time Hypothesis (ETH). In recent years\, there has been growing interest in establishing lower bounds under ETH\, showing mostly problem-specific lower bounds up to the third level of the polynomial hierarchy. Still\, an important question is to settle this as generally as possible and to cover the whole polynomial hierarchy. \n\nIn this work\, we show lower bounds based on the ETH for arbitrary QBFs parameterized by treewidth (and quantifier depth). More formally\, we establish lower bounds for QSAT and treewidth\, namely\, that under ETH there cannot be an algorithm that solves QSAT of quantifier depth i in runtime significantly better than i-fold exponential in the treewidth and polynomial in the input size. In doing so\, we provide a versatile reduction technique to compress treewidth that encodes the essence of dynamic programming on arbitrary tree decompositions. Further\, we describe a general methodology for a more fine-grained analysis of problems parameterized by treewidth that are at higher levels of the polynomial hierarchy.\n\n'''Authors:''' Johannes Klaus Fichte\, Markus Hecher\, Andreas Pfandler
DTSTAMP:20191128T115745
SEQUENCE:29795
END:VEVENT
BEGIN:VEVENT
SUMMARY:Efficiently Solving Unbounded Integer Programs in the context of SMT Solvers
URL://iccl.inf.tu-dresden.de/web/Efficiently_Solving_Unbounded_Integer_Programs_in_the_context_of_SMT_Solvers
UID://iccl.inf.tu-dresden.de/web/Efficiently_Solving_Unbounded_Integer_Programs_in_the_context_of_SMT_Solvers
DTSTART:20191122T100000
DTEND:20191122T113000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:'''\nSatisfiability modulo theories (SMT) solvers are automated theorem provers for logical formulas that range over combinations of various first-order theories. These theories typically correspond to domains found in programming languages\, e.g.\, the theories of bit vectors\, integers\, and arrays. This is intentional because SMT solvers were initially developed as back-end reasoning tools for automated software verification. These days\, SMT solvers are also used as back-end reasoning tools for various other applications\, e.g.\, verification of hybrid systems\, program synthesis\, and as brute-force tactics in various interactive theorem provers.\n\nIn this talk\, I will present two new techniques for the theory of linear integer arithmetic in the context of SMT solvers:\n\n1) The unit cube test [2\,3]\, a sound (although incomplete) test that finds solutions for integer programs (i.e.\, systems of linear inequalities) in polynomial time. The test is especially efficient on absolutely unbounded integer programs\, which are difficult to handle for many other decision procedures.\n\n2) A bounding transformation [1] that reduces any integer program in polynomial time to an equisatisfiable integer program that is bounded. The transformation is beneficial because it turns branch and bound into a complete and efficient decision procedure for integer programs.\n\n'''References:'''\n\n[1] A Reduction from Unbounded Linear Mixed Arithmetic Problems into Bounded Problems\, Martin Bromberger. IJCAR 2018\, volume 10900 of LNCS\, pages 329–345. Springer\, 2018.\n\n[2] New Techniques for Linear Arithmetic: Cubes and Equalities\, Martin Bromberger and Christoph Weidenbach. In FMSD volume 51(3)\, pages 433–461. Springer\, 2017.\n\n[3] Fast Cube Tests for LIA Constraint Solving\, Martin Bromberger and Christoph Weidenbach. In IJCAR 2016\, volume 9706 of LNCS\, pages 116–132. Springer\, 2016.
DTSTAMP:20191121T135506
SEQUENCE:29758
END:VEVENT
BEGIN:VEVENT
SUMMARY:A diamond in the rough: Theorizing column stores
URL://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
UID://iccl.inf.tu-dresden.de/web/A_diamond_in_the_rough:_Theorizing_column_stores
DTSTART:20191121T130000
DTEND:20191121T143000
LOCATION:APB 3105
DESCRIPTION:Column stores have been a 'neglected child' relative to traditional\, row-oriented\, relation-focused database management systems: The systems people came up with them\, and the theoreticians did not really give them the time of day. This talk will discuss what happens when we pick up the slack and formalize a model for analytic computation with columns. In addition to sound conceptual grounding being its own aesthetic reward\, we will touch on some examples of how such a formalization enables architectural and performance improvements in real-life systems:\n\n\nSeamless integration of decompression and query execution\; removal of special-case handling of different column features (such as nullability and variable-length elements)\; closure of query execution plans to partial execution\; et cetera. Central to achieving such benefits will be the discussion of what constitutes a column\, how columns are to be represented\, and what they can represent.
DTSTAMP:20191107T103354
SEQUENCE:29693
END:VEVENT
BEGIN:VEVENT
SUMMARY:Can A.I. Provably Explain Itself? A gentle Introduction to Description Logics
URL://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
UID://iccl.inf.tu-dresden.de/web/Can_A.I._Provably_Explain_Itself%3F_A_gentle_Introduction_to_Description_Logics
DTSTART:20191114T130000
DTEND:20191114T143000
LOCATION:APB 3027
DESCRIPTION:The emergence of intelligent systems in self-driving cars\, planes\, medical diagnosis\, insurance and financial services among others has shown that when decisions are taken or suggested by automated systems it is essential that an explanation can be provided. The disconnect between how we make decisions and how machines make them\, and the fact that machines are making more and more decisions for us\, has given a new push for transparency in A.I. However\, the inner workings of machine learning algorithms remain difficult to understand\, and the methods of making these models explainable still require expensive human evaluation.\n\n\nOn the other hand\, knowledge representation based on description logics allows for providing descriptions of the environment\, specifying constraints on the system states and detecting inconsistencies\, as well as operating on information from heterogeneous (possibly incomplete) data sources\, and reasoning about the knowledge of an application domain. Because of this conceptual difference from machine learning algorithms\, the description logics formalism is much closer to human reasoning and can be adapted to supply the user with the necessary explanations for a decision made. Additionally\, in order to model dynamic systems\, description logics have extensions that additionally enable temporal and probabilistic reasoning.\n\nIn this talk I will outline the key pillars and basic principles of (onto)logical reasoning as well as its limitations. Finally\, I will say a few words about an initiative of TU Dresden and the Saarland Informatics Campus that is jointly developing the concept of perspicuous computing and laying the scientific foundations for computerised systems that are able to clearly explain their functioning.
DTSTAMP:20191030T152752
SEQUENCE:29562
END:VEVENT
BEGIN:VEVENT
SUMMARY:Interface between Logical Analysis of Data and Formal Concept Analysis
URL://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
UID://iccl.inf.tu-dresden.de/web/Interface_between_Logical_Analysis_of_Data_and_Formal_Concept_Analysis
DTSTART:20191024T130000
DTEND:20191024T143000
LOCATION:APB 3027
DESCRIPTION:Logical Analysis of Data and Formal Concept Analysis are separately developed methodologies based on different mathematical foundations. We show that the two methodologies utilize the same basic building blocks. This enables us to develop an interface between the two methodologies. We demonstrate some preliminary benefits of the interface\, most notably efficient algorithms for computing spanned patterns in Logical Analysis of Data using algorithms from Formal Concept Analysis.
DTSTAMP:20191021T094239
SEQUENCE:29357
END:VEVENT
BEGIN:VEVENT
SUMMARY:Knowledge Dynamics in Social Environments
URL://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
UID://iccl.inf.tu-dresden.de/web/Knowledge_Dynamics_in_Social_Environments
DTSTART:20190926T130000
DTEND:20190926T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:'''\nSocial media platforms\, taken in conjunction\, can be seen as complex networks\; in this context\, understanding how agents react to sentiments expressed by their connections is of great interest. Here\, we show how Network Knowledge Bases help represent the integration of multiple social networks\, and explore how information flow can be handled via belief revision operators for local (agent-specific) knowledge bases. We report on preliminary experiments on Twitter data showing that different agent types react differently to the same information — this is a first step toward developing tools to predict how agents behave as information flows in their social environment.\n\n'''Bio:'''\nMaria Vanina Martinez\, University of Buenos Aires\, Argentina.
DTSTAMP:20190923T154215
SEQUENCE:29160
END:VEVENT
BEGIN:VEVENT
SUMMARY:A.M.B.R.O.S.I.A. - Conferring Immortality on Distributed Applications
URL://iccl.inf.tu-dresden.de/web/A.M.B.R.O.S.I.A._-_Conferring_Immortality_on_Distributed_Applications
UID://iccl.inf.tu-dresden.de/web/A.M.B.R.O.S.I.A._-_Conferring_Immortality_on_Distributed_Applications
DTSTART:20190920T130000
DTEND:20190920T140000
LOCATION:APB 3105
DESCRIPTION:'''Abstract:'''\n\nWhen writing today’s distributed programs\, which frequently span both devices and cloud services\, programmers are faced with complex decisions and coding tasks around coping with failure\, especially when these distributed components are stateful. If their application can be cast as pure data processing\, they benefit from the past 40-50 years of work from the database community\, which has shown how declarative database systems can completely isolate the developer from the possibility of failure in a performant manner. Unfortunately\, while there have been some attempts at bringing similar functionality into the more general distributed programming space\, a compelling general-purpose system must handle non-determinism\, be performant\, support a variety of machine types with varying resiliency goals\, and be language agnostic\, allowing distributed components written in different languages to communicate. This talk describes the first system\, publicly available on GitHub\, called Ambrosia\, to satisfy all these requirements. We coin the term “virtual resiliency”\, analogous to virtual memory\, for the platform feature which allows failure oblivious code to run in a failure resilient manner. We also introduce a programming construct\, the “impulse”\, which resiliently handles non-deterministic information originating from outside the resilient component. Of further interest to our community is the effective reapplication of much database performance optimization technology to make Ambrosia more performant than many of today’s non-resilient cloud solutions.\n\n\n'''Bio:'''\n\nOver the last 20 years\, I have worked at Microsoft in a combination of research and product roles. In particular\, I’ve spent about 15 years as a researcher at MSR\, doing fundamental research in streaming\, big data processing\, databases\, and distributed computing. My style of working is to attack difficult problems\, and through fundamental understanding and insight\, create new artifacts that enable important problems to be solved in vastly better ways. For instance\, my work on streaming data processing enabled people with real time data processing problems to specify their processing logic in new\, powerful ways\, and also resulted in an artifact called Trill\, which was orders of magnitude more performant than anything which preceded it. Within the academic community\, I have published many papers\, some with best paper awards (e.g. Best Paper Award at ICDE 2012)\, and two with test of time awards (e.g. SIGMOD 2011 Test of Time award and ICDT 2018 Test of Time award)\, and have also taken many organizational roles in database conferences. My research has also had significant impact on many Microsoft products\, including SQL Server\, Office\, Windows\, Bing\, and Halo\, as well as leading to the creation of entirely new products like Microsoft StreamInsight\, Azure Stream Analytics\, Trill\, and most recently\, Ambrosia. I spent 5 years building Microsoft StreamInsight\, serving as a founder and architect for the product. Trill has become the de-facto standard for temporal and stream data processing within Microsoft\, and years after creation\, is still the most expressive and performant general purpose stream data processor in the world. I am also an inventor of 30+ patents.
DTSTAMP:20190918T094958
SEQUENCE:29122
END:VEVENT
BEGIN:VEVENT
SUMMARY:Young Scientist's Third International Workshop on Trends in Information Processing (YSIP3)
URL://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
UID://iccl.inf.tu-dresden.de/web/Young_Scientist%27s_Third_International_Workshop_on_Trends_in_Information_Processing_(YSIP3)
DTSTART:20190917T000000
DTEND:20190920T000000
LOCATION:Stavropol and Arkhyz\, Russian Federation
DESCRIPTION:We invite young scientists – typically master or PhD students – to submit new scientific results – as\, for example\, obtained in their bachelor thesis\, project work\, master thesis\, or PhD project – in all areas of Information Processing.
DTSTAMP:20190117T101503
SEQUENCE:27542
END:VEVENT
BEGIN:VEVENT
SUMMARY:Automatic translation of clinical trial eligibility criteria into formal queries
URL://iccl.inf.tu-dresden.de/web/Automatic_translation_of_clinical_trial_eligibility_criteria_into_formal_queries
UID://iccl.inf.tu-dresden.de/web/Automatic_translation_of_clinical_trial_eligibility_criteria_into_formal_queries
DTSTART:20190916T130000
DTEND:20190916T140000
LOCATION:APB 3027
DESCRIPTION:Selecting patients for clinical trials is very labor-intensive. Our goal is to develop an automated system that can support doctors in this task. This paper describes a major step towards such a system: the automatic translation of clinical trial eligibility criteria from natural language into formal\, logic-based queries. First\, we develop a semantic annotation process that can capture many types of clinical trial criteria. Then\, we map the annotated criteria to the formal query language. We have built a prototype system based on state-of-the-art NLP tools such as Word2Vec\, Stanford NLP tools\, and the MetaMap Tagger\, and have evaluated the quality of the produced queries on a number of criteria from clinicaltrials.gov.
DTSTAMP:20190913T120845
SEQUENCE:29095
END:VEVENT
BEGIN:VEVENT
SUMMARY:A gentle introduction to partition width
URL://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
UID://iccl.inf.tu-dresden.de/web/Lecture_on_Partition_Width
DTSTART:20190912T130000
DTEND:20190912T143000
LOCATION:APB 3027
DESCRIPTION:In this talk we will take an introductory glance at the notion of "partition width"\, first conceived by Achim Blumensath. As partition width is also closely related to a notion of decomposition of an arbitrary structure into a tree-like shape\, the so-called "partition refinement"\, we will also take a look at the relation of both these notions to more established notions of decomposition and width measures (namely tree decompositions\, tree width\, and clique width).
DTSTAMP:20190802T123628
SEQUENCE:28943
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mixing Description Logics in Privacy-Preserving Ontology Publishing
URL://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
UID://iccl.inf.tu-dresden.de/web/Mixing_Description_Logics_in_Privacy-Preserving_Ontology_Publishing
DTSTART:20190905T130000
DTEND:20190905T140000
LOCATION:APB 2026
DESCRIPTION:In previous work\, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an instance store\, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL . We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy\, and have shown how\, in the context mentioned above\, optimal compliant (safe) generalizations of a given concept can be computed. In the present paper\, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular\, we investigate the situations where the attacker’s knowledge is given by an or an concept. In both cases\, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of \, it turns out to be actually lower (polynomial) for the more expressive DL .\n\n\nJoint work with Franz Baader. This is also a test-talk for a presentation at KI 2019.
DTSTAMP:20190902T110018
SEQUENCE:29045
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Expressive Power of Description Logics with Cardinality Constraints on Finite and Infinite Sets
URL://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
UID://iccl.inf.tu-dresden.de/web/On_the_Expressive_Power_of_Description_Logics_with_Cardinality_Constraints_on_Finite_and_Infinite_Sets
DTSTART:20190829T130000
DTEND:20190829T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' In recent work we have extended the description logic (DL) ALCQ by means of more expressive number restrictions using numerical and set constraints stated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). It has been shown that reasoning in the resulting DL\, called ALCSCC\, is PSpace-complete without a TBox and ExpTime-complete w.r.t. a general TBox. The semantics of ALCSCC is defined in terms of finitely branching interpretations\, that is\, interpretations where every element has only finitely many role successors. This condition was needed since QFBAPA considers only finite sets. In this paper\, we first introduce a variant of ALCSCC\, called ALCSCC∞\, in which we lift this requirement (inexpressible in first-order logic) and show that the complexity results for ALCSCC mentioned above are preserved. Nevertheless\, like ALCSCC\, ALCSCC∞ is not a fragment of first-order logic. The main contribution of this paper is to give a characterization of the first-order fragment of ALCSCC∞. The most important tool used in the proof of this result is a notion of bisimulation that characterizes this fragment.\n\n\nJoint work with Franz Baader.\nThis talk is a rehearsal for a presentation at FroCoS 2019.\nDuration: 25 minutes without questions.
DTSTAMP:20190820T085458
SEQUENCE:28999
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning
URL://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning
DTSTART:20190801T130000
DTEND:20190801T140000
LOCATION:APB 3027
DESCRIPTION:Abstract: We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a presentation (15 minutes including questions) at IJCAI 2019.
DTSTAMP:20190731T095825
SEQUENCE:28893
END:VEVENT
BEGIN:VEVENT
SUMMARY:Worst-Case Optimal Querying of Very Expressive Description Logics with Path Expressions and Succinct Counting
URL://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
UID://iccl.inf.tu-dresden.de/web/Worst-Case_Optimal_Querying_of_Very_Expressive_Description_Logics_with_Path_Expressions_and_Succinct_Counting
DTSTART:20190730T130000
DTEND:20190730T133000
LOCATION:APB 3027
DESCRIPTION:'''Abstract.''' Among the most expressive knowledge representation formalisms are the description logics of the Z family. For well-behaved fragments of ZOIQ\, entailment of positive two-way regular path queries is well known to be 2EXPTIME-complete under the proviso of unary encoding of numbers in cardinality constraints. We show that this assumption can be dropped without an increase in complexity and that EXPTIME-completeness can be achieved when bounding the number of query atoms\, using a novel reduction from query entailment to knowledge base satisfiability. These findings allow us to strengthen other results regarding query entailment and query containment problems in very expressive description logics. Our results also carry over to GC2\, the two-variable guarded fragment of first-order logic with counting quantifiers\, for which hitherto only conjunctive query entailment has been investigated.
DTSTAMP:20190718T124219
SEQUENCE:28841
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending EL++ with Linear Constraints on the Probability of Axioms
URL://iccl.inf.tu-dresden.de/web/Extending_EL%2B%2B_with_Linear_Constraints_on_the_Probability_of_Axioms
UID://iccl.inf.tu-dresden.de/web/Extending_EL%2B%2B_with_Linear_Constraints_on_the_Probability_of_Axioms
DTSTART:20190723T133000
DTEND:20190723T150000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' One of the main reasons to employ a description logic such as EL++ is the fact that it has efficient\, polynomial-time algorithms for tasks such as deciding consistency and inferring subsumption. However\, simply by adding negation of concepts to it\, we obtain the expressivity of description logics whose decision procedure is ExpTime-complete. A similar complexity explosion occurs if we add probability assignments on concepts. To lower the resulting complexity\, we instead concentrate on assigning probabilities to axioms/GCIs. We show that the consistency detection problem for such a probabilistic description logic is NP-complete\, and present a linear algebraic deterministic algorithm to solve it\, using the column generation technique. We also examine algorithms for the probabilistic extension problem\, which consists of inferring the minimum and maximum probabilities for a new axiom\, given a consistent probabilistic knowledge base. \n\n\nFuture work aims at finding fragments of probabilistic EL++ which are tractable.
DTSTAMP:20190717T120135
SEQUENCE:28813
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reasoning about disclosure in data integration in the presence of source constraints
URL://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
UID://iccl.inf.tu-dresden.de/web/Reasoning_about_disclosure_in_data_integration_in_the_presence_of_source_constraints
DTSTART:20190718T130000
DTEND:20190718T140000
LOCATION:APB 3027
DESCRIPTION:Joint work with M. Benedikt\, P. Bourhis\, L. Jachiet\n\n\n'''Abstract:''' Data integration systems allow users to access data sitting in multiple sources by means of queries over a global schema\, related to the sources via mappings. Data sources often contain sensitive information\, and thus an analysis is needed to verify that a schema satisfies a privacy policy\, given as a set of queries whose answers should not be accessible to users. Such an analysis should take into account not only knowledge that an attacker may have about the mappings\, but also what they may know about the semantics of the sources. In this talk\, I'll discuss the impact that source constraints can have on disclosure analysis.\n\n'''Speaker bio:''' Michaël Thomazo (Inria\, DI ENS\, ENS\, CNRS\, PSL University)
DTSTAMP:20190712T163358
SEQUENCE:28796
END:VEVENT
BEGIN:VEVENT
SUMMARY:Automatic Extraction of Compositional Matrix-Space Models of Language
URL://iccl.inf.tu-dresden.de/web/Automatic_Extraction_of_Compositional_Matrix-Space_Models_of_Language
UID://iccl.inf.tu-dresden.de/web/Automatic_Extraction_of_Compositional_Matrix-Space_Models_of_Language
DTSTART:20190715T110000
DTEND:20190715T123000
LOCATION:APB 3027
DESCRIPTION:Learning word representations in distributional semantic models to capture the semantics and compositionality of natural language is a central research area of computational linguistics. Compositional Matrix-Space Models (CMSMs) offer a novel word representation as an alternative to Vector Space Models (VSMs). This talk presents the results of learning CMSMs to capture semantics and compositionality in natural language processing tasks\, including sentiment analysis and compositionality detection of short phrases. Then\, a new dataset for examining compositional distributional semantic models is introduced\, and benchmark experiments are presented that use the developed dataset as a testbed to evaluate semantic composition in distributional semantic models.
DTSTAMP:20190708T112532
SEQUENCE:28772
END:VEVENT
BEGIN:VEVENT
SUMMARY:Epistemic Answer Set Programming
URL://iccl.inf.tu-dresden.de/web/Epistemic_Answer_Set_Programming
UID://iccl.inf.tu-dresden.de/web/Epistemic_Answer_Set_Programming
DTSTART:20190711T130000
DTEND:20190711T143000
LOCATION:APB 3027
DESCRIPTION:Today it is widely accepted by the logic programming community that answer set programming (ASP) requires more powerful introspective reasoning through the use of modalities. Although there has been a long-lasting debate among researchers about how to correctly extend ASP with epistemic modal operators\, there is still no agreement on a fully satisfactory semantics that is able to offer intuitive results for epistemic logic programs. In this talk\, we introduce a recent epistemic extension of ASP called epistemic ASP (EASP)\, endowed with the epistemic answer set semantics: minimal (with respect to truth) models which are maximal under two different orderings that minimise knowledge. We then compare EASP with existing\, partially successful approaches in the literature\, showing the advantages and novelties of the new semantics: compared to Gelfond's epistemic specifications (ES)\, EASP defines a sufficiently strong language of a simpler syntactic character. Its semantics\, based on a minimality criterion with respect to truth and knowledge\, is a natural and conservative generalisation of ASP's original answer set semantics. Moreover\, compared to all other proposed semantics for ES\, the epistemic answer set semantics provides a comprehensive solution to unintended results for epistemic logic programs including constraints. Finally\, we briefly discuss some formal properties of EASP such as epistemic splitting\, strong equivalence\, and foundedness.\n\n\n'''Speaker info:''' Ezgi Iraz Su\, IRIT (Lilac)\, Université de Toulouse 3 (Université Paul Sabatier)
DTSTAMP:20190625T121430
SEQUENCE:28697
END:VEVENT
BEGIN:VEVENT
SUMMARY:Introduction to p-adic numbers and analysis
URL://iccl.inf.tu-dresden.de/web/Introduction_to_p-adic_numbers_and_analysis
UID://iccl.inf.tu-dresden.de/web/Introduction_to_p-adic_numbers_and_analysis
DTSTART:20190710T133000
DTEND:20190710T150000
LOCATION:APB 3027
DESCRIPTION:The p-adic numbers (where p is a prime number) can be seen as one possible link between number theory and analysis. They therefore play an important role in various mathematical areas. The so-called strong triangle inequality of the p-adic absolute value (|a+b| ≤ max(|a|\, |b|) for p-adic numbers a and b) has many strange and surprising consequences.\nIn our talk\, we give the definition of p-adic numbers\, several elementary results on p-adic functional analysis\, and a short overview of possible applications.
DTSTAMP:20190707T075702
SEQUENCE:28768
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chasing Sets: How to Use Existential Rules for Expressive Reasoning
URL://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
UID://iccl.inf.tu-dresden.de/web/Chasing_Sets:_How_to_Use_Existential_Rules_for_Expressive_Reasoning_(Extended_Abstract)
DTSTART:20190613T133000
DTEND:20190613T140000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' We propose that modern existential rule reasoners can enable fully declarative implementations of rule-based inference methods in knowledge representation\, in the sense that a particular calculus is captured by a fixed set of rules that can be evaluated on varying inputs (encoded as facts). We introduce Datalog(S) – Datalog with support for sets – as a surface language for such translations\, and show that it can be captured in a decidable fragment of existential rules. We then implement several known inference methods in Datalog(S)\, and empirically show that an existing existential rule reasoner can thus be used to solve practical reasoning problems.\n\n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190606T082710
SEQUENCE:28585
END:VEVENT
BEGIN:VEVENT
SUMMARY:Discovering Implicational Knowledge in Wikidata
URL://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
UID://iccl.inf.tu-dresden.de/web/Discovering_Implicational_Knowledge_in_Wikidata
DTSTART:20190613T130000
DTEND:20190613T133000
LOCATION:APB 3027
DESCRIPTION:Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Among the freely available knowledge graphs\, Wikidata stands out by being collaboratively edited and curated. Amidst the vast numbers of facts\, complex knowledge is just waiting to be discovered\, but the sheer size of Wikidata makes this infeasible for human editors. We apply Formal Concept Analysis to efficiently identify and succinctly represent comprehensible implications that are implicitly present in the data. As a first step\, we describe a systematic process to extract conceptual knowledge from Wikidata's complex data model\, thus providing a method for obtaining large real-world data sets for FCA. We conduct experiments that show the principal feasibility of the approach\, yet also illuminate some of the limitations\, and give examples of interesting knowledge discovered.\n\n\nThis will be a rehearsal talk for ICFCA-2019 (20 minutes including questions).
DTSTAMP:20190607T170543
SEQUENCE:28613
END:VEVENT
BEGIN:VEVENT
SUMMARY:Projection in a Description Logic of Context with Actions
URL://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
UID://iccl.inf.tu-dresden.de/web/Projection_in_a_Description_Logic_of_Context_with_Actions
DTSTART:20190606T130000
DTEND:20190606T134500
LOCATION:APB 3027
DESCRIPTION:Satyadharma Tirtarasa and Benjamin Zarrieß. '''Projection in a Description Logic of Context with Actions'''. In Proceedings of the 32nd International Workshop on Description Logics (DL'19)\, Oslo\, Norway\, June 2019. Springer. To appear.\n\n\n'''Abstract:''' Projection is the problem of checking whether the execution of a given sequence of actions will achieve its goal starting from some initial state. In this paper\, we study a setting where we combine a two-dimensional Description Logic of context (ConDL) with an action formalism. We choose a well-studied ConDL in which both the possible states of a dynamical system itself (object level) and different context-dependent views on this system state (context level) are organised in relational structures and can be described using usual DL constructs. To represent how such a system and its views evolve\, we introduce a suitable action formalism. It allows one to describe change on both levels. Furthermore\, the observable changes on the object level due to an action execution can also be context-dependent. We show that the formalism is well-behaved in the sense that projection has the same complexity as standard reasoning tasks in the case where ALCO is the underlying DL.\n\nThis talk is a rehearsal for a SHORT ORAL presentation (17 minutes without questions) at DL 2019.
DTSTAMP:20190603T115201
SEQUENCE:28557
END:VEVENT
BEGIN:VEVENT
SUMMARY:Explorations into Belief State Compression
URL://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
UID://iccl.inf.tu-dresden.de/web/Explorations_into_Belief_State_Compression
DTSTART:20190528T133000
DTEND:20190528T143000
LOCATION:APB 3027
DESCRIPTION:A knowledge base is an integral part of a logic-based artificial intelligence system. The size of the knowledge base has a great effect on the derivation time of a logic-based agent. In this thesis\, I present a variety of algorithms for a particular variant of knowledge base size reduction referred to as “Belief State Compression”. Each proposed algorithm can be “lossy” or “lossless” depending on the (in)ability to recover the removed information\; and “redundant” or “irredundant” with respect to the necessity of the remaining information in order to remain lossless. Belief state compression differs from previous approaches in at least three aspects. First\, its objects\, which we refer to as belief states\, are support-structured sets of unconstrained logical formulas rather than flat sets of syntactically constrained formulas. Second\, classical notions of minimality and redundancy are replaced by weaker\, resource-bounded alternatives based on the support structure. Third\, in “lossy” variants of compression\, the compressed knowledge base logically implies only a practically-relevant subset of the original knowledge base. Six variants of belief state compression\, falling into three major classes\, are presented. Experimental results show that a combination of five of them results in mostly irredundant\, lossless compressions\, while maintaining reasonable run times.
DTSTAMP:20190523T202953
SEQUENCE:28517
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quine's Fluted Fragment
URL://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
UID://iccl.inf.tu-dresden.de/web/Quine%27s_Fluted_Fragment
DTSTART:20190509T130000
DTEND:20190509T143000
DESCRIPTION:We consider the fluted fragment\, a decidable fragment of first-order logic with an unbounded number of variables\, originally identified in 1968 by W. V. Quine. We show that the satisfiability problem for this fragment has non-elementary complexity\, thus refuting an earlier published claim by W. C. Purdy that it is in NExpTime. More precisely\, we consider the intersection of the fluted fragment and the m-variable fragment of first-order logic\, for all non-negative m. We obtain upper and lower complexity bounds for this fragment that coincide for all m up to the value 4.\n\n\n'''Short bio:''' Ian Pratt-Hartmann studied mathematics and philosophy at [http://www.bnc.ox.ac.uk Brasenose College\, Oxford]\, and philosophy at [http://www.princeton.edu/main/ Princeton] and [http://www.stanford.edu/ Stanford] Universities\, gaining his PhD from Princeton. He is currently Senior Lecturer in the Department of Computer Science at the [http://www.manchester.ac.uk/ University of Manchester]. Since February 2014\, Dr. Pratt-Hartmann has held a joint appointment in the [http://informatyka.wmfi.uni.opole.pl/ Institute of Computer Science] at the [http://www.uni.opole.pl/ University of Opole]. His academic interests range widely over computational logic\, natural language semantics\, and artificial intelligence.
DTSTAMP:20190503T071228
SEQUENCE:28293
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data Science Use Cases for Lifestyle Banking
URL://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
UID://iccl.inf.tu-dresden.de/web/Data_Science_Use_Cases_for_Lifestyle_Banking
DTSTART:20190425T130000
DTEND:20190425T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Traditional banking concerns itself with risk understanding\, credit underwriting\, cash needs\, liquidity\, etc. Data science and machine learning have proved useful in banking businesses to mitigate risk while targeting the right customers with cash needs. In this talk\, we will explore a lifestyle side of banking that goes beyond the traditional realm and delves into alternative signals of customers' needs. We will see example data science use cases that have been successfully implemented in the banking business at Siam Commercial Bank.
DTSTAMP:20190411T134324
SEQUENCE:28212
END:VEVENT
BEGIN:VEVENT
SUMMARY:Closed-World Semantics for Conjunctive Queries with Negation over ELH_bottom Ontologies
URL://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
UID://iccl.inf.tu-dresden.de/web/Closed-World_Semantics_for_Conjunctive_Queries_with_Negation_over_ELH_bottom_Ontologies
DTSTART:20190418T130000
DTEND:20190418T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract:''' Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information\, however\, there exist only a few ontology-based approaches. Moreover\, these proposals conflate the closed-domain and closed-world assumptions\, and are therefore not suited to deal with the anonymous objects that are common in ontological reasoning. We propose a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic ELH-bottom\, which is based on the minimal canonical model. We propose a rewriting strategy for dealing with negated query atoms\, which shows that query answering is possible in polynomial time in data complexity.\n\n\nThis work was awarded Best Paper at JELIA 2019: https://jelia2019.mat.unical.it/awards.
DTSTAMP:20190418T100913
SEQUENCE:28243
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Power of the Terminating Chase
URL://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
UID://iccl.inf.tu-dresden.de/web/The_Power_of_the_Terminating_Chase
DTSTART:20190411T130000
DTEND:20190411T143000
LOCATION:APB 3027
DESCRIPTION:'''Abstract''':\nThe chase has become a staple of modern database theory with applications in data integration\, query optimisation\, data exchange\, ontology-based query answering\, and many other areas. Most application scenarios and implementations require the chase to terminate and produce a finite universal model\, and a large arsenal of sufficient termination criteria is available to guarantee this (generally undecidable) condition. In this invited tutorial\, we therefore ask about the expressive power of logical theories for which the chase terminates. Specifically\, which database properties can be recognised by such theories\, i.e.\, which Boolean queries can they realise? For the Skolem (semi-oblivious) chase\, and almost any known termination criterion\, this expressivity is just that of plain Datalog. Surprisingly\, this limitation of most prior research does not apply to the chase in general. Indeed\, we show that standard-chase-terminating theories can realise queries with data complexities ranging from PTime to non-elementary that are out of reach for the terminating Skolem chase. A “Datalog-first” standard chase that prioritises applications of rules without existential quantifiers makes modelling simpler and\, we conjecture\, computationally more efficient. This is one of the many open questions raised by our insights\, and we conclude with an outlook on the research opportunities in this area.\n\nThis work has been published and presented at ICDT 2019\, Lisbon\, Portugal.
DTSTAMP:20190327T143510
SEQUENCE:27953
END:VEVENT
BEGIN:VEVENT
SUMMARY:Beyond NP Revolution
URL://iccl.inf.tu-dresden.de/web/Beyond_NP_Revolution
UID://iccl.inf.tu-dresden.de/web/Beyond_NP_Revolution
DTSTART:20190409T133000
DTEND:20190409T150000
LOCATION:APB 2026
DESCRIPTION:'''Abstract:''' The paradigmatic NP-complete problem of Boolean satisfiability (SAT) solving is a central problem in Computer Science. While the mention of SAT can be traced to the early 19th century\, efforts to develop practically successful SAT solvers go back to the 1950s. The past 20 years have witnessed an "NP revolution" with the development of conflict-driven clause-learning (CDCL) SAT solvers. Such solvers combine a classical backtracking search with a rich set of effective heuristics. While 20 years ago SAT solvers were able to solve instances with at most a few hundred variables\, modern SAT solvers solve instances with up to millions of variables in a reasonable time. The "NP revolution" opens up opportunities to design practical algorithms with rigorous guarantees for problems in complexity classes beyond NP by replacing an NP oracle with a SAT solver. In this talk\, we will discuss how we use the NP revolution to design practical algorithms for two fundamental problems in artificial intelligence and formal methods: constrained counting and sampling.\n\n\n'''Bio:''' Kuldeep Meel is an Assistant Professor of Computer Science in the School of Computing at the National University of Singapore\, where he holds the Sung Kah Kay Assistant Professorship. He received his Ph.D. (2017) and M.S. (2014) degrees in Computer Science from Rice University. He holds a B.Tech. (with Honors) degree (2012) in Computer Science and Engineering from the Indian Institute of Technology\, Bombay. His research interests lie at the intersection of Artificial Intelligence and Formal Methods. Meel has co-presented tutorials at top-tier AI conferences: IJCAI 2018\, AAAI 2017\, and UAI 2016. His work received the 2018 Ralph Budd Award for Best PhD Thesis in Engineering\, the 2014 Outstanding Masters Thesis Award from the Vienna Center of Logic and Algorithms\, and the Best Student Paper Award at CP 2015. He received the IBM Ph.D. Fellowship and the 2016-17 Lodieska Stockbridge Vaughn Fellowship for his work on constrained sampling and counting.
DTSTAMP:20190404T131214
SEQUENCE:28083
END:VEVENT
END:VCALENDAR