Property:Long description EN


This is a property of type String.

Showing 7 pages that store a value for this property.
C
Coprocessor is a standalone binary that can simplify formulas of different problem types. While most of its techniques are designed to work on CNF formulas, optimization problems such as MaxSAT, as well as MUS and QBF, are also supported. The implemented simplifications cover almost all published simplification techniques. The tool provides statistical output, as well as the ability to specify the order in which techniques are applied and to exclude variables from simplification. Hence, Coprocessor can also be used to simplify formulas only partially -- which is especially useful if multiple solutions over a given set of variables need to be found.  +
N
''Nemo'' is a free, datalog-based rule engine for fast and scalable analytic data processing in memory. It is available as a command-line tool <tt>nmo</tt>, through bindings to other programming languages, and via a [https://tools.iccl.inf.tu-dresden.de/nemo/ browser-based web application]. Goals of Nemo are performance, declarativity, versatility, and reliability. It is written in Rust. Nemo's data model aims at compatibility with [https://www.w3.org/TR/rdf11-concepts/ RDF]/[https://www.w3.org/TR/sparql11-overview/ SPARQL] while preserving established logic programming conventions and features. The rule language supported by Nemo is an extension of Datalog that incorporates, among others, the following features: * Datatypes with suitable built-in predicates and functions (e.g., arithmetic comparisons and operations) * Existential rules (a.k.a. tuple-generating dependencies) * Stratified negation Nemo can be used for ''data analysis'', ''query answering'', ''data integration'', and also specialised tasks such as ''program analysis'' or ''ontological reasoning''. It allows the use of multiple datasources at the same time, of various types and formats (RDF graphs, CSV files, etc.). Nemo is a successor project of [[VLog/en|VLog]], and uses some of the same techniques (esp. columnar storage), but has a completely new architecture and implementation that enables Nemo to support further features while achieving faster performance.  +
R
Riss is a SAT solver (package) that is based on MiniSat 2.2 and Glucose 2.2 and uses the conflict-driven clause learning algorithm. It furthermore includes the formula simplification tool Coprocessor, which can be used to simplify the formula before search as well as during search. Riss includes many search algorithm extensions and currently (2014) has about 500 parameters. Another feature of Riss is that it can emit unsatisfiability proofs in the DRAT format for almost all implemented techniques. These proofs can also be verified online while solving the actual problem. The Riss framework furthermore includes a CNF formula feature extraction tool, so that, equipped with a machine learning tool, a configuration of Riss can be chosen per formula. Furthermore, there is the parallel portfolio solver Priss, which is also able to produce unsatisfiability proofs while respecting shared clauses. Another parallel solving algorithm, iterative partitioning, is included in the related system Pcasso. Instead of running multiple configurations of a solver on a single formula, Pcasso partitions the search space of the formula and assigns solvers to each partition, as well as to the original formula. If there are idle resources, partitions are re-partitioned recursively. Furthermore, Pcasso allows clause sharing. Inprocessing is currently not supported, but will be added in the near future.  +
S
Semantic MediaWiki (SMW) is an extension of MediaWiki – the wiki application best known for powering Wikipedia – that helps to search, organise, tag, browse, evaluate, and share the wiki's content. While traditional wikis contain only text which computers can neither understand nor evaluate, SMW adds semantic annotations that allow a wiki to function as a collaborative database. First released in 2005, Semantic MediaWiki has since grown into a successful open source project that is used on hundreds of sites, including the home of the International Center for Computational Logic. In addition, a large number of related extensions have been created that extend the ability to edit, display and browse through the data stored by SMW: the term "Semantic MediaWiki" is sometimes used to refer to this entire family of extensions.  +
V
'''A current successor system of VLog is [[Nemo/en|Nemo]], which improves upon the methods and features of VLog in several ways.''' <b>VLog</b> (Vertical Datalog) is a <b>rule engine</b> that supports reasoning over <i>Horn existential rules</i> with stratified negation and, implicitly, <i>Datalog</i>. It uses a novel strategy based on a vertical storage architecture [1], which exhibits state-of-the-art performance, distinguishing itself by an excellent memory footprint and scalability [2]. The engine applies a bottom-up strategy for existential rules, supporting the two most studied materialisation algorithms: the <i>Skolem</i> and the <i>Restricted (Standard) Chase</i>. The latter is recommended for use, as it leads to termination in more cases. VLog can be used for <i>query answering</i>, <i>federated reasoning</i> and <i>data integration</i>. It allows the use of multiple datasources at the same time, of various types (RDF stores, CSV files, OWL ontologies and remote SPARQL endpoints). VLog is open-source, and it provides two main tools for such reasoning tasks: * the core C++ reasoner <b>VLog</b> (https://github.com/karmaresearch/vlog), which is a command-line client with an interactive web interface that provides graphical representations of each rule execution, useful for tracing and debugging rule programs. The pre-compiled binaries can be obtained via Docker (karmaresearch/vlog), which facilitates platform-independent use. * the Java API <b>Rulewerk</b> (https://github.com/knowsys/rulewerk), which integrates the core reasoner for all major OSs, allowing an easy embedding into other applications, and providing additional functionality. It comprises multiple <i>Maven</i> modules (<i>org.semanticweb.rulewerk</i>): <i>rulewerk-core</i> provides essential data models for rules and facts, and an interface for essential reasoner functionality; <i>rulewerk-parser</i> supports processing knowledge bases in Rulewerk syntax; <i>rulewerk-owlapi</i> supports converting rules from OWL ontologies using the OWL API; <i>rulewerk-rdf</i> supports loading data from RDF files; <i>rulewerk-graal</i> supports converting rules, facts and queries from Graal API objects and DLGP files; <i>rulewerk-client</i> is a stand-alone application that builds a command-line client for Rulewerk; <i>rulewerk-commands</i> provides support for running commands, as done by the client; <i>rulewerk-vlog</i> supports using VLog as a reasoning backend for Rulewerk; and <i>rulewerk-examples</i> demonstrates the use of the above functionality. [1] Urbani, J., Jacobs, C., Krötzsch, M.: Column-oriented Datalog materialization for large knowledge graphs. In: Proc. 30th AAAI Conf. on Artificial Intelligence (AAAI’16). pp. 258–264. AAAI Press (2016) [2] Urbani, J., Krötzsch, M., Jacobs, C., Dragoste, I., Carral, D.: Efficient model construction for Horn logic with VLog: System description. In: Proc. 9th Int. Joint Conf. on Automated Reasoning (IJCAR’18). LNAI, Springer (2018)  
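To illustrate how the Rulewerk modules fit together, the following is a minimal, hedged sketch of embedding the reasoner in a Java application. It follows the usage pattern of recent Rulewerk releases; the exact class and method names (e.g. <tt>RuleParser</tt>, <tt>VLogReasoner</tt>, <tt>answerQuery</tt>) are assumptions here and should be checked against the Rulewerk documentation for the version in use.

<syntaxhighlight lang="java">
// Hedged sketch of using rulewerk-core, rulewerk-parser and rulewerk-vlog;
// names follow the Rulewerk examples but may vary between versions.
import org.semanticweb.rulewerk.core.model.api.PositiveLiteral;
import org.semanticweb.rulewerk.core.reasoner.KnowledgeBase;
import org.semanticweb.rulewerk.core.reasoner.QueryResultIterator;
import org.semanticweb.rulewerk.core.reasoner.Reasoner;
import org.semanticweb.rulewerk.parser.RuleParser;
import org.semanticweb.rulewerk.reasoner.vlog.VLogReasoner;

public class RulewerkSketch {
    public static void main(String[] args) throws Exception {
        // Facts and Datalog rules written in Rulewerk syntax (handled by rulewerk-parser).
        KnowledgeBase kb = RuleParser.parse(
                "parent(alice, bob) . parent(bob, carol) . "
                + "ancestor(?X, ?Y) :- parent(?X, ?Y) . "
                + "ancestor(?X, ?Z) :- parent(?X, ?Y), ancestor(?Y, ?Z) .");
        // Use VLog as the reasoning backend (rulewerk-vlog) and materialise the model.
        try (Reasoner reasoner = new VLogReasoner(kb)) {
            reasoner.reason();
            PositiveLiteral query = RuleParser.parsePositiveLiteral("ancestor(?X, ?Y)");
            // Query the materialised model; the boolean flag controls whether
            // answers containing labelled nulls are included.
            try (QueryResultIterator answers = reasoner.answerQuery(query, true)) {
                answers.forEachRemaining(answer -> System.out.println(answer.getTerms()));
            }
        }
    }
}
</syntaxhighlight>

The sketch materialises a small ancestor relation and prints all derived facts; existential rules and external data sources (CSV, RDF, SPARQL) would be added to the same knowledge base in an analogous way.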
W
This page describes multiple files with anonymised logs of several hundred million '''SPARQL queries from the Wikidata SPARQL endpoint''' that accompany the publication <div class="card" ><div class="card-body">Stanislav Malyshev, Markus Krötzsch, Larry González, Julius Gonsior, Adrian Bielefeldt:<br/> '''[[Inproceedings3044/en|Getting the Most out of Wikidata: Semantic Technology Usage in Wikipedia’s Knowledge Graph]].'''<br/> In Proceedings of the 17th International Semantic Web Conference (ISWC-18), Springer 2018. [https://iccl.inf.tu-dresden.de/w/images/5/5a/Malyshev-et-al-Wikidata-SPARQL-ISWC-2018.pdf PDF]</div></div> Further related publications can be found in the publications tab. The following datasets are currently available. Details on how this data was created are explained below. We also offer a [https://iccl.inf.tu-dresden.de/w/images/8/81/Wikidata-queries-sample.tsv sample snippet] that illustrates the structure of the files. <table class="table"> <tr> <th>Interval</th> <th>First day</th> <th>Last day</th> <th class="text-right">Queries</th> <th>Download (tsv.gz)</th> <th>Size</th> </tr> <tr> <td>Interval 1</td> <td>2017-06-12</td> <td>2017-07-09</td> <td class="text-right">59,547,909</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-06-12_2017-07-09/2017-06-12_2017-07-09_all.tsv.gz All queries, success (HTTP code 200)]</td> <td>2.7G</td> </tr> <tr> <td></td> <td></td> <td></td> <td class="text-right">192,330</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-06-12_2017-07-09/2017-06-12_2017-07-09_organic.tsv.gz Organic queries, success (HTTP code 200)]</td> <td>5.7M</td> </tr> <tr> <td></td> <td></td> <td></td> <td class="text-right">10,853</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-06-12_2017-07-09/2017-06-12_2017-07-09_status_500.tsv.gz All queries, timeout (HTTP code 500)]</td> <td>463K</td> </tr> <tr> <td>Interval 2</td> <td>2017-07-10</td> <td>2017-08-06</td> <td class="text-right">66,459,799</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-07-10_2017-08-06/2017-07-10_2017-08-06_all.tsv.gz All queries, success (HTTP code 200)]</td> <td>2.6G</td> </tr> <tr> <th></th> <td></td> <td></td> <td class="text-right">200,726</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-07-10_2017-08-06/2017-07-10_2017-08-06_organic.tsv.gz Organic queries, success (HTTP code 200)]</td> <td>5.7M</td> </tr> <tr> <td></td> <td></td> <td></td> <td class="text-right">10,933</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-07-10_2017-08-06/2017-07-10_2017-08-06_status_500.tsv.gz All queries, timeout (HTTP code 500)]</td> <td>447K</td> </tr> <tr> <td>Interval 3</td> <td>2017-08-07</td> <td>2017-09-03</td> <td class="text-right">78,000,469</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-08-07_2017-09-03/2017-08-07_2017-09-03_all.tsv.gz All queries, success (HTTP code 200)]</td> <td>2.8G</td> </tr> <tr> <th></th> <td></td> <td></td> <td class="text-right">268,464</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-08-07_2017-09-03/2017-08-07_2017-09-03_organic.tsv.gz Organic queries, success (HTTP code 200)]</td> <td>8.8M</td> </tr> <tr> <td></td> <td></td> <td></td> <td class="text-right">16,488</td> 
<td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-08-07_2017-09-03/2017-08-07_2017-09-03_status_500.tsv.gz All queries, timeout (HTTP code 500)]</td> <td>594K</td> </tr> <tr> <td>Interval 4</td> <td>2017-12-03</td> <td>2017-12-30</td> <td class="text-right">101,545,006</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-12-03_2017-12-30/2017-12-03_2017-12-30_all.tsv.gz All queries, success (HTTP code 200)]</td> <td>3.1G</td> </tr> <tr> <th></th> <td></td> <td></td> <td class="text-right">500,339</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-12-03_2017-12-30/2017-12-03_2017-12-30_organic.tsv.gz Organic queries, success (HTTP code 200)]</td> <td>15M</td> </tr> <tr> <td></td> <td></td> <td></td> <td class="text-right">16,922</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2017-12-03_2017-12-30/2017-12-03_2017-12-30_status_500.tsv.gz All queries, timeout (HTTP code 500)]</td> <td>727K</td> </tr> <tr> <td>Interval 5</td> <td>2018-01-01</td> <td>2018-01-28</td> <td class="text-right">91,827,133</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2018-01-01_2018-01-28/2018-01-01_2018-01-28_all.tsv.gz All queries, success (HTTP code 200)]</td> <td>3.0G</td> </tr> <tr> <th></th> <td></td> <td></td> <td class="text-right">600,767</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2018-01-01_2018-01-28/2018-01-01_2018-01-28_organic.tsv.gz Organic queries, success (HTTP code 200)]</td> <td>15M</td> </tr> <tr> <td></td> <td></td> <td></td> <td class="text-right">19,262</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2018-01-01_2018-01-28/2018-01-01_2018-01-28_status_500.tsv.gz All queries, timeout (HTTP code 500)]</td> <td>839K</td> </tr> <tr> <td>Interval 6</td> <td>2018-01-29</td> <td>2018-02-25</td> <td class="text-right">96,186,795</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2018-01-29_2018-02-25/2018-01-29_2018-02-25_all.tsv.gz All queries, success (HTTP code 200)]</td> <td>3.1G</td> </tr> <tr> <th></th> <td></td> <td></td> <td class="text-right">895,767</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2018-01-29_2018-02-25/2018-01-29_2018-02-25_organic.tsv.gz Organic queries, success (HTTP code 200)]</td> <td>19M</td> </tr> <tr> <td></td> <td></td> <td></td> <td class="text-right">22,848</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2018-01-29_2018-02-25/2018-01-29_2018-02-25_status_500.tsv.gz All queries, timeout (HTTP code 500)]</td> <td>1.1M</td> </tr> <tr> <td>Interval 7</td> <td>2018-02-26</td> <td>2018-03-25</td> <td class="text-right">82,211,741</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2018-02-26_2018-03-25/2018-02-26_2018-03-25_all.tsv.gz All queries, success (HTTP code 200)]</td> <td>1.8G</td> </tr> <tr> <th></th> <td></td> <td></td> <td class="text-right">872,555</td> <td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2018-02-26_2018-03-25/2018-02-26_2018-03-25_organic.tsv.gz Organic queries, success (HTTP code 200)]</td> <td>18M</td> </tr> <tr> <td></td> <td></td> <td></td> <td class="text-right">25,667</td> 
<td>[https://analytics.wikimedia.org/datasets/one-off/wikidata/sparql_query_logs/2018-02-26_2018-03-25/2018-02-26_2018-03-25_status_500.tsv.gz All queries, timeout (HTTP code 500)]</td> <td>1.2M</td> </tr> </table> All of the above are published under [https://creativecommons.org/publicdomain/zero/1.0/ License CC-0], which minimises legal obstacles in re-use. The authors believe that the good scientific practice of acknowledging related work with a citation does not need to be enforced legally. ==What is in this dataset?== The dataset consists of several files that each contain SPARQL logs from a specific time interval, complete with SPARQL query, timestamp, and user agent information. Queries are anonymised as described below, but are valid SPARQL queries. Files are in gzipped tab-separated values format, each containing the following columns: # '''Anonymised query:''' The original query, reformatted and processed for reducing identifiability. This string is URL-encoded. # '''Timestamp:''' The exact time (timezone GMT) of the request, in ISO format. # '''Source category:''' This field indicates whether we believe that the query was issued by an automated process. This is true for all queries that came from non-browser agents, and in addition for some queries that used a browser-like agent. The field specifies the classification into ''robotic'' and ''organic'' traffic as explained in the [[Inproceedings3044/en|paper]]. # '''User agent:''' A simplified/anonymised version of the user agent string that was used with the request. It is simply "browser" for all browser-like agents, and might be slightly more specific for bot-like agents (e.g. "PHP" or "curl"). See below. (A minimal sketch for reading these files is given at the end of this description.) Overall, the data amounts to around 575 million requests. Removing all queries that we believe are sent by bots, there are still more than 3.5 million queries remaining (labelled "organic" above; these files are excerpts from the complete set). The queries are very diverse in terms of size and structure. ==Where does this data come from?== [[Wikidata/en|Wikidata]], the knowledge base of Wikimedia, is collecting a large amount of structured knowledge across all Wikimedia projects and languages. Since 2015, a [https://query.wikidata.org query service] is available to retrieve and analyze this data using the SPARQL query language. Since the data is rich and the query language is powerful, [https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/queries/examples many complex questions] can be asked and answered in this way. The service is used not only by individual power users but also by applications inside and outside of Wikimedia, which issue a large number of queries to provide users with the information they request. ==How was the data created?== All source code used for generating the data is published in a [https://github.com/Wikidata/QueryAnalysis dedicated git repository]. ===Anonymised query=== The '''query strings''' were processed to remove potentially identifying information as far as possible, and to reduce spurious signals that could be used to reconstruct user traces. The following steps were performed: * '''Stage 1:''' A SPARQL programming library (OpenRDF) was used to transform the original query string into an object model. If this fails (invalid query), the query is dropped completely. We do not publish any information about invalid requests. 
* '''Stage 2:''' The structure of the parsed SPARQL query was modified: ** All comments were removed ** All string literals in the query were replaced by placeholders of the form "<tt>string</tt>''number''" that have no relationship to the original string (we simply enumerate the strings as found in the query). *** The same string was uniformly replaced by the same placeholder within each query, but the same string across different queries was usually not replaced by the same placeholder. *** The only exceptions are very short strings of at most 10 characters, strings that represent a number, lists of language tags in language service calls (e.g., "en,de,fr"), and a small number of explicitly whitelisted strings that are used to configure the query service (e.g., the string "com.bigdata.rdf.graph.analytics.BFS" that instructs BlazeGraph to do a breadth-first search). These strings were preserved. ** All variable names were replaced by generated variable names "<tt>var</tt>''number''" or "<tt>var</tt>''number''<tt>Label</tt>" *** Replacement was uniform at the level of individual queries, as for strings. *** The ending "Label" was preserved, since BlazeGraph has special handling for such variables. ** All geographic coordinates were rounded to the next full degree (latitude and longitude). This was also done with coordinates in the alternative, more detailed format, where latitude and longitude are separate numerical values. * '''Stage 3:''' The modified query was converted back into a string ** All formatting details (whitespace, indentation, ...) were standardized in this process ** No namespace abbreviations are used in the generated query, and no namespace declarations are given. '''Example:''' The well-known example query for the 10 largest cities with a female mayor: <div dir="ltr" class="mw-geshi mw-code mw-content-ltr"><div class="sparql source-sparql"> <span class="co0">#Largest cities with female mayor</span> <span class="co0">#added before 2016-10</span> <span class="co0">#TEMPLATE={"template":"Largest ?c with ?sex head of government","variables":{"?sex":{"query":" SELECT ?id WHERE { ?id wdt:P31 wd:Q48264 . } "},"?c":{"query":"SELECT DISTINCT ?id WHERE {  ?c wdt:P31 ?id.  ?c p:P6 ?mayor. }"} } }</span> <span class="kw1">SELECT</span> <span class="kw1">DISTINCT</span> <span class="re1">?city</span> <span class="re1">?cityLabel</span> <span class="re1">?mayor</span> <span class="re1">?mayorLabel</span> <span class="kw1">WHERE</span> <span class="br0">{</span> <span class="kw1">BIND</span><span class="br0">(</span><span class="re2">wd:</span>Q6581072 <span class="kw1">AS</span> <span class="re1">?sex</span><span class="br0">)</span> <span class="kw1">BIND</span><span class="br0">(</span><span class="re2">wd:</span>Q515 <span class="kw1">AS</span> <span class="re1">?c</span><span class="br0">)</span> <span class="re1">?city</span> <span class="re2">wdt:</span>P31<span class="sy1">/</span><span class="re2">wdt:</span>P279<span class="sy1">*</span> <span class="re1">?c</span> <span class="sy0">.</span> <span class="co0"># find instances of subclasses of city</span> <span class="re1">?city</span> <span class="re2">p:</span>P6 <span class="re1">?statement</span> <span class="sy0">.</span> <span class="co0"># with a P6 (head of goverment) statement</span> <span class="re1">?statement</span> <span class="re2">ps:</span>P6 <span class="re1">?mayor</span> <span class="sy0">.</span> <span class="co0"># ... 
that has the value ?mayor</span> <span class="re1">?mayor</span> <span class="re2">wdt:</span>P21 <span class="re1">?sex</span> <span class="sy0">.</span> <span class="co0"># ... where the ?mayor has P21 (sex or gender) female</span> <span class="kw1">FILTER</span> <span class="kw1">NOT</span> <span class="kw1">EXISTS</span> <span class="br0">{</span> <span class="re1">?statement</span> <span class="re2">pq:</span>P582 <span class="re1">?x</span> <span class="br0">}</span> <span class="co0"># ... but the statement has no P582 (end date) qualifier</span>   <span class="co0"># Now select the population value of the ?city</span> <span class="co0"># (wdt: properties use only statements of "preferred" rank if any, usually meaning "current population")</span> <span class="re1">?city</span> <span class="re2">wdt:</span>P1082 <span class="re1">?population</span> <span class="sy0">.</span> <span class="co0"># Optionally, find English labels for city and mayor:</span> <span class="kw1">SERVICE</span> <span class="re2">wikibase:</span>label <span class="br0">{</span> <span class="re2">bd:</span>serviceParam <span class="re2">wikibase:</span>language <span class="st0">"en"</span> <span class="sy0">.</span> <span class="br0">}</span> <span class="br0">}</span> <span class="kw1">ORDER</span> <span class="kw1">BY</span> <span class="kw1">DESC</span><span class="br0">(</span><span class="re1">?population</span><span class="br0">)</span> <span class="kw1">LIMIT</span> <span class="nu0">10</span> </div></div> turns into the following normalized query, which yields the same results: <div dir="ltr" class="mw-geshi mw-code mw-content-ltr"><div class="sparql source-sparql"> <span class="kw1">SELECT</span> <span class="kw1">DISTINCT</span> <span class="re1">?var1</span> <span class="re1">?var1Label</span> <span class="re1">?var2</span> <span class="re1">?var2Label</span> <span class="kw1">WHERE</span> <span class="br0">{</span> <span class="kw1">BIND</span> <span class="br0">(</span> <span class="co1"><http://www.wikidata.org/entity/Q6581072></span> <span class="kw1">AS</span> <span class="re1">?var3</span> <span class="br0">)</span><span class="sy0">.</span> <span class="kw1">BIND</span> <span class="br0">(</span> <span class="co1"><http://www.wikidata.org/entity/Q515></span> <span class="kw1">AS</span> <span class="re1">?var4</span> <span class="br0">)</span><span class="sy0">.</span> <span class="re1">?var1</span> <span class="br0">(</span> <span class="co1"><http://www.wikidata.org/prop/direct/P31></span> <span class="sy1">/</span> <span class="co1"><http://www.wikidata.org/prop/direct/P279></span> <span class="sy1">*</span><span class="br0">)</span> <span class="re1">?var4</span> <span class="sy0">.</span> <span class="re1">?var1</span> <span class="co1"><http://www.wikidata.org/prop/P6></span> <span class="re1">?var5</span> <span class="sy0">.</span> <span class="re1">?var5</span> <span class="co1"><http://www.wikidata.org/prop/statement/P6></span> <span class="re1">?var2</span> <span class="sy0">.</span> <span class="re1">?var2</span> <span class="co1"><http://www.wikidata.org/prop/direct/P21></span> <span class="re1">?var3</span> <span class="sy0">.</span> <span class="kw1">FILTER</span> <span class="br0">(</span> <span class="br0">(</span> <span class="kw1">NOT</span> <span class="kw1">EXISTS</span> <span class="br0">{</span> <span class="re1">?var5</span> <span class="co1"><http://www.wikidata.org/prop/qualifier/P582></span> <span class="re1">?var6</span> <span class="sy0">.</span> <span 
class="br0">}</span> <span class="br0">)</span> <span class="br0">)</span> <span class="sy0">.</span> <span class="re1">?var1</span> <span class="co1"><http://www.wikidata.org/prop/direct/P1082></span> <span class="re1">?var7</span> <span class="sy0">.</span> <span class="kw1">SERVICE</span> <span class="co1"><http://wikiba.se/ontology#label></span> <span class="br0">{</span> <span class="co1"><http://www.bigdata.com/rdf#serviceParam></span> <span class="co1"><http://wikiba.se/ontology#language></span> <span class="st0">"en"</span><span class="sy0">.</span> <span class="br0">}</span> <span class="br0">}</span> <span class="kw1">ORDER</span> <span class="kw1">BY</span> <span class="kw1">DESC</span><span class="br0">(</span> <span class="re1">?var7</span> <span class="br0">)</span> <span class="kw1">LIMIT</span> <span class="nu0">10</span> </div></div> ===User agent=== The '''user agent''' was set to be "<tt>browser</tt>" for all user agents that start with "<tt>Mozilla</tt>" (for example "<tt>Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; chromeframe/12.0.742.100)</tt>" would be considered a "browser" for this purpose). Some additional browser-like strings were substituted manually with "browser" as well. For requests that do not originate from browsers, the "user agent" is a coarse description of the software or tool that was used in making the request. All agent strings have been stripped of system information and overly detailed version information. Agent strings that occurred less than 10,000 times in a twelve week window, or that only occurred in a single week were always replaced by <tt>other</tt>. A manually checked whitelist was used to decide which strings to keep. ===Source category=== The '''source category''' field is "<tt>robotic</tt>" if we believe that the source of the query was a bot (i.e, some automated software tool issuing large numbers of queries without human intervention). This is the case if the user agent was not a browser, or if the query traffic pattern was very unnatural (e.g., millions of similar queries in one hour). This corresponds to the classification into ''robotic'' and ''organic'' traffic as explained in the [[Inproceedings3044/en|paper]]. This field is there for convenience and only makes explicit how we interpreted the logs. As shown in our publications, organic queries are only a tiny fraction of all queries, but at the same time are structurally more diverse. In contrast, robotic queries contain many trivial queries generated automatically. For some research works, the organic queries might therefore be of special interest.  
a
Abstract Dialectical Frameworks (ADFs) are a generalisation of Dung’s argumentation frameworks. “Abstract Dialectical Frameworks solved by Binary Decision Diagrams, developed in Dresden” (ADF-BDD) is a novel approach that relies on translating the acceptance conditions of a given ADF into reduced ordered binary decision diagrams (roBDDs). Our system is based on the observation that many otherwise hard-to-decide problems in ADF semantics (e.g., answering SAT questions) can be solved in polynomial time on roBDDs. Our novel approach differs from currently used systems, such as SAT-based approaches or the wide spectrum of answer set programming (ASP) based approaches. ADF-BDD is written in Rust to provide good performance while enforcing a high degree of memory and type safety. In addition, the Rust compiler produces highly optimised machine code, while keeping the whole tech stack simple.  +