Natural-Language Understanding

From International Center for Computational Logic


Natural Language Understanding (NLU) is a broad area of research that addresses the problem of interpreting and modeling the semantics and pragmatics of natural language, drawing on a diverse set of techniques. Since human reasoning is often mediated by natural language, success in this area is key for intelligent systems that aim to support human reasoning. Themes of interest to us include the representation of and reasoning with human-like concepts, with incomplete, uncertain, or vague statements, and with pragmatic or non-explicit information. The methods we use are mostly based on (but not restricted to) logical frameworks.

Scientific Staff


Journal Articles

Shima Asaadi, Eugenie Giesbrecht, Sebastian Rudolph
Compositional matrix-space models of language: Definitions, properties, and learning methods
Natural Language Engineering, 29(1):1-49, January 2023

Proceedings Articles

Shima Asaadi, Saif M. Mohammad, Svetlana Kiritchenko
Big BiRD: A Large, Fine-Grained, Bigram Relatedness Dataset for Examining Semantic Composition
Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), June 2019
Shima Asaadi, Sebastian Rudolph
Gradual Learning of Matrix-Space Models of Language for Sentiment Analysis
In Phil Blunsom, Antoine Bordes, Kyunghyun Cho, Shay B. Cohen, Chris Dyer, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, Scott Yih, eds., Proceedings of the 2nd Workshop on Representation Learning for NLP (ACL 2017), 178-185, August 2017. Association for Computational Linguistics
Shima Asaadi, Sebastian Rudolph
On the Correspondence between Compositional Matrix-Space Models of Language and Weighted Automata
Proceedings of the ACL Workshop on Statistical Natural Language Processing and Weighted Automata (StatFSM 2016), August 2016

Talks and Miscellaneous

SECAI-SQUARE-SHORT.pdf
