Gradual Learning of Matrix-Space Models of Language for Sentiment Analysis
Shima Asaadi, Sebastian Rudolph
In Phil Blunsom, Antoine Bordes, Kyunghyun Cho, Shay B. Cohen, Chris Dyer, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, Scott Yih, eds., Proceedings of the 2nd Workshop on Representation Learning for NLP, ACL 2017, pages 178–185, August 2017. Association for Computational Linguistics.
Abstract

Learning word representations to capture the semantics and compositionality of language has received much research interest in natural language processing. Beyond the popular vector space models, matrix representations for words have been proposed, since matrix multiplication can then serve as a natural composition operation. In this work, we investigate the problem of learning matrix representations of words. We present a learning approach for compositional matrix-space models for the task of sentiment analysis. We show that our approach, which learns the matrices gradually in two steps, outperforms other approaches and a gradient-descent baseline in terms of quality and computational cost.

- Project: QuantLA
- Research Group: Computational Logic
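
The abstract's key construction, representing each word as a square matrix and composing a phrase by multiplying the word matrices in order, can be sketched in a few lines. The Python/NumPy snippet below is a minimal illustration of the general compositional matrix-space idea, not the paper's actual two-step learning procedure; the dimensionality d, the near-identity initialization, the toy lexicon, and the boundary vectors u and v used to read off a scalar sentiment score are all illustrative assumptions.

import numpy as np

d = 3  # dimensionality of the word matrices (illustrative choice)
rng = np.random.default_rng(0)

def init_word_matrix():
    # Initialize near the identity so that, before any training,
    # composing with a word acts roughly as a no-op.
    return np.eye(d) + 0.1 * rng.standard_normal((d, d))

# Hypothetical toy lexicon; in the paper these matrices are learned.
lexicon = {w: init_word_matrix() for w in ["not", "very", "good"]}

def compose(phrase):
    # Represent a phrase as the ordered product of its word matrices.
    M = np.eye(d)
    for word in phrase.split():
        M = M @ lexicon[word]
    return M

# One common formulation reads a scalar sentiment score off the
# composed matrix via fixed boundary vectors u and v: score = u^T M v.
u = np.zeros(d); u[0] = 1.0
v = np.zeros(d); v[-1] = 1.0

print(u @ compose("not very good") @ v)

Because matrix multiplication is associative but not commutative, the composed representation is inherently sensitive to word order, which is what makes it attractive as a composition operation compared to, e.g., vector addition.
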
@InProceedings{asaadi-rudolph:2017:RepL4NLP,
author = {Asaadi, Shima and Rudolph, Sebastian},
title = {Gradual Learning of Matrix-Space Models of Language for Sentiment Analysis},
booktitle = {Proceedings of the 2nd Workshop on Representation Learning for NLP},
month = {August},
year = {2017},
address = {Vancouver, Canada},
publisher = {Association for Computational Linguistics},
pages = {178--185},
abstract = {Learning word representations to capture the semantics and compositionality of
language has received much research interest in natural language processing.
Beyond the popular vector space models, matrix representations for words have
been proposed, since matrix multiplication can then serve as a natural
composition operation. In this work, we investigate the problem of learning
matrix representations of words. We present a learning approach for
compositional matrix-space models for the task of sentiment analysis. We show
that our approach, which learns the matrices gradually in two steps,
outperforms other approaches and a gradient-descent baseline in terms of
quality and computational cost.},
url = {http://www.aclweb.org/anthology/W17-2621}
}