Towards Breaking the Language and Modality Barrier: Learning Cross-lingual Cross-modal Semantic Representations
From International Center for Computational Logic
Talk by Achim Rettinger
- Location: APB 3027
- Start: 19. January 2017 at 11:00 am
- End: 19. January 2017 at 11:30 am
- Research group: Computational Logic
Information retrieval and machine learning approaches run in the background of most applications we use in our daily digital life. The assistance they provide is manifold, but it relies on a set of core content-processing tasks that require compatible content representations. In real-world scenarios, however, this is rarely the case.
This talk is concerned with shared representation formalisms for content encoded in heterogeneous modalities. The heterogeneity may result from intra-modal varieties, such as text in different languages within the modality of natural language, or from the different modalities themselves, as when relating text to images. I will present three ways to obtain a joint representation of heterogeneously represented content. The first is based on explicit semantics as encoded in knowledge graphs; the second extends this approach by adding implicit semantics extracted from large data sets; the final one relies on joint learning without utilizing explicit semantics.
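To make the idea of a joint representation concrete, a common technique in this space is to learn a linear map that projects vectors from one embedding space (say, one language) into another, so that both share a single vector space. The following is a minimal sketch with synthetic toy data, not the speaker's actual method:

```python
import numpy as np

# Toy embeddings: 5 "words" in a source language (dim 4) and their
# counterparts in a target language. Values are random stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))        # source-language vectors
W_true = rng.normal(size=(4, 4))   # hidden relation between the spaces
Y = X @ W_true                     # target-language vectors

# Learn a linear map W by least squares that projects source vectors
# into the target space, yielding one shared vector space.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# A projected source vector now lies near its target counterpart.
proj = X @ W
print(np.allclose(proj, Y, atol=1e-8))
```

In practice such a map is trained from a seed dictionary of translation pairs or image-caption pairs, and nearest-neighbor search in the shared space then retrieves cross-lingual or cross-modal matches.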
The presented approaches contribute to the long-standing challenge of breaking the language and modality barriers, which enables the joint semantic processing of content in originally incompatible representation formalisms. This constitutes a fundamental building block for a more human notion of semantic data analytics.