LIUM-CVC Submissions for WMT17 Multimodal Translation Task

Abstract

This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for the WMT17 Shared Task on Multimodal Translation. We mainly explored two multimodal architectures in which either global visual features or convolutional feature maps are integrated into the NMT system in order to benefit from visual context. Our final systems ranked first for both En-De and En-Fr language pairs according to the automatic evaluation metrics METEOR and BLEU.
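As one way to make the first of these two ideas concrete, a global visual feature vector can be projected into the decoder's hidden space and used to condition the decoder, for example as its initial state. The sketch below illustrates this generic pattern only; the dimensions (a 2048-d CNN feature, a 512-d decoder state), the projection, and the function names are illustrative assumptions, not the systems described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 2048-d global CNN feature and a 512-d decoder
# hidden state. Both are assumptions for illustration only.
FEAT_DIM, HID_DIM = 2048, 512

# Projection parameters (random here; learned in a real system).
W_img = rng.standard_normal((HID_DIM, FEAT_DIM)) * 0.01
b_img = np.zeros(HID_DIM)

def init_decoder_state(global_feat):
    """Map a global visual feature vector to an initial decoder state."""
    return np.tanh(W_img @ global_feat + b_img)

# Example with a fake image feature vector.
feat = rng.standard_normal(FEAT_DIM)
h0 = init_decoder_state(feat)
print(h0.shape)
```

The second architecture family mentioned in the abstract (attention over convolutional feature maps) would instead keep a grid of spatial features and attend over them at each decoding step, analogously to attention over source words.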

Publication
Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers
Ozan Caglayan
PhD Student in Machine Translation