Publications

(2019). Probing the Need for Visual Context in Multimodal Machine Translation. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).

(2018). How2: A Large-scale Dataset for Multimodal Language Understanding. Proceedings of the Workshop on Visually Grounded Interaction and Language (NeurIPS 2018).

(2018). LIUM-CVC Submissions for WMT18 Multimodal Translation Task. Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers.

(2017). NMTPY: A Flexible Toolkit for Advanced Neural Machine Translation Systems. The Prague Bulletin of Mathematical Linguistics.

(2017). LIUM-CVC Submissions for WMT17 Multimodal Translation Task. Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers.

(2017). LIUM Machine Translation Systems for WMT17 News Translation Task. Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers.

(2016). Multimodal Attention for Neural Machine Translation. arXiv preprint arXiv:1609.03976.

(2016). Does Multimodality Help Human and Machine for Translation and Image Captioning? Proceedings of the First Conference on Machine Translation.
