Lightweight Cross-Lingual Sentence Representation Learning

Cross-lingual sentence representation models enable tasks like cross-lingual sentence retrieval and cross-lingual knowledge transfer without the need to train a new monolingual representation model from scratch. However, there has been little exploration of lightweight models.

Writing software code. Image credit: pxhere.com, CC0 Public Domain

A recent paper on arXiv.org introduces a lightweight dual-transformer architecture with just two layers. It substantially reduces memory consumption and speeds up training. Two contrastive learning tasks are proposed to compensate for the learning bottleneck of the lightweight transformer on generative tasks. Experiments on cross-lingual tasks like multilingual document classification validate the ability of the proposed model to produce robust sentence representations.
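To make the shallow architecture concrete, below is a minimal PyTorch sketch of a two-layer transformer encoder that mean-pools token states into a fixed-dimensional sentence embedding. All sizes (vocab_size, dim, heads) are illustrative assumptions, and the paper's actual dual-transformer setup (e.g., how weights are shared across languages) is not reproduced here.

```python
import torch
import torch.nn as nn

class LightweightSentenceEncoder(nn.Module):
    """Two-layer transformer encoder with mean pooling (illustrative sizes)."""

    def __init__(self, vocab_size=50000, dim=512, heads=8, layers=2):
        super().__init__()
        # (positional encodings omitted for brevity)
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); id 0 marks padding
        pad_mask = token_ids.eq(0)
        hidden = self.encoder(self.embed(token_ids),
                              src_key_padding_mask=pad_mask)
        # mean-pool over non-pad positions -> fixed-dimensional embedding
        keep = (~pad_mask).unsqueeze(-1).float()
        return (hidden * keep).sum(1) / keep.sum(1).clamp(min=1)
```

With only two layers, such an encoder is far cheaper in memory and compute than deep multilingual encoders, which is the efficiency trade-off the paper targets.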

Large-scale models for learning fixed-dimensional cross-lingual sentence representations like LASER (Artetxe and Schwenk, 2019b) lead to significant improvement in performance on downstream tasks. However, further increases and modifications based on such large-scale models are usually impractical due to memory limitations. In this work, we introduce a lightweight dual-transformer architecture with just 2 layers for generating memory-efficient cross-lingual sentence representations. We explore different training tasks and observe that current cross-lingual training tasks leave a lot to be desired for this shallow architecture. To ameliorate this, we propose a novel cross-lingual language model, which combines the existing single-word masked language model with the newly proposed cross-lingual token-level reconstruction task. We further augment the training task by the introduction of two computationally-lite sentence-level contrastive learning tasks to enhance the alignment of cross-lingual sentence representation space, which compensates for the learning bottleneck of the lightweight transformer for generative tasks. Our comparisons with competing models on cross-lingual sentence retrieval and multilingual document classification confirm the effectiveness of the newly proposed training tasks for a shallow model.
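As a rough illustration of the sentence-level contrastive idea (a minimal sketch, not the paper's exact formulation), an InfoNCE-style loss over in-batch negatives aligns the embeddings of parallel sentences:

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(src_emb, tgt_emb, temperature=0.05):
    """InfoNCE-style loss aligning source/target sentence embeddings.

    src_emb, tgt_emb: (batch, dim) embeddings of parallel sentences;
    each source is pulled toward its own translation and pushed away
    from the other in-batch targets.
    """
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature          # cosine similarities
    labels = torch.arange(src.size(0), device=src.device)
    # symmetric: retrieve target given source and vice versa
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2
```

Objectives of this kind are computationally light because they reuse the other sentences in the batch as negatives, which suits a shallow encoder that struggles with heavier generative training tasks.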

Research paper: Mao, Z., Gupta, P., Chu, C., Jaggi, M., and Kurohashi, S., “Lightweight Cross-Lingual Sentence Representation Learning”, 2021. Link: https://arxiv.org/abs/2105.13856