XLM-R
91 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in XLM-R.
Libraries
Use these libraries to find XLM-R models and implementations.
Most implemented papers
Unsupervised Cross-lingual Representation Learning at Scale
We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale.
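As a minimal sketch of how the XLM-R models from this paper are typically used, the snippet below loads the pre-trained encoder through the Hugging Face Transformers library and fills a masked token. The checkpoint name "xlm-roberta-base" and the masked-language-modeling usage are assumptions about the reader's setup, not details taken from this page.

```python
# Minimal sketch (not from the paper): load XLM-R via Hugging Face
# Transformers, assuming the "xlm-roberta-base" checkpoint name.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# A single SentencePiece vocabulary covers all 100 pre-training
# languages, so any language can be encoded the same way.
text = "XLM-R works with <mask> different languages."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Inspect the top prediction for the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = logits[0, mask_index].argmax(-1).item()
print(tokenizer.decode([predicted_id]))
```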
AdapterHub: A Framework for Adapting Transformers
We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages.
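A rough sketch of the adapter workflow AdapterHub enables is shown below. It assumes the AdapterHub "adapters" package and the "xlm-roberta-base" checkpoint; the adapter name "my_task" is purely illustrative, and the exact calls follow the library's documented pattern rather than anything stated on this page.

```python
# Sketch, assuming the AdapterHub "adapters" package (pip install adapters).
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)
adapters.init(model)  # patch the Transformers model with adapter support

# Add a new bottleneck adapter and train only its parameters,
# leaving the pre-trained XLM-R weights frozen.
model.add_adapter("my_task")       # "my_task" is a hypothetical adapter name
model.train_adapter("my_task")
model.set_active_adapters("my_task")

# ... run a standard Transformers training loop here ...

# The trained adapter (a few MB) can be saved and later "stitched in"
# to the same base model, or shared through AdapterHub.
model.save_adapter("./my_task_adapter", "my_task")
```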
MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages
We present the MASSIVE dataset: Multilingual Amazon SLU resource package (SLURP) for Slot-filling, Intent classification, and Virtual assistant Evaluation.
Emotion Classification in a Resource Constrained Language Using Transformer-based Approach
A Bengali emotion corpus consisting of 6,243 texts is developed for the classification task.
MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer
The main goal behind state-of-the-art pre-trained multilingual models such as multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer.
XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation
In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and evaluate their performance across a diverse set of cross-lingual tasks.
BERTweet: A pre-trained language model for English Tweets
We present BERTweet, the first public large-scale pre-trained language model for English Tweets.
A Bayesian Multilingual Document Model for Zero-shot Topic Identification and Discovery
In this paper, we present a Bayesian multilingual document model for learning language-independent document embeddings.
Applying Occam's Razor to Transformer-Based Dependency Parsing: What Works, What Doesn't, and What is Really Necessary
We find that the choice of pre-trained embeddings has by far the greatest impact on parser performance and identify XLM-R as a robust choice across the languages in our study.
ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic
To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation.