Our optimized pretraining method, RoBERTa, achieves state-of-the-art results on the widely used General Language Understanding Evaluation (GLUE) NLP benchmark. Alongside a paper detailing those results, we’re releasing the models and code we used to demonstrate our approach’s effectiveness.
https://ai.meta.com/blog/roberta-an-optimized-method-for-pretraining-self-supervised-nlp-systems/

