BERT (Bidirectional Encoder Representations from Transformers) is a new language representation model. BERT is conceptually simple and empirically powerful: it obtains new state-of-the-art results on eleven natural language processing tasks.
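The "bidirectional" part can be illustrated with a minimal sketch (an illustration, not the paper's implementation): in an encoder like BERT's, self-attention carries no causal mask, so every position attends to both its left and right context, whereas a left-to-right model masks out future positions.

```python
import numpy as np

def self_attention(X, causal=False):
    """Scaled dot-product self-attention over a sequence X of shape (T, d).

    causal=False: every position attends to all positions (bidirectional,
    as in an encoder like BERT's). causal=True: each position sees only
    itself and earlier positions (left-to-right)."""
    T, d = X.shape
    scores = X @ X.T / np.sqrt(d)  # (T, T) attention logits
    if causal:
        # Mask out future positions with -inf before the softmax.
        keep = np.tril(np.ones((T, T), dtype=bool))
        scores = np.where(keep, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X  # (T, d) contextualized vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
bi = self_attention(X)               # tokens see left and right context
uni = self_attention(X, causal=True) # token 0 can only attend to itself
```

Under the causal mask, position 0 attends only to itself, so its output equals its input; without the mask its representation mixes in information from every other token.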
Universal Language Model Fine-tuning for Text Classification
The article puts forward ULMFiT, an effective transfer learning method that can be applied to any task in NLP. With only 100 labeled examples, it matches the performance of training from scratch on 100x more data, and it reduces error by 18-24% relative to the state of the art on six text classification tasks.