AllenNLP is an NLP research library, built on PyTorch, for developing state-of-the-art deep learning models on a wide variety of linguistic tasks. It ships a collection of pretrained models, organized by task.
The available models, grouped by task:
Models for Question Answering:

MODEL | EM | PARAMETERS | PAPER | YEAR
---|---|---|---|---
RoBERTa SWAG | | 356 Million | |
Physical Interaction Question Answering | | 356 Million | |
Transformer QA | 84 | 355 Million | |
RoBERTa Common Sense QA | | 356 Million | |
Numerically Augmented QA Net | | 14 Million | |
GPT2-based Next Token Language Model | | 163 Million | |
ELMo-BiDAF | 71 | 113 Million | |
BiDAF | 66 | 12 Million | |
Models for Sentiment Analysis:

MODEL | ACCURACY | PARAMETERS | PAPER | YEAR
---|---|---|---|---
RoBERTa large SST | 95.11% | 355 Million | |
GLoVe-LSTM | 87% | 10 Million | |
Models for Named Entity Recognition:

MODEL | F1 | PARAMETERS | PAPER | YEAR
---|---|---|---|---
ELMo-based Named Entity Recognition | | 98 Million | |
Fine Grained Named Entity Recognition | 88 | 101 Million | |
Fine Grained Named Entity Recognition with Transformer | 88 | 125 Million | |
Models for Visual Question Answering:

MODEL | PARAMETERS | PAPER | YEAR
---|---|---|---
ViLBERT - Visual Question Answering | 245 Million | |
Models for Natural Language Inference:

MODEL | PARAMETERS | PAPER | YEAR
---|---|---|---
RoBERTa SNLI | 356 Million | |
RoBERTa MNLI | 356 Million | |
Enhanced LSTM for Natural Language Inference | 100 Million | |
ELMo-based Decomposable Attention | 94 Million | |
Models for Language Modelling:

MODEL | PARAMETERS | PAPER | YEAR
---|---|---|---
GPT2-based Next Token Language Model | 163 Million | |
BERT-based Masked Language Model | 131 Million | |
Models for Common Sense Reasoning:

MODEL | PARAMETERS | PAPER | YEAR
---|---|---|---
RoBERTa SWAG | 356 Million | |
Physical Interaction Question Answering | 356 Million | |
RoBERTa Common Sense QA | 356 Million | |
Models for Dependency Parsing:

MODEL | LAS | PARAMETERS | PAPER | YEAR
---|---|---|---|---
Deep Biaffine Attention for Neural Dependency Parsing | 94.44 | 20 Million | |
Models for Open Information Extraction:

MODEL | PARAMETERS | PAPER | YEAR
---|---|---|---
Open Information Extraction | 15 Million | |
Models for Coreference Resolution:

MODEL | PARAMETERS | PAPER | YEAR
---|---|---|---
Coreference Resolution | 366 Million | |
Models for Semantic Role Labeling:

MODEL | F1 | PARAMETERS | PAPER | YEAR
---|---|---|---|---
SRL BERT | 86.49 | 110 Million | |
Models for Constituency Parsing:

MODEL | F1 SCORE | PARAMETERS | PAPER | YEAR
---|---|---|---|---
Constituency Parser with ELMo embeddings | 94.11 | 98 Million | |