Question Answering On Natural Questions Long
Metric: EM (exact match)

Performance results of various models on this benchmark are listed below.
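For reference, EM is the exact-match accuracy of predicted answers against the gold annotations. The sketch below shows the common SQuAD-style string exact-match convention (lowercasing, stripping punctuation and articles before comparison); note this is an illustrative assumption, not the official Natural Questions long-answer scorer, which matches predicted long-answer spans against annotator-selected spans. All function names here are hypothetical.

```python
import re
import string


def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace
    (the normalization commonly used for string-based QA exact match)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold_answers: list[str]) -> float:
    """1.0 if the normalized prediction equals any normalized gold answer."""
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))


def em_score(predictions: list[str], references: list[list[str]]) -> float:
    """Average exact match over a dataset, reported as a percentage."""
    assert len(predictions) == len(references)
    matches = [exact_match(p, refs) for p, refs in zip(predictions, references)]
    return 100.0 * sum(matches) / len(matches)


# Example: one question with two acceptable gold answers.
print(em_score(["The Eiffel Tower"], [["Eiffel Tower", "the eiffel tower"]]))  # 100.0
```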
| Model Name | EM | Paper Title | Repository |
| --- | --- | --- | --- |
| FiE | 58.4 | FiE: Building a Global Probability Space by Unifying Multiple Hypotheses in Open-Domain Question Answering | - |
| DensePhrases | 71.9 | Learning Dense Representations of Phrases at Scale | - |
| R2-D2 w HN-DPR | 55.9 | R2-D2: A Modular Baseline for Open-Domain Question Answering | - |
| UnitedQA (Hybrid) | 54.7 | UnitedQA: A Hybrid Approach for Open Domain Question Answering | - |
| BERTwwm + SQuAD 2 | - | Frustratingly Easy Natural Question Answering | - |
| Cluster-Former (#C=512) | - | Cluster-Former: Clustering-based Sparse Transformer for Long-Range Dependency Encoding | - |
| DrQA | - | Reading Wikipedia to Answer Open-Domain Questions | - |
| Locality-Sensitive Hashing | - | Reformer: The Efficient Transformer | - |
| UniK-QA | 54.9 | UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering | - |
| BERTjoint | - | A BERT Baseline for the Natural Questions | - |
| Sparse Attention | - | Generating Long Sequences with Sparse Transformers | - |
| BPR (linear scan; l=1000) | 41.6 | Efficient Passage Retrieval with Hashing for Open-domain Question Answering | - |
| DecAtt + DocReader | - | Natural Questions: a Benchmark for Question Answering Research | - |