Question Answering on FairytaleQA
Metrics
F1
ROUGE-L
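Both metrics compare a model's generated answer against a reference answer at the token level: F1 balances precision and recall over the overlapping tokens, while ROUGE-L scores the longest common subsequence (LCS) between the two token sequences. The sketch below is a minimal, self-contained illustration of these standard definitions; it uses simple whitespace tokenization and is not the benchmark's official evaluation script.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1: harmonic mean of precision and recall over shared tokens."""
    pred_toks = prediction.lower().split()
    ref_toks = reference.lower().split()
    # Multiset intersection counts each shared token up to its min frequency.
    num_same = sum((Counter(pred_toks) & Counter(ref_toks)).values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_toks)
    recall = num_same / len(ref_toks)
    return 2 * precision * recall / (precision + recall)

def rouge_l(prediction: str, reference: str) -> float:
    """ROUGE-L F-measure based on the longest common subsequence of tokens."""
    a = prediction.lower().split()
    b = reference.lower().split()
    if not a or not b:
        return 0.0
    # Classic dynamic-programming LCS length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(a)][len(b)]
    if lcs == 0:
        return 0.0
    precision = lcs / len(a)
    recall = lcs / len(b)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("the little fox ran away", "the fox ran away quickly")` and `rouge_l(...)` on the same pair both yield 0.8, since four of five tokens overlap and form a common subsequence.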
Results
Performance results of various models on this benchmark
Model Name | F1 | ROUGE-L | Paper Title | Repository |
---|---|---|---|---|
BART | 0.088 | 0.108 | Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension | |
BART fine-tuned on FairytaleQA | 0.536 | 0.533 | Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension | |
DistilBERT | 0.082 | 0.097 | Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension | |
BART fine-tuned on NarrativeQA | 0.492 | 0.475 | Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension | |