Question Answering on MuLD NarrativeQA
Metrics
BLEU-1
BLEU-4
METEOR
Rouge-L
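
For orientation, below is a minimal sketch of how these four metrics are typically computed for a single prediction/reference pair. The use of the nltk and rouge-score packages, the smoothing choice, and the toy answer strings are assumptions for illustration; this is not the benchmark's official evaluation script.

```python
# Minimal sketch: scoring one (prediction, reference) pair with the four
# listed metrics. The nltk / rouge-score packages and the toy strings are
# assumptions for illustration, not the benchmark's official evaluation code.
# Requires: pip install nltk rouge-score, plus nltk.download("wordnet").
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

prediction = "the ghost haunts the old manor house"  # model answer (placeholder)
reference = "a ghost haunts the manor"               # gold answer (placeholder)

pred_tokens = prediction.split()
ref_tokens = reference.split()
smooth = SmoothingFunction().method1  # avoids zero n-gram precisions on short answers

# BLEU-1: unigram precision only.
bleu1 = sentence_bleu([ref_tokens], pred_tokens,
                      weights=(1, 0, 0, 0), smoothing_function=smooth)

# BLEU-4: geometric mean of 1- to 4-gram precisions (standard BLEU).
bleu4 = sentence_bleu([ref_tokens], pred_tokens,
                      weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)

# METEOR: unigram matches (with stemming/synonyms) plus a fragmentation penalty.
meteor = meteor_score([ref_tokens], pred_tokens)

# Rouge-L: F-measure over the longest common subsequence.
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(reference, prediction)["rougeL"].fmeasure

print(f"BLEU-1={bleu1:.4f} BLEU-4={bleu4:.4f} METEOR={meteor:.4f} Rouge-L={rouge_l:.4f}")
```

Leaderboard numbers aggregate such per-example scores over the full test split.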
Results
Performance results of various models on this benchmark
Model Name | BLEU-1 | BLEU-4 | METEOR | Rouge-L | Paper Title | Repository |
---|---|---|---|---|---|---|
Longformer | 19.84 | 0.62 | 4.52 | 22.09 | MuLD: The Multitask Long Document Benchmark | |
T5 | 17.67 | 0.55 | 3.36 | 19.03 | MuLD: The Multitask Long Document Benchmark | |