# Question Answering on BioASQ

## Metrics

- Accuracy

## Results

Performance results of various models on this benchmark.
| Model Name | Accuracy | Paper Title | Repository |
|---|---|---|---|
| BioLinkBERT (large) | 94.8 | LinkBERT: Pretraining Language Models with Document Links | |
| GAL 120B (zero-shot) | 94.3 | Galactica: A Large Language Model for Science | |
| BioLinkBERT (base) | 91.4 | LinkBERT: Pretraining Language Models with Document Links | |
| BLOOM (zero-shot) | 91.4 | Galactica: A Large Language Model for Science | |
| PubMedBERT uncased | 87.56 | Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing | |
| GPT-4 | 85.71 | Evaluation of large language model performance on the Biomedical Language Understanding and Reasoning Benchmark | - |
| OPT (zero-shot) | 81.4 | Galactica: A Large Language Model for Science | |
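The accuracy figures above can be reproduced with a simple exact-match calculation over a model's predictions. The sketch below is illustrative only: the predictions and gold labels are hypothetical, not taken from any of the listed papers, which may differ in how they normalize answers.

```python
def accuracy(predictions, gold):
    """Fraction of questions where the predicted answer matches the gold answer."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must be the same length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical model outputs on 7 yes/no questions (BioASQ includes a
# yes/no question type, which is where exact-match accuracy applies most directly).
preds = ["yes", "no", "yes", "yes", "no", "yes", "no"]
gold  = ["yes", "no", "yes", "no",  "no", "yes", "yes"]

print(f"Accuracy: {accuracy(preds, gold):.2%}")  # 5 of 7 correct
```

Reported scores such as 94.8 correspond to this fraction expressed as a percentage over the full evaluation set.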