# Question Answering on MapEval-API
## Metrics

Accuracy (%)
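As a minimal sketch of how the Accuracy (%) metric is typically computed for multiple-choice QA: the share of model predictions that exactly match the gold answer, scaled to a percentage. The data and function name below are illustrative assumptions, not part of the benchmark release.

```python
def accuracy_pct(predictions, references):
    """Percentage of predictions that exactly match the reference answer."""
    if not predictions:
        return 0.0
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(predictions)

# Hypothetical option-letter predictions vs. gold answers.
preds = ["B", "A", "D", "C"]
golds = ["B", "C", "D", "C"]
print(f"{accuracy_pct(preds, golds):.2f}")  # → 75.00
```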
## Results

Performance results of various models on this benchmark.
| Model Name | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| GPT-3.5-Turbo (Chameleon) | 49.33 | MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | - |
| Claude-3.5-Sonnet (ReAct) | 64.00 | MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | - |