Visual Question Answering
Visual Question Answering (VQA) is a multimodal task at the intersection of computer vision and natural language processing: given an image and a natural-language question about it, a model must produce an accurate answer. The core challenge is to integrate visual and linguistic information into a joint understanding of the scene. VQA has significant value in applications such as intelligent assistance systems, image search, and content moderation, and it supports a more natural human-machine interaction experience.
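To make the task's input/output contract concrete, here is a minimal inference sketch using the Hugging Face transformers library with a public ViLT checkpoint fine-tuned on VQA v2; the checkpoint name, example image URL, and question are illustrative choices, not drawn from this page.

    # Minimal VQA inference sketch (assumes: pip install transformers torch pillow requests)
    import requests
    import torch
    from PIL import Image
    from transformers import ViltProcessor, ViltForQuestionAnswering

    # Illustrative checkpoint: ViLT fine-tuned on VQA v2,
    # which frames VQA as classification over ~3.1k frequent answers
    processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
    model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

    # Example input: an image plus a natural-language question about it
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)
    question = "How many cats are there?"

    # Encode the (image, question) pair and pick the highest-scoring answer
    inputs = processor(image, question, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    answer = model.config.id2label[logits.argmax(-1).item()]
    print(answer)

Note that ViLT-style models treat VQA as classification over a fixed answer vocabulary, whereas the generative models on the leaderboards below (e.g. BLIP-2, OFA, GPT-4V) produce free-form text answers.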
Benchmarks and top-performing models:

MM-Vet: GPT-4V
MM-Vet v2
ViP-Bench: GPT-4V-turbo-detail:high (Visual Prompt)
VQA v2 test-dev: BLIP-2 ViT-G OPT 6.7B (fine-tuned)
BenchLMM: GPT-4V
MMBench: CuMo-7B
MSRVTT-QA: Aurora (r=64)
VQA v2 val
VQA v2 test-std: OFA
MMHal-Bench
MSVD-QA
PlotQA-D1
PlotQA-D2
VQA v2: Emu-I
AMBER: RLAIF-V 12B
CLEVR: NeSyCoCo (Neuro-Symbolic)
COCO VQA 2.0 (real images, open-ended)
EarthVQA: SOBA
GQA
GRIT: OFA
MapEval-Visual
MM-Vet (w/o External Tools): Emu-14B
TextVQA test-standard: PromptCap
V*bench: IVM-Enhanced GPT4-V
VisualMRC: LayoutT5 (Large)
VizWiz: Emu-I
MS COCO