🐮 Multi-Object Hallucination in Vision-Language Models 🐎

Recognition-based Object Probing Evaluation (🪢 ROPE)

NeurIPS 2024

Xuweiyi Chen*,1,2, Ziqiao Ma*,1, Xuejun Zhang*,1,
Sihan Xu1, Shengyi Qian1,3, Jianing Yang1, David Fouhey3, Joyce Y. Chai1
1University of Michigan 2University of Virginia 3New York University
*Denotes Equal Contribution

Multi-Object Hallucination in Vision-Language Models investigates how models misperceive when tasked with focusing on multiple objects simultaneously.

Abstract

Large vision-language models (LVLMs) often suffer from object hallucination, producing objects not present in the given images. While current benchmarks for object hallucination primarily concentrate on the presence of a single object class rather than individual entities, this work systematically investigates multi-object hallucination, examining how models misperceive (e.g., invent nonexistent objects or become distracted) when tasked with focusing on multiple objects simultaneously. We introduce Recognition-based Object Probing Evaluation (ROPE), an automated evaluation protocol that considers the distribution of object classes within a single image during testing and uses visual referring prompts to eliminate ambiguity. With comprehensive empirical studies and analysis of potential factors leading to multi-object hallucination, we find that (1) LVLMs suffer more hallucinations when focusing on multiple objects than on a single object, (2) the tested object class distribution affects hallucination behaviors, indicating that LVLMs may follow shortcuts and spurious correlations, and (3) hallucinatory behaviors are influenced by data-specific factors (salience and frequency) and model-intrinsic behaviors. We hope our work enables LVLMs to recognize and reason about the multiple objects that often occur in realistic visual scenes, provides insights, and quantifies progress towards mitigating these issues.

Case Study: Comparing ROPE with Existing Benchmarks

A case study comparing our Recognition-based Object Probing Evaluation (ROPE) benchmark with existing object hallucination benchmarks, illustrated with GPT-4V. ROPE offers an automated evaluation protocol with controlled output formatting and uses visual prompts to ground each queried object directly, mitigating referential ambiguity. Unlike binary inquiries that rely solely on textual descriptions, ROPE challenges the model to identify multiple objects concurrently. We observe that, while GPT-4V can identify the whisk to the left of a knife when prompted about it, the model hallucinates a "fork" when directly tasked with recognizing multiple objects.

Comparison of ROPE with existing benchmarks, illustrated with GPT-4V.
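
For illustration only, the sketch below contrasts the two query styles in code; the prompt templates and the <obj i> mark notation are hypothetical, not the exact prompts used by ROPE.

```python
# Illustrative sketch only: a binary, text-only existence probe versus a
# ROPE-style multi-object query over visual prompts. The templates and the
# <obj i> mark notation are hypothetical.

def binary_probe(object_class: str) -> str:
    """A yes/no question that names a single object class purely in text."""
    return f"Is there a {object_class} in the image? Answer yes or no."

def multi_object_probe(num_objects: int = 5) -> str:
    """A query whose objects are referred to by visual prompts drawn on the
    image (e.g., numbered marks); the model must name every marked object in
    one response."""
    marks = ", ".join(f"<obj{i}>" for i in range(1, num_objects + 1))
    return (f"The image contains {num_objects} marked objects: {marks}. "
            f"For each mark, state the class of the object it refers to.")

print(binary_probe("whisk"))
print(multi_object_probe())
```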

Different Instruction Settings of ROPE

The different instruction settings of ROPE. In a single turn of prompting without format enforcement, we probe the model to recognize the five objects referred to by the visual prompts (a) one at a time in the single-object setting and (b) concurrently in the multi-object setting. We further require the model to follow a format template and decode only the object tokens for each of the five objects, (c) without output manipulation in student forcing and (d) with all previously generated object tokens replaced by the ground-truth classes in teacher forcing.

Different instruction settings of ROPE.
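
To make the student-forcing and teacher-forcing settings concrete, here is a minimal control-flow sketch. It assumes a hypothetical decode_next_class callable standing in for one constrained decoding step of an LVLM; it is not the authors' implementation.

```python
# Minimal sketch of the format-enforced ROPE settings (student vs. teacher
# forcing). `decode_next_class` is a hypothetical stand-in for one constrained
# decoding step of an LVLM: given the object tokens produced so far, it
# returns a single class name. Not the authors' implementation.
from typing import Callable, List, Sequence


def probe_with_forcing(
    decode_next_class: Callable[[Sequence[str]], str],
    gt_classes: Sequence[str],
    teacher_forcing: bool,
) -> List[str]:
    """Decode one class name per visually prompted object.

    Student forcing: the model's own previous predictions stay in the context.
    Teacher forcing: each previous prediction is replaced by the ground-truth
    class before the next object token is decoded.
    """
    predictions: List[str] = []
    context: List[str] = []  # object tokens the model conditions on
    for gt in gt_classes:
        pred = decode_next_class(context)
        predictions.append(pred)
        context.append(gt if teacher_forcing else pred)
    return predictions


# Toy stand-in for a model that parrots its previous output instead of
# looking at the image again.
def toy_decoder(context: Sequence[str]) -> str:
    return context[-1] if context else "knife"


print(probe_with_forcing(toy_decoder, ["knife", "whisk", "spoon"], teacher_forcing=False))
# ['knife', 'knife', 'knife']
print(probe_with_forcing(toy_decoder, ["knife", "whisk", "spoon"], teacher_forcing=True))
# ['knife', 'knife', 'whisk']
```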

ROPE Demonstration under Heterogeneous Setting

A heterogeneous ROPE sample tested with the Default multi-object query, where each of the five objects belongs to a different object class. Each output class is labeled as either correct or hallucinated.

Heterogeneous ROPE sample.
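
A minimal sketch of this per-object labeling step follows. Exact string matching after lower-casing is an assumption here; the benchmark's own matching of class names may differ.

```python
# Sketch of per-object scoring: each predicted class is labeled "correct" if
# it matches the ground-truth class of the referred object, and "hallucinated"
# otherwise. Case-insensitive exact matching is an assumption.

def label_outputs(predictions, gt_classes):
    labels = []
    for pred, gt in zip(predictions, gt_classes):
        ok = pred.strip().lower() == gt.strip().lower()
        labels.append("correct" if ok else "hallucinated")
    return labels

print(label_outputs(["whisk", "fork", "knife"], ["whisk", "knife", "knife"]))
# ['correct', 'hallucinated', 'correct']
```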

ROPE Demonstration under Homogeneous Setting

A homogeneous ROPE sample, where all five objects belong to the same object class, and a corresponding adversarial ROPE sample, where the last object belongs to a different object class.

Homogeneous and adversarial ROPE samples.
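
The sketch below characterizes these class-distribution settings for a five-object probe, following the descriptions above; treating every other distribution as out of scope here is an assumption.

```python
# Sketch of the class-distribution settings for a 5-object probe, following
# the figure descriptions: homogeneous (all five objects share a class),
# heterogeneous (five distinct classes), adversarial (the last object breaks
# an otherwise homogeneous run). Handling of any other mixture is left open.
from typing import List, Optional


def classify_probe(classes: List[str]) -> Optional[str]:
    """Return the distribution setting a 5-object class list falls into."""
    assert len(classes) == 5
    unique = set(classes)
    if len(unique) == 1:
        return "homogeneous"
    if len(unique) == 5:
        return "heterogeneous"
    if len(set(classes[:4])) == 1 and classes[4] != classes[0]:
        return "adversarial"
    return None  # other mixtures; assumed to fall outside these three settings


print(classify_probe(["cup"] * 5))                            # homogeneous
print(classify_probe(["cup", "bowl", "fork", "pan", "jar"]))  # heterogeneous
print(classify_probe(["cup", "cup", "cup", "cup", "bowl"]))   # adversarial
```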

ROPE Leaderboard

Each of the four settings (Default Multi-Object, Student-Forcing, Teacher-Forcing, Single-Object) is reported on three splits, in order: Wild, Homogeneous (Hom.), and Heterogeneous (Het.).
Models    Wild    Hom.    Het.    Wild    Hom.    Het.    Wild    Hom.    Het.    Wild    Hom.    Het.
Seen
Yi-VL-6B 2.95 5.65 1.99 3.44 6.80 3.78 5.45 26.25 4.36 0.19 0.30 0.13
Yi-VL-34B 8.50 15.35 3.33 8.97 16.30 4.23 10.09 19.75 4.94 0.22 2.60 0.13
LLaVA-7B 31.29 67.50 8.00 31.28 67.25 11.22 31.49 92.15 12.37 35.32 62.35 17.37
LLaVA-13B 31.54 67.63 12.64 31.49 73.25 11.54 34.97 94.25 16.03 43.13 80.60 23.91
LLaVA-34B 39.95 85.75 18.85 52.75 85.20 33.91 56.41 95.81 25.31 55.05 86.50 18.97
Qwen VL 2.73 6.60 1.03 6.25 16.00 3.65 18.74 71.50 5.45 8.73 16.05 5.58
Qwen VL-C 8.72 16.90 6.67 5.26 8.60 4.10 12.11 47.75 8.08 25.99 43.40 13.21
CogVLM 0.04 0.00 0.00 0.00 0.00 0.00 0.10 0.95 0.00 0.00 0.00 0.00
CogVLM-G 0.00 0.00 0.00 9.86 13.50 6.79 22.64 75.45 0.45 11.25 22.65 7.12
CogVLM-C 12.89 22.75 7.18 25.37 43.63 12.03 28.25 72.80 17.50 30.16 56.00 16.35
LLaVA-7B* N/A N/A N/A 9.16 16.40 5.51 N/A N/A N/A 11.68 23.55 9.36
GLaMM* N/A N/A N/A 27.11 53.35 13.01 N/A N/A N/A 63.81 81.75 53.40
GroundHOG* N/A N/A N/A 23.57 30.80 24.23 N/A N/A N/A 44.80 43.10 38.97
IDEFICS 0.00 1.45 0.13 6.25 18.70 0.64 17.37 76.15 10.06 4.62 0.00 0.32
CogVLM-2 21.51 37.55 17.31 37.02 70.85 12.69 37.10 73.50 17.44 21.16 38.75 13.65
MiniCPM-V 34.75 59.91 17.37 31.62 62.80 13.65 32.16 68.05 16.79 27.42 55.35 16.92
GPT4V** 53.80 77.55 40.83 N/A N/A N/A N/A N/A N/A 55.89 78.25 41.03
GPT4O** 71.27 89.25 66.03 N/A N/A N/A N/A N/A N/A 60.77 73.92 54.31
Unseen
Yi-VL-6B 2.74 3.88 1.14 3.18 4.24 5.20 4.04 10.90 10.57 0.14 0.45 0.08
Yi-VL-34B 7.77 15.63 4.23 10.28 18.04 7.97 11.24 22.49 12.03 0.46 2.37 0.41
LLaVA-7B 30.56 68.12 10.33 30.55 68.16 10.24 31.89 90.33 13.25 34.88 64.41 16.18
LLaVA-13B 27.56 63.10 8.37 27.41 63.10 8.37 35.65 91.09 14.80 42.66 71.92 23.41
LLaVA-34B 29.30 79.43 17.72 29.45 91.18 14.39 37.40 95.51 17.92 51.71 77.88 30.81
Qwen VL 2.80 1.95 7.06 7.17 16.41 4.15 10.34 58.00 4.07 17.73 31.22 9.51
Qwen VL-C 18.86 30.73 8.78 16.16 27.80 7.72 21.81 58.00 11.14 34.20 57.31 15.37
CogVLM 0.03 0.00 0.00 0.00 0.00 0.00 0.00 0.15 0.00 0.00 0.00 0.00
CogVLM-G 0.00 0.00 0.00 8.20 1.47 5.77 23.82 81.20 1.81 10.32 10.74 9.11
CogVLM-C 15.56 26.57 5.53 17.18 41.27 6.02 22.81 56.04 6.67 30.56 52.00 13.50
LLaVA-7B* N/A N/A N/A 7.59 12.12 4.88 N/A N/A N/A 12.71 22.49 8.46
GLaMM* N/A N/A N/A 29.11 54.53 14.23 N/A N/A N/A 68.65 77.06 52.28
GroundHOG* N/A N/A N/A 23.11 24.69 26.26 N/A N/A N/A 40.73 30.37 38.13
IDEFICS 0.39 0.37 0.33 9.03 24.45 2.68 24.80 83.02 7.64 4.62 3.67 6.50
CogVLM-2 20.99 35.06 15.93 24.64 38.04 23.17 26.74 46.04 26.59 11.13 30.94 5.77
MiniCPM-V 32.96 59.92 16.60 31.77 58.98 14.15 31.87 60.98 16.34 25.56 47.76 14.39
GPT4V** 45.46 63.12 34.17 N/A N/A N/A N/A N/A N/A 47.34 64.94 35.45
GPT4O** 63.27 80.29 54.47 N/A N/A N/A N/A N/A N/A 63.45 79.84 53.74
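
If one assumes each leaderboard cell is the percentage of probed objects labeled correct within a given (setting, split) bucket, the aggregation could be sketched as follows; this aggregation rule is an assumption, not a statement of the paper's exact metric.

```python
# Aggregation sketch, assuming each cell is the percentage of probed objects
# labeled "correct" within one (setting, split) bucket. This rule is an
# assumption, not taken from the paper.
from collections import defaultdict


def leaderboard_cells(records):
    """records: iterable of dicts with keys 'setting', 'split', and 'labels',
    where 'labels' is the per-object correct/hallucinated list for one image."""
    totals = defaultdict(lambda: [0, 0])  # (setting, split) -> [correct, total]
    for rec in records:
        bucket = totals[(rec["setting"], rec["split"])]
        bucket[0] += sum(label == "correct" for label in rec["labels"])
        bucket[1] += len(rec["labels"])
    return {key: 100.0 * c / t for key, (c, t) in totals.items() if t}


demo = [
    {"setting": "Default", "split": "Hom.", "labels": ["correct"] * 4 + ["hallucinated"]},
    {"setting": "Default", "split": "Hom.", "labels": ["correct"] * 5},
]
print(leaderboard_cells(demo))  # {('Default', 'Hom.'): 90.0}
```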

Citation

@inproceedings{chen2024multiobject,
  title={Multi-Object Hallucination in Vision Language Models},
  author={Chen, Xuweiyi and Ma, Ziqiao and Zhang, Xuejun and Xu, Sihan and Qian, Shengyi and Yang, Jianing and Fouhey, David and Chai, Joyce},
  booktitle={3rd Workshop on Advances in Language and Vision Research (ALVR)},
  year={2024}
}