Expert Confusion Annotation

Instructions

Welcome, and thank you for contributing to this expert annotation process.

Data Source: We started with 45,340 clips from the DAiSEE dataset. After two rounds of filtering with different large vision-language models (Qwen and Gemini) and one round of research-assistant screening, we retained the clips that most clearly exhibit confusion. We now ask domain experts to further validate these clips and exclude any remaining false positives.

This page is designed to help you quickly review short video clips and record your judgments on whether each clip contains confusion. The annotation process is simple:

1. Each video will load automatically. After watching the clip, please answer the question: "Does this clip contain confusion?"
2. Select one of the three options: Yes, No, or Unsure. Once you click your choice, the result will be automatically added to the Annotation CSV section below.
3. Please review your annotations before saving. After confirming that everything is correct, click the Save button at the bottom of the page. Your annotations will be saved locally on your computer, so even if the webpage is accidentally closed, you can continue later by clicking the Resume Previous Progress button.
4. If you want to revise a previous annotation, you can replay the corresponding video and simply select a new answer. The Annotation CSV section below will update automatically.
5. After completing all annotations, click Download Annotation CSV to export the final annotation file.
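For researchers handling the returned files, the exported CSV can be sanity-checked with a few lines of Python before merging. This is a minimal sketch; the column names (`clip`, `label`) are assumptions for illustration and should be matched to the actual header in the downloaded file.

```python
import csv
import io

# Stand-in for a downloaded annotation file; in practice,
# open the annotator's CSV instead of this StringIO sample.
sample = io.StringIO("clip,label\nclip_0001.mp4,Yes\nclip_0002.mp4,Unsure\n")

valid_labels = {"Yes", "No", "Unsure"}
rows = list(csv.DictReader(sample))

# Flag any rows whose label is not one of the three allowed options.
bad = [r for r in rows if r["label"] not in valid_labels]
print(f"{len(rows)} annotations, {len(bad)} with unexpected labels")
```

Running this on a well-formed file should report zero unexpected labels; any flagged rows can be sent back to the annotator for correction.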


Please send the completed CSV annotation file to ludong@buffalo.edu at your convenience.
Thank you again for your time and support.
Annotation
Does this clip contain confusion? (Yes / No / Unsure)
Unsure: choose this when the clip boundary is unclear, the evidence is insufficient, or more context is needed.
Annotation CSV