Auto-Annotation from Expert-Crafted Guidelines

The 1st Workshop on AutoExpert

This website is under construction.

Location: TBD, Denver, CO

Time: morning (local time), June XX, 2026

In conjunction with CVPR 2026, Denver, CO, USA


Overview

Machine-learned visual systems are transforming fields such as autonomous driving, biodiversity assessment, and ecological monitoring, but they demand vast amounts of high-quality annotated data. Asking domain experts to manually annotate large-scale data is unrealistic; the current paradigm for scaling up data annotation is to have domain experts craft annotation guidelines, using visual examples and descriptions, for non-expert annotators to apply. This paradigm is commonly adopted by companies that provide data-labeling services. Lacking domain knowledge, however, ordinary annotators often produce annotations that are erroneous, subjective, biased, and inconsistent. Moreover, the process is labor-intensive, tedious, and costly. This workshop aims to pioneer auto-annotation: developing AI agents that can interpret expert-crafted annotation guidelines and generate labels automatically. In essence, we seek to replace ordinary human annotators with AI.


Topics

This workshop aims to bring together computer vision researchers and practitioners from academia and industry who are interested in auto-annotation from expert-crafted guidelines (AutoExpert). It spans multiple research topics, listed below.

  • data: web-scale data, domain-specific data, multimodal data, synthetic data, etc.
  • concepts: taxonomy, ontology, vocabulary, expert/human-in-the-loop, etc.
  • models: foundation models, expert models, Large Multimodal Models (LMMs), Large Language Models (LLMs), Vision-Language Models (VLMs), Large Vision Models (LVMs), etc.
  • learning: foundation model adaptation, few-shot learning, semi-supervised learning, domain adaptation, active learning, etc.
  • social impact: inter-disciplinary research, real-world application, responsible AI, etc.
  • misc: dataset curation, annotation guidelines, machine-expert interaction, etc.

Speakers


Shu Kong
UMacau

Serge Belongie
University of Copenhagen

Jason Corso
UMich & Voxel51



Organizers

Please contact Shu Kong with any questions: aimerykong [at] gmail [dot] com


Shu Kong
UMacau

Jason Corso
UMich & Voxel51


Advisory Board

Serge Belongie
University of Copenhagen



Challenge Organizers


Shu Kong
UMacau

Tian Liu
Texas A&M




Important Dates and Details



Program Schedule

The schedule will be finalized soon!

CDT           | Event           | Presenter                               | Title / Links
------------- | --------------- | --------------------------------------- | -------------
08:45 - 09:00 | Opening remarks | Shu Kong, University of Macau           | Auto-Annotation from Expert-Crafted Annotation Guidelines
09:00 - 09:35 | Invited talk #1 | Pietro Perona, Caltech                  | TBA
09:35 - 10:10 | Invited talk #2 | Jason Corso, UMich & Voxel51            | TBA
10:10 - 10:45 | Invited talk #3 | Subhransu Maji, UMass                   | TBA
10:45 - 10:50 | Coffee break    |                                         |
10:50 - 11:25 | Invited talk #4 | Serge Belongie, University of Copenhagen | TBA
11:25 - 11:55 | Challenge       | Challenge competitions                  | Insights and lessons
11:55 - 12:00 | Closing remarks | TBA                                     | TBA
12:00 - 13:00 | Lunch           |                                         |