Bridging the Gap Between Computational Photography and Visual Recognition:
8th UG2+ Prize Challenge
CVPR 2026

The rapid development of computer vision algorithms increasingly allows automatic visual recognition to be incorporated into a suite of emerging applications. Some of these applications operate under less-than-ideal circumstances, such as low-visibility environments, that degrade the captured images. In other, more extreme applications, such as imagers for flexible wearables, smart clothing sensors, ultra-thin headset cameras, and implantable in vivo imaging, standard camera systems cannot be deployed at all, requiring new types of imaging devices. Computational photography addresses these concerns by designing new computational techniques and incorporating them into the image capture and formation pipeline. This raises a set of new questions. For example, what is the current state of the art in image restoration for images captured under non-ideal circumstances? How can inference be performed on novel kinds of computational photography devices?

Continuing the success of the 1st (CVPR'18), 2nd (CVPR'19), 3rd (CVPR'20), 4th (CVPR'21), 5th (CVPR'22), 6th (CVPR'23), and 7th (CVPR'24) UG2 Prize Challenge workshops, we present the 8th edition at CVPR 2026. It inherits the benchmark datasets, platform, and evaluation tools that served the previous UG2 workshops, while also examining brand-new aspects of the overall problem, significantly augmenting its existing scope.

Original high-quality contributions are solicited on the following topics:
  • Novel algorithms for robust object detection, segmentation, or recognition on outdoor mobility platforms (UAVs, gliders, autonomous cars, outdoor robots, etc.), under real-world adverse conditions and image degradations (haze, rain, snow, hail, dust, underwater, low illumination, low resolution, etc.)
  • Novel models and theories for explaining, quantifying, and optimizing the mutual influence between low-level computational photography tasks and high-level computer vision tasks, as well as for the underlying degradation and recovery processes of real-world images captured under complicated adverse visual conditions.
  • Novel evaluation methods and metrics for image restoration and enhancement algorithms, with a particular emphasis on no-reference metrics, since clean "ground truth" is rarely available for real outdoor images captured under adverse visual conditions (see the sketch after this list).
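
To make the no-reference setting in the last topic concrete, here is a minimal illustrative sketch in Python with OpenCV: the variance-of-Laplacian score is a classic no-reference sharpness proxy that requires no clean reference image. This is purely an illustrative baseline, not an official challenge metric, and the file paths below are placeholders.

    import cv2
    import numpy as np

    def laplacian_sharpness(image_bgr: np.ndarray) -> float:
        """Variance of the Laplacian: a classic no-reference sharpness proxy.

        Higher scores indicate more high-frequency detail; haze, rain streaks,
        and defocus typically depress the score. No clean reference is needed.
        """
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        return float(cv2.Laplacian(gray, cv2.CV_64F).var())

    # Hypothetical usage with placeholder file paths.
    degraded = cv2.imread("hazy_input.png")
    restored = cv2.imread("restored_output.png")
    print(laplacian_sharpness(degraded), laplacian_sharpness(restored))

Note that this proxy rewards any high-frequency content, including amplified noise, which is precisely why principled no-reference metrics remain an open research topic.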

Available Challenges


The UG2+ Challenge seeks to advance the analysis of "difficult" imagery: participants are tasked with developing novel image restoration and enhancement algorithms that improve recognition performance on imagery captured under problematic conditions.

Track 1: Image Restoration under All-weather Conditions

Images captured under adverse weather conditions such as rain, fog, haze, and snow suffer from severe quality degradation. This challenge focuses on developing robust image restoration algorithms that can handle the full spectrum of real-world weather degradations, improving downstream vision task performance under all-weather conditions.
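
For concreteness, below is a minimal sketch of how restored outputs are commonly scored against paired clean references, using scikit-image's PSNR and SSIM. This assumes paired ground truth is available; the authoritative evaluation protocol for this track is the one defined on the Codabench page.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def restoration_scores(restored: np.ndarray, reference: np.ndarray) -> dict:
        """Standard full-reference restoration metrics for 8-bit RGB images.

        Assumes a paired clean reference exists; real adverse-weather captures
        often lack one, which is why no-reference evaluation also matters.
        """
        return {
            "psnr": peak_signal_noise_ratio(reference, restored, data_range=255),
            "ssim": structural_similarity(reference, restored,
                                          channel_axis=-1, data_range=255),
        }

    # Toy usage with synthetic images (real evaluation uses the track's data).
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)
    out = np.clip(ref.astype(np.int16) + rng.integers(-8, 9, ref.shape),
                  0, 255).astype(np.uint8)
    print(restoration_scores(out, ref))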

Competition page on Codabench

Track 2: Semantic Segmentation in Adverse Weather

Common weather phenomena including rain, snow, and fog introduce visual degradations that significantly impact the performance of semantic segmentation algorithms. This challenge aims to spark the development of novel segmentation algorithms robust to adverse weather conditions, bridging the domain gap between clear and degraded imagery.
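
As a reference point, below is a short NumPy sketch of mean intersection-over-union (mIoU), the standard semantic segmentation metric. The class count and ignore label are assumptions for illustration only; the official class set and scoring for this track are defined on the Codabench page.

    import numpy as np

    def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int,
                 ignore_index: int = 255) -> float:
        """Mean IoU over classes present in the ground truth.

        `pred` and `gt` are integer label maps of the same shape; pixels
        whose ground-truth label equals `ignore_index` are excluded.
        """
        mask = gt != ignore_index
        pred = pred[mask].astype(np.int64)
        gt = gt[mask].astype(np.int64)
        # Confusion matrix via bincount over flattened (gt, pred) index pairs.
        conf = np.bincount(gt * num_classes + pred,
                           minlength=num_classes ** 2).reshape(num_classes,
                                                               num_classes)
        inter = np.diag(conf).astype(float)
        union = conf.sum(0) + conf.sum(1) - np.diag(conf)
        ious = inter / np.maximum(union, 1)    # guard against empty classes
        return float(ious[union > 0].mean())   # average over present classes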

Competition page on Codabench

Track 3: Dynamic Object Segmentation in Turbulence (DOST)

Atmospheric turbulence causes severe image degradation including spatially-varying blur, distortion, and intensity fluctuations that challenge both detection and segmentation of dynamic objects. This challenge promotes the development of algorithms for segmenting moving objects in turbulence-degraded video sequences.
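
To make the difficulty concrete, below is a naive moving-object baseline using OpenCV's MOG2 background subtraction with a morphological opening. Under turbulence, geometric jitter makes static background pixels appear to move, so a baseline like this over-segments; that failure mode is exactly what this track targets. The video path and parameter values are placeholders.

    import cv2

    # Naive moving-object segmentation: per-pixel background modeling.
    cap = cv2.VideoCapture("turbulent_sequence.mp4")  # placeholder path
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                    varThreshold=32,
                                                    detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)                  # 0/255 foreground mask
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,     # suppress jitter speckle
                              kernel)
        # `fg` is a per-frame binary mask of candidate moving objects.
    cap.release()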

Competition page on Codabench

Keynote Speakers

  • Srinivasa Narasimhan (Carnegie Mellon University)
  • Robby T. Tan (National University of Singapore)
  • Matthew O'Toole (Carnegie Mellon University)
  • Felix Heide (Princeton University)
  • Huaijin "George" Chen (University of Hawaii at Manoa)

Important Dates

  • Challenge Registration Opens: February 16, 2026
  • Challenge Ends: May 13, 2026
  • Challenge Result (arXiv) Paper Submission: May 27, 2026
  • Team Notification of Challenge Winners: May 31, 2026
  • Public Announcement of Challenge Winners: June 3, 2026
  • CVPR Workshop: June 2026 (date TBA)

Advisory Committee

  • Alex Wong (Yale University)
  • Dong Lao (Louisiana State University)
  • Jinwei Ye (George Mason University)
  • Achuta Kadambi (University of California, Los Angeles)

Organizing Committee

  • Patrick Rim (Yale University)
  • Jenny Lee (Yale University)
  • Mary Xie (Yale University)
  • Hyoungseob Park (Yale University)
  • Howard Zhang (University of California, Los Angeles)
  • Rishi Upadhyay (University of California, Los Angeles)
  • Yi Xiao (Louisiana State University)