AdaptiveNN (NMI'25)


This repo contains the official code and pre-trained models for the AdaptiveNN paper.

Title:   Emulating Human-like Adaptive Vision for Efficient and Flexible Machine Visual Perception

Authors:  Yulin Wang(王语霖)*, Yang Yue(乐洋)*, Yang Yue(乐阳)*, Huanqian Wang, Haojun Jiang, Yizeng Han, Zanlin Ni, Yifan Pu, Minglei Shi, Rui Lu, Qisen Yang, Andrew Zhao, Zhuofan Xia, Shiji Song#, Gao Huang# (*: Equal Contribution, #: Corresponding Author)

Institute:  Department of Automation, Tsinghua University

Published in:   Nature Machine Intelligence, 2025

Abstract

Human vision is highly adaptive, efficiently sampling intricate environments by sequentially fixating on task-relevant regions. In contrast, prevailing machine vision models passively process entire scenes at once, resulting in excessive resource demands that scale with spatial–temporal input resolution and model size, a critical limitation impeding both future advances and real-world application. Here we introduce AdaptiveNN, a general framework aiming to enable the transition from ‘passive’ to ‘active and adaptive’ vision models. AdaptiveNN formulates visual perception as a coarse-to-fine sequential decision-making process, progressively identifying and attending to regions pertinent to the task, incrementally combining information across fixations, and actively concluding observation once sufficient evidence has been gathered. We establish a theory integrating representation learning with self-rewarding reinforcement learning, enabling end-to-end training of the non-differentiable AdaptiveNN without additional supervision on fixation locations. We assess AdaptiveNN on 17 benchmarks spanning 9 tasks, including large-scale visual recognition, fine-grained discrimination, visual search, processing images from real driving and medical scenarios, language-driven embodied artificial intelligence, and side-by-side comparisons with humans. AdaptiveNN achieves up to 28× reduction in inference cost without sacrificing accuracy, flexibly adapts to varying task demands and resource budgets without retraining, and provides enhanced interpretability via its fixation patterns, demonstrating a promising avenue towards efficient, flexible and interpretable computer vision. Furthermore, AdaptiveNN exhibits closely human-like perceptual behaviours in many cases, revealing its potential as a valuable tool for investigating visual cognition.
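
To make the perception loop above concrete, below is a minimal PyTorch sketch written under stated assumptions: the class AdaptiveFixationLoop, its running-average feature fusion, and its max-softmax halting rule are simplifications invented for illustration (the actual framework learns both the fusion and the halting/fixation policy via self-rewarding reinforcement learning), and none of these names come from this repository's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFixationLoop(nn.Module):
    # Illustrative only: a coarse glance, then a sequence of fixations
    # that stops as soon as the prediction is confident enough.
    def __init__(self, encoder, feat_dim, num_classes,
                 max_fixations=5, halt_threshold=0.9, patch=96):
        super().__init__()
        self.encoder = encoder                    # shared backbone, e.g. a DeiT-S
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.policy = nn.Linear(feat_dim, 2)      # predicts the next fixation centre
        self.max_fixations = max_fixations
        self.halt_threshold = halt_threshold
        self.patch = patch

    def _crop(self, img, cx, cy):
        # High-resolution patch around the normalised centre (cx, cy), clamped to bounds.
        _, _, H, W = img.shape
        p = self.patch
        x0 = min(max(int(cx.item() * W - p / 2), 0), W - p)
        y0 = min(max(int(cy.item() * H - p / 2), 0), H - p)
        return img[:, :, y0:y0 + p, x0:x0 + p]

    def forward(self, img):                       # img: (1, 3, H, W)
        # Coarse stage: a cheap glance at the downsampled full image.
        view = F.interpolate(img, size=(self.patch, self.patch),
                             mode='bilinear', align_corners=False)
        state = self.encoder(view)                # (1, feat_dim)
        for step in range(1, self.max_fixations + 1):
            logits = self.classifier(state)
            # Conclude observation once confident (or out of budget); the paper
            # learns this decision rather than thresholding max-softmax.
            if logits.softmax(-1).max() >= self.halt_threshold or step == self.max_fixations:
                return logits, step
            # Fine stage: fixate on a task-relevant region and fuse its evidence.
            cx, cy = torch.sigmoid(self.policy(state))[0]
            state = 0.5 * (state + self.encoder(self._crop(img, cx, cy)))

Any encoder mapping a (1, 3, patch, patch) image to a (1, feat_dim) vector fits this sketch. Note that because the fixation choice is non-differentiable, the actual framework trains it with the self-rewarding reinforcement learning described above rather than plain backpropagation.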


Usage

Please refer to GET_STARTED.md for training and evaluation instructions. Our pretrained model can be downloaded from this link.
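
For orientation, loading a checkpoint generally follows the standard timm/PyTorch pattern sketched below; the checkpoint filename is a placeholder and the full AdaptiveNN model constructor lives in this repository, so treat GET_STARTED.md as authoritative.

import timm
import torch

# This repo builds on timm; 'deit_small_patch16_224' matches the DeiT-S
# backbone, but the AdaptiveNN wrapper itself is defined in this repository.
model = timm.create_model('deit_small_patch16_224', pretrained=False)
state = torch.load('adaptivenn_deit_s.pth', map_location='cpu')  # placeholder filename
model.load_state_dict(state.get('model', state), strict=False)   # checkpoints often nest under 'model'
model.eval()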

Visualizations

Qualitative assessment of the visual fixations localized by AdaptiveNN(-DeiT-S): boxes mark fixation locations, and their colors indicate the model’s decision to conclude (green) or continue (red) observation at each step. Step indices are shown at the top left of each box, and ground-truth labels at the bottom left of each image.


Results

Quantitative comparison of AdaptiveNN with traditional non-adaptive models built on identical backbones: top-1 validation accuracy versus average computational cost of inference. To obtain non-adaptive models with varying costs, we consider two common approaches: adjusting model size and adjusting input resolution.


Acknowledgements

This repository is built on the timm library and the ConvNeXt repository.

Reference

If you find our code or papers useful for your research, please cite:

@article{wang2025emulating,
  title={Emulating human-like adaptive vision for efficient and flexible machine visual perception},
  author={Wang, Yulin and Yue, Yang and Yue, Yang and Wang, Huanqian and Jiang, Haojun and Han, Yizeng and Ni, Zanlin and Pu, Yifan and Shi, Minglei and Lu, Rui and others},
  journal={Nature Machine Intelligence},
  pages={1--19},
  year={2025},
  publisher={Nature Publishing Group UK London}
}

Contact

If you have any questions, feel free to contact the authors.

Yulin Wang(王语霖): [email protected]

Yang Yue(乐洋): [email protected]

Yang Yue(乐阳): [email protected]
