Taskonomy

Disentangling Task Transfer Learning

CVPR18 [Best Paper]
Amir R. Zamir, Alexander Sax*, William B. Shen*
Leonidas Guibas, Jitendra Malik, Silvio Savarese

Transfer Learning API

Visit the live API to find a supervision-efficient transfer strategy for your specified arguments.

API Page

Live Demo of 20 Tasks

Upload an image and see live results for 20 distinct vision tasks in the Task Bank, all produced by our networks.

Live Demo

Paper

The paper and supplementary material describing the methodology and evaluation.

PDF

Transfer Visualization

Examine any combination of source to target transfers via sample videos.

Transfer Visualization Page

Models

Download pretrained models of the method.

Pretrained Models

Data

Download the data: almost 4M multiply-annotated images of indoor spaces.

Dataset

Abstract

Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying the existence of a "structure" among visual tasks. Knowing this structure has notable value; it is the concept underlying transfer learning and provides a principled way to identify redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up complexity.
We propose a fully computational approach for modeling the structure of the space of visual tasks. This is done by finding (first- and higher-order) transfer learning dependencies across a dictionary of twenty-six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g., nontrivial emergent relationships, and exploit them to reduce the demand for labeled data. For example, we show that the total number of labeled datapoints needed to solve a set of 10 tasks can be reduced by roughly 2/3 (compared to training independently) while keeping performance nearly the same. We provide a set of tools for computing and probing this taxonomical structure, including a solver that users can employ to devise efficient supervision policies for their use cases.
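The solver mentioned above picks which tasks to supervise directly and which to solve by transfer. The full system measures transfer performance with trained transfer networks and then optimizes the supervision policy; the snippet below is only a toy brute-force sketch of that selection step, with entirely hypothetical affinity numbers and task names.

```python
from itertools import combinations

# Hypothetical pairwise transfer affinities: affinity[s][t] is a
# (normalized) score for solving target t by transferring from source s.
# Real Taskonomy affinities are measured from trained transfer networks;
# these numbers and task names are made up for illustration only.
affinity = {
    "autoencoder": {"depth": 0.55, "normals": 0.50, "reshading": 0.48},
    "normals":     {"depth": 0.88, "normals": 1.00, "reshading": 0.85},
    "curvature":   {"depth": 0.70, "normals": 0.75, "reshading": 0.62},
}
targets = ["depth", "normals", "reshading"]

def best_policy(budget):
    """Pick at most `budget` fully supervised source tasks so that the
    summed transfer quality over all targets is maximized (brute force,
    feasible only at toy scale)."""
    best, best_score = None, -1.0
    for k in range(1, budget + 1):
        for sources in combinations(affinity, k):
            # each target is solved from its best available source
            score = sum(max(affinity[s][t] for s in sources) for t in targets)
            if score > best_score:
                best, best_score = sources, score
    return best, best_score

sources, score = best_policy(budget=1)
print(sources, round(score, 2))  # with these toy numbers: ('normals',) 2.73
```

With a supervision budget of one task, the toy numbers above make surface normals the best single source, echoing the intuition in the abstract that normals help depth-like tasks.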

Process overview. The steps involved in creating the taxonomy.

Transfer learning relationships across perceptual tasks, found fully computationally using Taskonomy.

Force Atlas plot of tasks: tasks move from isolated random locations to positions determined by their computed relationships.

Transfer Learning API

The provided API uses our results to recommend an effective set of transfers. Using these transfers, you can approach the performance of a fully supervised network with substantially less labeled data.


Example taxonomies. Generated from the API.

Transfer Visualization

In order to evaluate the quality of the learned transfer functions, we ran the transfer networks on a random YouTube video. Visit the Transfer Visualization page to analyze how well different sources transfer to a target, or how well a source transfers to different targets. You can compare the results to a fully supervised network, as well as to baselines trained on ImageNet or not employing transfer learning at all.

Transfer Function. We use one or more source tasks to predict a target task's output.
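As a rough illustration of this setup (not the actual architecture, which uses convolutional encoders and shallow transfer decoders), the sketch below freezes a stand-in source-task encoder and trains only a small readout to predict a target task's output; all data, shapes, and the encoder itself are hypothetical.

```python
import numpy as np

# Minimal sketch of a transfer function: features from a frozen,
# pretrained source-task encoder are mapped to the target task's output
# by a small trainable readout. The encoder here is a fixed random
# projection and the data are random stand-ins, for illustration only.
rng = np.random.default_rng(0)

def frozen_source_encoder(images):
    # stand-in for a pretrained encoder; its weights are never updated
    W = np.random.default_rng(42).normal(size=(64, 16))
    return images @ W

images = rng.normal(size=(200, 64))        # 200 "images" as flat vectors
target_labels = rng.normal(size=(200, 4))  # target-task ground truth

features = frozen_source_encoder(images)   # (200, 16); encoder stays frozen
# train the readout only: least-squares linear map features -> target
readout, *_ = np.linalg.lstsq(features, target_labels, rcond=None)

predictions = features @ readout
print(predictions.shape)  # (200, 4)
```

The key point the sketch preserves is that only the cheap readout is fit on target labels, which is why a good source task can cut the amount of target supervision needed.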

Task Bank: A Unified Bank of 25 Pretrained Visual Estimators

Click on each task to see sample results.
Try the live demo on your query image.
Download pretrained models in the bank.

Denoising Autoencoder

Uncorrupted version of corrupted image.

Surface Normals

Pixel-wise surface normals.

Z-buffer Depth

Depth estimation.

Colorization

Colorizing input grayscale images.

Reshading

Reshading with new lighting placed at camera location.

Room Layout

Orientation and aspect ratio of cubic room layout.

Camera Pose (fixated)

Relative camera pose with matching optical centers.

Camera Pose (nonfix.)

Relative camera pose with distinct optical centers.

Vanishing Points

Three Manhattan-world vanishing points.

Curvatures

Magnitude of 3D principal curvatures.

Unsupervised 2D Segm.

Segmentation (graph cut approximation) on RGB.

Unsupervised 2.5D Segm.

Segmentation (graph cut approximation) on RGB-D-Normals-Curvature image.

3D Keypoints

3D keypoint estimation from underlying scene 3D.

2D Keypoints

Keypoint estimation from RGB only (texture features).

Occlusion Edges

Edges which occlude parts of the scene.

Texture Edges

Edges computed from RGB only (texture edges).

Inpainting

Filling in masked center of image.

Semantic Segmentation

Pixel-wise semantic labeling (via knowledge distillation from MS COCO).

Object Classification

1000-way object classification (via knowledge distillation from ImageNet).

Scene Classification

Scene classification (via knowledge distillation from MIT Places).

Jigsaw Puzzle

Putting scrambled image pieces back together.

Egomotion

Odometry (camera poses) given three input images.

Autoencoder

Image compression and decompression.

Point Matching

Classifying whether the centers of two images match.

Dataset

4.5 Mil. Scenes

600 Buildings

25 Tags per Image

1024×1024 Resolution

We provide a large and high-quality dataset of varied indoor scenes.

Complete pixel-level geometric information via aligned meshes.

Semantic information via knowledge distillation from ImageNet, MS COCO, and MIT Places.

Globally consistent camera poses. Complete camera intrinsics.

High-definition images.

Over 3× as large as ImageNet.


Paper

Taskonomy: Disentangling Task Transfer Learning.
Zamir, Sax*, Shen*, Guibas, Malik, Savarese.

CVPR 2018 [Best Paper Award]
IJCAI 2019 Invited [Sister Conference Best Papers Track]

@inproceedings{taskonomy2018,
 title={Taskonomy: Disentangling Task Transfer Learning},
 author={Amir R. Zamir and Alexander Sax and William B. Shen and Leonidas J. Guibas and Jitendra Malik and Silvio Savarese},
 booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
 year={2018},
 organization={IEEE},
}

Authors

Amir Zamir
Stanford, UC Berkeley

Alexander (Sasha) Sax
Stanford

William B. Shen
Stanford

Jitendra Malik
UC Berkeley

Leonidas Guibas
Stanford

Silvio Savarese
Stanford