CoDi: Any-to-Any Generation via Composable Diffusion

1University of North Carolina at Chapel Hill, 2Microsoft Azure Cognitive Services Research
*Work done during an internship at Microsoft and at UNC. Corresponding Authors.
NeurIPS 2023

Abstract

We present Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Unlike existing generative AI systems, CoDi can generate multiple modalities in parallel and its input is not limited to a subset of modalities like text or image. Despite the absence of training datasets for many combinations of modalities, we propose to align modalities in both the input and output space. This allows CoDi to freely condition on any input combination and generate any group of modalities, even if they are not present in the training data. CoDi employs a novel composable generation strategy which involves building a shared multimodal space by bridging alignment in the diffusion process, enabling the synchronized generation of intertwined modalities, such as temporally aligned video and audio. Highly customizable and flexible, CoDi achieves strong joint-modality generation quality, and outperforms or is on par with the unimodal state-of-the-art for single-modality synthesis.
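As a rough illustration of the composable conditioning described above (a minimal sketch, not the released implementation; the compose_conditions helper, the dummy tensor shapes, and the encoder names are placeholder assumptions), prompts from different modalities are first projected into one aligned embedding space, and any subset of them can then be mixed by weighted interpolation before conditioning a latent diffusion model:

import torch

def compose_conditions(embeddings, weights=None):
    """Weighted interpolation of prompt embeddings that live in one aligned space."""
    if weights is None:
        weights = [1.0 / len(embeddings)] * len(embeddings)
    assert abs(sum(weights) - 1.0) < 1e-6, "interpolation weights must sum to 1"
    return sum(w * e for w, e in zip(weights, embeddings))

# Dummy stand-ins for outputs of aligned prompt encoders, shape (batch, tokens, dim).
c_text  = torch.randn(1, 77, 768)
c_audio = torch.randn(1, 77, 768)
c_image = torch.randn(1, 77, 768)

# Any subset of prompts collapses into a single conditioning tensor that the
# diffusion model attends to via cross-attention.
cond = compose_conditions([c_text, c_audio, c_image])

Because the prompt encoders share one space, adding or dropping an input modality only changes which embeddings enter the interpolation, not the generator itself.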

Model Architecture

[Figure: CoDi model architecture]
Composable Diffusion uses a multi-stage training scheme that trains on only a linear number of tasks yet supports inference on all combinations of input and output modalities.
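The linear (rather than combinatorial) number of training tasks comes from using text as a bridge: each non-text prompt encoder only needs to be contrastively aligned with the text encoder on paired data (image-text, audio-text, video-text). A simplified sketch of such an alignment objective, assuming pooled per-example embeddings, is shown below; it is illustrative rather than the actual training code.

import torch
import torch.nn.functional as F

def bridging_alignment_loss(text_emb, modality_emb, temperature=0.07):
    """Symmetric InfoNCE loss pulling paired text/modality embeddings together."""
    text_emb = F.normalize(text_emb, dim=-1)
    modality_emb = F.normalize(modality_emb, dim=-1)
    logits = text_emb @ modality_emb.t() / temperature            # (batch, batch) similarities
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

With n modalities this requires only one alignment run per non-text modality, yet at inference any combination of aligned prompts can be composed.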

Multi-Output Joint Generation

The model takes in one or more prompts, spanning video, image, text, or audio, and generates multiple aligned outputs, such as video with accompanying sound.
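As an interface sketch only (the wrapper object, its generate signature, and the file names below are hypothetical placeholders, not the released code), a joint-generation call passes whatever prompts are available and names the output modalities to be produced together:

def joint_generation_example(model):
    """`model` stands in for a hypothetical any-to-any CoDi wrapper."""
    outputs = model.generate(
        inputs={
            "text": "Teddy bear on a skateboard, 4k, high resolution",
            "image": "teddy.png",        # optional image prompt
            "audio": "skateboard.wav",   # optional audio prompt
        },
        outputs=["video", "audio"],      # generated jointly, temporally aligned
    )
    return outputs["video"], outputs["audio"]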


Text + Image + Audio → Video + Audio

"Teddy bear on a skateboard, 4k, high resolution"


Text + Audio + Image → Text + Image

"Teddy bear on a skateboard, 4k, high resolution"

Generated text: "A toy on the street sitting on a board"

Audio + Image → Text + Image

Generated text: "Playing piano in a forest."

Text + Image → Text + Image

"Cyberpunk vibe."

Generated text: "Cyberpunk, city, movie scene, retro ambience."

Text → Video + Audio

"Fireworks in the sky."


Text → Video + Audio

"Dive in coral reef."


Text → Video + Audio

"Train coming into station."


Text → Text + Audio + Image

"Sea shore sound ambience."

Generated text: "Wave crashes the shore, sea gulls."

Text → Text + Audio + Image

"Street ambience."

Generated text: "Noisy street, cars, traffics.."

Multiple Conditioning

The model conditions on multiple inputs, spanning video, image, text, or audio, to generate a single output.
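Reusing the interpolation sketch from above (the helper, tensor shapes, and the commented sampler call remain placeholder assumptions), the mixing weights control how strongly each conditioning input steers the single output:

import torch

def compose_conditions(embeddings, weights):
    """Weighted mix of aligned condition embeddings (same helper as sketched earlier)."""
    assert abs(sum(weights) - 1.0) < 1e-6
    return sum(w * e for w, e in zip(weights, embeddings))

c_text  = torch.randn(1, 77, 768)    # placeholder aligned text embedding
c_audio = torch.randn(1, 77, 768)    # placeholder aligned audio embedding

text_heavy = compose_conditions([c_text, c_audio], weights=[0.8, 0.2])
balanced   = compose_conditions([c_text, c_audio], weights=[0.5, 0.5])
# image_a = image_diffuser.sample(cond=text_heavy)   # placeholder sampler call
# image_b = image_diffuser.sample(cond=balanced)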


Text + Audio → Image

"Oil painting, cosmic horror painting, elegant intricate artstation concept art by craig mullins detailed"


Text + Image → Image

"Gently flowers in a vase, still life, by Albert Williams"


Text + Audio → Video

"Forward moving camera view."


Text + Image → Video

"Red gorgonian and tropical fish."


Text + Image → Video

"Eating on a coffee table."


Video + Audio → Text

Generated text: "Panda eating bamboo, people laughing."


Image + Audio → Audio


Text + Image → Audio

"Horn, blow whistle"


Single-to-Single Generation

The model takes in a single prompt, which can be video, image, text, or audio, and generates a single output.


Text → Image

"Concept art by sylvain sarrailh of a haunted japan temple in a forest"


Audio → Image


Image → Video


Image → Audio


Audio → Text

Generated text: "A magical sound, game."


Image → Text

Generated text: "Mountain view, sunset."


BibTeX


@inproceedings{tang2023anytoany,
  title={Any-to-Any Generation via Composable Diffusion},
  author={Zineng Tang and Ziyi Yang and Chenguang Zhu and Michael Zeng and Mohit Bansal},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=2EDqbSCnmF}
}