My research focuses on multimodal learning, including representational alignment in vision-language models, generative foundation models, and the explainability of vision-language models through scene graphs and visual grounding.
In my current project, I explore the interpretable latent space of text-to-image models to align them for responsible and fair image generation.
Previous work interprets vectors in an interpretable latent space of diffusion models as semantic concepts. However, existing approaches cannot discover directions for arbitrary concepts, such as concepts related to inappropriate content. In this work, we propose a novel self-supervised approach to find interpretable latent directions for a given concept. With the discovered vectors, we further propose a simple approach to mitigate inappropriate generation.
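As a rough illustration of how such a discovered direction could be used at inference time (this is a minimal sketch, not the paper's actual mitigation procedure; the function, tensor shapes, and the choice of projecting the direction out are assumptions for illustration):

```python
import torch

def suppress_concept(h: torch.Tensor, v: torch.Tensor, strength: float = 1.0) -> torch.Tensor:
    """Illustrative sketch: remove the component of an intermediate diffusion
    latent `h` (e.g., a bottleneck activation) that lies along a learned
    concept direction `v`, steering generation away from that concept."""
    v = v / v.norm()                              # unit-norm concept direction
    coeff = (h * v).sum(dim=-1, keepdim=True)     # projection coefficient of h onto v
    return h - strength * coeff * v               # latent with the concept component damped
```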
We explore two types of large-scale multimodal generative models: image-to-text and text-to-image. The image-to-text model generates abstract textual descriptions of an image, whereas the text-to-image model decodes text into low-level visual pixel content. The two are closely related, yet their relationship is poorly understood. In this work, we study whether large multimodal generative models understand each other: if Flamingo describes an image in text, can DALL-E reconstruct an image similar to the input image from that text?
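A minimal sketch of this caption-then-regenerate cycle, using openly available stand-ins (BLIP for captioning, Stable Diffusion for generation, CLIP image-embedding similarity as the comparison) rather than Flamingo and DALL-E; the model identifiers, file name, and metric are illustrative choices, not the paper's exact setup:

```python
from PIL import Image
from transformers import (BlipProcessor, BlipForConditionalGeneration,
                          CLIPProcessor, CLIPModel)
from diffusers import StableDiffusionPipeline

# 1) Image -> text: caption the source image.
blip_proc = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
source = Image.open("input.jpg").convert("RGB")
caption_ids = blip.generate(**blip_proc(images=source, return_tensors="pt"))
caption = blip_proc.decode(caption_ids[0], skip_special_tokens=True)

# 2) Text -> image: regenerate an image from the caption alone.
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
reconstruction = sd(caption).images[0]

# 3) Compare source and reconstruction in a shared embedding space.
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
feats = clip.get_image_features(**clip_proc(images=[source, reconstruction],
                                            return_tensors="pt"))
feats = feats / feats.norm(dim=-1, keepdim=True)
cycle_similarity = (feats[0] @ feats[1]).item()  # higher = reconstruction closer to the input
print(caption, cycle_similarity)
```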
In this paper, we take inspiration from attributes of the brain to develop a computational framework that finds the optimal low-cost path between a source node and a destination node in a generalized graph.
We present a unified computational theory of an agent's perception and memory. In this theory, episodic and semantic memory emerge as properties of a system that develops to gain a deeper understanding of sensory information, to provide context, and to maintain a sense of the current state of the world.
We find that Graphhopper outperforms a state-of-the-art scene graph reasoning model on both manually curated and automatically generated scene graphs by a significant margin.
We propose a novel method that approaches the VQA task by performing context-driven, sequential reasoning over the objects present in the scene and their semantic and spatial relationships.
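A toy illustration of this kind of relation-by-relation reasoning over a scene graph (the actual model learns which hops to take; the triples, question, and helper below are made up for illustration):

```python
# A scene graph as (subject, relation, object) triples.
scene_graph = {
    ("man", "holding", "umbrella"),
    ("umbrella", "above", "dog"),
    ("dog", "on", "grass"),
}

def hop(graph, node, relation):
    """Follow one outgoing edge labelled `relation` from `node`."""
    return [o for (s, r, o) in graph if s == node and r == relation]

# "What is the object the man is holding positioned above?" -> two sequential hops:
held = hop(scene_graph, "man", "holding")   # ['umbrella']
answer = hop(scene_graph, held[0], "above") # ['dog']
```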
Last updated: 10 April 2024
Website template from Jon Barron