Pinar Yanardag



I am a tenure-track assistant professor in the Department of Computer Science at Virginia Tech, where I lead 💎 GEMLAB. I am also a member of the Sanghani Center for Artificial Intelligence and Data Analytics.

Prior to Virginia Tech, I was a postdoc at MIT. I received my Ph.D. in Computer Science from Purdue. During my graduate studies, I also worked at Amazon (P13N team) and VMware. I am a Fulbright Ph.D. Fellow and a Google Anita Borg Memorial Scholar.

My research is published in top computer science conferences such as CVPR, ICCV, and NeurIPS, and has been featured in mainstream media (e.g., The Washington Post, BBC, CNN) and magazines (e.g., Motherboard, Rolling Stone). For more information, see the Publications page.

My research centers on the development of generative AI methods, targeting three key aspects:

  • Discovering and Leveraging Latent Capabilities for Efficient Control: My research uncovers and exploits the internal representations of generative models to enable efficient, training-free control of image and video synthesis. I develop methods for fine-grained, controllable edits (CONFORM, CVPR’24; FluxSpace, CVPR’25; NoiseCLR, CVPR’24) and extend these controls to the temporal domain through video and motion editing (RAVE, CVPR’24), video diffusion models (MotionFlow, AAAI’26), and dynamic view synthesis (Inverse DVS, NeurIPS’25).

  • Integrating Cross-Modal Capabilities for Advanced Generation Tasks: My research bridges vision and language by unifying the reasoning capabilities of VLMs/LLMs with the generative priors of visual models, enabling advanced tasks beyond either modality alone. I develop agentic frameworks (CREA, NeurIPS’25), harness VLMs for multi-modal explainability (DiffEx, CVPR’25), and propose RL-based methods for visual synthesis (C-DPO, NeurIPS’25).

  • Personalized, Human-Aligned, and Democratized Generative AI: My research moves beyond one-size-fits-all generative AI by developing methods for personalized content generation (LoRACLR, CVPR’25; CLoRA, ICCV’25; LoRAShop, NeurIPS’25). In parallel, I propose techniques to democratize generative AI for creators, enabling accessible and intuitive creative control (Plot’n Polish, AAAI’26; Stylebreeder, NeurIPS’24).

Prior to joining Virginia Tech, I was the CEO of AI Fiction, a creative design studio specializing in AI. Our work includes generative AI for HBO’s Westworld, for which I was a Creative Director nominee at the 72nd Primetime Emmy Awards. I also co-founded GLITCH, the world’s first generative AI clothing line.

I’m passionate about helping the general public understand and appreciate generative models. At MIT, I launched the How to Generate (Almost) Anything project, which demystifies generative AI through collaboration with artists and artisans and encourages broader dialogue about its future and everyday implications. I also taught the “AI & Fashion” course at the London College of Fashion. See the Courses section for more information.

Please contact me via pinary at vt.edu.

news

Oct 10, 2025 Four papers accepted to NeurIPS’25 (main)!
Apr 3, 2025 One paper accepted to ICML 2025 as an Oral!
Feb 3, 2025 Three papers accepted to CVPR 2025!
Mar 3, 2024 Three papers (1 oral, 1 highlight) accepted to CVPR’24!

selected publications

  1. NeurIPS 2025
    [Spotlight] LoRAShop: Training-Free Multi-Concept Image Generation and Editing with Rectified Flow Transformers
    Yusuf Dalva, Hidir Yesiltepe, and Pinar Yanardag
    The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS) 2025
  2. ICCV 2025
    [Spotlight] CLoRA: A Contrastive Approach to Compose Multiple LoRA Models
    Tuna Han Salih Meral*, Enis Simsar*, Federico Tombari, and Pinar Yanardag
    International Conference on Computer Vision (ICCV) 2025
  3. CVPR 2025
    FluxSpace: Disentangled Semantic Editing in Rectified Flow Transformers
    Yusuf Dalva, Kavana Venkatesh, and Pinar Yanardag
    Conference on Computer Vision and Pattern Recognition (CVPR) 2025
  4. CVPR 2024
    [ORAL] NoiseCLR: A Contrastive Learning Approach for Unsupervised Discovery of Interpretable Directions in Diffusion Models
    Yusuf Dalva and Pinar Yanardag
    Conference on Computer Vision and Pattern Recognition (CVPR) 2024
  5. CVPR 2024
    [Highlight] RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models
    Ozgur Kara*, Bariscan Kurtkaya*, Hidir Yesiltepe, James M. Rehg, and Pinar Yanardag
    Conference on Computer Vision and Pattern Recognition (CVPR) 2024