Visual Question Answering and Dialog Workshop
Location: Seaside Ballroom B, Long Beach Convention & Entertainment Center
at CVPR 2019, June 17, Long Beach, California, USA




Introduction

The primary goal of this workshop is two-fold. The first is to benchmark progress in Visual Question Answering and Visual Dialog.


Invited Speakers

Alex Schwing
University of Illinois at Urbana-Champaign

Lisa Hendricks
University of California, Berkeley

Yoav Artzi
Cornell University

Layla El Asri
Microsoft Research

Christopher Manning
Stanford University

Sanja Fidler
University of Toronto / NVIDIA

Karl Moritz Hermann
Google DeepMind


Program (Venue: Seaside Ballroom B, Convention Center)

9:00 AM - 9:10 AM
Welcome
Devi Parikh (Georgia Tech / Facebook AI Research)
[Slides] [Video]
9:10 AM - 9:35 AM
Invited Talk
Alex Schwing (University of Illinois at Urbana-Champaign)
[Video]
9:35 AM - 10:00 AM
Invited Talk
Lisa Hendricks (University of California, Berkeley)
[Slides] [Video]
10:00 AM - 10:15 AM
VQA Challenge Talk (Overview, Analysis and Winner Announcement)
Ayush Shrivastava (Georgia Tech)
[Slides] [Video]
10:15 AM - 10:20 AM
VQA Challenge Runner-up Talk
Team: MSM@MSRA
Members: Bei Liu, Zhicheng Huang, Zhaoyang Zeng, Zheyu Chen and Jianlong Fu
[Slides] [Video]
10:20 AM - 10:25 AM
VQA Challenge Winner Talk
Team: MIL@HDU
Members: Zhou Yu, Jun Yu, Yuhao Cui and Jing Li
[Slides] [Video]
10:25 AM - 10:50 AM
Morning Break
10:50 AM - 11:15 AM
Invited Talk
Christopher Manning (Stanford University)
[Slides] [Video]
11:15 AM - 11:30 AM
GQA Challenge Talk (Overview, Analysis and Winner Announcement)
Drew Hudson (Stanford University)
[Slides] [Video]
11:30 AM - 11:40 AM
GQA Challenge Winner Talk
Team: Kakao Brain
Members: Eun-Sol Kim, Yu-Jung Heo and Woo-Young Kang
[Slides] [Video]
11:40 AM - 11:55 AM
TextVQA Challenge Talk (Overview, Analysis and Winner Announcement)
Amanpreet Singh (Facebook AI Research)
[Slides] [Video]
11:55 AM - 12:00 PM
TextVQA Challenge Runner-up Talk
Team: Team-Schwail
Members: Harsh Agrawal, Jyoti Aneja, Maghav Kumar and Alex Schwing
[Slides] [Video]
12:00 PM - 12:05 PM
TextVQA Challenge Winner Talk
Team: DCD_ZJU
Members: Yuetan Lin, Hongrui Zhao, Yanan Li and Donghui Wang
[Slides] [Video]
12:05 PM - 1:35 PM
Lunch (On your own)
1:35 PM - 2:00 PM
Invited Talk
Karl Moritz Hermann (Google DeepMind)
[Slides] [Video]
2:00 PM - 2:25 PM
Invited Talk
Layla El Asri (Microsoft Research)
[Slides] [Video]
2:25 PM - 2:40 PM
Visual Dialog Challenge Talk (Overview, Analysis and Winner Announcement)
Abhishek Das (Georgia Tech)
[Slides] [Video]
2:40 PM - 2:50 PM
Visual Dialog Challenge Winner Talk
Team: MReaL - BDAI (Alibaba DAMO Academy)
Members: Jiaxin Qi, Yulei Niu, Hanwang Zhang, Jianqiang Huang, Xian-Sheng Hua and Ji-Rong Wen
[Slides] [Video]
2:50 PM - 4:05 PM
Poster session and Afternoon break
Location: Pacific Arena Ballroom
Allotted Poster Boards: #168 to #207
4:05 PM - 4:30 PM
Invited Talk
Sanja Fidler (University of Toronto / NVIDIA)
[Video]
4:30 PM - 4:55 PM
Invited Talk
Yoav Artzi (Cornell University)
[Slides] [Video]
4:55 PM - 5:40 PM
Panel: Future Directions
[Video]
5:40 PM - 5:50 PM
Closing Remarks
Devi Parikh (Georgia Tech / Facebook AI Research)
[Slides] [Video]

Poster Presentation Instructions

The physical dimensions of the poster stands that will be available this year are 8 feet wide by 4 feet high. Please review the reference poster template for more details on how to prepare your poster. You do NOT have to use this template, but please read the instructions carefully and prepare your posters accordingly.


Submission Instructions

We invite submissions of extended abstracts of at most 2 pages describing work in areas such as: Visual Question Answering, Visual Dialog, (Textual) Question Answering, (Textual) Dialog Systems, Commonsense Knowledge, Video Question Answering, Video Dialog, Vision + Language, and Vision + Language + Action (Embodied Agents). Accepted submissions will be presented as posters at the workshop. Extended abstracts should follow the CVPR formatting guidelines and be emailed as a single PDF to the email address listed below. Please use the following LaTeX/Word templates.

  • LaTeX/Word Templates (zip): cvpr2019AuthorKit.zip

Dual Submissions

We encourage submissions of relevant work that has been previously published, or is to be presented at the main conference. The accepted abstracts will not appear in the official IEEE proceedings.

Where to Submit?

Please send your abstracts to [email protected]


Dates

Jan 2019: Challenge Announcement
mid-May 2019: Challenge Submission
May 24, 2019: Extended Workshop Paper Submission
Jun 2, 2019: Notification to Authors
Jun 17, 2019: Workshop


Organizers

Abhishek Das
Georgia Tech

Ayush Shrivastava
Georgia Tech

Karan Desai
Georgia Tech

Yash Goyal
Georgia Tech

Aishwarya Agrawal
Georgia Tech

Amanpreet Singh
Facebook AI Research

Meet Shah
Facebook AI Research

Drew Hudson
Stanford University

Satwik Kottur
Carnegie Mellon

Rishabh Jain
Georgia Tech

Vivek Natarajan

Stefan Lee
Georgia Tech

Peter Anderson
Georgia Tech

Xinlei Chen
Facebook AI Research

Marcus Rohrbach
Facebook AI Research

Dhruv Batra
Georgia Tech / Facebook AI Research

Devi Parikh
Georgia Tech / Facebook AI Research


Sponsors

This work is supported by grants awarded to Dhruv Batra and Devi Parikh.


Contact: [email protected]