I am a Staff Research Scientist at Meta GenAI, working on foundation models. Previously, I was a Research Scientist at XR Tech in Meta Reality Labs, working on 3D Scene Understanding. Prior to joining Meta, I did my Ph.D. at the TUM Visual Computing Group headed by Prof. Matthias Niessner, where I worked on Computer Vision and 3D Scene Understanding.
During my Ph.D., I did an internship at Facebook AI Research (FAIR) with Prof. Saining Xie and Dr. Benjamin Graham on 3D representation and data-efficient learning.
Before that, I obtained my master's degree at the RWTH Computer Vision Group headed by Prof. Bastian Leibe, where I studied Computer Vision and Machine Learning.
I am interested in research and applications of Generative Models for Image/Video/3D, as well as 3D Computer Vision, e.g., 3D Reconstruction, VR/AR, Robotics, and Autonomous Driving.
Introducing MetaQueries, a minimal recipe for building state-of-the-art unified multimodal understanding (text output) and generation (pixel output) models.
We propose a Linear-complexity text-to-video Generation (LinGen) framework whose cost scales linearly with the number of pixels. For the first time, LinGen enables high-resolution, minute-length video generation on a single GPU without compromising quality.
Given a textual description of the overall room style and a rough 3D room layout based on 3D semantic bounding boxes, our method, ControlRoom3D, creates diverse and globally plausible 3D room meshes that align well with the room layout.
Our block caching technique avoids redundant computations, thereby speeding up inference by a factor of 1.5x-1.8x while maintaining image quality.
Successful point cloud registration relies on accurate correspondences established upon powerful descriptors. However, existing neural descriptors either leverage a rotation-variant backbone whose performance declines under large rotations, or encode local geometry that is less distinctive. To address this issue, we introduce RIGA to learn descriptors that are Rotation-Invariant by design and Globally-Aware.
NeRF-Det is a novel method for 3D detection with posed RGB images as input. Our method makes novel use of NeRF in an end-to-end manner to explicitly estimate 3D geometry, thereby improving 3D detection performance.
We introduce RoITr, a Rotation-Invariant Transformer to cope with the pose variations in the point cloud matching task. On the challenging 3DLoMatch benchmark, RoITr surpasses the existing methods by at least 13 and 5 percentage points in terms of the Inlier Ratio and the Registration Recall, respectively.
We demonstrate that Mask3D is particularly effective in embedding 3D priors into the powerful 2D ViT backbone, enabling improved representation learning for various scene understanding tasks, such as semantic segmentation, instance segmentation, and object detection.
We introduce PCR-CG: a novel 3D point cloud registration module that explicitly embeds color signals into the geometry representation. With our designed 2D-3D projection module, pixel features in a square region centered at each correspondence perceived from the images are effectively correlated with the point cloud representation.
Recent advances in 3D perception have shown impressive progress in understanding geometric structures of 3D shapes and even scenes. Inspired by these advances in geometric understanding, we aim to imbue image-based perception with representations learned under geometric constraints.
Inspired by 2D panoptic segmentation, we propose to unify the tasks of geometric reconstruction, 3D semantic segmentation, and 3D instance segmentation into the task of panoptic 3D scene reconstruction -- from a single RGB image, predicting the complete geometric reconstruction of the scene in the camera frustum of the image, along with semantic and instance segmentations.
Our study reveals that exhaustive labelling of 3D point clouds might be unnecessary; and remarkably, on ScanNet, even using 0.1% of point labels, we still achieve 89% (instance segmentation) and 96% (semantic segmentation) of the baseline performance that uses full annotations.
In this work, we introduce RfD-Net that jointly detects and reconstructs dense object surfaces directly from raw point clouds. Instead of representing scenes with regular grids, our method leverages the sparsity of point cloud data and focuses on predicting shapes that are recognized with high objectness.
This paper introduces the task of semantic instance completion: from an incomplete RGB-D scan of a scene, we detect the individual object instances comprising the scene and jointly infer their complete object geometry.
We introduce 3D-SIS, a novel neural network architecture for 3D semantic instance segmentation in commodity RGB-D scans. The core idea of our method is to jointly learn from both geometric and color signals, thus enabling accurate instance predictions.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)
International Journal of Computer Vision (IJCV)
ISPRS Journal of Photogrammetry and Remote Sensing
IEEE Robotics and Automation Letters (RA-L)
IEEE Transactions on Image Processing (TIP)
Pattern Recognition Letters
Neurocomputing
Computers & Graphics
Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH)
Conference on Computer Vision and Pattern Recognition (CVPR)
International Conference on Computer Vision (ICCV)
European Conference on Computer Vision (ECCV)
International Conference on Machine Learning (ICML)
Neural Information Processing Systems (NeurIPS)
International Conference on Learning Representations (ICLR)
International Conference on Robotics and Automation (ICRA)
Association for the Advancement of Artificial Intelligence (AAAI)
International Joint Conference on Artificial Intelligence (IJCAI)
ACM Multimedia (ACMMM)