Hi, welcome! I am a research scientist at Snap Research, working with Kfir Aberman on personalized AI generation. I received my Ph.D. in Computer Science from KAUST, under the supervision of Prof. Bernard Ghanem.
During my Ph.D. studies, I was fortunate to intern at several leading research institutions, including Snap Research, Meta Reality Labs, Microsoft Research, Megvii Research, and SenseTime Research.
Prior to that, I received my B.Eng. degree with the highest undergraduate honor from Xi'an Jiaotong University (XJTU).
My primary research interests lie in 3D perception and 3D generation.
My representative works include the 3D foundation models PointNeXt (NeurIPS), ASSANet (NeurIPS Spotlight), and DeepGCNs (T-PAMI), and the 3D generation works Magic123 (ICLR) and AToM.
If you are interested in working with us on personalized or 3D-related generation, please drop me a message at guocheng.qian [at] outlook.com.
Check my full publication list on Google Scholar.
AToM trains a single text-to-mesh model on many prompts using 2D diffusion without 3D supervision, yields high-quality textured meshes in under a second, and generalizes to unseen prompts.
Magic123 is a coarse-to-fine image-to-3D pipeline that produces high-quality, high-resolution 3D content from a single unposed image using guidance from both 2D and 3D priors.
We propose a state-of-the-art standard Transformer model for point cloud understanding and find that image pretraining helps point cloud tasks.
ZeroSeg trains open-vocabulary zero-shot semantic segmentation models using only the CLIP vision encoder.
PointNeXt boosts PointNet++ to state-of-the-art performance through improved training and scaling strategies.
ASSANet makes PointNet++ faster and more accurate.
DeepGCNs transfers concepts such as residual/dense connections and dilated convolutions from CNNs to GCNs, enabling the successful training of very deep GCNs.