Hello! I am a research scientist at
Facebook AI Research (FAIR), Menlo Park. I received my Ph.D. and M.S. degrees from the CSE Department at UC San Diego, advised by Zhuowen Tu.
During my Ph.D. studies, I also interned at NEC Labs, Adobe, Facebook, Google, and DeepMind. Prior to that, I obtained my bachelor's degree from Shanghai Jiao Tong University.
My primary research interests are deep learning and computer vision. My goal is to develop better representation learning techniques that help machines understand and exploit large amounts of structured information, and to push the boundaries of visual recognition by learning better representations at scale.
I am serving as an Area Chair for ECCV 2020/2022, ICCV 2021, and CVPR 2021/2022.
I'm hiring research interns at FAIR.
If you are passionate about using representation learning to solve challenging tasks in computer vision and machine learning, send me an email to apply.
Organizing/Invited Talk @ Tutorials on Visual Recognition for Images, Video, and 3D
(* indicates equal contribution)
A ConvNet for the 2020s
SLIP: Self-supervision meets Language-Image Pre-training
Masked Feature Prediction for Self-Supervised Visual Pre-Training
Benchmarking Detection Transfer Learning with Vision Transformers
Masked Autoencoders Are Scalable Vision Learners
Pri3D: Can 3D Priors Help 2D Representation Learning?
An Empirical Study of Training Self-supervised Vision Transformers
On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness
Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts
Sample-Efficient Neural Architecture Search by Learning Action Space
Graph Structure of Neural Networks
PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding
Are Labels Necessary for Neural Architecture Search?
Momentum Contrast for Unsupervised Visual Representation Learning
Best Paper Nomination (top 30)
Decoupling Representation and Classifier for Long-Tailed Recognition
On Network Design Spaces for Visual Recognition
Exploring Randomly Wired Neural Networks for Image Recognition
Deep Representation Learning with Induced Structural Priors
Ph.D. Thesis, UC San Diego 2018
Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification
Attentional ShapeContextNet for Point Cloud Recognition
Aggregated Residual Transformations for Deep Neural Networks
Top-down Learning for Structured Labeling with Convolutional Pseudoprior
Holistically-Nested Edge Detection
Marr Prize Honorable Mention
Oral Presentation at the NeurIPS'14 Deep Learning Workshop
Hyper-class Augmented and Regularized Deep Learning for Fine-grained Image Classification