Lixin YANG | 杨理欣

 lik'sin ˈyäŋ 

Morning! I’m a fourth-year PhD candidate in the Department of Computer Science at Shanghai Jiao Tong University (SJTU). Since 2019, I have been a member of the Machine Vision and Intelligence Group under the supervision of Prof. Cewu Lu. Prior to that, I received my M.S. degree at the Intelligent Robot Lab at SJTU. My research interests include Computer Vision, Robotic Vision, 3D Vision, and Graphics. Currently, I focus on modeling and imitating the interaction of hands manipulating objects, including 3D hand pose and shape from X, hand-object reconstruction, animation, and synthesis. I am also interested in NeRF and motion retargeting.

Looking for collaborations and self-motivated interns. Contact me if you are interested in the topics above.

Email  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter

News
  • [2022.10] 👩🏻‍❤️‍👨🏻 I got married to my beautiful beloved girl.
  • [2022.10] Invited Talk at IDEA. Thanks to Ailing Zeng for hosting.
  • [2022.09] 🎉 DART got accepted by NeurIPS 2022 - Datasets and Benchmarks Track.
  • [2022.07] Invited Talk: Image-based Hand-Object Interaction Reconstruction and Virtual Hand Generation | 智东西公开课 | AI新青年讲座. Video (in Chinese)
  • [2022.04] Invited Talk at MPI-IS Perceiving Systems. Thanks to Yuliang Xiu for hosting. INFO
  • [2022.03] 🎉 Two papers were accepted by CVPR 2022: one oral, one poster.
  • [2021.07] 🎉 One paper was accepted by ICCV 2021.
Publications
DART: Articulated Hand Model with Diverse Accessories and Rich Textures
Daiheng Gao*, Yuliang Xiu*, Kailin Li*, Lixin Yang*,
Feng Wang, Peng Zhang, Bang Zhang, Cewu Lu, Ping Tan
(*=equal contribution)

NeurIPS, 2022 - Datasets and Benchmarks Track
Website / Paper / arXiv / Code / Video

In this paper, we extend MANO with more Diverse Accessories and Rich Textures, namely DART. DART comprises 325 exquisite hand-crafted texture maps that vary in appearance and cover different kinds of blemishes, make-ups, and accessories. We also generate a large-scale (800K), diverse, and high-fidelity set of hand images, paired with perfectly-aligned 3D labels, called DARTset.

OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction
Lixin Yang*, Kailin Li*, Xinyu Zhan*, Fei Wu, Anran Xu, Liu Liu, Cewu Lu
(*=equal contribution)

CVPR, 2022
Website / Paper / arXiv / Toolkit / Tink

Learning how humans manipulate objects requires machines to acquire knowledge from two perspectives: one for understanding object affordances and the other for learning humans' interactions based on those affordances. In this work, we propose OakInk, a multi-modal and richly-annotated knowledge repository for visual and cognitive understanding of hand-object interactions. Check our website for more details!

ArtiBoost: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis
Lixin Yang*, Kailin Li*, Xinyu Zhan, Jun Lv, Wenqiang Xu, Jiefeng Li, Cewu Lu
(*=equal contribution)

CVPR, 2022   (Oral Presentation)
Paper / arXiv / Code

We propose an online data enrichment method that boosts articulated hand-object pose estimation from the data perspective. During training, ArtiBoost alternately performs data exploration and synthesis. Even with a simple baseline, ArtiBoost boosts it to outperform the previous state of the art on several hand-object benchmarks.
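In spirit, this alternation resembles a weighted sampling loop over hand-object-view configurations. Below is a minimal sketch of that idea; every name is a hypothetical placeholder, not ArtiBoost's released API.

```python
# A rough sketch of the exploration/synthesis alternation described above.
# Every name here is a hypothetical placeholder, not ArtiBoost's actual API.
import random

# Sampling weights over (hand pose, object, viewpoint) configurations.
weights = {(h, o, v): 1.0 for h in range(4) for o in range(3) for v in range(2)}

def synthesize(cfg):
    """Placeholder for rendering a sampled configuration into an image."""
    return f"rendered{cfg}"

def train_step(sample):
    """Placeholder for one training step; returns the sample's loss."""
    return random.random()

for step in range(100):
    # Exploration: sample a configuration, favoring currently hard ones.
    cfg = random.choices(list(weights), weights=list(weights.values()))[0]
    sample = synthesize(cfg)          # Synthesis: render the sampled config.
    loss = train_step(sample)         # Train on the synthetic sample.
    # Re-weight: configurations with high loss get sampled more often.
    weights[cfg] = 0.9 * weights[cfg] + 0.1 * (1.0 + loss)
```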

CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction
Lixin Yang, Xinyu Zhan, Kailin Li, Wenqiang Xu, Jiefeng Li, Cewu Lu

ICCV, 2021
Project / Paper / Supp / arXiv / Code / 知乎

We highlight contact in the hand-object interaction modeling task by proposing an explicit representation named Contact Potential Field (CPF). In CPF, we treat each contacting hand-object vertex pair as a spring-mass system; hence the whole system forms a potential field with minimal elastic energy at the grasping position.
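Read literally, the spring-mass analogy corresponds to a quadratic elastic energy; a minimal formulation (notation mine, not necessarily the paper's) might look like:

```latex
% Assumed notation: v_i^h(\theta) is the i-th contacting hand vertex under
% hand pose \theta, v_i^o its paired object vertex, k_i the spring stiffness.
% A grasp is then recovered by minimizing the total elastic potential:
E(\theta) = \sum_i \tfrac{1}{2}\, k_i \,\bigl\| v_i^h(\theta) - v_i^o \bigr\|_2^2
```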

HandTailor: Towards High-Precision Monocular 3D Hand Recovery
Jun Lv, Wenqiang Xu, Lixin Yang, Sucheng Qian, Chongzhao Mao, Cewu Lu

BMVC, 2021
arXiv / Code

We combine a learning-based hand module and an optimization-based tailor module to achieve high-precision hand mesh recovery from a monocular RGB image, targeting both accuracy-oriented and in-the-wild scenarios.
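The tailor module is an optimization stage on top of the network's prediction. As a rough illustration of such a refinement loop, with a stubbed hand model and camera (hypothetical names, not HandTailor's actual code):

```python
# A rough illustration of an optimization-based refinement ("tailor") step,
# with a stubbed hand model and camera; not HandTailor's actual code.
import torch

def hand_model(theta):
    """Stub differentiable hand model: pose params -> 21 3D joints."""
    return theta.view(21, 3)

def project(joints_3d):
    """Stub pinhole projection with unit focal length."""
    return joints_3d[:, :2] / (joints_3d[:, 2:3] + 1e-8)

kpts_2d = torch.rand(21, 2)                  # 2D keypoints from the network
theta = torch.rand(63, requires_grad=True)   # init from the learning module
opt = torch.optim.Adam([theta], lr=1e-2)

for _ in range(100):                         # refine the pose so that the
    opt.zero_grad()                          # mesh reprojects onto the 2D
    loss = ((project(hand_model(theta)) - kpts_2d) ** 2).mean()  # evidence
    loss.backward()
    opt.step()
```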

HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation
Jiefeng Li, Chao Xu, Zhicun Chen, Siyuan Bian, Lixin Yang, Cewu Lu

CVPR, 2021
Project / Paper / Supp / arXiv / Code

We bridge the gap between body mesh estimation and 3D keypoint estimation by proposing a novel hybrid inverse kinematics solution, HybrIK. HybrIK directly transforms accurate 3D joints into relative body-part rotations for 3D body mesh reconstruction via the twist-and-swing decomposition.
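The twist-and-swing decomposition has a compact quaternion form: swing reorients the bone axis while twist rotates about it (in HybrIK, swing is solved analytically from the 3D joints and the twist angle is regressed by the network). Below is a generic sketch of the standard decomposition, not HybrIK's actual code:

```python
# Generic quaternion swing-twist decomposition (standard formulation,
# simplified; not HybrIK's actual code). Quaternions are (w, x, y, z).
import numpy as np

def quat_conj(q):
    """Conjugate of a unit quaternion."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mul(a, b):
    """Hamilton product a * b."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def swing_twist(q, axis):
    """Split q into (swing, twist) with q = swing * twist, where twist
    rotates about `axis` and swing moves the axis itself."""
    axis = axis / np.linalg.norm(axis)
    proj = np.dot(q[1:], axis) * axis        # vector part along the axis
    twist = np.concatenate(([q[0]], proj))
    n = np.linalg.norm(twist)
    if n < 1e-9:                             # singular: pure 180-degree swing
        twist = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        twist = twist / n
    swing = quat_mul(q, quat_conj(twist))
    return swing, twist

# Sanity check: a rotation of 1 rad about y has no twist about the z axis.
q = np.array([np.cos(0.5), 0.0, np.sin(0.5), 0.0])
swing, twist = swing_twist(q, np.array([0.0, 0.0, 1.0]))
```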

BiHand: Recovering Hand Mesh with Multi-stage Bisected Hourglass Networks
Lixin Yang, Jiasen Li, Wenqiang Xu, Yiqun Diao, Cewu Lu

BMVC, 2020
Paper / arXiv / Code

We introduce BiHand, an end-to-end learnable model that recovers a hand mesh from an RGB image. BiHand adopts a novel bisecting design that allows the networks to encapsulate two closely related pieces of information (e.g., 2D keypoints and silhouette) to improve performance.

website template