Lixin Yang \ 杨理欣

 \lik'sin ˈyäŋ\ 

Morning, I’m a third-year PhD student in the Department of Computer Science at Shanghai Jiao Tong University. Since 2019, I have been a member of the MVIG Lab under the supervision of Prof. Cewu Lu. Prior to that, I received my M.S. degree at the Intelligent Robot Lab in SJTU. My research interests include Computer Vision, 3D Vision, V-SLAM, and Robotics. Currently, I am focusing on modeling and imitating the interactions of hands manipulating objects.

Email  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter

profile photo
News
  • [2022.04] Invited Talk at Max-Planck-Institute for Intelligent Systems (MPI-IS) Perceiving Systems Department.
    Thanks to Yuliang Xiu for hosting.
  • [2022.03] 🎉 Two papers were accepted by CVPR 2022: one oral, one poster.
  • [2021.11] Unlocked a new role: reviewer.
  • [2021.07] 🎉 One paper was accepted by ICCV 2021.
Publications
OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction
Lixin Yang*, Kailin Li*, Xinyu Zhan*, Fei Wu, Anran Xu, Liu Liu, Cewu Lu
(*=equal contribution)
CVPR, 2022
project / arxiv / code

Learning how humans manipulate objects requires machines to acquire knowledge from two perspectives: one for understanding object affordances and the other for learning humans’ interactions based on those affordances. In this work, we propose OakInk, a multi-modal, richly annotated knowledge repository for visual and cognitive understanding of hand-object interactions. Check our website for more details!

ArtiBoost: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis
Lixin Yang*, Kailin Li*, Xinyu Zhan, Jun Lv, Wenqiang Xu, Jiefeng Li, Cewu Lu
(*=equal contribution)
CVPR, 2022   (Oral Presentation)
arxiv / code

We propose a lightweight online data enrichment method that boosts articulated hand-object pose estimation from the data perspective. During training, ArtiBoost alternately performs data exploration and synthesis. Even with a simple baseline, ArtiBoost can boost it to outperform the previous state of the art on several hand-object benchmarks.
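
A minimal sketch of that alternating loop, purely illustrative: every name below (the configuration list, the weighting scheme, training_step) is a placeholder of mine, not ArtiBoost's actual code.

```python
import random

def training_step(real_batch, synth_batch):
    """Placeholder for one optimizer step on mixed real + synthetic data;
    returns an error score for the sampled configuration."""
    return random.random()

# Hypothetical discrete space of (hand pose, object) configurations.
configs = [("pose_a", "obj_1"), ("pose_b", "obj_2"), ("pose_c", "obj_1")]
weights = {c: 1.0 for c in configs}  # start by exploring uniformly

for step in range(100):
    # Exploration: sample a configuration, favoring those the current
    # model still handles poorly (i.e., with higher weight).
    cfg = random.choices(configs, weights=[weights[c] for c in configs])[0]
    # Synthesis: the real method renders the sampled hand-object scene;
    # here the "image" is just the configuration itself.
    synth_batch = {"image": cfg, "label": cfg}
    # Train on a mix of real and synthetic data, then re-weight the
    # sampled configuration by its current error.
    weights[cfg] = training_step("real_batch_placeholder", synth_batch)
```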

CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction
Lixin Yang, Xinyu Zhan, Kailin Li, Wenqiang Xu, Jiefeng Li, Cewu Lu
ICCV, 2021
project / paper / supp / arxiv / code / 知乎

We highlight contact in the hand-object interaction modeling task by proposing an explicit representation named Contact Potential Field (CPF). In CPF, we treat each contacting hand-object vertex pair as a spring-mass system; hence the whole system forms a potential field whose elastic energy is minimal at the grasp position.
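
In rough notation of my own (not necessarily the paper's), the quantity being minimized is a sum of spring energies over the contacting pairs:

```latex
% Spring-mass sketch; symbols are mine, not the paper's. Each contacting
% pair (v_i^h on the hand, v_i^o on the object) acts as a spring with
% stiffness k_i; the grasp minimizes the total elastic energy.
\[
  E(\theta) = \sum_{i \in \mathcal{C}} \tfrac{1}{2}\, k_i
  \bigl\lVert v_i^{h}(\theta) - v_i^{o} \bigr\rVert_2^{2},
  \qquad
  \theta^{*} = \arg\min_{\theta} E(\theta)
\]
```

Here θ stands for the hand pose parameters and 𝒞 for the set of contacting vertex pairs.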

HandTailor: Towards High-Precision Monocular 3D Hand Recovery
Jun Lv, Wenqiang Xu, Lixin Yang, Sucheng Qian, Chongzhao Mao, Cewu Lu
BMVC, 2021
arxiv / code

We combine a learning-based hand module and an optimization-based tailor module to achieve high-precision hand mesh recovery from a monocular RGB image, targeting both accuracy-oriented and in-the-wild scenarios.
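
A toy illustration of the two-stage idea, with a made-up two-parameter "projection" standing in for the real hand model; none of these functions come from HandTailor's code.

```python
import numpy as np

def hand_module(image):
    """Stand-in for the learned network: returns a rough initial estimate."""
    return np.array([0.5, -0.2])

def residual(params, observed_2d):
    """Stand-in reprojection error; the 'projection' is a toy linear map."""
    projected = np.array([2.0 * params[0], params[1] + 1.0])
    return projected - observed_2d

def tailor_module(params, observed_2d, lr=0.1, iters=50):
    """Optimization-based refinement: plain gradient descent on the
    squared residual of the toy projection above."""
    J = np.array([[2.0, 0.0], [0.0, 1.0]])  # Jacobian of the toy projection
    for _ in range(iters):
        params = params - lr * (J.T @ residual(params, observed_2d))
    return params

coarse = hand_module(image=None)                       # learning-based guess
refined = tailor_module(coarse, np.array([1.0, 0.9]))  # tailored fit
```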

HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation
Jiefeng Li, Chao Xu, Zhicun Chen, Siyuan Bian, Lixin Yang, Cewu Lu
CVPR, 2021
project / paper / supp / arxiv / code

We bridge the gap between body mesh estimation and 3D keypoint estimation with a novel hybrid inverse kinematics solution, HybrIK, which directly transforms accurate 3D joints into relative body-part rotations for 3D body mesh reconstruction via the twist-and-swing decomposition.
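
Sketched in my own symbols, following the paper's high-level description: for each body part with template bone direction t and target direction p (taken from the predicted 3D joints), the rotation factors into an analytically solvable swing and a network-predicted twist about the bone:

```latex
% Twist-and-swing sketch; notation is mine, the idea is the paper's.
\[
  R = R_{\mathrm{sw}}\, R_{\mathrm{tw}}, \qquad
  R_{\mathrm{sw}}:\ \text{axis } \vec{n} =
      \frac{\vec{t}\times\vec{p}}{\lVert \vec{t}\times\vec{p} \rVert},\quad
  \text{angle } \alpha = \arccos\!\left(\hat{t}\cdot\hat{p}\right)
\]
```

The swing aligns the template bone with the target joint, so only the one-dimensional twist angle about the bone axis is left for the network to predict.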

BiHand: Recovering Hand Mesh with Multi-stage Bisected Hourglass Networks
Lixin Yang, Jiasen Li, Wenqiang Xu, Yiqun Diao, Cewu Lu
BMVC, 2020
paper / arxiv / code

We introduce an end-to-end learnable model, BiHand, to recover the hand mesh from an RGB image. BiHand adopts a novel bisecting design that lets the networks encapsulate two closely related cues (e.g., 2D keypoints and the silhouette) to improve performance.
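
As a conceptual sketch of what "bisecting" a stage could look like in PyTorch (layer sizes and names are illustrative only, not BiHand's actual architecture):

```python
import torch
import torch.nn as nn

class BisectedStage(nn.Module):
    """Shared encoder whose features split into two supervised branches:
    2D keypoint heatmaps and a hand silhouette mask."""
    def __init__(self, channels=64, n_joints=21):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.keypoint_head = nn.Conv2d(channels, n_joints, 1)  # heatmaps
        self.silhouette_head = nn.Conv2d(channels, 1, 1)       # mask

    def forward(self, x):
        feat = self.encoder(x)
        return self.keypoint_head(feat), self.silhouette_head(feat)

heatmaps, mask = BisectedStage()(torch.randn(1, 3, 64, 64))
```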

website template