Lixin YANG (杨理欣)

Research Assistant Professor at Shanghai Jiao Tong University (Fall 2023)
Member of Machine Vision and Intelligence Group (MVIG)
Research Assistant Professor, Department of Computer Science, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University
Email: siriusyang at sjtu dot edu dot cn

Bio.  I'm an incoming Research Assistant Professor in the Department of Computer Science, Shanghai Jiao Tong University (SJTU). Since 2019, I have been a member of the Machine Vision and Intelligence Group (MVIG) under the supervision of Prof. Cewu Lu. Prior to that, I received my M.S. degree at the Intelligent Robot Lab at SJTU. My research interests include computer vision, robotic vision, 3D vision, and graphics. Currently, I focus on modeling and imitating hands manipulating objects, including 3D hand pose and shape estimation, (hand-held) object reconstruction, and grasp synthesis. I am also interested in NeRF and motion synthesis.

Join Us.  I am looking for collaborators and self-motivated interns. Contact me if you are interested in the topics above.

Email  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter  /  Resumé (PDF)

profile photo
News
  • [2023.08] 📣 I will join the CS Department at SJTU as a Research Assistant Professor in Fall 2023.
  • [2023.08] 📣 I defended my doctoral thesis and earned my Ph.D.!
  • [2023.08] 👨🏻‍🏫 I am honored to be an invited speaker at the HANDS workshop at ICCV 2023.
  • [2023.07] 🎉 One paper, CHORD, is accepted by ICCV 2023 🇫🇷.
  • [2023.06] 👨🏻‍🏫 Invited Talk at 智东西公开课 | Seminar: 3D Hand Reconstruction and Embodied Intelligent Interaction. Video (in Chinese).
  • [2023.02] 🎉 One paper, POEM, is accepted by CVPR 2023.
  • [2022.10] 👩🏻‍❤️‍👨🏻 I have taken the wonderful journey of marriage alongside my cherished wife.
  • [2022.10] 👨🏻‍🏫 Invited Talk at the International Digital Economy Academy (IDEA). Thanks to Ailing Zeng for hosting.
  • [2022.09] 🎉 One paper, DART, got accepted by the NeurIPS 2022 Datasets and Benchmarks Track.
  • [2022.07] 👨🏻‍🏫 Invited Talk at 智东西公开课 | AI New Youth Lecture: Image-based Hand-Object Interaction Reconstruction and Virtual Hand Generation. Video (in Chinese).
  • [2022.04] 👨🏻‍🏫 Invited Talk at MPI-IS Perceiving Systems. Thanks to Yuliang Xiu for hosting. (info).
  • [2022.03] 🎉 Two papers were accepted by CVPR 2022: one oral, one poster.
  • [2021.07] 🎉 One paper got accepted by ICCV 2021.
Publications
Color-NeuS: Reconstructing Neural Implicit Surfaces with Color
Licheng Zhong*, Lixin Yang*, Kailin Li, Haoyu Zhen, Mei Han, Cewu Lu

arXiv preprint, 2023
Project Page / arXiv / Code / Data
Details Color-NeuS focuses on mesh reconstruction with color. We remove view-dependent color while using a relighting network to maintain volume rendering performance. The mesh is extracted from the SDF network, and vertex colors are derived from the global color network. We conceived an in-hand object scanning task and gathered several videos for it to evaluate Color-NeuS.
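As a concrete illustration of the vertex-coloring step above, here is a minimal Python sketch. It is not the official Color-NeuS code: sdf_net and color_net are hypothetical stand-ins for the SDF network and the global (view-independent) color network. The surface is extracted from the SDF's zero level set with marching cubes, and each vertex is colored by querying the color network.

# Minimal sketch, assuming `sdf_net(x) -> (N,)` signed distances and
# `color_net(x) -> (N, 3)` view-independent RGB (hypothetical interfaces).
import torch
from skimage.measure import marching_cubes

@torch.no_grad()
def extract_colored_mesh(sdf_net, color_net, resolution=256, bound=1.0):
    # Evaluate the SDF on a dense grid covering [-bound, bound]^3.
    xs = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1)
    sdf = sdf_net(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)

    # Marching cubes on the zero level set recovers the surface mesh.
    verts, faces, _, _ = marching_cubes(sdf.cpu().numpy(), level=0.0)
    verts = verts / (resolution - 1) * 2 * bound - bound  # grid indices -> world coords

    # Vertex color comes from the global color network, independent of view direction.
    vert_colors = color_net(torch.from_numpy(verts.copy()).float()).clamp(0.0, 1.0)
    return verts, faces, vert_colors.cpu().numpy()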
CHORD: Category-level in-Hand Object Reconstruction via Shape Deformation
Kailin Li*, Lixin Yang*, Haoyu Zhen, Zenan Lin, Xinyu Zhan, Licheng Zhong, Jian Xu, Kejian Wu, Cewu Lu

ICCV, 2023
Project Page / arXiv / Code / PyBlend
Details We propose CHORD, a new method that exploits a categorical shape prior for reconstructing the shape of intra-class objects. In addition, we construct a new dataset, COMIC, for category-level hand-object interaction. COMIC encompasses a diverse collection of object instances, materials, hand interactions, and viewing directions.
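The "shape prior + deformation" idea can be pictured with a small sketch. The code below is illustrative only and is not the CHORD architecture: the class name, layer sizes, and the way an image feature conditions the offsets are all assumptions. A hypothetical MLP predicts per-vertex offsets that deform a canonical category template toward the observed instance.

# Illustrative sketch only; not the CHORD model.
import torch
import torch.nn as nn

class TemplateDeformer(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Maps (image feature, template vertex position) -> per-vertex 3D offset.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, img_feat, template_verts):
        # img_feat: (B, feat_dim); template_verts: (V, 3) canonical category prior.
        B, V = img_feat.shape[0], template_verts.shape[0]
        feat = img_feat[:, None, :].expand(B, V, img_feat.shape[-1])
        verts = template_verts[None, :, :].expand(B, V, 3)
        offsets = self.mlp(torch.cat([feat, verts], dim=-1))
        return verts + offsets  # instance shape = category prior + predicted deformation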
POEM: Reconstructing Hand in a Point Embedded Multi-view Stereo
Lixin Yang, Jian Xu, Licheng Zhong, Xinyu Zhan, Zhicheng Wang, Kejian Wu, Cewu Lu

CVPR, 2023
arXiv / Code
Details POEM (Point Embedded Multi-view) focuses on reconstructing an articulated body with "true scale" and "accurate pose" from a set of sparsely arranged camera views. In practice, we use the hand as the example. POEM explores the power of points, using a cluster of (x, y, z) coordinates, which carry a natural positional encoding, to find associations across the multi-view stereo.
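The core association step, tying a 3D point to what each camera sees of it, can be sketched as follows. This is not the POEM implementation; it only illustrates projecting a cluster of 3D points into every view and gathering the image features there (function names and tensor shapes are assumptions).

# Minimal sketch, assuming per-view feature maps, intrinsics K, and
# world-to-camera extrinsics T_w2c (hypothetical inputs).
import torch
import torch.nn.functional as F

def project_points(points, K, T_w2c):
    # points: (N, 3) world-space points; K: (3, 3); T_w2c: (4, 4).
    pts_h = torch.cat([points, torch.ones_like(points[:, :1])], dim=-1)  # (N, 4)
    cam = (T_w2c @ pts_h.T).T[:, :3]                                     # camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]  # pixel coords and depth

def gather_multiview_features(points, feat_maps, Ks, T_w2cs):
    # feat_maps: list of (C, H, W) per-view feature maps; returns (V, N, C).
    out = []
    for feat, K, T in zip(feat_maps, Ks, T_w2cs):
        C, H, W = feat.shape
        uv, _ = project_points(points, K, T)
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                            uv[:, 1] / (H - 1) * 2 - 1], dim=-1)
        sampled = F.grid_sample(feat[None], grid[None, None], align_corners=True)
        out.append(sampled[0, :, 0].T)  # (N, C) features associated with the points
    return torch.stack(out)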
DART: Articulated Hand Model with Diverse Accessories and Rich Textures
Daiheng Gao*, Yuliang Xiu*, Kailin Li*, Lixin Yang*, Feng Wang, Peng Zhang, Bang Zhang, Cewu Lu, Ping Tan

NeurIPS, 2022 - Datasets and Benchmarks Track
Website / Paper / arXiv / Code / Video
Details We extend MANO with more Diverse Accessories and Rich Textures, namely DART. DART comprises 325 exquisite hand-crafted texture maps that vary in appearance and cover different kinds of blemishes, make-up, and accessories. We also generate a large-scale (800K), diverse, and high-fidelity set of hand images paired with perfectly aligned 3D labels, called DARTset.
OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction
Lixin Yang*, Kailin Li*, Xinyu Zhan*, Fei Wu, Anran Xu, Liu Liu, Cewu Lu
(*=equal contribution)

CVPR, 2022
Website / Paper / arXiv / Toolkit / Tink
Details Learning how humans manipulate objects requires machines to acquire knowledge from two perspectives: one for understanding object affordances, and the other for learning humans' interactions based on those affordances. In this work, we propose OakInk, a multi-modal, richly annotated knowledge repository for visual and cognitive understanding of hand-object interactions. Check our website for more details!
ArtiBoost: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis
Lixin Yang*, Kailin Li*, Xinyu Zhan, Jun Lv, Wenqiang Xu, Jiefeng Li, Cewu Lu
(*=equal contribution)

CVPR, 2022   (Oral Presentation)
Paper / arXiv / Code
Details We propose ArtiBoost, a lightweight online data-enhancement method that boosts articulated hand-object pose estimation from the data perspective. ArtiBoost covers diverse hand-object poses and camera viewpoints by sampling in a Composited hand-object Configuration and Viewpoint space (CCV-space), and adaptively enriches the currently hard-to-discern items via loss feedback and sample re-weighting. ArtiBoost alternately performs data exploration and synthesis within a learning pipeline, and the synthetic data are blended with real-world source data for training.
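To show what loss-feedback sample re-weighting means in practice, here is a minimal sketch. It is not the ArtiBoost code: the class name, momentum update, and multinomial sampling are assumptions. The idea it illustrates is that items which currently incur high loss receive larger sampling weights, so hard configurations are revisited more often.

# Minimal sketch of loss-feedback re-weighting (hypothetical interface).
import torch

class LossFeedbackSampler:
    def __init__(self, num_items, momentum=0.9):
        self.weights = torch.ones(num_items)  # one weight per item in the pool
        self.momentum = momentum

    def sample(self, batch_size):
        # Draw item indices with probability proportional to their current weight.
        probs = self.weights / self.weights.sum()
        return torch.multinomial(probs, batch_size, replacement=True)

    def update(self, indices, losses):
        # Blend the observed per-sample losses into the running weights,
        # so consistently hard items keep a high sampling probability.
        self.weights[indices] = (self.momentum * self.weights[indices]
                                 + (1 - self.momentum) * losses.detach().cpu())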
CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction
Lixin Yang, Xinyu Zhan, Kailin Li, Wenqiang Xu, Jiefeng Li, Cewu Lu

ICCV, 2021
Project / Paper / Supp / arXiv / Code / 知乎
Details Modeling the hand-object (HO) interaction not only requires estimating the HO pose but also attending to the contact that arises from the interaction. In this paper, we present an explicit contact representation, the Contact Potential Field (CPF), and a learning-fitting hybrid framework, MIHO, for Modeling the Interaction of Hand and Object. In CPF, we treat each contacting HO vertex pair as a spring-mass system, so the whole system forms a potential field whose elastic energy is minimal at the grasp position.
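A minimal sketch of the spring-mass picture described above (illustrative only; the actual CPF formulation, its rest lengths, and how contact pairs are selected differ): each contacting hand-object vertex pair contributes a quadratic elastic term, and the grasp corresponds to the configuration with minimal total energy.

# Minimal sketch, assuming a precomputed list of contacting vertex pairs.
import torch

def contact_elastic_energy(hand_verts, obj_verts, pairs, stiffness=1.0):
    # hand_verts: (Nh, 3); obj_verts: (No, 3); pairs: (P, 2) long tensor of
    # (hand_idx, obj_idx) for contacting vertex pairs. Each pair acts as a
    # zero-rest-length spring, so E = 0.5 * k * sum_i ||h_i - o_i||^2.
    h = hand_verts[pairs[:, 0]]
    o = obj_verts[pairs[:, 1]]
    return 0.5 * stiffness * ((h - o) ** 2).sum()

# The energy is differentiable w.r.t. hand_verts, so a fitting loop can descend
# on hand pose parameters to drive the contact energy toward its minimum.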
HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation
Jiefeng Li, Chao Xu, Zhicun Chen, Siyuan Bian, Lixin Yang, Cewu Lu

CVPR, 2021
Project / Paper / Supp / arXiv / Code
HandTailor: Towards High-Precision Monocular 3D Hand Recovery
Jun Lv, Wenqiang Xu, Lixin Yang, Sucheng Qian, Chongzhao Mao, Cewu Lu

BMVC, 2021
Paper / arXiv / Code
BiHand: Recovering Hand Mesh with Multi-stage Bisected Hourglass Networks
Lixin Yang, Jiasen Li, Wenqiang Xu, Yiqun Diao, Cewu Lu

BMVC, 2020
Paper / arXiv / Code


website template