Lixin YANG (杨理欣)
Research Assistant Professor, School of Artificial Intelligence, Shanghai Jiao Tong University
Member of Machine Vision and Intelligence Group (MVIG)
Email: siriusyang at sjtu dot edu dot cn
Bio.  I am a Research Assistant Professor at Shanghai Jiao Tong University (SJTU), School of Artificial Intelligence (SAI). Since 2019, I have been part of the Machine Vision and Intelligence Group under the supervision of Prof. Cewu Lu, where I obtained my Ph.D. in 2023. Prior to that, I received my M.S. degree at the Intelligent Robot Lab at SJTU. My research interests include 3D vision and robotics. Currently, I focus on modeling and imitating hands manipulating objects, including 3D hand and object pose and shape estimation, grasp and motion generation, imitation learning, and dexterous manipulation.
Join Us.  I am looking for 1) Master's students at the SJTU School of Artificial Intelligence, and 2) self-motivated undergraduate research interns. Contact me if you are interested in the above topics.
Email / Google Scholar / GitHub / Twitter
- [2024.09] 🎉 One paper on articulated object image manipulation was accepted by NeurIPS 2024 🇨🇦.
- [2024.07] 🎉 One paper: SemGrasp was accepted by ECCV 2024 🇮🇹.
- [2024.02] 🎉 One paper: OakInk2 was accepted by CVPR 2024 🇺🇸.
- [2024.02] 🎉 The Contact Potential Field was accepted by TPAMI.
- [2023.12] 🎉 One paper: FAVOR was accepted by AAAI 2024 🇨🇦.
- [2023.10] 🎉 One paper: Color-NeuS was accepted by 3DV 2024 🇨🇭.
- [2023.08] 📣 I defended my doctoral thesis and earned my Ph.D.!
- [2023.08] 👨🏻‍🏫 I was honored to be an invited speaker at the HANDS workshop at ICCV 2023.
- [2023.07] 🎉 One paper: CHORD was accepted by ICCV 2023 🇫🇷.
- [2023.06] 👨🏻‍🏫 Invited talk at a Zhidx (智东西) open-class seminar: 3D Hand Reconstruction and Embodied Intelligent Interaction. Video (in Chinese).
- [2023.02] 🎉 One paper: POEM was accepted by CVPR 2023.
- [2022.10] 👩🏻‍❤️‍👨🏻 I have taken the wonderful journey of marriage alongside my cherished wife.
- [2022.10] 👨🏻‍🏫 Invited talk at the International Digital Economy Academy (IDEA). Thanks to Ailing Zeng for hosting.
- [2022.09] 🎉 One paper: DART was accepted by NeurIPS 2022 - Datasets and Benchmarks Track.
- [2022.07] 👨🏻‍🏫 Invited talk at a Zhidx (智东西) AI New Youth lecture: Image-based Hand-Object Interaction Reconstruction and Virtual Hand Generation. Video (in Chinese).
- [2022.04] 👨🏻‍🏫 Invited talk at MPI-IS Perceiving Systems. Thanks to Yuliang Xiu for hosting. (info).
- [2022.03] 🎉 Two papers were accepted by CVPR 2022: one oral, one poster.
- [2021.07] 🎉 One paper was accepted by ICCV 2021.
Motion Before Action: Diffusing Object Motion as Manipulation Condition
Yue Su*, 
Xinyu Zhan*, 
Hongjie Fang, 
Yong-Lu Li, 
Cewu Lu, 
Lixin Yang#, 
arXiv preprint, 2024
project
/
arXiv
/
code
Two cascaded diffusion processes: one for object motion generation, and one for robot action generation conditioned on the object motion.
SemGrasp: Semantic Grasp Generation via Language Aligned Discretization
Kailin Li, 
Jingbo Wang, 
Lixin Yang, 
Cewu Lu, 
Bo Dai
ECCV, 2024
project
/
arXiv
An MLLM-based method that infuses language instructions into grasp generation; & A new language-pose dataset, CapGrasp, featuring detailed captions of grasping poses.
OAKINK2: A Dataset of Bimanual Hands-Object Manipulation in Complex Task Completion
Xinyu Zhan*, 
Lixin Yang*, 
Yifei Zhao, 
Kangrui Mao, 
Hanlin Xu, 
Zenan Lin, 
Kailin Li, 
Cewu Lu
CVPR, 2024
project
/
arXiv
A 4D motion dataset focusing on bimanual object manipulation tasks involved in complex daily
activities; & A three-tiered task abstraction: Object Affordance, Primitive Task, and
Complex Task, to systematically organize manipulation tasks.
FAVOR: Full-Body AR-Driven Virtual Object Rearrangement Guided by Instruction Text
Kailin Li*, 
Lixin Yang*, 
Zenan Lin, 
Jian Xu, 
Xinyu Zhan, 
Yifei Zhao, 
Pengxiang Zhu, 
Wenxiong Kang, 
Kejian Wu, 
Cewu Lu
AAAI, 2024
(coming):
project
/
arXiv
/
code
/
data
A full-body human motion dataset that captures text-guided desktop object rearrangement through MoCap and AR glasses; & A pipeline for generating an avatar's object-rearrangement motion driven by text instructions.
Color-NeuS: Reconstructing Neural Implicit Surfaces with Color
Licheng Zhong*, 
Lixin Yang*, 
Kailin Li, 
Haoyu Zhen, 
Mei Han, 
Cewu Lu
3DV, 2024
project
/
arXiv
/
code
/
data
A NeuS-based reconstruction method for both object surface and texture. Given a video of a hand-held object, it can reconstruct objects with complex geometry and texture.
CHORD: Category-level in-Hand Object Reconstruction via Shape Deformation
Kailin Li*, 
Lixin Yang*, 
Haoyu Zhen, 
Zenan Lin, 
Xinyu Zhan, 
Licheng Zhong, 
Jian Xu, 
Kejian Wu, 
Cewu Lu
ICCV, 2023
project
/
arXiv
/
code (coming)
/
tool
A single-view hand-held object reconstruction method that exploits a categorical shape prior to reconstruct the shapes of intra-class objects; & A new synthetic dataset, COMIC, containing a category-level collection of objects with diverse shapes, materials, interacting poses, and viewing directions.
POEM: Reconstructing Hand in a Point Embedded Multi-view Stereo
Lixin Yang, 
Jian Xu, 
Licheng Zhong, 
Xinyu Zhan, 
Zhicheng Wang, 
Kejian Wu, 
Cewu Lu
CVPR, 2023
arXiv
/
code
A multi-view hand mesh recovery (HMR) method based on Transformers. It leverages the "power of points", including Basis Point Sets, points' positional encoding, and a point Transformer, to unify and merge information from sparsely arranged cameras.
DART: Articulated Hand Model with Diverse Accessories and Rich Textures
Daiheng Gao*,  
Yuliang Xiu*, 
Kailin Li*, 
Lixin Yang*,
Feng Wang, Peng Zhang, Bang Zhang,
Cewu Lu,
Ping Tan
NeurIPS, 2022 - Datasets and Benchmarks Track
project
/
arXiv
/
code
/
video
A MANO-derived hand model with 325 exquisite hand-crafted texture maps that vary in appearance and cover different kinds of blemishes, make-up, and accessories.
OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction
Lixin Yang*,
Kailin Li*,
Xinyu Zhan*,
Fei Wu,
Anran Xu,
Liu Liu,
Cewu Lu
(*=equal contribution)
CVPR, 2022
project
/
paper
/
arXiv
/
code
A dataset that focuses on human grasps based on object affordance. It contains two knowledge bases: 1) object affordance knowledge (Oak) and 2) interaction knowledge (Ink). A new model, Tink, transfers interaction poses from one object to another.
ArtiBoost: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration
and Synthesis
Lixin Yang*,
Kailin Li*,
Xinyu Zhan,
Jun Lv,
Wenqiang Xu,
Jiefeng Li,
Cewu Lu
(*=equal contribution)
CVPR, 2022   (Oral Presentation)
paper
/
arXiv
/
code
An online data synthesis tool for articulated hand(-object) pose estimation. A grasp synthesis method that can generate dexterous hand grasping poses for arbitrary objects.
CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction
Lixin Yang,
Xinyu Zhan,
Kailin Li,
Wenqiang Xu,
Jiefeng Li,
Cewu Lu
ICCV, 2021
project
/
paper
/
supp
/
arXiv
/
code
/
Zhihu (知乎)
A novel contact representation (CPF) used to improve physical hand-object interaction. A hybrid learning-and-fitting framework (MIHO) that aligns top-down pose estimation with bottom-up contact modeling.
HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and
Shape Estimation
Jiefeng Li,
Chao Xu,
Zhicun Chen,
Siyuan Bian,
Lixin Yang,
Cewu Lu
CVPR, 2021
project /
paper
/
supp
/
arXiv /
code
HandTailor: Towards High-Precision Monocular 3D Hand Recovery
Jun Lv, Wenqiang Xu, Lixin Yang, Sucheng Qian, Chongzhao Mao, Cewu Lu
BMVC, 2021
paper /
arXiv /
code
BiHand: Recovering Hand Mesh with Multi-stage Bisected Hourglass Networks
Lixin Yang, Jiasen Li, Wenqiang Xu, Yiqun Diao, Cewu Lu
BMVC, 2020
paper /
arXiv /
code
website template