Publications & Preprints
* denotes equal contribution.
LIMMT: Less is More for Motion Tracking
ICML 2026
Yu Guan*, Zekun Qi*, Chenghuai Lin, Xuchuan Chen, Wenyao Zhang,
Jilong Wang, XinQiang Yu, He Wang, Li Yi
International Conference on Machine Learning (ICML), 2026
A "less is more" framework for humanoid motion tracking, empirically showing that models
trained on a small, carefully curated subset of motion data can outperform those trained
on massive, unfiltered corpora.
Humanoid-GPT: Humanoid Generative Pre-Training for Zero-Shot Motion Tracking
CVPR 2026
Zekun Qi*, Xuchuan Chen*, Jilong Wang*, Chenghuai Lin*, Yunrui Lian, Zhikai Zhang,
Yu Guan, Wenyao Zhang, XinQiang Yu, He Wang, Li Yi
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026
A GPT-style Transformer with causal attention trained on a 2B-frame retargeted motion corpus,
enabling zero-shot whole-body control for the Unitree G1 humanoid.
HumanTracker: Towards a Comprehensive and Human-Aligned Motion Tracking Benchmark
Under Review
Dairu Liu*, Zekun Qi*, Jiayu Zeng*, Yu Guan, Chenghuai Lin,
Xuchuan Chen, XinQiang Yu, Wenyao Zhang, He Wang, Li Yi
European Conference on Computer Vision (ECCV), 2026 (under review)
Vision To Touch: Generalizable Insertion Contact-State Representation with Real-World RL
XinQiang Yu*, Yuxuan Wan, Zekun Qi, Weiheng Liu, Wenyao Zhang, Bowen Xiao,
Yu Guan, Xuchuan Chen, Zhizheng Zhang, Li Yi, Zhaoxiang Zhang, He Wang