Research
My research interests span machine learning, computer vision, and natural language processing, with a particular focus on large language model alignment, reasoning with synthetic data, and open-world machine learning. My recent work shows that synthetic data from smaller, weaker models can be more effective than data from larger models for improving LLM reasoning. Representative papers are highlighted. (* indicates equal contribution)
Your Weak LLM is Secretly a Strong Teacher for Alignment
Leitian Tao, Yixuan Li
ICLR 2025
CodeLutra: Boosting LLM Code Generation via Preference-Guided Refinement
Leitian Tao, Xiang Chen, Tong Yu, Tong Mai, Ryan Rossi, Yixuan Li, Saayan Mitra
Under Review
Non-parametric Outlier Synthesis
Leitian Tao, Xuefeng Du, Xiaojin Zhu, Yixuan Li
ICLR 2023
[Link]
Predicate Correlation Learning for Scene Graph Generation
Leitian Tao, Li Mi, Nannan Li, Xianhang Cheng, Yaosi Hu, Zhenzhong Chen
IEEE Transactions on Image Processing (TIP)
[Link]
Pyramid Feature Alignment Network for Video Deblurring
Leitian Tao, Zhenzhong Chen
arXiv preprint
[Link]
Academic Service
Journal Reviewer: IJCV
Conference Reviewer: ICML, ICLR, NeurIPS