Hello 👋, I am a first-year PhD student at the Research Center for Social Computing and Interactive Robotics (SCIR), Harbin Institute of Technology (HIT), China. My advisor is Prof. Wanxiang Che. Previously, my research centered on deep learning for natural language generation (NLG), such as grammatical error correction. Recently, I have shifted my focus to efficient inference for large language models, particularly speculative decoding.
My main research interests lie in efficient LLMs, including inference acceleration and synthetic data. If you are interested in my research or potential collaborations, please feel free to reach out to me at yixuanwang@ir.hit.edu.cn.
If you like the template of this homepage, you are welcome to star and fork Yi Ren's open-source template, AcadHomepage.
🔥 News
- 2025.05: 🎉🎉 Our Token Recycling (Main) and Tag-Evol (Findings) are accepted by ACL 2025.
- 2024.09: 🎉🎉 Our Make-Some-Noise is accepted by EMNLP 2024.
- 2024.09: 🎉🎉 Celebrating the launch of this homepage.
📝 Publications
🚀 Speculative Decoding
- arXiv Think Before You Accept: Semantic Reflective Verification for Faster Speculative Decoding, Yixuan Wang, Yijun Liu, Shiyu Ji, Yuzhuang Xu, Yang Xu, Qingfu Zhu, Wanxiang Che.
- ACL2025 Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling, Xianzhen Luo, Yixuan Wang, Qingfu Zhu, Zhiming Zhang, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che. [code]
- EMNLP2024 Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training, Yixuan Wang*, Xianzhen Luo*, Fuxuan Wei, Yijun Liu, Qingfu Zhu, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che. [code]
⚡️ KV Cache Compression
- arXiv Lookahead Q-Cache: Achieving More Consistent KV Cache Eviction via Pseudo Query, Yixuan Wang*, Shiyu Ji*, Yijun Liu, Yuzhuang Xu, Yang Xu, Qingfu Zhu, Wanxiang Che.
📚 Data Augmentation
- ACL2025(Findings) Tag-Evol: Achieving Efficient Instruction Evolving via Tag Injection, Yixuan Wang*, Shiqi Zhou*, Chuanzhe Guo, Qingfu Zhu. [code]
- ACL2024(Findings) Improving Grammatical Error Correction via Contextual Data Augmentation, Yixuan Wang, Baoxin Wang, Yijun Liu, Qingfu Zhu, Dayong Wu, Wanxiang Che. [code]
✏️ Grammatical Error Correction
- ACL2024(Findings) Improving Grammatical Error Correction via Contextual Data Augmentation, Yixuan Wang, Baoxin Wang, Yijun Liu, Qingfu Zhu, Dayong Wu, Wanxiang Che. [code]
- LREC-COLING2024 LM-Combiner: A Contextual Rewriting Model for Chinese Grammatical Error Correction, Yixuan Wang, Baoxin Wang, Yijun Liu, Dayong Wu, Wanxiang Che. [code]
- COLING2022(Oral) Adaptive Unsupervised Self-Training for Disfluency Detection, Zhongyuan Wang, Yixuan Wang, Shaolei Wang, Wanxiang Che. [code]
🎖 Honors and Awards
- First Prize of the Wu Wenjun Artificial Intelligence Science and Technology Progress Award (吴文俊人工智能科技进步一等奖), 2024
- National Scholarship, 2023
- First-Class Academic Scholarship, 2022
- People's Scholarship (2018, 2019, 2020)
- 1st place in the Chinese Essay Fluency Evaluation shared task, CCL 2023
📖 Education
- 2022.09 - Present, Successive Master-PhD Program, Harbin Institute of Technology, Harbin.
- 2018.09 - 2022.06, Undergraduate, Harbin Institute of Technology, Harbin.
💻 Internships
- 2025.06 - Present, Baidu, China.
- 2023.06 - 2023.09, Joint Laboratory of HIT and iFLYTEK Research (HFL), China.