I am a fourth-year Ph.D. student at the Gaoling School of Artificial Intelligence (GSAI), Renmin University of China. I am dedicated to building more powerful foundation language models.
News within a Year
- Oct 2025: Awarded the National Scholarship for Doctoral Students (Ranked 1st).
- Sept 2025: Paper “PolarQuant” accepted to NeurIPS 2025.
- July 2025: Joined ByteDance Seed.
- July 2025: Paper “HoPE” received the ACL 2025 SAC Highlights Award (Top 1.5%).
- May 2025: Paper “Autonomy-of-Experts Models” accepted to ICML 2025.
Honors and Awards
- National Scholarship for Doctoral Students (ranked 1st in GSAI, 2025)
- ACL 2025, SAC Highlights Paper Award (Top 1.5%)
- CIE-Tencent Doctoral Student Research Incentive Program, HunYuan Large Language Model Special Project, 2025 (one of 17 recipients nationwide)
- CCF-Tencent Rhino-Bird Elite Talent Program, 2024 (one of 50 recipients nationwide)
- Outstanding Innovative Talents Cultivation Funded Programs of Renmin University of China, 2023 & 2024
Academic Services
- Conference Reviewer: EMNLP & ACL (ARR Area Chair), ICML, ICLR, NeurIPS, WWW, KDD
- Journal Reviewer: ACM TIST
Internships
- 2025.07 - present, ByteDance Seed.
- 2024.05 - 2025.07, Tencent Hunyuan, mentored by Ruobing Xie.
- 2023.09 - 2024.05, Alibaba, Tongyi Lab.
- 2023.03 - 2023.09, Microsoft Research, Machine Learning Area, mentored by Xu Tan. We collaborated on the Muzic project, which has 4k stars on GitHub.
Publications within a Year (First Author)
- PolarQuant: Leveraging Polar Transformation for Efficient Key Cache Quantization and Decoding Acceleration. Songhao Wu* and Ang Lv* et al. Thirty-ninth Annual Conference on Neural Information Processing Systems (NeurIPS’25).
- Autonomy-of-Experts Models. Ang Lv et al. Proceedings of the 42nd International Conference on Machine Learning (ICML’25).
- Language Models “Grok” to Copy. Ang Lv et al. Proceedings of the 2025 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL’25, short).
- PEAR: Position-Embedding-Agnostic Attention Re-weighting Enhances Retrieval-Augmented Generation with Zero Inference Overhead. Tao Tan*, Yining Qian* and Ang Lv* et al. The Web Conference (WWW’25, oral).
- An Analysis and Mitigation of the Reversal Curse. Ang Lv et al. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP’24).
- Mixture of In-Context Experts Enhance LLMs’ Long Context Awareness. Hongzhan Lin* and Ang Lv* et al. Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS’24).