About Me

Hello! I am Kunlun Zhu, a graduate student at the University of Illinois Urbana-Champaign, where I am fortunate to work with Prof. Jiaxuan You at Ulab. Before that, I spent two years as a Research Assistant in the Natural Language Processing group at Tsinghua University and as an Algorithm Engineer at ModelBest Inc., both under the guidance of Prof. Zhiyuan Liu. I am also proud to be a member of OpenBMB.

My research journey has been diverse and enriching. I spent six months as a Research Assistant on the Graph Team at the Mila Quebec AI Institute, collaborating with Prof. Jian Tang, and a year as a research intern at Carnegie Mellon University’s Robotics Institute, working alongside Prof. Katia Sycara. More details about my experience can be found in my CV.

I’m always eager to expand my research horizons! If you’re a master’s or undergraduate student seeking research experience, or a PhD student interested in collaboration, I’d be delighted to hear from you. Feel free to reach out to me at kunlunz2@illinois.edu to explore potential research opportunities together.

Research

My research interests are centered around large language models (LLMs) and their applications. I’m particularly fascinated by:

  • Multimodal aspects of LLMs
  • Instruction tuning techniques
  • LLM applications, including:
    • Agent-based systems
    • Tool learning
    • Multi-agent systems
    • Agents for scientific research
  • Retrieval-Augmented Generation (RAG) & QA systems

I’m excited about pushing the boundaries of what’s possible with LLMs and exploring how they can be leveraged to solve complex problems across various domains.

Selected Publications & Preprints

  1. K. Zhu, Y. Luo, D. Xu, R. Wang, S. Yu, S. Wang, Y. Yan, Z. Liu, X. Han, Z. Liu, et al. “RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework.” arXiv:2408.01262, 2024.

  2. Y. Qin, S. Hu, Y. Lin, W. Chen, N. Ding, G. Cui, Z. Zeng, Y. Huang, C. Xiao, C. Han, et al. “Tool Learning with Foundation Models.” arXiv:2304.08354, 2023. [Under review at Nature Communications]

  3. Y. Qin, Z. Cai, D. Jin, L. Yan, S. Liang, K. Zhu, Y. Lin, X. Han, N. Ding, H. Wang, et al. “WebCPM: Interactive Web Search for Chinese Long-form Question Answering.” In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), vol. 1, pp. 8968-8988, 2023.

  4. Y. Qin, S. Liang, Y. Ye, K. Zhu, L. Yan, Y. Lu, Y. Lin, X. Cong, et al. “ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs.” In International Conference on Learning Representations (ICLR 2024, spotlight), 2024.

  5. S. Liang*, K. Zhu*, R. Tian*, Y. Qin, H. Wang, X. Cong, Z. Liu, X. Liu, M. Sun. “Exploring Format Consistency for Instruction Tuning.” Transactions on Machine Learning Research, 2023.

  6. K. Zhu, S. Liang, X. Han, Z. Zheng, G. Zeng, Z. Liu, M. Sun. “QASnowball: An Iterative Bootstrapping Framework for High-Quality Question-Answering Data Generation.” arXiv:2309.10326, 2023.

  7. X. Tang*, Q. Jin*, K. Zhu*, T. Yuan*, Y. Zhang*, W. Zhou, M. Qu, Y. Zhao, J. Tang, Z. Zhang, et al. “Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science.” In ICLR 2024 LLM Agents Workshop, 2024.

  8. T. Feng*, K. Zhu*, C. Jin*, J. Liu*, H. Tu, Z. Cheng, G. Lin, J. You. “How Far Are We From AGI.” arXiv:2405.10313, 2024.

  9. C. Qian, Z. Xie, Y. Wang, W. Liu, K. Zhu, Y. Dang, Z. Du, W. Chen, C. Yang, Z. Liu, et al. “Scaling Large-Language-Model-based Multi-Agent Collaboration.” arXiv:2406.07155, 2024.

Services

  • ICLR 2025: Reviewer
  • NeurIPS 2025: Main Conference Reviewer
  • ICLR 2024: Reviewer for the “How Far Are We From AGI” and “LLM Agents” workshops
  • ACL 2024: Reviewer for the “Wordplay” workshop
  • ACL ARR 2024: Reviewer (June cycle)