I’m a Postdoctoral Researcher at The Hong Kong University of Science and Technology under the supervision of Prof. Bo Li and Prof. Xiaowen Chu. Previously, I obtained my PhD degree in Computer Science from Hong Kong Baptist University under the supervision of Prof. Xiaowen Chu and Prof. Amelie Chi Zhou, co-supervised by Prof. Bo Han. Before that, I received my Bachelor’s degree from Huazhong University of Science and Technology (HUST).

Research Interests I am interested in understanding how deep learning models are optimized and how they acquire knowledge and perform reasoning. Over at least the next year, I will focus on LLM reasoning mechanisms, agent workflows, efficient fine-tuning and inference, and practical LLM applications in computer science and other areas where knowledge can be conveniently represented digitally.

I’m open to academic collaborations. If you are interested, please feel free to contact me.

🔥 News

See all news items

  • [2025.01]  🎉🎉 Our paper “Hot-pluggable Federated Learning: Bridging General and Personalized FL via Dynamic Selection” is accepted at ICLR 2025. This paper proposes a selective federated learning approach to integrate personalized modules into general federated learning. (Paper)
  • [2025.01]  🎉🎉 Our paper “STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs” is accepted at ICLR 2025. This paper introduces STBLLM, a novel approach that breaks the 1-bit barrier in language models through structured binarization. (Paper)
  • [2025.01]  🎉🎉 Our paper “The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve?” is accepted at the ICLR Blogpost Track 2025. This blog proposes the lottery LLM hypothesis: for a given LLM and task, there exists a smaller lottery LLM capable of matching the performance of the original LLM with the assistance of multi-step reasoning and external tools. (Paper)
  • [2025.01]  🎉🎉 Our paper “Can LLM Simulations Truly Reflect Humanity? A Deep Dive” is accepted at the ICLR Blogpost Track 2025. This blog rethinks LLM-based simulations, emphasizing both their limitations and what is needed to advance them, and offers actionable insights and strategies for enhancing the applicability of LLM simulations to human society. (Paper)
  • [2025.01]  🎉🎉 Our paper “ParZC: Parametric Zero-Cost Proxies for Efficient NAS” is selected for an Oral Presentation at AAAI 2025!
  • [2024.12]  🎉🎉 Our paper “ParZC: Parametric Zero-Cost Proxies for Efficient NAS” is accepted at AAAI 2025! The Parametric Zero-Cost Proxies (ParZC) method improves zero-shot Neural Architecture Search by addressing unequal node importance and introducing novel techniques for uncertainty estimation and architecture ranking. (paper and code will come soon…)
  • [2024.11]  🎉🎉 I was selected as a Top Reviewer of NeurIPS 2024 for both the main and D&B tracks (Link).
  • [2024.10]  🎉🎉 Our paper “Hot Pluggable Federated Learning” has been selected by the FL@FM-NeurIPS’24 workshop to receive the Outstanding Student Paper Award! Congratulations to all co-authors!
  • [2024.10]  🎉🎉 Our paper “FSMoE: A Flexible and Scalable Training System for Sparse Mixture-of-Experts Models” is accepted at ASPLOS 2025! In this paper, we design and implement a new training system that modularizes the operators of the entire MoE model, providing more fine-grained computation and communication scheduling and achieving better computation-communication overlap through appropriate gradient segmentation. (paper and code will come soon…)
  • [2024.09]  🎉🎉 Our paper “Hot Pluggable Federated Learning” is accepted at the Workshop on Federated Foundation Models@NeurIPS 2024 as an Oral paper! In this paper, we propose a new method that treats model heads as pluggable modules appended to the model backbone. (paper and code will come soon…)
  • [2024.09]  🎉🎉 Our paper “FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion” is accepted at NeurIPS 2024 as a Spotlight paper! This work identifies the causes of the low performance of one-shot FL and proposes FuseFL, which progressively trains and fuses the DNN model in a bottom-up manner, reducing communication costs to an extremely low level. (paper and code will come soon…)
  • [2024.09]  🎉🎉 Our paper “Discovering Sparsity Allocation for Layer-wise Pruning of Large Language Models” is accepted at NeurIPS 2024. In this paper, we present a new method for optimizing layer-wise sparsity allocation in large language models. (paper and code will come soon…)
  • [2024.09]  🎉🎉 Our paper “Should We Really Edit Language Models? On the Evaluation of Edited Language Models” is accepted at NeurIPS 2024. In this paper, we benchmark LLM editing methods and examine how editing affects LLM performance. (paper and code will come soon…)

📖 Education

  • 2020.09 - 2024.08, Hong Kong Baptist University, PhD in Computer Science
  • 2014.09 - 2018.06, Huazhong University of Science and Technology, Bachelor’s in Telecommunications Engineering

💻 Work & Research Experience

  • 09/2024-present: Postdoctoral Researcher, The Hong Kong University of Science and Technology, advised by Prof. Bo Li and Prof. Xiaowen Chu.
  • 09/2023-08/2024: Visiting Researcher, The Hong Kong University of Science and Technology (Guangzhou), advised by Prof. Xiaowen Chu.
  • 02/2023-05/2023: Visiting Researcher, National University of Singapore, advised by Prof. Bingsheng He.
  • 06/2022-10/2022: Research Intern, FedML Inc., advised by Dr. Chaoyang He.
  • 10/2018-09/2020: Research Assistant, Hong Kong Baptist University, advised by Prof. Xiaowen Chu.

📝 Selected Publications

See full publication list and Google Scholar link.

* denotes equal contribution; 📧 denotes the corresponding author.

  • K. Lai*, Z. Tang*, X. Pan, P. Dong, X. Liu, H. Chen, L. Shen, B. Li, X. Chu. Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing. arXiv preprint, 2025.

  • X. Liu*, Z. Tang*, P. Dong, Z. Li, B. Li, X. Hu, X. Chu. ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference. arXiv preprint, 2025.

  • X. Liu, Z. Tang, H. Chen, P. Dong, Z. Li, X. Zhou, B. Li, X. Hu, X. Chu. Can LLMs Maintain Fundamental Abilities under KV Cache Compression? arXiv preprint, 2025.

  • L. Shen*, Z. Tang*, L. Wu, Y. Zhang, X. Chu, T. Qin, B. Han. Hot-pluggable Federated Learning: Bridging General and Personalized FL via Dynamic Selection. In ICLR 2025.

  • Z. Tang, X. Liu, Q. Wang, P. Dong, B. He, X. Chu, B. Li. The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve? ICLR Blogpost Track 2025.

  • Q. Wang, Z. Tang, B. He. Can LLM Simulations Truly Reflect Humanity? A Deep Dive. ICLR Blogpost Track 2025.

  • P. Dong, L. Li, Y. Zhong, D. Du, R. Fan, Y. Chen, Z. Tang, Q. Wang, W. Xue, Y. Guo, X. Chu. STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs. In ICLR 2025.

  • P. Dong, L. Li, Z. Tang, X. Liu, Z. Wei, Q. Wang, X. Chu. ParZC: Parametric Zero-Cost Proxies for Efficient NAS. In AAAI 2025.

  • X. Pan, W. Lin, L. Zhang, S. Shi, Z. Tang, R. Wang, B. Li, X. Chu. FSMoE: A Flexible and Scalable Training System for Sparse Mixture-of-Experts Models. In ASPLOS 2025.

  • Z. Tang, Y. Zhang, P. Dong, Y. Cheung, A. C. Zhou, B. Han, X. Chu. FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion. In NeurIPS 2024 (spotlight).

  • L. Shen*, Z. Tang*💡, L. Wu, Y. Zhang, X. Chu, T. Qin, B. Han. Hot Pluggable Federated Learning. In Workshop of Federated Foundation Models@NeurIPS 2024 (Oral, Outstanding Student Paper Award).

  • L. Li, P. Dong, Z. Tang, X. Liu, Q. Wang, W. Luo, W. Xue, Q. Liu, X. Chu, Y. Guo. Discovering Sparsity Allocation for Layer-wise Pruning of Large Language Models. In NeurIPS 2024.

  • Q. Li, X. Liu, Z. Tang, P. Dong, Z. Li, X. Pan, X. Chu. Should We Really Edit Language Models? On the Evaluation of Edited Language Models. In NeurIPS 2024.

  • P. Dong, L. Li, X. Liu, Z. Tang, X. Liu, Q. Wang, X. Chu. LPZero: Language Model Zero-cost Proxy Search from Zero. In EMNLP 2024 Findings.

  • Z. Tang, J. Huang, R. Yan, Y. Wang, Z. Tang💡📧, S. Shi, A. C. Zhou, X. Chu📧. Bandwidth-Aware and Overlap-Weighted Compression for Communication-Efficient Federated Learning. In ICPP 2024.

  • P. Dong, L. Li, Z. Tang, X. Liu, X. Pan, Q. Wang, X. Chu📧. Evolving Symbolic Pruning Metric From Scratch for Large Language Models. In ICML 2024.

  • Z. Tang, Y. Zhang, S. Shi, X. Tian, T. Liu, B. Han, X. Chu📧. FedImpro: Measuring and Improving Client Update in Federated Learning. In ICLR 2024.

  • Y. Wang, Y. Chen, Z. Li, Z. Tang, R. Guo, X. Wang, Q. Wang, AC Zhou, X. Chu📧. BurstGPT: A Real-world Workload Dataset to Optimize LLM Serving Systems. arXiv preprint arXiv:2401.17644.

  • Y. Wang, S. Shi, X. He, Z. Tang, X. Pan, Y. Zheng, X. Wu, AC Zhou, B. He, X. Chu📧. Reliable and Efficient In-Memory Fault Tolerance of Large Language Model Pretraining. arXiv preprint arXiv:2310.12670.

  • Z. Tang, Y. Wang, X. He, L. Zhang, X. Pan, Q. Wang, R. Zeng, K. Zhao, S. Shi📧, B. He, X. Chu📧. FusionAI: Decentralized Training and Deploying LLMs with Massive Consumer-Level GPUs. In IJCAI-LLM Workshop, 2023.

  • Z. Tang, S. Shi, B. Li, X. Chu📧. GossipFL: A Decentralized Federated Learning Framework with Sparsified and Adaptive Communication. In IEEE Transactions on Parallel and Distributed Systems, 2022.

  • Z. Tang, Y. Zhang*, S. Shi, X. He, B. Han, X. Chu📧. Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning. In Proceedings of the 39th International Conference on Machine Learning, 2022.

  • C. He, A. D. Shah, Z. Tang, D. Fan, A. N. Sivashunmugam, K. Bhogaraju, M. Shimpi, L. Shen, X. Chu, M. Soltanolkotabi and S. Avestimehr. FedCV: A Federated Learning Framework for Diverse Computer Vision Tasks. In FL-AAAI-22 workshop, 2022.

  • Z. Tang, Z. Hu, S. Shi, Y. Cheung, Y. Jin, Z. Ren, X. Chu📧. Data Resampling for Federated Learning with Non-IID Labels. In FTL-IJCAI workshop, 2021.

  • Z. Tang, S. Shi, and X. Chu📧. Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection. In ICDCS 2020.

  • Z. Tang, Y. Wang, Q. Wang, and X. Chu. The Impact of GPU DVFS on the Energy and Performance of Deep Learning: An Empirical Study. In Proceedings of the Tenth ACM International Conference on Future Energy Systems, e-Energy ’19.

  • Z. Tang, X. Chu, R. Ran, S. Lee, S. Shi, Y. Zhang, Y. Wang, A. Liang, S. Avestimehr, C. He📧. FedML Parrot: A Scalable Federated Learning System via Heterogeneity-aware Scheduling on Sequential and Hierarchical Training. arXiv preprint arXiv:2303.01778.

  • Z. Tang, S. Shi, W. Wang, B. Li and X. Chu📧. Communication-Efficient Distributed Deep Learning: A Comprehensive Survey. CoRR, abs/2003.06307, 2020.

👔 Professional Activities

  • Invited Program Committee Member (Reviewer):
    • Machine Learning: KDD’23,25, ICML’22,23,24,25, NeurIPS’22,23,24,25, ICLR’23,24,25, AAAI’23,25, AISTATS’23,25, UAI’22, IJCAI-ECAI’22
    • Networking & Systems: HPCC’21, ICDCS’22,23,24,25, ICPADS’22, IWQoS’23,24
  • Invited Reviewer for Journals
    • Machine Learning:
      • IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
      • Transactions on Machine Learning Research (TMLR)
      • IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
      • Journal of Artificial Intelligence Research (JAIR)
    • Networking & Systems:
      • ACM Transactions on Architecture and Code Optimization (TACO)
      • IEEE Transactions on Parallel and Distributed Systems (TPDS)
      • IEEE Journal on Selected Areas in Communications (JSAC)
      • IEEE Network Magazine
      • IEEE/ACM Transactions on Networking (ToN)
      • IEEE Transactions on Network Science and Engineering (TNSE)
      • ACM Transactions on Intelligent Systems and Technology (TIST)
      • IEEE Computational Intelligence Magazine
      • Journal of Parallel and Distributed Computing (JPDC)
    • General:
      • ACM Computing Surveys

🎖 Honors and Awards

  • 2023/24 Fall, Research Performance Award, HKBU CS Department (Link).
  • 2024, NeurIPS Top Reviewer (Link).
  • 2024, Outstanding Student Paper Award of FL@FM-NeurIPS’24 workshop.
  • 2024, NeurIPS Scholar Award.
  • 2024, ICLR Scholar Award.
  • 2023/24 Spring, Research Performance Award, HKBU CS Department (Link).
  • 2022/23, Research Performance Award, HKBU CS Department (Link).
  • 2022/23 Fall, Teaching Performance Award, HKBU CS Department (Link).
  • 2021/22, Research Performance Award, HKBU CS Department (Link).
  • 2021/22 Fall, Teaching Performance Award, HKBU CS Department (Link).
  • 2020, Scholarship for Nominees of Hong Kong PhD Fellowship Scheme, HKBU CS Department (Link).
  • 2018, Outstanding Graduate, HUST.
  • 2016, Scholarship of Academic Excellence, HUST.

📕 Teaching

  • Teaching Assistant at HKBU
    • 2023 Spring Semester, COMP7940 Cloud Computing
    • 2022 Fall Semester, COMP7015 Artificial Intelligence
    • 2022 Spring Semester, COMP7550 IT Project Management
    • 2021 Fall Semester, COMP7015 Artificial Intelligence
    • 2021 Spring Semester, COMP7930 Big Data Analytics

🛠️ Projects
