Yan N, Su Y, Deng Y, Schober R (2025): Federated Fine-Tuning of LLMs: Framework Comparison and Research Directions
Publication Type: Journal article
Publication year: 2025
Journal Volume: 63
Page Range: 52-58
Journal Issue: 10
Federated learning (FL) provides a privacy-preserving solution for fine-tuning pretrained large language models (LLMs) using distributed private datasets, enabling task-specific adaptation while preserving data privacy. However, fine-tuning the extensive parameters in LLMs is particularly challenging in resource-constrained federated scenarios due to the significant communication and computational costs. To gain a deeper understanding of how these challenges can be addressed, this article conducts a comparative analysis of three advanced federated LLM (FedLLM) frameworks that integrate knowledge distillation (KD) and split learning (SL) to mitigate these issues: 1) FedLLMs, where clients upload model parameters or gradients to enable straightforward and effective fine-tuning; 2) KD-FedLLMs, which leverage KD for efficient knowledge sharing via logits; and 3) Split-FedLLMs, which split the LLMs into two parts, with one part executed on the client and the other on the server, to balance the computational load. Each framework is evaluated based on key performance metrics, including model accuracy, communication overhead, and client-side computational load, offering insights into their effectiveness for various federated fine-tuning scenarios. Through this analysis, we identify framework-specific optimization opportunities to enhance the efficiency of FedLLMs and discuss broader research directions, highlighting open opportunities to better adapt FedLLMs for real-world applications. A use case is presented to demonstrate the performance comparison of these three frameworks under varying configurations and settings.
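To make the communication trade-off described in the abstract concrete, the following minimal sketch contrasts what a client would transmit per round under each of the three frameworks. It is not taken from the paper; the adapter size, proxy-dataset size, and layer dimensions are illustrative assumptions only.

```python
# Illustrative sketch (assumed sizes, not from the paper): compares the uplink
# payload a client sends per round under the three FedLLM frameworks.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LoRA-style adapter fine-tuned on the client (full LLM stays frozen).
adapter_params = rng.standard_normal(1_000_000).astype(np.float32)  # ~4 MB trainable weights

# 1) FedLLM: client uploads the trainable parameters (or their gradients).
fedllm_payload = adapter_params

# 2) KD-FedLLM: client uploads only logits computed on a shared proxy dataset.
proxy_samples, vocab_size = 512, 32_000
kd_payload = rng.standard_normal((proxy_samples, vocab_size)).astype(np.float16)

# 3) Split-FedLLM: the LLM is cut at a layer boundary; the client runs the first
#    part and uploads intermediate activations for each training batch.
batch, seq_len, hidden = 8, 1024, 4096
split_payload = rng.standard_normal((batch, seq_len, hidden)).astype(np.float16)

for name, payload in [("FedLLM (parameters/gradients)", fedllm_payload),
                      ("KD-FedLLM (logits on proxy data)", kd_payload),
                      ("Split-FedLLM (activations per batch)", split_payload)]:
    print(f"{name:40s} uplink per round ~ {payload.nbytes / 1e6:.1f} MB")
```

Under these assumed sizes, the sketch simply shows that the three frameworks exchange different kinds of artifacts (weights, logits, or activations), which is why their communication overhead and client-side load differ as the article's comparison highlights.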
APA:
Yan, N., Su, Y., Deng, Y., & Schober, R. (2025). Federated Fine-Tuning of LLMs: Framework Comparison and Research Directions. IEEE Communications Magazine, 63(10), 52-58. https://doi.org/10.1109/MCOM.001.2400770
MLA:
Yan, Na, et al. "Federated Fine-Tuning of LLMs: Framework Comparison and Research Directions." IEEE Communications Magazine 63.10 (2025): 52-58.