Quantized Rank Reduction: A Communications-Efficient Federated Learning Scheme for Network-Critical Applications

Authors:
Dimitrios Kritsiolis
Constantine Kotropoulos

Keywords: Federated learning; Tucker decomposition; SVD; quantization.

Abstract:
Federated learning is a machine learning approach that enables multiple devices (i.e., agents) to train a shared model cooperatively without exchanging raw data. This technique keeps data localized on user devices, ensuring privacy and security: each agent trains the model on its own data and shares only model updates. Communication overhead is a significant challenge, however, due to the frequent exchange of model updates between the agents and the central server. In this paper, we propose a communication-efficient federated learning scheme that uses low-rank approximation of neural network gradients together with quantization to significantly reduce the network load of the decentralized learning process with minimal impact on the model’s accuracy.
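
As a rough illustration of the idea in the abstract, the sketch below compresses a 2-D gradient matrix with a truncated SVD and 8-bit uniform quantization of the low-rank factors before transmission, then reconstructs it on the server side. This is a minimal sketch under stated assumptions, not the paper's implementation: the names compress_gradient and decompress_gradient, the symmetric quantizer, and the rank/bit-width choices are all illustrative, and the paper's actual scheme also handles higher-order gradient tensors (via Tucker decomposition, per the keywords).

```python
import numpy as np

def compress_gradient(grad, rank=4, num_bits=8):
    """Client side: truncated SVD of a 2-D gradient, then uniform
    quantization of the two low-rank factors (illustrative only)."""
    # Keep only the top-`rank` singular triplets.
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    U_k = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    V_k = Vt[:rank, :]

    def quantize(x):
        # Symmetric uniform quantizer to signed num_bits-level integers,
        # stored here in int8 containers.
        scale = np.max(np.abs(x)) / (2 ** (num_bits - 1) - 1)
        return np.round(x / scale).astype(np.int8), scale

    qU, scale_U = quantize(U_k)
    qV, scale_V = quantize(V_k)
    return qU, scale_U, qV, scale_V

def decompress_gradient(qU, scale_U, qV, scale_V):
    """Server side: dequantize the factors and multiply them back
    to recover an approximate gradient."""
    return (qU.astype(np.float32) * scale_U) @ (qV.astype(np.float32) * scale_V)

# Toy example: a 256x512 gradient that is approximately rank-4, mimicking
# the rapidly decaying spectra often observed in neural-network gradients.
rng = np.random.default_rng(0)
G = rng.standard_normal((256, 4)) @ rng.standard_normal((4, 512))
G += 0.01 * rng.standard_normal(G.shape)

qU, sU, qV, sV = compress_gradient(G, rank=4, num_bits=8)
G_hat = decompress_gradient(qU, sU, qV, sV)

sent_bytes = qU.nbytes + qV.nbytes + 8          # two int8 factors + two scales
full_bytes = G.astype(np.float32).nbytes        # cost of sending G directly
print(f"payload reduction: {full_bytes / sent_bytes:.0f}x")
print(f"relative error:    {np.linalg.norm(G - G_hat) / np.linalg.norm(G):.4f}")
```

Folding the singular values into the left factor means only two small integer matrices and two scalar scales travel over the network; for this 256x512 example that is roughly a 170x reduction in payload relative to sending the full float32 gradient.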

Pages: 112 to 117

Copyright: Copyright (c) IARIA, 2025

Publication date: July 6, 2025

Published in: IARIA Congress 2025, The 2025 IARIA Annual Congress on Frontiers in Science, Technology, Services, and Applications

ISBN: 978-1-68558-284-5

Location: Venice, Italy

Dates: July 6, 2025 to July 10, 2025