Feng Zhu, Aritra Mitra, Robert W. Heath Jr.
IEEE Asilomar Conference on Signals, Systems, and Computers, 2025
This is a pure-theory paper. We study a general distributed stochastic approximation (SA) problem with heterogeneous agents, where the goal is to find the root of the average operator across agents.
This setting captures a wide range of applications, including:
- Federated learning
- Multi-agent reinforcement learning (RL)
- Variational inequalities and fixed-point problems
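In symbols (notation assumed here, following standard SA conventions), each of the $M$ agents holds an operator $\bar{G}_i$ that it can only query through noisy samples, and the goal is to find the root $\theta^\star$ of the average operator:

$$
\frac{1}{M}\sum_{i=1}^{M} \bar{G}_i(\theta^\star) = 0, \qquad \bar{G}_i(\theta) = \mathbb{E}_{\xi_i}\left[G_i(\theta;\xi_i)\right].
$$

Heterogeneity means the individual roots of the $\bar{G}_i$ may all differ from $\theta^\star$.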
Existing approaches typically suffer from one or more of the following:
- ❌ Lack of linear speedup in sample complexity
- ❌ Bias due to heterogeneity across agents
- ❌ High communication cost
We propose DisSACC (Distributed Stochastic Approximation with Constant Communication), which is based on a simple but effective idea:
Instead of performing many noisy local updates, we aggregate samples locally to construct a refined operator, and perform fewer, higher-quality updates.
- Each agent collects multiple samples per round
- Constructs a variance-reduced operator
- Performs one update per round
- Server aggregates across agents
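The round structure above can be sketched in a few lines. This is a minimal toy simulation, not the paper's exact algorithm: the linear operators, noise model, step size, and sample count below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: agent i holds a linear operator
# G_i(theta) = b_i - A_i @ theta, observed only through Gaussian noise.
M, d = 8, 4  # number of agents, problem dimension
A = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(M)]
A = [0.5 * (Ai + Ai.T) + d * np.eye(d) for Ai in A]  # symmetric, well-conditioned
b = [rng.standard_normal(d) for _ in range(M)]

def noisy_G(i, theta, sigma=1.0):
    """One stochastic sample of agent i's operator."""
    return b[i] - A[i] @ theta + sigma * rng.standard_normal(d)

# Ground-truth root of the average operator: mean(A) @ theta* = mean(b).
theta_star = np.linalg.solve(np.mean(A, axis=0), np.mean(b, axis=0))

def dissacc(rounds=50, K=200, eta=0.1):
    """Sketch of one DisSACC-style loop: per round, each agent averages K
    local samples into a variance-reduced operator estimate, takes a single
    update step, and the server averages the agents' iterates."""
    theta = np.zeros(d)
    for _ in range(rounds):
        local = []
        for i in range(M):
            # Variance-reduced operator: average of K noisy samples.
            G_hat = np.mean([noisy_G(i, theta) for _ in range(K)], axis=0)
            local.append(theta + eta * G_hat)  # one update per round
        theta = np.mean(local, axis=0)         # server aggregation
    return theta

theta_hat = dissacc()
print(np.linalg.norm(theta_hat - theta_star))  # small residual
```

Because each agent communicates once per round rather than once per sample, the number of communication rounds is decoupled from the total sample count.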
DisSACC achieves:
- ✅ Exact convergence to the global solution
- ✅ Linear speedup in the number of agents (M-fold variance reduction)
- ✅ No heterogeneity-induced bias
- ✅ Near-constant communication complexity $\tilde{O}(1)$
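The linear-speedup claim rests on a basic fact: averaging independent samples across $M$ agents (each averaging $K$ samples locally) shrinks the noise variance by a factor of $MK$. A quick numerical check under an assumed Gaussian noise model:

```python
import numpy as np

# Back-of-envelope check (assumed setup, not from the paper): averaging
# M * K independent unit-variance samples reduces the variance by M * K.
rng = np.random.default_rng(1)
M, K, trials = 8, 100, 20000
single = rng.standard_normal(trials)                       # one sample each
averaged = rng.standard_normal((trials, M * K)).mean(axis=1)
ratio = single.var() / averaged.var()
print(ratio)  # close to M * K = 800
```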
This work shows that it is possible to simultaneously achieve:
- statistical efficiency (sample complexity)
- communication efficiency
- robustness to heterogeneity
which are typically conflicting objectives in federated and multi-agent learning systems.
You can find the paper here:
If you find this work useful, please cite:
@inproceedings{zhu2025dissacc,
title={Distributed Stochastic Approximation with Constant Communication},
author={Zhu, Feng and Mitra, Aritra and Heath Jr., Robert W.},
booktitle={IEEE Asilomar Conference on Signals, Systems, and Computers},
year={2025}
}