The 8088
arXiv cs.LG · AI Research · Apr 20

Optimizing Stochastic Gradient Push under Broadcast Communications

★★☆☆☆ significance 2/5

The paper proposes a new method for optimizing the mixing matrix in decentralized federated learning under broadcast communications. Because Stochastic Gradient Push (SGP) admits asymmetric mixing matrices and directed communication graphs, the researchers can tailor both to the broadcast setting and significantly reduce convergence time.
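The key property SGP relies on is that the mixing matrix only needs to be column-stochastic, not symmetric, so each node can simply broadcast to its out-neighbors. A minimal push-sum sketch of that mechanism, assuming a hypothetical 4-node directed graph (the graph and weights below are illustrative, not taken from the paper):

```python
import numpy as np

# Push-sum mixing, the averaging mechanism underlying Stochastic Gradient
# Push. Each node broadcasts the same packet to all out-neighbors, so the
# mixing matrix P need only be column-stochastic (columns sum to 1).

n = 4
# Directed edges (j -> i): a ring 0->1->2->3->0 plus a shortcut 0->2.
out_neighbors = {0: [0, 1, 2], 1: [1, 2], 2: [2, 3], 3: [3, 0]}

P = np.zeros((n, n))
for j, targets in out_neighbors.items():
    for i in targets:
        P[i, j] = 1.0 / len(targets)   # uniform split over out-neighbors

rng = np.random.default_rng(0)
x = rng.normal(size=n)   # each node's local value (stand-in for parameters)
w = np.ones(n)           # push-sum weights correct for the asymmetry of P
target = x.mean()

for _ in range(300):
    x = P @ x            # every node pushes its (value, weight) downstream
    w = P @ w

z = x / w                # de-biased estimates: all nodes reach the average
print(np.allclose(z, target))
```

In full SGP, each node additionally takes a local stochastic-gradient step between mixing rounds; the paper's contribution is choosing the entries of `P` well under the broadcast constraint.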

Why it matters: Efficient decentralized training architectures are critical for scaling federated learning across bandwidth-constrained wireless networks.
Read the original at arXiv cs.LG

Tags

#federated-learning #stochastic-gradient-push #decentralized-optimization #wireless-networks
