Distributed Federated Learning: Fundamental Limits, Tradeoffs, and Guarantees

Interdisciplinary Areas: Data/Information/Computation, Security and Privacy

Project Description

Federated learning has been proposed as a learning architecture that overcomes the limitations of centralized learning by distributing the training across devices, with the goal of reducing latency and the communication burden while preserving privacy. However, the design of communication- and resource-aware schemes with convergence guarantees remains an open problem for many distributed architectures, and it is not clear whether distributed algorithms can match the performance of a centralized oracle (e.g., in convergence rate and solution guarantees). This calls for a rigorous theory of computation-communication complexity lower bounds for distributed algorithms. Such a theory would identify the class of “optimal” schemes, namely those matching the lower bounds, and consequently shed light on the optimal tradeoff between the rounds of computation and communication each node of the network must perform to converge to the desired solution at the fastest rate while carrying out the fewest computations and communications.

In this project, we will investigate these questions with the aim of designing provably convergent federated learning schemes that make the best use of the limited computational and communication resources of devices. We will study the impact of communication constraints such as quantization, wireless interference, and fading, and design schemes that are robust to these impairments. Finally, motivated by ultra-low-latency applications such as mobile edge computing and vehicle-to-vehicle communications, we will go beyond the conventional federated learning approach, which relies on a master-slave topology, and also investigate a fully decentralized, device-centric approach to learning over mesh topologies, without the aid of a centralized server.
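To make the contrast between the two architectures concrete, the following is a minimal NumPy sketch, under simplifying assumptions and not the project's actual algorithms, of one communication round of (a) server-coordinated federated averaging with uniformly quantized device uploads and (b) fully decentralized gossip averaging over a mesh. The function names (local_sgd, quantize, fedavg_round, gossip_round), the quantization step delta, and the mixing matrix W are illustrative choices, not part of the project description.

```python
# Illustrative sketch only: one communication round of (a) server-coordinated
# federated averaging with quantized uploads and (b) decentralized gossip over a mesh.
import numpy as np

def local_sgd(x, grad_fn, steps=5, lr=0.1):
    """Run a few local gradient steps starting from model x."""
    for _ in range(steps):
        x = x - lr * grad_fn(x)
    return x

def quantize(v, delta=0.01):
    """Uniform quantization with step `delta`, modeling a rate-limited uplink."""
    return delta * np.round(v / delta)

def fedavg_round(x_server, grad_fns, steps=5, lr=0.1, delta=0.01):
    """(a) Master-slave round: broadcast, local SGD, quantized upload, server average."""
    updates = []
    for grad_fn in grad_fns:                              # each device starts from the broadcast model
        x_i = local_sgd(x_server.copy(), grad_fn, steps, lr)
        updates.append(quantize(x_i - x_server, delta))   # device uploads a quantized model delta
    return x_server + np.mean(updates, axis=0)            # server aggregates the deltas

def gossip_round(X, W, grad_fns, lr=0.1):
    """(b) Decentralized round: each device averages with its mesh neighbors via the
    doubly stochastic mixing matrix W, then takes a local gradient step."""
    X_mix = W @ X                                         # consensus/averaging step over the mesh
    return np.stack([X_mix[i] - lr * grad_fns[i](X_mix[i]) for i in range(X.shape[0])])

# Toy usage: 3 devices with quadratic local losses f_i(x) = 0.5 * ||x - b_i||^2.
rng = np.random.default_rng(0)
b = rng.normal(size=(3, 2))
grad_fns = [lambda x, bi=bi: x - bi for bi in b]
x = fedavg_round(np.zeros(2), grad_fns)                   # one server-coordinated round
W = np.full((3, 3), 1.0 / 3)                              # fully connected mesh, uniform weights
X = gossip_round(np.tile(x, (3, 1)), W, grad_fns)         # one decentralized round
```

In toy sketches of this kind, taking more local steps per round reduces how often devices communicate but can slow or bias convergence to the global solution, while the quantization step and the connectivity encoded in W further shape the achievable rate; characterizing this computation-communication tradeoff rigorously is one of the goals of the project.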

Start Date

June 2020

Postdoc Qualifications 

Ideal candidates have a strong background in one or more of the following areas: optimization, machine learning, information theory, distributed computing, and statistics.

Co-advisors 

Nicolo' Michelusi
michelus@purdue.edu
School of Electrical and Computer Engineering
https://engineering.purdue.edu/~michelus/

Gesualdo Scutari
gscutari@purdue.edu
School of Industrial Engineering
https://engineering.purdue.edu/~gscutari/

References 

Gesualdo Scutari and Ying Sun, "Distributed Nonconvex Constrained Optimization over Time-Varying Digraphs," Mathematical Programming, vol. 176, no. 1-2, pp. 497-544, July 2019.

Loris Cannelli, Francisco Facchinei, Vyacheslav Kungurtsev, and Gesualdo Scutari, "Asynchronous Parallel Algorithms for Nonconvex Optimization," Mathematical Programming, June 2019.

Chang-Shen Lee, Nicolo' Michelusi, and Gesualdo Scutari, "Limited Rate Distributed Weight-Balancing and Average Consensus over Digraphs," preprint, arXiv:1901.00611.

M. Hussain and N. Michelusi, "Energy-Efficient Interactive Beam Alignment for Millimeter-Wave Networks," IEEE Transactions on Wireless Communications, vol. 18, no. 2, pp. 838-851, Feb. 2019. doi: 10.1109/TWC.2018.2885041.

N. Michelusi, M. Nokleby, U. Mitra, and R. Calderbank, "Multi-Scale Spectrum Sensing in Dense Multi-Cell Cognitive Networks," IEEE Transactions on Communications, vol. 67, no. 4, pp. 2673-2688, April 2019. doi: 10.1109/TCOMM.2018.2886020.