
June 19, 2019

PhD Seminar - Loris Canelli

Event Date: June 19, 2019
Hosted By: Dr. Gesualdo Scutari
Time: 10:00 - 11:00 AM
Location: GRIS 302
Contact Name: Anita Park
Contact Phone: 40680
Contact Email: apark@purdue.edu
Open To: all
Priority: No
School or Program: Industrial Engineering
College Calendar: Show
“Asynchronous Parallel Algorithms for Big-Data Nonconvex Optimization”

ABSTRACT

Recent years have witnessed a surge of interest in distributed optimization methods for multi-agent systems. Many such problems can be formulated as the cooperative minimization of a smooth (possibly nonconvex) function plus a nonsmooth convex regularizer. In many practical applications the smooth term has a partitioned structure, meaning that it can be written as a finite sum of functions, each depending only on a subset of the whole vector of variables. Each agent knows only one term of the sum-utility, and it can exchange information through a network with the agents whose terms depend on the same variables as its own.
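In symbols, the setup above can be sketched as follows (the notation is illustrative, not taken verbatim from the talk):

\min_{x} \; F(x) \;=\; \sum_{i=1}^{N} f_i\big(x_{S_i}\big) \;+\; G(x),

where each f_i is smooth and possibly nonconvex, depends only on the block of variables x_{S_i} indexed by S_i \subseteq \{1,\dots,n\}, G is a nonsmooth convex regularizer (e.g., an \ell_1 norm), and agent i knows only f_i.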
 
This partitioned setup can arise naturally from a physical network connecting different agents, as in network utility maximization and resource allocation problems, state estimation in power networks, cooperative localization in wireless networks, or map building in robotic networks. On the other hand, the partitioned setup can also describe settings where the network merely encodes logical couplings among the optimization variables, as in many machine learning problems (e.g., LASSO, matrix completion, PCA).
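For concreteness, one way LASSO can fit this template (an illustrative assumption, not necessarily the formulation used in this work) is when the observations are split across agents and each agent's measurements involve only a subset of the variables:

\min_{x} \; \sum_{i=1}^{N} \tfrac{1}{2}\,\big\| b_i - A_i x_{S_i} \big\|_2^2 \;+\; \lambda \|x\|_1,

where agent i holds the data (A_i, b_i), which touch only the block x_{S_i}, and the \ell_1 term plays the role of the nonsmooth regularizer G; the "network" is then the graph induced by which agents share variables.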
 
The focus of this research project is on the design of distributed asynchronous algorithms for this kind of framework. By asynchronous we mean a scheme in which: a) agents can update their block-variables at any time, without any coordination; and b) when updating their own variables, agents can use delayed, out-of-sync information from the other agents.
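A minimal sketch of what such an asynchronous protocol can look like is given below, in Python, under simplifying assumptions: a shared-memory toy with a LASSO-like objective and a proximal-gradient block update standing in for the actual SCA-based rule; all names and the specific local step are hypothetical.

import threading
import numpy as np

# Hypothetical toy, not the framework from the talk: the composite objective
#   0.5 * || sum_i A_i x_i - b ||^2 + lam * ||x||_1
# is minimized by N agents; agent i owns block x_i and the columns A_i, updates
# x_i at any time, and reads possibly stale (delayed) contributions from others.

rng = np.random.default_rng(0)
N, d, m = 4, 5, 30                      # number of agents, block size, samples
A = [rng.standard_normal((m, d)) for _ in range(N)]
b = rng.standard_normal(m)
lam, iters = 0.1, 300

x = [np.zeros(d) for _ in range(N)]     # block-variables, one per agent
y = [A[i] @ x[i] for i in range(N)]     # shared, possibly stale products A_i x_i
steps = [1.0 / np.linalg.norm(A[i].T @ A[i], 2) for i in range(N)]

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def agent(i):
    for _ in range(iters):
        # read out-of-sync information published by the other agents
        residual = sum(y[j] for j in range(N)) - b
        grad_i = A[i].T @ residual                      # partial gradient w.r.t. x_i
        x[i] = soft_threshold(x[i] - steps[i] * grad_i, steps[i] * lam)
        y[i] = A[i] @ x[i]                              # publish updated contribution

threads = [threading.Thread(target=agent, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

x_full = np.concatenate(x)
obj = 0.5 * np.linalg.norm(sum(y) - b) ** 2 + lam * np.abs(x_full).sum()
print("final objective:", obj)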
 
The result of this research project is the first asynchronous algorithmic framework tailored to the aforementioned problem. The proposed framework is based on Successive Convex Approximation (SCA) techniques and can be applied to a wide variety of network-partitioned settings, without requiring any specific communication protocol or central coordination, but only some very mild and natural assumptions on how information spreads through the agents. It is proved that every limit point of the sequence generated by the algorithm is a stationary point of the problem, and that a stationarity measure is reduced at a sublinear rate. If a standard error bound condition holds, the algorithm can be shown to converge to a single point, achieving a linear convergence rate in both the distance to the solution and the function values. This condition is less stringent than strong convexity, which is commonly required to establish a linear convergence rate, and it is satisfied by a wide class of problems of interest, such as LASSO, Group LASSO, and logistic regression, to name a few. Thus, the proposed scheme is the first asynchronous algorithm to achieve a linear convergence rate for these applications.
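As a generic, hedged illustration of the SCA idea (not necessarily the exact surrogate used in this work): whenever agent i is activated, it replaces the smooth part with a strongly convex surrogate built around its current, possibly delayed, view x^{k-d_k} of the variables and updates its own block as

\hat{x}_i \;=\; \arg\min_{x_i}\; \tilde{f}_i\big(x_i;\, x^{k-d_k}\big) + g_i(x_i),
\qquad
x_i^{k+1} \;=\; x_i^{k} + \gamma\,\big(\hat{x}_i - x_i^{k}\big),

where \tilde{f}_i(\cdot\,; z) is strongly convex and matches the gradient of the smooth term at z (a proximal-linearized model is one common choice), g_i is the portion of the regularizer acting on block i, and \gamma is a step-size.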