Nonconvex (Decentralized) Stochastic Optimization: Statistical and Computational Guarantees

Interdisciplinary Areas: Data and Engineering Applications

Project Description

There is growing interest in large-scale machine learning and optimization. This project targets learning and optimization tasks formulated as stochastic, nonconvex, constrained optimization problems, with special interest in instances containing nonconvex stochastic constraints (in various forms). The goal of the project is to develop efficient algorithms with strong statistical and computational guarantees. Both Stochastic Approximation and Sample Average Approximation methods will be postulated, to model on-line and off-line scenarios, respectively. Decompositions of the aforementioned algorithms will also be considered, implementable in particular on mesh networks where no centralized coordination is present; these computational architectures are more compliant with future edge-computing systems. Revealing trade-offs among statistical error, computational complexity, and communication costs will be an important goal of the project.
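To illustrate the two modeling paradigms mentioned above, the sketch below contrasts Stochastic Approximation (on-line: one noisy gradient step per sample) with Sample Average Approximation (off-line: minimize a fixed sample-average objective) on a toy problem, minimizing f(x) = E[(x - ξ)²], whose minimizer is the mean of ξ. All names, step sizes, and sample counts are illustrative choices, not part of the project description.

```python
import random

random.seed(0)
TRUE_MEAN = 3.0  # minimizer of f(x) = E[(x - xi)^2], xi ~ N(TRUE_MEAN, 1)

def sample():
    return random.gauss(TRUE_MEAN, 1.0)

def sa(num_iters=5000, c=0.5):
    """Stochastic Approximation (on-line): one noisy gradient step
    per sample, with diminishing step sizes gamma_k = c / k."""
    x = 0.0
    for k in range(1, num_iters + 1):
        grad = 2.0 * (x - sample())   # unbiased estimate of f'(x)
        x -= (c / k) * grad
    return x

def saa(n=5000):
    """Sample Average Approximation (off-line): fix N samples, then
    minimize the deterministic sample-average objective; here the
    minimizer is available in closed form as the sample mean."""
    xs = [sample() for _ in range(n)]
    return sum(xs) / n

x_sa, x_saa = sa(), saa()
```

Both estimates converge to the true minimizer; the project studies how such schemes, and their decentralized variants, behave for nonconvex and constrained problems where no closed form exists.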

Start Date

August 2025

Postdoc Qualifications

We are seeking candidates who have a record of scholarship in computational optimization and statistics. Candidates may come from different backgrounds, such as operations research, statistics, electrical engineering, computer science, or a related field. All applicants must have strong mathematical and computing skills. Prior knowledge of stochastic programming is a plus. Applicants should possess a Ph.D. in Mathematics, Operations Research, Statistics, Electrical Engineering, or another relevant field. Applicants are expected to be outstanding, intellectually curious researchers at an early stage of their scholarly careers.

Co-advisors

Gesualdo Scutari
Pedro and Barbara Granadillo Professor of Industrial Engineering and Electrical and Computer Engineering
Email: gscutari[AT]purdue[DOT]edu
Url: https://engineering.purdue.edu/~gscutari/

Raghu Pasupathy
Professor of Statistics
Email: pasupath[AT]purdue[DOT]edu
Url: https://web.ics.purdue.edu/~pasupath/

Bibliography

F. Facchinei, V. Kungurtsev, L. Lampariello, and G. Scutari, "Ghost Penalty in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity," Mathematics of Operations Research, 46(2): 595–626, 2021.

Y. Sun, A. Daneshmand, and G. Scutari, "Distributed Optimization Based on Gradient-tracking Revisited: Enhancing Convergence Rate via Surrogation," SIAM Journal on Optimization, 32(4): 354–385, 2022.

Y. Ji, G. Scutari, Y. Sun, and H. Honnappa, "Distributed Sparse Regression via Penalization," Journal of Machine Learning Research, 24(127): 1–62, 2023.

R. Pasupathy and Y. Song, "Adaptive Retrospective Approximation for Solving Two-Stage Stochastic Programs," SIAM Journal on Optimization, to appear.

R. Pasupathy, P. W. Glynn, S. Ghosh, and F. S. Hashemi, "On Sampling Rates in Simulation-Based Recursions," SIAM Journal on Optimization, 28(1): 45–73, 2018.