Robust Deep Learning in Computer Vision

Interdisciplinary Areas: Data and Engineering Applications, Autonomous and Connected Systems

Project Description

Recognizing the vulnerability of deep neural networks to adversarial attacks across a variety of application domains, the machine learning community has taken an increasing interest in understanding the robustness of deep learning. Despite the tremendous success of neural networks on numerous computer vision tasks, fundamental challenges remain, e.g., predictions that are distorted by small input perturbations. This project sits at the intersection of machine learning, computer vision, and mathematical analysis: we will explore robust deep learning for computer vision tasks. The work will combine theoretical contributions in machine learning with algorithm development and testing in real-world computer vision systems.
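To make the fragility concrete, the sketch below (a hypothetical illustration, not the project's method) applies a fast-gradient-sign-style perturbation in the spirit of Goodfellow et al. to a toy logistic classifier: a perturbation bounded by a small L-infinity budget is enough to collapse a confident prediction. The model, dimension, and budget are all assumptions chosen for illustration.

```python
import numpy as np

# Toy linear "model": p(class 1 | x) = sigmoid(w.x + b).
# All quantities here are illustrative assumptions.
rng = np.random.default_rng(0)
d = 784                              # MNIST-like input dimension
w = rng.normal(size=d)
b = 0.0

def predict(x):
    """Probability of class 1 under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input the model classifies confidently as class 1.
x = w / np.linalg.norm(w)            # unit input aligned with w
p_clean = predict(x)

# FGSM-style step: for cross-entropy loss with label y = 1, the
# input gradient is (p - 1) * w; stepping along sign(grad) with a
# small L-infinity budget eps pushes the score down.
eps = 0.1
grad = (p_clean - 1.0) * w           # d(loss)/dx for y = 1
x_adv = x + eps * np.sign(grad)      # ||x_adv - x||_inf <= eps
p_adv = predict(x_adv)

print(f"clean confidence:       {p_clean:.3f}")
print(f"adversarial confidence: {p_adv:.3f}")
```

The flip happens because the perturbation's effect scales with the sum of |w_i| while the clean score scales only with ||w||, so in high dimensions even a tiny per-pixel budget moves the score by a large amount.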

Start Date

03/01/2022

Postdoc Qualifications

The desired candidate will have a PhD in ECE, CS, Math, Statistics, or a related area, with experience in both mathematical analysis and machine learning programming.

Co-Advisors

Guang Lin, guanglin@purdue.edu, Department of Mathematics,
url: https://www.math.purdue.edu/~lin491

Qiang Qiu, qqiu@purdue.edu, School of Electrical and Computer Engineering,
url: https://web.ics.purdue.edu/~qqiu 

Bibliography

Douglas Heaven, "Why deep-learning AIs are so easy to fool," Nature, vol. 574, no. 7777, pp. 163-166, October 2019.

Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow, "Improving the Robustness of Deep Neural Networks via Stability Training," CVPR 2016.