Adversarial Attacks and Robustness

Research Summary

The ability to maliciously perturb any input to a deep learning model and cause a high-confidence misclassification exposes a lack of security and credibility in what these models have learned. We are working on methods that explain why adversarial attacks occur and how models can be made more robust to them.
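As a minimal sketch of the kind of attack described above, the example below applies a fast-gradient-sign-style perturbation to a toy linear classifier. The weights, inputs, and epsilon value are illustrative assumptions, not taken from any model in the publications listed here; for a linear model the loss-gradient sign reduces to the sign of the weight vector.

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w.x + b > 0.
# Weights and bias are illustrative assumptions, not a trained model.
w = np.array([1.0, -2.0, 1.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, epsilon):
    # FGSM-style step: move the input against its current class along
    # the sign of the loss gradient (just sign(w) for a linear model).
    y = predict(x)
    grad_sign = np.sign(w) * (1 if y == 0 else -1)  # push toward the other class
    return x + epsilon * grad_sign

x = np.array([0.5, 0.2, 0.3])          # clean input, classified as class 1
x_adv = fgsm_perturb(x, epsilon=0.5)   # small perturbation flips the label
```

Even this small epsilon suffices to flip the toy model's prediction, which is the behavior that motivates the robustness work summarized above.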

Current Students: Indranil Chakraborty, Chankyu Lee, Wachirawit Ponghiran, Saima Sharmin

Recent Publications:
  1. Panda, Priyadarshini, and Kaushik Roy. "Explainable learning: Implicit generative modelling during training for adversarial robustness." arXiv preprint arXiv:1807.02188 (2018). 
  2. Sharmin, Saima, et al. "A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks." IJCNN 2019.
  3. Panda, Priyadarshini, Indranil Chakraborty, and Kaushik Roy. "Discretization based Solutions for Secure Machine Learning against Adversarial Attacks." IEEE Access (2019).