Task 019: Robustness of Neural Systems

Event Date: February 25, 2021
Time: 11:00 am (ET) / 8:00 am (PT)
Youngeun Kim, Yale University
Towards Deep, Interpretable, and Robust Spiking Neural Networks: Algorithmic Approaches

Abstract:

Spiking Neural Networks (SNNs) have recently emerged as an alternative to deep learning due to their large energy-efficiency benefits on neuromorphic hardware. In this presentation, we present key techniques for training SNNs that bring substantial benefits in latency, accuracy, interpretability, and robustness.
 
Training Deep SNNs
Training SNNs with surrogate gradients offers computational benefits due to short latency and is also considered a more bio-plausible approach. However, because spiking neurons are non-differentiable, training is difficult, and surrogate-gradient methods have thus been limited to shallower networks than the ANN-to-SNN conversion method. To address this training issue, we revisit batch normalization and propose a temporal Batch Normalization Through Time (BNTT) technique. The temporally evolving learnable parameters in BNTT allow a neuron to control its spike rate across time-steps, enabling low-latency and low-energy training from scratch. BNTT significantly reduces latency (more than 4X faster than state-of-the-art SNNs) and energy consumption (9X less than a standard ANN) while preserving competitive classification accuracy.
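To make the idea concrete, below is a minimal PyTorch sketch of time-step-specific batch normalization inside a leaky integrate-and-fire (LIF) forward pass. The names (BNTT, lif_step) and all hyperparameters are illustrative assumptions, not the authors' released implementation; in actual training, a surrogate gradient would replace the derivative of the hard threshold during backpropagation.

import torch
import torch.nn as nn

class BNTT(nn.Module):
    # One BatchNorm1d per time-step: the time-step-specific learnable
    # scale/shift lets a neuron modulate its spike rate over time.
    def __init__(self, num_features, num_steps):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(num_steps))

    def forward(self, current, t):
        return self.bns[t](current)

def lif_step(mem, current, threshold=1.0, leak=0.95):
    # Leaky integrate-and-fire update with a soft reset. The hard
    # threshold is non-differentiable; surrogate gradients smooth it.
    mem = leak * mem + current
    spikes = (mem >= threshold).float()
    return spikes, mem - spikes * threshold

# Toy forward pass over T time-steps for one fully connected layer.
T, batch, in_dim, out_dim = 8, 4, 784, 256
fc, bntt = nn.Linear(in_dim, out_dim), BNTT(out_dim, T)
x = torch.rand(batch, in_dim)               # pixel intensities in [0, 1]
mem = torch.zeros(batch, out_dim)
for t in range(T):
    inp = (torch.rand_like(x) < x).float()  # Poisson-like rate coding
    spikes, mem = lif_step(mem, bntt(fc(inp), t))

Keeping separate normalization statistics and parameters per time-step is what lets the spike rate evolve over the simulation window instead of being forced toward one stationary distribution.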
 
Interpretability in SNNs
Another critical limitation of SNNs is the lack of interpretability. While considerable attention has been given to optimizing SNNs, the development of explainability is still in its infancy. We present a bio-plausible visualization tool for SNNs, called Spike Activation Map (SAM), that is compatible with BNTT training. SAM highlights spikes with short inter-spike intervals, which carry discriminative information for classification. The approach requires no backpropagation, circumventing the error-accumulation problem encountered in conventional visualization tools for ANNs (such as Grad-CAM). We provide a comprehensive analysis of how internal spikes behave across various SNN training configurations, depending on optimization type and leak behavior.
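The following forward-only sketch illustrates the inter-spike-interval scoring idea as described above. The function name, the exponential weighting, and gamma are assumptions made for illustration; the key property matching the abstract is that recent, frequent spikes score high and no gradients are computed.

import math
import torch

def spike_activation_map(spikes, t, gamma=0.5):
    # spikes: binary tensor of shape (T, C, H, W) recorded from one layer.
    # Each past spike contributes exp(-gamma * (t - t_prev)), so neurons
    # with short inter-spike intervals accumulate high scores. Summing
    # over channels yields a heatmap; no backpropagation is involved.
    T, C, H, W = spikes.shape
    ncs = torch.zeros(C, H, W)
    for t_prev in range(t + 1):
        ncs += spikes[t_prev] * math.exp(-gamma * (t - t_prev))
    return ncs.sum(dim=0)  # (H, W) heatmap at time-step t

Because the map is built purely from recorded spike times, it sidesteps the gradient error accumulation that backpropagation-based tools like Grad-CAM can suffer from.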
 
Robustness in SNNs
Finally, with the proposed BNTT and SAM, we highlight the robustness of SNNs. Compared to ANNs and conventional SNN training techniques, BNTT significantly decreases the number of time-steps while capturing the temporal dynamics of spike trains. As a result, BNTT shows strong robustness to both Gaussian noise and the FGSM attack. Moreover, SAM yields nearly consistent interpretations even when the network is presented with adversarial examples. Overall, deploying SNNs with BNTT and SAM in security-critical systems (e.g., military defense) offers substantial advantages in robust performance and interpretation.
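For reference, FGSM (the attack named above) is the standard fast gradient sign method; a minimal sketch follows. It assumes model is a wrapper that unrolls the SNN over its time-steps and returns class logits for an image batch, and eps is a placeholder perturbation budget.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    # Step by eps in the sign of the input gradient of the loss,
    # then clamp back to the valid image range [0, 1].
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

Robustness here means that classification accuracy, and SAM's interpretation of the input, degrade less under such perturbations than they do for comparable ANNs.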