Current Projects

I. Energy Efficient Deep Learning

II. Adversarial Attacks and Robustness

III. Continual and Incremental Learning

IV. In-Memory Computing Devices, Circuits, and Systems using CMOS and post-CMOS Memories for Neuromorphic/Machine Learning Applications

V. Stochastic Computing: Algorithms to Devices

VI. Neuromorphic Computing Enabled by CMOS and Emerging Device Technologies

VII. Training Methodologies for Deep Spiking Neural Networks

VIII. Recurrent Liquid State Machines for Spatiotemporal Pattern Recognition


I. Energy Efficient Deep Learning

With the increasing availability of compute power, state-of-the-art CNNs are growing rapidly in size, making them prohibitively expensive to deploy in power-constrained environments. To enable ubiquitous use of deep learning techniques, we are investigating techniques that reduce the size of a model with minimal degradation in accuracy. The algorithmic techniques we have devised include identifying redundancy among the weights of a model in a single shot, discretized learning and inference methods, and strategies that modify the network architecture to enable efficient inference. We have also developed hardware-driven approximation methods that leverage the inherent error resilience of CNNs for model compression.
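
As an illustration of the first idea, the sketch below uses PCA on a layer's filter responses to estimate how many filters actually carry information. It is a minimal sketch in the spirit of the PCA-driven design papers cited below; the variance threshold and data are purely illustrative assumptions.

```python
# Hedged sketch: estimating the "useful" width of a CNN layer with PCA.
# The threshold, data, and interface are illustrative, not the exact method.
import numpy as np

def significant_filters(activations, variance_kept=0.99):
    """activations: (num_samples, num_filters) responses of one conv layer,
    e.g. spatially averaged feature maps. Returns how many principal
    components are needed to retain `variance_kept` of the total variance,
    a proxy for the number of non-redundant filters."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    # Singular values give the per-component variance of the activations.
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    cumulative = np.cumsum(var) / var.sum()
    return int(np.searchsorted(cumulative, variance_kept) + 1)

# Example: 512 filters whose responses actually live in a ~100-dim subspace.
acts = np.random.randn(4096, 100) @ np.random.randn(100, 512)
print(significant_filters(acts))   # suggests the layer could be much narrower
```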

Recent Publications:

  1. Garg, Isha, Priyadarshini Panda, and Kaushik Roy. "A low effort approach to structured CNN design using PCA." arXiv preprint arXiv:1812.06224 (2018).
  2. Panda, Priyadarshini, Abhronil Sengupta, and Kaushik Roy. "Conditional deep learning for energy-efficient and enhanced pattern recognition." 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2016.
  3. Chakraborty, Indranil, et al. "PCA-driven Hybrid network design for enabling Intelligence at the Edge." arXiv preprint arXiv:1906.01493 (2019).
  4. P. Panda, A. Ankit, P. Wijesinghe, and K. Roy, “FALCON: Feature Driven Selective Classification for Energy-Efficient Image Recognition,” IEEE Transactions on CAD, 2016.
  5. Sarwar, Syed Shakib, et al. "Energy-efficient neural computing with approximate multipliers." ACM Journal on Emerging Technologies in Computing Systems (JETC) 14.2 (2018): 16.
Current Students: Jason Allred, Aayush Ankit, Indranil Chakraborty, Isha Garg, Deboleena Roy

II. Adversarial Attacks and Robustness

The ability to maliciously perturb any input to a deep learning model and cause a misclassification with high confidence betrays a lack of security and credibility in what the models have learned. We are working on methods that explain why adversarial attacks occur and how models can be made more robust to them.
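
For concreteness, the sketch below implements the standard fast gradient sign method (FGSM), one of the simplest ways to generate such perturbations. It is shown only to make the threat model concrete; it is not the specific attack or defence studied in the publications below, and a PyTorch classifier returning logits is assumed.

```python
# Minimal FGSM-style attack sketch (standard method, shown for illustration).
import torch

def fgsm(model, x, y, epsilon=8/255):
    """Return an adversarially perturbed copy of x within an L-inf ball."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```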

Recent Publications:

  1. Panda, Priyadarshini, and Kaushik Roy. "Explainable learning: Implicit generative modelling during training for adversarial robustness." arXiv preprint arXiv:1807.02188 (2018). 
  2. Sharmin, Saima, et al. "A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks." IJCNN 2019.
  3. Panda, Priyadarshini, Indranil Chakraborty, and Kaushik Roy. "Discretization based Solutions for Secure Machine Learning against Adversarial Attacks." IEEE Access (2019).
Current Students: Indranil Chakraborty, Chankyu Lee, Wachirawit Ponghiran, Saima Sharmin 

III. Continual and Incremental Learning

The capacity to learn new things without forgetting previously acquired knowledge is innate to humans. Neural networks, however, suffer from catastrophic forgetting, which makes it hard to grow networks and learn from newly arriving data in a fluid manner. We are exploring techniques that utilize stochasticity or architectural enhancements to enable lifelong learning.

Recent Publications:

  1. Allred, Jason M., and Kaushik Roy. "Stimulating STDP to Exploit Locality for Lifelong Learning without Catastrophic Forgetting." arXiv preprint arXiv:1902.03187 (2019).
  2. Roy, Deboleena, Priyadarshini Panda, and Kaushik Roy. "Tree-CNN: A hierarchical deep convolutional neural network for incremental learning." arXiv preprint arXiv:1802.05800 (2018).
  3. Panda, Priyadarshini, et al. "Asp: Learning to forget with adaptive synaptic plasticity in spiking neural networks." IEEE Journal on Emerging and Selected Topics in Circuits and Systems 8.1 (2017): 51-64.
  4. Sarwar, Syed Shakib, Aayush Ankit, and Kaushik Roy. "Incremental learning in deep convolutional neural networks using partial network sharing." arXiv preprint arXiv:1712.02719 (2017).
Current Students: Aayush Ankit, Jason Allred, Deboleena Roy

IV. In-Memory Computing Devices, Circuits, and Systems using CMOS and post-CMOS Memories for Neuromorphic/Machine Learning Applications

A.   Compute-in-Memory using CMOS SRAM and DRAM Arrays

In-memory computing is a promising approach for achieving significant throughput and energy benefits. We are exploring novel ideas to incorporate compute capabilities inside SRAM and DRAM. We have shown that digital bulk-bitwise/arithmetic operations as well as analog binary and multi-bit dot-product computations can be performed inside SRAM arrays, and that in-memory full addition can be performed using commodity DRAM banks. We are now exploring methodologies to add compute functionality to embedded-DRAM gain cells. We run neuromorphic/machine learning applications (e.g., ANNs, SNNs, k-NN) on such in-memory computing systems to evaluate their performance and energy benefits against conventional von Neumann systems.
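
A purely behavioural sketch of the analog dot-product idea is given below: several wordlines are asserted simultaneously and the bitline discharge approximates the sum of bit-wise products, limited by ADC resolution. The bit-cell behaviour and ADC granularity are illustrative assumptions, not a circuit model of the cited designs.

```python
# Behavioural sketch of an analog in-SRAM dot product (illustrative only).
import numpy as np

def in_sram_dot_product(stored_bits, inputs, adc_levels=16):
    """stored_bits: (rows,) 0/1 weights in one column; inputs: (rows,) 0/1
    wordline pulses. Returns the quantized column 'current'."""
    analog_sum = np.dot(inputs, stored_bits)          # ideal bitline discharge
    step = len(stored_bits) / adc_levels              # ADC resolution limit
    return int(np.round(analog_sum / step) * step)    # quantized readout

weights = np.random.randint(0, 2, 64)
acts = np.random.randint(0, 2, 64)
print(in_sram_dot_product(weights, acts), np.dot(acts, weights))
```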

Recent Publications:

  1. A. Agrawal, A. Jaiswal, B. Han, G. Srinivasan, and K. Roy, “Xcel-RAM: Accelerating binary neural networks in high-throughput SRAM compute arrays,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 66, no. 8, pp. 3064-3076, Aug. 2019.
  2. A. Jaiswal, I. Chakraborty, A. Agrawal, and K. Roy, “8T SRAM cell as a multi-bit dot-product engine for beyond von-Neumann computing,” arXiv preprint arXiv:1802.08601, 2018.
  3. A. Agrawal, A. Jaiswal, C. Lee, and K. Roy, “X-SRAM: Enabling in-memory Boolean computations in CMOS static random access memories,” IEEE Transactions on Circuits and Systems I: Regular Papers, no. 99, pp. 1–14, 2018.
Current Students: Amogh Agrawal, Mustafa Ali, Sangamesh Kodge

B.   Spin-Based Devices and Memristive Crossbars as In-Memory Computing Primitives

Spin-based and resistive memories are promising candidates to replace CMOS-based memory technologies due to their non-volatility. We are currently exploring methodologies to add compute capabilities to such non-volatile memories. Additionally, we develop memristor-based accelerator architectures that accelerate a wide variety of machine learning (ML) inference workloads. We are also exploring methods to model non-idealities in memristor crossbars and their effect on application accuracy.
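
The sketch below shows, at a high level, how such non-idealities can be folded into a software model of a crossbar matrix-vector multiplication: the programmed conductances are perturbed by device variation and distant columns are attenuated as a crude stand-in for line resistance. The perturbation magnitudes are assumptions for illustration only; technology-aware training, as in the publication below, incorporates a non-ideal model of this kind during training so the learned weights can partially compensate.

```python
# Hedged sketch of crossbar non-ideality modelling (illustrative constants).
import numpy as np

def crossbar_mvm(G, v, sigma=0.1, line_drop=0.002):
    """G: (rows, cols) conductances, v: (rows,) input voltages."""
    ideal = v @ G
    G_var = G * (1 + sigma * np.random.randn(*G.shape))    # device variation
    attenuation = 1 - line_drop * np.arange(G.shape[1])    # IR-drop proxy
    nonideal = (v @ G_var) * attenuation
    return ideal, nonideal

G = np.random.rand(128, 64)
v = np.random.rand(128)
ideal, actual = crossbar_mvm(G, v)
print(np.mean(np.abs(ideal - actual) / ideal))   # relative dot-product error
```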

Recent Publications:

  1. A. Ankit et al., “PUMA: A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference,” in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '19), ACM, New York, NY, USA, pp. 715-731, 2019.
  2. A. Jaiswal, A. Agrawal, and K. Roy, “In-situ, in-memory stateful vector logic operations based on voltage controlled magnetic anisotropy,” Scientific Reports, vol. 8, no. 1, p. 5738, 2018.
  3. I. Chakraborty, D. Roy and K. Roy, "Technology Aware Training in Memristive Neuromorphic Systems for Nonideal Synaptic Crossbars," in IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 2, no. 5, pp. 335-344, Oct. 2018.
  4. Amogh Agrawal, Chankyu Lee and Kaushik Roy, “X-CHANGR: Changing Memristive Crossbar Mapping for Mitigating Line-Resistance Induced Accuracy Degradation in Deep Neural Networks,” arXiv preprint arXiv:1907.00285, 2019.
Current Students: Aayush Ankit, Indranil Chakraborty, Amogh Agrawal, Mustafa Ali

C.   ROM-Embedded RAM Structures in SRAM and STT-MRAM and their Adoption for Accelerating Neuromorphic Applications

Embedding ROM storage in RAM arrays provides an almost cost-free opportunity to perform transcendental functions and other lookup-table (LUT) based computations in memory. We have developed RAM arrays that can store ROM data in the same array using memory technologies such as CMOS SRAM and STT-MRAM. We also employ such ROM-embedded RAM structures in accelerator architectures to speed up neuromorphic applications that rely heavily on LUT-based operations.
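
The sketch below shows the kind of operation a ROM-embedded RAM serves: a transcendental function reduced to a single table lookup, with the table residing in the ROM portion of the array. Table size and input range are illustrative assumptions.

```python
# Minimal LUT-based evaluation of a transcendental function (illustrative).
import numpy as np

LUT_BITS = 8
X_MAX = 8.0
# Precomputed exp(-x) table: this is what would reside in the ROM portion.
rom_lut = np.exp(-np.linspace(0.0, X_MAX, 2 ** LUT_BITS))

def exp_neg_lut(x):
    """Approximate exp(-x) for x in [0, X_MAX] with a single table lookup."""
    index = int(np.clip(x / X_MAX * (2 ** LUT_BITS - 1), 0, 2 ** LUT_BITS - 1))
    return rom_lut[index]

print(exp_neg_lut(1.5), np.exp(-1.5))   # LUT value vs. exact value
```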

Recent Publications:

  1. A. Agrawal, A. Ankit and K. Roy, "SPARE: Spiking Neural Network Acceleration Using ROM-Embedded RAMs as In-Memory-Computation Primitives," in IEEE Transactions on Computers, vol. 68, no. 8, pp. 1190-1200, 1 Aug. 2019.
  2. A. Agrawal and K. Roy, "RECache: ROM-Embedded 8-Transistor SRAM Caches for Efficient Neural Computing," 2018 IEEE International Workshop on Signal Processing Systems (SiPS), Cape Town, 2018, pp. 19-24.
  3. X. Fong, R. Venkatesan, D. Lee, A. Raghunathan and K. Roy, "Embedding Read-Only Memory in Spin-Transfer Torque MRAM-Based On-Chip Caches," in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 24, no. 3, pp. 992-1002, March 2016.
  4. D. Lee and K. Roy, "Area Efficient ROM-Embedded SRAM Cache," in IEEE Transactions on Very Large-Scale Integration (VLSI) Systems, vol. 21, no. 9, pp. 1583-1595, Sept. 2013.
  5. D. Lee, X. Fong, and K. Roy, “R-MRAM: A ROM-Embedded STT MRAM Cache,” IEEE Electron Device Letters 34 (10), 1256-1258, 2013.
Current Students: Amogh Agrawal

V. Stochastic Computing: Algorithms to Devices

Stochastic computing algorithms find widespread utility in a range of applications including stochastic neural networks, Bayesian inference, and optimization problems (graph coloring, traveling salesman problem) where exact solutions may not be required and error resiliency is built into the algorithms. CMOS-based realizations of stochastic algorithms are area- and power-intensive owing to the expensive random number generators needed to implement the stochastic operations. We have devised non-von Neumann architectures that exploit the inherent stochastic switching of Magnetic Tunnel Junctions (MTJs) in the presence of thermal noise for energy-efficient realization of these algorithms.
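
The sketch below captures the behavioural abstraction this relies on: an MTJ pulse switches with a probability that grows with the applied current, so a stream of identical pulses yields a tunable random bit-stream without an explicit random number generator. The sigmoid probability curve and its parameters are illustrative assumptions; the actual curve depends on device physics and pulse width.

```python
# Hedged behavioural model of an MTJ used as a 'stochastic bit'.
import numpy as np

def mtj_switch(input_current, i_half=1.0, steepness=4.0):
    """Return 1 if the junction switches for this pulse, else 0."""
    p_switch = 1.0 / (1.0 + np.exp(-steepness * (input_current - i_half)))
    return int(np.random.rand() < p_switch)

# A stream of identical sub-threshold pulses yields a biased random bit-stream.
bits = [mtj_switch(0.8) for _ in range(1000)]
print(sum(bits) / len(bits))   # empirical switching probability
```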

A.   Stochastic Neural Networks

We proposed stochastic Spiking Neural Networks (SNNs), realized using MTJs as stochastic spiking neurons that switch probabilistically based on the input, wherein the binary weights are trained offline using error backpropagation. In addition, we proposed SNNs composed of stochastic binary weights trained using a hardware-friendly Spike Timing Dependent Plasticity (STDP) based probabilistic learning algorithm, which can be enabled by MTJ-based synaptic crossbar arrays with a high energy barrier to realize state-compressed hardware with on-chip learning capability. Advantageously, the use of high-energy-barrier MTJs (30-40 kT, where k is the Boltzmann constant and T is the operating temperature) not only allows compact stochastic primitives but also enables the same device to be used as a stable memory element that meets data retention requirements. Such stochastic MTJ-based realizations can potentially be an order of magnitude more energy-efficient than CMOS-only implementations.

Recent Publications:

  1. Srinivasan, G., Sengupta, A. and Roy, K. "Magnetic tunnel junction based long-term short-term stochastic synapse for a spiking neural network with on-chip STDP learning," Scientific Reports, 6, p.29545, 2016.
  2. Sengupta A., Panda, P, Wijesinghe P., Kim Y., and Roy, K., “Magnetic Tunnel Junction Mimics Stochastic Cortical Spiking Neurons,” Scientific Reports, 6, p. 30039, 2016.
  3. Srinivasan, G. and Roy, K. "ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing," Frontiers in Neuroscience, 13, p.189, 2019.
  4. Sengupta, A., Srinivasan, G., Roy, D. and Roy, K. "Stochastic inference and learning enabled by magnetic tunnel junctions," In 2018 IEEE International Electron Devices Meeting (IEDM), p. 5-6, IEEE, December 2018.
  5. Sengupta, A., Parsa, M., Han, B. and Roy, K. "Probabilistic deep spiking neural systems enabled by magnetic tunnel junction," IEEE Transactions on Electron Devices, 63(7), pp.2963-2970, 2016.
  6. Liyanagedera, C.M., Sengupta, A., Jaiswal, A. and Roy, K. "Stochastic spiking neural networks enabled by magnetic tunnel junctions: From nontelegraphic to telegraphic switching regimes," Physical Review Applied, 8(6), p.064017, 2017.
Current Students: Bing Han, Chamika Liyanagedera, Maryam Parsa, Deboleena Roy, Gopal Srinivasan

B.   Stochastic Optimization

We have also demonstrated the effectiveness of stochastic MTJ-based compute primitives for efficiently realizing Bayesian inference and the Ising computing model (a variant of the Boltzmann machine) to solve difficult combinatorial optimization problems such as the traveling salesman and graph coloring problems. The proposed stochastic MTJ-based device can act as a “natural annealer”, helping the algorithms move out of local minima and arrive at near-optimal solutions.
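
A software analogue of this annealing behaviour is sketched below: spins of an Ising model flip with a temperature-dependent probability, allowing the system to escape local minima, which is the role the thermally driven MTJ plays in hardware. The coupling matrix and cooling schedule are illustrative assumptions.

```python
# Minimal Ising-style stochastic optimization sketch (illustrative schedule).
import numpy as np

def ising_anneal(J, steps=20000, t_start=2.0, t_end=0.05):
    n = J.shape[0]
    s = np.random.choice([-1, 1], n)
    for k in range(steps):
        T = t_start * (t_end / t_start) ** (k / steps)   # cooling schedule
        i = np.random.randint(n)
        dE = 2 * s[i] * np.dot(J[i], s)                  # energy change of flipping spin i
        # Accept downhill flips always, uphill flips with Boltzmann probability.
        if dE <= 0 or np.random.rand() < np.exp(-dE / T):
            s[i] = -s[i]
    return s

# Example: a small antiferromagnetic ring, whose ground state alternates spins.
n = 8
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = -1.0
print(ising_anneal(J))
```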

Recent Publications:

  1. Shim, Y., Chen, S., Sengupta, A. and Roy, K. "Stochastic spin-orbit torque devices as elements for bayesian inference," Scientific reports, 7(1), p.14101, 2017.
  2. Shim, Y., Jaiswal, A. and Roy, K. "Ising computation based combinatorial optimization using spin-Hall effect (SHE) induced stochastic magnetization reversal," Journal of Applied Physics, 121(19), p.193902, 2017.
  3. Shim, Y., Sengupta, A. and Roy, K. "Biased Random Walk Using Stochastic Switching of Nanomagnets: Application to SAT Solver," IEEE Transactions on Electron Devices, 65(4), pp.1617-1624, 2018.
  4. Wijesinghe, P., Liyanagedera, C. and Roy, K. "Analog approach to constraint satisfaction enabled by spin orbit torque magnetic tunnel junctions," Scientific reports, 8(1), p.6940, 2018.
Current Students: Shuhan Chen, Chamika Liyanagedera

VI. Neuromorphic Computing Enabled by CMOS and Emerging Device Technologies

A.   Deep Neural Networks Enabled by Emerging Technologies

In the current era of ubiquitous autonomous intelligence, there is a growing need to move Artificial Intelligence (AI) to the edge to cope with the increasing demand for autonomous systems such as drones, self-driving cars, and smart wearables. Deploying deep neural networks on resource-constrained edge devices necessitates significant rethinking of the conventional von Neumann architecture. We have proposed non-von Neumann architectures enabled by emerging device technologies such as Magnetic Tunnel Junctions (MTJs), Ag-Si memristors, and Resistive Random Access Memories (ReRAMs) for efficiently realizing Analog Neural Networks (ANNs) and bio-plausible Spiking Neural Networks (SNNs).

Recent Publications:

  1. Sengupta, A., Shim, Y. and Roy, K. "Proposal for an all-spin artificial neural network: Emulating neural and synaptic functionalities through domain wall motion in ferromagnets," IEEE transactions on biomedical circuits and systems, 10(6), pp.1152-1160, 2016.
  2. Chakraborty, I., Roy, D. and Roy, K. "Technology Aware Training in Memristive Neuromorphic Systems for Nonideal Synaptic Crossbars," IEEE Transactions on Emerging Topics in Computational Intelligence, 2(5), pp.335-344, 2018.
  3. Ankit, A., Hajj, I.E., Chalamalasetti, S.R., Ndu, G., Foltin, M., Williams, R.S., Faraboschi, P., Hwu, W.M.W., Strachan, J.P., Roy, K. and Milojicic, D.S. "PUMA: A programmable ultra-efficient memristor-based accelerator for machine learning inference," In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 715-731, ACM, April 2019.
  4. Sengupta, A., Banerjee, A. and Roy, K. "Hybrid spintronic-CMOS spiking neural network with on-chip learning: Devices, circuits, and systems," Physical Review Applied, 6(6), p.064003, 2016.
  5. Sengupta, A., Ankit, A. and Roy, K. "Efficient Neuromorphic Systems and Emerging Technologies: Prospects and Perspectives," In Emerging Technology and Architecture for Big-data Analytics, pp. 261-274, Springer, Cham, 2017.
  6. Ankit, A., Sengupta, A., Panda, P. and Roy, K. "Resparc: A reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks," In Proceedings of the 54th Annual Design Automation Conference 2017, p. 27, ACM, June 2017.
Current Students: Aayush Ankit, Aparajita Banerjee, Indranil Chakraborty, Deboleena Roy
 

B.   Spin Orbit Torque (SOT)-MTJs, Ag-Si Memristors, and CMOS as ‘Stochastic Bits’ for non-Von Neumann Neural Computing

In addition to neural hardware architectures, the nature of computing, deterministic versus stochastic, has a substantial influence on computational efficiency. Deterministic neuronal and synaptic models require multi-bit precision to store the parameters governing their dynamics. We proposed ‘stochastic bit’ enabled neurons and synapses (stochastic during training and deterministic during inference) that compute probabilistically with one-bit precision for state-compressed neuromorphic computing. We presented energy-efficient realizations of the ‘stochastic bit’ using SOT-MTJ, Ag-Si memristor, and CMOS technologies, and demonstrated the efficacy of ‘stochastic bit’ enabled neural networks using binary ANNs and SNNs for energy- and memory-efficient training and/or inference on-chip.

Recent Publications:

  1. Sengupta, A., Parsa, M., Han, B. and Roy, K. "Probabilistic deep spiking neural systems enabled by magnetic tunnel junction," IEEE Transactions on Electron Devices, 63(7), p.2963-2970, 2016.
  2. Sengupta, A., Panda, P., Wijesinghe, P., Kim, Y. and Roy, K. "Magnetic tunnel junction mimics stochastic cortical spiking neurons," Scientific reports, 6, p.30039, 2016.
  3. Srinivasan, G., Sengupta, A. and Roy, K. "Magnetic tunnel junction based long-term short-term stochastic synapse for a spiking neural network with on-chip STDP learning," Scientific reports, 6, p.29545, 2016.
  4. Srinivasan, G., Sengupta, A. and Roy, K. "Magnetic tunnel junction enabled all-spin stochastic spiking neural network," In Design, Automation & Test in Europe Conference & Exhibition (2017), p. 530-535, IEEE, March 2017.
  5. Liyanagedera, C.M., Sengupta, A., Jaiswal, A. and Roy, K. "Stochastic spiking neural networks enabled by magnetic tunnel junctions: From nontelegraphic to telegraphic switching regimes," Physical Review Applied, 8(6), p.064017, 2017.
  6. Wijesinghe, P., Ankit, A., Sengupta, A. and Roy, K. "An all-memristor deep spiking neural computing system: A step toward realizing the low-power stochastic brain," IEEE Transactions on Emerging Topics in Computational Intelligence, 2(5), pp.345-358, 2018.
  7. Roy, D., Srinivasan, G., Panda, P., Tomsett, R., Desai, N., Ganti, R., and Roy, K. "Neural Networks at the Edge," IEEE International Conference on Smart Computing, Washington, DC, USA, pp. 45-50, 2019.
Current Students: Aayush Ankit, Bing Han, Chamika Liyanagedera, Maryam Parsa, Deboleena Roy, Gopal Srinivasan

 


VII. Training Methodologies for Deep Spiking Neural Networks

Spiking Neural Networks (SNNs), regarded as the third generation of neural networks, attempt to more closely mimic certain computations performed in the human brain to achieve higher energy efficiency in cognitive tasks. SNNs encode input information in the temporal domain using sparse spiking events. This intrinsic sparse, spike-based information processing can be exploited to improve energy efficiency in neuromorphic hardware implementations. However, SNN training algorithms are much less developed, leading to a gap in accuracy between SNNs and their ANN counterparts. We proposed training methodologies that overcome the discontinuous nature of spike trains and effectively utilize spike timing information to train large-scale SNNs that yield accuracy comparable to ANNs on complex recognition tasks.

A.   ANN-to-SNN Conversion

To circumvent the training difficulty posed by the non-differentiable dynamics of spiking neurons, we proposed an ANN-to-SNN conversion scheme for realizing deep SNNs. The scheme trains standard ANN architectures such as VGG and ResNet using ReLU activations and gradient-descent error backpropagation. The trained weights are then mapped to an SNN composed of Integrate-and-Fire (IF) spiking neurons, with suitable weight and threshold balancing mechanisms to minimize accuracy loss during SNN inference. Our work is among the first to demonstrate near-lossless ANN-to-SNN conversion and competitive accuracy on ImageNet.
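
A minimal single-layer sketch of the idea is given below, assuming the analog input is applied at every timestep: the trained weights are reused unchanged and the Integrate-and-Fire threshold is balanced to the maximum pre-activation observed on data, so spike rates approximate the ReLU activations. The exact balancing rules of the cited work are not reproduced.

```python
# Hedged sketch of ANN-to-SNN conversion for one fully-connected ReLU layer.
import numpy as np

def if_layer_rates(W, x, timesteps=200):
    """Run an integrate-and-fire layer on a constant input and return rates."""
    threshold = np.max(x @ W)            # simple data-driven threshold balancing
    v = np.zeros(W.shape[1])
    spikes = np.zeros(W.shape[1])
    for _ in range(timesteps):
        v += x @ W                       # integrate the constant input
        fired = v >= threshold
        spikes += fired
        v[fired] -= threshold            # reset by subtraction
    return spikes / timesteps            # spike rate per output neuron

W = np.random.randn(64, 10) * 0.1
x = np.abs(np.random.randn(64))
print(if_layer_rates(W, x))              # roughly proportional to relu(x @ W)
```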

Recent Publications:

  1. Sengupta, A., Ye, Y., Wang, R., Liu, C. and Roy, K. "Going deeper in spiking neural networks: VGG and residual architectures," Frontiers in neuroscience, 13, 2019.

B.   Spike-Based Error Backpropagation

To effectively incorporate spike timing information, we proposed a spike-based error backpropagation algorithm for directly training SNNs, using the low-pass filtered spike train as a differentiable approximation of the Leaky Integrate-and-Fire (LIF) spiking neuron's output. We demonstrated competitive accuracy with roughly 10× lower inference latency than that obtained with the conversion approaches.
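
The sketch below illustrates only the central ingredient, the low-pass filter that turns a discrete spike train into a smooth trace usable in gradient computation; the filter constant and neuron model are illustrative assumptions, and the full backpropagation machinery of the cited work is omitted.

```python
# Exponential low-pass filtering of a spike train (illustrative constants).
import numpy as np

def low_pass(spikes, tau=20.0):
    """Filter a (timesteps,) spike train into a smooth, differentiable trace."""
    trace = np.zeros_like(spikes, dtype=float)
    acc = 0.0
    for t, s in enumerate(spikes):
        acc = acc * np.exp(-1.0 / tau) + s
        trace[t] = acc
    return trace

spikes = (np.random.rand(100) < 0.1).astype(float)   # a sparse spike train
print(low_pass(spikes)[-5:])   # smooth proxy used in place of raw spikes
```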

Recent Publications:

  1. Lee, C., Panda, P., Srinivasan, G. and Roy, K. "Training deep spiking convolutional neural networks with stdp-based unsupervised pre-training followed by supervised fine-tuning," Frontiers in neuroscience, 12, 2018.
  2. Lee, C., Sarwar, S. S., Panda, P., Srinivasan, G. and Roy, K. "Enabling Spike-based Backpropagation in State-of-the-art Deep Neural Network Architectures," arXiv preprint arXiv:1903.06379, 2019.
Current Students: Chankyu Lee, Gopal Srinivasan

C.   Spike Timing Dependent Plasticity (STDP)

We proposed a bio-plausible STDP-based unsupervised training methodology for both fully-connected and convolutional SNNs to enable on-chip training and inference on edge devices.
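
The sketch below shows a textbook pair-based STDP update of the kind such training builds on: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, with exponentially decaying windows. The constants are illustrative assumptions, not the exact rule used in the publications below.

```python
# Minimal pair-based STDP weight update (illustrative constants).
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)       # pre before post: potentiate
    return -a_minus * np.exp(dt / tau)          # post before pre: depress

print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))
```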

Recent Publications:

  1. Srinivasan, G., Roy, S., Raghunathan, V. and Roy, K. "Spike timing dependent plasticity based enhanced self-learning for efficient pattern recognition in spiking neural networks," In 2017 International Joint Conference on Neural Networks (IJCNN), p. 1847-1854, IEEE, May 2017.
  2. Lee, C., Srinivasan, G., Panda, P. and Roy, K. "Deep spiking convolutional neural network trained with unsupervised spike timing dependent plasticity," IEEE Transactions on Cognitive and Developmental Systems, 2018.
  3. Srinivasan, G., Panda, P. and Roy, K. "STDP-based unsupervised feature learning using convolution-over-time in spiking neural networks for energy-efficient neuromorphic computing," ACM Journal on Emerging Technologies in Computing Systems (JETC), 14(4), p.44, 2017.
Current Students: Chankyu Lee, Sourjya Roy, Gopal Srinivasan

D.   Stochastic Spike Timing Dependent Plasticity (Stochastic-STDP)

We proposed STDP-based stochastic learning rules, incorporating Hebbian and anti-Hebbian mechanisms, for energy- and memory-efficient on-chip training and inference in SNNs composed of binary and quaternary synaptic weights. We also demonstrated efficient realizations of stochastic-STDP-trained binary SNNs enabled by CMOS and emerging device technologies.
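
A hedged sketch of the binary-synapse case is shown below: the spike-timing difference sets the probability that a one-bit weight switches state, giving Hebbian potentiation for causal pairs and anti-Hebbian depression for anti-causal ones. The probability constants are illustrative assumptions.

```python
# Stochastic STDP on a binary synapse (illustrative probabilities).
import numpy as np

def stochastic_stdp_update(w, t_pre, t_post, p_max=0.1, tau=20.0):
    """w is 0 or 1; returns the (possibly flipped) binary weight."""
    dt = t_post - t_pre
    p_switch = p_max * np.exp(-abs(dt) / tau)
    if dt >= 0 and w == 0 and np.random.rand() < p_switch:
        return 1     # causal pair: probabilistically potentiate
    if dt < 0 and w == 1 and np.random.rand() < p_switch:
        return 0     # anti-causal pair: probabilistically depress
    return w

print(stochastic_stdp_update(0, 10.0, 12.0))
```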

Recent Publications:

  1. Srinivasan, G., Sengupta, A. and Roy, K. "Magnetic tunnel junction based long-term short-term stochastic synapse for a spiking neural network with on-chip STDP learning," Scientific reports, 6, p.29545, 2016.
  2. Srinivasan, G. and Roy, K. "ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing," Frontiers in Neuroscience, 13, p.189, 2019.
  3. Sengupta, A., Srinivasan, G., Roy, D. and Roy, K. "Stochastic inference and learning enabled by magnetic tunnel junctions," In 2018 IEEE International Electron Devices Meeting (IEDM), p. 15-6, IEEE, December 2018.
Current Students: Deboleena Roy, Gopal Srinivasan 

VIII. Recurrent Liquid State Machines for Spatiotemporal Pattern Recognition

Liquid State Machines (LSMs) are simple networks consisting of randomly connected spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse, randomly interconnected recurrent SNNs exhibit highly non-linear dynamics that transform spatiotemporal inputs into rich high-dimensional representations based on the current and past context. These random input representations can then be efficiently interpreted by an output (readout) layer with trainable parameters. We proposed training and inference methodologies for single- and multi-liquid (ensemble) LSMs and demonstrated their efficacy on recognition (image, speech, and gesture recognition) and reinforcement learning tasks. In addition, we developed analytical tools for explaining LSM dynamics and performance.
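
The sketch below conveys the structure in a few lines: a fixed, sparse, random reservoir projects the input into a high-dimensional state, and only a linear readout is trained (here with ridge regression). For brevity a rate-based reservoir stands in for the spiking liquid, and all sizes and constants are illustrative assumptions.

```python
# Hedged reservoir-plus-readout sketch (rate-based stand-in for an LSM).
import numpy as np

def run_reservoir(u, n_res=200, sparsity=0.1, leak=0.3, seed=0):
    """u: (timesteps, n_in) input; returns (timesteps, n_res) liquid states."""
    rng = np.random.default_rng(seed)
    n_in = u.shape[1]
    W_in = rng.normal(0, 1, (n_in, n_res))
    W = rng.normal(0, 1, (n_res, n_res)) * (rng.random((n_res, n_res)) < sparsity)
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the dynamics stable
    x = np.zeros(n_res)
    states = np.zeros((len(u), n_res))
    for t, u_t in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(u_t @ W_in + x @ W)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-3):
    """Only the readout weights are learned; the reservoir stays fixed."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                           states.T @ targets)

u = np.sin(np.linspace(0, 20, 500))[:, None]        # toy temporal input
states = run_reservoir(u)
W_out = train_readout(states[:-1], u[1:])            # one-step-ahead prediction
print(np.mean((states[:-1] @ W_out - u[1:]) ** 2))   # training error
```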

Recent Publications:

  1. Panda, P. and Roy, K. "Learning to generate sequences with combination of hebbian and non-hebbian plasticity in recurrent spiking neural networks," Frontiers in neuroscience, 11, p.693, 2017.
  2. Panda, P. and Srinivasa, N. "Learning to recognize actions from limited training examples using a recurrent spiking neural model," Frontiers in neuroscience, 12, p.126, 2018.
  3. Srinivasan, G., Panda, P. and Roy, K. "Spilinc: spiking liquid-ensemble computing for unsupervised speech and image recognition," Frontiers in neuroscience, 12, p.524, 2018.
  4. Wijesinghe, P., Srinivasan, G., Panda, P. and Roy, K. "Analysis of Liquid Ensembles for Enhancing the Performance and Accuracy of Liquid State Machines," Frontiers in neuroscience, 13, p.504, 2019.
  5. Ponghiran, W., Srinivasan, G. and Roy, K. "Reinforcement Learning with Low-Complexity Liquid State Machines," arXiv preprint arXiv:1906.01695, 2019.
Current Students: Wachirawit Ponghiran, Gopal Srinivasan