Brain-machine interfaces (BMIs, sometimes "brain-computer interfaces," BCIs) use neural recording devices together with decoding algorithms to transform neural activity directly into spoken speech, text, control of a robotic arm, and the like. Although pioneered in animal models, they are now moving into human clinical trials. Achieving useful prostheses for paralyzed or otherwise incapacitated individuals presents a number of engineering problems. Our lab is interested primarily in improving decoding algorithms, but we also interact with engineers working on device development and hardware design, as well as with neuroscientists and medical doctors.
Algorithmic improvements will come through the application of cutting-edge machine learning, especially artificial neural networks, and we are working on this. But current neural data are much "noisier" than the typical datasets analyzed in computer science and require different approaches, tailored to their statistical properties, which is why statistical learning theory and graphical models play a role in the group's research.
The group's foundational work is in (1) decoding spoken speech from electrocorticography (ECoG) and (2) decoding arm movements from Utah arrays implanted in monkeys. In both cases we established state-of-the-art results, in the former case by a very large margin.
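To make "decoding" concrete, here is a minimal sketch of the kind of problem involved in (2): mapping binned spike counts from a multi-electrode array to arm kinematics. Everything here is an illustrative assumption, not the lab's actual pipeline: the synthetic log-linear tuning model, the ridge-regression decoder, and all dimensions are chosen purely for demonstration.

```python
import numpy as np

# Illustrative sketch only (not the lab's method): decode 2-D arm velocity
# from binned spike counts with ridge regression.
rng = np.random.default_rng(0)
n_bins, n_units = 2000, 96                 # 96 channels, as on a Utah array
true_W = rng.normal(size=(n_units, 2))     # hypothetical linear tuning

velocity = rng.normal(size=(n_bins, 2))    # latent kinematics
rates = np.exp(0.1 * velocity @ true_W.T)  # assumed log-linear tuning curves
spikes = rng.poisson(rates)                # Poisson spike counts per bin

# Ridge-regression decoder: W_hat = (X'X + lam*I)^{-1} X'Y
lam = 1.0
X, Y = spikes, velocity
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_units), X.T @ Y)
Y_hat = X @ W_hat
r2 = 1 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()
```

A linear decoder like this is only a baseline; the noisiness of the Poisson counts is exactly what motivates the more structured models mentioned above.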
The great embarrassment of computational neuroscience is that, after decades of work and despite increasingly high-fidelity data, we still don't understand the "neural code." (Some dispute even the existence of such a code.) What is the basic currency of information: spike (i.e., action-potential) arrival times? spike rates? something else? Are neural signals intrinsically noisy? or are we failing to marginalize nuisance variables out of our experiments? or are we just reading the code wrong? Over (roughly) how many neurons is the representation of real-world variables distributed? Or do populations of neurons perhaps encode probability distributions over these variables? If the synaptic plasticity in cortical circuits is really implementing learning algorithms, how do the learning rules of those algorithms map onto the observed plasticity relationships (e.g., STDP)?
Our group is interested in questions like these, especially as they relate to human and animal behavior. But the ultimate aim is always to provide testable hypotheses for experimental neuroscientists, including our collaborators at Purdue.
Today, machine learning is dominated by a technology that originated some 75 years ago in meditations on the nervous system (and Carnap's formal logic). Since then, artificial neural networks (ANNs) have advanced by importing further ideas from neuroscience, most notably receptive fields (convolutional networks) and stochastic, Poisson-like transmission of information (dropout). It seems likely that investigations of biological and machine learning will continue to have a fruitful relationship.
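As a small concrete example of the second borrowing, here is "inverted" dropout, the standard regularizer the paragraph likens to stochastic synaptic transmission. The function name and parameters are ours, for illustration: each activation is kept with probability p at training time and rescaled by 1/p, so the expected activation is unchanged.

```python
import numpy as np

# Minimal sketch of inverted dropout: multiplicative Bernoulli "noise"
# on activations, loosely analogous to unreliable synaptic transmission.
def dropout(activations, p=0.5, rng=None, train=True):
    if not train:
        return activations                      # identity at test time
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) < p    # keep with probability p
    return activations * mask / p               # rescale so E[output] = input

rng = np.random.default_rng(0)
h = np.ones((100_000,))
h_drop = dropout(h, p=0.8, rng=rng)
# Each unit is either zeroed or scaled to 1/0.8 = 1.25; the mean stays near 1.
```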
For example: the learning algorithms for the most powerful (and most popular) ANNs for time-series data require access to past information. How could a biological organism keep such traces around? Or are there algorithms that don't require them? We have shown, for instance, that in at least certain architectures, learning performance is not impaired when backprop-through-time is replaced with appropriately chosen unsupervised learning.
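The "past information" problem can be made explicit with a toy example. The sketch below runs backprop-through-time by hand on a tiny tanh RNN (the architecture, dimensions, and loss are our assumptions, chosen for clarity, not a model from our work): the backward pass walks through the cached hidden states h_0..h_{T-1}, and without that stored trajectory the gradient simply cannot be formed.

```python
import numpy as np

# Toy illustration of why backprop-through-time needs a trace of the past:
# the gradient w.r.t. the recurrent weights involves every stored hidden state.
rng = np.random.default_rng(0)
T, n = 5, 3
W = 0.5 * rng.normal(size=(n, n))   # recurrent weights
xs = rng.normal(size=(T, n))        # input sequence

# Forward pass: cache the whole hidden-state trajectory (the "trace").
hs = [np.zeros(n)]
for t in range(T):
    hs.append(np.tanh(W @ hs[-1] + xs[t]))

# Backward pass for L = 0.5 * ||h_T||^2: the chain rule revisits each
# cached state; a biological circuit would need some analogue of this memory.
grad_W = np.zeros_like(W)
delta = hs[-1]                               # dL/dh_T
for t in reversed(range(T)):
    delta = delta * (1 - hs[t + 1] ** 2)     # back through tanh
    grad_W += np.outer(delta, hs[t])         # requires stored h_t
    delta = W.T @ delta                      # propagate one step further back
```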
We are broadly interested in questions like these, including their mathematical formalization.