Restoring movement and sensation after injury — by ‘decoding’ neural activity

--

The idea behind a brain-machine interface is simple. Normally, movements are planned in parts of the brain called the premotor cortex and the motor cortex, which send signals to the spinal cord that activate the muscles needed to make precise, controlled movements. People with spinal cord injury, stroke, or neurodegenerative diseases may experience paralysis because that signal stream is interrupted at some stage. Yet these people often have most of their brain areas, including the premotor and motor cortices, intact.

A brain-machine interface (BMI) involves recording the neural signals associated with movement intention directly from those parts of the brain. You develop an algorithm that “decodes,” or maps, the relationship between the neural activity and the intended movement. You can then use that algorithm to control external devices, such as a robotic prosthetic arm or a cursor on a computer screen. You can also close the loop by providing artificial sensory feedback about those movements (to correct movement errors or reinforce accurate movement commands) in the form of precisely patterned electrical current that can change ongoing neural activity in the brain.
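
To make the decoding step concrete, here is a minimal sketch in Python. It assumes binned spike counts from a 96-channel array and a simple linear (ridge-regression) mapping to two-dimensional cursor velocity; the data are simulated, and real BMI decoders are typically more sophisticated (Kalman filters, neural networks).

```python
import numpy as np

# Toy decoding example: learn a mapping from binned spike counts of N neurons
# to 2-D cursor velocity with a regularized linear model. Everything here is
# simulated; it only illustrates the idea of "decoding" movement intention.

rng = np.random.default_rng(0)

n_neurons, n_bins = 96, 2000                     # e.g., a 96-channel array, 100-ms bins
true_weights = rng.normal(size=(n_neurons, 2))   # hidden relationship to recover

# Simulated training data: firing rates and the velocities they encode
rates = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=2.0, size=(n_bins, 2))

# Fit the decoder: ridge regression, W = (X^T X + lambda*I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons), rates.T @ velocity)

# "Decode" new activity into a velocity command for a cursor or robotic arm
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
decoded_velocity = new_rates @ W
print(decoded_velocity)
```

In a real experiment, the decoded velocity would drive the cursor or robotic arm in real time, so the subject’s visual feedback closes the loop.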

This approach has the potential to restore to paralyzed individuals some independence and the ability to interact with the world. To accomplish this, we first want to record from as many neurons as possible; we think we will do a better job of figuring out what subjects intend to do if we have more channels of information. Each neuron should give us additional information, even though each neuron acts as part of a circuit and interacts with surrounding neurons.

In a research lab, to decode this neural activity, you first conduct a training session in which the subject moves their arm naturally, and you record the neural activity associated with that movement to develop your decoding algorithm. If you are working with a paralyzed person, you instead have them watch a video of the movements they are being trained on (e.g., a cursor moving across a screen), because watching a movement evokes neural activity similar to the activity evoked while actually performing it. A second training stage can then be used to “refit” and refine the algorithm, based on the neural activity recorded while the subject performs the task.
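
As a rough illustration of that two-stage procedure, the sketch below reuses the toy ridge-regression decoder from the earlier example. Stage one fits the decoder on activity recorded during observed (or watched) movement; stage two “refits” it using activity recorded during closed-loop use, taking the subject’s intended movement toward the target as the new labels. All data here are simulated placeholders.

```python
import numpy as np

def fit_ridge(rates, targets, lam=1.0):
    """Regularized least-squares mapping from binned firing rates to 2-D velocity."""
    n = rates.shape[1]
    return np.linalg.solve(rates.T @ rates + lam * np.eye(n), rates.T @ targets)

rng = np.random.default_rng(1)
n_neurons = 96

# Stage 1: fit on activity recorded while the subject moves (or watches movement)
train_rates = rng.poisson(5.0, size=(1000, n_neurons)).astype(float)
train_vel = rng.normal(size=(1000, 2))        # placeholder for recorded kinematics
W = fit_ridge(train_rates, train_vel)

# Stage 2: "refit" on activity recorded during closed-loop use; the labels are the
# movements the subject intended (e.g., straight toward the target), not the
# possibly erratic trajectory the first decoder produced.
refit_rates = rng.poisson(5.0, size=(1000, n_neurons)).astype(float)
intended_vel = rng.normal(size=(1000, 2))     # placeholder for intended directions
W = fit_ridge(np.vstack([train_rates, refit_rates]),
              np.vstack([train_vel, intended_vel]))
```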

We humans rely very heavily on visual and proprioceptive feedback to plan and execute accurate movements. Visual feedback is familiar to all of us, but many people are less familiar with proprioception — the sense of our body’s position and movement through space.

As an example, I ask people to close their eyes and touch their noses. Virtually everyone can do this, because each of us has a mental model of our body that tells us where our nose is in relation to the rest of our body and lets us plan the correct movement to reach it.

One highly effective way to improve BMI control is to provide artificial proprioception. This helps even with simpler tasks like cursor control, but it’s absolutely necessary when we move toward high-degree-of-freedom tasks like controlling artificial limbs, as proprioception is critical for coordinating movements across multiple joints.

Artificial sensation is complex and difficult to achieve. Researchers are tasked not only with understanding the neural representation of sensation, but also with figuring out how to encode fine, precise, spatially and temporally varying information about the state of an artificial limb, and how to do so in a way that the natural nervous system recognizes and understands.

Brain-machine interfaces (Image courtesy of Dr. Joseph Makin’s lab, School of Electrical and Computer Engineering, Purdue University)

We don’t understand the neural code well enough to know what patterns of activity to input, and even if we did know what to target, we don’t yet have a good way to deliver those patterns. Instead, we need to rely on the brain’s natural plasticity to learn the meaning of the artificial input. A substantial amount of plasticity and adaptation occurs in the brain: neural responses can change quickly, and they adapt during learning to use a BMI in ways we don’t fully understand. We believe that, with experience and practice, the brain can learn to use artificial sensation delivered via electrical stimulation.

My research team is using a technique called 2-photon imaging to better understand the patterns of neural activity evoked by electrical stimulation. This imaging process records neural activity via light, in neurons that have been genetically modified to express proteins that fluoresce when a neuron is active and remain dark when it is quiet. We’re also using virtual reality (thanks to the Purdue Envision Center) and electrophysiology (recording with multichannel electrode arrays in multiple brain areas simultaneously) to better understand how individual brain areas represent and process vision and proprioception, and how they work together to transform sensory data into the correct format for motor systems.
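
As a toy illustration of how that fluorescence signal becomes an activity readout, the sketch below computes ΔF/F, the relative change in fluorescence, for a simulated trace; bright transients mark moments when the neuron was active. This is a generic example, not our lab’s actual analysis pipeline.

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20):
    """Compute dF/F for one neuron's fluorescence trace against a dim baseline."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

rng = np.random.default_rng(2)
time = np.arange(0, 60, 0.1)                       # 60 s imaged at 10 frames/s
baseline = 100 + rng.normal(scale=2.0, size=time.size)
events = 50 * (rng.random(time.size) > 0.98)       # occasional calcium transients
trace = baseline + np.convolve(events, np.exp(-np.arange(30) / 10), mode="same")

dff = delta_f_over_f(trace)
active_frames = np.where(dff > 0.2)[0]             # crude "neuron was active" detection
print(f"{active_frames.size} frames above threshold")
```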

Additionally, we’re working with Dr. Joseph Makin in the Purdue School of Electrical and Computer Engineering to develop AI-based deep neural network models that produce integrated, multisensory patterns of stimulation, mimicking processes observed in the brain, in order to generate neural activity similar to that observed during natural movements.
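
As a purely illustrative sketch of that kind of model, the code below maps a hypothetical limb-state vector (joint angles and velocities) through a small feed-forward network to stimulation amplitudes across an assumed 64-electrode array. The architecture, sizes, and variable names are invented for illustration and are not Dr. Makin’s actual models; in practice such a network would be trained so that the activity it evokes matches activity recorded during natural movement.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed sizes: 8 limb-state variables, 32 hidden units, 64 stimulating electrodes
n_state, n_hidden, n_electrodes = 8, 32, 64
W1 = rng.normal(scale=0.1, size=(n_state, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_electrodes))
b2 = np.zeros(n_electrodes)

def stimulation_pattern(limb_state):
    """Map a limb-state vector to non-negative per-electrode stimulation amplitudes."""
    h = np.tanh(limb_state @ W1 + b1)          # hidden layer
    return np.clip(h @ W2 + b2, 0.0, None)     # amplitudes cannot be negative

limb_state = rng.normal(size=n_state)          # e.g., current joint angles/velocities
print(stimulation_pattern(limb_state).shape)   # (64,): one value per electrode
```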

Finally, we are really interested in what makes a natural sensory signal or a machine-generated artificial sensory signal “reliable,” or believable: something the brain will trust and listen to. Answering that question is key to moving from research to clinical practice and restoring function and capability to those challenged by paralysis.

Maria C. Dadarlat Makin, PhD

Assistant Professor

Weldon School of Biomedical Engineering

College of Engineering, Purdue University

--