“The brain itself is the ultimate intelligent machine,” says Zhongming Liu, assistant professor of biomedical engineering and electrical and computer engineering. “It constantly processes information, makes decisions and guides actions. Although it is not fully understood how the brain works, decades of neuroscience research have already fueled inspiration for engineers to design algorithms that have enabled machines to understand images, translate language, play games, and drive vehicles, to name a few examples. Brain-inspired algorithms have also begun to help neuroscientists understand the brain itself and eventually decode the human mind.”
Deep Neural Networks as a “Digital Mirror” of the Human Brain
Liu’s research group has demonstrated how to decode what the human brain is seeing by using artificial intelligence to interpret fMRI scans from people watching videos, representing a sort of mind-reading technology. “When light enters the eyes, visual information sweeps rapidly across a cascade of brain areas — the so-called feed-forward process,” Liu says. “If you see only a brief glimpse of a picture for a fraction of a second, there is just enough time for completion of the feed-forward process.”
Similar to the brain’s feed-forward processing is a convolutional neural network — a form of deep learning that mimics the neural coding in the brain and has been instrumental in enabling computers and smartphones to recognize faces and objects. “This type of network has made an enormous impact in the field of computer vision in recent years,” Liu says. “We use this network not only for computer vision but also to study the brain’s mechanism of human vision. Our idea is to use the neural network as a ‘digital mirror’ of the brain’s visual system, through which brain activity is interpretable, predictable and decodable.”
Liu’s team has harnessed this approach to see how the brain processes movies, a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings.
The researchers acquired 11.5 hours of fMRI data from each of three subjects watching 972 video clips, including clips showing people or animals in action and nature scenes. First, the data were used to train “encoding models,” which used the convolutional neural network to predict activity in the brain’s visual cortex while the subjects watched the videos. Then the researchers trained “decoding models,” which used the neural network to decode fMRI data from the subjects and reconstruct the videos, even ones the models had never seen before.
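The encoding/decoding idea can be sketched very loosely with simulated data. In the sketch below, random vectors stand in for CNN features of video frames and a linear map stands in for the visual cortex’s response, so everything here is an illustrative assumption, not the team’s actual models or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the real pipeline (all simulated):
# feats  - "CNN features" for each video frame shown to the subject
# W_true - the unknown feature-to-voxel mapping the encoding model estimates
n_frames, n_feats, n_voxels = 200, 30, 80
feats = rng.standard_normal((n_frames, n_feats))
W_true = rng.standard_normal((n_feats, n_voxels))
bold = feats @ W_true + 0.1 * rng.standard_normal((n_frames, n_voxels))  # noisy "fMRI"

# Encoding model: ridge regression from CNN features to voxel responses
lam = 1.0
W_hat = np.linalg.solve(feats.T @ feats + lam * np.eye(n_feats), feats.T @ bold)

# Decoding model: invert the learned mapping to recover features
# from "brain" responses to videos never used in training
new_feats = rng.standard_normal((20, n_feats))
new_bold = new_feats @ W_true + 0.1 * rng.standard_normal((20, n_voxels))
feats_rec = np.linalg.lstsq(W_hat.T, new_bold.T, rcond=None)[0].T

# The recovered features should correlate strongly with the true ones
r = np.corrcoef(feats_rec.ravel(), new_feats.ravel())[0, 1]
print(f"feature recovery correlation: {r:.2f}")
```

In the actual work the features came from a deep convolutional network and the responses from fMRI scans, but the same two-step structure applies: fit a forward (encoding) model, then invert it to decode.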
Findings were detailed in a 2017 paper in the journal Cerebral Cortex. The model was able to accurately decode the fMRI data into specific image categories. Actual video images were then presented side-by-side with the computer’s interpretation of what the person’s brain saw based on fMRI data.
“For example, a water animal, the moon, a turtle, a person, a bird in flight,” says the paper’s lead author, Haiguang Wen, assistant professor of biomedical engineering and electrical and computer engineering, who earned a doctorate in electrical and computer engineering and received an Outstanding Graduate Student Research Award from the College of Engineering. “A unique aspect of this work is that we are doing the decoding nearly in real time, as the subjects are watching the video. We scan the brain every two seconds, and the model rebuilds the visual experience as it occurs.”
The researchers were able to figure out how certain locations in the brain were associated with specific types of visual information. They also used models trained with data from one human subject to predict and decode the brain activity of a different human subject. “The technique is important because it demonstrates the potential for broad applications of such models to study brain function, even for people with visual deficits,” Liu says.
The research team recently reported a new method for “incremental and transfer learning” for a model to predict and decode brain activity. “In order for us to make the type of encoding or decoding model more powerful we need a whole lot of data,” Liu says. “So far, we acquired around 30 hours of data from three human subjects. We’ve built something that works reasonably well, but still not good enough. What we need is more than tens of hours. We need thousands of hours.”
However, it is impossible to conduct thousands of hours of fMRI scans from human subjects. “It is even difficult to acquire tens of hours from a single subject,” says Liu, who has a solution to the data problem.
“The idea is to scan one subject’s brain while they watch a video, and then scan another person’s brain while they watch a different video,” he says. “We developed a technique to combine this information from different subjects as if all the videos were seen by a single person.”
The approach represents a potentially powerful tool for research labs around the world to collect and share data and synthesize their efforts to better understand and decode the brain, a big data concept Liu calls “big vision.” Findings are detailed in a paper published in the journal NeuroImage.
The Brain: A Dynamic System Capable of Prediction
“The brain uses both feed-forward and feedback processes for visual perception,” Liu says. “The feedback process reconstructs 3D visual objects and scenes from 2D retinal images and predicts what happens next if the input is changing in time. To do so, the brain needs to process both spatial and temporal information, and implements a dynamical system for perception and action.”
The research team has now built a model that mimics not only brain processing of spatial information but also of temporal information, like the sequential frames of a video. “In a video, you have different frames at different times, and they depend on each other,” Liu says. “For example, if you see a cup falling down, you are able to predict, or anticipate where it is going to end up, and therefore guide an action to catch it. The brain doesn’t just passively perceive, but interacts with the environment, and we want to give machines this capability.”
A key principle in the research is known as “predictive coding.” The brain uses its feedback pathway to make predictions and uses its feedforward pathway to convey prediction errors. It dynamically refines its internal representation to reduce prediction error. “So, it’s almost like the brain has a self-correcting mechanism, making a perceptual decision based on this iterative process,” Liu says.
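The iterative loop described above can be sketched with a hypothetical linear generative model (a simplification; the group’s actual model is a deep network). Here the feedback prediction is `W @ r`, the feedforward signal is the prediction error, and the representation `r` is refined until the error shrinks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear generative model: the feedback pathway predicts the
# input as W @ r, where r is the internal representation being refined.
n_input, n_rep = 40, 10
W = rng.standard_normal((n_input, n_rep))
r_true = rng.standard_normal(n_rep)
x = W @ r_true                      # the "retinal" input to be explained

r = np.zeros(n_rep)                 # initial guess at the representation
lr = 0.01
errors = []
for _ in range(200):
    e = x - W @ r                   # feedforward: prediction error
    r = r + lr * W.T @ e            # feedback-driven refinement of r
    errors.append(np.linalg.norm(e))

print(f"initial error {errors[0]:.2f} -> final error {errors[-1]:.6f}")
```

Each pass reuses the same (shallow) weights, which is the “smart trick” intuition: the network gets its power from iteration rather than depth.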
His research group has created a “deep predictive coding” neural-network model that mimics this process. While state-of-the-art models are increasingly complex, his new model represents a more efficient and flexible framework for computer vision. “An intelligent system does not necessarily need to rely on a very deep neural network — a popular trend in deep learning. Instead it should perhaps use smart tricks to reuse a relatively shallow network, more like how the brain works,” Liu says.
New findings were reported in July 2018 during the International Conference on Machine Learning. Liu co-authored the paper with lead author Wen. The other co-authors were graduate students Kuan Han, Junxing Shi, and Yizhen Zhang; and Eugenio Culurciello, associate professor of biomedical engineering who has appointments in electrical and computer engineering, mechanical engineering, and health and human sciences.
“We think we are entering a new era of machine intelligence and neuroscience where research is focusing on the intersection of these two important fields,” Liu says. “Our mission in general is to advance artificial intelligence using brain-inspired concepts. In turn, we want to use artificial intelligence to help us understand the brain. So, we think this is a good strategy to help advance both fields in a way that otherwise would not be accomplished if we approached them separately.”
The work is affiliated with the Purdue Institute for Integrative Neuroscience.
Healthcare Engineering Is Steeped in Big Data
Healthcare systems around the world are rife with data, voluminous sources of information such as patient electronic health records, insurance claims and even a person’s individual genomics.
These data may enable healthcare providers to better understand the effects of specific therapies and policy decisions on patient care, and the impacts on the system as a whole, says Paul Griffin, St. Vincent Health Chair of Healthcare Engineering, professor of industrial engineering and director of the Regenstrief Center for Healthcare Engineering.
“If you think about your health, it’s not simply that something happens to you,” he says. “There is a process over time. We call that a care pathway. The question is, what can we learn from these data about the appropriate care pathways for individuals, and then, what are the implications with regard to downstream costs?”
A related issue is that of transportability. “That is, if something applies to one population, what can I say about how it will apply to a different population? If I give a particular drug, or if I perform a particular procedure in a city, say a hospital in Boston, what might that mean in rural Indiana?” Griffin says.
However, practically applying data is easier said than done. “There is so much potential, but we have not really leveraged the amount of data and types of data that we have to create the knowledge that we need to improve the system,” says Yuehwern Yih, associate director of the center and a professor of industrial engineering.
Regenstrief is working to improve healthcare through “Six Sigma data science,” a quality standard corresponding to roughly 99.99966 percent accuracy. In manufacturing this translates into no more than 3.4 defects per million opportunities. “And we are not there with healthcare,” Yih says. “We have on average 1.6 preventable medical errors per patient per day in the hospital. So, compared with manufacturing, we are very behind.”
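The arithmetic behind the Six Sigma benchmark is a direct conversion between a yield percentage and defects per million opportunities (DPMO); the conventional Six Sigma figure, with the standard 1.5-sigma shift, is 3.4 defects per million, or about 99.99966 percent yield:

```python
# Convert a process yield percentage to defects per million opportunities.
def dpmo(yield_pct: float) -> float:
    """Defects per million opportunities for a given yield percentage."""
    return (100.0 - yield_pct) / 100.0 * 1_000_000

# The conventional Six Sigma target (with the standard 1.5-sigma shift):
print(round(dpmo(99.99966), 1))   # 3.4 defects per million
# A process at 99.997 percent yield, by comparison:
print(round(dpmo(99.997), 1))     # 30.0 defects per million
```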
Efforts to fix the problem are complicated by the sheer number of variables in healthcare. “People say medicine is a science, but I say sometimes it’s an art,” Yih says. “Every individual has a different reaction to a treatment and a drug. So how do we figure out the best way to treat a particular patient? What’s really crucial is, how do we put all the data together to strategize and form actionable decisions to take care of the population, either in the U.S. or in global health?”
The proper use of data is a matter of life or death in Uganda, where Yih’s research team is working to save the lives of mothers giving birth by developing a more efficient supply chain system for essential medications and materials. The work is funded by a Grand Challenges Explorations grant through the Bill & Melinda Gates Foundation. The grants were highly competitive: only 51 were awarded out of 1,500 applicants.
Globally, 99 percent of women dying in childbirth are in developing countries, and Uganda has one of the highest mortality rates. Previously, when there were complications, women bled to death because facilities were out of stock of two drugs, even though the drugs were not scarce, Yih says. “We found that a hospital having 600 births a month had been out of these two drugs for around two months, but a 40-minute drive away, I found a hospital that has 15 births a month with a three-year stock of these drugs expiring within two weeks,” she says.
To reduce the mortality rate, her team is connecting three types of data: patient records, laboratory data and inventory data. The researchers will create a cloud-based information system for the hospital and also build a predictive model, using prenatal care information and patient information to identify high-risk pregnancies.
“We are using that information to know precisely how much supply they will need in which month,” Yih says. “Then we will be able to put in that order. The problem is that they have a very limited budget. They can’t afford to overstock, and that’s precisely why they need to have a very precise order quantity.”
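The ordering logic Yih describes can be sketched as a simple forecast-plus-buffer calculation. This is an illustrative simplification, not the team’s actual model, and all numbers below are hypothetical:

```python
# A simple forecast-driven order calculation: cover predicted demand plus a
# safety margin, minus what is already on the shelf. All numbers hypothetical.
def order_quantity(forecast_births: int, doses_per_birth: float,
                   safety_factor: float, on_hand: int) -> int:
    """Doses to order so stock covers forecast demand plus a safety buffer."""
    needed = forecast_births * doses_per_birth * (1 + safety_factor)
    return max(0, round(needed) - on_hand)

# A hospital forecast to deliver 600 births next month, one dose potentially
# needed per delivery, a 10% buffer, and 150 doses already in stock:
print(order_quantity(600, 1.0, 0.10, 150))   # 660 needed - 150 on hand = 510
```

The tight-budget constraint is what makes the forecast matter: too low an order and mothers die for lack of stock, too high and scarce funds sit in drugs that expire, which is exactly the imbalance Yih observed between the two Ugandan hospitals.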
Also involved are Griffin; Seokcheon Lee, assistant professor of industrial engineering; Md Munirul Haque, a Regenstrief research scientist; and Andrea Burniske, program manager of innovation for the Office of Global Engineering Program’s Innovation for International Development Lab (I2D Lab). Collaborating are Makerere University in Uganda; the ResilientAfrica Network, a partnership of 20 African universities in 16 countries; and Management Sciences for Health, an organization working to improve the health of the world’s poorest and most vulnerable people.