Signal Processing Toolbox for Simultaneously Acquired fMRI and EEG

EEG signals acquired simultaneously with fMRI are noisy and contain artifacts that arise primarily from MRI gradient switching and cardiac pulsation. This toolbox includes a set of open-source MATLAB functions implementing several published algorithms for removing such artifacts from EEG. These functions can be called individually or through a graphical user interface (GUI) compatible with the widely used EEG processing software EEGLAB. This work is supported by the National Institute of Mental Health through a grant (R01-MH104402).
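One widely used algorithm of this kind is average artifact subtraction (Allen et al., 2000) for the gradient artifact: because the artifact is time-locked to each volume acquisition (TR), averaging neighboring TR epochs yields a template that can be subtracted from each epoch. A minimal NumPy sketch follows; the function name, sliding-window size, and epoching convention are illustrative, not the toolbox's actual API:

```python
import numpy as np

def subtract_gradient_artifact(eeg, tr_onsets, tr_len, window=25):
    """Average-artifact-subtraction sketch for one EEG channel:
    epoch the signal at TR onsets, average a sliding window of
    neighboring epochs to form an artifact template, and subtract it."""
    eeg = np.asarray(eeg, dtype=float).copy()
    # Keep only epochs that fit entirely within the recording.
    onsets = [o for o in tr_onsets if o + tr_len <= len(eeg)]
    epochs = np.array([eeg[o:o + tr_len] for o in onsets])
    n = len(epochs)
    for i, o in enumerate(onsets):
        # Sliding window of up to `window` epochs centered on epoch i.
        lo = max(0, i - window // 2)
        hi = min(n, lo + window)
        lo = max(0, hi - window)
        template = epochs[lo:hi].mean(axis=0)   # gradient-artifact template
        eeg[o:o + tr_len] = epochs[i] - template
    return eeg
```

Because the neural signal is not time-locked to the TR, it largely averages out of the template, while the repetitive gradient artifact is removed almost completely.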

Laboratory of Integrated Brain Imaging


Matlab Toolbox for Separating and Analyzing Scale-Free and Rhythmic Neural Activity

Neurophysiological field-potential signals consist of both arrhythmic and rhythmic patterns, reflecting fractal and oscillatory dynamics that likely arise from distinct mechanisms. Here we present a new method, the irregular-resampling auto-spectral analysis (IRASA), to separate the fractal and oscillatory components in the power spectrum of a neurophysiological signal according to their distinct temporal and spectral characteristics (Wen and Liu, 2016). This work is supported by the National Institute of Mental Health through a grant (R01-MH104402).
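The core idea of IRASA can be sketched in a few lines: resampling the signal by a factor h (and 1/h) shifts oscillatory peaks in frequency but leaves a 1/f-like fractal spectrum invariant, so the median across resampling factors of the geometric-mean spectra estimates the fractal component. The sketch below is a simplified illustration; the resampling factors and Welch settings are our choices, not the toolbox's defaults:

```python
import numpy as np
from fractions import Fraction
from scipy.signal import welch, resample_poly

def irasa(x, fs, hset=(1.1, 1.15, 1.2, 1.25, 1.3), nperseg=1024):
    """IRASA sketch: separate fractal and oscillatory spectral components."""
    freqs, mixed = welch(x, fs, nperseg=nperseg)
    spectra = []
    for h in hset:
        frac = Fraction(h).limit_denominator(100)
        up, down = frac.numerator, frac.denominator
        x_up = resample_poly(x, up, down)   # stretch: peaks shift down by h
        x_dn = resample_poly(x, down, up)   # compress: peaks shift up by h
        _, p_up = welch(x_up, fs, nperseg=nperseg)
        _, p_dn = welch(x_dn, fs, nperseg=nperseg)
        # Geometric mean cancels the paired peak shifts but preserves 1/f^b.
        spectra.append(np.sqrt(p_up * p_dn))
    fractal = np.median(spectra, axis=0)    # robust across resampling factors
    oscillatory = mixed - fractal           # residual rhythmic component
    return freqs, fractal, oscillatory
```

For a signal that is pink noise plus a 10 Hz oscillation, the oscillatory component shows a peak at 10 Hz while the fractal component retains the smooth 1/f trend.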

Visualization of Deep Residual Networks

A deep residual network, or ResNet (He et al., 2016), explains brain responses to natural movies and reveals category representations organized in a nested hierarchy (Wen et al., 2018). Here, we visualized the features of individual layers in ResNet by optimizing visual inputs to maximize unit activations (Yosinski et al., 2015). This work is supported by the National Institute of Mental Health through a grant (R01-MH104402).
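The optimization behind such visualizations is gradient ascent on the input: start from a small random image and repeatedly step in the direction that increases the target unit's activation. A minimal NumPy sketch follows, using a single linear+ReLU unit as a stand-in for a ResNet feature (the real method backpropagates through the full network, and often adds regularizers such as blurring or jitter):

```python
import numpy as np

def maximize_activation(weight, steps=200, lr=0.1, seed=0):
    """Activation-maximization sketch: gradient ascent on the input to
    maximize a single linear+ReLU unit, with a norm constraint keeping
    the input bounded."""
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(weight.shape)
    if np.sum(weight * x) < 0:
        x = -x                                   # start on the active side of the ReLU
    for _ in range(steps):
        act = max(np.sum(weight * x), 0.0)       # the unit's activation
        grad = weight if act > 0 else np.zeros_like(weight)
        x = x + lr * grad                        # ascend the activation
        x = x / max(np.linalg.norm(x), 1e-8)     # project back to the unit sphere
    return x
```

For this toy unit the optimum is the weight vector itself (normalized), which is why such visualizations reveal what pattern a unit is tuned to.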


Data publication for neural encoding and decoding with deep learning during natural vision

Tens of hours of fMRI data acquired from three subjects while they watched a large variety of naturalistic videos. Such data can be used to compare the human brain with artificial intelligence systems that process, represent, and recognize the content of the visual world. Multiple studies in the lab have demonstrated the promise of using deep learning models to understand and decode brain activity during natural vision (e.g., Wen et al., 2017, Cerebral Cortex). The work is supported by the National Institute of Mental Health (R01-MH104402).

fMRI data from human subjects during repeated free-viewing of a natural movie stimulus

Thirteen subjects underwent four fMRI sessions under two conditions. Two sessions were acquired in the eyes-closed resting state, and the other two occurred during free viewing of an identical movie clip (The Good, the Bad, and the Ugly, 1966, from 162:54 to 168:33 min in the film). The visual input was presented using the MATLAB Psychophysics Toolbox and delivered to subjects through a binocular goggle system (NordicNeuroLab, Norway) mounted on the head coil. Each movie-stimulation (task) session began with a blank gray screen presented for 42 s, followed by the movie presented for 5 min and 37 s, and ended with the blank screen again for 30 s. The resting-state sessions had the same duration as the movie-stimulation sessions. This work is supported by the National Institute of Mental Health through a grant (R01-MH104402).

fMRI data from human subjects during continuous musical perception and imagery tasks

Multiple sessions of fMRI data from human subjects who were listening to a >8-min piece of music (Beethoven's Symphony No. 9), or were imagining the same music as cued by a movie that visualized it. The data help to map the cortical networks and coding underlying musical perception and imagery (Zhang et al., 2017, Scientific Reports). The work is supported by the National Institute of Mental Health (R01-MH104402).

Image processing toolbox for gastric MRI to assess gastric emptying and motility

MATLAB-based functions and scripts for the analysis of contrast-enhanced gastric MRI in animals or humans. The toolbox enables image processing and segmentation, as well as analysis of gastric emptying and motility. Sample data are provided. See related work in our recent publications (Lu et al., IEEE TBME, 2017; Lu et al., NGM, 2018). The work is supported by the National Institutes of Health through the Stimulating Peripheral Activity to Relieve Conditions (SPARC) program (OT2OD023847).
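A typical emptying analysis reduces the segmented images to a stomach-volume time course and fits a decay model to estimate the half-emptying time. The Python sketch below uses a simple mono-exponential model; the model choice and function names are illustrative assumptions, not necessarily what the toolbox implements:

```python
import numpy as np
from scipy.optimize import curve_fit

def emptying_half_time(t_min, volumes):
    """Gastric-emptying sketch: fit an exponential decay
    v(t) = v0 * exp(-t / tau) to stomach volumes (e.g., mL) measured
    from serial MRI at times t_min (minutes), and return the
    half-emptying time T1/2 = tau * ln(2)."""
    def model(t, v0, tau):
        return v0 * np.exp(-t / tau)
    (v0, tau), _ = curve_fit(model, t_min, volumes, p0=(volumes[0], 30.0))
    return tau * np.log(2)   # minutes until volume halves
```

Given volumes sampled every 10 minutes over two hours, the fitted T1/2 summarizes emptying rate in a single clinically interpretable number.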


Predictive Coding Network for Natural Image Classification

Source code in PyTorch for two versions of the predictive coding network (PCN), trainable for image classification, implementing the models and algorithms reported in two papers (Wen et al., ICML, 2018; Han et al., NIPS, 2018). Both versions are inspired by "predictive coding", a neuroscience theory of how the brain understands and explores the world through bottom-up and top-down processes. The two processes are implemented as feedforward and feedback connections that interact globally (global PCN) or locally (local PCN). The published code was developed and documented by Kuan Han and Di Fu, with initial contributions from Haiguang Wen. The code was tested on several benchmark datasets, including MNIST, CIFAR-10/100, and ImageNet. The work is supported by the National Institute of Mental Health (R01-MH104402) and the College of Engineering at Purdue University.
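The feedforward/feedback interaction at the heart of predictive coding can be illustrated with a classic Rao-and-Ballard-style inference loop (a simplified sketch, not the exact PCN architecture of the two papers): a latent representation generates a top-down prediction of the input, and the bottom-up prediction error iteratively updates the representation until the input is explained.

```python
import numpy as np

def pcn_infer(x, W, steps=50, lr=0.1):
    """Predictive-coding inference sketch: latent r predicts the input
    via feedback weights W; r is updated with the feedforward
    prediction error until the top-down prediction matches x."""
    r = W.T @ x                       # feedforward initialization
    for _ in range(steps):
        pred = W @ r                  # feedback (top-down) prediction
        err = x - pred                # bottom-up prediction error
        r = r + lr * (W.T @ err)      # update r to explain the input
    return r, err
```

In the trainable networks, analogous error signals also drive learning of the connection weights; here W is fixed and only the representation is inferred.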