WELDON SCHOOL OF BIOMEDICAL ENGINEERING

SCHOOL OF ELECTRICAL AND COMPUTER ENGINEERING

Signal Processing Toolbox for Simultaneously Acquired fMRI and EEG

EEG signals acquired simultaneously with fMRI are noisy and contain artifacts that arise primarily from MRI gradient switching and cardiac pulsation. This toolbox includes a set of open-source MATLAB functions implementing several published algorithms for removing such artifacts from the EEG. These functions can be called individually or through a graphical user interface (GUI) compatible with the widely used EEG processing software EEGLAB. This work is supported by the National Institute of Mental Health through a grant (R01-MH104402).
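The toolbox itself is MATLAB code implementing the published algorithms; purely as background on how trigger-locked artifact removal of this kind can work, below is a minimal NumPy sketch of average artifact subtraction, a common approach to the gradient artifact. The function name and interface here are illustrative, not part of the toolbox.

```python
import numpy as np

def average_artifact_subtraction(eeg, trigger_onsets, artifact_len):
    """Remove a periodic artifact (e.g., from MRI gradient switching) from a
    single-channel EEG trace by subtracting a mean artifact template.

    eeg            : 1-D array of EEG samples
    trigger_onsets : sample indices where each artifact epoch begins
    artifact_len   : number of samples per artifact epoch
    """
    # Stack the artifact-contaminated epochs and average them into a template;
    # brain activity, being unlocked to the scanner triggers, averages out.
    epochs = np.stack([eeg[t:t + artifact_len] for t in trigger_onsets])
    template = epochs.mean(axis=0)

    # Subtract the template from each epoch.
    cleaned = eeg.copy()
    for t in trigger_onsets:
        cleaned[t:t + artifact_len] -= template
    return cleaned
```

Practical implementations add refinements such as sliding-window templates, sub-sample alignment, and residual removal; this sketch shows only the core idea.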
 

Laboratory of Integrated Brain Imaging


Matlab Toolbox for Separating and Analyzing Scale-Free and Rhythmic Neural Activity

Neurophysiological field-potential signals contain both arrhythmic and rhythmic patterns, reflecting fractal and oscillatory dynamics that likely arise from distinct mechanisms. Here, we present a new method, irregular-resampling auto-spectral analysis (IRASA), which separates the fractal and oscillatory components in the power spectrum of a neurophysiological signal according to their distinct temporal and spectral characteristics (Wen and Liu, 2016). This work is supported by the National Institute of Mental Health through a grant (R01-MH104402).
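As a rough illustration of the IRASA idea (the toolbox itself is in MATLAB; this Python sketch simplifies the published method, and the function name and parameter defaults are ours, not the toolbox's):

```python
import numpy as np
from scipy.signal import welch, resample

def irasa(x, fs, hset=np.arange(1.1, 1.95, 0.05), nperseg=None):
    """Sketch of irregular-resampling auto-spectral analysis:
    separate fractal and oscillatory components of a power spectrum."""
    if nperseg is None:
        nperseg = int(4 * fs)
    freqs, psd_mixed = welch(x, fs=fs, nperseg=nperseg)
    psds = []
    for h in hset:
        # Resample by the non-integer pair (h, 1/h).
        x_up = resample(x, int(np.round(len(x) * h)))
        x_dn = resample(x, int(np.round(len(x) / h)))
        # Treating both at the original rate, an oscillatory peak at f moves
        # to f/h and f*h, while a power law f^(-b) is unchanged by the
        # geometric mean of the two spectra.
        _, p_up = welch(x_up, fs=fs, nperseg=nperseg)
        _, p_dn = welch(x_dn, fs=fs, nperseg=nperseg)
        psds.append(np.sqrt(p_up * p_dn))
    # The median across resampling factors suppresses the shifted peaks,
    # leaving an estimate of the fractal (scale-free) component.
    psd_fractal = np.median(psds, axis=0)
    psd_osc = psd_mixed - psd_fractal
    return freqs, psd_mixed, psd_fractal, psd_osc
```

For a 1/f signal plus a 10 Hz oscillation, the fractal estimate follows the power law while the residual spectrum peaks at 10 Hz.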
 

Visualization of Deep Residual Networks


The deep residual network, or ResNet (He et al., 2016), explains brain responses to natural movies and reveals category representations organized in a nested hierarchy (Wen et al., 2017). Here, we visualized the features from individual layers in ResNet by optimizing visual inputs to maximize unit activations (Yosinski et al., 2015). This work is supported by the National Institute of Mental Health through a grant (R01-MH104402).
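The visualization rests on gradient ascent in input space: start from a small random input and repeatedly step along the gradient of a chosen unit's activation. Below is a toy NumPy sketch of that idea with a two-layer network; the network, names, and constraint are illustrative only, whereas the actual visualizations backpropagate through ResNet.

```python
import numpy as np

def visualize_unit(w1, w2, n_steps=200, lr=0.1, seed=0):
    """Toy activation maximization: find an input that drives a unit
    act(x) = w2 . relu(w1 @ x) toward its maximum response."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(w1.shape[1]) * 0.01  # small random start
    act = 0.0
    for _ in range(n_steps):
        h = w1 @ x                  # hidden pre-activations
        a = np.maximum(h, 0)        # relu
        act = w2 @ a                # the chosen unit's activation
        # Backpropagate d(act)/dx through the relu by hand.
        grad = w1.T @ (w2 * (h > 0))
        x += lr * grad
        # Keep the input bounded, a simple stand-in for image regularizers.
        x /= max(np.linalg.norm(x), 1.0)
    return x, act
```

In the real setting, the "unit" is a channel in a ResNet layer and the gradient comes from automatic differentiation, with image-specific regularizers in place of the norm constraint.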

 

Data publication for neural encoding and decoding with deep learning during natural vision

This dataset contains tens of hours of fMRI data acquired from three subjects while they watched a large variety of naturalistic videos. The data can be used to compare the human brain with artificial intelligence systems that process, represent, and recognize the content of the visual world. Multiple studies in the lab have demonstrated the promise of using deep learning models to understand and decode brain activity during natural vision (e.g., Wen et al., 2017, Cerebral Cortex). This work is supported by the National Institute of Mental Health (R01-MH104402).
 

fMRI data from human subjects during repeated free-viewing of a natural movie stimulus

Thirteen subjects underwent four fMRI sessions under two conditions: two sessions were obtained in the eyes-closed resting state, and the other two sessions occurred during free-viewing of an identical movie clip (from The Good, the Bad and the Ugly, 1966; 162:54 to 168:33 min in the film). The visual stimuli were presented using the MATLAB Psychophysics Toolbox and delivered to subjects through a binocular goggle system (NordicNeuroLab, Norway) mounted on the head coil. Each movie-stimulation (task) session began with a blank gray screen presented for 42 s, followed by the movie for 5 min and 37 s, and ended with the blank screen again for 30 s. The resting-state sessions had the same duration as the movie-stimulation sessions. This work is supported by the National Institute of Mental Health through a grant (R01-MH104402).
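For convenience when aligning analyses to the stimulus, the task-session timing stated above works out as follows (a trivial sketch; the variable names are ours):

```python
# Timeline of each movie-stimulation (task) session, per the description above.
pre_blank = 42            # s, blank gray screen before the movie
movie = 5 * 60 + 37       # s, movie clip duration (5 min 37 s)
post_blank = 30           # s, blank screen after the movie
total = pre_blank + movie + post_blank
print(total)              # 409 s per task session (resting sessions match)
```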