Fusion of Functional Brain Imaging Modalities via Linear Programming
Fusion algorithms are employed to construct a spatio-temporal estimate of neuronal activity from data gathered by multiple functional brain imaging modalities. Here, the estimate is built by placing a dipole in each voxel of the modality with the highest spatial resolution and estimating the time course of each dipole without constraining dipole orientations. The solution space is thus a matrix S of dimensionality 3N×T, which can be viewed as three stacked N×T sub-matrices, each corresponding to the projection of the dipoles onto one axis [1]:
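The layout of the solution space described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the dimensions `N` and `T` and the random fill are hypothetical, and only the 3N×T block structure comes from the text:

```python
import numpy as np

# Hypothetical dimensions: N voxels in the highest-resolution modality,
# T time samples. Values here are placeholders for illustration only.
N, T = 4, 5

# One dipole per voxel; its moment has x, y and z components, each with
# its own time course, so the full solution space is a 3N x T matrix S.
S = np.random.default_rng(0).standard_normal((3 * N, T))

# S decomposes into three stacked N x T sub-matrices, one per axis,
# holding the projection of every dipole's time course onto that axis.
Sx, Sy, Sz = S[:N], S[N:2 * N], S[2 * N:]

assert S.shape == (3 * N, T)
assert Sx.shape == Sy.shape == Sz.shape == (N, T)
```

Leaving dipole orientations unconstrained is what forces the 3N rows: with fixed (e.g. cortically constrained) orientations, a single N×T matrix would suffice.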
[1] A. Dale et al., "Improved Localization of Cortical Activity by Combining EEG and MEG with MRI Cortical Surface Reconstruction: A Linear Approach," Journal of Cognitive Neuroscience, 1993.
[2] Anthony N. Sinclair et al., "Recovery of a sparse spike time series by L1 norm deconvolution," IEEE Trans. Signal Process., 1994.
[3] M. D'Esposito et al., "The variability of human BOLD hemodynamic responses," NeuroImage, 1998.
[4] Richard M. Leahy et al., "Source localization using recursively applied and projected (RAP) MUSIC," 1997.