Abstract
Magnetic Resonance Imaging (MRI) has been widely used to acquire structural and functional information about the brain. In a group- or voxel-wise analysis, it is essential to correct the bias field of the radiofrequency coil and to extract the brain for accurate registration to the brain template. Although automatic methods have been developed, manual editing is still required, particularly for echo-planar imaging (EPI), owing to its lower spatial resolution and larger geometric distortion. The need for user intervention slows down data processing and leads to variable results across operators. Deep learning networks have been used successfully for automatic postprocessing. However, most networks are designed for only a specific processing step and/or a single image contrast (e.g., spin-echo or gradient-echo). This limitation markedly restricts the application and generalization of deep learning tools. To address these limitations, we developed a deep learning network based on the generative adversarial network (GAN) to automatically correct coil inhomogeneity and extract the brain from both spin- and gradient-echo EPI without user intervention. Using various quantitative indices, we show that this method achieved high similarity to the reference target and performed consistently across datasets acquired from rodents. These results highlight the potential of deep networks to integrate different postprocessing methods and adapt to different image contrasts. The use of the same network to process multimodality data would be a critical step toward a fully automatic postprocessing pipeline that could facilitate the analysis of large datasets with high consistency.
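The abstract does not describe the GAN architecture in enough detail to reproduce it, but the kind of conventional bias-field correction it aims to replace can be illustrated with a classical homomorphic (log-domain low-pass) filter: a multiplicative coil profile becomes additive in log space, where it can be estimated as the smooth component and divided out. The sketch below is illustrative only; the synthetic phantom, function names, and the choice of Gaussian smoothing are assumptions, not the paper's method.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Gaussian low-pass filter of a 2D image, applied in the frequency domain."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    transfer = np.exp(-2.0 * np.pi**2 * sigma**2 * (fy**2 + fx**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * transfer))

def homomorphic_bias_correct(img, sigma=8.0):
    """Estimate the multiplicative bias as the low-frequency part of log(img)."""
    log_img = np.log(img)
    log_bias = gaussian_blur(log_img, sigma)
    # Remove the smooth component; add back its mean to keep the intensity scale.
    return np.exp(log_img - log_bias + log_bias.mean())

# Synthetic phantom: high-frequency "anatomy" times a smooth coil profile.
n = 128
yy, xx = np.mgrid[0:n, 0:n] / n
truth = 1.0 + 0.5 * (np.sin(12 * np.pi * xx) * np.sin(12 * np.pi * yy) > 0)
bias = np.exp(0.8 * (xx - 0.5))          # left-right coil sensitivity roll-off
observed = truth * bias
corrected = homomorphic_bias_correct(observed)

rmse_obs = np.sqrt(np.mean((observed - truth) ** 2))   # error before correction
rmse_cor = np.sqrt(np.mean((corrected - truth) ** 2))  # error after correction
```

The limitation this toy example exposes is exactly what motivates learned approaches: the filter scale `sigma` must be tuned per contrast and resolution so that it separates anatomy from coil profile, a separation that is much harder in low-resolution, distorted EPI.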
Details
1 University of Queensland, Queensland Brain Institute and Centre for Advanced Imaging, Brisbane, Australia (GRID:grid.1003.2) (ISNI:0000 0000 9320 7537)
2 Chang Gung University, Department of Medical Imaging and Radiological Sciences, and Graduate Institute of Artificial Intelligence, Taoyuan, Taiwan (GRID:grid.145695.a) (ISNI:0000 0004 1798 0922)
3 Chang Gung Memorial Hospital at Linkou, Department of Radiation Oncology, Taoyuan, Taiwan (GRID:grid.454210.6) (ISNI:0000 0004 1756 1461)
4 Chang Gung University, Department of Medical Imaging and Radiological Sciences, and Graduate Institute of Artificial Intelligence, Taoyuan, Taiwan (GRID:grid.145695.a) (ISNI:0000 0004 1798 0922); Chang Gung University and Chang Gung Memorial Hospital at Linkou, Medical Imaging Research Center, Institute for Radiological Research, Taoyuan, Taiwan (GRID:grid.145695.a); Chang Gung Memorial Hospital, Department of Psychiatry, Chiayi, Taiwan (GRID:grid.454212.4) (ISNI:0000 0004 1756 1410)