Abstract
Machine learning has considerably improved medical image analysis in recent years. Although data-driven approaches are intrinsically adaptive and thus generic, they often do not perform equally well on data from different imaging modalities. In particular, computed tomography (CT) data poses many challenges to medical image segmentation based on convolutional neural networks (CNNs), mostly due to the broad dynamic range of intensities and the varying number of recorded slices in CT volumes. In this paper, we address these issues with a framework that adds domain-specific data preprocessing and augmentation to state-of-the-art CNN architectures. Our major focus is to stabilise prediction performance across samples, a mandatory requirement for use in automated and semi-automated workflows in the clinical environment. To validate the architecture-independent effects of our approach, we compare a neural architecture based on dilated convolutions for parallel multi-scale processing (a modified Mixed-Scale Dense Network: MS-D Net) to one based on traditional scaling operations (a modified U-Net). Finally, we show that an ensemble model combines the strengths of the individual methods. Our framework is simple to integrate into existing deep learning pipelines for CT analysis. It performs well on a range of tasks such as liver and kidney segmentation, without significant differences in prediction performance across strongly differing volume sizes and varying slice thicknesses. Thus, our framework is an essential step towards robust segmentation of unknown real-world samples.
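The abstract highlights the broad dynamic range of CT intensities as a key obstacle. A common domain-specific preprocessing step for CT, and one plausible ingredient of such a framework, is intensity windowing in Hounsfield units (HU) followed by rescaling. The sketch below is an illustrative assumption, not the paper's actual preprocessing: the function name `window_ct` and the soft-tissue window defaults (level 50 HU, width 400 HU, a common choice for abdominal organs such as liver and kidney) are hypothetical.

```python
import numpy as np

def window_ct(volume: np.ndarray, level: float = 50.0, width: float = 400.0) -> np.ndarray:
    """Clip a CT volume (in Hounsfield units) to an intensity window
    and rescale it to [0, 1].

    Illustrative sketch only: the window parameters are assumptions,
    not the preprocessing described in the paper.
    """
    lo = level - width / 2.0   # lower window bound, e.g. -150 HU
    hi = level + width / 2.0   # upper window bound, e.g.  250 HU
    clipped = np.clip(volume.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo)

# Toy 2x2x2 "volume": air (-1000 HU), water (0 HU),
# soft tissue (~40-60 HU) and bone (1000 HU) voxels.
vol = np.array([[[-1000.0, 0.0], [50.0, 1000.0]],
                [[-150.0, 250.0], [40.0, 60.0]]])
norm = window_ct(vol)  # air maps to 0.0, bone saturates at 1.0
```

Compressing the HU range this way removes clinically irrelevant extremes (air, dense bone) before the data reaches a CNN, which reduces the dynamic-range problem the abstract describes.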
Details
1 AICURA medical, Berlin, Germany; Universität Bielefeld, Technische Fakultät, Bielefeld, Germany (GRID:grid.7491.b) (ISNI:0000 0001 0944 9128)
2 AICURA medical, Berlin, Germany (GRID:grid.7491.b); Technische Universität Dresden, Institute for Medical Informatics and Biometry, Carl Gustav Carus Faculty of Medicine, Dresden, Germany (GRID:grid.4488.0) (ISNI:0000 0001 2111 7257)
3 AICURA medical, Berlin, Germany (GRID:grid.4488.0)
4 Technische Universität Dresden, Institute for Medical Informatics and Biometry, Carl Gustav Carus Faculty of Medicine, Dresden, Germany (GRID:grid.4488.0) (ISNI:0000 0001 2111 7257); National Center of Tumor Diseases (NCT) Partner Site Dresden, Dresden, Germany (GRID:grid.4488.0)
5 Technische Universität Dresden, Institute for Medical Informatics and Biometry, Carl Gustav Carus Faculty of Medicine, Dresden, Germany (GRID:grid.4488.0) (ISNI:0000 0001 2111 7257); Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany (GRID:grid.419524.f) (ISNI:0000 0001 0041 5028)