Abstract
Global livelihoods have been impacted by the novel coronavirus disease (COVID-19), which primarily affects the respiratory system and spreads via airborne transmission. The disease has reached almost every nation and remains widespread worldwide. Early and reliable diagnosis is essential to contain this high-risk disease, and computer-aided diagnostic models help medical practitioners obtain a quick and accurate diagnosis. To this end, this study develops an optimized Xception convolutional neural network, called "XCovNet," for recognizing COVID-19 in point-of-care ultrasound (POCUS) images. The model employs a stack of modules, each containing several feature extractors, which enables it to learn richer representations with fewer parameters. It detects the presence of COVID-19 by classifying POCUS images into three categories: COVID-19 samples, viral pneumonia samples, and healthy ultrasound images. We compare and evaluate the proposed network against state-of-the-art (SOTA) deep learning models, including VGG, DenseNet, Inception-V3, ResNet, and Xception. XCovNet addresses the shortcomings of previous studies, achieving 99.76% accuracy, 99.89% specificity, 99.87% sensitivity, and a 99.75% F1-score. To probe the underlying behavior of the proposed network, experiments are performed on different shuffle patterns of the dataset. In regions where test kits are limited, the proposed "XCovNet" can thus help radiologists detect COVID-19 patients from ultrasound images in the current COVID-19 situation.
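The abstract reports accuracy, specificity, sensitivity, and F1-score for a three-class problem (COVID-19, viral pneumonia, healthy). As a minimal sketch of how such metrics are typically derived one-vs-rest from a multi-class confusion matrix, consider the following; the matrix values below are hypothetical placeholders, not the paper's actual results:

```python
# Hypothetical 3x3 confusion matrix (rows = true class, columns = predicted).
# Class order assumed: COVID-19, viral pneumonia, healthy.
cm = [
    [98, 1, 1],
    [2, 96, 2],
    [0, 1, 99],
]

def per_class_metrics(cm, k):
    """One-vs-rest sensitivity, specificity, and F1 for class k."""
    n = len(cm)
    tp = cm[k][k]                                              # true positives
    fn = sum(cm[k][j] for j in range(n) if j != k)             # missed class k
    fp = sum(cm[i][k] for i in range(n) if i != k)             # wrongly labelled k
    tn = sum(cm[i][j] for i in range(n)                        # everything else
             for j in range(n) if i != k and j != k)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# Overall accuracy: correct predictions over all samples.
accuracy = sum(cm[i][i] for i in range(3)) / sum(sum(row) for row in cm)
```

Averaging the per-class values (macro-averaging) yields single sensitivity, specificity, and F1 figures comparable to those quoted in the abstract.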
Details
1 VNR Vignana Jyothi Institute of Engineering and Technology, Department of Information Technology, Hyderabad, India (GRID:grid.411828.6) (ISNI:0000 0001 0683 7715)
2 LBEF Campus (Asia Pacific University of Technology & Innovation, Malaysia), Kathmandu, Nepal (GRID:grid.444468.e) (ISNI:0000 0004 6004 5032)
3 Thapar Institute of Engineering and Technology, Patiala, India (GRID:grid.412436.6) (ISNI:0000 0004 0500 6866)
4 University of Wollongong in Dubai, School of Computer Science, Dubai, United Arab Emirates (GRID:grid.444532.0) (ISNI:0000 0004 1763 6152)
5 University of Wollongong in Dubai, School of Computer Science, Dubai, United Arab Emirates (GRID:grid.444532.0) (ISNI:0000 0004 1763 6152); Middle East University, MEU Research Unit, Amman, Jordan (GRID:grid.449114.d) (ISNI:0000 0004 0457 5303)