Article Info
Article history:
Received Apr 16, 2021
Revised Mar 24, 2022
Accepted Apr 01, 2022
Keywords:
Chest radiograph
Deep learning
Diagnosis
Neural network
Pneumonia
ABSTRACT
Accurate interpretation of chest radiograph outcomes in epidemiological studies facilitates the correct identification of chest-related and respiratory diseases. Although radiological results have long been used, and continue to be used, for the diagnosis of pneumonia and other respiratory diseases, there is considerable variability in the interpretation of chest radiographs. This variability often leads to misdiagnosis, because chest diseases frequently share common symptoms. Moreover, there is no single reliable test that can identify the symptoms of pneumonia. This paper therefore presents a standardized approach, using a convolutional neural network (CNN) and transfer learning, for identifying pneumonia from chest radiographs, to support accurate diagnosis and assist physicians in prescribing treatment for pneumonia. A training set of 5,232 images from the optical coherence tomography and chest X-ray image dataset in the Mendeley public database was used for this research, and evaluation of the resulting model on the test set yielded 88.14% accuracy, 90% precision, 85% recall, and an F1 score of 0.87.
This is an open access article under the CC BY-SA license.
1.INTRODUCTION
Pneumonia is a pulmonary disease in which the air sacs in the lungs, also referred to as alveoli, fill up with fluid such as pus [1]. It is a pulmonary infection caused by viruses or bacteria, resulting in the death of approximately 1.4 million children yearly; this accounts for about 18% of all deaths of children under five years of age. Globally, nearly 156 million children suffer episodes of pneumonia each year [2]. Findings reveal a great burden of communicable diseases worldwide, with acute respiratory infection causing about 30% of childhood deaths [3]. Unlike other parts of the human body, the difficulty of accessing the chest region makes the diagnosis of common chest ailments very challenging for medical practitioners [4], [5]. To reduce the mortality rate caused by chest-region diseases such as pneumonia, the World Health Organization (WHO) established the Child Health Epidemiology Reference Group (CHERG) in 2001. CHERG was charged with carrying out a systematic review and improving the data collection, methods, and assumptions underlying the estimates of the distribution of causes of death in children for the year 2000 [6].
2.REVIEW OF LITERATURE
A chest radiograph, also known as a chest X-ray (CXR), is among the most frequently performed radiological procedures; it uses a low dose of ionizing radiation to capture images of the interior of the human chest, including the lungs and heart [7]. It is useful in diagnosing, monitoring, and treating diverse lung conditions such as cancer, pneumonia, and tuberculosis [8]. Radiological results have been a major means of diagnosing pneumonia, but the major problem with this approach is the lack of uniformity in the interpretation of chest radiographs [9], [10]; hence, a standard approach is required, since there is no single reliable test that can identify the symptoms of pneumonia. Medical imaging refers to the techniques and procedures used in creating images of human body parts, such as radiography, magnetic resonance imaging, ultrasound, and endoscopy [11]. Computers can be leveraged in analyzing medical images to gain a better understanding and interpretation of them [12], by exploiting hierarchical feature representations learned from data instead of the conventional hand-crafted features that are mostly designed from domain-specific knowledge [13]. Deep learning incorporates feature engineering into the learning step itself [14], and therefore requires only a dataset with little pre-processing, from which informative representations are discovered in a self-learning manner [15], [16]. Popular recent applications include AlphaGo and AlphaZero, developed by DeepMind [17]. Deep learning is also used in object detection, to locate an object within an image; this is useful for detecting early symptoms of abnormality in patients. Furthermore, it is used in image segmentation to find the anatomical structures present in an image.
Deep learning has received significant attention due to its ability to process a huge number of features when dealing with unstructured data, as in [18], [19]. It was implemented in [20], [21] for the detection and localization of abnormalities in chest radiographs with notable success. At the center of deep learning are artificial neural network (ANN) models that, rather than relying on features manually extracted from raw data or learned by other simple models, enable systems to automatically learn useful representations and features from raw data, without the tedious manual procedure. Its adoption in medical image analysis is mostly driven by convolutional neural networks (CNNs) [22]-[24], which are good at learning useful representations of images and other structured data. Whereas features traditionally had to be designed by hand, CNNs can identify the features that are relevant in a dataset without human interaction [25], [26], making it practicable to use features learned directly from data [27], [28]. The diagnosis of chest diseases using radiographs has aroused research interest and has been deployed for the diagnosis of lung nodules [29] and the classification of lung tuberculosis [30]. Using open datasets, many convolutional models have been built for several abnormal features [31], revealing that the same CNN does not replicate its performance on every abnormal feature. Accuracy improves when deep learning techniques are compared against rule-based techniques. Statistical dependency between labels was exploited to obtain more accurate predictions, resulting in better performance than other methods on 13 of the 14 classes of images [32].
Mining algorithms and label prediction from radiographs and their accompanying reports have been researched [33]-[35], but the labels were limited to radiographs carrying disease labels, which resulted in a lack of contextual facts. Radiographic detection of diseases was studied in [36]-[38], categorization based on image views from radiographs was reported in [39], and isolation of body parts from chest radiographs plus computed tomography was implemented in [40]. Inception-v3 is a well-known model that can be leveraged to achieve very high accuracy in image recognition [41], as applied in Bar et al. [42] with encouraging results; it is used in this paper because it requires relatively few computing resources.
3.METHOD
The data used in this paper were obtained from the optical coherence tomography and chest X-ray image collection in the Mendeley public database [43]. As presented in Figure 1, the training set consists of 5,232 images out of the 5,856 chest X-ray images collected from children. Of these, 3,883 X-ray images belong to children diagnosed with pneumonia, while 1,349 belong to children free from pneumonia. The validation set consists of 16 images and the test set of 624 images. Labels were assigned to the images, as is done in supervised learning. A model was created using Inception-V3 transfer learning on TensorFlow and trained on the 5,232 images (3,883 from children with pneumonia, 1,349 from normal children). The trained model was tested with 624 images, of which 390 show pneumonia and 234 are from normal children.
The research design consists of steps implemented on the Inception-v3 CNN, as indicated in Figure 2. The first stage constitutes the system architecture. In the second stage, the images are read into the system, while the third stage involves pre-processing the input images. The input images were irregular in size and could not be passed directly to the learning algorithm, which expects inputs of size 224×224; using bilinear interpolation, the images were resized to the required dimensions. The images are represented as arrays of pixel values ranging in intensity from 0 to 255. To ensure that the data are suitable for learning, the pixel values were scaled down using (1).
P' = p / 255 (1)
where p is the original pixel value and P' is the new pixel value, within the range 0 to 1. The pre-processing tasks thus include resizing, to ensure a uniform dimension across the images, followed by scaling each pixel into the range 0 to 1. A data generator object was used to deliver the images in batches of 64. The next step is training the system before finally generating the model.
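The resizing, rescaling, and batching steps described above can be sketched in plain NumPy. This is only an illustration under stated assumptions: the helper names are hypothetical, the image is synthetic, and the paper's actual pipeline used a Keras-style data generator rather than these functions.

```python
import numpy as np

def rescale(pixels):
    """Apply Eq. (1): map 8-bit intensities (0-255) into the range 0 to 1."""
    return pixels.astype(np.float32) / 255.0

def batches(images, batch_size=64):
    """Yield images in batches of 64, mimicking the data generator object."""
    for start in range(0, len(images), batch_size):
        yield images[start:start + batch_size]

# A fake radiograph already resized to the 224x224 input the network expects.
img = np.random.default_rng(0).integers(0, 256, size=(224, 224))
scaled = rescale(img)
print(scaled.shape, float(scaled.min()) >= 0.0, float(scaled.max()) <= 1.0)
```

In Keras, the same effect would roughly correspond to `ImageDataGenerator(rescale=1/255)` with `target_size=(224, 224)` and `batch_size=64` in `flow_from_directory`, though the paper does not show its exact code.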
4.TRAINING THE NETWORK
Transfer learning was carried out from a pre-trained base model (Inception-V3). This is a publicly available model trained on the ImageNet database of 14 million annotated images classified into 1,000 object categories. It is a deep CNN architecture trained for detection and classification in the ImageNet large-scale visual recognition challenge 2014 (ILSVRC14). The network's architecture was specially designed to make optimal use of computing resources. The network is 27 layers deep, including 5 max-pooling layers, as shown in Table 1. To adapt this architecture to the objective of diagnosing pneumonia from X-ray images, a global average pooling layer and a new dense layer were added to the end of the network, and a new two-class output layer replaced the 1000-class softmax output layer.
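The replacement head described above (global average pooling, then a dense two-class softmax) can be illustrated shape-by-shape in NumPy. The 5×5×2048 feature-map size is an assumption (it is what a truncated Inception-V3 emits for 224×224 inputs in common implementations; the paper does not state it), and the random weights are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def global_average_pool(feature_maps):
    """Collapse each HxW feature map to its mean: (H, W, C) -> (C,)."""
    return feature_maps.mean(axis=(0, 1))

def softmax(z):
    """Numerically stable softmax over a 1-D vector of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical final feature block from the truncated Inception-V3 base.
features = rng.standard_normal((5, 5, 2048))

pooled = global_average_pool(features)        # new global average pooling layer
W = rng.standard_normal((2048, 2)) * 0.01     # new dense layer, 2 classes
probs = softmax(pooled @ W)                   # pneumonia vs. normal output

print(pooled.shape, probs.shape, float(probs.sum()))
```

In Keras, this adaptation would roughly be `InceptionV3(include_top=False, weights="imagenet")` followed by `GlobalAveragePooling2D()` and `Dense(2, activation="softmax")`, with the base layers frozen for transfer learning.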
5.MODEL EVALUATION METRICS
The performance evaluation was based on the following metrics:
- Classification accuracy: this is the ratio of correctly classified images to the total number of image samples, as presented in (2).
Accuracy = CCI / TNI (2)
Where, CCI = correctly classified images and TNI = total number of images.
- Precision: this is the ratio of the number of images correctly classified as having pneumonia to the total number of images classified as having pneumonia (correctly plus wrongly classified). This is presented in (3).
Precision = CDP / (CDP + WDP) (3)
Where, CDP = correctly diagnosed pneumonia and WDP = wrongly diagnosed pneumonia.
- Recall (sensitivity): this is the ratio of the number of images correctly classified as having pneumonia to the total number of images that actually have pneumonia (those correctly diagnosed plus those wrongly diagnosed as not having pneumonia). This is presented in (4).
Recall = CDP / (CDP + WDAP) (4)
Where, CDP = correctly diagnosed pneumonia and WDAP = pneumonia cases wrongly diagnosed as not having pneumonia.
- F1 score: this is the harmonic mean of recall and precision, showing the balance between the two. This is presented in (5).
F1 = 2 × (Precision × Recall) / (Precision + Recall) (5)
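Equations (2)-(5) reduce to simple arithmetic on the four confusion-matrix counts. A small sketch (the counts below are illustrative only, not the paper's exact confusion matrix):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 per Eqs. (2)-(5).

    tp/fp: pneumonia correctly/wrongly diagnosed as pneumonia;
    fn: pneumonia cases missed; tn: normal cases correctly classified.
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)          # CCI / TNI
    precision = tp / (tp + fp)                          # CDP / (CDP + WDP)
    recall = tp / (tp + fn)                             # CDP / (CDP + WDAP)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tp=8, fp=2, fn=2, tn=8)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```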
6.RESULT AND DISCUSSION
The implementation of this work was done in the Python programming language in a Python notebook environment. Training was done in the 'train.py' script, evaluation and report generation in the 'evaluate.py' script, and prediction in the 'predict.py' script. The dataset used for this work is of two types: the first contains radiography images of children suffering from pneumonia, and the second contains radiography images of normal children. Samples of the radiography images of normal children are presented in Figure 3, while images of children with pneumonia are presented in Figure 4. The model was trained for 10 epochs. The system achieved a training accuracy of 95.66% with a loss of 0.1135, and a validation accuracy of 93.75% with a loss of 0.0854.
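The epoch-wise loss and accuracy bookkeeping behind these figures can be illustrated with a toy logistic-regression loop. This is a stand-in only: the actual training used Keras's fit loop on the Inception-V3 model, and the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy separable data standing in for the radiograph features.
X = rng.standard_normal((200, 10))
y = (X @ rng.standard_normal(10) > 0).astype(np.float64)

w = np.zeros(10)
history = []
for epoch in range(10):                     # the model was trained for 10 epochs
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid predictions
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    acc = float(np.mean((p > 0.5) == y))
    history.append((loss, acc))             # per-epoch loss/accuracy record
    w -= 0.5 * (X.T @ (p - y) / len(y))     # full-batch cross-entropy gradient step

print(f"epoch 10: loss={history[-1][0]:.4f}, accuracy={history[-1][1]:.2%}")
```

The same pattern (loss falling, accuracy rising across epochs) is what Keras reports per epoch during `model.fit`.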
7.EVALUATION OF THE MODEL
The metrics used for the model evaluation include accuracy, precision, recall, and F1 score. Evaluation of the model on the test set yielded 88.14% accuracy, 90% precision, 85% recall, and an F1 score of 0.87, as presented in Figure 5. This is supported by the confusion matrix presented in Table 2, which shows that out of 624 cases, pneumonia was predicted 390 times and normal was predicted 234 times.
8.THE CONFUSION MATRIX
The performance of a classifier on a set of test data for which the true values are known is often described using a confusion matrix. The confusion matrix in Table 2 shows the actual and predicted classes for the cases in the test set of this work. From the table, the total predicted as normal is 234 and the total predicted as pneumonia is 390, while the actual totals are 180 normal and 444 pneumonia.
9.CONCLUSION
It can be concluded that using a pre-trained model reduces training time and yields better performance in the detection of pneumonia in chest radiographs. This further shows that deep neural networks can be trained with little data to achieve a good recognition rate. Evaluation of the model on the test set yielded 88.14% accuracy, 90% precision, 85% recall, and an F1 score of 0.87. The model is very fast and can be used in medical departments for the analysis of chest radiographs for pneumonia detection. The accuracy of 88.14% can still be improved upon by further training the network.
Corresponding Author:
Ojo Abayomi Fagbuagun
Department of Computer Science, Faculty of Science, Federal University Oye Ekiti, Km 3, Oye-Are Road, Oye-Ekiti, Ekiti State, Nigeria
Email: [email protected]
BIOGRAPHIES OF AUTHORS
Ojo Abayomi Fagbuagun holds a Ph.D. in Computer Science from The Federal University of Technology, Akure, Ondo State, Nigeria. He is a researcher in medical image processing, data analysis, and software engineering. He can be contacted at email: agelasticalibree@gmail.com.
Obinna Nwankwo holds an M.Sc. in Computer Science from the University of Lagos, Akoka, Nigeria. He got his Bachelor of Science degree from Cross River University of Technology, Calabar. His research interests are in software engineering, artificial intelligence, and machine learning. He is a member of the Nigerian Computer Society (NCS). He can be contacted at email: [email protected].
Samson Adebisi Akinpelu holds an M.Sc. and a B.Sc. in Computer Science, and his research interests are in machine learning, artificial intelligence, and software engineering. He also specializes in block-based programming for developing solutions for organizational progress and enhancement. He can be contacted at email: [email protected].
Olaiya Folorunsho obtained his Ph.D. at the University of Ilorin, Nigeria, and his Master of Science (M.Sc.) degree from the University of Ibadan. He is a member of the Computer Professionals Registration Council of Nigeria and the Nigerian Computer Society (NCS). His research interests include information security, data mining, and artificial intelligence. He can be contacted at email: [email protected].
REFERENCES
[1] T. Rahman et al., "Transfer learning with deep convolutional neural network (CNN) for pneumonia detection using chest X-ray," Applied Sciences, vol. 10, no. 9, 2020, doi: 10.3390/app10093233.
[2] I. Rudan, C. Boschi-Pinto, Z. Biloglav, K. Mulholland, and H. Campbell, "Epidemiology and etiology of childhood pneumonia," Bulletin of the World Health Organization, vol. 86 no. 5, pp. 408-416, May 2008, doi:10.2471/BLT.07.048769.
[3] I. Rudan et al., "Epidemiology and etiology of childhood pneumonia in 2010: estimates of incidence, severe morbidity, mortality, underlying risk factors and causative pathogens for 192 countries," Journal of Global Health, vol. 3, no. 1, pp. 1-14, 2013. [Online]. Available: https://www.researchgate.net/publication/258420816_Epidemiology_and_etiology_of_childhood_pneumonia_in_2010_Estimates_of_incidence_severe_morbidity_mortality_underlying_risk_factors_and_causative_pathogens_for_192_countries
[4] K. Kallianos et al., "How far have we come? artificial intelligence for chest radiograph interpretation," Clinical Radiology, vol. 74, no. 5, pp. 338-345, 2019, doi: 10.1016/j.crad.2018.12.015.
[5] T. Cherian et al., "Standardized interpretation of pediatric chest radiographs for the diagnosis of pneumonia in epidemiological studies," Bulletin of the World Health Organization, vol. 83, no. 5, pp. 353-359, 2005. [Online]. Available: https://www.who.int/bulletin/volumes/83/5/353.pdf?ua=1
[6] World Health Organization, "Standardization of interpretation of chest radiographs for the diagnosis of pneumonia in children," World Health Organization; Geneva, Switzerland, 2001. [Online]. Available: https://apps.who.int/iris/bitstream/handle/ 10665/66956/WHO_V_and_B_01.35.pdf
[7] M. Fiszman, W. W. Chapman, D. Aronsky, R. S. Evans, and P. J. Haug, "Automatic detection of acute bacterial pneumonia from chest X-ray reports," Journal of the American Medical Informatics Association, vol. 7, no. 6, pp. 593-604, 2000, doi: 10.1136/jamia.2000.0070593.
[8] J. Evertsen, D. J. Baumgardner, A. Regnery, and I. Banerjee, "Diagnosis and management of pneumonia and bronchitis in outpatient primary care practices," Primary Care Respiratory Journal, vol. 19, no. 3, pp. 237-241, 2010, doi: 10.4104/pcrj.2010.00024.
[9] D. Wootton and C. Feldman, "The diagnosis of pneumonia requires a chest radiograph (X-ray) - yes, no or sometimes," Pneumonia, vol. 5 pp. 1-7, Jun. 2014, doi: 10.15172/pneu.2014.5/464.
[10] W. C. Dai et al., "CT imaging and differential diagnosis of COVID-19," Canadian Association of Radiology Journal, vol. 71, no. 2, pp. 195-200, May, 2020, doi: 10.1177/0846537120913033.
[11] D. Ganguly, S. Chakraborty, M. Balitanas, and T. Kim, "Medical Imaging: A Review," Conference paper Communication in Computer and Information Science, 2010, vol. 78, pp. 504-506, doi: 10.1007/978-3-642-16444-6_63.
[12] D. Shen, G. Wu, and H.-Il Suk, "Deep learning in medical image analysis," Annual Review of Biomedical Engineering, vol. 19, no. 1, pp. 221-248, 2017, doi: 10.1146/annurev-bioeng-071516-044442.
[13] A. S. Lundervold and A. Lundervold, "An overview of deep learning in medical imaging focusing on MRI," Zeitschrift für Medizinische Physik, vol. 19, no. 2, pp. 102-127, May 2019, doi: 10.1016/j.zemedi.2018.11.002.
[14] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85-117, 2015, doi: 10.1016/j.neunet.2014.09.003.
[15] Y. Bengio, "Learning deep architectures for artificial intelligence," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1-127, 2009, doi: 10.1561/2200000006.
[16] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436-444, 2015, doi: 10.1038/nature14539.
[17] D. Silver et al., "Mastering the game of go without human knowledge," Nature, vol. 550, pp. 354-359, 2017, doi: 10.1038/nature24270.
[18] A. Esteva et al., "Dermatologist-level classification of skin cancer with deep neural networks," Nature, vol 542, pp. 115-118, 2017, doi: 10.1038/nature21056.
[19] R. Poplin et al., "Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning," Nature Biomedical Engineering, vol. 2, pp. 158-164, Feb 2018, doi: 10.1038/s41551-018-0195-0.
[20] M. T. Islam, M. A. Aowal, A. T. Minhaz, and K. Ashraf, "Abnormality detection and localization in chest X-rays using deep convolutional neural networks," in Abnormality Detection and Localization in Chest X-rays, ArXiv, Sep. 2017. [Online]. Available: https://arxiv.org/pdf/1705.09850.pdf
[21] L. Yao, E. Poblenz, D. Dagunts, B. Covington, D. Bernard, and K. Lyman, "Learning to diagnose from scratch by exploiting dependencies among labels," Computer Vision and Pattern Recognition, 2018. [Online]. Available: https://arxiv.org/pdf/1710.10501.pdf
[22] G. Capizzi, G. L. Sciuto, P. Monforte, and C. Napoli, "Cascade feed forward neural network-based model for air pollutants evaluation of single monitoring stations in urban areas," International Journal of Electronics and Telecommunications, vol. 61, no. 4, pp. 327-332, 2015, doi: 10.1515/eletel-2015-0042.
[23] D. S. Kermany et al., "Identifying medical diagnoses and treatable diseases by image-based deep learning," Cell, vol. 172, no. 5, pp. 1122-1131, 2018, doi: 10.1016/j.cell.2018.02.010.
[24] S. Hijazi, R. Kumar, and C. Rowen, "Using convolutional neural networks for image recognition," Cadence, 2015. [Online]. Available: https://ip.cadence.com/uploads/901/cnn_wp-pdf
[25] J. Gu et al., "Recent Advances in Convolutional Neural Networks," Pattern Recognition, vol. 77, pp. 354-377, 2018, doi: 10.1016/j.patcog.2017.10.013.
[26] N. D. Bharad, K. Madhu, P. Madhavan, and M. V. R. Kuman, "Prediction of Bacterial Lung Infection Using Modified Convolutional Neural Networks," Journal of Critical Reviews, 2020, vol. 7, no. 8, pp 1390-1393. [Online]. Available: http://www.jcreview.com/admin/Uploads/Files/61c8756c51b424.86054132.pdf
[27] D. Ravi et al., "Deep Learning for Health Informatics," in IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 1, pp. 4-21, Jan. 2017, doi: 10.1109/JBHI.2016.2636665.
[28] N. Liu, L. Wan, Y. Zhang, T. Zhou, H. Huo, and T. Fang, "Exploiting Convolutional Neural Networks with Deeply Local Description for Remote Sensing Image Classification," in IEEE Access, vol. 6, pp. 11215-11228, 2018, doi: 10.1109/ACCESS.2018.2798799.
[29] P. Huang et al., "Added value of computer-aided CT image features for early lung cancer diagnosis with small pulmonary nodules: a matched case-control study," Radiology, vol. 286, no. 1, pp. 286-295, Jan. 2018, doi: 10.1148/radiol.2017162725.
[30] P. Lakhani and B. Sundaram, "Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks," Radiology, vol. 284, no. 2, pp. 574-582, Aug. 2017, doi: 10.1148/radiol.2017162326.
[31] D. Demner-Fushman et al., "Preparing a collection of radiology examinations for distribution and retrieval," Journal of the American Medical Informatics Association, vol. 23, no. 2, pp. 304-310, Jul. 2015, doi: 10.1093/jamia/ocv080.
[32] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, "Chest X-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases," ArXiv, 2017. [Online]. Available: https://www.researchgate.net/publication/316736470_ChestXray8_Hospitalscale_Chest_Xray_Database_and_Benchmarks_on_Weakly-Supervised_Classification_and_Localization_of_Common_Thorax_Diseases
[33] H. -C. Shin, Le Lu, L. Kim, A. Seff, J. Yao, and R. M. Summers, "Interleaved text/image Deep Mining on a large-scale radiology database," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1090-1099, doi: 10.1109/CVPR.2015.7298712.
[34] H. C. Shin, L. Lu, L. Kim, A. Seff, J. Yao, and R. M. Summers, "Interleaved text/image deep mining on a large-scale radiology database for automated image interpretation," Journal of Machine Learning Research, vol. 17, no. 1, pp. 1-31, 2016. [Online]. Available: https://dl.acm.org/doi/pdf/10.5555/2946645.3007060
[35] H. Boussaid and I. Kokkinos, "Fast and Exact: ADMM-Based Discriminative Shape Segmentation with Loopy Part Models," 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 4058-4065, doi: 10.1109/CVPR.2014.517.
[36] U. Avni, H. Greenspan, E. Konen, M. Sharon, and J. Goldberger, "X-ray Categorization and Retrieval on the Organ and Pathology Level, Using Patch-Based Visual Words," in IEEE Transactions on Medical Imaging, vol. 30, no. 3, pp. 733-746, March 2011, doi: 10.1109/TMI.2010.2095026.
[37] J. Melendez et al., "A Novel Multiple-Instance Learning-Based Approach to Computer-Aided Detection of Tuberculosis on Chest X-Rays," in IEEE Transactions on Medical Imaging, vol. 34, no. 1, pp. 179-192, Jan. 2015, doi: 10.1109/TMI.2014.2350539.
[38] S. Jaeger et al., "Automatic Tuberculosis Screening Using Chest Radiographs," in IEEE Transactions on Medical Imaging, vol. 33, no. 2, pp. 233-245, Feb. 2014, doi: 10.1109/TMI.2013.2284099.
[39] Z. Xue et al., "Chest X-ray Image View Classification," 2015 IEEE 28th International Symposium on Computer-Based Medical Systems, 2015, pp. 66-71, doi: 10.1109/CBMS.2015.49.
[40] S. Hermann, "Evaluation of Scan-Line Optimization for 3D Medical Image Registration," 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3073-3080, doi: 10.1109/CVPR.2014.393.
[41] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2818-2826, doi: 10.1109/CVPR.2016.308.
[42] Y. Bar, I. Diamant, L. Wolf, S. Lieberman, E. Konen, and H. Greenspan, "Chest pathology detection using deep learning with non-medical training," 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), 2015, pp. 294-297, doi: 10.1109/ISBI.2015.7163871.
[43] D. Kermany et al., "Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning," Cell, vol. 172, no. 5, pp. 1122-1131, 2018, doi: 10.1016/j.cell.2018.02.010.
AUTHOR AFFILIATIONS
1 Department of Computer Science, Faculty of Science, Federal University Oye Ekiti, Oye Ekiti, Ekiti State, Nigeria
2 Department of Computer Science, College of Computing and Telecommunications, Novena University, Ogume, Delta State, Nigeria