Abstract
The study aimed to achieve the following objectives: (1) to fuse thermal and visible tongue images using various fusion rules of the discrete wavelet transform (DWT) to classify diabetic and normal subjects; (2) to obtain statistical features from the region of interest of the tongue image before and after fusion; (3) to distinguish healthy and diabetic subjects from fused tongue images using deep learning and machine learning algorithms. The study participants comprised 80 normal subjects and 80 age- and sex-matched diabetes patients. Biochemical tests, namely fasting glucose, postprandial glucose, and HbA1c, were performed for all participants. The visible and thermal tongue images were acquired using a digital single-lens reflex (DSLR) camera and a thermal infrared camera, respectively. The visible and thermal tongue images were fused using the wavelet transform method. Gray-level co-occurrence matrix (GLCM) features were then extracted individually from the visible, thermal, and fused tongue images. Machine learning classifiers and deep learning networks, namely VGG16 and ResNet50, were used to classify normal subjects and those with diabetes mellitus. Image quality metrics were computed to compare the classifiers' performance before and after fusion. The support vector machine outperformed the other machine learning classifiers after fusion, with an accuracy of 88.12%, compared to before fusion (thermal: 84.37%; visible: 63.1%). VGG16 produced a classification accuracy of 94.37% after fusion, versus 90.62% and 85% before fusion on the individual thermal and visible tongue images, respectively. These results indicate that fused tongue images might serve as a non-contact tool for pre-screening type II diabetes mellitus.
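To make the fusion step concrete, below is a minimal sketch of single-level Haar DWT image fusion. The abstract states that several fusion rules were evaluated; the particular rules shown here (averaging the approximation band, max-absolute selection for the detail bands) are one common choice and are an assumption, not the paper's confirmed configuration. The Haar transform is implemented directly in NumPy for self-containment.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0  # approximation
    LH = (a + b - c - d) / 2.0  # horizontal detail
    HL = (a - b + c - d) / 2.0  # vertical detail
    HH = (a - b - c + d) / 2.0  # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse single-level 2-D Haar DWT (exact reconstruction)."""
    h, w = LL.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (LL + LH + HL + HH) / 2.0
    img[0::2, 1::2] = (LL + LH - HL - HH) / 2.0
    img[1::2, 0::2] = (LL - LH + HL - HH) / 2.0
    img[1::2, 1::2] = (LL - LH - HL + HH) / 2.0
    return img

def dwt_fuse(visible, thermal):
    """Fuse two equally sized grayscale images in the wavelet domain.

    Assumed fusion rules (one of several the study could use):
    mean for the approximation band, max-absolute for detail bands.
    """
    LLv, LHv, HLv, HHv = haar_dwt2(visible)
    LLt, LHt, HLt, HHt = haar_dwt2(thermal)
    LL = (LLv + LLt) / 2.0
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return haar_idwt2(LL, pick(LHv, LHt), pick(HLv, HLt), pick(HHv, HHt))
```

Fusing an image with itself reproduces the original, which is a quick sanity check that the forward and inverse transforms are consistent; GLCM features would then be extracted from the output of `dwt_fuse` exactly as from the raw images.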
Details
1 SRM Institute of Science and Technology, Department of Biomedical Engineering, College of Engineering and Technology, Kattankulathur, India (GRID:grid.412742.6) (ISNI:0000 0004 0635 5080); Saveetha Institute of Medical and Technical Sciences, Department of Biomedical Engineering, Saveetha School of Engineering, Chennai, India (GRID:grid.412431.1) (ISNI:0000 0004 0444 045X)
2 SRM Institute of Science and Technology, Department of Biomedical Engineering, College of Engineering and Technology, Kattankulathur, India (GRID:grid.412742.6) (ISNI:0000 0004 0635 5080); Batangas University, College of Engineering, Architecture and Fine Arts, Batangas City, Philippines (GRID:grid.442931.9) (ISNI:0000 0004 0501 8146)
3 Prince Mohammad Bin Fahd University, Center for Artificial Intelligence, Khobar, Saudi Arabia (GRID:grid.449337.e) (ISNI:0000 0004 1756 6721)
4 Princess Nourah bint Abdulrahman University, Department of Information Systems, College of Computer and Information Sciences, Riyadh, Saudi Arabia (GRID:grid.449346.8) (ISNI:0000 0004 0501 7602)




