
Abstract

Skin cancer remains a major global health concern in which early detection significantly improves treatment outcomes. Traditional diagnosis relies on expert visual evaluation, which is prone to error, and current CNN-based models struggle to classify underrepresented skin lesion classes due to dataset imbalance, failing to achieve consistently high performance across diverse populations. There is therefore a pressing need for a robust, efficient, and interpretable model to aid dermatologists in early and accurate diagnosis. This study proposes DSSCC-Net, a novel deep learning framework that integrates an optimized CNN architecture with SMOTE-Tomek resampling to address class imbalance. The model processes dermoscopic images resized to 28 × 28 pixels and employs data augmentation, dropout layers, and ReLU activation to enhance feature extraction and reduce overfitting; performance is evaluated with accuracy, precision, recall, F1-score, and AUC, alongside Grad-CAM for interpretability. Trained and validated on the HAM10000, ISIC 2018, and PH2 datasets, DSSCC-Net achieved an average accuracy of 97.82% ± 0.37%, balanced precision and recall of 97%, an AUC of 99.43%, and a low loss value (0.1677), indicating strong generalization. On HAM10000 it reaches 98% classification accuracy, outperforming state-of-the-art models such as VGG-16 (91.12%), ResNet-152 (89.32%), and EfficientNet-B0 (89.46%), with the SMOTE-Tomek integration significantly improving minority-class detection. Grad-CAM visualizations, validated against expert-labeled masks, confirm the model's explainability and its readiness for real-world clinical integration.
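To make the class-imbalance handling concrete, the sketch below illustrates the two ideas behind SMOTE-Tomek in plain NumPy: SMOTE synthesizes minority samples by interpolating between nearest minority neighbours, and Tomek-link cleaning removes majority samples that sit on the class boundary. This is a minimal illustration of the technique, not the paper's implementation; the function names and the label-0-as-majority convention are assumptions for the example.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch (illustrative, not the paper's code):
    synthesize n_new samples by interpolating each random seed point
    toward one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    k = min(k, n - 1)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]         # k nearest neighbours per sample
    seeds = rng.integers(0, n, size=n_new)    # random seed samples
    nbrs = nn[seeds, rng.integers(0, k, size=n_new)]
    gaps = rng.random((n_new, 1))             # interpolation factor in [0, 1)
    return X_min[seeds] + gaps * (X_min[nbrs] - X_min[seeds])

def drop_tomek_links(X, y):
    """Tomek-link cleaning sketch: a Tomek link is a pair of mutual
    nearest neighbours with opposite labels; drop the majority-class
    member (here assumed to be label 0) of each such pair."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                     # single nearest neighbour
    link = (nn[nn] == np.arange(len(X))) & (y != y[nn])
    keep = ~(link & (y == 0))
    return X[keep], y[keep]
```

In practice a combined resampler of this kind (e.g. as provided by the imbalanced-learn library) is applied to the training split only, before the CNN is fit, so the test distribution stays untouched.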
DSSCC-Net sets a new benchmark for skin cancer classification by effectively addressing class imbalance and computational limitations. Its high interpretability, achieved through Grad-CAM, makes it a practical tool for clinical deployment. Future work includes extending this framework to other medical imaging domains and developing real-time diagnostic applications.
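The Grad-CAM explainability step mentioned above reduces to a short computation once the convolutional activations and their gradients are available: each feature-map channel is weighted by its global-average-pooled gradient, the weighted maps are summed, and a ReLU keeps only positively contributing regions. The sketch below shows that core math in NumPy under the assumption that the activations and gradients have already been extracted from the trained network; it is an illustration of the standard Grad-CAM formulation, not DSSCC-Net's exact code.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM sketch: given conv-layer activations of shape (C, H, W)
    and the gradients of the target class score w.r.t. those activations,
    return a normalized (H, W) localization heat map."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP of gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum
    cam = np.maximum(cam, 0)                          # ReLU: positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # scale to [0, 1] for overlay
    return cam
```

The resulting map is typically upsampled to the input resolution and overlaid on the dermoscopic image, which is how heat maps like those compared against expert-labeled masks are produced.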

Full text


© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.