Maridu Bhargavi, Sivadi Balakrishna,
Transfer learning based hybrid feature learning framework for enhanced skin cancer diagnosis using deep feature integration,
Engineering Science and Technology, an International Journal,
Volume 69,
2025,
102135,
ISSN 2215-0986,
https://doi.org/10.1016/j.jestch.2025.102135.
(https://www.sciencedirect.com/science/article/pii/S2215098625001909)
Abstract: Skin cancer remains a major health problem worldwide, and misdiagnosis by dermatologists leads to delayed treatment and poor patient outcomes. To improve survival chances, skin cancer must be identified accurately and promptly, yet current diagnostic methods suffer from weak feature representation and limited model generalization. Key challenges in automated skin cancer classification include variation in lesion appearance, occlusions, and class imbalance in the data, all of which degrade model performance and reliability. To address these issues, this research proposes DRMv2Net, a feature-fusion deep learning model that integrates multiple pre-trained convolutional neural networks to enhance skin cancer diagnosis. The method follows a systematic pipeline of pre-processing, feature extraction, fusion, and classification. Pre-processing techniques, namely adaptive thresholding for hair-artifact removal, image inpainting to remove occlusions, and data augmentation for class balancing, were applied to improve input quality. Using DenseNet201, ResNet101, and MobileNetV2, diverse features such as edges, texture, and color variation were extracted and concatenated into a rich feature representation, followed by fully connected layers for classification. The DRMv2Net model was evaluated extensively on two benchmark datasets, ISIC 2357 and PAD-UFES 20. Compared with standalone CNN models such as DenseNet201, ResNet101, MobileNetV2, VGG19, and Xception, the fused model achieved higher accuracy: 96.11 % on ISIC 2357 and 96.17 % on PAD-UFES 20. These results demonstrate the strength of feature fusion and pre-processing in improving how accurately skin cancer is identified, and offer a robust, scalable solution for automatic medical image classification.
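The fusion step described in the abstract (per-backbone feature vectors concatenated into one joint representation, then fully connected layers for classification) can be sketched at the shape level. The feature dimensions below are the standard pooled output sizes of DenseNet201 (1920), ResNet101 (2048), and MobileNetV2 (1280); the batch size, class count, and single-layer softmax head are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Shape-level sketch of feature fusion (illustrative; exact layer sizes
# and head depth in DRMv2Net are assumptions). After global average
# pooling, each pretrained backbone yields one feature vector per image:
#   DenseNet201 -> 1920-d, ResNet101 -> 2048-d, MobileNetV2 -> 1280-d
rng = np.random.default_rng(0)
batch = 4
f_dense = rng.standard_normal((batch, 1920))  # stand-in for DenseNet201 features
f_res = rng.standard_normal((batch, 2048))    # stand-in for ResNet101 features
f_mob = rng.standard_normal((batch, 1280))    # stand-in for MobileNetV2 features

# Fusion: concatenate along the feature axis into one representation.
fused = np.concatenate([f_dense, f_res, f_mob], axis=1)
print(fused.shape)  # (4, 5248) = 1920 + 2048 + 1280 features per image

# Classification head: one fully connected layer with softmax
# (the paper uses fully connected layers; one layer keeps the sketch short).
n_classes = 7  # assumed number of lesion classes
W = rng.standard_normal((fused.shape[1], n_classes)) * 0.01
b = np.zeros(n_classes)
logits = fused @ W + b
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print(probs.shape)  # (4, 7): one class distribution per image
```

In a full implementation, the three stand-in arrays would instead come from the frozen pretrained backbones applied to the pre-processed images, and the head would be trained on the fused vectors.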
Keywords: Skin cancer; Data augmentation; Feature extraction; Feature fusion; Deep learning; Classification