Explanation methods are widely used to understand model reasoning and decision-making. In this work, we introduce a new perspective on these methods. We first apply Grad-CAM, originally proposed to explain image classification models, to a segmentation network. We then show that small negative gradients can be used to enhance model predictions, without retraining, in cases where the model under-predicts pixels. Instead of discarding negative gradients with ReLU as Grad-CAM does, we propose Drift-Grad-CAM, a novel approach with two heuristic thresholding methods that leverages the informative potential hidden within negative gradients. Applied to U-Net and to DeepLabV3 with a ResNet-50 backbone on two datasets, Drift-Grad-CAM improves the Dice and IoU scores by up to 46% without retraining the model. This demonstrates that small negative gradients are an underestimated but valuable source of information for pixel prediction, and they should be treated as being as meaningful as positive gradients in future work.
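To make the contrast with standard Grad-CAM concrete, the following is a minimal PyTorch sketch of the idea described above: standard Grad-CAM discards negative evidence with a ReLU, whereas a Drift-Grad-CAM-style map retains gradients that are only slightly negative. The specific thresholding rule (a single relative cutoff `tau`), its placement on the gradients, and all function and variable names are illustrative assumptions for this sketch; the two heuristics proposed in the paper are not detailed in the abstract.

```python
# Minimal sketch, assuming a single relative magnitude cutoff `tau`;
# the paper's actual thresholding heuristics may differ.
import torch
import torch.nn.functional as F


def grad_cam(activations: torch.Tensor, gradients: torch.Tensor) -> torch.Tensor:
    """Standard Grad-CAM: pooled-gradient channel weights, ReLU on the map."""
    # activations, gradients: (C, H, W) taken at the chosen convolutional layer
    weights = gradients.mean(dim=(1, 2))                  # alpha_k, one weight per channel
    cam = (weights[:, None, None] * activations).sum(0)   # weighted sum over channels
    return F.relu(cam)                                    # negative evidence is discarded


def drift_grad_cam(activations: torch.Tensor,
                   gradients: torch.Tensor,
                   tau: float = 0.05) -> torch.Tensor:
    """Sketch of a Drift-Grad-CAM-style map that keeps small negative gradients."""
    # Keep positive gradients and "small" negative ones (magnitude below tau
    # relative to the largest gradient); zero out strongly negative gradients.
    cutoff = -tau * gradients.abs().max()
    kept = torch.where(gradients >= cutoff, gradients, torch.zeros_like(gradients))
    weights = kept.mean(dim=(1, 2))
    cam = (weights[:, None, None] * activations).sum(0)
    # No final ReLU: the retained small negative contributions can shift ("drift")
    # the map and recover under-predicted pixels without retraining the model.
    return cam
```

In this sketch, `tau` trades off how much negative evidence is preserved: `tau = 0` reduces the variant to a map built from positive gradients only, while larger values let more negative gradients contribute to the final map.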