
Abstract

The increasing sophistication of video tampering techniques poses a significant threat to the integrity and security of Internet of Multimedia Things (IoMT) ecosystems, particularly in resource-constrained edge-cloud infrastructures. This paper introduces the Multiscale Gated Multihead Attention Depthwise Separable CNN (MGMA-DSCNN), a deep learning framework optimized for real-time tampered-video detection in IoMT environments. By integrating lightweight depthwise separable convolutional neural networks (CNNs) with multihead attention mechanisms, MGMA-DSCNN enhances feature extraction while maintaining computational efficiency. Unlike conventional methods, the approach employs a multiscale gated attention mechanism to refine feature representations, effectively identifying deepfake manipulations, frame insertions, splicing, and adversarial forgeries across diverse multimedia streams. Extensive experiments on multiple forensic video datasets, including the HTVD dataset, demonstrate that MGMA-DSCNN outperforms state-of-the-art architectures such as VGGNet-16, ResNet, and DenseNet, achieving a detection accuracy of 98.1%. Furthermore, the framework exploits edge-cloud synergy to distribute computational loads, reducing latency and energy consumption and making it well suited for real-time security surveillance and forensic investigation. These results position MGMA-DSCNN as a scalable, high-performance solution for next-generation intelligent video authentication, offering robust, low-latency detection in dynamic, resource-constrained IoMT environments.
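The abstract describes the architecture only at a high level. As a rough illustration, the following minimal PyTorch sketch shows how a depthwise separable convolution block might be combined with multiscale branches and a gated multihead-attention refinement of the kind the name MGMA-DSCNN suggests. Every module name, hyperparameter, and design choice below is an assumption made for illustration, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparableConv(nn.Module):
    # Depthwise (per-channel) conv followed by a 1x1 pointwise conv:
    # the standard lightweight replacement for a full convolution.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return F.relu(self.pointwise(self.depthwise(x)))

class MultiscaleGatedAttentionBlock(nn.Module):
    # Hypothetical block: extract features at several spatial scales,
    # fuse them, refine with multihead self-attention, and apply a
    # learned sigmoid gate before the residual connection.
    def __init__(self, channels=64, num_heads=4, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            DepthwiseSeparableConv(channels) for _ in scales)
        self.attn = nn.MultiheadAttention(channels, num_heads,
                                          batch_first=True)
        self.gate = nn.Sequential(nn.Linear(channels, channels),
                                  nn.Sigmoid())

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        fused = 0
        for scale, branch in zip(self.scales, self.branches):
            y = F.avg_pool2d(x, scale) if scale > 1 else x
            y = branch(y)                          # lightweight conv at this scale
            if scale > 1:                          # restore original resolution
                y = F.interpolate(y, size=(h, w), mode="bilinear",
                                  align_corners=False)
            fused = fused + y                      # fuse the scale branches
        tokens = fused.flatten(2).transpose(1, 2)  # (B, H*W, C)
        refined, _ = self.attn(tokens, tokens, tokens)
        gated = refined * self.gate(refined)       # suppress weak responses
        return x + gated.transpose(1, 2).reshape(b, c, h, w)

A per-frame detector would presumably stack a few such blocks over a lightweight backbone and pool the gated features into a tampered/authentic score; running these blocks on edge devices while aggregating decisions in the cloud is one plausible reading of the edge-cloud split the abstract mentions.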

Full text


Copyright © 2025 Yuwen Shao et al. International Journal of Intelligent Systems published by John Wiley & Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/