Abstract

Trust in visual evidence increasingly depends on reliable image forgery detection and localization. Although evaluation datasets and metrics are becoming standardized, training practices remain fragmented—often relying on unreleased corpora and narrow artifact cues that hinder reproducibility and cross-domain transfer. This dissertation addresses these gaps through integrated frameworks combining model design, data standardization, synthetic data generation, validation, and generalization across manipulation types and applications.

An affinity-guided visual state-space model reformulates image manipulation localization as a three-class task (pristine, source, and target), unifying copy–move and splicing detection. It captures long-range duplication efficiently while distinguishing genuine self-similarity from tampered correspondences, and domain-specific and prompt-conditioned mechanisms enable robust localization across biomedical figures and text-driven edits. Beyond classical forgeries, this work addresses AI-generated local edits, in which traditional noise, compression, and frequency cues are largely diminished, by introducing a mutual information–based artifact derived from image–text conditioning signals.
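To make the three-class formulation concrete, the following minimal sketch shows a per-pixel classifier over pristine, source, and target labels. It is a hypothetical illustration only: a plain convolutional encoder stands in for the dissertation's affinity-guided visual state-space backbone, and all class indices, layer sizes, and the training loss are assumptions rather than the actual design.

# Minimal sketch of the pristine / source / target localization formulation.
# The encoder below is a placeholder; the dissertation uses an affinity-guided
# visual state-space model to capture long-range duplication cues.
import torch
import torch.nn as nn

PRISTINE, SOURCE, TARGET = 0, 1, 2  # hypothetical per-pixel label indices

class ThreeClassLocalizer(nn.Module):
    def __init__(self, in_ch: int = 3, feat_ch: int = 64, num_classes: int = 3):
        super().__init__()
        # Placeholder convolutional encoder (illustrative only).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Per-pixel head predicting pristine vs. source vs. target regions.
        self.head = nn.Conv2d(feat_ch, num_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))  # (B, 3, H, W) logits

if __name__ == "__main__":
    model = ThreeClassLocalizer()
    image = torch.randn(1, 3, 256, 256)
    labels = torch.randint(0, 3, (1, 256, 256))        # hypothetical ground-truth mask
    loss = nn.CrossEntropyLoss()(model(image), labels)  # standard 3-class training loss
    print(model(image).argmax(dim=1).shape, float(loss))

Under this labeling, a copy–move forgery yields both source and target regions for the duplicated content, while spliced content receives only the target label, which is what lets a single head cover both manipulation types.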

On the data side, standardized benchmarks are established across splicing, copy–move, removal, inpainting, and enhancement, with protocolized splits and consistent post-processing to ensure reproducible evaluation under identical settings in both natural and biomedical domains. A vision–language–guided diffusion pipeline synthesizes semantically controlled forgeries with an automatic verification loop that maintains fidelity and annotation quality.
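The automatic verification loop in the synthesis pipeline can be read as a generate-then-check cycle. The sketch below is a minimal, hypothetical rendering of that idea; the generate and verify callables, the fidelity threshold, and the retry budget are placeholders, not the dissertation's actual components.

# Minimal sketch of a generate-and-verify loop for vision-language-guided
# forgery synthesis. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class ForgerySample:
    image: object     # edited image (e.g., an array or PIL.Image)
    mask: object      # ground-truth manipulation mask used as localization label
    prompt: str       # text instruction that drove the edit
    fidelity: float   # verifier score in [0, 1]

def synthesize_verified_forgery(
    source_image: object,
    prompt: str,
    generate: Callable[[object, str], Tuple[object, object]],   # returns (edited_image, mask)
    verify: Callable[[object, object, str], float],             # fidelity / annotation-quality score
    min_fidelity: float = 0.8,   # assumed acceptance threshold
    max_attempts: int = 5,       # assumed retry budget
) -> Optional[ForgerySample]:
    """Regenerate until the verifier accepts the edit, or give up."""
    for _ in range(max_attempts):
        edited, mask = generate(source_image, prompt)
        score = verify(edited, mask, prompt)
        if score >= min_fidelity:
            return ForgerySample(edited, mask, prompt, score)
    return None  # rejected: keeps low-quality forgeries out of the benchmark

Rejecting low-scoring edits outright, rather than relabeling them, is what keeps both image fidelity and mask annotations trustworthy, the property the abstract attributes to the verification loop.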

Together, these contributions unify architectural, training, and evaluation practices under a provenance-aware paradigm that bridges handcrafted and generative-era artifacts. The research establishes a reproducible, semantically grounded, and domain-transferable foundation for image manipulation localization, providing both standardized resources and methodological advances to support fair, transparent, and interpretable progress in the field.

Details

Title
Context-Aware Semantic Forgery Detection in Biomedical & Natural Images
Number of pages
186
Publication year
2025
Degree date
2025
School code
0208
Source
DAI-B 87/6(E), Dissertation Abstracts International
ISBN
9798265470966
Committee member
Ferrara, Emilio; O'Leary, Daniel
University/institution
University of Southern California
Department
Computer Science
University location
United States -- California
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32395503
ProQuest document ID
3280355433
Document URL
https://www.proquest.com/dissertations-theses/context-aware-semantic-forgery-detection/docview/3280355433/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic