Abstract

Modern AI applications in signal processing face increasing demands for diverse training data while operating under computational constraints. State-of-the-art generative models, though effective, often require prohibitive resources, limiting their deployment in real-time or embedded systems. This thesis proposes a computationally efficient framework for synthetic signal generation using a two-stage architecture that combines a Vector Quantized Variational Autoencoder (VQ-VAE) with either a decoder-only transformer or a discrete diffusion model. The VQ-VAE encodes high-dimensional signals into discrete latent tokens, significantly reducing model complexity while enabling symbolic sequence modeling. These discrete representations are then modeled using transformer-based autoregressive models or Score Entropy Discrete Diffusion (SEDD) models. We validate this approach on two datasets: TorchSig for radio-frequency signals and AudioMNIST for spoken digits. Our work introduces the first discrete-diffusion-based generative models for both audio and RF data and presents the first transformer-based generative model for RF signals trained entirely in discrete latent space. We also improve and extend an existing discrete-space transformer-based speech synthesis pipeline and perform a comprehensive comparative analysis of these generative models across domains. The results demonstrate that these methods maintain high fidelity, generate diverse and realistic signals, and offer substantial computational advantages. This work establishes a scalable foundation for efficient data augmentation in signal-driven machine learning systems and opens new directions for generative modeling in low-resource environments.
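The key step the abstract describes is the VQ-VAE's mapping of continuous latents to discrete codebook tokens, which the transformer or SEDD model then treats as a symbolic sequence. As a minimal illustrative sketch (not the thesis code; the codebook size, embedding dimension, and function names below are assumptions for illustration), the quantization step amounts to a nearest-neighbor lookup against a learned codebook:

```python
import numpy as np

def quantize(latents, codebook):
    """Map each continuous latent vector to the index of its nearest
    codebook entry -- the core VQ-VAE quantization step that turns
    encoder outputs into discrete tokens."""
    # Squared Euclidean distance from every latent to every codebook entry,
    # computed via broadcasting: shape (num_latents, num_codes).
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)  # one discrete token per latent vector

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # toy codebook: 8 codes, 4-dim embeddings
# Build latents as slightly perturbed copies of known codebook entries.
latents = codebook[[2, 5, 5, 0]] + 0.01 * rng.normal(size=(4, 4))
tokens = quantize(latents, codebook)
print(tokens)  # each latent maps back to its source code's index
```

In the full pipeline, these token sequences would be the training data for the autoregressive transformer or discrete diffusion model, and generated tokens would be decoded back to signals through the VQ-VAE decoder.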

Details

Title: Efficient Signal Synthesis for Data Augmentation Using Generative AI
Author: Kaasaragadda, Yagna Veera Narayan
Publication year: 2025
Publisher: ProQuest Dissertations & Theses
ISBN: 9798315768005
Source type: Dissertation or Thesis
Language of publication: English
ProQuest document ID: 3214106107
Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.