This study presents a facial expression synthesis method based on a Generative Adversarial Network (GAN) augmented with attention mechanisms, improving the realism of generated expressions. Evaluated on the MUG and Oulu-CASIA datasets, the method synthesizes six expressions with high clarity (96.63±0.26 confidence for neutral expressions) and smoothness (SSIM > 0.92 across video frames), outperforming StarGAN and ExprGAN in detail preservation and temporal stability. Quantitative metrics and comparative experiments confirm the model's advantages in realism and identity preservation.
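As a minimal illustration of the smoothness criterion reported above (not the authors' evaluation code), the sketch below estimates temporal smoothness of a synthesized expression video by averaging SSIM over consecutive frame pairs; the frame paths and frame count are hypothetical.

```python
# Hedged sketch: temporal smoothness of a generated video as the mean SSIM
# between consecutive frames. File names and directory are illustrative only.
import numpy as np
from skimage import io
from skimage.metrics import structural_similarity as ssim

def temporal_ssim(frames):
    """Mean SSIM between consecutive frames of a synthesized video.

    `frames` is a list of H x W x 3 uint8 arrays; higher values indicate
    smoother transitions between generated expression frames.
    """
    scores = [
        ssim(prev, curr, channel_axis=-1, data_range=255)
        for prev, curr in zip(frames[:-1], frames[1:])
    ]
    return float(np.mean(scores))

# Example usage with hypothetical frame paths:
# frames = [io.imread(f"synth/frame_{i:03d}.png") for i in range(32)]
# print(f"mean consecutive-frame SSIM: {temporal_ssim(frames):.3f}")
```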