
Abstract

The Variational Graph Autoencoder (VGAE) is a widely explored model for learning the distribution of graph data. In existing VGAE-based methods, the approximate posterior distribution is overly restrictive, leaving a significant gap between the variational lower bound and the log-likelihood of the graph data. This limitation reduces the expressive power of such models. To address this issue, this paper proposes the Importance Weighted Variational Graph Autoencoder (IWVGAE) and provides a theoretical justification. The method makes the posterior distribution more flexible through Monte Carlo sampling and assigns importance weights to the likelihood gradients during backpropagation. In this way, IWVGAE obtains a more flexible optimization objective and learns richer latent representations of graph data: it not only achieves a provably tighter variational lower bound but also yields more accurate graph density estimation. Extensive experiments on seven widely used graph datasets show that, as the number of samples drawn from the approximate posterior increases, (1) the variational lower bound consistently improves, validating the proposed theory, and (2) performance on downstream tasks improves significantly, demonstrating more effective learning and representation of graph data.
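The core idea stated in the abstract, replacing the single-sample VGAE objective with a K-sample importance-weighted bound, can be illustrated with the minimal PyTorch-style sketch below. This is not the authors' implementation; names such as iw_bound, mu, logstd, and adj are illustrative assumptions, and the decoder is the standard inner-product reconstruction used by VGAE.

    import math
    import torch
    from torch.distributions import Normal

    def iw_bound(mu, logstd, adj, K=5):
        """Estimate a K-sample importance-weighted lower bound for a VGAE.

        mu, logstd: [N, D] Gaussian posterior parameters from the encoder.
        adj:        [N, N] float adjacency matrix (reconstruction target).
        """
        q = Normal(mu, logstd.exp())                            # approximate posterior q(z | A, X)
        p = Normal(torch.zeros_like(mu), torch.ones_like(mu))   # standard-normal prior p(z)

        log_w = []
        for _ in range(K):                                      # K Monte Carlo samples from q
            z = q.rsample()                                     # reparameterized sample
            logits = z @ z.t()                                  # inner-product decoder
            log_px = -torch.nn.functional.binary_cross_entropy_with_logits(
                logits, adj, reduction="sum")                   # log p(A | z)
            log_pz = p.log_prob(z).sum()                        # log p(z)
            log_qz = q.log_prob(z).sum()                        # log q(z | A, X)
            log_w.append(log_px + log_pz - log_qz)              # unnormalized log importance weight

        log_w = torch.stack(log_w)                              # [K]
        # log (1/K) * sum_k w_k, computed stably with logsumexp; during
        # backpropagation each sample's likelihood gradient is implicitly
        # weighted by its normalized importance weight softmax(log_w).
        return torch.logsumexp(log_w, dim=0) - math.log(K)

With K = 1 this reduces to the usual single-sample VGAE evidence lower bound; increasing K tightens the bound, which mirrors the trend reported in the abstract.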



© The Author(s) 2025. This work is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/).