Abstract
Deep learning on graphs has recently become a hot research topic in machine learning and has shown great promise in a wide variety of applications, such as recommendation and social network analysis. This dissertation considers a set of more practical and challenging cases of learning on complex graphs: edges with attribute information, nodes with missing attributes, and multiple graphs with overlapping nodes.
Our first work aims to model the topological information in signed directed networks, where the edges carry additional attribute information, i.e., signs and directions. In signed directed networks, different signs and directions have different effects on information propagation, which makes the structural information challenging to model. We propose to decouple the modeling of signs and directions with separate network parameters, while maximizing the log-likelihood of observed edges through a variational evidence lower bound to learn the node representations. The experimental results show the effectiveness of the proposed model on both the link sign prediction and node recommendation tasks.
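As an illustration only, the following is a minimal sketch of this idea, not the dissertation's actual architecture: node embeddings are given a Gaussian variational posterior, sign and direction are scored with separate bilinear parameters, and the negative ELBO over observed edges is minimized. All module and variable names here are assumptions introduced for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignedDirectedELBO(nn.Module):
    def __init__(self, num_nodes, dim):
        super().__init__()
        # Gaussian variational posterior over node embeddings: q(z_i) = N(mu_i, diag(sigma_i^2))
        self.mu = nn.Embedding(num_nodes, dim)
        self.logvar = nn.Embedding(num_nodes, dim)
        # Separate parameters for direction and sign, decoupling the two edge attributes
        self.dir_score = nn.Bilinear(dim, dim, 1)   # scores "src -> dst" directionality
        self.sign_score = nn.Bilinear(dim, dim, 1)  # scores positive vs. negative sign

    def forward(self, src, dst, sign):
        # Reparameterized samples from the posterior of the endpoints
        z_src = self.mu(src) + torch.randn_like(self.mu(src)) * (0.5 * self.logvar(src)).exp()
        z_dst = self.mu(dst) + torch.randn_like(self.mu(dst)) * (0.5 * self.logvar(dst)).exp()
        # Log-likelihood of the observed edge: a direction term plus a sign term
        dir_logit = self.dir_score(z_src, z_dst).squeeze(-1)
        sign_logit = self.sign_score(z_src, z_dst).squeeze(-1)
        log_lik = F.logsigmoid(dir_logit) - F.binary_cross_entropy_with_logits(
            sign_logit, (sign > 0).float(), reduction="none")
        # KL(q || N(0, I)) for the endpoint nodes in this batch
        kl = -0.5 * torch.sum(
            1 + self.logvar(src) - self.mu(src) ** 2 - self.logvar(src).exp(), dim=-1)
        kl += -0.5 * torch.sum(
            1 + self.logvar(dst) - self.mu(dst) ** 2 - self.logvar(dst).exp(), dim=-1)
        # Negative ELBO, to be minimized by a standard optimizer
        return (-log_lik + kl).mean()
```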
Our second work considers learning on attribute-missing graphs, where the attributes of some nodes are entirely missing. Previous graph learning algorithms have limitations in dealing with such graphs. Random walk based methods suffer from sampling bias over the structures. Popular graph neural networks feed the structures and attributes into a shared network, and thus are incompatible with attribute-missing nodes. To better learn on attribute-missing graphs, we treat the structures and attributes as two correlated views of the node information and make a shared latent space assumption for these two views. Based on this assumption, we propose to model the two views with two different encoders while maintaining their joint distribution through a novel distribution matching scheme. Extensive experiments on seven real-world datasets show the superiority of the proposed model on both the link prediction task and the newly introduced node attribute completion task.
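The sketch below is a simplified illustration of the two-encoder, shared-latent-space idea under stated assumptions; the encoder choices, the MMD-based matching term, and all names are illustrative, not the dissertation's exact scheme.

```python
import torch
import torch.nn as nn

def mmd_rbf(x, y, sigma=1.0):
    """RBF-kernel MMD between two batches, used here as a stand-in
    distribution matching term between the two latent views."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class TwoViewModel(nn.Module):
    def __init__(self, num_nodes, attr_dim, latent_dim):
        super().__init__()
        # Structure view: a per-node embedding stands in for a structure encoder
        self.struct_enc = nn.Embedding(num_nodes, latent_dim)
        # Attribute view: encodes raw attributes, available only for some nodes
        self.attr_enc = nn.Sequential(nn.Linear(attr_dim, latent_dim), nn.ReLU(),
                                      nn.Linear(latent_dim, latent_dim))
        # Decoder that completes attributes from the shared latent space
        self.attr_dec = nn.Linear(latent_dim, attr_dim)

    def forward(self, node_ids, attrs, observed_mask):
        z_struct = self.struct_enc(node_ids)
        z_attr = self.attr_enc(attrs[observed_mask])
        # Reconstruct attributes of observed nodes from their *structure* latent,
        # so attribute-missing nodes can later be completed the same way
        recon = self.attr_dec(z_struct[observed_mask])
        recon_loss = nn.functional.mse_loss(recon, attrs[observed_mask])
        # Align the two latent distributions (shared latent space assumption)
        match_loss = mmd_rbf(z_struct[observed_mask], z_attr)
        return recon_loss + match_loss
```

At inference time, attribute-missing nodes are completed by passing their structure latents through the attribute decoder.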
Moreover, multiple graphs may share overlapping nodes and together form one complex graph. A common case is the user-item bipartite graphs in cross-domain recommendation, where users are shared while items come from different domains. Previous methods usually emphasize the overlapping features of user preferences while compromising the domain-specific features, or learn the domain-specific features through heuristic human knowledge. Our third work proposes to learn both kinds of features in a more practical way through an equivalent transformation assumption, which hypothesizes that a user's preferences in different domains can be mutually converted to each other by equivalent transformations. A novel equivalent-transformation-based distribution matching scheme is then developed to model the joint distribution of user behaviors across domains and perform the recommendation task. The results on three real-world benchmarks confirm the superiority of the proposed model.
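The following minimal sketch illustrates the equivalent transformation assumption for two domains; the linear mappings, the consistency loss, and all layer names are assumptions made for illustration rather than the dissertation's method.

```python
import torch
import torch.nn as nn

class CrossDomainET(nn.Module):
    def __init__(self, num_users, num_items_a, num_items_b, dim):
        super().__init__()
        # Domain-specific preference embeddings for the shared users
        self.user_a = nn.Embedding(num_users, dim)
        self.user_b = nn.Embedding(num_users, dim)
        self.item_a = nn.Embedding(num_items_a, dim)
        self.item_b = nn.Embedding(num_items_b, dim)
        # Learnable transformations converting preferences between the two domains
        self.a_to_b = nn.Linear(dim, dim, bias=False)
        self.b_to_a = nn.Linear(dim, dim, bias=False)

    def forward(self, users, items_a, items_b):
        ua, ub = self.user_a(users), self.user_b(users)
        # In-domain recommendation scores (a ranking loss such as BPR would go on top)
        score_a = (ua * self.item_a(items_a)).sum(-1)
        score_b = (ub * self.item_b(items_b)).sum(-1)
        # Transformation consistency: mapping preferences from one domain
        # should recover the user's preferences in the other domain
        trans_loss = (nn.functional.mse_loss(self.a_to_b(ua), ub)
                      + nn.functional.mse_loss(self.b_to_a(ub), ua))
        return score_a, score_b, trans_loss
```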
You have requested "on-the-fly" machine translation of selected content from our databases. This functionality is provided solely for your convenience and is in no way intended to replace human translation. Show full disclaimer
Neither ProQuest nor its licensors make any representations or warranties with respect to the translations. The translations are automatically generated "AS IS" and "AS AVAILABLE" and are not retained in our systems. PROQUEST AND ITS LICENSORS SPECIFICALLY DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING WITHOUT LIMITATION, ANY WARRANTIES FOR AVAILABILITY, ACCURACY, TIMELINESS, COMPLETENESS, NON-INFRINGMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Your use of the translations is subject to all use restrictions contained in your Electronic Products License Agreement and by using the translation functionality you agree to forgo any and all claims against ProQuest or its licensors for your use of the translation functionality and any output derived there from. Hide full disclaimer