On the distribution alignment of propagation in graph neural networks
Blog Article
Graph neural networks (GNNs) have been widely adopted for modeling graph-structured data. Most existing GNN studies have focused on designing different strategies to propagate information over the graph structure. After systematic investigations, we observe that the propagation step in GNNs matters, but the resulting performance improvement is insensitive to the location where it is applied. Our empirical examination further shows that the performance improvement brought by propagation mostly comes from a phenomenon of distribution alignment, i.e., propagation over graphs actually aligns the underlying distributions of the training and test sets. These findings are instrumental to understanding GNNs, e.g., why decoupled GNNs can work as well as standard GNNs.

Source code: https://github.com/THUDM/DistAlign-GNNs
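To make the propagation step concrete, here is a minimal sketch of feature propagation with a symmetrically normalized adjacency matrix, the operation shared by standard GNNs (where it is interleaved with learned transformations) and decoupled GNNs (where it runs as a standalone pre- or post-processing step). This is an illustrative example, not code from the linked repository; the function names and the toy graph are ours.

```python
import numpy as np

def sym_norm_adj(adj):
    """Symmetrically normalized adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate(features, adj, k=2):
    """Apply k propagation steps X <- A_hat X, smoothing each node's
    features with those of its neighbors."""
    a_hat = sym_norm_adj(adj)
    x = features
    for _ in range(k):
        x = a_hat @ x
    return x

# Toy 4-node path graph with a single scalar feature per node.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([[1.0], [0.0], [0.0], [1.0]])

smoothed = propagate(x, adj, k=10)
# Repeated propagation pulls node features toward each other,
# so the feature variance across nodes shrinks.
print(smoothed.ravel())
```

Because the same `propagate` operator can be applied before the model (to its inputs) or after it (to its predictions), it is a natural lens for the location-insensitivity observation above: the smoothing, and hence the alignment of training and test feature distributions, happens wherever the operator is inserted.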