TY - GEN
T1 - Graph Attention Auto-Encoders
AU - Salehi, Amin
AU - Davulcu, Hasan
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11
Y1 - 2020/11
N2 - Auto-encoders have emerged as a successful framework for unsupervised learning. However, conventional auto-encoders are incapable of utilizing explicit relations in structured data. To take advantage of relations in graph-structured data, several graph auto-encoders have recently been proposed, but they neglect to reconstruct either the graph structure or node attributes. In this paper, we present the graph attention auto-encoder (GATE), a neural network architecture for unsupervised representation learning on graph-structured data. Our architecture is able to reconstruct graph-structured inputs, including both node attributes and the graph structure, through stacked encoder/decoder layers equipped with self-attention mechanisms. In the encoder, by considering node attributes as initial node representations, each layer generates new representations of nodes by attending over their neighbors' representations. In the decoder, we attempt to reverse the encoding process to reconstruct node attributes. Moreover, node representations are regularized to reconstruct the graph structure. Our proposed architecture does not need to know the graph structure upfront, and thus it can be applied to inductive learning. Our experiments demonstrate competitive performance on several node classification benchmark datasets for transductive and inductive tasks, even exceeding the performance of supervised learning baselines in most cases.
UR - http://www.scopus.com/inward/record.url?scp=85098774447&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098774447&partnerID=8YFLogxK
U2 - 10.1109/ICTAI50040.2020.00154
DO - 10.1109/ICTAI50040.2020.00154
M3 - Conference contribution
AN - SCOPUS:85098774447
T3 - Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
SP - 989
EP - 996
BT - Proceedings - IEEE 32nd International Conference on Tools with Artificial Intelligence, ICTAI 2020
A2 - Alamaniotis, Miltos
A2 - Pan, Shimei
PB - IEEE Computer Society
T2 - 32nd IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2020
Y2 - 9 November 2020 through 11 November 2020
ER -