| Topic | Replies | Views | Activity |
|---|---|---|---|
| Welcome! :wave: | 0 | 563 | June 14, 2023 |
| Pure Transformers are Powerful Graph Learners | 5 | 940 | August 17, 2023 |
| Graph-Bert: Only Attention is Needed for Learning Graph Representations | 0 | 1500 | July 26, 2023 |
| Attending to Graph Transformers | 0 | 1621 | July 21, 2023 |
| Transformers for Node Classification | 13 | 766 | July 20, 2023 |
| LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation | 0 | 1084 | July 7, 2023 |
| TabPFN: A Transformer That Solves Small Tabular Classification | 0 | 711 | June 19, 2023 |
| TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual | 0 | 462 | June 15, 2023 |
| Inductive Matrix Completion Based on Graph Neural Networks | 0 | 494 | June 15, 2023 |
| Complex Embeddings for Simple Link Prediction | 0 | 485 | June 15, 2023 |
| High-Resolution Image Synthesis with Latent Diffusion Models | 0 | 501 | June 15, 2023 |
| Do Transformers Really Perform Bad for Graph Representation? | 0 | 572 | June 15, 2023 |
| Global Self-Attention as a Replacement for Graph Convolution | 0 | 462 | June 15, 2023 |
| Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | 0 | 451 | June 15, 2023 |
| Scalable Graph Neural Networks for Heterogeneous Graphs | 0 | 390 | June 15, 2023 |
| Position-based Hash Embeddings For Scaling Graph Neural Networks | 0 | 452 | June 15, 2023 |
| Position-aware Graph Neural Networks | 0 | 526 | June 15, 2023 |
| Graph Neural Networks with Learnable Structural and Positional Representations | 0 | 456 | June 15, 2023 |
| Inductive Graph Embeddings through Locality Encodings | 0 | 501 | June 15, 2023 |