cogdl.models.nn.pyg_infograph

Module Contents

Classes
SUPEncoder: Encoder used in the supervised model with Set2set in the paper "Order Matters: Sequence to sequence for sets"

Encoder: Encoder stacked with GIN layers

FF: Residual MLP layers

InfoGraph: Implementation of InfoGraph in the paper "InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization"
class cogdl.models.nn.pyg_infograph.SUPEncoder(num_features, dim, num_layers=1)[source]
Bases: torch.nn.Module

Encoder used in the supervised model, with Set2set in the paper "Order Matters: Sequence to sequence for sets" <https://arxiv.org/abs/1511.06391> and NNConv in the paper "Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs" <https://arxiv.org/abs/1704.02901>.
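Set2set, the readout SUPEncoder relies on, pools a variable-size set of node embeddings into a fixed-size vector with attention. The sketch below shows one simplified readout step in plain Python (the real Set2set also updates the query with an LSTM across several steps; that part is omitted here, and `attention_readout` is a hypothetical helper, not the library's API):

```python
import math

def attention_readout(query, node_feats):
    # One Set2set-style readout step (simplified: no LSTM update of the query).
    # Attention scores are dot products of the query with each node embedding,
    # softmax-normalized, then used to form a weighted sum of node features.
    scores = [sum(q * v for q, v in zip(query, feat)) for feat in node_feats]
    m = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(node_feats[0])
    return [
        sum(weights[i] * node_feats[i][j] for i in range(len(node_feats)))
        for j in range(dim)
    ]

# With a zero query, attention is uniform and the readout is the mean node feature.
print(attention_readout([0.0, 0.0], [[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```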
class cogdl.models.nn.pyg_infograph.Encoder(in_feats, hidden_dim, num_layers=3, num_mlp_layers=2, pooling='sum')[source]
Bases: torch.nn.Module

Encoder stacked with GIN layers.

- in_feats : int
    Size of each input sample.
- hidden_dim : int
    Size of the output embedding.
- num_layers : int, optional
    Number of GIN layers, default: 3.
- num_mlp_layers : int, optional
    Number of MLP layers in each GIN layer, default: 2.
- pooling : str, optional
    Aggregation type, default: "sum".
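The `pooling` argument controls how per-node GIN embeddings are aggregated into one vector per graph. A minimal pure-Python sketch of that aggregation step (using a `batch` vector that maps each node to its graph, as in PyTorch Geometric batching; `graph_pooling` is an illustrative helper, not the library's function):

```python
def graph_pooling(node_embeddings, batch, pooling="sum"):
    # node_embeddings: list of per-node feature vectors (GIN layer outputs)
    # batch: batch[i] is the graph id that node i belongs to
    num_graphs = max(batch) + 1
    dim = len(node_embeddings[0])
    out = [[0.0] * dim for _ in range(num_graphs)]
    counts = [0] * num_graphs
    for vec, g in zip(node_embeddings, batch):
        counts[g] += 1
        for j, v in enumerate(vec):
            out[g][j] += v  # sum aggregation
    if pooling == "mean":
        out = [[v / counts[g] for v in row] for g, row in enumerate(out)]
    return out

# Two graphs in one batch: nodes 0-1 belong to graph 0, node 2 to graph 1.
x = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
batch = [0, 0, 1]
print(graph_pooling(x, batch))  # [[4.0, 6.0], [5.0, 6.0]]
```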
class cogdl.models.nn.pyg_infograph.FF(in_feats, out_feats)[source]
Bases: torch.nn.Module

Residual MLP layers.

.. math::
    out = \mathbf{MLP}(x) + \mathbf{Linear}(x)

- in_feats : int
    Size of each input sample.
- out_feats : int
    Size of each output sample.
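The formula above is the whole forward pass: the input goes through an MLP and, in parallel, through a linear shortcut, and the two branch outputs are summed. A toy sketch with scalar-weight stand-ins for the real torch modules (`ff_residual` and the lambdas are illustrative, not the module's actual code):

```python
def ff_residual(x, mlp, linear):
    # FF's forward-pass pattern: out = MLP(x) + Linear(x).
    # The linear shortcut lets gradients bypass the deeper MLP branch.
    return [m + l for m, l in zip(mlp(x), linear(x))]

# Toy stand-ins for the torch submodules (elementwise scaling, no bias):
mlp = lambda x: [2.0 * v for v in x]     # pretend deep MLP branch
linear = lambda x: [0.5 * v for v in x]  # linear shortcut branch

print(ff_residual([1.0, 2.0], mlp, linear))  # [2.5, 5.0]
```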
class cogdl.models.nn.pyg_infograph.InfoGraph(in_feats, hidden_dim, out_feats, num_layers=3, sup=False)[source]
Bases: cogdl.models.BaseModel

Implementation of InfoGraph in the paper "InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization" <https://openreview.net/forum?id=r1lfF2NYvH>.

- in_feats : int
    Size of each input sample.
- out_feats : int
    Size of each output sample.
- num_layers : int, optional
    Number of MLP layers in the encoder, default: 3.
- sup : bool, optional
    Use the supervised model if True, default: False.
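InfoGraph's objective maximizes mutual information between node-level and graph-level representations: a discriminator should score matching (node, graph) pairs high and mismatched pairs low. A minimal pure-Python sketch of the softplus-based Jensen-Shannon MI estimator commonly used for this objective (`infomax_loss` is an illustrative helper under that assumption, not the library's loss code):

```python
import math

def softplus(x):
    # Numerically plain softplus: log(1 + e^x)
    return math.log1p(math.exp(x))

def infomax_loss(pos_scores, neg_scores):
    # Jensen-Shannon-style MI objective: minimize softplus(-score) for
    # matching (node, graph) pairs and softplus(score) for mismatched pairs,
    # pushing positive scores up and negative scores down.
    e_pos = sum(softplus(-s) for s in pos_scores) / len(pos_scores)
    e_neg = sum(softplus(s) for s in neg_scores) / len(neg_scores)
    return e_pos + e_neg

# Well-separated scores give a near-zero loss; uninformative scores do not.
print(infomax_loss([10.0], [-10.0]))  # close to 0
print(infomax_loss([0.0], [0.0]))     # 2 * ln(2) ≈ 1.386
```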