layers

class cogdl.layers.gcn_layer.GCNLayer(in_features, out_features, dropout=0.0, activation=None, residual=False, norm=None, bias=True, **kwargs)[source]

Bases: torch.nn.modules.module.Module

Simple GCN layer, as described in “Semi-Supervised Classification with Graph Convolutional Networks” (https://arxiv.org/abs/1609.02907).

forward(graph, x)[source]

Applies the graph convolution to the input node features x over graph and returns the updated node representations.

reset_parameters()[source]
training: bool
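
Example, a minimal sketch (the construction of a cogdl.data.Graph from an edge_index tensor is an assumption; check your CogDL version)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.gcn_layer import GCNLayer

    # Toy graph: 4 nodes on a directed cycle (hypothetical data)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 8)
    graph = Graph(x=x, edge_index=edge_index)

    layer = GCNLayer(in_features=8, out_features=16, dropout=0.5)
    out = layer(graph, x)  # call the module, not .forward(), so hooks run
    print(out.shape)       # expected: torch.Size([4, 16])
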
class cogdl.layers.gat_layer.GATLayer(in_feats, out_feats, nhead=1, alpha=0.2, attn_drop=0.5, activation=None, residual=False, norm=None)[source]

Bases: torch.nn.modules.module.Module

Sparse version of the GAT layer, as described in “Graph Attention Networks” (https://arxiv.org/abs/1710.10903).

forward(graph, x)[source]

Computes multi-head attention coefficients over the edges of graph and returns the attention-weighted aggregation of neighbor features.

reset_parameters()[source]
training: bool
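
Example, a minimal sketch (same Graph-construction assumption as in the GCNLayer example; whether the nhead outputs are concatenated or averaged is layer-specific, so the output width is not asserted here)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.gat_layer import GATLayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 8)
    graph = Graph(x=x, edge_index=edge_index)

    layer = GATLayer(in_feats=8, out_feats=16, nhead=4, alpha=0.2, attn_drop=0.5)
    out = layer(graph, x)  # output width depends on how the 4 heads are combined
    print(out.shape)
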
class cogdl.layers.sage_layer.MaxAggregator[source]

Bases: object

class cogdl.layers.sage_layer.MeanAggregator[source]

Bases: object

class cogdl.layers.sage_layer.SAGELayer(in_feats, out_feats, normalize=False, aggr='mean', dropout=0.0, norm=None, activation=None, residual=False)[source]

Bases: torch.nn.modules.module.Module

forward(graph, x)[source]

Aggregates neighbor features with the configured aggregator and combines them with each node's own features.

training: bool
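
Example, a minimal sketch (same Graph-construction assumption as in the GCNLayer example; the MeanAggregator/MaxAggregator/SumAggregator classes in this module suggest aggr accepts "mean", "max", and "sum", which is an inference, not verified)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.sage_layer import SAGELayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 8)
    graph = Graph(x=x, edge_index=edge_index)

    layer = SAGELayer(in_feats=8, out_feats=16, aggr="mean", normalize=True)
    out = layer(graph, x)  # aggregate neighbors, then apply the linear map
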
class cogdl.layers.sage_layer.SumAggregator[source]

Bases: object

class cogdl.layers.gin_layer.GINLayer(apply_func=None, eps=0, train_eps=True)[source]

Bases: torch.nn.modules.module.Module

Graph Isomorphism Network layer from the paper “How Powerful are Graph Neural Networks?”.

\[h_i^{(l+1)} = f_\Theta \left((1 + \epsilon)\, h_i^{(l)} + \sum_{j\in\mathcal{N}(i)} h_j^{(l)}\right)\]
Parameters
  • apply_func (callable, optional) – Layer or function applied to update node features.

  • eps (float32, optional) – Initial epsilon value.

  • train_eps (bool, optional) – If True, epsilon will be a learnable parameter.

forward(graph, x)[source]

Computes the GIN update above: sums the neighbor features, adds the (1 + eps)-scaled self feature, and applies apply_func.

training: bool
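
Example, a minimal sketch of the update above with an MLP as apply_func (same Graph-construction assumption as in the GCNLayer example)::

    import torch
    import torch.nn as nn
    from cogdl.data import Graph
    from cogdl.layers.gin_layer import GINLayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 8)
    graph = Graph(x=x, edge_index=edge_index)

    # f_Theta: a small MLP applied after the (1 + eps)-weighted sum aggregation
    mlp = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 16))
    layer = GINLayer(apply_func=mlp, eps=0.0, train_eps=True)
    out = layer(graph, x)
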
class cogdl.layers.gcnii_layer.GCNIILayer(n_channels, alpha=0.1, beta=1, residual=False)[source]

Bases: torch.nn.modules.module.Module

forward(graph, x, init_x)[source]

Propagates x over graph with symmetric normalization and combines the result with the initial features init_x (the GCNII initial residual connection).

reset_parameters()[source]
training: bool
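
Example, a minimal sketch (same Graph-construction assumption as in the GCNLayer example); GCNII layers share one width n_channels and take the first-layer features as init_x::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.gcnii_layer import GCNIILayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 16)
    graph = Graph(x=x, edge_index=edge_index)

    layer = GCNIILayer(n_channels=16, alpha=0.1, beta=0.5)
    h = layer(graph, x, x)  # here the input features double as init_x
    print(h.shape)          # expected: torch.Size([4, 16])
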
class cogdl.layers.deepergcn_layer.BondEncoder(bond_dim_list, emb_size)[source]

Bases: torch.nn.modules.module.Module

forward(edge_attr)[source]

Embeds each categorical bond (edge) feature in edge_attr and sums the per-feature embeddings into a single edge representation of size emb_size.

training: bool
class cogdl.layers.deepergcn_layer.EdgeEncoder(in_feats, out_feats, bias=False)[source]

Bases: torch.nn.modules.module.Module

forward(edge_attr)[source]

Projects the raw edge attributes edge_attr to out_feats with a linear transformation.

training: bool
class cogdl.layers.deepergcn_layer.GENConv(in_feats: int, out_feats: int, aggr: str = 'softmax_sg', beta: float = 1.0, p: float = 1.0, learn_beta: bool = False, learn_p: bool = False, use_msg_norm: bool = False, learn_msg_scale: bool = True, norm: Optional[str] = None, residual: bool = False, activation: Optional[str] = None, num_mlp_layers: int = 2, edge_attr_size: Optional[list] = None)[source]

Bases: torch.nn.modules.module.Module

forward(graph, x)[source]

Performs generalized message passing with the configured aggregator, applies optional message normalization (message_norm), and feeds the result through the output MLP.

message_norm(x, msg)[source]
training: bool
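
Example, a minimal sketch (same Graph-construction assumption as in the GCNLayer example; aggr strings other than the "softmax_sg" default are not enumerated here)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.deepergcn_layer import GENConv

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 16)
    graph = Graph(x=x, edge_index=edge_index)

    conv = GENConv(in_feats=16, out_feats=16, beta=1.0, learn_beta=True, use_msg_norm=True)
    out = conv(graph, x)
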
class cogdl.layers.deepergcn_layer.ResGNNLayer(conv, in_channels, activation='relu', norm='batchnorm', dropout=0.0, out_norm=None, out_channels=-1, residual=True, checkpoint_grad=False)[source]

Bases: torch.nn.modules.module.Module

Implementation of the residual block of DeeperGCN from the paper “DeeperGCN: All You Need to Train Deeper GCNs”.

Parameters
  • conv (nn.Module) – An instance of a GNN layer, receiving (graph, x) as inputs

  • in_channels (int) – Size of the input features

  • activation (str, optional) – Type of activation function, relu as default.

  • norm (str, optional) – Type of normalization, batchnorm as default.

  • dropout (float, optional) – Dropout rate, 0.0 as default.

  • checkpoint_grad (bool, optional) – Use gradient checkpointing to save memory during backpropagation if True, False as default.

forward(graph, x, dropout=None, *args, **kwargs)[source]

Applies the pre-activation residual block (normalization, activation, dropout, then conv) and adds the skip connection.

training: bool
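
Example, a minimal sketch of wrapping a conv in the DeeperGCN residual block (same Graph-construction assumption as in the GCNLayer example)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.deepergcn_layer import GENConv, ResGNNLayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 16)
    graph = Graph(x=x, edge_index=edge_index)

    conv = GENConv(in_feats=16, out_feats=16)
    block = ResGNNLayer(conv, in_channels=16, activation="relu", norm="batchnorm", dropout=0.1)
    out = block(graph, x)  # pre-activation block plus skip connection
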
class cogdl.layers.disengcn_layer.DisenGCNLayer(in_feats, out_feats, K, iterations, tau=1.0, activation='leaky_relu')[source]

Bases: torch.nn.modules.module.Module

Implementation of “Disentangled Graph Convolutional Networks”.

forward(graph, x)[source]

Runs neighborhood routing for the configured number of iterations to disentangle the node representations into K latent channels.

reset_parameters()[source]
training: bool
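
Example, a minimal sketch (same Graph-construction assumption as in the GCNLayer example; that out_feats should be divisible by K, so each disentangled channel gets an equal slice, is an assumption)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.disengcn_layer import DisenGCNLayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 8)
    graph = Graph(x=x, edge_index=edge_index)

    layer = DisenGCNLayer(in_feats=8, out_feats=16, K=4, iterations=3, tau=1.0)
    out = layer(graph, x)  # 4 channels of width 4, routed over 3 iterations
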
class cogdl.layers.han_layer.AttentionLayer(num_features)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]

Computes attention scores over the stacked meta-path-specific embeddings in x and returns their weighted combination.

training: bool
class cogdl.layers.han_layer.HANLayer(num_edge, w_in, w_out)[source]

Bases: torch.nn.modules.module.Module

forward(graph, x)[source]

Applies node-level attention for each of the num_edge edge types and fuses the resulting embeddings with semantic-level attention.

training: bool
class cogdl.layers.mlp_layer.MLP(in_feats, out_feats, hidden_size, num_layers, dropout=0.0, activation='relu', norm=None, act_first=False, bias=True)[source]

Bases: torch.nn.modules.module.Module

Multilayer perceptron with optional normalization.

\[x^{(i+1)} = \sigma(W^{(i)} x^{(i)})\]
Parameters
  • in_feats (int) – Size of each input sample.

  • out_feats (int) – Size of each output sample.

  • hidden_size (int) – Size of each hidden layer.

  • num_layers (int) – Number of linear layers.

  • dropout (float, optional) – Dropout rate, default: 0.0.

  • norm (str, optional) – Type of normalization applied between layers, default: None.

forward(x)[source]

Applies the stack of linear layers with the configured activation, normalization, and dropout.

reset_parameters()[source]
training: bool
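
Example, a minimal sketch; MLP operates on plain tensors, so no graph is needed (the "batchnorm" norm string is an assumption)::

    import torch
    from cogdl.layers.mlp_layer import MLP

    mlp = MLP(in_feats=8, out_feats=4, hidden_size=32, num_layers=3,
              dropout=0.5, activation="relu", norm="batchnorm")
    out = mlp(torch.randn(10, 8))
    print(out.shape)  # expected: torch.Size([10, 4])
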
class cogdl.layers.pprgo_layer.LinearLayer(in_features, out_features, bias=True)[source]

Bases: torch.nn.modules.module.Module

forward(input)[source]

Applies the linear transformation to input.

reset_parameters()[source]
training: bool
class cogdl.layers.pprgo_layer.PPRGoLayer(in_feats, hidden_size, out_feats, num_layers, dropout, activation='relu')[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]

Applies the multi-layer feed-forward network to the node features x.

training: bool
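
Example, a minimal sketch; like MLP, this layer consumes plain feature tensors (that the PPR propagation of PPRGo happens outside this layer is an assumption about usage, not part of this signature)::

    import torch
    from cogdl.layers.pprgo_layer import PPRGoLayer

    layer = PPRGoLayer(in_feats=8, hidden_size=32, out_feats=4, num_layers=2, dropout=0.1)
    out = layer(torch.randn(10, 8))
    print(out.shape)  # expected: torch.Size([10, 4])
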
class cogdl.layers.rgcn_layer.RGCNLayer(in_feats, out_feats, num_edge_types, regularizer='basis', num_bases=None, self_loop=True, dropout=0.0, self_dropout=0.0, layer_norm=True, bias=True)[source]

Bases: torch.nn.modules.module.Module

Implementation of Relational-GCN from the paper “Modeling Relational Data with Graph Convolutional Networks”.

Parameters
  • in_feats (int) – Size of each input embedding.

  • out_feats (int) – Size of each output embedding.

  • num_edge_types (int) – The number of edge types in the knowledge graph.

  • regularizer (str, optional) – Regularizer used to avoid overfitting, basis or bdd, default : basis.

  • num_bases (int, optional) – The number of bases, only used when regularizer is basis, default: None.

  • self_loop (bool, optional) – Add self loop embedding if True, default : True.

  • dropout (float, optional) – Dropout rate, default: 0.0.

  • self_dropout (float, optional) – Dropout rate of self loop embedding, default : 0.0

  • layer_norm (bool, optional) – Use layer normalization if True, default : True

  • bias (bool, optional) – Use bias if True, default: True.

basis_forward(graph, x)[source]
bdd_forward(graph, x)[source]
forward(graph, x)[source]

Dispatches to basis_forward() or bdd_forward() according to the chosen regularizer and adds the self-loop embedding when self_loop is True.

reset_parameters()[source]
training: bool
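
Example, a minimal sketch (loud assumption: how per-edge type ids are attached to the graph is version-specific; storing them as graph.edge_attr below may not match your CogDL version, so check the RGCN model code for the exact convention)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.rgcn_layer import RGCNLayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 16)
    graph = Graph(x=x, edge_index=edge_index)
    graph.edge_attr = torch.tensor([0, 1, 2, 0])  # assumed storage of edge type ids

    layer = RGCNLayer(in_feats=16, out_feats=16, num_edge_types=3,
                      regularizer="basis", num_bases=2)
    out = layer(graph, x)
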

Modified from https://github.com/GraphSAINT/GraphSAINT

class cogdl.layers.saint_layer.SAINTLayer(dim_in, dim_out, dropout=0.0, act='relu', order=1, aggr='mean', bias='norm-nn', **kwargs)[source]

Bases: torch.nn.modules.module.Module

forward(graph, x)[source]
Inputs:

  • graph – normalized adjacency matrix of the sampled subgraph

  • x – 2D matrix of input node features

Outputs:

  • feat_out – 2D matrix of output node features

training: bool
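
Example, a minimal sketch; per the docstring above, graph should carry the normalized adjacency of the sampled subgraph (the output width under order=1 may include concatenated self/neighbor parts, so it is not asserted here)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.saint_layer import SAINTLayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 8)
    graph = Graph(x=x, edge_index=edge_index)  # stands in for a sampled subgraph

    layer = SAINTLayer(dim_in=8, dim_out=16, act="relu", order=1, aggr="mean")
    feat_out = layer(graph, x)
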
class cogdl.layers.sgc_layer.SGCLayer(in_features, out_features, order=3)[source]

Bases: torch.nn.modules.module.Module

forward(graph, x)[source]

Applies the normalized adjacency matrix to the node features order times, then a single linear transformation.

training: bool
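
Example, a minimal sketch (same Graph-construction assumption as in the GCNLayer example)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.sgc_layer import SGCLayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 8)
    graph = Graph(x=x, edge_index=edge_index)

    layer = SGCLayer(in_features=8, out_features=4, order=3)
    out = layer(graph, x)  # 3 propagation steps, then one linear map
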
class cogdl.layers.mixhop_layer.MixHopLayer(num_features, adj_pows, dim_per_pow)[source]

Bases: torch.nn.modules.module.Module

adj_pow_x(graph, x, p)[source]
forward(graph, x)[source]

Computes adj_pow_x for each power in adj_pows, applies the corresponding linear maps, and concatenates the results.

reset_parameters()[source]
training: bool
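
Example, a minimal sketch (same Graph-construction assumption as in the GCNLayer example; that the output width equals sum(dim_per_pow) follows from the concatenation described above)::

    import torch
    from cogdl.data import Graph
    from cogdl.layers.mixhop_layer import MixHopLayer

    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    x = torch.randn(4, 8)
    graph = Graph(x=x, edge_index=edge_index)

    # adjacency powers 0 (identity), 1, and 2, each mapped to 16 features
    layer = MixHopLayer(num_features=8, adj_pows=[0, 1, 2], dim_per_pow=[16, 16, 16])
    out = layer(graph, x)  # expected width: 16 + 16 + 16 = 48
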
class cogdl.layers.se_layer.SELayer(in_channels, se_channels)[source]

Bases: torch.nn.modules.module.Module

Squeeze-and-Excitation block, as described in “Squeeze-and-Excitation Networks” (https://arxiv.org/abs/1709.01507).

forward(x)[source]
training: bool
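
Example, a minimal sketch; the layer reweights feature channels, so the output keeps the input shape::

    import torch
    from cogdl.layers.se_layer import SELayer

    se = SELayer(in_channels=16, se_channels=4)
    x = torch.randn(10, 16)
    out = se(x)
    print(out.shape)  # expected: torch.Size([10, 16])
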