models

BaseModel

class cogdl.models.base_model.BaseModel[source]

Bases: torch.nn.modules.module.Module

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(*args)[source]
static get_trainer(task: Any, args: Any) → Optional[Type[cogdl.trainers.base_trainer.BaseTrainer]][source]
graph_classification_loss(batch)[source]
node_classification_loss(data, mask=None)[source]
predict(data)[source]
set_device(device)[source]
set_loss_fn(loss_fn)[source]
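
A typical subclass wires these hooks together. The following minimal sketch (the MyMLP class and its hyperparameter names are illustrative, not part of cogdl) shows the pattern:

import argparse

import torch.nn as nn
import torch.nn.functional as F

from cogdl.models import register_model
from cogdl.models.base_model import BaseModel


@register_model("my_mlp")
class MyMLP(BaseModel):
    @staticmethod
    def add_args(parser: argparse.ArgumentParser):
        # Hyperparameters registered here are read back in build_model_from_args.
        parser.add_argument("--hidden-size", type=int, default=64)

    @classmethod
    def build_model_from_args(cls, args):
        return cls(args.num_features, args.hidden_size, args.num_classes)

    def __init__(self, in_feats, hidden_size, out_feats):
        super().__init__()
        self.fc1 = nn.Linear(in_feats, hidden_size)
        self.fc2 = nn.Linear(hidden_size, out_feats)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))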

Supervised Model

class cogdl.models.supervised_model.SupervisedHeterogeneousNodeClassificationModel[source]

Bases: cogdl.models.base_model.BaseModel, abc.ABC

evaluate(data: Any, nodes: Any, targets: Any) → Any[source]
static get_trainer(taskType: Any, args: Any) → Optional[Type[SupervisedHeterogeneousNodeClassificationTrainer]][source]
loss(data: Any) → Any[source]
class cogdl.models.supervised_model.SupervisedHomogeneousNodeClassificationModel[source]

Bases: cogdl.models.base_model.BaseModel, abc.ABC

static get_trainer(taskType: Any, args: Any) → Optional[Type[SupervisedHomogeneousNodeClassificationTrainer]][source]
loss(data: Any) → Any[source]
predict(data: Any) → Any[source]
class cogdl.models.supervised_model.SupervisedModel[source]

Bases: cogdl.models.base_model.BaseModel, abc.ABC

loss(data: Any) → Any[source]

Embedding Model

class cogdl.models.emb.hope.HOPE(dimension, beta)[source]

Bases: cogdl.models.base_model.BaseModel

The HOPE model from the “Asymmetric Transitivity Preserving Graph Embedding” paper.

Args:
hidden_size (int) : The dimension of node representation.
beta (float) : Parameter in Katz decomposition.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'hope'
train(G)[source]

The authors claim that the Katz index has superior performance in related tasks: S_katz = (M_g)^-1 * M_l = (I - beta*A)^-1 * (beta*A) = (I - beta*A)^-1 * (I - (I - beta*A)) = (I - beta*A)^-1 - I.
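
As a rough illustration of this identity, a dense-matrix sketch of the Katz-based HOPE factorization could look as follows (illustrative only; the actual train implementation may differ, e.g. in using sparse operations):

import numpy as np

def hope_katz_embedding(A: np.ndarray, beta: float, dim: int) -> np.ndarray:
    # Illustrative helper, not part of cogdl.
    # S_katz = (I - beta*A)^-1 - I, per the identity above.
    n = A.shape[0]
    S = np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)
    # HOPE factorizes S with a truncated SVD into source/target embeddings.
    U, sigma, Vt = np.linalg.svd(S)
    sqrt_sigma = np.sqrt(sigma[: dim // 2])
    source = U[:, : dim // 2] * sqrt_sigma
    target = Vt[: dim // 2].T * sqrt_sigma
    return np.concatenate([source, target], axis=1)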

class cogdl.models.emb.spectral.Spectral(dimension)[source]

Bases: cogdl.models.base_model.BaseModel

The Spectral clustering model from the “Leveraging social media networks for classification” paper

Args:
hidden_size (int) : The dimension of node representation.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'spectral'
train(G)[source]

Train the model on graph G and return the learned node embeddings.
class cogdl.models.emb.hin2vec.Hin2vec(hidden_dim, walk_length, walk_num, batch_size, hop, negative, epochs, lr, cpu=True)[source]

Bases: cogdl.models.base_model.BaseModel

The Hin2vec model from the “HIN2Vec: Explore Meta-paths in Heterogeneous Information Networks for Representation Learning” paper.

Args:
hidden_size (int) : The dimension of node representation.
walk_length (int) : The walk length.
walk_num (int) : The number of walks to sample for each node.
batch_size (int) : The batch size of training in Hin2vec.
hop (int) : The number of hops used to construct training samples in Hin2vec.
negative (int) : The number of negative samples for each meta-path pair.
epochs (int) : The number of training iterations.
lr (float) : The initial learning rate of SGD.
cpu (bool) : Use CPU or GPU to train Hin2vec.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'hin2vec'
train(G, node_type)[source]

Train the model on the heterogeneous graph G with the given node types and return the learned node embeddings.
class cogdl.models.emb.netmf.NetMF(dimension, window_size, rank, negative, is_large=False)[source]

Bases: cogdl.models.base_model.BaseModel

The NetMF model from the “Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec” paper.

Args:
hidden_size (int) : The dimension of node representation.
window_size (int) : The actual context size which is considered in the language model.
rank (int) : The rank of the approximate normalized Laplacian.
negative (int) : The number of negative samples in negative sampling.
is_large (bool) : When the window size is large, use the approximate DeepWalk matrix for decomposition.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'netmf'
train(G)[source]

Train the model on graph G and return the learned node embeddings.
class cogdl.models.emb.distmult.DistMult(nentity, nrelation, hidden_dim, gamma, double_entity_embedding=False, double_relation_embedding=False)[source]

Bases: cogdl.models.emb.knowledge_base.KGEModel

The DistMult model from the ICLR 2015 paper “Embedding Entities and Relations for Learning and Inference in Knowledge Bases” <https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ICLR2015_updated.pdf>, borrowed from KnowledgeGraphEmbedding <https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding>.

model_name = 'distmult'
score(head, relation, tail, mode)[source]
class cogdl.models.emb.transe.TransE(nentity, nrelation, hidden_dim, gamma, double_entity_embedding=False, double_relation_embedding=False)[source]

Bases: cogdl.models.emb.knowledge_base.KGEModel

The TransE model from the paper “Translating Embeddings for Modeling Multi-relational Data” <http://papers.nips.cc/paper/5071-translating-embeddings-for-modeling-multi-relational-data.pdf>, borrowed from KnowledgeGraphEmbedding <https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding>.

model_name = 'transe'
score(head, relation, tail, mode)[source]
class cogdl.models.emb.deepwalk.DeepWalk(dimension, walk_length, walk_num, window_size, worker, iteration)[source]

Bases: cogdl.models.base_model.BaseModel

The DeepWalk model from the “DeepWalk: Online Learning of Social Representations” paper

Args:
hidden_size (int) : The dimension of node representation.
walk_length (int) : The walk length.
walk_num (int) : The number of walks to sample for each node.
window_size (int) : The actual context size which is considered in the language model.
worker (int) : The number of workers for word2vec.
iteration (int) : The number of training iterations in word2vec.
static add_args(parser: argparse.ArgumentParser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args) → cogdl.models.emb.deepwalk.DeepWalk[source]

Build a new model instance.

model_name = 'deepwalk'
train(G: networkx.classes.graph.Graph, embedding_model_creator=<class 'gensim.models.word2vec.Word2Vec'>)[source]

Train the model on graph G and return the learned node embeddings.
class cogdl.models.emb.rotate.RotatE(nentity, nrelation, hidden_dim, gamma, double_entity_embedding=False, double_relation_embedding=False)[source]

Bases: cogdl.models.emb.knowledge_base.KGEModel

Implementation of the RotatE model from the paper “RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space” <https://openreview.net/forum?id=HkgEQnRqYQ>, borrowed from KnowledgeGraphEmbedding <https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding>.

model_name = 'rotate'
score(head, relation, tail, mode)[source]
class cogdl.models.emb.gatne.GATNE(dimension, walk_length, walk_num, window_size, worker, epoch, batch_size, edge_dim, att_dim, negative_samples, neighbor_samples, schema)[source]

Bases: cogdl.models.base_model.BaseModel

The GATNE model from the “Representation Learning for Attributed Multiplex Heterogeneous Network” paper

Args:
walk_length (int) : The walk length.
walk_num (int) : The number of walks to sample for each node.
window_size (int) : The actual context size which is considered in the language model.
worker (int) : The number of workers for word2vec.
epoch (int) : The number of training epochs.
batch_size (int) : The size of each training batch.
edge_dim (int) : Number of edge embedding dimensions.
att_dim (int) : Number of attention dimensions.
negative_samples (int) : Negative samples for optimization.
neighbor_samples (int) : Neighbor samples for aggregation.
schema (str) : The metapath schema used in the model. Metapaths are separated with “,”, while node types within each metapath are connected with “-”. For example: “0-1-0,0-1-2-1-0”.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'gatne'
train(network_data)[source]

Train the model on the input network data and return the learned node embeddings.
class cogdl.models.emb.dgk.DeepGraphKernel(hidden_dim, min_count, window_size, sampling_rate, rounds, epoch, alpha, n_workers=4)[source]

Bases: cogdl.models.base_model.BaseModel

The DeepGraphKernel model from the “Deep Graph Kernels” paper.

Args:
hidden_size (int) : The dimension of node representation.
min_count (int) : Parameter in word2vec.
window_size (int) : The actual context size which is considered in the language model.
sampling_rate (float) : Parameter in word2vec.
rounds (int) : The number of iterations in the WL method.
epoch (int) : The number of training iterations.
alpha (float) : The learning rate of word2vec.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

static feature_extractor(data, rounds, name)[source]
forward(graphs, **kwargs)[source]
model_name = 'dgk'
save_embedding(output_path)[source]
static wl_iterations(graph, features, rounds)[source]
class cogdl.models.emb.grarep.GraRep(dimension, step)[source]

Bases: cogdl.models.base_model.BaseModel

The GraRep model from the “Grarep: Learning graph representations with global structural information” paper.

Args:
hidden_size (int) : The dimension of node representation.
step (int) : The maximum order of transition probability.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'grarep'
train(G)[source]

Train the model on graph G and return the learned node embeddings.
class cogdl.models.emb.dngr.DNGR(hidden_size1, hidden_size2, noise, alpha, step, max_epoch, lr, cpu)[source]

Bases: cogdl.models.base_model.BaseModel

The DNGR model from the “Deep Neural Networks for Learning Graph Representations” paper

Args:
hidden_size1 (int) : The size of the first hidden layer.
hidden_size2 (int) : The size of the second hidden layer.
noise (float) : Denoising rate of the DAE.
alpha (float) : Parameter in DNGR.
step (int) : The maximum number of steps in random surfing.
max_epoch (int) : The maximum number of epochs in the training step.
lr (float) : Learning rate in DNGR.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

get_denoised_matrix(mat)[source]
get_emb(matrix)[source]
get_ppmi_matrix(mat)[source]
model_name = 'dngr'
random_surfing(adj_matrix)[source]
scale_matrix(mat)[source]
train(G)[source]

Train the model on graph G and return the learned node embeddings.
class cogdl.models.emb.pronepp.ProNEPP(filter_types, svd, search, max_evals=None, loss_type=None, n_workers=None)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'prone++'
class cogdl.models.emb.graph2vec.Graph2Vec(dimension, min_count, window_size, dm, sampling_rate, rounds, epoch, lr, worker=4)[source]

Bases: cogdl.models.base_model.BaseModel

The Graph2Vec model from the “graph2vec: Learning Distributed Representations of Graphs” paper

Args:
hidden_size (int) : The dimension of node representation.
min_count (int) : Parameter in doc2vec.
window_size (int) : The actual context size which is considered in the language model.
sampling_rate (float) : Parameter in doc2vec.
dm (int) : Parameter in doc2vec.
rounds (int) : The number of iterations in the WL method.
epoch (int) : The maximum number of epochs in the training step.
lr (float) : Learning rate in doc2vec.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

static feature_extractor(data, rounds, name)[source]
forward(graphs, **kwargs)[source]
model_name = 'graph2vec'
save_embedding(output_path)[source]
static wl_iterations(graph, features, rounds)[source]
class cogdl.models.emb.metapath2vec.Metapath2vec(dimension, walk_length, walk_num, window_size, worker, iteration, schema)[source]

Bases: cogdl.models.base_model.BaseModel

The Metapath2vec model from the “metapath2vec: Scalable Representation Learning for Heterogeneous Networks” paper

Args:
hidden_size (int) : The dimension of node representation.
walk_length (int) : The walk length.
walk_num (int) : The number of walks to sample for each node.
window_size (int) : The actual context size which is considered in the language model.
worker (int) : The number of workers for word2vec.
iteration (int) : The number of training iterations in word2vec.
schema (str) : The metapath schema used in the model. Metapaths are separated with “,”, while node types within each metapath are connected with “-”. For example: “0-1-0,0-2-0,1-0-2-0-1”.
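
The schema string can be parsed with plain string operations, e.g.:

schema = "0-1-0,0-2-0,1-0-2-0-1"
metapaths = [path.split("-") for path in schema.split(",")]
# [['0', '1', '0'], ['0', '2', '0'], ['1', '0', '2', '0', '1']]
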
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'metapath2vec'
train(G, node_type)[source]

Train the model on the heterogeneous graph G with the given node types and return the learned node embeddings.
class cogdl.models.emb.node2vec.Node2vec(dimension, walk_length, walk_num, window_size, worker, iteration, p, q)[source]

Bases: cogdl.models.base_model.BaseModel

The node2vec model from the “node2vec: Scalable feature learning for networks” paper

Args:
hidden_size (int) : The dimension of node representation.
walk_length (int) : The walk length.
walk_num (int) : The number of walks to sample for each node.
window_size (int) : The actual context size which is considered in the language model.
worker (int) : The number of workers for word2vec.
iteration (int) : The number of training iterations in word2vec.
p (float) : Parameter in node2vec.
q (float) : Parameter in node2vec.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'node2vec'
train(G)[source]

Train the model on graph G and return the learned node embeddings.
class cogdl.models.emb.complex.ComplEx(nentity, nrelation, hidden_dim, gamma, double_entity_embedding=False, double_relation_embedding=False)[source]

Bases: cogdl.models.emb.knowledge_base.KGEModel

The implementation of the ComplEx model from the paper “Complex Embeddings for Simple Link Prediction” <http://proceedings.mlr.press/v48/trouillon16.pdf>, borrowed from KnowledgeGraphEmbedding <https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding>.

model_name = 'complex'
score(head, relation, tail, mode)[source]
class cogdl.models.emb.pte.PTE(dimension, walk_length, walk_num, negative, batch_size, alpha)[source]

Bases: cogdl.models.base_model.BaseModel

The PTE model from the “PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks” paper.

Args:
hidden_size (int) : The dimension of node representation.
walk_length (int) : The walk length.
walk_num (int) : The number of walks to sample for each node.
negative (int) : The number of negative samples for each edge.
batch_size (int) : The batch size of training in PTE.
alpha (float) : The initial learning rate of SGD.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'pte'
train(G, node_type)[source]

Train the model on the heterogeneous graph G with the given node types and return the learned node embeddings.
class cogdl.models.emb.netsmf.NetSMF(dimension, window_size, negative, num_round, worker)[source]

Bases: cogdl.models.base_model.BaseModel

The NetSMF model from the “NetSMF: Large-Scale Network Embedding as Sparse Matrix Factorization” paper.

Args:
hidden_size (int) : The dimension of node representation.
window_size (int) : The actual context size which is considered in the language model.
negative (int) : The number of negative samples in negative sampling.
num_round (int) : The number of rounds in NetSMF.
worker (int) : The number of workers for NetSMF.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'netsmf'
train(G)[source]

Train the model on graph G and return the learned node embeddings.
class cogdl.models.emb.line.LINE(dimension, walk_length, walk_num, negative, batch_size, alpha, order)[source]

Bases: cogdl.models.base_model.BaseModel

The LINE model from the “Line: Large-scale information network embedding” paper.

Args:
hidden_size (int) : The dimension of node representation.
walk_length (int) : The walk length.
walk_num (int) : The number of walks to sample for each node.
negative (int) : The number of negative samples for each edge.
batch_size (int) : The batch size of training in LINE.
alpha (float) : The initial learning rate of SGD.
order (int) : 1 preserves 1st-order proximity, 2 preserves 2nd-order proximity, and 3 preserves both (each taking dimension/2 of the node representation, as sketched below).
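
For order=3, the two proximities are learned separately and concatenated, roughly as in this sketch (variable names are illustrative):

import numpy as np

num_nodes, dimension = 100, 128
emb_first = np.random.rand(num_nodes, dimension // 2)   # 1st-order embedding
emb_second = np.random.rand(num_nodes, dimension // 2)  # 2nd-order embedding
embeddings = np.concatenate([emb_first, emb_second], axis=1)  # order == 3
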
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'line'
train(G)[source]

Train the model on graph G and return the learned node embeddings.
class cogdl.models.emb.sdne.SDNE(hidden_size1, hidden_size2, droput, alpha, beta, nu1, nu2, max_epoch, lr, cpu)[source]

Bases: cogdl.models.base_model.BaseModel

The SDNE model from the “Structural Deep Network Embedding” paper

Args:
hidden_size1 (int) : The size of the first hidden layer.
hidden_size2 (int) : The size of the second hidden layer.
droput (float) : Dropout rate.
alpha (float) : Trade-off parameter between the 1st-order and 2nd-order objective functions in SDNE.
beta (float) : Parameter of the 2nd-order objective function in SDNE.
nu1 (float) : Parameter of the l1 normalization in SDNE.
nu2 (float) : Parameter of the l2 normalization in SDNE.
max_epoch (int) : The maximum number of epochs in the training step.
lr (float) : Learning rate in SDNE.
cpu (bool) : Use CPU or GPU to train SDNE.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'sdne'
train(G)[source]

Train the model on graph G and return the learned node embeddings.
class cogdl.models.emb.prone.ProNE(dimension, step, mu, theta)[source]

Bases: cogdl.models.base_model.BaseModel

The ProNE model from the “ProNE: Fast and Scalable Network Representation Learning” paper.

Args:
hidden_size (int) : The dimension of node representation.
step (int) : The number of terms in the Chebyshev expansion.
mu (float) : Parameter in ProNE.
theta (float) : Parameter in ProNE.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'prone'
train(G)[source]

Train the model on graph G and return the learned node embeddings.

GNN Model

class cogdl.models.nn.dgi.DGIModel(in_feats, hidden_size, activation)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

embed(data, msk=None)[source]
forward(x, edge_index, edge_attr=None)[source]
static get_trainer(task, args)[source]
loss(data)[source]
model_name = 'dgi'
node_classification_loss(data)[source]
class cogdl.models.nn.mvgrl.MVGRL(in_feats, hidden_size, sample_size=2000, batch_size=4, sparse=False, dataset='cora')[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

embed(data, msk=None)[source]
forward(x, edge_index, edge_attr=None)[source]
static get_trainer(taskType, args)[source]
loss(data)[source]
model_name = 'mvgrl'
node_classification_loss(data)[source]
preprocess(x, edge_index, edge_attr=None)[source]
class cogdl.models.nn.patchy_san.PatchySAN(batch_size, num_features, num_classes, num_sample, stride, num_neighbor, iteration)[source]

Bases: cogdl.models.base_model.BaseModel

The Patchy-SAN model from the “Learning Convolutional Neural Networks for Graphs” paper.

Args:
batch_size (int) : The batch size of training.
sample (int) : Number of chosen vertices.
stride (int) : Node selection stride.
neighbor (int) : The number of neighbors for each node.
iteration (int) : The number of training iterations.
static add_args(parser)[source]

Add model-specific arguments to the parser.

build_model(num_channel, num_sample, num_neighbor, num_class)[source]
classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(batch)[source]
model_name = 'patchy_san'
classmethod split_dataset(dataset, args)[source]
class cogdl.models.nn.pyg_cheb.Chebyshev(in_feats, hidden_size, out_feats, num_layers, dropout, filter_size)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'chebyshev'
predict(data)[source]
class cogdl.models.nn.gcn.TKipfGCN(in_feats, hidden_size, out_feats, num_layers, dropout)[source]

Bases: cogdl.models.base_model.BaseModel

The GCN model from the “Semi-Supervised Classification with Graph Convolutional Networks” paper

Args:
in_features (int) : Number of input features.
out_features (int) : Number of classes.
hidden_size (int) : The dimension of node representation.
dropout (float) : Dropout rate for model training.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index, edge_weight=None)[source]
get_embeddings(x, edge_index)[source]
model_name = 'gcn'
predict(data)[source]
class cogdl.models.nn.gdc_gcn.GDC_GCN(nfeat, nhid, nclass, dropout, alpha, t, k, eps, gdctype)[source]

Bases: cogdl.models.base_model.BaseModel

The GDC model from the “Diffusion Improves Graph Learning” paper, with the PPR and heat matrix variants combined with GCN.

Args:
num_features (int) : Number of input features in the PPR-preprocessed dataset.
num_classes (int) : Number of classes.
hidden_size (int) : The dimension of node representation.
dropout (float) : Dropout rate for model training.
alpha (float) : PPR polynomial filter parameter, 0 to 1.
t (float) : Heat polynomial filter parameter.
k (int) : Top k nodes retained during sparsification.
eps (float) : Threshold for clipping.
gdc_type (str) : “none”, “ppr”, or “heat”.
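
A dense sketch of the PPR diffusion matrix computed by GDC-style preprocessing, S = alpha * (I - (1 - alpha) * T)^-1 with a row-normalized transition matrix T (one common variant; illustrative, not cogdl’s exact preprocessing code):

import numpy as np

def ppr_diffusion(adj: np.ndarray, alpha: float) -> np.ndarray:
    # Row-normalized transition matrix; assumes no isolated nodes.
    T = adj / adj.sum(axis=1, keepdims=True)
    n = adj.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * T)
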
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'gdc_gcn'
node_classification_loss(data)[source]
predict(data=None)[source]
preprocessing(data, gdc_type='ppr')[source]
reset_data(data)[source]
class cogdl.models.nn.pyg_hgpsl.HGPSL(num_features, num_classes, hidden_size, dropout, pooling, sample_neighbor, sparse_attention, structure_learning, lamb)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(data)[source]
model_name = 'hgpsl'
classmethod split_dataset(dataset, args)[source]
class cogdl.models.nn.graphsage.Graphsage(num_features, num_classes, hidden_size, num_layers, sample_size, dropout)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(*args)[source]
static get_trainer(task: Any, args: Any)[source]
inference(x_all, data_loader)[source]
mini_forward(x, edge_index)[source]
mini_loss(data)[source]
model_name = 'graphsage'
node_classification_loss(*args)[source]
predict(data)[source]
sampling(edge_index, num_sample)[source]
set_data_device(device)[source]
class cogdl.models.nn.compgcn.LinkPredictCompGCN(num_entities, num_rels, hidden_size, num_bases=0, layers=1, sampling_rate=0.01, score_func='conve', penalty=0.001, dropout=0.0, lbl_smooth=0.1)[source]

Bases: cogdl.layers.link_prediction_module.GNNLinkPredict, cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

add_reverse_edges(edge_index, edge_types)[source]
classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(edge_index, edge_types)[source]
loss(data, split='train')[source]
model_name = 'compgcn'
predict(edge_index, edge_types)[source]
class cogdl.models.nn.drgcn.DrGCN(num_features, num_classes, hidden_size, num_layers, dropout)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'drgcn'
predict(data)[source]
class cogdl.models.nn.pyg_gpt_gnn.GPT_GNN[source]

Bases: cogdl.models.supervised_model.SupervisedHomogeneousNodeClassificationModel, cogdl.models.supervised_model.SupervisedHeterogeneousNodeClassificationModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

evaluate(data: Any, nodes: Any, targets: Any) → Any[source]
static get_trainer(taskType: Any, args) → Optional[Type[Union[cogdl.trainers.gpt_gnn_trainer.GPT_GNNHomogeneousTrainer, cogdl.trainers.gpt_gnn_trainer.GPT_GNNHeterogeneousTrainer]]][source]
loss(data: Any) → Any[source]
model_name = 'gpt_gnn'
predict(data: Any) → Any[source]
class cogdl.models.nn.pyg_graph_unet.GraphUnet(in_feats: int, hidden_size: int, out_feats: int, pooling_layer: int, pooling_rates: List[float], n_dropout: float = 0.5, adj_dropout: float = 0.3, activation: str = 'elu', improved: bool = False, aug_adj: bool = False)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x: torch.Tensor, edge_index: torch.Tensor, edge_attr: Optional[torch.Tensor] = None) → torch.Tensor[source]
model_name = 'unet'
predict(data)[source]
class cogdl.models.nn.gcnmix.GCNMix(in_feat, hidden_size, num_classes, k, temperature, alpha, rampup_starts, rampup_ends, final_consistency_weight, ema_decay, dropout)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
forward_ema(x, edge_index)[source]
model_name = 'gcnmix'
node_classification_loss(data)[source]
predict(data)[source]
class cogdl.models.nn.diffpool.DiffPool(in_feats, hidden_dim, embed_dim, num_classes, num_layers, num_pool_layers, assign_dim, pooling_ratio, batch_size, dropout=0.5, no_link_pred=True, concat=False, use_bn=False)[source]

Bases: cogdl.models.base_model.BaseModel

The DiffPool model from the paper “Hierarchical Graph Representation Learning with Differentiable Pooling”.

in_feats : int
Size of each input sample.
hidden_dim : int
Size of hidden layer dimension of GNN.
embed_dim : int
Size of embedded node features; output size of the GNN.
num_classes : int
Number of target classes.
num_layers : int
Number of GNN layers.
num_pool_layers : int
Number of pooling layers.
assign_dim : int
Embedding size after the first pooling.
pooling_ratio : float
Pooling ratio applied by each pooling layer.
batch_size : int
Size of each mini-batch.
dropout : float, optional
Dropout rate, default: 0.5.
no_link_pred : bool, optional
If True, disable the link prediction loss, default: True.
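
The core pooling step behind DiffPool coarsens features and adjacency with a soft cluster assignment. A minimal sketch (tensor names are illustrative, not the module’s exact code):

import torch

def diffpool_step(z, s_logits, adj):
    # z: [n, d] node embeddings, s_logits: [n, c] assignment scores, adj: [n, n]
    s = torch.softmax(s_logits, dim=-1)
    x_pooled = s.t() @ z          # [c, d] pooled node features
    adj_pooled = s.t() @ adj @ s  # [c, c] pooled adjacency
    return x_pooled, adj_pooled
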
static add_args(parser)[source]

Add model-specific arguments to the parser.

after_pooling_forward(gnn_layers, adj, x, concat=False)[source]
classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(batch)[source]
graph_classificatoin_loss(batch)[source]
model_name = 'diffpool'
reset_parameters()[source]
classmethod split_dataset(dataset, args)[source]
class cogdl.models.nn.gcnii.GCNII(in_feats, hidden_size, out_feats, num_layers, dropout=0.5, alpha=0.1, lmbda=1, wd1=0.0, wd2=0.0, residual=False)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of GCNII in the paper “Simple and Deep Graph Convolutional Networks” <https://arxiv.org/abs/2007.02133>.

in_feats : int
Size of each input sample
hidden_size : int
Size of each hidden unit
out_feats : int
Size of each out sample

num_layers : int
dropout : float
alpha : float
Parameter of the initial residual connection.
lmbda : float
Parameter of the identity mapping.
wd1 : float
Weight decay for fully-connected layers.
wd2 : float
Weight decay for convolutional layers.
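
One GCNII layer combines the initial residual connection (weighted by alpha) with identity mapping (weighted by a layer-dependent beta derived from lmbda). A hedged single-layer sketch with a dense normalized adjacency (illustrative, not cogdl’s exact module):

import math

import torch
import torch.nn as nn

class ToyGCNIILayer(nn.Module):
    def __init__(self, dim, layer_idx, alpha=0.1, lmbda=1.0):
        super().__init__()
        self.alpha = alpha
        self.beta = math.log(lmbda / layer_idx + 1)  # layer_idx starts at 1
        self.linear = nn.Linear(dim, dim, bias=False)

    def forward(self, h, h0, adj_norm):
        # Initial residual: mix propagated features with the first layer's input h0.
        support = (1 - self.alpha) * (adj_norm @ h) + self.alpha * h0
        # Identity mapping: shrink the weight matrix toward the identity.
        return torch.relu((1 - self.beta) * support + self.beta * self.linear(support))
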
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index, edge_attr=None)[source]
get_optimizer(args)[source]
model_name = 'gcnii'
predict(data)[source]
class cogdl.models.nn.sign.MLP(num_features, hidden_size, num_classes, num_layers, dropout, dropedge_rate, undirected, num_propagations, asymm_norm, set_diag, remove_diag)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'sign'
node_classification_loss(data, mask=None)[source]
predict(data)[source]
reset_parameters()[source]
class cogdl.models.nn.pyg_gcn.GCN(num_features, num_classes, hidden_size, num_layers, dropout)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index, weight=None)[source]
get_embeddings(x, edge_index, weight=None)[source]
get_trainer(task, args)[source]
model_name = 'pyg_gcn'
predict(data)[source]
class cogdl.models.nn.mixhop.MixHop(num_features, num_classes, dropout, layer1_pows, layer2_pows)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'mixhop'
predict(data)[source]
class cogdl.models.nn.gat.GAT(in_feats, hidden_size, out_features, num_layers, dropout, alpha, nhead, residual, last_nhead, fast_mode=False)[source]

Bases: cogdl.models.base_model.BaseModel

The GAT model from the “Graph Attention Networks” paper

Args:
num_features (int) : Number of input features.
num_classes (int) : Number of classes.
hidden_size (int) : The dimension of node representation.
dropout (float) : Dropout rate for model training.
alpha (float) : Coefficient of leaky_relu.
nheads (int) : Number of attention heads.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'gat'
predict(data)[source]
class cogdl.models.nn.han.HAN(num_edge, w_in, w_out, num_class, num_nodes, num_layers)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

evaluate(data, nodes, targets)[source]
forward(A, X, target_x, target)[source]
loss(data)[source]
model_name = 'han'
class cogdl.models.nn.ppnp.PPNP(nfeat, nhid, nclass, num_layers, dropout, propagation, alpha, niter, cache=True)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, adj)[source]
model_name = 'ppnp'
predict(data)[source]
class cogdl.models.nn.grace.GRACE(in_feats: int, hidden_size: int, proj_hidden_size: int, num_layers: int, drop_feature_rates: List[float], drop_edge_rates: List[float], tau: float = 0.5, activation: str = 'relu', batch_size: int = -1)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

batched_loss(z1: torch.Tensor, z2: torch.Tensor, batch_size: int)[source]
classmethod build_model_from_args(args)[source]

Build a new model instance.

contrastive_loss(z1: torch.Tensor, z2: torch.Tensor)[source]
drop_adj(edge_index: torch.Tensor, edge_weight: Optional[torch.Tensor] = None, drop_rate: float = 0.5)[source]
drop_feature(x: torch.Tensor, droprate: float)[source]
embed(data)[source]
forward(x: torch.Tensor, edge_index: torch.Tensor, edge_weight: Optional[torch.Tensor] = None)[source]
static get_trainer(task, args)[source]
model_name = 'grace'
node_classification_loss(data)[source]
prop(x: torch.Tensor, edge_index: torch.Tensor, edge_weight: Optional[torch.Tensor] = None, drop_feature_rate: float = 0.0, drop_edge_rate: float = 0.0)[source]
class cogdl.models.nn.dgl_jknet.JKNet(in_features, out_features, n_layers, n_units, node_aggregation, layer_aggregation)[source]

Bases: cogdl.models.supervised_model.SupervisedHomogeneousNodeClassificationModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(graph, x)[source]
static get_trainer(taskType, args)[source]
loss(data)[source]
model_name = 'jknet'
predict(data)[source]
set_graph(graph)[source]
class cogdl.models.nn.pprgo.PPRGo(in_feats, hidden_size, out_feats, num_layers, alpha, dropout, activation='relu', nprop=2)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, targets, ppr_scores)[source]
static get_trainer(taskType: Any, args: Any)[source]
model_name = 'pprgo'
node_classification_loss(x, targets, ppr_scores, y)[source]
predict(x, edge_index, batch_size, norm_func)[source]
class cogdl.models.nn.gin.GIN(num_layers, in_feats, out_feats, hidden_dim, num_mlp_layers, eps=0, pooling='sum', train_eps=False, dropout=0.5)[source]

Bases: cogdl.models.base_model.BaseModel

Graph Isomorphism Network from paper “How Powerful are Graph Neural Networks?”.

Args:
num_layers : int
Number of GIN layers
in_feats : int
Size of each input sample
out_feats : int
Size of each output sample
hidden_dim : int
Size of each hidden layer dimension
num_mlp_layers : int
Number of MLP layers
eps : float, optional
Initial epsilon value, default: 0
pooling : str, optional
Aggregator type to use, default: sum
train_eps : bool, optional
If True, epsilon will be a learnable parameter, default: False
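
Each GIN layer computes h_v' = MLP((1 + eps) * h_v + sum of neighbor features). A dense-adjacency sketch of a single layer (illustrative, not cogdl’s exact module):

import torch
import torch.nn as nn

class ToyGINLayer(nn.Module):
    def __init__(self, dim, eps=0.0, train_eps=False):
        super().__init__()
        self.eps = nn.Parameter(torch.tensor(eps)) if train_eps else eps
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):
        # (1 + eps) * h keeps the node itself; adj @ h sums its neighbors.
        return self.mlp((1 + self.eps) * h + adj @ h)
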
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(batch)[source]
model_name = 'gin'
classmethod split_dataset(dataset, args)[source]
class cogdl.models.nn.pyg_dgcnn.DGCNN(in_feats, hidden_dim, out_feats, k=20, dropout=0.5)[source]

Bases: cogdl.models.base_model.BaseModel

EdgeConv and DynamicGraph in the paper “Dynamic Graph CNN for Learning on Point Clouds” <https://arxiv.org/pdf/1801.07829.pdf>.

in_feats : int
Size of each input sample.
out_feats : int
Size of each output sample.
hidden_dim : int
Dimension of hidden layer embedding.
k : int
Number of nearest neighbors.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(batch)[source]
model_name = 'dgcnn'
classmethod split_dataset(dataset, args)[source]
class cogdl.models.nn.grand.Grand(nfeat, nhid, nclass, input_droprate, hidden_droprate, use_bn, dropnode_rate, tem, lam, order, sample, alpha)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of GRAND in the paper “Graph Random Neural Networks for Semi-Supervised Learning on Graphs” <https://arxiv.org/abs/2005.11079>.

nfeat : int
Size of each input features.
nhid : int
Size of hidden features.
nclass : int
Number of output classes.
input_droprate : float
Dropout rate of input features.
hidden_droprate : float
Dropout rate of hidden features.
use_bn : bool
Using batch normalization.
dropnode_rate : float
Rate of dropping elements of input features.
tem : float
Temperature to sharpen predictions.
lam : float
Proportion of the consistency loss for unlabelled data.
order : int
Order of the adjacency matrix.
sample : int
Number of augmentations for the consistency loss.

alpha : float

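The dropnode_rate and order parameters drive GRAND’s random propagation: DropNode zeroes whole node-feature rows, then the result is averaged over powers of the normalized adjacency. A hedged sketch (illustrative, not the exact rand_prop implementation):

import torch

def rand_prop(x, adj_norm, dropnode_rate, order, training=True):
    if training:
        # DropNode: zero out entire rows, then rescale to keep the expectation.
        keep = torch.bernoulli(torch.full((x.size(0), 1), 1.0 - dropnode_rate))
        x = x * keep / (1.0 - dropnode_rate)
    out, acc = x, x
    for _ in range(order):
        acc = adj_norm @ acc  # accumulate A^k x for k = 1..order
        out = out + acc
    return out / (order + 1)
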
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

consis_loss(logps, train_mask)[source]
dropNode(x)[source]
forward(x, edge_index, edge_weight=None)[source]
model_name = 'grand'
node_classification_loss(data)[source]
normalize_x(x)[source]
predict(data)[source]
rand_prop(x, edge_index, edge_weight)[source]
class cogdl.models.nn.pyg_gtn.GTN(num_edge, num_channels, w_in, w_out, num_class, num_nodes, num_layers)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

evaluate(data, nodes, targets)[source]
forward(A, X, target_x, target)[source]
loss(data)[source]
model_name = 'gtn'
norm(edge_index, num_nodes, edge_weight, improved=False, dtype=None)[source]
normalization(H)[source]
class cogdl.models.nn.rgcn.LinkPredictRGCN(num_entities, num_rels, hidden_size, num_layers, regularizer='basis', num_bases=None, self_loop=True, sampling_rate=0.01, penalty=0, dropout=0.0, self_dropout=0.0)[source]

Bases: cogdl.layers.link_prediction_module.GNNLinkPredict, cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(edge_index, edge_type)[source]
loss(data, split='train')[source]
model_name = 'rgcn'
predict(edge_index, edge_type)[source]
class cogdl.models.nn.deepergcn.DeeperGCN(in_feat, hidden_size, out_feat, num_layers, connection='res+', activation='relu', dropout=0.0, aggr='max', beta=1.0, p=1.0, learn_beta=False, learn_p=False, learn_msg_scale=True, use_msg_norm=False)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index, edge_attr=None)[source]
static get_trainer(taskType: Any, args)[source]
model_name = 'deepergcn'
predict(data)[source]
class cogdl.models.nn.drgat.DrGAT(num_features, num_classes, hidden_size, num_heads, dropout)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'drgat'
predict(data)[source]
class cogdl.models.nn.infograph.InfoGraph(in_feats, hidden_dim, out_feats, num_layers=3, sup=False)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of InfoGraph in the paper “InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization” <https://openreview.net/forum?id=r1lfF2NYvH>.

in_feats : int
Size of each input sample.
out_feats : int
Size of each output sample.
num_layers : int, optional
Number of MLP layers in encoder, default: 3.
sup : bool, optional
Use the supervised model if True, default: False.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(batch)[source]
graph_classification_loss(batch)[source]
static mi_loss(pos_mask, neg_mask, mi, pos_div, neg_div)[source]
model_name = 'infograph'
reset_parameters()[source]
classmethod split_dataset(dataset, args)[source]
sup_forward(x, edge_index=None, batch=None, label=None, edge_attr=None)[source]
sup_loss(pred, batch)[source]
unsup_forward(x, edge_index=None, batch=None)[source]
unsup_loss(graph_feat, node_feat, batch)[source]
unsup_sup_loss(x, edge_index, batch)[source]
class cogdl.models.nn.dropedge_gcn.DropEdge_GCN(nfeat, nhid, nclass, nhidlayer, dropout, baseblock, inputlayer, outputlayer, nbaselayer, activation, withbn, withloop, aggrmethod)[source]

Bases: cogdl.models.base_model.BaseModel

Applying DropEdge to GCN, from the paper “DropEdge: Towards Deep Graph Convolutional Networks on Node Classification” <https://arxiv.org/pdf/1907.10903.pdf>.

The model for a single kind of DeepGCN block. The model architecture is: inputlayer(nfeat) -> block(nbaselayer, nhid) -> ... -> outputlayer(nclass) -> softmax(nclass).

The total number of layers is nhidlayer*nbaselayer + 2. All options are configurable.

Args:
nfeat : the input feature dimension.
nhid : the hidden feature dimension.
nclass : the output feature dimension.
nhidlayer : the number of hidden blocks.
dropout : the dropout ratio.
baseblock : the base block type, one of “mutigcn”, “resgcn”, “densegcn” and “inceptiongcn”.
inputlayer : the input layer type, one of “gcn”, “dense”, “none”.
outputlayer : the output layer type, one of “gcn”, “dense”.
nbaselayer : the number of layers in one hidden block.
activation : the activation function, default ReLU.
withbn : whether to use batch normalization in graph convolution.
withloop : whether to use self feature modeling in graph convolution.
aggrmethod : the aggregation function for the base block, “concat” or “add”. For “resgcn” the default is “add”; for others the default is “concat”.
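
The eponymous operation removes a random fraction of edges before each training pass. A minimal sketch on an edge_index tensor of shape [2, num_edges] (illustrative):

import torch

def drop_edge(edge_index: torch.Tensor, drop_rate: float) -> torch.Tensor:
    # Keep each edge independently with probability (1 - drop_rate).
    keep = torch.rand(edge_index.size(1)) >= drop_rate
    return edge_index[:, keep]
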
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(fea, adj)[source]
model_name = 'dropedge_gcn'
predict(data)[source]
reset_parameters()[source]
class cogdl.models.nn.disengcn.DisenGCN(in_feats, hidden_size, num_classes, K, iterations, tau, dropout, activation)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'disengcn'
predict(data)[source]
reset_parameters()[source]
class cogdl.models.nn.mlp.MLP(in_feats, out_feats, hidden_size, num_layers, dropout=0.0, activation='relu', norm=None)[source]

Bases: cogdl.models.base_model.BaseModel

Multilayer perceptron with normalization

\[x^{(i+1)} = \sigma(W^{i}x^{(i)})\]
in_feats : int
Size of each input sample.
out_feats : int
Size of each output sample.
hidden_dim : int
Size of hidden layer dimension.
use_bn : bool, optional
Apply batch normalization if True, default: True.
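
The recurrence above stacks linear layers with a nonlinearity (and dropout) in between. A minimal sketch (illustrative, not the exact cogdl module):

import torch.nn as nn

def make_mlp(in_feats, hidden_size, out_feats, num_layers, dropout=0.0):
    dims = [in_feats] + [hidden_size] * (num_layers - 1) + [out_feats]
    layers = []
    for i in range(num_layers):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < num_layers - 1:  # sigma(.) between hidden layers
            layers.append(nn.ReLU())
            layers.append(nn.Dropout(dropout))
    return nn.Sequential(*layers)
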
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, *args, **kwargs)[source]
model_name = 'mlp'
predict(data)[source]
class cogdl.models.nn.sgc.sgc(in_feats, out_feats)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'sgc'
predict(data)[source]
class cogdl.models.nn.stpgnn.stpgnn(args)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of models in the paper “Strategies for Pre-training Graph Neural Networks” <https://arxiv.org/abs/1905.12265>.

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'stpgnn'
class cogdl.models.nn.sortpool.SortPool(in_feats, hidden_dim, num_classes, num_layers, out_channel, kernel_size, k=30, dropout=0.5)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of SortPool in the paper “An End-to-End Deep Learning Architecture for Graph Classification” <https://www.cse.wustl.edu/~muhan/papers/AAAI_2018_DGCNN.pdf>.

in_feats : int
Size of each input sample.
out_feats : int
Size of each output sample.
hidden_dim : int
Dimension of hidden layer embedding.
num_classes : int
Number of target classes.
num_layers : int
Number of graph neural network layers before pooling.
k : int, optional
Number of selected features to sort, default: 30.
out_channel : int
Number of the first convolution’s output channels.
kernel_size : int
Size of the first convolution’s kernel.
dropout : float, optional
Dropout rate, default: 0.5.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(batch)[source]
model_name = 'sortpool'
classmethod split_dataset(dataset, args)[source]
class cogdl.models.nn.pyg_srgcn.SRGCN(in_feats, hidden_size, out_feats, attention, activation, nhop, normalization, dropout, node_dropout, alpha, nhead, subheads)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'srgcn'
predict(data)[source]
class cogdl.models.nn.dgl_gcc.GCC(load_path)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

model_name = 'gcc'
train(data)[source]

Compute node embeddings for the input data with the pre-trained GCC model.
class cogdl.models.nn.pairnorm.PairNorm(pn_model, hidden_layers, nhead, dropout, nlayer, residual, norm_mode, norm_scale, num_features, num_classes)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
model_name = 'pairnorm'
predict(data)[source]
class cogdl.models.nn.unsup_graphsage.SAGE(num_features, hidden_size, num_layers, sample_size, dropout, walk_length, negative_samples)[source]

Bases: cogdl.models.base_model.BaseModel

Implementation of unsupervised GraphSAGE in the paper “Inductive Representation Learning on Large Graphs” <https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf>.

num_features : int
Size of each input sample.
hidden_size : int
num_layers : int
The number of GNN layers.
sample_size : list
The number of sampled neighbors of different orders.
dropout : float
walk_length : int
The length of random walk.
negative_samples : int

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

embed(data)[source]
forward(x, edge_index)[source]
static get_trainer(taskType, args)[source]
loss(data)[source]
model_name = 'unsup_graphsage'
node_classification_loss(data)[source]
sampling(edge_index, num_sample)[source]
class cogdl.models.nn.pyg_sagpool.SAGPoolNetwork(nfeat, nhid, nclass, dropout, pooling_ratio, pooling_layer_type)[source]

Bases: cogdl.models.base_model.BaseModel

static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(batch)[source]
model_name = 'sagpool'
classmethod split_dataset(dataset, args)[source]

AGC Model

class cogdl.models.agc.daegc.DAEGC(num_features, hidden_size, embedding_size, num_heads, dropout, num_clusters)[source]

Bases: cogdl.models.base_model.BaseModel

The DAEGC model from the “Attributed Graph Clustering: A Deep Attentional Embedding Approach” paper

Args:
num_clusters (int) : Number of clusters.
T (int) : Number of iterations to recalculate P and Q.
gamma (float) : Hyperparameter that controls two parts of the loss.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

forward(x, edge_index)[source]
get_2hop(edge_index)[source]

Add 2-hop neighbors as new edges.

get_features(data)[source]
get_trainer(task, args)[source]
model_name = 'daegc'
recon_loss(z, adj)[source]
class cogdl.models.agc.agc.AGC(num_clusters, max_iter)[source]

Bases: cogdl.models.base_model.BaseModel

The AGC model from the “Attributed Graph Clustering via Adaptive Graph Convolution” paper

Args:
num_clusters (int) : Number of clusters.
max_iter (int) : The maximum number of iterations to increase k.
static add_args(parser)[source]

Add model-specific arguments to the parser.

classmethod build_model_from_args(args)[source]

Build a new model instance.

get_features(data)[source]
get_trainer(task, args)[source]
model_name = 'agc'

Model Module

cogdl.models.build_model(args)[source]
cogdl.models.register_model(name)[source]

New model types can be added to cogdl with the register_model() function decorator.

For example:

@register_model('gat')
class GAT(BaseModel):
    ...
Args:
name (str): the name of the model
cogdl.models.try_import_model(model)[source]
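
Putting register_model and build_model together, a registered model can be constructed from a plain argument namespace, assuming the namespace carries a model field naming the registered model plus the hyperparameters its build_model_from_args expects (a hedged sketch reusing the illustrative MyMLP from the BaseModel section):

from types import SimpleNamespace

from cogdl.models import build_model

# Fields other than `model` are whatever the chosen model consumes (assumed here).
args = SimpleNamespace(model="my_mlp", num_features=32, hidden_size=64, num_classes=7)
model = build_model(args)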