utils

class cogdl.utils.utils.ArgClass[source]

Bases: object

cogdl.utils.utils.alias_draw(J, q)[source]

Draw a sample from a non-uniform discrete distribution using alias sampling.

cogdl.utils.utils.alias_setup(probs)[source]

Compute utility lists for non-uniform sampling from discrete distributions. Refer to https://hips.seas.harvard.edu/blog/2013/03/03/the-alias-method-efficient-sampling-with-many-discrete-outcomes/ for details.
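
The alias method builds two tables in O(K) time and then draws each sample in O(1). The following is a minimal standalone sketch of the same algorithm described at the link above, not cogdl's exact implementation:

```python
import numpy as np

def alias_setup(probs):
    """Build alias tables (J, q) for O(1) sampling from a discrete distribution."""
    K = len(probs)
    q = np.zeros(K)                    # scaled probabilities
    J = np.zeros(K, dtype=np.int64)    # alias indices
    smaller, larger = [], []
    for k, prob in enumerate(probs):
        q[k] = K * prob
        (smaller if q[k] < 1.0 else larger).append(k)
    # pair each under-full bucket with an over-full one
    while smaller and larger:
        small, large = smaller.pop(), larger.pop()
        J[small] = large
        q[large] = q[large] + q[small] - 1.0
        (smaller if q[large] < 1.0 else larger).append(large)
    return J, q

def alias_draw(J, q):
    """Draw one sample in O(1) using the alias tables."""
    K = len(J)
    k = int(np.floor(np.random.rand() * K))
    return k if np.random.rand() < q[k] else J[k]
```

After the O(K) setup, repeated draws (e.g. for node2vec-style walks) cost a single table lookup each, which is why the alias method is preferred over repeatedly calling `np.random.choice` with explicit probabilities.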

cogdl.utils.utils.batch_max_pooling(x, batch)[source]
cogdl.utils.utils.batch_mean_pooling(x, batch)[source]
cogdl.utils.utils.batch_sum_pooling(x, batch)[source]
cogdl.utils.utils.build_args_from_dict(dic)[source]
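
These helpers wrap a plain dict into an attribute-style args object (the role `ArgClass` above plays in cogdl). A minimal sketch of the idea using the standard library's `argparse.Namespace`; the bodies are illustrative, not cogdl's source:

```python
from argparse import Namespace

def build_args_from_dict(dic):
    # expose dict keys as attributes, e.g. args.hidden_size
    return Namespace(**dic)

def update_args_from_dict(args, dic):
    # overwrite or extend attributes on an existing args object
    for k, v in dic.items():
        setattr(args, k, v)
    return args
```
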
cogdl.utils.utils.cycle_index(num, shift)[source]
cogdl.utils.utils.download_url(url, folder, name=None, log=True)[source]

Downloads the content of a URL to a specific folder.

Parameters
  • url (string) – The URL.

  • folder (string) – The folder to save to.

  • name (string, optional) – The filename to save as. (default: None)

  • log (bool, optional) – If False, will not print anything to the console. (default: True)

cogdl.utils.utils.get_activation(act: str, inplace=False)[source]
cogdl.utils.utils.get_memory_usage(print_info=False)[source]

Get accurate GPU memory usage by querying the torch runtime.

cogdl.utils.utils.get_norm_layer(norm: str, channels: int)[source]
Parameters
  • norm (str) – type of normalization: layernorm, batchnorm, or instancenorm

  • channels (int) – size of features for normalization
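
What the named normalizations compute can be sketched in plain numpy. The `normalize` helper below is hypothetical (for illustration only, without learnable affine parameters); `get_norm_layer` itself returns the corresponding torch layer:

```python
import numpy as np

def normalize(x, norm, eps=1e-5):
    # x: (batch, channels)
    if norm == "layernorm":      # statistics over channels, per sample
        mu, var = x.mean(1, keepdims=True), x.var(1, keepdims=True)
    elif norm == "batchnorm":    # statistics over the batch, per channel
        mu, var = x.mean(0, keepdims=True), x.var(0, keepdims=True)
    else:
        raise ValueError(f"unknown norm: {norm}")
    return (x - mu) / np.sqrt(var + eps)
```
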

cogdl.utils.utils.identity_act(input)[source]
cogdl.utils.utils.makedirs(path)[source]
cogdl.utils.utils.print_result(results, datasets, model_name)[source]
cogdl.utils.utils.set_random_seed(seed)[source]
cogdl.utils.utils.split_dataset_general(dataset, args)[source]
cogdl.utils.utils.tabulate_results(results_dict)[source]
cogdl.utils.utils.untar(path, fname, deleteTar=True)[source]

Unpacks the given archive file to the same directory, then (by default) deletes the archive file.

cogdl.utils.utils.update_args_from_dict(args, dic)[source]
class cogdl.utils.evaluator.Accuracy(mini_batch=False)[source]

Bases: object

clear()[source]
evaluate()[source]
class cogdl.utils.evaluator.BCEWithLogitsLoss[source]

Bases: torch.nn.modules.module.Module

training: bool
class cogdl.utils.evaluator.BaseEvaluator(eval_func)[source]

Bases: object

clear()[source]
evaluate()[source]
class cogdl.utils.evaluator.CrossEntropyLoss[source]

Bases: torch.nn.modules.module.Module

training: bool
class cogdl.utils.evaluator.MultiClassMicroF1(mini_batch=False)[source]

Bases: cogdl.utils.evaluator.Accuracy

class cogdl.utils.evaluator.MultiLabelMicroF1(mini_batch=False)[source]

Bases: cogdl.utils.evaluator.Accuracy

cogdl.utils.evaluator.accuracy(y_pred, y_true)[source]
cogdl.utils.evaluator.bce_with_logits_loss(y_pred, y_true, reduction='mean')[source]
cogdl.utils.evaluator.cross_entropy_loss(y_pred, y_true)[source]
cogdl.utils.evaluator.multiclass_f1(y_pred, y_true)[source]
cogdl.utils.evaluator.multilabel_f1(y_pred, y_true, sigmoid=False)[source]
cogdl.utils.evaluator.setup_evaluator(metric: Union[str, Callable])[source]
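
The metric functions above reduce to standard formulas. A standalone numpy sketch of accuracy and micro-averaged multi-label F1 (illustrative, not cogdl's exact implementation):

```python
import numpy as np

def accuracy(y_pred, y_true):
    # y_pred: (n, num_classes) logits; y_true: (n,) class ids
    return float((y_pred.argmax(axis=1) == y_true).mean())

def multilabel_f1(y_pred, y_true, sigmoid=False):
    # y_pred: (n, num_labels) scores; y_true: (n, num_labels) 0/1
    if sigmoid:
        y_pred = 1.0 / (1.0 + np.exp(-y_pred))
    pred = (y_pred > 0.5).astype(int)
    tp = int((pred * y_true).sum())           # micro: pool counts over all labels
    precision = tp / max(int(pred.sum()), 1)
    recall = tp / max(int(y_true.sum()), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```
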
class cogdl.utils.sampling.RandomWalker(adj=None, num_nodes=None)[source]

Bases: object

build_up(adj, num_nodes)[source]
walk(start, walk_length, restart_p=0.0)[source]
walk_one(start, length, p)[source]
cogdl.utils.sampling.random_walk(start, length, indptr, indices, p=0.0)[source]
Parameters
  • start – np.array(dtype=np.int32)

  • length – int

  • indptr – np.array(dtype=np.int32)

  • indices – np.array(dtype=np.int32)

  • p – float

Returns

list(np.array(dtype=np.int32))
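
The semantics can be sketched as a restart random walk over a CSR adjacency. This is a plain-Python illustration of the signature above, not the compiled implementation (the `seed` parameter is added here for reproducibility):

```python
import numpy as np

def random_walk(start, length, indptr, indices, p=0.0, seed=0):
    # start: array of start nodes; p: restart probability per step
    rng = np.random.default_rng(seed)
    walks = []
    for s in start:
        cur, walk = int(s), [int(s)]
        for _ in range(length):
            if p > 0.0 and rng.random() < p:
                cur = int(s)                          # restart at the source
            else:
                nbr = indices[indptr[cur]:indptr[cur + 1]]
                if len(nbr) == 0:
                    break                             # dead end: stop this walk
                cur = int(nbr[rng.integers(len(nbr))])
            walk.append(cur)
        walks.append(np.array(walk, dtype=np.int32))
    return walks
```
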

cogdl.utils.graph_utils.add_remaining_self_loops(edge_index, edge_weight=None, fill_value=1, num_nodes=None)[source]
cogdl.utils.graph_utils.add_self_loops(edge_index, edge_weight=None, fill_value=1, num_nodes=None)[source]
cogdl.utils.graph_utils.coalesce(row, col, value=None)[source]
cogdl.utils.graph_utils.coo2csc(row, col, data, num_nodes=None, sorted=False)[source]
cogdl.utils.graph_utils.coo2csr(row, col, data, num_nodes=None, ordered=False)[source]
cogdl.utils.graph_utils.coo2csr_index(row, col, num_nodes=None)[source]
cogdl.utils.graph_utils.csr2coo(indptr, indices, data)[source]
cogdl.utils.graph_utils.csr2csc(indptr, indices, data=None)[source]
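
The conversion helpers follow the textbook sparse layouts. A numpy sketch of `coo2csr` matching the signature above (illustrative):

```python
import numpy as np

def coo2csr(row, col, data, num_nodes=None):
    n = num_nodes if num_nodes is not None else int(max(row.max(), col.max())) + 1
    order = np.argsort(row, kind="stable")   # group edges by source node
    row, col, data = row[order], col[order], data[order]
    indptr = np.zeros(n + 1, dtype=np.int64)
    np.add.at(indptr, row + 1, 1)            # count the out-degree of each node
    indptr = np.cumsum(indptr)               # prefix sums give the row pointers
    return indptr, col, data
```
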
cogdl.utils.graph_utils.get_degrees(row, col, num_nodes=None)[source]
cogdl.utils.graph_utils.negative_edge_sampling(edge_index: Union[Tuple, torch.Tensor], num_nodes: Optional[int] = None, num_neg_samples: Optional[int] = None, undirected: bool = False)[source]
cogdl.utils.graph_utils.remove_self_loops(indices, values=None)[source]
cogdl.utils.graph_utils.row_normalization(num_nodes, row, col, val=None)[source]
cogdl.utils.graph_utils.sorted_coo2csr(row, col, data, num_nodes=None, return_index=False)[source]
cogdl.utils.graph_utils.symmetric_normalization(num_nodes, row, col, val=None)[source]
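
symmetric_normalization produces the D^(-1/2) A D^(-1/2) edge weights used by GCN-style propagation. A numpy sketch under the signature above (illustrative):

```python
import numpy as np

def symmetric_normalization(num_nodes, row, col, val=None):
    val = np.ones(len(row)) if val is None else val
    deg = np.zeros(num_nodes)
    np.add.at(deg, row, val)                 # weighted degree per node
    dinv = np.zeros(num_nodes)
    mask = deg > 0
    dinv[mask] = deg[mask] ** -0.5           # D^(-1/2); 0 for isolated nodes
    return dinv[row] * val * dinv[col]       # weight of edge (i, j): d_i^-1/2 * w_ij * d_j^-1/2
```
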
cogdl.utils.graph_utils.to_undirected(edge_index, num_nodes=None)[source]

Converts the graph given by edge_index to an undirected graph, so that \((j,i) \in \mathcal{E}\) for every edge \((i,j) \in \mathcal{E}\).

Parameters
  • edge_index (LongTensor) – The edge indices.

  • num_nodes (int, optional) – The number of nodes, i.e. max_val + 1 of edge_index. (default: None)

Return type

LongTensor
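
A numpy sketch of the same operation (append the reversed copy of every edge, then deduplicate); cogdl's version operates on a LongTensor:

```python
import numpy as np

def to_undirected(edge_index, num_nodes=None):
    row, col = edge_index
    n = num_nodes if num_nodes is not None else int(edge_index.max()) + 1
    row2 = np.concatenate([row, col])        # add the reversed copy of every edge
    col2 = np.concatenate([col, row])
    eid = np.unique(row2 * n + col2)         # linearize (i, j) -> i * n + j to dedup
    return np.stack([eid // n, eid % n])
```
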

Bases: torch.nn.modules.module.Module

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.


Parameters
  • edge_index – edge index of graph

  • edge_types

  • edge_set – set of all edges of the graph, (h, t, r)

  • sampling_rate

  • num_rels

  • label_smoothing (Optional) –

  • num_entities (Optional) –

Returns
  • sampled_edges – sampled existing edges

  • rels – types of sampled existing edges

  • sampled_edges_all – existing edges together with corrupted edges

  • sampled_types_all – types of existing and corrupted edges

  • labels – 0/1 labels

cogdl.utils.ppr_utils.build_topk_ppr_matrix_from_data(edge_index, *args, **kwargs)[source]
cogdl.utils.ppr_utils.calc_ppr_topk_parallel(indptr, indices, deg, alpha, epsilon, nodes, topk)[source]
cogdl.utils.ppr_utils.construct_sparse(neighbors, weights, shape)[source]
cogdl.utils.ppr_utils.ppr_topk(adj_matrix, alpha, epsilon, nodes, topk)[source]

Calculate the PPR matrix approximately using the Andersen push-based algorithm.

cogdl.utils.ppr_utils.topk_ppr_matrix(adj_matrix, alpha, eps, idx, topk, normalization='row')[source]

Create a sparse matrix where each node has up to the topk PPR neighbors and their weights.
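
The quantity being approximated is personalized PageRank. A dense power-iteration sketch of the underlying definition (the library itself uses the sparse push-based approximation and keeps only the topk entries per node):

```python
import numpy as np

def ppr_scores(adj, alpha, source, iters=200):
    # fixed point of: pi = alpha * e_source + (1 - alpha) * pi @ P
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    s = np.zeros(adj.shape[0]); s[source] = 1.0
    pi = s.copy()
    for _ in range(iters):
        pi = alpha * s + (1 - alpha) * pi @ P
    return pi
```
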

class cogdl.utils.prone_utils.Gaussian(mu=0.5, theta=1, rescale=False, k=3)[source]

Bases: object

prop(mx, emb)[source]
class cogdl.utils.prone_utils.HeatKernel(t=0.5, theta0=0.6, theta1=0.4)[source]

Bases: object

prop(mx, emb)[source]
prop_adjacency(mx)[source]
class cogdl.utils.prone_utils.HeatKernelApproximation(t=0.2, k=5)[source]

Bases: object

chebyshev(mx, emb)[source]
prop(mx, emb)[source]
taylor(mx, emb)[source]
class cogdl.utils.prone_utils.NodeAdaptiveEncoder[source]

Bases: object

  • shrinks negative values in the signal/feature matrix

  • no learning involved

static prop(signal)[source]
class cogdl.utils.prone_utils.PPR(alpha=0.5, k=10)[source]

Bases: object

Applies sparsification to accelerate computation.

prop(mx, emb)[source]
class cogdl.utils.prone_utils.ProNE[source]

Bases: object

class cogdl.utils.prone_utils.SignalRescaling[source]

Bases: object

  • rescales the signal of each node according to its degree:
    • sigmoid(degree)

    • sigmoid(1/degree)
prop(mx, emb)[source]
cogdl.utils.prone_utils.get_embedding_dense(matrix, dimension)[source]
cogdl.utils.prone_utils.propagate(mx, emb, stype, space=None)[source]
class cogdl.utils.srgcn_utils.ColumnUniform[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.EdgeAttention(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.Gaussian(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.HeatKernel(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.Identity(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.NodeAttention(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.NormIdentity[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.PPR(in_feat)[source]

Bases: torch.nn.modules.module.Module

forward(x, edge_index, edge_attr)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.RowSoftmax[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.RowUniform[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class cogdl.utils.srgcn_utils.SymmetryNorm[source]

Bases: torch.nn.modules.module.Module

forward(edge_index, edge_attr, N)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
cogdl.utils.srgcn_utils.act_attention(attn_type)[source]
cogdl.utils.srgcn_utils.act_map(act)[source]
cogdl.utils.srgcn_utils.act_normalization(norm_type)[source]