cogdl.layers.gpt_gnn_module¶

Module Contents¶

Classes¶

Graph
HGTConv
RelTemporalEncoding: Implement the Temporal Encoding (Sinusoid) function.
GeneralConv
GNN
GPT_GNN
Matcher: Matching between a pair of nodes to conduct link prediction.
RNNModel: Container module with an encoder, a recurrent module, and a decoder.

Functions¶

Row-normalize sparse matrix.
sparse_mx_to_torch_sparse_tensor: Convert a scipy sparse matrix to a torch sparse tensor.
sample_subgraph: Sample a sub-graph based on the connections between other nodes and the currently sampled nodes.
to_torch: Transform a sampled sub-graph into PyTorch tensors.
preprocess_dataset
cogdl.layers.gpt_gnn_module.sparse_mx_to_torch_sparse_tensor(sparse_mx)[source]¶
Convert a scipy sparse matrix to a torch sparse tensor.
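This conversion is a common pattern: take the COO coordinates and values from scipy and feed them to PyTorch's sparse constructor. A minimal sketch of what such a function typically looks like (the library's actual implementation may differ in details):

```python
import numpy as np
import scipy.sparse as sp
import torch

def sparse_mx_to_torch_sparse_tensor(sparse_mx):
    """Convert a scipy sparse matrix to a torch sparse COO tensor."""
    sparse_mx = sparse_mx.tocoo().astype(np.float32)
    # Stack row/col coordinates into a (2, nnz) index tensor.
    indices = torch.from_numpy(
        np.vstack((sparse_mx.row, sparse_mx.col)).astype(np.int64)
    )
    values = torch.from_numpy(sparse_mx.data)
    shape = torch.Size(sparse_mx.shape)
    return torch.sparse_coo_tensor(indices, values, shape)

# Example: a 2x2 identity in CSR format becomes a sparse torch tensor.
mx = sp.eye(2).tocsr()
t = sparse_mx_to_torch_sparse_tensor(mx)
dense = t.to_dense()
```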
class cogdl.layers.gpt_gnn_module.Graph[source]¶
cogdl.layers.gpt_gnn_module.sample_subgraph(graph, time_range, sampled_depth=2, sampled_number=8, inp=None, feature_extractor=feature_OAG)[source]¶
Sample a sub-graph based on the connections between other nodes and the currently sampled nodes. We maintain a budget for each node type, indexed by <node_id, time>. Currently sampled nodes are stored in layer_data. After nodes are sampled, we construct the sampled adjacency matrix.
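The budget idea can be illustrated with a small sketch (this is not the library's code, just one plausible shape for a <node_id, time>-indexed budget): each newly sampled node spreads weight to its neighbors, so nodes reached from many sampled nodes accumulate a higher sampling score.

```python
from collections import defaultdict

# Hypothetical budget structure: {node_type: {node_id: [score, time]}}.
budget = defaultdict(dict)

def add_budget(budget, node_type, node_id, time, n_neighbors):
    # Spread 1/n_neighbors of weight to each neighbor of a sampled node;
    # frequently reached candidates end up with higher scores.
    if node_id not in budget[node_type]:
        budget[node_type][node_id] = [0.0, time]
    budget[node_type][node_id][0] += 1.0 / n_neighbors

# Node 7 is reached twice from sampled nodes that each have 4 neighbors.
add_budget(budget, "paper", 7, 2015, 4)
add_budget(budget, "paper", 7, 2015, 4)
# Candidates with higher scores are preferentially sampled next.
```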
cogdl.layers.gpt_gnn_module.to_torch(feature, time, edge_list, graph)[source]¶
Transform a sampled sub-graph into PyTorch tensors.
node_dict: {node_type: <node_number, node_type_ID>}; node_number is used to trace the nodes back to the original graph.
edge_dict: {edge_type: edge_type_ID}
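Following the docstring's description, node_dict assigns each node type a starting offset in the merged feature matrix plus a type ID, which is what lets sampled nodes be traced back per type. A hedged sketch of how such dictionaries are typically built (the feature values and type names here are made up for illustration):

```python
# Hypothetical per-type feature lists for a sampled sub-graph.
feature = {"paper": [[0.1], [0.2]], "author": [[0.3]]}

# node_dict: {node_type: [offset_in_merged_matrix, node_type_ID]}
node_dict = {}
node_num = 0
for type_id, (node_type, feats) in enumerate(feature.items()):
    node_dict[node_type] = [node_num, type_id]
    node_num += len(feats)

# edge_dict: {edge_type: edge_type_ID}
edge_dict = {etype: eid for eid, etype in enumerate(["cites", "writes"])}
```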
class cogdl.layers.gpt_gnn_module.HGTConv(in_dim, out_dim, num_types, num_relations, n_heads, dropout=0.2, use_norm=True, use_RTE=True, **kwargs)[source]¶
Bases: torch_geometric.nn.conv.MessagePassing
message(self, edge_index_i, node_inp_i, node_inp_j, node_type_i, node_type_j, edge_type, edge_time)[source]¶
For each edge <j, i>, j is the source node and i is the target node.
class cogdl.layers.gpt_gnn_module.RelTemporalEncoding(n_hid, max_len=240, dropout=0.2)[source]¶
Bases: torch.nn.Module
Implement the Temporal Encoding (Sinusoid) function.
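A sinusoid temporal encoding usually follows the Transformer positional-encoding recipe: a fixed sin/cos table over time steps, here followed by a learned projection. A sketch under that assumption (the actual class also applies dropout, omitted here):

```python
import math
import torch
import torch.nn as nn

class RelTemporalEncodingSketch(nn.Module):
    """Sketch of a sinusoid temporal encoding: a frozen sin/cos table
    indexed by relative time, projected and added to node features."""
    def __init__(self, n_hid, max_len=240):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1).float()
        div_term = torch.exp(
            torch.arange(0, n_hid, 2).float() * (-math.log(10000.0) / n_hid)
        )
        emb = torch.zeros(max_len, n_hid)
        emb[:, 0::2] = torch.sin(position * div_term)  # even dims: sin
        emb[:, 1::2] = torch.cos(position * div_term)  # odd dims: cos
        self.emb = nn.Embedding.from_pretrained(emb, freeze=True)
        self.lin = nn.Linear(n_hid, n_hid)

    def forward(self, x, t):
        # Add the projected encoding of relative time t to node features x.
        return x + self.lin(self.emb(t))

rte = RelTemporalEncodingSketch(n_hid=8)
x = torch.zeros(3, 8)
t = torch.tensor([0, 5, 100])
out = rte(x, t)
```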
class cogdl.layers.gpt_gnn_module.GeneralConv(conv_name, in_hid, out_hid, num_types, num_relations, n_heads, dropout, use_norm=True, use_RTE=True)[source]¶
Bases: torch.nn.Module
class cogdl.layers.gpt_gnn_module.GNN(in_dim, n_hid, num_types, num_relations, n_heads, n_layers, dropout=0.2, conv_name='hgt', prev_norm=False, last_norm=False, use_RTE=True)[source]¶
Bases: torch.nn.Module
class cogdl.layers.gpt_gnn_module.GPT_GNN(gnn, rem_edge_list, attr_decoder, types, neg_samp_num, device, neg_queue_size=0)[source]¶
Bases: torch.nn.Module
class cogdl.layers.gpt_gnn_module.Matcher(n_hid, n_out, temperature=0.1)[source]¶
Bases: torch.nn.Module
Matching between a pair of nodes to conduct link prediction. Uses multi-head attention as the matching model.
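The core idea of such a matcher can be sketched without the multi-head attention machinery: project both nodes into a shared space and score pairs with a temperature-scaled similarity. This is a simplified stand-in, not the class's actual implementation:

```python
import torch
import torch.nn as nn

class MatcherSketch(nn.Module):
    """Simplified pair matcher for link prediction: separate projections
    for the two sides, scored by a temperature-scaled dot product."""
    def __init__(self, n_hid, n_out, temperature=0.1):
        super().__init__()
        self.left = nn.Linear(n_hid, n_out)
        self.right = nn.Linear(n_hid, n_out)
        self.temperature = temperature

    def forward(self, x, y):
        # scores[i, j]: higher means the pair (x_i, y_j) is more likely linked.
        return (self.left(x) @ self.right(y).t()) / self.temperature

m = MatcherSketch(n_hid=16, n_out=8)
scores = m(torch.randn(4, 16), torch.randn(5, 16))
```

The low temperature sharpens the score distribution, which is useful when the scores feed a contrastive loss over negative samples.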
class cogdl.layers.gpt_gnn_module.RNNModel(n_word, ninp, nhid, nlayers, dropout=0.2)[source]¶
Bases: torch.nn.Module
Container module with an encoder, a recurrent module, and a decoder.
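The encoder / recurrent module / decoder container is a standard language-model shape: word embedding, stacked RNN, then a linear projection back to the vocabulary. A minimal sketch matching the constructor's parameters (the choice of LSTM here is an assumption):

```python
import torch
import torch.nn as nn

class RNNModelSketch(nn.Module):
    """Sketch of an encoder / recurrent module / decoder container."""
    def __init__(self, n_word, ninp, nhid, nlayers, dropout=0.2):
        super().__init__()
        self.encoder = nn.Embedding(n_word, ninp)          # tokens -> vectors
        self.rnn = nn.LSTM(ninp, nhid, nlayers,
                           dropout=dropout, batch_first=True)
        self.decoder = nn.Linear(nhid, n_word)             # hidden -> vocab

    def forward(self, tokens):
        emb = self.encoder(tokens)   # (batch, seq, ninp)
        out, _ = self.rnn(emb)       # (batch, seq, nhid)
        return self.decoder(out)     # (batch, seq, n_word)

model = RNNModelSketch(n_word=100, ninp=16, nhid=32, nlayers=2)
logits = model(torch.randint(0, 100, (2, 7)))
```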
cogdl.layers.gpt_gnn_module.preprocess_dataset(dataset) → cogdl.layers.gpt_gnn_module.Graph[source]¶