HyperGAT Layer#
HyperGAT layer.
- class topomodelx.nn.hypergraph.hypergat_layer.HyperGATLayer(in_channels, hidden_channels, update_func: str = 'relu', initialization: Literal['xavier_uniform', 'xavier_normal'] = 'xavier_uniform', initialization_gain: float = 1.414, **kwargs)[source]#
Implementation of the HyperGAT layer proposed in [1].
- Parameters:
  - in_channels : int
    Dimension of the input features.
  - hidden_channels : int
    Dimension of the output features.
  - update_func : str, default = "relu"
    Update method to apply to messages.
  - initialization : Literal["xavier_uniform", "xavier_normal"], default = "xavier_uniform"
    Initialization method.
  - initialization_gain : float, default = 1.414
    Gain for the initialization.
  - **kwargs : optional
    Additional arguments for the layer modules.
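A minimal construction sketch (the channel sizes are arbitrary examples; the keyword values shown repeat the documented defaults):

```python
from topomodelx.nn.hypergraph.hypergat_layer import HyperGATLayer

# Arbitrary example sizes; update_func and initialization repeat the defaults.
layer = HyperGATLayer(
    in_channels=8,
    hidden_channels=16,
    update_func="relu",
    initialization="xavier_uniform",
    initialization_gain=1.414,
)
```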
Methods
- add_module(name, module): Adds a child module to the current module.
- aggregate(x_message): Aggregate messages on each target cell.
- apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self.
- attention(x_source[, x_target, mechanism]): Compute attention weights for messages, as proposed in [1].
- bfloat16(): Casts all floating point parameters and buffers to bfloat16 datatype.
- buffers([recurse]): Returns an iterator over module buffers.
- children(): Returns an iterator over immediate children modules.
- cpu(): Moves all model parameters and buffers to the CPU.
- cuda([device]): Moves all model parameters and buffers to the GPU.
- double(): Casts all floating point parameters and buffers to double datatype.
- eval(): Sets the module in evaluation mode.
- extra_repr(): Sets the extra representation of the module.
- float(): Casts all floating point parameters and buffers to float datatype.
- forward(x_0, incidence_1): Forward pass.
- get_buffer(target): Returns the buffer given by target if it exists, otherwise throws an error.
- get_extra_state(): Returns any extra state to include in the module's state_dict.
- get_parameter(target): Returns the parameter given by target if it exists, otherwise throws an error.
- get_submodule(target): Returns the submodule given by target if it exists, otherwise throws an error.
- half(): Casts all floating point parameters and buffers to half datatype.
- ipu([device]): Moves all model parameters and buffers to the IPU.
- load_state_dict(state_dict[, strict]): Copies parameters and buffers from state_dict into this module and its descendants.
- message(x_source[, x_target]): Construct message from source cells to target cells.
- modules(): Returns an iterator over all modules in the network.
- named_buffers([prefix, recurse, ...]): Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- named_children(): Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- named_modules([memo, prefix, remove_duplicate]): Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- named_parameters([prefix, recurse, ...]): Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- parameters([recurse]): Returns an iterator over module parameters.
- register_backward_hook(hook): Registers a backward hook on the module.
- register_buffer(name, tensor[, persistent]): Adds a buffer to the module.
- register_forward_hook(hook, *[, prepend, ...]): Registers a forward hook on the module.
- register_forward_pre_hook(hook, *[, ...]): Registers a forward pre-hook on the module.
- register_full_backward_hook(hook[, prepend]): Registers a backward hook on the module.
- register_full_backward_pre_hook(hook[, prepend]): Registers a backward pre-hook on the module.
- register_load_state_dict_post_hook(hook): Registers a post hook to be run after module's load_state_dict is called.
- register_module(name, module): Alias for add_module().
- register_parameter(name, param): Adds a parameter to the module.
- register_state_dict_pre_hook(hook): These hooks will be called with arguments self, prefix, and keep_vars before calling state_dict on self.
- requires_grad_([requires_grad]): Change if autograd should record operations on parameters in this module.
- reset_parameters(): Reset parameters.
- set_extra_state(state): This function is called from load_state_dict() to handle any extra state found within the state_dict.
- share_memory(): See torch.Tensor.share_memory_().
- state_dict(*args[, destination, prefix, ...]): Returns a dictionary containing references to the whole state of the module.
- to(*args, **kwargs): Moves and/or casts the parameters and buffers.
- to_empty(*, device): Moves the parameters and buffers to the specified device without copying storage.
- train([mode]): Sets the module in training mode.
- type(dst_type): Casts all parameters and buffers to dst_type.
- update(x_message_on_target): Update embeddings on each cell (step 4).
- xpu([device]): Moves all model parameters and buffers to the XPU.
- zero_grad([set_to_none]): Sets gradients of all model parameters to zero.
- __call__: Call self as a function.
References
[1] Ding, K., Wang, J., Li, J., Li, D. and Liu, H. Be more with less: Hypergraph attention networks for inductive text classification. EMNLP, 2020. https://aclanthology.org/2020.emnlp-main.399.pdf
- attention(x_source, x_target=None, mechanism: Literal['node-level', 'edge-level'] = 'node-level')[source]#
Compute attention weights for messages, as proposed in [1].
- Parameters:
  - x_source : torch.Tensor, shape = (n_source_cells, in_channels)
    Input features on source cells. Assumes that all source cells have the same rank r.
  - x_target : torch.Tensor, shape = (n_target_cells, in_channels)
    Input features on target cells. Assumes that all target cells have the same rank r.
  - mechanism : Literal["node-level", "edge-level"], default = "node-level"
    Attention mechanism as proposed in [1]. If set to "node-level", computes node-level attention; if set to "edge-level", computes edge-level attention (see [1]).
- Returns:
  - torch.Tensor, shape = (n_messages, 1)
    Attention weights: one scalar per message between a source and a target cell.
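For intuition, here is a simplified standalone sketch of the node-level mechanism in the spirit of [1]: project the source features, score each message with a learnable vector, and normalize with a softmax. The tensor names, the single attention vector a, and the LeakyReLU scoring are illustrative assumptions, not the layer's exact internals.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_source_cells, in_channels = 5, 8

x_source = torch.randn(n_source_cells, in_channels)  # source-cell features
theta = torch.randn(in_channels, in_channels)        # projection weights (hypothetical)
a = torch.randn(in_channels, 1)                      # attention vector (hypothetical)

# One unnormalized score per message, then a softmax over the source cells.
scores = F.leaky_relu(x_source @ theta @ a)          # shape (n_source_cells, 1)
attention_weights = torch.softmax(scores, dim=0)     # shape (n_messages, 1), sums to 1
```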
- forward(x_0, incidence_1)[source]#
Forward pass.
\[\begin{split}\begin{align*}
&🟥 \quad m_{y \rightarrow z}^{(0 \rightarrow 1)} = (B_1^T \odot att(h_{y \in \mathcal{B}(z)}^{t,(0)}))_{zy} \cdot h_y^{t,(0)} \cdot \Theta^{t,(0)}\\
&🟧 \quad m_z^{(1)} = \sigma\Big(\sum_{y \in \mathcal{B}(z)} m_{y \rightarrow z}^{(0 \rightarrow 1)}\Big)\\
&🟥 \quad m_{z \rightarrow x}^{(1 \rightarrow 0)} = (B_1 \odot att(h_{z \in \mathcal{C}(x)}^{t,(1)}))_{xz} \cdot m_z^{(1)} \cdot \Theta^{t,(1)}\\
&🟧 \quad m_x^{(0)} = \sum_{z \in \mathcal{C}(x)} m_{z \rightarrow x}^{(1 \rightarrow 0)}\\
&🟩 \quad m_x = m_x^{(0)}\\
&🟦 \quad h_x^{t+1,(0)} = \sigma(m_x)
\end{align*}\end{split}\]
- Parameters:
  - x_0 : torch.Tensor
    Input node features.
  - incidence_1 : torch.sparse
    Incidence matrix between nodes and hyperedges.
- Returns:
  - x_0 : torch.Tensor
    Output node features.
  - x_1 : torch.Tensor
    Output hyperedge features.
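A minimal end-to-end sketch of the forward pass, assuming only the signature documented above (the toy incidence matrix with 4 nodes and 2 hyperedges is made up for illustration):

```python
import torch
from topomodelx.nn.hypergraph.hypergat_layer import HyperGATLayer

# Toy incidence matrix B_1: rows index nodes, columns index hyperedges.
incidence_1 = torch.tensor(
    [[1.0, 0.0],
     [1.0, 1.0],
     [0.0, 1.0],
     [1.0, 0.0]]
).to_sparse()

x_0 = torch.randn(4, 8)  # node features with in_channels = 8
layer = HyperGATLayer(in_channels=8, hidden_channels=16)

x_0_out, x_1_out = layer(x_0, incidence_1)
# x_0_out: (4, 16) updated node features; x_1_out: (2, 16) hyperedge features.
```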
- update(x_message_on_target)[source]#
Update embeddings on each cell (step 4).
- Parameters:
  - x_message_on_target : torch.Tensor, shape = (n_target_cells, hidden_channels)
    Output features on target cells.
- Returns:
  - torch.Tensor, shape = (n_target_cells, hidden_channels)
    Updated output features on target cells.
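Functionally, this step applies the layer's configured nonlinearity to the aggregated messages. A minimal sketch, assuming update_func switches between torch.relu and torch.sigmoid (only "relu" is documented above; the sigmoid branch is an assumption):

```python
import torch

def update(x_message_on_target: torch.Tensor, update_func: str = "relu") -> torch.Tensor:
    # Step 4: nonlinearity over the aggregated messages on target cells.
    if update_func == "relu":
        return torch.relu(x_message_on_target)
    return torch.sigmoid(x_message_on_target)  # assumed alternative, not confirmed above

updated = update(torch.randn(3, 16))  # shape (n_target_cells, hidden_channels)
```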