HyperSAGE Layer

HyperSAGE layer.

class topomodelx.nn.hypergraph.hypersage_layer.GeneralizedMean(power: int = 2, **kwargs)

Generalized mean aggregation layer.

Parameters:
power : int, default=2

Power for the generalized mean.

**kwargs : keyword arguments, optional

Arguments for the base aggregation layer.
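
For reference, the generalized mean of n inputs with power p is

\[M_p(x_1, \dots, x_n) = \left(\frac{1}{n}\sum_{i=1}^{n} x_i^p\right)^{1/p},\]

which recovers the arithmetic mean at p = 1 and approaches the maximum as p grows.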

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Sets the extra representation of the module.

float()

Casts all floating point parameters and buffers to float datatype.

forward(x)

Forward pass.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module's state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Moves all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse, ...])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse, ...])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook, *[, prepend, ...])

Registers a forward hook on the module.

register_forward_pre_hook(hook, *[, ...])

Registers a forward pre-hook on the module.

register_full_backward_hook(hook[, prepend])

Registers a backward hook on the module.

register_full_backward_pre_hook(hook[, prepend])

Registers a backward pre-hook on the module.

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Adds a parameter to the module.

register_state_dict_pre_hook(hook)

These hooks will be called with arguments: self, prefix, and keep_vars before calling state_dict on self.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict(*args[, destination, prefix, ...])

Returns a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

update(inputs)

Update (Step 4).

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(x: Tensor)

Forward pass.

Parameters:
x : torch.Tensor

Input features.

Returns:
torch.Tensor

Output features.
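
As a minimal sketch of this computation, assuming the power-mean reduction runs over the message axis (dim=-2); the standalone generalized_mean helper below is hypothetical, not part of the library:

```python
import torch

# Hypothetical standalone equivalent of GeneralizedMean.forward,
# assuming the reduction runs over the message axis (dim=-2).
def generalized_mean(x: torch.Tensor, power: int = 2) -> torch.Tensor:
    return torch.mean(x**power, dim=-2) ** (1 / power)

messages = torch.rand(7, 16)               # 7 messages, 16 features each
out = generalized_mean(messages, power=2)  # shape: torch.Size([16])
```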

class topomodelx.nn.hypergraph.hypersage_layer.HyperSAGELayer(in_channels: int, out_channels: int, alpha: int = -1, aggr_func_intra: Aggregation | None = None, aggr_func_inter: Aggregation | None = None, update_func: Literal['relu', 'sigmoid'] = 'relu', initialization: Literal['uniform', 'xavier_uniform', 'xavier_normal'] = 'uniform', device: str = 'cpu', **kwargs)

Implementation of the HyperSAGE layer proposed in [1].

Parameters:
in_channels : int

Dimension of the input features.

out_channels : int

Dimension of the output features.

alpha : int, default=-1

Maximum number of nodes in a neighborhood to consider. If -1, all nodes are considered.

aggr_func_intra : callable, default=GeneralizedMean(p=2)

Aggregation function for intra-edge aggregation, over the nodes within each hyperedge.

aggr_func_inter : callable, default=GeneralizedMean(p=2)

Aggregation function for inter-edge aggregation, over the hyperedges incident to each node.

update_func : Literal["relu", "sigmoid"], default="relu"

Update function to apply to the aggregated message.

initialization : Literal["uniform", "xavier_uniform", "xavier_normal"], default="uniform"

Initialization method.

device : str, default="cpu"

Device to train the layer on.

**kwargs : optional

Additional arguments for the layer modules.
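
A minimal usage sketch, assuming a small hypergraph with a binary node-to-hyperedge incidence matrix; the expected incidence format (dense vs. sparse) may vary between library versions, so treat the tensor construction as illustrative:

```python
import torch
from topomodelx.nn.hypergraph.hypersage_layer import HyperSAGELayer

layer = HyperSAGELayer(in_channels=4, out_channels=8)

x = torch.rand(6, 4)  # 6 nodes, 4 input features each
# Binary incidence: rows index nodes, columns index hyperedges.
incidence = torch.tensor(
    [
        [1.0, 0.0, 0.0],
        [1.0, 1.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 1.0, 1.0],
        [0.0, 0.0, 1.0],
        [1.0, 0.0, 1.0],
    ]
).to_sparse()

x_out = layer(x, incidence)
print(x_out.shape)  # expected: torch.Size([6, 8])
```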

Methods

add_module(name, module)

Adds a child module to the current module.

aggregate(x_messages[, mode])

Aggregate messages on each target cell.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

attention(x_source[, x_target])

Compute attention weights for messages.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Sets the extra representation of the module.

float()

Casts all floating point parameters and buffers to float datatype.

forward(x, incidence)

Forward pass ([2], [3]).

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module's state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Moves all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

message(x_source[, x_target])

Construct message from source cells to target cells.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse, ...])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse, ...])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook, *[, prepend, ...])

Registers a forward hook on the module.

register_forward_pre_hook(hook, *[, ...])

Registers a forward pre-hook on the module.

register_full_backward_hook(hook[, prepend])

Registers a backward hook on the module.

register_full_backward_pre_hook(hook[, prepend])

Registers a backward pre-hook on the module.

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Adds a parameter to the module.

register_state_dict_pre_hook(hook)

These hooks will be called with arguments: self, prefix, and keep_vars before calling state_dict on self.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

reset_parameters()

Reset learnable parameters.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict(*args[, destination, prefix, ...])

Returns a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

update(x_message_on_target)

Update embeddings on each node (step 4).

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

References

[1] Arya, Gupta, Rudinac and Worring. HyperSAGE: Generalizing inductive representation learning on hypergraphs (2020). https://arxiv.org/abs/2010.04558

[2] Papillon, Sanborn, Hajij, Miolane. Equations of topological neural networks (2023). https://github.com/awesome-tnns/awesome-tnns

[3] Papillon, Sanborn, Hajij, Miolane. Architectures of topological deep learning: a survey on topological neural networks (2023). https://arxiv.org/abs/2304.10031

aggregate(x_messages: Tensor, mode: str = 'intra')

Aggregate messages on each target cell.

A target cell receives messages from several source cells. This function aggregates these messages into a single output feature per target cell.

This function corresponds to either intra- or inter-aggregation.

Parameters:
x_messages : Tensor, shape = (…, n_messages, out_channels)

Features associated with each message. One message is sent from a source cell to a target cell.

mode : str, default="intra"

Which aggregation to compute: "intra" computes intra-aggregation, "inter" computes inter-aggregation (see [1]).

Returns:
Tensor, shape = (…, n_target_cells, out_channels)

Output features on target cells. Each target cell aggregates messages from several source cells. Assumes that all target cells have the same rank s.
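
The two modes differ in which neighborhood the messages come from: intra-aggregation pools node features within a single hyperedge, while inter-aggregation pools hyperedge-level messages at a node. A hedged sketch using a plain power mean (p = 2); all names here are hypothetical:

```python
import torch

# Power mean (generalized mean) over the first axis.
def power_mean(x: torch.Tensor, p: int = 2) -> torch.Tensor:
    return torch.mean(x**p, dim=0) ** (1 / p)

x = torch.rand(4, 8)                 # features of 4 nodes
edge = [0, 2, 3]                     # node indices of one hyperedge
m_intra = power_mean(x[edge])        # intra: one message per hyperedge

edge_messages = torch.rand(3, 8)     # messages from 3 hyperedges containing a node
m_inter = power_mean(edge_messages)  # inter: one aggregated feature per node
```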

forward(x: Tensor, incidence: Tensor)

Forward pass ([2], [3]).

\[\begin{split}\begin{align*}
&🟥 \quad m_{y \rightarrow z}^{(0 \rightarrow 1)} = (B_1)^T_{zy} \cdot w_y \cdot (h_y^{(0)})^p\\
&🟥 \quad m_z^{(0 \rightarrow 1)} = \left(\frac{1}{\vert \mathcal{B}(z)\vert}\sum_{y \in \mathcal{B}(z)} m_{y \rightarrow z}^{(0 \rightarrow 1)}\right)^{\frac{1}{p}}\\
&🟥 \quad m_{z \rightarrow x}^{(1 \rightarrow 0)} = (B_1)_{xz} \cdot w_z \cdot (m_z^{(0 \rightarrow 1)})^p\\
&🟧 \quad m_x^{(1 \rightarrow 0)} = \left(\frac{1}{\vert \mathcal{C}(x) \vert}\sum_{z \in \mathcal{C}(x)} m_{z \rightarrow x}^{(1 \rightarrow 0)}\right)^{\frac{1}{p}}\\
&🟩 \quad m_x^{(0)} = m_x^{(1 \rightarrow 0)}\\
&🟦 \quad h_x^{t+1, (0)} = \sigma \left(\frac{m_x^{(0)} + h_x^{t,(0)}}{\lvert m_x^{(0)} + h_x^{t,(0)}\rvert} \cdot \Theta^t\right)
\end{align*}\end{split}\]
Parameters:
x : torch.Tensor

Input features.

incidence : torch.Tensor

Incidence matrix between nodes and hyperedges.

Returns:
torch.Tensor

Output features.
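
A hedged, dense-tensor sketch of the equations above, with unit node and hyperedge weights (w = 1) and the update's |·| read as a per-node norm; every name below is hypothetical:

```python
import torch

# Hypothetical one-step HyperSAGE pass with unit weights and a dense 0/1 incidence B.
def hypersage_step(h, B, theta, p=2, eps=1e-9):
    # Intra-edge aggregation (🟥): power mean of member-node features per hyperedge.
    deg_edge = B.sum(dim=0).clamp(min=1)                           # |B(z)| per hyperedge z
    m_edge = ((B.T @ h.pow(p)) / deg_edge[:, None]).pow(1 / p)     # m_z^{(0 -> 1)}
    # Inter-edge aggregation (🟧): power mean of incident-edge messages per node.
    deg_node = B.sum(dim=1).clamp(min=1)                           # |C(x)| per node x
    m_node = ((B @ m_edge.pow(p)) / deg_node[:, None]).pow(1 / p)  # m_x^{(1 -> 0)}
    # Update (🟦): normalize the combined message, project with Theta, apply sigma.
    combined = m_node + h
    combined = combined / (combined.norm(dim=-1, keepdim=True) + eps)
    return torch.relu(combined @ theta)                            # h_x^{t+1, (0)}

h = torch.rand(6, 4)                      # 6 nodes, 4 features
B = torch.randint(0, 2, (6, 3)).float()   # node-to-hyperedge incidence B_1
theta = torch.rand(4, 8)                  # weight matrix Theta^t
print(hypersage_step(h, B, theta).shape)  # torch.Size([6, 8])
```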

update(x_message_on_target: Tensor) → Tensor

Update embeddings on each node (step 4).

Parameters:
x_message_on_target : torch.Tensor, shape = (n_target_nodes, out_channels)

Output features on target nodes.

Returns:
torch.Tensor, shape = (n_target_nodes, out_channels)

Updated output features on target nodes.
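
Assuming update simply applies the chosen update_func elementwise, consistent with the σ in the forward equations, a hypothetical equivalent:

```python
import torch

# Hypothetical elementwise update mirroring update_func ("relu" or "sigmoid").
def update(x_message_on_target: torch.Tensor, update_func: str = "relu") -> torch.Tensor:
    if update_func == "relu":
        return torch.relu(x_message_on_target)
    return torch.sigmoid(x_message_on_target)
```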