AllSet_Layer

AllSet Layer Module.

class topomodelx.nn.hypergraph.allset_layer.AllSetBlock(in_channels, hidden_channels, dropout: float = 0.2, mlp_num_layers: int = 2, mlp_activation=torch.nn.ReLU, mlp_dropout: float = 0.0, mlp_norm=None, **kwargs)

AllSet Block Module.

A module for an AllSet block operating on the bipartite node-hyperedge representation of a hypergraph.

Parameters:
in_channels : int

Dimension of the input features.

hidden_channels : int

Dimension of the hidden features.

dropout : float, default=0.2

Dropout probability.

mlp_num_layers : int, default=2

Number of layers in the MLP.

mlp_activation : callable or None, default=torch.nn.ReLU

Activation function in the MLP.

mlp_dropout : float, default=0.0

Dropout probability in the MLP.

mlp_norm : callable or None, default=None

Type of layer normalization in the MLP.

**kwargs : optional

Additional arguments for the block modules.
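A minimal construction sketch (channel sizes are illustrative, not from the source):

>>> import torch.nn as nn
>>> from topomodelx.nn.hypergraph.allset_layer import AllSetBlock
>>> block = AllSetBlock(
...     in_channels=8,
...     hidden_channels=16,
...     mlp_num_layers=2,
...     mlp_activation=nn.ReLU,
... )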

forward(x_0, incidence_1)

Forward computation.

Parameters:
x_0 : torch.Tensor

Input node features.

incidence_1 : torch.sparse

Incidence matrix between nodes and hyperedges.

Returns:
torch.Tensor

Output features.
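A hedged call sketch, continuing the construction sketch above; the orientation of the incidence matrix (rows indexing the cells being updated, columns the cells being aggregated) is an assumption inferred from the layer-level shapes documented below, not stated here:

>>> import torch
>>> x = torch.randn(2, 8)  # features of the 2 cells being aggregated
>>> incidence = torch.tensor(
...     [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
... ).to_sparse()  # 3 cells to update x 2 cells to aggregate (assumed orientation)
>>> out = block(x, incidence)  # one output row per updated cell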

reset_parameters() → None

Reset learnable parameters.

class topomodelx.nn.hypergraph.allset_layer.AllSetLayer(in_channels, hidden_channels, dropout: float = 0.2, mlp_num_layers: int = 2, mlp_activation=torch.nn.ReLU, mlp_dropout: float = 0.0, mlp_norm=None, **kwargs)

AllSet Layer Module [1].

A module for an AllSet layer operating on the bipartite node-hyperedge representation of a hypergraph.

Parameters:
in_channels : int

Dimension of the input features.

hidden_channels : int

Dimension of the hidden features.

dropout : float, default=0.2

Dropout probability.

mlp_num_layers : int, default=2

Number of layers in the MLP.

mlp_activation : callable or None, default=torch.nn.ReLU

Activation function in the MLP.

mlp_dropout : float, default=0.0

Dropout probability in the MLP.

mlp_norm : str or None, default=None

Type of layer normalization in the MLP.

**kwargs : optional

Additional arguments for the layer modules.
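A minimal construction sketch (channel sizes are illustrative, not from the source):

>>> import torch.nn as nn
>>> from topomodelx.nn.hypergraph.allset_layer import AllSetLayer
>>> layer = AllSetLayer(
...     in_channels=8,
...     hidden_channels=16,
...     dropout=0.2,
...     mlp_activation=nn.ReLU,
... )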

References

[1] Chien, Pan, Peng, and Milenkovic. You are AllSet: a multiset function framework for hypergraph neural networks. ICLR 2022. https://arxiv.org/abs/2106.13264

forward(x_0, incidence_1)

Forward computation.

Vertex to edge:

\[\begin{align*}
&🟧 \quad m_{\rightarrow z}^{(\rightarrow 1)} = \text{AGG}_{y \in \mathcal{B}(z)} \big(h_y^{t,(0)}, h_z^{t,(1)}\big) \\
&🟦 \quad h_z^{t+1,(1)} = \sigma\big(m_{\rightarrow z}^{(\rightarrow 1)}\big)
\end{align*}\]

Edge to vertex:

\[\begin{align*}
&🟧 \quad m_{\rightarrow x}^{(\rightarrow 0)} = \text{AGG}_{z \in \mathcal{C}(x)} \big(h_z^{t+1,(1)}, h_x^{t,(0)}\big) \\
&🟦 \quad h_x^{t+1,(0)} = \sigma\big(m_{\rightarrow x}^{(\rightarrow 0)}\big)
\end{align*}\]
Parameters:
x_0 : torch.Tensor, shape = (n_nodes, channels)

Node input features.

incidence_1 : torch.sparse, shape = (n_nodes, n_hyperedges)

Incidence matrix \(B_1\) mapping hyperedges to nodes.

Returns:
x_0 : torch.Tensor

Output node features.

x_1 : torch.Tensor

Output hyperedge features.
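A usage sketch consistent with the shapes documented above (the toy hypergraph is illustrative):

>>> import torch
>>> from topomodelx.nn.hypergraph.allset_layer import AllSetLayer
>>> layer = AllSetLayer(in_channels=8, hidden_channels=16)
>>> x_0 = torch.randn(4, 8)  # 4 nodes with 8 features each
>>> incidence_1 = torch.tensor(
...     [[1.0, 1.0, 0.0],
...      [1.0, 0.0, 1.0],
...      [0.0, 1.0, 1.0],
...      [0.0, 0.0, 1.0]]
... ).to_sparse()  # B_1: 4 nodes x 3 hyperedges
>>> x_0_out, x_1_out = layer(x_0, incidence_1)  # node and hyperedge features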

reset_parameters() → None

Reset learnable parameters.

class topomodelx.nn.hypergraph.allset_layer.MLP(in_channels, hidden_channels, norm_layer=None, activation_layer=None, dropout: float = 0.0, inplace: bool | None = None, bias: bool = False)

MLP Module.

A module for a multi-layer perceptron (MLP).

Parameters:
in_channels : int

Dimension of the input features.

hidden_channels : list of int

List of dimensions of the hidden features.

norm_layer : callable or None, optional

Type of layer normalization.

activation_layer : callable or None, optional

Type of activation function.

dropout : float, default=0.0

Dropout probability.

inplace : bool or None, default=None

Whether to perform the operation in-place.

bias : bool, default=False

Whether to add bias.
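A construction sketch; calling the module like any torch.nn module is an assumption here, since no forward method is documented above:

>>> import torch
>>> import torch.nn as nn
>>> from topomodelx.nn.hypergraph.allset_layer import MLP
>>> mlp = MLP(
...     in_channels=8,
...     hidden_channels=[16, 16],  # two hidden layers of width 16
...     activation_layer=nn.ReLU,
...     dropout=0.1,
... )
>>> y = mlp(torch.randn(4, 8))  # 4 samples mapped to 16 features each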