UniGCN Layer#

Implementation of the UniGCN layer from Huang and Yang: UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks.

class topomodelx.nn.hypergraph.unigcn_layer.UniGCNLayer(in_channels, hidden_channels, aggr_norm: bool = False, use_bn: bool = False, **kwargs)[source]#

Layer of UniGCN.

Implementation of UniGCN layer proposed in [1].

Parameters:
in_channels : int

Dimension of the input features.

hidden_channels : int

Dimension of the hidden features.

aggr_norm : bool, default=False

Whether to normalize the aggregated message by the neighborhood size.

use_bn : bool, default=False

Whether to apply batch normalization after the linear transformation.

**kwargs : optional

Additional arguments for the layer modules.

References

[1]

Huang and Yang. UniGNN: a unified framework for graph and hypergraph neural networks. IJCAI 2021. https://arxiv.org/pdf/2105.00956.pdf

[2]

Papillon, Sanborn, Hajij, Miolane. Equations of topological neural networks (2023). awesome-tnns/awesome-tnns

[3]

Papillon, Sanborn, Hajij, Miolane. Architectures of topological deep learning: a survey on topological neural networks (2023). https://arxiv.org/abs/2304.10031.

forward(x_0, incidence_1)[source]#

The forward pass was initially proposed in [1]_.

Its equations are given in [2]_ and graphically illustrated in [3]_.

The forward pass of this layer is composed of three steps.

First, every hyper-edge sums up the features of its constituent nodes:

\[\begin{split}\begin{align*} &🟥 \quad m_{y \rightarrow z}^{(0 \rightarrow 1)} = B_1^T \cdot h_y^{t, (0)}\\ &🟧 \quad m_z^{(0 \rightarrow 1)} = \sum_{y \in \mathcal{B}(z)} m_{y \rightarrow z}^{(0 \rightarrow 1)}\\ \end{align*}\end{split}\]

Second, the message to the nodes is the sum of the messages from the incident hyper-edges:

\[\begin{split}\begin{align*} &🟥 \quad m_{z \rightarrow x}^{(1 \rightarrow 0)} = B_1 \cdot w^{(1)} \cdot m_z^{(0 \rightarrow 1)} \cdot \Theta^t\\ &🟧 \quad m_x^{(1 \rightarrow 0)} = \sum_{z \in \mathcal{C}(x)} m_{z \rightarrow x}^{(1 \rightarrow 0)}\\ \end{align*}\end{split}\]

Third, the node features are updated:

\[\begin{split}\begin{align*} &🟩 \quad m_x^{(0)} = m_x^{(1\rightarrow0)}\\ &🟦 \quad h_x^{t+1,(0)} = m_x^{(0)} \end{align*}\end{split}\]
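The three steps above can be sketched in plain PyTorch. This is a minimal re-implementation for illustration only, not the library's code; the per-edge weight w^{(1)} is taken as 1, and `theta` stands in for the learnable weights Θ:

```python
import torch

def unigcn_forward_sketch(x_0, incidence_1, theta):
    """Minimal sketch of the three-step UniGCN message passing.

    x_0         : (n_nodes, in_channels) node features h^{t,(0)}
    incidence_1 : (n_nodes, n_edges) incidence matrix B_1
    theta       : (in_channels, hidden_channels) weight matrix Θ (hypothetical name)
    """
    # Step 1: each hyperedge aggregates the features of its nodes (B_1^T · h).
    x_1 = incidence_1.T @ x_0
    # Step 2: messages flow back to the nodes through B_1, with the
    # linear transformation Θ applied (w^{(1)} is 1 in this sketch).
    m_0 = incidence_1 @ x_1 @ theta
    # Step 3: the node update is the identity on the aggregated message.
    return m_0, x_1

x_0 = torch.randn(5, 4)
incidence_1 = (torch.rand(5, 3) > 0.5).float()
theta = torch.randn(4, 8)
x_0_out, x_1_out = unigcn_forward_sketch(x_0, incidence_1, theta)
```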
Parameters:
x_0 : torch.Tensor, shape = (n_nodes, in_channels)

Input features on the nodes of the hypergraph.

incidence_1 : torch.sparse, shape = (n_nodes, n_edges)

Incidence matrix mapping edges to nodes (B_1).

Returns:
x_0 : torch.Tensor

Output node features.

x_1 : torch.Tensor

Output hyperedge features.

reset_parameters() → None[source]#

Reset learnable parameters.