High Skip Network (HSN) Layer.

class topomodelx.nn.simplicial.hsn_layer.HSNLayer(channels)[source]#

Layer of a High Skip Network (HSN).

Implementation of the HSN layer proposed in [1].

Parameters:
channels : int

Dimension of features on each simplicial cell.

Notes

This is the architecture proposed for node classification on simplicial complexes.
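The layer consumes the structure of the complex through the incidence matrix \(B_1\) and the node up-adjacency \(A_{\uparrow,0}\). As a hedged, stdlib-only illustration (not part of topomodelx), the sketch below builds both matrices for a toy triangle: the up-adjacency is taken as \(B_1 B_1^T\) with its diagonal zeroed, so that two nodes are up-adjacent when they share an edge. Signs depend on the chosen orientation convention.

```python
# Toy simplicial complex: 3 nodes {0, 1, 2}, 3 oriented edges (0,1), (0,2), (1,2).
n_nodes, n_edges = 3, 3
edges = [(0, 1), (0, 2), (1, 2)]

# Incidence matrix B_1 (n_nodes x n_edges): -1 where an edge leaves a node,
# +1 where it enters one.
B1 = [[0] * n_edges for _ in range(n_nodes)]
for e, (u, v) in enumerate(edges):
    B1[u][e] = -1  # edge e leaves node u
    B1[v][e] = 1   # edge e enters node v

# Up-adjacency A_{up,0}[x][y] = sum_e B1[x][e] * B1[y][e], with the diagonal
# removed: nonzero exactly when nodes x and y cobound a common edge.
A_up0 = [
    [
        0 if x == y else sum(B1[x][e] * B1[y][e] for e in range(n_edges))
        for y in range(n_nodes)
    ]
    for x in range(n_nodes)
]
```

In topomodelx these matrices would be converted to sparse torch tensors before being passed to the layer.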

References

[1]

Hajij, Ramamurthy, Guzmán-Sáenz, Zamzmi. High skip networks: a higher order generalization of skip connections. Geometrical and topological representation learning workshop at ICLR 2022. https://openreview.net/pdf?id=Sc8glB-k6e9

forward(x_0, incidence_1, adjacency_0)[source]#

Forward pass.

The forward pass was initially proposed in [1]. Its equations are given in [2] and graphically illustrated in [3].

\[\begin{split}\begin{align*}
&🟥 \quad m_{y \rightarrow z}^{(0 \rightarrow 0)} = \sigma ((A_{\uparrow,0})_{zy} \cdot h^{t,(0)}_y \cdot \Theta^{t,(0)1})\\
&🟥 \quad m_{z \rightarrow x}^{(0 \rightarrow 0)} = (A_{\uparrow,0})_{xz} \cdot m_{y \rightarrow z}^{(0 \rightarrow 0)} \cdot \Theta^{t,(0)2}\\
&🟥 \quad m_{y \rightarrow z}^{(0 \rightarrow 1)} = \sigma((B_1^T)_{zy} \cdot h_y^{t,(0)} \cdot \Theta^{t,(0 \rightarrow 1)})\\
&🟥 \quad m_{z \rightarrow x}^{(1 \rightarrow 0)} = (B_1)_{xz} \cdot m_{y \rightarrow z}^{(0 \rightarrow 1)} \cdot \Theta^{t, (1 \rightarrow 0)}\\
&🟧 \quad m_{x}^{(0 \rightarrow 0)} = \sum_{z \in \mathcal{L}_\uparrow(x)} m_{z \rightarrow x}^{(0 \rightarrow 0)}\\
&🟧 \quad m_{x}^{(1 \rightarrow 0)} = \sum_{z \in \mathcal{C}(x)} m_{z \rightarrow x}^{(1 \rightarrow 0)}\\
&🟩 \quad m_x^{(0)} = m_x^{(0 \rightarrow 0)} + m_x^{(1 \rightarrow 0)}\\
&🟦 \quad h_x^{t+1,(0)} = I(m_x^{(0)})
\end{align*}\end{split}\]
Parameters:
x_0 : torch.Tensor, shape = (n_nodes, channels)

Input features on the nodes of the simplicial complex.

incidence_1 : torch.sparse, shape = (n_nodes, n_edges)

Incidence matrix \(B_1\) mapping edges to nodes.

adjacency_0 : torch.sparse, shape = (n_nodes, n_nodes)

Adjacency matrix \(A_{\uparrow,0}\) mapping nodes to nodes via edges.

Returns:
torch.Tensor, shape = (n_nodes, channels)

Output features on the nodes of the simplicial complex.
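One way to read the schematic equations above is as two parallel skips that are summed at the nodes: a node → node → node pass through \(A_{\uparrow,0}\) and a node → edge → node pass through \(B_1^T\) and \(B_1\). The stdlib-only sketch below follows that reading on a toy complex with one feature channel; it is not the topomodelx implementation. The scalar `theta*` values stand in for the learnable \(\Theta\) weight matrices, \(\sigma\) is taken to be ReLU (an assumption), and the update \(I\) is the identity.

```python
def relu(v):
    return max(0.0, v)

def matvec(M, v):
    # Dense matrix-vector product over plain Python lists.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

# Toy inputs: 3 nodes, 3 edges, channels = 1 (features collapsed to scalars).
A_up0 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]      # unsigned node up-adjacency
B1 = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]         # unsigned incidence, nodes x edges
x0 = [1.0, 2.0, 3.0]                           # input node features h^{t,(0)}
theta1 = theta2 = theta01 = theta10 = 0.5      # scalar stand-ins for the Thetas

# Red/orange steps, level 0 -> 0: two hops through the up-adjacency.
m0 = matvec(A_up0, [relu(v * theta1) for v in matvec(A_up0, x0)])
m0 = [v * theta2 for v in m0]

# Red/orange steps, level 0 -> 1 -> 0: up to edges via B_1^T, back via B_1.
m1 = matvec(B1, [relu(v * theta01) for v in matvec(transpose(B1), x0)])
m1 = [v * theta10 for v in m1]

# Green step: sum the two skips; blue step: identity update.
h_next = [a + b for a, b in zip(m0, m1)]
```

The real layer operates on sparse torch tensors with `channels`-dimensional features and learned weights, but the flow of messages is the same.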

References

[2]

Papillon, Sanborn, Hajij, Miolane. Equations of topological neural networks (2023). awesome-tnns/awesome-tnns

[3]

Papillon, Sanborn, Hajij, Miolane. Architectures of topological deep learning: a survey on topological neural networks (2023). https://arxiv.org/abs/2304.10031.

reset_parameters() → None[source]#

Reset learnable parameters.