Simplicial Complex Convolutional Network (SCCN) Layer.

class topomodelx.nn.simplicial.sccn_layer.SCCNLayer(channels, max_rank, aggr_func: Literal['mean', 'sum'] = 'sum', update_func: Literal['relu', 'sigmoid', 'tanh'] | None = 'sigmoid')

Simplicial Complex Convolutional Network (SCCN) layer by [1].

This implementation applies to simplicial complexes of any rank.

This layer corresponds to the leftmost tensor diagram labeled Yang22c in Figure 11 of [3].

Parameters:

channels : int
    Dimension of features on each simplicial cell.

max_rank : int
    Maximum rank of the cells in the simplicial complex.

aggr_func : {"mean", "sum"}, default="sum"
    The function to be used for aggregation.

update_func : {"relu", "sigmoid", "tanh", None}, default="sigmoid"
    The activation function.

See also

topomodelx.nn.simplicial.scn2_layer.SCN2Layer

SCN layer proposed in [1] for simplicial complexes of rank 2. The difference between SCCN and SCN is that:

- SCN passes messages only between cells of the same rank,
- SCCN passes messages between cells of the same rank, as well as one rank above and one rank below.

References

[1]

Yang, Sala, Bogdan. Efficient representation learning for higher-order data with simplicial complexes (2022). https://proceedings.mlr.press/v198/yang22a.html

[2]

Papillon, Sanborn, Hajij, Miolane. Equations of topological neural networks (2023). https://github.com/awesome-tnns/awesome-tnns

[3]

Papillon, Sanborn, Hajij, Miolane. Architectures of topological deep learning: a survey on topological neural networks (2023). https://arxiv.org/abs/2304.10031

forward(features, incidences, adjacencies)

Forward pass.

The forward pass was initially proposed in [1]. Its equations are given in [2] and graphically illustrated in [3].

The incidence and adjacency matrices passed into this layer can be normalized as described in [1] or left unnormalized.

\[\begin{split}\begin{align*}
&🟥 \quad m_{y \rightarrow x}^{(r \rightarrow r)} = (H_{r})_{xy} \cdot h^{t,(r)}_y \cdot \Theta^{t,(r \to r)} \\
&🟥 \quad m_{y \rightarrow x}^{(r-1 \rightarrow r)} = (B_{r}^T)_{xy} \cdot h^{t,(r-1)}_y \cdot \Theta^{t,(r-1 \to r)} \\
&🟥 \quad m_{y \rightarrow x}^{(r+1 \rightarrow r)} = (B_{r+1})_{xy} \cdot h^{t,(r+1)}_y \cdot \Theta^{t,(r+1 \to r)} \\
&🟧 \quad m_{x}^{(r \rightarrow r)} = \sum_{y \in \mathcal{L}_\downarrow(x) \cup \mathcal{L}_\uparrow(x)} m_{y \rightarrow x}^{(r \rightarrow r)} \\
&🟧 \quad m_{x}^{(r-1 \rightarrow r)} = \sum_{y \in \mathcal{B}(x)} m_{y \rightarrow x}^{(r-1 \rightarrow r)} \\
&🟧 \quad m_{x}^{(r+1 \rightarrow r)} = \sum_{y \in \mathcal{C}(x)} m_{y \rightarrow x}^{(r+1 \rightarrow r)} \\
&🟩 \quad m_x^{(r)} = m_x^{(r \rightarrow r)} + m_x^{(r-1 \rightarrow r)} + m_x^{(r+1 \rightarrow r)} \\
&🟦 \quad h_x^{t+1,(r)} = \sigma(m_x^{(r)})
\end{align*}\end{split}\]
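For a single intermediate rank r, the equations above can be sketched in plain NumPy (a minimal illustration, not the library's implementation: the function name and dense matrices are assumptions, and in practice the layer uses torch sparse matrices with learnable per-rank weights):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sccn_step_rank_r(h_rm1, h_r, h_rp1, H_r, B_r, B_rp1, W_same, W_down, W_up):
    """One SCCN update for the cells of an intermediate rank r.

    h_rm1, h_r, h_rp1 : features of the rank r-1, r, r+1 cells.
    H_r   : (n_r, n_r)       adjacency among rank-r cells.
    B_r   : (n_{r-1}, n_r)   incidence mapping r-cells to (r-1)-cells.
    B_rp1 : (n_r, n_{r+1})   incidence mapping (r+1)-cells to r-cells.
    W_*   : weight matrices (the Theta's in the equations).
    """
    m_same = H_r @ (h_r @ W_same)       # 🟥/🟧 messages within rank r
    m_down = B_r.T @ (h_rm1 @ W_down)   # 🟥/🟧 messages from the boundary (rank r-1)
    m_up = B_rp1 @ (h_rp1 @ W_up)       # 🟥/🟧 messages from the coboundary (rank r+1)
    m = m_same + m_down + m_up          # 🟩 between-neighborhood sum
    return sigmoid(m)                   # 🟦 update
```

Boundary ranks drop the terms that do not exist: rank 0 cells receive no lower messages, and cells of the maximum rank receive no upper messages.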
Parameters:

features : dict[int, torch.Tensor], length=max_rank+1, shape=(n_rank_r_cells, channels)
    Input features on the cells of the simplicial complex.

incidences : dict[int, torch.sparse], length=max_rank, shape=(n_rank_r_minus_1_cells, n_rank_r_cells)
    Incidence matrices \(B_r\) mapping r-cells to (r-1)-cells.

adjacencies : dict[int, torch.sparse], length=max_rank, shape=(n_rank_r_cells, n_rank_r_cells)
    Adjacency matrices \(H_r\) mapping cells to cells via lower and upper cells.

Returns:

dict[int, torch.Tensor], length=max_rank+1, shape=(n_rank_r_cells, channels)
    Output features on the cells of the simplicial complex.
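The whole forward pass loops this update over every rank, skipping the lower messages at rank 0 and the upper messages at the maximum rank. A dense NumPy sketch mirroring the dict-based signature above (function name, weight layout, and dense matrices are assumptions for illustration; the layer itself works on torch sparse tensors):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sccn_forward(features, incidences, adjacencies, weights, max_rank):
    """Dense sketch of the SCCN forward pass over all ranks.

    features    : dict {r: (n_r, channels)} for r = 0..max_rank
    incidences  : dict {r: (n_{r-1}, n_r)}  boundary matrices B_r, r = 1..max_rank
    adjacencies : dict {r: (n_r, n_r)}      adjacency matrices H_r
    weights     : dict {(src_rank, dst_rank): (channels, channels)}
    """
    out = {}
    for r in range(max_rank + 1):
        # Same-rank messages, present for every rank.
        m = adjacencies[r] @ (features[r] @ weights[(r, r)])
        if r > 0:          # messages from the boundary (rank r-1)
            m = m + incidences[r].T @ (features[r - 1] @ weights[(r - 1, r)])
        if r < max_rank:   # messages from the coboundary (rank r+1)
            m = m + incidences[r + 1] @ (features[r + 1] @ weights[(r + 1, r)])
        out[r] = sigmoid(m)  # sigmoid update, as in the default update_func
    return out
```

For example, on a filled triangle (three vertices, three edges, one 2-cell) the output dict has the same keys and per-rank shapes as the input feature dict.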

reset_parameters() → None

Reset learnable parameters.