Simplicial Complex Net Layer.
- class topomodelx.nn.simplicial.scone_layer.SCoNeLayer(in_channels: int, out_channels: int, update_func: Literal['relu', 'sigmoid', 'tanh'] = 'tanh')
Implementation of the SCoNe layer proposed in [1].
- Parameters:
- in_channels : int
Input dimension of features on each edge.
- out_channels : int
Output dimension of features on each edge.
- update_func : Literal['relu', 'sigmoid', 'tanh']
Update function to use when updating edge features.
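As a minimal usage sketch, instantiating the layer only requires the channel sizes and, optionally, the update function; the channel sizes below are arbitrary:

```python
from topomodelx.nn.simplicial.scone_layer import SCoNeLayer

# Map 16-dimensional edge features to 32-dimensional edge features,
# applying a tanh nonlinearity in the update step (the default).
layer = SCoNeLayer(in_channels=16, out_channels=32, update_func="tanh")
layer.reset_parameters()  # initialize the learnable weights (see Methods below)
```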
Methods
- add_module(name, module): Adds a child module to the current module.
- apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self.
- bfloat16(): Casts all floating point parameters and buffers to bfloat16 datatype.
- buffers([recurse]): Returns an iterator over module buffers.
- children(): Returns an iterator over immediate children modules.
- cpu(): Moves all model parameters and buffers to the CPU.
- cuda([device]): Moves all model parameters and buffers to the GPU.
- double(): Casts all floating point parameters and buffers to double datatype.
- eval(): Sets the module in evaluation mode.
- extra_repr(): Sets the extra representation of the module.
- float(): Casts all floating point parameters and buffers to float datatype.
- forward(x, incidence_1, incidence_2): Forward pass.
- get_buffer(target): Returns the buffer given by target if it exists, otherwise throws an error.
- get_extra_state(): Returns any extra state to include in the module's state_dict.
- get_parameter(target): Returns the parameter given by target if it exists, otherwise throws an error.
- get_submodule(target): Returns the submodule given by target if it exists, otherwise throws an error.
- half(): Casts all floating point parameters and buffers to half datatype.
- ipu([device]): Moves all model parameters and buffers to the IPU.
- load_state_dict(state_dict[, strict]): Copies parameters and buffers from state_dict into this module and its descendants.
- modules(): Returns an iterator over all modules in the network.
- named_buffers([prefix, recurse, ...]): Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- named_children(): Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- named_modules([memo, prefix, remove_duplicate]): Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- named_parameters([prefix, recurse, ...]): Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- parameters([recurse]): Returns an iterator over module parameters.
- register_backward_hook(hook): Registers a backward hook on the module.
- register_buffer(name, tensor[, persistent]): Adds a buffer to the module.
- register_forward_hook(hook, *[, prepend, ...]): Registers a forward hook on the module.
- register_forward_pre_hook(hook, *[, ...]): Registers a forward pre-hook on the module.
- register_full_backward_hook(hook[, prepend]): Registers a backward hook on the module.
- register_full_backward_pre_hook(hook[, prepend]): Registers a backward pre-hook on the module.
- register_load_state_dict_post_hook(hook): Registers a post hook to be run after the module's load_state_dict is called.
- register_module(name, module): Alias for add_module().
- register_parameter(name, param): Adds a parameter to the module.
- register_state_dict_pre_hook(hook): These hooks will be called with arguments self, prefix, and keep_vars before calling state_dict on self.
- requires_grad_([requires_grad]): Changes whether autograd should record operations on parameters in this module.
- reset_parameters([gain]): Resets the learnable parameters.
- set_extra_state(state): Called from load_state_dict() to handle any extra state found within the state_dict.
- share_memory(): See torch.Tensor.share_memory_().
- state_dict(*args[, destination, prefix, ...]): Returns a dictionary containing references to the whole state of the module.
- to(*args, **kwargs): Moves and/or casts the parameters and buffers.
- to_empty(*, device): Moves the parameters and buffers to the specified device without copying storage.
- train([mode]): Sets the module in training mode.
- type(dst_type): Casts all parameters and buffers to dst_type.
- xpu([device]): Moves all model parameters and buffers to the XPU.
- zero_grad([set_to_none]): Sets gradients of all model parameters to zero.
- __call__
Notes
This is the architecture proposed for trajectory prediction on simplicial complexes.
In the trajectory prediction architecture proposed in [1], several of these layers are stacked before the boundary map from 1-chains to 0-chains is applied. A softmax over the neighbouring nodes of the last node in the given trajectory then predicts the next node. Implemented this way, the network is a map from (ordered) 1-chains (trajectories) to the neighbouring nodes of the last node in the 1-chain; a sketch of this pipeline is given below.
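A minimal sketch of that pipeline, assuming a stack of SCoNeLayer instances whose last layer has out_channels=1; the candidate masking and the reduction of node features to scalar scores are illustrative choices, not the reference implementation:

```python
import torch

def predict_next_node(x, incidence_1, incidence_2, layers, candidates):
    """Illustrative SCoNe trajectory-prediction pipeline (not the reference code).

    x           : (n_edges, in_channels) features encoding the trajectory as a 1-chain.
    incidence_1 : (n_nodes, n_edges) sparse boundary map B_1.
    incidence_2 : (n_edges, n_triangles) sparse boundary map B_2.
    layers      : hypothetical stack of SCoNeLayer modules, the last with out_channels=1.
    candidates  : LongTensor of neighbours of the trajectory's last node.
    """
    for layer in layers:              # stack SCoNe layers on the edge features
        x = layer(x, incidence_1, incidence_2)
    node_scores = incidence_1 @ x     # boundary map: 1-chains -> 0-chains, (n_nodes, 1)
    probs = torch.softmax(node_scores[candidates, 0], dim=0)  # softmax over neighbours
    return candidates[probs.argmax()]
```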
References
[1] Roddenberry, Glaze, Segarra. Principled simplicial neural networks for trajectory prediction. ICML 2021. https://proceedings.mlr.press/v139/roddenberry21a.html
[2] Papillon, Sanborn, Hajij, Miolane. Equations of topological neural networks (2023). awesome-tnns/awesome-tnns
[3] Papillon, Sanborn, Hajij, Miolane. Architectures of topological deep learning: a survey on topological neural networks (2023). https://arxiv.org/abs/2304.10031
- forward(x: Tensor, incidence_1: Tensor, incidence_2: Tensor) → Tensor
Forward pass.
The forward pass was initially proposed in [1]. Its equations are given in [2] and graphically illustrated in [3].
\begin{align*}
&🟥 \quad m_{y \rightarrow \{z\} \rightarrow x}^{(1 \rightarrow 0 \rightarrow 1)} = (L_{\downarrow,1})_{xy} \cdot h_y^{t,(1)} \cdot \Theta^{t,(1 \rightarrow 0 \rightarrow 1)}\\
&🟥 \quad m_{x \rightarrow x}^{(1 \rightarrow 1)} = h_x^{t,(1)} \cdot \Theta^{t,(1 \rightarrow 1)}\\
&🟥 \quad m_{y \rightarrow \{z\} \rightarrow x}^{(1 \rightarrow 2 \rightarrow 1)} = (L_{\uparrow,1})_{xy} \cdot h_y^{t,(1)} \cdot \Theta^{t,(1 \rightarrow 2 \rightarrow 1)}\\
&🟧 \quad m_{x}^{(1 \rightarrow 0 \rightarrow 1)} = \sum_{y \in \mathcal{L}_\downarrow(x)} m_{y \rightarrow \{z\} \rightarrow x}^{(1 \rightarrow 0 \rightarrow 1)}\\
&🟧 \quad m_{x}^{(1 \rightarrow 2 \rightarrow 1)} = \sum_{y \in \mathcal{L}_\uparrow(x)} m_{y \rightarrow \{z\} \rightarrow x}^{(1 \rightarrow 2 \rightarrow 1)}\\
&🟩 \quad m_x^{(1)} = m_{x}^{(1 \rightarrow 0 \rightarrow 1)} + m_{x \rightarrow x}^{(1 \rightarrow 1)} + m_{x}^{(1 \rightarrow 2 \rightarrow 1)}\\
&🟦 \quad h_x^{t+1,(1)} = \sigma(m_x^{(1)})
\end{align*}
- Parameters:
- x : torch.Tensor, shape = (n_edges, in_channels)
Input features on the edges of the simplicial complex.
- incidence_1 : torch.sparse, shape = (n_nodes, n_edges)
Incidence matrix \(B_1\) mapping edges to nodes.
- incidence_2 : torch.sparse, shape = (n_edges, n_triangles)
Incidence matrix \(B_2\) mapping triangles to edges.
- Returns:
- torch.Tensor, shape = (n_edges, out_channels)
Output features on the edges of the simplicial complex.
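As a usage sketch, the forward pass can be exercised end to end with placeholder data; the incidence matrices below are random sparse stand-ins for the boundary maps of a real simplicial complex, and the shapes follow the Parameters and Returns entries above:

```python
import torch
from topomodelx.nn.simplicial.scone_layer import SCoNeLayer

n_nodes, n_edges, n_triangles = 10, 30, 20

x = torch.randn(n_edges, 16)                                 # edge features
incidence_1 = torch.randn(n_nodes, n_edges).to_sparse()      # stand-in for B_1
incidence_2 = torch.randn(n_edges, n_triangles).to_sparse()  # stand-in for B_2

layer = SCoNeLayer(in_channels=16, out_channels=32)
layer.reset_parameters()                                     # initialize weights
y = layer(x, incidence_1, incidence_2)
print(y.shape)  # torch.Size([30, 32])
```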