Implementing your own model#
In this tutorial we show how to implement your own model and test it on a dataset.
This particular example uses the MUTAG dataset, applies a hypergraph lifting to create hypergraphs, and defines a model to work on them.
We train the model on the training split, monitor it on the validation split, and finally evaluate it on the test split.
Table of contents#
1. Imports
2. Configurations and utilities
3. Loading the data
4. Backbone definition
5. Model initialization
6. Training
7. Testing the model
1. Imports#
In [1]:
import lightning as pl
import torch
from omegaconf import OmegaConf
from topobenchmarkx.data.loaders import GraphLoader
from topobenchmarkx.data.preprocessor import PreProcessor
from topobenchmarkx.dataloader import TBXDataloader
from topobenchmarkx.evaluator import TBXEvaluator
from topobenchmarkx.loss import TBXLoss
from topobenchmarkx.model import TBXModel
from topobenchmarkx.nn.encoders import AllCellFeatureEncoder
from topobenchmarkx.nn.readouts import PropagateSignalDown
from topobenchmarkx.optimizer import TBXOptimizer
2. Configurations and utilities#
Configurations can be specified using yaml files or directly specified in your code like in this example.
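For instance, the loader configuration below could equivalently live in a yaml file and be loaded with `OmegaConf.load`. This is a sketch mirroring the Python dict used in this example, not necessarily the repository's exact Hydra config layout:

```yaml
# loader.yaml -- hypothetical file; keys mirror the loader_config dict below
data_domain: graph
data_type: TUDataset
data_name: MUTAG
data_dir: ./data/MUTAG/
```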
In [2]:
loader_config = {
    "data_domain": "graph",
    "data_type": "TUDataset",
    "data_name": "MUTAG",
    "data_dir": "./data/MUTAG/",
}

transform_config = {
    "khop_lifting": {
        "transform_type": "lifting",
        "transform_name": "HypergraphKHopLifting",
        "k_value": 1,
    }
}

split_config = {
    "learning_setting": "inductive",
    "split_type": "random",
    "data_seed": 0,
    "data_split_dir": "./data/MUTAG/splits/",
    "train_prop": 0.5,
}
in_channels = 7
out_channels = 2
dim_hidden = 16
readout_config = {
    "readout_name": "PropagateSignalDown",
    "num_cell_dimensions": 1,
    "hidden_dim": dim_hidden,
    "out_channels": out_channels,
    "task_level": "graph",
    "pooling_type": "sum",
}

loss_config = {"task": "classification", "loss_type": "cross_entropy"}

evaluator_config = {
    "task": "classification",
    "num_classes": out_channels,
    "metrics": ["accuracy", "precision", "recall"],
}

optimizer_config = {
    "optimizer_id": "Adam",
    "parameters": {"lr": 0.001, "weight_decay": 0.0005},
}
loader_config = OmegaConf.create(loader_config)
transform_config = OmegaConf.create(transform_config)
split_config = OmegaConf.create(split_config)
readout_config = OmegaConf.create(readout_config)
loss_config = OmegaConf.create(loss_config)
evaluator_config = OmegaConf.create(evaluator_config)
optimizer_config = OmegaConf.create(optimizer_config)
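One setting worth unpacking is `pooling_type: "sum"` in the readout config: node-level features are aggregated into one graph-level feature per graph by summation. A minimal sketch of sum pooling, using plain Python lists rather than the torch tensors the actual readout operates on (`sum_pool` and `batch_index` are hypothetical names for illustration):

```python
# Hypothetical sketch of sum pooling; the real readout works on torch tensors.
def sum_pool(features, batch_index, num_graphs):
    """Sum node features per graph; batch_index[i] is the graph node i belongs to."""
    d = len(features[0])
    pooled = [[0.0] * d for _ in range(num_graphs)]
    for feat, g in zip(features, batch_index):
        for f in range(d):
            pooled[g][f] += feat[f]
    return pooled

features = [[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]]
batch_index = [0, 0, 1]  # first two nodes belong to graph 0, the last to graph 1
print(sum_pool(features, batch_index, 2))  # [[3.0, 1.0], [0.0, 3.0]]
```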
3. Loading the data#
In this example we use the MUTAG dataset. It is a graph dataset and we use the k-hop lifting to transform the graphs into hypergraphs.
We invite you to check out the README of the repository to learn more about the various liftings offered.
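To build intuition for what `HypergraphKHopLifting` does, here is a sketch of the idea (based on the transform's name, not the library's exact implementation): with `k_value: 1`, each node's closed 1-hop neighbourhood becomes one hyperedge. The helper `khop_lifting` below is hypothetical:

```python
# Sketch of a k-hop lifting on a plain adjacency list (one hyperedge per node).
# Hypothetical helper for illustration, not the TopoBenchmarkX implementation.
def khop_lifting(adjacency, k=1):
    """Map each node to a hyperedge: its closed k-hop neighbourhood."""
    hyperedges = []
    for node in range(len(adjacency)):
        frontier = {node}
        reached = {node}
        for _ in range(k):
            frontier = {nbr for u in frontier for nbr in adjacency[u]} - reached
            reached |= frontier
        hyperedges.append(sorted(reached))
    return hyperedges

# A path graph 0 - 1 - 2: node 1's 1-hop neighbourhood covers all three nodes.
adjacency = [[1], [0, 2], [1]]
print(khop_lifting(adjacency))  # [[0, 1], [0, 1, 2], [1, 2]]
```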
In [3]:
graph_loader = GraphLoader(loader_config)
dataset, dataset_dir = graph_loader.load()
preprocessor = PreProcessor(dataset, dataset_dir, transform_config)
dataset_train, dataset_val, dataset_test = preprocessor.load_dataset_splits(split_config)
datamodule = TBXDataloader(dataset_train, dataset_val, dataset_test, batch_size=32)
Transform parameters are the same, using existing data_dir: ./data/MUTAG/MUTAG/khop_lifting/1116229528
4. Backbone definition#
To implement a new model we only need to define the forward method.
Given a hypergraph with \(n\) nodes and \(m\) hyperedges, this model computes the hyperedge features as \(X_1 = B_1^T \cdot X_0\), where \(B_1 \in \mathbb{R}^{n \times m}\) is the incidence matrix, with \(B_{ij}=1\) if node \(i\) belongs to hyperedge \(j\) and \(B_{ij}=0\) otherwise.
The outputs are then computed as \(X^{'}_0=\text{ReLU}(W_0 \cdot X_0 + b_0)\) and \(X^{'}_1=\text{ReLU}(W_1 \cdot X_1 + b_1)\), i.e. by applying two linear layers with ReLU activations.
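As a concrete sanity check of the incidence multiplication (here with plain Python lists rather than torch sparse tensors), multiplying the transposed incidence matrix by the node features simply sums the features of the nodes belonging to each hyperedge:

```python
# Toy example: 3 nodes, 2 hyperedges.
# B[i][j] = 1 if node i belongs to hyperedge j (B is n x m).
B = [
    [1, 0],  # node 0 is in hyperedge 0
    [1, 1],  # node 1 is in both hyperedges
    [0, 1],  # node 2 is in hyperedge 1
]
X0 = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # node features (n x d)

n, m, d = len(B), len(B[0]), len(X0[0])
# X1 = B^T @ X0: hyperedge j's feature is the sum of its member nodes' features.
X1 = [[sum(B[i][j] * X0[i][f] for i in range(n)) for f in range(d)] for j in range(m)]
print(X1)  # [[4.0, 6.0], [8.0, 10.0]]
```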
In [4]:
class myModel(pl.LightningModule):
    def __init__(self, dim_hidden):
        super().__init__()
        self.dim_hidden = dim_hidden
        self.linear_0 = torch.nn.Linear(dim_hidden, dim_hidden)
        self.linear_1 = torch.nn.Linear(dim_hidden, dim_hidden)

    def forward(self, batch):
        x_0 = batch.x_0
        incidence_hyperedges = batch.incidence_hyperedges
        # Hyperedge features: aggregate the features of the nodes in each hyperedge
        x_1 = torch.sparse.mm(incidence_hyperedges, x_0)

        # Two linear layers with ReLU activations
        x_0 = self.linear_0(x_0)
        x_0 = torch.relu(x_0)
        x_1 = self.linear_1(x_1)
        x_1 = torch.relu(x_1)

        model_out = {"labels": batch.y, "batch_0": batch.batch_0}
        model_out["x_0"] = x_0
        model_out["hyperedge"] = x_1
        return model_out
5. Model initialization#
Now that the model is defined we can create the TBXModel, which takes care of everything else needed to train it.
First we need to instantiate a few components that specify the behaviour of the model.
In [5]:
backbone = myModel(dim_hidden)
readout = PropagateSignalDown(**readout_config)
loss = TBXLoss(**loss_config)
feature_encoder = AllCellFeatureEncoder(in_channels=[in_channels], out_channels=dim_hidden)
evaluator = TBXEvaluator(**evaluator_config)
optimizer = TBXOptimizer(**optimizer_config)
Now we can instantiate the TBXModel.
In [6]:
model = TBXModel(
    backbone=backbone,
    backbone_wrapper=None,
    readout=readout,
    loss=loss,
    feature_encoder=feature_encoder,
    evaluator=evaluator,
    optimizer=optimizer,
    compile=False,
)
6. Training#
Now we can use the Lightning trainer to train the model.
In [7]:
# Increase the number of epochs to get better results
trainer = pl.Trainer(max_epochs=50, accelerator="cpu", enable_progress_bar=False, log_every_n_steps=1)
trainer.fit(model, datamodule)
train_metrics = trainer.callback_metrics
GPU available: True (mps), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/opt/miniconda3/envs/topox/lib/python3.11/site-packages/lightning/pytorch/trainer/setup.py:187: GPU available but not used. You can set it by doing `Trainer(accelerator='gpu')`.
/opt/miniconda3/envs/topox/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py:75: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `lightning.pytorch` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
| Name | Type | Params
----------------------------------------------------------
0 | feature_encoder | AllCellFeatureEncoder | 448
1 | backbone | myModel | 544
2 | readout | PropagateSignalDown | 34
3 | val_acc_best | MeanMetric | 0
----------------------------------------------------------
1.0 K Trainable params
0 Non-trainable params
1.0 K Total params
0.004 Total estimated model params size (MB)
/opt/miniconda3/envs/topox/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:441: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=13` in the `DataLoader` to improve performance.
/opt/miniconda3/envs/topox/lib/python3.11/site-packages/torchmetrics/utilities/prints.py:43: UserWarning: The ``compute`` method of metric MulticlassAccuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs) # noqa: B028
/opt/miniconda3/envs/topox/lib/python3.11/site-packages/torchmetrics/utilities/prints.py:43: UserWarning: The ``compute`` method of metric MulticlassPrecision was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs) # noqa: B028
/opt/miniconda3/envs/topox/lib/python3.11/site-packages/torchmetrics/utilities/prints.py:43: UserWarning: The ``compute`` method of metric MulticlassRecall was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs) # noqa: B028
/opt/miniconda3/envs/topox/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:441: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=13` in the `DataLoader` to improve performance.
`Trainer.fit` stopped: `max_epochs=50` reached.
In [8]:
print(' Training metrics\n', '-'*26)
for key in train_metrics:
    print('{:<21s} {:>5.4f}'.format(key+':', train_metrics[key].item()))
Training metrics
--------------------------
train/accuracy: 0.7234
train/precision: 0.7849
train/recall: 0.5888
val/loss: 0.5416
val/accuracy: 0.7234
val/precision: 0.7355
val/recall: 0.5844
train/loss: 0.4863
7. Testing the model#
Finally, we can test the model and obtain the results.
In [9]:
trainer.test(model, datamodule)
test_metrics = trainer.callback_metrics
/opt/miniconda3/envs/topox/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:441: The 'test_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=13` in the `DataLoader` to improve performance.
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃        Test metric        ┃       DataLoader 0        ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│       test/accuracy       │    0.6808510422706604     │
│         test/loss         │     0.532489538192749     │
│      test/precision       │    0.8333333730697632     │
│        test/recall        │    0.5588235259056091     │
└───────────────────────────┴───────────────────────────┘
In [10]:
print(' Testing metrics\n', '-'*25)
for key in test_metrics:
    print('{:<20s} {:>5.4f}'.format(key+':', test_metrics[key].item()))
Testing metrics
-------------------------
test/loss: 0.5325
test/accuracy: 0.6809
test/precision: 0.8333
test/recall: 0.5588