Evaluator#

This module implements custom Python classes to evaluate the performance of models in TopoBenchmarkX.

Abstract base class for evaluators.

class topobenchmarkx.evaluator.base.AbstractEvaluator[source]#

Abstract base class for evaluators.

abstract compute()[source]#

Compute the metrics.

abstract reset()[source]#

Reset the metrics.

abstract update(model_out: dict)[source]#

Update the metrics with the model output.

Parameters:
model_out : dict

The model output.
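To illustrate the contract that the abstract methods define, here is a minimal sketch of a concrete subclass that tracks plain accuracy. The key names logits and labels follow the TBXEvaluator documentation below; the internal counters and the class name AccuracyEvaluator are illustrative, not part of the library.

```python
from topobenchmarkx.evaluator.base import AbstractEvaluator


class AccuracyEvaluator(AbstractEvaluator):
    """Toy evaluator that accumulates classification accuracy."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, model_out: dict):
        # Compare predicted class indices against the ground-truth labels.
        preds = model_out["logits"].argmax(dim=-1)
        labels = model_out["labels"]
        self.correct += (preds == labels).sum().item()
        self.total += labels.numel()

    def compute(self):
        # Return the metrics accumulated since the last reset.
        return {"accuracy": self.correct / max(self.total, 1)}

    def reset(self):
        # Clear the accumulated state, e.g. at the end of an epoch.
        self.correct = 0
        self.total = 0
```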

This module contains the Evaluator class that is responsible for computing the metrics.

class topobenchmarkx.evaluator.evaluator.TBXEvaluator(task, **kwargs)[source]#

Evaluator class that is responsible for computing the metrics.

Parameters:
taskstr

The task type. It can be either “classification” or “regression”.

**kwargs : dict

Additional arguments for the class. The expected arguments depend on the task.

In the “classification” scenario, the following arguments are expected:

- num_classes (int): The number of classes.
- metrics (list[str]): A list of classification metrics to be computed.

In the “regression” scenario, the following argument is expected:

- metrics (list[str]): A list of regression metrics to be computed.
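As a brief sketch of constructing the evaluator for each task (the metric names used here, such as “accuracy” or “mae”, are illustrative assumptions; consult the library for the exact set of supported metrics):

```python
from topobenchmarkx.evaluator.evaluator import TBXEvaluator

# Classification: num_classes and a list of metric names are expected.
# The metric names below are assumptions for illustration.
clf_evaluator = TBXEvaluator(
    task="classification",
    num_classes=7,
    metrics=["accuracy", "f1"],
)

# Regression: only a list of metric names is expected.
reg_evaluator = TBXEvaluator(
    task="regression",
    metrics=["mae", "mse"],
)
```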

compute()[source]#

Compute the metrics.

Returns:
dict

Dictionary containing the computed metrics.

reset()[source]#

Reset the metrics.

This method should be called after each epoch.

update(model_out: dict)[source]#

Update the metrics with the model output.

Parameters:
model_out : dict

The model output. It should contain the following keys:

- logits (torch.Tensor): The model predictions.
- labels (torch.Tensor): The ground truth labels.

Raises:
ValueError

If the task is not valid.
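Putting the three methods together, a typical evaluation epoch updates per batch, computes once at the end, and then resets. The sketch below assumes a classification setup; `model`, `dataloader`, and the PyG-style `batch.y` label attribute are stand-ins, not part of this API.

```python
import torch

from topobenchmarkx.evaluator.evaluator import TBXEvaluator

evaluator = TBXEvaluator(
    task="classification", num_classes=7, metrics=["accuracy"]
)

model.eval()
with torch.no_grad():
    for batch in dataloader:
        logits = model(batch)
        # update() accumulates metric state from each batch's output.
        evaluator.update({"logits": logits, "labels": batch.y})

# compute() returns a dictionary of the accumulated metrics...
epoch_metrics = evaluator.compute()
print(epoch_metrics)

# ...and reset() clears the state before the next epoch.
evaluator.reset()
```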