eval_stats_base

class EvalStats(metrics: List[sensai.evaluation.eval_stats.eval_stats_base.TMetric], additional_metrics: Optional[List[sensai.evaluation.eval_stats.eval_stats_base.TMetric]] = None)[source]

Bases: Generic[sensai.evaluation.eval_stats.eval_stats_base.TMetric], sensai.util.string.ToStringMixin

__init__(metrics: List[sensai.evaluation.eval_stats.eval_stats_base.TMetric], additional_metrics: Optional[List[sensai.evaluation.eval_stats.eval_stats_base.TMetric]] = None)
set_name(name: str)
add_metric(metric: sensai.evaluation.eval_stats.eval_stats_base.TMetric)
compute_metric_value(metric: sensai.evaluation.eval_stats.eval_stats_base.TMetric) → float
metrics_dict() → Dict[str, float]

Computes all metrics.

Returns

a dictionary mapping metric names to values

get_all() → Dict[str, float]

Alias for metrics_dict; may be deprecated in the future.

class Metric(name: Optional[str] = None, bounds: Optional[Tuple[float, float]] = None)[source]

Bases: Generic[sensai.evaluation.eval_stats.eval_stats_base.TEvalStats], abc.ABC

__init__(name: Optional[str] = None, bounds: Optional[Tuple[float, float]] = None)
Parameters
  • name – the name of the metric; if None, the class’s name attribute is used

  • bounds – the minimum and maximum values the metric can take on (or None if the bounds are not specified)

name: str
abstract compute_value_for_eval_stats(eval_stats: sensai.evaluation.eval_stats.eval_stats_base.TEvalStats) → float
get_paired_metrics() → List[sensai.evaluation.eval_stats.eval_stats_base.TMetric]

Gets a list of metrics that should be considered together with this metric (e.g. for paired visualisations/plots). The direction of the pairing should be such that if this metric is “x”, the other is “y” for x-y type visualisations.

Returns

a list of metrics

has_finite_bounds() → bool
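
To illustrate how Metric and EvalStats interact, here is a minimal sketch. ValueListEvalStats and ValueMeanMetric are hypothetical classes written for this example (they are not part of the library); only the constructor and method signatures documented above are assumed.

from typing import List

from sensai.evaluation.eval_stats.eval_stats_base import EvalStats, Metric


class ValueListEvalStats(EvalStats):
    """Hypothetical EvalStats subclass that simply stores a list of values."""
    def __init__(self, values: List[float], metrics):
        self.values = values
        super().__init__(metrics)


class ValueMeanMetric(Metric):
    name = "mean_value"  # used because no name is passed to __init__

    def compute_value_for_eval_stats(self, eval_stats: ValueListEvalStats) -> float:
        return sum(eval_stats.values) / len(eval_stats.values)


stats = ValueListEvalStats([1.0, 2.0, 3.0], metrics=[ValueMeanMetric()])
print(stats.metrics_dict())  # expected: {'mean_value': 2.0}
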
class EvalStatsCollection(eval_stats_list: List[sensai.evaluation.eval_stats.eval_stats_base.TEvalStats])[source]

Bases: Generic[sensai.evaluation.eval_stats.eval_stats_base.TEvalStats, sensai.evaluation.eval_stats.eval_stats_base.TMetric], abc.ABC

__init__(eval_stats_list: List[sensai.evaluation.eval_stats.eval_stats_base.TEvalStats])
get_values(metric_name: str)
get_metric_names() → List[str]
get_metrics() → List[sensai.evaluation.eval_stats.eval_stats_base.TMetric]
get_metric_by_name(name: str) → Optional[sensai.evaluation.eval_stats.eval_stats_base.TMetric]
has_metric(metric: Union[sensai.evaluation.eval_stats.eval_stats_base.Metric, str]) → bool
agg_metrics_dict(agg_fns=(mean, std)) → Dict[str, float]
mean_metrics_dict() → Dict[str, float]
plot_distribution(metric_name: str, subtitle: Optional[str] = None, bins=None, kde=False, cdf=False, cdf_complementary=False, stat='proportion', **kwargs) → matplotlib.figure.Figure

Plots the distribution of a metric as a histogram.

Parameters
  • metric_name – name of the metric for which to plot the distribution (histogram) across evaluations

  • subtitle – the subtitle to add, if any

  • bins – the histogram bins (number of bins or bin boundaries); the metric’s bounds will be used to define the x-axis limits. If None, ‘auto’ bins are used

  • kde – whether to add a kernel density estimator plot

  • cdf – whether to add the cumulative distribution function (cdf)

  • cdf_complementary – whether to plot, if cdf is True, the complementary cdf instead of the regular cdf

  • stat – the statistic to compute for each bin (‘percent’, ‘probability’ (synonymous with ‘proportion’), ‘count’, ‘frequency’ or ‘density’); determines the y-axis value

  • kwargs – additional parameters to pass to seaborn.histplot (see https://seaborn.pydata.org/generated/seaborn.histplot.html)

Returns

the plot
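
A brief usage sketch: my_collection and the metric name "RMSE" are assumptions for illustration, with my_collection standing in for an instance of a concrete EvalStatsCollection subclass.

fig = my_collection.plot_distribution("RMSE", subtitle="10-fold cross-validation", kde=True)
fig.savefig("rmse_distribution.png")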

plot_scatter(metric_name_x: str, metric_name_y: str) → matplotlib.figure.Figure
plot_heat_map(metric_name_x: str, metric_name_y: str) → matplotlib.figure.Figure
to_data_frame() → pandas.core.frame.DataFrame
Returns

a DataFrame with the evaluation metrics from all contained EvalStats objects; the EvalStats objects’ name fields are used as the index if set

get_global_stats() → sensai.evaluation.eval_stats.eval_stats_base.TEvalStats

Alias for get_combined_eval_stats.

abstract get_combined_eval_stats() → sensai.evaluation.eval_stats.eval_stats_base.TEvalStats
Returns

an EvalStats object that combines the data from all contained EvalStats objects
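
Since get_combined_eval_stats is the only abstract method, a concrete collection merely has to define how its members are merged. A minimal sketch, reusing the hypothetical ValueListEvalStats and ValueMeanMetric from the example above; note that it does not assume how the base class stores its members internally, keeping its own reference instead.

from typing import List

from sensai.evaluation.eval_stats.eval_stats_base import EvalStatsCollection


class ValueListEvalStatsCollection(EvalStatsCollection):
    """Hypothetical collection that merges members by concatenating their values."""
    def __init__(self, eval_stats_list: List[ValueListEvalStats]):
        super().__init__(eval_stats_list)
        self._members = list(eval_stats_list)  # own reference; base-class internals are not assumed

    def get_combined_eval_stats(self) -> ValueListEvalStats:
        combined_values = [v for es in self._members for v in es.values]
        return ValueListEvalStats(combined_values, metrics=[ValueMeanMetric()])

With this in place, methods such as mean_metrics_dict and plot_distribution aggregate across the individual EvalStats objects, while get_combined_eval_stats pools the underlying data.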

class PredictionEvalStats(y_predicted: Optional[Union[numpy.ndarray, pandas.core.series.Series, pandas.core.frame.DataFrame, list]], y_true: Optional[Union[numpy.ndarray, pandas.core.series.Series, pandas.core.frame.DataFrame, list]], metrics: List[sensai.evaluation.eval_stats.eval_stats_base.TMetric], additional_metrics: Optional[List[sensai.evaluation.eval_stats.eval_stats_base.TMetric]] = None)[source]

Bases: sensai.evaluation.eval_stats.eval_stats_base.EvalStats[sensai.evaluation.eval_stats.eval_stats_base.TMetric], abc.ABC

Collects data for the evaluation of predicted values (including multi-dimensional predictions) and computes corresponding metrics.

__init__(y_predicted: Optional[Union[numpy.ndarray, pandas.core.series.Series, pandas.core.frame.DataFrame, list]], y_true: Optional[Union[numpy.ndarray, pandas.core.series.Series, pandas.core.frame.DataFrame, list]], metrics: List[sensai.evaluation.eval_stats.eval_stats_base.TMetric], additional_metrics: Optional[List[sensai.evaluation.eval_stats.eval_stats_base.TMetric]] = None)
Parameters
  • y_predicted – sequence of predicted values, or, in case of multi-dimensional predictions, either a data frame with one column per dimension or a nested sequence of values

  • y_true – sequence of ground truth labels of the same shape as y_predicted

  • metrics – list of metrics to be computed on the provided data

  • additional_metrics – metrics to compute in addition to the default metrics; should only be provided if metrics is None

add(y_predicted, y_true)

Adds a single pair of values to the evaluation.

Parameters
  • y_predicted – the value predicted by the model

  • y_true – the true value

add_all(y_predicted: Union[numpy.ndarray, pandas.core.series.Series, pandas.core.frame.DataFrame, list], y_true: Union[numpy.ndarray, pandas.core.series.Series, pandas.core.frame.DataFrame, list])
Parameters
  • y_predicted – sequence of predicted values, or, in case of multi-dimensional predictions, either a data frame with one column per dimension or a nested sequence of values

  • y_true – sequence of ground truth labels of the same shape as y_predicted
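
A sketch of incremental data collection and metric computation. SimplePredictionEvalStats and MeanAbsErrorMetric are hypothetical; in particular, the metric assumes that the collected values are accessible as y_predicted/y_true attributes on the stats object, which is an assumption made purely for this illustration.

from sensai.evaluation.eval_stats.eval_stats_base import Metric, PredictionEvalStats


class SimplePredictionEvalStats(PredictionEvalStats):
    """Hypothetical concrete subclass adding nothing beyond the base class."""


class MeanAbsErrorMetric(Metric):
    name = "MAE"

    def compute_value_for_eval_stats(self, eval_stats: SimplePredictionEvalStats) -> float:
        # attribute names y_predicted/y_true are an assumption for this sketch
        pairs = zip(eval_stats.y_predicted, eval_stats.y_true)
        return sum(abs(p - t) for p, t in pairs) / len(eval_stats.y_true)


stats = SimplePredictionEvalStats(y_predicted=[], y_true=[], metrics=[MeanAbsErrorMetric()])
stats.add_all(y_predicted=[2.5, 0.0], y_true=[3.0, 1.0])
stats.add(y_predicted=4.0, y_true=5.0)
print(stats.metrics_dict())  # ≈ {'MAE': 0.833} under the stated assumptions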

mean_stats(eval_stats_list: Sequence[sensai.evaluation.eval_stats.eval_stats_base.EvalStats]) → Dict[str, float][source]

For a list of EvalStats objects, computes the mean values of all metrics and returns them in a dictionary. Assumes that all provided EvalStats objects have the same metrics.
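
A usage sketch, assuming stats_a and stats_b are EvalStats instances sharing the same metrics (e.g. two instances of the hypothetical SimplePredictionEvalStats above):

from sensai.evaluation.eval_stats.eval_stats_base import mean_stats

averaged = mean_stats([stats_a, stats_b])  # e.g. {'MAE': ...}, averaged across both objects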

class EvalStatsPlot(*args, **kwds)[source]

Bases: Generic[sensai.evaluation.eval_stats.eval_stats_base.TEvalStats], abc.ABC

abstract create_figure(eval_stats: sensai.evaluation.eval_stats.eval_stats_base.TEvalStats, subtitle: str) → Optional[matplotlib.figure.Figure]
Parameters
  • eval_stats – the evaluation stats from which to generate the plot

  • subtitle – the plot’s subtitle

Returns

the figure, or None if this plot is not applicable or cannot be created
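
Implementing a plot thus only requires create_figure. A minimal sketch, again using the hypothetical ValueListEvalStats from the first example:

from typing import Optional

import matplotlib.pyplot as plt
from matplotlib.figure import Figure

from sensai.evaluation.eval_stats.eval_stats_base import EvalStatsPlot


class ValueHistogramPlot(EvalStatsPlot):
    """Hypothetical plot of the distribution of the collected values."""
    def create_figure(self, eval_stats: ValueListEvalStats, subtitle: str) -> Optional[Figure]:
        if not eval_stats.values:
            return None  # plot not applicable without collected data
        fig, ax = plt.subplots()
        ax.hist(eval_stats.values)
        ax.set_title(f"Value distribution\n{subtitle}")
        return fig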