detector_benchmark.pipeline.pipeline_utils

Functions

compute_bootstrap_metrics(data, labels[, n_bootstrap, ...])

Compute the metrics using bootstrap resampling.

create_logger(name[, silent, to_disk, log_file])

Create a new logger.

create_logger_file(log_path)

get_threshold_for_results(eval_json_path, target_fpr)

Module Contents

detector_benchmark.pipeline.pipeline_utils.compute_bootstrap_metrics(data, labels, n_bootstrap=1000, flip_labels=False)

Compute the metrics using bootstrap resampling.

Parameters:

data (list) – The data to compute the metrics on.

labels (list) – The labels of the data.

n_bootstrap (int) – The number of bootstrap samples to use.

flip_labels (bool) – Whether to flip the labels.

Returns:

dict – The metrics computed over the bootstrap samples.
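
Example (a minimal sketch; the expected format of data and the keys of the returned dictionary are assumptions rather than guarantees of this API):

    from detector_benchmark.pipeline.pipeline_utils import compute_bootstrap_metrics

    # Hypothetical detector scores and binary ground-truth labels.
    # Whether `data` should hold raw scores or thresholded predictions is an
    # assumption here; check the implementation before relying on it.
    data = [0.91, 0.12, 0.77, 0.05, 0.64, 0.33]
    labels = [1, 0, 1, 0, 1, 0]

    # Resample (data, labels) n_bootstrap times and aggregate the metrics.
    metrics = compute_bootstrap_metrics(data, labels, n_bootstrap=1000)
    print(metrics)  # dict of metric estimates over the bootstrap samples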

detector_benchmark.pipeline.pipeline_utils.create_logger(name, silent=False, to_disk=False, log_file=None)

Create a new logger.
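
A usage sketch, assuming the returned object behaves like a standard logging.Logger and that silent, to_disk, and log_file control console suppression and optional file output (inferred from the parameter names; not documented above):

    from detector_benchmark.pipeline.pipeline_utils import create_logger

    # Console logger for a pipeline run.
    logger = create_logger("pipeline")
    logger.info("Starting detector evaluation")

    # Variant that also writes to a file on disk (semantics inferred, not documented).
    file_logger = create_logger("pipeline_file", to_disk=True, log_file="logs/pipeline.log")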

detector_benchmark.pipeline.pipeline_utils.create_logger_file(log_path)
detector_benchmark.pipeline.pipeline_utils.get_threshold_for_results(eval_json_path: str, target_fpr: float)
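
No docstring is attached to get_threshold_for_results, so the call below is only a sketch: it assumes eval_json_path points to an evaluation-results JSON produced earlier in the pipeline and that the return value is the detector decision threshold achieving the requested false-positive rate.

    from detector_benchmark.pipeline.pipeline_utils import get_threshold_for_results

    # Hypothetical results path; the file name and the interpretation of the return
    # value are assumptions, not part of this API reference.
    threshold = get_threshold_for_results("results/eval_results.json", target_fpr=0.05)
    print(f"Decision threshold at 5% FPR: {threshold}")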