Metrics
A collection of methods for the evaluation of classifiers.
@author: J. Cid-Sueiro, A. Gallardo-Antolin
- src.domain_classifier.metrics.binary_metrics(preds, labels, sampling_probs=None)
Computes performance metrics from binary labels and binary predictions only; a usage sketch follows this entry
- Parameters
preds (np.array) – Binary predictions
labels (np.array) – True class labels
sampling_probs (np.array, optional (default=None)) – Sampling probabilities. If given, performance metrics are computed as weighted averages
- Returns
eval_scores – A dictionary of evaluation metrics.
- Return type
dict
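A minimal usage sketch for binary_metrics(), assuming the package is importable from the repository root as src.domain_classifier.metrics. The toy arrays and the exact keys of the returned dictionary are illustrative assumptions, not part of the documented API.

```python
import numpy as np
from src.domain_classifier import metrics

# Toy binary predictions and ground-truth labels.
preds = np.array([1, 0, 1, 1, 0])
labels = np.array([1, 0, 0, 1, 0])

# Optional sampling probabilities: when given, metrics are computed as weighted averages.
sampling_probs = np.array([0.3, 0.2, 0.2, 0.2, 0.1])

eval_scores = metrics.binary_metrics(preds, labels, sampling_probs=sampling_probs)
print(eval_scores)  # dictionary of evaluation metrics (exact keys depend on the implementation)
```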
- src.domain_classifier.metrics.print_metrics(m, roc=None, title='', data='', print_unweighted=True)
Pretty-prints the given metrics; a usage sketch follows this entry
- Parameters
m (dict) – Dictionary of metrics (produced by the binary_metrics() method)
roc (dict or None, optional (default=None)) – A dictionary of score-based metrics. It is used to print AUC.
data (str, optional (default="")) – Identifier of the dataset used to compute the metrics. It is used to compose the text title
print_unweighted (bool, optional (default=True)) – If True, unweighted metrics are printed in addition to the weighted metrics
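A minimal usage sketch for print_metrics(), assuming the same import path as above. Passing the output of score_based_metrics() as roc so that the AUC can be printed is an assumption based on the parameter description; the title and data strings are illustrative.

```python
import numpy as np
from src.domain_classifier import metrics

labels = np.array([1, 0, 0, 1, 0])
preds = np.array([1, 0, 1, 1, 0])
scores = np.array([0.9, 0.2, 0.6, 0.8, 0.1])

m = metrics.binary_metrics(preds, labels)
roc = metrics.score_based_metrics(scores, labels)  # score-based metrics, used here to print AUC

# 'data' identifies the dataset and is used to compose the printed title.
metrics.print_metrics(m, roc=roc, title='Domain classifier', data='validation',
                      print_unweighted=True)
```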
- src.domain_classifier.metrics.score_based_metrics(scores, labels, sampling_probs=None)
Computes score-based metrics; a usage sketch follows this entry
- Parameters
scores (np.array) – Score values
labels (np.array) – Target values
sampling_probs (np.array, optional (default=None)) – Sampling probabilities. If given, performance metrics are computed as weighted averages
- Returns
eval_scores – A dictionary of evaluation metrics.
- Return type
dict
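A minimal usage sketch for score_based_metrics(), assuming the same import path as above. Treating the scores as class-1 probabilities and the exact keys of the returned dictionary (e.g. an AUC entry) are assumptions for illustration only.

```python
import numpy as np
from src.domain_classifier import metrics

# Continuous score values (e.g. class-1 probabilities) and binary target values.
scores = np.array([0.9, 0.2, 0.6, 0.8, 0.1])
labels = np.array([1, 0, 0, 1, 0])

eval_scores = metrics.score_based_metrics(scores, labels)
print(eval_scores)  # score-based metrics such as AUC (exact keys depend on the implementation)
```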