madmom.evaluation.onsets

This module contains onset evaluation functionality described in [1]:

References

[1] Sebastian Böck, Florian Krebs and Markus Schedl, “Evaluating the Online Capabilities of Onset Detection Methods”, Proceedings of the 13th International Society for Music Information Retrieval Conference (ISMIR), 2012.
madmom.evaluation.onsets.onset_evaluation(detections, annotations, window=0.025)[source]

Determine the true/false positive/negative detections.

Parameters:
detections : numpy array

Detected notes.

annotations : numpy array

Annotated ground truth notes.

window : float, optional

Evaluation window [seconds].

Returns:
tp : numpy array, shape (num_tp,)

True positive detections.

fp : numpy array, shape (num_fp,)

False positive detections.

tn : numpy array, shape (0,)

True negative detections (empty, see notes).

fn : numpy array, shape (num_fn,)

False negative detections.

errors : numpy array, shape (num_tp,)

Errors of the true positive detections with respect to the annotations.

Notes

The returned true negative array is empty, because this class is of no interest for onset evaluation: it would be orders of magnitude bigger than the true positive array.
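The matching behind these return values can be sketched with plain NumPy. The following is a simplified greedy nearest-neighbour matching, not madmom's actual implementation; the function name is hypothetical:

```python
import numpy as np

def onset_evaluation_sketch(detections, annotations, window=0.025):
    """Greedy nearest-neighbour matching within +/- window (a sketch)."""
    detections = np.asarray(detections, dtype=float)
    annotations = np.asarray(annotations, dtype=float)
    used = np.zeros(len(annotations), dtype=bool)
    tp, errors = [], []
    for det in detections:
        if len(annotations) == 0:
            break
        # distance of this detection to every still unmatched annotation
        diffs = np.abs(annotations - det)
        diffs[used] = np.inf
        idx = int(np.argmin(diffs))
        if diffs[idx] <= window:
            used[idx] = True
            tp.append(det)
            errors.append(det - annotations[idx])
    tp = np.array(tp)
    fp = np.setdiff1d(detections, tp)   # detections without a match
    fn = annotations[~used]             # annotations without a match
    tn = np.zeros(0)  # true negatives are not tracked (see notes above)
    return tp, fp, tn, fn, np.array(errors)
```

For example, with detections `[0.99, 1.5]` and annotations `[1.0, 2.0]`, the detection at 0.99 matches the annotation at 1.0 (error -0.01), 1.5 is a false positive, and 2.0 is a false negative.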

class madmom.evaluation.onsets.OnsetEvaluation(detections, annotations, window=0.025, combine=0, delay=0, **kwargs)[source]

Evaluation class for measuring Precision, Recall and F-measure of onsets.

Parameters:
detections : str, list or numpy array

Detected notes.

annotations : str, list or numpy array

Annotated ground truth notes.

window : float, optional

F-measure evaluation window [seconds].

combine : float, optional

Combine all annotated onsets that are within combine seconds of each other.

delay : float, optional

Delay the detections by delay seconds for evaluation.

mean_error

Mean of the errors.

std_error

Standard deviation of the errors.

tostring(**kwargs)[source]

Format the evaluation metrics as a human readable string.

Returns:
str

Evaluation metrics formatted as a human readable string.
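The Precision, Recall and F-measure this class reports follow the standard definitions over true/false positive and false negative counts. A minimal self-contained sketch of those formulas (not madmom's code; the function name is hypothetical):

```python
def precision_recall_fmeasure(num_tp, num_fp, num_fn):
    """Standard precision, recall and F-measure from detection counts."""
    # precision: fraction of detections that match an annotation
    precision = num_tp / (num_tp + num_fp) if (num_tp + num_fp) else 0.0
    # recall: fraction of annotations that were detected
    recall = num_tp / (num_tp + num_fn) if (num_tp + num_fn) else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    # F-measure: harmonic mean of precision and recall
    fmeasure = 2 * precision * recall / (precision + recall)
    return precision, recall, fmeasure
```

For example, 8 true positives with 2 false positives and 2 false negatives yield precision, recall and F-measure of 0.8 each.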

class madmom.evaluation.onsets.OnsetSumEvaluation(eval_objects, name=None)[source]

Class for summing onset evaluations.

errors

Errors of the true positive detections with respect to the ground truth.
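Summing evaluations amounts to pooling the per-file results before computing metrics. A sketch of how the pooled errors attribute could be derived, assuming each evaluation object exposes an `errors` array (this mirrors the idea, not madmom's exact code):

```python
import numpy as np

def sum_errors(eval_objects):
    """Pool the true-positive errors of several evaluations (a sketch)."""
    if not eval_objects:
        # no evaluations: return an empty error array
        return np.zeros(0)
    return np.concatenate([np.asarray(e.errors) for e in eval_objects])
```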

class madmom.evaluation.onsets.OnsetMeanEvaluation(eval_objects, name=None, **kwargs)[source]

Class for averaging onset evaluations.

mean_error

Mean of the errors.

std_error

Standard deviation of the errors.

tostring(**kwargs)[source]

Format the evaluation metrics as a human readable string.

Returns:
str

Evaluation metrics formatted as a human readable string.
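In contrast to summing, a mean evaluation averages per-file results. A hedged sketch of computing the mean and standard deviation over per-file mean errors, assuming averaging happens over files (madmom's exact aggregation may differ; the function name is hypothetical):

```python
import numpy as np

def mean_std_error(per_file_mean_errors):
    """Mean and standard deviation over per-file mean errors (a sketch)."""
    arr = np.asarray(per_file_mean_errors, dtype=float)
    if arr.size == 0:
        # no evaluations: report zero mean and deviation
        return 0.0, 0.0
    return float(arr.mean()), float(arr.std())
```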

madmom.evaluation.onsets.add_parser(parser)[source]

Add an onset evaluation sub-parser to an existing parser.

Parameters:
parser : argparse parser instance

Existing argparse parser object.

Returns:
sub_parser : argparse sub-parser instance

Onset evaluation sub-parser.

parser_group : argparse argument group

Onset evaluation argument group.
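The usual argparse pattern behind such a helper looks like the following. This is a generic stdlib-only sketch whose function and option names are illustrative; madmom's actual sub-parser may define different or additional arguments:

```python
import argparse

def add_onset_parser_sketch(parser):
    """Add an 'onsets' sub-command with evaluation options (a sketch)."""
    sub_parsers = parser.add_subparsers(dest='command')
    sub_parser = sub_parsers.add_parser('onsets', help='onset evaluation')
    # group the onset-specific options, mirroring the documented parameters
    group = sub_parser.add_argument_group('onset evaluation arguments')
    group.add_argument('--window', type=float, default=0.025,
                       help='F-measure evaluation window [seconds]')
    group.add_argument('--combine', type=float, default=0,
                       help='combine annotated onsets within this many seconds')
    group.add_argument('--delay', type=float, default=0,
                       help='delay the detections by this many seconds')
    return sub_parser, group

parser = argparse.ArgumentParser()
sub_parser, group = add_onset_parser_sketch(parser)
args = parser.parse_args(['onsets', '--window', '0.05'])
```

Returning both the sub-parser and the argument group lets callers attach further options to the same group later.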