madmom.evaluation.notes

This module contains note evaluation functionality.

madmom.evaluation.notes.load_notes(*args, **kwargs)[source]

Load the notes from the given values or file.

Parameters:

values: str, file handle, list of tuples or numpy array

Notes values.

Returns:

numpy array

Notes.

Notes

Expected file/tuple/row format:

‘note_time’ ‘MIDI_note’ [‘duration’ [‘MIDI_velocity’]]
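Files in this row format can be parsed with plain numpy. The following is an illustrative sketch only, not the madmom implementation (which additionally accepts file handles, lists of tuples and numpy arrays):

```python
import io
import numpy as np

# Hypothetical note file: note_time, MIDI_note, optional duration and velocity.
notes_txt = io.StringIO(
    "0.50 60 0.25 100\n"
    "1.00 64 0.25 90\n"
)

# ndmin=2 guarantees a 2D array even if the file contains a single note.
notes = np.loadtxt(notes_txt, ndmin=2)
print(notes.shape)
```

Each row is then one note, with onset time in column 0 and MIDI pitch in column 1.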

madmom.evaluation.notes.remove_duplicate_notes(data)[source]

Remove duplicate rows from the array.

Parameters:

data : numpy array

Data.

Returns:

numpy array

Data array with duplicate rows removed.

Notes

This function removes only exact duplicates.
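One way to sketch this behaviour with plain numpy (not necessarily the madmom implementation) is `np.unique` over rows; note that only bit-exact duplicates are dropped, and the result comes back sorted:

```python
import numpy as np

# Rows 0 and 1 are exact duplicates; a near-duplicate (0.501) would be kept.
data = np.array([[0.5, 60.0],
                 [0.5, 60.0],
                 [1.0, 64.0]])

# axis=0 treats each row as one item, removing exact duplicate rows.
deduped = np.unique(data, axis=0)
print(deduped.shape)
```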

madmom.evaluation.notes.note_onset_evaluation(detections, annotations, window=0.025)[source]

Determine the true/false positive/negative note onset detections.

Parameters:

detections : numpy array

Detected notes.

annotations : numpy array

Annotated ground truth notes.

window : float, optional

Evaluation window [seconds].

Returns:

tp : numpy array, shape (num_tp, 2)

True positive detections.

fp : numpy array, shape (num_fp, 2)

False positive detections.

tn : numpy array, shape (0, 2)

True negative detections (empty, see notes).

fn : numpy array, shape (num_fn, 2)

False negative detections.

errors : numpy array, shape (num_tp, 2)

Errors of the true positive detections wrt. the annotations.

Notes

The expected note row format is:

‘note_time’ ‘MIDI_note’ [‘duration’ [‘MIDI_velocity’]]

The returned true negative array is empty because this class is not of interest for evaluation: it is orders of magnitude bigger than the true positive array.

class madmom.evaluation.notes.NoteEvaluation(detections, annotations, window=0.025, delay=0, **kwargs)[source]

Evaluation class for measuring Precision, Recall and F-measure of notes.

Parameters:

detections : str, list or numpy array

Detected notes.

annotations : str, list or numpy array

Annotated ground truth notes.

window : float, optional

F-measure evaluation window [seconds].

delay : float, optional

Delay the detections by delay seconds for evaluation.

mean_error

Mean of the errors.

std_error

Standard deviation of the errors.

tostring(notes=False, **kwargs)[source]
Parameters:

notes : bool, optional

Display detailed output for all individual notes.

Returns:

str

Evaluation metrics formatted as a human readable string.
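The Precision, Recall and F-measure this class reports follow the standard definitions and can be reproduced from the true/false positive/negative counts; the function below is an illustrative sketch, not the madmom API:

```python
def precision_recall_f(num_tp, num_fp, num_fn):
    """Standard Precision/Recall/F-measure from detection counts,
    guarding against division by zero."""
    precision = num_tp / (num_tp + num_fp) if num_tp + num_fp else 0.0
    recall = num_tp / (num_tp + num_fn) if num_tp + num_fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

p, r, f = precision_recall_f(num_tp=8, num_fp=2, num_fn=2)
```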

class madmom.evaluation.notes.NoteSumEvaluation(eval_objects, name=None)[source]

Class for summing note evaluations.

errors

Errors of the true positive detections wrt. the ground truth.

class madmom.evaluation.notes.NoteMeanEvaluation(eval_objects, name=None, **kwargs)[source]

Class for averaging note evaluations.

mean_error

Mean of the errors.

std_error

Standard deviation of the errors.

tostring(**kwargs)[source]

Format the evaluation metrics as a human readable string.

Returns:

str

Evaluation metrics formatted as a human readable string.
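The distinction between summing and averaging evaluations can be sketched with plain counts: a sum evaluation pools the true/false positive/negative counts over all files before computing one set of metrics, whereas a mean evaluation computes the metrics per file and then averages them. The counts and helper below are purely illustrative:

```python
import numpy as np

# hypothetical per-file (num_tp, num_fp, num_fn) counts
files = [(8, 2, 2), (1, 0, 9)]

def fmeasure(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# "sum" style: pool the counts, then compute a single F-measure
tp, fp, fn = (sum(c) for c in zip(*files))
f_sum = fmeasure(tp, fp, fn)

# "mean" style: F-measure per file, then average
f_mean = np.mean([fmeasure(*c) for c in files])
```

The two aggregates generally differ: the second file's poor recall drags the mean down more than it affects the pooled counts.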

madmom.evaluation.notes.add_parser(parser)[source]

Add a note evaluation sub-parser to an existing parser.

Parameters:

parser : argparse parser instance

Existing argparse parser object.

Returns:

sub_parser : argparse sub-parser instance

Note evaluation sub-parser.

parser_group : argparse argument group

Note evaluation argument group.
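The general argparse sub-parser pattern this function follows can be sketched as below; the sub-command name 'notes', the '--window' option and the group title are assumptions for illustration, not the exact madmom options:

```python
import argparse

parser = argparse.ArgumentParser(description='evaluation tool')
sub_parsers = parser.add_subparsers(dest='command')

# hypothetical note-evaluation sub-parser with its own argument group
sub_parser = sub_parsers.add_parser('notes', help='note evaluation')
parser_group = sub_parser.add_argument_group('note evaluation arguments')
parser_group.add_argument('--window', type=float, default=0.025,
                          help='evaluation window [seconds]')

args = parser.parse_args(['notes', '--window', '0.05'])
```

Returning both the sub-parser and the argument group, as documented above, lets callers attach further options to either object.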