madmom.evaluation.alignment

This module contains global alignment evaluation functionality.

exception madmom.evaluation.alignment.AlignmentFormatError(value=None)[source]

Exception to be raised whenever an incorrect alignment format is given.

madmom.evaluation.alignment.load_alignment(values)[source]

Load the alignment from the given values or file.

Parameters:

values : str, file handle, list or numpy array

Alignment values.

Returns:

numpy array

Time and score position columns.
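
A minimal usage sketch, assuming a list of (time, score position) tuples is passed, as the parameter description above allows (the values are made up for illustration):

>>> from madmom.evaluation.alignment import load_alignment
>>> # alignment given as (time, score position) pairs
>>> values = [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0)]
>>> alignment = load_alignment(values)  # 2D numpy array with two columns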

madmom.evaluation.alignment.compute_event_alignment(alignment, ground_truth)[source]

This function finds the alignment output corresponding to each ground truth event. In general, the alignment algorithm will output more alignment positions than there are events in the score, e.g. if it is designed to output the current alignment at constant time intervals.

Parameters:

alignment : 2D numpy array

The score follower’s resulting alignment. 2D array, where the first column is the time in seconds and the second column the beat position.

ground_truth : 2D numpy array

Ground truth of the aligned performance. 2D array, where the first column is the time in seconds and the second column the beat position. It can contain the alignment positions for each individual note; in this case, the deviation of each note is taken into account.

Returns:

numpy array

Array of the same size as ground_truth, with each row representing the alignment of the corresponding ground truth element.
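
A minimal sketch of how this function might be called, with made-up values; the alignment is assumed to report its position every 0.1 seconds, while the ground truth contains only two events:

>>> import numpy as np
>>> from madmom.evaluation.alignment import compute_event_alignment
>>> # score follower output, reported at constant 0.1 s intervals
>>> alignment = np.array([[0.0, 1.0], [0.1, 1.2], [0.2, 1.5],
...                       [0.3, 1.9], [0.4, 2.1]])
>>> # ground truth with one row per annotated event
>>> ground_truth = np.array([[0.05, 1.0], [0.35, 2.0]])
>>> # the result has the same shape as ground_truth
>>> event_alignment = compute_event_alignment(alignment, ground_truth)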

madmom.evaluation.alignment.compute_metrics(event_alignment, ground_truth, window, err_hist_bins)[source]

This function computes the evaluation metrics described in [R2], plus a cumulative histogram of absolute errors.

Parameters:

event_alignment : 2D numpy array

Sequence alignment as computed by the score follower. 2D array, where the first column is the alignment time in seconds and the second column the position in beats. Needs to be the same length as ground_truth, hence for each element in the ground truth the corresponding alignment has to be available. Use the compute_event_alignment() function to compute this.

ground_truth : 2D numpy array

Ground truth of the aligned performance. 2D array, where the first column is the time in seconds and the second column the beat position. It can contain the alignment positions for each individual note; in this case, the deviation of each note is taken into account.

window : float

Tolerance window in seconds. Alignments that are off by less than this amount from the ground truth are considered correct.

err_hist_bins : list

List of error bounds for which the cumulative histogram of absolute errors is computed (e.g. [0.1, 0.3] gives the percentage of events aligned with an error smaller than 0.1 s and 0.3 s, respectively).

Returns:

metrics : dict

Some of the metrics described in [R2], plus the error histogram.

References

[R2] Arshia Cont, Diemo Schwarz, Norbert Schnell and Christopher Raphael, “Evaluation of Real-Time Audio-to-Score Alignment”, Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR), 2007.
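
A minimal sketch chaining compute_event_alignment() and compute_metrics(); the values, the tolerance window and the histogram bins are made up for illustration:

>>> import numpy as np
>>> from madmom.evaluation.alignment import (compute_event_alignment,
...                                           compute_metrics)
>>> alignment = np.array([[0.0, 1.0], [0.1, 1.2], [0.2, 1.5],
...                       [0.3, 1.9], [0.4, 2.1]])
>>> ground_truth = np.array([[0.05, 1.0], [0.35, 2.0]])
>>> event_alignment = compute_event_alignment(alignment, ground_truth)
>>> # dict with (some of) the metrics of [R2] and the error histogram
>>> metrics = compute_metrics(event_alignment, ground_truth,
...                           window=0.25, err_hist_bins=[0.1, 0.3])
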
class madmom.evaluation.alignment.AlignmentEvaluation(alignment, ground_truth, window=0.25, name=None, **kwargs)[source]

Alignment evaluation class for beat-level alignments. Beat-level aligners output beat positions for points in time, rather than computing a timestamp for each individual event in the score. The metrics listed under Attributes below are available.

Parameters:

alignment : 2D numpy array or list of tuples

Computed alignment; the first column is the time in seconds, the second column the beat position.

ground_truth : 2D numpy array or list of tuples

Ground truth of the aligned file; the first column is the time in seconds, the second column the beat position. It can contain the alignment positions for each individual event; in this case, the deviation of each event is taken into account.

window : float

Tolerance window in seconds. Alignments that are off by less than this amount from the ground truth are considered correct.

name : str

Name to be displayed.

Attributes

miss_rate : float

Percentage of missed events (events that exist in the reference score, but are not reported).

misalign_rate : float

Percentage of misaligned events (events whose alignment is off by more than the defined tolerance window).

avg_imprecision : float

Average alignment error of non-misaligned events.

stddev_imprecision : float

Standard deviation of the alignment error of non-misaligned events.

avg_error : float

Average alignment error.

stddev_error : float

Standard deviation of the alignment error.

piece_completion : float

Percentage of events that were followed until the aligner hangs, i.e. from which point on there are only misaligned or missed events.

below_{x}_{yy} : float

Percentage of events that are aligned with an error smaller than x.yy seconds.

tostring(histogram=False, **kwargs)[source]

Format the evaluation metrics as a human readable string.

Parameters:

histogram : bool

Also output the error histogram.

Returns:

str

Evaluation metrics formatted as a human readable string.
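
A minimal usage sketch with made-up alignment and ground truth values; the default tolerance window of 0.25 seconds and the name are given explicitly only for illustration:

>>> import numpy as np
>>> from madmom.evaluation.alignment import AlignmentEvaluation
>>> alignment = np.array([[0.0, 1.0], [0.5, 2.0], [1.0, 3.0]])
>>> ground_truth = np.array([[0.05, 1.0], [0.55, 2.0], [1.2, 3.0]])
>>> e = AlignmentEvaluation(alignment, ground_truth, window=0.25,
...                         name='example piece')
>>> report = e.tostring()  # human readable summary of the metrics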

class madmom.evaluation.alignment.AlignmentSumEvaluation(eval_objects, name=None)[source]

Class for averaging alignment evaluation scores, considering the lengths of the aligned pieces. For a detailed description of the available metrics, refer to AlignmentEvaluation.

Parameters:

eval_objects : list

Evaluation objects.

name : str

Name to be displayed.

class madmom.evaluation.alignment.AlignmentMeanEvaluation(eval_objects, name=None)[source]

Class for averaging alignment evaluation scores, averaging piecewise (i.e. ignoring the lengths of the pieces). For a detailed description of the available metrics, refer to AlignmentEvaluation.

Parameters:

eval_objects : list

Evaluation objects.

name : str

Name to be displayed.
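
A minimal sketch aggregating two single-piece evaluations with both AlignmentSumEvaluation (considering piece lengths) and AlignmentMeanEvaluation (piecewise averaging); the data is made up and reused for both pieces:

>>> import numpy as np
>>> from madmom.evaluation.alignment import (AlignmentEvaluation,
...     AlignmentSumEvaluation, AlignmentMeanEvaluation)
>>> alignment = np.array([[0.0, 1.0], [0.5, 2.0], [1.0, 3.0]])
>>> ground_truth = np.array([[0.05, 1.0], [0.55, 2.0], [1.2, 3.0]])
>>> evals = [AlignmentEvaluation(alignment, ground_truth, name='piece 1'),
...          AlignmentEvaluation(alignment, ground_truth, name='piece 2')]
>>> sum_eval = AlignmentSumEvaluation(evals, name='sum')    # considers piece lengths
>>> mean_eval = AlignmentMeanEvaluation(evals, name='mean') # ignores piece lengths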

madmom.evaluation.alignment.add_parser(parser)[source]

Add an alignment evaluation sub-parser to an existing parser.

Parameters:

parser : argparse parser instance

Existing argparse parser object.

Returns:

sub_parser : argparse sub-parser instance

Alignment evaluation sub-parser.

parser_group : argparse argument group

Alignment evaluation argument group.
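
A usage sketch; it is assumed here (based on how madmom's evaluation sub-parsers are typically wired together, not stated above) that the function is called with the sub-parsers object returned by add_subparsers() rather than with the top-level parser itself:

>>> import argparse
>>> from madmom.evaluation import alignment
>>> parser = argparse.ArgumentParser()
>>> sub_parsers = parser.add_subparsers()
>>> # assumption: the sub-parsers object is what add_parser() expects
>>> sub_parser, parser_group = alignment.add_parser(sub_parsers)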