madmom.features.beats¶
This module contains beat tracking related functionality.
- class madmom.features.beats.RNNBeatProcessor(post_processor=<function average_predictions>, online=False, nn_files=None, **kwargs)[source]¶
Processor to get a beat activation function from multiple RNNs.
Parameters: - post_processor : Processor, optional
Post-processor, default is to average the predictions.
- online : bool, optional
Use signal processing parameters and RNN models suitable for online mode.
- nn_files : list, optional
List with trained RNN model files. By default (‘None’), an ensemble of networks is used.
References
[1] Sebastian Böck and Markus Schedl, “Enhanced Beat Tracking with Context-Aware Neural Networks”, Proceedings of the 14th International Conference on Digital Audio Effects (DAFx), 2011.
Examples
Create a RNNBeatProcessor and pass a file through the processor. The returned 1d array represents the probability of a beat at each frame, sampled at 100 frames per second.
>>> proc = RNNBeatProcessor()
>>> proc
<madmom.features.beats.RNNBeatProcessor object at 0x...>
>>> proc('tests/data/audio/sample.wav')
array([0.00479, 0.00603, 0.00927, 0.01419, ... 0.02725], dtype=float32)
For online processing, online must be set to ‘True’. If processing power is limited, fewer RNN models can be specified via nn_files. The audio signal is then processed frame by frame.
>>> from madmom.models import BEATS_LSTM
>>> proc = RNNBeatProcessor(online=True, nn_files=[BEATS_LSTM[0]])
>>> proc
<madmom.features.beats.RNNBeatProcessor object at 0x...>
>>> proc('tests/data/audio/sample.wav')
array([0.03887, 0.02619, 0.00747, 0.00218, ... 0.04825], dtype=float32)
- class madmom.features.beats.MultiModelSelectionProcessor(num_ref_predictions, **kwargs)[source]¶
Processor for selecting the most suitable model (i.e. the predictions thereof) from multiple models/predictions.
Parameters: - num_ref_predictions : int
Number of reference predictions (see below).
Notes
This processor selects the most suitable prediction from multiple models by comparing each of them to the prediction of a reference model. The one with the smallest mean squared error is chosen.
If num_ref_predictions is 0 or None, an averaged prediction is computed from the given predictions and used as reference.
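The selection scheme described above can be sketched in plain NumPy. This is a hedged re-implementation of the idea only, not madmom's actual code; the function name select_prediction is made up for illustration:

```python
import numpy as np

def select_prediction(predictions, num_ref_predictions=None):
    """Pick the prediction with the smallest mean squared error to a
    reference prediction (illustrative sketch, not madmom's code)."""
    predictions = [np.asarray(p, dtype=float) for p in predictions]
    if not num_ref_predictions:
        # no dedicated reference: average all predictions
        reference = np.mean(predictions, axis=0)
        candidates = predictions
    else:
        # the first num_ref_predictions entries serve as the reference
        reference = np.mean(predictions[:num_ref_predictions], axis=0)
        candidates = predictions[num_ref_predictions:]
    # mean squared error of each candidate against the reference
    errors = [np.mean((p - reference) ** 2) for p in candidates]
    return candidates[int(np.argmin(errors))]
```

With num_ref_predictions=None every prediction competes against the ensemble average; with num_ref_predictions=1 the first list element acts as the dedicated reference and is excluded from the candidates.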
References
[1] Sebastian Böck, Florian Krebs and Gerhard Widmer, “A Multi-Model Approach to Beat Tracking Considering Heterogeneous Music Styles”, Proceedings of the 15th International Society for Music Information Retrieval Conference (ISMIR), 2014.
Examples
The MultiModelSelectionProcessor takes a list of model predictions as its call argument. Thus, post_processor of RNNBeatProcessor has to be set to ‘None’ in order to get the predictions of all models.
>>> proc = RNNBeatProcessor(post_processor=None)
>>> proc
<madmom.features.beats.RNNBeatProcessor object at 0x...>
When passing a file through the processor, a list with predictions, one for each model tested, is returned.
>>> predictions = proc('tests/data/audio/sample.wav')
>>> predictions
[array([0.00535, 0.00774, ..., 0.02343, 0.04931], dtype=float32),
 array([0.0022 , 0.00282, ..., 0.00825, 0.0152 ], dtype=float32),
 ...,
 array([0.005  , 0.0052 , ..., 0.00472, 0.01524], dtype=float32),
 array([0.00319, 0.0044 , ..., 0.0081 , 0.01498], dtype=float32)]
We can feed these predictions to the MultiModelSelectionProcessor. Since we do not have a dedicated reference prediction (which would have to be the first element of the list, with num_ref_predictions set to 1), we simply set num_ref_predictions to ‘None’. The MultiModelSelectionProcessor then averages all predictions to obtain a reference prediction against which all others are compared.
>>> mm_proc = MultiModelSelectionProcessor(num_ref_predictions=None)
>>> mm_proc(predictions)
array([0.00759, 0.00901, ..., 0.00843, 0.01834], dtype=float32)
- process(predictions, **kwargs)[source]¶
Selects the most appropriate prediction from the list of predictions.
Parameters: - predictions : list
Predictions (beat activation functions) of multiple models.
Returns: - numpy array
Most suitable prediction.
Notes
The reference beat activation function must be the first one in the list of given predictions.
- madmom.features.beats.detect_beats(activations, interval, look_aside=0.2)[source]¶
Detects the beats in the given activation function as in [1].
Parameters: - activations : numpy array
Beat activations.
- interval : int
Look for the next beat each interval frames.
- look_aside : float
Look this fraction of the interval to each side to detect the beats.
Returns: - numpy array
Beat positions [frames].
Notes
A Hamming window of width 2 * look_aside * interval is applied around the expected beat position in order to prefer beats closer to the centre.
References
[1] Sebastian Böck and Markus Schedl, “Enhanced Beat Tracking with Context-Aware Neural Networks”, Proceedings of the 14th International Conference on Digital Audio Effects (DAFx), 2011.
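The windowed tracking procedure described in the Notes can be sketched as follows. This is a simplified, hedged re-implementation of the idea only, not madmom's actual code; detect_beats_sketch is a made-up name:

```python
import numpy as np

def detect_beats_sketch(activations, interval, look_aside=0.2):
    """Iteratively track beats one interval apart, weighting candidate
    positions with a Hamming window centred on the expected position
    (illustrative sketch, not madmom's actual implementation)."""
    side = int(look_aside * interval)
    win = np.hamming(2 * side + 1)
    # start at the global maximum of the activation function
    pos = int(np.argmax(activations))
    beats = [pos]
    # track forwards, one expected beat interval at a time
    while pos + interval < len(activations):
        expected = pos + interval
        lo = max(expected - side, 0)
        hi = min(expected + side + 1, len(activations))
        # weight local activations so peaks near the centre are preferred
        offset = expected - side
        segment = activations[lo:hi] * win[lo - offset:hi - offset]
        pos = lo + int(np.argmax(segment))
        beats.append(pos)
    return np.array(beats)
```

The window weighting is what resolves ties between off-centre and centred activation peaks in favour of the expected beat position.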
- class madmom.features.beats.BeatTrackingProcessor(look_aside=0.2, look_ahead=10.0, fps=None, tempo_estimator=None, **kwargs)[source]¶
Track the beats according to the previously determined (local) tempo by iteratively aligning them around the estimated position [1].
Parameters: - look_aside : float, optional
Look this fraction of the estimated beat interval to each side of the assumed next beat position to look for the most likely position of the next beat.
- look_ahead : float, optional
Look look_ahead seconds in both directions to determine the local tempo and align the beats accordingly.
- tempo_estimator : TempoEstimationProcessor, optional
Use this processor to estimate the (local) tempo. If ‘None’, a default tempo estimator will be created and used.
- fps : float, optional
Frames per second.
- kwargs : dict, optional
Keyword arguments passed to madmom.features.tempo.TempoEstimationProcessor if no tempo_estimator was given.
Notes
If look_ahead is not set, a constant tempo throughout the whole piece is assumed. If look_ahead is set, the local tempo (in a range +/- look_ahead seconds around the actual position) is estimated and then the next beat is tracked accordingly. This procedure is repeated from the new position to the end of the piece.
Instead of the auto-correlation based method for tempo estimation proposed in [1], a comb filter based method [2] is used by default. The behaviour can be controlled with the tempo_method parameter.
References
[1] Sebastian Böck and Markus Schedl, “Enhanced Beat Tracking with Context-Aware Neural Networks”, Proceedings of the 14th International Conference on Digital Audio Effects (DAFx), 2011.
[2] Sebastian Böck, Florian Krebs and Gerhard Widmer, “Accurate Tempo Estimation based on Recurrent Neural Networks and Resonating Comb Filters”, Proceedings of the 16th International Society for Music Information Retrieval Conference (ISMIR), 2015.
Examples
Create a BeatTrackingProcessor. The returned array represents the positions of the beats in seconds, thus the expected sampling rate has to be given.
>>> proc = BeatTrackingProcessor(fps=100)
>>> proc
<madmom.features.beats.BeatTrackingProcessor object at 0x...>
Call this BeatTrackingProcessor with the beat activation function returned by RNNBeatProcessor to obtain the beat positions.
>>> act = RNNBeatProcessor()('tests/data/audio/sample.wav')
>>> proc(act)
array([0.11, 0.45, 0.79, 1.13, 1.47, 1.81, 2.15, 2.49])
- process(activations, **kwargs)[source]¶
Detect the beats in the given activation function.
Parameters: - activations : numpy array
Beat activation function.
Returns: - beats : numpy array
Detected beat positions [seconds].
- static add_arguments(parser, look_aside=0.2, look_ahead=10.0)[source]¶
Add beat tracking related arguments to an existing parser.
Parameters: - parser : argparse parser instance
Existing argparse parser object.
- look_aside : float, optional
Look this fraction of the estimated beat interval to each side of the assumed next beat position to look for the most likely position of the next beat.
- look_ahead : float, optional
Look look_ahead seconds in both directions to determine the local tempo and align the beats accordingly.
Returns: - parser_group : argparse argument group
Beat tracking argument parser group.
Notes
Parameters are included in the group only if they are not ‘None’.
- class madmom.features.beats.BeatDetectionProcessor(look_aside=0.2, fps=None, **kwargs)[source]¶
Class for detecting beats according to the previously determined global tempo by iteratively aligning them around the estimated position [1].
Parameters: - look_aside : float
Look this fraction of the estimated beat interval to each side of the assumed next beat position to look for the most likely position of the next beat.
- fps : float, optional
Frames per second.
Notes
A constant tempo throughout the whole piece is assumed.
Instead of the auto-correlation based method for tempo estimation proposed in [1], a comb filter based method [2] is used by default. The behaviour can be controlled with the tempo_method parameter.
References
[1] Sebastian Böck and Markus Schedl, “Enhanced Beat Tracking with Context-Aware Neural Networks”, Proceedings of the 14th International Conference on Digital Audio Effects (DAFx), 2011.
[2] Sebastian Böck, Florian Krebs and Gerhard Widmer, “Accurate Tempo Estimation based on Recurrent Neural Networks and Resonating Comb Filters”, Proceedings of the 16th International Society for Music Information Retrieval Conference (ISMIR), 2015.
Examples
Create a BeatDetectionProcessor. The returned array represents the positions of the beats in seconds, thus the expected sampling rate has to be given.
>>> proc = BeatDetectionProcessor(fps=100)
>>> proc
<madmom.features.beats.BeatDetectionProcessor object at 0x...>
Call this BeatDetectionProcessor with the beat activation function returned by RNNBeatProcessor to obtain the beat positions.
>>> act = RNNBeatProcessor()('tests/data/audio/sample.wav')
>>> proc(act)
array([0.11, 0.45, 0.79, 1.13, 1.47, 1.81, 2.15, 2.49])
- class madmom.features.beats.CRFBeatDetectionProcessor(interval_sigma=0.18, use_factors=False, num_intervals=5, factors=array([0.5, 0.67, 1., 1.5, 2. ]), **kwargs)[source]¶
Conditional Random Field Beat Detection.
Tracks the beats according to the previously determined global tempo using a conditional random field (CRF) model.
Parameters: - interval_sigma : float, optional
Allowed deviation from the dominant beat interval per beat.
- use_factors : bool, optional
Use dominant interval multiplied by factors instead of intervals estimated by tempo estimator.
- num_intervals : int, optional
Maximum number of estimated intervals to try.
- factors : list or numpy array, optional
Factors of the dominant interval to try.
References
[1] Filip Korzeniowski, Sebastian Böck and Gerhard Widmer, “Probabilistic Extraction of Beat Positions from a Beat Activation Function”, Proceedings of the 15th International Society for Music Information Retrieval Conference (ISMIR), 2014.
Examples
Create a CRFBeatDetectionProcessor. The returned array represents the positions of the beats in seconds, thus the expected sampling rate has to be given.
>>> proc = CRFBeatDetectionProcessor(fps=100)
>>> proc
<madmom.features.beats.CRFBeatDetectionProcessor object at 0x...>
Call this CRFBeatDetectionProcessor with the beat activation function returned by RNNBeatProcessor to obtain the beat positions.
>>> act = RNNBeatProcessor()('tests/data/audio/sample.wav')
>>> proc(act)
array([0.09, 0.79, 1.49])
- process(activations, **kwargs)[source]¶
Detect the beats in the given activation function.
Parameters: - activations : numpy array
Beat activation function.
Returns: - numpy array
Detected beat positions [seconds].
- static add_arguments(parser, interval_sigma=0.18, use_factors=False, num_intervals=5, factors=array([0.5, 0.67, 1., 1.5, 2. ]))[source]¶
Add CRFBeatDetection related arguments to an existing parser.
Parameters: - parser : argparse parser instance
Existing argparse parser object.
- interval_sigma : float, optional
Allowed deviation from the dominant beat interval per beat.
- use_factors : bool, optional
Use dominant interval multiplied by factors instead of intervals estimated by tempo estimator.
- num_intervals : int, optional
Maximum number of estimated intervals to try.
- factors : list or numpy array, optional
Factors of the dominant interval to try.
Returns: - parser_group : argparse argument group
CRF beat tracking argument parser group.
- class madmom.features.beats.DBNBeatTrackingProcessor(min_bpm=55.0, max_bpm=215.0, num_tempi=None, transition_lambda=100, observation_lambda=16, correct=True, threshold=0, fps=None, online=False, **kwargs)[source]¶
Beat tracking with RNNs and a dynamic Bayesian network (DBN) approximated by a Hidden Markov Model (HMM).
Parameters: - min_bpm : float, optional
Minimum tempo used for beat tracking [bpm].
- max_bpm : float, optional
Maximum tempo used for beat tracking [bpm].
- num_tempi : int, optional
Number of tempi to model; if set, limit the number of tempi and use a log spacing, otherwise a linear spacing.
- transition_lambda : float, optional
Lambda for the exponential tempo change distribution (higher values prefer a constant tempo from one beat to the next one).
- observation_lambda : int, optional
Split one beat period into observation_lambda parts, the first representing beat states and the remaining non-beat states.
- threshold : float, optional
Threshold the observations before Viterbi decoding.
- correct : bool, optional
Correct the beats (i.e. align them to the nearest peak of the beat activation function).
- fps : float, optional
Frames per second.
- online : bool, optional
Use the forward algorithm (instead of Viterbi) to decode the beats.
Notes
Instead of the originally proposed state space and transition model for the DBN [1], the more efficient version proposed in [2] is used.
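The relation between the tempo range (min_bpm, max_bpm) and the beat intervals modelled by the state space can be illustrated with a small sketch. The function tempo_intervals is hypothetical and for illustration only; madmom constructs its state space internally:

```python
import numpy as np

def tempo_intervals(min_bpm=55.0, max_bpm=215.0, num_tempi=None, fps=100):
    """Candidate beat intervals (in frames) between the tempo extremes
    (illustrative sketch; madmom builds its state space internally)."""
    min_interval = 60.0 * fps / max_bpm  # fastest tempo, shortest interval
    max_interval = 60.0 * fps / min_bpm  # slowest tempo, longest interval
    if num_tempi is None:
        # linear spacing: every integer interval in the range
        return np.arange(int(np.round(min_interval)),
                         int(np.round(max_interval)) + 1)
    # num_tempi set: log spacing, rounded to unique integer intervals
    intervals = np.logspace(np.log10(min_interval),
                            np.log10(max_interval), num_tempi)
    return np.unique(np.round(intervals).astype(int))
```

Setting num_tempi thus trades tempo resolution (and decoding cost) against coverage of the same bpm range.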
References
[1] Sebastian Böck, Florian Krebs and Gerhard Widmer, “A Multi-Model Approach to Beat Tracking Considering Heterogeneous Music Styles”, Proceedings of the 15th International Society for Music Information Retrieval Conference (ISMIR), 2014.
[2] Florian Krebs, Sebastian Böck and Gerhard Widmer, “An Efficient State Space Model for Joint Tempo and Meter Tracking”, Proceedings of the 16th International Society for Music Information Retrieval Conference (ISMIR), 2015.
Examples
Create a DBNBeatTrackingProcessor. The returned array represents the positions of the beats in seconds, thus the expected sampling rate has to be given.
>>> proc = DBNBeatTrackingProcessor(fps=100)
>>> proc
<madmom.features.beats.DBNBeatTrackingProcessor object at 0x...>
Call this DBNBeatTrackingProcessor with the beat activation function returned by RNNBeatProcessor to obtain the beat positions.
>>> act = RNNBeatProcessor()('tests/data/audio/sample.wav')
>>> proc(act)
array([0.1 , 0.45, 0.8 , 1.12, 1.48, 1.8 , 2.15, 2.49])
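The effect of the correct parameter can be sketched as snapping each raw beat frame to the nearest local activation peak. The function correct_beats below is made up for illustration; madmom performs this alignment internally:

```python
import numpy as np

def correct_beats(beats, activations, window=5):
    """Align each detected beat frame to the maximum of the activation
    function within +/- window frames (illustrative sketch only)."""
    corrected = []
    for beat in beats:
        lo = max(beat - window, 0)
        hi = min(beat + window + 1, len(activations))
        corrected.append(lo + int(np.argmax(activations[lo:hi])))
    return np.array(corrected)
```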
- process_offline(activations, **kwargs)[source]¶
Detect the beats in the given activation function with Viterbi decoding.
Parameters: - activations : numpy array
Beat activation function.
Returns: - beats : numpy array
Detected beat positions [seconds].
- process_online(activations, reset=True, **kwargs)[source]¶
Detect the beats in the given activation function with the forward algorithm.
Parameters: - activations : numpy array
Beat activation for a single frame.
- reset : bool, optional
Reset the DBNBeatTrackingProcessor to its initial state before processing.
Returns: - beats : numpy array
Detected beat position [seconds].
- process_forward(activations, reset=True, **kwargs)¶
Detect the beats in the given activation function with the forward algorithm.
Parameters: - activations : numpy array
Beat activation for a single frame.
- reset : bool, optional
Reset the DBNBeatTrackingProcessor to its initial state before processing.
Returns: - beats : numpy array
Detected beat position [seconds].
- process_viterbi(activations, **kwargs)¶
Detect the beats in the given activation function with Viterbi decoding.
Parameters: - activations : numpy array
Beat activation function.
Returns: - beats : numpy array
Detected beat positions [seconds].
- static add_arguments(parser, min_bpm=55.0, max_bpm=215.0, num_tempi=None, transition_lambda=100, observation_lambda=16, threshold=0, correct=True)[source]¶
Add DBN related arguments to an existing parser object.
Parameters: - parser : argparse parser instance
Existing argparse parser object.
- min_bpm : float, optional
Minimum tempo used for beat tracking [bpm].
- max_bpm : float, optional
Maximum tempo used for beat tracking [bpm].
- num_tempi : int, optional
Number of tempi to model; if set, limit the number of tempi and use a log spacing, otherwise a linear spacing.
- transition_lambda : float, optional
Lambda for the exponential tempo change distribution (higher values prefer a constant tempo over a tempo change from one beat to the next one).
- observation_lambda : float, optional
Split one beat period into observation_lambda parts, the first representing beat states and the remaining non-beat states.
- threshold : float, optional
Threshold the observations before Viterbi decoding.
- correct : bool, optional
Correct the beats (i.e. align them to the nearest peak of the beat activation function).
Returns: - parser_group : argparse argument group
DBN beat tracking argument parser group.