madmom.ml.hmm¶
This module contains Hidden Markov Model (HMM) functionality.
Notes¶
If you want to change this module and use it interactively, use pyximport.
>>> import numpy as np
>>> import pyximport
>>> pyximport.install(reload_support=True,
...                   setup_args={'include_dirs': np.get_include()})
(None, <pyximport.pyximport.PyxImporter object at 0x...>)

class madmom.ml.hmm.DiscreteObservationModel¶
Simple discrete observation model that takes an observation matrix of shape (num_states, num_observations) containing P(observation | state).
Parameters: observation_probabilities : numpy array
Observation probabilities as a 2D array of shape (num_states, num_observations). Has to sum to 1 over the second axis, since each row represents P(observation | state).
Examples
Assuming two states and three observation types, instantiate a discrete observation model:
>>> om = DiscreteObservationModel(np.array([[0.1, 0.5, 0.4],
...                                         [0.7, 0.2, 0.1]]))
>>> om  
<madmom.ml.hmm.DiscreteObservationModel object at 0x...>
If the probabilities do not sum to 1, it throws a ValueError:
>>> om = DiscreteObservationModel(np.array([[0.5, 0.5, 0.5],
...                                         [0.5, 0.5, 0.5]]))
Traceback (most recent call last):
...
ValueError: Not a probability distribution.

densities(self, observations)¶
Densities of the observations.
Parameters: observations : numpy array
Observations.
Returns: numpy array
Densities of the observations.

log_densities(self, observations)¶
Log densities of the observations.
Parameters: observations : numpy array
Observations.
Returns: numpy array
Log densities of the observations.
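For a discrete observation model, the (log) densities amount to a column lookup in the observation matrix. The following sketch reproduces this behaviour in plain numpy (an illustrative re-implementation, not madmom's actual code), using the matrix from the example above:

```python
import numpy as np

# Observation matrix of shape (num_states, num_observations):
# each row is P(observation | state) for one state.
obs_probs = np.array([[0.1, 0.5, 0.4],   # state 0
                      [0.7, 0.2, 0.1]])  # state 1
observations = np.array([0, 2, 1])

# densities() returns one row per observation, one column per state,
# obtained by selecting the observed symbols' columns and transposing.
densities = obs_probs[:, observations].T   # shape (3, 2)
log_densities = np.log(densities)
```

For observation 0 this yields the row [0.1, 0.7], i.e. its probability under each of the two states.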


madmom.ml.hmm.HMM¶
alias of HiddenMarkovModel

class madmom.ml.hmm.HiddenMarkovModel¶
Hidden Markov Model.
To search for the best path through the state space with the Viterbi algorithm, the following parameters must be defined.
Parameters: transition_model : TransitionModel instance
Transition model.
observation_model : ObservationModel instance
Observation model.
initial_distribution : numpy array, optional
Initial state distribution; if ‘None’ a uniform distribution is assumed.
Examples
Create a simple HMM with two states and three observation types. The initial distribution is uniform.
>>> tm = TransitionModel.from_dense([0, 1, 0, 1], [0, 0, 1, 1],
...                                 [0.7, 0.3, 0.6, 0.4])
>>> om = DiscreteObservationModel(np.array([[0.2, 0.3, 0.5],
...                                         [0.7, 0.1, 0.2]]))
>>> hmm = HiddenMarkovModel(tm, om)
Now we can decode the most probable state sequence and get the log probability of the sequence:
>>> seq, log_p = hmm.viterbi([0, 0, 1, 1, 0, 0, 0, 2, 2])
>>> log_p  
-12.87...
>>> seq
array([1, 1, 0, 0, 1, 1, 1, 0, 0], dtype=uint32)
Compute the forward variables:
>>> hmm.forward([0, 0, 1, 1, 0, 0, 0, 2, 2])
array([[ 0.34667,  0.65333],
       [ 0.33171,  0.66829],
       [ 0.83814,  0.16186],
       [ 0.86645,  0.13355],
       [ 0.38502,  0.61498],
       [ 0.33539,  0.66461],
       [ 0.33063,  0.66937],
       [ 0.81179,  0.18821],
       [ 0.84231,  0.15769]])
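The normalised forward recursion behind these numbers can be sketched in plain numpy (an illustrative re-implementation, not madmom's Cython code; the dense transition matrix A and observation matrix B are taken from the example above, with a uniform initial distribution):

```python
import numpy as np

# Dense transition matrix: A[prev, cur], rows sum to 1.
A = np.array([[0.7, 0.3],
              [0.6, 0.4]])
# Observation probabilities: B[state, observation].
B = np.array([[0.2, 0.3, 0.5],
              [0.7, 0.1, 0.2]])
observations = [0, 0, 1, 1, 0, 0, 0, 2, 2]

fwd = np.ones(2) / 2              # uniform initial distribution
forward = []
for obs in observations:
    fwd = (fwd @ A) * B[:, obs]   # transition step, then observation step
    fwd /= fwd.sum()              # normalise instead of working in log domain
    forward.append(fwd.copy())
forward = np.array(forward)
print(np.round(forward[0], 5))   # first row: [0.34667 0.65333]
```

Note that a transition step is applied before the first observation is incorporated, which is how the first row [0.34667, 0.65333] arises from the uniform start.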

forward(self, observations, reset=True)¶
Compute the forward variables at each time step. Instead of computing in the log domain, we normalise at each step, which is faster for the forward algorithm.
Parameters: observations : numpy array, shape (num_frames, num_densities)
Observations to compute the forward variables for.
reset : bool, optional
Reset the HMM to its initial state before computing the forward variables.
Returns: numpy array, shape (num_observations, num_states)
Forward variables.

forward_generator(self, observations, block_size=None)¶
Compute the forward variables at each time step. Instead of computing in the log domain, we normalise at each step, which is faster for the forward algorithm. This function is a generator that yields the forward variables for each time step individually to save memory. The observation densities are computed blockwise to save Python calls in the inner loops.
Parameters: observations : numpy array
Observations to compute the forward variables for.
block_size : int, optional
Block size for the blockwise computation of observation densities. If ‘None’, all observation densities will be computed at once.
Yields: numpy array, shape (num_states,)
Forward variables.
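The generator pattern described above can be sketched as follows (an illustrative re-implementation, not madmom's code; `forward_gen`, `A`, and `B` are hypothetical names, with densities here being simple lookups into a discrete observation matrix):

```python
import numpy as np

def forward_gen(A, B, observations, initial, block_size=None):
    """Yield normalised forward variables one frame at a time.

    Densities are computed for `block_size` frames at once to reduce
    per-frame overhead, but results are yielded frame by frame.
    """
    observations = np.asarray(observations)
    if block_size is None:
        block_size = len(observations)
    fwd = initial.copy()
    for start in range(0, len(observations), block_size):
        block = observations[start:start + block_size]
        densities = B[:, block].T          # (block frames, num_states)
        for d in densities:
            fwd = (fwd @ A) * d            # transition, then observation
            fwd /= fwd.sum()               # normalise
            yield fwd.copy()

A = np.array([[0.7, 0.3], [0.6, 0.4]])
B = np.array([[0.2, 0.3, 0.5], [0.7, 0.1, 0.2]])
gen = forward_gen(A, B, [0, 0, 1], np.ones(2) / 2, block_size=2)
first = next(gen)                          # forward variables of frame 0
```

Since only one frame's variables are held at a time, memory use is independent of the sequence length.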

reset(self, initial_distribution=None)¶
Reset the HMM to its initial state.
Parameters: initial_distribution : numpy array, optional
Reset to this initial state distribution.

viterbi(self, observations)¶
Determine the best path with the Viterbi algorithm.
Parameters: observations : numpy array
Observations to decode the optimal path for.
Returns: path : numpy array
Best state-space path sequence.
log_prob : float
Corresponding log probability.
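The decoding itself can be sketched with a log-domain Viterbi recursion in plain numpy (an illustrative re-implementation over a dense transition matrix, not madmom's sparse Cython code; A and B are the matrices from the class example above):

```python
import numpy as np

A = np.array([[0.7, 0.3], [0.6, 0.4]])            # A[prev, cur]
B = np.array([[0.2, 0.3, 0.5], [0.7, 0.1, 0.2]])  # B[state, obs]
observations = [0, 0, 1, 1, 0, 0, 0, 2, 2]
num_states = A.shape[0]

log_A, log_B = np.log(A), np.log(B)
v = np.log(np.ones(num_states) / num_states)  # uniform initial distribution
back = []                                     # backpointers per time step
for obs in observations:
    scores = v[:, None] + log_A               # (prev, cur) path scores
    back.append(scores.argmax(axis=0))        # best predecessor per state
    v = scores.max(axis=0) + log_B[:, obs]    # add observation log density

# backtrack from the best final state
state = int(v.argmax())
log_p = v[state]
path = []
for bp in reversed(back):
    path.append(state)
    state = bp[state]
path = np.array(path[::-1])
print(path, round(log_p, 2))  # [1 1 0 0 1 1 1 0 0] -12.87
```

This reproduces the sequence and log probability shown in the class example; the real implementation gains its speed from the sparse CSR transition layout.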


class madmom.ml.hmm.ObservationModel¶
Observation model class for a HMM.
The observation model is defined by a plain 1D numpy array of pointers and the methods log_densities() and densities(), which return 2D numpy arrays with the (log) densities of the observations.
Parameters: pointers : numpy array (num_states,)
Pointers from HMM states to the correct densities. The length of the array must be equal to the number of states of the HMM, and each entry must point from its state to the corresponding column of the array returned by the log_densities() or densities() methods. The pointers' dtype must be np.uint32.

densities(self, observations)¶
Densities (or probabilities) of the observations for each state.
This defaults to computing the exp of the log_densities(). You can provide a specialised implementation to speed up computations.
Parameters: observations : numpy array
Observations.
Returns: numpy array
Densities as a 2D numpy array with the number of rows being equal to the number of observations and the columns representing the different observation log probability densities. The type must be np.float.

log_densities(self, observations)¶
Log densities (or probabilities) of the observations for each state.
Parameters: observations : numpy array
Observations.
Returns: numpy array
Log densities as a 2D numpy array with the number of rows being equal to the number of observations and the columns representing the different observation log probability densities. The type must be np.float.
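The role of the pointers array can be illustrated without madmom: several HMM states may share one density column, and pointers maps each state to the column it uses (the values below are a hypothetical example):

```python
import numpy as np

# Hypothetical setup: 4 HMM states share 2 distinct densities.
# pointers[state] gives the column of the densities array for that state.
pointers = np.array([0, 0, 1, 1], dtype=np.uint32)

# What densities() would return: one row per observation,
# one column per distinct density.
densities = np.array([[0.2, 0.7],
                      [0.5, 0.1]])

# Density of each observation under each *state* is then a column lookup:
per_state = densities[:, pointers]   # shape (num_observations, num_states)
```

Sharing columns this way keeps the density computation small even when many states behave identically with respect to the observations.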


class madmom.ml.hmm.TransitionModel¶
Transition model class for a HMM.
The transition model is defined similarly to a scipy compressed sparse row (CSR) matrix and holds all transition probabilities from one state to another. This allows efficient Viterbi decoding of the HMM.
Parameters: states : numpy array
All states transitioning to state s are stored in: states[pointers[s]:pointers[s+1]]
pointers : numpy array
Pointers into the states array for each state s.
probabilities : numpy array
The corresponding transition probabilities are stored in: probabilities[pointers[s]:pointers[s+1]].
See also
scipy.sparse.csr_matrix
Notes
This class should either be used for loading saved transition models or be subclassed to define a specific transition model.
Examples
Create a simple transition model with two states using a list of transitions and their probabilities:
>>> tm = TransitionModel.from_dense([0, 1, 0, 1], [0, 0, 1, 1],
...                                 [0.8, 0.2, 0.3, 0.7])
>>> tm  
<madmom.ml.hmm.TransitionModel object at 0x...>
TransitionModel.from_dense will check whether the supplied probabilities for each state sum to 1 (and thus represent a correct probability distribution):
>>> tm = TransitionModel.from_dense([0, 1], [1, 0], [0.5, 1.0])
Traceback (most recent call last):
...
ValueError: Not a probability distribution.

classmethod from_dense(cls, states, prev_states, probabilities)¶
Instantiate a TransitionModel from dense transitions.
Parameters: states : numpy array, shape (num_transitions,)
Array with states (i.e. destination states).
prev_states : numpy array, shape (num_transitions,)
Array with previous states (i.e. origination states).
probabilities : numpy array, shape (num_transitions,)
Transition probabilities.
Returns: TransitionModel instance
TransitionModel instance.

log_probabilities
¶ Transition log probabilities.

make_sparse(states, prev_states, probabilities)¶
Return a sparse representation of dense transitions.
This method removes all duplicate states and thus allows an efficient Viterbi decoding of the HMM.
Parameters: states : numpy array, shape (num_transitions,)
Array with states (i.e. destination states).
prev_states : numpy array, shape (num_transitions,)
Array with previous states (i.e. origination states).
probabilities : numpy array, shape (num_transitions,)
Transition probabilities.
Returns: states : numpy array
All states transitioning to state s are returned in: states[pointers[s]:pointers[s+1]]
pointers : numpy array
Pointers for the states array for state s.
probabilities : numpy array
The corresponding transition probabilities are returned in: probabilities[pointers[s]:pointers[s+1]].
Notes
Three 1D numpy arrays of the same length must be given. The indices correspond to each other, i.e. the first entries of all three arrays define the transition from the state given in prev_states[0] to the state given in states[0] with the probability given in probabilities[0].
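The CSR layout described above can be reproduced with scipy (a sketch of the idea under the document's own CSR analogy, not necessarily madmom's exact implementation), using the dense transitions from the TransitionModel example:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Dense transitions: prev_states -> states with these probabilities.
states = np.array([0, 1, 0, 1])
prev_states = np.array([0, 0, 1, 1])
probabilities = np.array([0.8, 0.2, 0.3, 0.7])

# Rows are destination states, columns are origin states; building a
# csr_matrix also sums duplicate entries, matching the de-duplication
# mentioned above.
transitions = csr_matrix((probabilities, (states, prev_states)))

pointers = transitions.indptr        # row boundaries: [0 2 4]
sparse_states = transitions.indices  # previous states per destination state
sparse_probs = transitions.data      # corresponding probabilities

# All states transitioning into state 0 and their probabilities:
s = 0
print(sparse_states[pointers[s]:pointers[s + 1]])  # [0 1]
print(sparse_probs[pointers[s]:pointers[s + 1]])   # [0.8 0.3]
```

With this layout, Viterbi decoding only iterates over the transitions that actually exist for each state instead of a full dense matrix row.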

num_states
¶ Number of states.

num_transitions
¶ Number of transitions.

classmethod