# Category: Machine learning

# Real-time image recognition with Apache Spark

# Hidden Markov model

Differences between a mixture model and an HMM

- If we examine a single time slice of the model, it can be seen as a mixture distribution with component densities given by p(x | z)
- It can be interpreted as an extension of a mixture model in which the choice of mixture component for each observation is not independent but depends on the choice of component for the previous observation

Applications

- Speech recognition
- Natural language modeling
- On-line handwriting recognition
- Analysis of biological sequences such as proteins and DNA

Transition probability

- Latent variables: discrete multinomial variables z_n that describe which component of the mixture is responsible for generating the corresponding observation
- The probability distribution of z_n depends on the previous latent variable z_{n-1} through the conditional distribution p(z_n | z_{n-1})
- This conditional distribution can be written as a transition matrix A, where A_jk = p(z_n = k | z_{n-1} = j)

- The initial latent node z_1 does not have a parent node, so it instead has a marginal distribution p(z_1), described by a vector of probabilities π

- Unfolding the state transitions over time gives a lattice or trellis diagram

Emission probability

Examples:

- Three Gaussian distributions / the two-dice problem
- Handwriting

# What is a Markov model?

**Markov Models**

- A way to exploit the sequential structure of data (e.g. correlations between observations that are close together in the sequence).
- For example, consider predicting whether it will rain. If we treat the data as i.i.d., the only information we can glean is the overall frequency of rainy days, ignoring weather trends that last a few days. Knowing whether or not it rains today therefore helps to predict tomorrow’s weather.
- A Markov Model is one of the simplest ways to relax such i.i.d. assumptions and express such effects in a probabilistic model.

**Joint distribution for a Markov chain**

- General expression of the joint distribution for a sequence of observations, via the product rule: p(x_1, ..., x_N) = p(x_1) ∏_{n=2}^{N} p(x_n | x_1, ..., x_{n-1})

- First-order Markov chain

If we assume that each of the conditional distributions on the right-hand side is independent of all previous observations except the most recent one, we obtain the first-order Markov chain.

In other words, the conditional distribution for observation x_n is given by p(x_n | x_1, ..., x_{n-1}) = p(x_n | x_{n-1}), so the joint distribution becomes p(x_1, ..., x_N) = p(x_1) ∏_{n=2}^{N} p(x_n | x_{n-1}).

- Second-order Markov chain

Likewise, in a second-order Markov chain the joint distribution is p(x_1, ..., x_N) = p(x_1) p(x_2 | x_1) ∏_{n=3}^{N} p(x_n | x_{n-1}, x_{n-2}),

which means each observation is influenced only by the two previous observations and is independent of all observations before those.

# Sequential data

**What is sequential data?**

- Data for which the i.i.d. assumption is weak or does not hold
- Often found in time series data. For instance,
- Rainfall measurements on successive days at a particular location
- Daily values of a currency exchange rate
- Acoustic features at successive time frames used for speech recognition

**Stationary vs. nonstationary sequential distributions**

- Stationary: data evolves in time but the distribution from which it is generated remains the same
- Nonstationary: the generative distribution itself evolves with time

**Applications of Markov Models**

- Financial forecasting: predict the next value in a time series given observations of the previous values
- Speech recognition: predict the next speech spectrum given observations of the previous speech spectra
- Note: Markov models are useful in applications where more recent observations are more informative than less recent observations

**Latent variables**

- Hidden Markov models: the latent variables are discrete
- Linear dynamical systems: the latent variables are Gaussian

# Music reconstruction
