A Hidden Markov Model (HMM) is a type of probabilistic model that describes the underlying structure behind a sequence of observations, where the true process generating those observations is "hidden", i.e. not directly observed.
The model assumes that there are underlying states that generate the observations, and the states transition from one to another according to some probabilities, forming a Markov chain.
The goal when working with an HMM is to estimate the probabilities of these hidden states given the observed sequence, and to make predictions or decode the most likely sequence of hidden states.
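As a quick, concrete sketch of what "decoding" means, here is a toy example in Python. The states, probabilities and observed sequence below are made up purely for illustration, and the decoding is done with the Viterbi algorithm, a standard dynamic-programming method for finding the most likely hidden state sequence:

```python
import numpy as np

# Toy HMM: states, observations and probabilities are invented for illustration only.
states = ["Rainy", "Sunny"]            # hidden states
observations = [0, 1, 0]               # observed symbols, e.g. 0 = "walk", 1 = "shop"

start_p = np.array([0.6, 0.4])         # P(first hidden state)
trans_p = np.array([[0.7, 0.3],        # P(next state | current state)
                    [0.4, 0.6]])
emit_p = np.array([[0.1, 0.4],         # P(observation | hidden state)
                   [0.6, 0.3]])

def viterbi(obs, start_p, trans_p, emit_p):
    """Decode the most likely hidden state sequence for `obs` (Viterbi algorithm)."""
    n_states = len(start_p)
    T = len(obs)
    # delta[t, s] = probability of the best state path that ends in state s at time t
    delta = np.zeros((T, n_states))
    backptr = np.zeros((T, n_states), dtype=int)

    delta[0] = start_p * emit_p[:, obs[0]]
    for t in range(1, T):
        for s in range(n_states):
            scores = delta[t - 1] * trans_p[:, s]
            backptr[t, s] = np.argmax(scores)
            delta[t, s] = scores[backptr[t, s]] * emit_p[s, obs[t]]

    # Trace the best path backwards from the final time step
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.insert(0, backptr[t, path[0]])
    return [states[s] for s in path]

print(viterbi(observations, start_p, trans_p, emit_p))
```

Running this prints the decoded hidden state for each observation in the toy sequence.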
The hidden state sequence itself is memoryless: the probability of transitioning to the next state depends only on the current state, not on the full sequence of previous states.
In other words, you only need the current hidden state to predict the next one.
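Written as a formula, this is the (first-order) Markov property. Using z_t here as a label for the hidden state at time t:

```latex
P(z_{t+1} \mid z_t, z_{t-1}, \ldots, z_1) = P(z_{t+1} \mid z_t)
```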
HMMs have applications in various fields, such as speech recognition, bioinformatics and finance, and they underpin a wide range of bioinformatics software, including tools for gene prediction and protein annotation.
In the next few posts we will go through HMMs in more detail.