Markov Chains
The Setting
There is a system with $n$ possible states/values. At each step, the state changes probabilistically.
Let $X_t$ be the state of the system at time $t$.
So the evolution of the system is $X_0, X_1, X_2, \dots$, and $X_0$ is the initial state.
The system is memoryless: the probability that $X_{t+1}$ is in a certain state is determined by the state of $X_t$:

$$\Pr[X_{t+1} = j \mid X_t = i, X_{t-1} = i_{t-1}, \dots, X_0 = i_0] = \Pr[X_{t+1} = j \mid X_t = i]$$
For simplicity, we assume that:
- Finite number of states
- The underlying graph is strongly connected: for any two states, there is a path from 1 to 2 and a path from 2 to 1
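To make the memorylessness concrete, here is a minimal Python sketch; the two-state "weather" chain and its transition probabilities are made-up examples:

```python
import random

# Hypothetical two-state chain; each state's outgoing probabilities sum to 1.
P = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.5), ("rainy", 0.5)],
}

def step(state, rng):
    """Sample the next state using only the current state (memorylessness)."""
    nexts, weights = zip(*P[state])
    return rng.choices(nexts, weights=weights)[0]

def run(x0, steps, seed=0):
    """Return the trajectory X_0, X_1, ..., X_steps."""
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(steps):
        xs.append(step(xs[-1], rng))
    return xs

trajectory = run("sunny", 10)
```

Note that `step` looks only at the current state, never at the history — that is exactly the Markov property.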
Mathematical Representation of the Evolution
In general, if the initial probabilistic state is $\pi^{(0)} = (\pi^{(0)}_1, \dots, \pi^{(0)}_n)$, where $\pi^{(0)}_i$ is the probability of being in state $i$ and $\sum_i \pi^{(0)}_i = 1$, and the transition matrix is $P$, such that:

$$P_{ij} = \Pr[X_{t+1} = j \mid X_t = i]$$

then after one step the probabilistic state is $\pi^{(1)} = \pi^{(0)} P$.
After $k$ steps, the probabilistic state is:

$$\pi^{(k)} = \pi^{(0)} P^k$$

And the probability of being in state $j$ after $k$ steps is:

$$\pi^{(k)}_j = \left(\pi^{(0)} P^k\right)_j = \sum_i \pi^{(0)}_i \left(P^k\right)_{ij}$$
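The evolution above — repeatedly multiplying the probabilistic state (a row vector) by the transition matrix — can be sketched in plain Python; the two-state chain below is a made-up example:

```python
# Hypothetical two-state chain; rows index the current state.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def evolve(pi, P, k):
    """Apply the transition matrix k times: pi <- pi P at each step."""
    n = len(P)
    for _ in range(k):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi0 = [1.0, 0.0]       # start deterministically in state 0
pi3 = evolve(pi0, P, 3)
```

Each step preserves the total probability, so `pi3` still sums to 1.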
Some Properties
- Suppose the Markov chain is “aperiodic”, then as the system evolves, the probabilistic state converges to a limiting probabilistic state:

$$\lim_{k \to \infty} \pi^{(k)} = \pi$$

So the resulting $\pi$ is called the stationary/invariant distribution, which is unique and satisfies $\pi = \pi P$.
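One simple way to approximate the stationary distribution is power iteration: evolve any starting distribution for many steps. A sketch with a hypothetical aperiodic two-state chain (whose stationary distribution works out to $(5/6, 1/6)$):

```python
# Made-up aperiodic chain (self-loops make it aperiodic).
P = [[0.9, 0.1],
     [0.5, 0.5]]

def stationary(P, iters=1000):
    """Power iteration: repeatedly apply pi <- pi P until it stabilizes."""
    n = len(P)
    pi = [1.0 / n] * n  # any starting distribution converges here
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)
# pi is (approximately) invariant: applying P once more barely changes it.
```

This is a sketch, not a robust solver — for large chains one would track the change per iteration and stop once it falls below a tolerance.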
- Let $T_{ii}$ be the time of first reaching state $i$ again when you start at state $i$, then:

$$\mathbb{E}[T_{ii}] = \frac{1}{\pi_i}$$
This is known as the Mean Recurrence Theorem.
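The theorem is easy to sanity-check by simulation. Below is a sketch using a hypothetical two-state chain whose stationary distribution is $(5/6, 1/6)$, so the predicted mean recurrence time for state $0$ is $1/(5/6) = 1.2$:

```python
import random

# Made-up chain with stationary distribution (5/6, 1/6).
P = [[0.9, 0.1],
     [0.5, 0.5]]

def recurrence_time(start, rng):
    """Number of steps until the walk first returns to `start`."""
    state, t = start, 0
    while True:
        state = rng.choices(range(len(P)), weights=P[state])[0]
        t += 1
        if state == start:
            return t

rng = random.Random(0)
trials = 20000
avg = sum(recurrence_time(0, rng) for _ in range(trials)) / trials
# avg should be close to the predicted value 1.2
```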
Example: The Connectivity Problem
Given an undirected graph $G = (V, E)$ and $s, t \in V$, check if there is a path from $s$ to $t$.
It is easy to do in polynomial time with BFS or DFS, but how about using only $O(\log n)$ space?
Here is a possible randomized algorithm:
v = s
for k = 1,2,...,N:
v = random-neighbor(v)
if v == t, return YES
return NO
For $N = \mathrm{poly}(n)$, this uses only $O(\log n)$ space (a current vertex and the counter $k$).
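A runnable Python version of the walk above; the small example graph (a path plus an isolated vertex) is made up:

```python
import random

def connected(adj, s, t, N, seed=0):
    """Random-walk connectivity test: walk N random steps from s
    and report YES (True) if t is ever visited."""
    if s == t:
        return True
    rng = random.Random(seed)
    v = s
    for _ in range(N):
        v = rng.choice(adj[v])  # move to a uniformly random neighbor
        if v == t:
            return True
    return False

# Path graph 0-1-2-3, plus an isolated vertex 4.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: []}
n = 5
N = 2 * n**3  # N on the order of n^3 suffices; see the analysis below
```

A NO answer can be wrong (the walk may simply be unlucky), but a YES answer is always correct — the algorithm has one-sided error.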
But what is the success probability? If $s$ and $t$ are disconnected, we always give the correct answer.
What if $s$ and $t$ are connected?
If we have a graph represented by an adjacency matrix, and we start at any vertex and randomly walk to a uniformly chosen neighbor, what does the transition matrix look like? For vertices $u, v$: $P_{uv} = 1/\deg(u)$ if $(u, v) \in E$, and $P_{uv} = 0$ otherwise.
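As a sketch, for a hypothetical triangle graph the random-walk transition matrix can be built directly from the adjacency matrix:

```python
# Triangle graph (made-up example): every vertex has degree 2.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

deg = [sum(row) for row in A]
# Random walk: from u, move to a uniformly random neighbor,
# so each entry is A[u][v] / deg(u).
P = [[A[u][v] / deg[u] for v in range(len(A))] for u in range(len(A))]
# Each row of P sums to 1, i.e. P is row-stochastic.
```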
The stationary distribution is:

$$\pi_v = \frac{\deg(v)}{2|E|}$$

So by the Mean Recurrence Theorem, $\mathbb{E}[T_{vv}] = \dfrac{2|E|}{\deg(v)}$.
What we really care about is $\mathbb{E}[T_{st}]$, where $s$ and $t$ are connected.
We pick a path from $s$ to $t$: $s = u_0, u_1, \dots, u_k = t$, with $k < n$. Then:

$$\mathbb{E}[T_{st}] \le \sum_{i=0}^{k-1} \mathbb{E}[T_{u_i u_{i+1}}] \le n \cdot 2|E| \le n^3$$

Why is $\mathbb{E}[T_{uv}] \le 2|E|$ for an edge $(u, v)$? Conditioning on the first step of a walk that starts at $v$,

$$\frac{2|E|}{\deg(v)} = \mathbb{E}[T_{vv}] = \frac{1}{\deg(v)} \sum_{u : (u, v) \in E} \left(1 + \mathbb{E}[T_{uv}]\right)$$

so $\sum_{u : (u, v) \in E} \left(1 + \mathbb{E}[T_{uv}]\right) = 2|E|$, and each term satisfies $1 + \mathbb{E}[T_{uv}] \le 2|E|$, hence $\mathbb{E}[T_{uv}] < 2|E|$.
So if we set $N$ in the algorithm to be $2n^3$, then:

$$\Pr[\text{the walk does not reach } t \text{ within } N \text{ steps}] = \Pr[T_{st} > N] \le \frac{\mathbb{E}[T_{st}]}{N} \le \frac{n^3}{2n^3} = \frac{1}{2}$$
by Markov’s inequality.