
Continuous Time, Two State Markov Chain Model

Description

A Markov process is a stochastic model consisting of two parts: 1) a finite number of states and 2) transition probabilities for moving between those states. This example provides a simple continuous time Markov process (or chain) model with two states: State A and State B. The model randomly switches between the two states. When the model is in State A, the conditional container 'StateA' is activated. When in State B, the conditional container 'StateB' is activated. All the model does is track the time spent in each state with an Integrator element. However, logic and functionality could be added to each conditional container to represent the corresponding state of a particular Markov process.

In the continuous time representation, a Markov chain spends a random, exponentially distributed amount of time in each state. In this example model, the amount of time spent in each state is determined by the Event Delay elements "Duration A" and "Duration B". For example, "Duration A" produces an event at the end of its stochastically generated delay time. This event tells "StateA" to deactivate, tells "StateB" to activate, and triggers "Duration B" to stochastically generate its own delay time. When "Duration B" subsequently produces an event (at the end of the delay time generated in response to the "Duration A" event), that event tells "StateB" to deactivate, tells "StateA" to activate, and triggers "Duration A" to stochastically generate a new delay time. The model thus switches back and forth between the two states according to the stochastically generated delay times from the Event Delay elements "Duration A" and "Duration B".
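
For readers who prefer a procedural view, the short Python sketch below mimics this switching behavior for a single realization. It is an illustration only, not GoldSim logic: the function and parameter names are invented, and the exponential sampling simply stands in for the "Duration A" and "Duration B" Event Delay elements.

    import random

    def simulate_one_realization(rate_a_to_b, rate_b_to_a, duration, seed=None):
        """Track the time spent in State A and State B over one realization."""
        rng = random.Random(seed)
        time_in = {"A": 0.0, "B": 0.0}
        state, clock = "A", 0.0
        while clock < duration:
            rate = rate_a_to_b if state == "A" else rate_b_to_a
            hold = rng.expovariate(rate)            # stochastic delay time
            hold = min(hold, duration - clock)      # do not run past the simulation end
            time_in[state] += hold                  # the Integrator's bookkeeping role
            clock += hold
            state = "B" if state == "A" else "A"    # the switching event
        return time_in

    print(simulate_one_realization(rate_a_to_b=0.5, rate_b_to_a=1.5, duration=100.0, seed=1))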

In this model, two localized containers are provided. Each container implements the example model using a different stochastic representation of the amount of time spent in each state. In the "Erlang_Dispersion" container, the amount of time spent in each state is determined using Event Delay elements which use Erlang dispersion, with an n coefficient value of 1.0, to stochastically generate the amount of time. In contrast, the "Exp_Distribution" container implementation employs Event Delay elements which use Stochastic Delay Times to generate the amount of delay; Stochastic elements with an exponential distribution are used for the Stochastic Delay Time.
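
As a rough illustration of the difference between the two containers, the Python sketch below draws holding times both ways. It assumes a mean-based parameterization of the Erlang dispersion (shape n with the mean held fixed), which is consistent with the dispersion behavior described in the next paragraph; the variable names are placeholders, not element names from the model.

    import numpy as np

    rng = np.random.default_rng(seed=42)

    mean_delay = 2.0    # mean time spent in the state (placeholder value)
    erlang_n = 1        # plays the role of the "Erlang_N" Data element

    # Erlang dispersion: shape n, with the scale chosen so the mean stays at mean_delay.
    erlang_times = rng.gamma(shape=erlang_n, scale=mean_delay / erlang_n, size=100_000)

    # Exponential Stochastic Delay Time with the same mean.
    exp_times = rng.exponential(scale=mean_delay, size=100_000)

    # With erlang_n = 1 the two samplers draw from the same distribution,
    # so the sample means and standard deviations should nearly match.
    print(erlang_times.mean(), erlang_times.std())
    print(exp_times.mean(), exp_times.std())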

The mean of an Erlang distribution is n divided by the rate, while the mean of an exponential distribution is 1 divided by the rate. In the special case where n = 1, the mean values of the Erlang distribution and the exponential distribution are equal (assuming, of course, the same rate for each distribution). Additionally, an Erlang distribution with an n coefficient (i.e., shape parameter) equal to 1 simplifies to an exponential distribution. The n coefficient value for the Erlang dispersion delay times can be set using the Data element "Erlang_N". When the value of "Erlang_N" is set to 1, the two containers and two model implementations will produce approximately the same result distributions (with Monte Carlo simulations and a finite number of realizations, the resulting distributions will not be exactly equal). As the value of n is increased (it must be >= 1), less spread (or dispersion) is evident in the result distribution for the Erlang dispersion delay time implementation.
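
These relationships can be written explicitly using the standard rate parameterization (this is a restatement of the text above, not notation taken from the model):

    E[\mathrm{Erlang}(n, \lambda)] = n/\lambda, \qquad E[\mathrm{Exp}(\lambda)] = 1/\lambda

    f_{\mathrm{Erlang}(1, \lambda)}(t) = \lambda e^{-\lambda t} = f_{\mathrm{Exp}(\lambda)}(t)

    \mathrm{CV}[\mathrm{Erlang}(n, \lambda)] = \frac{\sqrt{n}/\lambda}{n/\lambda} = \frac{1}{\sqrt{n}} \quad \text{(relative spread decreases as } n \text{ increases)}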

Thus, a random amount of time is spent in State A and in State B in this example model. With two states (A and B), there are two transition rates: 1) the flow or transition rate from State A to State B (denoted "RateAtoB") and 2) the flow or transition rate from State B to State A (denoted "RateBtoA"). There are also two probabilities: the probability of being in State A ("ProbA") and the probability of being in State B (which is simply 1 minus "ProbA"). In this simple model, only the probability of being in State A ("ProbA") and the transition rate from State A to State B ("RateAtoB") need to be specified. The transition rate from State B to State A ("RateBtoA") can then be calculated, as is done in each container ("Exp_Distribution" and "Erlang_Dispersion").
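
The exact expression used in the containers is not reproduced here, but a derivation consistent with the description follows from the balance condition that holds at statistical equilibrium:

    \text{ProbA} \cdot \text{RateAtoB} = \text{ProbB} \cdot \text{RateBtoA}, \qquad \text{ProbB} = 1 - \text{ProbA}

    \Rightarrow \quad \text{RateBtoA} = \frac{\text{ProbA} \cdot \text{RateAtoB}}{1 - \text{ProbA}}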

A Markov Process converges to a unique distribution over states provided that the following assumptions are met: 1) a fixed set of states, 2) fixed transition probabilities, and 3) the ability to move from any state to any other state through a series of transitions. This unique distribution represents a statistical equilibrium. This example model uses 10,000 realizations and records the total time spent in each state (State A and State B). The total times across the realizations are combined to provide the distribution of time in each state observed during the Monte Carlo simulation. The Distribution Result element "TimeDistribution" displays the result distributions for each state. When "Erlang_N" is set to 1, the delay time calculation employing Erlang dispersion produces a result distribution that is approximately equal to the one obtained using an exponential distribution for the delay time; this is because the Erlang distribution with n = 1 simplifies to the exponential distribution. For "Erlang_N" values greater than 1, the result distributions are no longer approximately equal. To see this, change the value of "Erlang_N" to 10 (or any value greater than 1).
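
A minimal Monte Carlo sketch of this workflow is shown below, again using assumed exponential holding times and placeholder input values. It loops over realizations, records the total time spent in State A in each one, and compares the sample mean against the specified probability, much as the "TimeDistribution" result summarizes the model output.

    import random

    def time_in_state_a(rate_a_to_b, rate_b_to_a, duration, rng):
        """Total time spent in State A during one realization."""
        state, clock, total_a = "A", 0.0, 0.0
        while clock < duration:
            rate = rate_a_to_b if state == "A" else rate_b_to_a
            hold = min(rng.expovariate(rate), duration - clock)
            if state == "A":
                total_a += hold
            clock += hold
            state = "B" if state == "A" else "A"
        return total_a

    rng = random.Random(7)
    prob_a, rate_a_to_b, duration = 0.75, 0.5, 100.0     # placeholder inputs
    rate_b_to_a = prob_a * rate_a_to_b / (1.0 - prob_a)  # the balance-equation result above

    totals = [time_in_state_a(rate_a_to_b, rate_b_to_a, duration, rng) for _ in range(10_000)]

    # At statistical equilibrium the mean total time in State A should be near prob_a * duration.
    print(sum(totals) / len(totals), prob_a * duration)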

Additional Information

Keywords

Probabilistic modeling, Markov chain

Categories

GoldSim Features & Capabilities, Probabilistic Modeling

Experience Level

Beginner

Contact

GoldSim Technology Group
