Stochastic solvers for dynamical systems

So there are a few standard ways that we might take a dynamical system (expressed as a set of interacting differential equations) and turn it into a stochastic simulation. Each has its strengths and weaknesses, and I hope to describe them concisely here.

Let’s formalize the system a bit: call each state variable x_i and each rate \alpha_i. The system is in general non-linear and can’t be expressed neatly, but as an example take the Lotka-Volterra equations, where prey are born at a constant rate, eaten at a second rate proportional to predator-prey encounters, and predators die off exponentially at a third rate when there is no prey. Note these rates have different units, but that’s fine for now.

\dot{x}_1=\alpha_1 - \alpha_2 x_1 x_2

\dot{x}_2=\alpha_2 x_1 x_2 - \alpha_3 x_2

First, the reason to use a stochastic solver is often small numbers. For large populations it doesn’t matter whether there are 100,000 prey and 5,500 predators, but it does matter if we have a scenario with 100 prey and 5 predators. There cannot be 5.5 predators, and once there are zero predators, predators never return. So as a sanity check, it’s useful to see whether these “burnouts” or discreteness effects actually matter for your system before coding a stochastic implementation. Ok, on we go.

The first method is typically referred to as Gillespie’s direct method, or sometimes the SSA (stochastic simulation algorithm). It assumes that the waiting time for each reaction is exponentially distributed based on its rate: each process is independent and occurs at the average rate given by the deterministic dynamical system. The algorithm looks at all possible transitions; in the model above there are three rates, r_i = \{\alpha_1, \alpha_2 x_1 x_2, \alpha_3 x_2\}. That is, either a prey is born, a prey is eaten, or a predator dies. The sum of all these rates, R=\sum_i r_i, determines the time until the next event. Computationally, we draw a uniform random number U\in[0,1] and compute \Delta t = -\ln(U)/R. If the sum R is large, \Delta t will be small. Each time an event occurs, we choose which of the available transitions fired based on their relative rates. Computationally this is achieved by normalizing the rates to r_i/R, drawing another uniform random number, and finding where in the cumulative sum that number falls. In Matlab, for example, the index of the chosen transition is

find(cumsum(r)/sum(r) >= rand(), 1)

The larger the rate, the more likely that transition is to fire, but the random number ensures some variability. This method is “direct”: it simulates every single transition. Unfortunately, that makes it extremely computationally expensive if one wants to simulate a time frame much longer than the typical event time.
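To make this concrete, here is a minimal sketch of the direct method for the predator-prey model above, written in Python with NumPy rather than Matlab. The function name `gillespie_lv` and its signature are my own illustration, not any library's API:

```python
import numpy as np

def gillespie_lv(x1, x2, a1, a2, a3, t_end, seed=0):
    """Gillespie direct method for the stochastic Lotka-Volterra model.
    Transitions: prey birth (+1 prey), predation (-1 prey, +1 predator),
    predator death (-1 predator)."""
    rng = np.random.default_rng(seed)
    # state-change vectors, one row per transition
    T = np.array([[1, 0], [-1, 1], [0, -1]])
    t, x = 0.0, np.array([x1, x2], dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        r = np.array([a1, a2 * x[0] * x[1], a3 * x[1]])  # transition rates
        R = r.sum()
        if R == 0:  # absorbing state: nothing can ever happen again
            break
        t += -np.log(rng.random()) / R  # exponential waiting time dt = -ln(U)/R
        i = np.searchsorted(np.cumsum(r) / R, rng.random())  # pick a transition
        x += T[i]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```

Note that the predation rate vanishes automatically when either population hits zero, so counts can never go negative.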

An alternative is therefore the so-called “tau-leap” method. It allows the user to specify the time step (often denoted \tau, which is presumably where the name comes from) and then tabulates how many stochastic events occurred in that step. This is essentially a stochastic forward-Euler solver. For each transition, we draw the number of times that transition occurred during the step from a distribution (often Poisson) with mean r_i \tau. Thus, in the model above we might draw \mathcal{P}(\alpha_1 \tau)=10 prey births and \mathcal{P}(\alpha_3 x_2\tau)=4 predator deaths in a given time step. Note this still ensures discrete transitions, and once predators die out they cannot stochastically come back. Seems better? But it has its own challenges: \tau must be chosen artfully so that the expected counts r_i\tau stay small compared to the populations (otherwise a leap can overshoot and drive a count negative), and if the rates differ greatly in magnitude, the discretization sneaks up on you and a slow process may effectively never occur.
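A tau-leap sketch for the same model might look like the following, again in Python/NumPy with an illustrative function name. The clipping to zero is a crude guard against the negative-count overshoot mentioned above; real implementations adapt \tau instead:

```python
import numpy as np

def tau_leap_lv(x1, x2, a1, a2, a3, t_end, tau, seed=0):
    """Tau-leap for the Lotka-Volterra model: fixed step tau,
    Poisson-distributed event counts for each transition per step."""
    rng = np.random.default_rng(seed)
    T = np.array([[1, 0], [-1, 1], [0, -1]])  # state change per transition
    x = np.array([x1, x2], dtype=float)
    traj = [x.copy()]
    for _ in range(round(t_end / tau)):
        r = np.array([a1, a2 * x[0] * x[1], a3 * x[1]])
        n = rng.poisson(r * tau)          # number of firings of each transition
        x = np.maximum(x + n @ T, 0)      # crude guard against negative counts
        traj.append(x.copy())
    return np.array(traj)
```

Because the number of random draws per step is fixed (one per transition type, not one per event), the cost no longer scales with the total number of events.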

A final alternative is a clever combination of deterministic and stochastic implementations, which we call a hybrid approach. It is well described by Alfonsi et al. 2005. We introduce the transition matrix \mathcal{T}_{ij}, whose row i gives the change in state variable j when transition i occurs. For our predator-prey model, \mathcal{T}_{ij}=[[1, 0],[-1,1],[0,-1]].

We separate the transitions into deterministic (i\in\mathcal{D}) and stochastic (i\in\mathcal{S}) sets. The algorithm proceeds by drawing an independent unit-rate exponential random variable U_i for each i\in\mathcal{S}. Then the state evolves deterministically as

\frac{dx_j}{dt} = \sum_{i\in\mathcal{D}} \mathcal{T}_{ij}r_i,

which could be solved with a forward-Euler step, for example. Meanwhile, each stochastic transition accrues probability via the evolution equation

\frac{dw_i}{dt} = r_i, \qquad i\in\mathcal{S},

until some time when w_i\ge U_i. At that moment the specific stochastic transition i occurs, updating the state as x_j \to x_j+\mathcal{T}_{ij}, and a fresh exponential draw pushes the threshold out, U_i \to U_i+U', resetting the probability of that transition. This method is useful when there is a separation of scales; the largest challenges seem to be in implementation (the deterministic part might need a higher-order solver than Euler) and in choosing a priori which variables to treat deterministically.
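As a sketch of the hybrid scheme in Python/NumPy: here I assume (my choice, purely for illustration) that prey birth and predation are treated deterministically while predator death is stochastic, and the deterministic part uses a fixed-step forward-Euler integrator. Notice that the predator count becomes fractional between stochastic jumps because predation is deterministic here, which illustrates why partitioning \mathcal{D} and \mathcal{S} a priori is the delicate part:

```python
import numpy as np

def hybrid_lv(x1, x2, a1, a2, a3, t_end, dt, seed=0):
    """Hybrid deterministic/stochastic sketch for the Lotka-Volterra model.
    Assumption: transitions 0, 1 (prey birth, predation) are deterministic;
    transition 2 (predator death) is stochastic."""
    rng = np.random.default_rng(seed)
    T = np.array([[1, 0], [-1, 1], [0, -1]])
    det, sto = [0, 1], [2]               # index sets D and S
    x = np.array([x1, x2], dtype=float)
    w = np.zeros(len(sto))               # accrued integrals w_i
    U = -np.log(rng.random(len(sto)))    # unit-rate exponential thresholds U_i
    traj, t = [x.copy()], 0.0
    while t < t_end:
        r = np.array([a1, a2 * x[0] * x[1], a3 * x[1]])
        # deterministic channels: Euler step dx_j = sum_{i in D} T_ij r_i dt
        x = x + r[det] @ T[det] * dt
        # stochastic channels: accrue dw_i = r_i dt, fire when w_i >= U_i
        w += r[sto] * dt
        for k, i in enumerate(sto):
            if w[k] >= U[k]:
                x = x + T[i]                      # fire transition i
                U[k] += -np.log(rng.random())     # push threshold out
        t += dt
        traj.append(x.copy())
    return np.array(traj)
```

A production version would use an adaptive, higher-order ODE solver between jumps and re-partition the transitions on the fly as populations grow or shrink.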
