Collecting summaries at each iteration of a SMC algorithm.


This module implements “summary collectors”, that is, objects that collect, at every time t, certain summaries of the particle system. Important applications are fixed-lag smoothing and on-line smoothing. However, the idea is a bit more general than that. Here is a simple example:

import particles
# ...
# define some_fk_model
# ...
my_alg = particles.SMC(fk=some_fk_model, N=100, moments=True,
                       naive_online_smooth=True)
my_alg.run()
print(my_alg.summaries.moments)  # print a list of moments
print(my_alg.summaries.naive_online_smooth)  # print a list of estimates

Once the algorithm is run, the object my_alg.summaries contains the computed summaries, stored in lists of length T (one component for each iteration t).

Turning off summary collection

You may set option summaries of class SMC to False to avoid collecting any summary:

my_alg = particles.SMC(fk=some_fk_model, N=100, summaries=False)

This might be useful in cases when you need to keep a large number of SMC objects in memory (as in SMC^2). In that case, even the default summaries (see below) might take too much space.

Default summaries

By default, the following summaries are collected:
  • ESSs: ESS at each iteration;
  • rs_flags: whether resampling was triggered or not at each time t;
  • logLts: log-likelihood estimates.
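To make the first of these concrete: the ESS reported at each iteration is the classic effective sample size of the normalised weights. Here is a minimal stand-alone sketch (plain NumPy, not library code):

```python
import numpy as np

def ess(W):
    """Effective sample size of a normalised weight vector W: 1 / sum(W^2)."""
    return 1.0 / np.sum(W ** 2)

W = np.full(100, 1.0 / 100)  # uniform weights over N=100 particles
print(ess(W))                # equals N for uniform weights

W_degen = np.zeros(100)
W_degen[0] = 1.0             # all weight on one particle
print(ess(W_degen))          # equals 1: total degeneracy
```

The ESS ranges from 1 (one particle carries all the weight) to N (uniform weights), which is why it is used to decide whether to trigger resampling.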

Computing moments

To compute moments (weighted averages, or other functions of the particle sample), use option moments as follows:

my_alg = particles.SMC(fk=some_fk_model, N=100, moments=mom_func)

where mom_func is a function with the following signature:

def mom_func(W, X):
    return np.average(X, weights=W)

If option moments is set to True, the default moments are computed. For instance, for a FeynmanKac object derived from a state-space model, the default moments at time t consist of a dictionary with keys ‘mean’ and ‘var’, containing the particle estimates (at time t) of the filtering mean and variance.
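The default behaviour amounts to computing a weighted mean and variance of the particle sample; a minimal sketch of such a moment function, assuming univariate particles (this is an illustration, not the library's implementation):

```python
import numpy as np

def default_mom_func(W, X):
    """Weighted filtering mean and variance of the particle sample (univariate)."""
    m = np.average(X, weights=W)
    v = np.average((X - m) ** 2, weights=W)
    return {'mean': m, 'var': v}

W = np.array([0.5, 0.5])
X = np.array([-1.0, 1.0])
print(default_mom_func(W, X))  # mean 0.0, var 1.0
```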

It is possible to define different defaults for the moments. To do so, override method default_moments of the considered FeynmanKac class:

class Bootstrap_with_better_moments(Bootstrap):
    def default_moments(self, W, X):
        return np.average(X**2, weights=W)
# ...
# define state-space model my_ssm
# ...
my_fk_model = Bootstrap_with_better_moments(ssm=my_ssm, data=data)
my_alg = particles.SMC(fk=my_fk_model, N=100, moments=True)

In that case, my_alg.summaries.moments is a list of weighted averages of the squares of the components of the particles.

Fixed-lag smoothing

Fixed-lag smoothing means smoothing of the latest h states; that is, computing (at every time t) expectations of

\[\mathbb{E}[\phi_t(X_{t-h:t}) | Y_{0:t} = y_{0:t}]\]

for a fixed integer $h$ (at times $t \geq h$; if $t<h$, replace $h$ by $t$).

This requires keeping track of the $h$ previous states for each particle; this is achieved by using a rolling window history, by setting option store_history to an int equal to $h+1$ (the length of the trajectories):

my_alg = particles.SMC(fk=some_fk_model, N=100, fixed_lag_smooth=phi,
                       store_history=3)  # h = 2

See module smoothing for more details on rolling window and other types of particle history. Function phi must have the same signature as for moments:

def phi(W, X):
    return np.average(np.array(X), axis=-1, weights=W)

Note however that X is a deque, not a list: it behaves like a list, except that its length is always at most $h + 1$. Of course this function could simply return its arguments W and X; in that case you would simply record the fixed-lag trajectories (and their weights) at every time $t$.
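To make the deque behaviour concrete, here is a minimal sketch (plain Python and NumPy, mimicking rather than using the library's rolling window) of how a window of length $h+1$ evolves, and how a function like phi above would act on it:

```python
from collections import deque
import numpy as np

h = 2
window = deque(maxlen=h + 1)     # rolling window: keeps the h+1 latest states
for t in range(5):               # emulate 5 SMC iterations
    X_t = np.full(3, float(t))   # fake particle array at time t (N=3)
    window.append(X_t)           # old entries beyond maxlen drop out silently

print(len(window))               # 3: only X_2, X_3, X_4 are kept

W = np.array([0.2, 0.3, 0.5])    # fake normalised weights
phi_est = np.average(np.array(window), axis=-1, weights=W)
print(phi_est)                   # [2. 3. 4.]: weighted mean of each lagged state
```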

On-line smoothing

On-line smoothing is the task of approximating, at every time t, expectations of the form:

\[\mathbb{E}[\phi_t(X_{0:t}) | Y_{0:t} = y_{0:t}]\]

On-line smoothing is covered in Sections 11.1 and 11.3 in the book. Note that on-line smoothing is typically restricted to additive functions $\phi$, see below.

The following three algorithms are implemented:

  • naive_online_smooth: basic forward smoothing (carry forward full trajectories); cost is O(N) per time step, but performance may be poor for large t;
  • ON2_online_smooth: O(N^2) on-line smoothing; much more expensive (cost is O(N^2) per time step, hence a large increase in CPU time), but better performance;
  • paris: on-line smoothing using the PaRIS algorithm. (Warning: the current implementation is very slow; work in progress.)

These algorithms compute the smoothing expectation of a certain additive function, that is a function of the form:

\[\phi_t(x_{0:t}) = \psi_0(x_0) + \psi_1(x_0, x_1) + ... + \psi_t(x_{t-1}, x_t)\]

The elementary function \(\psi_t\) is specified by defining method add_func in the considered state-space model. Here is an example:

class ToySSM(StateSpaceModel):
    def PX0(self):
        ... # as usual, see module `state_space_model`
    def add_func(self, t, xp, x):  # xp means x_{t-1} (p=past)
        if t == 0:
            return x**2
        else:
            return (xp - x)**2

The reason why additive functions are specified in this way is that additive functions often depend on fixed parameters of the state-space model (which are accessible through the StateSpaceModel object, but not from outside).
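For an additive function, carrying trajectories forward reduces to carrying a per-particle running sum, updated through the ancestor indices produced by resampling. The following is a minimal, self-contained sketch of that O(N) recursion (an illustration of the idea, not the library's implementation); the deterministic toy run uses identity ancestors so the result is easy to check by hand:

```python
import numpy as np

def naive_smooth(X_seq, W_seq, A_seq, psi):
    """O(N) forward recursion for an additive function (sketch).

    X_seq[t]: particle array at time t; W_seq[t]: normalised weights;
    A_seq[t]: ancestor indices from resampling (A_seq[0] is unused).
    """
    ests = []
    for t, X in enumerate(X_seq):
        if t == 0:
            Phi = psi(0, None, X)                      # per-particle sums
        else:
            A = A_seq[t]
            Phi = Phi[A] + psi(t, X_seq[t - 1][A], X)  # carry sums forward
        ests.append(np.average(Phi, weights=W_seq[t]))
    return ests

# deterministic toy run: identity ancestors (no effective resampling)
N, T = 4, 3
X_seq = [np.arange(N, dtype=float) + t for t in range(T)]
W_seq = [np.full(N, 1.0 / N) for _ in range(T)]
A_seq = [None] + [np.arange(N)] * (T - 1)
psi = lambda t, xp, x: x   # phi_t(x_{0:t}) = x_0 + ... + x_t
ests = naive_smooth(X_seq, W_seq, A_seq, psi)
print([float(e) for e in ests])  # [1.5, 4.0, 7.5]
```

At each time t, the estimate is the weighted mean (at time t) of the per-particle sums, i.e. the cumulative sum of the filtering means in this toy example.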

The first two algorithms do not require any parameter:

my_alg = particles.SMC(fk=some_fk_model, N=100, naive_online_smooth=True)

The Paris algorithm has an optional parameter Nparis, which you may specify as follows:

my_alg = particles.SMC(fk=some_fk_model, N=100, paris=5)

If option paris is set to True, then the default value (2) is used.

User-defined collectors

You may implement your own collectors as follows:

from particles import collectors

class ToyCollector(collectors.Collector):
    summary_name = 'toy'
    def fetch(self, smc):  # smc is the particles.SMC instance
        # self.arg is the argument passed to the collector
        return smc.X[self.arg]

Once this is done, you may use this new collector exactly as the other ones:

pf = particles.SMC(N=30, fk=some_fk_model, toy=3)

Then pf.summaries.toy will be a list of the summaries collected at each time by the fetch method.
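The collection mechanism itself is easy to emulate: at every iteration, the algorithm calls fetch and appends the result to a list. Here is a minimal sketch (plain Python mimicking, rather than using, the library's Collector machinery; names are illustrative):

```python
import numpy as np

class ToyCollector:
    """Minimal stand-in for a collector: fetch one summary per iteration."""
    def __init__(self, arg):
        self.arg = arg
        self.summary = []          # one entry per iteration, as in the library

    def fetch(self, smc_state):
        return smc_state['X'][self.arg]

    def collect(self, smc_state):
        self.summary.append(self.fetch(smc_state))

coll = ToyCollector(arg=3)
for t in range(5):                 # emulate 5 SMC iterations
    X = np.arange(10) + t          # fake particle array at time t
    coll.collect({'X': X})

print([int(v) for v in coll.summary])  # [3, 4, 5, 6, 7]
```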