particles.variance_estimators¶
Single-run variance estimators.
Overview¶
As discussed in Section 19.3 of the book, several recent papers (Chan & Lai,
2013; Lee & Whiteley, 2018; Olsson & Douc, 2019) have proposed variance
estimates that may be computed from a single run of the algorithm. These
estimates rely on genealogy tracking; more precisely, they require tracking
the eve variables, i.e. the index of the ancestor at time 0 (or at some other
time, in Olsson and Douc, 2019) of each particle. See function var_estimate
for the exact expression of this type of estimate.
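As a rough illustration (not the library's implementation), the computation behind such an estimate can be sketched in NumPy: group the weighted, centred test-function values by their eve variable, and sum the squared group totals. The function name and signature below are hypothetical.

```python
import numpy as np

def single_run_var(W, phi_x, eves):
    """Sketch of a genealogy-based variance estimate.

    W      : normalised weights at time t, shape (N,)
    phi_x  : phi(X_t^n) for each particle, shape (N,)
    eves   : index of each particle's time-0 ancestor, shape (N,)
    """
    m = np.average(phi_x, weights=W)   # particle estimate of Q_t(phi)
    w_phi = W * (phi_x - m)            # weighted, centred values
    # sum of squared totals over groups sharing the same eve variable
    return sum(w_phi[eves == e].sum() ** 2 for e in np.unique(eves))

N = 4
W = np.full(N, 1.0 / N)
phi_x = np.array([0.5, 1.5, 2.5, 3.5])
distinct = single_run_var(W, phi_x, np.arange(N))             # all eves distinct
collapsed = single_run_var(W, phi_x, np.zeros(N, dtype=int))  # single common ancestor
```

Note that when all particles share a single eve variable, the centred weighted values sum to zero, so the estimate collapses to zero; this is the degeneracy problem discussed further down.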
Variance estimators (Chan & Lai, 2013; Lee & Whiteley, 2018)¶
These estimates may be collected (see module collectors) as follows:
import particles
from particles import variance_estimators as var # this module
# Define a FeynmanKac object, etc.
# ...
phi = lambda x: x**2 # for instance
my_alg = particles.SMC(fk=my_fk_model, N=100,
                       collect=[var.Var(phi=phi), var.Var_logLt()])
The first collector will compute at each time t an estimate of the variance of
\(\sum_{n=1}^N W_t^n \varphi(X_t^n)\) (which is itself a particle estimate
of expectation \(\mathbb{Q}_t(\varphi)\)). If argument phi
is not provided,
the function \(\varphi(x)=x\) will be used.
The second collector will compute an estimate of the variance of the log normalising constant, i.e. \(\log L_t\).
Note
The estimators found in Chan & Lai (2013) and Lee & Whiteley (2018) differ only by a factor \((N/(N-1))^t\); the collectors above implement the former version, without the factor.
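Hence, should one want the Lee & Whiteley (2018) version, a collected estimate at time t could simply be rescaled; a minimal sketch, with a hypothetical collected value:

```python
N, t = 100, 50
chan_lai_est = 0.123  # hypothetical value collected at time t
# multiply by the correction factor to obtain the Lee & Whiteley version
lee_whiteley_est = (N / (N - 1)) ** t * chan_lai_est
```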
Lag-based variance estimators (Olsson and Douc, 2019)¶
The above estimators suffer from the well-known problem of particle degeneracy; as soon as the number of distinct ancestors falls to one, these variance estimates equal zero. Olsson and Douc (2019) proposed a variant based on a fixed-lag approximation. To compute it, you need to activate the tracking of a rolling-window history, as for fixed-lag smoothing (see below):
my_alg = particles.SMC(fk=my_fk_model, N=100,
                       collect=[var.Lag_based_var(phi=phi)],
                       store_history=10)
which is going to compute the same type of estimates, but using as eve variables (called Enoch variables in Olsson and Douc) the index of the ancestor of each particle \(X_t^n\) at time \(t-l\), where \(l\) is the lag. This collector actually computes and stores simultaneously the estimates that correspond to lags 0, 1, …, k (where k is the size of the rolling window history). This makes it easier to assess the impact of the lag on the estimates. Thus:
print(my_alg.summaries.lag_based_var[-1])  # prints a list of length 10
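To see how lagged eve variables arise from the stored genealogy, here is a self-contained sketch (function name and array layout are hypothetical, not the library's internals): the time-\(t-l\) ancestor of each particle is obtained by composing the ancestor indices backwards over \(l\) steps.

```python
import numpy as np

def eve_at_lag(A, t, lag):
    """Index at time t - lag of the ancestor of each time-t particle.

    A : ancestor indices, shape (T, N); A[s, n] is the index at time
        s - 1 of the ancestor of particle n at time s (rows s >= 1).
    """
    idx = np.arange(A.shape[1])
    for s in range(t, t - lag, -1):
        idx = A[s][idx]  # follow the genealogy one step back
    return idx

rng = np.random.default_rng(42)
T, N = 6, 8
A = rng.integers(0, N, size=(T, N))  # toy genealogy
t = T - 1
lag0 = eve_at_lag(A, t, 0)  # lag 0: each particle is its own ancestor
lag1 = eve_at_lag(A, t, 1)  # lag 1: parent indices at time t - 1
```

The larger the lag, the fewer distinct values these variables take, which is precisely the trade-off between degeneracy and the fixed-lag bias discussed above.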
Numerical experiments¶
See here for a Jupyter notebook that illustrates these variance estimates in a simple example.
References
Chan, H.P. and Lai, T.L. (2013). A general theory of particle filters in hidden Markov models and some applications. Ann. Statist. 41, pp. 2877–2904.
Lee, A. and Whiteley, N. (2018). Variance estimation in the particle filter. Biometrika 105(3), pp. 609–625.
Olsson, J. and Douc, R. (2019). Numerically stable online estimation of variance in particle filters. Bernoulli 25(2), pp. 1504–1535.
Functions

var_estimate – Variance estimate based on genealogy tracking.

Classes

Lag_based_var – Computes and collects Olsson and Douc (2019) variance estimates, which are based on a fixed-lag approximation.

Var – Computes and collects variance estimates for a given test function phi.

Var_logLt – Computes and collects estimates of the variance of the log normalising constant estimator.