# particles.kalman¶

Basic implementation of the Kalman filter (and smoother).

## Overview¶

The Kalman filter/smoother is a well-known algorithm for computing recursively the filtering/smoothing distributions of a linear Gaussian model, i.e. a model of the form:

$\begin{split}X_0 & \sim N(\mu_0,C_0) \\ X_t & = F X_{t-1} + U_t, \quad U_t \sim N(0, C_X) \\ Y_t & = G X_t + V_t, \quad V_t \sim N(0, C_Y)\end{split}$

Linear Gaussian models and the Kalman filter are covered in Chapter 7 of the book.
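To make the notation concrete, the model above can be simulated directly with NumPy; the dimensions and parameter values below are arbitrary choices for illustration, not defaults of the module:

```python
import numpy as np

rng = np.random.default_rng(0)
T, dx, dy = 30, 2, 1

# Arbitrary parameter values, for illustration only
F = 0.9 * np.eye(dx)                  # state transition matrix
G = np.ones((dy, dx))                 # observation matrix
covX = np.eye(dx)                     # state noise covariance C_X
covY = 0.3 * np.eye(dy)               # observation noise covariance C_Y
mu0, cov0 = np.zeros(dx), np.eye(dx)  # law of the initial state X_0

x = np.empty((T, dx))
y = np.empty((T, dy))
x[0] = rng.multivariate_normal(mu0, cov0)
y[0] = G @ x[0] + rng.multivariate_normal(np.zeros(dy), covY)
for t in range(1, T):
    x[t] = F @ x[t - 1] + rng.multivariate_normal(np.zeros(dx), covX)
    y[t] = G @ x[t] + rng.multivariate_normal(np.zeros(dy), covY)
```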

## MVLinearGauss class and subclasses¶

To define a specific linear Gaussian model, we instantiate class MVLinearGauss (or one of its subclasses) as follows:

import numpy as np
from particles import kalman

ssm = kalman.MVLinearGauss(F=np.eye(2), G=np.ones((1, 2)), covX=np.eye(2),
                           covY=.3)

where the parameters have the same meaning as above. It is also possible to specify mu0 and cov0 (the mean and covariance of the initial state X_0). (See the documentation of the class for more details.)

Class MVLinearGauss is a sub-class of StateSpaceModel in module state_space_models, so it inherits methods from its parent such as:

true_states, data = ssm.simulate(30)

Class MVLinearGauss implements methods proposal, proposal0 and logeta, which correspond respectively to the optimal proposal distributions and auxiliary function for a guided or auxiliary particle filter; see Chapter 11 and module state_space_models for more details. (That the optimal quantities are tractable is, of course, due to the fact that the model is linear and Gaussian.)
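For reference, the tractability mentioned above comes from standard Gaussian conjugacy: conditionally on $X_{t-1}=x_{t-1}$ and $Y_t=y_t$, the state $X_t$ is again Gaussian, which gives the optimal proposal in closed form (a standard identity, stated here for orientation; the module may use an equivalent parametrisation):

$\begin{split}X_t \mid X_{t-1}=x_{t-1}, Y_t=y_t & \sim N(m_t, \Sigma), \\ \Sigma^{-1} & = C_X^{-1} + G^T C_Y^{-1} G, \\ m_t & = \Sigma \left(C_X^{-1} F x_{t-1} + G^T C_Y^{-1} y_t\right)\end{split}$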

To define a univariate linear Gaussian model, you may want to use instead the more conveniently parametrised class LinearGauss (which is a sub-class of MVLinearGauss):

ssm = kalman.LinearGauss(rho=0.3, sigmaX=1., sigmaY=.2, sigma0=1.)

which corresponds to model:

$\begin{split}X_0 & \sim N(0, \sigma_0^2) \\ X_t|X_{t-1}=x_{t-1} & \sim N(\rho x_{t-1}, \sigma_X^2) \\ Y_t |X_t=x_t & \sim N(x_t, \sigma_Y^2)\end{split}$
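For this univariate model, the Kalman recursions reduce to scalar formulas. The sketch below (not part of the module; observation values are made up) spells out one pass of the predict/update recursion:

```python
import numpy as np

# Same parameter values as in the LinearGauss example above
rho, sigX, sigY, sig0 = 0.3, 1., 0.2, 1.

def predict(m, v):
    """Predictive mean/variance of X_t given Y_{0:t-1}."""
    return rho * m, rho**2 * v + sigX**2

def update(m_pred, v_pred, y):
    """Filtering mean/variance of X_t given Y_{0:t}."""
    gain = v_pred / (v_pred + sigY**2)
    return m_pred + gain * (y - m_pred), (1. - gain) * v_pred

m, v = 0., sig0**2          # law of X_0 (predictive at time 0)
for y in [0.5, -0.2, 1.1]:  # made-up observations
    m, v = update(m, v, y)  # assimilate y_t -> filtering dist
    m, v = predict(m, v)    # move to t+1 -> predictive dist
```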

Another sub-class of MVLinearGauss defined in this module is MVLinearGauss_Guarniero_etal, which implements a particular class of linear Gaussian models often used as a benchmark (after Guarniero et al., 2016).

## Kalman class¶

The Kalman filter is implemented as a class, Kalman, with methods filter and smoother. When instantiating the class, one passes as arguments the data and an object that represents the considered model (i.e. an instance of MVLinearGauss, see above):

kf = kalman.Kalman(ssm=ssm, data=data)
kf.filter()

The second line implements the forward pass of a Kalman filter. The results are stored as lists of MeanAndCov objects, that is, named tuples with attributes ‘mean’ and ‘cov’ that represent a Gaussian distribution. For instance:

kf.filt[3].mean  # mean of the filtering distribution at time 3
kf.pred[7].cov  # cov matrix of the predictive distribution at time 7

The forward pass also computes the log-likelihood of the data:

kf.logpyt[5]  # log-density of Y_t | Y_{0:t-1} at time t=5
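Each such term is the log-density of a Gaussian: given the predictive distribution N(m, C) of X_t, the law of Y_t given Y_{0:t-1} is N(Gm, GCG' + C_Y). A minimal NumPy sketch of this computation (the function name and the numerical values are illustrative, not part of the module):

```python
import numpy as np

def log_dens_yt(G, covY, mean_pred, cov_pred, yt):
    """Log-density of Y_t given Y_{0:t-1}: Gaussian with mean G @ mean_pred
    and covariance G @ cov_pred @ G' + covY (standard Kalman identity)."""
    m = G @ mean_pred
    S = G @ cov_pred @ G.T + covY
    d = yt - m
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (len(yt) * np.log(2. * np.pi) + logdet
                   + d @ np.linalg.solve(S, d))

# Example with arbitrary values (dx=2, dy=1)
G = np.ones((1, 2))
covY = 0.3 * np.eye(1)
val = log_dens_yt(G, covY, np.zeros(2), np.eye(2), np.array([0.5]))
```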

Smoothing works along the same lines:

kf.smoother()

then object kf contains an attribute called smth, a list which represents the successive (marginal) smoothing distributions:

kf.smth[8].mean  # mean of the smoothing dist at time 8

It is possible to call method smoother directly (without calling filter first). In that case, the filtering step is automatically performed as a preliminary step.

## Kalman objects as iterators¶

It is possible to perform the forward pass step by step; in fact a Kalman object is an iterator:

kf = kalman.Kalman(ssm=ssm, data=data)
next(kf)  # one step
next(kf)  # one step

If you run the smoother after k such steps, you will obtain the smoothing distribution based on the first k data-points. It is therefore possible to compute recursively the successive smoothing distributions, but (a) at a high CPU cost; and (b) you must save the results somewhere at each time, as attribute kf.smth is overwritten at each call.

## Functions to perform a single step¶

The module also defines low-level functions that perform a single step of the forward or backward pass. Some of these functions make it possible to perform such steps in parallel (e.g. for N predictive means). The table below lists these functions. Some of the required inputs are MeanAndCov objects, which may be defined as follows:

my_predictive_dist = kalman.MeanAndCov(mean=np.ones(2), cov=np.eye(2))

Functions

| Function (with signature) | Description |
| --- | --- |
| `dotdot(a, b, c)` | Computes a * b * c. |
| `dotdotinv(a, b, c)` | Computes a * b * c^{-1}, where c is symmetric positive. |
| `predict_step(F, covX, filt)` | Predictive step of the Kalman filter. |
| `filter_step(G, covY, pred, yt)` | Filtering step of the Kalman filter. |
| `filter_step_asarray(G, covY, pred, yt)` | Filtering step of the Kalman filter: array version. |
| `smoother_step(F, filt, next_pred, next_smth)` | Smoothing step of the Kalman filter/smoother. |
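As an indication of what smoother_step computes, here is a NumPy sketch of the standard Rauch-Tung-Striebel backward step (argument names are assumptions for illustration; the actual function takes MeanAndCov objects):

```python
import numpy as np

def rts_step(F, m_filt, c_filt, m_pred_next, c_pred_next,
             m_smth_next, c_smth_next):
    """Backward smoothing step: combine the filtering distribution at
    time t with the predictive and smoothing distributions at t+1."""
    J = c_filt @ F.T @ np.linalg.inv(c_pred_next)   # smoothing gain
    m = m_filt + J @ (m_smth_next - m_pred_next)
    c = c_filt + J @ (c_smth_next - c_pred_next) @ J.T
    return m, c

# Sanity check: if the smoothing distribution at t+1 coincides with the
# predictive one (i.e. the data after time t are uninformative), the
# smoothed distribution at time t reduces to the filtered one.
F = 0.9 * np.eye(2)
m_filt, c_filt = np.array([1., -1.]), np.eye(2)
m_pred = F @ m_filt
c_pred = F @ c_filt @ F.T + np.eye(2)   # with covX = I
m_s, c_s = rts_step(F, m_filt, c_filt, m_pred, c_pred, m_pred, c_pred)
```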

Classes

| Class (with signature) | Description |
| --- | --- |
| `Kalman([ssm, data])` | Kalman filter/smoother. |
| `LinearGauss(**kwargs)` | A basic (univariate) linear Gaussian model. |
| `MVLinearGauss([F, G, covX, covY, mu0, cov0])` | Multivariate linear Gaussian model. |
| `MVLinearGauss_Guarniero_etal([alpha, dx])` | Special case of a multivariate linear Gaussian ssm, from Guarniero et al. (2016). |
| `MeanAndCov(mean, cov)` | Named tuple with fields mean and cov, representing a Gaussian distribution. |