We investigate the use of possibly the simplest scheme for the parallelisation of the standard particle filter, which consists of splitting the computational budget into M fully independent particle filters with N particles each, and then obtaining the desired estimators by averaging over the M independent outcomes of the filters. This approach minimises the parallelisation overhead yet displays highly desirable theoretical properties. Under very mild assumptions, we analyse the mean square error (MSE) of the estimators of 1-dimensional statistics of the optimal filtering distribution and show explicitly the effect of the parallelisation scheme on the convergence rate. Specifically, we study the decomposition of the MSE into variance and bias components, and show that the former decays as 1/(MN), i.e., linearly with the total number of particles, while the latter converges towards 0 as 1/N². Parallelisation, therefore, has the obvious advantage of dividing the running times while preserving the (asymptotic) performance of the particle filter. Following this lead, we propose a time-error index to compare schemes with different degrees of parallelisation. Finally, we provide two numerical examples. The first one deals with the tracking of a Lorenz 63 chaotic system with dynamical noise and partial (noisy) observations, while the second example involves a dynamical network of modified FitzHugh-Nagumo (FH-N) stochastic nodes. The latter is a high-dimensional system (≈3,000 state variables in our computer experiments) designed to numerically reproduce typical electrical phenomena observed in the atria of the human heart. In both examples, we show how the proposed parallelisation scheme attains the same approximation accuracy as a centralised particle filter with only a small fraction of the running time, using a standard multicore computer.
particle filtering; parallelisation; convergence analysis; stochastic FitzHugh-Nagumo; excitable media
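The averaging scheme described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it runs M independent bootstrap particle filters with N particles each on a toy linear-Gaussian state-space model (the model, its parameters, and all function names are assumptions introduced here for illustration) and averages the M filter outputs.

```python
import numpy as np

# Toy model (an assumption for illustration, not from the paper):
#   x_t = 0.9 * x_{t-1} + u_t,   u_t ~ N(0, 1)
#   y_t = x_t + v_t,             v_t ~ N(0, 0.5^2)

def bootstrap_pf(y, N, rng):
    """One standard bootstrap particle filter; returns estimates of E[x_t | y_1:t]."""
    T = len(y)
    x = rng.normal(0.0, 1.0, size=N)  # initial particle cloud
    means = np.empty(T)
    for t in range(T):
        x = 0.9 * x + rng.normal(0.0, 1.0, size=N)    # propagate particles
        logw = -0.5 * ((y[t] - x) / 0.5) ** 2          # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()                                   # normalised importance weights
        means[t] = np.dot(w, x)                        # weighted filter estimate
        x = x[rng.choice(N, size=N, p=w)]              # multinomial resampling
    return means

def parallel_pf(y, M, N, seed=0):
    """Average the outputs of M fully independent N-particle filters."""
    rng = np.random.default_rng(seed)
    return np.mean([bootstrap_pf(y, N, rng) for _ in range(M)], axis=0)

# Simulate a trajectory from the toy model and run the parallel scheme.
rng = np.random.default_rng(1)
T = 100
x_true = np.zeros(T)
y = np.empty(T)
for t in range(T):
    x_true[t] = 0.9 * (x_true[t - 1] if t > 0 else 0.0) + rng.normal()
    y[t] = x_true[t] + 0.5 * rng.normal()

est = parallel_pf(y, M=8, N=128)  # 8 independent filters, 128 particles each
```

In practice the M calls to `bootstrap_pf` would be dispatched to separate cores (e.g. with `multiprocessing`), since they share no state; the averaging step is the only point of communication, which is why the parallelisation overhead is minimal.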