The theory behind the Kalman filter is not covered here; it comes down to five basic formulas, which are given directly below:
Formula 1: X(k|k-1) = A X(k-1|k-1) + B U(k) + W(k)
Formula 2: P(k|k-1) = A P(k-1|k-1) A' + Q(k)
Formula 3: X(k|k) = X(k|k-1) + Kg(k)[Z(k) - H X(k|k-1)]
Formula 4: Kg(k) = P(k|k-1) H' / [H P(k|k-1) H' + R]  // Kalman gain
Formula 5: P(k|k) = (1 - Kg(k) H) P(k|k-1)
In addition, the measurement model is Z(k) = H X(k) + V, where Z is the measurement, X is the system state, W is the process noise, V is the measurement noise, H is the measurement matrix, A is the state-transition matrix, Q is the covariance of W, and R is the covariance of V. X(k|k-1) is the predicted (a priori) estimate; X(k|k) is the optimal estimate refined from X(k|k-1), i.e. the filtered estimate; P(k|k-1) is the prediction error covariance matrix, and P(k|k) is the filtering error covariance matrix.
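The five formulas above can also be written directly in matrix form. The following is a sketch of one predict/update cycle (the `kalman_step` helper and its signature are my own illustration, not from the original post); each line is annotated with the formula it implements:

```python
import numpy as np

def kalman_step(x_est, P_est, z, A, H, Q, R):
    """One Kalman predict/update cycle, following Formulas 1-5 (with B U = 0)."""
    # Formula 1: X(k|k-1) = A X(k-1|k-1)   (control term B U(k) omitted here)
    x_pred = A @ x_est
    # Formula 2: P(k|k-1) = A P(k-1|k-1) A' + Q
    P_pred = A @ P_est @ A.T + Q
    # Formula 4: Kg(k) = P(k|k-1) H' [H P(k|k-1) H' + R]^{-1}
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Formula 3: X(k|k) = X(k|k-1) + Kg(k) (Z(k) - H X(k|k-1))
    x_new = x_pred + K @ (z - H @ x_pred)
    # Formula 5: P(k|k) = (I - Kg(k) H) P(k|k-1)
    P_new = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```

With 1x1 matrices this reduces exactly to the scalar program below.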
Below is a small Python Kalman filter program, which sets A=1, H=1, BU=0, W=0:
# -*- coding=utf-8 -*-
# Kalman filter example demo in Python
# A Python implementation of the example given in pages 11-15 of "An
# Introduction to the Kalman Filter" by Greg Welch and Gary Bishop,
# University of North Carolina at Chapel Hill, Department of Computer
# Science, TR 95-041,
# http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html
# by Andrew D. Straw
import numpy
import pylab
# This assumes the case A=1, H=1
# initial parameters
n_iter = 50
sz = (n_iter,) # size of array
x = -0.37727 # truth value (typo in example at top of p. 13 calls this z)
z = numpy.random.normal(x,0.1,size=sz) # observations (normal about x, sigma=0.1)
Q = 1e-5 # process variance
# allocate space for arrays
xhat=numpy.zeros(sz) # a posteriori estimate of x
P=numpy.zeros(sz) # a posteriori error estimate
xhatminus=numpy.zeros(sz) # a priori estimate of x
Pminus=numpy.zeros(sz) # a priori error estimate
K=numpy.zeros(sz) # gain or blending factor
R = 0.1**2 # estimate of measurement variance, change to see effect
# initial guesses
xhat[0] = 0.0
P[0] = 1.0
for k in range(1,n_iter):
    # time update
    xhatminus[k] = xhat[k-1]  # X(k|k-1) = AX(k-1|k-1) + BU(k) + W(k), A=1, BU(k)=0
    Pminus[k] = P[k-1] + Q    # P(k|k-1) = AP(k-1|k-1)A' + Q(k), A=1
    # measurement update
    K[k] = Pminus[k]/(Pminus[k] + R)  # Kg(k) = P(k|k-1)H'/[HP(k|k-1)H' + R], H=1
    xhat[k] = xhatminus[k] + K[k]*(z[k] - xhatminus[k])  # X(k|k) = X(k|k-1) + Kg(k)[Z(k) - HX(k|k-1)], H=1
    P[k] = (1 - K[k])*Pminus[k]  # P(k|k) = (1 - Kg(k)H)P(k|k-1), H=1
pylab.figure()
pylab.plot(z,'k+',label='noisy measurements')  # measurements
pylab.plot(xhat,'b-',label='a posteriori estimate')  # filtered estimates
pylab.axhline(x,color='g',label='truth value')  # true value
pylab.legend()
pylab.xlabel('Iteration')
pylab.ylabel('Voltage')
pylab.figure()
valid_iter = range(1,n_iter) # Pminus not valid at step 0
pylab.plot(valid_iter,Pminus[valid_iter],label='a priori error estimate')
pylab.xlabel('Iteration')
pylab.ylabel('$(Voltage)^2$')
pylab.setp(pylab.gca(),'ylim',[0,.01])
pylab.show()
Result:
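As a quick numerical check without the plots (my own addition, using the same parameters as the program above), the filtered estimate should land close to the true value and the error variance P should shrink far below the measurement variance R; the fixed seed is an assumption added for reproducibility:

```python
import numpy as np

np.random.seed(0)  # fixed seed for reproducibility (not in the original post)
n_iter = 50
x_true = -0.37727
z = np.random.normal(x_true, 0.1, size=n_iter)  # noisy measurements
Q, R = 1e-5, 0.1**2

xhat, P = 0.0, 1.0  # initial guesses
for k in range(1, n_iter):
    xhatminus, Pminus = xhat, P + Q        # time update (A=1, BU=0)
    K = Pminus / (Pminus + R)              # Kalman gain (H=1)
    xhat = xhatminus + K * (z[k] - xhatminus)
    P = (1 - K) * Pminus

print(xhat, P)
```

After 49 updates the gain has settled and P is roughly R divided by the number of measurements, so the estimate is much tighter than any single observation.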