The previous Kalman-filter examples were fairly simple and only served to illustrate the basic steps of the algorithm. Now let us see what a Kalman filter can do in practice. Here is an example that uses a Kalman filter to track mouse movement inside a window; the original author's homepage is http://home.wlu.edu/~levys/
The first step is choosing the state variables. Here the system state is x = [x, y]^T, i.e. the mouse position inside the window. The mouse-event callback provides the current mouse position, which serves as the measurement z = [x, y]^T. For this problem the external control input is u = 0. The measurement noise and process noise must be determined experimentally or by other means; here the default values are used to keep the problem simple. The state equation can then be written as:
x_k = F x_(k-1) + B u_k = F x_(k-1)    (since u = 0, the control term drops out)
The relation between the measurement and the state, z_k = H x_k, can be written the same way; both F and H are 2x2 identity matrices.
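To make the notation concrete, here is a minimal numeric sketch (not part of the original demo) of one predict/update cycle with F = H = I, using the same default noise values that tinyekf.py uses below (pval=0.1, qval=1e-4, rval=0.1):

```python
import numpy as np

# One Kalman predict/update cycle for the mouse-tracking model,
# where F and H are both 2x2 identity matrices and u = 0.
F = np.eye(2)
H = np.eye(2)
Q = np.eye(2) * 1e-4            # process noise covariance (qval)
R = np.eye(2) * 0.1             # measurement noise covariance (rval)

x = np.zeros((2, 1))            # initial state: mouse at (0, 0)
P = np.eye(2) * 0.1             # initial estimate covariance (pval)

z = np.array([[100.0], [200.0]])  # a hypothetical measured mouse position

# Predict: with F = I the prediction is just the previous state
x_pre = F @ x
P_pre = F @ P @ F.T + Q

# Update: blend prediction and measurement via the Kalman gain K
K = P_pre @ H.T @ np.linalg.inv(H @ P_pre @ H.T + R)
x = x_pre + K @ (z - H @ x_pre)
P = (np.eye(2) - K @ H) @ P_pre

# The updated estimate lies between the prediction (0, 0)
# and the measurement (100, 200).
```

With P and R of comparable size, the gain is close to 0.5 and the estimate lands roughly halfway between prediction and measurement.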
Below, Python and OpenCV are used to solve this problem. The file tinyekf.py defines the abstract class EKF; to use it, you define your own subclass of EKF and implement the nonlinear functions f(x) and h(x) along with their Jacobians F and H. An extended Kalman filter (EKF) handles linear problems just as well.
tinyekf.py:
'''
Extended Kalman Filter in Python
'''
import numpy as np
from abc import ABCMeta, abstractmethod

class EKF(object):
    __metaclass__ = ABCMeta

    def __init__(self, n, m, pval=0.1, qval=1e-4, rval=0.1):
        '''
        Creates a KF object with n states, m observables, and specified values for
        prediction noise covariance pval, process noise covariance qval, and
        measurement noise covariance rval.
        '''
        # No previous prediction noise covariance
        self.P_pre = None

        # Current state is zero, with diagonal noise covariance matrix
        self.x = np.zeros((n, 1))
        self.P_post = np.eye(n) * pval

        # Set up covariance matrices for process noise and measurement noise
        self.Q = np.eye(n) * qval
        self.R = np.eye(m) * rval

        # Identity matrix will be useful later
        self.I = np.eye(n)

    def step(self, z):
        '''
        Runs one step of the EKF on observations z, where z is a tuple of length m.
        Returns a NumPy array representing the updated state.
        '''
        # Re-evaluate the state-transition Jacobian at the current state
        self.F = self.getF(self.x)

        # Predict ----------------------------------------------------
        self.x = self.f(self.x)
        self.P_pre = np.dot(np.dot(self.F, self.P_post), self.F.T) + self.Q

        # Re-evaluate the observation Jacobian at the predicted state
        self.H = self.getH(self.x)

        # Update -----------------------------------------------------
        G = np.dot(np.dot(self.P_pre, self.H.T),
                   np.linalg.inv(np.dot(np.dot(self.H, self.P_pre), self.H.T) + self.R))
        self.x += np.dot(G, np.array(z).reshape(-1, 1) - self.h(self.x))
        self.P_post = np.dot(self.I - np.dot(G, self.H), self.P_pre)

        return self.x

    @abstractmethod
    def f(self, x):
        '''
        Your implementing class should define this method for the state transition function f(x),
        returning a NumPy array of n elements. Typically this is just the identity function np.copy(x).
        '''
        raise NotImplementedError()

    @abstractmethod
    def getF(self, x):
        '''
        Your implementing class should define this method for returning the n x n Jacobian matrix F of the
        state transition function as a NumPy array. Typically this is just the identity matrix np.eye(n).
        '''
        raise NotImplementedError()

    @abstractmethod
    def h(self, x):
        '''
        Your implementing class should define this method for the observation function h(x), returning
        a NumPy array of m elements. For example, your function might include a component that
        turns barometric pressure into altitude in meters.
        '''
        raise NotImplementedError()

    @abstractmethod
    def getH(self, x):
        '''
        Your implementing class should define this method for returning the m x n Jacobian matrix H of the
        observation function as a NumPy array.
        '''
        raise NotImplementedError()
kalman_mousetracker.py:
# -*- coding: utf-8 -*-
'''
kalman_mousetracker.py - OpenCV mouse-tracking demo using TinyEKF
'''

# This delay will affect the Kalman update rate
DELAY_MSEC = 20                  # interval between Kalman updates, in ms

WINDOW_NAME = 'Kalman Mousetracker [ESC to quit]'   # window title
WINDOW_SIZE = 500                # window size

import cv2
import numpy as np
from sys import exit
from tinyekf import EKF

class TrackerEKF(EKF):
    '''
    An EKF for mouse tracking
    '''
    def __init__(self):
        EKF.__init__(self, 2, 2, pval=1, qval=0.001, rval=0.1)

    def f(self, x):
        # State-transition function is identity
        return np.copy(x)

    def getF(self, x):
        # So state-transition Jacobian is identity matrix
        return np.eye(2)

    def h(self, x):
        # Observation function is identity
        return x

    def getH(self, x):
        # So observation Jacobian is identity matrix
        return np.eye(2)


class MouseInfo(object):
    '''
    A class to store X,Y points
    '''
    def __init__(self):
        self.x, self.y = -1, -1

    # If you print an object then its __str__ method will get called.
    # __str__ is intended to be as human-readable as possible.
    def __str__(self):
        return '%4d %4d' % (self.x, self.y)


def mouseCallback(event, x, y, flags, mouse_info):
    '''
    Callback to update a MouseInfo object with new X,Y coordinates
    '''
    mouse_info.x = x
    mouse_info.y = y


def drawCross(img, center, r, g, b):
    '''
    Draws a cross at the specified X,Y coordinates with color RGB
    '''
    d = 5                        # adjust d to change the size of the X marker
    thickness = 2                # line width
    color = (r, g, b)            # marker color
    ctrx = center[0]             # x coordinate of the marker center
    ctry = center[1]             # y coordinate of the marker center

    # Python: cv2.line(img, pt1, pt2, color[, thickness[, lineType[, shift]]]) --> None
    # lineType LINE_AA: antialiased line (named CV_AA in OpenCV 2.x)
    cv2.line(img, (ctrx - d, ctry - d), (ctrx + d, ctry + d), color, thickness, cv2.LINE_AA)
    cv2.line(img, (ctrx + d, ctry - d), (ctrx - d, ctry + d), color, thickness, cv2.LINE_AA)


def drawLines(img, points, r, g, b):
    '''
    Draws lines
    '''
    # Python: cv2.polylines(img, pts, isClosed, color[, thickness[, lineType[, shift]]]) --> None
    # pts: array of polygonal curves
    cv2.polylines(img, [np.int32(points)], isClosed=False, color=(r, g, b))


def newImage():
    '''
    Returns a new image
    '''
    return np.zeros((WINDOW_SIZE, WINDOW_SIZE, 3), np.uint8)   # matrix holding the image contents


if __name__ == '__main__':
    # Create a new image in a named window
    img = newImage()
    cv2.namedWindow(WINDOW_NAME)

    # Create an X,Y mouse info object and set the window's mouse callback to modify it
    mouse_info = MouseInfo()     # mouse_info stores the current mouse position

    # Install the mouse-event callback
    #   arg 1: name    - window name
    #   arg 2: onMouse - mouse callback
    #   arg 3: param   - optional parameter passed to the callback
    cv2.setMouseCallback(WINDOW_NAME, mouseCallback, mouse_info)

    # Loop until mouse inside window
    while True:
        if mouse_info.x > 0 and mouse_info.y > 0:     # mouse has entered the window
            break
        cv2.imshow(WINDOW_NAME, img)                  # show the black background until then
        if cv2.waitKey(1) == 27:                      # check for the ESC key
            exit(0)

    # These will hold the trajectories of the mouse location and the Kalman estimate
    measured_points = []         # list of measurements
    kalman_points = []           # list of estimates

    # Create a new Kalman filter for mouse tracking
    kalfilt = TrackerEKF()

    # Loop till user hits escape
    while True:
        # Serve up a fresh image
        img = newImage()

        # Grab current mouse position and add it to the trajectory
        measured = (mouse_info.x, mouse_info.y)
        measured_points.append(measured)   # note: the longer the program runs (or the shorter the interval), the longer this list grows

        # Update the Kalman filter with the mouse point, getting the estimate
        estimate = kalfilt.step((mouse_info.x, mouse_info.y))

        # Add the estimate to the trajectory
        estimated = [int(c) for c in estimate.flatten()]
        kalman_points.append(estimated)    # kalman_points is a list of 2D points, one estimate per update

        # Display the trajectories and current points
        drawLines(img, kalman_points, 0, 255, 0)      # path of the estimated (tracked) point
        drawCross(img, estimated, 255, 255, 255)      # X marker: the Kalman estimate of the position
        drawLines(img, measured_points, 255, 255, 0)  # path of the mouse
        drawCross(img, measured, 0, 0, 255)           # X marker: the current mouse position

        # Delay for specified interval, quitting on ESC
        cv2.imshow(WINDOW_NAME, img)                  # refresh the image every DELAY_MSEC milliseconds
        if cv2.waitKey(DELAY_MSEC) & 0xFF == 27:
            break

    # Close the window and de-allocate any associated memory
    cv2.destroyAllWindows()
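As a quick sanity check on what the filter is doing (a standalone sketch, not part of the demo), the scalar loop below reimplements the identity model inline with the same noise settings as TrackerEKF (qval=0.001, rval=0.1, pval=1), so it runs without tinyekf.py or OpenCV. Holding the "mouse" still at a fixed point makes the estimate converge to the measured position:

```python
import numpy as np

# Identity-model Kalman filter with TrackerEKF's noise settings,
# reimplemented inline (F = H = I, so the matrix equations simplify).
qval, rval = 0.001, 0.1
x = np.zeros((2, 1))
P = np.eye(2) * 1.0              # pval = 1
Q = np.eye(2) * qval
R = np.eye(2) * rval

z = np.array([[250.0], [250.0]])  # mouse held still at the window centre

for _ in range(50):
    P_pre = P + Q                                # predict (F = I)
    K = P_pre @ np.linalg.inv(P_pre + R)         # gain (H = I)
    x = x + K @ (z - x)                          # update toward measurement
    P = (np.eye(2) - K) @ P_pre

# After enough steps the estimate settles on the measurement.
```

This mirrors what you see on screen: when the mouse stops, the green (estimated) trace catches up with the yellow (measured) one.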
The result of running the program is shown in the figure below:

The quality of the tracking depends on two parameters in the program: qval represents the model (process) noise, i.e. the model's accuracy (obviously no model describes the motion perfectly, so each model contains a probabilistic part), and rval represents the measurement noise. The expression that corrects the prediction with the measurement is:
x_k = x_k^- + K_k (z_k - H x_k^-)    (x_k^- is the predicted state, z_k the measurement)
where the Kalman gain K is given by:
K_k = P_k^- H^T (H P_k^- H^T + R)^(-1)
From these formulas we can see that as R approaches 0, i.e. the measurement error is very small, the estimate moves closer to the measurement; as P approaches 0, the estimate moves closer to the prediction. Therefore, when rval in the program is much smaller than qval, the tracking is very smooth (the Kalman estimate leans toward the measurements). But when rval is large relative to qval, the tracking lags behind the mouse, which suggests that a more reasonable model of mouse motion is needed, such as a Constant Velocity (CV) or Constant Acceleration (CA) model. See the paper: Empirical evaluation of vehicular models for ego motion estimation. 2011 IEEE Intelligent Vehicles Symposium (IV), 534–539. doi:10.1109/IVS.2011.5940526. The paper notes that different models suit different situations: on a highway, cars rarely accelerate or brake, whereas in a busy city centre they do so constantly, so there a CA model fits better than a CV model. In general, more sophisticated models outperform simpler ones, especially in situations where the assumptions of the simple models no longer hold. This, I think, is the hardest part of Kalman filtering: building a reasonable model for the problem at hand.
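The lag is easy to reproduce numerically. The sketch below (an illustration, not part of the demo) runs a scalar identity-model filter over a steadily moving "mouse" with the two noise settings swapped; the heavily smoothed setting trails the true position because a static model cannot anticipate motion:

```python
def track(zs, qval, rval):
    """Scalar identity-model Kalman filter; returns the final estimate."""
    x, p = 0.0, 1.0
    for z in zs:
        p_pre = p + qval                 # predict (f is identity)
        k = p_pre / (p_pre + rval)       # scalar Kalman gain
        x = x + k * (z - x)              # correct toward the measurement
        p = (1.0 - k) * p_pre
    return x

# The "mouse" moves steadily from 0 to 99, like dragging the cursor.
zs = [float(i) for i in range(100)]

smooth = track(zs, qval=0.001, rval=0.1)    # rval >> qval: trust the model
loose  = track(zs, qval=0.1,   rval=0.001)  # rval << qval: trust measurements

lag_smooth = zs[-1] - smooth    # large: the static model falls behind
lag_loose  = zs[-1] - loose     # tiny: the filter follows the measurements
```

With a CV or CA model the prediction itself would move, removing most of this lag instead of just trading it against noise.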
As reference [3] puts it: "If you have a badly defined model, you will not get a good estimate. But you can relax your model by increasing your estimated error. This will let the Kalman filter rely more on the measurement values, but still allow some noise removal."
That is, when your model is inaccurate, you can increase the model-error parameter Q, and the estimate will then rely more on the measurements than on the model's predictions. But if the model is inaccurate anyway, why use a Kalman filter at all instead of just reading the sensor directly? Because sensor data is not perfectly accurate either, and higher-precision sensors cost more: "Sensors are noisy. The world is full of data and events that we want to measure and track, but we cannot rely on sensors to give us perfect information." So, to my current (admittedly shallow) understanding, the Kalman filter is a trade-off: when the model is more accurate, the estimate stays closer to the model's prediction; when the sensor is more accurate, the estimate stays closer to the measurement.
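The "still allow some noise removal" part can be demonstrated with a small sketch (an illustration under simple assumptions, not the original code): a noisy sensor reads a constant true value, and a scalar identity-model filter recovers it far more accurately than the raw readings:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical noisy sensor measuring a constant true value of 10.0.
truth = 10.0
zs = truth + rng.normal(0.0, 1.0, size=500)

def filter_series(zs, q, r):
    """Scalar identity-model Kalman filter; returns the estimate series."""
    x, p = 0.0, 1.0
    out = []
    for z in zs:
        p_pre = p + q
        k = p_pre / (p_pre + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p_pre
        out.append(x)
    return np.array(out)

# Small q relative to r: the model ("the value does not change") is trusted,
# so the measurement noise is averaged away.
est = filter_series(zs, q=1e-4, r=1.0)

# Compare errors after the filter has settled (skip the first 100 samples).
raw_err = np.mean(np.abs(zs[100:] - truth))
kf_err = np.mean(np.abs(est[100:] - truth))
```

Here the model happens to be exactly right, which is the best case for the trade-off; with a poor model you would raise q and accept noisier but less biased estimates.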
References:
[1] http://home.wlu.edu/~levys/kalman_tutorial/
[3] https://www.cs.cornell.edu/courses/cs4758/2012sp/materials/mi63slides.pdf
