Mean Shift, which we render in Chinese as "均值漂移", has found fairly wide application in clustering, image smoothing, image segmentation, and tracking. Since my current research is on tracking, this post mainly introduces target tracking with Mean Shift, and in doing so gives a fairly complete picture of Mean Shift.
(Parts of what follows are adapted from my senior classmate Chang Feng's (常峰) "Mean Shift Overview".) The concept of Mean Shift was first proposed by Fukunaga et al. in 1975, in a paper on estimating the gradient of a probability density function (The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition). Its original meaning was exactly what the name says: the shifted mean vector. There, Mean Shift is a noun denoting a vector; but as the theory developed, its meaning changed. Today, when we speak of the Mean Shift algorithm, we usually mean an iterative procedure: compute the shifted mean of the current point, move the point to that mean, take it as the new starting point, and continue until a termination condition is satisfied.
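To make the iteration concrete, here is a minimal sketch of my own (illustrative only, not taken from the papers above) that runs this procedure on 1-D samples with a flat kernel of bandwidth h: the point is repeatedly moved to the mean of the samples within h until the shift becomes negligible.

#include <stdio.h>
#include <math.h>

//plain mean-shift iteration with a flat (uniform) kernel of bandwidth h:
//repeatedly move x to the mean of the samples within distance h of x
double MeanShift1D(const double *samples, int n, double x, double h)
{
    for (int iter = 0; iter < 100; iter++)
    {
        double sum = 0.0;
        int count = 0;
        for (int i = 0; i < n; i++)
            if (fabs(samples[i] - x) <= h) { sum += samples[i]; count++; }
        if (count == 0) break; //no samples in the window, nowhere to move
        double mean = sum / count;
        if (fabs(mean - x) < 1e-6) break; //converged: the shift is negligible
        x = mean; //move to the shifted mean and use it as the new start point
    }
    return x; //a mode (local density maximum) of the samples
}

int main()
{
    double samples[] = {1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9};
    printf("mode near 1: %f\n", MeanShift1D(samples, 7, 0.5, 1.0));
    printf("mode near 5: %f\n", MeanShift1D(samples, 7, 4.0, 1.0));
    return 0;
}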
For a long time afterwards, however, Mean Shift attracted little attention, until 20 years later, in 1995, when another important paper on Mean Shift (Mean shift, mode seeking, and clustering) was published. In it, Yizong Cheng generalized the basic Mean Shift algorithm in two ways. First, he defined a family of kernel functions, so that the contribution of a sample to the mean shift vector varies with its distance from the point being shifted. Second, he introduced a weight coefficient, so that different sample points can carry different importance, which greatly widened the applicability of Mean Shift. Cheng also pointed out areas where Mean Shift could be applied and gave detailed examples.
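Written out, Cheng's generalization replaces the plain sample mean with a kernel- and weight-modulated mean: with kernel $K$ and sample weight $w$, the shifted mean at $x$ over samples $x_1, \dots, x_n$ and the mean shift vector are

$$m(x) = \frac{\sum_{i=1}^{n} K(x_i - x)\, w(x_i)\, x_i}{\sum_{i=1}^{n} K(x_i - x)\, w(x_i)}, \qquad M(x) = m(x) - x,$$

and the original 1975 formulation is recovered with a flat kernel and unit weights.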
Comaniciu et al. then cast non-rigid object tracking as a Mean Shift optimization problem (Mean-shift Blob Tracking through Scale Space), making real-time tracking possible. By now, tracking with Mean Shift is quite mature.
Target tracking is not a new problem; quite a few people in computer vision are currently working on it. Tracking means finding the target's position in the next frame from its known position in the current frame.
Mean Shift's application to tracking is shown below, mainly in code form.
void CObjectTracker::ObjeckTrackerHandlerByUser(IplImage *frame)//tracking function
{
m_cActiveObject = 0;
if (m_sTrackingObjectTable[m_cActiveObject].Status)
{
if (!m_sTrackingObjectTable[m_cActiveObject].assignedAnObject)
{
FindHistogram(frame,m_sTrackingObjectTable[m_cActiveObject].initHistogram);
m_sTrackingObjectTable[m_cActiveObject].assignedAnObject = true;
}
else
{
FindNextLocation(frame);//use mean shift iterations to find the target's next location
DrawObjectBox(frame);
}
}
}
void CObjectTracker::FindNextLocation(IplImage *frame)
{
int i, j, opti, optj;
SINT16 scale[3]={-3, 3, 0};
FLOAT32 dist, optdist;
SINT16 h, w, optX, optY;
//try no-scaling
FindNextFixScale(frame);//find the target's next approximate location at the current scale
optdist=LastDist;
optX=m_sTrackingObjectTable[m_cActiveObject].X;
optY=m_sTrackingObjectTable[m_cActiveObject].Y;
//try one of the 9 possible scaling
i=rand()%3;//random index into scale[] for the height
j=rand()%3;//random index into scale[] for the width
h=m_sTrackingObjectTable[m_cActiveObject].H;
w=m_sTrackingObjectTable[m_cActiveObject].W;
if(h+scale[i]>10 && w+scale[j]>10 && h+scale[i]<m_nImageHeight/2 && w+scale[j]<m_nImageWidth/2)
{
m_sTrackingObjectTable[m_cActiveObject].H=h+scale[i];
m_sTrackingObjectTable[m_cActiveObject].W=w+scale[j];
FindNextFixScale(frame);
if( (dist=LastDist) < optdist ) //scaling is better
{
optdist=dist;
// printf("Next%f->/n", dist);
}
else //no scaling is better
{
m_sTrackingObjectTable[m_cActiveObject].X=optX;
m_sTrackingObjectTable[m_cActiveObject].Y=optY;
m_sTrackingObjectTable[m_cActiveObject].H=h;
m_sTrackingObjectTable[m_cActiveObject].W=w;
}
}
TotalDist+=optdist; //accumulate the latest distance
// printf("\n");
}
Here I again explain mean shift in the context of tracking, starting with the mathematics.

1. Target model. The algorithm describes the target by a weighted probability distribution over feature values. This is the typical way a target is described in pattern recognition, as opposed to the state equations used in automatic control theory. With m feature values (which can be understood as pixel gray levels), the target model is

$$q_u = C \sum_{i=1}^{n} k\!\left(\left\|\frac{X_0 - X_i}{H}\right\|^2\right) \delta\!\left[b(X_i) - u\right], \qquad u = 1, \dots, m,$$

where X0 is the vector at the window center (possibly an RGB vector, or a gray value), Xi is the vector at the i-th point in the window, and b(Xi) is the feature value of Xi. C is a normalization constant guaranteeing q1+q2+...+qm=1, and H is the bandwidth vector of the kernel. m is the number of feature values, which in image processing can be understood as the number of gray levels, so that the feature value u is the corresponding gray level. The δ function is an impulse function ensuring that only pixels with feature value u contribute to the distribution, so qu can be understood as a kernel-weighted frequency of the gray value u.
2. Candidate model. The match candidate is likewise described by a weighted probability distribution of feature values,

$$p_u(Y) = C_h \sum_{i=1}^{n_h} k\!\left(\left\|\frac{Y - X_i}{H_h}\right\|^2\right) \delta\!\left[b(X_i) - u\right],$$

where Y is the center of the candidate window, Xi is the vector at the i-th point in the candidate window, Hh is the kernel bandwidth vector of the candidate window, and Ch is the normalization constant of the candidate window's feature distribution.
3. Similarity between the candidate and the target model. The similarity function can be taken as the Bhattacharyya coefficient,

$$\rho(Y) = \sum_{u=1}^{m} \sqrt{p_u(Y)\, q_u}.$$
4. Matching is then the optimization problem of finding the maximum of the similarity function, which Mean-Shift solves by gradient ascent. First expand ρ(Y) in a Taylor series around ρ(Y0) and keep the first two terms:

$$\rho(Y) \approx \frac{1}{2} \sum_{u=1}^{m} \sqrt{p_u(Y_0)\, q_u} + \frac{C_h}{2} \sum_{i=1}^{n_h} w_i\, k\!\left(\left\|\frac{Y - X_i}{H_h}\right\|^2\right).$$

For ρ(Y) to iterate toward the maximum, the search direction of Y only needs to agree with the gradient direction. Differentiating shows that the gradient direction at Y0 is determined by the weights

$$w_i = \sum_{u=1}^{m} \sqrt{\frac{q_u}{p_u(Y_0)}}\, \delta\!\left[b(X_i) - u\right].$$

Therefore, if Y1 is determined as

$$Y_1 = \frac{\sum_{i=1}^{n_h} X_i\, w_i\, g\!\left(\left\|\frac{Y_0 - X_i}{H_h}\right\|^2\right)}{\sum_{i=1}^{n_h} w_i\, g\!\left(\left\|\frac{Y_0 - X_i}{H_h}\right\|^2\right)}, \qquad g(x) = -k'(x),$$

then Y1 - Y0 will point in the gradient direction.
That is the mathematics behind mean shift; a verbal description was already given in the previous post. Mean shift tracking is a deterministic algorithm, while the particle filter is a statistical method. Compared with a particle filter, a mean shift tracker is usually better in real-time performance, but in theory its tracking accuracy is slightly inferior. The essence of mean shift tracking is to locate the target's next position from the corresponding template, iterating to a new center point, i.e. the target's new position. The tracking code is as follows:
/**********************************************************************
Bilkent University:
Mean-shift Tracker based Moving Object Tracker in Video
Version: 1.0
Compiler: Microsoft Visual C++ 6.0 (tested in both debug and release
mode)
Modified by Mr Zhou
**********************************************************************/
#include "ObjectTracker.h"
#include "utils.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
/*
#define GetRValue(rgb) ((UBYTE8) (rgb))
#define GetGValue(rgb) ((UBYTE8) (((ULONG_32) (rgb)) >> 8))
#define GetBValue(rgb) ((UBYTE8) ((rgb) >> 16))
*/
//#define RGB(r, g ,b) ((ULONG_32) (((UBYTE8) (r) | ((UBYTE8) (g) << 8)) | (((ULONG_32) (UBYTE8) (b)) << 16)))
#define min(a, b) (((a) < (b)) ? (a) : (b))
#define max(a, b) (((a) > (b)) ? (a) : (b))
#define MEANSHIFT_ITARATION_NO 5
#define DISTANCE_ITARATION_NO 1
#define ALPHA 1
#define EDGE_DETECT_TRESHOLD 32
//////////////////////////////////////////////////
/*
1 Given the target's initial position and size, compute the target's histogram in the image;
2 Input a new image, and iterate until convergence:
    compute the new histogram of the corresponding image region;
    compare the new histogram with the target histogram and compute the weights;
    from the weights, compute the centroid of the corresponding image region;
    from the centroid, correct the target position.
The histogram consists of two parts, each of size 4096:
the 256*256*256 RGB combinations are reduced to 16*16*16=4096 combinations.
If a point in the target region is an edge point, it is counted in the second
part of the histogram; otherwise it is counted in the first part.
*/
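To make this histogram layout concrete, here is a small stand-alone sketch (a hypothetical helper, not part of the original file) that computes the 8192-entry bin index exactly as FindHistogram and FindWightsAndCOM below do with 4096*E+256*qR+16*qG+qB:

//hypothetical helper illustrating the two-part histogram layout:
//bins 0..4095 hold non-edge pixels, bins 4096..8191 hold edge pixels,
//and each RGB channel is quantized from 256 levels down to 16
int HistogramBin(unsigned char R, unsigned char G, unsigned char B, int isEdge)
{
    int qR = R / 16; //0..15
    int qG = G / 16; //0..15
    int qB = B / 16; //0..15
    return 4096 * isEdge + 256 * qR + 16 * qG + qB; //0..8191
}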
//////////////////////////////////////////////////
CObjectTracker::CObjectTracker(INT32 imW,INT32 imH,IMAGE_TYPE eImageType)
{
m_nImageWidth = imW;
m_nImageHeight = imH;
m_eIMAGE_TYPE = eImageType;
m_cSkipValue = 0;
for (UBYTE8 i=0;i<MAX_OBJECT_TRACK_NUMBER;i++)//initialize every object slot
{
m_sTrackingObjectTable[i].Status = false;
for(SINT16 j=0;j<HISTOGRAM_LENGTH;j++)
m_sTrackingObjectTable[i].initHistogram[j] = 0;
}
m_nFrameCtr = 0;
m_uTotalTime = 0;
m_nMaxEstimationTime = 0;
m_cActiveObject = 0;
TotalDist=0.0;
LastDist=0.0;
switch (eImageType)
{
case MD_RGBA:
m_cSkipValue = 4 ;
break ;
case MD_RGB:
m_cSkipValue = 3 ;
break ;
}
}
CObjectTracker::~CObjectTracker()
{
}
//returns pixel values in format |0|B|G|R| at (x,y)
/*
ULONG_32 CObjectTracker::GetPixelValues(UBYTE8 *frame,SINT16 x,SINT16 y)
{
ULONG_32 pixelValues = 0;
pixelValues = *(frame+(y*m_nImageWidth+x)*m_cSkipValue+2)|//0BGR
*(frame+(y*m_nImageWidth+x)*m_cSkipValue+1) << 8|
*(frame+(y*m_nImageWidth+x)*m_cSkipValue) << 16;
return(pixelValues);
}*/
//set RGB components at (x,y)
void CObjectTracker::SetPixelValues(IplImage *r,IplImage *g,IplImage *b,ULONG_32 pixelValues,SINT16 x,SINT16 y)
{
// *(frame+(y*m_nImageWidth+x)*m_cSkipValue+2) = UBYTE8(pixelValues & 0xFF);
// *(frame+(y*m_nImageWidth+x)*m_cSkipValue+1) = UBYTE8((pixelValues >> 8) & 0xFF);
// *(frame+(y*m_nImageWidth+x)*m_cSkipValue) = UBYTE8((pixelValues >> 16) & 0xFF);
//setpix32f
setpix8c(r, y, x, UBYTE8(pixelValues & 0xFF));
setpix8c(g, y, x, UBYTE8((pixelValues >> 8) & 0xFF));
setpix8c(b, y, x, UBYTE8((pixelValues >> 16) & 0xFF));
}
// returns box color
ULONG_32 CObjectTracker::GetBoxColor()
{
ULONG_32 pixelValues = 0;
switch(m_cActiveObject)
{
case 0:
pixelValues = RGB(255,0,0);
break;
case 1:
pixelValues = RGB(0,255,0);
break;
case 2:
pixelValues = RGB(0,0,255);
break;
case 3:
pixelValues = RGB(255,255,0);
break;
case 4:
pixelValues = RGB(255,0,255);
break;
case 5:
pixelValues = RGB(0,255,255);
break;
case 6:
pixelValues = RGB(255,255,255);
break;
case 7:
pixelValues = RGB(128,0,128);
break;
case 8:
pixelValues = RGB(128,128,0);
break;
case 9:
pixelValues = RGB(128,128,128);
break;
case 10:
pixelValues = RGB(255,128,0);
break;
case 11:
pixelValues = RGB(0,128,128);
break;
case 12:
pixelValues = RGB(123,50,10);
break;
case 13:
pixelValues = RGB(10,240,126);
break;
case 14:
pixelValues = RGB(0,128,255);
break;
case 15:
pixelValues = RGB(128,200,20);
break;
default:
break;
}
return(pixelValues);
}
//initialize the parameters of one target
void CObjectTracker::ObjectTrackerInitObjectParameters(SINT16 x,SINT16 y,SINT16 Width,SINT16 Height)
{
m_cActiveObject = 0;
m_sTrackingObjectTable[m_cActiveObject].X = x;
m_sTrackingObjectTable[m_cActiveObject].Y = y;
m_sTrackingObjectTable[m_cActiveObject].W = Width;
m_sTrackingObjectTable[m_cActiveObject].H = Height;
m_sTrackingObjectTable[m_cActiveObject].vectorX = 0;
m_sTrackingObjectTable[m_cActiveObject].vectorY = 0;
m_sTrackingObjectTable[m_cActiveObject].Status = true;
m_sTrackingObjectTable[m_cActiveObject].assignedAnObject = false;
}
//perform one tracking step
void CObjectTracker::ObjeckTrackerHandlerByUser(IplImage *frame)
{
m_cActiveObject = 0;
if (m_sTrackingObjectTable[m_cActiveObject].Status)
{
if (!m_sTrackingObjectTable[m_cActiveObject].assignedAnObject)
{
//compute the target's initial histogram
FindHistogram(frame,m_sTrackingObjectTable[m_cActiveObject].initHistogram);
m_sTrackingObjectTable[m_cActiveObject].assignedAnObject = true;
}
else
{
//search for the target in the image
FindNextLocation(frame);
DrawObjectBox(frame);
}
}
}
//Extracts the histogram of box
//frame: image
//histogram: histogram
//Computes the histogram of the current target in the image frame.
//The histogram consists of two parts, each of size 4096:
//the 256*256*256 RGB combinations are reduced to 16*16*16=4096 combinations.
//If a point in the target region is an edge point, it is counted in the
//second part of the histogram; otherwise it is counted in the first part.
void CObjectTracker::FindHistogram(IplImage *frame, FLOAT32 (*histogram))
{
SINT16 i = 0;
SINT16 x = 0;
SINT16 y = 0;
UBYTE8 E = 0;
UBYTE8 qR = 0,qG = 0,qB = 0;
// ULONG_32 pixelValues = 0;
UINT32 numberOfPixel = 0;
IplImage* r, * g, * b;
r = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
g = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
b = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
cvCvtPixToPlane( frame, b, g, r, NULL ); //divide color image into separate planes r, g, b. The exact sequence doesn't matter.
for (i=0;i<HISTOGRAM_LENGTH;i++) //reset all histogram
histogram[i] = 0.0;
//for all the pixels in the region
for (y=max(m_sTrackingObjectTable[m_cActiveObject].Y-m_sTrackingObjectTable[m_cActiveObject].H/2,0);y<=min(m_sTrackingObjectTable[m_cActiveObject].Y+m_sTrackingObjectTable[m_cActiveObject].H/2,m_nImageHeight-1);y++)
for (x=max(m_sTrackingObjectTable[m_cActiveObject].X-m_sTrackingObjectTable[m_cActiveObject].W/2,0);x<=min(m_sTrackingObjectTable[m_cActiveObject].X+m_sTrackingObjectTable[m_cActiveObject].W/2,m_nImageWidth-1);x++)
{
//edge information: whether the gray-level difference between this point and its 4 neighbors exceeds a threshold
E = CheckEdgeExistance(r, g, b,x,y);
qR = (UBYTE8)pixval8c( r, y, x )/16;//quantize R component
qG = (UBYTE8)pixval8c( g, y, x )/16;//quantize G component
qB = (UBYTE8)pixval8c( b, y, x )/16;//quantize B component
histogram[4096*E+256*qR+16*qG+qB] += 1; //accumulate the histogram according to the edge flag (HISTOGRAM_LENGTH=8192)
numberOfPixel++;
}
for (i=0;i<HISTOGRAM_LENGTH;i++) //normalize
histogram[i] = histogram[i]/numberOfPixel;
//for (i=0;i<HISTOGRAM_LENGTH;i++)
// printf("histogram[%d]=%d/n",i,histogram[i]);
// printf("numberOfPixel=%d/n",numberOfPixel);
cvReleaseImage(&r);
cvReleaseImage(&g);
cvReleaseImage(&b);
}
//Draw box around object
void CObjectTracker::DrawObjectBox(IplImage *frame)
{
SINT16 x_diff = 0;
SINT16 x_sum = 0;
SINT16 y_diff = 0;
SINT16 y_sum = 0;
SINT16 x = 0;
SINT16 y = 0;
ULONG_32 pixelValues = 0;
IplImage* r, * g, * b;
r = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
g = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
b = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
cvCvtPixToPlane( frame, b, g, r, NULL );
pixelValues = GetBoxColor();
//the x left and right bounds
x_sum = min(m_sTrackingObjectTable[m_cActiveObject].X+m_sTrackingObjectTable[m_cActiveObject].W/2+1,m_nImageWidth-1);//right bound
x_diff = max(m_sTrackingObjectTable[m_cActiveObject].X-m_sTrackingObjectTable[m_cActiveObject].W/2,0);//left bound
//the y upper and lower bounds
y_sum = min(m_sTrackingObjectTable[m_cActiveObject].Y+m_sTrackingObjectTable[m_cActiveObject].H/2+1,m_nImageHeight-1);//bottom bound
y_diff = max(m_sTrackingObjectTable[m_cActiveObject].Y-m_sTrackingObjectTable[m_cActiveObject].H/2,0);//top bound
for (y=y_diff;y<=y_sum;y++)
{
SetPixelValues(r, g, b,pixelValues,x_diff,y);
SetPixelValues(r, g, b,pixelValues,x_diff+1,y);
SetPixelValues(r, g, b,pixelValues,x_sum-1,y);
SetPixelValues(r, g, b,pixelValues,x_sum,y);
}
for (x=x_diff;x<=x_sum;x++)
{
SetPixelValues(r, g, b,pixelValues,x,y_diff);
SetPixelValues(r, g, b,pixelValues,x,y_diff+1);
SetPixelValues(r, g, b,pixelValues,x,y_sum-1);
SetPixelValues(r, g, b,pixelValues,x,y_sum);
}
cvCvtPlaneToPix(b, g, r, NULL, frame);
cvReleaseImage(&r);
cvReleaseImage(&g);
cvReleaseImage(&b);
}
// Computes weights and drives the new location of object in the next frame
//frame: image
//histogram: histogram
//compute the weights and update the target coordinates
void CObjectTracker::FindWightsAndCOM(IplImage *frame, FLOAT32 (*histogram))
{
SINT16 i = 0;
SINT16 x = 0;
SINT16 y = 0;
UBYTE8 E = 0;
FLOAT32 sumOfWeights = 0;
SINT16 ptr = 0;
UBYTE8 qR = 0,qG = 0,qB = 0;
FLOAT32 newX = 0.0;
FLOAT32 newY = 0.0;
// ULONG_32 pixelValues = 0;
IplImage* r, * g, * b;
FLOAT32 *weights = new FLOAT32[HISTOGRAM_LENGTH];
for (i=0;i<HISTOGRAM_LENGTH;i++)
{
if (histogram[i] >0.0 )
weights[i] = m_sTrackingObjectTable[m_cActiveObject].initHistogram[i]/histogram[i]; //qu/pu(y0)
else
weights[i] = 0.0;
}
r = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
g = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
b = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
cvCvtPixToPlane( frame, b, g, r, NULL ); //divide color image into separate planes r, g, b. The exact sequence doesn't matter.
for (y=max(m_sTrackingObjectTable[m_cActiveObject].Y-m_sTrackingObjectTable[m_cActiveObject].H/2,0);y<=min(m_sTrackingObjectTable[m_cActiveObject].Y+m_sTrackingObjectTable[m_cActiveObject].H/2,m_nImageHeight-1);y++)
for (x=max(m_sTrackingObjectTable[m_cActiveObject].X-m_sTrackingObjectTable[m_cActiveObject].W/2,0);x<=min(m_sTrackingObjectTable[m_cActiveObject].X+m_sTrackingObjectTable[m_cActiveObject].W/2,m_nImageWidth-1);x++)
{
E = CheckEdgeExistance(r, g, b,x,y);
qR = (UBYTE8)pixval8c( r, y, x )/16;
qG = (UBYTE8)pixval8c( g, y, x )/16;
qB = (UBYTE8)pixval8c( b, y, x )/16;
ptr = 4096*E+256*qR+16*qG+qB; //some recalculation here; the bin number of (x, y) could in fact be stored somewhere
newX += (weights[ptr]*x);
newY += (weights[ptr]*y);
sumOfWeights += weights[ptr];
}
if (sumOfWeights>0)
{
m_sTrackingObjectTable[m_cActiveObject].X = SINT16((newX/sumOfWeights) + 0.5); //update location
m_sTrackingObjectTable[m_cActiveObject].Y = SINT16((newY/sumOfWeights) + 0.5);
}
cvReleaseImage(&r);
cvReleaseImage(&g);
cvReleaseImage(&b);
delete[] weights; weights = 0;
}
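Note how FindWightsAndCOM realizes the Y1 formula from the derivation above: the implementation applies no spatial kernel inside the window, which corresponds to a kernel profile k whose derivative g is constant, so g cancels and the update reduces to the weighted centroid

$$Y_1 = \frac{\sum_i w_i X_i}{\sum_i w_i},$$

which is exactly what newX/sumOfWeights and newY/sumOfWeights compute.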
// Returns the distance between two histograms.
FLOAT32 CObjectTracker::FindDistance(FLOAT32 (*histogram))
{
SINT16 i = 0;
FLOAT32 distance = 0;
for(i=0;i<HISTOGRAM_LENGTH;i++)
distance += FLOAT32(sqrt(DOUBLE64(m_sTrackingObjectTable[m_cActiveObject].initHistogram[i]
*histogram[i])));
return(sqrt(1-distance));
}
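FindDistance implements the Bhattacharyya distance between the model histogram q and the candidate histogram p,

$$d(p, q) = \sqrt{1 - \sum_{u=1}^{m} \sqrt{p_u\, q_u}} = \sqrt{1 - \rho},$$

so minimizing this distance is equivalent to maximizing the similarity ρ used in the derivation above.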
//An alternative distance measurement
FLOAT32 CObjectTracker::CompareHistogram(UBYTE8 (*histogram))
{
SINT16 i = 0;
FLOAT32 distance = 0.0;
FLOAT32 difference = 0.0;
for (i=0;i<HISTOGRAM_LENGTH;i++)
{
difference = FLOAT32(m_sTrackingObjectTable[m_cActiveObject].initHistogram[i]
-histogram[i]);
if (difference>0)
distance += difference;
else
distance -= difference;
}
return(distance);
}
// Returns the edge information of a pixel at (x,y), assuming a large jump in value around edge pixels
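// The weights 3,6,1 approximate 10x the pixel luminance, gray = (3*R + 6*G + B)/10,
// which is why each difference is divided by 10 before being compared with EDGE_DETECT_TRESHOLD.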
UBYTE8 CObjectTracker::CheckEdgeExistance(IplImage *r, IplImage *g, IplImage *b, SINT16 _x,SINT16 _y)
{
UBYTE8 E = 0;
SINT16 GrayCenter = 0;
SINT16 GrayLeft = 0;
SINT16 GrayRight = 0;
SINT16 GrayUp = 0;
SINT16 GrayDown = 0;
// ULONG_32 pixelValues = 0;
// pixelValues = GetPixelValues(frame,_x,_y);
GrayCenter = SINT16(3*pixval8c( r, _y, _x )+6*pixval8c( g, _y, _x )+pixval8c( b, _y, _x ));
if (_x>0)
{
// pixelValues = GetPixelValues(frame,_x-1,_y);
GrayLeft = SINT16(3*pixval8c( r, _y, _x-1 )+6*pixval8c( g, _y, _x-1 )+pixval8c( b, _y, _x-1 ));
}
if (_x < (m_nImageWidth-1))
{
// pixelValues = GetPixelValues(frame,_x+1,_y);
GrayRight = SINT16(3*pixval8c( r, _y, _x+1 )+6*pixval8c( g, _y, _x+1 )+pixval8c( b, _y, _x+1 ));
}
if (_y>0)
{
// pixelValues = GetPixelValues(frame,_x,_y-1);
GrayUp = SINT16(3*pixval8c( r, _y-1, _x )+6*pixval8c( g, _y-1, _x )+pixval8c( b, _y-1, _x ));
}
if (_y<(m_nImageHeight-1))
{
// pixelValues = GetPixelValues(frame,_x,_y+1);
GrayDown = SINT16(3*pixval8c( r, _y+1, _x )+6*pixval8c( g, _y+1, _x )+pixval8c( b, _y+1, _x ));
}
if (abs((GrayCenter-GrayLeft)/10)>EDGE_DETECT_TRESHOLD)
E = 1;
if (abs((GrayCenter-GrayRight)/10)>EDGE_DETECT_TRESHOLD)
E = 1;
if (abs((GrayCenter-GrayUp)/10)>EDGE_DETECT_TRESHOLD)
E = 1;
if (abs((GrayCenter-GrayDown)/10)>EDGE_DETECT_TRESHOLD)
E = 1;
return(E);
}
// Alpha blending: update the initial histogram with the current histogram (with ALPHA defined as 1, the initial histogram stays unchanged)
void CObjectTracker::UpdateInitialHistogram(UBYTE8 (*histogram))
{
SINT16 i = 0;
for (i=0; i<HISTOGRAM_LENGTH; i++)
m_sTrackingObjectTable[m_cActiveObject].initHistogram[i] = ALPHA*m_sTrackingObjectTable[m_cActiveObject].initHistogram[i]
+(1-ALPHA)*histogram[i];
}
// Mean-shift iteration
//frame: image
//Mean-Shift iterations to find the new center point
void CObjectTracker::FindNextLocation(IplImage *frame)
{
int i, j, opti, optj;
SINT16 scale[3]={-3, 3, 0};
FLOAT32 dist, optdist;
SINT16 h, w, optX, optY;
//try no-scaling
FindNextFixScale(frame);
optdist=LastDist;
optX=m_sTrackingObjectTable[m_cActiveObject].X;
optY=m_sTrackingObjectTable[m_cActiveObject].Y;
//try one of the 9 possible scaling
i=rand()%3;//random index into scale[] for the height
j=rand()%3;//random index into scale[] for the width
h=m_sTrackingObjectTable[m_cActiveObject].H;
w=m_sTrackingObjectTable[m_cActiveObject].W;
if(h+scale[i]>10 && w+scale[j]>10 && h+scale[i]<m_nImageHeight/2 && w+scale[j]<m_nImageWidth/2)
{
m_sTrackingObjectTable[m_cActiveObject].H=h+scale[i];//apply the scale step that was bounds-checked above
m_sTrackingObjectTable[m_cActiveObject].W=w+scale[j];
FindNextFixScale(frame);
if( (dist=LastDist) < optdist ) //scaling is better
{
optdist=dist;
// printf("Next%f->/n", dist);
}
else //no scaling is better
{
m_sTrackingObjectTable[m_cActiveObject].X=optX;
m_sTrackingObjectTable[m_cActiveObject].Y=optY;
m_sTrackingObjectTable[m_cActiveObject].H=h;
m_sTrackingObjectTable[m_cActiveObject].W=w;
}
}
TotalDist+=optdist; //accumulate the latest distance
// printf("\n");
}
void CObjectTracker::FindNextFixScale(IplImage *frame)
{
UBYTE8 iteration = 0;
SINT16 optX, optY;
FLOAT32 *currentHistogram = new FLOAT32[HISTOGRAM_LENGTH];
FLOAT32 dist, optdist=1.0;
for (iteration=0; iteration<MEANSHIFT_ITARATION_NO; iteration++)
{
FindHistogram(frame,currentHistogram); //current frame histogram, using the last frame's location as the starting point
FindWightsAndCOM(frame,currentHistogram);//derive weights and the new location
//FindHistogram(frame,currentHistogram); //update histogram
//UpdateInitialHistogram(currentHistogram);//update initial histogram
if( ((dist=FindDistance(currentHistogram)) < optdist) || iteration==0 )
{
optdist=dist;
optX=m_sTrackingObjectTable[m_cActiveObject].X;
optY=m_sTrackingObjectTable[m_cActiveObject].Y;
// printf("%f->", dist);
}
else //bad iteration, then find a better start point for next iteration
{
m_sTrackingObjectTable[m_cActiveObject].X=(m_sTrackingObjectTable[m_cActiveObject].X+optX)/2;
m_sTrackingObjectTable[m_cActiveObject].Y=(m_sTrackingObjectTable[m_cActiveObject].Y+optY)/2;
}
}//end for
m_sTrackingObjectTable[m_cActiveObject].X=optX;
m_sTrackingObjectTable[m_cActiveObject].Y=optY;
LastDist=optdist; //the latest distance
// printf("/n");
delete[] currentHistogram; currentHistogram = 0;
}
float CObjectTracker::GetTotalDist(void)
{
return(TotalDist);
}