1 Introduction
This article presents a face recognition algorithm based on deep learning and sparse representation.
First, face features are extracted with a deep network (VGG-Face); second, PCA reduces the dimensionality of the extracted features; finally, sparse representation classification performs the feature matching. Recognition performance on the AR database is evaluated with the CMC curve, and the complete code for the whole pipeline is given at the end.
2 Face Recognition Based on Deep Learning and Sparse Representation
2.1 Extracting Face Features with VGGFace
The following describes how face features are extracted with VGGFace. The database we use is the AR face database; sample face images from the database are shown below.

(Figure: sample face images from the AR database)

We extract the face features with VGGFace as follows:
- We use MatConvNet as the deep learning framework. MatConvNet can be downloaded from http://www.vlfeat.org/matconvnet/; I use version 1.0-beta19, but the latest release also works.
- The pre-trained VGG-Face model can be downloaded from http://www.robots.ox.ac.uk/~vgg/software/vgg_face/; I use the MatConvNet version, which is about 1.01 GB.
Denoting the VGGFace model by F and an input image by x, the extracted feature is y = F(x).
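Below is a minimal sketch of this step (it assumes MatConvNet has already been set up with vl_setupnn and that vgg-face.mat has been downloaded; the image file name is only an example). Layer index 36 is the same layer whose output is used as the feature in the full code of Section 3.

net = load('data/models/vgg-face.mat');                        % pre-trained VGG-Face model
im  = imread('some_face.jpg');                                 % example input image (hypothetical file name)
im_ = single(im);                                              % note: 0-255 range
im_ = imresize(im_, net.meta.normalization.imageSize(1:2));
if size(im_, 3) == 1
    im_ = repmat(im_, [1 1 3]);                                % replicate a grayscale image into 3 channels
end
im_ = bsxfun(@minus, im_, net.meta.normalization.averageImage);% subtract the mean image
res = vl_simplenn(net, im_);                                   % forward pass
y   = res(36).x(:);                                            % 4096-D feature y = F(x)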
2.2 Reducing Feature Dimensionality with PCA
The features extracted by VGGFace are 4096-dimensional. We use PCA to reduce the extracted features to 128 dimensions.
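As a small illustration, the same reduction can be sketched with MATLAB's built-in svd instead of the fastPCA/scaling helpers used in the full code of Section 3; X, W, and meanVec are illustrative names.

% Minimal PCA sketch: reduce 4096-D features to 128-D.
% X is an N x 4096 matrix with one feature per row (N >= 128 assumed).
dim     = 128;
meanVec = mean(X, 1);
Xc      = bsxfun(@minus, X, meanVec);                          % center the data
[~, ~, V] = svd(Xc, 'econ');                                   % principal directions
W       = V(:, 1:dim);                                         % 4096 x 128 projection matrix
Xlow    = Xc * W;                                              % N x 128 reduced features
% A new feature y (1 x 4096) is projected the same way: ylow = (y - meanVec) * W;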
2.3 Face Matching with Sparse Representation
Suppose the gallery dictionary built from the database contains n feature vectors in total, D = [d_1, d_2, ..., d_n], where each column d_i is the PCA-reduced, L2-normalized VGGFace feature of one gallery image and the columns belonging to the same subject are grouped together. Given the feature y of a probe face, we solve the L1-regularized sparse coding problem

min_a 0.5*||y - D*a||_2^2 + beta*||a||_1,

and for every class m compute the reconstruction residual r_m = ||y - D_m*a_m||_2 using only the dictionary columns and coefficients of that class. Finally, the sparse representation classifier assigns the probe face to the class with the smallest residual.
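A minimal sketch of this matching step follows. It replaces the external sparse_coding routine used in Section 3 with a plain ISTA loop for the L1 problem; D, y, beta, and the per-class index sets classIdx are assumed to be given.

% Minimal SRC sketch: D is a d x n dictionary (L2-normalized gallery features
% as columns), y is a d x 1 probe feature, classIdx{m} holds the column
% indices of class m. Solves min_a 0.5*||y - D*a||^2 + beta*||a||_1 with ISTA.
beta = 0.4;
L    = norm(D)^2;                                              % Lipschitz constant of the gradient
a    = zeros(size(D, 2), 1);
for t = 1:200
    g = D' * (D*a - y);                                        % gradient of the quadratic term
    a = a - g / L;                                             % gradient step
    a = sign(a) .* max(abs(a) - beta/L, 0);                    % soft-thresholding (L1 proximal step)
end
for m = 1:numel(classIdx)                                      % per-class reconstruction residual
    r(m) = norm(y - D(:, classIdx{m}) * a(classIdx{m}));
end
[~, predictedClass] = min(r);                                  % class with the smallest residual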
3 Code
function cnn_vgg_faces()
%CNN_VGG_FACES Demonstrates how to use VGG-Face
clear all
clc
addpath PCA
run(fullfile(fileparts(mfilename('fullpath')),...
'..', 'matlab', 'vl_setupnn.m')) ;
net = load('data/models/vgg-face.mat') ;
list = dir('../data/AR');                                      % list the AR image files
C = 100;                                                       % number of subjects
img_list = list(3:end);                                        % skip the '.' and '..' entries
index = [1, 10];                                               % the two gallery images used for each subject
%% Build the VGGFace-based gallery dictionary
dictionary = [];
for i = 1:C
    disp(i)
    numEachGalImg(i) = 0;
    for j = 1:2
        im = imread(strcat('../data/AR/', img_list((i-1)*26+index(j)).name));
        im_ = single(im) ;                                     % note: 255 range
        im_ = imresize(im_, net.meta.normalization.imageSize(1:2)) ;
        for k = 1:3                                            % replicate the grayscale image into 3 channels
            im1_(:,:,k) = im_;
        end
        im2_ = bsxfun(@minus, im1_, net.meta.normalization.averageImage) ;
        res = vl_simplenn(net, im2_) ;
        feature_p(:,j) = res(36).x(:);                         % 4096-D feature from layer 36
    end
    numEachGalImg(i) = numEachGalImg(i) + size(feature_p,2);
    dictionary = [dictionary feature_p];                       % append this subject's gallery features
end
%% Reduce the feature dimensionality with PCA
FaceContainer = double(dictionary');                           % one gallery feature per row
[pcaFaces W meanVec] = fastPCA(FaceContainer, 128);            % project 4096-D features to 128-D
X = pcaFaces;
[X, A0, B0] = scaling(X);                                      % rescale each dimension
LFWparameter.mean = meanVec;
LFWparameter.A = A0;
LFWparameter.B = B0;
LFWparameter.V = W;
imfo = LFWparameter;
train_fea = (double(FaceContainer) - repmat(imfo.mean, size(FaceContainer,1), 1)) * imfo.V;
dictionary = scaling(train_fea, 1, imfo.A, imfo.B);
for i = 1:size(dictionary, 1)                                  % L2-normalize every dictionary atom
    dictionary(i,:) = dictionary(i,:) / norm(dictionary(i,:));
end
dictionary = double(dictionary);
totalGalKeys = sum(numEachGalImg);
cumNumEachGalImg = [0; cumsum(numEachGalImg')];                % cumulative per-class column offsets
%% Feature matching with sparse coding
% sparse coding parameters
if ~exist('opt_choice', 'var')
opt_choice = 1;
end
num_bases = 128;
beta = 0.4;
batch_size = size(dictionary, 1);
num_iters = 5;
if opt_choice==1
sparsity_func= 'L1';
epsilon = [];
elseif opt_choice==2
sparsity_func= 'epsL1';
epsilon = 0.01;
end
Binit = [];
fname_save = sprintf('../results/sc_%s_b%d_beta%g_%s', sparsity_func, num_bases, beta, datestr(now, 30));
AtA = dictionary * dictionary';                                % Gram matrix of the dictionary
for i = 1:C
    fprintf('%s \n', num2str(i));
    tic
    % The 26th image of each subject is used as the probe.
    im = imread(strcat('../data/AR/', img_list((i-1)*26+26).name));
    im_ = single(im) ;                                         % note: 255 range
    im_ = imresize(im_, net.meta.normalization.imageSize(1:2)) ;
    for k = 1:3
        im1_(:,:,k) = im_;
    end
    im2_ = bsxfun(@minus, im1_, net.meta.normalization.averageImage) ;
    res = vl_simplenn(net, im2_) ;
    feature_p = res(36).x(:);
    feature_p = (double(feature_p)' - imfo.mean) * imfo.V;     % apply the same PCA projection as the gallery
    feature_p = scaling(feature_p, 1, imfo.A, imfo.B);
    feature_p = feature_p / norm(feature_p, 2);                % L2-normalize the probe feature
    [B S stat] = sparse_coding(AtA, 0, dictionary', double(feature_p'), num_bases, beta, ...
        sparsity_func, epsilon, num_iters, batch_size, fname_save, Binit);
    for m = 1:length(numEachGalImg)                            % reconstruction residual for every class
        AA = S(cumNumEachGalImg(m)+1:cumNumEachGalImg(m+1),:);
        X1 = dictionary(cumNumEachGalImg(m)+1:cumNumEachGalImg(m+1),:);
        recovery = X1'*AA;
        YY(m) = mean(sum((recovery'-double(feature_p)).^2));
    end
    score(:,i) = YY;                                           % residual of probe i against every gallery class
    toc
end
accuracy = calrank(score, 1:1, 'ascend');                      % rank-1 accuracy (smaller residual = better match)
fprintf('rank-1: %.2f%%\n', accuracy*100);
In this experiment, images 1 and 10 of each subject are used as the gallery and image 26 of each subject as the probe.
The calrank function computes the CMC curve; see http://blog.csdn.net/hlx371240/article/details/53482752.
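calrank itself is not listed here (see the link above); the sketch below only shows the assumed rank-1 computation on the score matrix, where score(m, i) is the residual of probe i against gallery class m and the true class of probe i is i.

% Minimal rank-1 sketch (assumed behaviour; the actual calrank is at the link above).
[~, predicted] = min(score, [], 1);                            % predicted class per probe (smallest residual)
accuracy = mean(predicted == 1:size(score, 2));                % fraction of correctly matched probes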
The final rank-1 recognition rate is 82%.
The complete code is available in the attached resource. Because the vgg-face model is too large to include, download it from the VGG website and place it in ../matconvnet-1.0-beta19\examples\data\models.