On implementing the neighborhood-preserving algorithms LLE, LPP, NPE: manifold learning


Things to note:

1. In manifold-learning methods, the I in M = (I-W)*(I-W)' is the identity matrix, not the all-ones matrix.

2. For the graph W, the coefficients are stored column by column (column i holds the reconstruction weights of sample i), and then Sw = Train_Ma*M*Train_Ma'; Sb = Train_Ma*Train_Ma';

3. Comparison of NPE recognition error rates under various settings.

First setting: the ORL_56x46.mat database, no PCA pre-reduction, the first samples of each class (sele_num per class) used for training, projected samples normalized as well, and SRC run via DALM_fast ([solution, status] = SolveDALM_fast(Train_data, test_sample, 0.01);).

Conclusions: (1) Projecting the samples does not necessarily raise the recognition rate; it can actually lower it.

(2) Although NPE is built on a reconstruction constraint, the projected samples do not represent each other well: classifying the projected samples with SRC likewise degrades the recognition rate sharply.

(3) I did not tune the reduced dimensionality in cai deng's code, so those numbers may be a little worse than they could be.

Training samples per class                         |     2 |     3 |     4 |    5 |     6
KNN on raw data                                    | 77.81 | 79.29 | 78.33 |   77 |    75
NPE (supervised, k=train_num-1) + KNN              | 77.81 | 78.21 | 78.75 |   79 | 76.25
NPE (unsupervised, k=train_num-1) + KNN            | 78.44 | 80.71 |    85 |   86 | 86.25
SRC on raw data                                    | 76.88 | 79.29 | 77.08 |   78 | 73.75
NPE (supervised, k=train_num-1) + SRC              | 77.19 | 78.21 | 78.75 |   79 | 76.25
NPE (unsupervised, k=train_num-1) + SRC            | 78.44 | 82.86 | 85.42 | 86.5 | 90.63
our NPE (unsupervised, k=train_num-1, KNN, dim=80) |  72.5 | 73.13 |       |      |
our NPE (supervised, k=train_num-1, KNN, dim=80)   | 74.29 | 73.13 |       |      |
our NPE (unsupervised, k=train_num-1, SRC, dim=80) | 74.29 | 74.38 |       |      |
our NPE (supervised, k=train_num-1, SRC, dim=80)   | 74.29 | 71.88 |       |      |

(blank cells: not recorded in the source)


Second setting: the ORL_56x46.mat database, no PCA pre-reduction, the training samples of each class chosen at random, repeated over 20 runs with the mean and standard deviation recorded (a sketch of this protocol follows the table below), projected samples normalized as well, and SRC run via DALM_fast ([solution, status] = SolveDALM_fast(Train_data, test_sample, 0.01);). The recognition error rates are:


Training samples per class                         |          2 |          3 |          4 | 5 | 6
KNN on raw data                                    | 51.02±2.25 | 38.25±2.38 |            |   |
NPE (supervised, k=train_num-1) + KNN              | 52.45±2.16 |            |            |   |
NPE (unsupervised, k=train_num-1) + KNN            | 67.36±2.93 | 58.42±2.88 |            |   |
SRC on raw data                                    | 49.36±2.20 | 35.31±2.23 |            |   |
NPE (supervised, k=train_num-1) + SRC              | 52.55±2.73 |            |            |   |
NPE (unsupervised, k=train_num-1) + SRC            | 63.81±2.60 | 65.06±3.65 |            |   |
our NPE (unsupervised, k=train_num-1, KNN, dim=80) | 50.89±2.82 | 47.21±2.73 | 39.81±2.77 |   |
our NPE (unsupervised, k=train_num-1, SRC, dim=80) | 49.63±2.40 | 44.23±2.40 | 36.25±4.08 |   |
our NPE (supervised, k=train_num-1, KNN, dim=80)   | 50.47±2.30 | 39.23±2.50 | 20.25±3.16 |   |
our NPE (supervised, k=train_num-1, SRC, dim=80)   | 50.22±2.45 | 40.14±2.85 | 22.41±3.06 |   |

(blank cells: not recorded in the source; values are listed in the column order in which they appeared)
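For reference, a minimal sketch of the 20-trial protocol behind this table. The helper run_once is hypothetical: it stands for one pass of the split/projection/classification code shown in the demos below, with randIdx = randperm(num_Class(j)) used instead of randIdx = [1:num_Class(j)].

% Sketch of the 20-trial random-split protocol (run_once is a hypothetical
% wrapper around one pass of the demo code below).
num_trials = 20;
err = zeros(num_trials,1);
for t = 1:num_trials
    err(t) = run_once(fea, gnd, sele_num);   % error rate (%) of one random split
end
fprintf('error rate: %.2f +/- %.2f\n', mean(err), std(err));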

1. The locally linear embedding (LLE) algorithm

LLE directly performs an eigendecomposition of M = (I-W)(I-W)' and takes the eigenvectors associated with the smallest eigenvalues (discarding the bottom, constant eigenvector, whose eigenvalue is approximately 0).

Reference: S. T. Roweis and L. K. Saul, 'Nonlinear dimensionality reduction by locally linear embedding', Science, vol. 290, no. 5500, pp. 2323-2326, 2000.

Chinese reference: http://wenku.baidu.com/view/19afb8d3b9f3f90f76c61bb0.html?from=search


clear all
clc
addpath ('\data set\');
load ORL_56x46.mat;                     % 40 classes, 10 samples per class
fea = double(fea);
Train_Ma = fea';                        % transformed to each column a sample
% construct neighborhood matrix
K_sample = zeros(size(Train_Ma,2),size(Train_Ma,2));
k = 10;                                 % number of nearest neighbors
for i = 1:size(Train_Ma,2)
    NK = zeros(size(Train_Ma,2),1);
    for j = 1:size(Train_Ma,2)
        distance(i,j) = norm(Train_Ma(:,i)-Train_Ma(:,j));
    end
    [value,state] = sort(distance(i,:),'ascend');
    dd1(:,i) = value(2:k+1);            % KNN distances of the i-th sample
    neigh(:,i) = state(2:k+1);          % KNN indices of the i-th sample
    Sub_sample = Train_Ma(:,state(2:k+1));
    Sub_sample = Sub_sample - repmat(Train_Ma(:,i),1,k);
    % This weight computation may look unconventional, but it is the standard
    % LLE closed form: solve C*w = 1 with C the local Gram matrix of the
    % centered neighbors, then rescale so that sum(w) = 1.
    coeff = inv(Sub_sample'*Sub_sample)*ones(k,1);
    coeff = coeff/sum(coeff);
    W1(:,i) = coeff;
    NK(state(2:k+1)) = coeff;
    K_sample(:,i) = NK;                 % column i holds the weights of sample i's k nearest neighbors
end
M = (eye(size(Train_Ma,2))-K_sample)*(eye(size(Train_Ma,2))-K_sample)';
options.disp = 0;
options.isreal = 1;
options.issym = 1;
[eigvector1, eigvalue] = eigs(M,101, 0, options);   % 101 eigenpairs of M closest to 0
eigvalue = diag(eigvalue);
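The snippet stops after the eigendecomposition. A short follow-on sketch for extracting a d-dimensional embedding from that output: eigs(M,101,0,...) returns the eigenpairs closest to 0 but not necessarily sorted, so we sort explicitly and then drop the bottom (constant) eigenvector, whose eigenvalue is approximately 0, as standard LLE prescribes; d = 10 is just an example value.

% Extract a d-dimensional LLE embedding from the eigs output above.
d = 10;                                  % target dimensionality (example value)
[eigvalue, idx] = sort(eigvalue, 'ascend');
eigvector1 = eigvector1(:, idx);
Y = eigvector1(:, 2:d+1)';               % drop the constant eigenvector; d x N embedding, one column per sample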

2. Studying the NPE algorithm in MATLAB

The main script is NPE_caideng_demo.m:

% Neighborhood preserving embedding (NPE), a study based on Deng Cai's framework code.
% Reference: He, X., Cai, D., Yan, S. and Zhang, H.-J. (2005)
% Neighborhood preserving embedding. In: Proceedings of Computer Vision, 2005.
% ICCV 2005. Tenth IEEE International Conference on. IEEE, 1208-1213.
clear all
clc
addpath ('data set\');
load ORL_56x46.mat;           % 40 classes, 10 samples per class
fea = double(fea)';
sele_num = 3;                 % number of training samples per class
nnClass = length(unique(gnd));  % The number of classes;
num_Class=[];
for i=1:nnClass
  num_Class=[num_Class length(find(gnd==i))]; %The number of samples of each class
end
%%------------------select training samples and test samples--------------%% 
Train_Ma=[];
Train_Lab=[];
Test_Ma=[];
Test_Lab=[];
for j=1:nnClass    
    idx=find(gnd==j);
%     randIdx=randperm(num_Class(j));       % pick the training samples at random
    randIdx  = [1:num_Class(j)];            % take the first sele_num samples of each class as training
    Train_Ma = [Train_Ma; fea(idx(randIdx(1:sele_num)),:)];            % select sele_num samples per class for training
    Train_Lab= [Train_Lab;gnd(idx(randIdx(1:sele_num)))];
    Test_Ma  = [Test_Ma;fea(idx(randIdx(sele_num+1:num_Class(j))),:)];  % select remaining samples per class for test
    Test_Lab = [Test_Lab;gnd(idx(randIdx(sele_num+1:num_Class(j))))];
end
Train_Ma = Train_Ma';                       % transform to a sample per column
Train_Ma = Train_Ma./repmat(sqrt(sum(Train_Ma.^2)),[size(Train_Ma,1) 1]);
Test_Ma = Test_Ma';
Test_Ma = Test_Ma./repmat(sqrt(sum(Test_Ma.^2)),[size(Test_Ma,1) 1]); 

% call Deng Cai's NPE code
options = [];
options.k = sele_num-1;     % number of nearest neighbors
% options.NeighborMode = 'KNN';         % use KNN for the unsupervised variant
options.NeighborMode = 'Supervised';    % supervised variant
options.gnd = Train_Lab;
[P_NPE, eigvalue] = NPE_caideng(options,Train_Ma');
Train_Maa = P_NPE'*Train_Ma;
Test_Maa = P_NPE'*Test_Ma;
Train_Maa = Train_Maa./repmat(sqrt(sum(Train_Maa.^2)),[size(Train_Maa,1) 1]);
Test_Maa = Test_Maa./repmat(sqrt(sum(Test_Maa.^2)),[size(Test_Maa,1) 1]);    
rate2 = KNN(Train_Maa',Train_Lab,Test_Maa',Test_Lab,1)*100;
error2 = 100-rate2

Experimental result: taking the first 3 samples of each ORL class gives a recognition error rate of 78.21%, whereas choosing the 3 training samples at random improves the recognition rate markedly.
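The KNN helper called above is not included in the post. A minimal sketch that matches the call KNN(Train, Train_Lab, Test, Test_Lab, 1) (one sample per row, returning the recognition rate as a fraction) might look like the following; it is an illustration, not the original function:

function rate = KNN(Train, Train_Lab, Test, Test_Lab, K)
% Minimal K-nearest-neighbor classifier: one sample per row.
% Returns the fraction of test samples classified correctly.
pred = zeros(size(Test,1),1);
for i = 1:size(Test,1)
    d = sum((Train - repmat(Test(i,:), size(Train,1), 1)).^2, 2);  % squared distances to all training samples
    [~, order] = sort(d, 'ascend');
    pred(i) = mode(Train_Lab(order(1:K)));                         % majority vote over the K nearest
end
rate = mean(pred == Test_Lab);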

The second version is our own NPE code. Two places are flagged in the code where it differs from other people's implementations; we follow the standard formulas, and experiments show that neither difference affects the recognition results. The main script is NPE_jerry_demo.m:

        

% Neighborhood preserving embedding (NPE), unsupervised version
% jerry, 2016-03-22
clear all
clc
addpath ('G:\2015629房師兄代碼\data set\');
load ORL_56x46.mat;           % 40 classes, 10 samples per class
fea = double(fea)';
sele_num = 3;
nnClass = length(unique(gnd));  % The number of classes;
num_Class=[];
for i=1:nnClass
  num_Class=[num_Class length(find(gnd==i))]; %The number of samples of each class
end
%%------------------select training samples and test samples--------------%% 
Train_Ma=[];
Train_Lab=[];
Test_Ma=[];
Test_Lab=[];
for j=1:nnClass    
    idx=find(gnd==j);
%     randIdx=randperm(num_Class(j));       % pick the training samples at random
    randIdx  = [1:num_Class(j)];            % take the first sele_num samples of each class as training
    Train_Ma = [Train_Ma; fea(idx(randIdx(1:sele_num)),:)];            % select sele_num samples per class for training
    Train_Lab= [Train_Lab;gnd(idx(randIdx(1:sele_num)))];
    Test_Ma  = [Test_Ma;fea(idx(randIdx(sele_num+1:num_Class(j))),:)];  % select remaining samples per class for test
    Test_Lab = [Test_Lab;gnd(idx(randIdx(sele_num+1:num_Class(j))))];
end
Train_Ma = Train_Ma';                       % transform to a sample per column
Train_Ma = Train_Ma./repmat(sqrt(sum(Train_Ma.^2)),[size(Train_Ma,1) 1]);
Test_Ma = Test_Ma';
Test_Ma = Test_Ma./repmat(sqrt(sum(Test_Ma.^2)),[size(Test_Ma,1) 1]); 
% construct neighborhood matrix

K_sample = zeros(size(Train_Ma,2),size(Train_Ma,2));
k = sele_num-1;                         % number of nearest neighbors
for i = 1:size(Train_Ma,2)
    NK = zeros(size(Train_Ma,2),1);
    for j = 1:size(Train_Ma,2)
        distance(i,j) = norm(Train_Ma(:,i)-Train_Ma(:,j));
    end
    [value,state]  = sort(distance(i,:),'ascend');
    dd1(:,i) = value(2:k+1);        % KNN distances of the i-th sample
    neigh(:,i) = state(2:k+1);      % KNN indices of the i-th sample
    Sub_sample = Train_Ma(:,state(2:k+1));
%     Sub_sample = Sub_sample - repmat(Train_Ma(:,i),1,k);
%     coeff = inv(Sub_sample'*Sub_sample)*ones(k,1);
%     The two commented lines above are difference 1: the weight computation used in other people's NPE code.
    coeff = inv(Sub_sample'*Sub_sample)*Sub_sample'*Train_Ma(:,i); % representation-based least-squares solve for the representation coefficients
    coeff = coeff/sum(coeff);
    W1(:,i) = coeff;       
    NK(state(2:k+1)) = coeff;
    K_sample(:,i) = NK;                % column i holds the weights of sample i's k nearest neighbors
end

M = (eye(size(Train_Ma,2))-K_sample)*(eye(size(Train_Ma,2))-K_sample)';         % I is the identity matrix here, not the all-ones matrix
Sw = Train_Ma*M*Train_Ma';
Sb = Train_Ma*Train_Ma';
% Sw = (Sw + Sw') / 2;      % difference 2: other people's NPE code also symmetrizes Sw and Sb with these two lines
% Sb = (Sb + Sb') / 2;
% [eigvector1, eigvalue1] = eig((Sw+0.001*eye(size(Sw,1))),Sb);   % regularizing Sw instead of Sb is problematic
% Pt = eigvector1(:,tt(1:dim));
[eigvector1, eigvalue1] = eig(Sw,Sb+0.001*eye(size(Sb,1))); % generalized eigenproblem for inv(Sb)*Sw; NPE calls for the smallest eigenvalues, yet the selection below takes the largest
[eigvalue1,tt] = sort(diag(eigvalue1),'ascend');
dim = 80;
Pt = eigvector1(:,tt(end-dim+1:end));
Train_Maa = Pt'*Train_Ma;
Test_Maa = Pt'*Test_Ma;
Train_Maa = Train_Maa./repmat(sqrt(sum(Train_Maa.^2)),[size(Train_Maa,1) 1]);
Test_Maa = Test_Maa./repmat(sqrt(sum(Test_Maa.^2)),[size(Test_Maa,1) 1]);    
rate2 = KNN(Train_Maa',Train_Lab,Test_Maa',Test_Lab,1)*100;
error2 = 100-rate2

% what happens if a representation-based classifier (SRC) is used instead?
SRC_DP_accuracy = SRC_rec(Train_Maa,Train_Lab,Test_Maa,Test_Lab);
error_DP = 100-SRC_DP_accuracy
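SRC_rec is likewise not shown in the post. A minimal sketch of a representation-based classifier in the spirit of SRC, reusing the SolveDALM_fast call quoted earlier (the exact output arguments depend on the l1benchmark toolbox version, so only the first output is taken); an outline, not the original function:

function accuracy = SRC_rec(Train_Ma, Train_Lab, Test_Ma, Test_Lab)
% Sketch of a sparse-representation classifier (SRC): code each test sample
% over the training dictionary, then assign the class whose atoms give the
% smallest reconstruction residual. One sample per column, as in the demos.
classes = unique(Train_Lab);
correct = 0;
for i = 1:size(Test_Ma,2)
    y = Test_Ma(:,i);
    x = SolveDALM_fast(Train_Ma, y, 0.01);     % sparse code, same call as quoted above
    res = zeros(length(classes),1);
    for c = 1:length(classes)
        xc = zeros(size(x));
        mask = (Train_Lab == classes(c));
        xc(mask) = x(mask);                    % keep only the class-c coefficients
        res(c) = norm(y - Train_Ma*xc);        % residual using class-c atoms only
    end
    [~, best] = min(res);
    correct = correct + (classes(best) == Test_Lab(i));
end
accuracy = correct / size(Test_Ma,2) * 100;    % percentage, matching error_DP = 100 - accuracy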


