Machine Learning | Iris Dataset Classification with SVM


  The iris data set, known in full as Anderson's Iris data set, contains 150 samples, one per row. Each row holds a sample's four features plus its class label, so the data set is a 150-row, 5-column table. Put simply, it is a data set for classifying flowers: each sample records sepal length, sepal width, petal length, and petal width (the first four columns), and the goal is to build a classifier that uses these four features to decide whether a sample is Iris setosa, Iris versicolor, or Iris virginica (the three species).
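Before writing any MATLAB, the data set's shape and classes can be checked with a quick Python sketch. This uses scikit-learn's bundled copy of the iris data as an assumption for convenience; the article itself reads an `iris.csv` file instead.

```python
from sklearn.datasets import load_iris

# Anderson's Iris data set: 150 samples, 4 features, 3 species.
iris = load_iris()
print(iris.data.shape)     # (150, 4)
print(iris.target_names)   # the three species names
print(iris.feature_names)  # sepal/petal length and width, in cm
```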

Loading the data:

file=importdata('iris.csv');            % read the csv file (numeric block plus any text headers)
data=file.data;
features=data(:,1:4);                   % feature columns
classlabel=data(:,5);                   % corresponding class labels
n = randperm(size(features,1));         % shuffled indices for the train/test split

Plotting scatter plots to inspect the data:

%% pairwise scatter plots
class_0 = find(data(:,5)==0);           % row indices of class 0
class_1 = find(data(:,5)==1);           % row indices of class 1
class_2 = find(data(:,5)==2);           % row indices of class 2
subplot(3,2,1)
hold on
scatter(features(class_0,1),features(class_0,2),'x','b')
scatter(features(class_1,1),features(class_1,2),'+','g')
scatter(features(class_2,1),features(class_2,2),'o','r')
subplot(3,2,2)
hold on
scatter(features(class_0,1),features(class_0,3),'x','b')
scatter(features(class_1,1),features(class_1,3),'+','g')
scatter(features(class_2,1),features(class_2,3),'o','r')
subplot(3,2,3)
hold on
scatter(features(class_0,1),features(class_0,4),'x','b')
scatter(features(class_1,1),features(class_1,4),'+','g')
scatter(features(class_2,1),features(class_2,4),'o','r')
subplot(3,2,4)
hold on
scatter(features(class_0,2),features(class_0,3),'x','b')
scatter(features(class_1,2),features(class_1,3),'+','g')
scatter(features(class_2,2),features(class_2,3),'o','r')
subplot(3,2,5)
hold on
scatter(features(class_0,2),features(class_0,4),'x','b')
scatter(features(class_1,2),features(class_1,4),'+','g')
scatter(features(class_2,2),features(class_2,4),'o','r')
subplot(3,2,6)
hold on
scatter(features(class_0,3),features(class_0,4),'x','b')
scatter(features(class_1,3),features(class_1,4),'+','g')
scatter(features(class_2,3),features(class_2,4),'o','r')

  Each panel is a scatter plot of one pair of the four features: sepal length, sepal width, petal length, and petal width.

Training and test sets:


%% training set -- 70 samples
train_features=features(n(1:70),:);
train_label=classlabel(n(1:70),:);
%% test set -- the remaining 80 samples
test_features=features(n(71:end),:);
test_label=classlabel(n(71:end),:);
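The same shuffled 70/80 split can be sketched in Python with NumPy's `permutation`, the analogue of MATLAB's `randperm`. The seed and variable names here are illustrative, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)      # fixed seed for reproducibility
n = rng.permutation(150)            # analogue of randperm(150)

train_idx = n[:70]                  # first 70 shuffled indices -> training set
test_idx = n[70:]                   # remaining 80 indices -> test set

print(len(train_idx), len(test_idx))  # 70 80
```

Because the indices come from a single permutation, the two sets are guaranteed to be disjoint and to cover all 150 rows.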


Data normalization:

%% data normalization
[Train_features,PS] = mapminmax(train_features');       % scale each feature to [-1,1]
Train_features = Train_features';
Test_features = mapminmax('apply',test_features',PS);   % reuse the training-set scaling
Test_features = Test_features';
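By default `mapminmax` rescales each feature (each row of the transposed matrix) linearly to [-1, 1], and the `'apply'` call reuses the training-set minima and maxima on the test set, which is why the features are transposed and why `PS` is kept. That logic can be sketched by hand in Python (a minimal illustration, not a call into any MATLAB toolbox):

```python
import numpy as np

def fit_minmax(x):
    """Record per-feature min and max from the training data."""
    return x.min(axis=0), x.max(axis=0)

def apply_minmax(x, lo, hi):
    """Map each feature linearly to [-1, 1] using the stored range."""
    return 2 * (x - lo) / (hi - lo) - 1

train = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 20.0]])
test = np.array([[2.0, 25.0]])

lo, hi = fit_minmax(train)          # fit on the training set only
print(apply_minmax(train, lo, hi))  # each column now spans [-1, 1]
print(apply_minmax(test, lo, hi))   # test set uses the *training* range
```

Fitting the range on the training set alone, then applying it to the test set, avoids leaking test-set statistics into the model.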

Classification with SVM:

%% create/train the SVM model (LIBSVM's svmtrain; defaults to C-SVC with an RBF kernel)
model = svmtrain(train_label,Train_features);
%% run the SVM on both sets
[predict_train_label] = svmpredict(train_label,Train_features,model);
[predict_test_label] = svmpredict(test_label,Test_features,model);
%% print accuracy
compare_train = (train_label == predict_train_label);
accuracy_train = sum(compare_train)/size(train_label,1)*100;
fprintf('Training accuracy: %f\n',accuracy_train)
compare_test = (test_label == predict_test_label);
accuracy_test = sum(compare_test)/size(test_label,1)*100;
fprintf('Test accuracy: %f\n',accuracy_test)

Results:

*
optimization finished, #iter = 18
nu = 0.668633
obj = -21.678546, rho = 0.380620
nSV = 30, nBSV = 28
*
optimization finished, #iter = 29
nu = 0.145900
obj = -3.676315, rho = -0.010665
nSV = 9, nBSV = 4
*
optimization finished, #iter = 21
nu = 0.088102
obj = -2.256080, rho = -0.133432
nSV = 7, nBSV = 2
Total nSV = 40
Accuracy = 97.1429% (68/70) (classification)
Accuracy = 97.5% (78/80) (classification)
Training accuracy: 97.142857
Test accuracy: 97.500000
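The three `optimization finished` blocks appear because LIBSVM handles a 3-class problem with one-vs-one decomposition, training 3·(3−1)/2 = 3 binary SVMs. scikit-learn's `SVC` wraps the same LIBSVM solver, so an equivalent end-to-end run can be sketched in Python; the split and seed below are illustrative, so the exact accuracies will differ from the article's.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=70, random_state=0)

# Fit the [-1, 1] scaling on the training data only, as in the article.
scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_tr)

# C-SVC with an RBF kernel: the same solver family as svmtrain's defaults.
model = SVC()
model.fit(scaler.transform(X_tr), y_tr)

print(model.score(scaler.transform(X_tr), y_tr))  # training accuracy
print(model.score(scaler.transform(X_te), y_te))  # test accuracy
```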
