With big data in full swing, machine learning has undoubtedly become one of the hottest tools around. It is a cross-disciplinary field between computer science and statistics that builds algorithms and models on top of collected data in order to predict or classify outcomes for real-world problems. R is an excellent tool for the job: itself a product of computer science and statistics, it is open source and free, and compared with other free data-mining tools such as Python, Orange Canvas, Weka, and KNIME it is easier to pick up and produces more attractive statistical graphics. This post introduces the basics of the caret machine-learning package.
1. Data Collection
We use the spam dataset from the kernlab package. spam is an email dataset with 4601 observations on 58 variables; the last variable, type, is binary with the levels "spam" and "nonspam". Our task is to build a model that predicts whether an observation is spam. First load the packages and the dataset:
> library(caret)
Loading required package: lattice
Loading required package: ggplot2
Warning messages:
1: package 'caret' was built under R version 3.1.1
2: package 'ggplot2' was built under R version 3.1.1
> library(kernlab)
Warning message:
package 'kernlab' was built under R version 3.1.3
> data(spam)
> head(spam)
  make address  all num3d  our over remove internet order mail
1 0.00    0.64 0.64     0 0.32 0.00   0.00     0.00  0.00 0.00
2 0.21    0.28 0.50     0 0.14 0.28   0.21     0.07  0.00 0.94
3 0.06    0.00 0.71     0 1.23 0.19   0.19     0.12  0.64 0.25
4 0.00    0.00 0.00     0 0.63 0.00   0.31     0.63  0.31 0.63
5 0.00    0.00 0.00     0 0.63 0.00   0.31     0.63  0.31 0.63
6 0.00    0.00 0.00     0 1.85 0.00   0.00     1.85  0.00 0.00
  receive will people report addresses free business email  you
1    0.00 0.64   0.00   0.00      0.00 0.32     0.00  1.29 1.93
2    0.21 0.79   0.65   0.21      0.14 0.14     0.07  0.28 3.47
3    0.38 0.45   0.12   0.00      1.75 0.06     0.06  1.03 1.36
4    0.31 0.31   0.31   0.00      0.00 0.31     0.00  0.00 3.18
5    0.31 0.31   0.31   0.00      0.00 0.31     0.00  0.00 3.18
6    0.00 0.00   0.00   0.00      0.00 0.00     0.00  0.00 0.00
  credit your font num000 money hp hpl george num650 lab labs telnet
1   0.00 0.96    0   0.00  0.00  0   0      0      0   0    0      0
2   0.00 1.59    0   0.43  0.43  0   0      0      0   0    0      0
3   0.32 0.51    0   1.16  0.06  0   0      0      0   0    0      0
4   0.00 0.31    0   0.00  0.00  0   0      0      0   0    0      0
5   0.00 0.31    0   0.00  0.00  0   0      0      0   0    0      0
6   0.00 0.00    0   0.00  0.00  0   0      0      0   0    0      0
  num857 data num415 num85 technology num1999 parts pm direct cs
1      0    0      0     0          0    0.00     0  0   0.00  0
2      0    0      0     0          0    0.07     0  0   0.00  0
3      0    0      0     0          0    0.00     0  0   0.06  0
4      0    0      0     0          0    0.00     0  0   0.00  0
5      0    0      0     0          0    0.00     0  0   0.00  0
6      0    0      0     0          0    0.00     0  0   0.00  0
  meeting original project   re  edu table conference charSemicolon
1       0     0.00       0 0.00 0.00     0          0          0.00
2       0     0.00       0 0.00 0.00     0          0          0.00
3       0     0.12       0 0.06 0.06     0          0          0.01
4       0     0.00       0 0.00 0.00     0          0          0.00
5       0     0.00       0 0.00 0.00     0          0          0.00
6       0     0.00       0 0.00 0.00     0          0          0.00
  charRoundbracket charSquarebracket charExclamation charDollar
1            0.000                 0           0.778      0.000
2            0.132                 0           0.372      0.180
3            0.143                 0           0.276      0.184
4            0.137                 0           0.137      0.000
5            0.135                 0           0.135      0.000
6            0.223                 0           0.000      0.000
  charHash capitalAve capitalLong capitalTotal type
1    0.000      3.756          61          278 spam
2    0.048      5.114         101         1028 spam
3    0.010      9.821         485         2259 spam
4    0.000      3.537          40          191 spam
5    0.000      3.537          40          191 spam
6    0.000      3.000          15           54 spam
2. Data Partitioning
Machine learning generally splits the data into three parts: training data, (optional) validation data, and test data. The training and validation data are used to fit the model and estimate its parameters, while the test data are used to check how accurately the model predicts. Let's partition the spam data accordingly:
inTrain <- createDataPartition(y=spam$type,p=0.75,list=FALSE)
training <- spam[inTrain, ]
testing <- spam[-inTrain, ]
nrow(training)
[1] 3451
nrow(testing)
[1] 1150
In the commands above, createDataPartition() is the splitting function, applied to spam$type; p = 0.75 means the training set receives 75% of the data; list controls the output format, and list = FALSE returns a matrix of row indices rather than a list (note the default is list = TRUE). training <- spam[inTrain, ] and testing <- spam[-inTrain, ] then pick out the actual training and test sets.
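Because createDataPartition() samples within each level of the outcome (a stratified split), both subsets should keep roughly the same spam/nonspam ratio; if you want the exact same split on every run, call set.seed() first. A minimal sanity check of the proportions:
# the class proportions should be nearly identical in all three sets
round(rbind(all      = prop.table(table(spam$type)),
            training = prop.table(table(training$type)),
            testing  = prop.table(table(testing$type))), 3)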
3. Training the Model
Once the split is done, we can feed the training data to the trainer to fit the model parameters:
modelFit <- train(type ~ ., data = training, method = "glm")
train() is our trainer: type ~ . is the model formula, data = training specifies the dataset, and method = "glm" selects the model type. Here we fit a GLM; you could just as well use an SVM (support vector machine), a nnet neural network, or another model type (see the sketch after the output below). The fitted model looks like this:
modelFit$finalModel
Coefficients:
(Intercept)        make     address         all       num3d
 -1.989e+00  -5.022e-01  -1.702e-01   1.553e-01   3.368e+00
        our        over      remove    internet       order
  7.554e-01   6.682e-01   2.220e+00   5.586e-01   1.144e+00
       mail     receive        will      people      report
(remaining coefficients truncated here for space)

Degrees of Freedom: 3450 Total (i.e. Null);  3393 Residual
Null Deviance:     4628
Residual Deviance: 1335    AIC: 1451
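As noted above, swapping in another learner is just a matter of changing method. A minimal sketch with two common alternatives, not run here: "svmRadial" uses the already-loaded kernlab package and "nnet" requires the nnet package, and both train noticeably slower than the GLM.
svmFit <- train(type ~ ., data = training,
                method = "svmRadial",            # SVM with an RBF kernel
                preProc = c("center", "scale"))  # SVMs benefit from scaled inputs
nnetFit <- train(type ~ ., data = training,
                method = "nnet",                 # single-hidden-layer neural network
                trace = FALSE)                   # passed through to nnet to silence its log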
4. Validating the Model
Once all the model parameters have been trained, we plug the test data into the model to validate its predictions:
predictions <- predict(modelFit,newdata=testing)
predictions #### the predictions are as follows:
[1] spam spam spam spam spam spam spam spam spam spam spam
[12] spam spam spam spam spam spam spam spam spam spam spam
[23] nonspam spam spam spam spam spam spam nonspam spam spam spam
[34] spam spam spam spam spam spam spam spam spam spam spam
[45] spam spam spam spam spam spam spam spam spam spam spam
5. The Confusion Matrix
How accurate are the model's predictions? This is where the confusion matrix comes in: it compares predicted values with true values and yields the misclassification rate. Reading the confusion matrix below, the accuracy is 0.9252, which is quite a respectable result.
confusionMatrix(predictions, testing$type) #### the output is as follows:
Confusion Matrix and Statistics

          Reference
Prediction nonspam spam
   nonspam     658   47
   spam         39  406

               Accuracy : 0.9252
                 95% CI : (0.9085, 0.9398)
    No Information Rate : 0.6061
    P-Value [Acc > NIR] : <2e-16

                  Kappa : 0.8429
 Mcnemar's Test P-Value : 0.4504

            Sensitivity : 0.9440
            Specificity : 0.8962
         Pos Pred Value : 0.9333
         Neg Pred Value : 0.9124
             Prevalence : 0.6061
         Detection Rate : 0.5722
   Detection Prevalence : 0.6130
      Balanced Accuracy : 0.9201
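The headline accuracy is just the share of test mails on the diagonal of the matrix, which you can verify by hand or recompute directly from the raw predictions:
(658 + 406) / (658 + 47 + 39 + 406)  # = 1064 / 1150 = 0.9252
mean(predictions == testing$type)    # the same number, without building the matrix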
Example 2:
library(caret)
library(mlbench)
data(Sonar)
set.seed(107)
inTrain <- createDataPartition(y = Sonar$Class, ## the outcome data are needed
                               p = .75,         ## the percentage of data in the training set
                               list = FALSE)    ## the format of the results
#The output is a set of integers for the rows of Sonar
#that belong in the training set.
> str(inTrain)
int [1:157, 1] 98 100 101 102 103 105 107 109 110 111 ...
- attr(*, "dimnames")=List of 2
..$ : NULL
..$ : chr "Resample1"
> training <- Sonar[inTrain,]
> testing <- Sonar[-inTrain,]
> nrow(training)
[1] 157
> nrow(testing)
[1] 51
1) Fit a PLS model, centering and scaling the predictors:
library(pls)
plsFit <- train(Class ~ ., data = training,
                method = "pls",
                # center and scale the predictors for the training set
                # and all future samples
                preProc = c("center", "scale"))
plot(plsFit)
2) Widen the tuning grid to evaluate 15 candidate values of ncomp:
plsFit <- train(Class ~ ., data = training,
                method = "pls",
                tuneLength = 15,
                preProc = c("center", "scale"))
plot(plsFit)
3) Swap the default bootstrap resampling for repeated 10-fold cross-validation:
ctrl <- trainControl(method = "repeatedcv", repeats = 3)
plsFit <- train(Class ~ ., data = training,
                method = "pls",
                tuneLength = 15,
                trControl = ctrl,
                preProc = c("center", "scale"))
plot(plsFit)
4) Select the model on ROC AUC instead of accuracy, which requires class probabilities:
ctrl <- trainControl(method = "repeatedcv", repeats = 3,
                     classProbs = TRUE,
                     summaryFunction = twoClassSummary)
plsFit <- train(Class ~ ., data = training,
                method = "pls",
                tuneLength = 15,
                trControl = ctrl,
                metric = "ROC",
                preProc = c("center", "scale"))
With twoClassSummary and metric = "ROC", the resampling table reports ROC, Sens, and Spec; the Accuracy/Kappa printout below comes from the step-3 fit.
> plsFit
Partial Least Squares

157 samples
 60 predictor
  2 classes: 'M', 'R'

Pre-processing: centered, scaled
Resampling: Cross-Validated (10 fold, repeated 3 times)
Summary of sample sizes: 141, 141, 142, 141, 140, 142, ...
Resampling results across tuning parameters:

  ncomp  Accuracy  Kappa  Accuracy SD  Kappa SD
   1     0.729     0.460  0.1291       0.254
   2     0.807     0.614  0.0896       0.176
   3     0.788     0.577  0.0880       0.176
   4     0.780     0.558  0.0783       0.158
   5     0.757     0.512  0.0953       0.193
   6     0.762     0.524  0.0925       0.185
   7     0.752     0.504  0.0943       0.188
   8     0.739     0.477  0.0743       0.148
   9     0.745     0.491  0.0861       0.170
  10     0.747     0.493  0.0791       0.156
  11     0.736     0.472  0.0845       0.167
  12     0.758     0.514  0.0887       0.177
  13     0.730     0.458  0.0883       0.176
  14     0.734     0.466  0.0916       0.182
  15     0.743     0.483  0.0964       0.193

Accuracy was used to select the optimal model using the largest value.
The final value used for the model was ncomp = 2.
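The winning tuning value can also be extracted programmatically, which is convenient in scripts; bestTune is a standard component of the object train() returns:
plsFit$bestTune  # a one-row data frame; here ncomp = 2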
> plsClasses <- predict(plsFit,newdata = testing)
> str(plsClasses)
Factor w/ 2 levels "M","R": 2 1 1 2 1 2 2 2 2 2 ...
> plsProbs <- predict(plsFit,newdata = testing,type = "prob")
> head(plsProbs)
           M         R
4  0.3762529 0.6237471
5  0.5229047 0.4770953
8  0.5839468 0.4160532
16 0.3660142 0.6339858
20 0.7351013 0.2648987
25 0.2135788 0.7864212
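For two classes, the class predictions above amount to a 0.5 cutoff on these probabilities. A minimal sketch of applying a stricter, custom cutoff for calling 'M'; the 0.7 value is an arbitrary illustration, not a tuned threshold:
cutoff    <- 0.7  # hypothetical threshold, for illustration only
plsCustom <- factor(ifelse(plsProbs$M > cutoff, "M", "R"),
                    levels = levels(testing$Class))
table(plsCustom, testing$Class)  # compare re-thresholded calls with the truth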
> confusionMatrix(data = plsClasses, testing$Class)
Confusion Matrix and Statistics

          Reference
Prediction  M  R
         M 20  7
         R  7 17

               Accuracy : 0.7255
                 95% CI : (0.5826, 0.8411)
    No Information Rate : 0.5294
    P-Value [Acc > NIR] : 0.003347

                  Kappa : 0.4491
 Mcnemar's Test P-Value : 1.000000

            Sensitivity : 0.7407
            Specificity : 0.7083
         Pos Pred Value : 0.7407
         Neg Pred Value : 0.7083
             Prevalence : 0.5294
         Detection Rate : 0.3922
   Detection Prevalence : 0.5294
      Balanced Accuracy : 0.7245

       'Positive' Class : M
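Since the final model was selected on ROC AUC, it is natural to finish by looking at the ROC curve on the test set. A minimal sketch, assuming the pROC package is installed:
library(pROC)
# build the ROC curve from the test-set probability of the positive class 'M'
plsROC <- roc(response = testing$Class, predictor = plsProbs$M,
              levels = c("R", "M"))  # controls first, cases second
plot(plsROC)  # draw the curve
auc(plsROC)   # area under the curve on the test set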