cross_val_score(model_name, x_samples, y_labels, cv=k)
Purpose: evaluates how stable a model is on a given training set; it outputs k accuracy scores.
K-fold cross-validation (k-fold)
The initial training samples are split into k folds: k-1 folds are used for training and the remaining fold for evaluation. Rotating the held-out fold, the classifier is trained k times in total, yielding k scores.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

clf = LogisticRegression()
# X: features, y: targets, cv: k
cross_val_score(clf, X, y, cv=5)
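To make the snippet above concrete, here is a minimal runnable sketch; the Iris dataset stands in for the unspecified X and y. The k scores are usually summarized by their mean and standard deviation, and the `scoring` parameter can request a metric other than accuracy:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)  # placeholder data for illustration
clf = LogisticRegression(max_iter=1000)

scores = cross_val_score(clf, X, y, cv=5)  # 5 accuracy scores, one per fold
print(scores.mean(), scores.std())         # mean and spread summarize stability

# A different metric via the scoring parameter (macro-averaged F1 for multiclass)
f1_scores = cross_val_score(clf, X, y, cv=5, scoring='f1_macro')
```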
Model training, prediction, and evaluation
def svm_model():
    import joblib
    from sklearn.metrics import accuracy_score
    from sklearn.metrics import precision_score, recall_score, f1_score
    from sklearn.svm import SVC
    # Train the model
    clf = SVC(kernel='linear')
    clf.fit(x_train_samples, y_train_labels)
    # Persist the model
    joblib.dump(clf, './model/svm_mode.pkl')
    # Evaluate the model on the test set
    predict_labels = clf.predict(x_test_samples)
    accuracy = accuracy_score(y_test_labels, predict_labels)
    precision = precision_score(y_test_labels, predict_labels, pos_label=0)
    recall = recall_score(y_test_labels, predict_labels, pos_label=0)
    f1 = f1_score(y_test_labels, predict_labels, pos_label=0)
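The persisted model can later be restored with joblib.load() and used for prediction without retraining. A self-contained sketch, using a toy dataset and a temporary file in place of the './model/' path above:

```python
import os
import tempfile

import joblib
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # placeholder data for illustration
clf = SVC(kernel='linear').fit(X, y)

# Save the fitted model, then reload it from disk
path = os.path.join(tempfile.mkdtemp(), 'svm_model.pkl')
joblib.dump(clf, path)
restored = joblib.load(path)

# The restored model predicts identically to the original
assert (restored.predict(X) == clf.predict(X)).all()
```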
That completes the whole workflow. One thing worth noting: cross_val_score reports accuracy by default, and its `scoring` parameter selects only one metric per run. So in practice it is often easier to split the data with train_test_split() up front and then evaluate the model with whichever metrics the task actually requires.
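A sketch of that recommended split-then-evaluate workflow, again with Iris as placeholder data (since Iris is multiclass, the precision/recall/F1 calls use macro averaging rather than the pos_label form shown earlier):

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # placeholder data for illustration
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)  # 70/30 train/test split

clf = SVC(kernel='linear').fit(x_train, y_train)
pred = clf.predict(x_test)

# Evaluate with whichever metrics the task requires
acc = accuracy_score(y_test, pred)
prec = precision_score(y_test, pred, average='macro')
rec = recall_score(y_test, pred, average='macro')
f1 = f1_score(y_test, pred, average='macro')
```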