4 Common Hyperparameter Tuning Methods in Machine Learning


One of the most difficult parts of any ML workflow is finding the best hyperparameters for a model. A model's performance is directly tied to its hyperparameters: the better the tuning, the better the resulting model. Tuning hyperparameters can be tedious and difficult, and often feels more like an art than a science.

Hyperparameters

Hyperparameters are parameters that control the behavior of an algorithm when building a model. They cannot be learned from the regular training process; they must be assigned values before the model is trained.
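
For example, in a k-nearest-neighbors classifier, n_neighbors and algorithm are hyperparameters set before fitting, while everything the model learns comes from the data passed to fit. A minimal illustration:

from sklearn.datasets import load_wine
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)

#hyperparameters are assigned before training begins
knn = KNeighborsClassifier(n_neighbors=5, algorithm='auto')

#fitting learns from the data but never changes the hyperparameters
knn.fit(X, y)
print(knn.get_params()['n_neighbors'])   #still 5 after fitting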

1. Traditional Manual Search

In traditional manual tuning, we train the algorithm on hand-picked sets of hyperparameters, check the results, and select the parameter set that best meets our goal.

Let's look at the code:

#importing required libraries
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold , cross_val_score
from sklearn.datasets import load_wine

wine = load_wine()
X = wine.data
y = wine.target

#splitting the data into train and test set
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size = 0.3,random_state = 14)

#declaring parameters grid
k_value = list(range(2,11))
algorithm = ['auto','ball_tree','kd_tree','brute']
scores = []
best_comb = []
kfold = KFold(n_splits=5)

#hyperparameter tuning
for algo in algorithm:
    for k in k_value:
        knn = KNeighborsClassifier(n_neighbors=k,algorithm=algo)
        results = cross_val_score(knn,X_train,y_train,cv = kfold)

        print(f'Score:{round(results.mean(),4)} with algo = {algo} , K = {k}')
        scores.append(results.mean())
        best_comb.append((k,algo))

best_param = best_comb[scores.index(max(scores))]
print(f'\nThe Best Score : {max(scores)}')
print(f"['algorithm': {best_param[1]} ,'n_neighbors': {best_param[0]}]")

Output:
Score:0.6697 with algo = auto , K = 2
Score:0.6773 with algo = auto , K = 3
Score:0.7177 with algo = auto , K = 4
Score:0.734 with algo = auto , K = 5
Score:0.7017 with algo = auto , K = 6
Score:0.7417 with algo = auto , K = 7
Score:0.7017 with algo = auto , K = 8
Score:0.6533 with algo = auto , K = 9
Score:0.6613 with algo = auto , K = 10
Score:0.6697 with algo = ball_tree , K = 2
Score:0.6773 with algo = ball_tree , K = 3
Score:0.7177 with algo = ball_tree , K = 4
Score:0.734 with algo = ball_tree , K = 5
Score:0.7017 with algo = ball_tree , K = 6
Score:0.7417 with algo = ball_tree , K = 7
Score:0.7017 with algo = ball_tree , K = 8
Score:0.6533 with algo = ball_tree , K = 9
Score:0.6613 with algo = ball_tree , K = 10
Score:0.6697 with algo = kd_tree , K = 2
Score:0.6773 with algo = kd_tree , K = 3
Score:0.7177 with algo = kd_tree , K = 4
Score:0.734 with algo = kd_tree , K = 5
Score:0.7017 with algo = kd_tree , K = 6
Score:0.7417 with algo = kd_tree , K = 7
Score:0.7017 with algo = kd_tree , K = 8
Score:0.6533 with algo = kd_tree , K = 9
Score:0.6613 with algo = kd_tree , K = 10
Score:0.6697 with algo = brute , K = 2
Score:0.6773 with algo = brute , K = 3
Score:0.7177 with algo = brute , K = 4
Score:0.734 with algo = brute , K = 5
Score:0.7017 with algo = brute , K = 6
Score:0.7417 with algo = brute , K = 7
Score:0.7017 with algo = brute , K = 8
Score:0.6533 with algo = brute , K = 9
Score:0.6613 with algo = brute , K = 10

The Best Score : 0.7416666666666667
['algorithm': auto ,'n_neighbors': 7]

Drawbacks

  1. There is no way to guarantee finding the best parameter combination.
  2. It is a trial-and-error process, so it is very time-consuming.

2. Grid Search

Grid search is a basic hyperparameter tuning technique. It is similar to manual tuning: it builds a model for every permutation of the given hyperparameter values specified in the grid, evaluates each one, and selects the best. Consider the example above, with the two hyperparameters k_value = [2,3,4,5,6,7,8,9,10] and algorithm = ['auto', 'ball_tree', 'kd_tree', 'brute']: grid search builds 9*4 = 36 different models in total.

from sklearn.model_selection import GridSearchCV

knn = KNeighborsClassifier()
grid_param = { 'n_neighbors' : list(range(2,11)) , 
              'algorithm' : ['auto','ball_tree','kd_tree','brute'] }
              
grid = GridSearchCV(knn,grid_param,cv = 5)
grid.fit(X_train,y_train)

#best parameter combination
grid.best_params_  #{'algorithm': 'auto', 'n_neighbors': 5}

#Score achieved with best parameter combination
grid.best_score_  #0.774

#all combinations of hyperparameters
grid.cv_results_['params']

#average scores of cross-validation
grid.cv_results_['mean_test_score']
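
Since GridSearchCV refits the best parameter combination on the whole training set by default, the fitted search object can be used directly to sanity-check the best model on the held-out test data; a quick sketch reusing the split created earlier:

#evaluating the refit best model on the held-out test set
print(grid.best_estimator_)        #KNeighborsClassifier with the best params
print(grid.score(X_test, y_test))  #test accuracy of the refit best model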

As for why this result differs from the manual search above: the two runs do not use identical cross-validation splits (the manual loop passed a KFold object, while cv = 5 here lets scikit-learn choose stratified splits for a classifier), and the split, like the seed that controls it, is effectively a hyperparameter too.
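
To make the comparison apples-to-apples, pass the same splitter object instead of a bare integer; a minimal sketch (grid_same is just an illustrative name):

from sklearn.model_selection import GridSearchCV, KFold

#reusing the same 5-fold splitter as the manual loop above
grid_same = GridSearchCV(knn, grid_param, cv=KFold(n_splits=5))
grid_same.fit(X_train, y_train)
print(grid_same.best_params_, grid_same.best_score_)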

Drawbacks

Because it tries every combination of hyperparameters and picks the best one by cross-validation score, GridSearchCV is very slow.

3. Random Search

The motivation for using random search instead of grid search is that in many cases not all hyperparameters are equally important. Random search picks parameter combinations at random from the hyperparameter space, trying a fixed number of combinations given by n_iter. Experiments have shown that random search can give better results than grid search.

from sklearn.model_selection import RandomizedSearchCV

knn = KNeighborsClassifier()

grid_param = { 'n_neighbors' : list(range(2,11)) , 
              'algorithm' : ['auto','ball_tree','kd_tree','brute'] }

rand_ser = RandomizedSearchCV(knn,grid_param,n_iter=10)
rand_ser.fit(X_train,y_train)

#best parameter combination
rand_ser.best_params_  #{'n_neighbors': 7, 'algorithm': 'brute'}

#score achieved with best parameter combination
rand_ser.best_score_  #0.7256666666666667

#all combinations of hyperparameters
rand_ser.cv_results_['params']

#average scores of cross-validation
rand_ser.cv_results_['mean_test_score']
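
Unlike grid search, RandomizedSearchCV can also sample from distributions rather than fixed lists, which is where it shines for large or continuous spaces; a minimal sketch, assuming scipy is available (dist_param and rand_dist are illustrative names):

from scipy.stats import randint

#sampling n_neighbors from an integer distribution instead of a fixed list
dist_param = { 'n_neighbors' : randint(2, 11) ,
              'algorithm' : ['auto','ball_tree','kd_tree','brute'] }

rand_dist = RandomizedSearchCV(knn, dist_param, n_iter=10, random_state=14)
rand_dist.fit(X_train, y_train)
print(rand_dist.best_params_)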

Drawbacks

The problem with random search is that it cannot guarantee finding the best parameter combination.

4. Bayesian Search

Bayesian optimization belongs to a class of optimization algorithms called sequential model-based optimization (SMBO) algorithms. These algorithms use previous observations of the loss f to determine the next (optimal) point at which to sample f. The algorithm can be summarized roughly as follows; a minimal sketch appears after the list.

  1. Using the previously evaluated points x1, ..., xn, compute a posterior expectation of the loss f.
  2. Sample the loss f at a new point x that maximizes some utility of the expectation of f. This utility specifies which regions of the domain of f are most promising to sample from.

These steps are repeated until some convergence criterion is met.
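
To make these two steps concrete, here is a minimal, self-contained SMBO loop. It is only a sketch under stated assumptions (a 1-D continuous domain, a Gaussian-process surrogate, and a simple lower-confidence-bound acquisition), not how skopt implements it; smbo_minimize, kappa, and the candidate-sampling scheme are made up for illustration:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def smbo_minimize(f, lo, hi, n_init=5, n_iter=20, kappa=2.0, seed=14):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_init, 1))        #initial random samples
    y = np.array([f(x[0]) for x in X])               #observed losses
    gp = GaussianProcessRegressor()
    for _ in range(n_iter):
        gp.fit(X, y)                                 #step 1: posterior of f given x1..xn
        cand = rng.uniform(lo, hi, size=(256, 1))    #candidate points
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmin(mu - kappa * sigma)] #step 2: acquisition optimum
        X = np.vstack([X, x_next])                   #add the new observation
        y = np.append(y, f(x_next[0]))
    best = np.argmin(y)
    return X[best, 0], y[best]

#example: minimize a simple quadratic over [0, 10]
x_best, f_best = smbo_minimize(lambda x: (x - 3)**2, lo=0.0, hi=10.0)
print(round(x_best, 3), round(f_best, 5))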

from skopt import BayesSearchCV

import warnings
warnings.filterwarnings("ignore")

# parameter ranges are specified by one of below
from skopt.space import Real, Categorical, Integer

knn = KNeighborsClassifier()
#defining hyper-parameter grid
grid_param = { 'n_neighbors' : list(range(2,11)) , 
              'algorithm' : ['auto','ball_tree','kd_tree','brute'] }

#initializing Bayesian Search
Bayes = BayesSearchCV(knn , grid_param , n_iter=30 , random_state=14)
Bayes.fit(X_train,y_train)

#best parameter combination
Bayes.best_params_  #OrderedDict([('algorithm', 'ball_tree'), ('n_neighbors', 5)])

#score achieved with best parameter combination
Bayes.best_score_  #0.7741935483870968

#all combinations of hyperparameters
Bayes.cv_results_['params']

#average scores of cross-validation
Bayes.cv_results_['mean_test_score']

Drawbacks

Getting a good surrogate surface requires a dozen or so samples even for a 2- or 3-dimensional search space, and increasing the dimensionality of the search space requires many more samples.
