GBDT: Theory and Python Implementation



GBDT Overview

  GBDT (Gradient Boosting Decision Tree) is a boosting method that uses decision trees as base models. We first introduce GBDT with a regression tree that uses MSE as its loss function. Suppose the strong learner obtained in the previous iteration is \(f_{k-1}(x)\) and the loss function is \(L(y,f_{k-1}(x))\). The goal of the current round is to find a base model \(h_k(x)\) that minimizes \(L(y, f_{k}(x)) =L(y, f_{k-1}(x)+ h_k(x))\). Note that the loss function here is the global loss.

  As an example, suppose tomorrow's sales will be 3000. We first train a CART tree that predicts 2000; then we train another tree to fit the residual, with \(3000-2000=1000\) as its target. Suppose it predicts 800; we then continue training decision trees with 200 as the target, and so on until we stop.
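To make the idea concrete, here is a minimal sketch of the residual-fitting loop, using made-up data (the full implementation appears later in this post):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Sketch of the residual-fitting idea with made-up data:
# each new tree is trained on what the current ensemble still gets wrong.
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = 3000 * X[:, 0] + 100 * rng.randn(100)

prediction = np.zeros_like(y)
trees = []
for _ in range(3):
    residual = y - prediction              # analogous to 3000 -> 1000 -> 200 above
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    trees.append(tree)
    prediction += tree.predict(X)          # the ensemble prediction improves each round
print(np.mean((y - prediction) ** 2))      # the training MSE shrinks as trees are added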

GBDT Regression (Boosting Trees)

Algorithm

  First, train a CART tree

\[f_0(x) = \underbrace{arg\; min}_{c}\sum\limits_{i=1}^{m}L(y_i, c) \]

Its loss function is

\[L=\sum_{i=1}^m \frac1 2[f(x_i)-y_i]^2 \]

and its negative gradient is

\[r_{ki} = -\bigg[\frac{\partial L(y_i, f(x_i))}{\partial f(x_i)}\bigg]_{f(x) = f_{k-1}(x)}=y_i-f_{k-1}(x_i) \]

Computing the negative gradient of every sample gives the dataset

\[(x_i,r_{ki})\quad (i=1,2,\dots,m) \]

The negative gradient is exactly the deviation between the actual value and the model's fitted value, so we train another model to fit this deviation

\[c_{kj} = \underbrace{arg\; min}_{c}\sum\limits_{x_i \in R_{kj}} L(y_i,f_{k-1}(x_i) +c) \]

and then update the learner

\[f_{k}(x) = f_{k-1}(x) + \sum\limits_{j=1}^{J}c_{kj}I(x \in R_{kj}) \]

to obtain learner \(k\). Repeat this process until the end; the final prediction is

\[f(x) = f_K(x) =f_0(x) + \sum\limits_{k=1}^{K}\sum\limits_{j=1}^{J}c_{kj}I(x \in R_{kj}) \]

  As can be seen, for a CART regression tree with MSE as the loss function, the negative gradient of the loss is exactly the residual. And since the loss depends on its two arguments only through their difference \(x_1-x_2\),

\[c_{kj} = \underbrace{arg\; min}_{c}\sum\limits_{x_i \in R_{kj}} L(y_i,f_{k-1}(x_i) +c) \]

\[c_{kj} = \underbrace{arg\; min}_{c}\sum\limits_{x_i \in R_{kj}} L(y_i-f_{k-1}(x_i),c) \]

are equivalent, and the latter is exactly the loss minimized by a CART tree with MSE loss. In other words, the local optimum of the single tree is also the global optimum, so a decision tree package can be called directly without modification.
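A quick numerical sanity check of this equivalence (toy numbers, not from the original text): under squared loss, the best constant \(c\) inside a leaf is the mean residual, which is exactly what a CART regression tree trained on \(y - f_{k-1}(x)\) would predict for that leaf.

import numpy as np

# Toy check: minimizing sum 0.5*(y - (f_prev + c))^2 over c gives the mean residual.
y = np.array([3.0, 2.5, 4.0])
f_prev = np.array([2.0, 2.0, 3.0])           # hypothetical f_{k-1}(x_i) within one leaf
residual = y - f_prev

c_grid = np.linspace(-2, 2, 4001)
losses = [np.sum(0.5 * (y - (f_prev + c)) ** 2) for c in c_grid]
print(c_grid[np.argmin(losses)])             # ~0.833, the grid minimizer
print(residual.mean())                       # 0.8333..., the mean residual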

Python Implementation

import numpy as np
from multiprocessing import Pool
from sklearn import datasets
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor


class GBDTRegressor:
    def __init__(self, X, y, n_estimators=100, learning_rate=0.1, max_depth=3):
        self.features = np.array(X)
        if len(self.features.shape) == 1:
            self.features = self.features.reshape(1, -1)
        self.labels = np.array(y).reshape(-1, 1)
        self.regressors = []
        self.n_estimators = n_estimators
        self.max_depth = max_depth
        self.learning_rate = learning_rate

    def fit(self):
        # f_0: the first tree is fitted directly to the labels
        estimator0 = DecisionTreeRegressor(
            max_depth=self.max_depth).fit(
            self.features, self.labels)
        self.regressors.append(estimator0)
        predicted_y = self.learning_rate * estimator0.predict(self.features)

        for i in range(self.n_estimators - 1):
            # each subsequent tree fits the current residual (the negative gradient under MSE)
            target = self.labels.ravel() - predicted_y
            estimator = DecisionTreeRegressor(
                max_depth=self.max_depth).fit(
                self.features, target)
            self.regressors.append(estimator)
            predicted_y = predicted_y + self.learning_rate * \
                estimator.predict(self.features)

    def _predict(self, regressor, X):
        return self.learning_rate * regressor.predict(X)

    def predict(self, X):
        X = np.array(X)
        if len(X.shape) == 1:
            X = X.reshape(1, -1)
        # evaluate every tree in parallel, then sum the shrunken contributions
        with Pool() as p:
            result = np.array(
                p.starmap(
                    self._predict,
                    [(reg, X) for reg in self.regressors]))
            # the last tree's contribution is used at full weight (no shrinkage)
            result[-1, :] = result[-1, :] / self.learning_rate
            return np.sum(result, axis=0)


if __name__ == '__main__':
    d = datasets.fetch_california_housing()
    X = d['data']
    y = d['target']
    gbdt = GBDTRegressor(X, y)
    gbdt.fit()
    pre_y = gbdt.predict(X)
    print(mean_squared_error(pre_y, y))
    sklearn_gbdt = GradientBoostingRegressor(n_estimators=100).fit(X, y)
    pre_y = sklearn_gbdt.predict(X)
    print(mean_squared_error(pre_y, y))
0.2593586663475436
0.26188431965892933

GBDT Classification

Algorithm

  Because residuals of class labels are hard to work with directly, GBDT classification uses a log loss similar to that of logistic regression to make the residual of the predicted value continuous. In other words, GBDT classification also uses CART regression trees.

  Let \(f(x)\) denote the output of the boosting tree. This is analogous to logistic regression, which maps the linear prediction \(X\theta\) into (0, 1) with the sigmoid function as the predicted probability, then uses maximum likelihood and takes the negative log as the loss function.

  Similarly, GBDT feeds \(y_if(x_i)\) into the sigmoid function as the predicted probability

\[P=\frac 1{1+e^{-yf(x)}} \]

Using maximum likelihood estimation, the likelihood function is

\[L = \prod_{i=1}^m\frac 1{1+e^{-y_if(x_i)}} \]

Taking the negative log gives

\[-ln L = \sum_{i=1}^mln(1+e^{-y_if(x_i)}) \]

  This is the loss function to be optimized. For each sample, the loss is:

\[L(y_i,f(x_i))=ln(1+e^{-y_if(x_i)}) \]

The negative gradient to be fitted is:

\[r_{ki} = -\bigg[\frac{\partial L(y_i, f(x_i))}{\partial f(x_i)}\bigg]_{f(x) = f_{k-1}(x)} = \frac{y_i}{1+e^{y_if_{k-1}(x_i)}} \]
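For completeness (this step is not spelled out in the original text), the gradient follows from the per-sample loss by the chain rule:

\[\frac{\partial L(y_i, f(x_i))}{\partial f(x_i)} = \frac{-y_ie^{-y_if(x_i)}}{1+e^{-y_if(x_i)}} = \frac{-y_i}{1+e^{y_if(x_i)}} \]

Negating this and evaluating at \(f = f_{k-1}\) gives the expression above.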

  Compared with regression, the loss function is an obvious difference. Below is the optimal leaf value of the decision tree when fitting the negative gradient:

\[c_{kj} = \underbrace{arg\; min}_{c}\sum\limits_{x_i \in R_{kj}} ln(1+e^{-y_i(f_{k-1}(x_i) +c)}) \]

  As mentioned earlier, when MSE is the loss function, the overall optimum coincides with the optimum of the decision tree's predicted value, i.e., the leaf node mean can be used directly as the prediction. With the log loss, however, the expression above has to be optimized directly. Since this is difficult, the following approximation is commonly used:

\[c_{kj} = \frac {\sum\limits_{x_i \in R_{kj}}r_{ki}}{\sum\limits_{x_i \in R_{kj}}|r_{ki}|(1-|r_{ki}|)} \]
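One way to see where this approximation comes from (a sketch, not derived in the original post) is a single Newton step on the leaf objective, \(c \approx \frac{-\sum_{x_i \in R_{kj}} g_i}{\sum_{x_i \in R_{kj}} h_i}\), where \(g_i\) and \(h_i\) are the first and second derivatives of the per-sample loss at \(f_{k-1}(x_i)\). Here \(-g_i = r_{ki}\), and since \(y_i^2 = 1\),

\[h_i = \frac{\partial^2 L(y_i, f(x_i))}{\partial f(x_i)^2}\bigg|_{f = f_{k-1}} = \frac{e^{y_if_{k-1}(x_i)}}{(1+e^{y_if_{k-1}(x_i)})^2} = |r_{ki}|(1-|r_{ki}|) \]

because \(|r_{ki}| = \frac 1{1+e^{y_if_{k-1}(x_i)}}\). Plugging these into the Newton step recovers the formula above.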

  • Initialization

  When initializing \(f(x)\), Li Hang's book gives

\[f_0(x)=c =\underbrace{arg\; min}_{c}\sum L(y_i,c) \]

  while the XGBoost paper mentions the initialization

\[c=ln\frac{1+\bar y}{1-\bar y} \]

These two are in fact equivalent. Suppose the sample contains a positive examples and b negative examples; then

\[\frac{\partial L}{\partial c}=\frac{\partial \sum ln(1+e^{-y_i c})}{\partial c}=\sum \frac{-y_ie^{-y_i c}}{1+e^{-y_i c}}=\sum \frac{-y_i}{1+e^{y_i c}}=0 \]

The above can be rewritten as

\[\sum_{y_i=1}\frac{1}{1+e^{c}}-\sum_{y_i=-1}\frac{1}{1+e^{-c}}=0 \]

\[\frac {a-be^c}{1+e^c}=0 \]

\[c = ln\frac ab \]

\[\bar y = \frac{a-b}{a+b} \]

\[c=ln\frac{1+\bar y}{1-\bar y} \]
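A quick numeric check of this equivalence (toy labels chosen for illustration, not from the original):

import numpy as np

# With a = 3 positives and b = 2 negatives, ln(a/b) equals ln((1 + y_bar) / (1 - y_bar)).
y = np.array([1, 1, 1, -1, -1])
a, b = np.sum(y == 1), np.sum(y == -1)
y_bar = y.mean()                               # (a - b) / (a + b)
print(np.log(a / b))                           # 0.405...
print(np.log((1 + y_bar) / (1 - y_bar)))       # 0.405..., identical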

Python Implementation

import numpy as np
from multiprocessing import Pool
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeRegressor
from tqdm import tqdm


class GBDTClassifier:
    def __init__(self, X, y, n_estimators=100, learning_rate=0.1, max_depth=3):
        self.features = np.array(X)
        if len(self.features.shape) == 1:
            self.features = self.features.reshape(1, -1)
        self.labels = np.array(y).reshape(-1, 1)
        self.labels[self.labels == 0] = -1
        self.estimators = []
        self.tree_predict_value = []
        self.init_value = np.log((1 + np.mean(self.labels)) / (1 - np.mean(self.labels)))
        self.n_estimators = n_estimators
        self.learning_rate = learning_rate
        self.max_depth = max_depth

    def _cal_cj(self, j, r, area_num, i, dic):
        '''
        Compute the predicted value of leaf node j.
        :param j: leaf node id
        :param r: residuals (fitting targets) of this round
        :param area_num: leaf node id that each sample falls into this round
        :param i: round index
        :return: nothing; the result is stored in the dict
        '''
        rj = r[area_num == j]
        c = np.sum(rj) / np.sum(abs(rj) * (1 - abs(rj)))
        dic[j] = c


    def fit(self):
        # f_0 is the constant score ln((1 + y_bar) / (1 - y_bar))
        fk_1 = self.init_value * np.ones([self.labels.shape[0], 1])
        for i in tqdm(range(self.n_estimators)):
            # each round: compute the residual r (negative gradient), fit a regression
            # tree to it, and look up the leaf node of every sample
            dic = {}
            r = self.labels / (1 + np.exp(self.labels * fk_1))
            dtr = DecisionTreeRegressor(random_state=1, max_depth=self.max_depth).fit(self.features, r)
            self.estimators.append(dtr)
            area_num = dtr.apply(self.features)
            # compute the predicted value of every leaf node
            for j in np.unique(area_num):
                self._cal_cj(j, r, area_num, i, dic)
            self.tree_predict_value.append(dic)
            ci = np.array(list(map(lambda x: self.tree_predict_value[i][x], area_num)))
            fk_1 = fk_1 + self.learning_rate * ci.reshape(-1, 1)

    def _predict(self, i, X):
        area_num = self.estimators[i].apply(X)
        return np.array(list(map(lambda x: self.learning_rate * self.tree_predict_value[i][x], area_num)))

    def predict(self, X):
        if len(X.shape) == 1:
            X = X.reshape(1, -1)
        with Pool() as p:
            result = np.array(list(p.starmap(self._predict, [(i, X) for i in range(self.n_estimators)])))
            # the last tree's contribution is used at full weight (no shrinkage)
            result[-1, :] = result[-1, :] / self.learning_rate
            # add back the initial score f_0 before thresholding at 0
            return np.array(self.init_value + np.sum(result, axis=0) >= 0).astype(int)


if __name__ == '__main__':
    X, y = make_classification(n_samples=1000, n_features=4,
                               n_informative=2, n_redundant=0,
                               random_state=0, shuffle=False)
    gbdt = GBDTClassifier(X, y)
    gbdt.fit()
    print('Training finished')
    gbdt_sklearn = GradientBoostingClassifier().fit(X, y)
    print(f"Our accuracy: {accuracy_score(y, gbdt.predict(X))}")
    print(f"sklearn accuracy: {accuracy_score(y, gbdt_sklearn.predict(X))}")
    print(gbdt_sklearn.score(X,y))
100%|██████████| 100/100 [00:00<00:00, 304.81it/s]


Training finished
Our accuracy: 0.992
sklearn accuracy: 0.991
0.991

Multi-class Classification

  Multi-class classification is done with softmax, and the loss function is the cross-entropy loss.

  Multi-class classification differs from the binary case in a few ways. Because the cross-entropy loss is used, the labels form an \(m\times K\) matrix, where \(K\) is the number of classes; it is the one-hot encoding of the original class vector. For example, if the label vector of 4 samples is \([1,2,1,3]\), after one-hot encoding it becomes

\[\begin{bmatrix}1&0&0\\0&1&0\\1&0&0\\0&0&1\end{bmatrix} \]
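A minimal sketch of this one-hot step (using numpy; the labels are the ones from the example):

import numpy as np

labels = np.array([1, 2, 1, 3])
classes = np.unique(labels)                          # [1, 2, 3]
one_hot = (labels[:, None] == classes).astype(int)   # shape (m, K)
print(one_hot)
# [[1 0 0]
#  [0 1 0]
#  [1 0 0]
#  [0 0 1]]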

  The multi-class algorithm is described in the figure below:

  First, initialize \(F_{k0}(x)=0\), where \(K\) is the number of classes. Plugging this into softmax means every sample starts with a length-\(K\) vector whose entries are all \(\frac1K\); in the example above, each sample's initial vector is \(p_0=[\frac13,\frac13,\frac13]\). Then compute the negative gradient, e.g. \([\frac23,-\frac13,-\frac13]\) for the first sample, and train three trees to fit the three columns, just as in binary classification. The difference from the binary case is that the target goes from 1 column to 3, and three trees are trained per round.

[Figure: multi-class GBDT algorithm]

\[L(y, f(x)) = - \sum\limits_{k=1}^{K}y_klog\;p_k(x) \]

\[p_k(x) = \frac {exp(f_k(x))} {\sum\limits_{l=1}^{K} exp(f_l(x))} \]

\[r_{til} = -\bigg[\frac{\partial L(y_i, f(x_i))}{\partial f(x_i)}\bigg]_{f_k(x) = f_{l, t-1}(x)} = y_{il} - p_{l, t-1}(x_i) \]

\[c_{tjl} = \underbrace{arg\; min}_{c_{jl}}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K} L(y_{ik}, f_{t-1, l}(x_i) + \sum\limits_{j=0}^{J}c_{jl} I(x_i \in R_{tjl})) \]

\[c_{tjl} = \frac{K-1}{K} \; \frac{\sum\limits_{x_i \in R_{tjl}}r_{til}}{\sum\limits_{x_i \in R_{tjl}}|r_{til}|(1-|r_{til}|)} \]
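The post does not include a multi-class implementation. Below is a rough sketch, under assumptions of made-up data, \(K=3\) classes, a fixed learning rate of 0.1, and no leaf-value correction \(c_{tjl}\), of the loop the formulas describe: softmax the current scores, take \(y - p\) as the negative gradient, and fit one regression tree per class each round.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 4)
labels = rng.randint(0, 3, 200)
Y = (labels[:, None] == np.arange(3)).astype(float)        # one-hot targets, shape (m, K)

F = np.zeros_like(Y)                                       # F_{k0}(x) = 0
for _ in range(10):
    P = np.exp(F) / np.exp(F).sum(axis=1, keepdims=True)   # softmax probabilities p_k(x)
    R = Y - P                                              # negative gradients, one column per class
    for k in range(3):                                     # one regression tree per class per round
        tree = DecisionTreeRegressor(max_depth=3).fit(X, R[:, k])
        F[:, k] += 0.1 * tree.predict(X)
print((F.argmax(axis=1) == labels).mean())                 # training accuracy of the sketch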

