Machine Learning: Softmax Regression (Python Implementation)



Softmax Regression can be viewed as the generalization of Logistic Regression (LR) to multi-class classification, i.e. the case where the class label $y$ takes more than two values.

Assume the training set is $\left \{ \left ( X^{(1)},y ^{(1)} \right ) ,\left ( X^{(2)},y ^{(2)} \right ),\ldots,\left ( X^{(m)},y ^{(m)} \right )\right \}$.

For Softmax Regression (SR), the input features are $X^{(i)} \in \mathbb{R}^{n+1}$ and the class labels are $y^{(i)} \in \{ 1,2,\ldots,k \}$. The hypothesis estimates, for every sample, the probability $P(y=j\mid X)$ that it belongs to each class; concretely, the hypothesis function is:

$h_{\theta}(X^{(i)}) =\begin{bmatrix}
P(y^{(i)}=1|X^{(i)};\theta)\\
P(y^{(i)}=2|X^{(i)};\theta)\\
\vdots\\
P(y^{(i)}=k|X^{(i)};\theta)
\end{bmatrix} = \frac{1}{\sum _{j=1}^{k}e^{\theta_j^TX^{(i)}}}\begin{bmatrix}
e^{\theta_1^TX^{(i)}}\\
e^{\theta_2^TX^{(i)}}\\
\vdots\\
e^{\theta_k^TX^{(i)}}
\end{bmatrix}$

where $\theta_1, \theta_2, \ldots, \theta_k$ are the parameter vectors, with each $\theta_j \in \mathbb{R}^{n+1}$. The estimated probability that sample $i$ belongs to class $j$ is therefore

$P(y^{(i)}=j|X^{(i)};\theta) = \frac{e^{\theta_j^TX^{(i)}}}{\sum _{l=1}^{k}e^{\theta_l^TX^{(i)}}}$
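A minimal sketch of this probability computation for a single sample (my own illustration, not part of the original post; the max-subtraction trick for numerical stability and the helper name `softmax_probs` are additions):

import numpy as np

def softmax_probs(theta, x):
    '''theta: (k, n+1) matrix whose rows are the theta_j; x: (n+1,) feature vector with bias term.'''
    scores = theta @ x                    # theta_j^T x^(i) for every class j
    scores -= scores.max()                # subtract the max score for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()  # P(y^(i) = j | x^(i); theta) for j = 1..k

For example, `softmax_probs(np.zeros((3, 4)), np.ones(4))` returns `[1/3, 1/3, 1/3]`: with identical scores every class is equally likely.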

The loss function of SR is:

$J(\theta) = -\frac{1}{m} \left [\sum_{i=1}^{m} \sum_{j=1}^{k} I \{ y^{(i)}=j \} \log \frac{e^{\theta_j^TX^{(i)}}}{\sum _{l=1}^{k}e^{\theta_l^TX^{(i)}}} \right ]$

where $I(x) = \left\{\begin{matrix}
0 & \text{if } x \text{ is false}\\
1 & \text{if } x \text{ is true}
\end{matrix}\right.$ is the indicator function.
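In code, the indicator $I\{y^{(i)}=j\}$ over a batch of samples is conveniently stored as a one-hot matrix with one row per sample; the implementation further below builds it incrementally via `err[x, label_data[x, 0]] += 1`. A standalone sketch (my own illustration; it assumes 0-based integer labels, as the implementation below does):

import numpy as np

def one_hot(labels, k):
    '''labels: (m,) integer class labels in {0, ..., k-1}; returns an (m, k) 0/1 matrix.'''
    m = labels.shape[0]
    indicator = np.zeros((m, k))
    indicator[np.arange(m), labels] = 1.0  # I{y^(i) = j} is 1 only at the true class
    return indicator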

This loss function can be minimized with gradient descent.

First, compute the gradient with respect to the parameters:

$\frac{\partial J(\theta )}{\partial \theta _j} = -\frac{1}{m}\left [ \sum_{i=1}^{m}\nabla _{\theta_j}\left \{ \sum_{j'=1}^{k}I(y^{(i)}=j') \log\frac{e^{\theta_{j'}^TX^{(i)}}}{\sum _{l=1}^{k}e^{\theta_l^TX^{(i)}}}  \right \}  \right ]$

When $y^{(i)}=j$, the term contributed by sample $i$ is $-\frac{1}{m}\left ( 1-\frac{e^{\theta_j^TX^{(i)}}}{\sum _{l=1}^{k}e^{\theta_l^TX^{(i)}}} \right )X^{(i)}$.

When $y^{(i)}\neq j$, the contribution is $-\frac{1}{m}\left (-\frac{e^{\theta_j^TX^{(i)}}}{\sum _{l=1}^{k}e^{\theta_l^TX^{(i)}}} \right )X^{(i)}$.

Combining the two cases, the gradient is:

$g(\theta_j) = \frac{\partial J(\theta )}{\partial \theta _j} = -\frac{1}{m}\sum_{i=1}^{m}\left [X^{(i)} \cdot \left ( I\left \{ y^{(i)}=j \right \}-P( y^{(i)}=j|X^{(i)};\theta) \right )  \right ]$

The gradient-descent update rule is:

$\theta_j  = \theta_j - \alpha \cdot g(\theta_j)$
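Before the full training loop below, here is a minimal sketch of a single full-batch update in vectorized form (my own illustration, not the post's code; it assumes `X` is the m x (n+1) feature matrix as a NumPy array, `Y_onehot` the m x k indicator matrix from above, and `Theta` the (n+1) x k weight matrix):

import numpy as np

def gd_step(Theta, X, Y_onehot, alpha):
    '''One gradient-descent step: theta_j <- theta_j - alpha * g(theta_j).'''
    m = X.shape[0]
    scores = X @ Theta
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)            # P[i, j] = P(y^(i) = j | X^(i); theta)
    grad = -(X.T @ (Y_onehot - P)) / m           # column j is g(theta_j)
    return Theta - alpha * grad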

Main Python code:

import numpy as np

def gradientAscent(feature_data, label_data, k, maxCycle, alpha):
    '''
    Train the Softmax model with gradient descent
    :param feature_data: feature matrix (m x n), bias column included
    :param label_data: label matrix (m x 1), integer classes 0..k-1
    :param k: number of classes
    :param maxCycle: maximum number of iterations
    :param alpha: learning rate
    :return: weight matrix (n x k)
    '''
    m, n = np.shape(feature_data)
    weights = np.mat(np.ones((n, k)))  # n * k weights in total
    i = 0
    while i <= maxCycle:
        i += 1
        err = np.exp(feature_data * weights)  # e^(theta_j^T x^(i))
        if i % 100 == 0:
            print("\t-----iter:", i, ", cost:", cost(err, label_data))
        rowsum = -err.sum(axis=1)
        rowsum = rowsum.repeat(k, axis=1)
        err = err / rowsum  # -P(y^(i) = j | x^(i); theta)
        for x in range(m):
            err[x, label_data[x, 0]] += 1  # I(y^(i) = j) - P(y^(i) = j | x^(i); theta)
        weights = weights + (alpha / m) * feature_data.T * err  # update the weights
    return weights
def cost(err, label_data):
    '''
    Compute the value of the loss function
    :param err: matrix of exponentiated scores e^(theta_j^T x^(i))
    :param label_data: label matrix (m x 1)
    :return: sum_cost / m, the loss value
    '''
    m = np.shape(err)[0]
    sum_cost = 0.0
    for i in range(m):
        if err[i, label_data[i, 0]] / np.sum(err[i, :]) > 0:
            sum_cost -= np.log(err[i, label_data[i, 0]] / np.sum(err[i, :]))
        else:
            sum_cost -= 0
    return sum_cost / m
    
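A small usage sketch for the two functions above (the toy data, the 0-based labels, and the `predict` helper are my own additions for illustration; `feature_data` is assumed to already contain a bias column):

import numpy as np

# Toy data: bias column + 2 features, 3 classes, labels stored as an m x 1 matrix.
X = np.mat([[1.0, 0.2, 0.1],
            [1.0, 0.9, 0.8],
            [1.0, 0.5, 0.5],
            [1.0, 0.1, 0.9]])
y = np.mat([[0], [2], [1], [2]])

weights = gradientAscent(X, y, k=3, maxCycle=500, alpha=0.1)

def predict(feature_data, weights):
    '''Return the index of the most probable class for each sample.'''
    return np.argmax(feature_data * weights, axis=1)

print(predict(X, weights))  # one predicted class per row of X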

scikit-learn code:

from sklearn.linear_model import LogisticRegressionCV
import numpy as np

lr = LogisticRegressionCV(fit_intercept=True, Cs=np.logspace(-5, 1, 100),
                          multi_class='multinomial', penalty='l2', solver='lbfgs',
                          max_iter=10000, cv=7)  # multi_class='multinomial' selects softmax (multinomial) regression
re = lr.fit(X_train, Y_train)
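After fitting, `re.predict(X_test)` returns the predicted class labels and `re.score(X_test, Y_test)` the mean accuracy, where `X_test`/`Y_test` are assumed hold-out data analogous to the `X_train`/`Y_train` placeholders above; both are standard scikit-learn estimator methods.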