Power Iteration for Eigenvalues and Eigenvectors


Using power iteration to find the k-th largest eigenvalue and its eigenvector.

Mathematical formulation

Let A be the matrix

\[A = \left[ \begin{matrix} X_{11} & \ldots & X_{1n} \newline X_{21} & \ldots & X_{2n} \newline \vdots & \ddots & \vdots \newline X_{n1} & \ldots & X_{nn} \end{matrix} \right] \]

We want the eigenvector \(b_k = [v_1,v_2,\ldots,v_n]^T\) corresponding to its largest eigenvalue, obtained by iterating:

\[b_{k+1} = \frac{Ab_k}{\|Ab_k\|} \tag{1} \]

Proof: see the English Wikipedia article Power_iteration.

Code:

import numpy as np
def power_iteration(A, num_simulations: int):
    # Ideally choose a random vector
    # to decrease the chance that our vector
    # is orthogonal to the eigenvector
    b_k = np.random.rand(A.shape[1])
  
    for _ in range(num_simulations):
        # calculate the matrix-by-vector product Ab
        b_k1 = np.dot(A, b_k)
 
        # calculate the norm
        b_k1_norm = np.linalg.norm(b_k1)
 
        # re-normalize the vector
        b_k = b_k1 / b_k1_norm
 
    return b_k

Example

\[A = \left[ \begin{matrix} 2&1\newline 1&2\newline \end{matrix} \right] \]

Find its eigenvalues:

\[|A-\lambda I| = \left| \begin{matrix} 2-\lambda &1\newline 1 &2-\lambda\newline \end{matrix} \right| = 3 - 4\lambda + \lambda^2 \]

The largest eigenvalue is \(\lambda_2 = 3\); the other eigenvalue is \(\lambda_1 = 1\).
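As a quick cross-check (using NumPy's `np.roots`, which is not part of the original post), the roots of the characteristic polynomial \(\lambda^2 - 4\lambda + 3\) can be computed numerically:

```python
import numpy as np

# Coefficients of lambda^2 - 4*lambda + 3, highest degree first
coeffs = [1, -4, 3]
print(np.sort(np.roots(coeffs)))  # → [1. 3.]
```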
For \(\lambda_2 = 3\):

\[(A-\lambda I)V = 0 \]

\[(A-3I)V = \left[ \begin{matrix} -1 & 1 \newline 1 & -1 \end{matrix} \right] \left[ \begin{matrix} v_1 \newline v_2 \end{matrix} \right] = \left[ \begin{matrix} 0 \newline 0 \end{matrix} \right] \]

This gives \(v_1 = v_2\); after normalization, \(V_{\lambda=3} = [0.707,0.707]^T\).

Verification code:

A = np.array([[2,1],[1,2]])
V = power_iteration(A, 100).reshape(1,-1)
V

array([[0.70710678, 0.70710678]])

Once the eigenvector \(V\) is found, the eigenvalue follows (this is the Rayleigh quotient; it holds because \(V\) has unit norm):

\[\lambda = VAV^T \tag{2} \]

Code:

lbd = np.dot(V,np.dot(A,V.T))
lbd 

array([[3.]])

Extension

To find the second-largest eigenvalue and its eigenvector, deflate the matrix and iterate again:

\[B = A - \frac{\lambda}{\|V\|^2}V^TV \tag{3} \]

\[b_{k+1}' = \frac{Bb_k'}{\|Bb_k'\|} \tag{4} \]

This deflation technique is borrowed from a Stack Exchange answer; the factor \(\|V\|^2\) normalizes the eigenvector (since the eigenvector returned by power iteration already has unit norm, omitting the normalization should also be fine).

Code:

B = A - lbd / np.linalg.norm(V)**2 * np.dot(V.T,V)
V_2 = power_iteration(B, 100)
lbd1 = np.dot(V_2.T,np.dot(B,V_2))
lbd1

1.0000000000000002

Repeating this deflation, we can obtain the k-th largest eigenvalue for any k. This plays an important role in many machine learning methods that extract the top-k most important features.
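The whole procedure can be wrapped in a short loop. The sketch below (the name `top_k_eig` is mine, not from the original post) repeatedly applies power iteration, computes each eigenvalue as the Rayleigh quotient of the current deflated matrix, and then deflates via equation (3):

```python
import numpy as np

def power_iteration(A, num_simulations: int):
    # Start from a random vector to avoid being orthogonal to the eigenvector
    b_k = np.random.rand(A.shape[1])
    for _ in range(num_simulations):
        b_k1 = np.dot(A, b_k)
        b_k = b_k1 / np.linalg.norm(b_k1)  # re-normalize each step
    return b_k

def top_k_eig(A, k, num_simulations=200):
    """Return the k largest eigenvalues and eigenvectors via repeated deflation."""
    B = np.array(A, dtype=float)
    eigvals, eigvecs = [], []
    for _ in range(k):
        v = power_iteration(B, num_simulations)
        lam = v @ B @ v                 # Rayleigh quotient; v has unit norm
        eigvals.append(lam)
        eigvecs.append(v)
        B = B - lam * np.outer(v, v)    # deflate: remove the found component
    return np.array(eigvals), np.array(eigvecs)
```

On the example matrix, `top_k_eig(np.array([[2., 1.], [1., 2.]]), 2)` yields eigenvalues close to 3 and 1, in decreasing order (the eigenvector signs depend on the random initialization).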