Factor Analysis: Dimensionality-Reduction Algorithms LDA/PCA

Factor analysis condenses a set of intricately related variables (or samples) into a small number of factors, in order to reproduce the relationships between the original variables and those factors. It investigates how multiple directly measurable and somewhat correlated indicators are governed by a few underlying independent factors, and, when conditions allow, uses this to classify the variables.

The basic idea of factor analysis

Variables are grouped according to the strength of their correlations, so that variables within the same group are highly correlated (commonality) and can be represented by a single common factor, while variables in different groups have low correlation (individuality).

The purpose of factor analysis

Put simply, the purpose of factor analysis is to reduce the number of variable dimensions, i.e. to simplify the factor structure. We want as few common factors as possible to explain as much of the total variance as possible: the fewer factors extracted the better, but the larger the cumulative variance they explain the better.
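
The article itself moves straight on to PCA and LDA; as a quick illustration of the factor-analysis idea above (grouping correlated measurements under a few common factors), here is a minimal sketch using scikit-learn's FactorAnalysis. The choice of the iris data and of two factors is mine, purely for illustration:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import FactorAnalysis

X = load_iris().data                       # 150 samples x 4 correlated measurements

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)

# each row of components_ is one common factor; the large loadings show
# which of the original variables that factor "groups" together
print(np.round(fa.components_, 2))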

Main topics:

(1) Principal Component Analysis (PCA)

(2) Linear Discriminant Analysis (LDA)

Linear Discriminant Analysis (LDA)

LDA is mainly used for dimensionality reduction in data preprocessing and for classification tasks. LDA looks for the axes that maximize the separation between classes: it projects the feature space (the multidimensional samples in the dataset) onto a smaller k-dimensional subspace while preserving the information that distinguishes the classes. The idea is that, after projection into the lower-dimensional space, the points form clusters by class, with points of the same class ending up closer together. Unlike PCA, LDA cares about class separability rather than variance.

  • One goal of LDA is that different classes should be as far apart as possible, while samples within the same class should be as close together as possible
  • Mean of each class: $\mu_i = \frac{1}{N_i}\sum_{x\in\omega_i} x$
  • Mean after projection: $\tilde{\mu}_i = \frac{1}{N_i}\sum_{x\in\omega_i} w^T x = w^T \mu_i$
  • The projected centers of the two classes should be as far apart as possible: $J(w) = |\tilde{\mu}_1 - \tilde{\mu}_2| = |w^T(\mu_1 - \mu_2)|$

Scatter value: a measure of how spread out the sample points are. The larger the value, the more dispersed the points; the smaller the value, the more concentrated.

Samples of the same class should stay compact after projection: $\tilde{s}_i^2 = \sum_{y\in\omega_i}(y - \tilde{\mu}_i)^2$

  • Objective function: $J(w) = \dfrac{|\tilde{\mu}_1 - \tilde{\mu}_2|^2}{\tilde{s}_1^2 + \tilde{s}_2^2}$
  • Expanding the scatter value: $\tilde{s}_i^2 = \sum_{x\in\omega_i}(w^T x - w^T\mu_i)^2 = \sum_{x\in\omega_i} w^T (x-\mu_i)(x-\mu_i)^T w$
  • Scatter matrices: $S_i = \sum_{x\in\omega_i}(x-\mu_i)(x-\mu_i)^T$, so that $\tilde{s}_i^2 = w^T S_i w$
  • Within-class scatter matrix $S_W$: $S_W = S_1 + S_2$, which makes the denominator $\tilde{s}_1^2 + \tilde{s}_2^2 = w^T S_W w$
  • Expanding the numerator: $|\tilde{\mu}_1 - \tilde{\mu}_2|^2 = w^T(\mu_1-\mu_2)(\mu_1-\mu_2)^T w = w^T S_B w$, where $S_B$ is the between-class scatter matrix
  • Final objective function: $J(w) = \dfrac{w^T S_B w}{w^T S_W w}$
    • Normalizing the denominator: if the numerator and denominator could both take arbitrary values there would be infinitely many solutions, so we constrain the denominator to $w^T S_W w = 1$
    • Lagrange multipliers then give $S_B w = \lambda S_W w$
    • Multiplying both sides by $S_W^{-1}$: $S_W^{-1} S_B w = \lambda w$, i.e. the projection direction $w$ is an eigenvector of $S_W^{-1} S_B$ (see the sketch after this list)
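
The whole derivation reduces to an eigenproblem for $S_W^{-1} S_B$. Below is a minimal two-class sketch (with illustrative toy data of my own, not from the original article) that builds the two scatter matrices directly and takes the leading eigenvector as the LDA direction:

import numpy as np

# illustrative two-class toy data (my own numbers, not from the article)
X1 = np.array([[4.0, 2.0], [2.0, 4.0], [2.0, 3.0], [3.0, 6.0], [4.0, 4.0]])
X2 = np.array([[9.0, 10.0], [6.0, 8.0], [9.0, 5.0], [8.0, 7.0], [10.0, 8.0]])
mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)

# within-class scatter S_W = S_1 + S_2
S_W = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)

# between-class scatter S_B = (mu1 - mu2)(mu1 - mu2)^T
d = (mu1 - mu2).reshape(-1, 1)
S_B = d @ d.T

# w is the eigenvector of S_W^-1 S_B with the largest eigenvalue
vals, vecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
w = vecs[:, np.argmax(vals.real)].real
print('LDA direction w:', w)
print('projected class means:', mu1 @ w, mu2 @ w)

The rest of this section carries out the same construction, step by step, on the iris dataset.
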
feature_dict = {i:label for i,label in zip(
                range(4),
                  ('sepal length in cm',
                  'sepal width in cm',
                  'petal length in cm',
                  'petal width in cm', ))}

import pandas as pd

df = pd.read_csv(
    filepath_or_buffer='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
    header=None,
    sep=',',
    )
df.columns = [l for i,l in sorted(feature_dict.items())] + ['class label']
df.dropna(how="all", inplace=True) # to drop the empty line at file-end

df.tail()
sepal length in cm sepal width in cm petal length in cm petal width in cm class label
145 6.7 3.0 5.2 2.3 Iris-virginica
146 6.3 2.5 5.0 1.9 Iris-virginica
147 6.5 3.0 5.2 2.0 Iris-virginica
148 6.2 3.4 5.4 2.3 Iris-virginica
149 5.9 3.0 5.1 1.8 Iris-virginica
from sklearn.preprocessing import LabelEncoder

X = df[['sepal length in cm','sepal width in cm','petal length in cm','petal width in cm']].values
y = df['class label'].values

enc = LabelEncoder()
label_encoder = enc.fit(y)
y = label_encoder.transform(y) + 1

label_dict = {1: 'Setosa', 2: 'Versicolor', 3: 'Virginica'}  # used by the plotting helpers below

Compute the mean vector m_i of each of the three iris species over every feature dimension.

import numpy as np
np.set_printoptions(precision=4)

mean_vectors = []
for cl in range(1,4):
    mean_vectors.append(np.mean(X[y==cl], axis=0))
    print('Mean Vector class %s: %s\n' %(cl, mean_vectors[cl-1]))

Mean Vector class 1: [ 5.006  3.418  1.464  0.244]

Mean Vector class 2: [ 5.936  2.77   4.26   1.326]

Mean Vector class 3: [ 6.588  2.974  5.552  2.026]

Compute two 4×4 matrices: the within-class scatter matrix and the between-class scatter matrix.

S_W = np.zeros((4,4))
for cl,mv in zip(range(1,4), mean_vectors):
    class_sc_mat = np.zeros((4,4))                  # scatter matrix for every class
    for row in X[y == cl]:
        row, mv = row.reshape(4,1), mv.reshape(4,1) # make column vectors
        class_sc_mat += (row-mv).dot((row-mv).T)
    S_W += class_sc_mat                             # sum class scatter matrices
print('within-class Scatter Matrix:\n', S_W)


overall_mean = np.mean(X, axis=0)

S_B = np.zeros((4,4))
for i,mean_vec in enumerate(mean_vectors):  
    n = X[y==i+1,:].shape[0]
    mean_vec = mean_vec.reshape(4,1) # make column vector
    overall_mean = overall_mean.reshape(4,1) # make column vector
    S_B += n * (mean_vec - overall_mean).dot((mean_vec - overall_mean).T)

print('between-class Scatter Matrix:\n', S_B)

between-class Scatter Matrix:
 [[  63.2121  -19.534   165.1647   71.3631]
  [ -19.534    10.9776  -56.0552  -22.4924]
  [ 165.1647  -56.0552  436.6437  186.9081]
  [  71.3631  -22.4924  186.9081   80.6041]]

eig_vals, eig_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))

for i in range(len(eig_vals)):
    eigvec_sc = eig_vecs[:,i].reshape(4,1)   
    print('\nEigenvector {}: \n{}'.format(i+1, eigvec_sc.real))
    print('Eigenvalue {:}: {:.2e}'.format(i+1, eig_vals[i].real))

Eigenvector 1:
[[ 0.2049]
[ 0.3871]
[-0.5465]
[-0.7138]]
Eigenvalue 1: 3.23e+01
Eigenvector 2:
[[-0.009 ]
[-0.589 ]
[ 0.2543]
[-0.767 ]]
Eigenvalue 2: 2.78e-01
Eigenvector 3:
[[-0.7113]
[ 0.0353]
[-0.0267]
[ 0.7015]]
Eigenvalue 3: -5.76e-15
Eigenvector 4:
[[ 0.422 ]
[-0.4364]
[-0.4851]
[ 0.6294]]
Eigenvalue 4: 7.80e-15

Eigenvalues and eigenvectors:

- Eigenvector: the direction of the projection (mapping)

- Eigenvalue: how important the corresponding eigenvector is
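
As a quick sanity check (not in the original article), every pair returned by np.linalg.eig should satisfy the defining relation $S_W^{-1} S_B\, v = \lambda v$:

# optional check: verify (S_W^-1 S_B) v = lambda * v for every eigenpair
M = np.linalg.inv(S_W).dot(S_B)
for i in range(len(eig_vals)):
    v = eig_vecs[:, i]
    assert np.allclose(M.dot(v), eig_vals[i] * v), 'eigenpair %d does not satisfy M v = lambda v' % i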

#Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]

# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs = sorted(eig_pairs, key=lambda k: k[0], reverse=True)

# Visually confirm that the list is correctly sorted by decreasing eigenvalues

print('Eigenvalues in decreasing order:\n')
for i in eig_pairs:
    print(i[0])

Eigenvalues in decreasing order:
32.2719577997
0.27756686384
7.7995841654e-15
5.76433252705e-15

print('Variance explained:\n')
eigv_sum = sum(eig_vals)
for i,j in enumerate(eig_pairs):
    print('eigenvalue {0:}: {1:.2%}'.format(i+1, (j[0]/eigv_sum).real))

Variance explained:
eigenvalue 1: 99.15%
eigenvalue 2: 0.85%
eigenvalue 3: 0.00%
eigenvalue 4: 0.00%

W = np.hstack((eig_pairs[0][1].reshape(4,1), eig_pairs[1][1].reshape(4,1)))
print('Matrix W:\n', W.real)

Matrix W:
[[ 0.2049 -0.009 ]
[ 0.3871 -0.589 ]
[-0.5465 0.2543]
[-0.7138 -0.767 ]]

X_lda = X.dot(W)
assert X_lda.shape == (150,2), "The matrix is not 150x2 dimensional."
from matplotlib import pyplot as plt

def plot_step_lda():

    ax = plt.subplot(111)
    for label,marker,color in zip(
        range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):

        plt.scatter(x=X_lda[:,0].real[y == label],
                y=X_lda[:,1].real[y == label],
                marker=marker,
                color=color,
                alpha=0.5,
                label=label_dict[label]
                )

    plt.xlabel('LD1')
    plt.ylabel('LD2')

    leg = plt.legend(loc='upper right', fancybox=True)
    leg.get_frame().set_alpha(0.5)
    plt.title('LDA: Iris projection onto the first 2 linear discriminants')

    # hide axis ticks
    plt.tick_params(axis="both", which="both", bottom=False, top=False,
            labelbottom=True, left=False, right=False, labelleft=True)

    # remove axis spines
    ax.spines["top"].set_visible(False)  
    ax.spines["right"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.spines["left"].set_visible(False)    

    plt.grid()
    plt.tight_layout()
    plt.show()

plot_step_lda()
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# LDA
sklearn_lda = LDA(n_components=2)
X_lda_sklearn = sklearn_lda.fit_transform(X, y)
def plot_scikit_lda(X, title):

    ax = plt.subplot(111)
    for label,marker,color in zip(
        range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):

        plt.scatter(x=X[:,0][y == label],
                    y=X[:,1][y == label] * -1, # flip the figure
                    marker=marker,
                    color=color,
                    alpha=0.5,
                    label=label_dict[label])

    plt.xlabel('LD1')
    plt.ylabel('LD2')

    leg = plt.legend(loc='upper right', fancybox=True)
    leg.get_frame().set_alpha(0.5)
    plt.title(title)

    # hide axis ticks
    plt.tick_params(axis="both", which="both", bottom=False, top=False,
            labelbottom=True, left=False, right=False, labelleft=True)

    # remove axis spines
    ax.spines["top"].set_visible(False)  
    ax.spines["right"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.spines["left"].set_visible(False)    

    plt.grid()
    plt.tight_layout()
    plt.show()
plot_step_lda()
plot_scikit_lda(X_lda_sklearn, title='Default LDA via scikit-learn')

Principal Component Analysis (PCA)

PCA is the most commonly used dimensionality-reduction technique; it extracts the most valuable information, based on variance.

Representing vectors:

  • Inner product: $A \cdot B = |A|\,|B|\cos(\theta)$
  • Interpretation: the inner product measures how far A extends along the direction of B
  • If vector B has unit length, the inner product of A and B equals the (signed) length of the projection of A onto the line spanned by B: $A \cdot B = |A|\cos(\theta)$
  • Writing a vector as (3,2) really denotes the linear combination $3\,(1,0)^T + 2\,(0,1)^T$
  • Basis: (1,0) and (0,1) are called a basis of the two-dimensional space (a short numeric sketch follows this list)
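
A minimal numerical illustration of the inner product and of projection onto a unit vector (the numbers are my own, not from the original figures):

import numpy as np

A = np.array([3.0, 2.0])
B = np.array([1.0, 0.0])                       # a unit-length vector

inner = A.dot(B)                               # A . B = |A| |B| cos(theta)
cos_theta = inner / (np.linalg.norm(A) * np.linalg.norm(B))

# because |B| = 1, the inner product equals the length of A's projection onto B
print(inner, np.linalg.norm(A) * cos_theta)    # both print 3.0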

Change of basis:

  • The basis vectors are orthogonal (their inner product is 0, i.e. they are mutually perpendicular); the requirement is that they be linearly independent
  • Transformation: take the inner product of the data with the first basis vector, and the result is the first coordinate in the new basis; then take the inner product with the second basis vector to get the second new coordinate
  • Mapping the data point (3,2) onto a new basis therefore amounts to one inner product per basis vector (see the sketch after this list)
  • The meaning of multiplying two matrices: every column vector of the right-hand matrix is transformed into the space whose basis vectors are the rows of the left-hand matrix
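
A small change-of-basis sketch. The orthonormal basis $(1/\sqrt2, 1/\sqrt2)$, $(-1/\sqrt2, 1/\sqrt2)$ used here is my own choice for illustration, not necessarily the one in the original figure:

import numpy as np

# rows of P are the new (orthonormal) basis vectors
P = np.array([[ 1/np.sqrt(2), 1/np.sqrt(2)],
              [-1/np.sqrt(2), 1/np.sqrt(2)]])
x = np.array([3.0, 2.0])

# each new coordinate is the inner product of x with one basis vector
print(P.dot(x))    # [ 3.5355 -0.7071 ], i.e. ( 5/sqrt(2), -1/sqrt(2) )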

The covariance matrix:

  • Direction: how should this direction (i.e. the basis) be chosen so that as much of the original information as possible is preserved? An intuitive criterion: the projected values should be as spread out as possible
  • Variance: $\mathrm{Var}(a) = \frac{1}{m}\sum_{i=1}^{m}(a_i - \mu)^2$
  • We look for a one-dimensional basis such that, once all the data are expressed as coordinates on this basis, the variance is maximized
  • Covariance (assuming the means are 0): $\mathrm{Cov}(a,b) = \frac{1}{m}\sum_{i=1}^{m} a_i b_i$ (a quick numeric check follows this list)
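
A quick check of the two formulas on mean-centered data (illustrative numbers of my own):

import numpy as np

a = np.array([-1.0, -1.0, 0.0, 2.0, 0.0])    # already mean-centered
b = np.array([-2.0,  0.0, 0.0, 1.0, 1.0])    # already mean-centered
m = len(a)

var_a  = (a ** 2).sum() / m                  # Var(a)   = (1/m) * sum(a_i^2)
cov_ab = (a * b).sum() / m                   # Cov(a,b) = (1/m) * sum(a_i * b_i)
print(var_a, cov_ab)                         # 1.2 0.8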

Covariance:

  • If we only ever picked the direction of maximum variance, the subsequent directions would end up almost coinciding with the first one.
  • Solution: for the two fields to carry as much of the original information as possible, we do not want any (linear) correlation between them
  • Covariance: the correlation between two fields can be measured by their covariance, as defined above
  • When the covariance is 0, the two fields are (linearly) uncorrelated. To make the covariance 0, the second basis vector can only be chosen in a direction orthogonal to the first, so the two chosen directions are necessarily orthogonal.

Optimization objective:

  • To reduce a set of N-dimensional vectors to K dimensions (0 < K < N), the goal is to choose K orthonormal basis vectors such that, after the original data are transformed onto this basis, the pairwise covariances between fields are 0 and the variance of each field is as large as possible
  • Covariance matrix: with the two fields a and b stacked as the rows of X, $C = \frac{1}{m} X X^T = \begin{pmatrix} \frac{1}{m}\sum_i a_i^2 & \frac{1}{m}\sum_i a_i b_i \\ \frac{1}{m}\sum_i a_i b_i & \frac{1}{m}\sum_i b_i^2 \end{pmatrix}$
  • The two diagonal elements of this matrix are the variances of the two fields, and the off-diagonal elements are the covariance of a and b.
  • Diagonalizing the covariance matrix: make every off-diagonal element 0, and sort the diagonal elements from largest to smallest
  • Covariance matrix diagonalization: with $Y = PX$, the covariance matrix of the transformed data is $D = \frac{1}{m} Y Y^T = P C P^T$
  • Real symmetric matrix: an n×n real symmetric matrix always has n orthonormal eigenvectors $E = (e_1, e_2, \dots, e_n)$
  • A real symmetric matrix can therefore be diagonalized: $E^T C E = \Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$
  • Arranging the eigenvectors as rows, sorted by decreasing eigenvalue, the matrix formed by the first K rows multiplied by the original data matrix X gives the reduced data matrix Y

A worked PCA example:

Data: (figure omitted)

Covariance matrix: (figure omitted)

Eigenvalues: (figure omitted)

Eigenvectors: (figure omitted)

Diagonalization: (figure omitted)

Dimensionality reduction: (figure omitted)
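
Since the figures with the concrete numbers are not available, here is a small end-to-end sketch of the same pipeline on an illustrative 2×5 mean-centered data matrix of my own, reduced to K = 1 dimension:

import numpy as np

# illustrative, already mean-centered data: rows are fields, columns are samples
X = np.array([[-1.0, -1.0, 0.0, 2.0, 0.0],
              [-2.0,  0.0, 0.0, 1.0, 1.0]])
m = X.shape[1]

# covariance matrix C = (1/m) X X^T
C = X.dot(X.T) / m
print('C =\n', C)                        # [[1.2 0.8], [0.8 1.2]]

# eigendecomposition; eigh is appropriate since C is real and symmetric
eig_vals, eig_vecs = np.linalg.eigh(C)
order = np.argsort(eig_vals)[::-1]       # sort eigenvalues from largest to smallest
eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]
print('eigenvalues:', eig_vals)          # [2.  0.4]

# rows of P are the eigenvectors; keeping the first K rows reduces the dimension
K = 1
P = eig_vecs.T[:K]
Y = P.dot(X)                             # the reduced K x m data matrix
print('Y =', Y)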

import numpy as np
import pandas as pd
df = pd.read_csv('iris.data')  # note: without header=None the first record becomes the column header (renamed below)
df.head()
5.1 3.5 1.4 0.2 Iris-setosa
0 4.9 3.0 1.4 0.2 Iris-setosa
1 4.7 3.2 1.3 0.2 Iris-setosa
2 4.6 3.1 1.5 0.2 Iris-setosa
3 5.0 3.6 1.4 0.2 Iris-setosa
4 5.4 3.9 1.7 0.4 Iris-setosa
df.columns=['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
df.head()
sepal_len sepal_wid petal_len petal_wid class
0 4.9 3.0 1.4 0.2 Iris-setosa
1 4.7 3.2 1.3 0.2 Iris-setosa
2 4.6 3.1 1.5 0.2 Iris-setosa
3 5.0 3.6 1.4 0.2 Iris-setosa
# split data table into data X and class labels y

X = df.iloc[:,0:4].values
y = df.iloc[:,4].values
from matplotlib import pyplot as plt
import math

label_dict = {1: 'Iris-Setosa',
              2: 'Iris-Versicolor',
              3: 'Iris-Virgnica'}

feature_dict = {0: 'sepal length [cm]',
                1: 'sepal width [cm]',
                2: 'petal length [cm]',
                3: 'petal width [cm]'}


plt.figure(figsize=(8, 6))
for cnt in range(4):
    plt.subplot(2, 2, cnt+1)
    for lab in ('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'):
        plt.hist(X[y==lab, cnt],
                     label=lab,
                     bins=10,
                     alpha=0.3,)
    plt.xlabel(feature_dict[cnt])
    plt.legend(loc='upper right', fancybox=True, fontsize=8)

plt.tight_layout()
plt.show()
output1
from sklearn.preprocessing import StandardScaler
X_std = StandardScaler().fit_transform(X)
print (X_std)

[[-1.1483555  -0.11805969 -1.35396443 -1.32506301]
 [-1.3905423   0.34485856 -1.41098555 -1.32506301]
 [-1.51163569  0.11339944 -1.29694332 -1.32506301]
 ...
 [ 0.42585866  0.8077768   0.92688012  1.4427088 ]
 [ 0.06257847 -0.11805969  0.75581678  0.78371551]]

mean_vec = np.mean(X_std, axis=0)
cov_mat = (X_std - mean_vec).T.dot((X_std - mean_vec)) / (X_std.shape[0]-1)
print('Covariance matrix \n%s' %cov_mat)

Covariance matrix
[[ 1.00675676 -0.10448539  0.87716999  0.82249094]
 [-0.10448539  1.00675676 -0.41802325 -0.35310295]
 [ 0.87716999 -0.41802325  1.00675676  0.96881642]
 [ 0.82249094 -0.35310295  0.96881642  1.00675676]]

print('NumPy covariance matrix: \n%s' %np.cov(X_std.T))

NumPy covariance matrix:
[[ 1.00675676 -0.10448539  0.87716999  0.82249094]
 [-0.10448539  1.00675676 -0.41802325 -0.35310295]
 [ 0.87716999 -0.41802325  1.00675676  0.96881642]
 [ 0.82249094 -0.35310295  0.96881642  1.00675676]]

cov_mat = np.cov(X_std.T)

eig_vals, eig_vecs = np.linalg.eig(cov_mat)

print('Eigenvectors \n%s' %eig_vecs)
print('\nEigenvalues \n%s' %eig_vals)

Eigenvectors
[[ 0.52308496 -0.36956962 -0.72154279  0.26301409]
 [-0.25956935 -0.92681168  0.2411952  -0.12437342]
 [ 0.58184289 -0.01912775  0.13962963 -0.80099722]
 [ 0.56609604 -0.06381646  0.63380158  0.52321917]]

Eigenvalues
[ 2.92442837  0.93215233  0.14946373  0.02098259]

# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
print (eig_pairs)
print ('----------')
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort(key=lambda x: x[0], reverse=True)

# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in descending order:')
for i in eig_pairs:
    print(i[0])

[(2.9244283691111144, array([ 0.52308496, -0.25956935,  0.58184289,  0.56609604])),
 (0.93215233025350641, array([-0.36956962, -0.92681168, -0.01912775, -0.06381646])),
 (0.14946373489813314, array([-0.72154279,  0.2411952 ,  0.13962963,  0.63380158])),
 (0.020982592764270606, array([ 0.26301409, -0.12437342, -0.80099722,  0.52321917]))]
----------
Eigenvalues in descending order:
2.92442836911
0.932152330254
0.149463734898
0.0209825927643

tot = sum(eig_vals)
var_exp = [(i / tot)*100 for i in sorted(eig_vals, reverse=True)]
print (var_exp)
cum_var_exp = np.cumsum(var_exp)
cum_var_exp

[72.620033326920336, 23.147406858644135, 3.7115155645845164, 0.52104424985101538]

array([ 72.62003333, 95.76744019, 99.47895575, 100. ])

a = np.array([1,2,3,4])
print (a)
print ('-----------')
print (np.cumsum(a))

[1 2 3 4]
-----------
[ 1 3 6 10]

plt.figure(figsize=(6, 4))

plt.bar(range(4), var_exp, alpha=0.5, align='center',
            label='individual explained variance')
plt.step(range(4), cum_var_exp, where='mid',
             label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
output2
matrix_w = np.hstack((eig_pairs[0][1].reshape(4,1),
                      eig_pairs[1][1].reshape(4,1)))

print('Matrix W:\n', matrix_w)

Matrix W:
[[ 0.52308496 -0.36956962]
[-0.25956935 -0.92681168]
[ 0.58184289 -0.01912775]
[ 0.56609604 -0.06381646]]

Y = X_std.dot(matrix_w)
Y

array([[-2.10795032,  0.64427554],
       [-2.38797131,  0.30583307],
       [-2.32487909,  0.56292316],
       ...,
       [ 1.3690965 , -1.01583909],
       [ 0.94680339,  0.02182097]])

plt.figure(figsize=(6, 4))
for lab, col in zip(('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'),
                        ('blue', 'red', 'green')):
     plt.scatter(X[y==lab, 0],
                X[y==lab, 1],
                label=lab,
                c=col)
plt.xlabel('sepal_len')
plt.ylabel('sepal_wid')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
output3
plt.figure(figsize=(6, 4))
for lab, col in zip(('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'),
                        ('blue', 'red', 'green')):
     plt.scatter(Y[y==lab, 0],
                Y[y==lab, 1],
                label=lab,
                c=col)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend(loc='lower center')
plt.tight_layout()
plt.show()
output4

