Label Propagation Algorithm (LLGC or LGC)


A hands-on walkthrough of the label propagation algorithm.

Reproducing the paper: Learning with Local and Global Consistency[1]

For the LGC algorithm, see: DecodePaper/notebook/lgc
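
The core of the method, from the paper: build an affinity matrix \(W\) over all points, normalize it to \(S = D^{-\frac{1}{2}} W D^{-\frac{1}{2}}\), and spread the few known labels by iterating

\[F(t+1) = \alpha S F(t) + (1-\alpha) Y\]

which converges to \(F^{*} = (1-\alpha)(I - \alpha S)^{-1} Y\); each point is finally assigned the class with the largest score in its row of \(F^{*}\).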

Initialization

Load the required libraries:

from IPython.display import set_matplotlib_formats
%matplotlib inline
#set_matplotlib_formats('svg', 'pdf')

import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist
from sklearn.datasets import make_moons

save_dir = '../data/images'

Create a simple dataset

Use make_moons to generate a two-moons dataset.

n = 800   # number of samples
n_labeled = 10 # number of labeled samples
X, Y = make_moons(n, shuffle=True, noise=0.1, random_state=1000)

X.shape, Y.shape
((800, 2), (800,))
def one_hot(Y, n_classes):
    '''
    One-hot encode the labels.
    
    Parameters
    =====
    Y: integer labels starting from 0
    n_classes: number of classes
    '''
    out = Y[:, None] == np.arange(n_classes)
    return out.astype(float)
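
A quick illustration of the encoding:

one_hot(np.array([0, 1, 1]), 2)
array([[1., 0.],
       [0., 1.],
       [0., 1.]])
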
Plot the ground-truth classes:

color = ['red' if l == 0 else 'blue' for l in Y]
plt.scatter(X[:, 0], X[:, 1], color=color)
plt.savefig(f"{save_dir}/bi_classification.pdf", format='pdf')
plt.show()

Only the first n_labeled samples keep their one-hot labels; the remaining rows of Y_input are zeros:

Y_input = np.concatenate((one_hot(Y[:n_labeled], 2), np.zeros((n-n_labeled, 2))))

The algorithm, step by step:

Step 1: Build the affinity matrix W

def rbf(x, sigma):
    # note: the paper's affinity is exp(-||x_i - x_j||^2 / (2 * sigma^2));
    # this notebook applies the kernel to the raw (unsquared) Euclidean distance
    return np.exp((-x)/(2* sigma**2))

sigma = 0.2
dm = cdist(X, X, 'euclidean')   # pairwise Euclidean distances
W = rbf(dm, sigma)
np.fill_diagonal(W, 0)   # zero the diagonal: no self-affinity
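
Two properties worth a quick check (an optional aside, not in the original notebook):

assert np.allclose(W, W.T)        # the affinity matrix is symmetric
assert np.all(np.diag(W) == 0)    # and its diagonal is zero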

Step 2: Compute S

\[S = D^{-\frac{1}{2}} W D^{-\frac{1}{2}} \]

where \(D\) is the diagonal degree matrix with \(D_{ii} = \sum_j W_{ij}\). A vectorized implementation:

def calculate_S(W):
    d = np.sum(W, axis=1)              # degree of each node
    D_ = np.sqrt(d*d[:, np.newaxis])   # D_[i, j] = sqrt(d_i * d_j)
    return np.divide(W, D_, where=D_ != 0)


S = calculate_S(W)
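
As a sanity check (optional, not in the original notebook), the vectorized version can be compared against the direct matrix form \(D^{-\frac{1}{2}} W D^{-\frac{1}{2}}\):

d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2} as an explicit diagonal matrix
assert np.allclose(S, D_inv_sqrt @ W @ D_inv_sqrt)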

Result after a single iteration

alpha = 0.99   # propagation weight alpha from the paper
F = np.dot(S, Y_input)*alpha + (1-alpha)*Y_input

Y_result = np.zeros_like(F)
Y_result[np.arange(len(F)), F.argmax(1)] = 1

Y_v = [1 if x == 0 else 0 for x in Y_result[:, 0]]

color = ['red' if l == 0 else 'blue' for l in Y_v]
plt.scatter(X[:, 0], X[:, 1], color=color)
#plt.savefig("iter_1.pdf", format='pdf')
plt.show()

Step 3: Iterate F for n_iter steps until convergence

n_iter = 150
F = Y_input
for t in range(n_iter):
    # F(t+1) = alpha * S F(t) + (1 - alpha) * Y
    F = np.dot(S, F)*alpha + (1-alpha)*Y_input
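
The iteration converges to the paper's closed-form solution \(F^{*} = (1-\alpha)(I - \alpha S)^{-1} Y\). As an optional check (not in the original notebook), the closed form can be computed directly; the \((1-\alpha)\) factor does not affect the argmax:

F_star = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y_input)
print(np.mean(F.argmax(1) == F_star.argmax(1)))   # fraction of labels that agree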

Step 4: Plot the final result

Y_result = np.zeros_like(F)
Y_result[np.arange(len(F)), F.argmax(1)] = 1

Y_v = [1 if x == 0 else 0 for x in Y_result[:, 0]]

color = ['red' if l == 0 else 'blue' for l in Y_v]
plt.scatter(X[:, 0], X[:, 1], color=color)
#plt.savefig("iter_n.pdf", format='pdf')
plt.show()

from sklearn import metrics

print(metrics.classification_report(Y, F.argmax(1)))

acc = metrics.accuracy_score(Y, F.argmax(1))
print('Accuracy:', acc)
              precision    recall  f1-score   support

           0       1.00      0.86      0.92       400
           1       0.88      1.00      0.93       400

   micro avg       0.93      0.93      0.93       800
   macro avg       0.94      0.93      0.93       800
weighted avg       0.94      0.93      0.93       800

Accuracy: 0.92875

Implementing LGC with sklearn

Reference: https://scikit-learn.org/stable/modules/label_propagation.html

sklearn provides two LGC-style models, LabelPropagation and LabelSpreading; the latter is a regularized variant of the former. Two kernels are available for computing \(W\): rbf and knn.

  • the rbf kernel is controlled by the parameter gamma (\(\gamma=\frac{1}{2\sigma^2}\)); see the snippet below
  • the knn kernel is controlled by the parameter n_neighbors (the number of neighbors)
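
As a quick aside (my own, not from the original post), the gamma corresponding to the sigma = 0.2 used earlier would be:

sigma = 0.2
gamma = 1 / (2 * sigma**2)   # = 12.5, for the raw (unscaled) data

Note that pred_lgc below standardizes X with preprocessing.scale and uses a much smaller gamma=0.003, so the two values are not directly comparable.
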
def pred_lgc(X, Y, y_input, numLabels):
    from sklearn import preprocessing 
    from sklearn.semi_supervised import LabelSpreading
    cls = LabelSpreading(max_iter=150, kernel='rbf', gamma=0.003, alpha=.99)
    # X.astype(float) avoids the error "Numerical issues were encountered"
    cls.fit(preprocessing.scale(X.astype(float)), y_input)
    ind_unlabeled = np.arange(numLabels, len(X))   # indices of the unlabeled samples
    y_pred = cls.transduction_[ind_unlabeled]
    y_true = Y[numLabels:].astype(y_pred.dtype)
    return y_true, y_pred
Y_input = np.concatenate((Y[:n_labeled], -np.ones(n-n_labeled)))   # -1 marks unlabeled samples
y_true, y_pred = pred_lgc(X, Y, Y_input, n_labeled)
print(metrics.classification_report(y_true, y_pred))

Implementing LGC with networkx

Reference: networkx.algorithms.node_classification.lgc.local_and_global_consistency. I have not dug into the details yet, so here is just a simple example:

import networkx as nx
from networkx.algorithms import node_classification

G = nx.path_graph(4)
G.nodes[0]['label'] = 'A'   # note: G.node[...] was removed in networkx 2.4; use G.nodes
G.nodes[3]['label'] = 'B'
G.nodes(data=True)

G.edges()

predicted = node_classification.local_and_global_consistency(G)
predicted
['A', 'A', 'B', 'B']
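
A minimal sketch (my own addition, not from the original post) of applying the same networkx routine to the two-moons data, assuming a k-NN connectivity graph as a stand-in for the RBF affinity:

from sklearn.neighbors import kneighbors_graph

# build an undirected k-NN graph over the moons data (k = 10 is an arbitrary choice)
A = kneighbors_graph(X, n_neighbors=10, mode='connectivity')
G_moons = nx.from_scipy_sparse_array(A)  # on networkx < 3.0, use from_scipy_sparse_matrix

# attach ground-truth labels to the first n_labeled nodes only
for i in range(n_labeled):
    G_moons.nodes[i]['label'] = int(Y[i])

pred = np.array(node_classification.local_and_global_consistency(G_moons))
print(metrics.accuracy_score(Y[n_labeled:], pred[n_labeled:]))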

For more, see DecodePaper. If you find it useful, please give it a star! (@DecodePaper)


  1. Zhou D, Bousquet O, Lal T N, et al. Learning with Local and Global Consistency[C]. Neural Information Processing Systems (NIPS), 2003: 321-328.

