Introduction
This experiment builds a fully connected classification network on League of Legends match data, tunes the network's node count and dropout rate with particle swarm optimization, and finally compares the prediction accuracy of the default model and the optimized model on match outcomes. Particle swarm optimization (PSO) is an evolutionary computation technique inspired by studies of bird-flock foraging behavior. Its basic idea is to search for the optimum through cooperation and information sharing among the individuals of a swarm. Its strengths are fast convergence and a simple implementation; its weakness is a tendency to get trapped in local optima.
Program / dataset download
Code analysis
Import modules
from tensorflow.keras.layers import Input,Dense,Dropout,Activation
import matplotlib.pyplot as plt
from tensorflow.keras.models import load_model
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from sklearn.metrics import accuracy_score
import pandas as pd
import numpy as np
import json
from copy import deepcopy
The data is the state of each League of Legends match at the ten-minute mark. A match pits a blue team against a red team; blueWins indicates whether the blue side won, and the other columns are basically dragons, jungle monsters, gold and so on (forgive me, I can't explain them all: I only climbed to Silver in grad school and got flamed every game over my item builds. Incidentally, I always posed as a girl in game and played support, since that cut the flaming way down, and my friends list is full now ┭┮﹏┭┮ it's not me that's wrong, it's this world).
data = pd.read_csv("Static/high_diamond_ranked_10min.csv").iloc[:,1:]
print("Data shape:",data.shape)
print("First 5 rows and 5 columns:")
data.iloc[:,:5].head()
Data shape: (9879, 39)
First 5 rows and 5 columns:

|   | blueWins | blueWardsPlaced | blueWardsDestroyed | blueFirstBlood | blueKills |
|---|---|---|---|---|---|
| 0 | 0 | 28 | 2 | 1 | 9 |
| 1 | 0 | 12 | 1 | 0 | 5 |
| 2 | 0 | 15 | 0 | 0 | 7 |
| 3 | 0 | 43 | 1 | 0 | 4 |
| 4 | 0 | 75 | 4 | 0 | 6 |
Split the dataset into training, validation and test sets. Here the validation set is used only for PSO tuning; it is not used during network training to save a best checkpoint.
data = data.sample(frac=1.0)  # shuffle the data
trainData = data.iloc[:6000]  # training set
xTrain = trainData.values[:,1:]
yTrain = trainData.values[:,:1]
valData = data.iloc[6000:8000]  # validation set
xVal = valData.values[:,1:]
yVal = valData.values[:,:1]
testData = data.iloc[8000:]  # test set
xTest = testData.values[:,1:]
yTest = testData.values[:,:1]
One particle is one candidate solution; in this experiment a particle is an array of (node count, dropout rate). PSO evaluates the fitness of each particle's solution and searches for the best one. The PSO class below iterates the particles over the features being optimized and supports integer or float features; for a custom discrete range, you can assign a penalty fitness to invalid particles inside the fitness function. After instantiating the class, pass a fitness function to iterate() to evolve the particle solutions. This experiment uses PSO in its simplest form; the velocity and position updates are implemented in the iterate method below.
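Written out, the update rule used in iterate below is the bare-bones PSO form (no inertia weight), where \(r_1, r_2 \sim U(0,1)\) are redrawn per feature and \(x_i\), \(v_i\) are particle \(i\)'s position and velocity:

```latex
v_i \leftarrow v_i + c_1 r_1 \,(\mathrm{pBest}_i - x_i) + c_2 r_2 \,(\mathrm{gBest} - x_i)
\qquad
x_i \leftarrow x_i + v_i
```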
class PSO():
    def __init__(self,featureNum,featureArea,featureLimit,featureType,particleNum=5,epochMax=10,c1=2,c2=2):
        '''
        Particle swarm optimizer
        :param featureNum: number of features per particle
        :param featureArea: matrix of per-feature lower/upper bounds
        :param featureLimit: bound openness per feature: 0 = excluded (open), 1 = included (closed)
        :param featureType: feature types, int or float
        :param particleNum: number of particles
        :param epochMax: maximum number of iterations
        :param c1: cognitive (self) learning factor
        :param c2: social (swarm) learning factor
        '''
        self.featureNum = featureNum
        self.featureArea = np.array(featureArea).reshape(featureNum,2)
        self.featureLimit = np.array(featureLimit).reshape(featureNum,2)
        self.featureType = featureType
        self.particleNum = particleNum
        self.epochMax = epochMax
        self.c1 = c1
        self.c2 = c2
        self.epoch = 0  # iterations completed so far
        # each particle's best fitness so far
        self.pBest = [-1e+10 for i in range(particleNum)]
        self.pBestArgs = [None for i in range(particleNum)]
        # global best fitness
        self.gBest = -1e+10
        self.gBestArgs = None
        # initialize all particles
        self.particles = [self.initParticle() for i in range(particleNum)]
        # initialize all particle velocities
        self.vs = [np.random.uniform(0,1,size=featureNum) for i in range(particleNum)]
        # iteration history
        self.gHistory = {"feature%d"%i:[] for i in range(featureNum)}
        self.gHistory["swarm mean"] = []
        self.gHistory["global best"] = []

    def standardValue(self,value,lowArea,upArea,lowLimit,upLimit,valueType):
        '''
        Clamp a feature value into its allowed interval
        :param value: feature value
        :param lowArea: lower bound
        :param upArea: upper bound
        :param lowLimit: openness of the lower bound (0 = excluded, 1 = included)
        :param upLimit: openness of the upper bound (0 = excluded, 1 = included)
        :param valueType: feature type
        :return: the corrected value
        '''
        if value < lowArea:
            value = lowArea
        if value > upArea:
            value = upArea
        if valueType is int:
            value = np.round(value,0)
            # lower bound is open: push one step inside
            if value <= lowArea and lowLimit==0:
                value = lowArea + 1
            # upper bound is open: push one step inside
            if value >= upArea and upLimit==0:
                value = upArea - 1
        elif valueType is float:
            # lower bound is open: push slightly inside
            if value <= lowArea and lowLimit == 0:
                value = lowArea + 1e-10
            # upper bound is open: push slightly inside
            if value >= upArea and upLimit==0:
                value = upArea - 1e-10
        return value

    def initParticle(self):
        '''Randomly initialize one particle'''
        values = []
        # one value per feature
        for i in range(self.featureNum):
            # this feature's bounds
            lowArea = self.featureArea[i][0]
            upArea = self.featureArea[i][1]
            # openness of the bounds
            lowLimit = self.featureLimit[i][0]
            upLimit = self.featureLimit[i][1]
            # random value within the bounds
            value = np.random.uniform(0,1) * (upArea-lowArea) + lowArea
            value = self.standardValue(value,lowArea,upArea,lowLimit,upLimit,self.featureType[i])
            values.append(value)
        return values

    def iterate(self,calFitness):
        '''
        Run the iterations
        :param calFitness: fitness function; takes one particle's features and the global best fitness, returns the fitness
        '''
        while self.epoch<self.epochMax:
            self.epoch += 1
            for i,particle in enumerate(self.particles):
                # this particle's fitness
                fitness = calFitness(particle,self.gBest)
                # update this particle's personal best
                if self.pBest[i] < fitness:
                    self.pBest[i] = fitness
                    self.pBestArgs[i] = deepcopy(particle)
                # update the global best
                if self.gBest < fitness:
                    self.gBest = fitness
                    self.gBestArgs = deepcopy(particle)
            # update the particles
            for i, particle in enumerate(self.particles):
                # update the velocity
                self.vs[i] = np.array(self.vs[i]) \
                             + self.c1*np.random.uniform(0,1,size=self.featureNum)*(np.array(self.pBestArgs[i])-np.array(self.particles[i])) \
                             + self.c2*np.random.uniform(0,1,size=self.featureNum)*(np.array(self.gBestArgs)-np.array(self.particles[i]))
                # update the position (feature values)
                self.particles[i] = np.array(particle) + self.vs[i]
                # clamp the feature values back into their intervals
                values = []
                for j in range(self.featureNum):
                    # this feature's bounds
                    lowArea = self.featureArea[j][0]
                    upArea = self.featureArea[j][1]
                    # openness of the bounds
                    lowLimit = self.featureLimit[j][0]
                    upLimit = self.featureLimit[j][1]
                    # current value
                    value = self.particles[i][j]
                    value = self.standardValue(value,lowArea,upArea,lowLimit,upLimit,self.featureType[j])
                    values.append(value)
                self.particles[i] = values
            # record history
            for i in range(self.featureNum):
                self.gHistory["feature%d"%i].append(self.gBestArgs[i])
            self.gHistory["swarm mean"].append(np.mean(self.pBest))
            self.gHistory["global best"].append(self.gBest)
            print("PSO epoch:%d/%d swarm mean:%.4f global best:%.4f"%(self.epoch,self.epochMax,np.mean(self.pBest),self.gBest))
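As a sanity check, the same update rule can be run standalone on a one-dimensional toy problem. The quadratic fitness and the search range [0, 6] below are invented for illustration; the variable names mirror the class above:

```python
import numpy as np

np.random.seed(0)
particleNum, epochMax, c1, c2 = 5, 30, 2, 2
particles = np.random.uniform(0, 6, size=particleNum)  # positions
vs = np.random.uniform(0, 1, size=particleNum)         # velocities
pBest = np.full(particleNum, -1e10)                    # per-particle best fitness
pBestArgs = particles.copy()                           # per-particle best position
gBest, gBestArgs = -1e10, None                         # global best

def fitness(x):
    # toy objective, maximized at x = 3
    return -(x - 3) ** 2

for epoch in range(epochMax):
    for i, x in enumerate(particles):
        f = fitness(x)
        if f > pBest[i]:
            pBest[i], pBestArgs[i] = f, x
        if f > gBest:
            gBest, gBestArgs = f, x
    # same velocity/position update as PSO.iterate above
    r1 = np.random.uniform(0, 1, size=particleNum)
    r2 = np.random.uniform(0, 1, size=particleNum)
    vs = vs + c1 * r1 * (pBestArgs - particles) + c2 * r2 * (gBestArgs - particles)
    particles = np.clip(particles + vs, 0, 6)  # stay inside the search range

print("best x: %.3f  best fitness: %.4f" % (gBestArgs, gBest))
```

With c1 = c2 = 2 and no inertia weight the swarm oscillates quite a bit, which is exactly the "simplest form" the text describes; the global best still homes in on the optimum.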
The buildNet function builds a simple fully connected classifier from a node count and a dropout rate; it takes 38 input features and produces 1 output. (Other hyperparameters such as the number of layers or the learning rate could also be optimized; to keep things simple, only these two are tuned here.) It then trains the network.
def buildNet(nodeNum,p):
    '''
    Build and train a fully connected network; return the model, the training
    history, the validation accuracy and the test accuracy
    :param nodeNum: number of hidden nodes
    :param p: dropout rate
    '''
    # input layer: 38 match features
    inputLayer = Input(shape=(38,))
    # hidden layer
    middle = Dense(nodeNum)(inputLayer)
    middle = Dropout(p)(middle)
    # output layer: binary classification
    outputLayer = Dense(1,activation="sigmoid")(middle)
    # build the model with binary cross-entropy loss
    model = Model(inputs=inputLayer,outputs=outputLayer)
    optimizer = Adam(learning_rate=1e-3)
    model.compile(optimizer=optimizer,loss="binary_crossentropy",metrics=['acc'])
    # train
    history = model.fit(xTrain,yTrain,verbose=0,batch_size=1000,epochs=100,validation_data=(xVal,yVal)).history
    # validation accuracy
    valAcc = accuracy_score(yVal,model.predict(xVal).round(0))
    # test accuracy
    testAcc = accuracy_score(yTest,model.predict(xTest).round(0))
    return model,history,valAcc,testAcc
To have a baseline for the optimized model, we first train a network with default hyperparameters, each set to the midpoint of its search range, and print its architecture and metrics.
nodeArea = [10,200]  # node-count range
pArea = [0,0.5]  # dropout-rate range
# train a network using the midpoint of each range
nodeNum = int(np.mean(nodeArea))
p = np.mean(pArea)
defaultNet,defaultHistory,defaultValAcc,defaultTestAcc = buildNet(nodeNum,p)
defaultNet.summary()
print("\nDefault network  nodes:%d  dropout:%.2f  val acc:%.4f  test acc:%.4f"%(nodeNum,p,defaultValAcc,defaultTestAcc))
Model: "model_346"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_347 (InputLayer) [(None, 38)] 0
_________________________________________________________________
dense_691 (Dense) (None, 105) 4095
_________________________________________________________________
dropout_346 (Dropout) (None, 105) 0
_________________________________________________________________
dense_692 (Dense) (None, 1) 106
=================================================================
Total params: 4,201
Trainable params: 4,201
Non-trainable params: 0
_________________________________________________________________
Default network  nodes:105  dropout:0.25  val acc:0.6535  test acc:0.6578
Instantiate the PSO class with the range information and start iterating. The fitness function takes one particle and the global best fitness, and returns the validation accuracy of that particle's configuration.
featureNum = 2  # 2 features to optimize
featureArea = [nodeArea,pArea]  # value ranges for the 2 features
featureLimit = [[1,1],[0,1]]  # bound openness: 0 = excluded (open), 1 = included (closed)
featureType = [int,float]  # types of the 2 features
# the particle swarm optimizer
pso = PSO(featureNum,featureArea,featureLimit,featureType)
def calFitness(particle,gBest):
    '''Fitness function: takes one particle's feature array and the global best fitness, returns this particle's fitness'''
    nodeNum,p = particle  # unpack the particle's feature values
    net,history,valAcc,testAcc = buildNet(nodeNum,p)
    # this particle's solution beats the global best
    if valAcc>gBest:
        # save the model and its stats
        net.save("Static/best.h5")
        history = pd.DataFrame(history)
        history.to_excel("Static/best.xlsx",index=None)
        with open("Static/info.json","w") as f:
            f.write(json.dumps({"valAcc":valAcc,"testAcc":testAcc}))
    return valAcc
# run the particle swarm optimization
pso.iterate(calFitness)
# load the best model and its training history
bestNet = load_model("Static/best.h5")
with open("Static/info.json","r") as f:
    info = json.loads(f.read())
bestValAcc = float(info["valAcc"])
bestTestAcc = float(info["testAcc"])
bestHistory = pd.read_excel("Static/best.xlsx")
print("Best model  val acc:%.4f  test acc:%.4f"%(bestValAcc,bestTestAcc))
PSO epoch:1/10 swarm mean:0.7210 global best:0.7280
PSO epoch:2/10 swarm mean:0.7210 global best:0.7280
PSO epoch:3/10 swarm mean:0.7251 global best:0.7280
PSO epoch:4/10 swarm mean:0.7275 global best:0.7350
PSO epoch:5/10 swarm mean:0.7275 global best:0.7350
PSO epoch:6/10 swarm mean:0.7299 global best:0.7350
PSO epoch:7/10 swarm mean:0.7313 global best:0.7350
PSO epoch:8/10 swarm mean:0.7313 global best:0.7350
PSO epoch:9/10 swarm mean:0.7313 global best:0.7350
PSO epoch:10/10 swarm mean:0.7313 global best:0.7350
Best model  val acc:0.7350  test acc:0.7350
Let's look at how the PSO optimum changes across iterations.
history = pd.DataFrame(pso.gHistory)
history["epoch"] = range(1,history.shape[0]+1)
history
|   | feature0 | feature1 | swarm mean | global best | epoch |
|---|---|---|---|---|---|
| 0 | 50.0 | 0.267706 | 0.7210 | 0.728 | 1 |
| 1 | 50.0 | 0.267706 | 0.7210 | 0.728 | 2 |
| 2 | 50.0 | 0.267706 | 0.7251 | 0.728 | 3 |
| 3 | 57.0 | 0.201336 | 0.7275 | 0.735 | 4 |
| 4 | 57.0 | 0.201336 | 0.7275 | 0.735 | 5 |
| 5 | 57.0 | 0.201336 | 0.7299 | 0.735 | 6 |
| 6 | 57.0 | 0.201336 | 0.7313 | 0.735 | 7 |
| 7 | 57.0 | 0.201336 | 0.7313 | 0.735 | 8 |
| 8 | 57.0 | 0.201336 | 0.7313 | 0.735 | 9 |
| 9 | 57.0 | 0.201336 | 0.7313 | 0.735 | 10 |
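The history can also be plotted as curves. A minimal sketch, using a stand-in DataFrame whose numbers mirror the table above (the Agg backend and the output filename are my choices, not from the original):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# stand-in for pso.gHistory, copied from the table above
history = pd.DataFrame({
    "swarm mean": [0.7210, 0.7210, 0.7251, 0.7275, 0.7275,
                   0.7299, 0.7313, 0.7313, 0.7313, 0.7313],
    "global best": [0.728, 0.728, 0.728, 0.735, 0.735,
                    0.735, 0.735, 0.735, 0.735, 0.735],
})
history["epoch"] = range(1, history.shape[0] + 1)

fig, ax = plt.subplots()
ax.plot(history["epoch"], history["swarm mean"], label="swarm mean")
ax.plot(history["epoch"], history["global best"], label="global best")
ax.set_xlabel("epoch")
ax.set_ylabel("val acc")
ax.legend()
fig.savefig("pso_history.png", dpi=150)
```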
Comparing the accuracy of the default-parameter model with the PSO-tuned model, the tuning does help somewhat. For learning purposes only...
fig, ax = plt.subplots()
x = np.arange(2)
a = [defaultValAcc,bestValAcc]
b = [defaultTestAcc,bestTestAcc]
total_width, n = 0.8, 2
width = total_width / n
x = x - (total_width - width) / 2
ax.bar(x, a, width=width, label='val',color="#00BFFF")
for x1,y1 in zip(x,a):
    plt.text(x1,y1+0.01,'%.3f' %y1, ha='center',va='bottom')
ax.bar(x + width, b, width=width, label='test',color="#FFA500")
for x1,y1 in zip(x,b):
    plt.text(x1+width,y1+0.01,'%.3f' %y1, ha='center',va='bottom')
ax.legend()
ax.set_xticks([0, 1])
ax.set_ylim([0,1.2])
ax.set_ylabel("acc")
ax.set_xticklabels(["default net","PSO-net"])
fig.savefig("Static/對比.png",dpi=250)