EEGNet: Deep Learning for EEG Feature Extraction



Introduction

A brain-computer interface (BCI) uses neural activity as a control signal, enabling direct communication with a computer. This neural signal is typically chosen from a variety of well-studied electroencephalography (EEG) signals. Convolutional neural networks (CNNs), already widespread in computer vision and speech recognition, are mainly used here for automatic feature extraction and classification. CNNs have been applied successfully to EEG-based BCIs, but mostly to a single BCI paradigm at a time, with little use across paradigms. The paper's authors therefore ask whether a single CNN architecture can accurately classify EEG signals from different BCI paradigms while remaining as compact as possible (measured by the number of model parameters).

The paper introduces EEGNet, a compact convolutional neural network for EEG-based BCIs. It builds an EEG-specific model from depthwise and separable convolutions (sketched in code below), encapsulating feature-extraction concepts common in brain-computer interfaces. Across four BCI paradigms (P300 visual evoked potentials, error-related negativity (ERN), movement-related cortical potentials (MRCP), and sensorimotor rhythms (SMR)), the paper compares EEGNet with current state-of-the-art methods for both within-subject and cross-subject classification. The results show that with limited training data, EEGNet generalizes better and performs better than the reference algorithms, and that it transfers effectively to both ERP-based and oscillation-based BCIs.

The experimental results are shown in the paper's comparison figure (not reproduced here). On the P300 dataset the differences between the CNN models are very small, but on the MRCP dataset there are significant differences, with both EEGNet models outperforming all other models. On the ERN dataset, both EEGNet models likewise outperform all other models (p < 0.05).

EEGNet Network Principles

EEGNet network structure diagram (figure not reproduced here):

EEGNet architecture overview (figure not reproduced here):
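In place of the figures, below is a minimal PyTorch sketch of the block structure the paper describes: a temporal convolution, a depthwise spatial convolution across the electrodes, and a separable (depthwise + pointwise) convolution. The class name EEGNetPaper is this post's own label, and the constructor defaults follow the paper's stated hyperparameters (F1 = 8, D = 2, F2 = 16, a temporal kernel of half the 128 Hz sampling rate); treat it as an illustration rather than a reference port. The simpler variant actually trained later in this post is defined in the next section.

import torch
import torch.nn as nn

class EEGNetPaper(nn.Module):
    def __init__(self, n_channels=64, n_samples=128, n_classes=2,
                 F1=8, D=2, F2=16, dropout=0.25):
        super().__init__()
        # Block 1: temporal convolution, then a depthwise spatial
        # convolution over all electrodes (groups=F1 makes it depthwise).
        self.block1 = nn.Sequential(
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(F1),
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),
            nn.BatchNorm2d(F1 * D),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(dropout),
        )
        # Block 2: separable convolution = depthwise temporal convolution
        # followed by a 1x1 pointwise convolution that mixes feature maps.
        self.block2 = nn.Sequential(
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8),
                      groups=F1 * D, bias=False),
            nn.Conv2d(F1 * D, F2, (1, 1), bias=False),
            nn.BatchNorm2d(F2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(dropout),
        )
        # Assumes n_samples is a multiple of 32 (two pooling stages, 4 and 8).
        self.classify = nn.Linear(F2 * (n_samples // 32), n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        x = self.block2(self.block1(x))
        return self.classify(x.flatten(1))

# Example: 2 trials, 64 electrodes, 128 samples (1 s at 128 Hz)
# EEGNetPaper()(torch.randn(2, 1, 64, 128)).shape -> torch.Size([2, 2])

The groups argument is what makes a convolution depthwise in PyTorch: each input feature map gets its own filters and no filter spans maps, which is where most of EEGNet's parameter savings come from.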

EEGNet Network Implementation

import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score, accuracy_score
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

Define the network model:

class EEGNet(nn.Module):
    def __init__(self):
        super(EEGNet, self).__init__()
        self.T = 120  # timepoints per sample
        
        # Layer 1: temporal convolution across all 64 channels
        self.conv1 = nn.Conv2d(1, 16, (1, 64), padding=0)
        self.batchnorm1 = nn.BatchNorm2d(16, affine=False)
        
        # Layer 2
        self.padding1 = nn.ZeroPad2d((16, 17, 0, 1))
        self.conv2 = nn.Conv2d(1, 4, (2, 32))
        self.batchnorm2 = nn.BatchNorm2d(4, affine=False)
        self.pooling2 = nn.MaxPool2d(2, 4)
        
        # Layer 3
        self.padding2 = nn.ZeroPad2d((2, 1, 4, 3))
        self.conv3 = nn.Conv2d(4, 4, (8, 4))
        self.batchnorm3 = nn.BatchNorm2d(4, affine=False)
        self.pooling3 = nn.MaxPool2d((2, 4))
        
        # Fully connected layer
        # This dimension depends on the number of timepoints per sample;
        # with 120 timepoints the flattened feature map is 4*2*7.
        self.fc1 = nn.Linear(4*2*7, 1)

    def forward(self, x):
        # Layer 1
        x = F.elu(self.conv1(x))
        x = self.batchnorm1(x)
        x = F.dropout(x, 0.25, training=self.training)
        # Move the 16 filter maps into the height dimension
        x = x.permute(0, 3, 1, 2)
        
        # Layer 2
        x = self.padding1(x)
        x = F.elu(self.conv2(x))
        x = self.batchnorm2(x)
        x = F.dropout(x, 0.25, training=self.training)
        x = self.pooling2(x)
        
        # Layer 3
        x = self.padding2(x)
        x = F.elu(self.conv3(x))
        x = self.batchnorm3(x)
        x = F.dropout(x, 0.25, training=self.training)
        x = self.pooling3(x)
        
        # Fully connected layer
        x = x.view(-1, 4*2*7)
        x = torch.sigmoid(self.fc1(x))
        return x
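As a quick sanity check of the hard-coded 4*2*7 = 56 feature size in fc1, you can push a random tensor with the expected (batch, 1, timepoints, channels) layout through the model (a small verification sketch, not part of the original post):

model = EEGNet()
# Input layout: (batch, 1, 120 timepoints, 64 channels)
dummy = torch.randn(8, 1, 120, 64)
print(model(dummy).shape)  # torch.Size([8, 1])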

Define the evaluation metrics:
acc: accuracy
auc: area under the ROC curve (AUC)
recall: recall
precision: precision
fmeasure: F1 score

def evaluate(model, X, Y, params=["acc"]):
    results = []
    batch_size = 100
    
    # Predict in batches to bound memory use, then concatenate.
    predicted = []
    model.eval()
    with torch.no_grad():
        for s in range(0, len(X), batch_size):
            inputs = torch.from_numpy(X[s:s + batch_size])
            pred = model(inputs)
            predicted.append(pred.cpu().numpy())
    model.train()
    predicted = np.concatenate(predicted).ravel()
    """
    Supported metrics:
    acc: accuracy
    auc: area under the ROC curve (AUC)
    recall: recall
    precision: precision
    fmeasure: F1 score
    """
    for param in params:
        if param == 'acc':
            results.append(accuracy_score(Y, np.round(predicted)))
        if param == "auc":
            results.append(roc_auc_score(Y, predicted))
        if param == "recall":
            results.append(recall_score(Y, np.round(predicted)))
        if param == "precision":
            results.append(precision_score(Y, np.round(predicted)))
        if param == "fmeasure":
            precision = precision_score(Y, np.round(predicted))
            recall = recall_score(Y, np.round(predicted))
            results.append(2*precision*recall / (precision+recall))
    return results
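Note that the fmeasure branch recomputes F1 by hand and will divide by zero when precision and recall are both zero; sklearn's built-in f1_score is an equivalent, safer drop-in:

from sklearn.metrics import f1_score
# Equivalent to the manual 2*precision*recall/(precision+recall) above:
# results.append(f1_score(Y, np.round(predicted), zero_division=0))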

Build the EEGNet network and set up the binary cross-entropy loss and the Adam optimizer:

# Define the network
net = EEGNet()
# Define the binary cross-entropy (BCE) loss
criterion = nn.BCELoss()
# Define the Adam optimizer
optimizer = optim.Adam(net.parameters())
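Since forward() already ends in a sigmoid, nn.BCELoss is the matching loss here. A common alternative, noted as an aside rather than something this post does, is to return raw logits from forward() and use the numerically more stable fused loss:

# Alternative (assumes forward() returns self.fc1(x) without the sigmoid):
# criterion = nn.BCEWithLogitsLoss()
# Predictions would then need torch.sigmoid(outputs) before rounding.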

Create the datasets

"""
生成訓練數據集,數據集有100個樣本
訓練數據X_train:為[0,1)之間的隨機數;
標簽數據y_train:為0或1
"""
X_train = np.random.rand(100, 1, 120, 64).astype('float32')
y_train = np.round(np.random.rand(100).astype('float32')) 
"""
生成驗證數據集,數據集有100個樣本
驗證數據X_val:為[0,1)之間的隨機數;
標簽數據y_val:為0或1
"""
X_val = np.random.rand(100, 1, 120, 64).astype('float32')
y_val = np.round(np.random.rand(100).astype('float32'))
"""
生成測試數據集,數據集有100個樣本
測試數據X_test:為[0,1)之間的隨機數;
標簽數據y_test:為0或1
"""
X_test = np.random.rand(100, 1, 120, 64).astype('float32')
y_test = np.round(np.random.rand(100).astype('float32'))

Train and validate

batch_size = 32
# Training loop
for epoch in range(10):
    print("\nEpoch ", epoch)
    
    running_loss = 0.0
    for i in range(len(X_train)//batch_size):
        s = i*batch_size
        e = s + batch_size
        
        inputs = torch.from_numpy(X_train[s:e])
        # Labels as a (batch, 1) float tensor to match the network output
        labels = torch.from_numpy(y_train[s:e]).unsqueeze(1)
        
        # zero the parameter gradients
        optimizer.zero_grad()
        
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        
        running_loss += loss.item()
    
    # Validation
    params = ["acc", "auc", "fmeasure"]
    print(params)
    print("Training Loss ", running_loss)
    print("Train - ", evaluate(net, X_train, y_train, params))
    print("Validation - ", evaluate(net, X_val, y_val, params))
    print("Test - ", evaluate(net, X_test, y_test, params))

Sample output (the inputs are random, so exact numbers vary from run to run):

Epoch 0
['acc', 'auc', 'fmeasure']
Training Loss 1.6107637286186218
Train - [0.52, 0.5280448717948718, 0.6470588235294118]
Validation - [0.55, 0.450328407224959, 0.693877551020408]
Test - [0.54, 0.578926282051282, 0.6617647058823529]

Epoch 1
['acc', 'auc', 'fmeasure']
Training Loss 1.5536684393882751
Train - [0.45, 0.41145833333333337, 0.5454545454545454]
Validation - [0.55, 0.4823481116584565, 0.6564885496183207]
Test - [0.65, 0.6530448717948717, 0.7107438016528926]

Epoch 2
['acc', 'auc', 'fmeasure']
Training Loss 1.5197088718414307
Train - [0.49, 0.5524839743589743, 0.5565217391304348]
Validation - [0.53, 0.5870279146141215, 0.5436893203883495]
Test - [0.57, 0.5428685897435898, 0.5567010309278351]

Epoch 3
['acc', 'auc', 'fmeasure']
Training Loss 1.4534167051315308
Train - [0.53, 0.5228365384615385, 0.4597701149425287]
Validation - [0.5, 0.48152709359605916, 0.46808510638297873]
Test - [0.61, 0.6502403846153847, 0.5517241379310345]

Epoch 4
['acc', 'auc', 'fmeasure']
Training Loss 1.3821702003479004
Train - [0.46, 0.4651442307692308, 0.3076923076923077]
Validation - [0.47, 0.5977011494252874, 0.29333333333333333]
Test - [0.52, 0.5268429487179488, 0.35135135135135137]

Epoch 5
['acc', 'auc', 'fmeasure']
Training Loss 1.440490186214447
Train - [0.56, 0.516025641025641, 0.35294117647058826]
Validation - [0.36, 0.3801313628899836, 0.2]
Test - [0.53, 0.6113782051282052, 0.27692307692307694]

Epoch 6
['acc', 'auc', 'fmeasure']
Training Loss 1.4722238183021545
Train - [0.47, 0.4194711538461539, 0.13114754098360656]
Validation - [0.46, 0.5648604269293925, 0.2285714285714286]
Test - [0.5, 0.5348557692307693, 0.10714285714285714]

Epoch 7
['acc', 'auc', 'fmeasure']
Training Loss 1.3460421562194824
Train - [0.51, 0.44871794871794873, 0.1694915254237288]
Validation - [0.44, 0.4490968801313629, 0.2]
Test - [0.53, 0.4803685897435898, 0.14545454545454545]

Epoch 8
['acc', 'auc', 'fmeasure']
Training Loss 1.3336675763130188
Train - [0.54, 0.4130608974358974, 0.20689655172413793]
Validation - [0.39, 0.40394088669950734, 0.14084507042253522]
Test - [0.51, 0.5400641025641025, 0.19672131147540983]

Epoch 9
['acc', 'auc', 'fmeasure']
Training Loss 1.438510239124298
Train - [0.53, 0.5392628205128205, 0.22950819672131148]
Validation - [0.42, 0.4848111658456486, 0.09375]
Test - [0.56, 0.5420673076923076, 0.2413793103448276]


