【TensorFlow 2.0】Training a Model on a Single GPU


The training process in deep learning is often extremely time-consuming: training a model for several hours is routine, training for several days is common, and occasionally a model takes tens of days to train.

Training time is spent mainly in two places: data preparation and parameter iteration.

When data preparation is the main bottleneck of training time, we can use more processes to prepare the data, as in the sketch below.
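For example, tf.data can parallelize preprocessing across multiple CPU threads. A minimal sketch (the preprocess function and the dummy dataset here are hypothetical placeholders):

import tensorflow as tf

# Hypothetical preprocessing function; replace with your own.
def preprocess(x):
    return tf.cast(x, tf.float32) / 255.0

ds = tf.data.Dataset.from_tensor_slices(tf.zeros([1000, 28, 28]))
# num_parallel_calls runs preprocess on several threads in parallel;
# AUTOTUNE lets tf.data pick the degree of parallelism dynamically.
ds = ds.map(preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE) \
       .batch(32) \
       .prefetch(tf.data.experimental.AUTOTUNE)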

When parameter iteration becomes the main bottleneck, the usual remedy is to accelerate computation with a GPU or with Google's TPU.

For details, see 《用GPU加速Keras模型——Colab免費GPU使用攻略》 (Accelerating Keras Models with a GPU: a guide to Colab's free GPU):

https://zhuanlan.zhihu.com/p/68509398

Whether you use the built-in fit method or a custom training loop, switching from CPU to single-GPU training is remarkably convenient: no code changes are required. When a usable GPU is present and no device is explicitly specified, TensorFlow automatically prefers the GPU for creating tensors and executing tensor computations.
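A quick way to see this automatic placement for yourself (a minimal sketch; the exact device strings depend on your machine):

import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))  # non-empty list if a GPU is visible

a = tf.constant([1.0, 2.0, 3.0])
b = a * 2
# With a GPU available, both device strings typically end in "GPU:0".
print(a.device)
print(b.device)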

However, on a shared server in a company or a school lab, with multiple GPUs and multiple users, things are different: TensorFlow by default claims all the memory of every visible GPU, even though it actually uses only part of one. To keep a single user's job from occupying all GPU resources and locking everyone else out, we usually add a few lines at the top of the script to control which GPU the job sees and how much GPU memory it may use, so that others can train their models at the same time.
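Besides the TensorFlow API shown in section 1 below, a common alternative is to hide GPUs at the process level with the CUDA_VISIBLE_DEVICES environment variable, set before TensorFlow is imported (a minimal sketch; the GPU index here is just an example):

import os

# Expose only physical GPU 1 to this process; TensorFlow will see it as GPU:0.
# This must be set before tensorflow is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))  # lists a single GPU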

In a Colab notebook, select GPU under Edit -> Notebook settings -> Hardware accelerator.

Note: the code below runs correctly only on Colab.

You can test it in the following Colab notebook, 《tf_單GPU》:

https://colab.research.google.com/drive/1r5dLoeJq5z01sU72BX2M5UiNSkuxsEFe

%tensorflow_version 2.x
import tensorflow as tf
print(tf.__version__)
from tensorflow.keras import *
 
# Print a timestamped divider line
@tf.function
def printbar():
    ts = tf.timestamp()
    today_ts = ts%(24*60*60)
 
    hour = tf.cast(today_ts//3600+8,tf.int32)%tf.constant(24)  # +8 offset: Beijing time (UTC+8)
    minute = tf.cast((today_ts%3600)//60,tf.int32)
    second = tf.cast(tf.floor(today_ts%60),tf.int32)
 
    def timeformat(m):
        # Zero-pad single-digit values so the output is always HH:MM:SS
        if tf.strings.length(tf.strings.format("{}",m))==1:
            return(tf.strings.format("0{}",m))
        else:
            return(tf.strings.format("{}",m))
 
    timestring = tf.strings.join([timeformat(hour),timeformat(minute),
                timeformat(second)],separator = ":")
    tf.print("=========="*8,end = "")
    tf.print(timestring)

1. GPU Configuration

gpus = tf.config.list_physical_devices("GPU")
 
if gpus:
    gpu0 = gpus[0]  # If there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # Allocate GPU memory on demand
    # Alternatively, cap GPU memory at a fixed amount (e.g. 4 GB)
    # tf.config.experimental.set_virtual_device_configuration(gpu0,
    #     [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])
    tf.config.set_visible_devices([gpu0],"GPU")
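To confirm the restriction took effect, you can compare the physical and logical device lists (a minimal sketch; on TensorFlow 2.0 these helpers may live under tf.config.experimental):

# Even if the machine has several physical GPUs, only one logical GPU
# should remain visible after set_visible_devices.
print(len(tf.config.list_physical_devices("GPU")))  # e.g. 2 on a two-GPU server
print(len(tf.config.list_logical_devices("GPU")))   # 1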
# Compare the computation speed of the GPU and the CPU

printbar()
with tf.device("/gpu:0"):
    tf.random.set_seed(0)
    a = tf.random.uniform((10000,100),minval = 0,maxval = 3.0)
    b = tf.random.uniform((100,100000),minval = 0,maxval = 3.0)
    c = a@b
    tf.print(tf.reduce_sum(tf.reduce_sum(c,axis = 0),axis=0))
printbar()

printbar()
with tf.device("/cpu:0"):
    tf.random.set_seed(0)
    a = tf.random.uniform((10000,100),minval = 0,maxval = 3.0)
    b = tf.random.uniform((100,100000),minval = 0,maxval = 3.0)
    c = a@b
    tf.print(tf.reduce_sum(tf.reduce_sum(c,axis = 0),axis=0))
printbar()
================================================================================11:59:21
2.24953778e+11
================================================================================11:59:23
================================================================================11:59:23
2.24953795e+11
================================================================================11:59:29
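One caveat when reading the timings above: the first run on each device includes one-off costs such as kernel initialization and memory allocation, and GPU ops execute asynchronously. A fairer micro-benchmark warms up first and forces synchronization before stopping the clock (a minimal sketch):

import time

with tf.device("/gpu:0"):
    a = tf.random.uniform((10000,100))
    b = tf.random.uniform((100,100000))
    _ = (a@b).numpy()  # warm-up; .numpy() blocks until the GPU finishes

    start = time.time()
    c = a@b
    _ = c.numpy()      # force the asynchronous GPU computation to complete
    print("GPU matmul: %.4f s" % (time.time() - start))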

2. Preparing the Data

MAX_LEN = 300
BATCH_SIZE = 32
(x_train,y_train),(x_test,y_test) = datasets.reuters.load_data()
x_train = preprocessing.sequence.pad_sequences(x_train,maxlen=MAX_LEN)  # pad/truncate every sample to MAX_LEN tokens
x_test = preprocessing.sequence.pad_sequences(x_test,maxlen=MAX_LEN)
 
MAX_WORDS = x_train.max()+1  # vocabulary size
CAT_NUM = y_train.max()+1    # number of classes (46 Reuters topics)
 
ds_train = tf.data.Dataset.from_tensor_slices((x_train,y_train)) \
          .shuffle(buffer_size = 1000).batch(BATCH_SIZE) \
          .prefetch(tf.data.experimental.AUTOTUNE).cache()
 
ds_test = tf.data.Dataset.from_tensor_slices((x_test,y_test)) \
          .shuffle(buffer_size = 1000).batch(BATCH_SIZE) \
          .prefetch(tf.data.experimental.AUTOTUNE).cache()
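A side note on this pipeline: because cache() comes after shuffle and batch, the shuffled batches are frozen after the first epoch. If you want a fresh shuffle every epoch, the more common ordering caches the raw examples first (a minimal alternative sketch):

# Alternative ordering: cache raw examples once, reshuffle every epoch.
ds_train_alt = tf.data.Dataset.from_tensor_slices((x_train,y_train)) \
    .cache() \
    .shuffle(buffer_size = 1000) \
    .batch(BATCH_SIZE) \
    .prefetch(tf.data.experimental.AUTOTUNE)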

3. Defining the Model

tf.keras.backend.clear_session()
 
def create_model():
 
    model = models.Sequential()
 
    model.add(layers.Embedding(MAX_WORDS,7,input_length=MAX_LEN))
    model.add(layers.Conv1D(filters = 64,kernel_size = 5,activation = "relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Conv1D(filters = 32,kernel_size = 3,activation = "relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(CAT_NUM,activation = "softmax"))
    return(model)
 
model = create_model()
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 300, 7)            216874    
_________________________________________________________________
conv1d (Conv1D)              (None, 296, 64)           2304      
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 148, 64)           0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 146, 32)           6176      
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 73, 32)            0         
_________________________________________________________________
flatten (Flatten)            (None, 2336)              0         
_________________________________________________________________
dense (Dense)                (None, 46)                107502    
=================================================================
Total params: 332,856
Trainable params: 332,856
Non-trainable params: 0
_________________________________________________________________

4. Training the Model

optimizer = optimizers.Nadam()
loss_func = losses.SparseCategoricalCrossentropy()
 
train_loss = metrics.Mean(name='train_loss')
train_metric = metrics.SparseCategoricalAccuracy(name='train_accuracy')
 
valid_loss = metrics.Mean(name='valid_loss')
valid_metric = metrics.SparseCategoricalAccuracy(name='valid_accuracy')
 
@tf.function
def train_step(model, features, labels):
    with tf.GradientTape() as tape:
        predictions = model(features,training = True)
        loss = loss_func(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
 
    train_loss.update_state(loss)
    train_metric.update_state(labels, predictions)
 
@tf.function
def valid_step(model, features, labels):
    predictions = model(features,training = False)  # inference mode
    batch_loss = loss_func(labels, predictions)
    valid_loss.update_state(batch_loss)
    valid_metric.update_state(labels, predictions)
 
 
def train_model(model,ds_train,ds_valid,epochs):
    for epoch in tf.range(1,epochs+1):
 
        for features, labels in ds_train:
            train_step(model,features,labels)
 
        for features, labels in ds_valid:
            valid_step(model,features,labels)
 
        logs = 'Epoch={},Loss:{},Accuracy:{},Valid Loss:{},Valid Accuracy:{}'
 
        if epoch%1 ==0:
            printbar()
            tf.print(tf.strings.format(logs,
            (epoch,train_loss.result(),train_metric.result(),valid_loss.result(),valid_metric.result())))
            tf.print("")
 
        train_loss.reset_states()
        valid_loss.reset_states()
        train_metric.reset_states()
        valid_metric.reset_states()
 
train_model(model,ds_train,ds_test,10)
================================================================================12:01:11
Epoch=1,Loss:2.00887108,Accuracy:0.470273882,Valid Loss:1.6704694,Valid Accuracy:0.566340148

================================================================================12:01:13
Epoch=2,Loss:1.47044504,Accuracy:0.618681788,Valid Loss:1.51738906,Valid Accuracy:0.630454123

================================================================================12:01:14
Epoch=3,Loss:1.1620506,Accuracy:0.700289488,Valid Loss:1.52190566,Valid Accuracy:0.641139805

================================================================================12:01:16
Epoch=4,Loss:0.878907442,Accuracy:0.771654427,Valid Loss:1.67911685,Valid Accuracy:0.644256473

================================================================================12:01:17
Epoch=5,Loss:0.647668123,Accuracy:0.836450696,Valid Loss:1.93839979,Valid Accuracy:0.642475486

================================================================================12:01:19
Epoch=6,Loss:0.487838209,Accuracy:0.880538881,Valid Loss:2.20062685,Valid Accuracy:0.642030299

================================================================================12:01:21
Epoch=7,Loss:0.390418053,Accuracy:0.90670228,Valid Loss:2.32795334,Valid Accuracy:0.646482646

================================================================================12:01:22
Epoch=8,Loss:0.328294098,Accuracy:0.92351371,Valid Loss:2.44113493,Valid Accuracy:0.644701719

================================================================================12:01:24
Epoch=9,Loss:0.286735713,Accuracy:0.931195736,Valid Loss:2.5071857,Valid Accuracy:0.642920732

================================================================================12:01:25
Epoch=10,Loss:0.256434649,Accuracy:0.936428428,Valid Loss:2.60177088,Valid Accuracy:0.640249312
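Since single-GPU training requires no code changes, the same model can also be trained with the built-in fit method instead of the custom loop above (a minimal sketch, reusing create_model and the datasets defined earlier):

model = create_model()
model.compile(optimizer = optimizers.Nadam(),
              loss = losses.SparseCategoricalCrossentropy(),
              metrics = [metrics.SparseCategoricalAccuracy()])
history = model.fit(ds_train, validation_data = ds_test, epochs = 10)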

References:

Open-source e-book: https://lyhue1991.github.io/eat_tensorflow2_in_30_days/

GitHub repository: https://github.com/lyhue1991/eat_tensorflow2_in_30_days

