[Learn a Little Every Day] TensorFlow 2.x runtime issue: Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED


This error usually means the GPU is running out of memory: cuDNN cannot allocate the workspace it needs, typically because TensorFlow has already reserved nearly all GPU memory at startup.


If you're using TensorFlow 1.x:

1st option) Set allow_growth to true, so TensorFlow allocates GPU memory on demand instead of grabbing it all up front.

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand
sess = tf.Session(config=config)

2nd option) Cap the fraction of GPU memory the process may use.

import tensorflow as tf

# Change the memory fraction as needed (here: 30% of GPU memory).
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

If you're using TensorFlow 2.x:

1st option) Enable memory growth with set_memory_growth.

# Currently the memory growth setting must be the same for all GPUs.
# Set memory growth before any GPU has been initialized.

import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
  except RuntimeError as e:
    print(e)
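If you would rather not modify the training script itself, recent TensorFlow 2.x releases also honor the TF_FORCE_GPU_ALLOW_GROWTH environment variable, which requests the same on-demand allocation behavior. A minimal sketch, assuming the variable is set before TensorFlow is imported (TensorFlow reads it at startup):

```python
import os

# Must be set BEFORE `import tensorflow`; TensorFlow reads it once at startup.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# import tensorflow as tf  # import only after the variable is set
print(os.environ["TF_FORCE_GPU_ALLOW_GROWTH"])
```

You can equally set it in the shell (`export TF_FORCE_GPU_ALLOW_GROWTH=true`) before launching the script, which avoids touching the code at all.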

2nd option) Set an explicit memory_limit. Adjust the GPU index and the memory_limit value (in MB) in the code below as needed.

import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    # Cap the first GPU at 1024 MB.
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
  except RuntimeError as e:
    print(e)
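Newer TensorFlow 2.x releases expose the same virtual-device mechanism without the experimental prefix. A sketch of the equivalent GPU configuration, assuming a TensorFlow version (roughly 2.4+) where tf.config.set_logical_device_configuration and tf.config.LogicalDeviceConfiguration are available:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
  try:
    # Cap the first GPU at 1024 MB; adjust the index and limit as needed.
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
  except RuntimeError as e:
    print(e)  # raised if the GPU was already initialized
```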

Solution adopted:
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
  except RuntimeError as e:
    print(e)
That solved the problem.

Reference: https://stackoverflow.com/questions/48610132/tensorflow-crash-with-cudnn-status-alloc-failed

 

