CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
This error occurs when the currently selected GPU has run out of memory. You can either kill the existing processes occupying that GPU or point your program at a different GPU index, e.g.:
device = torch.device("cuda:0")
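An alternative to hard-coding a device index is to restrict which physical GPUs the process can see via the `CUDA_VISIBLE_DEVICES` environment variable. A minimal sketch (the GPU index `"1"` is just an example; it must be set before CUDA is initialized, i.e. before the first `torch.cuda` call):

```python
import os

# Expose only physical GPU 1 to this process; inside the process it
# will be addressed as "cuda:0". This must run before CUDA is
# initialized (before any torch.cuda call), or it has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
```

The same variable can be set on the command line instead, e.g. `CUDA_VISIBLE_DEVICES=1 python train.py`, which avoids touching the code at all.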