Open LIMr1209 opened 4 years ago
Is it because there is not enough GPU memory?
Yes, it seems like that. I think the OOM occurs while generating samples for the sanity check. In any case, the converted checkpoints are saved before that step, so they should still be generated.
@rosinality ok
@rosinality How can I use multiple GPUs for convert_weight.py?
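As far as I can tell the conversion script itself runs on a single GPU, so rather than parallelizing it, the practical option is to pin the process to a particular free GPU before TensorFlow/PyTorch initialize. A minimal sketch (the device index `"1"` is just an example):

```python
import os

# Must be set before tensorflow/torch are imported, otherwise CUDA has
# already enumerated the devices and the variable has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # example: use the second GPU only
```

Setting the same variable on the shell command line (`CUDA_VISIBLE_DEVICES=1 python convert_weight.py ...`) has the same effect.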
The images generated by closed-form factorization are of very poor quality.
It seems like there is a problem with image value scaling. You may need to check that.
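For reference, a minimal sketch of the usual scaling fix, assuming the generator outputs float images in [-1, 1] (the StyleGAN2 convention) and they are being saved without rescaling. `to_uint8` is a hypothetical helper name, not part of the repo:

```python
import numpy as np

def to_uint8(img):
    """Map a float image in [-1, 1] to uint8 in [0, 255] for saving."""
    img = np.clip(img, -1.0, 1.0)          # guard against values outside the range
    return ((img + 1.0) * 127.5).astype(np.uint8)
```

If images are saved as raw [-1, 1] floats (or as [0, 1] when the viewer expects [0, 255]), they come out washed out or nearly black, which matches the symptom described.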
2020-10-13 13:54:15.914144: W tensorflow/core/common_runtime/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.00MiB (rounded to 1048576). Current allocation summary follows.
2020-10-13 13:54:15.914437: W tensorflow/core/common_runtime/bfc_allocator.cc:424] **xxxxxxxxxxx**xxxxxxxxxxx*xxx
2020-10-13 13:54:15.914480: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at random_op.cc:76 : Resource exhausted: OOM when allocating tensor with shape[512,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
2020-10-13 13:54:25.954888: W tensorflow/core/common_runtime/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.00MiB (rounded to 1048576). Current allocation summary follows.
2020-10-13 13:54:25.955703: W tensorflow/core/common_runtime/bfc_allocator.cc:424] **xxxxxxxxxxx**xxxxxxxxxxx*xxx
2020-10-13 13:54:25.955741: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at random_op.cc:76 : Resource exhausted: OOM when allocating tensor with shape[512,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
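Since the log is from the TF1 runtime used by the official StyleGAN2 code, one thing worth trying is letting the TensorFlow allocator grow on demand instead of grabbing all GPU memory up front, which leaves headroom for the PyTorch side of the conversion. A sketch of the session config (assumption: the session is created in code you can edit):

```python
import tensorflow as tf

# Allocate GPU memory incrementally rather than reserving it all at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
```

If it still OOMs, reducing the number of samples generated for the sanity check (or the batch size used there) is the other obvious lever, since the checkpoints are already written before that step.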