Are multiple GPUs supported? I get the same error with both a single GPU and multiple GPUs (8 GB each):
```
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[5,17,72,72,32] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[Node: gradients/MSNet_WT32/block2_2/prelu_acti_0/Abs_grad/Sign = Sign[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
```
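For reference, here is a minimal sketch of what the hint suggests, assuming the TF 1.x Session API; `train_op` is a hypothetical stand-in for the training op that actually triggers the OOM:

```python
import tensorflow as tf

# Ask TensorFlow to dump the list of live tensor allocations when an
# OOM occurs, as the error's hint suggests.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

# Hypothetical placeholder op; replace with the real training op.
train_op = tf.no_op()

with tf.Session() as sess:
    # Pass the options on the run call that runs out of memory.
    sess.run(train_op, options=run_options)
```

If the allocation report confirms the activations simply do not fit in 8 GB, the usual workarounds are a smaller batch or patch size, or configuring the session with `tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))` so memory is allocated incrementally instead of up front.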