zzw922cn / Automatic_Speech_Recognition

End-to-end Automatic Speech Recognition for Mandarin and English in Tensorflow
MIT License

ResourceExhaustedError: OOM when allocating tensor with shape #78

Open cjjjy opened 6 years ago

cjjjy commented 6 years ago

Hi, when I run timit_train.py I get the following error. How can I solve it?

2018-06-08 17:26:00.908082: I tensorflow/core/common_runtime/bfc_allocator.cc:680] Stats: Limit: 10625279591 InUse: 7350513920 MaxInUse: 7464792320 NumAllocs: 82 MaxAllocSize: 3656908800

2018-06-08 17:26:00.908099: W tensorflow/core/common_runtime/bfc_allocator.cc:279] *___**__
2018-06-08 17:26:00.908130: W tensorflow/core/framework/op_kernel.cc:1202] OP_REQUIRES failed at tile_ops.cc:123 : Resource exhausted: OOM when allocating tensor with shape[16,1,892800,8,4,2,2] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1361, in _do_call
    return fn(*args)
  File "lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1340, in _run_fn
    target_list, status, run_metadata)
  File "lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[16,1,892800,8,4,2,2] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
  [[Node: capsule_cnn_layer_1/capsule_cnn_layer_1/Tile = Tile[T=DT_FLOAT, Tmultiples=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](capsule_cnn_layer_1/w/read, capsule_dnn_layer/capsule_dnn_layer/Tile/multiples)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

 [[Node: transpose/_3 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_199_transpose", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
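For scale, the shape in the error corresponds to a single float32 tensor of roughly 7.3 GB requested by the Tile op, while the allocator stats above show about 7.35 GB already in use against a limit of about 10.6 GB, so the request cannot be satisfied. A quick back-of-the-envelope check (plain Python, no TensorFlow needed):

num_elements = 16 * 1 * 892800 * 8 * 4 * 2 * 2   # 1,828,454,400 elements
bytes_needed = num_elements * 4                   # float32 = 4 bytes per element
print(bytes_needed / 1e9)                         # ~7.31 GB for this one tensor

Assuming the leading dimension of 16 is the batch axis, lowering the batch size in the training config would shrink this tensor proportionally.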

Flyzzz commented 5 years ago

Have you solved this?

madhavivr commented 5 years ago

I'm having the same issue, and being new to TF I'm not sure how to use the information given in the Hint:

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
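For reference, report_tensor_allocations_upon_oom is a field of the TF 1.x RunOptions proto, and it is passed through the options argument of Session.run. Below is a minimal sketch; the tiny graph is only a stand-in, and in timit_train.py the same options argument would go to the existing training sess.run call instead:

import tensorflow as tf  # TF 1.x graph-mode API

# Stand-in graph; any model would do here.
x = tf.random_normal([16, 1024])
y = tf.layers.dense(x, 512)

# Ask TensorFlow to report the live tensor allocations if a run OOMs.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Pass the options to the run call that is failing.
    sess.run(y, options=run_options)

With this set, the OOM error is followed by a per-tensor allocation report, which makes it easier to see which node is consuming the memory (in the log above, the Tile in capsule_cnn_layer_1).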