prajwaljpj opened this issue 4 years ago
I have the same issue. My system is:
RAM: 128 GB
GPU: GTX 1080 Ti * 4
OS: Ubuntu 18.04
NVIDIA Driver: 440.82
CUDA: 10.1
cuDNN: 7.6.5
Python: 3.6.9
tensorflow & tensorflow-gpu: 2.1.0
(And I do not change any params in run_common_voice.py.)
When I run the run_common_voice.py code, this is what happens:
1. At the 0th epoch, eval_step runs with a retracing warning, and then I get the OOM error.
2. If I disable evaluation at the 0th epoch:
2-1. When there is a retracing warning (slow): Epoch: 0, Batch: 60, Global Step: 60, Step Time: 26.0310, Loss: 165.6244
2-2. When there is no retracing warning (fast): Epoch: 0, Batch: 62, Global Step: 62, Step Time: 6.3741, Loss: 164.6387
Then I get the OOM error after this line: Epoch: 0, Batch: 226, Global Step: 226, Step Time: 5.9092, Loss: 142.7257 ...
I think something about tf.function(?) affects the speed of training.
Does the retracing warning have a connection with the OOM error?
--> If so, how can I solve the retracing warning?
--> If not, how can I solve the OOM error?
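One thing I am considering trying (a minimal sketch of my own, not code from this repository): tracing eval_step once with an explicit input_signature that uses None for the variable dimensions, so different batch shapes reuse a single traced graph. This only works when the step receives plain tensors (i.e. without the distribution strategy's per-replica values), and the feature dimension below is a hypothetical placeholder.

import tensorflow as tf

N_MEL_BINS = 80  # hypothetical feature dimension; the real value comes from the hparams

@tf.function(input_signature=[
    tf.TensorSpec([None, None, N_MEL_BINS], tf.float32),  # mel_specs
    tf.TensorSpec([None], tf.int32),                       # spec_lengths
])
def eval_step(mel_specs, spec_lengths):
    # Stand-in for the real model call; just reduces over the padded frames.
    return tf.reduce_mean(mel_specs), tf.reduce_max(spec_lengths)

# Different batch/time shapes reuse the single traced graph, so no retracing:
eval_step(tf.zeros([4, 120, N_MEL_BINS]), tf.constant([120, 90, 60, 30], tf.int32))
eval_step(tf.zeros([8, 300, N_MEL_BINS]), tf.constant([300] * 8, tf.int32))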
Thank you
@nambee Did single GPU training work for you?
No, it does not work.
To see the progress, I printed some logs in the 'run_evaluate' function, which is called inside the 'run_training' function. (I attach this code at the end of this comment; I only added 'print' calls.) After 432 batches, the OOM error occurred. (The total eval_dataset loop count is 486.)
CUDA_VISIBLE_DEVICE=1 python run_common_voice.py --mode train --data_dir english_data/feature
... tensorflow.org/api_docs/python/tf/function for more details.
Performing evaluation.2-2
Performing evaluation.2-3
-------------------- [432] ------------------
Performing evaluation.2-1
eval_step : <tensorflow.python.eager.def_function.Function object at 0x7f6dac6b07b8>
Type : eval_step : <class 'tensorflow.python.eager.def_function.Function'>
input type : <class 'tuple'>
Performing evaluation.2-1-1
Performing evaluation.2-1-2
Performing evaluation.2-1-3: Tensor("Identity:0", shape=(), dtype=float32, device=/job:localhost/replica:0/task:0/device:CPU:0)
Performing evaluation.2-1-4: {'WER': <tf.Tensor 'Identity_1:0' shape= dtype=float32>, 'Accuracy': <tf.Tensor 'Identity_2:0' shape= dtype=float32>, 'CER': <tf.Tensor 'Identity_3:0' shape= dtype=float32>}
2020-04-22 15:55:35.613508: I tensorflow/stream_executor/cuda/cuda_driver.cc:801] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
...
(0) Resource exhausted: OOM when allocating tensor with shape[8,4088,303,37] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node transducer/dense_1/BiasAdd-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[replica_3/StringsByteSplit_1/RaggedGetItem/strided_slice_4/stack_1/_1212]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[8,4088,303,37] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node transducer/dense_1/BiasAdd-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
def run_evaluate(model,
                 optimizer,
                 loss_fn,
                 eval_dataset,
                 batch_size,
                 strategy,
                 metrics=[],
                 fp16_run=False,
                 gpus=[]):

    @tf.function(experimental_relax_shapes=True)
    def eval_step(dist_inputs):
        def step_fn(inputs):
            (mel_specs, pred_inp,
             spec_lengths, label_lengths, labels) = inputs

            outputs = model([mel_specs, pred_inp],
                            training=False)

            loss = loss_fn(labels, outputs,
                           spec_lengths=spec_lengths,
                           label_lengths=label_lengths)
            loss *= (1. / batch_size)

            if fp16_run:
                loss = optimizer.get_scaled_loss(loss)

            if metrics is not None:
                metric_results = run_metrics(mel_specs, labels,
                                             metrics=metrics)
                metric_results = {name: result * (1. / max(len(gpus), 1))
                                  for name, result in metric_results.items()}

            return loss, metric_results

        print('Performing evaluation.2-1-1')
        losses, metrics_results = strategy.experimental_run_v2(step_fn, args=(dist_inputs,))
        print('Performing evaluation.2-1-2')
        mean_loss = strategy.reduce(
            tf.distribute.ReduceOp.SUM, losses, axis=0)
        print('Performing evaluation.2-1-3:', mean_loss)
        mean_metrics = {name: strategy.reduce(
            tf.distribute.ReduceOp.SUM, result, axis=0) for name, result in metrics_results.items()}
        print('Performing evaluation.2-1-4:', mean_metrics)

        return mean_loss, mean_metrics

    print('Performing evaluation.')
    loss_object = tf.keras.metrics.Mean()
    metric_objects = {fn.__name__: tf.keras.metrics.Mean() for fn in metrics}
    print('Performing evaluation.2 ')

    cnt = 0
    for batch, inputs in enumerate(eval_dataset):
        cnt = cnt + 1
        print('-------------------- [' + str(cnt) + '] ------------------')
        print('Performing evaluation.2-1')
        print('eval_step : ', eval_step)
        print('Type : eval_step : ', type(eval_step))
        print('input type : ', type(inputs))
        loss, metrics_results = eval_step(inputs)
        print('Performing evaluation.2-2')
        loss_object(loss)
        print('Performing evaluation.2-3')
        for metric_name, metric_result in metrics_results.items():
            metric_objects[metric_name](metric_result)

    print('Performing evaluation.3')
    metrics_final_results = {name: metric_object.result() for name, metric_object in metric_objects.items()}
    print('Performing evaluation. finish')

    return loss_object.result(), metrics_final_results
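For what it's worth, the OOM tensor shape [8, 4088, 303, 37] looks like one very long utterance inflating the whole padded batch. One mitigation I am considering (a sketch under my own assumptions, not the repository's code, with hypothetical length caps) is to filter overly long examples out of the eval dataset before batching, assuming it yields the same 5-tuple used in run_evaluate:

import tensorflow as tf

MAX_SPEC_FRAMES = 1500   # hypothetical cap on mel frames per utterance
MAX_LABEL_LENGTH = 200   # hypothetical cap on label length

def keep_example(mel_specs, pred_inp, spec_lengths, label_lengths, labels):
    # Returns a scalar bool per example; apply with dataset.filter(...) before .batch(...)
    return tf.logical_and(spec_lengths <= MAX_SPEC_FRAMES,
                          label_lengths <= MAX_LABEL_LENGTH)

# eval_dataset = eval_dataset.filter(keep_example)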
@nambee From this log, you can see that you are running out of GPU memory; reducing the batch size to 8 or lower should fix the problem:
2020-04-22 15:55:35.613508: I tensorflow/stream_executor/cuda/cuda_driver.cc:801] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
But it still looks like, due to eager execution, the memory requirement keeps growing, and only at the eval step. My system fails to allocate GPU memory after 19000 batches at epoch 0. @noahchalifour, is there a way to fix this?
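Not a fix for the leak itself, but one thing that makes the growth visible (a minimal sketch using the standard TF 2.x API, nothing from this repo): enabling GPU memory growth so TensorFlow stops pre-allocating almost all GPU memory, and nvidia-smi then shows usage actually climbing across eval batches.

import tensorflow as tf

# Must run before any op touches the GPU (i.e. before building the model or dataset).
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)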
Oh, it's a different issue, sorry. I thought you ended up with the OOM error too.
Can you run run_common_voice.py without the OOM error? I got the OOM error for both eval_step and train_step. (I disabled the eval step to check whether train_step can work.)
Yes, it worked for me. Even though you use CUDA_VISIBLE_DEVICES=0 to specify one GPU, you still have to set strategy = None in run_common_voice.py.
@prajwaljpj Thank you for your advice. The retracing warnings are gone when I disable the strategy. I still get the OOM error, so I should reduce some factors. Again, thank you!
@nambee The strategy part is not implemented for eval. If you look at the training function, there is a condition that implements the strategy and experimental_run; you have to make a similar change for eval. Also, try reducing the batch size to 2.
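Roughly, that change could look like the following. This is a minimal sketch under my own assumptions (hypothetical helper name, loss-only for brevity), not the repository's actual code: run the per-replica step through the strategy only when one is set, and feed eval_step a distributed dataset, mirroring the training path.

import tensorflow as tf

def run_evaluate_distributed(model, loss_fn, eval_dataset, strategy=None):
    """Sketch: evaluation loop that mirrors the training path's strategy handling."""

    @tf.function(experimental_relax_shapes=True)
    def eval_step(dist_inputs):
        def step_fn(inputs):
            mel_specs, pred_inp, spec_lengths, label_lengths, labels = inputs
            outputs = model([mel_specs, pred_inp], training=False)
            loss = loss_fn(labels, outputs,
                           spec_lengths=spec_lengths,
                           label_lengths=label_lengths)
            return tf.reduce_mean(loss)  # mean loss on this replica's shard

        if strategy is not None:
            per_replica_losses = strategy.experimental_run_v2(step_fn, args=(dist_inputs,))
            # Average the per-replica means into a single scalar.
            return strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica_losses, axis=None)
        return step_fn(dist_inputs)

    if strategy is not None:
        # Shard each batch across the replicas, as the training loop does.
        eval_dataset = strategy.experimental_distribute_dataset(eval_dataset)

    loss_metric = tf.keras.metrics.Mean()
    for inputs in eval_dataset:
        loss_metric(eval_step(inputs))
    return loss_metric.result()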
@prajwaljpj Yes, I did that already, but I apply it only to small datasets (because I need feasibility now). I will expand it in the future. Thank you for your kind consideration.
Can someone please let me know if this is resolved in the latest commit? I do not have a multi GPU machine to test on. Thanks
Could this be related to https://github.com/noahchalifour/rnnt-speech-recognition/issues/29 ?
It does seem so.
First off, there seems to be an error: gpus is not defined at this point, and run_evaluate() does not expose a gpus argument.
If I run the training with CUDA_VISIBLE_DEVICES=0 it does seem to work. However, running with multiple GPUs gives me the exception described in https://github.com/noahchalifour/rnnt-speech-recognition/issues/29.
I can train the model on multiple GPUs by adding a @tf.function decorator (see https://github.com/tensorflow/tensorflow/issues/29911), and I also added the line os.environ['CUDA_VISIBLE_DEVICES'] = "{your gpus}" to my code.
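Roughly what those two changes look like, as a short sketch with placeholder device ids and a stand-in step function (not the exact modified code):

import os
# Must be set before TensorFlow initializes CUDA; '0,1' is a placeholder for your GPU ids.
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

@tf.function  # wrapping the strategy call in tf.function, as suggested in tensorflow/tensorflow#29911
def distributed_step(dist_inputs):
    def step_fn(x):
        return tf.reduce_sum(x)  # stand-in for the real train step
    per_replica = strategy.experimental_run_v2(step_fn, args=(dist_inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)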
Maybe take a look at https://github.com/usimarit/TiramisuASR
This is the multi-GPU training code that I modified, but the loss value goes from negative to NaN after training for some batches.
I have a machine with 2x Nvidia RTX 2080 Ti, an 8-core Intel i7 processor, and 32 GB of RAM.
The training code (non-Docker version), when run with CUDA_VISIBLE_DEVICES=0,1, causes a memory leak in eval_step:
python run_common_voice.py --mode train --data_dir
These are the warnings I get. I am not able to pinpoint which object is causing the retracing error.
Performing evaluation. [949/1811]
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
WARNING:tensorflow:5 out of the last 5 calls to <function run_evaluate.<locals>.eval_step at 0x7fc4885dc598> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
W0421 00:40:37.296950 140487075895104 def_function.py:586] 5 out of the last 5 calls to <function run_evaluate.<locals>.eval_step at 0x7fc4885dc598> triggered tf.function retracing. [...]
WARNING:tensorflow:6 out of the last 6 calls to <function run_evaluate.<locals>.eval_step at 0x7fc4885dc598> triggered tf.function retracing. [...]
I0421 00:39:38.737240 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:38.740701 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
... (the "Reduce to ... then broadcast to ..." INFO lines repeat several more times) ...
2020-04-21 00:39:43.431398: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-04-21 00:39:44.193788: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `experimental_run_v2` inside a tf.function to get the best performance.
W0421 00:39:49.856330 140487075895104 mirrored_strategy.py:692] Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `experimental_run_v2` inside a tf.function to get the best performance.
... (the MirroredStrategy warning and the "Reduce to ... then broadcast to ..." lines repeat several more times, through W0421 00:40:01.864544) ...
WARNING:tensorflow:5 out of the last 5 calls to <function run_evaluate.<locals>.eval_step [...] triggered tf.function retracing. [...]
Is this a tensorflow issue?