noahchalifour / rnnt-speech-recognition

End-to-end speech recognition using RNN Transducers in Tensorflow 2.0
MIT License

Multi-GPU training is not working #18

Open prajwaljpj opened 4 years ago

prajwaljpj commented 4 years ago

I have a machine with 2x Nvidia RTX 2080 Ti, an 8-core Intel i7 processor, and 32 GB of RAM.

The training code (non-Docker version), run with CUDA_VISIBLE_DEVICES=0,1 as python run_common_voice.py --mode train --data_dir, causes a memory leak in eval_step. These are the warnings I get; I am not able to pinpoint which object is causing the retracing.

Performing evaluation. [949/1811] INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:38.737240 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:38.740701 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:38.743986 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:38.747186 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
2020-04-21 00:39:43.431398: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-04-21 00:39:44.193788: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
W0421 00:39:49.856330 140487075895104 mirrored_strategy.py:692] Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:49.859219 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:49.859964 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
W0421 00:39:49.861165 140487075895104 mirrored_strategy.py:692] Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:49.863494 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:49.864265 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
W0421 00:39:49.865403 140487075895104 mirrored_strategy.py:692] Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:49.867894 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I0421 00:39:49.868691 140487075895104 cross_device_ops.py:439] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
W0421 00:39:49.869868 140487075895104 mirrored_strategy.py:692] Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
W0421 00:40:01.864544 140487075895104 mirrored_strategy.py:692] Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap call_for_each_replica or experimental_run or experimental_run_v2 inside a tf.function to get the best performance.
WARNING:tensorflow:5 out of the last 5 calls to <function run_evaluate.<locals>.eval_step at 0x7fc4885dc598> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
W0421 00:40:37.296950 140487075895104 def_function.py:586] 5 out of the last 5 calls to <function run_evaluate.<locals>.eval_step at 0x7fc4885dc598> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:6 out of the last 6 calls to <function run_evaluate.<locals>.eval_step at 0x7fc4885dc598> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.

Is this a TensorFlow issue?
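For context, the retracing that the warning complains about is easy to reproduce in isolation; this minimal sketch (unrelated to this repo's code) shows that passing plain Python values retraces a tf.function on every call, while passing tensors reuses a single trace:

```python
import tensorflow as tf

@tf.function
def square(x):
    return x * x

# Plain Python ints: each new value triggers a fresh trace,
# which is exactly what the "triggered tf.function retracing" warning flags.
for i in range(5):
    square(i)

# Tensors with a fixed dtype and shape reuse one traced graph.
for i in range(5):
    square(tf.constant(i))
```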

NAM-hj commented 4 years ago

I have the same issue. My system is:

RAM: 128 GB
GPU: GTX 1080 Ti x 4
OS: Ubuntu 18.04
NVIDIA driver: 440.82
CUDA: 10.1
cuDNN: 7.6.5
Python: 3.6.9
tensorflow & tensorflow-gpu: 2.1.0
(And I did not change any parameters in run_common_voice.py.)

When I run run_common_voice.py, the following happens:

  1. With evaluation enabled at the 0th epoch, eval_step runs with the retracing warning and then I get an OOM error.

  2. With evaluation disabled at the 0th epoch:
     2-1. When the retracing warning appears (slow): Epoch: 0, Batch: 60, Global Step: 60, Step Time: 26.0310, Loss: 165.6244
     2-2. When there is no retracing warning (fast): Epoch: 0, Batch: 62, Global Step: 62, Step Time: 6.3741, Loss: 164.6387

     Then I get the OOM error after this line: Epoch: 0, Batch: 226, Global Step: 226, Step Time: 5.9092, Loss: 142.7257 ...

I think something about the tf.function (the retracing?) affects the speed of the training.

Does the retracing warning have a connection with the OOM error?
--> If so, how can I fix the retracing warning?
--> If not, how can I fix the OOM error?

Thank you

prajwaljpj commented 4 years ago

@nambee Did single GPU training work for you?

NAM-hj commented 4 years ago

@nambee Did single GPU training work for you?

No, it does not work.

To see the progress, I added some prints inside the run_evaluate function, which is called from run_training. (The modified code is attached at the end of this comment; I only added print calls.) The OOM error occurred after 432 batches. (The total eval_dataset loop count is 486.)

CUDA_VISIBLE_DEVICES=1 python run_common_voice.py --mode train --data_dir english_data/feature

```
... tensorflow.org/api_docs/python/tf/function for more details.
Performing evaluation.2-2
Performing evaluation.2-3
-------------------- [432] ------------------
Performing evaluation.2-1
eval_step : <tensorflow.python.eager.def_function.Function object at 0x7f6dac6b07b8>
Type : eval_step : <class 'tensorflow.python.eager.def_function.Function'>
input type : <class 'tuple'>
Performing evaluation.2-1-1
Performing evaluation.2-1-2
Performing evaluation.2-1-3: Tensor("Identity:0", shape=(), dtype=float32, device=/job:localhost/replica:0/task:0/device:CPU:0)
Performing evaluation.2-1-4: {'WER': <tf.Tensor 'Identity_1:0' shape=() dtype=float32>, 'Accuracy': <tf.Tensor 'Identity_2:0' shape=() dtype=float32>, 'CER': <tf.Tensor 'Identity_3:0' shape=() dtype=float32>}
2020-04-22 15:55:35.613508: I tensorflow/stream_executor/cuda/cuda_driver.cc:801] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
...
(0) Resource exhausted: OOM when allocating tensor with shape[8,4088,303,37] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
  [[{{node transducer/dense_1/BiasAdd-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  [[replica_3/StringsByteSplit_1/RaggedGetItem/strided_slice_4/stack_1/_1212]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

(1) Resource exhausted: OOM when allocating tensor with shape[8,4088,303,37] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
  [[{{node transducer/dense_1/BiasAdd-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
```
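For what it's worth, the failing allocation is informative on its own: a single float32 tensor of that shape is already large, so long utterances blow up eval memory regardless of the strategy. A back-of-the-envelope check (interpreting the dimensions as batch, encoder frames, label positions and vocabulary of the RNN-T joint output is my assumption, based on the node name transducer/dense_1):

```python
# shape[8, 4088, 303, 37], float32, taken from the OOM message above
batch, frames, labels, vocab = 8, 4088, 303, 37
size_gib = batch * frames * labels * vocab * 4 / 2**30  # 4 bytes per float32
print(f"{size_gib:.2f} GiB")  # ~1.37 GiB for this single activation alone
```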

```python
def run_evaluate(model,
                 optimizer,
                 loss_fn, 
                 eval_dataset,
                 batch_size,
                 strategy,
                 metrics=[],
                 fp16_run=False,
                 gpus=[]):

    @tf.function(experimental_relax_shapes=True)
    def eval_step(dist_inputs):
        def step_fn(inputs):
            (mel_specs, pred_inp, 
             spec_lengths, label_lengths, labels) = inputs

            outputs = model([mel_specs, pred_inp], 
                training=False)

            loss = loss_fn(labels, outputs,
                spec_lengths=spec_lengths,
                label_lengths=label_lengths)
            loss *= (1. / batch_size)

            if fp16_run:
                loss = optimizer.get_scaled_loss(loss)

            if metrics is not None:
                metric_results = run_metrics(mel_specs, labels,
                    metrics=metrics)
                metric_results = {name: result * (1. / max(len(gpus), 1)) for name, result in metric_results.items()}

            return loss, metric_results
        print('Performing evaluation.2-1-1')
        losses, metrics_results = strategy.experimental_run_v2(step_fn, args=(dist_inputs,))
        print('Performing evaluation.2-1-2')
        mean_loss = strategy.reduce(
            tf.distribute.ReduceOp.SUM, losses, axis=0)
        print('Performing evaluation.2-1-3:',mean_loss)
        mean_metrics = {name: strategy.reduce(
            tf.distribute.ReduceOp.SUM, result, axis=0) for name, result in metrics_results.items()}
        print('Performing evaluation.2-1-4:',mean_metrics)        
        return mean_loss, mean_metrics

    print('Performing evaluation.')

    loss_object = tf.keras.metrics.Mean()
    metric_objects = {fn.__name__: tf.keras.metrics.Mean() for fn in metrics}
    print('Performing evaluation.2 ')
    cnt = 0
    for batch, inputs in enumerate(eval_dataset):
        cnt = cnt +1
        print('-------------------- ['+str(cnt)+'] ------------------')
        print('Performing evaluation.2-1')
        print('eval_step : ',eval_step)
        print('Type : eval_step : ',type(eval_step))
        print('input type : ',type(inputs))
        loss, metrics_results = eval_step(inputs)
        print('Performing evaluation.2-2')
        loss_object(loss)
        print('Performing evaluation.2-3')
        for metric_name, metric_result in metrics_results.items():
            metric_objects[metric_name](metric_result)
    print('Performing evaluation.3')
    metrics_final_results = {name: metric_object.result() for name, metric_object in metric_objects.items()}
    print('Performing evaluation. finish')
    return loss_object.result(), metrics_final_results
```

prajwaljpj commented 4 years ago

@nambee From this log you can see that you are running out of GPU memory; reducing the batch size to 8 or lower should fix the problem:
2020-04-22 15:55:35.613508: I tensorflow/stream_executor/cuda/cuda_driver.cc:801] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory

But it still looks like, due to eager execution, the memory requirement keeps growing, and only in the eval step. My system fails to allocate GPU memory after 19000 batches at epoch 0. @noahchalifour is there a way to fix this?
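One small thing that can make the growth easier to observe (it does not lower the actual peak requirement) is enabling GPU memory growth before the model is built, so TensorFlow allocates memory as it is needed instead of reserving it all at startup; a sketch using the standard TF 2.x API:

```python
import tensorflow as tf

# Must run before any GPU has been initialized (i.e. before the model is built).
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```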

NAM-hj commented 4 years ago

Oh, it's a different issue, sorry. I thought you ended up with an OOM error too.

Can you run run_common_voice.py without the OOM error? I got the OOM error for both eval_step and train_step. (I disabled the eval step to check whether train_step works on its own.)

prajwaljpj commented 4 years ago

Oh, it's a different issue, sorry. I thought you ended up with an OOM error too.

Can you run run_common_voice.py without the OOM error? I got the OOM error for both eval_step and train_step. (I disabled the eval step to check whether train_step works on its own.)

Yes, it worked for me. Even though you use CUDA_VISIBLE_DEVICES=0 to specify a single GPU, you also have to set strategy = None in run_common_voice.py; a sketch of that workaround is below.
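For reference, the single-GPU fallback amounts to something like this (a sketch; the actual variable names and structure in run_common_voice.py may differ):

```python
import tensorflow as tf

# Only build a MirroredStrategy when more than one GPU is visible;
# otherwise run the plain, non-distributed code path.
gpus = tf.config.experimental.list_physical_devices('GPU')
if len(gpus) > 1:
    strategy = tf.distribute.MirroredStrategy()
else:
    strategy = None  # single GPU / CPU: skips the distributed eval_step entirely
```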

NAM-hj commented 4 years ago

@prajwaljpj Thank you for your advice. The retracing warnings are gone when I disable the strategy. I still get the OOM error, so I will have to reduce some parameters. Again, thank you!

prajwaljpj commented 4 years ago

@nambee The strategy part is not implemented for eval. If you look at the training function, there is a condition that implements the strategy and experimental_run; you have to make a similar change for eval (see the sketch below). Also try reducing the batch size to 2.
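Roughly, the eval loop would need the same branching the training loop already has; a sketch (eval_step is the distributed function quoted above, eval_step_single is a hypothetical non-distributed variant of the same step, not something in the repo):

```python
# Sketch: mirror the training loop's strategy condition inside run_evaluate.
for batch, inputs in enumerate(eval_dataset):
    if strategy is not None:
        # distributed path: inputs are per-replica values from a distributed dataset
        loss, metrics_results = eval_step(inputs)
    else:
        # plain single-device path (hypothetical helper)
        loss, metrics_results = eval_step_single(inputs)
    loss_object(loss)
```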

NAM-hj commented 4 years ago

@prajwaljpj Yes, I did that already, but I am only applying it to small datasets for now (because I need to check feasibility first). I will expand it in the future. Thank you for your kind consideration.

noahchalifour commented 4 years ago

Can someone please let me know if this is resolved in the latest commit? I do not have a multi-GPU machine to test on. Thanks

stefan-falk commented 4 years ago

Could this be related to https://github.com/noahchalifour/rnnt-speech-recognition/issues/29 ?

stefan-falk commented 4 years ago

It does seem so.

First off, there seems to be an error: gpus is not defined at this point, and run_evaluate() does not expose a gpus argument.

https://github.com/noahchalifour/rnnt-speech-recognition/blob/a0d972f5e407e465ad784c682fa4e72e33d8eefe/run_rnnt.py#L570

If I run the training with CUDA_VISIBLE_DEVICES=0, it does seem to work. However, running with multiple GPUs gives me the exception described in https://github.com/noahchalifour/rnnt-speech-recognition/issues/29.

for completeness, click to expand full error log ``` /home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/librosa/util/decorators.py:9: NumbaDeprecationWarning: An import was requested from a module that has moved location. Import requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0. from numba.decorators import jit as optional_jit /home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/librosa/util/decorators.py:9: NumbaDeprecationWarning: An import was requested from a module that has moved location. Import of 'jit' requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0. from numba.decorators import jit as optional_jit 2020-05-26 09:14:30.736191: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2020-05-26 09:14:30.748386: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.749173: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s 2020-05-26 09:14:30.749232: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.750058: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 1 with properties: pciBusID: 0000:02:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s 2020-05-26 09:14:30.750112: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.750888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 2 with properties: pciBusID: 0000:03:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s 2020-05-26 09:14:30.750927: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.751427: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 3 with properties: pciBusID: 0000:05:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s 2020-05-26 09:14:30.751570: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-05-26 09:14:30.752638: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-05-26 09:14:30.753673: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-05-26 09:14:30.753866: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-05-26 09:14:30.754997: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-05-26 09:14:30.755618: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-05-26 09:14:30.757804: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-05-26 09:14:30.757899: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.759345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.760097: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.760844: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.761589: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.762328: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.763068: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.763805: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:30.764521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0, 1, 2, 3 2020-05-26 09:14:30.764770: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2020-05-26 09:14:30.770070: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 4200000000 Hz 2020-05-26 09:14:30.770492: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560e6150fef0 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-05-26 09:14:30.770507: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-05-26 09:14:31.020514: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.037961: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.041811: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.049635: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.050189: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560e60e72d20 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-05-26 09:14:31.050199: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1 2020-05-26 09:14:31.050203: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): GeForce GTX 1080 Ti, Compute Capability 6.1 2020-05-26 09:14:31.050206: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (2): GeForce GTX 1080 Ti, Compute Capability 6.1 2020-05-26 09:14:31.050209: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (3): GeForce GTX 1080 Ti, Compute Capability 6.1 2020-05-26 09:14:31.051527: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.051949: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s 2020-05-26 09:14:31.051989: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.052409: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 1 with properties: pciBusID: 0000:02:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s 2020-05-26 09:14:31.052448: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.052867: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 2 with properties: pciBusID: 0000:03:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s 2020-05-26 09:14:31.052904: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 
09:14:31.053326: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 3 with properties: pciBusID: 0000:05:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s 2020-05-26 09:14:31.053353: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-05-26 09:14:31.053366: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-05-26 09:14:31.053377: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-05-26 09:14:31.053387: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-05-26 09:14:31.053397: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-05-26 09:14:31.053407: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-05-26 09:14:31.053418: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-05-26 09:14:31.053452: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.053895: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.054339: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.054782: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.055227: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.055669: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.056126: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.056579: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.057003: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0, 1, 2, 3 2020-05-26 09:14:31.057025: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-05-26 09:14:31.059325: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-05-26 09:14:31.059335: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 1 2 
3 2020-05-26 09:14:31.059340: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N Y Y Y 2020-05-26 09:14:31.059344: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 1: Y N Y Y 2020-05-26 09:14:31.059347: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 2: Y Y N Y 2020-05-26 09:14:31.059350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 3: Y Y Y N 2020-05-26 09:14:31.060102: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.060567: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.061033: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.061486: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.061942: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.062368: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9449 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1) 2020-05-26 09:14:31.062688: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.063131: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10161 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1) 2020-05-26 09:14:31.063473: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.064690: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10161 MB memory) -> physical GPU (device: 2, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1) 2020-05-26 09:14:31.065011: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-05-26 09:14:31.065455: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 10161 MB memory) -> physical GPU (device: 3, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1) 4 Physical GPU, 4 Logical GPUs WARNING:tensorflow:From /home/sfalk/tmp/rnnt-speech-recognition/model.py:59: LSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version. 
Instructions for updating: This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0. W0526 09:14:32.108052 140106746382080 deprecation.py:317] From /home/sfalk/tmp/rnnt-speech-recognition/model.py:59: LSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version. Instructions for updating: This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0. WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:32.108385 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. WARNING:tensorflow:From /home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/ops/rnn_cell_impl.py:962: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version. Instructions for updating: Please use `layer.add_weight` method instead. W0526 09:14:32.109819 140106746382080 deprecation.py:317] From /home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/ops/rnn_cell_impl.py:962: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version. Instructions for updating: Please use `layer.add_weight` method instead. WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:32.227335 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:32.490125 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:32.669947 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:32.804272 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:32.951039 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:33.074690 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. 
WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:33.202479 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:33.890956 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. WARNING:tensorflow:: Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. W0526 09:14:34.015121 140106746382080 rnn_cell_impl.py:909] : Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU. I0526 09:14:34.344151 140106746382080 run_rnnt.py:490] Using word-piece encoder with vocab size: 4341 Model: "encoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, None, 240)] 0 _________________________________________________________________ batch_normalization (BatchNo (None, None, 240) 960 _________________________________________________________________ rnn (RNN) (None, None, 640) 8527872 _________________________________________________________________ dropout (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization (LayerNo (None, None, 640) 1280 _________________________________________________________________ rnn_1 (RNN) (None, None, 640) 11804672 _________________________________________________________________ dropout_1 (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization_1 (Layer (None, None, 640) 1280 _________________________________________________________________ time_reduction (TimeReductio (None, None, 1280) 0 _________________________________________________________________ rnn_2 (RNN) (None, None, 640) 17047552 _________________________________________________________________ dropout_2 (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization_2 (Layer (None, None, 640) 1280 _________________________________________________________________ rnn_3 (RNN) (None, None, 640) 11804672 _________________________________________________________________ dropout_3 (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization_3 (Layer (None, None, 640) 1280 _________________________________________________________________ rnn_4 (RNN) (None, None, 640) 11804672 _________________________________________________________________ dropout_4 (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization_4 (Layer (None, None, 640) 1280 _________________________________________________________________ rnn_5 (RNN) (None, None, 640) 11804672 _________________________________________________________________ dropout_5 (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization_5 (Layer (None, None, 640) 1280 
_________________________________________________________________ rnn_6 (RNN) (None, None, 640) 11804672 _________________________________________________________________ dropout_6 (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization_6 (Layer (None, None, 640) 1280 _________________________________________________________________ rnn_7 (RNN) (None, None, 640) 11804672 _________________________________________________________________ dropout_7 (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization_7 (Layer (None, None, 640) 1280 ================================================================= Total params: 96,414,656 Trainable params: 96,414,176 Non-trainable params: 480 _________________________________________________________________ Model: "prediction_network" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, None)] 0 _________________________________________________________________ embedding (Embedding) (None, None, 500) 2170500 _________________________________________________________________ rnn_8 (RNN) (None, None, 640) 10657792 _________________________________________________________________ dropout_8 (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization_8 (Layer (None, None, 640) 1280 _________________________________________________________________ rnn_9 (RNN) (None, None, 640) 11804672 _________________________________________________________________ dropout_9 (Dropout) (None, None, 640) 0 _________________________________________________________________ layer_normalization_9 (Layer (None, None, 640) 1280 ================================================================= Total params: 24,635,524 Trainable params: 24,635,524 Non-trainable params: 0 _________________________________________________________________ Model: "transducer" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== mel_specs (InputLayer) [(None, None, 240)] 0 __________________________________________________________________________________________________ pred_inp (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ encoder (Model) (None, None, 640) 96414656 mel_specs[0][0] __________________________________________________________________________________________________ prediction_network (Model) (None, None, 640) 24635524 pred_inp[0][0] __________________________________________________________________________________________________ tf_op_layer_ExpandDims (TensorF [(None, None, 1, 640 0 encoder[1][0] __________________________________________________________________________________________________ tf_op_layer_ExpandDims_1 (Tenso [(None, 1, None, 640 0 prediction_network[1][0] __________________________________________________________________________________________________ tf_op_layer_AddV2 (TensorFlowOp [(None, None, None, 0 tf_op_layer_ExpandDims[0][0] tf_op_layer_ExpandDims_1[0][0] __________________________________________________________________________________________________ dense (Dense) (None, None, None, 6 
410240 tf_op_layer_AddV2[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, None, None, 4 2782581 dense[0][0] ================================================================================================== Total params: 124,243,001 Trainable params: 124,242,521 Non-trainable params: 480 __________________________________________________________________________________________________ Starting training. Performing evaluation. Traceback (most recent call last): File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2292, in _convert_inputs_to_signature flatten_inputs[index] = ops.convert_to_tensor( File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1341, in convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 321, in _constant_tensor_conversion_function return constant(v, dtype=dtype, name=name) File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 261, in constant return _constant_impl(value, dtype, shape, name, verify_shape=False, File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 270, in _constant_impl t = convert_to_eager_tensor(value, ctx, dtype) File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 96, in convert_to_eager_tensor return ops.EagerTensor(value, ctx.device_name, dtype) ValueError: Attempt to convert a value (PerReplica:{ 0: , 1: , 2: , 3: }) with an unsupported type () to a Tensor. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "run_rnnt.py", line 586, in app.run(main) File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/absl/app.py", line 299, in run _run_main(main, args) File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/absl/app.py", line 250, in _run_main sys.exit(main(argv)) File "run_rnnt.py", line 532, in main run_training( File "run_rnnt.py", line 347, in run_training checkpoint_model() File "run_rnnt.py", line 304, in checkpoint_model eval_loss, eval_metrics_results = run_evaluate( File "run_rnnt.py", line 433, in run_evaluate loss, metrics_results = eval_step(inputs) File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__ result = self._call(*args, **kwds) File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 647, in _call self._stateful_fn._function_spec.canonicalize_function_inputs( # pylint: disable=protected-access File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2235, in canonicalize_function_inputs inputs = _convert_inputs_to_signature( File "/home/sfalk/miniconda3/envs/rnnt/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2296, in _convert_inputs_to_signature raise ValueError("When input_signature is provided, all inputs to " ValueError: When input_signature is provided, all inputs to the Python function must be convertible to tensors: inputs: ( (PerReplica:{ 0: , 1: , 2: , 3: }, PerReplica:{ 0: , 1: , 2: , 3: }, PerReplica:{ 0: , 1: , 2: , 3: }, PerReplica:{ 0: , 1: , 2: , 3: }, PerReplica:{ 0: , 1: , 2: , 3: })) input_signature: ( [TensorSpec(shape=(None, None, 240), dtype=tf.float32, name=None), TensorSpec(shape=(None, None), dtype=tf.int32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None), TensorSpec(shape=(None, None), dtype=tf.int32, name=None)]) ```
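The ValueError itself comes from combining an input_signature with per-replica inputs: a tf.function with an input_signature only accepts plain tensors, but the distributed eval dataset yields PerReplica values, so they cannot be matched against the signature. The usual pattern is to drop the signature from the outer, strategy-aware function and let the inner step_fn see the plain tensors; a sketch based on the run_evaluate snippet quoted earlier in this thread (not a tested patch):

```python
# No input_signature on the outer function: dist_inputs is a PerReplica object,
# not a Tensor, so it cannot satisfy a TensorSpec-based signature.
@tf.function
def eval_step(dist_inputs):
    # step_fn receives ordinary tensors on each replica; if a signature is
    # wanted at all, it belongs on step_fn, not here.
    losses, metric_results = strategy.experimental_run_v2(step_fn, args=(dist_inputs,))
    mean_loss = strategy.reduce(tf.distribute.ReduceOp.SUM, losses, axis=0)
    return mean_loss, metric_results
```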
ChristopheZhao commented 4 years ago

I can train the model on multiple GPUs by adding a @tf.function decorator (see https://github.com/tensorflow/tensorflow/issues/29911), and I also added the line os.environ['CUDA_VISIBLE_DEVICES'] = "{your gpus}" to my code; a sketch of the pattern is below.
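For reference, that pattern looks roughly like this (a sketch, not the exact code from the linked issue; the GPU ids are placeholders and the ... stands for the repo's existing per-replica step body):

```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'  # select your GPUs before TensorFlow initializes them

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

@tf.function  # the decorator in question: run the distributed step as a graph instead of eagerly
def train_step(dist_inputs):
    # body unchanged from the repo's train step (per-replica run + reduce)
    ...
```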

stefan-falk commented 4 years ago

Maybe take a look at https://github.com/usimarit/TiramisuASR

ChristopheZhao commented 4 years ago

This is the multi-GPU training code that I modified, but the loss value goes from negative to NaN after training for some batches. (screenshot attached)