GuiyuDu opened this issue 6 years ago
I am not sure what your problem is. If you want to decode on CPU only, install tensorflow (i.e. not tensorflow-gpu) in a virtual env, or hide the GPU with CUDA_VISIBLE_DEVICES.
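For the CPU-only route, the GPU can be hidden from the shell before launching the decoder. A minimal sketch (this only affects processes started afterwards from the same shell):

```shell
# Hide every GPU from CUDA-based libraries; TensorFlow will then
# place all ops on the CPU. Must be set before the process starts.
export CUDA_VISIBLE_DEVICES=""
```

Setting the variable to an empty string makes CUDA report zero visible devices; setting it to e.g. `0` would instead expose only the first GPU.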
If you want to decode on GPU, you should expect it takes both CPU and GPU memory.
The default worker_gpu_memory_fraction should be OK. If you get OOM, you should decrease the batch_size (in the decoding hparams, since you are decoding). BTW: I remember strange effects when adjusting worker_gpu_memory_fraction, something like 0.91 and 0.93 working but 0.92 failing (I don't remember the exact values).
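The decode-time batch size is passed through the `--decode_hparams` flag. A sketch of such an invocation, assuming the data/model flag values from the walkthrough; the variable names are placeholders to adapt to your setup:

```shell
# Sketch: shrink the decoding batch to reduce GPU memory pressure.
# batch_size inside --decode_hparams is separate from the training batch size.
t2t-decoder \
  --data_dir="$DATA_DIR" \
  --problem=translate_ende_wmt32k \
  --model=transformer \
  --hparams_set=transformer_base \
  --output_dir="$TRAIN_DIR" \
  --decode_hparams="beam_size=4,alpha=0.6,batch_size=2" \
  --decode_from_file="$DECODE_FILE"
```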
I mean that t2t-decoder uses the CPU to decode rather than the GPU. I followed the walkthrough guide and changed nothing. When I used the default worker_gpu_memory_fraction, it told me that the allocation failed. But when I checked memory usage with nvidia-smi, I found the memory had already been allocated.
You may kill the processes occupying the GPUs and retry.
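Stale processes still holding GPU memory are a common cause of "memory already allocated" failures. A sketch of how to find and kill them, with `<PID>` as a placeholder for the process id you identify:

```shell
# The "Processes" table at the bottom of nvidia-smi output lists
# each process id and how much GPU memory it holds.
nvidia-smi
# Replace <PID> with the offending process id from that table.
kill -9 <PID>
```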
I tried to find out the reason from the source code but failed. Can anyone give me some help? Thanks.