Closed haiyang-tju closed 6 years ago
Have you tried enabling GPU memory growth (allow_growth)? Because the NVIDIA Jetson shares memory between the CPU and GPU, it can also help to reboot and disable the desktop environment. I have a similar problem too. Try rebooting the system to free up memory.
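For reference, the memory-growth setting being discussed is a session configuration in TF 1.x; a minimal sketch (assuming the TF 1.x build shipped with JetPack):

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand instead of grabbing
# nearly all of it up front -- important on Jetson, where the CPU and
# GPU share the same physical memory.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Optionally also cap the fraction of total memory TF may claim:
config.gpu_options.per_process_gpu_memory_fraction = 0.5

sess = tf.Session(config=config)
```

This is a configuration fragment, not a complete program; the fraction value is only an illustration.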
Yes, I have set it to True, and rebooting does not work for me. When the script is running it fills up the memory and I can do nothing. I then rewrote it with the C++ API of TensorRT, and it works well. I don't understand why.
Hi,
I used to build TF with TensorRT support enabled and my performance was awful. Removing it let my program run with much higher performance, about 3x better.
Not sure if you can afford to do this but might be worth a shot.
-------- Original Message -------- On Jul 20, 2018, 14:16, 海洋@TJU wrote:
Cool. It seems we also need a build without TensorRT. In fact, TensorRT support in TensorFlow on the Jetson is limited; only a few features work.
@haiyang-tju Hi, I've been trying to write a C++ program to perform the TensorRT optimization as well, because the Python script seemed to increase memory usage on the Jetson. Could you possibly share that piece of code with me? I'm not too familiar with the C++ API. Thanks!
@asinha94 You can try the sample code of TensorRT-4.0.0.3 here. You may need to log in with an NVIDIA account. The sample code is in this path:
***\TensorRT-4.0.0.3\targets\x86_64-linux-gnu\samples
If you cannot find the file, leave your email address and I will send it to you.
I successfully compiled TF-1.9.0 following your steps, and the verification passed. Thanks for your code.
But when running the TensorRT-optimized graph, there is very little free memory left, and it often fails while running. I also see this in your test_tftrt.py output log:
If the maximum workspace size is set for TensorRT, will all of it be occupied, so that other programs can no longer use that memory?
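For context, the workspace size in question is a parameter of the TF-TRT converter in TF 1.x (tf.contrib.tensorrt). A sketch of the call, where the graph and output node name are placeholders for your own model:

```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt  # TF-TRT lives in contrib in TF 1.x

# "frozen_graph_def" and the output node name are placeholders.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=["vgg_16/fc8/squeezed"],
    max_batch_size=1,
    # Upper bound on the scratch memory TensorRT may use when building
    # and executing engines; on a TX2 with 8 GB of shared memory, a
    # smaller value leaves room for the model and for TF itself.
    max_workspace_size_bytes=1 << 30,   # 1 GiB, not 4 GiB
    precision_mode="FP16",
)
```

As I understand the TensorRT documentation, the workspace size is an upper bound TensorRT is allowed to allocate, so setting it close to total device memory can starve everything else; the 1 GiB value here is only a suggested starting point, not a verified fix.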
I am only running the VGG-16 model on the TX2 with TensorRT, and max_workspace_size_bytes = 4096 << 20. When the model runs, the output is Cuda Error in execute: 9. Here is my output log:
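As a quick sanity check on that setting, plain Python arithmetic shows how large the requested workspace actually is:

```python
# max_workspace_size_bytes = 4096 << 20 shifts 4096 left by 20 bits,
# i.e. 4096 * 2**20 bytes.
workspace_bytes = 4096 << 20
print(workspace_bytes)               # 4294967296
print(workspace_bytes / (1 << 30))   # 4.0 -> a full 4 GiB workspace
```

On a TX2, where 8 GB of physical memory is shared by the CPU and GPU, reserving 4 GiB for the TensorRT workspace on top of the model weights and TensorFlow's own allocations could plausibly exhaust memory; trying a smaller value such as 1 << 30 may be worth a shot.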
Do you know what is going on here? Thanks a lot.