Open 7oranger opened 2 years ago
Please use the tensorrt8 branch instead for JetPack version > 4.6. Thanks.
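In case it helps, switching to that branch looks roughly like this (assuming the standard NVIDIA-AI-IOT repo URL; adjust if you cloned a fork):
git clone https://github.com/NVIDIA-AI-IOT/jetson_benchmarks.git   # skip if already cloned
cd jetson_benchmarks
git checkout tensorrt8   # the TensorRT 8 branch mentioned above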
Hi @wanghr323, after switching to the tensorrt8 branch I still get the same error.
sudo python3 benchmark.py --all --csv_file_path benchmark_csv/nx-benchmarks.csv --model_dir models --jetson_devkit xavier --power_mode 8
Here are the logs.
pose_estimation_b2_ws2048_gpu_save.txt inception_v4_b2_ws2048_gpu_save.txt inception_v4_b1_ws1024_dla1_save.txt log.txt
Please retry using an absolute path on the command line: "--csv_file_path /home/nvidia/YOUR_PATH/benchmark_csv/nx-benchmarks.csv ..."
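For example, keeping the same flags as the command above but with absolute paths for both the CSV and the model directory (YOUR_PATH is a placeholder for wherever the repo is cloned):
sudo python3 benchmark.py --all \
    --csv_file_path /home/nvidia/YOUR_PATH/benchmark_csv/nx-benchmarks.csv \
    --model_dir /home/nvidia/YOUR_PATH/models \
    --jetson_devkit xavier --power_mode 8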
I'm hitting the same problem as https://forums.developer.nvidia.com/t/the-jetson-benchmarks-from-github-can-not-run-on-xavier-nx-16g-som/211209/27 and cannot run this GitHub benchmark demo. My setup:
Xavier NX 16G dev-kit with a 120 GB SSD; nv_tegra_release is R32 revision 7.2
JetPack 4.6.2, TensorRT 8.2.1.8 (installed via sudo apt-get install nvidia-jetpack)
CUDA 10.2, cuDNN 8.2.1 (installed via sudo apt-get install nvidia-jetpack)
I tried the following methods, but none of them worked.
I have used absolute paths instead of relative paths in the command.
I have decreased the workspace-size parameters in the NX CSV file from 2048 or 1024 down to 512 (details on how I edited it are just below).
I have checked free -m: total is usually 15817 MB and used is around 1800 MB. I cannot get used RAM under 500 MB, because even right after rebooting the board and doing nothing, used memory already reaches 1.8 GB.
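For the CSV change mentioned above, a crude way to do it is a global substitution like the following (note it replaces every occurrence of 2048 and 1024 in the file, so double-check the result afterwards):
cp benchmark_csv/nx-benchmarks.csv benchmark_csv/nx-benchmarks.csv.bak   # keep a backup
sed -i -e 's/2048/512/g' -e 's/1024/512/g' benchmark_csv/nx-benchmarks.csv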
When I run python3 utils/download_models.py --all --csv_file_path /benchmark_csv/nx-benchmarks.csv --savedir
the result is "FPS is 0.00, Error in Build, please check the log", plus a recommendation to run benchmarking in headless mode, and in the log it says "failed to create engine from model".
I wonder if this comes from my JetPack/TensorRT version? How can I solve the problem? Here are my logs: inception_v4_b1_ws512_dla1.txt inception_v4_b1_ws512_dla2.txt inception_v4_b2_ws512_gpu.txt inception_v4_b2_ws1024_gpu.txt
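In case it is relevant, by headless mode I assume the generic Ubuntu/L4T approach of booting to the text console, which is not specific to this repo:
sudo systemctl set-default multi-user.target   # boot without the desktop to free RAM; revert with graphical.target
sudo reboot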