xurui / SiamRPNTracker


accelerate inference #3

Open JensenHJS opened 5 years ago

JensenHJS commented 5 years ago

Hello, I want to know whether this accelerates inference. Recently I have been trying to accelerate SiamRPN inference. I tried using fp16 instead of fp32, since fp16 is said to be twice as fast as fp32. It does speed up inference, but the gain is not very noticeable. My platform is PyTorch on an NVIDIA TX2.
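For reference, the fp16 conversion described above can be sketched as below. The small `nn.Sequential` is a hypothetical stand-in for the SiamRPN backbone (not the actual network); any `nn.Module` converts the same way, by halving both the weights and the input.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the tracker backbone; any nn.Module works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3),
).eval()

# Dummy 127x127 template patch, as used by SiamRPN-style trackers.
x = torch.randn(1, 3, 127, 127)

if torch.cuda.is_available():
    # fp16 inference path: convert weights AND inputs to half precision.
    model = model.half().cuda()
    x = x.half().cuda()

with torch.no_grad():
    out = model(x)
```

Note that on the TX2 the speedup from fp16 is often limited by memory bandwidth and by the CPU-side pre/post-processing, which fp16 does not touch, so less than a 2x end-to-end gain is expected.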

jufeng123 commented 5 years ago

Does it run normally on your TX2? When I run it on a TX2 the GPU utilization is very low.


jufeng123 commented 5 years ago

Is it that libtorch can't be used on the TX2 and only PyTorch works? If so, should the model be run from the .pth file?

JensenHJS commented 5 years ago

Even installing PyTorch on the TX2 is fairly troublesome; you have to compile it yourself. As for installing libtorch, search online for instructions; I believe I used libtorch on a desktop PC.

JensenHJS commented 5 years ago

This feels quite slow to me. I haven't read the code closely, but just judging by tracking speed it is clearly slower than running pysot's open-source PyTorch version. It doesn't seem to fully utilize GPU resources; it may only be suitable for deployment, without actually accelerating anything.
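To turn speed impressions like the above into numbers, a minimal timing harness can estimate FPS. This is a sketch with a hypothetical single-conv stand-in model, not the SiamRPN tracker; substitute the real forward pass to compare against pysot.

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in model; replace with the actual tracker forward pass.
model = nn.Conv2d(3, 16, 3).eval()
x = torch.randn(1, 3, 255, 255)  # dummy search-region-sized input

with torch.no_grad():
    for _ in range(5):  # warm-up iterations, excluded from timing
        model(x)
    n = 20
    t0 = time.perf_counter()
    for _ in range(n):
        model(x)
    fps = n / (time.perf_counter() - t0)

print(f"{fps:.1f} FPS")
```

When timing on the GPU, also call `torch.cuda.synchronize()` before reading the clock, since CUDA kernels launch asynchronously and the Python-side timer would otherwise stop too early.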

jufeng123 commented 5 years ago

Thanks!

Zepyhrus commented 4 years ago

@JensenHJS Try a source-built libtorch instead of the pre-compiled one. I struggled with the same performance problem on some other projects before, and building from source solved it.
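For anyone following this suggestion, a source build of libtorch on an aarch64 board roughly follows the steps below. This is a sketch assuming a JetPack environment with CUDA, cmake, and Python already installed; flags and build times vary by PyTorch version.

```shell
# Clone PyTorch with its submodules (required for a source build).
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

# Enable CUDA so the resulting libtorch targets the TX2's GPU.
export USE_CUDA=1

# Build the C++ libtorch distribution from source.
python tools/build_libtorch.py
```

A source build lets the compiler target the board's actual CPU/GPU architecture, which is one plausible reason it outperforms a generic pre-compiled binary.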

jufeng123 commented 4 years ago

@Zepyhrus Which performs better: PyTorch built from source, or PyTorch installed via pip?

JensenHJS commented 4 years ago

@Zepyhrus I installed libtorch via pip and ran the author's code, and it is very slow, much slower than the official Python version. Are you saying that building and installing libtorch from source will fix the slowness?