PRBonn / bonnet

Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics.
GNU General Public License v3.0
323 stars 89 forks

MaxWorkSpace #51

Closed JadBatmobile closed 5 years ago

JadBatmobile commented 5 years ago

Hey Andres,

In the NetTRT.cpp code, the line:

_builder->setMaxWorkspaceSize(1 << size);

I assume this sets the amount of GPU memory allocated for the network? You use size 32. What does this mean in this context?

If I wanted to deploy two models (two .uff files) on one GPU, how do you recommend I proceed?

tano297 commented 5 years ago

That is a first attempt to get roughly 4 GB from your GPU. If your GPU is not big enough to allocate this, I decrease by halving each time: 2 GB, 1 GB, and so on. I would recommend setting it to half of your usual free GPU memory per model.

TensorRT API docs for setMaxWorkspaceSize

JadBatmobile commented 5 years ago

Cool! Not to ask too much... to allocate half, would you do size = 15; (1 << size)?

JadBatmobile commented 5 years ago

Sorry, I'll ask again in a better way: when you use size = 32, (1 << size) evaluates to 1073741824... can you explain how this corresponds to 4 GB?

tano297 commented 5 years ago

No, it would be 1<<31. It is a bitshift, so 1<<32 means 2^32, and 1<<31 is 2^31, effectively half :) That's why inside the loop I do size--;

You could also do 4294967296 and 2147483648, but it is not as fancy, is it? :stuck_out_tongue:

JadBatmobile commented 5 years ago

Thank you, Andres!

tano297 commented 5 years ago

my pleasure