Closed JadBatmobile closed 5 years ago
That is a first trial to get roughly 4 GB from your GPU. If your GPU is not big enough to allocate that, try decreasing by halving each time: 2 GB, 1 GB, etc. I would recommend setting it to half of your usual free GPU memory per model.
Cool! Not to ask too much... to allocate half, would you do size = 15; (1 << size)?
Sorry, I'll ask again in a better way: when you use size = 32, (1 << size) evaluates to 1073741824. Can you explain how this corresponds to 4 GB?
No, it would be 1 << 31. It is a bitshift, so 1 << 32 means 2^32, and 1 << 31 is 2^31, effectively half :) That's why inside the loop I do size--; You could also use 4294967296 and 2147483648, but it is not as fancy, is it? :stuck_out_tongue:
thank you Andres!
my pleasure
Hey Andres,
In the NetTRT.cpp code, the line:
_builder->setMaxWorkspaceSize(1 << size);
I assume this sets the amount of GPU memory allocated for the network? You use size = 32. What does this mean in this context?
If I wanted to deploy two models (two .uff files) on one GPU, how do you recommend I proceed?