Open maym86 opened 4 years ago
I am trying to run YOLO on the AGX Xavier's DLA, but it looks like the whole model is running on the GPU instead. I set the DLA device to 0 here:
https://github.com/lewes6369/TensorRT-Yolov3/blob/b84aa7230830155b21339ed11aa831cec43bef4d/main.cpp#L263
by adding a `DLADevice` parameter: `net.reset(new trtNet(deployFile, caffemodelFile, outputNames, calibData, run_mode, batchSize, DLADevice));`
When I create the engine, the build log indicates that all the layers are still running on the GPU. Has anyone else tried this, and do you have any hints on how to get the model to run on the DLA?

@maym86 Did you manage to solve this?

Nope. I think the model needs to be built from DLA-supported layers, which this one wasn't. The docs somewhere probably list which layer types can be converted. All I know is that for my application, running some layers on the DLA and some on the GPU is very slow, so you might as well run the whole thing on the GPU.
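For anyone hitting the same issue: passing a device index into the engine constructor is not enough by itself — the DLA has to be enabled on the TensorRT builder before the engine is built. A minimal sketch against the TensorRT 5/6-era `IBuilder` API that this repo targets (the function name and `builder` variable are illustrative, not from the repo):

```cpp
// Sketch: enabling DLA when building a TensorRT engine.
// Assumes the TensorRT 5/6-era IBuilder API used by this repository.
#include "NvInfer.h"

void enableDLA(nvinfer1::IBuilder* builder, int dlaCore)
{
    // Place layers on the DLA by default instead of the GPU.
    builder->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);

    // Select which of the Xavier's two DLA cores to use (0 or 1).
    builder->setDLACore(dlaCore);

    // Let layers the DLA cannot run fall back to the GPU; without
    // this, engine building fails on the first unsupported layer.
    builder->allowGPUFallback(true);

    // The DLA only runs FP16 (or INT8) precision, not FP32.
    builder->setFp16Mode(true);
}
```

Note that on newer TensorRT releases (7.x and later) these settings moved to `IBuilderConfig` (`setDefaultDeviceType`, `setDLACore`, and `setFlag(BuilderFlag::kGPU_FALLBACK)`), but the idea is the same. Also, as discussed above, only certain layer types are DLA-eligible; anything else falls back to the GPU, and frequent DLA-to-GPU transitions can make the mixed configuration slower than running entirely on the GPU.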