NVIDIA-AI-IOT / cuDLA-samples

YOLOv5 on Orin DLA

[hybrid mode] load cuDLA module from memory FAILED in src/cudla_context_hybrid.cpp:96, CUDLA ERR: 7 #10

Closed hygxy closed 1 month ago

hygxy commented 10 months ago

I removed the --buildDLAStandalone flag here since I only have a Jetson Orin (not a Drive Orin) and can only run DLA in hybrid mode. After running make run, I get the following error:

```
/usr/local/cuda/bin/nvcc -I /usr/local/cuda/include -I ./src/matx_reformat/ -I /usr/include/opencv4/ -I /usr/include/jsoncpp/ -I /usr/include -gencode arch=compute_87,code=sm_87 -c -o build/decode_nms.o src/decode_nms.cu
g++ -I /usr/local/cuda/include -I ./src/matx_reformat/ -I /usr/include/opencv4/ -I /usr/include/jsoncpp/ -I /usr/include --std=c++14 -Wno-deprecated-declarations -Wall -O2 -c -o build/validate_coco.o src/validate_coco.cpp
g++ -I /usr/local/cuda/include -I ./src/matx_reformat/ -I /usr/include/opencv4/ -I /usr/include/jsoncpp/ -I /usr/include --std=c++14 -Wno-deprecated-declarations -Wall -O2 -c -o build/yolov5.o src/yolov5.cpp
g++ -I /usr/local/cuda/include -I ./src/matx_reformat/ -I /usr/include/opencv4/ -I /usr/include/jsoncpp/ -I /usr/include --std=c++14 -Wno-deprecated-declarations -Wall -O2 -c -o build/cudla_context_hybrid.o src/cudla_context_hybrid.cpp
g++ --std=c++14 -Wno-deprecated-declarations -Wall -O2 -I /usr/local/cuda/include -I ./src/matx_reformat/ -I /usr/include/opencv4/ -I /usr/include/jsoncpp/ -I /usr/include -o ./build/cudla_yolov5_app build/decode_nms.o build/validate_coco.o build/yolov5.o build/cudla_context_hybrid.o -l cudla -L/usr/local/cuda/lib64 -l cuda -l cudart -l nvinfer -L /usr/lib/aarch64-linux-gnu/ -l opencv_objdetect -l opencv_highgui -l opencv_imgproc -l opencv_core -l opencv_imgcodecs -L ./src/matx_reformat/build/ -l matx_reformat -l jsoncpp
././build/cudla_yolov5_app --engine ./data/loadable/yolov5.int8.int8hwc4in.fp16chw16out.hybrid.bin --image ./data/images/image.jpg --backend cudla_int8
[hybrid mode] create cuDLA device SUCCESS
[hybrid mode] load cuDLA module from memory FAILED in src/cudla_context_hybrid.cpp:96, CUDLA ERR: 7
make: *** [Makefile:78: run] Error 1
```

RuningMangoPi commented 10 months ago

Excuse me, has your problem been resolved?

lynettez commented 8 months ago

@hygxy Hi, sorry for the late response. The error is expected: without the --buildDLAStandalone flag, trtexec generates a TensorRT plan file instead of a DLA loadable, and cuDLA can only load a DLA loadable.
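To illustrate the distinction, here is a minimal sketch of the two trtexec invocations. The exact ONNX and output paths are placeholders (the repo's export scripts use their own names); the point is that --buildDLAStandalone is what makes the output a cuDLA-loadable binary rather than a TensorRT engine plan:

```shell
# Without --buildDLAStandalone: produces a TensorRT plan,
# which must be run through the TensorRT runtime, not cuDLA.
trtexec --onnx=model.onnx --useDLACore=0 --int8 --fp16 \
        --saveEngine=model.plan

# With --buildDLAStandalone: produces a standalone DLA loadable,
# which cudlaModuleLoadFromMemory can consume (hybrid or standalone mode).
trtexec --onnx=model.onnx --useDLACore=0 --int8 --fp16 \
        --buildDLAStandalone \
        --saveEngine=model.dla.bin
```

In other words, hybrid mode changes how the loadable is *executed* (submitted through a CUDA stream), not how it is *built*: the --buildDLAStandalone flag is still required when generating the input file for this sample.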