Open bjia56 opened 6 months ago
Hello @bjia56 . Are you able to use NPU from Docker container? If yes, can you provide some instructions?
Yes, you'll need to run the Docker container with `--security-opt systempaths=unconfined` (to let the container see `/proc/device-tree/compatible`) and `--device /dev/dri:/dev/dri` (for access to `/dev/dri/renderD129`). `librknnrt.so` also needs to be downloaded to `/usr/lib` inside the container.
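Putting those flags together, a full invocation might look like the sketch below. The image name and command are placeholders, not part of the original thread:

```shell
# Sketch only: "my-rknn-image" and "run_inference.py" are hypothetical names.
# --security-opt unmasks /proc paths so /proc/device-tree/compatible is readable;
# --device exposes the render nodes (e.g. /dev/dri/renderD129) to the container.
docker run --rm \
    --security-opt systempaths=unconfined \
    --device /dev/dri:/dev/dri \
    my-rknn-image \
    python3 run_inference.py
# librknnrt.so must already be present at /usr/lib/librknnrt.so inside the image,
# e.g. copied in by the Dockerfile.
```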
@bjia56 Thank you for the response. I just tried it, but it throws this error:
```
rknn_execute-1 | W Verbose file path is invalid, debug info will not dump to file.
rknn_execute-1 | D target set by user is: None
rknn_execute-1 | D Starting ntp or adb, target soc is RK3588, device id is: None
rknn_execute-1 | E RKNN: [21:56:25.555] failed to open rknpu module, need to insmod rknpu dirver!
rknn_execute-1 | E RKNN: [21:56:25.555] failed to open rknn device!
rknn_execute-1 | E Catch exception when init runtime!
rknn_execute-1 | E Traceback (most recent call last):
rknn_execute-1 |     File "/home/jenkins/.local/lib/python3.10/site-packages/rknnlite/api/rknn_lite.py", line 149, in init_runtime
rknn_execute-1 |       self.rknn_runtime.build_graph(self.rknn_data, self.load_model_in_npu)
rknn_execute-1 |     File "rknnlite/api/rknn_runtime.py", line 921, in rknnlite.api.rknn_runtime.RKNNRuntime.build_graph
rknn_execute-1 |   Exception: RKNN init failed. error code: RKNN_ERR_FAIL
rknn_execute-1 |
rknn_execute-1 | --> Loading model
rknn_execute-1 | done
rknn_execute-1 | --> Init runtime environment
rknn_execute-1 | Init runtime environment failed!
rknn_execute-1 exited with code 255
```
docker-compose.yaml:

```yaml
rknn_execute:
  build:
    context: .
    dockerfile: docker/rknn_execute/Dockerfile
  environment:
    - RKNN_MODEL=yolov5s.rknn
  volumes:
    - "./models_datasets:/home/jenkins/models_datasets"
  devices:
    - "/dev/dri"
  security_opt:
    - systempaths=unconfined
```
It looks like the rknpu driver is not loaded; it needs to be loaded on the host, outside the container. I run my container on the Armbian Orange Pi 5 image, which, if I recall correctly, has the driver preinstalled.
I also use Armbian, and exactly the same code works on the host. Do you load the kernel module manually?
No, I don't recall having to load the module manually.
Can you tell me which version of the rknn toolkit you use? And which version of Docker? Maybe there's a bug there.
@bjia56 Here's how I execute my code:
```python
rknn = RKNNLite(verbose=True)

# Load RKNN model
print('--> Loading model')
ret = rknn.load_rknn(RKNN_MODEL)
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Init runtime environment
print('--> Init runtime environment')
ret = rknn.init_runtime()
if ret != 0:
    print('Init runtime environment failed!')
    exit(ret)
print('done')
```
- Kernel module: 0.9.2
- Docker: 26.0.1
- rknn-toolkit-lite2: v2.0.0-beta0
Maybe try running inference as root inside the container to see if that helps?
Code we use in Scrypted: https://github.com/koush/scrypted/blob/main/plugins/rknn/src/rknn/plugin.py#L98
@bjia56 Thank you, the user inside the container did not have permission to access the `/dev/dri/` devices. I just added the `render` and `video` groups and everything works fine now.
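For reference, the equivalent fix in the docker-compose.yaml above would be the `group_add` key. Note that group names are resolved against the container's own `/etc/group`, so host GIDs are often safer; the names here are illustrative and should match whatever group owns the render node on your system:

```yaml
    group_add:
      - render
      - video
```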
Great!
I can also confirm that Docker volumes do not work for `/proc/device-tree/compatible`. That's because `/proc/device-tree` is a symlink to `/sys/firmware/devicetree/base`, which is masked by default in Docker. A workaround that avoids the privileged option is `--security-opt systempaths=unconfined`, but that is also not very secure. It would be much better if we had an option to pass the CPU model to the rknn library directly instead.
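For anyone debugging this, the file is a NUL-separated list of devicetree compatible strings; on the host you can inspect it like this (the exact strings depend on your board):

```shell
# /proc/device-tree/compatible holds NUL-separated compatible strings.
# tr turns the NUL separators into newlines so each entry is on its own line.
tr '\0' '\n' < /proc/device-tree/compatible
# On an RK3588 board the output typically includes an entry like "rockchip,rk3588".
```

Inside an unprivileged container without `systempaths=unconfined`, reading this path fails, which is what trips up the rknn runtime's SoC detection.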
Hello! Thanks for the awesome inference library and NPU. I have recently been working on adding Rockchip NPU real-time object detection support to the Scrypted home video platform.
A couple of feature requests that would help with rknn usability on Linux:

1. rknn reads `/proc/device-tree/compatible` for the CPU model. In Docker, this path is only accessible if the container is created in privileged mode. Can this check be made optional or done some other way, to relax the privileged Docker requirement?
2. rknn requires `librknnrt.so` to exist under `/usr/lib`; my guess is its loading is done with a dynamic dlopen somewhere in the Python native extensions. Can this library be bundled into rknn_toolkit_lite2 wheels (such as with `auditwheel`) so it doesn't need to be downloaded separately? Alternatively, can this library be loaded from `LD_LIBRARY_PATH` so a process could be pointed to a different directory? Non-Docker Scrypted installations are not guaranteed to have write access to `/usr/lib`.