jocover / jetson-ffmpeg

ffmpeg support on jetson nano

ffmpeg nvmpi usage from within a docker container #108

Open jeroenvanderschoot opened 2 years ago

jeroenvanderschoot commented 2 years ago

Is it possible to use the binaries from within a docker container?

Does anybody have a sample Dockerfile for this?

grantthomas commented 2 years ago

It should be possible, I would imagine, if you can figure out which resources to pass through to the container.

Nvidia has documentation on using regular PCI-E cards for GPU acceleration in docker containers: https://github.com/NVIDIA/nvidia-docker

You would probably be better served by starting with the existing Docker AI/ML material for Jetson, like something here: https://forums.developer.nvidia.com/t/how-to-build-docker-container-for-jetson-nano/183281 or here: https://medium.com/@Smartcow_ai/building-arm64-based-docker-containers-for-nvidia-jetson-devices-on-an-x86-based-host-d72cfa535786
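For what it's worth, on JetPack the usual entry point is the NVIDIA container runtime, something like this (the base image tag is just an example, pick whichever matches your L4T release):

docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.7.1 /bin/bash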

jeroenvanderschoot commented 2 years ago

@grantthomas thanks for pointing this out.

What would be needed to copy the binaries themselves into the container? Should I repeat the build steps from the wiki while building the container image itself?

grantthomas commented 2 years ago

I think it'd depend on how you wanted to structure it.

You could roll the binaries in, or do a volume, so long as the docker container has access to the files.

It's probably better practice to roll them in, but a volume would make it easier to update the binaries without having to rebuild the docker container.

I moved a pre-compiled binary along with the proper .so objects from one Jetson NX to another, and was able to get it to work after installing the .so files globally.

I don't think you need to do the full compilation, as long as you have the kernel modules available.
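By "installing the .so files globally" I mean roughly this (a sketch from memory; jetson-ffmpeg installs its nvmpi library under /usr/local/lib by default, adjust the paths to wherever your copies actually live):

# inside the container, after copying the pre-built files over
cp ffmpeg /usr/local/bin/
cp libnvmpi.so* /usr/local/lib/
# refresh the linker cache so ffmpeg can find the nvmpi library
ldconfig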

I would assume you'd need to pass through whatever nvidia suggests for typical jetson platform GPU usage, but that's just a guess.

Good luck, and please report back if/when you get it working; I'm sure you won't be the only one who's interested. I know I really don't like using gstreamer for what I need and much prefer ffmpeg.

grantthomas commented 2 years ago

Actually, on second thought, rolling them in would probably be the only way to keep it consistent, since the .so file(s) need to be present and loaded at boot, unless you want to bootstrap loading the modules on every container start.
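Something along these lines for the Dockerfile, as a rough sketch (untested; assumes you already built ffmpeg and the nvmpi library on the host next to the Dockerfile, and the base image tag should match your L4T release):

FROM nvcr.io/nvidia/l4t-base:r32.7.1
# copy the pre-built ffmpeg binary and the nvmpi shared objects into the image
COPY ffmpeg /usr/local/bin/
COPY libnvmpi.so* /usr/local/lib/
# refresh the linker cache so the nvmpi library is found at runtime
RUN ldconfig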

Azkali commented 2 years ago

These are the devices that need to be passed to the container:

/dev/dri
/dev/nvhost-as-gpu
/dev/nvhost-ctrl
/dev/nvhost-ctrl-gpu
/dev/nvhost-ctrl-isp
/dev/nvhost-ctrl-isp.1
/dev/nvhost-ctrl-nvdec
/dev/nvhost-ctxsw-gpu
/dev/nvhost-dbg-gpu
/dev/nvhost-gpu
/dev/nvhost-isp
/dev/nvhost-isp.1
/dev/nvhost-msenc
/dev/nvhost-nvdec
/dev/nvhost-nvjpg
/dev/nvhost-prof-gpu
/dev/nvhost-sched-gpu
/dev/nvhost-tsec
/dev/nvhost-tsecb
/dev/nvhost-tsg-gpu
/dev/nvhost-vic
/dev/nvmap

Or just pass the --privileged flag if security is not a priority in your use case. You can easily make your own Jetson-accelerated container from a regular container. This is how I do it on my end: https://gitlab.com/l4t-community/docker/toybox/-/blob/master/toybox_86_64
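For example, something like this (image name, paths, and codec are just placeholders; trim or extend the --device flags using the list above, or replace them all with --privileged):

docker run -it --rm \
  --device /dev/nvhost-ctrl \
  --device /dev/nvhost-ctrl-gpu \
  --device /dev/nvhost-nvdec \
  --device /dev/nvhost-msenc \
  --device /dev/nvhost-vic \
  --device /dev/nvmap \
  -v /path/to/videos:/videos \
  my-jetson-ffmpeg \
  ffmpeg -i /videos/in.mp4 -c:v h264_nvmpi /videos/out.mp4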