Many servers on the internet are kept fully up to date, running the latest OS and the latest NVIDIA drivers (e.g. 535). It is not possible to install the older CUDA 11.8 on a system with a 535 NVIDIA driver.
If the Recognize app could use CUDA 11.8 and TensorFlow packaged inside a Docker container, that would bring several advantages:

- the older CUDA 11.8 could still be used, so everyone could run Recognize with GPU acceleration
- for most people, pulling a ready-made Docker container is easier than installing CUDA and the NVIDIA drivers manually (which can also be painful)
- maintaining the container would be a lot easier than answering all those questions about why the GPU part of the software does not work
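As a minimal sketch of the idea: the official `nvidia/cuda` images on Docker Hub already bundle CUDA 11.8, and with the NVIDIA Container Toolkit installed on the host, the host's 535 driver is passed through to the container. The image tag below is a real Docker Hub tag; whether Recognize would use this exact image is an assumption.

```shell
# Pull a runtime image that bundles CUDA 11.8 + cuDNN 8
docker pull nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04

# Check that the host's NVIDIA driver (e.g. 535) is visible inside the
# container; requires the NVIDIA Container Toolkit for the --gpus flag
docker run --rm --gpus all nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 nvidia-smi
```

The key point is that the CUDA userspace libraries live inside the container while the driver stays on the host, so a new driver and an old CUDA toolkit can coexist.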
I hope this works someday. Best greetings :)
Describe the feature you'd like to request
Support running Recognize's GPU-accelerated TensorFlow backend inside a Docker container that ships CUDA 11.8, as described above.
Describe the solution you'd like
Run the GPU parts of Recognize (TensorFlow with CUDA 11.8) inside a Docker container, so the host only needs a recent NVIDIA driver and the container runtime.
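To illustrate that this combination works, the official `tensorflow/tensorflow:2.12.0-gpu` image (a real Docker Hub tag, built against CUDA 11.8) can detect the GPU through a host's 535 driver. Using this particular image for Recognize is only an assumption for the sketch.

```shell
# TensorFlow 2.12 is built against CUDA 11.8; the host only provides
# the driver via the NVIDIA Container Toolkit (--gpus all)
docker run --rm --gpus all tensorflow/tensorflow:2.12.0-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

If the GPU is passed through correctly, the printed list is non-empty; an empty list means the container cannot see the device.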
Describe alternatives you've considered
There are none.