Closed: decadance-dance closed this issue 1 month ago
Hi @decadance-dance 👋, we have already split some dependencies into extras (this will be available with the next release). You could also take a look at https://github.com/felixdittrich92/OnnxTR, which is more optimized for plain inference :)
Hi @felixdittrich92, thanks, I had never seen the OnnxTR project before. I am definitely going to try it.
@decadance-dance yeah, I worked on it a bit last week and released it publicly on Friday ^^ There were some requests for an ONNX pipeline, and it's easier to keep it dedicated instead of blowing up docTR with a third "backend".
🚀 The feature
At the moment, installing all the dependencies for doctr with `.[torch]` takes up a lot of disk space. My final Docker image is about 12 GB, even though I run the service with only one model, and I doubt I need roughly 7 GB of dependencies to run inference with a single model. My suggestion is to add separate extras such as `[torch-infer]` or `[tf-infer]` that install only the packages needed for inference. This would help whenever the training and evaluation dependencies are not required.
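For illustration, an inference-only extra could be declared next to the existing one via `extras_require` in `setup.py`. The sketch below is hypothetical; the extra names and the dependency lists are placeholders, not docTR's actual requirements.

```python
# Hypothetical extras layout (not docTR's actual setup.py): split the "torch"
# extra into a minimal inference set and a full train/eval set.
from setuptools import setup, find_packages

setup(
    name="python-doctr",
    packages=find_packages(),
    extras_require={
        # minimal runtime dependencies for inference only (names are illustrative)
        "torch-infer": ["torch", "torchvision", "pillow", "opencv-python-headless"],
        # full set including training and evaluation tooling
        "torch": [
            "torch",
            "torchvision",
            "pillow",
            "opencv-python-headless",
            "tqdm",
            "matplotlib",
            "tensorboard",
        ],
    },
)
```

With such extras, an inference image could be built with something like `pip install "python-doctr[torch-infer]"`, leaving the training and evaluation packages out entirely.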
Motivation, pitch
I'd like to build lighter images for inference purposes by installing fewer dependencies. This would save disk space and reduce build time.
Alternatives
There may also be dependencies that are installed but never actually used. If so, removing them from the package list would help reduce the size of the builds; see the sketch below for one way to find candidates.
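One way to spot such candidates is to list the installed distributions by on-disk size inside the built image. A minimal sketch, assuming a standard site-packages layout and Python 3.8+:

```python
# Sketch: rank installed distributions by on-disk size to spot heavy,
# possibly unused dependencies inside the image.
from importlib.metadata import distributions
from pathlib import Path


def dist_size(dist) -> int:
    """Sum the sizes of all files recorded for a distribution."""
    total = 0
    for f in dist.files or []:
        p = Path(dist.locate_file(f))
        if p.is_file():
            total += p.stat().st_size
    return total


sizes = sorted(
    ((dist.metadata["Name"], dist_size(dist)) for dist in distributions()),
    key=lambda kv: kv[1],
    reverse=True,
)

# Print the 15 largest packages in megabytes.
for name, size in sizes[:15]:
    print(f"{name:30s} {size / 1e6:8.1f} MB")
```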
Additional context
My typical docker image: