Closed: albertotb closed this issue 4 years ago.
Just use FROM thufeifeibear/turbo_transformers_gpu in your Dockerfile.
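For anyone else landing here, a minimal Dockerfile along these lines should work (an untested sketch; the COPY path and the serve.py entrypoint are illustrative, not part of the project):

```dockerfile
# Sketch: build on top of the prebuilt TurboTransformers GPU image
FROM thufeifeibear/turbo_transformers_gpu

# Copy your own inference code into the image (paths are illustrative)
COPY . /workspace
WORKDIR /workspace

# Hypothetical entrypoint; replace with your actual inference script
CMD ["python", "serve.py"]
```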
The problem with using that base image is that it is quite big (7 GB). I was looking for the bare minimum requirements needed to perform inference. I also tried the CPU version, but it is 7 GB as well.
Yes, I have noticed the growth in the Docker image size. I will try to build a smaller version with fewer dependencies.
@albertotb Hi Albert. I tried my best to shrink the size of the Docker image.
Apart from the base image nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04, which already uses 3.6 GB on disk, I found that most of the remaining disk space is occupied by the PyTorch installation, which results from the following command:
conda install pytorch=1.5.0 cudatoolkit=10.1 -c pytorch -y
PS: you can inspect image and container sizes with the shell command:
docker system df --verbose
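As a general Docker tip (not something verified against this particular image), conda's package caches can be purged in the same RUN layer as the install, which often shaves a noticeable amount off the final image, since anything deleted in a later layer still occupies space in the earlier one:

```dockerfile
# Install and clean up in one layer so the download/package cache
# never ends up baked into an image layer
RUN conda install pytorch=1.5.0 cudatoolkit=10.1 -c pytorch -y && \
    conda clean -afy
```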
@feifeibear Thank you for taking the time to look into this. Just one quick question: is PyTorch really needed to use the library? If so, at least for the CPU image, you could maybe save some space by installing pytorch-cpu, which is only about 300 MB (I think).
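For reference, the pytorch conda channel also offers a cpuonly meta-package, so a CPU-only image could install PyTorch without pulling in cudatoolkit at all (a sketch; the version pin is taken from the command quoted earlier in the thread, and I have not measured the resulting image size):

```dockerfile
# CPU-only PyTorch: the cpuonly meta-package replaces cudatoolkit,
# avoiding the multi-GB CUDA runtime in the image
RUN conda install pytorch=1.5.0 cpuonly -c pytorch -y && \
    conda clean -afy
```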
Could you expand the README to include an example of how to include TurboTransformers in our own Docker images?