NVIDIA-AI-IOT / nanoowl

A project that optimizes OWL-ViT for real-time inference with NVIDIA TensorRT.
Apache License 2.0

nanoowl container downloads /root/.cache/clip/ViT-B-32.pt every time #29

Closed TadayukiOkada closed 7 months ago

TadayukiOkada commented 7 months ago

The ViT model file is not included in the docker image, so it is downloaded each time tree_demo is started. I modified the Dockerfile as follows so that the built image contains the ViT model file:

```diff
diff --git a/packages/vit/nanoowl/Dockerfile b/packages/vit/nanoowl/Dockerfile
index 1254f00..c0b40e9 100644
--- a/packages/vit/nanoowl/Dockerfile
+++ b/packages/vit/nanoowl/Dockerfile
@@ -40,6 +40,9 @@ RUN cd /opt/nanoowl/examples/ && \
     --threshold=0.1 \
     --image_encoder_engine=../data/owl_image_encoder_patch32.engine

+RUN cd /opt/nanoowl/examples/ && \

-WORKDIR /opt/nanoowl
\ No newline at end of file
+WORKDIR /opt/nanoowl
```
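The body of the added `RUN` command is truncated in the comment above, but the intent is to trigger the model download at build time so the weights are cached inside the image. A minimal sketch of that idea, assuming the demo pulls the weights through OpenAI's `clip` package (which caches them as `/root/.cache/clip/ViT-B-32.pt`); the exact command in the author's Dockerfile may differ:

```dockerfile
# Hypothetical sketch: pre-download the CLIP ViT-B/32 weights during the
# image build so /root/.cache/clip/ViT-B-32.pt is baked into the image
# and the demo does not re-download it on every container start.
RUN python3 -c "import clip; clip.load('ViT-B/32')"
```

Any build step that writes the checkpoint to the expected cache path before the final image layer would have the same effect.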

TadayukiOkada commented 7 months ago

Sorry, wrong repo.