I have an ext4 drive mounted at boot at /media/sandisk.
I have followed the recommendations and instructions in the setup.md of jetson-containers: swap is mounted, and the data root and default runtime are set in /etc/docker/daemon.json.
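For reference, my /etc/docker/daemon.json looks roughly like this (the data-root path is my mount; the nvidia runtime block comes from the JetPack install):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia",
    "data-root": "/media/sandisk/docker"
}
```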
When I run

jetson-containers run --env HUGGINGFACE_TOKEN=mytoken \
  $(autotag nano_llm) \
    python3 -m nano_llm.agents.web_chat --api=mlc \
      --model meta-llama/Meta-Llama-3.1-70B-Instruct \
      --asr=riva --tts=piper
things seem to start up as expected, but the Hugging Face download of the rather large Llama 3.1 70B Instruct model appears to be going to some location under my machine's root filesystem, despite my confirming that the Docker data root is indeed on the ext4 mount. Obviously my disk will be full before the download can finish.
Am I missing an extra flag somewhere in the NanoLLM docker initialisation call via jetson-containers? Should I also be specifying a separate download directory for Hugging Face through some flag?
Or should I manually download the model from Hugging Face to the external drive, then use a different set of flags altogether when initializing the NanoLLM container to run that separate download?
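For context, one thing I was considering trying: the huggingface_hub library honors the HF_HOME environment variable for its cache location, so exporting it (or passing it via --env) might redirect the downloads; the path below is just my mount, and I haven't confirmed NanoLLM respects it inside the container:

```shell
# Point the Hugging Face cache at the external drive
# (path is my mount; HF_HOME is the standard huggingface_hub cache variable)
export HF_HOME=/media/sandisk/huggingface
# huggingface_hub places downloaded models under $HF_HOME/hub
echo "cache dir: $HF_HOME/hub"
```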
I'm sorry, I can't seem to find the right documentation for this question in the NanoLLM, Hugging Face, or jetson-containers docs, hence my asking here.
----edit----
It looks like the Hugging Face model goes into the data directory of the jetson-containers repo on my root filesystem; does that mean that if I move the repo to the external drive and run the setup again, the downloads will go there instead?
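If that's the case, I was wondering whether I could avoid moving the whole repo by symlinking just its data dir onto the drive. A sketch of the idea using scratch paths (in practice the paths would be ~/jetson-containers/data and somewhere under /media/sandisk):

```shell
# Demo of relocating a data dir via symlink, on scratch paths
SRC=$(mktemp -d)/data; DST=$(mktemp -d)/jetson-data
mkdir -p "$SRC"; touch "$SRC/model.bin"
# Move the existing contents to the new location, then link back
mv "$SRC" "$DST"
ln -s "$DST" "$SRC"
ls "$SRC"    # files now resolve through the symlink
```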
If I'm not barking up the wrong tree, can you advise where to learn the appropriate permission settings for the ext4 drive, since Docker currently seems to have exclusive ownership of it? Should I be changing the read/write permissions to 777 every time, or should I add myself to a certain group to gain shared ownership of the ext4 drive?
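To illustrate what I mean by shared ownership: instead of chmod 777 each time, I believe a one-time `sudo chown -R $USER: /media/sandisk` would persist on ext4, or a shared group could own the mount with the setgid bit set so new files inherit the group. A sketch of the setgid idea on a scratch directory (substitute the real mount point in practice):

```shell
# Group-writable dir with setgid, so files created inside inherit its group
DEMO=$(mktemp -d)
chmod 2775 "$DEMO"        # 2 = setgid bit, 775 = rwx for owner and group
stat -c '%a' "$DEMO"      # shows 2775
rmdir "$DEMO"
```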
----edit2----
Sorry, I realise now I was being silly: I was running jetson-containers from my home directory on the root filesystem while the Docker data root was elsewhere. Setting the ownership of the drive and then setting everything up from scratch with the drive as the working directory obviously solved it!