I tried to load this model but got the error message: `could not find model config file at .../Meta-Llama-3-8B-Instruct/config.json`
The code I use to load the model:

```python
model = NanoLLM.from_pretrained(
    model="/data/models/Meta-Llama-3-8B-Instruct",
    quantization='q4f16_ft',
    api='mlc'
)
```
However, I can load the model at `/data/models/Llama-2-7b-chat-hf` without any problem. Did I download all of the files of Meta-Llama-3-8B-Instruct?
I applied for access and downloaded the Meta-Llama-3-8B-Instruct model files via the official script: https://github.com/meta-llama/llama3/blob/main/download.sh, then put them in the local folder `/data/models/Meta-Llama-3-8B-Instruct/`.
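As a quick sanity check (a minimal sketch, not part of NanoLLM's API), you could list which Hugging Face-style files are present in the folder. The missing `config.json` suggests the checkpoint may be in Meta's native `.pth` layout (which `download.sh` produces) rather than the Hugging Face layout that loaders typically expect; the exact file names below are an assumption based on the usual HF format:

```python
import os

# Files a Hugging Face-format checkpoint folder normally contains.
# (Assumed names; Meta's official download.sh instead produces
# consolidated .pth weights plus params.json and tokenizer.model.)
HF_EXPECTED = ["config.json", "tokenizer_config.json"]

def missing_hf_files(model_dir):
    """Return the expected HF files that are absent from model_dir."""
    present = set(os.listdir(model_dir))
    return [name for name in HF_EXPECTED if name not in present]

# Example (hypothetical path):
# missing_hf_files("/data/models/Meta-Llama-3-8B-Instruct")
```

If `config.json` shows up as missing while `consolidated.*.pth` files are present, the download is likely complete but in the wrong format for this loader.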
![error](https://github.com/dusty-nv/jetson-containers/assets/57220346/056f3407-a953-4b45-b8fd-b7ea030113b6)