dusty-nv / jetson-containers

Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
MIT License

Update the Ollama version to enable running more recent models #611

Open · JorgeAlberto91MS opened 2 months ago

JorgeAlberto91MS commented 2 months ago

Command

ollama run llama3.1
ollama run gemma2:2b

Return

Both commands return an error because the model requires a newer Ollama version.

Could you please update Ollama to the latest version to enable running more recent LLMs?

Thank you.

rzo1 commented 1 month ago

Would be great indeed!

dusty-nv commented 1 month ago

Hi @JorgeAlberto91MS, @rzo1, dustynv/ollama:r36.2.0 was updated earlier this month. Can you try including -e VERSION="0.0.0" in your docker run command? For example:

jetson-containers run -e VERSION="0.0.0" $(autotag ollama)

See here for more info: https://github.com/dusty-nv/jetson-containers/issues/592#issuecomment-2323177164
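
For anyone launching the container without the jetson-containers wrapper, a roughly equivalent plain docker run sketch is below. The runtime, network, and volume flags are assumptions based on how the wrapper typically starts containers; adjust the data path for your system.

# Rough equivalent of the jetson-containers command above; flags are assumptions
docker run --runtime nvidia -it --rm \
  --network host \
  -e VERSION="0.0.0" \
  -v ~/jetson-containers/data:/data \
  dustynv/ollama:r36.2.0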

rzo1 commented 1 month ago

VERSION="0.0.0" didn't work for us (running with docker-compose). 36.3.0 did load the model, but it resulted in a core dump.
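
For reference, a minimal docker-compose sketch for passing the override; the service name, image tag, runtime setting, and volume path here are assumptions to adapt, not the exact setup used:

services:
  ollama:
    image: dustynv/ollama:r36.2.0   # pick the tag matching your L4T version
    runtime: nvidia                 # requires the NVIDIA container runtime configured in Docker
    network_mode: host              # Ollama listens on port 11434
    environment:
      - VERSION=0.0.0               # the override dusty-nv suggested above
    volumes:
      - ./data:/data                # assumed host path for models and logs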

JorgeAlberto91MS commented 1 month ago

Command

jetson-containers run -e VERSION="0.0.0" $(autotag ollama)

Output

bcpgrpai@bcpgrpai:~$ jetson-containers run -e VERSION="0.0.0" $(autotag ollama)
Namespace(packages=['ollama'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.3.0 JETPACK_VERSION=6.0 CUDA_VERSION=12.2
-- Finding compatible container image for ['ollama']
dustynv/ollama:r36.2.0

Starting ollama server

Couldn't find '/root/.ollama/id_ed25519'. Generating new private key. Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILyxuJgsdr0mYkmLLSdir1XlJdmy9TMsXkxijZoBWjqb

2024/09/19 15:42:31 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/models/ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost: https://localhost: http://127.0.0.1 https://127.0.0.1 http://127.0.0.1: https://127.0.0.1: http://0.0.0.0 https://0.0.0.0 http://0.0.0.0: https://0.0.0.0: app:// file:// tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-19T15:42:31.991Z level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-09-19T15:42:31.991Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-19T15:42:31.992Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 5f7b4a5)"
time=2024-09-19T15:42:31.992Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama751363741/runners
time=2024-09-19T15:42:33.489Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cuda_v12]"
time=2024-09-19T15:42:33.489Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-09-19T15:42:33.489Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2024-09-19T15:42:33.489Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2024-09-19T15:42:33.617Z level=INFO source=types.go:107 msg="inference compute" id=GPU-7ec3eac5-f768-52c6-9e2a-755c3a393cd0 library=cuda variant=jetpack6 compute=8.7 driver=12.2 name=Orin total="15.3 GiB" available="13.5 GiB"

OLLAMA_MODELS /data/models/ollama/models
OLLAMA_LOGS /data/logs/ollama.log

ollama server is now started, and you can run commands here like 'ollama run llama3'

root@bcpgrpai:/# ollama run gemma2:2b
pulling manifest
Error: pull model manifest: 412:

The model you are attempting to pull requires a newer version of Ollama.

Please download the latest version at:

https://ollama.com/download

root@bcpgrpai:/#
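
A quick way to confirm which Ollama build is actually running (and whether the VERSION override took effect) is to check the version from inside the container. The HTTP endpoint below is an assumption that older builds may not expose:

# Print the Ollama version inside the container
ollama -v
# Or query the running server directly (port 11434 per the startup log above)
curl http://localhost:11434/api/version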

rayrrr commented 1 month ago

Fix: https://github.com/dusty-nv/jetson-containers/issues/585#issuecomment-2374310288