Open N4S4 opened 2 months ago
Hello, thank you for the instructions. Not sure if this is the right place, but I am experiencing an issue where an override set with `systemctl edit ollama.service` does not take effect after restarting the service.
Output from `journalctl -u ollama --no-pager`:
ollama[49742]: 2024/08/25 08:15:47 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:*
`HSA_OVERRIDE_GFX_VERSION:` remains empty.
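For reference, the override drop-in I created with `sudo systemctl edit ollama.service` is along these lines (the GFX version value here is only an example; the right value depends on your GPU):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
```

After saving it I ran `sudo systemctl daemon-reload` and `sudo systemctl restart ollama`, but the variable still shows up empty in the log above.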
Could you help me?
OK, increasing the VRAM allocated to the GPU did the trick. Thank you!