zero617 opened this issue 3 months ago
logs:

```
PS C:\Users\60461> ollama serve
2024/08/07 14:40:41 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:D:\Software\scoop\apps\ollama_cderv\current\ollama_runners OLLAMA_TMPDIR:]"
time=2024-08-07T14:40:41.359+08:00 level=INFO source=images.go:740 msg="total blobs: 5"
time=2024-08-07T14:40:41.360+08:00 level=INFO source=images.go:747 msg="total unused blobs removed: 0"
time=2024-08-07T14:40:41.360+08:00 level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 (version 0.1.42)"
time=2024-08-07T14:40:41.361+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7 cpu]"
time=2024-08-07T14:40:41.532+08:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-5392dfca-71f5-39f0-f508-c97cd317dd59 library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4060 Ti" total="16.0 GiB" available="14.9 GiB"
[GIN] 2024/08/07 - 14:40:50 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/08/07 - 14:40:51 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/08/07 - 14:40:52 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
```
Just add an environment variable when starting Ollama:

```
OLLAMA_ORIGINS=*
```

See the FAQ for a detailed explanation: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-allow-additional-web-origins-to-access-ollama
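On Windows PowerShell (the shell shown in the logs above), one way to set it is sketched below; this is a minimal example, assuming you want the permissive wildcard value suggested above rather than a specific origin list:

```powershell
# Set OLLAMA_ORIGINS for the current PowerShell session only,
# then start the server from this same session so it sees the value.
$env:OLLAMA_ORIGINS = "*"
ollama serve

# Or persist it for your user account; this only affects
# NEW terminals and newly started processes.
[System.Environment]::SetEnvironmentVariable("OLLAMA_ORIGINS", "*", "User")
```

Note that `OLLAMA_ORIGINS=*` allows any web origin to call the API; for a single web app or extension you can list specific origins instead, as the linked FAQ describes.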
I added the environment variable but still get 403.
logs: same startup log as above; the `server config env=` line still shows only the default OLLAMA_ORIGINS list.
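That `server config env=` line is the key diagnostic: Ollama prints the environment it actually sees at startup, and it still shows the defaults, which suggests the variable was set in a scope the server process never inherited (for example, via System Properties without opening a new terminal, or with an already-running tray instance keeping its old environment). Also note the 403 is decided by the request's `Origin` header, not the client IP, so requests from 127.0.0.1 can still be rejected. A hedged way to verify, assuming PowerShell; the variable name is real, the rest is just one possible workflow:

```powershell
# Check whether the variable is visible in THIS shell;
# an empty line means it is not set here.
echo $env:OLLAMA_ORIGINS

# Stop any Ollama instance that is already running (it keeps the
# environment it was started with), then relaunch from this session.
$env:OLLAMA_ORIGINS = "*"
ollama serve
```

If the variable took effect, the `server config env=` line on the next startup should list `*` among the OLLAMA_ORIGINS entries.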