ollama / ollama

Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models.
https://ollama.com
MIT License

Ollama does not run #7163

Open d3tk opened 5 days ago

d3tk commented 5 days ago

What is the issue?

The process never completes when I try to run ollama run or ollama list.

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.3.12

rick-github commented 5 days ago

Server logs will help in debugging.

d3tk commented 5 days ago

Server logs will help in debugging.

Server Logs

2024/10/10 14:38:26 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\dkuts\\.ollama\\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-10-10T14:38:26.218-04:00 level=INFO source=images.go:753 msg="total blobs: 37"
time=2024-10-10T14:38:26.247-04:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-10T14:38:26.249-04:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2024-10-10T14:38:26.251-04:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 rocm_v6.1 cpu cpu_avx cpu_avx2]"
time=2024-10-10T14:38:26.252-04:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-10-10T14:38:26.383-04:00 level=INFO source=gpu.go:292 msg="detected OS VRAM overhead" id=GPU-40f13575-6ec5-c1b3-c72e-69caef571bd7 library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3080" overhead="81.5 MiB"
time=2024-10-10T14:38:26.384-04:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-40f13575-6ec5-c1b3-c72e-69caef571bd7 library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3080" total="10.0 GiB" available="8.9 GiB"
[GIN] 2024/10/10 - 14:39:46 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/10 - 14:39:46 | 200 | 26.4183ms | 127.0.0.1 | GET "/api/tags"

rick-github commented 5 days ago

Can you give a demonstration of the problem you are experiencing?

d3tk commented 5 days ago

When I run ollama list, that is the server log I get, and the list is never printed to the console. Here is a gif of it: problem gif

Is there anything else that could help?

rick-github commented 5 days ago

Does ollama --version return anything?

d3tk commented 5 days ago

Yes, that returns 0.3.12.

rick-github commented 4 days ago

ollama --version indicates that the client can talk to the server. So run and list not working implies the server is somehow borked. What does curl http://localhost:11434/api/tags return?
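
If curl isn't handy, a quick script that hits the same two endpoints with an explicit timeout will show whether the hang is on the server side (a rough sketch, assuming the default http://localhost:11434 address; adjust if OLLAMA_HOST points elsewhere):

```python
# probe_ollama.py - rough diagnostic sketch, not part of the ollama CLI.
# Hits the same endpoints that `ollama --version` and `ollama list` use,
# with a hard timeout so a hang shows up as an error instead of blocking.
import json
import urllib.request

BASE = "http://localhost:11434"  # assumed default; change if OLLAMA_HOST is set

for path in ("/api/version", "/api/tags"):
    try:
        with urllib.request.urlopen(BASE + path, timeout=5) as resp:
            body = json.loads(resp.read())
            print(f"{path}: HTTP {resp.status} -> {body}")
    except Exception as err:
        print(f"{path}: FAILED ({err})")
```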

d3tk commented 4 days ago

{"models":[{"name":"qwen2-math:latest","model":"qwen2-math:latest","modified_at":"2024-09-18T15:04:52.6146302-04:00","size":4431400514,"digest":"28cc3a337734d0db9326604d931ccce1c9379f2310b60dee03ef76440b37bb65","details":{"parent_model":"","format":"gguf","family":"qwen2","families":["qwen2"],"parameter_size":"7.6B","quantization_level":"Q4_0"}},{"name":"deepseek-coder-v2:latest","model":"deepseek-coder-v2:latest","modified_at":"2024-08-26T14:25:27.440896-04:00","size":8905125527,"digest":"8577f96d693e51135fb408f915344f4413db45ce31d771be6a6a9b1c7e7a4b40","details":{"parent_model":"","format":"gguf","family":"deepseek2","families":["deepseek2"],"parameter_size":"15.7B","quantization_level":"Q4_0"}},{"name":"llama3.1:latest","model":"llama3.1:latest","modified_at":"2024-08-26T14:22:43.7956121-04:00","size":4661230977,"digest":"91ab477bec9d27086a119e33c471ae7afbd786cc4fbd8f38d8af0a0b949d53aa","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"8.0B","quantization_level":"Q4_0"}},{"name":"gemma2:27b","model":"gemma2:27b","modified_at":"2024-07-15T18:32:59.5951191-04:00","size":15628387458,"digest":"53261bc9c192c1cb5fcc898dd3aa15da093f5ab6f08e17e48cf838bb1c58abfe","details":{"parent_model":"","format":"gguf","family":"gemma2","families":["gemma2"],"parameter_size":"27.2B","quantization_level":"Q4_0"}},{"name":"gemma2:latest","model":"gemma2:latest","modified_at":"2024-07-15T18:32:16.5725515-04:00","size":5443152417,"digest":"ff02c3702f322b9e075e9568332d96c0a7028002f1a5a056e0a6784320a4db0b","details":{"parent_model":"","format":"gguf","family":"gemma2","families":["gemma2"],"parameter_size":"9.2B","quantization_level":"Q4_0"}},{"name":"llama3:latest","model":"llama3:latest","modified_at":"2024-06-03T20:03:06.414835-04:00","size":4661224676,"digest":"365c0bd3c000a25d28ddbf732fe1c6add414de7275464c4e4d1c3b5fcb5d8ad1","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"8.0B","quantization_level":"Q4_0"}},{"name":"nomic-embed-text:latest","model":"nomic-embed-text:latest","modified_at":"2024-05-31T17:34:46.1618435-04:00","size":274302450,"digest":"0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f","details":{"parent_model":"","format":"gguf","family":"nomic-bert","families":["nomic-bert"],"parameter_size":"137M","quantization_level":"F16"}},{"name":"codestral:latest","model":"codestral:latest","modified_at":"2024-05-31T17:33:12.9738307-04:00","size":12569170041,"digest":"fcc0019dcee9947fe4298e23825eae643f4670e391f205f8c55a64c2068e9a22","details":{"parent_model":"","format":"gguf","family":"llama","families":["llama"],"parameter_size":"22.2B","quantization_level":"Q4_0"}}]}

rick-github commented 4 days ago

So list works when talking directly to the server. Do you have OLLAMA_HOST set in your environment to something other than localhost:11434? If you run set at the command line, what's the output?
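
Alternatively, a couple of lines of Python will print just the variables that can redirect or proxy the client's connection (illustrative only; the names are the ones shown in the server config log above):

```python
# env_check.py - illustrative sketch: print the variables that could
# redirect or proxy the ollama client's connection to the server.
import os

for name in ("OLLAMA_HOST", "HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"):
    print(f"{name}={os.environ.get(name, '<not set>')}")
```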

d3tk commented 4 days ago

It looks like there isn't an OLLAMA_HOST variable in the output. I've had Ollama working for a while; I am not sure what I did to break it, if I did.

ALLUSERSPROFILE=C:\ProgramData
APPDATA=C:\Users\dkuts\AppData\Roaming
CommonProgramFiles=C:\Program Files\Common Files
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
CommonProgramW6432=C:\Program Files\Common Files
COMPUTERNAME=DESKTOP-Q91IHFM
ComSpec=C:\WINDOWS\system32\cmd.exe
CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6
CUDA_PATH_V12_0=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0
CUDA_PATH_V12_6=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6
DriverData=C:\Windows\System32\Drivers\DriverData
EFC_10812=1
FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer
FPS_BROWSER_USER_PROFILE_STRING=Default
HOMEDRIVE=C:
HOMEPATH=\Users\dkuts
LOCALAPPDATA=C:\Users\dkuts\AppData\Local
LOGONSERVER=\DESKTOP-Q91IHFM
NUMBER_OF_PROCESSORS=20
OneDrive=C:\Users\dkuts\University of Pittsburgh
OneDriveCommercial=C:\Users\dkuts\University of Pittsburgh
OneDriveConsumer=C:\Users\dkuts\OneDrive
OS=Windows_NT
Path=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\libnvvp;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0\libnvvp;C:\Program Files\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\dotnet\;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files\dotnet\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\MATLAB\R2023a\bin;Z:\Git\cmd;C:\Program Files\NVIDIA Corporation\Nsight Compute 2024.3.0\;C:\Users\dkuts\AppData\Local\Programs\Python\Python312\Scripts\;C:\Users\dkuts\AppData\Local\Programs\Python\Python312\;C:\Users\dkuts\AppData\Local\Programs\Python\Python311\Scripts\;C:\Users\dkuts\AppData\Local\Programs\Python\Python311\;C:\Users\dkuts\AppData\Local\Microsoft\WindowsApps;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Users\dkuts\AppData\Local\Programs\Microsoft VS Code\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.0\bin;;C:\Users\dkuts\AppData\Local\Programs\Ollama
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
PROCESSOR_ARCHITECTURE=AMD64
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 151 Stepping 2, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=9702
ProgramData=C:\ProgramData
ProgramFiles=C:\Program Files
ProgramFiles(x86)=C:\Program Files (x86)
ProgramW6432=C:\Program Files
PROMPT=$P$G
PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules
PUBLIC=C:\Users\Public
SESSIONNAME=Console
SystemDrive=C:
SystemRoot=C:\WINDOWS
TEMP=C:\Users\dkuts\AppData\Local\Temp
TMP=C:\Users\dkuts\AppData\Local\Temp
USERDOMAIN=DESKTOP-Q91IHFM
USERDOMAIN_ROAMINGPROFILE=DESKTOP-Q91IHFM
USERNAME=dkuts
USERPROFILE=C:\Users\dkuts
VBOX_MSI_INSTALL_PATH=C:\Program Files\Oracle\VirtualBox\
windir=C:\WINDOWS

rick-github commented 4 days ago

It's a head-scratcher. list and --version are just wrappers to the /api/tags and /api/version endpoints, so having the client successfully return the version but hang on the model list, which returns successfully when you curl the endpoint, is strange. Do you have any firewall/anti-virus processes running on the machine? Have you previously tried running ollama in WSL? What's the output of netstat -nq | findstr 11434?
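
As a cross-check on netstat, a plain TCP connect also tells you whether anything is accepting connections on that port (a throwaway sketch, assuming the default 127.0.0.1:11434):

```python
# port_check.py - sketch: confirm something accepts TCP connections on
# the default ollama address before blaming the client.
import socket

try:
    with socket.create_connection(("127.0.0.1", 11434), timeout=3):
        print("port 11434 is accepting connections")
except OSError as err:
    print(f"connect failed: {err}")
```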

d3tk commented 4 days ago

TCP 127.0.0.1:11434 0.0.0.0:0 LISTENING

I don't think I have any firewall/anti-virus. If I did, it would be Windows Defender, but I haven't changed any settings, so I'm not sure why it would be blocked now when it wasn't before.

On WSL, ollama list works and ollama pull works but when I try to run a model it says: "Error: llama runner process no longer running: -1"

rick-github commented 4 days ago

Have you installed ollama in both WSL (i.e., curl | sh) and Windows (OllamaSetup.exe)? I don't believe that will work reliably (I'm not a Windows guy, so I may be mistaken).

d3tk commented 4 days ago

I didn't have Ollama installed on WSL, so I did sudo snap install ollama. It said ollama 0.1.32 from Matias Piipari (mz2) installed when it completed.

For Windows, I downloaded the exe from the website's download section. So I guess they're separate installations.

rick-github commented 4 days ago

My experience (https://github.com/ollama/ollama/issues/7023, https://github.com/ollama/ollama/issues/6701) is that installing ollama in both Windows and WSL is problematic. If you installed in WSL just now because I asked about it, uninstall. It's not the cause of the original problem but may be an issue later. If you installed in WSL and Windows prior to submitting this bug report, delete one of the installations.

d3tk commented 4 days ago

I installed it just now and have since uninstalled it. Should I attempt to uninstall Ollama from Windows as well?

rick-github commented 4 days ago

Yes, it won't reveal the cause of the problem, but uninstalling and reinstalling may resolve the issue.

d3tk commented 4 days ago

I have re-installed Ollama and the issue still persists. Perhaps there were some files that weren't fully deleted when I uninstalled?

rick-github commented 4 days ago

I'm not familiar with the Windows install, on Linux it's a single binary with a service config file so cleanup is straightforward. Now that you have a fresh install, what's the result of:

curl http://localhost:11434/api/version
curl http://localhost:11434/api/ps
curl http://localhost:11434/api/tags
ollama --version
ollama ps
ollama list

When you've run those, add the results and the server logs.
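
If any of them hang, a throwaway helper like this runs the same six checks with a timeout, so a stuck command is recorded as TIMEOUT instead of blocking the terminal (just a sketch, assuming curl and ollama are both on PATH):

```python
# run_checks.py - sketch: run the six checks above with a timeout so a
# hanging command is logged rather than blocking the shell.
import subprocess

CHECKS = [
    ["curl", "-s", "http://localhost:11434/api/version"],
    ["curl", "-s", "http://localhost:11434/api/ps"],
    ["curl", "-s", "http://localhost:11434/api/tags"],
    ["ollama", "--version"],
    ["ollama", "ps"],
    ["ollama", "list"],
]

for cmd in CHECKS:
    name = " ".join(cmd)
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=15)
        print(f"{name} -> exit {out.returncode}: {out.stdout.strip()!r}")
    except subprocess.TimeoutExpired:
        print(f"{name} -> TIMEOUT after 15s")
```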

d3tk commented 4 days ago

Results:

Microsoft Windows [Version 10.0.26100.2033]
(c) Microsoft Corporation. All rights reserved.

C:\Users\dkuts>curl http://localhost:11434/api/version
{"version":"0.3.12"}

C:\Users\dkuts>curl http://localhost:11434/api/ps
{"models":[]}

C:\Users\dkuts>curl http://localhost:11434/api/tags
{"models":[]}

C:\Users\dkuts>ollama --version
ollama version is 0.3.12

C:\Users\dkuts>ollama ps
^C
C:\Users\dkuts>ollama list
^C
C:\Users\dkuts>

ps and list didn't return anything. I can run the commands and see if they ever return, if you think that would help.

Here are the server logs:

2024/10/11 15:50:34 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\dkuts\.ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-10-11T15:50:34.635-04:00 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-10-11T15:50:34.635-04:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-11T15:50:34.635-04:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2024-10-11T15:50:34.636-04:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11 cuda_v12 rocm_v6.1 cpu cpu_avx]"
time=2024-10-11T15:50:34.636-04:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-10-11T15:50:34.758-04:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-40f13575-6ec5-c1b3-c72e-69caef571bd7 library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3080" total="10.0 GiB" available="8.9 GiB"
[GIN] 2024/10/11 - 15:50:42 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/11 - 15:50:42 | 200 | 517.3µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/10/11 - 16:03:51 | 200 | 866.5µs | 127.0.0.1 | GET "/api/version"
[GIN] 2024/10/11 - 16:03:56 | 200 | 609.6µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/10/11 - 16:04:00 | 200 | 0s | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/10/11 - 16:04:06 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2024/10/11 - 16:04:09 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/11 - 16:04:09 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/10/11 - 16:04:16 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/11 - 16:04:16 | 200 | 0s | 127.0.0.1 | GET "/api/tags"

rick-github commented 4 days ago

Well, the server thinks that it successfully completed all 6 commands, 3 curls and 3 ollama client calls. I don't understand why the ps and list commands don't return to the prompt.

Does the same thing happen if you run the commands in a Powershell terminal?

d3tk commented 4 days ago

PS C:\Users\dkuts> curl http://localhost:11434/api/version
StatusCode : 200
StatusDescription : OK
Content : {"version":"0.3.12"}
RawContent : HTTP/1.1 200 OK
             Content-Length: 20
             Content-Type: application/json; charset=utf-8
             Date: Fri, 11 Oct 2024 20:16:38 GMT

             {"version":"0.3.12"}
Forms : {}
Headers : {[Content-Length, 20], [Content-Type, application/json; charset=utf-8], [Date, Fri, 11 Oct 2024 20:16:38 GMT]}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : System.__ComObject
RawContentLength : 20

PS C:\Users\dkuts> curl http://localhost:11434/api/ps

StatusCode : 200
StatusDescription : OK
Content : {"models":[]}
RawContent : HTTP/1.1 200 OK
             Content-Length: 13
             Content-Type: application/json; charset=utf-8
             Date: Fri, 11 Oct 2024 20:19:11 GMT

             {"models":[]}
Forms : {}
Headers : {[Content-Length, 13], [Content-Type, application/json; charset=utf-8], [Date, Fri, 11 Oct 2024 20:19:11 GMT]}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : System.__ComObject
RawContentLength : 13

PS C:\Users\dkuts> curl http://localhost:11434/api/tags

StatusCode : 200
StatusDescription : OK
Content : {"models":[]}
RawContent : HTTP/1.1 200 OK
             Content-Length: 13
             Content-Type: application/json; charset=utf-8
             Date: Fri, 11 Oct 2024 20:18:40 GMT

             {"models":[]}
Forms : {}
Headers : {[Content-Length, 13], [Content-Type, application/json; charset=utf-8], [Date, Fri, 11 Oct 2024 20:18:40 GMT]}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : System.__ComObject
RawContentLength : 13

2024/10/11 15:50:34 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\dkuts\.ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-10-11T15:50:34.635-04:00 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-10-11T15:50:34.635-04:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-11T15:50:34.635-04:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2024-10-11T15:50:34.636-04:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11 cuda_v12 rocm_v6.1 cpu cpu_avx]"
time=2024-10-11T15:50:34.636-04:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-10-11T15:50:34.758-04:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-40f13575-6ec5-c1b3-c72e-69caef571bd7 library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3080" total="10.0 GiB" available="8.9 GiB"
[GIN] 2024/10/11 - 15:50:42 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/11 - 15:50:42 | 200 | 517.3µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/10/11 - 16:03:51 | 200 | 866.5µs | 127.0.0.1 | GET "/api/version"
[GIN] 2024/10/11 - 16:03:56 | 200 | 609.6µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/10/11 - 16:04:00 | 200 | 0s | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/10/11 - 16:04:06 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2024/10/11 - 16:04:09 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/11 - 16:04:09 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/10/11 - 16:04:16 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/11 - 16:04:16 | 200 | 0s | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/10/11 - 16:16:38 | 200 | 504.6µs | 127.0.0.1 | GET "/api/version"
[GIN] 2024/10/11 - 16:16:44 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/10/11 - 16:18:40 | 200 | 611.5µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/10/11 - 16:18:43 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2024/10/11 - 16:18:46 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/11 - 16:18:46 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/10/11 - 16:18:52 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/11 - 16:18:52 | 200 | 0s | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/10/11 - 16:18:59 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/10/11 - 16:19:11 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/10/11 - 16:19:21 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/11 - 16:19:21 | 200 | 0s | 127.0.0.1 | GET "/api/tags"

PowerShell didn't print anything from ollama ps or ollama list, but ollama version worked.

dhiltgen commented 4 days ago

I'm not sure what's going on here either. It's an unusual failure mode.

When the client commands hang, can you check Task Manager to see whether something is using a lot of CPU, or whether the system is roughly idle? Some AV products will pause a program while they investigate whether it's malicious, but the fact that ollama --version works seems to contradict that potential root cause.

Since curl works, this looks more like a client-side problem, so maybe check get-command ollama to see which binary is being used, and try running it from an Admin terminal.
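
To rule out a second binary shadowing the fresh install, something along these lines lists every ollama.exe visible on PATH (an illustrative sketch only; get-command ollama reports the first match the same way):

```python
# which_ollama.py - sketch: list every ollama.exe reachable via PATH,
# in case an older or duplicate install is shadowing the fresh one.
import os
import shutil

print(f"first match: {shutil.which('ollama')}")

# Walk PATH manually to spot duplicates that the first match would hide.
for d in os.environ.get("PATH", "").split(os.pathsep):
    candidate = os.path.join(d, "ollama.exe")
    if os.path.isfile(candidate):
        print(f"found: {candidate}")
```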

d3tk commented 15 hours ago

I'm sorry for not getting back to you sooner.

I checked Task Manager; nothing was using an excessive amount of CPU, memory, or GPU. I ran ollama list from an admin terminal, and it did not complete. The binary being used is the only one I have installed currently.

rick-github commented 15 hours ago

At this point I think we are stumped and have no explanation for the program behaviour, so we need more info. On a Linux system I'd use strace to figure out why a process hangs. That's not an option for Windows, but searching for an equivalent turned up procmon. Would it be possible for you to install it and monitor ollama?

d3tk commented 14 hours ago

I have installed it. Is there anything in particular you'd like me to try?

rick-github commented 12 hours ago

Start procmon. If it starts with capture on (the number of events at the bottom of the window is increasing), pause by typing Ctrl-E or clicking the Capture icon (to the right of the Open and Save icons). In Filter > Filter, turn off all existing filters by clicking the tick mark to the left of each filter. Then add a new filter for ollama: set Process Name is ollama.exe, click Add, then OK. Start capture (Ctrl-E), run ollama list, and Ctrl-C when it hangs. Pause capture, then in File > Save, save the captured events as a CSV file. Attach that file to this issue.

I don't know if this will actually give us any useful info; this may be an iterative process.
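
If the CSV ends up being large, a small script along these lines can summarize it: the most common operation/result pairs plus the last events before the hang (a rough sketch, assuming procmon's default export columns such as Time of Day, Operation, Path, and Result):

```python
# summarize_procmon.py - rough sketch for skimming a procmon CSV export.
# Assumes the default export columns, including "Operation" and "Result".
import csv
import sys
from collections import Counter

ops = Counter()
last_rows = []

with open(sys.argv[1], newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        ops[(row.get("Operation"), row.get("Result"))] += 1
        last_rows.append(row)
        if len(last_rows) > 20:
            last_rows.pop(0)

print("most common operation/result pairs:")
for (op, result), n in ops.most_common(10):
    print(f"  {n:6d}  {op}  {result}")

print("\nlast 20 events before capture stopped:")
for row in last_rows:
    print(f"  {row.get('Time of Day')}  {row.get('Operation')}  {row.get('Path')}  {row.get('Result')}")
```

Run it with the exported CSV path as its only argument.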

d3tk commented 12 hours ago

Here is the log file obtained: Logfile.CSV

Hopefully it's of assistance to you. Thank you.

rick-github commented 11 hours ago

Can you do the same again, but this time run ollama -v?

d3tk commented 11 hours ago

logfile-ollama-v.CSV