Open phalexo opened 1 week ago
Same here with ROCm on Ubuntu 24.04, v0.4.1. Log attached; same result with commit 65973ceb.
PS: could you please add some tips on how to debug the server/runner to the developer docs?
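For what it's worth, the only debug knob I've found so far is the debug environment variable; a minimal sketch, assuming the server is started by hand rather than via systemd, so the runner output lands in a local file:

```sh
# Verbose server/runner logging while reproducing the crash.
OLLAMA_DEBUG=1 ./ollama serve 2>&1 | tee ollama-debug.log
```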
I can't be sure what your problem is, but I think I figured out what mine was. The Ollama build tries to be "clever" about how it identifies a CUDA installation: it ignores variables like CUDA_ROOT and CUDA_HOME and instead follows symbolic links like /usr/local/cuda-11, which on my system pointed at the OLDER toolkit. From that point on, nothing matched up.
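In case it helps anyone compare, this is roughly how I confirmed which toolkit the build was actually seeing; a sketch of standard shell checks, not the project's real detection logic, and the paths are just from my machine:

```sh
# List every CUDA install and see where the generic symlink points.
ls -ld /usr/local/cuda*          # e.g. /usr/local/cuda -> /usr/local/cuda-11
readlink -f /usr/local/cuda      # resolve to the actual toolkit directory

# Compare with the toolchain on PATH and the driver's supported version.
nvcc --version
nvidia-smi

# The variables other build systems honor but that seem to be ignored here.
echo "CUDA_ROOT=$CUDA_ROOT CUDA_HOME=$CUDA_HOME"
```

If /usr/local/cuda resolves to the older toolkit, repointing that symlink (or removing the stale install) is one way to keep everything consistent.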
We're working to improve the new build in PR #7499
@phalexo please give the change above a try and let us know if it clears up your build glitch, or if there's still more work to do.
The docs/development.md has been updated in that PR to provide additional guidance on building.
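For anyone who wants to try it before it merges, a sketch of checking out the PR branch via GitHub's pull-request refs; the PR number is the one mentioned above, and the local branch name is just a label:

```sh
# Fetch the PR head into a local branch and switch to it.
git fetch origin pull/7499/head:pr-7499
git checkout pr-7499

# The updated build instructions live in docs/development.md on that branch.
less docs/development.md
```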
I addressed the core dump problem by deleting the old CUDA distribution. For the second problem, which caused the runner to die, I had to drop back to v0.3.11. So at least you know that v0.3.11 works with Qwen2.5-32b-instruct-Q8_0.
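For reference, the workaround looked roughly like this; a sketch that assumes the old toolkit came from NVIDIA's apt packages and that the tag name matches the release, so adjust to your own setup:

```sh
# Remove the stale CUDA 11 toolkit so only one install remains
# (package names are an assumption; check apt list --installed first).
sudo apt-get remove --purge 'cuda-11-*'
sudo rm -f /usr/local/cuda        # drop the leftover symlink if it still points there

# Build the last release that worked for me instead of main.
git checkout v0.3.11
go generate ./...
go build .
```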
What is the issue?
I used to build it with `go generate ./...` followed by `go build .`
Is it different now? Does it automatically detect CUDA at /usr/local/cuda?
OS
Linux, Docker
GPU
Nvidia
CPU
Intel
Ollama version
Latest clone from GitHub