⚠️ Check for existing issues before proceeding. ⚠️
[X] I have searched the existing issues, and there is no existing issue for my problem
Where are you using SuperAGI?
Linux
Which branch of SuperAGI are you using?
Main
Do you use OpenAI GPT-3.5 or GPT-4?
GPT-3.5
Which area covers your issue best?
Installation and setup
Describe your issue.
When I run `docker compose -f local-llm-gpu up --build`, I get this error:
```
1.295 RuntimeError:
1.295 The detected CUDA version (11.8) mismatches the version that was used to compile
1.295 PyTorch (12.1). Please make sure to use the same CUDA versions.
1.295
------
failed to solve: process "/bin/sh -c cd /app/repositories/GPTQ-for-LLaMa/ && python3 setup_cuda.py install" did not complete successfully: exit code: 1
```
But aren't both PyTorch and CUDA bundled inside these Docker images?
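For context, the build fails because the CUDA toolkit detected in the image (11.8) differs from the CUDA version PyTorch was compiled against (12.1), and building a CUDA extension like GPTQ-for-LLaMa requires them to agree. A minimal sketch of the kind of comparison PyTorch's extension build performs (this is an illustrative helper, not the actual `setup_cuda.py` code):

```python
def cuda_versions_match(toolkit: str, torch_built: str) -> bool:
    """Illustrative check: compare major.minor CUDA versions.

    `toolkit` is the CUDA version found in the image (e.g. from nvcc),
    `torch_built` is the version PyTorch was compiled with
    (e.g. torch.version.cuda).
    """
    def major_minor(version: str) -> tuple:
        major, minor = version.split(".")[:2]
        return (int(major), int(minor))

    return major_minor(toolkit) == major_minor(torch_built)


# The mismatch from the error log above:
print(cuda_versions_match("11.8", "12.1"))  # → False
print(cuda_versions_match("11.8", "11.8"))  # → True
```

So even though both PyTorch and the CUDA toolkit ship inside the image, the image apparently pairs a CUDA 12.1 build of PyTorch with an 11.8 toolkit, which trips this check.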
How to replicate your Issue?
docker compose -f local-llm-gpu up --build
I haven't done anything special other than creating the config file from the template.
Upload Error Log Content
https://gist.github.com/joshuacox/f9d4aa78b84ab614af5954a361cc6b2b