TransformerOptimus / SuperAGI

<⚡️> SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.
https://superagi.com/
MIT License
15.48k stars 1.86k forks

CUDA version (11.8) mismatches PyTorch (12.1) #1410

Open joshuacox opened 8 months ago

joshuacox commented 8 months ago

⚠️ Check for existing issues before proceeding. ⚠️

Where are you using SuperAGI?

Linux

Which branch of SuperAGI are you using?

Main

Do you use OpenAI GPT-3.5 or GPT-4?

GPT-3.5

Which area covers your issue best?

Installation and setup

Describe your issue.

When I run docker compose -f local-llm-gpu up --build, I get this error:

1.295 RuntimeError: 
1.295 The detected CUDA version (11.8) mismatches the version that was used to compile
1.295 PyTorch (12.1). Please make sure to use the same CUDA versions.
1.295 
------
failed to solve: process "/bin/sh -c cd /app/repositories/GPTQ-for-LLaMa/ && python3 setup_cuda.py install" did not complete successfully: exit code: 1

But aren't both PyTorch and CUDA inside these Docker images?
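They can both be in the image yet still disagree: a pip-installed PyTorch wheel bundles its own CUDA runtime, while compiling a CUDA extension such as GPTQ-for-LLaMa's setup_cuda.py invokes the image's nvcc toolkit, and PyTorch's build tooling refuses to proceed when the two versions differ. A minimal diagnostic sketch to compare the two (the function names here are mine, not from the SuperAGI repo):

```python
# Compare the CUDA toolkit nvcc reports with the CUDA version the
# installed PyTorch wheel was compiled against. Either can be None if
# nvcc or torch is missing from the environment.
import re
import subprocess


def nvcc_cuda_version():
    """Return the toolkit version nvcc reports (e.g. '11.8'), or None if nvcc is absent."""
    try:
        out = subprocess.run(["nvcc", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None
    match = re.search(r"release (\d+\.\d+)", out)
    return match.group(1) if match else None


def torch_cuda_version():
    """Return the CUDA version the installed PyTorch wheel was built with, or None."""
    try:
        import torch
        return torch.version.cuda  # e.g. '12.1' for a cu121 wheel
    except ImportError:
        return None


if __name__ == "__main__":
    print("nvcc toolkit:", nvcc_cuda_version())
    print("torch built with:", torch_cuda_version())
```

If the two printed versions differ (here, 11.8 vs 12.1), any CUDA extension build inside the image will fail exactly as in the log above.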

How to replicate your Issue?

docker compose -f local-llm-gpu up --build

I haven't done anything special other than creating the config file from the template.

Upload Error Log Content

https://gist.github.com/joshuacox/f9d4aa78b84ab614af5954a361cc6b2b

joshuacox commented 8 months ago

This PR appears to address the issue, though I am still having problems with it.
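Independent of that PR, a common workaround for this class of mismatch is to align the PyTorch wheel with the image's CUDA toolkit before the extension build runs. A sketch of a hypothetical Dockerfile layer (assuming the image's toolkit is CUDA 11.8; this is not the PR's fix and the cu118 index URL is PyTorch's official wheel index):

```dockerfile
# Hypothetical layer: reinstall torch from the cu118 wheel index so the
# wheel's CUDA version matches the image's 11.8 toolkit, then build the
# GPTQ-for-LLaMa extension as the original Dockerfile does.
RUN pip3 install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu118
RUN cd /app/repositories/GPTQ-for-LLaMa/ && python3 setup_cuda.py install
```

The alternative direction, upgrading the base image to a CUDA 12.1 toolkit instead of downgrading torch, would also remove the mismatch.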