Open mcgeochd opened 1 year ago
drop your nvidia driver version down to 515
@LeafmanZ I went to https://www.nvidia.com/download/find.aspx to search for 515, but the oldest I found for my card, a 2070 super, was 527.56. Is there a version compatible with a 2070 super that will work?
Try the driver and then see if nvidia-smi reports 11.8. I use Ubuntu so I can't say for sure.
I can give it a go, but Table 3 in https://docs.nvidia.com/deploy/cuda-compatibility/index.html seems to suggest that 525+ drivers are only compatible with CUDA 12.0 onwards. Yet PyTorch compiled with 11.8 ran ingest.py despite my drivers being 12.1, so I'm not sure what's going on. I also have cudatoolkit 11.8 installed in the environment, but it doesn't appear to have helped in this case.
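For what it's worth, the CUDA version that nvidia-smi prints is the maximum runtime version the *driver* supports, not the toolkit you have installed, which is part of why the numbers disagree. A minimal sketch of pulling that value out of the nvidia-smi header text (the sample line below is illustrative, not from a real machine):

```python
import re

def driver_cuda_version(smi_text):
    """Extract the driver's reported CUDA version from nvidia-smi header text."""
    m = re.search(r"CUDA Version:\s*([\d.]+)", smi_text)
    return m.group(1) if m else None

# Illustrative header line in the style nvidia-smi prints (exact layout varies by version):
sample = "| NVIDIA-SMI 530.30.02  Driver Version: 530.30.02  CUDA Version: 12.1 |"
print(driver_cuda_version(sample))  # -> 12.1
```

In practice you would feed this the output of `subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout`.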
Yeah, AutoGPTQ is very, very picky. I was trying to run this on my Windows (and WSL) setup for a while, but eventually gave up and went back to Ubuntu.
TBH it's confusing why Windows is offered such a limited history of drivers, while on Linux you can go back over a year in driver history.
I fixed the error on Ubuntu the following way and have submitted this text to README.md. No idea about Windows.

```shell
conda create -n localGPT python=3.10
conda activate localGPT
conda install -c nvidia cudatoolkit=11.7
conda init zsh
```
It's a million:1 odds, but it just might work ...
The error in the output says that AutoGPTQ requires 11.8, which is probably the version the dependency is pinned to in that project. We could probably include the required modules as submodules, but that would require building local wheels, and compiling can take some time (sometimes more time than it's actually worth). Personally, I wouldn't bother. Downgrading is the best option here.

You would need to find the right driver in the official NVIDIA driver listing, download it, and then use DDU to swap the drivers out. It is an annoying and time-consuming process. If you're on W11, you're most likely locked in, as Microsoft now force-distributes their drivers and driver updates. I know that DDU is still useful and still used, but I haven't used Windows for years, and the swap could break your system install along the way. I have no idea, TBH.
You may have too new a CUDA version for auto-gptq to build the wheel. You should install the cu118 or cu117 build, depending on the CUDA version you choose.
```shell
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
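One way to sanity-check which CUDA build pip actually installed is to look at the package's local version tag: wheels published on these indexes usually carry it in the version string (e.g. `torch.__version__` is `"2.0.1+cu118"` for the cu118 wheel). A small sketch, assuming that `+cuXYZ` naming convention:

```python
def cuda_tag(version):
    """Return the CUDA build tag (e.g. 'cu118') from a pip local version
    like '2.0.1+cu118', or None if the version carries no such tag."""
    if "+" in version:
        local = version.split("+", 1)[1]
        if local.startswith("cu"):
            return local
    return None

print(cuda_tag("2.0.1+cu118"))  # -> cu118
print(cuda_tag("0.3.0"))        # -> None (no CUDA tag in the version)
```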
Hi All,
I had trouble getting ingest.py to run with dev or nightly versions of PyTorch that support CUDA 12.1, which I have installed:
I was able to get it to run successfully with the following versions, however, despite the CUDA version mismatch:
The problem now is that this version of PyTorch is incompatible with AutoGPTQ, which I need since I don't have the VRAM to run a 7B model without quantisation. When I run

```shell
pip install -r requirements.txt
```

I get a very long error output. Until there is a stable version of PyTorch for CUDA 12.1, does anyone know how to fix this mismatch issue?
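On the mismatch itself: NVIDIA drivers are backward compatible, so a driver whose nvidia-smi reports CUDA 12.1 can still run binaries built against the 11.8 toolkit, which is likely why the cu118 PyTorch wheel worked for ingest.py. A rough numeric sketch of that check (simplified; it ignores the finer points of NVIDIA's compatibility tables):

```python
def parse_version(v):
    """Parse '11.8' into (11, 8) for simple numeric comparison."""
    return tuple(int(p) for p in v.split("."))

def driver_supports(driver_cuda, toolkit_cuda):
    """Drivers are backward compatible: a driver reporting a given CUDA
    version can run binaries built with that toolkit version or older.
    This is only a rough check, not NVIDIA's full compatibility table."""
    return parse_version(driver_cuda) >= parse_version(toolkit_cuda)

print(driver_supports("12.1", "11.8"))  # -> True: cu118 wheels run on a 12.1 driver
print(driver_supports("11.7", "11.8"))  # -> False: driver too old for cu118 builds
```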