xushilundao opened 10 months ago
Good points! Have you tried building Triton on Windows for even more performance? ;-) I have not!
I'm getting 61 fps on my RTX 3090 + Ryzen 5900.
If you want the max speed version, this bit of the instructions works:
pip install -r req-maxperf.txt
But this fails to find the wheel on my windows at least:
pip install -r req-sfast.txt
So instead install stable-fast from a wheel:
I got the url from:
https://github.com/chengzeyi/stable-fast/releases
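If you're not sure which wheel on the releases page matches your setup, the wheel filename itself encodes the required Python and platform (PEP 427: name-version(-build)?-python-abi-platform.whl). A minimal sketch of decoding it — the filename below is a made-up example, grab a real one from the releases page:

```python
# Sketch: decode a wheel filename (PEP 427) to see whether it matches
# your interpreter. The example filename is hypothetical; use a real
# one from the stable-fast releases page.
import sys

def wheel_tags(filename):
    """Split a wheel filename into (name, version, python, abi, platform)."""
    parts = filename[:-len(".whl")].split("-")
    name, version = parts[0], parts[1]
    python_tag, abi_tag, platform_tag = parts[-3], parts[-2], parts[-1]
    return name, version, python_tag, abi_tag, platform_tag

name, version, py, abi, plat = wheel_tags(
    "stable_fast-1.0.0-cp310-cp310-win_amd64.whl"  # hypothetical example
)
print(f"{name} {version}: needs Python tag {py} on platform {plat}")
# cp310 means CPython 3.10; compare against what you are running:
print(f"you are running cp{sys.version_info.major}{sys.version_info.minor}")
```

If the python tag or platform tag doesn't match your interpreter, pip refuses the wheel, which is exactly the "fails to find the wheel" symptom above.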
And then you run it like this for batch size 6 (the argument, not to be confused with the image slots in the UI):
python maxperf.py 6
It does seem a bit buggy, though: it starts with 6 but fills up all 10 slots anyway when you click 'go'.
I don't understand "prerequisiton:use conda as default env." but agree with the other points.
Here's the guide for the CUDA error that we use in 4090 Grotto (https://discord.gg/zVAvFp3wnU):
If you get this error, you need the correctly compiled version of torch, which you can find with this command:
pip install torch== --no-index --find-links https://download.pytorch.org/whl/torch
torch== means "I don't know what version I want, list them."
--no-index means "don't search the default Python package index (PyPI)."
--find-links means "search for packages at this URL instead."
Once you have found the correct version, you can install it like this:
pip install torch==2.1.0+cu121 --find-links https://download.pytorch.org/whl/torch
In this example we've installed version 2.1.0+cu121.
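For reference, the +cu121 suffix is a PEP 440 local version label: everything before the + is the torch version, and cuNNN names the CUDA toolkit the wheel was built against (cu121 = CUDA 12.1, cu118 = CUDA 11.8). A small sketch of reading it:

```python
# Sketch: split a torch version string like "2.1.0+cu121" into the
# package version and the CUDA build tag (PEP 440 local version label).
def split_torch_version(v):
    base, _, local = v.partition("+")
    cuda = None
    if local.startswith("cu"):
        # "cu121" -> "12.1": the last digit is the minor version
        digits = local[2:]
        cuda = f"{digits[:-1]}.{digits[-1]}"
    return base, local, cuda

print(split_torch_version("2.1.0+cu121"))  # ('2.1.0', 'cu121', '12.1')
print(split_torch_version("2.1.0"))        # ('2.1.0', '', None) - CPU-only wheel
```

So the CUDA tag you pick should match (or be at most your) driver's supported CUDA version from nvidia-smi.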
Prerequisite: use conda as the default env.
1. Change python -m venv ./venv to python -m venv venv (drop the ./).
2. Use nvidia-smi to check your CUDA version. In my case, CUDA 12.2.
3. If you encounter "AssertionError: Torch not compiled with CUDA enabled", you need to install the CUDA build of torch:
pip uninstall torch
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
4. Run the remaining steps and it works! On my system with a 3070, 2 minutes will generate 2 images!
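Once everything is installed, a quick sanity check that the CUDA-enabled build actually took — this is just a sketch, guarded so it also prints something sensible where torch isn't installed at all:

```python
# Quick sanity check that the installed torch is a CUDA build.
# Guarded with try/except so it degrades gracefully if torch is missing.
def cuda_status():
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if torch.cuda.is_available():
        return (f"OK: torch {torch.__version__}, CUDA {torch.version.cuda}, "
                f"device {torch.cuda.get_device_name(0)}")
    return (f"torch {torch.__version__} cannot see a CUDA device "
            f"(CPU-only build or driver issue)")

print(cuda_status())
```

If this reports a CPU-only build, that's the same root cause as the AssertionError above, and the reinstall in step 3 is the fix.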