That is very strange. The only thing I can think of would be that python isn't pointing to the right python. You're sure that when you do
python -m pip list | grep torch
torch shows up?
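If it helps, here is a generic sanity check (not from this repo) to confirm that the interpreter you launch as python is the same one torch was installed into:

# Sanity check: print which interpreter "python" resolves to and whether torch
# imports from it. If the import raises ModuleNotFoundError, pip and python
# are likely pointing at different environments.
import sys

print(sys.executable)   # path of the interpreter currently running

import torch            # fails here if this interpreter has no torch install
print(torch.__version__)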
Hi @aredden. Thanks for helping out. Nothing special about my torch installation:
python -m pip list | grep torch
pytorch-triton 3.1.0+5fe38ffd73
torch 2.6.0.dev20240917+cu124
torchaudio 2.5.0.dev20240917+cu124
torchvision 0.20.0.dev20240917+cu124
I'm trying to run flux-fp8-api from a fresh virtual environment. I've tried updating and checking everything I could think of, without success.
By the way, I'm looking for a way to use a fast-loading Flux-dev-fp8 model like the "easy to use ComfyUI fp8 checkpoint for the Flux dev (safetensors file here)" of ComfyUI (with the --fast argument). It loads in 30 to 40 seconds on my 64 GB RAM / 4090 24 GB VRAM PC and generates a 1280x1280 px image in less than 20 seconds, even with 4 or 5 LoRAs. This setup uses around 20 GB of VRAM.
I've been looking inside the code of ComfyUI and the official Flux.1 repository, but it is a little too hard for me.
Your code looks good and I would like to know how to do it, but I'm stuck ;)
Let me know if you have any idea. Thanks again.
Hi @aredden. I finally started from scratch with a fresh installation of Ubuntu and it's OK now. That said, I don't know the origin of the error I had before.
Ah okay cool! Yeah I'm unsure either. I'm glad you got it sorted :)
I have the exact same issue, so let me know if you get any clarity on it, thanks!
Just in case someone faces this issue: adding --no-build-isolation solved the problem for me.
python -m pip install --no-build-isolation -U -v .
I think this happens because the package depends on torch/cuda during its setup.py execution, not only at runtime.
--no-build-isolation will work for some (by allowing the installation to look globally for torch/cuda, I guess?). Some environments, though, like when trying to deploy with some cloud providers, won't have torch/cuda available during the build phase, so it won't work for everyone.
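For context, here is a rough sketch of why build isolation breaks this. I haven't checked the actual setup.py of torch-cublas-hgemm, so the names and paths below are made up, but packages that build CUDA extensions typically do something like this:

# Hypothetical setup.py sketch: torch's build helpers are imported at the top
# of setup.py. With pip's default build isolation, the temporary build
# environment has no torch installed, so this import is what raises
# "ModuleNotFoundError: No module named 'torch'".
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension  # needs torch at build time

setup(
    name="example-cublas-ext",              # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name="example_cublas_ext._C",   # hypothetical extension module
            sources=["csrc/gemm.cu"],       # hypothetical CUDA source
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)

Passing --no-build-isolation makes pip build against the environment where torch is already installed instead of a bare temporary one, which is why it fixes the error.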
I have encountered the same problem when trying flux-fp8-api.
I used python -m venv venv to create a virtual environment, activated it, and then followed the installation steps: install torch, then install requirements.txt, and then this problem happened.
I'm on Ubuntu 22.04 and Python 3.12. Reinstalling Ubuntu is way too costly for me.
@spejamas
"Some environments, though, like when trying to deploy with some cloud providers, won't have torch/cuda available during the build phase"
seems to be the root cause. Is there any way to get torch ready during the build phase?
Thanks.
Hi. I've been trying to install torch-cublas-hgemm (for flux-fp8-api) on Ubuntu, without success.
I get the same error
ModuleNotFoundError: No module named 'torch'
whether with
pip install git+https://github.com/aredden/torch-cublas-hgemm.git@master
or with
python -m pip install -U -v .
(after cloning the repository). Of course, torch is installed.
Regards.