Woolverine94 / biniou

a self-hosted webui for 30+ generative ai
GNU General Public License v3.0
439 stars 42 forks

light-the-torch for pytorch installation #32

Closed do-me closed 6 days ago

do-me commented 2 weeks ago

biniou natively relies only on the CPU for all operations. It uses a specific CPU-only version of PyTorch. The result is better compatibility with a wide range of hardware, but degraded performance. Depending on your hardware, expect slowness. See here for Nvidia CUDA support and AMD ROCm experimental support (GNU/Linux only).

Maybe a better approach would be to detect the system's specs automatically with light-the-torch. This way, everyone gets the best out of their system.
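For reference, light-the-torch's documented usage is a drop-in `pip` frontend called `ltt`; the commands below sketch how an installer could call it (passing torchvision/torchaudio alongside torch is illustrative of biniou's dependencies, not prescribed by the tool):

```shell
# Install light-the-torch itself (a small pure-Python package)
pip install light-the-torch

# ltt inspects the local hardware (e.g. installed CUDA driver) and
# resolves the matching PyTorch wheel index automatically
ltt install torch torchvision torchaudio
```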

Woolverine94 commented 2 weeks ago

Hello @do-me ,

Thanks for your suggestion.

I didn't know about light-the-torch, and I will study the possibility of integrating it into the installer.

Woolverine94 commented 6 days ago

Hello @do-me,

Sorry for taking so long.

I've evaluated the compatibility of light-the-torch.

It was a good idea, and the installation works great (at least for the few use cases I could test), but it requires freezing the release versions of torch, torchaudio and torchvision.

Unfortunately, further updates could be a problem, and the real issue is that I want to keep the ability for CUDA (or ROCm) users to switch between CUDA (or ROCm) and CPU, which seems complicated with light-the-torch. I had reports from users who didn't have enough VRAM to use SDXL with CUDA, for example, but had enough RAM to run it on CPU inference.
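The CUDA-to-CPU fallback described above could be sketched roughly like this. This is only an illustration of the decision, not biniou's actual code; the function, parameter names, and the ~8 GB SDXL figure are assumptions for the example:

```python
def pick_device(prefer_gpu: bool, cuda_available: bool,
                free_vram_gb: float, model_vram_gb: float) -> str:
    """Choose an inference device string for PyTorch.

    Falls back to CPU when the user opts out of GPU use, or when the
    model would not fit in the available VRAM (e.g. SDXL on a small
    card), so the user can still run it with enough system RAM.
    """
    if prefer_gpu and cuda_available and free_vram_gb >= model_vram_gb:
        return "cuda"
    return "cpu"


# A user with a 4 GB card cannot fit SDXL (assumed ~8 GB here) on the
# GPU, but can still run it on CPU inference:
print(pick_device(prefer_gpu=True, cuda_available=True,
                  free_vram_gb=4.0, model_vram_gb=8.0))  # -> cpu
```

The point being that this choice happens at runtime, per model, which a single install-time pin of a CUDA-only or CPU-only torch build cannot express.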

Also, it replaces pip with a version that throws constant warnings and sometimes fails on my system, which I definitely don't want.

I totally agree that autodetection at installation is a good idea, but using light-the-torch doesn't seem to be the right way to do it for biniou.

I'm closing this issue as light-the-torch is unlikely to be integrated into biniou, but I stay open to the idea of auto-detection at install time.
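For anyone revisiting this: install-time auto-detection without light-the-torch could be sketched as below. This is a minimal illustration only, assuming detection via the vendor CLI tools (`nvidia-smi` / `rocminfo`) and the official PyTorch wheel indexes; the exact CUDA/ROCm index versions shown are placeholders that an installer would have to pin to whatever releases biniou supports. The probe function is injectable so the logic can be exercised without GPU hardware:

```python
import shutil

# Official PyTorch wheel indexes; the cu121 / rocm5.6 suffixes are
# illustrative and must track the versions the installer pins.
INDEXES = {
    "cuda": "https://download.pytorch.org/whl/cu121",
    "rocm": "https://download.pytorch.org/whl/rocm5.6",
    "cpu":  "https://download.pytorch.org/whl/cpu",
}


def detect_backend(which=shutil.which) -> str:
    """Guess the best PyTorch backend from tools present on the system."""
    if which("nvidia-smi"):
        return "cuda"
    if which("rocminfo"):
        return "rocm"
    return "cpu"


def pip_index_url(backend: str) -> str:
    """Map a detected backend to its PyTorch wheel index URL."""
    return INDEXES[backend]


if __name__ == "__main__":
    backend = detect_backend()
    print(f"pip install torch --index-url {pip_index_url(backend)}")
```

Keeping the detection as a plain default (rather than a hard-wired choice) would preserve the CPU/GPU switch discussed above: the installer suggests a backend, the user can still override it.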