**Quick-Eyed-Sky** opened this issue 1 month ago
I managed to download things with:
```python
import sys
sys.path.append('..')

import torch
from kandinsky3 import get_T2I_Flash_pipeline

device_map = torch.device('cuda:0')
dtype_map = {
    'unet': torch.float32,
    'text_encoder': torch.float16,
    'movq': torch.float32,
}

t2i_pipe = get_T2I_Flash_pipeline(
    device_map, dtype_map,
)
```
When this part runs:
```python
def generate_and_save_image(dynamic_prompt, i):
    res = t2i_pipe(
        text=dynamic_prompt,
        guidance_scale=4,
        height=image_dimensions[0],
        width=image_dimensions[1],
    ).images[0]
```
I get:
```
OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB. GPU 0 has a total capacity of 39.56 GiB of which 100.81 MiB is free. Process 146482 has 39.46 GiB memory in use. Of the allocated memory 38.69 GiB is allocated by PyTorch, and 265.55 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.
```
And I'm on an A100 with 40 GB of GPU memory...
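Based on the error message itself, here is what I'm considering trying next (just a sketch: the allocator setting comes straight from the traceback, and the fp16 `dtype_map` assumes the pipeline tolerates half precision for `'unet'` and `'movq'`, which I haven't verified):

```python
import os

# The OOM message itself suggests this allocator setting; it must be set
# before torch initializes CUDA, so it goes at the very top of the notebook.
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'expandable_segments:True'

import torch

# Loading the big components in half precision roughly halves their memory.
# (Assumption: the pipeline tolerates fp16 for 'unet' and 'movq'.)
dtype_map = {
    'unet': torch.float16,
    'text_encoder': torch.float16,
    'movq': torch.float16,
}
```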
I need help downloading 3.1 in Colab. I'm not a programmer, so I tango with ChatGPT-4o.
I tried `!pip install kandinsky3`: error.
I tried `!pip install git+https://github.com/author/kandinsky3.git`: error.
So I did:

```
!git clone https://github.com/ai-forever/Kandinsky-3.git
%cd Kandinsky-3
```
and I got:

```
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
```
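Since the repo root has no `setup.py` or `pyproject.toml`, pip can't install it as a package. The workaround I'm sketching instead is to put the clone itself on `sys.path` (the `/content/Kandinsky-3` path is my assumption about where the clone lands in Colab; adjust if `%cd` put it elsewhere):

```python
import os
import sys

# Assumed location of the clone in a Colab session.
repo_dir = '/content/Kandinsky-3'

# Make the cloned package importable without pip-installing it.
if os.path.isdir(repo_dir) and repo_dir not in sys.path:
    sys.path.insert(0, repo_dir)
```

After this, `import kandinsky3` should resolve to the clone; its own dependencies would still need installing separately (e.g. from a `requirements.txt`, if the repo ships one).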
In my next cell I have:
```python
import torch
from kandinsky3 import get_pipeline  # Import the correct function

# Set the device to CUDA
device_map = torch.device('cuda:0')

# Define the data types for different components
dtype_map = {
    'unet': torch.float32,
    'text_encoder': torch.float16,
    'movq': torch.float32,
}

# Initialize the T2I Flash pipeline for Kandinsky 3.1
t2i_pipe = get_pipeline(device=device_map, dtype=dtype_map)

# Move the pipeline to GPU, if available
t2i_pipe = t2i_pipe.to('cuda')
```
and I get:
```
ImportError                               Traceback (most recent call last)
```
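The traceback is cut off here, but since my earlier working cell imported `get_T2I_Flash_pipeline`, my guess is that `get_pipeline` simply isn't a name the package exports. A quick check I could run (the clone path is again an assumption) is:

```python
import importlib.util
import sys

sys.path.insert(0, '/content/Kandinsky-3')  # assumed Colab clone location

spec = importlib.util.find_spec('kandinsky3')
if spec is None:
    print('kandinsky3 is not importable -- check the clone path above')
else:
    import kandinsky3
    # List the factory functions the package really exports.
    print([name for name in dir(kandinsky3) if name.startswith('get_')])
```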