lucidrains / imagen-pytorch

Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch
MIT License
8.09k stars 768 forks source link

Can anyone look over my script and see if I've made any errors? #365

Open Shikamaru5 opened 1 year ago

Shikamaru5 commented 1 year ago

I'm attempting to train off of image-text pairs, however, there aren't really any good examples that I could find of how this is accomplished. It's all about the unconditional which doesn't particularly help me unfortunately. Be that as it may, I believe that I've found the correct set up, although the examples seem to point to trying to use the Dataloader and Dataset class in data.py it seemed that the Collator class was the option to use for text_embeddings. This is my script at this point:

 from imagen.imagen_pytorch.imagen_pytorch import Unet, Imagen
 from imagen.imagen_pytorch.data import Collator
 from imagen.imagen_pytorch.trainer import ImagenTrainer
 import torch
 import deepspeed

 # unets for unconditional imagen

 unet = Unet(
     dim = 128,
     cond_dim = 512,
     dim_mults = (1, 2, 4, 8),
     num_resnet_blocks = 3,
     layer_attns = (False, True, True, True),
     layer_cross_attns = (False, True, True, True)
 )

 # imagen, which contains the unet above
 imagen = Imagen(
     unets = (unet),
     image_sizes = 64,
     text_encoder_name = 't5-3b',
     timesteps = 5_000,
     cond_drop_prob = 0.1
 )

 model = ImagenTrainer(imagen = imagen).to("cuda")

 text = '/mnt/e/desktop/genaitor/majel/imagen/datasets/furniture/furniture_data_img.csv'
 img = '/mnt/e/desktop/genaitor/majel/imagen/datasets/furniture/furniture_images'

 # Create an instance of the Collator class
 collator = Collator(image_size=(64), url_label=url_label, text_label=text, image_label=img, name="t5-3b", channels="RGB")

 model_trainer = model(collator, batch_size=32)

 model_engine, optimizer, _, _ = deepspeed.initialize(args='deepspeed_config.json',
                                                      model=model_trainer,
                                                      optimizer=None,
                                                      model_parameters=None)

 # working training loop

 for i in range(20_000):

     loss = model_engine.train_step(unet_number = 1, max_batch_size = 4)

 model.update(unet_number = 1)

 print(f'loss: {loss}')

 save_checkpoint('./U1.pt', trainer)
 #trainer.load('./U1.pt')

If someone wouldn't mind reviewing this set up and give me pointers on what I'm doing incorrectly that'd be greatly appreciated, thanks for taking the time to read this.