lucidrains / DALLE2-pytorch

Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch

Diffusion trainer wrapping for CLIP-less? #293

Closed ethancohen123 closed 1 year ago

ethancohen123 commented 1 year ago

Hi, I was wondering whether it is possible to use the diffusion prior trainer wrapper with the CLIP-less approach? [Referring to this paragraph in the README: "You can also completely go CLIP-less, in which case you will need to pass in the image_embed_dim into the DiffusionPrior on initialization"] @lucidrains Thanks!
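For reference, my understanding of the CLIP-less setup described in that paragraph is roughly the sketch below, with mock tensors standing in for real precomputed embeddings; the only assumption beyond the quoted text is that the precomputed embeddings are then fed to the prior through the text_embed / image_embed keyword arguments.

import torch
from dalle2_pytorch import DiffusionPriorNetwork, DiffusionPrior

# the prior network (transformer) to be wrapped by DiffusionPrior
prior_network = DiffusionPriorNetwork(
    dim = 512,
    depth = 6,
    dim_head = 64,
    heads = 8
)

# CLIP-less: no CLIP is passed in, so the embedding dimension must be given explicitly
diffusion_prior = DiffusionPrior(
    net = prior_network,
    image_embed_dim = 512,
    timesteps = 100,
    cond_drop_prob = 0.2,
    condition_on_text_encodings = False
)

# mock precomputed embeddings (batch of 4, embedding dim 512)
text_embed = torch.randn(4, 512)
image_embed = torch.randn(4, 512)

# assumed keyword names for precomputed embeddings: text_embed / image_embed
loss = diffusion_prior(text_embed = text_embed, image_embed = image_embed)
loss.backward()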

ethancohen123 commented 1 year ago

This is the code snippet following the README instructions:

import torch
from dalle2_pytorch import DALLE2, DiffusionPriorNetwork, DiffusionPrior, DiffusionPriorTrainer

# mock precomputed CLIP image / text embeddings (batch of 256, embedding dim 512)

clip_image_embeds = torch.randn(256, 512).cuda()
clip_text_embeds = torch.randn(256, 512).cuda()

# prior networks (with transformer)

# setup prior network, which contains an autoregressive transformer

prior_network = DiffusionPriorNetwork(
    dim = 512,
    depth = 6,
    dim_head = 64,
    heads = 8
).cuda()

# diffusion prior, which wraps the prior network above (CLIP-less here, so no CLIP is passed in)

diffusion_prior = DiffusionPrior(
    net = prior_network,
    image_embed_dim = 512,               # dimension of the precomputed image embeddings, required when going CLIP-less
    timesteps = 100,
    cond_drop_prob = 0.2,
    condition_on_text_encodings = False  # no precomputed text encodings are passed in, only text embeddings
).cuda()

diffusion_prior_trainer = DiffusionPriorTrainer(
    diffusion_prior,
    lr = 3e-4,
    wd = 1e-2,
    ema_beta = 0.99,
    ema_update_after_step = 1000,
    ema_update_every = 10,
)

loss = diffusion_prior_trainer(clip_text_embeds, clip_image_embeds, max_batch_size = 4)
diffusion_prior_trainer.update()  # this will update the optimizer as well as the exponential moving averaged diffusion prior

# after running the above training step and update in a loop for many iterations
# you can sample from the exponential moving average of the diffusion prior identically to how you do so for DiffusionPrior

image_embeds = diffusion_prior_trainer.sample(clip_text_embeds, max_batch_size = 4) # (256, 512) - exponential moving averaged image embeddings

and here's the error I get

[screenshot of the error]
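One guess: the positional arguments of DiffusionPrior appear to be raw text and image inputs rather than embeddings, so in the CLIP-less case the precomputed embeddings may need to be passed as keyword arguments instead of positionally. A minimal sketch of what the trainer call would look like under that assumption (this also assumes DiffusionPriorTrainer forwards keyword arguments on to the wrapped DiffusionPrior):

# assumption: precomputed embeddings go through the text_embed / image_embed keywords,
# and the trainer forwards them to the underlying DiffusionPrior
loss = diffusion_prior_trainer(
    text_embed = clip_text_embeds,
    image_embed = clip_image_embeds,
    max_batch_size = 4
)
diffusion_prior_trainer.update()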

ethancohen123 commented 1 year ago

Any recommendation here, please?