KathyGCY opened this issue 2 years ago
This is the code I'm running in a Jupyter notebook on a Mac:
```python
import torch
from dalle2_pytorch import DALLE2, DiffusionPriorNetwork, DiffusionPrior, Unet, Decoder, CLIP

clip = CLIP(
    dim_text = 512,
    dim_image = 512,
    dim_latent = 512,
    num_text_tokens = 49408,
    text_enc_depth = 6,
    text_seq_len = 256,
    text_heads = 8,
    visual_enc_depth = 6,
    visual_image_size = 256,
    visual_patch_size = 32,
    visual_heads = 8
).cuda()
```
And this happened:
Removing all of the `.cuda()` calls should get the code running without trying to involve the GPU, since CUDA isn't supported on a Mac.
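For reference, here's a minimal sketch of that fix: the same `CLIP` construction, but with `.cuda()` replaced by `.to(device)` behind a device check, so the model falls back to Apple's MPS backend or the CPU when CUDA is unavailable. The device-selection logic is my own addition, not part of the original report.

```python
import torch
from dalle2_pytorch import CLIP

# Assumption: pick whatever accelerator is available, falling back to CPU.
# On a Mac without CUDA this resolves to "mps" (PyTorch >= 1.12) or "cpu".
if torch.cuda.is_available():
    device = torch.device("cuda")
elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

clip = CLIP(
    dim_text = 512,
    dim_image = 512,
    dim_latent = 512,
    num_text_tokens = 49408,
    text_enc_depth = 6,
    text_seq_len = 256,
    text_heads = 8,
    visual_enc_depth = 6,
    visual_image_size = 256,
    visual_patch_size = 32,
    visual_heads = 8
).to(device)
```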
Can confirm this works on Windows 10 and macOS Catalina.