Open RajShekhorRoy opened 1 year ago
Can you show the full error message? One problem seems to be that your size is 32 for the channels as well. You would need to tell the diffusion model about that: `model = UNet(c_in=32).to(device_val)`. But do you really want that? It would mean your images have 32 channels.
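To make the channel-count point concrete, here is a minimal sketch in plain PyTorch (independent of this repo's UNet) showing how a (1, 32, 32, 32) tensor is read as batch=1 with 32 channels, so a layer built for 3-channel images rejects it:

```python
import torch
import torch.nn as nn

# Conv layers interpret input as (batch, channels, height, width), so a
# (1, 32, 32, 32) tensor is ONE image with 32 channels, not a 3-D volume.
x = torch.rand(1, 32, 32, 32)

conv_rgb = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
conv_32ch = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1)

try:
    conv_rgb(x)   # expects 3 input channels, gets 32 -> RuntimeError
except RuntimeError as e:
    print("rejected:", e)

y = conv_32ch(x)  # accepted once in_channels matches
print(y.shape)    # torch.Size([1, 64, 32, 32])
```

The same logic applies to the model's first convolution, which is why `c_in` has to match the channel dimension of the input.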
On Fri, Jun 30, 2023 at 7:22 PM, Raj Shekhor Roy < @.***> wrote:
I am getting this error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x128 and 256x128)
when I am using the code below, which is basically an array of size 32x32x32 and 128 timesteps:

size = 32
device_val = "cuda"
x_input = torch.tensor(np.random.rand(1, size, size, size)).to(device_val).type(torch.cuda.FloatTensor)
diffusion = Diffusion(img_size=size, device=device_val)
t = diffusion.sample_timesteps(128).to(device_val)
model = UNet().to(device_val)
xmodel = model(x_input, t)
In addition to that, could you please explain a bit more how this time embedding works, especially in terms of its dimensions?
Here is the full error:
Traceback (most recent call last):
File "/home/rajroy/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/212.4746.96/plugins/python/helpers/pydev/pydevd.py", line 1483, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rajroy/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/212.4746.96/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/rajroy/Diffusion-Models-pytorch/test.py", line 80, in
One more thing: I changed the default parameters in the model to provide the input channel number, like this:

class UNet(nn.Module):
    def __init__(self, c_in=32, c_out=32, time_dim=128, device="cuda"):
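For what it's worth, the (128x128 and 256x128) pattern is exactly what you get when a 128-dim tensor is fed into an `nn.Linear` built for 256 inputs. My guess (an assumption, not verified against the repo's internals) is that changing `time_dim` to 128 clashes with an embedding layer somewhere in the model that still expects the default 256. A minimal reproduction of the error shape:

```python
import torch
import torch.nn as nn

# nn.Linear stores its weight as (out_features, in_features); the matmul
# error message reports the transposed weight, hence "256x128" below.
emb = torch.rand(128, 128)   # 128 timesteps, each embedded in 128 dims
layer = nn.Linear(256, 128)  # but the layer expects 256-dim input

try:
    layer(emb)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (128x128 and 256x128)
```

If that is the cause, using the same dimension everywhere the time embedding is consumed (or keeping the default `time_dim`) should clear it.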
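On the time-embedding question: the usual diffusion-model scheme is a sinusoidal encoding that maps each scalar timestep to a `time_dim`-dimensional vector, half sine terms and half cosine terms over a geometric range of frequencies. A sketch under that assumption (the function below is my own illustration, not copied from the repo):

```python
import torch

def pos_encoding(t, channels):
    # t: (batch, 1) float timesteps -> (batch, channels) embedding.
    # Frequencies fall geometrically from 1 toward 1/10000, as in
    # Transformer positional encodings; sin and cos halves are concatenated.
    inv_freq = 1.0 / (10000 ** (torch.arange(0, channels, 2).float() / channels))
    sin_part = torch.sin(t.repeat(1, channels // 2) * inv_freq)
    cos_part = torch.cos(t.repeat(1, channels // 2) * inv_freq)
    return torch.cat([sin_part, cos_part], dim=-1)

t = torch.randint(1, 1000, (128, 1)).float()  # 128 sampled timesteps
emb = pos_encoding(t, 256)
print(emb.shape)  # torch.Size([128, 256])
```

So the embedding's second dimension is whatever `time_dim` is, and every layer that consumes the embedding must be built with that same number.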