devshaww opened 1 year ago
Hi outlier, I am working on a grayscale ultrasound dataset with an image size of 256*256 and 5 different classes. I modified the image size in the SelfAttention class, and I also changed the input and output channels for grayscale images, but when I run modules.py I get this error. Kindly help out.
Also, why does it sample 10 images every time during training? Can we change that?
This is an improved codebase: https://github.com/tcapelle/Diffusion-Models-pytorch I think it implements easy handling of different image resolutions.
I am still facing this issue. I tried that repo but am still stuck. I want to work on grayscale images of size 256. Kindly guide me on what I should do. Thanks.
Hey, you would change the in and out channels here: https://github.com/dome272/Diffusion-Models-pytorch/blob/be352208d0576039fec061238c0e4385a562a2d4/modules.py#L190
That should be it. And then your DataLoader would need to be adjusted too.