elvisace opened 1 year ago
Hello, I also have the same error, with all of my images being 512x512.
I tried removing all of the torchvision.transforms entries from the YAML config file, but then I get a new error (TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'>).
I'm not sure, but I suspect it is linked to the BATCH_SIZE variable, because the only change compared to the text-to-pokemon notebook is that the number of GPUs is set to 1. I also tried different BATCH_SIZE values, but each gave a different error.
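For reference, a minimal sketch of the kind of preprocessing that should avoid both errors, assuming a standard torchvision pipeline (ImageFolderDataset, IMAGE_SIZE, and the folder layout are placeholder names, not taken from this repo):

```python
# Minimal sketch, not the repo's actual config: a dataset that forces every
# image to the same shape and channel count before the DataLoader stacks a batch.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

IMAGE_SIZE = 512  # assumed target size; adjust to the model's expected input

# ToTensor() should be the single step that converts PIL -> tensor;
# applying it to something that is already a tensor raises the
# "pic should be PIL Image or ndarray" TypeError seen above.
preprocess = transforms.Compose([
    transforms.Resize(IMAGE_SIZE),       # shorter side -> IMAGE_SIZE, aspect ratio kept
    transforms.CenterCrop(IMAGE_SIZE),   # final shape IMAGE_SIZE x IMAGE_SIZE
    transforms.ToTensor(),               # PIL image -> float tensor (C, H, W) in [0, 1]
])

class ImageFolderDataset(Dataset):
    """Hypothetical stand-in for the repo's dataset class."""

    def __init__(self, root):
        self.paths = sorted(Path(root).glob("*.*"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # convert("RGB") drops the alpha channel of RGBA PNGs, so a mix of
        # JPEGs and PNGs still yields 3-channel tensors of identical shape;
        # otherwise torch.stack in the default collate_fn fails with
        # "stack expects each tensor to be equal size".
        img = Image.open(self.paths[idx]).convert("RGB")
        return preprocess(img)
```

If the YAML config already adds ToTensor() elsewhere in the pipeline, removing every transform entry (rather than only the resize/crop ones) could leave ToTensor() being applied to something that is already a tensor, which would match the TypeError above; images that are never resized, or PNGs kept as RGBA, would match the stack-size RuntimeError.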
Running into this error:

RuntimeError: stack expects each tensor to be equal size...

Not exactly sure what the issue is. Obviously it's the data, but I thought the images were transformed (resized and cropped) when the data is prepared, prior to training. Could it be that the dataset has a mix of JPEGs and PNGs?

Here are the error logs: