First of all, thanks for your implementation; it is really good.
Now I want to try an input image that is grayscale [512, 512] with a generated RGB target mask [512, 512, 3] (same spatial size). After SimDataset and DataLoader the batch shapes are torch.Size([20, 512, 512]) and torch.Size([20, 512, 512, 3]), and training fails with:

RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 20, 512, 512] to have 3 channels, but got 20 channels instead.
How can I fix this issue?
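The error suggests the grayscale batch is missing a channel dimension, so the first conv layer (weight [64, 3, 3, 3]) reads the batch size 20 as the channel count. Below is a minimal sketch of the reshaping I think is needed; the `images` and `masks` tensors here are placeholders with the shapes quoted above, not the actual SimDataset output:

```python
import torch

# Placeholder batch matching the shapes from the DataLoader above
images = torch.rand(20, 512, 512)    # grayscale: [N, H, W], no channel dim
masks = torch.rand(20, 512, 512, 3)  # RGB masks: [N, H, W, C], channels-last

# Insert a channel dimension and repeat it so the first conv layer
# sees the 3 input channels it expects: [N, 3, H, W]
images = images.unsqueeze(1).repeat(1, 3, 1, 1)

# Move the mask channels into the NCHW layout PyTorch models/losses expect
masks = masks.permute(0, 3, 1, 2).contiguous()

print(images.shape)  # torch.Size([20, 3, 512, 512])
print(masks.shape)   # torch.Size([20, 3, 512, 512])
```

Alternatively, if the model's first conv layer can be changed to `in_channels=1`, only `images.unsqueeze(1)` would be needed, without repeating the channel.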