Describe the bug
If you use the out-of-the-box image transforms for TinyViT, they do not work with grayscale images, because they expect 2/3-channel images (see code below). It would also be nice to support batching right away, since that improves interoperability with other workflows and frameworks (Lightning, for example).
Expected behavior
Expected behaviour would be to automatically detect a 4D tensor and apply the transform batch-wise. For grayscale images, the single channel could simply be copied to all 3 channels before the transform for a low-effort implementation. There is also the PIL Image dependency; maybe it makes sense to drop that in favor of torch.Tensor.
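A minimal sketch of the proposed behaviour, as a standalone helper (the name `adapt_for_transform` is hypothetical, not part of TinyViT or timm): it promotes an unbatched `(C, H, W)` tensor to `(N, C, H, W)`, repeats a single grayscale channel to 3 channels, and hands back the same rank it received.

```python
import torch

def adapt_for_transform(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical adapter sketching the requested behaviour.

    Accepts either a single image (C, H, W) or a batch (N, C, H, W).
    Grayscale inputs (1 channel) are repeated to 3 channels so that
    downstream RGB-only transforms accept them.
    """
    batched = x.dim() == 4
    if not batched:
        x = x.unsqueeze(0)       # promote (C, H, W) -> (1, C, H, W)
    if x.shape[1] == 1:
        x = x.repeat(1, 3, 1, 1)  # copy the gray channel to all 3 channels
    # return the same rank the caller passed in
    return x if batched else x.squeeze(0)
```

A wrapper like this could run before the existing transform pipeline, so the pipeline itself would not need to change.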