I built a data loader following the MaskFormer tutorial, and it worked well for training MaskFormer. However, when I switched to Mask2Former, I encountered the following error:
```
grid_sampler(): expected 4D or 5D input and grid with same number of dimensions, but got input with sizes [0, 1] and grid with sizes [0, 12544, 1, 2]
```
The error only occurs for images where the class label has shape `torch.Size([0])`, i.e. there is no object in the frame. Is there a way to fix this? Thank you so much.
Dear Team, I'm currently trying to fine-tune this model. Since I don't have semantic labels, I use only a single class label.
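For reference, here is a minimal sketch of the workaround I'm experimenting with: dropping frames with no objects before they reach the model. The key names (`class_labels`, `mask_labels`) are assumptions based on the MaskFormer tutorial's batch format, and plain lists stand in for tensors here.

```python
# Hedged sketch, not a confirmed fix: filter out empty frames in the
# collate step so no example with class labels of shape [0] reaches
# Mask2Former's mask-sampling code.
# `class_labels` / `mask_labels` key names are assumptions taken from
# the MaskFormer tutorial's batch format.

def has_objects(example):
    """Return True if the example contains at least one instance."""
    return len(example["class_labels"]) > 0

def filter_empty(batch):
    """Drop examples whose class labels are empty (no object in frame)."""
    return [ex for ex in batch if has_objects(ex)]

# Usage with plain lists standing in for tensors:
batch = [
    {"class_labels": [1], "mask_labels": [[0, 1]]},
    {"class_labels": [], "mask_labels": []},  # empty frame that triggers the error
]
filtered = filter_empty(batch)
print(len(filtered))  # 1
```

The downside is that empty frames are never seen during training, so I'd also be interested in a way to keep them (e.g. treating them as all-background).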