Closed martintomov closed 5 months ago
Hi @martintmv-git, thanks for your interest in my notebook. Looking at your dataset, it looks like the labels might need to be reversed, i.e. the object of interest needs to be marked with white pixels and the background with black pixels.
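In case it helps anyone else, flipping the polarity can be done with a few lines of Pillow/NumPy. This is a sketch, not part of the notebook: the file names and the mostly-white heuristic for detecting an inverted mask are my own assumptions.

```python
import numpy as np
from PIL import Image

def ensure_object_is_white(mask_path, out_path):
    """Flip a binary mask so the object is white (255) and the
    background black (0).

    The mostly-white heuristic and both file names are illustrative
    assumptions, not part of the original notebook.
    """
    arr = np.asarray(Image.open(mask_path).convert("L"))
    if arr.mean() > 127:  # mask is mostly white -> object is likely black
        arr = 255 - arr
    Image.fromarray(arr.astype(np.uint8)).save(out_path)
```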
Even after inverting the colour of the object of interest, the training issue persisted. Upon further investigation, it turned out the problem also came from the format and encoding of the masks (labels).
All labels must be in the following format (as reported by `file`):

```
TIFF image data, little-endian, direntries=10, height=256, bps=32, compression=none, PhotometricIntepretation=BlackIsZero, width=256
```
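A conversion along these lines worked for me, assuming Pillow is available. The file names, the 127 binarization threshold, and nearest-neighbour resizing are illustrative choices, not requirements from the notebook; the point is saving an uncompressed 32-bit float TIFF at 256x256, matching the `file` output above.

```python
import numpy as np
from PIL import Image

def save_mask_as_float_tiff(in_path, out_path, size=(256, 256)):
    """Binarize a grayscale mask (object=1.0, background=0.0) and
    save it as an uncompressed 32-bit float TIFF.

    File names, the 127 threshold, and NEAREST resizing are
    illustrative assumptions.
    """
    mask = Image.open(in_path).convert("L").resize(size, Image.NEAREST)
    binary = (np.asarray(mask) > 127).astype(np.float32)
    # Mode "F" is Pillow's 32-bit float mode; TIFF is uncompressed by default.
    Image.fromarray(binary, mode="F").save(out_path, format="TIFF")
```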
Hey @NielsRogge, I need your help with the fine-tuning SAM tutorial notebook you provided on GitHub. I successfully replicated your entire notebook using the `nielsr/breast-cancer` dataset, and it works great. However, when I try to use it with my own dataset, I get a huge negative mean loss during training that I've been unable to resolve all day. Could the problem be related to how my dataset is structured? My images are RGB and my masks are grayscale. I followed the guide to upload the dataset to the Hub, but I suspect there might be an issue with it. The dataset comprises 733 images and 733 segmentation masks. If possible, could you please take a look and help me troubleshoot this issue?
Loss while training:
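One quick sanity check that may help pinpoint a huge negative loss (the helper and its name are my own illustration, not from the notebook): probabilistic losses such as Dice or sigmoid cross-entropy expect mask targets in {0, 1}, so masks stored as 0/255 can produce nonsensical loss values.

```python
import numpy as np

def mask_values_ok(mask):
    """Return True if every pixel of the mask is 0 or 1.

    Illustrative helper: losses like Dice or sigmoid cross-entropy
    expect targets in {0, 1}, so 0/255 masks should be rescaled first.
    """
    values = np.unique(np.asarray(mask, dtype=np.float32))
    return bool(np.isin(values, [0.0, 1.0]).all())
```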