Swapnil-gautam opened this issue 3 years ago
I think this is a typical kind of edge case. You can try to find similar images with white reflections in your training set and augment these images (e.g. offline, as in the sketch below, or by changing the loss weights of these images during training) to reduce this issue.
On Mon, Nov 29, 2021 at 1:20 AM, Swapnil Gautam wrote:
I am training u2netp on my jewelry image dataset (mostly white backgrounds with a little shadow), but the results contain blank (transparent) patches in the white regions of the object, as shown in the image (left: input image, right: result). I trained the model with 9000 images, batch size 16, and 130 epochs, and left all other hyper-parameters at the default values defined in the repository.
Can anyone suggest changes I should make to avoid this issue? Thank you.
[image: stackoverflow] https://user-images.githubusercontent.com/51311257/143785924-bd9a9ca9-54f3-4f90-a616-3551ebe97a6d.png
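A minimal sketch of that offline augmentation using Albumentations (not part of the repo): the directory names and the flagged list below are placeholders for your own data, and the brightness/gamma ranges are only starting points for blowing out the highlights.

```python
# Offline augmentation sketch: write extra copies of the flagged images with
# blown-out whites so the network sees more bright, reflective jewellery.
# IMG_DIR, MASK_DIR and `flagged` are placeholders for your own layout.
import os
import cv2
import albumentations as A

IMG_DIR, MASK_DIR = "train_images", "train_masks"
flagged = ["ring_0123.jpg", "necklace_0456.jpg"]  # images with white reflections

# Push brightness up and gamma below 1 so highlights clip toward pure white.
transform = A.Compose([
    A.RandomBrightnessContrast(brightness_limit=(0.1, 0.4),
                               contrast_limit=(0.0, 0.2), p=1.0),
    A.RandomGamma(gamma_limit=(60, 100), p=0.7),
])

for name in flagged:
    stem = os.path.splitext(name)[0]
    image = cv2.imread(os.path.join(IMG_DIR, name))
    mask = cv2.imread(os.path.join(MASK_DIR, stem + ".png"), cv2.IMREAD_GRAYSCALE)
    for i in range(5):  # 5 augmented copies per flagged image
        out = transform(image=image, mask=mask)  # photometric only, mask is unchanged
        cv2.imwrite(os.path.join(IMG_DIR, f"{stem}_aug{i}.jpg"), out["image"])
        cv2.imwrite(os.path.join(MASK_DIR, f"{stem}_aug{i}.png"), out["mask"])
```

Then train on the original images plus these copies (adjust the mask extension to however your masks are stored).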
Hey @xuebinqin, sorry for the late reply. I have added similar images to the dataset, but the issue is still there. How can I change the loss weights during training for only these images?
1) You can make a list of the ids (or names) of the images that have similar whitish reflections, then modify SalObjDataset's __getitem__ function to return a custom weight as well, and pass this weight into your loss function (see the weighting sketch after this list).
2) You can perform augmentation to blow out the whites in your dataset images. Play around with brightness and gamma values (Albumentations supports all of these; see the augmentation sketch earlier in the thread).
3) Depending on the shape and type of your jewellery, you could also try OpenCV's morphological closing on the predicted masks to fill small holes (see the closing sketch after this list).
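A minimal sketch of (1), assuming SalObjDataset.__getitem__ in the repo's data_loader.py returns a dict-style sample containing 'image' and 'label' and that the dataset keeps an image_name_list attribute (check the file, the names may differ); WeightedSalObjDataset, hard_names, and HARD_WEIGHT are illustrative names, not part of the repo:

```python
# Per-image loss weighting sketch: flag the hard images by file name and
# return a weight from the dataset, then scale each image's BCE loss by it.
import os
import torch
import torch.nn as nn
from data_loader import SalObjDataset  # from the U-2-Net repo

hard_names = {"ring_0123.jpg", "necklace_0456.jpg"}  # images with white reflections
HARD_WEIGHT = 3.0                                    # up-weighting factor, tune it

class WeightedSalObjDataset(SalObjDataset):
    def __getitem__(self, idx):
        sample = super().__getitem__(idx)            # transformed 'image'/'label' dict
        name = os.path.basename(self.image_name_list[idx])
        sample["weight"] = torch.tensor(
            HARD_WEIGHT if name in hard_names else 1.0, dtype=torch.float32)
        return sample

bce_none = nn.BCELoss(reduction="none")              # keep per-pixel losses

def weighted_bce(pred, target, weight):
    # Mean per-pixel BCE for each image in the batch, scaled by that image's weight.
    per_image = bce_none(pred, target).mean(dim=(1, 2, 3))
    return (per_image * weight).mean()

# In the training loop (adapted from u2net_train.py, where the side outputs are
# d0..d6; variable names there may differ slightly):
#   weights_v = data["weight"].to(device)
#   loss = sum(weighted_bce(d, labels_v, weights_v) for d in (d0, d1, d2, d3, d4, d5, d6))
```

Use WeightedSalObjDataset in place of SalObjDataset when you build the DataLoader.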
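And a sketch of (3) as a post-processing step on the predicted mask; the input path and kernel size are placeholders, and the kernel is worth tuning, since too large a kernel can also fill genuine gaps (e.g. between chain links):

```python
# Morphological closing sketch: fill small transparent holes in the predicted mask.
import cv2

mask = cv2.imread("u2netp_result.png", cv2.IMREAD_GRAYSCALE)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("u2netp_result_closed.png", closed)
```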
I am training u2netp on my jewelry image dataset (mostly white backgrounds with a little shadow), but the results contain blank (transparent) patches in the white regions of the object, as shown in the image (left: input image, right: result). I trained the model with 9000 images, batch size 16, and 130 epochs (min loss 0.225001), and left all other hyper-parameters at the default values defined in the repository.
Can anyone suggest changes I should make to avoid this issue? Thank you.