Hi @xuebinqin, you've done excellent work, thank you! I'm pretty new to ML, so I apologize in advance if some of these questions are naive.
Which GPUs (and how many) did you use to train the latest u2net (the basic model)? How long did it take, and for how many epochs did you train?
What should I do in order to train a higher-resolution mask output, such as 512x512 or 1024x1024? Is it enough to just change RescaleT(320) to RescaleT(512) in the dataset loader?
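For context, here is the transform pipeline from u2net_train.py with the change I have in mind. Scaling RandomCrop proportionally (288 -> ~460) is my own assumption, not something from the repo:

```python
from torchvision import transforms
from data_loader import SalObjDataset, RescaleT, RandomCrop, ToTensorLab

# tra_img_name_list / tra_lbl_name_list are built earlier in u2net_train.py.
# The original pipeline uses RescaleT(320) + RandomCrop(288); my attempt at
# 512x512 scales the crop to keep roughly the same ratio (288/320 ≈ 460/512).
salobj_dataset = SalObjDataset(
    img_name_list=tra_img_name_list,
    lbl_name_list=tra_lbl_name_list,
    transform=transforms.Compose([
        RescaleT(512),    # was RescaleT(320)
        RandomCrop(460),  # was RandomCrop(288) -- my guess
        ToTensorLab(flag=0),
    ]))
```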
What should I do to train more than one output channel? For example, to predict not the whole object mask but hair, skin and clothes as separate masks (link to dataset)? And how do I feed those additional mask channels in as training labels?
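Here is roughly what I imagine. Changing out_ch in the U2NET constructor comes from the model signature, but stacking the per-class masks into a 3-channel target is just my guess (load_masks is a hypothetical helper):

```python
import torch
from model import U2NET

# U2NET(in_ch, out_ch): input stays 3-channel RGB, but I would set
# out_ch=3 so each output channel predicts one class mask
# (hair / skin / clothes). This is my assumption, not a tested recipe.
net = U2NET(3, 3)

# Stack the three binary ground-truth masks into one 3-channel target of
# shape (batch, 3, H, W) and keep the existing per-side-output BCE loss.
hair, skin, clothes = load_masks()  # hypothetical helper, each (B, 1, H, W)
labels = torch.cat([hair, skin, clothes], dim=1)
```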
Did you use optimizer settings different from those in u2net_train.py? Maybe some LR decay, etc.? I tried training on DUTS-TR, and after 60-70 epochs the training loss just stops decreasing. After reducing the LR from 0.001 to 0.0001 it improved for a while (and then stalled again). Can you give me some ideas on how to make training more stable (automatically)?
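For example, would something like ReduceLROnPlateau be a reasonable way to automate the LR drop I did by hand? A sketch of my idea; the patience value is a guess, and train_one_epoch is a hypothetical helper:

```python
import torch.optim as optim

# Optimizer exactly as in u2net_train.py.
optimizer = optim.Adam(net.parameters(), lr=0.001, betas=(0.9, 0.999),
                       eps=1e-08, weight_decay=0)

# Drop the LR by 10x whenever the epoch loss plateaus, instead of
# editing it manually mid-training. patience=5 is my guess.
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                 factor=0.1, patience=5)

for epoch in range(epoch_num):  # epoch_num as in u2net_train.py
    epoch_loss = train_one_epoch(net, optimizer)  # hypothetical helper
    scheduler.step(epoch_loss)
```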
From your experience, how good is u2net at generating pix2pix-like results with color images? Can it do "sketch to cats" (sample image) kinds of things, or is a pix2pix model better for that? If u2net is a good fit, how do I adjust the training process for "sketch to cats"?
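In case it helps to see what I mean, here is how I would naively adapt the setup for image-to-image output. This is purely my speculation (3-channel output, L1 loss instead of BCE, no discriminator, unlike real pix2pix), not anything from the repo:

```python
import torch.nn as nn
from model import U2NET

# 3-channel RGB output instead of a 1-channel mask; my assumption.
net = U2NET(3, 3)
l1_loss = nn.L1Loss()

# U2NET's forward returns 7 sigmoid side outputs, so the target photos
# would need to be normalized to [0, 1]. sketches / cat_photos are
# hypothetical input/target tensor batches.
d0, d1, d2, d3, d4, d5, d6 = net(sketches)
loss = sum(l1_loss(d, cat_photos) for d in (d0, d1, d2, d3, d4, d5, d6))
```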
Thanks a lot for your time and your answers!