Open lianxxx opened 4 years ago
Hi lian, I removed the gate from the smoothness loss to achieve more stable training. You can put it back with the code below:
smoothLoss_cur1 = opt.sm_loss_weight * smooth_loss(torch.sigmoid(sal1), torch.sigmoid(sal1) * grays)
Cheers, Jing
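For context, here is a minimal sketch of an edge-aware smoothness term of this general form; the helper names and the exact formulation are illustrative assumptions, not the repo's implementation. The gate corresponds to the extra torch.sigmoid(sal1) factor multiplied into the guidance image in the line above.

```python
import torch

def gradient_x(img):
    # Horizontal finite difference along the width dimension.
    return img[:, :, :, :-1] - img[:, :, :, 1:]

def gradient_y(img):
    # Vertical finite difference along the height dimension.
    return img[:, :, :-1, :] - img[:, :, 1:, :]

def smooth_loss(pred, guide):
    # Edge-aware (first-order) smoothness: penalise gradients of the prediction,
    # down-weighted where the guidance image itself has strong gradients.
    weight_x = torch.exp(-torch.mean(torch.abs(gradient_x(guide)), 1, keepdim=True))
    weight_y = torch.exp(-torch.mean(torch.abs(gradient_y(guide)), 1, keepdim=True))
    loss_x = torch.abs(gradient_x(pred)) * weight_x
    loss_y = torch.abs(gradient_y(pred)) * weight_y
    return loss_x.mean() + loss_y.mean()
```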
Thank you very much for your quick reply.
Hi Long, From my experience, the gate can lead to cleaner saliency predictions, but it may also lead to over-confident predictions. By removing the gate, we obtain more moderate predictions. You can decide whether to use the gate according to your task.
Cheers, Jing
[image] https://user-images.githubusercontent.com/43989375/84988853-4d09cf80-b175-11ea-881a-903cf3169b3a.png
Will the saliency map be poorer if you remove the gate?
Thank you very much.
And there is another question about the edge detection network. The ground truth E is produced by Richer Convolutional Features (RCF) for edge detection. But that existing edge detector has been trained with the edge ground truth in [22], and edge ground truth is also very expensive compared with saliency GT. So can weakly supervised saliency detection use another pre-trained edge detector?
Hi Long, You can also use a Canny or Sobel edge detector, which can also lead to reasonable performance.
Cheers, Jing
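As a rough illustration, edge maps could be generated offline with OpenCV's Canny detector instead of RCF; the file paths and thresholds below are assumptions, not part of the released code.

```python
import cv2

# Load a training image in grayscale (path is hypothetical).
gray = cv2.imread('train_images/0001.jpg', cv2.IMREAD_GRAYSCALE)

# Canny edge map as a cheap substitute for RCF edge ground truth;
# the (100, 200) thresholds are common defaults and may need tuning.
edges = cv2.Canny(gray, 100, 200)

cv2.imwrite('train_edges/0001.png', edges)
```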
OK, thank you for your quick reply.
Hi! This is really great work! And thank you so much for sharing your source code. Here is a question: is the Scribble Boosting part included in the code?
Hi weiyao, I didn't include the boosting part, as I think the current one-stage training is enough to achieve reasonable performance. Meanwhile, you can modify the saliency detection module or the edge detection branch, or simply change the weights of the edge detection or smoothness losses, for better performance. The provided version can be treated as a baseline. Have fun.
Cheers, Jing
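For illustration only, changing those weights amounts to re-weighting the terms of the total training loss; the variable names and values below are assumptions in the repo's style, not its exact settings.

```python
import torch

# Illustrative stand-ins for the per-batch loss terms.
pce_loss  = torch.tensor(0.50)  # partial cross-entropy on scribble-annotated pixels
edge_loss = torch.tensor(0.20)  # edge detection branch loss
sm_loss   = torch.tensor(0.10)  # (gated or ungated) smoothness loss

# Re-weighting the auxiliary terms is the simplest knob to tune.
edge_loss_weight = 1.0  # assumed value
sm_loss_weight = 0.3    # assumed value

total_loss = pce_loss + edge_loss_weight * edge_loss + sm_loss_weight * sm_loss
```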
Thanks for your quick reply!
Excellent work!!! Thank you for your repo.
Is there a gate (which is denoted by G in formula (3) in the paper) in this smoothness loss?