I tried to compare your method with the other SOTA methods listed in your report.
However, for BDRAR, following their public code and retraining on the ViSha training data, I get a BER of around 13 at test time.
May I know your scheme for retraining all the competitors, especially BDRAR?
Also, I found that most methods perform better with a binarization threshold of 0 rather than 127.5 in the BER evaluation; do you have a particular reason for using 127.5?
Thanks!
I also follow the original network implementation for BDRAR. However, to be consistent, I used the same training settings as for our network; the original BDRAR training settings may be better suited to BDRAR. Thank you for pointing this out, I will check it. As for BER: I report four metrics, and when BER is better, the other indicators are not necessarily good. In TVSD-Net I chose a result that balances all four metrics, so the BER alone is not necessarily the best.
127.5 is half of 255, and I follow most papers in choosing this setting. I would not recommend setting the threshold to 0 or another value.
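For reference, here is a minimal sketch of the BER computation under discussion, using the standard Balanced Error Rate definition from the shadow-detection literature (lower is better); the function name and exact I/O conventions are assumptions, not the repository's actual evaluation code, but it shows where the 127.5 binarization threshold enters:

```python
import numpy as np

def ber(pred, gt, threshold=127.5):
    """Balanced Error Rate in percent (lower is better).

    pred: predicted shadow map with values in [0, 255]
    gt:   ground-truth mask (nonzero = shadow pixel)
    The prediction is binarized at `threshold` (127.5 = half of 255).
    """
    pred_bin = pred > threshold          # binarize the prediction
    gt_bin = gt > 0                      # binary ground truth
    tp = np.logical_and(pred_bin, gt_bin).sum()    # shadow pixels found
    tn = np.logical_and(~pred_bin, ~gt_bin).sum()  # non-shadow pixels found
    n_pos = gt_bin.sum()                 # total shadow pixels
    n_neg = (~gt_bin).sum()              # total non-shadow pixels
    # BER averages the error rates of the two classes
    return 100.0 * (1.0 - 0.5 * (tp / n_pos + tn / n_neg))
```

Lowering the threshold toward 0 turns more low-confidence pixels into positives, which can trade false negatives for false positives and shift BER; that is why a fixed, commonly used threshold keeps comparisons fair across methods.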
TVSD-Net is just a baseline network; I hope it can be easily surpassed and that more and more researchers will use ViSha in their work.