rheinzler / PointCloudDeNoising


About the hyper parameters of WeatherNet training #6

Closed · giorking closed this issue 3 years ago

giorking commented 4 years ago

Hi @rheinzler I'm not sure if this is the proper place to ask a question about WeatherNet; if not, please let me know and I'll close the issue. In the paper "CNN-based Lidar Point Cloud De-Noising in Adverse Weather", you mention several hyperparameters, such as the learning rate (4e-8), learning rate decay (0.9 per epoch), batch size (20), and the optimizer (Adam). I would like to confirm whether these are the same hyperparameters you used in your experiments. I am also curious about the other hyperparameters, such as how many epochs you trained for, how many GPUs you used, the weight decay, etc. Thanks.

rheinzler commented 3 years ago

Hi @giorking. Sorry for the hugely delayed response; I'll activate GitHub email notifications again. If you're still interested in the parameters: I used exponential learning rate decay (decay_rate=0.90) and stopped training after convergence.
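
For anyone trying to reproduce this setup, here is a minimal sketch of what the training loop could look like in PyTorch, assuming the hyperparameters mentioned in this thread (Adam, lr=4e-8, batch size 20, exponential decay with gamma=0.90 applied once per epoch). The stand-in model, tensor shapes, class count, and random data are placeholders for illustration only, not the actual WeatherNet implementation:

```python
# Minimal sketch of the training configuration discussed above (PyTorch).
# The Conv2d stand-in, input shapes, and class count are hypothetical;
# only the optimizer, learning rate, batch size, and decay rate come
# from the paper / the comments in this issue.
import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import ExponentialLR

model = nn.Conv2d(2, 4, kernel_size=3, padding=1)    # placeholder for WeatherNet
criterion = nn.CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=4e-8)          # learning rate from the paper
scheduler = ExponentialLR(optimizer, gamma=0.90)       # decay_rate=0.90 per epoch

for epoch in range(10):                                # in practice: train until convergence
    for _ in range(5):                                 # stand-in for a DataLoader with batch_size=20
        x = torch.randn(20, 2, 32, 400)                # batch of 20 projected lidar images (placeholder shape)
        y = torch.randint(0, 4, (20, 32, 400))         # per-point class labels (placeholder classes)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()                                    # apply the exponential decay once per epoch
```

`ExponentialLR` multiplies the learning rate by `gamma` on each `scheduler.step()`, so calling it once per epoch matches the per-epoch decay of 0.9 described in the paper.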