dougsm / ggcnn

Generative Grasping CNN from "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach" (RSS 2018)
BSD 3-Clause "New" or "Revised" License
499 stars, 140 forks

Fine tuning on pretrained weights #33

Closed. HanwenCao closed this issue 3 years ago.

HanwenCao commented 3 years ago

Dear author,

May I ask a question about training on custom data?


INFO:root:Beginning Epoch 00
INFO:root:Epoch: 0, Batch: 100, Loss: 0.0716
INFO:root:Epoch: 0, Batch: 200, Loss: 0.1626
INFO:root:Epoch: 0, Batch: 300, Loss: 0.0423
INFO:root:Epoch: 0, Batch: 400, Loss: 0.0685
INFO:root:Epoch: 0, Batch: 500, Loss: 0.1163
INFO:root:Epoch: 0, Batch: 600, Loss: 0.0476
INFO:root:Epoch: 0, Batch: 700, Loss: 0.0870
INFO:root:Epoch: 0, Batch: 800, Loss: 0.2107
INFO:root:Epoch: 0, Batch: 900, Loss: 0.1240
INFO:root:Validating...
INFO:root:176/249 = 0.706827

The pretrained network only achieves 0.706827 (176/249) on the Cornell dataset (data02.tar.gz) after the first epoch of fine-tuning. I was wondering, is this normal? Thank you!
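In case it helps other readers of this thread, the usual PyTorch pattern for fine-tuning from pretrained weights is sketched below. The `TinyNet` class, the checkpoint file name, and the learning rate are illustrative assumptions, not the repo's actual code (the real model lives in the repo's model definition files):

```python
import torch
import torch.nn as nn

# Minimal stand-in for a grasp-prediction network; the real GGCNN
# architecture is different and is only assumed here for illustration.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

# Save "pretrained" weights, then load them into a fresh model before
# fine-tuning -- the same pattern as resuming from any checkpoint.
pretrained = TinyNet()
torch.save(pretrained.state_dict(), "pretrained.pt")

model = TinyNet()
model.load_state_dict(torch.load("pretrained.pt"))

# Fine-tuning typically uses a smaller learning rate than training
# from scratch, so the pretrained weights are not destroyed early on.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

With the weights loaded this way, the first validation pass should reflect the pretrained performance rather than a random initialization, which is one quick sanity check for the behaviour described above.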