zhaofang0627 / AnchorUDF

MIT License

Large gap between training/eval performance #21

Open hanhuili opened 2 years ago

hanhuili commented 2 years ago

@zhaofang0627 Hi! Thanks for your excellent research and implementation. However, when we tried to reproduce the results following your instructions, we found that the evaluation losses (on both the training and test subsets) are far higher than the losses reported during training.

For example, the losses obtained at epoch 4 are:

- mean train: L1Loss 72.158565, ChamLoss 3.533241, DirectLoss 0.000000
- eval (test): L1Loss 205.070717, ChamLoss 16.174886
- eval (train): L1Loss 209.616318, ChamLoss 16.152732

We did not make any changes to your implementation. Do you have any idea what might cause this issue? Thank you.
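(Not from the thread, just a speculative diagnostic sketch: one common cause of a large train/eval loss gap in PyTorch models is BatchNorm, which uses batch statistics in `train()` mode but running statistics in `eval()` mode. The toy model below is hypothetical, standing in for the AnchorUDF network, and shows how to measure the loss on the same batch in both modes to see whether mode switching alone explains the gap.)

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy network (NOT the AnchorUDF model) containing a
# BatchNorm layer, which behaves differently in train vs eval mode.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.BatchNorm1d(16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
x = torch.randn(4, 8)
y = torch.randn(4, 1)
loss_fn = nn.L1Loss()

# In train mode, BatchNorm normalizes with the current batch's
# mean/variance; in eval mode, it uses the running statistics
# (freshly initialized to mean 0 / var 1 here), so the two losses
# on the identical batch generally differ.
model.train()
with torch.no_grad():
    train_mode_loss = loss_fn(model(x), y).item()

model.eval()
with torch.no_grad():
    eval_mode_loss = loss_fn(model(x), y).item()

print(f"train-mode loss: {train_mode_loss:.6f}")
print(f"eval-mode loss:  {eval_mode_loss:.6f}")
```

If the two numbers diverge sharply even after many training iterations, the BatchNorm momentum or an accidentally skipped `model.eval()` / `model.train()` call is worth checking before suspecting the loss computation itself.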