Open xbcReal opened 5 years ago
I have reimplemented EAST in Keras based on the original TensorFlow implementation, but I don't think I used L2 loss. Despite that, I was able to produce roughly the same performance as the TF code on the ICDAR15 dataset, and it took about 400~600 epochs IIRC. I've tried running this PyTorch code as-is and I've only got about 0.03 precision and 0.25 recall. How many epochs did you train it to get 0.4 hmean? And did you make any changes to the code or parameters to get that score?
I forget the specific params, but one thing I can assure you of is that in this repo, adding L2 loss improves performance to what I mentioned above.
I see. It's unfortunate that we can't reproduce the original results with this code. I wonder what the cause(s) might be. I guess I'll go back to my Keras code.
Hi bro, I read your code and the source TF version, and I trained with your code but found I could only get 0.4 hmean on the IC2015 test dataset. I also found that in your implementation the network lacks L2 regularization, while the TF version includes a 1e-5 L2 term in the total loss.
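For anyone wanting to try this, here is a minimal sketch of two ways to add the missing L2 penalty in PyTorch. The `model` and `criterion` below are illustrative stand-ins, not names from this repo, and the 1e-5 weight is the value mentioned above from the TF version.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)   # stand-in for the EAST network
criterion = nn.MSELoss()  # stand-in for the detection loss

x = torch.randn(8, 4)
y = torch.randn(8, 2)

l2_weight = 1e-5  # the weight reported for the TF version's L2 term

# Option 1: add an explicit L2 term to the total loss,
# mirroring how the TF implementation builds its total loss.
l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
total_loss = criterion(model(x), y) + l2_weight * l2_penalty

# Option 2: with plain SGD, weight_decay=wd adds wd*p to each
# gradient, which matches the gradient of (wd/2)*||p||^2, so
# wd = 2 * l2_weight reproduces the same penalty. (Note this
# equivalence does not hold exactly for Adam; see AdamW.)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            weight_decay=2 * l2_weight)
```

Either approach should give the regularization effect being discussed; the explicit term is closer to what the TF code does.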
Hi @xbcReal, I wonder if you can reach much higher precision and recall after adding L2 regularization?
@xbcReal I tried L2 regularization with different weight values on my own dataset, but it gave very poor results. Were you able to get ICDAR results similar to those in the paper? @BYJRK I am currently using the workaround for NaN errors; were you getting similar results with that?
@saharudra Sorry, I don't think this will achieve the ICDAR 2015 results mentioned in the EAST paper, due to the low recall rate and hence low hmean. To be more specific, after modifying the thresholds, I could achieve at most about 0.82 precision and 0.6 recall. I have given up working on this one for now.