@cardwing In the retraining process, the loss looks normal.
@cardwing However, different test runs give different results with the same model. I really don't know what's wrong with it; the retraining itself looks normal.
@Endless-Hao, the position mismatch problem still exists. Can you upload the full code you use to train and test? I will have a look at it and check which part is wrong.
@cardwing I have uploaded the full code. Thank you for your kindness. Fullcode.zip
@Endless-Hao, can you upload your trained model here so that I can run a test on my local server? I think the problem may lie in the evaluation process. However, according to your description, the training accuracy is only 80%, while on my local server it can reach 90%.
@cardwing Here is my trained model. By the end of training, the accuracy also reaches 90% or more; I was just describing the interval it falls in. I have uploaded it to Google Drive. https://drive.google.com/open?id=1fLxMfgEpLXQqCl2SENsuB3nub4JW_IVz
@cardwing I also get accuracy above 90% and accuracy-back above 97% by the end. The training process is normal and looks nearly the same as the training curve you uploaded in another issue.
@Endless-Hao, it is obvious that the problem lies in your evaluation process. The following figure was obtained with your uploaded model; it looks fine, and the overall F1-measure is also normal. Please carefully check your evaluation code (the scripts provided by SCNN). Besides, you should train the model from the provided vgg.npy instead of from the final testing model.
The following two log files were obtained with your model.
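For reference, initializing from vgg.npy usually looks something like the sketch below. This is a minimal illustration assuming the common `{layer_name: [weights, biases]}` .npy layout, not the repo's exact code:

```python
import numpy as np
import tensorflow as tf

# Minimal sketch, assuming vgg.npy stores {layer_name: [weights, biases]}
# as in the widely used VGG-16 .npy release; variable names are illustrative.
vgg_params = np.load('vgg.npy', encoding='latin1', allow_pickle=True).item()

def init_from_vgg(sess):
    assign_ops = []
    for var in tf.global_variables():
        layer = var.op.name.split('/')[0]          # e.g. 'conv1_1'
        if layer not in vgg_params:
            continue                               # skip non-VGG variables
        idx = 0 if 'weight' in var.op.name else 1  # [0]=weights, [1]=biases
        assign_ops.append(tf.assign(var, vgg_params[layer][idx]))
    sess.run(assign_ops)
```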
@cardwing Hello, I think my evaluation process is fine: when I use your probability maps, the evaluation results are correct. The problem is my own test output; the probability maps it produces are very strange. Could you look at my test code and see whether it has any problems? I have uploaded the probability maps produced by my test. https://drive.google.com/open?id=1n-zfSHwLOAsh9EGDKrLDvZ7dTL1n2aEB
@cardwing Given these inconsistent test results, all I can think of is that some of the parameters are loaded from the model while others are randomly initialized; otherwise this would be impossible.
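One way to rule this in or out is to compare the variables in the graph against those stored in the checkpoint. A hypothetical sketch (the checkpoint path is a placeholder):

```python
import tensorflow as tf

# Hypothetical sketch: after building the network graph, list every
# variable that is absent from the checkpoint and would therefore be
# randomly initialized instead of restored.
ckpt_path = 'model/my_trained_model.ckpt'  # placeholder path
reader = tf.train.NewCheckpointReader(ckpt_path)
ckpt_vars = set(reader.get_variable_to_shape_map())

graph_vars = {v.op.name for v in tf.global_variables()}
missing = sorted(graph_vars - ckpt_vars)
print('Not in checkpoint (randomly initialized):', missing)
```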
@Endless-Hao, the probability maps you provided are indeed unsatisfactory. The testing code looks fine, as it is exactly the same as mine. The mismatch problem cannot be caused by random initialization of some model parameters. What version of TensorFlow are you using?
It is also weird that the code cannot be debugged in your PyCharm.
@cardwing My TensorFlow version is 1.10.1.
@Endless-Hao, my TensorFlow version is 1.3.0. Some functions may have changed as TensorFlow was updated. You will need to track down the problem yourself, since I have no idea what to try next.
Another possible cause of the differing outputs is that the batch normalization (BN) parameters may not be fixed in the testing phase.
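In TensorFlow this usually means making sure the training flag fed to batch normalization is False at test time. A minimal sketch (not the repo's exact code; the input size is assumed):

```python
import tensorflow as tf

# Minimal sketch: if `training` stays True at test time, BN uses
# per-batch statistics instead of the stored moving averages, so the
# same model can give different outputs on different runs.
is_training = tf.placeholder(tf.bool, name='is_training')

def conv_bn_relu(x, filters):
    x = tf.layers.conv2d(x, filters, 3, padding='same', use_bias=False)
    x = tf.layers.batch_normalization(x, training=is_training)
    return tf.nn.relu(x)

inputs = tf.placeholder(tf.float32, [None, 288, 800, 3])
features = conv_bn_relu(inputs, 64)

# At test time, feed is_training=False so BN uses its moving statistics:
# sess.run(features, feed_dict={inputs: batch, is_training: False})
```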
@cardwing I switched to another computer where the TensorFlow version is 1.7, and now I get good results. Maybe the problem lies in some function in TensorFlow. Thank you very much for your help.
^_^
First of all, thanks for your amazing work. When I try to follow it, I find that the training accuracy reaches 90% during training, so I want to know: when is the model good enough?
The training accuracy is fine. Just try the evaluation process and see if you can achieve similar performance.
Thank you so much.
The model I trained gets an F1-score of 0.68, which is 3 points lower than yours. I wonder how you trained your model; did you use the same parameters as in global_config?
@cardwing Hello, I am back again. I have retrained from your pre-trained model on the CULane data. The training process looks normal: accuracy is 70% to 80%, accuracy-back is 90% to 98%, and the number of training iterations is 90000. But when I test the model, I again get poor results. Evaluating them gives the information below:
Evaluating the results... tp: 6783 FP: 96950 fn: 98103 finished process file precision: 0.065389 recall: 0.0646702 F-measure: 0.0650276
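(A quick check shows those metrics are at least internally consistent with the logged tp/fp/fn counts, so the evaluation arithmetic itself is fine:)

```python
# Sanity check: recompute the metrics from the logged counts.
tp, fp, fn = 6783, 96950, 98103
precision = tp / float(tp + fp)                     # ~0.065389
recall = tp / float(tp + fn)                        # ~0.064670
f1 = 2 * precision * recall / (precision + recall)  # ~0.065028
```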
The results on real pictures are shown below:
[Result images: 00900.results.png, 02430.results.png]
Is there anything wrong in the test code? I retrained the model and the training details are normal, but the test results are not good.