Open hustcc19860606 opened 4 years ago
What is "training space"?
The “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” file contains the trained weights of the network. When I load it as a checkpoint and continue training, I find that run_avg (the running average of the training loss) can still be reduced. Is this behavior expected? Moreover, the average loss on the validation set is much larger than run_avg on the training set; is that expected? In my opinion, when a well-trained weight file is used as the checkpoint, run_avg should have no room left to descend during training. Can you give me some suggestions? @javiribera
The “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” file contains the trained weights of the network. When I load it as a checkpoint and continue training, I find that run_avg can still be reduced. Is this behavior expected? Moreover, the average loss on the validation set is much larger than run_avg on the training set; is that expected?
Yes, this is called overfitting.
In my opinion, when a well-trained weight file is used as the checkpoint, run_avg should have no room left to descend during training. Can you give me some suggestions? @javiribera
I disagree. You can easily drive the training loss to zero just by training a huge model for a very long time. What matters is the validation error. You are free to find another approach that achieves a lower training and/or validation loss.
I don't see any of this as a problem with the method, the training, or the code.
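One generic way to act on that advice is to select the model by validation loss rather than training loss. The sketch below is not the repository's training script; model, criterion, optimizer, train_loader, val_loader, and device are placeholder names, and the loop only illustrates the idea of keeping the checkpoint with the lowest validation loss.

```python
import copy
import torch

def train_and_keep_best(model, criterion, optimizer,
                        train_loader, val_loader, device, epochs):
    """Minimal sketch: since the validation error is what matters, keep the
    weights that minimize the validation loss instead of chasing a training
    loss of zero. All arguments are generic placeholders."""
    best_val_loss = float("inf")
    best_state = None

    for epoch in range(epochs):
        # One pass over the training set.
        model.train()
        for images, targets in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), targets)
            loss.backward()
            optimizer.step()

        # Evaluate on the held-out validation split.
        model.eval()
        total, batches = 0.0, 0
        with torch.no_grad():
            for images, targets in val_loader:
                total += criterion(model(images.to(device)), targets).item()
                batches += 1
        val_loss = total / max(batches, 1)

        # Model selection: remember the weights with the lowest validation loss.
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())

    return best_val_loss, best_state
```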
Thanks for your reply. I need more suggestions to reproduce the training process. I see that you trained for 9749 epochs to obtain the “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” weights, and that the lowest_mahd recorded in that checkpoint is 4.32. Using your weights, we get judge.mahd = 91.57 on the validation set (seq_001601.jpg to seq_001800.jpg) and judge.mahd = 7.67 on the training set (seq_000001.jpg to seq_001600.jpg). Is that right? We have trained for about 2000 epochs, which took several days on an NVIDIA P40, and only reached the result shown in the picture below. Do we need more training epochs? Can you give me some suggestions? @javiribera
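For reference, judge.mahd is a mean averaged Hausdorff distance over the evaluated images. Below is a minimal sketch of one common definition of the averaged Hausdorff distance between an estimated point set and a ground-truth point set (mean nearest-neighbour distance in both directions, summed); the exact formulation in this repository's evaluation code may differ slightly, and NumPy/SciPy are assumed to be available.

```python
import numpy as np
from scipy.spatial.distance import cdist

def averaged_hausdorff_distance(est_points, gt_points):
    """Averaged Hausdorff distance between two 2-D point sets.
    est_points: array of shape (N, 2), gt_points: array of shape (M, 2)."""
    est_points = np.asarray(est_points, dtype=float)
    gt_points = np.asarray(gt_points, dtype=float)
    if len(est_points) == 0 or len(gt_points) == 0:
        return float("inf")  # undefined when one of the sets is empty

    d = cdist(est_points, gt_points)   # pairwise Euclidean distances
    term_1 = d.min(axis=1).mean()      # each estimate -> nearest ground truth
    term_2 = d.min(axis=0).mean()      # each ground truth -> nearest estimate
    return term_1 + term_2

# MAHD over a split would then be the mean of this value across images, e.g.:
# mahd = np.mean([averaged_hausdorff_distance(preds[i], gts[i]) for i in range(n_images)])
```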
Hi~ I am now looking for the Mall dataset used in this paper. As mentioned in another issue, the website for the Mall dataset linked from this repository can no longer be accessed. Do you still have the copy of the Mall dataset that you trained and experimented with? If so, could you share it?
Did you solve your problem? How many epochs did you train for in the end?
Thanks!
I'm trying to replicate your training process starting from the “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” checkpoint. I find that there is still “training space” after loading the weights (i.e., the training loss keeps decreasing), and the loss on the validation set is very large. I'm confused about this; can you give me some suggestions?
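One way to investigate this is to load the .ckpt file, inspect what it actually stores (weights, epoch counter, best metric such as lowest_mahd), and then resume training while logging both the training and validation losses. The sketch below is generic PyTorch, not this repository's code; the key names shown in the comments are assumptions and should be replaced with whatever the printout shows.

```python
import torch

CKPT_PATH = "mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt"

# Load the checkpoint on CPU and inspect what it contains.
checkpoint = torch.load(CKPT_PATH, map_location="cpu")
print(type(checkpoint))
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))  # e.g. weights, epoch, best metric, optimizer state

# Assuming the checkpoint is a dict with a state_dict under some key
# (key names below are guesses; adjust them after inspecting the printout):
# model.load_state_dict(checkpoint["model"])
# optimizer.load_state_dict(checkpoint["optimizer"])
# start_epoch = checkpoint.get("epoch", 0)
#
# Resuming from here, the running average of the training loss can still
# decrease (the model keeps fitting the training images), while the
# validation loss stays much higher -- the overfitting described above.
```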