javiribera / locating-objects-without-bboxes

PyTorch code for "Locating objects without bounding boxes" - Loss function and trained models

Training on the Mall Dataset #37

Open hustcc19860606 opened 4 years ago

hustcc19860606 commented 4 years ago

I'm trying to replicate your training process starting from “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt”. I find that there is still "training space" after loading the weights, and the loss on the validation set is very large. This confuses me; can you give me some suggestions? [screenshot attached: QQ图片20200710151642]

javiribera commented 4 years ago

What is "training space"?

hustcc19860606 commented 4 years ago

The “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” file is a trained network weight. When I use it as a checkpoint, I find that run_avg (the running-average training loss) can still be reduced by further training. Is that expected? Moreover, the average loss on the validation set (avg val loss) is much larger than run_avg on the training set; is that expected? In my opinion, when starting from a well-trained network weight as a checkpoint, run_avg should have no room left to decrease during training. Can you give me some suggestions? @javiribera
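To be concrete, this is roughly what I do with the checkpoint (a minimal sketch; the key names and the commented-out lines are placeholders, not necessarily how this repository stores its checkpoints):

```python
# Inspect the released checkpoint and resume training from it.
# NOTE: the key names below are assumptions; print the keys to see
# what the .ckpt file actually contains.
import torch

ckpt = torch.load("mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt", map_location="cpu")
print(sorted(ckpt.keys()))  # check which entries are really stored

# model = ...                               # the repo's network (placeholder)
# model.load_state_dict(ckpt["model"])      # assumed key name
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# ...continuing training from here still lowers run_avg (the running-average training loss)
```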

javiribera commented 4 years ago

The “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” file is a trained network weight. When I use it as a checkpoint, I find that run_avg (the running-average training loss) can still be reduced by further training. Is that expected? Moreover, the average loss on the validation set is much larger than run_avg on the training set; is that expected?

Yes, this is called overfitting.

In my opinion, when starting from a well-trained network weight as a checkpoint, run_avg should have no room left to decrease during training. Can you give me some suggestions? @javiribera

I disagree. You can easily drive the training loss to 0 just by training a huge model for a very long time. What matters is the validation error. You are free to look for another approach that achieves a lower training and/or validation loss.

I don't see any of this as a problem of the method, the training or the code.
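To illustrate the point with a minimal, self-contained sketch (toy data and a toy model, not this repository's network or loss): the training loss can keep shrinking indefinitely, so checkpoints should be selected by validation loss, not by run_avg.

```python
# Toy example: training loss keeps dropping, but the checkpoint worth keeping
# is the one with the lowest validation loss.
import torch
import torch.nn as nn

torch.manual_seed(0)
x, y = torch.randn(256, 10), torch.randn(256, 1)
x_train, y_train = x[:200], y[:200]
x_val, y_val = x[200:], y[200:]

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

best_val = float("inf")
for epoch in range(500):
    model.train()
    optimizer.zero_grad()
    train_loss = loss_fn(model(x_train), y_train)
    train_loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()

    # train_loss keeps decreasing; val_loss eventually stalls or rises (overfitting).
    if val_loss < best_val:
        best_val = val_loss
        torch.save(model.state_dict(), "best_on_val.ckpt")
```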

hustcc19860606 commented 4 years ago

Thanks for your reply. I need more suggestions to reproduce the training process. I see that you trained for 9749 epochs to obtain the “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” weights, and the lowest_mahd recorded in that checkpoint is 4.32. With your weights we get judge.mahd = 91.57 on the validation set (seq_001601.jpg to seq_001800.jpg) and judge.mahd = 7.67 on the training set (seq_000001.jpg to seq_001600.jpg). Is that right? We have trained for about 2000 epochs, which took several days on an NVIDIA P40, and only the result shown in the picture below was achieved. Do we need more training epochs? Can you give me some suggestions? @javiribera [result image attached: seq_001819]
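For reference, this is the split we use (a short sketch; only the file names are shown, directory paths are ours):

```python
# Mall dataset split as described above: frames 1-1600 for training,
# frames 1601-1800 for validation (file names only; paths omitted).
train_files = [f"seq_{i:06d}.jpg" for i in range(1, 1601)]    # seq_000001.jpg .. seq_001600.jpg
val_files = [f"seq_{i:06d}.jpg" for i in range(1601, 1801)]   # seq_001601.jpg .. seq_001800.jpg
print(len(train_files), len(val_files))  # 1600 200
```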

csm-kr commented 3 years ago


Hi, I am now looking for the Mall dataset used in this paper. As mentioned in another issue, the Mall dataset website linked from this repository can no longer be accessed. Do you still have the Mall dataset you trained and experimented with? If so, could you share it?

Frank-Dz commented 3 years ago


Did you solve your problem? How many epochs did you train in the end?

Thanks!