junyongyou / triq

TRIQ implementation
MIT License

Training #10

Closed zmm96 closed 3 years ago

zmm96 commented 3 years ago

Hello, I want to reproduce your work and reimplement it in PyTorch. Can you tell me more details about the training? When you say "A base learning rate 5e-5 was used for pretraining", do you mean pretraining on the same datasets (KonIQ-10k and LIVE-C)?

junyongyou commented 3 years ago

Yes, I used 5e-5 to train the model first, starting from the ImageNet pretrained weights for the base net. Subsequently, I used 1e-6 to retrain the model from the weights obtained in the previous training.
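In PyTorch terms, the two-stage schedule described above could be sketched roughly as follows. This is only an illustration of the idea, not the repo's actual code: the model, loss, optimizer choice, and data loader here are stand-ins, not taken from TRIQ.

```python
import torch
from torch import nn

# Hypothetical tiny stand-in for the TRIQ model; in practice this would be
# a CNN backbone initialised with ImageNet weights plus a transformer head.
model = nn.Linear(10, 1)
criterion = nn.MSELoss()

def train_stage(model, lr, epochs, loader):
    """Run one training stage at a fixed base learning rate."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

# Dummy batch in place of the real KonIQ-10k / LIVE-C loaders.
loader = [(torch.randn(4, 10), torch.randn(4, 1))]

# Stage 1: base training at 5e-5 (backbone starts from ImageNet weights).
train_stage(model, lr=5e-5, epochs=1, loader=loader)
# Stage 2: fine-tune the resulting weights at 1e-6.
train_stage(model, lr=1e-6, epochs=1, loader=loader)
```

The key point is that stage 2 continues from the weights produced by stage 1, only with a smaller learning rate; nothing is re-initialised between the stages.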

zmm96 commented 3 years ago

Thanks for your reply. I want to know whether you partitioned the data into training, validation, and test sets. I don't see a validation set. Maybe you used the test set as the validation set?

zmm96 commented 3 years ago

And what do you mean by KonIQ-half-sized in the experiment table? Thanks so much for your work.

junyongyou commented 3 years ago

Hi, I didn't split the dataset into three parts, so basically there was no test set; instead, I used other datasets as test sets. KonIQ-half-sized means the KonIQ images have been halved in size, as in the original KonIQ paper. Maybe you could first read the papers and then see if you still have questions.

zmm96 commented 3 years ago

Did you use the same training set for the two stages (lr=5e-5, lr=1e-6) of your experiment?

junyongyou commented 3 years ago

Did you use the same training set for the two stages (lr=5e-5, lr=1e-6) of your experiment?

Yes, I did.

zmm96 commented 3 years ago

Then I saw ImageNet in your dataloader code. Do you mean you trained on ImageNet again yourself, starting from the pretrained weights?

junyongyou commented 3 years ago

Then I saw ImageNet in your dataloader code. Do you mean you trained on ImageNet again yourself, starting from the pretrained weights?

No. I only used ImageNet pretrained weights.
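A minimal sketch of what "only used ImageNet pretrained weights" means in practice: the backbone is initialised from an existing checkpoint and ImageNet itself is never re-trained. The tiny `Sequential` backbone and the checkpoint file name below are placeholders, not anything from the TRIQ repo; in a real PyTorch port the checkpoint would be e.g. torchvision's ImageNet weights.

```python
import torch
from torch import nn

# Stand-in backbone; in a real port this would be the CNN base net
# whose ImageNet-pretrained weights ship with torchvision.
backbone = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))

# Pretend this checkpoint holds the ImageNet-pretrained weights.
torch.save(backbone.state_dict(), "pretrained_stub.pt")

# Initialising from pretrained weights: load the checkpoint into the
# backbone, then train only the IQA objective on top of it.
state = torch.load("pretrained_stub.pt")
backbone.load_state_dict(state)
```

From here, the IQA training stages run on the quality datasets only; loading the checkpoint replaces any training on ImageNet.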

zmm96 commented 3 years ago

Can you tell me the details of the training: total epochs, warmup epochs, and so on? It seems that your code does not match the paper (e.g., the learning rate).

junyongyou commented 3 years ago

Can you tell me the details of the training: total epochs, warmup epochs, and so on? It seems that your code does not match the paper (e.g., the learning rate).

Hi, there might be some slight differences in the settings between the paper and the code. You can find the total epochs, warmup epochs, and hold epochs in the code. I used 5e-5 as the learning rate in the base training and 1e-6 in the fine-tuning.
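The warmup/hold schedule mentioned here is a common piecewise pattern: ramp the learning rate up linearly, hold it at the base value, then decay. A sketch of that shape, with illustrative epoch counts (the real values live in the repo's training script, not here):

```python
def lr_at_epoch(epoch, base_lr=5e-5, warmup_epochs=5, hold_epochs=10, total_epochs=40):
    """Piecewise schedule: linear warmup, hold at base_lr, then linear decay.

    All epoch counts are illustrative defaults, not TRIQ's actual settings.
    """
    if epoch < warmup_epochs:
        # Linear warmup from base_lr/warmup_epochs up to base_lr.
        return base_lr * (epoch + 1) / warmup_epochs
    if epoch < warmup_epochs + hold_epochs:
        # Hold phase at the base learning rate.
        return base_lr
    # Linear decay towards 0 over the remaining epochs.
    remaining = total_epochs - warmup_epochs - hold_epochs
    progress = (epoch - warmup_epochs - hold_epochs) / remaining
    return base_lr * (1.0 - progress)
```

With these placeholder defaults, epochs 0-4 warm up to 5e-5, epochs 5-14 hold it, and the rest decay linearly; the fine-tuning stage would repeat the same shape with `base_lr=1e-6`.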