zhenngbolun / S-Net

S-Net: A Scalable Convolutional Neural Network for JPEG Compression Artifact Reduction

End of training? #3

Open FabianBartels opened 5 years ago

FabianBartels commented 5 years ago

Thanks for providing the project code. Your work is amazing :)

I'm currently writing my thesis and want to train with the DIV2K dataset as proposed.

I've run 190000 batches through the model --> MTRN_k8_f256_c3_QF40_190000.h5

When should I stop the training process? Do you think this state is already sufficient for fine-tuning?

Greets :)

zhenngbolun commented 5 years ago

In this work, we didn't use a validation dataset to monitor the training. As described in our paper: "The initial learning rate was set to 10−4 at the start of the training procedure, and subsequently halved after every set of 104 batch updates until it was below 10−6. All network models for different convolutional units were trained with 2 × 105 batch updates." You can simply stop your training at 200000 batches.
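For reference, here is a minimal sketch of that schedule as a Keras callback, since the checkpoint is a .h5 file. This is only an illustration of the quoted schedule, not the actual training code from this repository; `model` and the loss/optimizer choices are placeholders you would replace with your own setup.

```python
from tensorflow import keras

# Schedule from the paper: start at 1e-4, halve every 1e4 batch updates,
# stop decaying once the rate would fall below 1e-6, train 2e5 batches total.
INITIAL_LR = 1e-4
MIN_LR = 1e-6
HALVE_EVERY = 10_000      # batch updates between halvings
TOTAL_BATCHES = 200_000   # total batch updates

class HalvingSchedule(keras.callbacks.Callback):
    """Halves the learning rate every HALVE_EVERY batches, keeping it >= MIN_LR."""
    def __init__(self):
        super().__init__()
        self.batches_seen = 0

    def on_train_batch_end(self, batch, logs=None):
        self.batches_seen += 1
        if self.batches_seen % HALVE_EVERY == 0:
            lr = float(keras.backend.get_value(self.model.optimizer.learning_rate))
            new_lr = lr * 0.5
            if new_lr >= MIN_LR:
                keras.backend.set_value(self.model.optimizer.learning_rate, new_lr)

# Placeholder usage (your own S-Net model and DIV2K patch generator):
# model.compile(optimizer=keras.optimizers.Adam(INITIAL_LR), loss="mae")
# model.fit(train_generator, steps_per_epoch=1_000,
#           epochs=TOTAL_BATCHES // 1_000, callbacks=[HalvingSchedule()])
```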

Alternatively, you can refer to the code in our recent work "IDCN", which uses LIVE1 as a validation dataset to monitor training progress.
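If you want to follow that alternative, a hedged sketch of validation monitoring is below: it periodically computes PSNR on a held-out set (such as LIVE1) and keeps the best weights. The arrays `val_inputs`/`val_targets` and the save path are placeholders, not the actual IDCN data-loading code.

```python
import numpy as np
from tensorflow import keras

class ValPSNRMonitor(keras.callbacks.Callback):
    """Evaluates PSNR on a held-out set at the end of each epoch and saves the best weights."""
    def __init__(self, val_inputs, val_targets, save_path="best_weights.h5"):
        super().__init__()
        self.val_inputs = val_inputs    # compressed (JPEG) validation patches
        self.val_targets = val_targets  # uncompressed ground-truth patches
        self.save_path = save_path
        self.best_psnr = -np.inf

    def on_epoch_end(self, epoch, logs=None):
        preds = self.model.predict(self.val_inputs, verbose=0)
        mse = np.mean((preds - self.val_targets) ** 2)
        psnr = 10.0 * np.log10(1.0 / max(mse, 1e-12))  # assumes pixel values in [0, 1]
        if psnr > self.best_psnr:
            self.best_psnr = psnr
            self.model.save_weights(self.save_path)
        print(f"epoch {epoch}: validation PSNR = {psnr:.2f} dB (best {self.best_psnr:.2f})")
```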