mmartin56 opened 4 years ago
I think we can add this feature optionally - ranges of resizing:
Thanks! Here's my code, in case it helps:
    // random=1: only allow downsizing - flip any upscale factor into a downscale
    if (l.random == 1 && random_val > 1) random_val = 1 / random_val;

    int dim_w = roundl(random_val * init_w / 32 + 1) * 32;
    int dim_h = roundl(random_val * init_h / 32 + 1) * 32;

    // clamp so the network input never exceeds its configured size
    if (l.random == 1)
    {
        dim_w = fmin(dim_w, init_w);
        dim_h = fmin(dim_h, init_h);
    }

    if (avg_loss < 0) // at the beginning of training
    {
        if (l.random == 1)
        {
            // downsizing-only mode: start at the configured size
            dim_w = init_w;
            dim_h = init_h;
        }
        else
        {
            // original behaviour: start at the largest (1.4x) size
            dim_w = roundl(1.4 * init_w / 32 + 1) * 32;
            dim_h = roundl(1.4 * init_h / 32 + 1) * 32;
        }
    }
So random=1 is downsizing only, and any random>1 keeps the normal resizing (both up and down).
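To make the effect concrete, here is a small standalone sketch (not darknet code; the 608x608 base size and the scale values are just made-up examples) that runs the same arithmetic and clamp:

    /* compile with: gcc downsize_sketch.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const int init_w = 608, init_h = 608;          /* hypothetical base network size */
        const float scales[] = { 0.7f, 1.0f, 1.4f };   /* example values of random_val */

        for (int i = 0; i < 3; ++i) {
            float random_val = scales[i];

            /* downsizing only: flip any upscale factor into a downscale */
            if (random_val > 1) random_val = 1 / random_val;

            int dim_w = roundl(random_val * init_w / 32 + 1) * 32;
            int dim_h = roundl(random_val * init_h / 32 + 1) * 32;

            /* clamp so we never exceed the configured network size */
            dim_w = fmin(dim_w, init_w);
            dim_h = fmin(dim_h, init_h);

            printf("random_val %.2f -> %d x %d\n", scales[i], dim_w, dim_h);
        }
        return 0;
    }

This prints 448x448, 608x608 and 480x480. Note that even random_val = 1.0 would give 640 for a 608 network because of the "+ 1" in the formula, so the fmin clamp is what actually guarantees the input never grows.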
Hi @mmartin56
Did you see any improvement by downsizing only?
Thanks
Hi @iraadit, as far as I can tell I haven't seen any drop in performance from downsizing only - but to be honest I haven't run proper tests.
That allowed me to increase the batch size, which also hasn't resulted in a significant increase in performance - so all in all, probably not much of a difference. :)
Hi @AlexeyAB
The option random=0 means you can train with larger mini-batches, which, according to you (#4386), leads to a higher mAP. random=1 means data augmentation, which I guess also leads to a higher mAP, but for a different reason (model generalisation).
Which one is the best option?
Could downsizing-only resizing be a better compromise? That way we get more data augmentation than with random=0 while keeping mini-batches of the same size, so it seems better than random=0.
That could be achieved by something like:
if ( l.random == 2 && random_val > 1 ) random_val = 1/random_val;
after the lines
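Putting it together (just a sketch reusing the variables from the snippet above, not tested), the resize block could then look like this, with random=1 keeping its current up/down behaviour and random=2 becoming the downsizing-only mode:

    // sketch: random=2 = downsizing only, random=1 = unchanged up/down resizing
    if (l.random == 2 && random_val > 1) random_val = 1 / random_val;

    int dim_w = roundl(random_val * init_w / 32 + 1) * 32;
    int dim_h = roundl(random_val * init_h / 32 + 1) * 32;

    if (l.random == 2)
    {
        dim_w = fmin(dim_w, init_w);   // never exceed the configured size
        dim_h = fmin(dim_h, init_h);
    }

    if (avg_loss < 0) // at the beginning of training
    {
        if (l.random == 2)
        {
            dim_w = init_w;
            dim_h = init_h;
        }
        else
        {
            dim_w = roundl(1.4 * init_w / 32 + 1) * 32;
            dim_h = roundl(1.4 * init_h / 32 + 1) * 32;
        }
    }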