I read a few days ago about the multi-scale CNN in the OverFeat method, whose presentation you can access via this link. You ran the CNN on different scales of an image and then combined all the output maps. In that presentation you said:
Classification performed at 6 scales at test time, but only 1 scale at run time.
So my question is: if we use 6 different scales, does the architecture have different convolution layers for every scale (I guess so)? If that's the case, how does OverFeat use just 1 scale at run time? If we run only one specific scale, how can we access the feature extractors of the other scales? I also see that in the article you combine the feature maps of the different scales, but I can't figure out how this step is performed.
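To make my confusion concrete: my current understanding is that a convolutional layer with fixed weights can slide over an input of any spatial size, so perhaps the 6 scales share the same filters and only the output map sizes differ. Here is a minimal sketch of that idea (plain NumPy, my own illustration, not code from the article):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution: slide one fixed kernel over the image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.random.randn(3, 3)      # one fixed set of "learned" weights

small = np.random.randn(10, 10)     # input at one scale
large = np.random.randn(20, 20)     # the same image at a larger scale

# Same kernel, different input sizes -> different output map sizes.
print(conv2d_valid(small, kernel).shape)  # (8, 8)
print(conv2d_valid(large, kernel).shape)  # (18, 18)
```

If that reading is right, then the scales would differ only in the size of the resulting feature maps, not in the layers themselves; but please correct me if I've misunderstood the presentation.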
Thanks