Open alexjc opened 8 years ago
For the second option, I figured out the down-sampling using pooling. Is up-sampling possible as the opposite operation somehow?
Hey, I think you would have to code your own interpolated upsampling kernels, since the ones that come with the CUDA SDK (in the Performance Primitives (NPP) library) don't support batches.
If you just want to upsample in a fashion similar to bprop in a pooling operation, then the CUDA kernel might be easy to write. This is sometimes called "perforated upsampling".
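For reference, the CPU-side effect of perforated upsampling can be sketched in a few lines of NumPy: each input value lands in one corner of its output block and the rest is zero-filled, mirroring the backward pass of max pooling. The function name and `(channels, height, width)` layout here are illustrative assumptions, not part of cudarray.

```python
import numpy as np

def perforated_upsample(x, factor=2):
    """Perforated upsampling sketch (illustrative, not cudarray API).

    Places each input value in the top-left corner of a factor x factor
    output block and fills the remaining positions with zeros.
    Expects an array of shape (channels, height, width).
    """
    c, h, w = x.shape
    out = np.zeros((c, h * factor, w * factor), dtype=x.dtype)
    out[:, ::factor, ::factor] = x  # scatter values onto a strided grid
    return out
```

A CUDA kernel doing the same thing is a one-to-one scatter, which is why it is considered easy to write.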
Thanks! I think you can close this ticket unless you plan on adding it in there :-)
Actually, I do. :)
Ooh, looking forward to it!
I'm trying to scale up 3D matrices from a deeppy convolutional neural network, for example from `(512, 64, 64)` to `(512, 128, 128)`. Currently I'm doing this by going via a numpy array, then iterating and using `scipy.misc.imresize` with bilinear filtering, which is very slow but works. Is there a way to do this on CUDA? If so, is it exposed in cudarray?
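Even staying on the CPU, the per-channel `scipy.misc.imresize` loop can be replaced by a single vectorized NumPy call. The sketch below uses nearest-neighbor rather than bilinear interpolation (a quality trade-off, noted as an assumption), but it operates on all 512 channels at once:

```python
import numpy as np

def upsample_nearest(x, factor=2):
    """Nearest-neighbor upsampling of a (channels, height, width) array.

    Vectorized alternative to looping over channels with
    scipy.misc.imresize; repeats each pixel `factor` times along both
    spatial axes, e.g. (512, 64, 64) -> (512, 128, 128).
    """
    return np.repeat(np.repeat(x, factor, axis=1), factor, axis=2)
```

This touches each element a constant number of times, so it is far cheaper than 512 separate image-resize calls.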
The alternative (not quite as good) would be to scale the same matrices down, for example using the same code that does pooling. It'd take one matrix that's `(512, 128, 128)` and return `(512, 64, 64)`. I presume there is a way I can do this as a function call on an array rather than within a deeppy layer? Thanks!
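For the down-sampling direction, the pooling operation can be expressed as a plain array function without going through a deeppy layer. This is a NumPy sketch of 2x2 average pooling via a reshape trick; the name and exact semantics (mean rather than max) are my assumptions, not deeppy's pooling code:

```python
import numpy as np

def downsample_avg(x, factor=2):
    """Average-pool a (channels, height, width) array by `factor`.

    Reshapes each spatial axis into (blocks, factor) and averages over
    the factor axes, e.g. (512, 128, 128) -> (512, 64, 64).
    Assumes the spatial dimensions divide evenly by `factor`.
    """
    c, h, w = x.shape
    assert h % factor == 0 and w % factor == 0
    blocks = x.reshape(c, h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(2, 4))  # average within each block
```

Swapping `.mean(...)` for `.max(...)` gives the max-pooling variant instead.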