Closed: pengpaiSH closed this issue 7 years ago.
As Francois has explained multiple times already, a deconvolution layer is just a convolution layer combined with upsampling. I don't think there is an official deconvolution layer; the result is the same.
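For reference, the "convolution + upsampling" decoder step this comment describes can be sketched in 1-D numpy. This is a minimal illustration, not Keras code; `upsample_then_conv` is a hypothetical name:

```python
import numpy as np

def upsample_then_conv(x, k, factor=2):
    """Decoder step as described above: nearest-neighbour upsampling
    (repeat each sample `factor` times), then a 'same'-padded convolution."""
    up = np.repeat(x, factor)                  # dense upsampling, no zeros
    pad = len(k) // 2
    padded = np.pad(up, (pad, len(k) - 1 - pad))
    return np.array([np.dot(padded[i:i + len(k)], k[::-1])
                     for i in range(len(up))])

print(upsample_then_conv(np.array([1.0, 2.0]), np.array([0.5, 0.5])))
```

Because the upsampled map is dense (values repeated, not zero-filled), the following convolution sees real neighbours everywhere, which is one reason the two recipes are often treated as interchangeable in decoder architectures.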
@fbranchaud-charron-miovision Thanks for your quick response. I think there may still be a slight difference between conv and deconv. For example, a conv reduces a larger patch to a smaller one by performing the convolution operation (say, with no zero-padding and a kernel size >= 1), while a deconv just reverses this operation. If we think of a deconv as only a conv plus upsampling, then after upsampling the activation map is sparse, whereas a deconv produces a dense activation output.
@pengpaiSH You are right. The deconvolution is not equivalent to "convolution + upsampling". It is the opposite operation of the convolution, i.e. a swap of the forward and backward passes. When the stride is non-unit, you need to insert zeros in between the input pixels and then perform a convolution. A nice article is here: https://arxiv.org/pdf/1603.07285v1.pdf
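The zero-insertion recipe above can be checked in a few lines of 1-D numpy. This is a sketch of the textbook construction (function names are hypothetical, not part of any library): insert zeros between input samples, zero-pad the ends, then run an ordinary convolution. Each input sample then deposits a scaled copy of the kernel at position `stride * i` in the output, which is exactly the transposed (gradient-of-forward) convolution:

```python
import numpy as np

def crosscorr_valid(x, k):
    """Plain 'valid' 1-D cross-correlation of x with kernel k."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def transposed_conv1d(x, k, stride=2):
    """Stride-s transposed convolution via the recipe described above:
    insert (stride - 1) zeros between input samples, zero-pad both ends
    by (len(k) - 1), then run a true convolution (flipped kernel)."""
    up = np.zeros(stride * (len(x) - 1) + 1)
    up[::stride] = x                     # zero-insertion between pixels
    padded = np.pad(up, len(k) - 1)
    return crosscorr_valid(padded, k[::-1])

# Each input sample deposits a copy of the kernel, scaled by its value,
# at position stride * i of the output.
print(transposed_conv1d(np.array([1.0, 2.0, 3.0]), np.array([1.0, 0.5])))
```

Note how this differs from upsampling by repetition: the intermediate map really is sparse (mostly zeros), matching the point made earlier in the thread.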
We now have an actual deconv layer; check out the variational_autoencoder_deconv example. It is still in beta.
@EderSantana Thank you for your notification!
@jwgu Big thanks for your reference.
@EderSantana I tested the variational_autoencoder_deconv example, but got some errors:
RuntimeError: GpuDnnConvGradI: error getting worksize: CUDNN_STATUS_BAD_PARAM
Apply node that caused the error: GpuDnnConvGradI{algo='none', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0})
Toposort index: 352
Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, 4D), <theano.gof.type.CDataType object at 0x000000018EB71D30>, Scalar(float32), Scalar(float32)]
Inputs shapes: [(64, 64, 3, 3), (100, 64, 14, 14), (100, 64, 14, 64), 'No shapes', (), ()]
Inputs strides: [(576, 9, 3, 1), (12544, 196, 14, 1), (57344, 896, 64, 1), 'No strides', (), ()]
Inputs values: ['not shown', 'not shown', 'not shown', <PyCObject object at 0x0000000193194B20>, 1.0, 0.0]
Inputs name: ('kernel', 'grad', 'output', 'descriptor', 'alpha', 'beta')
Outputs clients: [[GpuDimShuffle{0,2,3,1}(GpuDnnConvGradI{algo='none', inplace=True}.0)]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
I don't know what happened. If possible, please give me some suggestions. Thanks.
Is this the latest version of Keras? Do you have the same problem with the TensorFlow backend?
@EderSantana I have used the latest version of Keras, but:
WARNING:theano.gof.compilelock:Overriding existing lock by dead process '6156' (I am process '13044')
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (100, 28, 28, 1) 0
____________________________________________________________________________________________________
conv2d_1 (Conv2D) (100, 28, 28, 1) 5
____________________________________________________________________________________________________
conv2d_2 (Conv2D) (100, 14, 14, 64) 320
____________________________________________________________________________________________________
conv2d_3 (Conv2D) (100, 14, 14, 64) 36928
____________________________________________________________________________________________________
conv2d_4 (Conv2D) (100, 14, 14, 64) 36928
____________________________________________________________________________________________________
flatten_1 (Flatten) (100, 12544) 0
____________________________________________________________________________________________________
dense_1 (Dense) (100, 128) 1605760
____________________________________________________________________________________________________
dense_2 (Dense) (100, 2) 258
____________________________________________________________________________________________________
dense_3 (Dense) (100, 2) 258
____________________________________________________________________________________________________
lambda_1 (Lambda) (100, 2) 0
____________________________________________________________________________________________________
dense_4 (Dense) (100, 128) 384
____________________________________________________________________________________________________
dense_5 (Dense) (100, 12544) 1618176
____________________________________________________________________________________________________
reshape_1 (Reshape) (100, 14, 14, 64) 0
____________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTransp (100, 14, 14, 64) 36928
____________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTransp (100, 14, 14, 64) 36928
____________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTransp (100, 29, 29, 64) 36928
____________________________________________________________________________________________________
conv2d_5 (Conv2D) (100, 28, 28, 1) 257
====================================================================================================
Total params: 3,410,058
Trainable params: 3,410,058
Non-trainable params: 0
____________________________________________________________________________________________________
('x_train.shape:', (60000L, 28L, 28L, 1L))
Using gpu device 0: GeForce GTX 750 (CNMeM is enabled with initial size: 80.0% of memory, cuDNN 5005)
Traceback (most recent call last):
File "<ipython-input-1-80080090c6dc>", line 1, in <module>
runfile('E:/DL EX/K/keras-master-2017-4-7/examples/variational_autoencoder_deconv.py', wdir='E:/DL EX/K/keras-master-2017-4-7/examples')
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 699, in runfile
execfile(filename, namespace)
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "E:/DL EX/K/keras-master-2017-4-7/examples/variational_autoencoder_deconv.py", line 136, in <module>
validation_data=(x_test, x_test))
File "C:\Anaconda2\lib\site-packages\keras\engine\training.py", line 1427, in fit
self._make_test_function()
File "C:\Anaconda2\lib\site-packages\keras\engine\training.py", line 1022, in _make_test_function
**self._function_kwargs)
File "C:\Anaconda2\lib\site-packages\keras\backend\theano_backend.py", line 1132, in function
return Function(inputs, outputs, updates=updates, **kwargs)
File "C:\Anaconda2\lib\site-packages\keras\backend\theano_backend.py", line 1118, in __init__
**kwargs)
File "C:\Anaconda2\lib\site-packages\theano\compile\function.py", line 326, in function
output_keys=output_keys)
File "C:\Anaconda2\lib\site-packages\theano\compile\pfunc.py", line 486, in pfunc
output_keys=output_keys)
File "C:\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 1808, in orig_function
defaults)
File "C:\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 1674, in create
input_storage=input_storage_lists, storage_map=storage_map)
File "C:\Anaconda2\lib\site-packages\theano\gof\link.py", line 699, in make_thunk
storage_map=storage_map)[:3]
File "C:\Anaconda2\lib\site-packages\theano\gof\vm.py", line 1047, in make_all
impl=impl))
File "C:\Anaconda2\lib\site-packages\theano\gof\op.py", line 935, in make_thunk
no_recycling)
File "C:\Anaconda2\lib\site-packages\theano\gof\op.py", line 839, in make_c_thunk
output_storage=node_output_storage)
File "C:\Anaconda2\lib\site-packages\theano\gof\cc.py", line 1190, in make_thunk
keep_lock=keep_lock)
File "C:\Anaconda2\lib\site-packages\theano\gof\cc.py", line 1131, in __compile__
keep_lock=keep_lock)
File "C:\Anaconda2\lib\site-packages\theano\gof\cc.py", line 1586, in cthunk_factory
key=key, lnk=self, keep_lock=keep_lock)
File "C:\Anaconda2\lib\site-packages\theano\gof\cmodule.py", line 1159, in module_from_key
module = lnk.compile_cmodule(location)
File "C:\Anaconda2\lib\site-packages\theano\gof\cc.py", line 1489, in compile_cmodule
preargs=preargs)
File "C:\Anaconda2\lib\site-packages\theano\sandbox\cuda\nvcc_compiler.py", line 417, in compile_str
return dlimport(lib_filename)
File "C:\Anaconda2\lib\site-packages\theano\gof\cmodule.py", line 302, in dlimport
rval = __import__(module_name, {}, {}, [module_name])
RuntimeError: ('The following error happened while compiling the node', GpuDnnConv{algo='small', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0}), '\n', 'could not create cuDNN handle: CUDNN_STATUS_NOT_INITIALIZED', "[GpuDnnConv{algo='small', inplace=True}(<CudaNdarrayType(float32, 4D)>, <CudaNdarrayType(float32, 4D)>, <CudaNdarrayType(float32, 4D)>, <CDataType{cudnnConvolutionDescriptor_t}>, Constant{1.0}, Constant{0.0})]")
You are using the old Theano back-end. I strongly suggest that you try the new back-end:
https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end(gpuarray)
@fchollet provides a fantastic blog post introducing autoencoders, Building Autoencoders in Keras. The convolutional autoencoder is interesting, as below:
As far as I understand, the encoding phase consists of convolutions and down-sampling for feature extraction. Conversely, the decoding part is done by convolutions and up-sampling to recover the original input. I am reading the paper "Learning Deconvolution Network for Semantic Segmentation". It designs a deconvolution network for semantic segmentation, composed of convolutional and deconvolutional layers. The so-called deconv layer is expected to play the same role as the conv layer in the decoding phase of the blog post. I would like to ask whether there is already an implementation of such a layer?
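As noted earlier in the thread, Keras does now ship such a layer, Conv2DTranspose (visible in the model summary above). Its output size follows the usual transposed-convolution shape arithmetic, which also explains the 29x29 map in that summary: with 'valid' padding, a 14-wide input, a 3-wide kernel, and stride 2 give 2*(14-1)+3 = 29. A small sketch of that formula (the helper name is hypothetical):

```python
def conv_transpose_out_len(n, k, stride, padding="valid"):
    """Output length along one spatial axis of a transposed convolution,
    per the standard shape arithmetic (see arXiv:1603.07285)."""
    if padding == "valid":
        return stride * (n - 1) + k   # every input pixel deposits a kernel
    if padding == "same":
        return stride * n             # output is exactly the upscaled size
    raise ValueError("unknown padding: %r" % padding)

# The 29x29 map in the summary above: 14 -> 29 with a 3x3 kernel, stride 2.
print(conv_transpose_out_len(14, 3, 2))  # 29
```

This is the inverse of the forward 'valid' conv shape rule n_out = (n - k) // stride + 1, which is what makes the layer suitable for the decoding phase described above.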