Closed: guanfaqian closed this issue 5 years ago
This code should work for 3D convolutions, possibly with small modifications to spectral_normalization.py.
Note that this code (the functions _update_u_v and _make_params) defines the height of the kernel. This does not actually refer to the geometric height of the kernel, but rather to the number of channels. In spectral normalization, we flatten all of the geometric dimensions of the weight tensor (in 2D, the width and height) into the variable called width.
Because this code simply flattens all of those dimensions, it should also work when the weight tensor is a 5D tensor rather than a 4D tensor (CCHWD rather than CCHW).
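To make the flattening concrete, here is a minimal NumPy sketch (the weight shape and iteration count are illustrative, not taken from the repo) showing that a 5D conv weight reduces to the same 2D matrix that the power iteration in _update_u_v operates on:

```python
import numpy as np

np.random.seed(0)

# Hypothetical 3D-conv weight: (out_channels, in_channels, depth, height, width)
w = np.random.randn(8, 4, 3, 3, 3)

# Flatten everything after the first dimension, as _update_u_v does with
# w.view(height, -1) -- "height" there is really out_channels.
w_mat = w.reshape(w.shape[0], -1)            # shape (8, 108)

# Power iteration to estimate the largest singular value (the spectral norm)
u = np.random.randn(w_mat.shape[0])
for _ in range(50):
    v = w_mat.T @ u
    v /= np.linalg.norm(v)
    u = w_mat @ v
    u /= np.linalg.norm(u)
sigma = u @ w_mat @ v

# Agrees with the exact top singular value from SVD
print(np.isclose(sigma, np.linalg.svd(w_mat, compute_uv=False)[0], rtol=1e-3))
```

Nothing in this reduction depends on how many trailing dimensions the weight has, which is why the same code path covers 2D and 3D kernels.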
Have you had any success using the SpectralNorm wrapper on 3D convolutional layers?
Can I flatten any two dimensions? For example, length and width, or length and height.
I don't think you need to change the code. The SpectralNorm layer should automatically flatten the last four dimensions of the weight tensor.
For 3D convolutions, the weight tensor is 5D (out_channels, in_channels, height, width, depth). The spectral norm should flatten in_channels, height, width, depth into one dimension of size in_channels*height*width*depth. In other words, it treats the weight tensor as a matrix with dimensions (out_channels, in_channels*height*width*depth).
Let me know if this works.
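As a sanity check on what this normalization produces, here is a NumPy sketch (shapes hypothetical): dividing the weight by its top singular value yields a tensor whose flattened matrix has spectral norm 1, which is the w_bar = w / sigma that spectral normalization uses, regardless of whether the weight is 4D or 5D:

```python
import numpy as np

np.random.seed(0)

# 5D weight of a 3D conv layer: (out_channels, in_channels, depth, height, width)
w = np.random.randn(8, 4, 3, 3, 3)
w_mat = w.reshape(w.shape[0], -1)    # same flattening SpectralNorm applies

# Exact spectral norm of the flattened weight matrix
sigma = np.linalg.svd(w_mat, compute_uv=False)[0]

# Normalized weight: its flattened matrix now has spectral norm 1
w_sn = w / sigma
top = np.linalg.svd(w_sn.reshape(w.shape[0], -1), compute_uv=False)[0]
print(np.isclose(top, 1.0))
```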
Oh, thanks.
The paper and the code both constrain the spectral norm of w for 2D convolutions; how should w be handled for 3D convolutions?