Some small comments on the naming. I'd use:

- ConvolutionalEncoderDecoder2D

> What do you think? I've called them `post` and `pre`, but I don't like these names. @giovannivolpe any better idea?

I'd go with `preprocess` and `postprocess`.

> Question: Now all these components are in encdec in components. Should we move them within cnn since they're all convolutional? Should we use separate files for each component?

I think it's good to keep them in encdec. Alternatively, it's also fine to move them into convolutional. Definitely no single file for each. @BenjaminMidtvedt what do you think?

> At the moment the vae is in regression, should we create an independent folder?

I think it should be in a separate folder, like autoencoders.

> As a general comment: When I added the pre/post layers to encoder and decoder, I thought they could be replaced by blocks or modules using configure or replace, but that doesn't seem to work. Is there any way to enable deeplay to do this kind of replacement? I believe it would greatly enhance its flexibility.

I agree. @BenjaminMidtvedt what're your thoughts on this?
> - ConvolutionalEncoderDecoder2D

I'm fine with capital D, but I prefer lower case to be consistent with torch naming (`nn.Conv2d` etc).

> Question: Now all these components are in encdec in components. Should we move them within cnn since they're all convolutional? Should we use separate files for each component?

> I think it's good to keep them in encdec. Alternatively, it's also fine to move them into convolutional. Definitely no single file for each. @BenjaminMidtvedt what do you think?

I'm fine with either. I slightly prefer to have them in the cnn folder, but I don't think it matters too much.

> At the moment the vae is in regression, should we create an independent folder?

> I think it should be in a separate folder, like autoencoders.

Either in generators or autoencoders. Fine with both.

> As a general comment: When I added the pre/post layers to encoder and decoder, I thought they could be replaced by blocks or modules using configure or replace, but that doesn't seem to work. Is there any way to enable deeplay to do this kind of replacement? I believe it would greatly enhance its flexibility.

> I agree. @BenjaminMidtvedt what're your thoughts on this?

Yes, this should already work with `replace`. Please open an issue with an example so I can reproduce.
> - ConvolutionalEncoderDecoder2D

> I'm fine with capital D, but I prefer lower case to be consistent with torch naming (`nn.Conv2d` etc).

I agree, I used lower case for consistency with torch. @Giovanni, is it ok?

> As a general comment: When I added the pre/post layers to encoder and decoder, I thought they could be replaced by blocks or modules using configure or replace, but that doesn't seem to work. Is there any way to enable deeplay to do this kind of replacement? I believe it would greatly enhance its flexibility.

> I agree. @BenjaminMidtvedt what're your thoughts on this?

> Yes, this should already work with `replace`. Please open an issue with an example so I can reproduce.

@BenjaminMidtvedt you are right, I was not using `replace` correctly. The code below:
```python
import torch.nn as nn

from deeplay import ConvolutionalEncoder2d, Sequential

enc = ConvolutionalEncoder2d(
    in_channels=1,
    channels=[16, 32, 64, 128],
    out_channels=3,
)

# Replace the encoder's trailing `postprocess` layer with a small Sequential.
rep = Sequential(nn.Identity(), nn.Sigmoid(), nn.Identity())
enc.replace("postprocess", rep)
enc = enc.create()
```
works fine.

The fact that `create` doesn't overwrite the module (whereas `build` does) was producing some confusion, see issue #59.
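For reference, a minimal sketch of the difference as I understand it (placeholder constructor arguments):

```python
from deeplay import ConvolutionalEncoder2d

enc = ConvolutionalEncoder2d(in_channels=1, channels=[16, 32], out_channels=3)

built = enc.create()  # returns a built copy; `enc` itself is not overwritten
enc.build()           # builds `enc` in place, overwriting the module
```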
> - ConvolutionalEncoderDecoder2D

> I'm fine with capital D, but I prefer lower case to be consistent with torch naming (`nn.Conv2d` etc).

Ok, let's keep it consistent with torch.
> At the moment the vae is in regression, should we create an independent folder?

> I think it should be in a separate folder, like autoencoders.

> Either in generators or autoencoders. Fine with both.

Ok, let's go with autoencoders.
@giovannivolpe @BenjaminMidtvedt I've implemented the changes. Besides naming, is the code good to merge?
@cmanzo yep! looks good to merge now
This PR modifies the encoder/decoder structure, introduces VAE and UNET, and requires some discussion.

I have defined 2 extra components (`ConvolutionalEncoder2d` and `ConvolutionalDecoder2d`) that can be used independently. Each has an extra layer (default = `Identity`) at the end of the encoder and at the beginning of the decoder. These are useful to flexibly `flatten` and `unflatten` the output/input, in particular for the VAE, as we will see later. They break the symmetry a bit, but I find them quite useful when needed, and they can simply be ignored otherwise; a rough sketch of the intended use follows below. What do you think? I've called them `post` and `pre`, but I don't like these names. @giovannivolpe any better idea?
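As an illustration, something like this is what I have in mind (a hypothetical sketch, not tested: it uses the `preprocess`/`postprocess` names settled on in the discussion above, and assumes the decoder constructor mirrors the encoder's):

```python
import torch.nn as nn

from deeplay import ConvolutionalDecoder2d, ConvolutionalEncoder2d, Sequential

# Flatten the encoder output into a flat latent vector...
enc = ConvolutionalEncoder2d(in_channels=1, channels=[16, 32], out_channels=64)
enc.replace("postprocess", Sequential(nn.Flatten()))

# ...and unflatten the decoder input back to a 64-channel 1x1 feature map.
dec = ConvolutionalDecoder2d(in_channels=64, channels=[32, 16], out_channels=1)
dec.replace("preprocess", Sequential(nn.Unflatten(1, (64, 1, 1))))
```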
The `ConvolutionalEncoderDecoder2d` is now based on the combination of these 2 components. No big change besides that.

I've made a `Unet2d` that inherits from the `ConvolutionalEncoderDecoder2d` and adds the skip connections. The skip is configurable (`torch.cat` by default); the snippet below illustrates the default behavior.
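In plain torch terms, the default skip simply concatenates feature maps along the channel dimension (a generic illustration, not the `Unet2d` code itself):

```python
import torch

# Hypothetical shapes: batch of 1, 16 channels, 32x32 spatial resolution.
enc_feat = torch.randn(1, 16, 32, 32)  # feature map saved from the encoder
dec_feat = torch.randn(1, 16, 32, 32)  # upsampled feature map in the decoder

# Default skip: concatenate along the channel axis -> (1, 32, 32, 32).
merged = torch.cat([enc_feat, dec_feat], dim=1)
```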
Question: Now all these components are in encdec in components. Should we move them within cnn since they're all convolutional? Should we use separate files for each component?

I've also introduced the VAE as an application; I guess that is where it belongs. By default, it is based on the `ConvolutionalEncoder2d` and `ConvolutionalDecoder2d` components, but it can be configured to use others (only dense layers, for example). The pre and post layers enable the user to configure the flatten and unflatten step, for example with global pooling (see the sketch at the end of this post). At the moment the vae is in regression, should we create an independent folder?

As a general comment: When I added the pre/post layers to encoder and decoder, I thought they could be replaced by blocks or modules using configure or replace, but that doesn't seem to work. Is there any way to enable deeplay to do this kind of replacement? I believe it would greatly enhance its flexibility.
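For the global-pooling case mentioned above, a possible configuration (a hypothetical sketch, reusing the same `replace` pattern shown earlier in the thread) would be:

```python
import torch.nn as nn

from deeplay import ConvolutionalEncoder2d, Sequential

# Hypothetical: global average pooling followed by a flatten, instead of a
# plain flatten, as the encoder's trailing `postprocess` step.
enc = ConvolutionalEncoder2d(in_channels=1, channels=[16, 32], out_channels=64)
enc.replace("postprocess", Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten()))
```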