alisiahkoohi opened 4 years ago
Hello, I am trying to implement the memory-efficient approach you mentioned. Could you tell me whether you have seen source code for this in PyTorch? Thank you!
Hello, I am aware of other libraries that provide this functionality. For example, see MemCNN.
But if I understand MemCNN correctly, they use a different architecture to construct the normalizing flow, don't they? So this approach cannot easily be used within FrEIA's existing infrastructure. I haven't dug too deep into either library to have a good idea of the feasibility.
I think it would be possible. Before FrEIA, we had already implemented this in some home-made normalizing flows. Because this is a larger feature, I am moving the issue to https://github.com/VLL-HD/FrEIA-community
Perhaps we can get this done in the next weeks, at least for the most common modules (specifically AllInOneBlock, which combines the coupling-scaling-permuting pattern that has become standard in the literature).
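To make the coupling-scaling-permuting structure concrete, here is a minimal NumPy sketch of one such step. This is not FrEIA's actual AllInOneBlock implementation; the function names and the stand-in subnetwork are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def subnet(x):
    # Illustrative stand-in for the learned subnetwork. Any function works
    # here, since invertibility comes from the coupling structure itself,
    # not from the subnet.
    return np.tanh(x)

def coupling_step(x, perm):
    """One coupling-scale-permute step: split the channels, affine-transform
    one half conditioned on the other, then apply a fixed permutation."""
    x1, x2 = np.split(x, 2, axis=-1)
    s, t = subnet(x1), subnet(-x1)        # scale and translation from x1
    y2 = x2 * np.exp(s) + t               # affine coupling applied to x2
    y = np.concatenate([x1, y2], axis=-1)
    return y[..., perm]                   # fixed channel permutation

def coupling_step_inverse(y, perm):
    """Exact inverse: undo the permutation, then invert the coupling."""
    inv_perm = np.argsort(perm)
    y = y[..., inv_perm]
    y1, y2 = np.split(y, 2, axis=-1)
    s, t = subnet(y1), subnet(-y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

x = rng.standard_normal((4, 8))
perm = rng.permutation(8)
y = coupling_step(x, perm)
x_rec = coupling_step_inverse(y, perm)
print(np.allclose(x, x_rec))  # True
```

The key point is that the inverse is available in closed form, which is what a memory-efficient backward pass would exploit.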
Is there a plan to develop a memory-efficient back-propagation training mode? Perhaps a flag that, when activated, makes back-propagation recompute the forward-pass network states by inverting the network layer-by-layer, instead of storing them during the forward pass.
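As a sketch of that idea in plain NumPy (not a real PyTorch autograd integration; additive couplings are used because their inverse is exact, and the layer function is a hypothetical stand-in): the forward pass keeps only the final output, and the backward pass reconstructs each layer's input by inverting layer-by-layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x1):
    # Stand-in for a layer's learned subnetwork.
    return np.sin(x1)

def additive_coupling(x):
    # y1 = x1, y2 = x2 + f(x1): exactly invertible, so the input
    # need not be stored for the backward pass.
    x1, x2 = np.split(x, 2, axis=-1)
    return np.concatenate([x1, x2 + f(x1)], axis=-1)

def additive_coupling_inverse(y):
    y1, y2 = np.split(y, 2, axis=-1)
    return np.concatenate([y1, y2 - f(y1)], axis=-1)

# Forward through a stack of layers, storing only the final output.
n_layers = 5
x = rng.standard_normal((3, 6))
y = x
for _ in range(n_layers):
    y = additive_coupling(y)

# "Backward": recompute the hidden states layer-by-layer by inversion,
# instead of having stored them during the forward pass.
states = [y]
h = y
for _ in range(n_layers):
    h = additive_coupling_inverse(h)
    states.append(h)

print(np.allclose(states[-1], x))  # True: the input is recovered exactly
```

In a real PyTorch implementation this recomputation would typically live inside a custom `torch.autograd.Function`, so gradients for each layer are computed from the reconstructed activations rather than stored ones.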