vislearn / FrEIA-community

Community-driven effort to improve and extend FrEIA

Memory-efficient implementation #9

Open alisiahkoohi opened 4 years ago

alisiahkoohi commented 4 years ago

Is there a plan to develop a memory-efficient back-propagation training mode? Perhaps a flag that, when activated, causes the forward-pass network states to be recomputed during back-propagation by inverting the network layer by layer, instead of being stored during the forward pass.
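For concreteness, here is a minimal sketch of the idea in PyTorch using a custom autograd Function. All names here (`RecomputeByInverse`, the toy `ScaleShift` layer, the assumption that a layer exposes an `inverse()` method) are hypothetical and not existing FrEIA API:

```python
import torch


class RecomputeByInverse(torch.autograd.Function):
    """Memory-saving trick: instead of storing the layer input for the
    backward pass, store only the output and recover the input via the
    layer's inverse."""

    @staticmethod
    def forward(ctx, x, layer):
        ctx.layer = layer
        with torch.no_grad():
            y = layer(x)
        ctx.save_for_backward(y)  # only the output is kept, not x
        return y

    @staticmethod
    def backward(ctx, grad_y):
        (y,) = ctx.saved_tensors
        layer = ctx.layer
        with torch.no_grad():
            x = layer.inverse(y)  # recompute the input by inversion
        x = x.detach().requires_grad_(True)
        with torch.enable_grad():
            y_local = layer(x)    # rebuild the local graph for this layer
        # backprop through the rebuilt graph; the .grad fields of the
        # layer's parameters are filled as a side effect
        torch.autograd.backward(y_local, grad_y)
        return x.grad, None


class ScaleShift(torch.nn.Module):
    """Toy invertible layer: y = x * exp(s) + t, elementwise."""

    def __init__(self, dim):
        super().__init__()
        self.s = torch.nn.Parameter(torch.zeros(dim))
        self.t = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        return x * torch.exp(self.s) + self.t

    def inverse(self, y):
        return (y - self.t) * torch.exp(-self.s)


layer = ScaleShift(4)
x = torch.randn(8, 4, requires_grad=True)
y = RecomputeByInverse.apply(x, layer)
y.sum().backward()  # x was never saved; it is rebuilt from y via inverse()
```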

Gword commented 3 years ago

Hello, I am trying to implement the memory-efficient mode you mentioned. Could you tell me whether you have seen source code for such an implementation in PyTorch? Thank you!

alisiahkoohi commented 3 years ago

Hello, I am aware of other libraries that provide this functionality. For example, see MemCNN.
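Roughly following the usage example in MemCNN's README (argument names such as `keep_input` may differ between versions, so treat this as an approximation rather than a definitive reference):

```python
import torch
import torch.nn as nn
import memcnn


# Fm and Gm are arbitrary (non-invertible) subnetworks; the additive
# coupling wrapped around them is what makes the whole block invertible.
class Subnet(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.seq = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.seq(x)


# 10 input channels are split into two halves of 5 by the coupling
coupling = memcnn.AdditiveCoupling(Fm=Subnet(5), Gm=Subnet(5))

# keep_input=False is the memory-efficient mode: the wrapper frees the
# input after the forward pass and restores it via the inverse on backward
block = memcnn.InvertibleModuleWrapper(fn=coupling, keep_input=False,
                                       keep_input_inverse=False)

x = torch.randn(2, 10, 8, 8, requires_grad=True)
y = block(x)
y.sum().backward()
```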

psteinb commented 3 years ago

But if I understand MemCNN correctly, they use a different architecture to construct the normalizing flow, don't they? So this approach cannot easily be put to use within FrEIA's existing infrastructure. I haven't dug deeply enough into either library to have a good idea of the feasibility.

ardizzone commented 3 years ago

I think it would be possible. Before FrEIA, we had already implemented this in some home-made normalizing flows. Because this is a larger feature, I am moving the issue to https://github.com/VLL-HD/FrEIA-community

Perhaps we can get this done in the coming weeks, at least for the most common modules (specifically AllInOneBlock, which combines the coupling-scaling-permutation pattern that has become standard in the literature).
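For reference, the reason layer-by-layer recomputation is feasible for coupling-based modules like this is that the inverse is available in closed form. A generic sketch of an affine coupling pair (textbook RealNVP-style, not FrEIA's actual AllInOneBlock code):

```python
import torch


def affine_coupling_forward(x1, x2, subnet):
    """Affine coupling: the first half parameterizes an elementwise
    affine transform of the second half."""
    s, t = subnet(x1).chunk(2, dim=-1)
    return x1, x2 * torch.exp(s) + t


def affine_coupling_inverse(y1, y2, subnet):
    """Closed-form inverse: x2 is recovered exactly from (y1, y2),
    so no activations need to be stored for this block."""
    s, t = subnet(y1).chunk(2, dim=-1)
    return y1, (y2 - t) * torch.exp(-s)


subnet = torch.nn.Linear(4, 8)  # outputs s and t, 4 dims each
x1, x2 = torch.randn(3, 4), torch.randn(3, 4)
y1, y2 = affine_coupling_forward(x1, x2, subnet)
r1, r2 = affine_coupling_inverse(y1, y2, subnet)
assert torch.allclose(r2, x2, atol=1e-5)
```

Stacking such blocks, the backward pass can walk from the output back to the input, regenerating each block's activations on the fly instead of storing them.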