ahundt closed this issue 6 years ago.
I saw that some time ago. Neither TF nor Theano has such ops, and I am not that interested in delving deep into the guts of TF or Theano.
I had my friend ask Prof. Kilian Weinberger for a temporary measure to reduce memory consumption in DenseNets, and the answer was to use the BC architecture with a smaller width (at the cost of some performance, obviously). That seems to be the way to go for now.
I'll keep the issue open in case these frameworks implement such ops at some point in the future.
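For reference, here is a minimal sketch of that interim workaround using this repo's densenet.py, assuming the constructor exposes depth / growth_rate / bottleneck / reduction arguments (please check the actual signature, these names may not match exactly):

```python
# Sketch only: a DenseNet-BC with a reduced growth rate to lower memory use.
# Parameter names are assumed from this repo's densenet.py and may differ.
from densenet import DenseNet

model = DenseNet(
    input_shape=(32, 32, 3),
    depth=100,         # DenseNet-BC-100
    growth_rate=12,    # smaller k => less memory, at some accuracy cost
    bottleneck=True,   # the "B": 1x1 bottleneck convolutions
    reduction=0.5,     # the "C": compression in the transition layers
    classes=10,
)
model.summary()
```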
Sounds good. I created https://stackoverflow.com/questions/46124965/tensorflow-shared-memory-allocation-of-recursive-features-for-densenet to see if anyone has advice.
Hey @titu1994, I used your code and someone else's to show how to do a memory-efficient version in TensorFlow: https://github.com/joeyearsley/efficient_densenet_tensorflow
I'm not exactly sure how to do it in Theano or CNTK, but I'm guessing it could be merged in with an explicit warning that the efficient version only works with the TF backend?
Or maybe someone could help make a gradient checkpointing endpoint in Keras that calls backend-specific implementations.
Sorry @ahundt, as you are probably being spammed by these notifications atm.
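For anyone skimming the thread, here is a minimal sketch of the gradient checkpointing idea behind that repo, written against the TF 2.x `tf.recompute_grad` API rather than the TF 1.x contrib code the linked repo actually uses; the class and function names are purely illustrative:

```python
import tensorflow as tf

class CheckpointedDenseLayer(tf.keras.layers.Layer):
    """BN -> ReLU -> 3x3 Conv whose intermediates are recomputed on backprop."""

    def __init__(self, growth_rate=12, **kwargs):
        super().__init__(**kwargs)
        self.bn = tf.keras.layers.BatchNormalization()
        self.conv = tf.keras.layers.Conv2D(growth_rate, 3, padding='same',
                                           use_bias=False)

    def build(self, input_shape):
        # Build sublayers up front so no variables are created inside the
        # recomputed function.
        self.bn.build(input_shape)
        self.conv.build(input_shape)  # BN/ReLU preserve the channel count
        super().build(input_shape)

    def call(self, x, training=False):
        def composite(inputs):
            y = self.bn(inputs, training=training)
            y = tf.nn.relu(y)
            return self.conv(y)
        # tf.recompute_grad discards the intermediate activations after the
        # forward pass and re-runs `composite` when gradients are needed,
        # trading extra compute for a much smaller memory footprint.
        return tf.recompute_grad(composite)(x)


def dense_block(x, layers, training=False):
    # Standard DenseNet connectivity: each layer sees the concatenation of
    # all previous feature maps.
    for layer in layers:
        y = layer(x, training=training)
        x = tf.concat([x, y], axis=-1)
    return x
```

The trade-off is one extra forward pass per checkpointed layer during backprop in exchange for not storing its intermediate activations.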
This is definitely very useful, but it is not technically the same technique applied by the memory-efficient DenseNets in the original Torch codebase.
There, they cache the BN and ReLU activations of each layer so that multiple calls to those layers reuse the precomputed values (as far as I remember).
In either case, gradient checkpointing is also a useful tool to train extremely large DenseNets.
It's not exactly the same as the original, but if you check the current version by gpleiss, he's changed it to use gradient checkpointing like this.
Oh, that's interesting. Then I suggest you send a PR to this repository so you are properly credited (if you have the time). If not, I can manage to do it in a week or two.
The problem is the Keras backend; I could do a TF-only version that raises an error otherwise?
I'm thinking of adding a separate file that follows most of what was done in the original densenet.py script, but uses the TF-only functions. Maybe something like densenet_efficient.py.
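Something like this guard at the top of the new file could enforce the TF-only restriction (the file name and error message are just illustrative):

```python
from keras import backend as K

# densenet_efficient.py would rely on TF-specific ops, so fail fast on
# other backends instead of breaking in an obscure way later.
if K.backend() != 'tensorflow':
    raise RuntimeError('densenet_efficient.py requires the TensorFlow backend; '
                       'got "%s".' % K.backend())
```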
Memory-Efficient Implementation of DenseNets https://arxiv.org/abs/1707.06990
Apparently the memory-efficient DenseNet comes with a paper, and it says pre-activation is the way to go. I wonder if TensorFlow has a way to do this memory-efficient version without implementing new ops.
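For context, a minimal sketch of the pre-activation composite function (BN -> ReLU -> Conv) the paper builds on, written with the Keras functional API; this only shows the layer ordering, not the memory-sharing trick itself:

```python
from keras.layers import Activation, BatchNormalization, Concatenate, Conv2D

def preact_conv_block(x, growth_rate=12):
    # Pre-activation ordering: BN and ReLU come before the convolution.
    y = BatchNormalization()(x)
    y = Activation('relu')(y)
    y = Conv2D(growth_rate, (3, 3), padding='same', use_bias=False)(y)
    # Dense connectivity: concatenate the new features onto the input.
    return Concatenate()([x, y])
```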