prigoyal / pytorch_memonger

Experimental ground for optimizing memory of pytorch models
GNU General Public License v3.0

PyTorch Memory optimizations via gradient checkpointing

This repository contains implementations of various PyTorch models using gradient checkpointing [1], which trades compute for memory and therefore allows training bigger/wider models and using larger minibatch sizes.
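As a quick illustration (a minimal sketch, not code from this repository), PyTorch's `torch.utils.checkpoint.checkpoint` can wrap a sub-module so that its intermediate activations are freed after the forward pass and recomputed during the backward pass:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class CheckpointedBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = nn.Sequential(
            nn.Linear(1024, 1024),
            nn.ReLU(),
            nn.Linear(1024, 1024),
            nn.ReLU(),
        )

    def forward(self, x):
        # Activations inside self.block are not stored; they are recomputed
        # during backward, trading extra compute for lower peak memory.
        return checkpoint(self.block, x)


model = CheckpointedBlock()
x = torch.randn(32, 1024, requires_grad=True)
loss = model(x).sum()
loss.backward()
```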

The application of checkpointing is showcased on various models:

Results of checkpointing on these models are showcased below:

To use the models, install PyTorch master following the instructions from here.

To run checkpointed models and their baseline tests, follow the commands below:

# for checkpointed
python test_memory_optimized.py

# for baseline
python test_memory_baseline.py

Tutorial

We provide a tutorial describing how to use checkpointing for various kinds of models.

A few special kinds of layers, such as batch normalization and dropout, need to be handled carefully; the details for handling them are also covered in the tutorial (see the sketch below).
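As an example of the dropout case, the following minimal sketch (an illustration, not the repository's tutorial code) checkpoints an `nn.Sequential` containing dropout with `torch.utils.checkpoint.checkpoint_sequential`; by default the utility preserves and restores the RNG state, so the recomputed forward pass draws the same dropout mask as the original one:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(512, 10),
)

x = torch.randn(64, 512, requires_grad=True)
# Split the model into 2 segments; only activations at segment boundaries
# are kept, and everything inside a segment is recomputed in backward.
out = checkpoint_sequential(model, 2, x)
out.sum().backward()
```

Stateful layers such as batch normalization need extra care because the checkpointed segment runs twice, so running statistics would be updated on both passes; the tutorial discusses how to account for this.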

References

[1]. Siskind, Jeffrey Mark, and Barak A. Pearlmutter. "Divide-and-Conquer Checkpointing for Arbitrary Programs with No User Annotation." arXiv preprint arXiv:1708.06799 (2017).