Stonesjtu / pytorch_memlab

Profiling and inspecting memory in pytorch
MIT License

Documentation for pl.LightningModule that includes many nn.Modules #28

Open turian opened 3 years ago

turian commented 3 years ago

I have a pl.LightningModule (pytorch-lightning) that includes many nn.Modules.

It's not obvious from the documentation how I can profile all the LightningModule tensors and the subordinate Module tensors. Could you please provide an example?

turian commented 3 years ago

Here is an example:

https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/01-mnist-hello-world.ipynb

In my code (not the colab above, but a similar style), I don't OOM when I create the model. I OOM when I run

trainer.fit(model)

How do I memory profile why I OOM?

Stonesjtu commented 3 years ago

Thanks for reporting. I'll investigate the integration with pytorch-lightning this weekend.

But in principle, the only thing that needs to be done is to add the forward function to the line_profiler.

Stonesjtu commented 3 years ago

It looks like our current implementation cannot profile the detailed memory usage inside an nn.Module. However, you can work around this by simply defining a dummy container Module like:

class Net(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(xxx)

    @profile
    def forward(self, input):
        out = self.conv1(input)
        return out

turian commented 3 years ago

@Stonesjtu if I have an nn.Module that contains other nn.Modules (which in turn contain other nn.Modules), do I add the @profile decorator to every nn.Module to see what is happening? Thank you for the help.

Stonesjtu commented 3 years ago

A common workflow is to profile top-down. Usually 2 or 3 @profile decorators are enough to give you overall memory consumption statistics.

turian commented 1 year ago

@Stonesjtu wanted to ping on this issue to see if there is a better way to use memlab with lightning now.

profPlum commented 2 weeks ago

@turian Does the MemReporter work for you? It says it is supposed to work recursively on more complicated nn.Modules.