turian opened this issue 3 years ago
Here is an example:
In my code (not the colab above, but a similar style), I don't OOM when I create the model. I OOM when I run
trainer.fit(model)
How do I profile memory usage to figure out why I OOM?
Thanks for reporting. I'll investigate the integration with pytorch-lightning this weekend.
But in principle, the only thing that needs to be done is to register the forward function with the LineProfiler.
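For reference, here is roughly what that looks like with the standalone LineProfiler (a minimal sketch following the README; the two-layer forward_pass is just a stand-in for a real model's forward):

import torch
from pytorch_memlab import LineProfiler

def forward_pass(x):
    # hypothetical forward; the profiler records each line's CUDA memory delta
    linear1 = torch.nn.Linear(1024, 1024).cuda()
    linear2 = torch.nn.Linear(1024, 1024).cuda()
    return linear2(linear1(x))

with LineProfiler(forward_pass) as prof:
    forward_pass(torch.randn(8, 1024).cuda())
prof.display()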
It looks like our current implementation cannot profile the detailed memory usage inside an nn.Module. However, you can work around this by simply defining a dummy container Module (the Conv1d arguments below are just an example) like:
import torch.nn as nn
import pytorch_lightning as pl
from pytorch_memlab import profile

class Net(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # example Conv1d arguments; substitute your own layers here
        self.conv1 = nn.Conv1d(16, 32, kernel_size=3)

    @profile
    def forward(self, input):
        out = self.conv1(input)
        return out
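Then train as usual. A sketch of the call site, assuming Net also defines the usual training_step / configure_optimizers; as far as I remember, @profile prints its per-line stats when the process exits, and profile_every(N) prints every N calls instead if you want the numbers during training:

model = Net()
trainer = pl.Trainer(max_epochs=1)  # illustrative Trainer arguments
trainer.fit(model)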
@Stonesjtu if I have an nn.Module that contains other nn.Modules (which in turn contain other nn.Modules), do I add the @profile decorator to every nn.Module's forward to see what is happening? Thank you for the help.
A common workflow is to profile top-down, as in the sketch below. Usually 2 or 3 @profile decorators should give you an overall picture of the memory consumption.
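Something like this (a sketch; Block and the layer sizes are made up). Start with @profile on the outermost forward, look at which line allocates the most, then push @profile one level down into that sub-module:

import torch.nn as nn
import pytorch_lightning as pl
from pytorch_memlab import profile

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(128, 128)

    @profile  # second level: added once the top-level stats point here
    def forward(self, x):
        return self.linear(x)

class Net(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.block = Block()

    @profile  # first level: start here
    def forward(self, x):
        return self.block(x)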
@Stonesjtu wanted to ping on this issue to see if there is a better way to use memlab with lightning now.
@turian Does the MemReporter work for you? The docs say it is supposed to work recursively on more complicated nn.Modules.
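For example (a minimal sketch; the Linear layer is a stand-in for your model, and the reporter should attribute parameters of nested sub-modules by name):

import torch
from pytorch_memlab import MemReporter

model = torch.nn.Linear(1024, 1024).cuda()  # stand-in for your model
reporter = MemReporter(model)

out = model(torch.randn(8, 1024).cuda()).sum()
reporter.report()  # tensors allocated so far, attributed to the module
out.backward()
reporter.report()  # gradients show up after backward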
I have a pl.LightningModule (pytorch-lightning) that includes many nn.Modules.
It's not obvious from the documentation how I can profile all the LightningModule tensors and the subordinate Module tensors. Could you please provide an example?