jasonlyik closed this issue 8 months ago
In the MSWC example, a Benchmark run is used to measure accuracy both during pre-training and after the prototype implementation.
Because hooks are registered on the model at https://github.com/NeuroBench/neurobench/blob/main/neurobench/benchmarks/benchmark.py#L53, there appear to be memory leaks when the Benchmark is reused while the model changes.
We should make sure the hooks are deleted at the end of the Benchmark run. This can be done by saving the handles returned at registration and removing them when the run finishes: https://discuss.pytorch.org/t/how-to-remove-multiple-hooks/135442
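A minimal sketch of the proposed fix, outside the NeuroBench codebase: PyTorch's `register_forward_hook` returns a `RemovableHandle`, so keeping those handles and calling `.remove()` in a `finally` block guarantees the model carries no stale hooks after the run. The `run_benchmark` function and hook body below are illustrative, not the actual NeuroBench API.

```python
import torch
import torch.nn as nn

def run_benchmark(model, data):
    """Illustrative benchmark pass: register forward hooks, keep the
    handles, and detach them when the run finishes."""
    activations = []

    def hook(module, inp, out):
        activations.append(out.detach())

    # Save every RemovableHandle returned by register_forward_hook.
    handles = [m.register_forward_hook(hook) for m in model.children()]
    try:
        model(data)
    finally:
        # Remove the hooks so the model holds no stale references
        # (and no leaked closures) after the Benchmark run completes.
        for h in handles:
            h.remove()
    return activations

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
acts = run_benchmark(model, torch.randn(1, 4))
# After the run, no module should still carry a forward hook.
assert all(len(m._forward_hooks) == 0 for m in model.modules())
```

Without the `finally` cleanup, each Benchmark run would stack another hook (and its captured `activations` list) onto the model, which matches the leak described above when the same model is benchmarked repeatedly.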
de4ad1930fc705efc4628ad1ac1c57837dbb467d