mlindauer closed this issue 3 years ago.
Not off the top of my head. It is definitely not the same issue as the earlier one with the Jupyter notebook. I am looking into it.
I couldn't reproduce the issue on Slurm with that code. Of the 32 GB of allocated memory, around 3.5 GB were used most of the time, peaking at 5.5 GB. I will add memory enforcement with pynisher for image data though (see the sketch below).
Edit: It could be because I ran on a GPU; was your run CPU-only?
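For context, here is a minimal sketch of what such a pynisher memory guard could look like. It assumes the pre-1.0 `enforce_limits` API; the function name, the 6 GB cap, and the 1 h wall time are placeholders, not Auto-PyTorch's actual settings:

```python
import pynisher


def train_and_evaluate():
    # Placeholder for the actual training/evaluation routine.
    ...


# Kill the call if it exceeds 6 GB of RAM or 1 h of wall time.
limited_fit = pynisher.enforce_limits(mem_in_mb=6 * 1024,
                                      wall_time_in_s=3600)(train_and_evaluate)

result = limited_fit()

# The wrapper records why the call ended, so a memory violation can be
# reported instead of taking the whole run down.
if limited_fit.exit_status is pynisher.MemorylimitException:
    print("Evaluation exceeded the memory limit and was terminated.")
```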
I installed it again in a VirtualBox Ubuntu image (to rule out the OS and to get a setup that is most likely quite similar to yours), but I can still reproduce the problem. For example:
Current children cumulated CPU time: 510.84 s
Current children cumulated vsize: 16804940 KiB
Current children cumulated memory: 10458920 KiB
The peak was at
Current children cumulated memory: 12615168 KiB
At least I could not reproduce the 77 GB reported before... weird...
I ran that on a CPU (which surprisingly parallelizes the run without me telling it to) and used Anaconda 4.7.12 (Python 3.7.4).
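In case someone wants to reproduce these measurements, the cumulated child-process usage can be collected with psutil roughly like this; the helper name and the 10 s polling interval are made up for illustration, and this is not the exact code behind the log above:

```python
import time
import psutil


def log_children_usage(pid: int, interval: float = 10.0) -> None:
    """Periodically print cumulated CPU time, vsize and RSS of all child processes."""
    parent = psutil.Process(pid)
    while True:
        cpu = vsize = rss = 0
        for child in parent.children(recursive=True):
            try:
                times = child.cpu_times()
                mem = child.memory_info()
            except psutil.NoSuchProcess:
                continue  # child exited between listing and querying
            cpu += times.user + times.system
            vsize += mem.vms
            rss += mem.rss
        print(f"Current children cumulated CPU time: {cpu:.2f} s")
        print(f"Current children cumulated vsize: {vsize // 1024} KiB")
        print(f"Current children cumulated memory: {rss // 1024} KiB")
        time.sleep(interval)
```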
Hello, we are closing this issue to track all memory-related problems here https://github.com/automl/Auto-PyTorch/issues/259.
Hi,
I tried to run AutoPyTorch again:
However, after nearly 2 h, Auto-PyTorch used more than 60 GB of RAM.
Do you have any idea why that is? For the first 20 min, it used only roughly 7 GB.
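As a stopgap while this is being investigated, the address space of the driving Python process (and of any workers it forks, each individually) can be hard-capped with the standard library's resource module, so a runaway run raises MemoryError instead of filling 60 GB. The 16 GB figure below is an arbitrary example and is unrelated to any Auto-PyTorch setting:

```python
import resource

# Cap the virtual address space of this process at 16 GB (Linux only).
# The limit is inherited by forked workers, but applies to each process
# individually, not to their combined usage.
LIMIT_BYTES = 16 * 1024 ** 3
resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

# ... start the Auto-PyTorch run afterwards; an allocation beyond the cap
# now fails with MemoryError instead of exhausting the machine.
```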