Open radoye opened 7 years ago

Seems to require quite a bit of RAM on the MNIST example. Any ideas why?

On my 8 GB RAM machine under Arch Linux the process gets killed.

Running on a significantly bigger machine (64 GB RAM / 6-core i7 under Arch Linux) shows, in a crude top-based measurement, memory spiking to ~18 GB during data load. During training, 1000-sample batches take ~33 sec to execute and memory hovers around ~8.1 GB.

---

Thanks for bringing this up, I'll benchmark this on my machine and see if I get something similar!

It does seem to spike to around 10+ GB on my 24 GB RAM system, which is odd given that the entire MNIST data set is barely over 100 MB. The cause might be GC pressure from allocations while loading the data into vectors (the vectors might be getting re-allocated on every map); I'm not sure whether this is an issue with the hmatrix backend or with tensor-ops itself. I'll investigate further.
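One way to test the "allocation while loading" hypothesis is to force each parsed sample to normal form as it is read, so intermediate lists and thunks can be collected immediately instead of piling up until the whole data set is materialized. The sketch below is hypothetical: `Sample`, `parseRecord`, and `loadStrict` are illustrative names, not the example's actual loader.

```haskell
import Control.DeepSeq (force)
import Control.Exception (evaluate)
import qualified Data.Vector.Unboxed as VU

-- Hypothetical sample type: a label and its pixel vector.
type Sample = (Int, VU.Vector Double)

-- Stand-in parser: first field is the label, the rest are pixels.
parseRecord :: [Double] -> Sample
parseRecord (label : pixels) = (round label, VU.fromList pixels)
parseRecord []               = error "empty record"

-- Force each sample to normal form before moving on, so the raw
-- input and any intermediate lists can be GC'd incrementally.
loadStrict :: [[Double]] -> IO [Sample]
loadStrict = mapM (evaluate . force . parseRecord)
```

If peak residency drops with a loader shaped like this, the spike was lazy-loading retention rather than the backend; running the example with `+RTS -s` before and after would show the difference in maximum residency.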