larq / zoo

Reference implementations of popular Binarized Neural Networks
https://docs.larq.dev/zoo
Apache License 2.0

Using data.cache() causes an out-of-memory error. #258

Open HamLaertes opened 3 years ago

HamLaertes commented 3 years ago

Hi guys, when I run the command lqz TrainR2BStrongBaseline, my machine runs out of memory. My server has 64GB of RAM. I checked the code and found that it caches both the train_data and the validation_data, and I think this is why my memory is exhausted. I didn't see any issue related to this problem, and I'm just curious: do you guys have enough memory to cache the whole ImageNet dataset (up to 150GB)? For now I have deleted the code that caches the dataset and the training process runs smoothly. Is there any way to cache only part of the training_data?

jneeven commented 3 years ago

Hi! ImageNet is indeed quite large, which is why we use multi-GPU training. Since each of the GPUs comes with several CPUs and 64GB of RAM on the compute platform we use, ImageNet does fit into memory if you use four GPUs in parallel. Unfortunately I don't think there is currently a clean way to cache only part of a dataset; you could try to split the dataset up and get creative with tf.data.Dataset.concatenate and then do the caching before concatenating both parts of the dataset, but I doubt this would be very efficient either way. Not caching the dataset is probably your best solution (although it is unfortunately a bit slow). Good luck!
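For what it's worth, a minimal sketch of that split-and-cache-then-concatenate idea could look like the following. `train_data` and `cache_size` are placeholders here, not names from the zoo code, so treat this purely as an illustration:

```python
import tensorflow as tf


def partially_cached(train_data: tf.data.Dataset, cache_size: int) -> tf.data.Dataset:
    # Keep only the first `cache_size` examples in the in-memory cache...
    cached_part = train_data.take(cache_size).cache()
    # ...and stream the remaining examples from disk on every epoch.
    uncached_part = train_data.skip(cache_size)
    return cached_part.concatenate(uncached_part)
```

Note that the split has to happen before any per-epoch shuffling, and reading the uncached part from disk each epoch may still dominate throughput, which is probably why this isn't very efficient in practice.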

AdamHillier commented 3 years ago

Just to add, if you're on a single-GPU machine, disabling caching shouldn't have too much of an impact, especially if your dataset is stored on fast storage, e.g. an SSD.
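As a rough sketch, an uncached pipeline can usually stay fast enough by overlapping disk reads with training via parallel mapping and prefetching. `decode_and_augment` below is just a stand-in for whatever preprocessing the training script actually applies:

```python
import tensorflow as tf


def decode_and_augment(example):
    # Placeholder preprocessing; the real pipeline decodes and augments ImageNet images.
    return example


def uncached_pipeline(dataset: tf.data.Dataset, batch_size: int) -> tf.data.Dataset:
    # Without .cache(), parallel mapping plus prefetching keeps the GPU fed,
    # provided the underlying storage (e.g. an SSD) is fast enough.
    return (
        dataset
        .map(decode_and_augment, num_parallel_calls=tf.data.AUTOTUNE)
        .batch(batch_size)
        .prefetch(tf.data.AUTOTUNE)
    )
```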

sfalkena commented 2 years ago

Hi, I have an additional question about the caching of ImageNet. I can configure my training setup with enough RAM to cache ImageNet. However, when I run a multistage experiment, I see RAM usage increase again during the second stage. I think the problem lies in the fact that each TrainLarqZooModel caches the dataset again. Are you aware of any way to reuse the dataset across stages, or to release the dataset from RAM? Perhaps it would make sense to move dataloading one level up so that it is called once per experiment, regardless of whether it is a single-stage or multistage experiment.

jneeven commented 2 years ago

Hi @sfalkena, I have indeed run into this problem before and didn't realise it would apply here as well. Unfortunately I have not found any robust way to remove the dataset from RAM, so your suggestion of moving the dataset one level up sounds like the best approach. In my case I wasn't using ImageNet and was only running slightly over the memory limit, so it was feasible to just increase RAM a little, but that is of course not a maintainable solution, especially at ImageNet scale...
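Conceptually, "moving dataloading one level up" could look something like the sketch below: build and cache the dataset once per experiment and hand the same object to every stage. The stage function and the dummy dataset are placeholders, not larq-zoo code:

```python
import tensorflow as tf


def train_stage(train_data: tf.data.Dataset, stage_name: str):
    # Stand-in for an actual training loop over one stage.
    for _ in train_data:
        pass
    print(f"finished {stage_name}")


def run_experiment():
    # Built and cached once; the first full pass fills the cache,
    # and every later stage reuses the same in-memory cache.
    train_data = tf.data.Dataset.range(1000).cache()
    train_stage(train_data, "stage one")
    train_stage(train_data, "stage two")


run_experiment()
```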

sfalkena commented 2 years ago

Thanks for your fast answer. I am currently testing whether keeping the cache file on disk works without slowing training down too much. Otherwise I'll work around it for now by training the stages separately. If I have a bit more time in the future, I am happy to contribute to moving dataloading a level up.
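For reference, the on-disk variant just means passing a filename to .cache(), which writes the cache to disk instead of holding it in RAM. A minimal illustration, with an example path:

```python
import tensorflow as tf

# Passing a filename makes .cache() spill to disk rather than RAM.
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/tmp/imagenet_train.cache")

# The first epoch fills the cache file; subsequent epochs read from it.
for epoch in range(2):
    for _ in dataset:
        pass
```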