hengyuan-hu / bottom-up-attention-vqa

An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
GNU General Public License v3.0

Massive RAM requirement to load .tsv #12

Closed brandonjabr closed 6 years ago

brandonjabr commented 6 years ago

Thanks for the great implementation. Unfortunately, after downloading and processing the data successfully, I run main.py and find that even with 32 GB of RAM the run fails while loading features from the .hdf5 files, because it runs out of memory. Is there a workaround, or am I doing something wrong here?

Thank you!
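
For readers hitting the same wall, one common workaround (not described in this thread) is to read feature rows lazily from the HDF5 file in `__getitem__` instead of preloading the whole array. A minimal sketch follows; the dataset names `image_features` and `spatial_features` and the entry layout are assumptions loosely based on this repo's preprocessing, not its actual loader:

```python
# A minimal lazy-HDF5 sketch, not the repo's actual dataset class.
# Assumptions: the feature file is readable with h5py and contains
# 'image_features' and 'spatial_features' datasets indexed per image.
import h5py
import torch
from torch.utils.data import Dataset

class LazyFeatureDataset(Dataset):
    """Reads only the requested rows from disk instead of preloading all features."""

    def __init__(self, h5_path, entries):
        self.h5_path = h5_path
        self.entries = entries  # list of (feature_index, question, answer)
        self.hf = None          # opened lazily, once per (worker) process

    def __len__(self):
        return len(self.entries)

    def __getitem__(self, idx):
        if self.hf is None:
            # Opening here rather than in __init__ keeps the handle valid
            # even if a DataLoader forks worker subprocesses.
            self.hf = h5py.File(self.h5_path, 'r')
        feat_idx, q, a = self.entries[idx]
        # h5py slicing reads just these rows; RAM stays roughly O(batch size).
        v = torch.from_numpy(self.hf['image_features'][feat_idx])
        b = torch.from_numpy(self.hf['spatial_features'][feat_idx])
        return v, b, q, a
```

The trade-off is slower batches, since every item costs a disk read, so this is a fallback for when the in-memory loader does not fit.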

hengyuan-hu commented 6 years ago

Please see #6.

brandonjabr commented 6 years ago

Thanks! Got it figured out.

ZhuFengdaaa commented 6 years ago

I got `OSError: [Errno 12] Cannot allocate memory` at `for i, (v, b, q, a) in enumerate(train_loader):`, and my machine has 64 GB of RAM, which I think should be large enough. Issue #6 has been deleted, so could you please tell me what the solution is?

Thank you.

hengyuan-hu commented 6 years ago

The easiest fix is to create a swapfile. The cause may be that the PyTorch DataLoader consumes more memory when using worker subprocesses, but I am not sure.
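
For reference, a sketch of the worker-side mitigation hinted at above: with `num_workers > 0` the DataLoader forks subprocesses, and the fork itself can raise `OSError: [Errno 12]` when the parent process already holds tens of GB. Keeping loading in the main process avoids that. The dataset below is a placeholder, not the repo's feature dataset, and the batch size is only indicative:

```python
# A hedged sketch, not the repo's main.py: num_workers=0 keeps data loading
# in the main process, avoiding the fork() that can fail with
# OSError: [Errno 12] when the parent already has a huge address space.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for the repo's feature dataset.
train_dset = TensorDataset(torch.zeros(1024, 2048), torch.zeros(1024))

train_loader = DataLoader(
    train_dset,
    batch_size=512,   # indicative only; adjust to your setup
    shuffle=True,
    num_workers=0,    # 0 = no worker subprocesses, so no fork-time allocation
)

for i, batch in enumerate(train_loader):
    pass  # training step would go here
```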

ZhuFengdaaa commented 6 years ago

Creating a swapfile works! It takes 64 GB of RAM plus 64 GB of swap to meet the requirement. Thank you.
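
For anyone landing here later, a quick sanity check of RAM plus swap before kicking off training; this assumes the third-party `psutil` package (not used by this repo), and the 128 GiB threshold simply mirrors the 64 GB + 64 GB reported above rather than a measured requirement:

```python
# A small pre-flight check (not from this thread) using the third-party
# psutil package; the threshold is an assumption based on the report above.
import psutil

ram_gib = psutil.virtual_memory().total / 2**30
swap_gib = psutil.swap_memory().total / 2**30
print(f"RAM: {ram_gib:.1f} GiB, swap: {swap_gib:.1f} GiB")
if ram_gib + swap_gib < 128:
    print("Warning: loading may still fail with OSError: [Errno 12].")
```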