[Open] laurentlasalle opened this issue 4 years ago
I think 56.2 GB free is not enough. Have you tried to modify BATCH images in the script.?
I tried `BATCH = 500` instead of 5000, and even tried 5, but both attempts failed with the same error.
I think `BATCH` in `create_lmdb.py` only controls how many images are loaded into memory at once, so making it smaller just uses less memory. I also think 56.2 GB free is not enough. In the REDS dataset's case, the LMDB of train_sharp (32 GiB) is 61.9 GiB. I'm not sure, but I think the LMDB ends up at almost double the size of the original data.
You're right @ryul99. I tried to modify it before. It loads `BATCH` images into memory at once, then writes them out into the LMDB file. After that, it frees the memory and continues loading the next `BATCH` of images from the dataset.
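To illustrate the point being made here, a minimal sketch of that load-a-batch-then-flush pattern (not the actual `create_lmdb.py` code; `read_fn`/`write_fn` are hypothetical stand-ins for the image loader and the LMDB write transaction):

```python
def write_in_batches(keys, read_fn, write_fn, batch=5000):
    """Hold at most `batch` items in memory before each flush.

    `read_fn(key)` returns one item's data; `write_fn(pairs)` commits a
    list of (key, data) pairs, e.g. one LMDB write transaction.
    """
    buffer = []
    for key in keys:
        buffer.append((key, read_fn(key)))
        if len(buffer) >= batch:
            write_fn(buffer)  # one commit per batch
            buffer = []       # RAM is released; disk usage keeps growing
    if buffer:
        write_fn(buffer)      # flush the final partial batch
```

So lowering `BATCH` only caps peak RAM per flush; the final on-disk LMDB size is unchanged, which is why a smaller `BATCH` cannot fix a "not enough space on the disk" error.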
While preparing the DIV2K dataset in LMDB format to run `train.py` with `options/train_ESRGAN.yml`, I am getting a "not enough space on the disk" error from `create_lmdb.py` under Windows 10. EDVR and all other files are stored on C:, which has 56.2 GB free. Am I wrong in expecting the database to be much smaller?