NVlabs / denoising-diffusion-gan

Tackling the Generative Learning Trilemma with Denoising Diffusion GANs https://arxiv.org/abs/2112.07804

CelebA-HQ 256x256 Data Pre-processing #22

Open KomputerMaster64 opened 2 years ago

KomputerMaster64 commented 2 years ago

Thank you, team, for sharing the project resources. I am trying to prepare the CelebA-HQ 256x256 dataset for the DDGAN model. The DDGAN repository recommends following the dataset preparation instructions in the NVAE repository.


The following commands download the TFRecord files released by openai/glow and convert them into an LMDB dataset.
Use the openai/glow link to download the CelebA-HQ 256x256 dataset (about 4 GB). Converting it to LMDB requires the "tfrecord" module; the missing-module error can be fixed by running pip install tfrecord.

!mkdir -p $DATA_DIR/celeba
%cd $DATA_DIR/celeba
!wget https://openaipublic.azureedge.net/glow-demo/data/celeba-tfr.tar
!tar -xvf celeba-tfr.tar
%cd $CODE_DIR/scripts
!pip install tfrecord
!python convert_tfrecord_to_lmdb.py --dataset=celeba --tfr_path=$DATA_DIR/celeba/celeba-tfr --lmdb_path=$DATA_DIR/celeba/celeba-lmdb --split=train
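
For my own understanding, here is a minimal sketch of what a TFRecord-to-LMDB conversion loop like this does; it is not the repo's exact script, and the shard filename and the "data" field name are assumptions on my part:

# Sketch of a TFRecord -> LMDB conversion loop (assumptions: shard name,
# "data" byte field). Shows where the final commit happens.
import lmdb
from tfrecord.reader import tfrecord_loader

tfr_file = "celeba-tfr/train/example-shard.tfrecords"  # hypothetical shard
lmdb_path = "celeba-lmdb"

# map_size is only an upper bound on how large the database may grow;
# the files on disk still need that much free space/quota available.
env = lmdb.open(lmdb_path, map_size=40 * 1024 ** 3)
count = 0
with env.begin(write=True) as txn:
    for record in tfrecord_loader(tfr_file, None, {"data": "byte"}):
        txn.put(str(count).encode(), record["data"].tobytes())
        count += 1
        if count % 100 == 0:
            print(count)
    # all puts are committed when the transaction closes; a failure here
    # surfaces as an mdb_txn_commit error
print("added %d items to the LMDB dataset." % count)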



The final command, !python convert_tfrecord_to_lmdb.py --dataset=celeba --tfr_path=$DATA_DIR/celeba/celeba-tfr --lmdb_path=$DATA_DIR/celeba/celeba-lmdb --split=train, fails with the following output:

.
.
.
26300
26400
26500
26600
26700
26800
26900
27000
added 27000 items to the LMDB dataset.
Traceback (most recent call last):
  File "convert_tfrecord_to_lmdb.py", line 73, in <module>
    main(args.dataset, args.split, args.tfr_path, args.lmdb_path)
  File "convert_tfrecord_to_lmdb.py", line 58, in main
    print('added %d items to the LMDB dataset.' % count)
lmdb.Error: mdb_txn_commit: Disk quota exceeded
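
As I understand it, this error means LMDB could not write the final commit because the target filesystem ran out of free space (or my user quota was exhausted), so the database on disk is likely incomplete. A quick sketch I used to check free space before re-running with --lmdb_path pointed at a roomier location (the path is a placeholder):

# Check free space on the filesystem holding the LMDB output.
import shutil

free_gb = shutil.disk_usage("/path/to/DATA_DIR").free / 1024 ** 3  # placeholder path
print("free space: %.1f GB" % free_gb)
# Note: shutil.disk_usage reports filesystem free space, not per-user
# quotas; on a quota-limited system the "quota" command shows the limit.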


I am not sure whether I have built the LMDB dataset correctly; I would appreciate your guidance.