"Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", Alec Radford, Luke Metz and Soumith Chintala
Training script (`dcgan_train.sh`):

```sh
# Working directory
WORKING_DIR=$HOME/projects

# Where the training (fine-tuned) checkpoint and logs will be saved to.
TRAIN_DIR=$WORKING_DIR/dcgan.tensorflow/exp1

# Where the dataset is saved to.
DATASET_DIR=$WORKING_DIR/datasets/celebA/tfrecords

CUDA_VISIBLE_DEVICES=0 \
python train.py \
  --train_dir=${TRAIN_DIR} \
  --dataset_dir=${DATASET_DIR} \
  --initial_learning_rate=0.0002 \
  --num_epochs_per_decay=5 \
  --learning_rate_decay_factor=0.9 \
  --batch_size=128 \
  --num_examples=202599 \
  --max_steps=30000 \
  --save_steps=2000 \
  --adam_beta1=0.5
```

Run the script:

```sh
$ ./dcgan_train.sh
```
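The decay flags above suggest a staircase exponential schedule: the learning rate is multiplied by `--learning_rate_decay_factor` once every `--num_epochs_per_decay` epochs, where one epoch is `--num_examples / --batch_size` steps. A minimal sketch of that schedule follows; the staircase form (and the `decayed_learning_rate` helper) is an assumption for illustration, not necessarily what `train.py` implements.

```python
# Sketch of the learning-rate schedule implied by the training flags,
# assuming TF-style staircase exponential decay (an assumption).

def decayed_learning_rate(step,
                          initial_lr=0.0002,       # --initial_learning_rate
                          decay_factor=0.9,        # --learning_rate_decay_factor
                          num_epochs_per_decay=5,  # --num_epochs_per_decay
                          batch_size=128,          # --batch_size
                          num_examples=202599):    # --num_examples (CelebA)
    steps_per_epoch = num_examples // batch_size          # 1582 steps per epoch
    decay_steps = steps_per_epoch * num_epochs_per_decay  # decay every 7910 steps
    return initial_lr * decay_factor ** (step // decay_steps)

print(decayed_learning_rate(0))      # initial learning rate
print(decayed_learning_rate(7910))   # after the first decay: lr x 0.9
print(decayed_learning_rate(29999))  # near --max_steps: lr x 0.9^3
```

So over the 30000 training steps the learning rate decays three times, ending at roughly 73% of its initial value.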
Use `tensorboard` for monitoring the loss and the generated images:

```sh
$ tensorboard --logdir=exp1
```
Generation script (`generate.sh`):

```sh
# Working directory
WORKING_DIR=$HOME/projects

# Where the trained checkpoint and logs were saved to.
TRAIN_DIR=$WORKING_DIR/dcgan.tensorflow/exp1

batch=$1

#CUDA_VISIBLE_DEVICES=0 \
python generate.py \
  --checkpoint_dir=${TRAIN_DIR} \
  --checkpoint_step=-1 \
  --batch_size=$batch \
  --seed=12345 \
  --make_gif=True \
  --save_step=2000

convert -delay 30 -loop 0 *.jpg generated_images.gif
```
Run the script with the batch size (the number of images you want to generate):

```sh
$ ./generate.sh batch_size
```
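The `--seed=12345` flag fixes the random latent vectors, so repeated runs against the same checkpoint produce the same images. A minimal NumPy sketch of that idea follows; the 100-dimensional uniform(-1, 1) prior is the standard DCGAN choice and an assumption here, since `generate.py` may sample z differently.

```python
import numpy as np

# Sketch: a fixed seed makes the latent batch, and hence the generated
# images, reproducible. z_dim=100 and the uniform prior are assumptions.

def sample_latents(batch_size, seed=12345, z_dim=100):
    rng = np.random.RandomState(seed)  # fixed seed -> identical z every run
    return rng.uniform(-1.0, 1.0, size=(batch_size, z_dim))

z1 = sample_latents(64)
z2 = sample_latents(64)
assert (z1 == z2).all()  # same seed, same latents, same images
print(z1.shape)          # (64, 100)
```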
If the `make_gif` flag is True (`--make_gif=True`), you will get the generated images at each saved step, and the `convert` command (from ImageMagick) makes one GIF file from those generated images.

Author: Il Gu Yi
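If ImageMagick is not available, the same GIF can be assembled in Python with Pillow. The sketch below is a hypothetical stand-in for the `convert` line, not part of this repo; `-delay 30` means 30/100 s (300 ms) per frame and `-loop 0` means loop forever, which the `duration` and `loop` arguments mirror.

```python
from pathlib import Path
from PIL import Image  # Pillow; assumed installed

# Pillow equivalent of: convert -delay 30 -loop 0 *.jpg generated_images.gif

def jpgs_to_gif(src_dir=".", out_path="generated_images.gif"):
    frames = [Image.open(p).convert("RGB")
              for p in sorted(Path(src_dir).glob("*.jpg"))]
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=300, loop=0)  # 300 ms per frame; 0 = loop forever
    return out_path
```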