sherjilozair / char-rnn-tensorflow

Multi-layer Recurrent Neural Networks (LSTM, RNN) for character-level language models in Python using Tensorflow
MIT License

Show defaults in --help #73

Closed · hugovk closed this 7 years ago

hugovk commented 7 years ago

Also removes unused imports (found with pyflakes .) and applies PEP 8 fixes.

Before:

$ python train.py --help && python sample.py --help
usage: train.py [-h] [--data_dir DATA_DIR] [--save_dir SAVE_DIR]
                [--rnn_size RNN_SIZE] [--num_layers NUM_LAYERS]
                [--model MODEL] [--batch_size BATCH_SIZE]
                [--seq_length SEQ_LENGTH] [--num_epochs NUM_EPOCHS]
                [--save_every SAVE_EVERY] [--grad_clip GRAD_CLIP]
                [--learning_rate LEARNING_RATE] [--decay_rate DECAY_RATE]
                [--init_from INIT_FROM]

optional arguments:
  -h, --help            show this help message and exit
  --data_dir DATA_DIR   data directory containing input.txt
  --save_dir SAVE_DIR   directory to store checkpointed models
  --rnn_size RNN_SIZE   size of RNN hidden state
  --num_layers NUM_LAYERS
                        number of layers in the RNN
  --model MODEL         rnn, gru, or lstm
  --batch_size BATCH_SIZE
                        minibatch size
  --seq_length SEQ_LENGTH
                        RNN sequence length
  --num_epochs NUM_EPOCHS
                        number of epochs
  --save_every SAVE_EVERY
                        save frequency
  --grad_clip GRAD_CLIP
                        clip gradients at this value
  --learning_rate LEARNING_RATE
                        learning rate
  --decay_rate DECAY_RATE
                        decay rate for rmsprop
  --init_from INIT_FROM
                        continue training from saved model at this path. Path
                        must contain files saved by previous training process:
                        'config.pkl' : configuration; 'chars_vocab.pkl' :
                        vocabulary definitions; 'checkpoint' : paths to model
                        file(s) (created by tf). Note: this file contains
                        absolute paths, be careful when moving files around;
                        'model.ckpt-*' : file(s) with model definition
                        (created by tf)
usage: sample.py [-h] [--save_dir SAVE_DIR] [-n N] [--prime PRIME]
                 [--sample SAMPLE]

optional arguments:
  -h, --help           show this help message and exit
  --save_dir SAVE_DIR  model directory to store checkpointed models
  -n N                 number of characters to sample
  --prime PRIME        prime text
  --sample SAMPLE      0 to use max at each timestep, 1 to sample at each
                       timestep, 2 to sample on spaces

After:

$ python train.py --help && python sample.py --help
usage: train.py [-h] [--data_dir DATA_DIR] [--save_dir SAVE_DIR]
                [--rnn_size RNN_SIZE] [--num_layers NUM_LAYERS]
                [--model MODEL] [--batch_size BATCH_SIZE]
                [--seq_length SEQ_LENGTH] [--num_epochs NUM_EPOCHS]
                [--save_every SAVE_EVERY] [--grad_clip GRAD_CLIP]
                [--learning_rate LEARNING_RATE] [--decay_rate DECAY_RATE]
                [--init_from INIT_FROM]

optional arguments:
  -h, --help            show this help message and exit
  --data_dir DATA_DIR   data directory containing input.txt (default:
                        data/tinyshakespeare)
  --save_dir SAVE_DIR   directory to store checkpointed models (default: save)
  --rnn_size RNN_SIZE   size of RNN hidden state (default: 128)
  --num_layers NUM_LAYERS
                        number of layers in the RNN (default: 2)
  --model MODEL         rnn, gru, or lstm (default: lstm)
  --batch_size BATCH_SIZE
                        minibatch size (default: 50)
  --seq_length SEQ_LENGTH
                        RNN sequence length (default: 50)
  --num_epochs NUM_EPOCHS
                        number of epochs (default: 50)
  --save_every SAVE_EVERY
                        save frequency (default: 1000)
  --grad_clip GRAD_CLIP
                        clip gradients at this value (default: 5.0)
  --learning_rate LEARNING_RATE
                        learning rate (default: 0.002)
  --decay_rate DECAY_RATE
                        decay rate for rmsprop (default: 0.97)
  --init_from INIT_FROM
                        continue training from saved model at this path. Path
                        must contain files saved by previous training process:
                        'config.pkl' : configuration; 'chars_vocab.pkl' :
                        vocabulary definitions; 'checkpoint' : paths to model
                        file(s) (created by tf). Note: this file contains
                        absolute paths, be careful when moving files around;
                        'model.ckpt-*' : file(s) with model definition
                        (created by tf) (default: None)
usage: sample.py [-h] [--save_dir SAVE_DIR] [-n N] [--prime PRIME]
                 [--sample SAMPLE]

optional arguments:
  -h, --help           show this help message and exit
  --save_dir SAVE_DIR  model directory to store checkpointed models (default:
                       save)
  -n N                 number of characters to sample (default: 500)
  --prime PRIME        prime text (default: )
  --sample SAMPLE      0 to use max at each timestep, 1 to sample at each
                       timestep, 2 to sample on spaces (default: 1)
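
For context (not part of the diff shown here): the (default: …) suffixes in the output above are exactly the format argparse produces when a parser is built with argparse.ArgumentDefaultsHelpFormatter, which is the standard way to get this behaviour. A minimal sketch, reusing two of train.py's arguments purely as an illustration:

import argparse

# ArgumentDefaultsHelpFormatter appends "(default: ...)" to each option's
# help text, matching the "After" output above.
parser = argparse.ArgumentParser(
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)

# Two of train.py's options, reproduced here only to illustrate the formatter;
# the defaults match the values shown in the help output above.
parser.add_argument('--rnn_size', type=int, default=128,
                    help='size of RNN hidden state')
parser.add_argument('--learning_rate', type=float, default=0.002,
                    help='learning rate')

args = parser.parse_args()

Running such a script with --help prints each option's help text followed by its default, so the values never have to be repeated by hand in every help string.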
hugovk commented 7 years ago

@sherjilozair Resolved the merge conflicts. Is this okay to merge, or do you have any questions?

coveralls commented 7 years ago


Coverage remained the same at 92.72% when pulling ccb85108439e8fe9b65c1767f5c61bf58c1965a5 on hugovk:show-defaults-in-help into ed54f4cd27cbbe373801a0b05a20396855a730ad on sherjilozair:master.