netpcvnn closed this issue 4 years ago
We previously conducted experiments comparing beam search (beam width = 5) and greedy search (beam width = 1) and found no obvious improvement from the larger beam width (maybe slightly improved accuracy) in our method. Besides, greedy search is much faster than beam search with beam width = 5.
Thank you for the information. Have you tried to train or infer with a batch_size different from 1?
The batch_size (for both training and inference) can be greater than 1. See train_eval.py and config.py for examples.
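For instance, a hypothetical excerpt (the actual flag name in `config.py` may differ):

```python
# Hypothetical config.py excerpt; check the repo for the real flag name.
batch_size = 32  # picked up by train_eval.py for both training and evaluation
```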
Thank you.
What about batch_size in `demo.py`? When I change `image = tf.placeholder(tf.uint8, (1, 32, 100, 3), name='image')` to a batch_size different from 1, I receive an error.
`demo.py` also supports a bigger batch size, like `train_eval.py`. You can customize `demo.py` to support a bigger batch size by modifying the batch_size of not only the `tf.placeholder` but also the `raw_image`, or other code according to the reported error.
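For example, a minimal sketch of that change (dummy zero arrays stand in for real image data; the feed pipeline around it is assumed, not quoted from `demo.py`):

```python
import numpy as np
import tensorflow as tf

batch_size = 4  # any value > 1, or None for a variable batch size
image = tf.placeholder(tf.uint8, (batch_size, 32, 100, 3), name='image')

# Whatever fills raw_image must now have a matching leading dimension,
# e.g. a stack of batch_size preprocessed crops.
raw_image = np.stack([np.zeros((32, 100, 3), np.uint8) for _ in range(batch_size)])

with tf.Session() as sess:
    fetched = sess.run(image, feed_dict={image: raw_image})
    print(fetched.shape)  # (4, 32, 100, 3)
```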
Thanks for your answer. When I modified `config.py` to beam_width=0 to force not using beam search and ran `train_eval.py`, I got an error in `decoder_conv.py` in the `dynamic_decode` function. It looks like beam search doesn't support batching or something like that.
beam_width=1 is equivalent to greedy search.
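For context, this is the usual `tf.contrib.seq2seq` pattern behind such a flag (a sketch of the common idiom, not necessarily what `decoder_conv.py` does; `cell`, `embedding`, and the other arguments are assumed): beam_width=0 selects no valid decoder, while beam_width=1 makes beam search degenerate to greedy.

```python
import tensorflow as tf
from tensorflow.contrib import seq2seq

def build_decoder(cell, embedding, start_tokens, end_token, initial_state, beam_width):
    if beam_width > 1:
        # Real beam search; initial_state must first be tiled beam_width
        # times (see tf.contrib.seq2seq.tile_batch).
        return seq2seq.BeamSearchDecoder(
            cell=cell, embedding=embedding,
            start_tokens=start_tokens, end_token=end_token,
            initial_state=initial_state, beam_width=beam_width)
    # beam_width == 1 degenerates to greedy decoding.
    helper = seq2seq.GreedyEmbeddingHelper(embedding, start_tokens, end_token)
    return seq2seq.BasicDecoder(cell, helper, initial_state)
```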
In my understanding, when beam_width=1, the code in `datasets.py` (lines 44-45) still uses batch_size = 1 for evaluation. Is that correct?
Yes, beam search doesn't support batch_size > 1 in our code.
Thank you. Is there any way to support batch_size > 1 in your code?
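For what it's worth, the stock `tf.contrib.seq2seq.BeamSearchDecoder` does handle batch_size > 1, provided every encoder-side tensor is tiled beam_width times with `tf.contrib.seq2seq.tile_batch`. A minimal sketch (the `encoder_outputs` below is a dummy stand-in, not this repo's variable):

```python
import tensorflow as tf
from tensorflow.contrib import seq2seq

batch_size, beam_width = 4, 5
# Dummy stand-in for the real encoder feature sequence.
encoder_outputs = tf.zeros([batch_size, 25, 256])

# Each beam needs its own copy of the encoder-side tensors, so they are
# tiled beam_width times along the batch dimension.
tiled_outputs = seq2seq.tile_batch(encoder_outputs, multiplier=beam_width)
print(tiled_outputs.shape)  # (20, 25, 256), i.e. (batch_size * beam_width, ...)
```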
Hello, have you evaluated the performance of beam search versus greedy search?