Closed: thinh276 closed this issue 2 years ago
Hi @thinh276, the --num_thread_reader flag is used in the dataloaders, which can speed up data reading. --num_thread_reader=0 in MSR-VTT can be regarded as a typo; feel free to adjust its value.
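A minimal sketch of what this flag typically controls, assuming --num_thread_reader is passed to the num_workers argument of PyTorch's DataLoader (the dataset class and sizes below are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class VideoFeatureDataset(Dataset):
    """Toy stand-in for the real dataset; returns a fake feature tensor."""
    def __init__(self, n=16):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        # Pretend this is a slow disk/video read.
        return torch.ones(4) * idx

# num_workers=0: data is loaded in the main process (simple but slow).
# num_workers=2: two worker processes prefetch batches in parallel,
# which is where the training-time speedup comes from.
loader = DataLoader(VideoFeatureDataset(), batch_size=4, num_workers=2)
batches = [b for b in loader]
print(len(batches))  # 16 samples / batch size 4 -> 4 batches
```

Since the workers only read and collate data, changing their count should not change what the model sees, only how fast it arrives.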
Thank you so much! My workstation is running slowly now. I will test with some values of --num_thread_reader.
Does this value affect the accuracy? I used your code (--num_thread_reader=0) and tested on 2 computers; the results have a gap:
If --num_thread_reader=0 only affects the training time, are my training results normal?
Thank you!
Hi @thinh276, interesting results, but I do not think --num_thread_reader=0 will affect the performance. The difference may be caused by the GPU number or other factors (not sure), e.g., CUDA's nondeterministic behavior. The links below are for your information.
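A minimal sketch of the usual reproducibility knobs in PyTorch (this is general PyTorch practice, not code from this repo; even with all of these set, some CUDA kernels and multi-GPU reductions can remain nondeterministic, which may explain small gaps between machines):

```python
import random

import numpy as np
import torch

def set_seed(seed=42):
    # Seed every RNG the training loop might touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines
    # Trade speed for determinism in cuDNN convolutions.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
a = torch.rand(3)
set_seed(42)
b = torch.rand(3)
print(torch.equal(a, b))  # same seed -> identical draws
```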
Thanks.
I tested --num_thread_reader=2 and the training time decreased from 33 hours to 10 hours. (Great!)
Thank you for the links; I will read them.
I would like to share my detailed results with you and others:

- (--num_thread_reader=0, --nproc_per_node=4, batch size 128): There is a gap with the results in your paper, but the trend among similar calculation methods is the same.
- (--num_thread_reader=0, --nproc_per_node=2, batch size 64): meanP is higher, but seqTransf cannot reach 44.5 as in the paper.
Can you explain the number of thread readers in the training configuration? Can I adjust this value to decrease my training time? (Why is --num_thread_reader=0 used for MSR-VTT while --num_thread_reader=2 is used for the other datasets?) Thank you so much!