Hi @JasonForJoy, can you please confirm whether reducing the batch size (due to only having a low-end GPU available) can affect the model's reported performance?
Secondly, no batch size, including the recommended one (96), evenly divides the total number of training, validation, or test samples, so the last batch misses out on the remaining samples. Does this contribute to lowering recall and the other metric values?
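To make the remainder point concrete, here is a minimal sketch (the sample count below is hypothetical, not the actual dataset size) of how many samples a non-divisor batch size leaves out if the final partial batch is dropped:

```python
def leftover(n_samples: int, batch_size: int) -> int:
    # Samples that do not fit into any full batch
    return n_samples % batch_size

# Hypothetical test-set size; with batch_size = 96 the remainder
# is silently skipped whenever only full batches are evaluated.
print(leftover(10000, 96))  # -> 16 samples never scored
```

If evaluation discards that remainder, the reported recall is computed over slightly fewer examples than the full test set, which can shift the metrics a little.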