RnDProjectsDeebul / ManojKolpeThesis

Mozilla Public License 2.0

Impact of training dataset batch size on the evaluation dataset model performance #19

Open Manojkl opened 1 year ago

Manojkl commented 1 year ago

Training the model with different batch sizes and comparing the resulting models makes more sense than taking the same trained model and evaluating it with varying batch sizes, because the learning (the temporal transfer of information) happens during training. In evaluation, we only reuse the information learned during training to predict the segmentation of each frame in the evaluation dataset. This is batch-mode training and evaluation. In the previous issue, I took continuous frames and compared the performance of Vanilla, GP, and LSTM. Viewing the model predictions side by side, LSTM performs reasonably well compared to GP and Vanilla. What are your thoughts on this?
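To make the two evaluation regimes concrete, here is a minimal sketch (a hypothetical toy recurrent model, not the thesis code): evaluating the whole sequence at once versus feeding one frame at a time while explicitly carrying the hidden state forward. The function names and the scalar "weights" are assumptions for illustration only.

```python
import math

# Toy stand-in for a recurrent (LSTM-like) segmentation model:
# a single scalar hidden state h is carried across frames.
A, B = 0.5, 1.0  # fixed illustrative "weights"

def step(h, x):
    """Process one frame x given the previous hidden state h."""
    h = math.tanh(A * h + B * x)
    return h, h  # (new state, per-frame prediction)

def eval_sequence(frames):
    """Batch-style evaluation: the whole sequence is available at once."""
    h, preds = 0.0, []
    for x in frames:
        h, y = step(h, x)
        preds.append(y)
    return preds

def eval_frame_by_frame(frames):
    """Online evaluation: one frame at a time, state carried explicitly."""
    h, preds = 0.0, []
    for x in frames:        # in a real app each x would arrive separately
        h, y = step(h, x)   # only h must persist between calls
        preds.append(y)
    return preds

frames = [0.1, 0.4, -0.2, 0.7]
assert eval_sequence(frames) == eval_frame_by_frame(frames)
```

The two loops compute identical results; the point is that frame-by-frame inference only requires keeping the hidden state between calls, not loading the whole sequence into memory.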

(attached diagram: Untitled Diagram drawio (4))

deebuls commented 1 year ago

@Manojkl From the application point of view, the second scenario, where you infer on one frame and pass information to the second frame, is the one applicable to embedded deployment, because on a mobile phone you cannot load 4 frames together.

Manojkl commented 1 year ago

Yes, for a real-time application that is true. However, if we capture the sequence, save it on the device, process the whole sequence, and only then show the output, that is the other scenario, where processing in batches of 2 or 4 is applicable.
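The capture-then-process scenario can be sketched with a small helper (hypothetical, not from this repo) that splits a recorded sequence into batches of a chosen size for offline processing:

```python
def chunk_frames(frames, batch_size):
    """Split a recorded frame sequence into batches for offline processing."""
    return [frames[i:i + batch_size] for i in range(0, len(frames), batch_size)]

recorded = list(range(10))           # stand-in for 10 captured frames
batches = chunk_frames(recorded, 4)  # process 4 frames at a time
# batches -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Note the last batch may be shorter than `batch_size`, which the evaluation code would need to handle (or drop).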

deebuls commented 1 year ago

OK, maybe. I don't know how much space is required and how much is available on a mobile device. Anyway, both cases are interesting. You can compare both and conclude which is the better solution. Again, it depends on the time available to you, so prioritise based on which question, if answered, will be most interesting to know.