chaneyddtt / UDA-Animal-Pose


Descending accuracy for each batch #15

Open · yaldashbz opened this issue 1 year ago

yaldashbz commented 1 year ago

Hi,

I am using your dataset and your pretrained model, and I am training the mean teacher with unchanged hyper-parameters. Still, the accuracy in the validation phase (acc_re, which you print in the progress Bar) follows a descending curve for each batch, and then for the next batch it suddenly jumps back to the starting accuracy of the previous batch, so it looks like a periodic graph. Could you please explain the reason?

Thanks.

chaneyddtt commented 1 year ago

Hi @yaldashbz, I have not encountered this before. Can you provide more details about the phenomenon?

yaldashbz commented 1 year ago

I was wrong about the jump after each batch; sorry about that. But I still have a question. In the training phase, after each epoch of training, the accuracy is calculated in the validation method, and the accuracy graph has a descending form (for example, in the second epoch it starts at 1.0, drops to 0.86, 0.81, ..., oscillates between 0.69 and 0.78, and ends at 0.70). Note that I am not using mixup for training. I don't understand the reason.

I've attached the graph for ACC and the loss of a model trained for one epoch.

[Attached: Screenshot from 2022-09-29 15-12-31, showing the accuracy and loss curves for one epoch]

chaneyddtt commented 1 year ago

We are computing the average accuracy over the test data during validation. At the first iteration only one batch of data has been used, and the accuracy can be very high depending on the difficulty of the data in that first batch. As the iterations progress, the accuracy is computed over more batches of data, so the displayed value changes and gradually settles toward the average over the whole test set.
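In other words, the number printed in the Bar is a cumulative running mean, not the accuracy of the current batch. Here is a minimal sketch of that behavior, assuming an AverageMeter-style accumulator; the class, names, and values below are illustrative, not the repo's exact API:

```python
# A minimal sketch of a running-average accuracy meter, in the spirit of the
# AverageMeter pattern common in PyTorch pose-estimation codebases. All names
# and values below are illustrative, not the repo's exact API.

class RunningAverage:
    """Tracks the cumulative mean of a metric over all batches seen so far."""

    def __init__(self):
        self.sum = 0.0
        self.count = 0

    def update(self, value, n=1):
        # value: mean accuracy of the current batch; n: number of samples in it
        self.sum += value * n
        self.count += n

    @property
    def avg(self):
        return self.sum / max(self.count, 1)


acc_meter = RunningAverage()
batch_accs = [0.95, 0.80, 0.75, 0.70, 0.72]  # made-up per-batch accuracies
for i, acc in enumerate(batch_accs, 1):
    acc_meter.update(acc, n=32)  # hypothetical batch size of 32
    print(f"iter {i}: batch acc {acc:.2f}, running avg {acc_meter.avg:.3f}")
```

The printed running average starts at whatever the first batch happens to score and then drifts toward the overall mean, which produces exactly the descending curve described above.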