akshitac8 / tfvaegan

[ECCV 2020] Official PyTorch implementation of "Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification". SOTA results for ZSL and GZSL
MIT License

Questions about training #26

Closed 0xMarsRover closed 3 years ago

0xMarsRover commented 3 years ago

Hi Akshitac,

Sorry to open a new issue, but I have a few more questions about training.

Question 1: During training, for each epoch the model is trained on seen data and then evaluated on the test set, giving that epoch's accuracy. After 30 epochs, you pick the best accuracy as the final result. Am I right?

Question 2: What about the argparse parameter --syn_num? I see the default is 600 (meaning 600 visual representations are generated for each action class?). Is there any experiment indicating that this parameter significantly influences model performance? Any suggestions?

Thanks in advance.

Kind regards. Kaiqiang

0xMarsRover commented 3 years ago

In addition, I tried a different class embedding (2048-dimensional), but got the error `RuntimeError: CUDA error: device-side assert triggered`. I set argparse as `--nz 2048 --attSize 2048`.

Also, this error comes from `optimizerE.step()` in `train_action.py`. Is this error about a tensor size mismatch? I have no idea right now. Thanks.

0xMarsRover commented 3 years ago


For the issue above, here are more details:

```
/pytorch/aten/src/ATen/native/cuda/Loss.cu:111: operator(): block: [1267,0,0], thread: [31,0,0]
Assertion `input_val >= zero && input_val <= one` failed.

Traceback (most recent call last):
  File "/content/kg_gnn_gan/train_tfvaegan.py", line 274, in <module>
    vae_loss_seen = loss_fn(recon_x, input_resv, means, log_var)
  File "/content/kg_gnn_gan/train_tfvaegan.py", line 72, in loss_fn
    BCE = torch.nn.functional.binary_cross_entropy(recon_x + 1e-12, x.detach(), size_average=False)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2893, in binary_cross_entropy
    return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
RuntimeError: CUDA error: device-side assert triggered
```

It seems the issue is in the calculation of the loss function.
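For context, PyTorch's `binary_cross_entropy` asserts that every input value lies in [0, 1], which matches the `input_val >= zero && input_val <= one` message in the assert. A dependency-free sketch of that precondition (a hypothetical helper for illustration, not code from this repo):

```python
import math

def bce(p, y):
    """Binary cross-entropy for one prediction p against target y in {0, 1}.
    PyTorch's binary_cross_entropy enforces the same precondition on the GPU:
    every input must lie in [0, 1], otherwise a device-side assert fires."""
    if not (0.0 <= p <= 1.0):
        raise ValueError("input_val >= zero && input_val <= one failed")
    eps = 1e-12  # avoid log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# An unbounded decoder output such as 1.3 trips the assert, while squashing
# it into [0, 1] first (e.g. with a sigmoid) keeps BCE well-defined.
squashed = 1.0 / (1.0 + math.exp(-1.3))  # sigmoid(1.3) ≈ 0.786
loss = bce(squashed, 1.0)
```

If the decoder's final activation does not bound its output to [0, 1] for the new embedding, this is one plausible way the assert could be triggered.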

0xMarsRover commented 3 years ago


Some updates on the issue above.

I tested another semantic embedding with 1024 dimensions, and it works.

But the case using a 2048-dimensional vector as the semantic embedding still fails.

Note: whenever I apply a different semantic embedding, I set the parameters in the script to match the embedding size; I do not change any other code or settings (only the semantic embedding).
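One way to localize this kind of failure (a hypothetical debugging helper, not from the repo) is to check, on CPU, the min/max of whatever is fed to the BCE loss before the loss call, since any value outside [0, 1] would trip the device-side assert:

```python
def in_unit_interval(values):
    """Hypothetical sanity check: report whether every value fed to BCE
    lies in [0, 1], plus the observed min/max for debugging."""
    lo, hi = min(values), max(values)
    return (0.0 <= lo and hi <= 1.0), lo, hi

# e.g. run on recon_x.flatten().tolist() and on the target tensor
# just before the loss call to see which one is out of range.
ok, lo, hi = in_unit_interval([0.2, 0.9, 1.0])
bad, lo2, hi2 = in_unit_interval([0.2, 1.3])
```

Running the script with `CUDA_LAUNCH_BLOCKING=1` (or on CPU) also tends to make the failing call site easier to pin down than the asynchronous CUDA assert.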

Looking forward to hearing from you. Thanks.

Kind regards. Kai

in-my-heart commented 3 years ago


Based on my understanding of this paper: your understanding is correct. For the syn_num parameter, please refer to FREE: Feature Refinement for Generalized Zero-Shot Learning.

0xMarsRover commented 3 years ago


Thank you so much. Nice paper recommendation.

in-my-heart commented 3 years ago


You bet! Do you know this question? https://github.com/akshitac8/tfvaegan/issues/24#issuecomment-909014584

0xMarsRover commented 3 years ago


Sorry, I did not test on image ZSL; I only focus on action recognition. I guess the issue may be caused by the hyper-parameter settings.

in-my-heart commented 3 years ago


Oh, it doesn't matter.

akshitac8 commented 3 years ago

Question 1: During training, for each epoch the model is trained on seen data and then evaluated on the test set, giving that epoch's accuracy. After 30 epochs, you pick the best accuracy as the final result. Am I right?

Ans - Yes, your understanding is correct.
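That protocol can be sketched as follows (placeholder functions, not the repo's actual training code):

```python
def select_best(train_one_epoch, evaluate, n_epochs=30):
    """Train on seen data each epoch, evaluate on the test split,
    and report the best accuracy observed over all epochs."""
    best_acc = 0.0
    for epoch in range(n_epochs):
        train_one_epoch()                     # placeholder for the real VAE-GAN step
        best_acc = max(best_acc, evaluate())  # placeholder for ZSL/GZSL evaluation
    return best_acc

# Toy usage with a fake accuracy curve:
accs = iter([0.31, 0.42, 0.39])
best = select_best(lambda: None, lambda: next(accs), n_epochs=3)
```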

Question 2: What about the argparse parameter --syn_num? I see the default is 600 (meaning 600 visual representations are generated for each action class?). Is there any experiment indicating that this parameter significantly influences model performance? Any suggestions?

Ans - The syn_num parameter controls how many unseen-class samples per class you synthesize from the generator. Yes, this parameter influences model performance, because after synthesis you also train a classifier: its training data is a combination of the synthesized unseen-class samples and the real samples.
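The role of syn_num described above can be sketched like this (hypothetical function and argument names, for illustration only):

```python
import random

def build_classifier_data(generator, unseen_classes, seen_data, syn_num=600):
    """Draw syn_num synthetic features per unseen class from the generator,
    then combine them with the real seen-class features; the final
    classifier is trained on this combined set."""
    synthetic = [(generator(c), c) for c in unseen_classes for _ in range(syn_num)]
    return synthetic + list(seen_data)

# Toy usage: a stand-in "generator" that returns one noisy scalar per class.
data = build_classifier_data(
    generator=lambda c: c + random.random(),
    unseen_classes=[10, 11],
    seen_data=[(0.5, 0), (1.2, 1)],
    syn_num=3,
)
```

Larger syn_num gives the classifier more (but noisier) unseen-class evidence, which is why the value is worth tuning per dataset.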

0xMarsRover commented 3 years ago


Thanks for your clarification. Much appreciated.