Hi, thanks for the nice work! I've tested your code, but a few questions came up during the implementation.
The feature extraction stage gives a very nice result, but the train and val losses look weird. The training loss keeps decreasing into negative values, which seems strange, while the validation loss keeps increasing. The results are still good, though, so I'm confused. In the code I noticed that you use an automatic weighting method with two adaptable parameters. Could you explain that a bit further?
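For context, here is my reading of that weighting, assuming it is the uncertainty-based scheme (Kendall et al.) with one learnable log-variance per loss. The class below is just my own sketch, not your code, but if this is roughly what you do, the learnable log-variance terms can go negative once sigma drops below 1, which would explain why the combined training loss dips below zero:

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Sketch of uncertainty-based automatic loss weighting with two
    learnable parameters (my assumption, not copied from this repo)."""
    def __init__(self, num_tasks: int = 2):
        super().__init__()
        # learn log(sigma^2) per task for numerical stability
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, *losses: torch.Tensor) -> torch.Tensor:
        total = torch.zeros((), device=self.log_vars.device)
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])  # 1 / sigma^2
            # 0.5 * log(sigma^2) is negative once sigma < 1, so the
            # combined objective is free to drop below zero even while
            # the individual task losses stay positive
            total = total + 0.5 * (precision * loss + self.log_vars[i])
        return total

# hypothetical usage with two task losses:
# criterion = UncertaintyWeighting(num_tasks=2)
# loss = criterion(loss_task1, loss_task2)
```

Is that roughly what the two parameters are doing?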
The results dropped after running the TCN stages, so in my runs feature extraction alone yields better results. I don't know whether that is because PyTorch has updated its pretrained models, so the weights are now different.
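In case the backbone comes from torchvision (I'm not sure which model you load, so ResNet-50 below is only a placeholder), pinning the weights enum would rule out silent changes from library updates:

```python
import torchvision.models as models

# Pin a specific pretrained-weight version so results do not shift when
# torchvision changes its defaults. Swap in whatever backbone the repo
# actually uses; this is just an illustration.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
```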
After the feature extraction stage:
wandb: Run summary:
wandb: epoch 13
wandb: test_acc 0.89671
wandb: test_acc_test 0.90168
wandb: test_acc_train 0.98048
wandb: test_acc_val 0.84513
(this is with test_extract set to True, so I assume we should look at the test_acc_test value, which is 0.90168)
After 2 stages of TCN:
wandb: Run summary:
wandb: epoch 11
wandb: loss_epoch 1.18394
wandb: loss_step 1.18998
wandb: test_S1_acc 0.87507
wandb: test_acc 0.88217
wandb: test_avg_precision 0.79771
wandb: test_avg_recall 0.872
wandb: train_S1_acc 0.986
wandb: train_acc 0.98297
wandb: train_avg_precision 0.94342
wandb: train_avg_recall 0.97663
wandb: val: max acc last Stage 0.90059
wandb: val_S1_acc 0.8939
wandb: val_acc 0.89872
wandb: val_avg_precision 0.83357
wandb: val_avg_recall 0.88048
wandb: val_loss 1.31414
Thanks in advance!