AlessioLucciola / advanced-machine-learning

Homeworks for the master's degree in Computer Science course "Advanced Machine Learning" (AML) at the University of Rome "La Sapienza" (A.Y. 2023-2024).

Practice Q3 - Report and Parameter Fine-Tuning Analysis #9

Open AlessioLucciola opened 1 year ago

AlessioLucciola commented 1 year ago

Objective: In this exercise, you will analyze the results obtained from a deep learning model you previously trained and perform parameter fine-tuning to optimize its performance. The key considerations are learning rate, milestones, and weight decay. You will also use tables and plots to visualize and interpret the outcomes.

Instructions:

  1. Analysis: Analyze the generated report and answer the following questions:

    • Is there evidence of overfitting or underfitting in the initial training results?
    • Are there fluctuations in training and validation loss or accuracy? If so, what might be causing them?
    • What can you infer from the initial learning rate, milestones, and weight decay settings?
  2. Parameter Fine-Tuning: Based on your analysis, perform parameter fine-tuning to optimize model performance. Adjust the following parameters (a minimal sketch of where these hyperparameters enter a typical training setup is given after these instructions):

    • Learning Rate: Experiment with different learning rates (higher and lower values) to find an optimal rate.
    • Milestones: Modify the milestone values for adjusting the learning rate schedule.
    • Weight Decay: Explore different weight decay values.
  3. Re-Training: Train the model with the adjusted hyperparameters. Record the training progress and generate a new report, including performance metrics and line plots as before.

  4. Final Analysis: Analyze the results of the fine-tuned model and compare them with the initial training. Answer the following questions:

    • Has parameter fine-tuning improved model performance?
    • Did it mitigate overfitting or underfitting issues?
    • What can you conclude about the optimal hyperparameters for this task?
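
As a reference for step 2, the sketch below shows where the three hyperparameters typically enter a PyTorch training setup. It assumes SGD and a MultiStepLR scheduler with a decay factor gamma, which matches the "adjusted by gamma" comment in the runs below; the model, data, and the gamma value itself are placeholders, not taken from the repository.

```python
# Hypothetical sketch: how lr, milestones, and weight_decay are wired into a
# PyTorch optimizer and learning-rate scheduler. Model, loss, and data are placeholders.
import torch

model = torch.nn.Linear(10, 10)        # placeholder model
criterion = torch.nn.MSELoss()         # placeholder loss

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-1,                           # learning rate under study
    weight_decay=1e-5,                 # L2 penalty under study
)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,
    milestones=[10, 30],               # epochs after which lr is multiplied by gamma
    gamma=0.1,                         # assumed decay factor (not stated in the thread)
)

for epoch in range(40):
    x, y = torch.randn(32, 10), torch.randn(32, 10)  # placeholder batch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()                   # advance the schedule once per epoch
```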
CorsiDanilo commented 1 year ago

Default parameters

lr=1e-01 # learning rate
milestones=[10,30] # the epochs after which the learning rate is adjusted by gamma
weight_decay=1e-05 # weight decay (L2 penalty)


[Epoch: 1, Iteration: 1] training loss: 88.854 [Epoch: 1, Iteration: 201] training loss: 87.794 [Epoch: 1, Iteration: 401] training loss: 90.492 [Epoch: 1, Iteration: 601] training loss: 89.543 [Epoch: 1, Iteration: 1] validation loss: 87.264 [Epoch: 2, Iteration: 1] training loss: 91.612 [Epoch: 2, Iteration: 201] training loss: 85.417 [Epoch: 2, Iteration: 401] training loss: 89.979 [Epoch: 2, Iteration: 601] training loss: 92.994 [Epoch: 2, Iteration: 1] validation loss: 89.769 [Epoch: 3, Iteration: 1] training loss: 89.451 [Epoch: 3, Iteration: 201] training loss: 83.737 [Epoch: 3, Iteration: 401] training loss: 83.416 [Epoch: 3, Iteration: 601] training loss: 90.718 [Epoch: 3, Iteration: 1] validation loss: 85.817 [Epoch: 4, Iteration: 1] training loss: 86.751 [Epoch: 4, Iteration: 201] training loss: 88.978 [Epoch: 4, Iteration: 401] training loss: 89.613 [Epoch: 4, Iteration: 601] training loss: 92.111 [Epoch: 4, Iteration: 1] validation loss: 81.062 [Epoch: 5, Iteration: 1] training loss: 84.573 [Epoch: 5, Iteration: 201] training loss: 87.888 [Epoch: 5, Iteration: 401] training loss: 87.347 [Epoch: 5, Iteration: 601] training loss: 85.462 [Epoch: 5, Iteration: 1] validation loss: 87.062 [Epoch: 6, Iteration: 1] training loss: 89.998 [Epoch: 6, Iteration: 201] training loss: 83.358 [Epoch: 6, Iteration: 401] training loss: 83.064 [Epoch: 6, Iteration: 601] training loss: 80.132 [Epoch: 6, Iteration: 1] validation loss: 75.832 [Epoch: 7, Iteration: 1] training loss: 85.803 [Epoch: 7, Iteration: 201] training loss: 82.705 [Epoch: 7, Iteration: 401] training loss: 83.222 [Epoch: 7, Iteration: 601] training loss: 86.006 [Epoch: 7, Iteration: 1] validation loss: 79.537 [Epoch: 8, Iteration: 1] training loss: 80.500 [Epoch: 8, Iteration: 201] training loss: 81.201 [Epoch: 8, Iteration: 401] training loss: 90.418 [Epoch: 8, Iteration: 601] training loss: 83.574 [Epoch: 8, Iteration: 1] validation loss: 79.272 [Epoch: 9, Iteration: 1] training loss: 82.025 [Epoch: 9, Iteration: 201] training loss: 81.065 [Epoch: 9, Iteration: 401] training loss: 86.758 [Epoch: 9, Iteration: 601] training loss: 79.216 [Epoch: 9, Iteration: 1] validation loss: 78.666 [Epoch: 10, Iteration: 1] training loss: 81.593 [Epoch: 10, Iteration: 201] training loss: 79.310 [Epoch: 10, Iteration: 401] training loss: 79.555 [Epoch: 10, Iteration: 601] training loss: 82.884 [Epoch: 10, Iteration: 1] validation loss: 76.792 [Epoch: 11, Iteration: 1] training loss: 83.242 [Epoch: 11, Iteration: 201] training loss: 77.414 [Epoch: 11, Iteration: 401] training loss: 79.022 [Epoch: 11, Iteration: 601] training loss: 83.829 [Epoch: 11, Iteration: 1] validation loss: 77.591 [Epoch: 12, Iteration: 1] training loss: 82.420 [Epoch: 12, Iteration: 201] training loss: 82.321 [Epoch: 12, Iteration: 401] training loss: 84.465 [Epoch: 12, Iteration: 601] training loss: 82.854 [Epoch: 12, Iteration: 1] validation loss: 81.191 [Epoch: 13, Iteration: 1] training loss: 83.304 [Epoch: 13, Iteration: 201] training loss: 85.298 [Epoch: 13, Iteration: 401] training loss: 78.991 [Epoch: 13, Iteration: 601] training loss: 77.437 [Epoch: 13, Iteration: 1] validation loss: 81.473 [Epoch: 14, Iteration: 1] training loss: 77.683 [Epoch: 14, Iteration: 201] training loss: 86.122 [Epoch: 14, Iteration: 401] training loss: 79.907 [Epoch: 14, Iteration: 601] training loss: 85.838 [Epoch: 14, Iteration: 1] validation loss: 79.169 [Epoch: 15, Iteration: 1] training loss: 80.561 [Epoch: 15, Iteration: 201] training loss: 78.260 [Epoch: 15, Iteration: 401] 
training loss: 81.947 [Epoch: 15, Iteration: 601] training loss: 83.973 [Epoch: 15, Iteration: 1] validation loss: 77.848 [Epoch: 16, Iteration: 1] training loss: 79.763 [Epoch: 16, Iteration: 201] training loss: 79.931 [Epoch: 16, Iteration: 401] training loss: 78.401 [Epoch: 16, Iteration: 601] training loss: 82.330 [Epoch: 16, Iteration: 1] validation loss: 78.301 [Epoch: 17, Iteration: 1] training loss: 81.287 [Epoch: 17, Iteration: 201] training loss: 80.576 [Epoch: 17, Iteration: 401] training loss: 85.638 [Epoch: 17, Iteration: 601] training loss: 87.461 [Epoch: 17, Iteration: 1] validation loss: 79.543 [Epoch: 18, Iteration: 1] training loss: 84.515 [Epoch: 18, Iteration: 201] training loss: 86.503 [Epoch: 18, Iteration: 401] training loss: 90.606 [Epoch: 18, Iteration: 601] training loss: 82.612 [Epoch: 18, Iteration: 1] validation loss: 76.538 [Epoch: 19, Iteration: 1] training loss: 82.451 [Epoch: 19, Iteration: 201] training loss: 77.203 [Epoch: 19, Iteration: 401] training loss: 85.076 [Epoch: 19, Iteration: 601] training loss: 84.893 [Epoch: 19, Iteration: 1] validation loss: 76.504 [Epoch: 20, Iteration: 1] training loss: 79.793 [Epoch: 20, Iteration: 201] training loss: 82.698 [Epoch: 20, Iteration: 401] training loss: 80.180 [Epoch: 20, Iteration: 601] training loss: 81.276 [Epoch: 20, Iteration: 1] validation loss: 80.961 [Epoch: 21, Iteration: 1] training loss: 79.329 [Epoch: 21, Iteration: 201] training loss: 77.440 [Epoch: 21, Iteration: 401] training loss: 84.075 [Epoch: 21, Iteration: 601] training loss: 83.222 [Epoch: 21, Iteration: 1] validation loss: 73.377 [Epoch: 22, Iteration: 1] training loss: 82.805 [Epoch: 22, Iteration: 201] training loss: 77.450 [Epoch: 22, Iteration: 401] training loss: 79.967 [Epoch: 22, Iteration: 601] training loss: 81.716 [Epoch: 22, Iteration: 1] validation loss: 77.316 [Epoch: 23, Iteration: 1] training loss: 78.064 [Epoch: 23, Iteration: 201] training loss: 82.775 [Epoch: 23, Iteration: 401] training loss: 83.426 [Epoch: 23, Iteration: 601] training loss: 81.086 [Epoch: 23, Iteration: 1] validation loss: 75.980 [Epoch: 24, Iteration: 1] training loss: 80.781 [Epoch: 24, Iteration: 201] training loss: 83.414 [Epoch: 24, Iteration: 401] training loss: 83.187 [Epoch: 24, Iteration: 601] training loss: 85.166 [Epoch: 24, Iteration: 1] validation loss: 80.415 [Epoch: 25, Iteration: 1] training loss: 81.627 [Epoch: 25, Iteration: 201] training loss: 75.557 [Epoch: 25, Iteration: 401] training loss: 81.147 [Epoch: 25, Iteration: 601] training loss: 86.801 [Epoch: 25, Iteration: 1] validation loss: 76.207 [Epoch: 26, Iteration: 1] training loss: 84.382 [Epoch: 26, Iteration: 201] training loss: 79.645 [Epoch: 26, Iteration: 401] training loss: 79.147 [Epoch: 26, Iteration: 601] training loss: 80.579 [Epoch: 26, Iteration: 1] validation loss: 73.631 [Epoch: 27, Iteration: 1] training loss: 81.333 [Epoch: 27, Iteration: 201] training loss: 83.322 [Epoch: 27, Iteration: 401] training loss: 79.722 [Epoch: 27, Iteration: 601] training loss: 75.302 [Epoch: 27, Iteration: 1] validation loss: 77.635 [Epoch: 28, Iteration: 1] training loss: 81.025 [Epoch: 28, Iteration: 201] training loss: 84.774 [Epoch: 28, Iteration: 401] training loss: 82.634 [Epoch: 28, Iteration: 601] training loss: 76.863 [Epoch: 28, Iteration: 1] validation loss: 75.316 [Epoch: 29, Iteration: 1] training loss: 82.015 [Epoch: 29, Iteration: 201] training loss: 83.444 [Epoch: 29, Iteration: 401] training loss: 81.373 [Epoch: 29, Iteration: 601] training loss: 75.068 [Epoch: 
29, Iteration: 1] validation loss: 78.247 [Epoch: 30, Iteration: 1] training loss: 75.822 [Epoch: 30, Iteration: 201] training loss: 86.928 [Epoch: 30, Iteration: 401] training loss: 79.719 [Epoch: 30, Iteration: 601] training loss: 78.410 [Epoch: 30, Iteration: 1] validation loss: 74.517 [Epoch: 31, Iteration: 1] training loss: 79.336 [Epoch: 31, Iteration: 201] training loss: 82.958 [Epoch: 31, Iteration: 401] training loss: 78.829 [Epoch: 31, Iteration: 601] training loss: 84.059 [Epoch: 31, Iteration: 1] validation loss: 78.753 [Epoch: 32, Iteration: 1] training loss: 79.737 [Epoch: 32, Iteration: 201] training loss: 85.121 [Epoch: 32, Iteration: 401] training loss: 80.066 [Epoch: 32, Iteration: 601] training loss: 83.396 [Epoch: 32, Iteration: 1] validation loss: 78.934 [Epoch: 33, Iteration: 1] training loss: 78.836 [Epoch: 33, Iteration: 201] training loss: 80.330 [Epoch: 33, Iteration: 401] training loss: 82.118 [Epoch: 33, Iteration: 601] training loss: 78.252 [Epoch: 33, Iteration: 1] validation loss: 75.935 [Epoch: 34, Iteration: 1] training loss: 75.302 [Epoch: 34, Iteration: 201] training loss: 78.732 [Epoch: 34, Iteration: 401] training loss: 78.358 [Epoch: 34, Iteration: 601] training loss: 78.332 [Epoch: 34, Iteration: 1] validation loss: 76.699 [Epoch: 35, Iteration: 1] training loss: 78.324 [Epoch: 35, Iteration: 201] training loss: 78.500 [Epoch: 35, Iteration: 401] training loss: 74.272 [Epoch: 35, Iteration: 601] training loss: 78.371 [Epoch: 35, Iteration: 1] validation loss: 75.222 [Epoch: 36, Iteration: 1] training loss: 83.819 [Epoch: 36, Iteration: 201] training loss: 79.417 [Epoch: 36, Iteration: 401] training loss: 78.709 [Epoch: 36, Iteration: 601] training loss: 79.845 [Epoch: 36, Iteration: 1] validation loss: 75.548 [Epoch: 37, Iteration: 1] training loss: 80.619 [Epoch: 37, Iteration: 201] training loss: 79.516 [Epoch: 37, Iteration: 401] training loss: 83.822 [Epoch: 37, Iteration: 601] training loss: 81.597 [Epoch: 37, Iteration: 1] validation loss: 75.410 [Epoch: 38, Iteration: 1] training loss: 80.090 [Epoch: 38, Iteration: 201] training loss: 80.002 [Epoch: 38, Iteration: 401] training loss: 79.924 [Epoch: 38, Iteration: 601] training loss: 80.585 [Epoch: 38, Iteration: 1] validation loss: 77.383 [Epoch: 39, Iteration: 1] training loss: 80.374 [Epoch: 39, Iteration: 201] training loss: 80.783 [Epoch: 39, Iteration: 401] training loss: 77.792 [Epoch: 39, Iteration: 601] training loss: 85.173 [Epoch: 39, Iteration: 1] validation loss: 83.169 [Epoch: 40, Iteration: 1] training loss: 79.881 [Epoch: 40, Iteration: 201] training loss: 78.144 [Epoch: 40, Iteration: 401] training loss: 79.795 [Epoch: 40, Iteration: 601] training loss: 83.769 [Epoch: 40, Iteration: 1] validation loss: 79.525


[attached image: training/validation loss plot]
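
A plot like the one attached above can be reproduced from the logged values; a minimal matplotlib sketch is below. The loss lists are hypothetical containers for the per-epoch numbers printed in the log, not variables from the actual code.

```python
# Hypothetical sketch: re-creating the attached loss plot from logged per-epoch values.
import matplotlib.pyplot as plt

train_losses = [88.9, 89.0, 86.8]  # placeholder: one (average) training loss per epoch
val_losses = [87.3, 89.8, 85.8]    # placeholder: one validation loss per epoch

epochs = range(1, len(train_losses) + 1)
plt.plot(epochs, train_losses, label="training loss")
plt.plot(epochs, val_losses, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_curves.png")
```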


h36m_3d_25frames_ckpt_epoch_10.pt

walking: 61.4, eating: 59.2, smoking: 58.7, discussion: 87.0, directions: 78.2, greeting: 100.9, phoning: 74.9, posing: 114.6, purchases: 102.4, sitting: 86.7, sittingdown: 109.8, takingphoto: 84.1, waiting: 81.5, walkingdog: 111.3, walkingtogether: 58.9
Average: 84.6 | Prediction time: 0.008561377227306367
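
As a quick sanity check, the reported "Average" can be recomputed from the per-action errors above; the values in the sketch are copied from the result line, nothing else is assumed.

```python
# Recompute the reported average from the per-action errors listed above.
errors = {
    "walking": 61.4, "eating": 59.2, "smoking": 58.7, "discussion": 87.0,
    "directions": 78.2, "greeting": 100.9, "phoning": 74.9, "posing": 114.6,
    "purchases": 102.4, "sitting": 86.7, "sittingdown": 109.8, "takingphoto": 84.1,
    "waiting": 81.5, "walkingdog": 111.3, "walkingtogether": 58.9,
}
print(round(sum(errors.values()) / len(errors), 1))  # 84.6, matching the report
```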

CorsiDanilo commented 1 year ago

Tuned parameters

lr=1e-02 # learning rate ❗
milestones=[10,30] # the epochs after which the learning rate is adjusted by gamma
weight_decay=1e-05 # weight decay (L2 penalty)


[Epoch: 1, Iteration: 1] training loss: 563.345 [Epoch: 1, Iteration: 201] training loss: 145.952 [Epoch: 1, Iteration: 401] training loss: 121.005 [Epoch: 1, Iteration: 601] training loss: 114.925 [Epoch: 1, Iteration: 1] validation loss: 108.394 [Epoch: 2, Iteration: 1] training loss: 111.997 [Epoch: 2, Iteration: 201] training loss: 108.064 [Epoch: 2, Iteration: 401] training loss: 99.207 [Epoch: 2, Iteration: 601] training loss: 104.599 [Epoch: 2, Iteration: 1] validation loss: 103.973 [Epoch: 3, Iteration: 1] training loss: 105.650 [Epoch: 3, Iteration: 201] training loss: 99.247 [Epoch: 3, Iteration: 401] training loss: 100.731 [Epoch: 3, Iteration: 601] training loss: 98.183 [Epoch: 3, Iteration: 1] validation loss: 103.127 [Epoch: 4, Iteration: 1] training loss: 106.265 [Epoch: 4, Iteration: 201] training loss: 95.784 [Epoch: 4, Iteration: 401] training loss: 98.933 [Epoch: 4, Iteration: 601] training loss: 95.270 [Epoch: 4, Iteration: 1] validation loss: 90.401 [Epoch: 5, Iteration: 1] training loss: 96.295 [Epoch: 5, Iteration: 201] training loss: 91.870 [Epoch: 5, Iteration: 401] training loss: 93.886 [Epoch: 5, Iteration: 601] training loss: 92.057 [Epoch: 5, Iteration: 1] validation loss: 93.374 [Epoch: 6, Iteration: 1] training loss: 93.302 [Epoch: 6, Iteration: 201] training loss: 90.044 [Epoch: 6, Iteration: 401] training loss: 94.302 [Epoch: 6, Iteration: 601] training loss: 91.325 [Epoch: 6, Iteration: 1] validation loss: 88.206 [Epoch: 7, Iteration: 1] training loss: 96.104 [Epoch: 7, Iteration: 201] training loss: 95.703 [Epoch: 7, Iteration: 401] training loss: 92.492 [Epoch: 7, Iteration: 601] training loss: 94.611 [Epoch: 7, Iteration: 1] validation loss: 97.197 [Epoch: 8, Iteration: 1] training loss: 89.054 [Epoch: 8, Iteration: 201] training loss: 87.239 [Epoch: 8, Iteration: 401] training loss: 90.863 [Epoch: 8, Iteration: 601] training loss: 92.308 [Epoch: 8, Iteration: 1] validation loss: 92.136 [Epoch: 9, Iteration: 1] training loss: 90.129 [Epoch: 9, Iteration: 201] training loss: 88.801 [Epoch: 9, Iteration: 401] training loss: 89.719 [Epoch: 9, Iteration: 601] training loss: 93.423 [Epoch: 9, Iteration: 1] validation loss: 86.007 [Epoch: 10, Iteration: 1] training loss: 94.947 [Epoch: 10, Iteration: 201] training loss: 89.469 [Epoch: 10, Iteration: 401] training loss: 92.250 [Epoch: 10, Iteration: 601] training loss: 90.624 [Epoch: 10, Iteration: 1] validation loss: 90.383 [Epoch: 11, Iteration: 1] training loss: 89.612 [Epoch: 11, Iteration: 201] training loss: 89.086 [Epoch: 11, Iteration: 401] training loss: 85.889 [Epoch: 11, Iteration: 601] training loss: 88.869 [Epoch: 11, Iteration: 1] validation loss: 87.975 [Epoch: 12, Iteration: 1] training loss: 84.711 [Epoch: 12, Iteration: 201] training loss: 86.224 [Epoch: 12, Iteration: 401] training loss: 85.866 [Epoch: 12, Iteration: 601] training loss: 85.789 [Epoch: 12, Iteration: 1] validation loss: 84.837 [Epoch: 13, Iteration: 1] training loss: 94.450 [Epoch: 13, Iteration: 201] training loss: 87.200 [Epoch: 13, Iteration: 401] training loss: 83.666 [Epoch: 13, Iteration: 601] training loss: 85.755 [Epoch: 13, Iteration: 1] validation loss: 82.425 [Epoch: 14, Iteration: 1] training loss: 85.686 [Epoch: 14, Iteration: 201] training loss: 83.278 [Epoch: 14, Iteration: 401] training loss: 87.610 [Epoch: 14, Iteration: 601] training loss: 82.963 [Epoch: 14, Iteration: 1] validation loss: 84.853 [Epoch: 15, Iteration: 1] training loss: 88.649 [Epoch: 15, Iteration: 201] training loss: 88.601 [Epoch: 15, 
Iteration: 401] training loss: 87.147 [Epoch: 15, Iteration: 601] training loss: 88.672 [Epoch: 15, Iteration: 1] validation loss: 89.325 [Epoch: 16, Iteration: 1] training loss: 94.350 [Epoch: 16, Iteration: 201] training loss: 84.756 [Epoch: 16, Iteration: 401] training loss: 85.262 [Epoch: 16, Iteration: 601] training loss: 81.375 [Epoch: 16, Iteration: 1] validation loss: 87.451 [Epoch: 17, Iteration: 1] training loss: 88.683 [Epoch: 17, Iteration: 201] training loss: 89.157 [Epoch: 17, Iteration: 401] training loss: 85.861 [Epoch: 17, Iteration: 601] training loss: 91.020 [Epoch: 17, Iteration: 1] validation loss: 81.402 [Epoch: 18, Iteration: 1] training loss: 86.868 [Epoch: 18, Iteration: 201] training loss: 84.932 [Epoch: 18, Iteration: 401] training loss: 87.565 [Epoch: 18, Iteration: 601] training loss: 85.126 [Epoch: 18, Iteration: 1] validation loss: 81.470 [Epoch: 19, Iteration: 1] training loss: 86.054 [Epoch: 19, Iteration: 201] training loss: 86.107 [Epoch: 19, Iteration: 401] training loss: 85.191 [Epoch: 19, Iteration: 601] training loss: 88.489 [Epoch: 19, Iteration: 1] validation loss: 86.572 [Epoch: 20, Iteration: 1] training loss: 84.334 [Epoch: 20, Iteration: 201] training loss: 87.333 [Epoch: 20, Iteration: 401] training loss: 83.930 [Epoch: 20, Iteration: 601] training loss: 85.021 [Epoch: 20, Iteration: 1] validation loss: 87.052 [Epoch: 21, Iteration: 1] training loss: 87.392 [Epoch: 21, Iteration: 201] training loss: 83.351 [Epoch: 21, Iteration: 401] training loss: 83.731 [Epoch: 21, Iteration: 601] training loss: 86.518 [Epoch: 21, Iteration: 1] validation loss: 84.430 [Epoch: 22, Iteration: 1] training loss: 82.616 [Epoch: 22, Iteration: 201] training loss: 81.369 [Epoch: 22, Iteration: 401] training loss: 85.511 [Epoch: 22, Iteration: 601] training loss: 88.102 [Epoch: 22, Iteration: 1] validation loss: 82.215 [Epoch: 23, Iteration: 1] training loss: 84.051 [Epoch: 23, Iteration: 201] training loss: 86.182 [Epoch: 23, Iteration: 401] training loss: 87.690 [Epoch: 23, Iteration: 601] training loss: 83.799 [Epoch: 23, Iteration: 1] validation loss: 84.644 [Epoch: 24, Iteration: 1] training loss: 88.923 [Epoch: 24, Iteration: 201] training loss: 84.619 [Epoch: 24, Iteration: 401] training loss: 85.539 [Epoch: 24, Iteration: 601] training loss: 91.383 [Epoch: 24, Iteration: 1] validation loss: 87.784 [Epoch: 25, Iteration: 1] training loss: 80.645 [Epoch: 25, Iteration: 201] training loss: 86.025 [Epoch: 25, Iteration: 401] training loss: 87.274 [Epoch: 25, Iteration: 601] training loss: 85.366 [Epoch: 25, Iteration: 1] validation loss: 83.254 [Epoch: 26, Iteration: 1] training loss: 86.483 [Epoch: 26, Iteration: 201] training loss: 82.750 [Epoch: 26, Iteration: 401] training loss: 81.132 [Epoch: 26, Iteration: 601] training loss: 83.293 [Epoch: 26, Iteration: 1] validation loss: 81.769 [Epoch: 27, Iteration: 1] training loss: 85.036 [Epoch: 27, Iteration: 201] training loss: 93.461 [Epoch: 27, Iteration: 401] training loss: 83.758 [Epoch: 27, Iteration: 601] training loss: 82.806 [Epoch: 27, Iteration: 1] validation loss: 83.160 [Epoch: 28, Iteration: 1] training loss: 88.372 [Epoch: 28, Iteration: 201] training loss: 87.872 [Epoch: 28, Iteration: 401] training loss: 83.451 [Epoch: 28, Iteration: 601] training loss: 88.147 [Epoch: 28, Iteration: 1] validation loss: 85.055 [Epoch: 29, Iteration: 1] training loss: 83.771 [Epoch: 29, Iteration: 201] training loss: 91.539 [Epoch: 29, Iteration: 401] training loss: 80.178 [Epoch: 29, Iteration: 601] training loss: 
84.870 [Epoch: 29, Iteration: 1] validation loss: 87.556 [Epoch: 30, Iteration: 1] training loss: 85.787 [Epoch: 30, Iteration: 201] training loss: 86.974 [Epoch: 30, Iteration: 401] training loss: 87.988 [Epoch: 30, Iteration: 601] training loss: 86.947 [Epoch: 30, Iteration: 1] validation loss: 89.215 [Epoch: 31, Iteration: 1] training loss: 91.276 [Epoch: 31, Iteration: 201] training loss: 81.599 [Epoch: 31, Iteration: 401] training loss: 85.845 [Epoch: 31, Iteration: 601] training loss: 84.059 [Epoch: 31, Iteration: 1] validation loss: 84.288 [Epoch: 32, Iteration: 1] training loss: 85.976 [Epoch: 32, Iteration: 201] training loss: 88.147 [Epoch: 32, Iteration: 401] training loss: 85.569 [Epoch: 32, Iteration: 601] training loss: 83.713 [Epoch: 32, Iteration: 1] validation loss: 82.636 [Epoch: 33, Iteration: 1] training loss: 83.612 [Epoch: 33, Iteration: 201] training loss: 83.768 [Epoch: 33, Iteration: 401] training loss: 86.543 [Epoch: 33, Iteration: 601] training loss: 87.099 [Epoch: 33, Iteration: 1] validation loss: 86.867 [Epoch: 34, Iteration: 1] training loss: 88.088 [Epoch: 34, Iteration: 201] training loss: 85.081 [Epoch: 34, Iteration: 401] training loss: 85.489 [Epoch: 34, Iteration: 601] training loss: 85.406 [Epoch: 34, Iteration: 1] validation loss: 84.710 [Epoch: 35, Iteration: 1] training loss: 83.664 [Epoch: 35, Iteration: 201] training loss: 84.670 [Epoch: 35, Iteration: 401] training loss: 83.050 [Epoch: 35, Iteration: 601] training loss: 86.357 [Epoch: 35, Iteration: 1] validation loss: 81.678 [Epoch: 36, Iteration: 1] training loss: 86.307 [Epoch: 36, Iteration: 201] training loss: 88.241 [Epoch: 36, Iteration: 401] training loss: 82.407 [Epoch: 36, Iteration: 601] training loss: 87.276 [Epoch: 36, Iteration: 1] validation loss: 84.178 [Epoch: 37, Iteration: 1] training loss: 81.170 [Epoch: 37, Iteration: 201] training loss: 84.396 [Epoch: 37, Iteration: 401] training loss: 86.296 [Epoch: 37, Iteration: 601] training loss: 84.393 [Epoch: 37, Iteration: 1] validation loss: 89.497 [Epoch: 38, Iteration: 1] training loss: 82.012 [Epoch: 38, Iteration: 201] training loss: 85.050 [Epoch: 38, Iteration: 401] training loss: 83.989 [Epoch: 38, Iteration: 601] training loss: 83.415 [Epoch: 38, Iteration: 1] validation loss: 85.491 [Epoch: 39, Iteration: 1] training loss: 85.771 [Epoch: 39, Iteration: 201] training loss: 85.964 [Epoch: 39, Iteration: 401] training loss: 80.019 [Epoch: 39, Iteration: 601] training loss: 84.984 [Epoch: 39, Iteration: 1] validation loss: 85.076 [Epoch: 40, Iteration: 1] training loss: 81.301 [Epoch: 40, Iteration: 201] training loss: 83.175 [Epoch: 40, Iteration: 401] training loss: 81.844 [Epoch: 40, Iteration: 601] training loss: 79.117 [Epoch: 40, Iteration: 1] validation loss: 85.072


[attached image: training/validation loss plot]


h36m_3d_25frames_ckpt_epoch_40.pt

walking: 64.1, eating: 62.2, smoking: 64.5, discussion: 89.7, directions: 82.4, greeting: 104.4, phoning: 80.3, posing: 120.4, purchases: 106.4, sitting: 94.6, sittingdown: 120.1, takingphoto: 97.0, waiting: 87.3, walkingdog: 117.6, walkingtogether: 62.5
Average: 90.2 | Prediction time: 0.008715027074019114

CorsiDanilo commented 1 year ago

Tuned parameters

lr=1e-03 # learning rate ❗
milestones=[10,30] # the epochs after which the learning rate is adjusted by gamma
weight_decay=1e-05 # weight decay (L2 penalty)


[Epoch: 1, Iteration: 1] training loss: 86.100 [Epoch: 1, Iteration: 201] training loss: 83.290 [Epoch: 1, Iteration: 401] training loss: 85.033 [Epoch: 1, Iteration: 601] training loss: 87.182 [Epoch: 1, Iteration: 1] validation loss: 81.665 [Epoch: 2, Iteration: 1] training loss: 81.483 [Epoch: 2, Iteration: 201] training loss: 83.032 [Epoch: 2, Iteration: 401] training loss: 82.175 [Epoch: 2, Iteration: 601] training loss: 86.581 [Epoch: 2, Iteration: 1] validation loss: 90.571 [Epoch: 3, Iteration: 1] training loss: 85.447 [Epoch: 3, Iteration: 201] training loss: 84.361 [Epoch: 3, Iteration: 401] training loss: 82.794 [Epoch: 3, Iteration: 601] training loss: 81.967 [Epoch: 3, Iteration: 1] validation loss: 85.865 [Epoch: 4, Iteration: 1] training loss: 82.111 [Epoch: 4, Iteration: 201] training loss: 82.421 [Epoch: 4, Iteration: 401] training loss: 85.151 [Epoch: 4, Iteration: 601] training loss: 84.537 [Epoch: 4, Iteration: 1] validation loss: 84.946 [Epoch: 5, Iteration: 1] training loss: 87.668 [Epoch: 5, Iteration: 201] training loss: 86.884 [Epoch: 5, Iteration: 401] training loss: 86.917 [Epoch: 5, Iteration: 601] training loss: 87.387 [Epoch: 5, Iteration: 1] validation loss: 83.379 [Epoch: 6, Iteration: 1] training loss: 83.330 [Epoch: 6, Iteration: 201] training loss: 82.781 [Epoch: 6, Iteration: 401] training loss: 83.719 [Epoch: 6, Iteration: 601] training loss: 81.719 [Epoch: 6, Iteration: 1] validation loss: 85.834 [Epoch: 7, Iteration: 1] training loss: 78.350 [Epoch: 7, Iteration: 201] training loss: 87.926 [Epoch: 7, Iteration: 401] training loss: 84.648 [Epoch: 7, Iteration: 601] training loss: 84.478 [Epoch: 7, Iteration: 1] validation loss: 84.720 [Epoch: 8, Iteration: 1] training loss: 84.543 [Epoch: 8, Iteration: 201] training loss: 83.370 [Epoch: 8, Iteration: 401] training loss: 82.191 [Epoch: 8, Iteration: 601] training loss: 85.314 [Epoch: 8, Iteration: 1] validation loss: 82.656 [Epoch: 9, Iteration: 1] training loss: 81.876 [Epoch: 9, Iteration: 201] training loss: 80.092 [Epoch: 9, Iteration: 401] training loss: 85.251 [Epoch: 9, Iteration: 601] training loss: 81.306 [Epoch: 9, Iteration: 1] validation loss: 82.710 [Epoch: 10, Iteration: 1] training loss: 85.880 [Epoch: 10, Iteration: 201] training loss: 85.490 [Epoch: 10, Iteration: 401] training loss: 82.893 [Epoch: 10, Iteration: 601] training loss: 88.461 [Epoch: 10, Iteration: 1] validation loss: 82.033 [Epoch: 11, Iteration: 1] training loss: 84.677 [Epoch: 11, Iteration: 201] training loss: 80.455 [Epoch: 11, Iteration: 401] training loss: 80.870 [Epoch: 11, Iteration: 601] training loss: 82.743 [Epoch: 11, Iteration: 1] validation loss: 86.415 [Epoch: 12, Iteration: 1] training loss: 85.413 [Epoch: 12, Iteration: 201] training loss: 84.653 [Epoch: 12, Iteration: 401] training loss: 80.773 [Epoch: 12, Iteration: 601] training loss: 84.944 [Epoch: 12, Iteration: 1] validation loss: 84.907 [Epoch: 13, Iteration: 1] training loss: 86.735 [Epoch: 13, Iteration: 201] training loss: 85.776 [Epoch: 13, Iteration: 401] training loss: 82.997 [Epoch: 13, Iteration: 601] training loss: 82.953 [Epoch: 13, Iteration: 1] validation loss: 86.469 [Epoch: 14, Iteration: 1] training loss: 81.517 [Epoch: 14, Iteration: 201] training loss: 80.097 [Epoch: 14, Iteration: 401] training loss: 83.599 [Epoch: 14, Iteration: 601] training loss: 86.220 [Epoch: 14, Iteration: 1] validation loss: 81.238 [Epoch: 15, Iteration: 1] training loss: 85.480 [Epoch: 15, Iteration: 201] training loss: 78.355 [Epoch: 15, Iteration: 401] 
training loss: 81.279 [Epoch: 15, Iteration: 601] training loss: 89.023 [Epoch: 15, Iteration: 1] validation loss: 81.591 [Epoch: 16, Iteration: 1] training loss: 82.046 [Epoch: 16, Iteration: 201] training loss: 80.202 [Epoch: 16, Iteration: 401] training loss: 79.586 [Epoch: 16, Iteration: 601] training loss: 80.684 [Epoch: 16, Iteration: 1] validation loss: 83.912 [Epoch: 17, Iteration: 1] training loss: 83.899 [Epoch: 17, Iteration: 201] training loss: 83.636 [Epoch: 17, Iteration: 401] training loss: 82.342 [Epoch: 17, Iteration: 601] training loss: 79.750 [Epoch: 17, Iteration: 1] validation loss: 83.348 [Epoch: 18, Iteration: 1] training loss: 85.559 [Epoch: 18, Iteration: 201] training loss: 81.296 [Epoch: 18, Iteration: 401] training loss: 78.393 [Epoch: 18, Iteration: 601] training loss: 89.063 [Epoch: 18, Iteration: 1] validation loss: 79.168 [Epoch: 19, Iteration: 1] training loss: 84.024 [Epoch: 19, Iteration: 201] training loss: 83.840 [Epoch: 19, Iteration: 401] training loss: 91.618 [Epoch: 19, Iteration: 601] training loss: 83.108 [Epoch: 19, Iteration: 1] validation loss: 83.476 [Epoch: 20, Iteration: 1] training loss: 88.386 [Epoch: 20, Iteration: 201] training loss: 84.754 [Epoch: 20, Iteration: 401] training loss: 88.763 [Epoch: 20, Iteration: 601] training loss: 79.429 [Epoch: 20, Iteration: 1] validation loss: 85.506 [Epoch: 21, Iteration: 1] training loss: 86.018 [Epoch: 21, Iteration: 201] training loss: 82.890 [Epoch: 21, Iteration: 401] training loss: 83.911 [Epoch: 21, Iteration: 601] training loss: 84.911 [Epoch: 21, Iteration: 1] validation loss: 86.036 [Epoch: 22, Iteration: 1] training loss: 80.895 [Epoch: 22, Iteration: 201] training loss: 81.513 [Epoch: 22, Iteration: 401] training loss: 81.111 [Epoch: 22, Iteration: 601] training loss: 82.538 [Epoch: 22, Iteration: 1] validation loss: 85.842 [Epoch: 23, Iteration: 1] training loss: 81.790 [Epoch: 23, Iteration: 201] training loss: 81.905 [Epoch: 23, Iteration: 401] training loss: 82.112 [Epoch: 23, Iteration: 601] training loss: 85.448 [Epoch: 23, Iteration: 1] validation loss: 81.786 [Epoch: 24, Iteration: 1] training loss: 85.822 [Epoch: 24, Iteration: 201] training loss: 86.165 [Epoch: 24, Iteration: 401] training loss: 79.817 [Epoch: 24, Iteration: 601] training loss: 86.715 [Epoch: 24, Iteration: 1] validation loss: 83.817 [Epoch: 25, Iteration: 1] training loss: 81.944 [Epoch: 25, Iteration: 201] training loss: 84.191 [Epoch: 25, Iteration: 401] training loss: 87.246 [Epoch: 25, Iteration: 601] training loss: 83.501 [Epoch: 25, Iteration: 1] validation loss: 81.102 [Epoch: 26, Iteration: 1] training loss: 79.223 [Epoch: 26, Iteration: 201] training loss: 81.203 [Epoch: 26, Iteration: 401] training loss: 83.736 [Epoch: 26, Iteration: 601] training loss: 82.687 [Epoch: 26, Iteration: 1] validation loss: 84.652 [Epoch: 27, Iteration: 1] training loss: 85.631 [Epoch: 27, Iteration: 201] training loss: 86.049 [Epoch: 27, Iteration: 401] training loss: 84.817 [Epoch: 27, Iteration: 601] training loss: 87.954 [Epoch: 27, Iteration: 1] validation loss: 84.332 [Epoch: 28, Iteration: 1] training loss: 84.339 [Epoch: 28, Iteration: 201] training loss: 86.625 [Epoch: 28, Iteration: 401] training loss: 82.638 [Epoch: 28, Iteration: 601] training loss: 81.621 [Epoch: 28, Iteration: 1] validation loss: 81.965 [Epoch: 29, Iteration: 1] training loss: 83.809 [Epoch: 29, Iteration: 201] training loss: 79.480 [Epoch: 29, Iteration: 401] training loss: 84.513 [Epoch: 29, Iteration: 601] training loss: 81.285 [Epoch: 
29, Iteration: 1] validation loss: 81.331 [Epoch: 30, Iteration: 1] training loss: 83.102 [Epoch: 30, Iteration: 201] training loss: 85.517 [Epoch: 30, Iteration: 401] training loss: 84.319 [Epoch: 30, Iteration: 601] training loss: 82.656 [Epoch: 30, Iteration: 1] validation loss: 86.788 [Epoch: 31, Iteration: 1] training loss: 80.667 [Epoch: 31, Iteration: 201] training loss: 78.483 [Epoch: 31, Iteration: 401] training loss: 84.499 [Epoch: 31, Iteration: 601] training loss: 80.856 [Epoch: 31, Iteration: 1] validation loss: 81.856 [Epoch: 32, Iteration: 1] training loss: 83.143 [Epoch: 32, Iteration: 201] training loss: 81.505 [Epoch: 32, Iteration: 401] training loss: 83.661 [Epoch: 32, Iteration: 601] training loss: 83.694 [Epoch: 32, Iteration: 1] validation loss: 88.085 [Epoch: 33, Iteration: 1] training loss: 81.555 [Epoch: 33, Iteration: 201] training loss: 87.335 [Epoch: 33, Iteration: 401] training loss: 80.554 [Epoch: 33, Iteration: 601] training loss: 87.549 [Epoch: 33, Iteration: 1] validation loss: 84.201 [Epoch: 34, Iteration: 1] training loss: 80.394 [Epoch: 34, Iteration: 201] training loss: 78.322 [Epoch: 34, Iteration: 401] training loss: 83.453 [Epoch: 34, Iteration: 601] training loss: 86.928 [Epoch: 34, Iteration: 1] validation loss: 83.535 [Epoch: 35, Iteration: 1] training loss: 83.876 [Epoch: 35, Iteration: 201] training loss: 88.952 [Epoch: 35, Iteration: 401] training loss: 82.780 [Epoch: 35, Iteration: 601] training loss: 84.874 [Epoch: 35, Iteration: 1] validation loss: 84.912 [Epoch: 36, Iteration: 1] training loss: 82.273 [Epoch: 36, Iteration: 201] training loss: 86.860 [Epoch: 36, Iteration: 401] training loss: 83.927 [Epoch: 36, Iteration: 601] training loss: 81.697 [Epoch: 36, Iteration: 1] validation loss: 82.556 [Epoch: 37, Iteration: 1] training loss: 78.981 [Epoch: 37, Iteration: 201] training loss: 84.428 [Epoch: 37, Iteration: 401] training loss: 83.814 [Epoch: 37, Iteration: 601] training loss: 86.709 [Epoch: 37, Iteration: 1] validation loss: 83.303 [Epoch: 38, Iteration: 1] training loss: 78.139 [Epoch: 38, Iteration: 201] training loss: 81.300 [Epoch: 38, Iteration: 401] training loss: 86.329 [Epoch: 38, Iteration: 601] training loss: 80.344 [Epoch: 38, Iteration: 1] validation loss: 87.332 [Epoch: 39, Iteration: 1] training loss: 83.223 [Epoch: 39, Iteration: 201] training loss: 87.170 [Epoch: 39, Iteration: 401] training loss: 83.286 [Epoch: 39, Iteration: 601] training loss: 81.319 [Epoch: 39, Iteration: 1] validation loss: 84.561 [Epoch: 40, Iteration: 1] training loss: 84.117 [Epoch: 40, Iteration: 201] training loss: 84.716 [Epoch: 40, Iteration: 401] training loss: 81.717 [Epoch: 40, Iteration: 601] training loss: 84.080 [Epoch: 40, Iteration: 1] validation loss: 85.711


[attached image: training/validation loss plot]


h36m_3d_25frames_ckpt_epoch_40

walking: 63.9, eating: 62.8, smoking: 64.8, discussion: 89.8, directions: 82.8, greeting: 104.3, phoning: 80.8, posing: 120.0, purchases: 106.4, sitting: 94.5, sittingdown: 119.8, takingphoto: 96.9, waiting: 87.2, walkingdog: 117.3, walkingtogether: 62.1
Average: 90.2 | Prediction time: 0.008562642335891723

CorsiDanilo commented 1 year ago

Tuned parameters

lr=1e-02 # learning rate ❗
milestones=[10,30] # the epochs after which the learning rate is adjusted by gamma
weight_decay=1e-04 # weight decay (L2 penalty) ❗


[Epoch: 1, Iteration: 1] training loss: 84.704 [Epoch: 1, Iteration: 201] training loss: 91.469 [Epoch: 1, Iteration: 401] training loss: 88.614 [Epoch: 1, Iteration: 601] training loss: 85.682 [Epoch: 1, Iteration: 1] validation loss: 86.260 [Epoch: 2, Iteration: 1] training loss: 86.513 [Epoch: 2, Iteration: 201] training loss: 85.893 [Epoch: 2, Iteration: 401] training loss: 88.869 [Epoch: 2, Iteration: 601] training loss: 91.737 [Epoch: 2, Iteration: 1] validation loss: 85.971 [Epoch: 3, Iteration: 1] training loss: 85.455 [Epoch: 3, Iteration: 201] training loss: 85.628 [Epoch: 3, Iteration: 401] training loss: 89.926 [Epoch: 3, Iteration: 601] training loss: 85.000 [Epoch: 3, Iteration: 1] validation loss: 88.673 [Epoch: 4, Iteration: 1] training loss: 86.277 [Epoch: 4, Iteration: 201] training loss: 88.408 [Epoch: 4, Iteration: 401] training loss: 91.229 [Epoch: 4, Iteration: 601] training loss: 83.093 [Epoch: 4, Iteration: 1] validation loss: 89.567 [Epoch: 5, Iteration: 1] training loss: 89.702 [Epoch: 5, Iteration: 201] training loss: 89.332 [Epoch: 5, Iteration: 401] training loss: 81.560 [Epoch: 5, Iteration: 601] training loss: 85.096 [Epoch: 5, Iteration: 1] validation loss: 89.275 [Epoch: 6, Iteration: 1] training loss: 87.607 [Epoch: 6, Iteration: 201] training loss: 89.539 [Epoch: 6, Iteration: 401] training loss: 84.777 [Epoch: 6, Iteration: 601] training loss: 83.710 [Epoch: 6, Iteration: 1] validation loss: 86.785 [Epoch: 7, Iteration: 1] training loss: 85.601 [Epoch: 7, Iteration: 201] training loss: 86.252 [Epoch: 7, Iteration: 401] training loss: 86.804 [Epoch: 7, Iteration: 601] training loss: 86.563 [Epoch: 7, Iteration: 1] validation loss: 84.712 [Epoch: 8, Iteration: 1] training loss: 85.877 [Epoch: 8, Iteration: 201] training loss: 81.699 [Epoch: 8, Iteration: 401] training loss: 84.260 [Epoch: 8, Iteration: 601] training loss: 86.706 [Epoch: 8, Iteration: 1] validation loss: 88.879 [Epoch: 9, Iteration: 1] training loss: 87.759 [Epoch: 9, Iteration: 201] training loss: 89.747 [Epoch: 9, Iteration: 401] training loss: 86.276 [Epoch: 9, Iteration: 601] training loss: 85.101 [Epoch: 9, Iteration: 1] validation loss: 95.177 [Epoch: 10, Iteration: 1] training loss: 85.194 [Epoch: 10, Iteration: 201] training loss: 88.898 [Epoch: 10, Iteration: 401] training loss: 87.927 [Epoch: 10, Iteration: 601] training loss: 85.895 [Epoch: 10, Iteration: 1] validation loss: 86.053 [Epoch: 11, Iteration: 1] training loss: 85.521 [Epoch: 11, Iteration: 201] training loss: 82.220 [Epoch: 11, Iteration: 401] training loss: 81.294 [Epoch: 11, Iteration: 601] training loss: 82.981 [Epoch: 11, Iteration: 1] validation loss: 85.309 [Epoch: 12, Iteration: 1] training loss: 89.875 [Epoch: 12, Iteration: 201] training loss: 77.338 [Epoch: 12, Iteration: 401] training loss: 82.622 [Epoch: 12, Iteration: 601] training loss: 81.945 [Epoch: 12, Iteration: 1] validation loss: 84.898 [Epoch: 13, Iteration: 1] training loss: 84.855 [Epoch: 13, Iteration: 201] training loss: 82.238 [Epoch: 13, Iteration: 401] training loss: 86.375 [Epoch: 13, Iteration: 601] training loss: 81.721 [Epoch: 13, Iteration: 1] validation loss: 84.468 [Epoch: 14, Iteration: 1] training loss: 89.665 [Epoch: 14, Iteration: 201] training loss: 82.672 [Epoch: 14, Iteration: 401] training loss: 85.825 [Epoch: 14, Iteration: 601] training loss: 81.803 [Epoch: 14, Iteration: 1] validation loss: 81.306 [Epoch: 15, Iteration: 1] training loss: 82.668 [Epoch: 15, Iteration: 201] training loss: 82.754 [Epoch: 15, Iteration: 401] 
training loss: 79.809 [Epoch: 15, Iteration: 601] training loss: 82.703 [Epoch: 15, Iteration: 1] validation loss: 78.716 [Epoch: 16, Iteration: 1] training loss: 78.448 [Epoch: 16, Iteration: 201] training loss: 76.846 [Epoch: 16, Iteration: 401] training loss: 80.887 [Epoch: 16, Iteration: 601] training loss: 83.963 [Epoch: 16, Iteration: 1] validation loss: 81.745 [Epoch: 17, Iteration: 1] training loss: 83.087 [Epoch: 17, Iteration: 201] training loss: 83.070 [Epoch: 17, Iteration: 401] training loss: 86.874 [Epoch: 17, Iteration: 601] training loss: 83.226 [Epoch: 17, Iteration: 1] validation loss: 83.352 [Epoch: 18, Iteration: 1] training loss: 84.786 [Epoch: 18, Iteration: 201] training loss: 88.767 [Epoch: 18, Iteration: 401] training loss: 83.550 [Epoch: 18, Iteration: 601] training loss: 91.935 [Epoch: 18, Iteration: 1] validation loss: 82.185 [Epoch: 19, Iteration: 1] training loss: 86.151 [Epoch: 19, Iteration: 201] training loss: 80.473 [Epoch: 19, Iteration: 401] training loss: 80.616 [Epoch: 19, Iteration: 601] training loss: 83.851 [Epoch: 19, Iteration: 1] validation loss: 82.661 [Epoch: 20, Iteration: 1] training loss: 78.352 [Epoch: 20, Iteration: 201] training loss: 84.508 [Epoch: 20, Iteration: 401] training loss: 82.408 [Epoch: 20, Iteration: 601] training loss: 84.322 [Epoch: 20, Iteration: 1] validation loss: 84.100 [Epoch: 21, Iteration: 1] training loss: 82.375 [Epoch: 21, Iteration: 201] training loss: 82.704 [Epoch: 21, Iteration: 401] training loss: 81.625 [Epoch: 21, Iteration: 601] training loss: 80.956 [Epoch: 21, Iteration: 1] validation loss: 83.784 [Epoch: 22, Iteration: 1] training loss: 78.016 [Epoch: 22, Iteration: 201] training loss: 80.301 [Epoch: 22, Iteration: 401] training loss: 84.595 [Epoch: 22, Iteration: 601] training loss: 84.502 [Epoch: 22, Iteration: 1] validation loss: 82.377 [Epoch: 23, Iteration: 1] training loss: 83.206 [Epoch: 23, Iteration: 201] training loss: 81.269 [Epoch: 23, Iteration: 401] training loss: 84.926 [Epoch: 23, Iteration: 601] training loss: 77.584 [Epoch: 23, Iteration: 1] validation loss: 84.940 [Epoch: 24, Iteration: 1] training loss: 79.352 [Epoch: 24, Iteration: 201] training loss: 86.012 [Epoch: 24, Iteration: 401] training loss: 85.014 [Epoch: 24, Iteration: 601] training loss: 84.836 [Epoch: 24, Iteration: 1] validation loss: 83.762 [Epoch: 25, Iteration: 1] training loss: 84.707 [Epoch: 25, Iteration: 201] training loss: 83.598 [Epoch: 25, Iteration: 401] training loss: 78.064 [Epoch: 25, Iteration: 601] training loss: 81.873 [Epoch: 25, Iteration: 1] validation loss: 83.204 [Epoch: 26, Iteration: 1] training loss: 86.697 [Epoch: 26, Iteration: 201] training loss: 82.885 [Epoch: 26, Iteration: 401] training loss: 82.525 [Epoch: 26, Iteration: 601] training loss: 88.038 [Epoch: 26, Iteration: 1] validation loss: 83.813 [Epoch: 27, Iteration: 1] training loss: 80.076 [Epoch: 27, Iteration: 201] training loss: 78.149 [Epoch: 27, Iteration: 401] training loss: 84.299 [Epoch: 27, Iteration: 601] training loss: 87.932 [Epoch: 27, Iteration: 1] validation loss: 83.300 [Epoch: 28, Iteration: 1] training loss: 79.875 [Epoch: 28, Iteration: 201] training loss: 79.355 [Epoch: 28, Iteration: 401] training loss: 83.756 [Epoch: 28, Iteration: 601] training loss: 82.931 [Epoch: 28, Iteration: 1] validation loss: 80.842 [Epoch: 29, Iteration: 1] training loss: 81.434 [Epoch: 29, Iteration: 201] training loss: 79.134 [Epoch: 29, Iteration: 401] training loss: 83.386 [Epoch: 29, Iteration: 601] training loss: 81.908 [Epoch: 
29, Iteration: 1] validation loss: 88.911 [Epoch: 30, Iteration: 1] training loss: 82.810 [Epoch: 30, Iteration: 201] training loss: 87.753 [Epoch: 30, Iteration: 401] training loss: 81.771 [Epoch: 30, Iteration: 601] training loss: 79.201 [Epoch: 30, Iteration: 1] validation loss: 80.909 [Epoch: 31, Iteration: 1] training loss: 78.674 [Epoch: 31, Iteration: 201] training loss: 82.975 [Epoch: 31, Iteration: 401] training loss: 76.990 [Epoch: 31, Iteration: 601] training loss: 84.260 [Epoch: 31, Iteration: 1] validation loss: 84.527 [Epoch: 32, Iteration: 1] training loss: 82.917 [Epoch: 32, Iteration: 201] training loss: 79.382 [Epoch: 32, Iteration: 401] training loss: 79.121 [Epoch: 32, Iteration: 601] training loss: 83.323 [Epoch: 32, Iteration: 1] validation loss: 85.249 [Epoch: 33, Iteration: 1] training loss: 84.557 [Epoch: 33, Iteration: 201] training loss: 80.369 [Epoch: 33, Iteration: 401] training loss: 80.602 [Epoch: 33, Iteration: 601] training loss: 83.994 [Epoch: 33, Iteration: 1] validation loss: 82.726 [Epoch: 34, Iteration: 1] training loss: 83.788 [Epoch: 34, Iteration: 201] training loss: 80.839 [Epoch: 34, Iteration: 401] training loss: 77.263 [Epoch: 34, Iteration: 601] training loss: 85.155 [Epoch: 34, Iteration: 1] validation loss: 85.139 [Epoch: 35, Iteration: 1] training loss: 78.995 [Epoch: 35, Iteration: 201] training loss: 85.791 [Epoch: 35, Iteration: 401] training loss: 84.652 [Epoch: 35, Iteration: 601] training loss: 80.781 [Epoch: 35, Iteration: 1] validation loss: 88.624 [Epoch: 36, Iteration: 1] training loss: 82.399 [Epoch: 36, Iteration: 201] training loss: 80.698 [Epoch: 36, Iteration: 401] training loss: 80.652 [Epoch: 36, Iteration: 601] training loss: 80.029 [Epoch: 36, Iteration: 1] validation loss: 83.440 [Epoch: 37, Iteration: 1] training loss: 82.800 [Epoch: 37, Iteration: 201] training loss: 81.072 [Epoch: 37, Iteration: 401] training loss: 88.100 [Epoch: 37, Iteration: 601] training loss: 78.626 [Epoch: 37, Iteration: 1] validation loss: 86.393 [Epoch: 38, Iteration: 1] training loss: 85.945 [Epoch: 38, Iteration: 201] training loss: 84.561 [Epoch: 38, Iteration: 401] training loss: 81.630 [Epoch: 38, Iteration: 601] training loss: 82.973 [Epoch: 38, Iteration: 1] validation loss: 82.539 [Epoch: 39, Iteration: 1] training loss: 83.364 [Epoch: 39, Iteration: 201] training loss: 77.522 [Epoch: 39, Iteration: 401] training loss: 82.399 [Epoch: 39, Iteration: 601] training loss: 81.591 [Epoch: 39, Iteration: 1] validation loss: 84.409 [Epoch: 40, Iteration: 1] training loss: 86.024 [Epoch: 40, Iteration: 201] training loss: 81.234 [Epoch: 40, Iteration: 401] training loss: 87.260 [Epoch: 40, Iteration: 601] training loss: 80.976 [Epoch: 40, Iteration: 1] validation loss: 83.650


[attached image: training/validation loss plot]


h36m_3d_25frames_ckpt_epoch_40.pt

walking: 61.9, eating: 62.7, smoking: 64.9, discussion: 89.3, directions: 82.9, greeting: 103.4, phoning: 80.5, posing: 119.3, purchases: 106.6, sitting: 94.3, sittingdown: 119.9, takingphoto: 96.4, waiting: 86.8, walkingdog: 116.5, walkingtogether: 60.8
Average: 89.8 | Prediction time: 0.00840987910827001

CorsiDanilo commented 1 year ago

Tuned parameters

lr=1e-02 # learning rate ❗
milestones=[10,30] # the epochs after which the learning rate is adjusted by gamma
weight_decay=1e-06 # weight decay (L2 penalty) ❗


[Epoch: 1, Iteration: 1] training loss: 82.467 [Epoch: 1, Iteration: 201] training loss: 84.714 [Epoch: 1, Iteration: 401] training loss: 83.612 [Epoch: 1, Iteration: 601] training loss: 88.043 [Epoch: 1, Iteration: 1] validation loss: 83.177 [Epoch: 2, Iteration: 1] training loss: 87.723 [Epoch: 2, Iteration: 201] training loss: 87.877 [Epoch: 2, Iteration: 401] training loss: 88.010 [Epoch: 2, Iteration: 601] training loss: 84.745 [Epoch: 2, Iteration: 1] validation loss: 87.303 [Epoch: 3, Iteration: 1] training loss: 85.881 [Epoch: 3, Iteration: 201] training loss: 79.246 [Epoch: 3, Iteration: 401] training loss: 88.846 [Epoch: 3, Iteration: 601] training loss: 84.247 [Epoch: 3, Iteration: 1] validation loss: 85.622 [Epoch: 4, Iteration: 1] training loss: 87.919 [Epoch: 4, Iteration: 201] training loss: 81.491 [Epoch: 4, Iteration: 401] training loss: 85.984 [Epoch: 4, Iteration: 601] training loss: 83.071 [Epoch: 4, Iteration: 1] validation loss: 86.059 [Epoch: 5, Iteration: 1] training loss: 82.205 [Epoch: 5, Iteration: 201] training loss: 93.778 [Epoch: 5, Iteration: 401] training loss: 83.828 [Epoch: 5, Iteration: 601] training loss: 86.785 [Epoch: 5, Iteration: 1] validation loss: 90.306

[Epoch: 6, Iteration: 1] training loss: 80.825 [Epoch: 6, Iteration: 201] training loss: 85.247 [Epoch: 6, Iteration: 401] training loss: 88.234 [Epoch: 6, Iteration: 601] training loss: 88.934 [Epoch: 6, Iteration: 1] validation loss: 83.061 [Epoch: 7, Iteration: 1] training loss: 81.823 [Epoch: 7, Iteration: 201] training loss: 87.342 [Epoch: 7, Iteration: 401] training loss: 87.643 [Epoch: 7, Iteration: 601] training loss: 85.642 [Epoch: 7, Iteration: 1] validation loss: 87.464 [Epoch: 8, Iteration: 1] training loss: 83.260 [Epoch: 8, Iteration: 201] training loss: 86.880 [Epoch: 8, Iteration: 401] training loss: 86.856 [Epoch: 8, Iteration: 601] training loss: 85.080 [Epoch: 8, Iteration: 1] validation loss: 83.611 [Epoch: 9, Iteration: 1] training loss: 82.616 [Epoch: 9, Iteration: 201] training loss: 82.944 [Epoch: 9, Iteration: 401] training loss: 87.403 [Epoch: 9, Iteration: 601] training loss: 84.181 [Epoch: 9, Iteration: 1] validation loss: 87.385 [Epoch: 10, Iteration: 1] training loss: 87.452 [Epoch: 10, Iteration: 201] training loss: 86.502 [Epoch: 10, Iteration: 401] training loss: 83.902 [Epoch: 10, Iteration: 601] training loss: 80.084 [Epoch: 10, Iteration: 1] validation loss: 86.377 [Epoch: 11, Iteration: 1] training loss: 82.076 [Epoch: 11, Iteration: 201] training loss: 81.851 [Epoch: 11, Iteration: 401] training loss: 83.709 [Epoch: 11, Iteration: 601] training loss: 79.892 [Epoch: 11, Iteration: 1] validation loss: 78.104 [Epoch: 12, Iteration: 1] training loss: 83.819 [Epoch: 12, Iteration: 201] training loss: 83.723 [Epoch: 12, Iteration: 401] training loss: 80.094 [Epoch: 12, Iteration: 601] training loss: 78.198 [Epoch: 12, Iteration: 1] validation loss: 82.945 [Epoch: 13, Iteration: 1] training loss: 86.076 [Epoch: 13, Iteration: 201] training loss: 80.683 [Epoch: 13, Iteration: 401] training loss: 83.376 [Epoch: 13, Iteration: 601] training loss: 86.182 [Epoch: 13, Iteration: 1] validation loss: 85.285 [Epoch: 14, Iteration: 1] training loss: 80.257 [Epoch: 14, Iteration: 201] training loss: 80.545 [Epoch: 14, Iteration: 401] training loss: 86.778 [Epoch: 14, Iteration: 601] training loss: 81.233 [Epoch: 14, Iteration: 1] validation loss: 84.939 [Epoch: 15, Iteration: 1] training loss: 80.349 [Epoch: 15, Iteration: 201] training loss: 83.528 [Epoch: 15, Iteration: 401] training loss: 81.834 [Epoch: 15, Iteration: 601] training loss: 75.239 [Epoch: 15, Iteration: 1] validation loss: 83.916 [Epoch: 16, Iteration: 1] training loss: 80.116 [Epoch: 16, Iteration: 201] training loss: 82.432 [Epoch: 16, Iteration: 401] training loss: 81.109 [Epoch: 16, Iteration: 601] training loss: 81.406 [Epoch: 16, Iteration: 1] validation loss: 83.704 [Epoch: 17, Iteration: 1] training loss: 84.048 [Epoch: 17, Iteration: 201] training loss: 85.074 [Epoch: 17, Iteration: 401] training loss: 78.976 [Epoch: 17, Iteration: 601] training loss: 82.570 [Epoch: 17, Iteration: 1] validation loss: 81.524 [Epoch: 18, Iteration: 1] training loss: 79.843 [Epoch: 18, Iteration: 201] training loss: 81.951 [Epoch: 18, Iteration: 401] training loss: 80.879 [Epoch: 18, Iteration: 601] training loss: 79.228 [Epoch: 18, Iteration: 1] validation loss: 81.188 [Epoch: 19, Iteration: 1] training loss: 81.237 [Epoch: 19, Iteration: 201] training loss: 84.316 [Epoch: 19, Iteration: 401] training loss: 81.086 [Epoch: 19, Iteration: 601] training loss: 80.472 [Epoch: 19, Iteration: 1] validation loss: 81.523 [Epoch: 20, Iteration: 1] training loss: 79.305 [Epoch: 20, Iteration: 201] training loss: 83.133 
[Epoch: 20, Iteration: 401] training loss: 77.139 [Epoch: 20, Iteration: 601] training loss: 83.409 [Epoch: 20, Iteration: 1] validation loss: 83.574 [Epoch: 21, Iteration: 1] training loss: 78.800 [Epoch: 21, Iteration: 201] training loss: 83.194 [Epoch: 21, Iteration: 401] training loss: 84.583 [Epoch: 21, Iteration: 601] training loss: 81.532 [Epoch: 21, Iteration: 1] validation loss: 81.151 [Epoch: 22, Iteration: 1] training loss: 80.409 [Epoch: 22, Iteration: 201] training loss: 77.737 [Epoch: 22, Iteration: 401] training loss: 80.736 [Epoch: 22, Iteration: 601] training loss: 83.141 [Epoch: 22, Iteration: 1] validation loss: 85.112 [Epoch: 23, Iteration: 1] training loss: 78.957 [Epoch: 23, Iteration: 201] training loss: 83.337 [Epoch: 23, Iteration: 401] training loss: 84.584 [Epoch: 23, Iteration: 601] training loss: 80.004 [Epoch: 23, Iteration: 1] validation loss: 82.962 [Epoch: 24, Iteration: 1] training loss: 81.502 [Epoch: 24, Iteration: 201] training loss: 83.661 [Epoch: 24, Iteration: 401] training loss: 86.142 [Epoch: 24, Iteration: 601] training loss: 83.706 [Epoch: 24, Iteration: 1] validation loss: 80.430 [Epoch: 25, Iteration: 1] training loss: 80.428 [Epoch: 25, Iteration: 201] training loss: 84.622 [Epoch: 25, Iteration: 401] training loss: 83.961 [Epoch: 25, Iteration: 601] training loss: 79.039 [Epoch: 25, Iteration: 1] validation loss: 74.882 [Epoch: 26, Iteration: 1] training loss: 81.556 [Epoch: 26, Iteration: 201] training loss: 82.755 [Epoch: 26, Iteration: 401] training loss: 82.036 [Epoch: 26, Iteration: 601] training loss: 81.981 [Epoch: 26, Iteration: 1] validation loss: 80.004 [Epoch: 27, Iteration: 1] training loss: 82.258 [Epoch: 27, Iteration: 201] training loss: 82.386 [Epoch: 27, Iteration: 401] training loss: 79.959 [Epoch: 27, Iteration: 601] training loss: 80.506 [Epoch: 27, Iteration: 1] validation loss: 84.074 [Epoch: 28, Iteration: 1] training loss: 83.147 [Epoch: 28, Iteration: 201] training loss: 86.735 [Epoch: 28, Iteration: 401] training loss: 81.018 [Epoch: 28, Iteration: 601] training loss: 84.562 [Epoch: 28, Iteration: 1] validation loss: 82.354 [Epoch: 29, Iteration: 1] training loss: 82.320 [Epoch: 29, Iteration: 201] training loss: 82.030 [Epoch: 29, Iteration: 401] training loss: 80.519 [Epoch: 29, Iteration: 601] training loss: 80.884 [Epoch: 29, Iteration: 1] validation loss: 81.549 [Epoch: 30, Iteration: 1] training loss: 82.597 [Epoch: 30, Iteration: 201] training loss: 83.161 [Epoch: 30, Iteration: 401] training loss: 77.885 [Epoch: 30, Iteration: 601] training loss: 75.305 [Epoch: 30, Iteration: 1] validation loss: 83.326 [Epoch: 31, Iteration: 1] training loss: 84.727 [Epoch: 31, Iteration: 201] training loss: 82.345 [Epoch: 31, Iteration: 401] training loss: 81.713 [Epoch: 31, Iteration: 601] training loss: 78.042 [Epoch: 31, Iteration: 1] validation loss: 82.300 [Epoch: 32, Iteration: 1] training loss: 79.847 [Epoch: 32, Iteration: 201] training loss: 78.683 [Epoch: 32, Iteration: 401] training loss: 78.614 [Epoch: 32, Iteration: 601] training loss: 81.765 [Epoch: 32, Iteration: 1] validation loss: 84.366 [Epoch: 33, Iteration: 1] training loss: 82.261 [Epoch: 33, Iteration: 201] training loss: 85.972 [Epoch: 33, Iteration: 401] training loss: 80.513 [Epoch: 33, Iteration: 601] training loss: 81.728 [Epoch: 33, Iteration: 1] validation loss: 84.852 [Epoch: 34, Iteration: 1] training loss: 78.273 [Epoch: 34, Iteration: 201] training loss: 78.978 [Epoch: 34, Iteration: 401] training loss: 79.375 [Epoch: 34, Iteration: 601] 
training loss: 77.948 [Epoch: 34, Iteration: 1] validation loss: 80.363 [Epoch: 35, Iteration: 1] training loss: 76.545 [Epoch: 35, Iteration: 201] training loss: 81.828 [Epoch: 35, Iteration: 401] training loss: 81.195 [Epoch: 35, Iteration: 601] training loss: 82.288 [Epoch: 35, Iteration: 1] validation loss: 83.870 [Epoch: 36, Iteration: 1] training loss: 77.413 [Epoch: 36, Iteration: 201] training loss: 82.879 [Epoch: 36, Iteration: 401] training loss: 81.814 [Epoch: 36, Iteration: 601] training loss: 82.843 [Epoch: 36, Iteration: 1] validation loss: 83.831 [Epoch: 37, Iteration: 1] training loss: 84.906 [Epoch: 37, Iteration: 201] training loss: 82.267 [Epoch: 37, Iteration: 401] training loss: 88.457 [Epoch: 37, Iteration: 601] training loss: 78.221 [Epoch: 37, Iteration: 1] validation loss: 82.541 [Epoch: 38, Iteration: 1] training loss: 80.630 [Epoch: 38, Iteration: 201] training loss: 79.122 [Epoch: 38, Iteration: 401] training loss: 81.792 [Epoch: 38, Iteration: 601] training loss: 80.798 [Epoch: 38, Iteration: 1] validation loss: 84.609 [Epoch: 39, Iteration: 1] training loss: 81.953 [Epoch: 39, Iteration: 201] training loss: 82.804 [Epoch: 39, Iteration: 401] training loss: 81.803 [Epoch: 39, Iteration: 601] training loss: 83.612 [Epoch: 39, Iteration: 1] validation loss: 84.448 [Epoch: 40, Iteration: 1] training loss: 77.267 [Epoch: 40, Iteration: 201] training loss: 78.179 [Epoch: 40, Iteration: 401] training loss: 73.777 [Epoch: 40, Iteration: 601] training loss: 82.697 [Epoch: 40, Iteration: 1] validation loss: 82.415


[attached image: training/validation loss plot]


h36m_3d_25frames_ckpt_epoch_40.pt

walking: 61.2, eating: 62.1, smoking: 64.7, discussion: 89.4, directions: 83.5, greeting: 103.2, phoning: 80.3, posing: 119.5, purchases: 106.8, sitting: 96.2, sittingdown: 120.7, takingphoto: 97.1, waiting: 86.2, walkingdog: 117.3, walkingtogether: 60.5
Average: 89.9 | Prediction time: 0.008466757833957672

CorsiDanilo commented 1 year ago

Tuned parameters

lr=1e-02 # learning rate ❗
milestones=[10,20,30] # the epochs after which the learning rate is adjusted by gamma ❗
weight_decay=1e-05 # weight decay (L2 penalty)
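
For reference, the effective learning-rate schedule this configuration implies can be sketched as below, assuming a MultiStepLR-style decay with gamma=0.1 (the gamma value is not stated in the thread).

```python
# Hypothetical sketch: effective learning rate per (1-indexed) epoch for
# milestones=[10, 20, 30], assuming the decay factor gamma=0.1.
def lr_at(epoch, base_lr=1e-2, gamma=0.1, milestones=(10, 20, 30)):
    passed = sum(1 for m in milestones if epoch > m)  # milestones already passed
    return base_lr * gamma ** passed

for e in (1, 10, 11, 20, 21, 30, 31, 40):
    print(e, lr_at(e))  # ~1e-2 up to epoch 10, ~1e-3 to 20, ~1e-4 to 30, ~1e-5 after
```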


Per-epoch training loss (reported at iterations 1, 201, 401 and 601) and end-of-epoch validation loss:

| Epoch | Training loss (iter 1) | Training loss (iter 201) | Training loss (iter 401) | Training loss (iter 601) | Validation loss |
|---:|---:|---:|---:|---:|---:|
| 1 | 76.262 | 80.432 | 81.672 | 87.321 | 85.272 |
| 2 | 87.823 | 83.538 | 83.196 | 83.707 | 93.661 |
| 3 | 83.294 | 83.200 | 84.510 | 82.309 | 83.227 |
| 4 | 83.698 | 84.065 | 83.115 | 81.329 | 87.740 |
| 5 | 83.469 | 84.610 | 90.840 | 82.122 | 87.767 |
| 6 | 84.925 | 84.567 | 84.053 | 85.856 | 86.209 |
| 7 | 85.554 | 80.895 | 87.387 | 82.242 | 84.583 |
| 8 | 84.516 | 80.390 | 81.088 | 83.495 | 83.954 |
| 9 | 84.214 | 81.053 | 86.992 | 86.302 | 85.153 |
| 10 | 86.847 | 83.888 | 81.807 | 84.712 | 85.740 |
| 11 | 84.391 | 77.842 | 76.047 | 81.338 | 83.447 |
| 12 | 79.408 | 79.260 | 78.473 | 84.459 | 83.982 |
| 13 | 81.982 | 78.748 | 81.345 | 86.480 | 80.243 |
| 14 | 83.411 | 83.268 | 83.668 | 79.323 | 81.183 |
| 15 | 80.823 | 81.117 | 79.268 | 83.300 | 82.504 |
| 16 | 81.825 | 79.649 | 82.665 | 82.204 | 83.220 |
| 17 | 81.999 | 74.439 | 79.302 | 80.936 | 81.193 |
| 18 | 82.894 | 87.705 | 81.739 | 81.137 | 85.476 |
| 19 | 83.455 | 77.929 | 76.291 | 82.626 | 83.997 |
| 20 | 78.349 | 78.651 | 80.249 | 77.923 | 81.806 |
| 21 | 79.337 | 81.440 | 85.946 | 78.254 | 86.769 |
| 22 | 80.682 | 79.368 | 81.508 | 82.164 | 82.896 |
| 23 | 78.406 | 74.865 | 78.813 | 79.248 | 81.349 |
| 24 | 80.063 | 80.042 | 79.135 | 79.724 | 82.514 |
| 25 | 78.770 | 76.514 | 82.627 | 79.230 | 81.522 |
| 26 | 84.783 | 75.912 | 78.883 | 81.516 | 86.635 |
| 27 | 81.050 | 81.301 | 82.677 | 83.282 | 85.110 |
| 28 | 78.242 | 81.285 | 79.223 | 81.264 | 83.101 |
| 29 | 75.621 | 79.017 | 82.948 | 80.490 | 83.802 |
| 30 | 80.176 | 80.464 | 83.629 | 84.275 | 83.286 |
| 31 | 82.813 | 80.012 | 80.093 | 81.713 | 86.000 |
| 32 | 85.898 | 75.066 | 79.400 | 80.223 | 80.514 |
| 33 | 79.932 | 79.186 | 85.827 | 78.276 | 81.803 |
| 34 | 81.250 | 79.446 | 82.827 | 84.752 | 83.219 |
| 35 | 83.287 | 80.374 | 79.630 | 79.160 | 86.911 |
| 36 | 80.056 | 80.938 | 79.799 | 80.000 | 82.623 |
| 37 | 76.976 | 78.933 | 81.687 | 79.988 | 79.059 |
| 38 | 81.657 | 76.665 | 83.283 | 76.848 | 83.789 |
| 39 | 84.286 | 77.755 | 80.079 | 76.923 | 82.509 |
| 40 | 77.407 | 82.180 | 80.300 | 87.221 | 84.936 |
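As a minimal sketch of how the requested line plots can be drawn from these numbers, assuming `matplotlib` is available and the per-epoch validation losses are copied from the table above (only the first five epochs are listed here):

```python
import matplotlib.pyplot as plt

# Per-epoch validation losses copied from the table above (first five epochs only;
# extend the list to all 40 epochs to reproduce the full curve).
val_loss = [85.272, 93.661, 83.227, 87.740, 87.767]

plt.plot(range(1, len(val_loss) + 1), val_loss, marker="o", label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()
```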


*(attached image)*


h36m_3d_25frames_ckpt_epoch_40.pt

| Action | Error |
| --- | ---: |
| walking | 60.2 |
| eating | 62.7 |
| smoking | 64.3 |
| discussion | 88.6 |
| directions | 83.2 |
| greeting | 103.4 |
| phoning | 80.2 |
| posing | 119.6 |
| purchases | 106.6 |
| sitting | 96.7 |
| sittingdown | 120.6 |
| takingphoto | 95.6 |
| waiting | 86.4 |
| walkingdog | 116.3 |
| walkingtogether | 60.0 |
| **Average** | **89.6** |

Prediction time: 0.009536754091580708
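The reported average is the mean over the 15 actions; a minimal check, with the values copied from the table above:

```python
# Per-action errors copied from the table above.
errors = [60.2, 62.7, 64.3, 88.6, 83.2, 103.4, 80.2, 119.6,
          106.6, 96.7, 120.6, 95.6, 86.4, 116.3, 60.0]

print(f"Average: {sum(errors) / len(errors):.1f}")  # prints 89.6, matching the report
```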

CorsiDanilo commented 1 year ago

Tuned parameters

lr=1e-01              # learning rate
milestones=[10,20,30] # the epochs after which the learning rate is adjusted by gamma ❗
weight_decay=0        # weight decay disabled (no L2 penalty) ❗
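For reference, a minimal sketch of how these hyperparameters are typically wired together in PyTorch. It assumes an SGD optimizer and a `MultiStepLR` scheduler with `gamma=0.1`; neither the optimizer type nor the gamma value is shown in this thread, and the model below is only a placeholder.

```python
import torch

# Placeholder model: the actual motion-prediction network is not shown in this thread.
model = torch.nn.Linear(66, 66)

lr = 1e-01
milestones = [10, 20, 30]
weight_decay = 0  # weight decay disabled in the tuned run

# Assumed optimizer/scheduler pairing (SGD + MultiStepLR, gamma=0.1).
optimizer = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=weight_decay)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=milestones, gamma=0.1)

for epoch in range(40):
    # ... run the training and validation loops here ...
    scheduler.step()  # learning rate drops by gamma after epochs 10, 20 and 30
```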


Per-epoch training loss (reported at iterations 1, 201, 401 and 601) and end-of-epoch validation loss:

| Epoch | Training loss (iter 1) | Training loss (iter 201) | Training loss (iter 401) | Training loss (iter 601) | Validation loss |
|---:|---:|---:|---:|---:|---:|
| 1 | 76.605 | 107.453 | 100.455 | 101.412 | 90.193 |
| 2 | 98.984 | 96.918 | 95.901 | 95.269 | 90.861 |
| 3 | 99.264 | 98.726 | 89.811 | 93.311 | 92.377 |
| 4 | 91.726 | 88.643 | 93.931 | 92.710 | 93.905 |
| 5 | 97.709 | 96.608 | 89.028 | 90.606 | 84.637 |
| 6 | 92.189 | 89.497 | 92.038 | 96.202 | 83.193 |
| 7 | 89.329 | 90.413 | 95.980 | 90.168 | 80.595 |
| 8 | 90.160 | 85.582 | 89.561 | 92.187 | 85.902 |
| 9 | 88.695 | 86.858 | 94.119 | 87.952 | 81.264 |
| 10 | 86.339 | 89.893 | 87.779 | 88.464 | 83.715 |
| 11 | 84.251 | 82.893 | 83.846 | 89.041 | 81.578 |
| 12 | 81.166 | 80.295 | 84.983 | 84.788 | 82.221 |
| 13 | 83.587 | 83.918 | 82.884 | 83.860 | 77.878 |
| 14 | 77.814 | 86.917 | 82.712 | 83.069 | 80.050 |
| 15 | 82.021 | 82.681 | 84.761 | 79.331 | 81.491 |
| 16 | 85.778 | 83.032 | 83.208 | 84.300 | 83.458 |
| 17 | 85.102 | 84.181 | 87.008 | 80.747 | 79.184 |
| 18 | 84.039 | 89.866 | 78.983 | 83.207 | 78.099 |
| 19 | 78.367 | 79.929 | 78.467 | 82.587 | 76.852 |
| 20 | 79.079 | 77.125 | 79.376 | 82.796 | 80.196 |
| 21 | 84.050 | 80.599 | 84.654 | 80.671 | 81.189 |
| 22 | 77.527 | 82.383 | 77.322 | 80.137 | 81.910 |
| 23 | 82.855 | 83.703 | 84.541 | 81.210 | 79.886 |
| 24 | 82.398 | 85.492 | 87.374 | 85.518 | 78.981 |
| 25 | 79.235 | 80.995 | 81.913 | 85.149 | 76.603 |
| 26 | 79.103 | 84.240 | 81.270 | 79.251 | 80.247 |
| 27 | 82.300 | 80.606 | 83.861 | 83.161 | 79.095 |
| 28 | 88.478 | 81.211 | 78.316 | 82.087 | 79.618 |
| 29 | 80.852 | 81.930 | 82.555 | 78.129 | 79.375 |
| 30 | 78.115 | 80.215 | 77.606 | 83.034 | 75.430 |
| 31 | 80.082 | 81.025 | 82.333 | 78.993 | 80.246 |
| 32 | 77.658 | 79.672 | 81.220 | 78.452 | 78.997 |
| 33 | 83.402 | 79.505 | 84.367 | 80.436 | 78.089 |
| 34 | 81.096 | 85.281 | 81.709 | 83.743 | 80.320 |
| 35 | 80.379 | 80.743 | 85.974 | 78.930 | 75.039 |
| 36 | 78.013 | 83.023 | 79.297 | 78.774 | 73.465 |
| 37 | 78.006 | 79.175 | 82.245 | 77.625 | 79.011 |
| 38 | 82.706 | 78.051 | 81.066 | 80.627 | 80.228 |
| 39 | 81.159 | 82.863 | 80.126 | 80.912 | 82.366 |
| 40 | 82.180 | 75.857 | 79.312 | 80.133 | 78.110 |
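With milestones at epochs 10, 20 and 30, the learning rate is reduced three times during training. A small sketch of the effective learning rate per epoch under this schedule, assuming a `MultiStepLR`-style decay with `gamma=0.1` (the gamma value is not reported in this thread, so the absolute numbers are illustrative):

```python
base_lr, gamma, milestones = 1e-1, 0.1, [10, 20, 30]

for epoch in range(1, 41):
    drops = sum(1 for m in milestones if epoch > m)  # milestones already passed
    lr = base_lr * gamma ** drops
    if epoch == 1 or (epoch - 1) in milestones:  # print only where the rate changes
        print(f"from epoch {epoch:2d}: lr = {lr:.0e}")
```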


*(attached image)*


h36m_3d_25frames_ckpt_epoch_40.pt

| Action | Error |
| --- | ---: |
| walking | 60.1 |
| eating | 59.3 |
| smoking | 59.7 |
| discussion | 86.7 |
| directions | 78.1 |
| greeting | 101.8 |
| phoning | 75.2 |
| posing | 116.2 |
| purchases | 101.9 |
| sitting | 87.8 |
| sittingdown | 111.6 |
| takingphoto | 85.6 |
| waiting | 81.4 |
| walkingdog | 111.2 |
| walkingtogether | 59.3 |
| **Average** | **85.1** |

Prediction time: 0.009195645650227865
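To compare the two reports directly, a small sketch that computes the per-action change between the run reported earlier in the thread (`previous`) and this tuned run (`tuned`); both dictionaries copy the values reported above:

```python
# Per-action errors copied from the two result tables in this thread.
previous = {
    "walking": 60.2, "eating": 62.7, "smoking": 64.3, "discussion": 88.6,
    "directions": 83.2, "greeting": 103.4, "phoning": 80.2, "posing": 119.6,
    "purchases": 106.6, "sitting": 96.7, "sittingdown": 120.6, "takingphoto": 95.6,
    "waiting": 86.4, "walkingdog": 116.3, "walkingtogether": 60.0,
}
tuned = {
    "walking": 60.1, "eating": 59.3, "smoking": 59.7, "discussion": 86.7,
    "directions": 78.1, "greeting": 101.8, "phoning": 75.2, "posing": 116.2,
    "purchases": 101.9, "sitting": 87.8, "sittingdown": 111.6, "takingphoto": 85.6,
    "waiting": 81.4, "walkingdog": 111.2, "walkingtogether": 59.3,
}

# Positive delta means the tuned run has a lower (better) error for that action.
for action in previous:
    delta = previous[action] - tuned[action]
    print(f"{action:16s} {previous[action]:6.1f} -> {tuned[action]:6.1f}  ({delta:+.1f})")

print(f"Average: {sum(previous.values()) / len(previous):.1f} -> "
      f"{sum(tuned.values()) / len(tuned):.1f}")  # 89.6 -> 85.1
```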