Closed ChrisZeThird closed 1 year ago
Old models generated at the beginning of the internship were used, as well as more recent ones, but the results are underwhelming.
Changed the model in the `main` branch; currently the best MSE is 0.0008 and RMSE 0.03, for training on synthetic lines.
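For reference, the two metrics are directly related (RMSE is just the square root of MSE), which a quick check makes explicit. The tensors here are hypothetical stand-ins, not the actual training outputs:

```python
import torch
import torch.nn as nn

# Hypothetical predicted vs. true angles, just to illustrate the metric relation
predictions = torch.tensor([150.1, 149.8, 150.3])
targets = torch.tensor([150.0, 150.0, 150.0])

mse = nn.MSELoss()(predictions, targets)
rmse = torch.sqrt(mse)  # sqrt(MSE); e.g. an MSE of 0.0008 gives an RMSE of ~0.028
print(mse.item(), rmse.item())
```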
Still have the same problem though: the synthetic data themselves are fine and are predicted correctly.
Even LeakyReLU does not help. It does a great job on synthetic lines but doesn't change the outcomes on experimental data.
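For context, swapping in LeakyReLU is a one-line change in a PyTorch model. This is only a minimal sketch; the layer sizes are hypothetical and not the repo's actual architecture:

```python
import torch
import torch.nn as nn

# Minimal sketch: a small feed-forward head with ReLU swapped for LeakyReLU
# (layer sizes are hypothetical, not the actual model in the repo)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64, 32),
    nn.LeakyReLU(negative_slope=0.01),  # small slope keeps gradients alive for x < 0
    nn.Linear(32, 1),
)

out = model(torch.rand(4, 8, 8))  # 8x8 patches flatten to 64 features
```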
I still have the same issue, but this method is promising. The synthetic data might be too different from the experimental data to get any result at all.
The issue is in the prediction. I've debugged the script and saw that `angles_test_prediction = model(tensor_patches)` returns the same value for every patch, despite the model being correctly defined.
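One quick way to confirm the collapse is to look at the spread of the predictions. This is a sketch with stand-ins for `model` and `tensor_patches`, not the actual objects from the script:

```python
import torch

# Hypothetical stand-ins for the model and the experimental patches
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 1))
tensor_patches = torch.rand(32, 8, 8)

model.eval()  # rule out train-mode layers (dropout, batchnorm) as the cause
with torch.no_grad():
    angles_test_prediction = model(tensor_patches)

# If the std is ~0 and min == max, the network really does emit one constant value
print(angles_test_prediction.std().item(),
      angles_test_prediction.min().item(),
      angles_test_prediction.max().item())
```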
The `Dx` diagrams are much closer to the synthetic figures than the regular ones, but the network still predicts the same value over and over again. There is no issue in how the angles are calculated, nor in how they are displayed on the plot (it is not a `print`-type error); it comes directly from the prediction.
Either something in the diagrams tricks the network (meaning every diagram would contain one line at a 151° angle), or the prediction is not performed properly.
Training on synthetic data is the right idea; however, it won't perform properly with a simple feed-forward network. Additionally, adding Gaussian noise can help generate data closer to the experimental diagrams.
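The Gaussian-noise augmentation could look like the sketch below. The function name and the noise scale `sigma` are hypothetical choices, not something already in the repo:

```python
import torch

def add_gaussian_noise(patches: torch.Tensor, sigma: float = 0.05) -> torch.Tensor:
    """Add zero-mean Gaussian noise so synthetic patches look closer
    to noisy experimental diagrams (sigma is a hypothetical scale)."""
    return patches + sigma * torch.randn_like(patches)

synthetic = torch.rand(16, 1, 32, 32)  # stand-in batch of synthetic diagrams
noisy = add_gaussian_noise(synthetic)
```

Tuning `sigma` against the noise level actually seen in the experimental diagrams would be the natural next step.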
Resampling the dataset and reducing the network size did not work. The last way to solve the issue could be to train on synthetic data and test on experimental data.