Hi @miguelriemoliveira,
The code to implement this feature is the following: https://github.com/DanielCoelho112/localization_end_to_end/blob/5956ed8505b5ab6d266854cb528e62424ee04468/localbot_localization/src/utilities.py#L88-L105
In comparison with ATOM, I had to add the constant sqrt(3) in lines 96 and 98. This is because, in localization, when we say that the error is 10 cm, we don't mean that each of the x, y, z components has an error of 10 cm; we mean that the RMSE between the position vectors is 10 cm. So, assuming the three components share the same error magnitude, the sqrt(3) appears.
Doing it as in ATOM, when I requested a position error of 10 cm, the mean error was actually 5.7 cm. With the sqrt(3), it is 10 cm.
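As a minimal sketch of the idea (the function and argument names here are illustrative, not the actual utilities.py code), assuming the noise direction is drawn uniformly at random:

```python
import numpy as np

def add_position_noise(xyz, pos_error, rng=None):
    # Pick a random unit direction for the noise.
    rng = rng or np.random.default_rng()
    direction = rng.uniform(-1.0, 1.0, size=3)
    direction /= np.linalg.norm(direction)
    # ATOM-style noise would scale by pos_error alone; as described above,
    # that yielded a measured mean error of pos_error / sqrt(3) (e.g. 5.7 cm
    # for a requested 10 cm), so the magnitude is multiplied by sqrt(3).
    return np.asarray(xyz, dtype=float) + direction * pos_error * np.sqrt(3)
```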
I created these versions of the dataset:
10cm, 5º
20cm, 10º
30cm, 20º
0cm, 20º
30cm, 0º
What do you say @miguelriemoliveira?
Usually we try to change each factor independently first, e.g., only translation, then only rotation, then some mixed ones... if you can add a few more, that would be great.
Hi @miguelriemoliveira,
yes, makes sense. I'll add:
0cm, 5º
0cm, 10º
7.5cm, 0º
15cm, 0º
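To keep the variants organised (see the p30r20 naming used later in this thread), the whole grid could be produced in one loop; make_noisy_dataset below is a hypothetical stand-in for whatever script actually generates each version:

```python
def make_noisy_dataset(name: str, pos_error: float, rot_error: float) -> None:
    # Hypothetical stand-in: the real pipeline would copy the clean dataset
    # and perturb every pose, e.g. using the sqrt(3)-scaled noise above.
    print(f"generating {name}: pos_error={pos_error} m, rot_error={rot_error} deg")

# (position error in metres, rotation error in degrees), both batches of variants
variants = [(0.10, 5), (0.20, 10), (0.30, 20), (0.00, 20), (0.30, 0),
            (0.00, 5), (0.00, 10), (0.075, 0), (0.15, 0)]

for pos_error, rot_error in variants:
    # e.g. (0.30, 20) -> "p30r20"
    make_noisy_dataset(f"p{pos_error * 100:g}r{rot_error}", pos_error, rot_error)
```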
Nevertheless, in #61 the results already show what we wanted:
Right, they do. They show that with noise the localization accuracy worsens and, moreover, that if the noise is concentrated in one component, then the loss of accuracy is also concentrated in that component.
Great results again. Congrats.
In comparison with ATOM, I had to add the constant sqrt(3) in lines 96 and 98. This is because, in localization, when we say that the error is 10 cm, we don't mean that each of the x, y, z components has an error of 10 cm; we mean that the RMSE between the position vectors is 10 cm. So, assuming the three components share the same error magnitude, the sqrt(3) appears.
I think I like your way better ...
Hi @miguelriemoliveira,
What about now?
The results for p30r20 are running now. I just computed these results for PoseNet with dynamic loss, but later on I will compute them for the other models as well. I'm sure the pattern will continue, so I'll do that when we're writing the paper.
Hi @DanielCoelho112 ,
great results. Congratulations. It all makes perfect sense.
I am eager to know the results of your 500k image dataset. It should be ready around Easter 2030, right? :-)
After all, it is 50k in lab 024 and 100k in Santuario. Check the training for the 50k dataset:
19 minutes per epoch, whereas with the 8k dataset it was 3.5 minutes. I've never seen a test loss so low, so I'm getting curious...
I just ran the model at epoch 30, and the results are:
position error = 8.8 cm, rotation error = 6.8º
The position error was reduced a lot, but the rotation error is similar to the 8k dataset. Usually, convergence happens around 100 epochs, so let's hope for the best.
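For reference, the two numbers could be computed per frame roughly as below (a sketch assuming quaternion orientations; not necessarily the exact evaluation code in the repository):

```python
import numpy as np

def position_error(t_pred, t_gt):
    # Euclidean distance between predicted and ground-truth positions (metres).
    return np.linalg.norm(np.asarray(t_pred) - np.asarray(t_gt))

def rotation_error_deg(q_pred, q_gt):
    # Geodesic angle of the relative rotation between two quaternions, in degrees.
    q_pred = np.asarray(q_pred) / np.linalg.norm(q_pred)
    q_gt = np.asarray(q_gt) / np.linalg.norm(q_gt)
    dot = min(abs(float(np.dot(q_pred, q_gt))), 1.0)  # |dot| handles the q/-q ambiguity
    return float(np.degrees(2.0 * np.arccos(dot)))

# e.g. position_error([0, 0, 0], [0.05, 0.05, 0.05]) ~= 0.0866 m
```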
so I'm getting curious...
me too
This will allow us to see the impact the errors have on the learning of the models in a more controlled experiment.