FabianFuchsML / se3-transformer-public

code for the SE3 Transformers paper: https://arxiv.org/abs/2006.10503

nbody simulation results #18

Closed hanjq17 closed 2 years ago

hanjq17 commented 3 years ago

Thanks to the authors for the great work!

I am trying to reproduce the n-body simulation results using exactly the provided training script: `python nbody_run.py --ri_delta_t 10 --num_degrees 4 --batch_size 128 --num_channels 8 --div 4 --ri_burn_in 0 --siend att --xij add --head 2 --data_str 20_new`, where the dataset is generated strictly with the script provided in the folder (since the original dataset is not included in this repository). However, after training for 400+ epochs, the model still does not reach the results reported in the paper. Below is part of my training log:

```
[409|0] loss: 0.17786 ...
[409|test] loss: 0.18485
Test loss: 0.18485483480617404
Test pos_mse: 0.051348936242552906
Test vel_mse: 0.31836070846288633
```

The metrics (pos_mse, vel_mse) are not ideal. Do you have any idea what I might be missing?

The dataset could differ since it's randomly generated, but the metrics are nearly 10x larger than the reported ones, so I suspect the procedure above contains a problem somewhere.

BTW, the results on the QM9 dataset align well with those reported in the paper, so perhaps something is wrong specifically with my n-body setup.

Thanks!

FabianFuchsML commented 3 years ago

Hi!

Great to hear that you are interested in our work! :)

My first thought is that you may have generated the simulation with 20 particles (which I think is the default setting), whereas in the paper we used 5. A system with 20 particles is obviously more complex and will lead to larger errors.
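To illustrate why particle count matters so much: the forces in a charged-particle system couple every pair, so error compounds quickly as N grows. Below is a toy sketch of such a simulation (illustrative only — this is *not* the repo's data generator; the function name, charges, softening constant, and integrator are my own assumptions):

```python
import numpy as np

def simulate(n_particles, n_steps, dt=0.001, seed=0):
    """Toy charged-particle rollout (hypothetical, not the repo's generator).
    Charges are +/-1; the pairwise force is repulsive for like charges,
    with a softened inverse-square law to avoid singularities."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(n_particles, 3))
    vel = 0.1 * rng.normal(size=(n_particles, 3))
    charge = rng.choice([-1.0, 1.0], size=(n_particles, 1))
    for _ in range(n_steps):
        # diff[i, j] = pos[i] - pos[j]; the diagonal is zero, so
        # self-interaction contributes no force.
        diff = pos[:, None, :] - pos[None, :, :]            # (N, N, 3)
        r2 = (diff ** 2).sum(-1, keepdims=True) + 1e-2      # softening
        force = (charge * charge.T)[..., None] * diff / r2 ** 1.5
        vel += dt * force.sum(axis=1)                       # net force on i
        pos += dt * vel                                     # Euler step
    return pos, vel

pos5, vel5 = simulate(5, 100)    # paper setting: 5 particles
pos20, vel20 = simulate(20, 100) # default in this repo's generator (I believe)
```

Note that the number of pairwise interactions grows quadratically (10 pairs for N=5 vs. 190 for N=20), which is one intuition for why the prediction task gets harder and the MSEs larger.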

Secondly, keep in mind that this is not the exact code from the paper but a reimplementation (for IP reasons), of both the simulation and the model. Hence the results won't be exactly the same. If I remember correctly, though, when we ran this code with 5 particles, we got slightly better results than in the paper.