isl-org / DeepLagrangianFluids

Code repository for "Lagrangian Fluid Simulation with Continuous Convolutions", ICLR 2020.

Timestep used in the training process #22

Closed duytrangiale closed 2 years ago

duytrangiale commented 2 years ago

Hello everyone,

I'm currently running some experiments with the code to understand the parameters it uses. One thing I'm not clear on is the timestep used during training. Should the timestep (dt) used to train the model match the timestep used in the simulation? My simulation produces a list of files, each recording the states of all particles in one frame. The timestep I used in the simulation is quite small (around 2e-5 seconds).

When I train the model with a timestep of 0.02 s, there is no issue; the prediction is not very good (especially for boundary collisions) but looks somewhat sensible. The problem starts when I use the same timestep as the simulation to train the model. During training, the loss decreases over time, but it gets stuck and fluctuates around 0.3 or 0.4 without further improvement (I also tried modifying the learning rate, but that did not help). After training finishes, I use the model to run a test case. However, when I feed in the first frame and let the model predict, the predicted positions and delta x blow up (reach very large values) after just a few steps (4 or 5).
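For reference, one way to reconcile the two timesteps is to subsample the simulation output so that consecutive training samples are separated by the model's dt rather than the raw simulation dt. A minimal sketch (the frame list and filenames below are hypothetical; it assumes frames are written at a fixed interval of `sim_dt`):

```python
# Subsample high-rate simulation frames so that consecutive training
# samples are separated by the model's timestep.
# Assumes frames were written at a fixed interval of sim_dt.

sim_dt = 2e-5     # timestep used when generating the simulation data
target_dt = 0.02  # timestep the model is trained with

# Number of simulation frames between consecutive training samples.
stride = round(target_dt / sim_dt)  # 1000 for these values

# Hypothetical list of per-frame files produced by the simulation.
frames = [f"frame_{i:06d}.npz" for i in range(5001)]

# Keep every stride-th frame; adjacent kept frames are ~0.02 s apart.
training_frames = frames[::stride]
```

This keeps the training dt at the value the model expects while still using the high-rate simulation data.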

Could you please suggest some possible reasons for this?

Thanks.