Closed: iSach closed this issue 1 month ago
Thanks for raising that point! The number of steps is not `t_end/dt`, but rather `t_end/(dt*write_every)`, with `write_every=100` being the temporal coarsening step, which is also included in the `metadata.json` files.
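Plugging in the 2D dam-break numbers from this thread (`t_end=12`, `dt=3e-4`, `write_every=100`) as a quick sanity check:

```python
# Values from the 2D dam-break metadata.json discussed in this thread.
t_end = 12.0       # physical end time of the simulation
dt = 3e-4          # SPH solver integration step
write_every = 100  # temporal coarsening: every 100th solver step is stored

# Number of stored transitions, and frames including the initial state.
n_transitions = round(t_end / (dt * write_every))  # 400
n_frames = n_transitions + 1                       # 401, matching the dataset
```

The effective time between stored frames is `dt * write_every = 0.03`, which is the ∆t reported in the paper.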
Most of the values in the metadata files are the parameters used to run the SPH solver, e.g. `dt` is the integration step the SPH solver used. If you want to see which parameters come from the solver and which are added during dataset generation (e.g. velocity and acceleration statistics), please have a look here: https://github.com/tumaer/lagrangebench/blob/main/data_gen/lagrangebench_data/gen_dataset.py
If you search through the repo for `write_every`, you will not find much (https://github.com/search?q=repo%3Atumaer%2Flagrangebench%20write_every&type=code), because from the perspective of the ML model everything is normalized and `dt=1` (https://github.com/tumaer/lagrangebench/blob/main/lagrangebench/case_setup/case.py#L256).
I hope this helps. Let me know if you still have questions!
Best, Artur
Hello Artur,
Thanks for this detailed answer; it makes much more sense to me now. I did not notice the `write_every`, my bad! :)
I am currently trying to benchmark a bunch of models on this 2D dam dataset, including the DMCF model (https://github.com/tum-pbs/DMCF). It requires the `dt` value for integrating external forces such as gravity. When training a model with `dt=0.03` and `gravity=-1.0`, as I found in these config files, the model barely learns anything and does not converge at all, which I found quite surprising considering its performance on other datasets. The velocities in my case have been correctly rescaled using the `dt` when converting the dataset.
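For reference, here is a minimal sketch of the kind of rescaling I mean, assuming positions are stored as an array of shape `(n_frames, n_particles, dim)`; the function name and array layout are my own, not from either repo. Velocities are finite differences between stored frames, divided by the effective step `dt * write_every = 0.03`:

```python
import numpy as np

def velocities_from_positions(positions, dt_effective=0.03):
    """Finite-difference velocities between consecutive stored frames.

    positions: (n_frames, n_particles, dim) array (layout assumed here)
    dt_effective: physical time between stored frames,
                  i.e. solver dt (3e-4) * write_every (100) = 0.03
    """
    return (positions[1:] - positions[:-1]) / dt_effective

# Toy check: a single particle in free fall under gravity g = -1.0.
g = -1.0
dt_eff = 0.03
t = np.arange(5) * dt_eff
# x(t) = 0, y(t) = 0.5 * g * t^2  ->  shape (5, 1, 2)
pos = np.stack([np.zeros_like(t), 0.5 * g * t**2], axis=-1)[:, None, :]
vel = velocities_from_positions(pos, dt_eff)
# Finite differences of a quadratic give midpoint velocities:
# vel_y between t_i and t_{i+1} equals 0.5 * g * (t_i + t_{i+1}).
```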
Do you perhaps know if somebody has done this before with DMCF, or maybe you have some experience with it?
Thanks again for your replies and time, Sacha
Benchmarking DMCF is on my longer-term to-do list, but I haven't tried it yet. By the way, as far as I'm aware, SFBC (https://arxiv.org/abs/2403.16680) is the successor of DMCF, and on top of that, SFBC should be easier to implement (https://github.com/tum-pbs/SFBC).
Regarding proper benchmarking, there are three things, off the top of my head, that I can imagine going wrong:
- `default_connectivity_radius` in my `metadata.json` files, which an MPNN easily compensates for by having many layers. To be more precise, my connectivity cutoff is approx. 1.5*average_particle_distance, while CConvs work with something like a factor of 2.5-3 (instead of 1.5).
- `write_every`, which is 100. However, if I'm not wrong, SFBC works with a temporal coarsening factor of around 16, i.e. the amount of nonlinearity from frame to frame in DAM is much larger than what CConv has been benchmarked on.

If you want to have a Zoom call in the next days, just drop me an email at artur.toshev@tum.de.
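To put rough numbers on the cutoff comparison (the particle spacing below is a made-up value; only the factors 1.5 and 2.5-3 come from this discussion):

```python
# Illustrative only: avg_particle_distance is a hypothetical spacing;
# the factors 1.5 and 2.5-3.0 are the ones discussed above.
avg_particle_distance = 0.02

lagrangebench_cutoff = 1.5 * avg_particle_distance  # ~ default_connectivity_radius
cconv_cutoff_lo = 2.5 * avg_particle_distance       # typical CConv-style range
cconv_cutoff_hi = 3.0 * avg_particle_distance

# In 2D the neighborhood area, and hence the neighbor count, scales with
# the cutoff squared, so CConv-style radii see roughly 2.8x-4x more
# neighbors per particle than the LagrangeBench cutoff.
ratio_lo = (cconv_cutoff_lo / lagrangebench_cutoff) ** 2  # (2.5/1.5)^2 ~ 2.78
ratio_hi = (cconv_cutoff_hi / lagrangebench_cutoff) ** 2  # (3.0/1.5)^2 = 4.0
```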
Best, Artur.
Thanks a lot for this detailed answer and the insights; I will investigate further. I am also thinking about benchmarking SFBC, but it is less trivial to adapt the datasets I'm using (basically just lists of positions) to work with its code.
I will contact you if I still require help, thanks a lot!!
Cheers, Sacha
Hello,
In the paper, the reported ∆t value for the 2D dam break dataset is 0.03. In the `metadata.json` file, however, `dt` is given as `3e-4`. Considering `t_end` is 12 and there are 401 steps, I guess the correct value is indeed 0.03. Is there a reason it is `3e-4` here? Are there other cases of this, maybe?

Thanks in advance and best regards, Sacha
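P.S. Spelled out, the arithmetic behind my guess:

```python
# 401 stored frames = 400 intervals over t_end = 12 time units.
t_end = 12.0
n_frames = 401
dt_guess = t_end / (n_frames - 1)  # 12 / 400 = 0.03, the value in the paper

# The metadata value 3e-4 is a factor of 100 smaller than 0.03.
factor = dt_guess / 3e-4
```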