lwang-astro / PeTar

PeTar is a high-performance N-body code for modelling the evolution of star clusters and tidal streams, including the effect of the galactic potential, the dynamics of binary and hierarchical systems, and single and binary stellar evolution.
MIT License

Particle number suddenly drops in small cluster #37

Closed kaiwu-astro closed 1 year ago

kaiwu-astro commented 1 year ago

I am running simulations with N=1000. I have run many instances with the same initial condition, and some of them show a strange phenomenon: the number of lines in the snapshot file data.xxx (i.e., the number of particles) drops suddenly.

For example, in one instance the particle number drops by about 95% from t=25 Myr to t=26 Myr:

     414 data.22.binary
     653 data.22.single
    1479 data.23
     412 data.23.binary
     654 data.23.single
    1476 data.24
     411 data.24.binary
     653 data.24.single
    1475 data.25
     410 data.25.binary
     654 data.25.single
      18 data.26.binary
      26 data.26.single
      63 data.26
      18 data.27.binary
      26 data.27.single
      63 data.27
      18 data.28.binary
      26 data.28.single
      63 data.28

In another instance, it drops by about 99% from t=138 Myr to t=139 Myr:

     1278 data.135
     1263 data.136
     1263 data.137
     1263 data.138
       22 data.139
       15 data.140.single
       22 data.140
        3 data.140.binary
       22 data.141
       22 data.142
       22 data.143
       22 data.144
       22 data.145
       22 data.146
        1 data.147
        1 data.148

Meanwhile, some instances run without this problem for more than 200 Myr, and some still have dozens of particles at 1000 Myr.

Although it may not make sense to run such a tiny cluster (N=1000) for very long, since it disperses quickly, I am still wondering what is going on at the sudden particle number drop. How can I diagnose what is wrong?

[System info, configure option and initial condition here]

- Computer:
  ```
  OS: Ubuntu 18.04
  CPU: Intel(R) Core(TM) i9-9960X CPU @ 3.10GHz
  ```
- Configure:
  ```
  --Features:
       Version:        1181_293
       FDPS Version:   7.0
       Using MPI:      yes
       Using GPU:      no
       Using arch:     x86
       SIMD type:      avx512
         Notice: this is auto-detected based on the host CPU architecture
                 If different CPU is used for running, check whether avx512 is also supported
       Using OpenMP:   yes
       Debug mode:     no
       Step mode:      kdk
       interrupt mode: bse
       external mode:  galpy
         Galpy version: 1.8.1
       tidal tensor mode: 3rd
       orbit mode:     pm
       Using quad:     yes
  --Compilers:
       C++ compiler:     mpic++
       C compiler:       gcc
       Fortran compiler: gfortran
  --Targets:
       Program name: petar.mpi.omp.avx512.bse.galpy
       Install path: /home/wukai/bin
  ```
- Initial condition and run:
  ```bash
  petar.init -s bse -f input -c 8000,0,0,0,224.9951165832651,0 -t mcluster.out
  petar.find.dt -a "-b 500 -u 1" -m 2 -o 48 input > find.log
  step=`grep 'Best performance choice' find.log | awk '{print $NF}'`
  OMP_NUM_THREADS=48 nice -n 19 mpiexec -n 1 petar -o 1 -b 500 -u 1 -t 1000 --r-escape -20.864 --galpy-set MWPotential2014 --bse-metallicity 0.0153 -s $step input
  ```
lwang-astro commented 1 year ago

I see you have set a small escape criterion. For these low-mass clusters, the Galactic tide is strong. When the cluster is close to being disrupted, the center of mass is not well determined (it is estimated as the potential-weighted average). If at some time the estimated center ends up far from the true cluster center (it can, for example, be pulled towards an escaped black hole), most of the stars are removed at once and you see the sudden drop in particle number. I suggest using a larger escape criterion.
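
Editor's note: a minimal, purely conceptual sketch of the removal mechanism described above. None of these names come from the PeTar source; it only illustrates how a mis-estimated center combined with a finite escape radius can remove nearly all particles in one step.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(scale=2.0, size=(1000, 3))   # toy cluster: stars within a few pc of the origin
r_escape = 20.0                               # removal radius, cf. --r-escape (hypothetical value)

def n_removed(center):
    """Count particles farther than r_escape from the assumed cluster center."""
    dist = np.linalg.norm(pos - center, axis=1)
    return int((dist > r_escape).sum())

good_center = pos.mean(axis=0)                # center close to the real cluster
bad_center = np.array([60.0, 0.0, 0.0])       # center dragged towards a distant escaper

print(n_removed(good_center))   # ~0: almost nothing is removed
print(n_removed(bad_center))    # ~1000: nearly the whole cluster is "removed" at once
```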

kaiwu-astro commented 1 year ago

Thank you for the help!! I will try a larger escape criterion and let you know if this solves the problem.

kaiwu-astro commented 1 year ago

Hi, I tried larger escape criteria by changing --r-escape from -20 to -25, -30, -40, -50, -100, -200 and -400. The -200 and -400 runs are still going (they are slow). From -25 to -100, the "sudden drop" still happens at an early simulation time.

-50 (even earlier than the -20 instance mentioned above):

1406 data.102 
1404 data.103 
  81 data.104 
  81 data.105

-100:

1255 data.262 
1252 data.263
   1 data.264 
   1 data.265

One of the instances with --r-escape=-20 did not drop in particle number until 280 Myr, so it seems that enlarging the escape criterion does not help significantly. May I know your further suggestions?

lwang-astro commented 1 year ago

I guess that the cluster is completely disrupted around 280 Myr, so changing r_escape may not help. You can check the evolution of the Lagrangian radii and core radii to see whether this is the case. From the README or the sample scripts, you can find how to do the post-processing of the data by using petar.data.gether and petar.data.process. These tools generate data.lagr, where the evolution of the Lagrangian radii and core radii is recorded. You can then use the petar Python tool to read it and check. If the core radius suddenly increases, it suggests that the cluster is in the disruption phase.
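
Editor's note: a minimal sketch of the kind of check suggested above, assuming the petar Python analysis module's LagrangianMultiple reader with loadtxt as outlined in the README; the exact attribute names (time, all.r) are my reading of the module and should be verified against the module documentation, e.g. help(petar.LagrangianMultiple).

```python
import petar
import matplotlib.pyplot as plt

# Read the Lagrangian/core radii file produced by petar.data.gether / petar.data.process.
lagr = petar.LagrangianMultiple()
lagr.loadtxt('data.lagr')

# Plot the Lagrangian radii of all particles versus time; a sudden jump of the
# outer radii (or of the core radius) points to the disruption phase.
plt.plot(lagr.time, lagr.all.r)
plt.xlabel('time')
plt.ylabel('Lagrangian radii')
plt.yscale('log')
plt.savefig('lagr.png')
```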

In addition, you can avoid the sudden drop of particles by not using the option --r-escape. In that case, all particles except zero-mass ones are always kept in the simulation. But after the disruption of the cluster, you may not need to continue the simulation, so I suggest checking the Lagrangian radii first.

kaiwu-astro commented 1 year ago

Thank you for the detailed suggestions! I managed to plot the Lagrangian radii. I also ran more simulations.

This is the instance that has no problem until 400 Myr (--r-escape -20, --bse-metallicity 0.0153): [plot: Instance 1]

This one survives until 200 Myr (--r-escape=-20, --bse-metallicity 0.001): [plot: Instance 2]

These two have the problem at 23 and 71 Myr (--r-escape=-20, --bse-metallicity 0.01 and --r-escape=-30, --bse-metallicity 0.0153): [plot: die]

If I understand it correctly, the core radii have not increased suddenly. I will try some extra simulations without the escape criterion. Besides that, may I know your further suggestions?

lwang-astro commented 1 year ago

I think the problem comes from the wrong determination of the cluster center by the potential-weighting method. This can happen for a low-mass cluster in a galactic potential. I would suggest keeping all stars in the simulation (without using --r-escape). You can still detect escapers and calculate the tidal radius in post-processing by using petar.data.process.
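
Editor's note: a rough sketch of flagging escaper candidates in post-processing when --r-escape is not used during the run. It assumes the petar Python module's Particle snapshot reader (interrupt_mode matching the run, header line skipped with skiprows=1) and the attribute names pos and mass; the snapshot file name, the cut-off radius, and those attribute names are assumptions to verify against the README and module docstrings, and petar.data.process itself provides escaper and tidal-radius output.

```python
import numpy as np
import petar

# Read one snapshot (hypothetical file name); interrupt_mode must match the run (bse here).
snap = petar.Particle(interrupt_mode='bse')
snap.loadtxt('data.200', skiprows=1)

# Mass-weighted mean position as a crude stand-in for the cluster center.
center = np.average(snap.pos, axis=0, weights=snap.mass)
dist = np.linalg.norm(snap.pos - center, axis=1)

# Count stars beyond an assumed cut-off radius (e.g. a few tidal radii; the
# tidal radius itself can be taken from the petar.data.process output).
r_cut = 50.0
print('inside r_cut:', int((dist <= r_cut).sum()),
      'escaper candidates:', int((dist > r_cut).sum()))
```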

kaiwu-astro commented 1 year ago

When --r-escape is not set, the simulation becomes slower, but the particle number drop issue is indeed solved.

Thank you very much for your detailed suggestions and valuable time. 👍