I saw you have set a small escape criterion. For these low-mass clusters, the Galactic tide is strong. When the cluster is close to being disrupted, the center of mass is not well determined (it is the averaged potential center). At some point, if the estimated center is far from the true cluster center (it can sit on an escaped black hole, for example), most of the stars are removed at once and you see the sudden drop of the particle number. I suggest using a larger escape criterion.
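To illustrate the effect (a toy sketch only, not PeTar's actual implementation): if the estimated center sits on a distant escaper, almost every star is farther than the escape radius from it and gets flagged in a single step.

```python
import numpy as np

# Toy illustration, not PeTar's code: stars of a small cluster around
# the origin, and an escape cut measured from an *estimated* center.
rng = np.random.default_rng(0)
pos = rng.normal(scale=2.0, size=(1000, 3))  # cluster stars, a few pc in size
r_escape = 20.0                              # escape radius in the same units

def n_flagged(center):
    """Number of stars farther than r_escape from the given center."""
    return int(np.sum(np.linalg.norm(pos - center, axis=1) > r_escape))

print(n_flagged(np.zeros(3)))                 # correct center: essentially none removed
print(n_flagged(np.array([50.0, 0.0, 0.0])))  # center on a far escaper: nearly all removed
```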
Thank you for the help!! I will try with a larger escape criterion and let you know if this is solved.
Hi, I tried larger escape criteria by adjusting --r-escape from -20 to -25, -30, -40, -50, -100, -200, and -400. The -200 and -400 runs are still going (slowly). From -25 to -100, the "sudden drop" still happens at an early simulation time.
-50 (earlier than an --r-escape=-20 instance mentioned above):
1406 data.102
1404 data.103
81 data.104
81 data.105
-100:
1255 data.262
1252 data.263
1 data.264
1 data.265
One of the instances with --r-escape=-20 did not drop in particle number until 280 Myr, so it seems enlarging the escape criterion does not significantly help. May I know your further suggestions?
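(For reference, the counts above are just line counts of the snapshot files; a rough sketch of how I collect them, assuming the output prefix data from my run command:)

```python
import re
from pathlib import Path

# Sketch: count the lines (roughly the particle number) of every
# snapshot data.<index>, in index order, to locate the sudden drop.
snaps = []
for p in Path('.').iterdir():
    m = re.fullmatch(r'data\.(\d+)', p.name)
    if m:
        snaps.append((int(m.group(1)), p))

for idx, p in sorted(snaps):
    with open(p) as f:
        print(p.name, sum(1 for _ in f))
```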
I guess the cluster is completely disrupted around 280 Myr, so changing r_escape may not help. You can check the evolution of the Lagrangian radii and core radii to see whether this is the case. From the README or the sample scripts, you can find how to do the post data analysis by using petar.data.gether and petar.data.process. These tools generate data.lagr, where the evolution of the Lagrangian radii and core radii is recorded. Then you can use the petar python tool to read it and check. If the core radius suddenly increases, it suggests that the cluster is in the disruption phase.
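For example, after petar.data.gether and petar.data.process have produced data.lagr, the reading step looks roughly like this (a sketch; the class and attribute names follow the python-module README and may differ in your version, so please check `help(petar)`):

```python
import petar
import matplotlib.pyplot as plt

# Sketch: read data.lagr and look at the radius evolution.
# LagrangianMultiple / loadtxt / time / all.r follow the README of the
# petar python module; verify with help(petar.LagrangianMultiple).
lagr = petar.LagrangianMultiple()
lagr.loadtxt('data.lagr')

# lagr.all.r holds the Lagrangian radii (and the core radius) per
# output time; a sudden jump of the core radius indicates disruption.
plt.plot(lagr.time, lagr.all.r)
plt.xlabel('time')
plt.ylabel('radius')
plt.show()
```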
In addition, you can avoid the sudden drop of particles by not using the option --r-escape. In that case, all particles except zero-mass ones are always kept in the simulation. But after the disruption of the cluster, you may not need to continue the simulation. Thus, I suggest checking the Lagrangian radii first.
Thank you for the detailed suggestions! I managed to plot their Lagrangian radii. I also ran more simulations.
This is the instance that has no problem up to 400 Myr (--r-escape -20, --bse-metallicity 0.0153).
This one lives until 200 Myr (--r-escape=-20, --bse-metallicity 0.001).
These two have problems at 23 and 71 Myr (--r-escape=-20, --bse-metallicity 0.01 and --r-escape=-30, --bse-metallicity 0.0153).
If I understand it correctly, the core radii haven't increased suddenly. I will try some extra simulations without the escape criterion. Besides, may I know your further suggestions?
I think the problem comes from the wrong determination of the cluster center by the potential-weighting method. This can happen for a low-mass cluster in a galactic potential. I would suggest keeping all stars in the simulation (without using --r-escape). You can still detect escapers and calculate the tidal radius in the post-processing by using petar.data.process.
When --r-escape is not set, the simulation becomes slower, but the particle-number drop issue is indeed solved.
Thank you very much for your detailed suggestions and valuable time. 👍
I am running simulations with N=1000. I have run many instances (the same initial condition many times). Some of them show this strange phenomenon: the number of lines in the file data.xxx (i.e. the number of particles) drops suddenly.
For example, in one instance the particle number drops by 95% from t=25 Myr to t=26 Myr,
in another one by 99% from t=138 Myr to t=139 Myr,
while some instances run without this problem for >200 Myr, and some still have dozens of particles at 1000 Myr.
Although for a tiny cluster of N=1000 it may not be good to run the simulation for too long, because the cluster will disperse very soon, I am still wondering what is going on at that sudden particle-number drop. How can I diagnose what's wrong?
System info, configure options, and initial condition:
- Computer:
  ```
  OS: Ubuntu 18.04
  CPU: Intel(R) Core(TM) i9-9960X CPU @ 3.10GHz
  ```
- Configure:
  ```
  --Features:
      Version: 1181_293
      FDPS Version: 7.0
      Using MPI: yes
      Using GPU: no
      Using arch: x86
      SIMD type: avx512
        Notice: this is auto-detected based on the host CPU architecture
                If different CPU is used for running, check whether avx512 is also supported
      Using OpenMP: yes
      Debug mode: no
      Step mode: kdk
      interrupt mode: bse
      external mode: galpy
          Galpy version: 1.8.1
      tidal tensor mode: 3rd
      orbit mode: pm
      Using quad: yes
  --Compilers:
      C++ compiler: mpic++
      C compiler: gcc
      Fortran compiler: gfortran
  --Targets:
      Program name: petar.mpi.omp.avx512.bse.galpy
      Install path: /home/wukai/bin
  ```
- Initial condition and run:
  ```bash
  petar.init -s bse -f input -c 8000,0,0,0,224.9951165832651,0 -t mcluster.out
  petar.find.dt -a "-b 500 -u 1" -m 2 -o 48 input > find.log
  step=`grep 'Best performance choice' find.log | awk '{print $NF}'`
  OMP_NUM_THREADS=48 nice -n 19 mpiexec -n 1 petar -o 1 -b 500 -u 1 -t 1000 --r-escape -20.864 --galpy-set MWPotential2014 --bse-metallicity 0.0153 -s $step input
  ```