FabianGabriel / Active_flow_control_past_cylinder_using_DRL


Mesh dependency study #3

Closed. AndreWeiner closed this issue 3 years ago.

AndreWeiner commented 3 years ago

Hi Fabian,

I suggest a mesh dependency study for different Reynolds numbers as a next step to advance the project. As you start using denser meshes, the computational demand increases. You can accelerate the simulations by increasing the number of processes. I would impose an upper limit of 10 processes per simulation. To change the number of processes, you need to modify three files:

  1. Allrun
  2. Allrun.singularity
  3. system/decomposeParDict

As an upper limit for the Reynolds number, I suggest Re=1000. To change the Reynolds number, modify the inflow boundary condition in 0.org/U. I would conduct the mesh dependency study roughly as follows:
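
For orientation, a minimal sketch of the entries in question (the exact lines in this repository may differ):

// system/decomposeParDict
numberOfSubdomains  10;

In Allrun and Allrun.singularity, the process count passed to the parallel solver call (e.g. the -np argument, if the scripts invoke mpirun directly) has to be set to the same value.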

Best, Andre

FabianGabriel commented 3 years ago

Hi Andre,

I have some problems getting the job to work on the phoenix-cluster. When I submit the command "sbatch jobscript cylinder2D_base" I get the following error:

Submitting case cylinder2D_base
/var/tmp/slurmd_spool/job1621901/slurm_script: line 26: ./Allrun.singularity: Permission denied

The test jobscript in the user manual works just fine.

Best Regards, Fabian

AndreWeiner commented 3 years ago

Seems like the Allrun script has the wrong owner/incorrect permissions. Can you navigate into the test case, run ls -al, and post the output here? You should be the file owner to execute the script. You can change the owner of a file using chown:

chown $(whoami) Allrun.singularity
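
If the ownership already looks correct in the ls -al output, the execute permission might simply be missing instead; just a guess until we see the output, but in that case it can be added with chmod:

chmod u+x Allrun.singularity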

Hope that helps. Best, Andre

FabianGabriel commented 3 years ago

Thanks! The file didn't have the "execute" permission, which I then added. Now it works.

Best regards, Fabian

FabianGabriel commented 3 years ago

Hi Andre,

Now that I have run the first simulations, I've come across a bit of a problem regarding the overall computation time. The simulation times for Re = 100 were as follows (numbers taken from log.pimplefoam):

Mesh size   Cells    Time        Courant mean (after 1 s)   Courant max (after 1 s)
25          4625     229.83 s    0.073249755                0.56789814
50          18500    601.55 s    0.073237097                0.56658645
100         74000    2889.3 s    0.073211338                0.55701695
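
(Such numbers can be extracted from the log with, for example, grep "Courant Number" log.pimplefoam | tail -n 1, assuming the usual pimpleFoam log format.)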

These numbers are already way higher than in Darshan's report, even though the mesh sizes are roughly equivalent.

For higher Reynolds numbers the simulation time only increases further, so the estimated time at Re = 1000 with mesh size 100 would be about 8 h. That seems way too long to be right.

What can I do to speed up this process?

Best Regards, Fabian

AndreWeiner commented 3 years ago

Hi Fabian, on how many cores did you run the simulation, and how did you decompose the domain? There is also some headroom regarding the Courant number: you could almost double the time step and still stay at Co=1. pimpleFoam can also handle Co>1, but then you should check the impact on the results. Does the number of cells you reported come from the checkMesh output? Best, Andre
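
PS: a minimal sketch of the corresponding change, assuming the time step is set via a fixed deltaT in system/controlDict:

// system/controlDict
deltaT          2.5e-4;   // hypothetical value: roughly double the current step, then verify Co in log.pimplefoam

Alternatively, the entries adjustTimeStep yes; and maxCo 1.0; let OpenFOAM adapt the step to a target Courant number automatically.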

FabianGabriel commented 3 years ago

I ran it on 10 cores like you suggested in the initial issue message. For the decomposition I just changed the values in the decomposeParDict. Result:

numberOfSubdomains  10;

method              hierarchical;

coeffs
{
    n               (10 1 1);
}

The number of cells came from the log.blockMesh output. checkMesh produces the same numbers though.

Best Regards, Fabian

AndreWeiner commented 3 years ago

10 might have been a bit too much. With 5 subdivisions in x, you should end up with almost square subdomains. Maybe it's also worth checking up to how many processes you still get a speed-up. Best, Andre
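
PS: with 5 subdivisions in x, the decomposition entries would become (sketch):

numberOfSubdomains  5;

method              hierarchical;

coeffs
{
    n               (5 1 1);
}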

FabianGabriel commented 3 years ago

Hi Andre, I hope you had a nice weekend. I have now adjusted the thickness and run the simulation again. However, the results still do not match Darshan's simulation (see figure mesh_dp_cd (1); "200" is Darshan's simulation, the other curves are mine). As you can see, the values are much closer to Darshan's and to each other, but still not a perfect match. I can't really figure out why they still don't line up. Is there another normalisation variable that I have not yet changed?

Also, the "25" mesh size is bothering me a bit. It should be roughly comparable to Darshan's "100" mesh size (5325 vs. 5506 cells), but the results are vastly different. Best Regards, Fabian

AndreWeiner commented 3 years ago

Hi Fabian, the results look fine. Darshan's mesh had a very different topology than yours, so I would not expect the same convergence behavior. According to table 4 of this benchmarking article, the upper bound for the drag should be between 3.22 and 3.25. Eyeballing, I would say your results are almost perfect. The refinement in your setup is much more uniform than in Darshan's setup. The uniformity leads to lower discretization errors at the cost of a higher cell count. I would still add another refinement level to be sure. Don't worry too much about Darshan's results. The literature reference is more important. Best, Andre

FabianGabriel commented 3 years ago

Hi Andre, thanks, that's good to know. This is now the result with all 4 mesh sizes (see figure mesh_dp_cd (4)). The values are as follows:

Mesh size   Cells    Time                Courant mean (after 4 s)   Courant max (after 4 s)
25          5325     232.09 s            0.092931529                0.596277
50          21250    762.14 s            0.093645126                0.61540178
100         85000    3566.77 s           0.093890705                0.62253628
200         340000   58581.3 s (~16 h)   0.094134363                0.62508912

I also ran the first simulation at a higher Reynolds number. I chose Re = 400, but the results are not what I expected (see figure mesh_dp_cd (2)). Is the mesh just too coarse here, or is there another problem? The blockMeshDict file is the same as in 100_50. The relevant settings are:

controlDict:
deltaT          1.25e-4;
magUInf         4.0;

setExprBoundaryFieldsDict:
expression      #{ 4*6*pos().y()*(0.41-pos().y())/(0.41*0.41)*$[(vector)vel.dir] #};

Mesh size   Cells    Time       Courant mean (after 4 s)   Courant max (after 4 s)
50          21250    2784.8 s   0.10366097                 0.75172798
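
(Sanity check, assuming the standard benchmark values d = 0.1 m for the cylinder diameter and nu = 1e-3 m^2/s: Re = U_mean*d/nu = 4.0*0.1/0.001 = 400, so the inflow magnitude matches the target Reynolds number.)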

The max Courant number fluctuates quite a bit too: from about 0.75 at t = 4 s to about 0.68 at t = 4.019 s (repeating periodically). The mean Courant number, on the other hand, stays pretty much constant. Is that normal for higher inflow velocities?

Best Regards, Fabian

AndreWeiner commented 3 years ago

Hi Fabian,

thanks for the update. A couple of tips and suggestions:

Your settings for Re=400 seem to be correct. I guess the "25" mesh is simply too coarse to converge at this Reynolds number. You can try a smaller time step (maybe half the current one). However, that mesh is really very coarse and not a potential candidate for further simulations anyway, so if it still does not converge, simply report this convergence behavior (e.g., "did not converge").
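
A sketch of the time-step change, assuming deltaT is set directly in system/controlDict as in the settings you posted:

// system/controlDict
deltaT          6.25e-5;   // half of the previous 1.25e-4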

I hope this answers your question. Best, Andre

FabianGabriel commented 3 years ago

Hi Andre, I have now also run the simulation for Re=400 and mesh size 100. The results were very similar. I had already looked at the log.pimplefoam file and did not notice anything unusual. The simulation converged in about 4-7 iterations per time step, and the Courant number never exceeded 1. I also looked at the flow fields for U and p, but I didn't notice anything unusual there either. The flow fields also hardly differ between mesh sizes 50 and 100. After your message, however, I took another look at the p field. Are these already the checkerboard patterns you mentioned? (See figure 400_100_p.)

The U field is shown in figure 400_100_U.

Best Regards, Fabian

AndreWeiner commented 3 years ago

Hi Fabian, the p and U fields look absolutely fine; you see the classical von Kármán vortex street. In my estimate, only the mesh with Nx=25 delivered strange results. The other results in the drag plot above look absolutely fine. Best, Andre

FabianGabriel commented 3 years ago

Hi Andre,

Sorry, the graphic in the post above was wrong. I never actually ran the size-25 simulation; that curve was already the size-50 one. The other curves were left over from the Re=100 batch, which I accidentally didn't remove. The correct plot is figure mesh_dp_cd_400.

Best Regards, Fabian

AndreWeiner commented 3 years ago

I see. The results might be correct then, and the visualization is the problem. If you look at fig. 3 of the article by Romain Paris et al. (cloud folder, paris2021.pdf), you see that the frequency/Strouhal number as well as the drag increase with the Reynolds number. I also found this report, which may serve as a reference even though the setup is different (the confinement in the channel has a significant impact).

Another point is that you're plotting against the dimensionless time (eq. 2.2 in the report): \tilde{t} = t \bar{U} / d, where \bar{U} is the average inlet velocity and d the cylinder diameter. I think Darshan forgot to divide by the diameter, and the average inlet velocity was unity in his case. However, his Reynolds number was constant, whereas for you the dimensionless time changes with the Reynolds number/inlet velocity.
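
For example, assuming d = 0.1 m: one second of physical time corresponds to \tilde{t} = 1*1/0.1 = 10 convective time units at Re=100 (\bar{U} = 1), but to \tilde{t} = 1*4/0.1 = 40 at Re=400 (\bar{U} = 4). To cover the same dimensionless window, the Re=400 plot therefore needs only about a quarter of the physical time.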

To sum up: look at a shorter time window, and your simulations should look fine.

Best, Andre