VasiliBaranov / packing-generation

Hard-sphere packing generation in C++ with the Lubachevsky–Stillinger, Jodrey–Tory, and force-biased algorithms and packing post-processing.
MIT License

Packed connected medium by a specified PSD #30

Open alisheikholeslam opened 2 years ago

alisheikholeslam commented 2 years ago

Hello Dr. Baranov,

Before stating the issue, I would like to thank you for preparing and providing this repository.

I would like to use your well-prepared files to create a granular medium with specified conditions. I have read the other issues and the docs, but I'm still a little confused. I will use PackingGeneration__v1.0__Windows_7_8_10__x32.exe from Windows 10 PowerShell and want to import a PSD as diameters.txt, which contains 1085 particles. I have modified the generation.conf file as:

Particles count: 1085
Packing size: 0.2 0.4 0.2
Generation start: 1
Seed: 777
Steps to write: 1000
Boundaries mode: 1
Contraction rate: 1e-2
1. boundaries mode: 1 - bulk; 2 - ellipse (inscribed in XYZ box, Z is length of an ellipse); 3 - rectangle
2. generationMode = 1 (Poisson, R) or 2 (Poisson in cells, S)

and put the diameters.txt and generation.conf files beside PackingGeneration__v1.0__Windows_7_8_10__x32.exe and ran the exe file. As there are only 1085 spheres, I expected it to finish within minutes; -fba and -ls run fast, but -lsgd takes about 3 hours, which will be inappropriate for a large number of spheres. After these 3 steps (Generation start was changed to 0 after the first step), I modified the diameters in the lsgd-produced packing.xyzd with the following Python code (I put it here in case it helps anyone; it must be placed and run beside packing.nfo and packing.xyzd):

import numpy as np

# Each record in packing.xyzd is x, y, z, diameter as 8-byte floats,
# so the diameters are packing[:, 3] (equivalently packing.ravel()[3::4]).
packing = np.fromfile('packing.xyzd').reshape(-1, 4)

with open('packing.nfo', "r+") as nfo:
    lines = nfo.readlines()
    theoretical_porosity = float(lines[2].split()[2])
    final_porosity = float(lines[3].split()[2])
    # print(theoretical_porosity, final_porosity)

    scaling_factor = ((1 - final_porosity) / (1 - theoretical_porosity)) ** (1 / 3)

    real_diameters = packing[:, 3] * scaling_factor
    packing[:, 3] = real_diameters
    packing.tofile('packing.xyzd')  # update the diameters in packing.xyzd in place

    """
    # Update packing.nfo and set TheoreticalPorosity to FinalPorosity to avoid
    # scaling the packing once again the next time this script is run.
    lines[3] = lines[3].replace(str(final_porosity), str(theoretical_porosity))
    nfo.seek(0)
    nfo.writelines(lines)
    """

# np.savetxt('r.csv', real_diameters / 2)
# np.savetxt('c.csv', packing[:, :3], delimiter=',')

Q1. Just to be sure, is it enough to modify the diameters right after the aforementioned 3 steps? The prepared PSD will result in a porous medium with porosity ≈ 0.4 (which is the desired goal) for the box specified by Packing size in generation.conf. After execution of the aforementioned methods, the balls are dispersed in a good scheme but without any contacts between them.

Needs: Each sphere must touch (no overlaps, just contacts) at least one other sphere (a minimum contact number for each sphere could be specified, e.g. each sphere may contact [1 to n] other spheres). The whole medium must be a continuum, i.e. all spheres must have a connection path to the other spheres; no separated sphere groups.


Q2. (THE MAIN QUESTION). Is it possible to create such a medium with a prespecified PSD? If not, what way do you recommend?

If not, I think it could be achieved if I create a dense medium with very low porosity (balls have satisfying contact numbers without any overlaps) and then pass the results (ball center positions and radii) to Python or … to find the best candidates (in terms of contact numbers and …) to remove, in a way that reaches the final PSD and porosity. But I have no idea how to control the PSD (I have prepared the true PSD, but in this method it must be used in another way) to reach the desired one.


I am trying the -md option now, but I don't know whether it could be helpful; it has been running for some hours and I cannot tell from its log.txt whether it is converging.

Some other questions:

I would be grateful if you would clarify these questions whenever you have free time.

Q3. Is there any 64-bit PackingGeneration.exe? If not, are there any reasons for that?

Q4. Are there any differences, pros and cons, between compiling the code and using the prebuilt PackingGeneration.exe?

Q5. What is the relation of the compression rate to time units? What does it mean in practice?

Q6. Does this software usually move towards convergence? Is there any way to monitor convergence during execution, and to predict the time to finish?

Q7. Are there any range limitations for the diameter sizes and the packing size?

Q8. What are packing_init.xyzd and packing_prev.xyzd exactly? I read somewhere that packing_init.xyzd is just a backup. What exactly are these files and what are their applications?

Q9. Is it possible to specify a condition so that the entire volume of the balls is placed inside the Packing size specified in generation.conf?

Q10. Does Final Porosity in packing.nfo need to be modified for post-processing, given that the diameters are updated (rescaled) in packing.xyzd?

Q11. Is there any way to see the progress after every nth Steps to write iterations instead of waiting for packing.nfo at the end, i.e. something to evaluate the rescaling factor before packing.nfo is created?

Q12. How can I find out about the other abilities that are not mentioned in the docs, for post-processing etc., and their applications, e.g. -connumdist and -ncc?

Q13. What is an FCC packing?

Q14. Is it possible to specify a minimum number of contacts for each sphere at the end?

Q15. What did you mean by ORIGINAL diameters in lines 27 and 28 of ReadPackingScript.m? Does it refer to any specific diameter?

% The final packing (packing.xyzd) will store correct particle centers and the ORIGINAL diameters % (not the ones scaled with the final scaling factor)—for historical reasons.

Q16. I couldn't find out how to change generationMode from R to S or vice versa; also for the boundary (my question is general, I know that it is specified for just one type so far). E.g. how must the line "2. generationMode = 1 (Poisson, R) or 2 (Poisson in cells, S)" be modified in generation.conf?

Q17. How can I view the particles' movements during the simulations? I saw somewhere that we can use VMD. Is there any documentation on how to do so? Do you know any tools or Python libraries to use with Python code, if I simulate such problems using Python?


I would very much appreciate your time if you could help me with this project.

Yours sincerely, Ali

VasiliBaranov commented 2 years ago

Hi Ali,

thanks for your interest in the program and your questions. Thanks for providing the python code snippet as well. Please see my comments below.

-fba and -ls run fast, but -lsgd takes about 3 hours, which will be inappropriate for a large number of spheres

Yes, for 10 000 particles the -lsgd option takes ~10-20 hours. Producing jammed packings takes a lot of time. To produce many such systems, i just ran this program in parallel on a supercomputer, for many variants of starting parameters.
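For illustration, such an embarrassingly parallel setup might look like the sketch below (the directory names, seed values, and the backgrounded launch line are assumptions for this example, not something shipped with the repository):

```shell
#!/bin/sh
# Prepare one working directory per seed; each generation.conf differs only
# in its Seed line, so the runs explore independent random realizations.
for seed in 777 778 779; do
  dir="run_seed_${seed}"
  mkdir -p "$dir"
  cat > "$dir/generation.conf" <<EOF
Particles count: 1085
Packing size: 0.2 0.4 0.2
Generation start: 1
Seed: ${seed}
Steps to write: 1000
Boundaries mode: 1
Contraction rate: 1e-2
EOF
  # Copy the common particle size distribution if it is present.
  [ -f diameters.txt ] && cp diameters.txt "$dir/" || true
  # Launch each run in the background, e.g.:
  # (cd "$dir" && ./PackingGeneration.exe -fba) &
done
wait   # wait for any background runs to finish
```

On a cluster, each loop iteration would typically become one job submission instead of a background process.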

Q1. Just for ensuring, Is it enough to modify diameters just after the aforementioned 3 steps?

Yes, it shall be enough (the reason is explained in the readme--you've probably seen it).

The prepared PSD will result in a porous medium with porosity ≈ 0.4 (which is the desired goal) for the box specified by Packing size in generation.conf. After execution of the aforementioned methods, the balls are dispersed in a good scheme but without any contacts between them.

I am not quite sure whether you checked the contacts visually or with additional code. The lsgd algorithm stops when there is at least a subset of particles that is jammed, i.e. a backbone of the packing. For any polydispersity, there will be another subset of particles that can freely fly around the backbone. For high polydispersity, there will be more such particles.

If you check for contacts, you shall enlarge diameters by a very small amount, e.g. r_i = r_i * (1 + 1e-4) (see below for some justifications). Then, the particles will start to intersect. By default, the program produces configurations with zero particle intersections.

Each sphere must touch (no overlaps, just contacts) at least one other sphere (a minimum contact number for each sphere could be specified, e.g. each sphere may contact [1 to n] other spheres). The whole medium must be a continuum, i.e. all spheres must have a connection path to the other spheres; no separated sphere groups.

Please note that the number of contacts can't be specified arbitrarily. For frictionless particles, there shall be on average exactly 6 contacts per particle in the backbone. More precisely, N_contacts_in_backbone = 6 * (N_particles_in_backbone - 1). Please note that you shall take only the non-rattler particles (from the backbone) for this calculation; rattlers can be identified by the condition N_contacts_of_particle < 4. I think I recently added a branch with an option to display the contacts of each particle; I can find it if you need it. I think some replies in other issues have references to the papers with the proofs; I can find them as well if you need them.
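To make this concrete, contacts per particle can be counted from a packing array with a brute-force sketch like the one below (the function name contact_counts, the 1e-4 enlargement, and the minimum-image treatment of a periodic bulk box are assumptions for illustration; this is not part of the repository):

```python
import numpy as np

def contact_counts(packing, box, enlarge=1e-4):
    """Count touching neighbours per sphere after enlarging radii slightly.

    packing: (N, 4) array of x, y, z, diameter; box: (3,) periodic box size.
    Without the small enlargement, exact contacts would not register as
    intersections, since the generator produces zero-overlap configurations.
    """
    centers = packing[:, :3]
    radii = packing[:, 3] / 2.0 * (1.0 + enlarge)
    n = len(packing)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        d = centers - centers[i]
        d -= box * np.round(d / box)       # minimum-image convention (periodic box)
        dist = np.linalg.norm(d, axis=1)
        touching = dist < (radii + radii[i])
        touching[i] = False                # ignore the particle itself
        counts[i] = touching.sum()
    return counts

# Tiny synthetic example: two touching unit spheres and one isolated sphere.
demo = np.array([[1.0, 1.0, 1.0, 1.0],
                 [2.0, 1.0, 1.0, 1.0],
                 [5.0, 5.0, 5.0, 1.0]])
box = np.array([10.0, 10.0, 10.0])
counts = contact_counts(demo, box)
rattlers = counts < 4   # backbone particles have at least 4 contacts
```

For large packings one would replace the O(N^2) loop with a cell list or a KD-tree, but the contact criterion stays the same.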

Q2. (THE MAIN QUESTION). Is it possible to create such a medium with a prespecified PSD? If not, what way do you recommend?

Please see the reply above. I presume that by PSD you mean particle size distribution. The PSD can be prescribed (as you do now, through diameters.txt or through the starting packing.xyzd), but the number of contacts is determined in the course of packing generation. It will be arbitrary per particle, but, again, there will be a backbone of the packing for which N_contacts_in_backbone = 6 * (N_particles_in_backbone - 1) (on average, 6 contacts per particle). There will be particles in the backbone with as few as 4 contacts (and particles with more than 6 contacts).

If you need fewer contacts on average in a mechanically stable packing, this can only be achieved by adding friction to the simulations. With infinite friction, you can reach as few as 4 contacts per particle on average.

Influence of friction: https://www.nature.com/articles/nature06981 . The papers of Makse (esp. the supplementary materials) have proofs of some of these facts; maybe this one: https://www.nature.com/articles/ncomms3194 . The page of Makse once had the code to add friction to the packings and to stabilize them (I also mentioned it in other replies).

If not, I think it could be achieved if I create a dense medium with very low porosity (balls have satisfying contact numbers without any overlaps) and then pass the results (ball center positions and radii) to Python or … to find the best candidates (in terms of contact numbers and …) to remove, in a way that reaches the final PSD and porosity. But I have no idea how to control the PSD (I have prepared the true PSD, but in this method it must be used in another way) to reach the desired one.

I don't know of any research that tries to specify the exact number of contacts per particle in advance. Maybe there is a sub-area that i am not aware of, but if there is not, i would say it would take several years or decades to create something like this :-) (if it is possible at all).

I am trying the -md option now, but I don't know whether it could be helpful; it has been running for some hours and I cannot tell from its log.txt whether it is converging.

The -lsgd option is what you need to create mechanically stable packings (or running the -ls option with a very low compression rate, which is then slower than -lsgd). The lsgd option uses quick compression at first, then slower compression, then even slower. Here is how you can judge the quality of the packings. The lsgd option stops execution at a very low compression rate, so the reduced pressure Z it reports is almost the equilibrium one. There is a known Salsburg-Wood equation of state that describes the reduced pressure of hard spheres; e.g. see Eq. 4 here: https://aip.scitation.org/doi/full/10.1063/5.0036411 . If you see Z = 1e12 at a packing density phi, it means that the true jamming density will be phi_Jamming = phi (1 + 3 / (Z - 1)) = phi (1 + 3e-12), i.e. the diameters would still grow only by the tiny factor (1 + 3e-12)^(1/3) ~ 1 + 1e-12. The LSGD option stops generation at a reduced pressure of ~1e12; that's why I wrote above that multiplying the radii by 1 + 1e-4 is a generous margin to recover the real particle contacts.
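The arithmetic of that estimate can be written out in a few lines (plain Python, just restating the Salsburg-Wood-based formula above; the example density is an assumption):

```python
def jamming_density(phi, Z):
    """Salsburg-Wood-based estimate: phi_J = phi * (1 + 3 / (Z - 1))."""
    return phi * (1.0 + 3.0 / (Z - 1.0))

def diameter_scaling(phi, Z):
    """Factor by which diameters would still grow until jamming.

    Packing density scales with diameter cubed, so the diameter factor
    is the cube root of the density ratio.
    """
    return (jamming_density(phi, Z) / phi) ** (1.0 / 3.0)

phi = 0.64   # an example packing density
Z = 1e12     # the reduced pressure at which -lsgd stops
# diameter_scaling(phi, Z) is ~1 + 1e-12 here, so an extra factor of
# 1 + 1e-4 on the radii is a very comfortable margin for contact checks.
```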

The md option will just let the particles fly with zero compression rate to equilibrate them (which also doesn't make sense for almost jammed packings).

Q3. Is there any 64bit PackingGeneration.exe? If no, are there any reasons for that?

I didn't prepare it. Several years ago, when i was working on this, 32bit OSes were still used, so i wanted to cover them as well.

Q4. Is there any difference, pros and cons, between compile method and the PackingGeneration.exe?

Compiling gives you freedom: e.g. you can compile a 64-bit version, or compile for Linux. If you run this program on a supercomputer, you may encounter different processor architectures (IBM PowerPC, for example) and have to recompile it anyway.

Q5. What is the compression rate relation with time units? What does it mean in practice?

For the FBA algorithm, it is a rather arbitrary number; it also depends on the expected density of the packing (calculated from the number of particles and the box dimensions). For the LS algorithm, it is how fast the spheres grow per simulation unit of time. In all cases: the smaller the rate, the slower the spheres grow, the longer the simulation, and the higher the final density.

The dependence of final diameters on the compression rate is discussed here: https://github.com/VasiliBaranov/packing-generation#14-note-on-final-diameters (refers to https://pubs.rsc.org/en/content/articlehtml/2014/sm/c3sm52959b#imgfig2 ). Essentially, you can start with 1e-2 for LS and see if it is enough. But the packings will definitely not be jammed.

One can also judge the rates based on how fast the particles fly. You can refer to Section III B here: https://aip.scitation.org/doi/full/10.1063/1.5140365 . We use temperature = 0.2 and Boltzmann constant 1, so the root mean-square velocity is \sqrt(6).

Q6. Does this software, usually, move towards convergence? Is there any way to see convergence of the execution and check if it is converging during execution? and predicted time to finish the execution?

Definitely. The LS algorithm stops execution when the measured reduced pressure is high (particles collide often). But since the particles grow, it is not an equilibrium pressure, so they may still be far from jamming. The -lsgd algorithm stops when the pressure is high and close to its equilibrium value, so these packings are close to jamming. Please see the FBA paper for the termination conditions; in general, termination means that it doesn't make sense to continue the simulation with the present parameters, but one can definitely restart it with a smaller compression rate.

Q7. Are there any range limitations for diameter sizes and the packing size?

I didn't encounter any, but higher polydispersities run slightly slower.

Q8. What are packing_init.xyzd and packing_prev.xyzd exactly? I read somewhere that packing_init.xyzd is just a backup. What exactly these files are and what are their applications?

The _init version is the state at which the generation started (useful to know when diameters.txt is specified). The _prev version is indeed just the state from the previous iteration at which the log was written.

Q9. Is it possible to specify a condition so that the entire volume of the balls is placed inside the Packing size specified in generation.conf? Q10. Does Final Porosity in packing.nfo need to be modified for post-processing, given that the diameters are updated (rescaled) in packing.xyzd?

If you mean hard walls, not periodic boundary conditions, it is currently not supported.

Q11. Is there any way to see the progress after reaching the nth specified Steps to write iterations instead of waiting for Packing.nfo in the end; I.e. something to evaluate rescaling factor before the creation of Packing.nfo during the execution?

The config has the parameter "Steps to write: 1000"; that's how many steps (iterations) pass before a new line of the log is written. One can decrease it to see more logs. But the only parameter that controls algorithm termination is currently the compression rate (for the FBA algorithm, termination is also implicitly influenced by the packing size and the number of particles, i.e. the expected density). It shall usually be enough. If you need to, you can change, for example, the final termination pressure in the code and recompile the program.

Q12. How could I find out the other abilities, that are not mentioned in the doc, for post-processing and … and their applications, e.g. -connumdist and -ncc.

Please feel free to take a look at the code! I can also try to answer as much as I can.

Q13. What is FCC packing?

https://en.wikipedia.org/wiki/Close-packing_of_equal_spheres#FCC_and_HCP_lattices

Q14. Is it possible to specify minimum number of contacts for each sphere at the end?

No; see above. The minimum will actually be 4 for the particles from the jamming backbone after the -lsgd algorithm.

Q15. What did you mean by ORIGINAL diameters in lines 27 and 28 of ReadPackingScript.m? Does it refer to any specific diameter?

The ones that you specify in diameters.txt or in the starting packing.xyzd.

Q16. I couldn't find out how to change generationMode from R to S or vice versa; also for the boundary (my question is general, I know that it is specified for just one type so far). E.g. how must the line "2. generationMode = 1 (Poisson, R) or 2 (Poisson in cells, S)" be modified in generation.conf?

So, modifying that comment line wouldn't work, but for the boundaries you can write "Boundaries mode: 2"

Q17. How can I view the particles' movements during the simulations? I saw somewhere that we can use VMD. Is there any documentation on how to do so? Do you know any tools or Python libraries to use with Python code, if I simulate such problems using Python?

You would need to modify the code to save the packings at every time step as well, and then you would be able to convert them to any required format and visualize them. But they will take a lot of space.

I recall that I had a similar option hidden in the code somewhere for the "molecular dynamics" simulation (the -md option), where the particles just fly; this is the Lubachevsky-Stillinger algorithm with a zero compression rate.

Hope this helps.

Best Regards, Vasili

alisheikholeslam commented 2 years ago

Thank you, dear Dr. Baranov, for taking the time to answer.

To produce many such systems, I just ran this program in parallel on a supercomputer, for many variants of starting parameters.

Q1. I could not quite understand what you mean by parallel. Did you run several different jobs at the same time, or compile the code in some special way, or develop the code in terms of parallelization and vectorization? I would be grateful if you could explain a little more.

Q2. Could compilation (from x32 to x64, or other possible methods) significantly improve the execution runtime? By how much?

Q3. Z is constant and equal to 1e12 in the compiled exe file, isn't it? If so, should the radii or diameters be multiplied by ~(1+1e-4)^(1/3) instead of the porosity-related equation mentioned in the Note on final diameters section? The question arises: how are these porosities calculated, and why is there a difference between the two methods?

Using the Z-related radii correction method did not make any difference for creating contacts between the particles in my case (I have used another software package that determines the contacts; I am working on another Python code to view these contacts and make sure). I guess it depends on the contraction rate before anything else. So, since the higher contraction rates (e.g. >= 1e-2 or 1e-3) are more reasonable in terms of execution runtimes, contacts between the particles (as desired) will not be guaranteed by either method (the lower the contraction rate, the better the guarantee of contacts).

I guess I have to use very low contraction rates (e.g. <= 1e-4) in my case, which is not suitable at all for media with large sphere numbers (above 50 000).


For my case, it is very important that we can create an initial backbone in which all sphere connections are guaranteed. Then I could connect the other, flying spheres to this backbone with some Python code. I must be sure about the creation of such a backbone.


I recently added a branch with an option to display the contacts of each particle. I can find it if you need it.

Q4. How does it display them? Within a GUI, or just as numbers? I would appreciate it if you could find them whenever you have time. It would also be great if there were a tutorial (movie) for visualizing the execution during the runtime (in a coupled mode) or after it; right now it is a little complicated.

VasiliBaranov commented 2 years ago

Hi Ali,

Q1. I could not quite understand what you mean by parallel. Did you run several different jobs at the same time, or compile the code in some special way, or develop the code in terms of parallelization and vectorization? I would be grateful if you could explain a little more.

Yes, I was just starting the program many times on different cores; no Message Passing Interface or vectorization.

Q2. Could compilation (from x32 to x64, or other possible methods) significantly improve the execution runtime? By how much?

No, I don't think so. x64 can even make it a bit slower, if I recall correctly.

Q3. Z is constant and equal to 1e12 in the compiled exe file, isn't it?

Yes

If so, should the radii or diameters be multiplied by ~(1+1e-4)^(1/3) instead of the porosity-related equation mentioned in the Note on final diameters section? The question arises: how are these porosities calculated, and why is there a difference between the two methods?

No :-) First, you need to correct according to the Note on final diameters; this is just a technical limitation.

Then, you can additionally rescale by a small factor such as 1 + 1e-4 (comfortably above the ~(1 + 3e-12)^(1/3) ≈ 1 + 1e-12 predicted by the Salsburg-Wood estimate) to ensure contacts.

Using the Z-related radii correction method did not make any difference for creating contacts between the particles in my case (I have used another software package that determines the contacts; I am working on another Python code to view these contacts and make sure).

If you used the Z-correction instead of the correction from the Note on final diameters, it will indeed not work. You have to use both corrections to start to see intersections.
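For illustration, both corrections together might look like the sketch below (the helper name apply_corrections, the porosity values in the demo, and the 1e-4 margin follow the discussion above and are assumptions, not part of the repository):

```python
import numpy as np

def apply_corrections(packing, theoretical_porosity, final_porosity, margin=1e-4):
    """Return a copy of an (N, 4) x,y,z,d packing with both diameter corrections.

    Step 1: the rescaling from the "Note on final diameters" (the porosities
            come from packing.nfo).
    Step 2: a small extra enlargement so that true contacts show up as
            intersections when checking the packing.
    """
    scaling = ((1 - final_porosity) / (1 - theoretical_porosity)) ** (1 / 3)
    corrected = packing.copy()
    corrected[:, 3] *= scaling * (1 + margin)
    return corrected

# Synthetic example; with real data one would instead load
#   packing = np.fromfile('packing.xyzd').reshape(-1, 4)
# and read the Theoretical/Final porosity lines from packing.nfo.
demo = np.array([[0.1, 0.1, 0.1, 0.02]])
corrected = apply_corrections(demo, theoretical_porosity=0.40, final_porosity=0.42)
```

Writing the result to a new file (rather than overwriting packing.xyzd) avoids applying the corrections twice by accident.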

I guess it depends on the contraction rate before anything else. So, since the higher contraction rates (e.g. >= 1e-2 or 1e-3) are more reasonable in terms of execution runtimes, contacts between the particles (as desired) will not be guaranteed by either method (the lower the contraction rate, the better the guarantee of contacts).

Indeed, the contacts will be guaranteed by low contraction rates OR by the LSGD option, which you can use after any contraction rate.

I guess I have to use very low contraction rates (e.g. <= 1e-4) in my case, which is not suitable at all for media with large sphere numbers (above 50 000).

You can use a high rate and then LSGD. The method scales linearly, so LSGD will indeed take ~5 times longer, but I could run it for 200 000 particles; it took 1-2 days, which was still fine for me.

For my case, it is very important that we can create an initial backbone in which all sphere connections are guaranteed. Then I could connect the other, flying spheres to this backbone with some Python code. I must be sure about the creation of such a backbone.

Yes, the backbone will be produced by the LSGD option or by the FBA/LS algorithms with very low contraction rates. Then you can attach the remainder of the spheres (the flying spheres) to it with any other code.

Q4. How does it display them? Within a GUI, or just as numbers? I would appreciate it if you could find them whenever you have time.

It is this branch, https://github.com/VasiliBaranov/packing-generation/tree/feature/SuccessfulPermutationProbability , option "-nnc", https://github.com/VasiliBaranov/packing-generation/blob/4572b7b3f66c02a3415b9df17cdab2b31fe0bf60/PackingGeneration/Execution/Source/PackingTaskFactory.cpp#L210

But you need to compile the code yourself. If needed, i can send the .exe to you or update the branch.

It would also be great if there were a tutorial (movie) for visualizing the execution during the runtime (in a coupled mode) or after it; right now it is a little complicated.

Unfortunately, no such option is present right now. If you don't want to touch the present C++ code, you can write a Python program that just copies packing.xyzd with a timestamp every X seconds, or on every update while the program runs (e.g. using the watchdog library), and then you will be able to visualize the snapshots. The file is overwritten every 1000 steps (option "Steps to write" in generation.conf).
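A minimal version of such a copier can use plain stdlib polling instead of the watchdog library (the function name, file names, and interval here are assumptions for illustration):

```python
import shutil
import time
from pathlib import Path

def snapshot_packing(src='packing.xyzd', out_dir='snapshots',
                     interval=5.0, max_snapshots=None):
    """Copy src into out_dir with a timestamp whenever its mtime changes.

    Simple polling loop; the watchdog library could replace the sleep loop
    with filesystem events. Snapshots taken within the same second share a
    name and would overwrite each other.
    """
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    last_mtime = None
    taken = 0
    while max_snapshots is None or taken < max_snapshots:
        src_path = Path(src)
        if src_path.exists():
            mtime = src_path.stat().st_mtime
            if mtime != last_mtime:
                stamp = time.strftime('%Y%m%d_%H%M%S')
                shutil.copy2(src_path, out / f'packing_{stamp}.xyzd')
                last_mtime = mtime
                taken += 1
        time.sleep(interval)

# Example: snapshot_packing(interval=5.0) runs until interrupted.
```

The collected snapshot files can then be converted to any format a viewer such as VMD accepts.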

Hope this helps!

Best Regards, Vasili

alisheikholeslam commented 2 years ago

Hello again Dr. @VasiliBaranov,

So, based on your last answer, the corrected diameters are the original diameters times (1 + 1e-4) * scaling factor;

  1. Is there a reason you did not write the formula this way (I mean, why didn't you multiply the scaling factor by (1 + 1e-4) in the documentation)?

  2. What changes must be applied to finalScalingFactor for scaling the coordinates and the box size instead?

% Scaling the diameters. You can of course scale the coordinates and the box size instead (with 1/finalScalingFactor).

  3. Will scaling the coordinates using the aforementioned formula (1/finalScalingFactor) get my expected result (the same radii as the original)?

  4. Is there any way to preserve the radii sizes for the final medium if modifying the coordinates (Q3) is not useful? For this purpose, I ran the code with FBA once and got the scaling factor at the end, changed the original diameters based on that scaling factor, and ran FBA again with the modified diameters. At the end, multiplying the modified diameters by the new scaling factor gives a diameter sum equal to the sum of the desired (original) diameters ==> porosity preserved. But the resulting diameters differ somewhat from the original ones (some a little more, some a little less). Does it change nonlinearly after the beginning? ==> PSD not preserved (i.e. the size of each sphere).

% (they are linearly scaled at the beginning of generation by a small value (scaling factor)

  5. Just to make sure: there is no way to pre-estimate the scaling factor, to avoid running FBA once just to get this value?

Any recommendations?