calculix / ccx2paraview

CalculiX to Paraview converter (frd to vtk/vtu). Makes it possible to view and post-process CalculiX analysis results in Paraview. Generates von Mises and principal components for stress and strain tensors.
GNU General Public License v3.0

Cannot process large size frd file #16

Closed: prasadadhav closed this issue 1 year ago

prasadadhav commented 3 years ago

Hello Ihor,

I have been trying to post-process a large .frd file (almost 7 GB). The size is mainly due to very small time steps (dt = 1e-05) over a total simulation time of t = 0.1 s, combined with a relatively fine mesh. I would also like to mention that this is only when I write

I tried to convert this file into .vtu, but my laptop runs out of RAM and soon hangs or crashes. This happens even after closing all other unnecessary applications.

```
INFO: Writing Nozzle_hex.166.vtu
INFO: Step 166, time 1.66e-03, U, 3 components, 89054 values
INFO: Writing Nozzle_hex.167.vtu
Killed
```

I think this is mainly because the data for all the time steps is converted first, and only then does the writing start.

I tried to understand the code, but I am not entirely sure where to make the modifications. I think this issue would be solved if the code read and wrote the file immediately, step by step. Then I could call clean.cache() after each time step and avoid the issue.
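For illustration, something like the following minimal sketch is what I have in mind. It assumes the FRD ASCII layout where each block ends with a `-3` line; the function name and parsing details are just placeholders, not the converter's actual code.

```python
def iter_result_blocks(frd_path):
    """Yield the lines of one FRD block at a time instead of reading them all."""
    block = []
    with open(frd_path) as frd:
        for line in frd:
            block.append(line)
            if line.strip() == '-3':  # assumed end-of-block marker in FRD files
                yield block
                block = []            # the previous block can now be freed

# Convert and write each block immediately, so only one step stays in memory.
for block in iter_result_blocks('Nozzle_hex.frd'):
    pass  # placeholder: parse this block and write its .vtu file here
```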

Please let me know if this is possible.

Thank you very much.

imirzov commented 3 years ago

Thanks for the great idea: to read, convert, and write files increment by increment! For now, you are right: the script reads all time steps, consuming memory, and only then writes all the steps.

clean.cache() is a completely different thing: it just removes temporary Python files. It has nothing to do with the conversion.

I think increment-by-increment conversion is possible, but it requires a full refactoring of the code, so implementing the idea will take a lot of effort. I am leaving this issue open to solve in the future.

imirzov commented 3 years ago

By the way, you can try to convert a big file on a computer with more memory. My laptop has 16 GB; I'm sure that would be enough.

prasadadhav commented 3 years ago

> By the way, you can try to convert a big file on a computer with more memory. My laptop has 16 GB; I'm sure that would be enough.

Thank you for the comments, Ihor. Yes, I will surely try on a machine with more memory; my laptop definitely doesn't have enough.

I will also try to understand the code and see if I can contribute towards step-by-step reading and writing. :)

TS-CUBED commented 3 years ago

Hello Ihor,

unfortunately the "use a bigger machine" solution won't work in most cases. Even my small test case (about an order of magnitude smaller than my production cases) eats 16 GB before the conversion is even halfway through, so even on my 32 GB laptop I can't convert it. The biggest single node I have at university has 256 GB, but based on my extrapolations that won't be enough to post-process the production case.

I guess doing it time step by time step (or offering an option to select a time range) is the only solution.
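For the time-range option, a rough filter sketch could look like the following. It assumes the step time is the third whitespace-separated field of the `100C` result-block header, which would need checking against the FRD format description; `steps_in_range` is a hypothetical name, not existing code.

```python
def steps_in_range(frd_path, t_min, t_max):
    """Yield (time, lines) only for result blocks with time in [t_min, t_max]."""
    time_value, block, keep = 0.0, [], False
    with open(frd_path) as frd:
        for line in frd:
            if line.startswith('  100C'):            # start of a result block
                time_value = float(line.split()[2])  # assumed position of the step time
                keep = t_min <= time_value <= t_max
                block = []
            if keep:
                block.append(line)
                if line.strip() == '-3':             # end-of-block marker
                    yield time_value, block
                    keep = False
```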

T

aomanchuria commented 2 years ago

I'm having the exact same problem. I just generated a 12 GB FRD file, and my 32 GB computer runs out of memory. Unfortunately, it will take an overnight run to regenerate that file. But as mentioned before, if the software were changed to write the output while crunching further steps, that would work well. Maybe monitoring memory and writing the file before it runs out would also work.
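As a rough sketch of that memory-monitoring idea, using `psutil` to watch available RAM; `flush_steps` is just a placeholder for the real writer:

```python
import psutil

MIN_FREE_BYTES = 2 * 1024**3  # e.g. flush once less than ~2 GB of RAM remains

def flush_steps(buffered_steps):
    """Placeholder for the real writer: dump each buffered step to a .vtu file."""
    buffered_steps.clear()  # release the memory once the files are written

def maybe_flush(buffered_steps):
    # Check available (not total) memory, so other running processes count too.
    if psutil.virtual_memory().available < MIN_FREE_BYTES:
        flush_steps(buffered_steps)
```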

imirzov commented 1 year ago

Dear all, I've started working on the next version, in which FRD files will be converted increment by increment. Would you please share a really big FRD file (more than 1 GB) for testing?

imirzov commented 1 year ago

This release solves the issue: https://github.com/calculix/ccx2paraview/releases/tag/v3.1.0