Closed eaquino51 closed 3 weeks ago
Dear Erik, we need to look into a more memory-efficient way to unzip... For now, is it an option for you to either use a lower ncpus_weights or try the LegacyWeightSolver? All the best, Thomas
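For anyone landing here later, the two workarounds would be set in the DYNAMITE configuration file. The sketch below shows only the keys mentioned in this thread; the exact nesting and section names may differ in your DYNAMITE version, so please check them against your own config:

```yaml
# Workaround 1: reduce the number of parallel weight-solving
# processes so fewer decompression buffers exist at once.
multiprocessing_settings:
    ncpus: 32
    ncpus_weights: 8   # lower than ncpus to limit peak RAM

# Workaround 2: switch the weight solver.
weight_solver_settings:
    type: "LegacyWeightSolver"
```

Lowering only ncpus_weights keeps the orbit-library computation itself parallel while throttling the memory-hungry weight-solving stage.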
Hello Erik,
if you get a chance, please pull the branch reduce_required_ram. It implements an alternative to buffering the bunzip2 output, which should make a difference when run in many threads.
Please let me know if it helps. If so, we'll be including it in the next release.
All the best, Thomas
Hello Thomas,
I appreciate your comments and suggestions. Let me try the changes you suggest and get back to you with the results.
All the best, Erik.
Hello Erik,
I am curious: have you had a chance to try out the proposed solution to manage RAM requirements?
Many thanks and all the best, Thomas
Dear Dynamite Team,
I am running DYNAMITE models of galaxies observed with the MUSE instrument, on a cluster node with 64 physical cores and 528 GB of RAM. However, I am encountering a Python error suggesting that 528 GB of RAM is insufficient. The issue arises in bunzip2 when decompressing the orbit libraries. In the configuration file I set ncpus = ncpus_weights = "all_available", and also tried ncpus = ncpus_weights = 32. The code works with an orbit library of size 6x5x4, but the issue occurs with larger sizes such as 14x7x7. A screenshot of the error follows.
Any suggestions to solve this problem will be very helpful.
Thanks for your help and suggestions.
Regards, Erik Aquino.