Open · adunham1 opened this issue 1 year ago
Hi Audrey,
Important points! Other developers may have better ideas, but here are my thoughts. :)
We have a developer/user Zoom meeting on the first Wednesday of every month. I would encourage you to attend those meetings.
Best, Hom Nath
Hom Nath Gharti, PhD
Assistant Professor | Digital Earth Scientist
Department of Geological Sciences and Geological Engineering
Miller Hall 314, 36 Union St
Queen’s University, Kingston, ON K7L 3N6, Canada
https://www.digitalearthscience.com/
Queen’s University is situated on traditional Anishinaabe and Haudenosaunee Territory. https://www.queensu.ca/encyclopedia/t/traditional-territories
Hello SPECFEM developers,
I am an avid SPECFEM3D user and am currently working on ground-motion simulations for scenario ~M9 earthquakes on the Cascadia Subduction Zone megathrust. The mesh I have created has ~65M elements and contains the Stephenson et al. (2017) Cascadia Community Velocity Model. I have a few questions related to large-memory simulations within SPECFEM3D_Cartesian:
I am using kinematic earthquake sources with a spacing of 500 m, comprising ~400,000 point sources. For each run, the solver takes about 1 hour to read all of these sources in. Are there any flags that allow the solver to read them in parallel to decrease this time, or any other potential fixes for this issue?
Because there are so many sources, it is nearly impossible to use an external source-time function with the current implementation (i.e., it would require ~400,000 files). Is there a way to change the source-time function for each point without an external file? If not, would this be a useful addition to SPECFEM3D? I am hoping to test source-time functions such as the Brune and Yoffe functions.
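For reference, a Brune pulse is simple to generate numerically. The sketch below (Python, not SPECFEM code; the characteristic time tau and the function names are my own for illustration) samples the Brune (1970) moment-rate function, normalized to unit area so that multiplying by the seismic moment M0 gives a moment-rate time series:

```python
import math

def brune_stf(t, tau):
    """Brune (1970) moment-rate pulse, normalized to unit area:
    s(t) = (t / tau**2) * exp(-t / tau) for t >= 0, else 0.
    tau is an assumed characteristic (corner) time; multiply the
    result by M0 to obtain moment rate."""
    if t < 0.0:
        return 0.0
    return (t / tau ** 2) * math.exp(-t / tau)

def sample_brune(tau, dt, nt):
    """Sample the pulse on nt time steps of size dt (e.g. the solver's DT)."""
    return [brune_stf(i * dt, tau) for i in range(nt)]

samples = sample_brune(tau=1.0, dt=0.01, nt=2000)
area = sum(samples) * 0.01  # discrete integral; should be ~1
```

The pulse peaks at t = tau and decays exponentially, so a record length of a few tau captures essentially all of the moment release.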
Finally, the CVM I am using is very detailed and takes ~5 GB of memory to read in within generate_databases, which limits the number of cores per node I can run my simulations on. Is there any way in SPECFEM3D to deal more efficiently with these very large ASCII files, or does anyone have tips for reading in large, detailed velocity models? Is there any push to change the format of these files to something like NetCDF to make reading in the gridded datasets more efficient?
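As a stopgap before any NetCDF support, a one-time conversion of the ASCII grid to packed binary already removes most of the parsing cost and shrinks the file considerably. A minimal sketch using only the Python standard library, assuming a hypothetical layout of one whitespace-separated vp/vs/rho triple per line (adapt to the actual tomography file format):

```python
import struct
from array import array

def ascii_grid_to_binary(txt_path, bin_path):
    """One-time conversion of a whitespace-separated ASCII grid to
    packed native-endian float32 with a small count header. A float32
    value is 4 bytes versus ~10+ characters of ASCII, and later reads
    skip text parsing entirely."""
    vals = array('f')
    with open(txt_path) as f:
        for line in f:
            vals.extend(float(x) for x in line.split())
    with open(bin_path, 'wb') as f:
        f.write(struct.pack('<q', len(vals)))  # header: value count
        vals.tofile(f)

def read_binary_grid(bin_path):
    """Read the packed file back; array.fromfile is one bulk read."""
    with open(bin_path, 'rb') as f:
        (n,) = struct.unpack('<q', f.read(8))
        vals = array('f')
        vals.fromfile(f, n)
    return vals
```

The same idea is what a NetCDF-based format would give, plus named dimensions and self-describing metadata for the grid.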
Thanks so much for your help and I look forward to discussing these issues!
Thanks Hom! It would be great to implement at least a Brune function as an option for the STF. I know this is something others would want to use as well, so maybe this is a change the developers could add?
Changing the input format of the tomography files would be very beneficial and would significantly decrease my memory usage for these large runs. Please keep me updated on when this change (as well as the STF change) might be made. I will definitely attend the Wednesday Zoom next month!
Hi Audrey,
I just added a 'reuse' feature for the external source-time function file. If you have the same source-time function across multiple sources, differing only in their respective time shifts, you can specify the external source-time function file for the first source only. For all other sources, you can simply set the external source-time function file to 'reuse' and define the appropriate time shift.
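The equivalence this relies on can be sketched in a few lines (Python, purely illustrative; not SPECFEM internals): every source applies the same sampled STF, delayed by its own time shift, so only one external file is needed.

```python
def shifted_stf(stf, dt, tshift, nt):
    """Delay one shared, sampled STF by a per-source time shift using
    linear interpolation: s_i(t) = s(t - tshift), zero before onset.
    stf is a list of samples at spacing dt; returns nt shifted samples."""
    out = []
    for i in range(nt):
        t = i * dt - tshift  # local time of this source
        if t < 0.0:
            out.append(0.0)
            continue
        j = int(t / dt)                       # sample index below t
        frac = t / dt - j                     # interpolation weight
        a = stf[j] if j < len(stf) else 0.0
        b = stf[j + 1] if j + 1 < len(stf) else 0.0
        out.append(a + frac * (b - a))
    return out
```

So a kinematic rupture whose subfaults share one STF shape needs only a single file plus the per-source time shifts already present in the source description.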
Best, Hom Nath