Closed matzegoebel closed 5 years ago
Can you tell me which operating system you're using, and whether you installed wrf-python from conda/conda-forge?
I haven't dug into the problem yet, but here are a few things you can try if you're on macOS/Linux.
1) Make sure your stack size is set to unlimited. If you're on the same system that you ran WRF on, this has probably already been done for you.
For bash, this is:
`ulimit -s unlimited`
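To check the current limit first, `ulimit -s` prints the soft stack limit in kilobytes (or `unlimited`).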
2) If you're using OpenMP via wrf.omp_set_num_threads(), OpenMP has its own stack size, which might need adjustment, even if you're just using one thread. This is controlled by the environment variable OMP_STACKSIZE.
Try something like:
`export OMP_STACKSIZE=300M`
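If it's easier to keep everything in one script, here is a hedged sketch of doing the same from Python; the assumption is that the variable is set before wrf-python's OpenMP runtime initializes, i.e. before the first computation:

```python
import os

# Assumption: the OpenMP runtime reads OMP_STACKSIZE when it starts up,
# so set it before the first wrf-python computation runs.
os.environ["OMP_STACKSIZE"] = "300M"

from wrf import omp_set_num_threads

# Even a single thread gets its own OpenMP stack.
omp_set_num_threads(1)
```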
If either of these options works, please let me know.
Thank you for the file. I'm able to reproduce the problem with it, and the above suggestions don't work, so I'll dig into this and try to get a fix out soon.
I found the problem. A new update will be available on conda soon.
Thank you for the quick fix!
I ran an idealized LES simulation with WRF/4.0.3 on 35 x 35 horizontal and 171 vertical grid points (see attached file). When I try to compute cape_2d with wrf-python/1.3.0 using
`getvar(wrf_file, varname="cape_2d", timeidx=1)`
I get a segmentation fault with the error message:
`*** stack smashing detected ***: <unknown> terminated`
When I decrease the domain size (e.g. to 30 x 30 in the horizontal), no error appears. With 32 GB of RAM I should have enough memory. Do you know how to fix this?
wrfout_test.zip
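For reference, a minimal sketch of the call that triggers the crash (the filename `wrfout_test.nc` is an assumption for whatever the attached zip unpacks to):

```python
from netCDF4 import Dataset
from wrf import getvar

# Assumed filename for the attached file once unzipped.
wrf_file = Dataset("wrfout_test.nc")

# Computing 2D CAPE at the second time index segfaults on the
# 35 x 35 x 171 grid; a 30 x 30 horizontal grid works fine.
cape_2d = getvar(wrf_file, varname="cape_2d", timeidx=1)
```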