rswamina opened this issue 3 months ago
@valeriupredoi - can you please look into what I might be doing wrong when you get a chance? Thanks!
@rswamina could you post the entire trace or the debug file, please? Not much I can gauge from that bit of output :grin:
Yes, of course.. the whole main_log_debug.txt file is attached above. Let me know if you cannot access it. main_log_debug.txt
If all preprocessor functions are lazy, the `save` step is where the variable (in this case `pr`) data is loaded from disk, computations happen, and the result is written to the output file. Are you sure nothing is happening? How long did you wait for?
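(As an aside, for readers unfamiliar with the lazy evaluation being described: a minimal sketch with iris, where the file name `pr_example.nc` is hypothetical - nothing is read from disk until the cube is realized, e.g. on save.)

```python
import iris

# hypothetical local NetCDF file standing in for the real CMIP6 input
cube = iris.load_cube("pr_example.nc")
print(cube.has_lazy_data())         # True: the data have not been read yet

cube = cube * 86400                 # still lazy: this only extends the dask graph
iris.save(cube, "pr_processed.nc")  # the data are read, computed and written here
```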
The timestamps in the attached debug log do not match those in the top post.
I reran it to generate a fresh debug file. Please consider only the timestamps in the file. I waited around 30 minutes and then killed the job. How long should I wait for?
I should add that I tried this several times wondering if there was an issue with JASMIN. The longest I waited (not sure if it was this particular run) was 30 minutes.
I tried running the recipe on my laptop with just the first dataset

```yaml
- {dataset: CESM2-WACCM, project: CMIP6, exp: historical, ensemble: r1i1p1f1, start_year: 1995, end_year: 2014, grid: gn}
```

It completes in about 6 seconds and uses 2GB of RAM with ESMValCore v2.11.0, and 9 seconds / 4GB of RAM with v2.10.0. Maybe it is something specific to Jasmin indeed. Can you access the files that are listed in the top post, e.g. with `ncdump`?
I had a similar problem with other data that originated from using dask distributed with the `mask_landsea` data. This will be fixed soon (https://github.com/ESMValGroup/ESMValCore/pull/2515).
From the debug log it looks like @rswamina is using the default scheduler (so no distributed).
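(To illustrate the distinction being made: a minimal, non-ESMValCore-specific sketch of the default dask scheduler versus a distributed `Client`; the fix linked above concerns the distributed case.)

```python
import dask.array as da

# toy lazy computation standing in for a chain of preprocessor functions
data = da.random.random((1000, 1000), chunks=(100, 100))

# Default scheduler: compute() runs with dask's threaded scheduler inside the
# current process - this is what the debug log suggests is being used here.
result = data.mean().compute()

# Distributed scheduler: creating a Client makes it the default for subsequent
# compute() calls (requires the dask.distributed package).
# from dask.distributed import Client
# client = Client(n_workers=2)
# result = data.mean().compute()
```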
I am using the default scheduler. On JASMIN, SSP245 has just the r1 ensemble member's data. It seems to hang for both historical and ssp245 experiments. I tried an `ncdump` on the files and am able to see the file content under the `/badc/` archive path. This was not an issue for other models. I cannot think of what else could be the issue, unless someone else can reproduce this on JASMIN. I will also add that I was able to successfully process historical data for this model a few months ago.
Am travelling back to the UK today, will have a look tomorrow or on Thu. Before I do that though - Ranjini, could you pls confirm there is enough room on the disk you're writing to, and which iris and esmvalcore versions you're using? Cheers 🍺
Thanks @valeriupredoi. I have enough space on disk. I am using the `esmvaltool` module installed on JASMIN for this, here are the version details:

```
>esmvaltool version
ERROR 1: PROJ: proj_create_from_database: Open of /apps/jasmin/community/esmvaltool/miniconda3_py311_23.11.0-2/envs/esmvaltool/share/proj failed
ESMValCore: 2.10.0
ESMValTool: 2.10.0
```
I am not sure how to check the iris version though.
Thanks Ranjini! I know how to do that 😁 Speaking of, I should install 2.11 sooner rather than later
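(For reference, one quick way to check the iris version of the active environment:)

```python
# e.g. run as: python -c "import iris; print(iris.__version__)"
import iris

print(iris.__version__)
```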
Hi @valeriupredoi - can you let me know if you diagnosed the problem? If this might take time, I will exclude the data set for now. Thanks!
on it, will have a look today, apols for the delay, too many emails to check after holidays :rofl:
hi Ranjini, am currently running that recipe, a few points: I would output to `/gws/nopw/j04/esmeval/ranjiniout/` rather than your HOME or wherever you output to; esmeval is sure to have a lot of disk

@valeriupredoi - Thanks..I am outputting to scratch, in a subdirectory created under `/work/scratch-nopw2/ranjinis/`. There is sufficient space there from what I see:
```
>df -h /work/scratch-nopw2/
Filesystem                   Size  Used  Avail  Use%  Mounted on
172.26.70.68:/scratch-nopw2  200T  175T    26T   88%  /work/scratch-nopw2
```
Are writes to scratch slower, or, as you indicate, is this still a processing delay? I wonder why that is; it is not very much data.
My guess is it's not the data but rather that those region selections are intensive. Have left my run on overnight, we'll see how long it takes 🍺
I am curious as to why. This worked quite quickly for @bouweandela on a different machine and the same preprocessing worked quickly again for other model data sets (same time period but perhaps different resolutions). I have successfully run this preprocessor on JASMIN before for the same data set but I can't remember if I just let it run overnight.
so my job is still running, it hasn't outputted any actual preprocessed data, and it looks like it's going nowhere - this is on sci2, and the load on the node is pretty decent, it's not running out of memory, and it should definitely run a little faster :grin: (note: `max_parallel_tasks = null`, no Dask settings whatsoever)
OK, starting to make sense of this: the save is indeed waiting on a fairly large number of still-running tasks (see the wait statement in `_task.py` l.787) - about 800 of these:

```
{PreprocessingTask('day_pr_NCA/pr'): <multiprocessing.pool.ApplyResult object at 0x7fe229b64190>}
```
In fact, after a fair while, the `ready` tasks are still empty (an empty `set()`) - nothing actually gets done!
So, what does this mean?
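(A rough sketch, not the actual ESMValCore code, of the kind of polling loop `_task.py` runs around `multiprocessing`; the task names are illustrative. Each task is submitted with `apply_async`, and if a worker blocks - here, inside `logger.debug` - its `ApplyResult` never reports `ready()` and the loop spins forever.)

```python
import time
from multiprocessing import Pool


def run_task(name):
    """Stand-in for PreprocessingTask.run()."""
    time.sleep(1)
    return f"{name} done"


if __name__ == "__main__":
    with Pool(processes=2) as pool:
        # submit every task and keep the ApplyResult handles
        running = {
            name: pool.apply_async(run_task, (name,))
            for name in ("day_pr_NCA/pr", "day_pr_other/pr")
        }
        while running:
            # collect the tasks whose ApplyResult reports ready()
            ready = {name for name, result in running.items() if result.ready()}
            for name in ready:
                print(running.pop(name).get())
            if not ready:
                # nothing finished yet; if a worker is blocked, we stay here forever
                time.sleep(0.1)
```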
found it! This code block trips it:

```python
for cube in cubes:
    logger.debug("Saving cube:\n%s\nwith %s data to %s", cube,
                 "lazy" if cube.has_lazy_data() else "realized", filename)
```
Unbelievably sneaky!
@bouweandela have a look at this - simplifying that code block above (from `_io.py`) for debugging:

```python
for cube in cubes:
    if cube.has_lazy_data():
        data_state = "lazy"
    else:
        data_state = "realized"
    logger.debug("Saving cube: %s with %s data to %s", cube, data_state, filename)
```
the whole run hangs; commenting out the debug logger line:
```python
for cube in cubes:
    if cube.has_lazy_data():
        data_state = "lazy"
    else:
        data_state = "realized"
    # logger.debug("Saving cube: %s with %s data to %s", cube, data_state, filename)
```
the run goes through at light speed! Something is really wonky at JASMIN with the debug StringIO
@rswamina you can now run the recipe no problem, I've taken out the offending bit - it is the issue with `logger.debug("Saving cube: %s", cube)` - why that hangs is beyond me, I'll do some due diligence, but I need lunch now
@valeriupredoi - many thanks for this!!! I will let you know if I have further issues.
The `cube` variable needs to be explicitly cast to a string with `str(cube)` so the debug logger doesn't hang :face_exhaling:
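(In other words, the saving loop from `_io.py` shown earlier becomes, paraphrasing the one-line change:)

```python
for cube in cubes:
    logger.debug("Saving cube:\n%s\nwith %s data to %s", str(cube),
                 "lazy" if cube.has_lazy_data() else "realized", filename)
```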
So why does this not happen for other data sets?
that is a million-dollar question! Here is some model code that mimics our logging mechanism:
```python
import logging
import logging.config
import time
from pathlib import Path

import yaml

import esmvalcore
import iris

logger = logging.getLogger(__name__)


def _purge_file_handlers(cfg: dict) -> None:
    """Drop file handlers (minimal stand-in for the ESMValCore helper)."""
    cfg['handlers'] = {
        name: handler
        for name, handler in cfg['handlers'].items()
        if 'filename' not in handler
    }
    # also drop references to the removed handlers from the root logger
    if 'root' in cfg and 'handlers' in cfg['root']:
        cfg['root']['handlers'] = [
            name for name in cfg['root']['handlers'] if name in cfg['handlers']
        ]


def _update_stream_level(cfg: dict, level=None):
    """Update the log level for the stream handlers."""
    handlers = cfg['handlers']
    for handler in handlers.values():
        if level is not None and 'stream' in handler:
            if handler['stream'] in ('ext://sys.stdout', 'ext://sys.stderr'):
                handler['level'] = level.upper()


def configure_logging(cfg_file: str = None,
                      output_dir: str = None,
                      console_log_level: str = None) -> list:
    """Configure logging the way ESMValCore does; return the log file paths."""
    if cfg_file is None:
        # the logging configuration shipped with ESMValCore
        cfg_file = Path(esmvalcore.__file__).parent / 'config' / 'config-logging.yml'
    cfg_file = Path(cfg_file).absolute()
    with open(cfg_file) as file_handler:
        cfg = yaml.safe_load(file_handler)
    log_files = []
    if output_dir is None:
        _purge_file_handlers(cfg)
    else:
        # place the log files under output_dir
        Path(output_dir).mkdir(parents=True, exist_ok=True)
        for handler in cfg['handlers'].values():
            if 'filename' in handler:
                if not Path(handler['filename']).is_absolute():
                    handler['filename'] = str(Path(output_dir) / handler['filename'])
                log_files.append(handler['filename'])
    _update_stream_level(cfg, level=console_log_level)
    logging.config.dictConfig(cfg)
    logging.Formatter.converter = time.gmtime
    logging.captureWarnings(True)
    return log_files


def loaddebug():
    # configure the logger, then load the cube and log it at debug level
    configure_logging(output_dir="testdebuglog", console_log_level="debug")
    c = iris.load_cube("testRS.nc")
    logger.debug("crap")
    logger.debug("cube %s", c)  # this is the call that hangs on JASMIN
    return c


if __name__ == "__main__":
    loaddebug()
```
maybe you can make some sense of it - I'll just open a PR and plop a `str()` in that logger statement in `_io.py`, just so we are on the safe side :beer:
I suspect this issue may be related to the (un)availability of parallel writes on most Jasmin filesystems (see here). In particular, I would recommend choosing a storage location that supports parallel writes for the esmvaltool `output_dir`. It is likely that multiple threads will write to the log files in parallel. Could you try using e.g. `output_dir: /work/scratch-pw2/ranjinis/` (note the missing `no` in the path)?
that would be interesting to experiment with! But I don't think that's the case here, since if we don't include the `cube` in the debug message, then all goes through nicely, and equally, if we force-cast it to `str`, then all goes through nicely too. I suspect there's something wonky with JASMIN encoding, and some particular corner case related to that dataset in specific, dunno what it is, and I don't think we should spend too much time investigating :beer:
> I suspect this issue may be related to the (un)availability of parallel writes on most Jasmin filesystems (see here). In particular, I would recommend choosing a storage location that supports parallel writes for the esmvaltool `output_dir`. It is likely that multiple threads will write to the log files in parallel. Could you try using e.g. `output_dir: /work/scratch-pw2/ranjinis/` (note the missing `no` in the path)?
@bouweandela I just reintroduced the original debug logger with `cube` and not `str(cube)`, and tested on the pw scratch - same behaviour, i.e. it hangs ad infinitum; PW is not the issue
Apologies..that was a typo. I used `nopw2`. An update from my side: I was able to use the recipe for 3 ensemble members and a single experiment and a single region successfully, but when I add more regions for extraction, it once again hangs, or at least takes very long, and I end up killing the job. It seems like some kind of memory issue on JASMIN, but I checked and resources are available.
so this particular hang is due to that logger issue with `cube` - it could be that similar instances of this issue are rearing their heads where I've not `str`-ed instances of `cube` - is it hanging at `save` again, or elsewhere?
@rswamina a bit of an update here would be most welcome please - I'd like to get https://github.com/ESMValGroup/ESMValCore/pull/2526 in and be done with this mystery :beer:
I still think this is a peculiarity that comes up and is related to some issue when using CESM2 on JASMIN. This is because I could not successfully extract multiple regions for multiple ensemble members and SSPs in one go without the script hanging again. It worked when I reduced the number of region extractions. Please bear with me for a few days and I will provide an example.
ESMValTool seems to have trouble writing a preprocessed cube after extracting IPCC regions for the CESM2-WACCM dataset. I don't know if the program hangs or is simply taking too long, but the log message stops at a point where the cube is being saved under the preproc directory. I am using the `esmvaltool` module installed on JASMIN. As far as I can tell there are no typos! Here is the recipe:

Here are the final lines of the error message. After the last line shown, the program just waits and I don't know why it hangs there. This recipe works for other models: