argonne-lcf / dlio_benchmark

An I/O benchmark for deep learning applications
https://dlio-benchmark.readthedocs.io
Apache License 2.0

logs not getting written when multiprocessing_context is spawn or forkserver #131

Open · krehm opened this issue 10 months ago

krehm commented 10 months ago

I just opened PR #130 to fix dlio.log so that it is reopened in spawn and forkserver child processes and the child log messages are no longer lost.

The same problem exists with dlp.log, but some of the code that needs to change lives in the dlio-profiler repository. Once that is updated and its release number is bumped, dlio_benchmark can be changed to require the newer dlio-profiler version.
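For context, the underlying failure mode looks roughly like the sketch below: under the spawn or forkserver start methods the child starts from a fresh interpreter, so log handlers opened in the parent do not exist in the child and records are silently dropped until the file is reopened. This is a minimal illustration using Python's standard logging module, not the actual PR #130 change; the handler setup here is assumed.

```python
import logging
import multiprocessing as mp


def reopen_log(log_file: str) -> None:
    # Under 'spawn'/'forkserver' the child does not inherit the parent's
    # configured handlers, so they must be recreated in the child.
    root = logging.getLogger()
    for handler in list(root.handlers):
        root.removeHandler(handler)
    root.addHandler(logging.FileHandler(log_file))
    root.setLevel(logging.INFO)


def child(log_file: str) -> None:
    reopen_log(log_file)  # without this, the message below is lost
    logging.info("hello from child pid %d", mp.current_process().pid)


if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=child, args=("dlio.log",))
    p.start()
    p.join()
```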

krehm commented 10 months ago

Thanks for the merge of PR #130. dlp.log has the same problem, but it is more complicated because of the ENTER and EXIT code used to implement `with Profile(name=f"{self.init.qualname}", cat=MODULE_DLIO_BENCHMARK):`. That code makes sense in main.py, where it wraps the body of the benchmark, but inside a forked/spawned child process in TorchDataset.worker_init() I don't see how to re-initialize dlp.log without using ENTER and EXIT. Any advice would be appreciated.
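One standard-library option (a sketch, not advice from this thread) is contextlib.ExitStack, which lets `__enter__` run inside worker_init() while deferring `__exit__` until the worker shuts down, with no lexical `with` block. `Profile` and `MODULE_DLIO_BENCHMARK` are stubbed below; the real dlio-profiler API may differ.

```python
import contextlib


@contextlib.contextmanager
def Profile(name: str, cat: str):
    # Stand-in for the dlio-profiler context manager; the real API is assumed.
    print(f"ENTER {cat}:{name}")
    try:
        yield
    finally:
        print(f"EXIT {cat}:{name}")


MODULE_DLIO_BENCHMARK = "dlio_benchmark"  # placeholder for the real constant


class WorkerProfiler:
    """Keeps a manually entered Profile alive for the worker's lifetime."""

    def __init__(self) -> None:
        self._stack = contextlib.ExitStack()

    def worker_init(self, worker_id: int) -> None:
        # Same effect as `with Profile(...):`, except __exit__ is deferred
        # until close(), so no lexical block has to wrap the worker's life.
        self._stack.enter_context(
            Profile(name=f"worker_{worker_id}", cat=MODULE_DLIO_BENCHMARK)
        )

    def close(self) -> None:
        self._stack.close()  # runs the deferred __exit__ calls


if __name__ == "__main__":
    prof = WorkerProfiler()
    prof.worker_init(0)
    prof.close()
```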

zhenghh04 commented 10 months ago

@krehm This has been fixed by @hariharan-devarajan, the corresponding PR is merged. Could you test it again whether it is working?

krehm commented 10 months ago

I will be out of the office until Tuesday, but will give it a try then.

krehm commented 9 months ago

I have not had time to do full testing yet; I am chasing another problem at the moment. What I do notice is that an MLPerf run (using the 'main' branch) sets workflow.profiling=False, yet when I run unet3d with the 'spawn' multiprocessing_context, I see messages implying that profiling is running. So child processes, at least in spawn mode, may be ignoring args.do_profiling.
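That observation would be consistent with how 'spawn' works: the child re-imports the main module rather than inheriting the parent's memory, so any flag set at runtime (such as args.do_profiling after configuration parsing) reverts to its import-time state unless it is passed to the child explicitly. Below is a minimal, hypothetical reproduction; the flag name is illustrative, not the benchmark's actual config plumbing.

```python
import multiprocessing as mp

DO_PROFILING = True  # import-time default


def worker_bad() -> None:
    # Under 'spawn' the module is re-imported in the child, so this reads
    # the import-time default, not whatever the parent set afterwards.
    print("child sees DO_PROFILING =", DO_PROFILING)


def worker_good(do_profiling: bool) -> None:
    # Passing the value explicitly works under every start method.
    print("child received do_profiling =", do_profiling)


if __name__ == "__main__":
    DO_PROFILING = False  # parent turns profiling off after startup

    ctx = mp.get_context("spawn")
    for proc in (
        ctx.Process(target=worker_bad),
        ctx.Process(target=worker_good, args=(DO_PROFILING,)),
    ):
        proc.start()
        proc.join()
```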