NKI-AI / ahcore

Ahcore is the AI for Oncology core computational pathology toolkit
Apache License 2.0

Training crashes after many successful epochs (too many open file handles) #96

Closed. AjeyPaiK closed this issue 1 month ago

AjeyPaiK commented 1 month ago

Describe the bug
I am training a segmentation model with the ZarrFileImageWriter and the ComputeWsiMetricsCallback. Training runs successfully for many epochs (82 in this case) and then crashes abruptly with the following stack trace:

 Traceback (most recent call last):
  File "/home/a.karkala/ahcore/tools/train.py", line 42, in <module>
    main()  # pylint: disable=no-value-for-parameter
    ^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/hydra/main.py", line 94, in decorated_main
    _run_hydra(
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/hydra/_internal/utils.py", line 394, in _run_hydra
    _run_app(
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/hydra/_internal/utils.py", line 457, in _run_app
    run_and_report(
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/hydra/_internal/utils.py", line 223, in run_and_report
    raise ex
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/hydra/_internal/utils.py", line 220, in run_and_report
    return func()
           ^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/hydra/_internal/utils.py", line 458, in <lambda>
    lambda: hydra.run(
            ^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
        ^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
                       ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/ahcore/tools/train.py", line 38, in main
    return train(config)
           ^^^^^^^^^^^^^
  File "/home/a.karkala/ahcore/ahcore/entrypoints.py", line 168, in train
    trainer.fit(model=model, datamodule=datamodule)
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 543, in fit
    call._call_and_handle_interrupt(
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_in
terrupt
    return trainer_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 579, in _fit_impl      
    self._run(model, ckpt_path=ckpt_path)
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _run
    results = self._run_stage()
              ^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1030, in _run_stage    
    self.fit_loop.run()
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/loops/fit_loop.py", line 205, in run
    self.advance()
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/loops/fit_loop.py", line 363, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 141, in run  
    self.on_advance_end(data_fetcher)
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 295, in on_ad
vance_end
    self.val_loop.run()
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator     
    return loop_run(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 128, in run      
    batch, batch_idx, dataloader_idx = next(data_fetcher)
                                       ^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/loops/fetchers.py", line 133, in __next__        
    batch = super().__next__()
            ^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/loops/fetchers.py", line 60, in __next__
    batch = next(self.iterator)
            ^^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/utilities/combined_loader.py", line 341, in __nex
t__
    out = next(self._iterator)
          ^^^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/pytorch_lightning/utilities/combined_loader.py", line 142, in __nex
t__
    out = next(self.iterators[0])
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
    data = self._next_data()
           ^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data
    idx, data = self._get_data()
                ^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data
    success, data = self._try_get_data()
                    ^^^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1160, in _try_get_data       
    raise RuntimeError(
RuntimeError: Too many open files. Communication with the workers is no longer possible. Please increase the limit using `ulimit -n` in the shell or change the sharing strategy by calling `torch.multiprocessing.set_sharing_strategy('file_system')` at the beginning of your code
Process Process-8731:
Traceback (most recent call last):
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/managers.py", line 814, in _callmethod
    conn = self._tls.connection
           ^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/a.karkala/ahcore/ahcore/callbacks/converters/common.py", line 79, in worker
    self._results_queue.put(result)  # Store the result
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<string>", line 2, in put
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/managers.py", line 818, in _callmethod
    self._connect()
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/managers.py", line 805, in _connect
    conn = self._Client(self._token.address, authkey=self._authkey)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/connection.py", line 501, in Client
    c = SocketClient(address)
        ^^^^^^^^^^^^^^^^^^^^^
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/connection.py", line 629, in SocketClient
    s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
Process Process-8729:
Traceback (most recent call last):
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/a.karkala/ahcore/ahcore/callbacks/converters/common.py", line 79, in worker
    self._results_queue.put(result)  # Store the result
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<string>", line 2, in put
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/managers.py", line 821, in _callmethod
    conn.send((self._id, methodname, args, kwds))
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/connection.py", line 205, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/connection.py", line 410, in _send_bytes
    self._send(header + buf)
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/connection.py", line 367, in _send
    n = write(self._handle, buf)
        ^^^^^^^^^^^^^^^^^^^^^^^^
BrokenPipeError: [Errno 32] Broken pipe
Process Process-8730:
Traceback (most recent call last):
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/a.karkala/ahcore/ahcore/callbacks/converters/common.py", line 79, in worker
    self._results_queue.put(result)  # Store the result
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<string>", line 2, in put
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/managers.py", line 821, in _callmethod
    conn.send((self._id, methodname, args, kwds))
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/connection.py", line 205, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/connection.py", line 410, in _send_bytes
    self._send(header + buf)
  File "/home/a.karkala/.conda/envs/py311/lib/python3.11/multiprocessing/connection.py", line 367, in _send
    n = write(self._handle, buf)
        ^^^^^^^^^^^^^^^^^^^^^^^^
BrokenPipeError: [Errno 32] Broken pipe
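
The RuntimeError already names two mitigations. A minimal sketch of both, assuming they are applied at the very top of tools/train.py before any DataLoader or worker process is created (the resource call is the in-process equivalent of running `ulimit -n` in the shell):

import resource

import torch.multiprocessing

# Share tensors between workers via the filesystem instead of file descriptors,
# so inter-process tensors no longer pin open fds in the main process.
torch.multiprocessing.set_sharing_strategy("file_system")

# Raise the soft open-file limit up to the hard limit for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

Note that both only postpone the crash if handles are genuinely leaking, which is what the investigation below points to.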

To Reproduce
Run a segmentation training with the following settings:

  1. Datamodule settings: BATCH_SIZE=64, VALIDATION_BATCH_SIZE=126, NUM_WORKERS=16

  2. write_file_callback settings:

write_file_callback:
  _target_: ahcore.callbacks.WriteFileCallback
  dump_dir: ${oc.env:SCRATCH}/${task_name}/${now:%Y-%m-%d}_${now:%H-%M-%S}
  queue_size: 16
  writer_class:
    _target_: ahcore.writers.ZarrFileImageWriter
    _partial_: true
  callbacks:
    - _target_: ahcore.callbacks.converters.wsi_metric_callback.ComputeWsiMetricsCallback
      reader_class:
        _target_: ahcore.readers.ZarrFileImageReader
        _partial_: true
      max_concurrent_tasks: 3
  max_concurrent_queues: 3
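
For context on how this config is consumed: `_partial_: true` makes Hydra instantiate the writer and reader entries as functools.partial objects that the callback can later call per output file. A minimal sketch of that mechanic, assuming the YAML above is saved as write_file_callback.yaml (a hypothetical path) and ahcore is importable; this illustrates the Hydra behaviour, not the actual ahcore entrypoint code:

from functools import partial

from hydra.utils import instantiate
from omegaconf import OmegaConf

cfg = OmegaConf.load("write_file_callback.yaml")  # hypothetical file containing the snippet above

# With _partial_: true, Hydra returns functools.partial(ZarrFileImageWriter)
# instead of constructing the writer immediately.
writer_factory = instantiate(cfg.write_file_callback.writer_class)
assert isinstance(writer_factory, partial)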

Expected behavior
Training runs to completion without accumulating open file handles across epochs.

Environment
dlup version: 0.5.1
How installed: pip
Python version: 3.11
Operating System: Linux

Additional context
N/A

AjeyPaiK commented 1 month ago

After some fact-finding, I have narrowed the issue down to the backend we use for loading the WSIs during dataset creation.

I made a manifest with just two images (one for fit and one for validate) and started tracking the open file handles with lsof.

I looked for the obvious candidates: the .log files, the image files ending in .svs, the .json files, and the .cache files.
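
For reference, the same per-file count can be scripted instead of eyeballing the lsof output. A minimal sketch, assuming psutil is installed and the PID of the main training process is known (count_svs_handles is a hypothetical helper, not part of ahcore):

import psutil

def count_svs_handles(pid: int) -> dict[str, int]:
    """Count open handles per .svs file for the training process and its workers."""
    counts: dict[str, int] = {}
    main = psutil.Process(pid)
    for proc in [main, *main.children(recursive=True)]:
        try:
            open_files = proc.open_files()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # worker exited or is not inspectable
        for f in open_files:
            if f.path.endswith(".svs"):
                counts[f.path] = counts.get(f.path, 0) + 1
    return counts

print(count_svs_handles(12345))  # 12345 is a placeholder PID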

With just two images, I would expect only a small number of file handles to be open for the .svs files.

After about 51 epochs, there are 144 open file handles for the two .svs files I'm using.

[screenshot: lsof output showing 144 open handles for the two .svs files]

Moreover, all of these handles are open on just those two images.

[screenshot: lsof output, every handle pointing to the same two .svs files]

When I change the backend to OPENSLIDE, there are no open file handles for these .svs files.

[screenshot: lsof output with the OPENSLIDE backend, showing no open .svs handles]
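
For reference, the backend switch in question happens where dlup opens the slide. A rough sketch of what forcing OpenSlide looks like, assuming the SlideImage / ImageBackend API of dlup 0.5.x (the import path of ImageBackend is an assumption; it has moved between dlup releases):

from dlup import SlideImage
from dlup.experimental_backends import ImageBackend  # assumed location in dlup 0.5.x

# Open the WSI explicitly with the OpenSlide backend, which did not leave
# lingering .svs handles in the lsof comparison above.
slide = SlideImage.from_file_path("slide.svs", backend=ImageBackend.OPENSLIDE)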

@jonasteuwen What do you think needs to be done about this?

AjeyPaiK commented 1 month ago

Closing this issue, since the bug should be handled upstream in dlup.

The PR mentioned above fixes it.