RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "<ipython-input-17-cd8dc175aedc>", line 41, in __getitem__
    sample = self.default_loader(path)
  File "<ipython-input-17-cd8dc175aedc>", line 33, in default_loader
    with self.fs.open(path, 'rb') as f:
  File "/usr/local/lib/python3.10/dist-packages/pelicanfs/core.py", line 552, in open
    data_url = sync(self.loop, self.get_origin_cache if self.directReads else self.get_working_cache, path)
  File "/usr/local/lib/python3.10/dist-packages/fsspec/asyn.py", line 328, in loop
    raise RuntimeError("This class is not fork-safe")
RuntimeError: This class is not fork-safe
From ChatGPT:
The error, RuntimeError: This class is not fork-safe, means the PelicanFileSystem instance cannot be used inside DataLoader workers created with PyTorch's default multiprocessing start method. On Linux that default is fork, and fsspec-based async filesystems such as PelicanFileSystem hold an asyncio event loop tied to the parent process; as the traceback shows, fsspec's loop property detects that it is being accessed from a forked child and raises this error rather than reuse a stale event loop.