Closed hsneto closed 4 years ago
Have you tried the suggestion from the error message: "Slow transform. Please increase timeout to allow slower data loading in each worker."?
Closing due to lack of response.
Sorry for the late answer. Yes, I tried to increase the timeout to 300, but it didn't work.
Have you resolved this issue yet? I also hit it after writing a gluon Dataset myself, but I can't figure it out.
@huzhouxiang I suggest you open a new issue containing a minimal reproducible example.
I solved it. I had the same problem in d2l.ai - Ch. 7.7.
Pass --shm-size=1024m when launching Docker, i.e. docker run --shm-size=1024m <bla bla bla>
Why does that work?
gluon.data.DataLoader uses Python multiprocessing, and multiprocessing workers need shared memory. The default shared-memory size in a Docker container is 64m. You can check shm usage with df -h and look for the shm mount.
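To confirm the limit from inside the container, the same check df -h performs can be done with the Python standard library; this is a minimal sketch assuming a Linux container where the shm mount is at /dev/shm:

```python
import shutil

def shm_total_mib(path="/dev/shm"):
    # /dev/shm is the tmpfs mount that "df -h" lists as "shm";
    # Docker caps it at 64 MiB unless --shm-size is passed.
    total, used, free = shutil.disk_usage(path)
    return total // (1024 * 1024)

if __name__ == "__main__":
    print(shm_total_mib())  # typically 64 in a default Docker container
```

If this reports 64, the multiprocessing workers have very little room to exchange batches, which matches the timeout described in this issue.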
Description
Whenever I set num_workers > 0 in gluon.data.DataLoader, I get a timeout.
Error Message
To Reproduce
I'm using the mxnet docker image: mxnet/python:nightly_gpu_cu102_mkl_py3.
The notebook to reproduce can be found in d2l.ai - Ch. 7.1.
Environment
We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below: