hardingnj opened 4 years ago
I'll look into it
I'm also seeing an error very similar to this one: https://github.com/dask/dask-jobqueue/issues/222
Are we using a version that predates the fix implemented there?
Removing `cluster.adapt()` seems to fix this issue.
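To make the workaround concrete, here is a minimal sketch of the difference between adaptive scaling (`cluster.adapt()`) and a fixed worker count (`cluster.scale()`). The real methods live on Dask cluster objects such as `dask_kubernetes.KubeCluster`; the tiny `FakeCluster` below is only a stand-in so the contrast is runnable here, and its attributes are illustrative, not part of the Dask API.

```python
class FakeCluster:
    """Illustrative stand-in for a Dask cluster object (e.g. KubeCluster)."""

    def __init__(self):
        self.workers = 0
        self.adaptive = None

    def adapt(self, minimum=0, maximum=10):
        # Adaptive mode: the scheduler grows and shrinks the worker pool
        # between these bounds. Workers torn down while still starting up
        # are one place "Closed worker has not yet started" can appear.
        self.adaptive = (minimum, maximum)

    def scale(self, n):
        # Fixed mode: request exactly n workers and keep them running.
        self.adaptive = None
        self.workers = n


cluster = FakeCluster()

# Instead of:
#     cluster.adapt(minimum=0, maximum=20)
# pin a fixed size, which avoids the scale-down/scale-up churn:
cluster.scale(8)

print(cluster.workers)   # 8
print(cluster.adaptive)  # None
```

This trades elasticity for stability: a fixed pool costs idle resources but sidesteps the rapid worker teardown that adaptive mode can trigger.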
We're using dask-kubernetes, not dask-jobqueue; jobqueue is for deploying Dask on SGE, SLURM, etc.: https://jobqueue.dask.org/en/latest/
Ah, thanks. I guess the cluster adapt logic is similar, though?
Yes, I think the errors you're getting might be related to this:

```
distributed.nanny - INFO - Closing Nanny at 'tcp://10.32.60.2:35879'
distributed.worker - INFO - Stopping worker at tcp://10.32.60.2:41549
distributed.worker - INFO - Closed worker has not yet started: None
```
That's an error from one of your Dask workers.
I can't find an existing report; apologies if this is known, but I get these errors when using Dask:
The error doesn't stop the notebook kernel, but it's still concerning. Is this a known issue or something with our configuration?
Thanks!