-
In Distributed's `Adaptive` class, there are calls to the Cluster implementations of [`scale_up`](https://github.com/dask/distributed/blob/1.21.8/distributed/deploy/adaptive.py#L285) and [`scale_down`](…
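For context, the contract `Adaptive` relies on can be mocked with plain Python. This is a hedged sketch: `ToyCluster` and `ToyAdaptive` are hypothetical stand-ins, not Distributed's real classes, and the scaling policy here is a deliberate caricature.

```python
# Hypothetical stand-in illustrating the scale_up/scale_down contract
# that Adaptive expects a Cluster to provide -- NOT the real implementation.

class ToyCluster:
    """Minimal cluster: the adaptive loop only needs scale_up/scale_down."""
    def __init__(self):
        self.workers = []

    def scale_up(self, n):
        # Grow the worker pool to n workers.
        while len(self.workers) < n:
            self.workers.append(f"worker-{len(self.workers)}")

    def scale_down(self, workers):
        # Retire the named workers.
        drop = set(workers)
        self.workers = [w for w in self.workers if w not in drop]


class ToyAdaptive:
    """Caricature of the adaptive loop: one scaling decision per call."""
    def __init__(self, cluster, maximum=4):
        self.cluster = cluster
        self.maximum = maximum

    def recommend(self, pending_tasks):
        if pending_tasks and not self.cluster.workers:
            self.cluster.scale_up(min(pending_tasks, self.maximum))
        elif not pending_tasks and self.cluster.workers:
            self.cluster.scale_down(list(self.cluster.workers))


cluster = ToyCluster()
adaptive = ToyAdaptive(cluster)
adaptive.recommend(pending_tasks=10)  # work queued, no workers: scale up
adaptive.recommend(pending_tasks=0)   # idle: scale everything down
```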
-
Maybe related to https://github.com/dask/dask-jobqueue/issues/31#issuecomment-380869544.
```bash
conda create -n tmp27 dask distributed pytest docrep python=2 ipython -c conda-forge -y
source act…
-
Hi all,
I was following the steps [here](http://pangeo-data.org/setup_guides/hpc.html) to implement Pangeo on an HPC system (specifically, Cheyenne). However, I quickly ran into trouble when I atte…
-
We are using docrep in the https://github.com/dask/dask-jobqueue project to avoid docstring duplication. However, we've got a problem with Python 2; see https://github.com/dask/dask-jobqueue/issues/35.
…
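The duplication docrep removes can be illustrated with a bare-bones substitute. This is only a sketch of the idea, not docrep's actual API: `shared_doc`, `COMMON_PARAMS`, and the two cluster functions are hypothetical.

```python
# Hypothetical, simplified stand-in for what docrep does: keep common
# parameter documentation in one place and splice it into each docstring.

COMMON_PARAMS = """cores : int
    Number of cores per job.
memory : str
    Amount of memory per job (e.g. '16GB')."""


def shared_doc(func):
    """Substitute %(params)s in func's docstring with the shared text."""
    if func.__doc__:
        func.__doc__ = func.__doc__ % {"params": COMMON_PARAMS}
    return func


@shared_doc
def pbs_cluster(cores, memory):
    """Start a PBS cluster.

    Parameters
    ----------
    %(params)s
    """


@shared_doc
def slurm_cluster(cores, memory):
    """Start a SLURM cluster.

    Parameters
    ----------
    %(params)s
    """
```

Both docstrings now carry the shared parameter section, so a change to `COMMON_PARAMS` propagates everywhere, which is the maintenance win docrep provides for dask-jobqueue.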
-
I am testing the PBSCluster along with autoscaling. It seems that I am unable to get the cluster to launch any workers without explicitly starting at least one worker. I would expect that this configu…
-
Having installed dask_jobqueue with:
`pip install git+https://github.com/dask/dask-jobqueue.git#egg=0.1.0`
I get the following error upon import:
```
Traceback (most recent call last):
File…
-
The current example shows 24GB of RAM for workers; this has been shown to be an issue for Cheyenne PBS jobs, per PR https://github.com/pangeo-data/pangeo/pull/27
Please update https://github.com/pang…
-
`stop_workers` is not working for me on a SLURM cluster. Looking at the SLURM status, I can still see the jobs running, and they are still available in the scheduler after running `stop_workers`, even afte…
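The expected behavior can be pinned down with a toy model (hypothetical names, not dask-jobqueue's code): stopping a worker should both cancel its batch job and remove it from the scheduler's view; if either half is skipped, the worker lingers as reported above.

```python
# Hypothetical toy model of the expected stop_workers semantics:
# cancelling the batch job AND deregistering the worker from the
# scheduler must happen together.

class ToySlurmCluster:
    def __init__(self):
        self.jobs = {}          # job_id -> worker name (what squeue shows)
        self.scheduler = set()  # workers the scheduler knows about
        self._next_job = 0

    def start_workers(self, n):
        for _ in range(n):
            job_id = self._next_job
            self._next_job += 1
            name = f"worker-{job_id}"
            self.jobs[job_id] = name
            self.scheduler.add(name)

    def stop_workers(self, workers):
        # Expected behavior: cancel the job (analogous to scancel)
        # and drop the worker from the scheduler's view.
        for job_id, name in list(self.jobs.items()):
            if name in workers:
                del self.jobs[job_id]
                self.scheduler.discard(name)


cluster = ToySlurmCluster()
cluster.start_workers(2)
cluster.stop_workers({"worker-0"})
# worker-0 is gone from both the job list and the scheduler;
# the bug report above describes it surviving in both.
```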
-
The title is intentionally analogous to #20 as I have the feeling the explanation for the observed behavior is similar.
I'm on a PBS cluster whose nodes are made of 2 CPUs with 14 cores each.
I …
-
We're getting this test failure in the CI logs for a few unrelated PRs.
```
__________________________________ test_basic __________________________________
loop =
@pytest.mark.env("slurm")…