Maybe this?
```python
cluster = coiled.Cluster(
    ...
    worker_config={"distributed": {"comm": {"compression": "blosc"}}},
    scheduler_config={"distributed": {"comm": {"compression": "blosc"}}},
)
```
This seems totally reasonable. I've added it to my queue for later this week.
Hello folks, I had a quick look at the Cluster API reference and wondered if scheduler_options and worker_options are doing what you suggested, Matt? Just wanted to double-check before closing this issue. Thank you 😄
Thanks for asking @FabioRosado, this is a separate issue. scheduler_options / worker_options are used to specify arguments for Scheduler.__init__ / Worker.__init__, to allow for customizing scheduler / worker creation. worker_config / scheduler_config are for setting Dask configuration values on the scheduler / worker machines.
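To make the distinction concrete, here is a minimal sketch. The worker_config / scheduler_config keyword names follow the proposal earlier in this thread and may not match whatever finally ships:

```python
import coiled

cluster = coiled.Cluster(
    # *_options become keyword arguments for Scheduler.__init__ / Worker.__init__
    worker_options={"nthreads": 4},
    # *_config (proposed here, not a released keyword) would set Dask
    # configuration values on the worker / scheduler machines
    worker_config={"distributed": {"comm": {"compression": "blosc"}}},
    scheduler_config={"distributed": {"comm": {"compression": "blosc"}}},
)
```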
Small note on the API: since you probably also want to use that config locally, you might already have it set in a yaml file somewhere, so being able to pass the path to a file would be a nice convenience.
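Something like this, as a sketch; the file path and the worker_config / scheduler_config keywords are illustrative assumptions, not released API:

```python
import os

import coiled
import yaml

# Load an existing local Dask config file and ship the resulting dict to the
# cluster. The path below is just an example location.
path = os.path.expanduser("~/.config/dask/distributed.yaml")
with open(path) as f:
    cfg = yaml.safe_load(f)

cluster = coiled.Cluster(worker_config=cfg, scheduler_config=cfg)
```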
Also, we might want logic for dropping the coiled part of the config. Passing config=dask.config.config might be a tempting way to ensure you have the same config on the cluster as locally, but, depending on the implementation, it could cause collisions with the coiled config vars that get set automatically on the backend.
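For example, something along these lines, just a sketch of the filtering idea:

```python
import dask

# Copy the local Dask config but strip the coiled-specific section before
# shipping it to the cluster, so the values set automatically on the backend
# are not overwritten.
cfg = {k: v for k, v in dask.config.config.items() if k != "coiled"}
```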
Update: this would be helpful for @jose-moralez to specify worker resources on Coiled. https://coiled-users.slack.com/archives/C0195GJKQ1G/p1619023637047600
EDIT: well, you can just pass worker_options={"resources": {"FOO": 1}} to coiled.Cluster for this one, but it still highlights that letting users supply a Dask config (or set environment variables) keeps coming up externally.
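Spelled out, the workaround looks roughly like this (a sketch, not a tested snippet):

```python
import coiled
from dask.distributed import Client

# Resources are forwarded to Worker.__init__ via worker_options.
cluster = coiled.Cluster(worker_options={"resources": {"FOO": 1}})
client = Client(cluster)

# Tasks can then request that resource when they are submitted.
future = client.submit(sum, [1, 2, 3], resources={"FOO": 1})
```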
Closing this issue as Nat implemented shipping the Dask config.
It would be nice to be able to set configuration for the scheduler and workers from the Cluster object or a cluster configuration.
This is useful, for example, when setting default compression. @quasiben is running into this now.
(we're able to work around it, but it would be nice)
cc @jrbourbeau
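For reference, locally this is just a Dask config value; the request here is a way to get the same setting onto the scheduler and worker machines. A minimal sketch of the local side:

```python
import dask

# Setting the default comm compression locally; the issue asks for a way to
# apply the equivalent configuration on the cluster's scheduler and workers.
dask.config.set({"distributed.comm.compression": "blosc"})
```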