Closed: delgadom closed this 4 years ago
mostly looking for signoff on the direction we're headed rather than the actual PR...
@brews I think what you're saying makes sense - we should use the multi-level config structure that dask allows if possible. We'd put what is currently `worker_template.yml` into the `{sys.prefix}/etc/dask` folder during the image build (rather than having the `prepare.sh` script place it). Then, we'd allow users to create their own yamls in `/home/jovyan/.config/dask`, with settings that would override the system-level ones.
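As a rough sketch of the layering I have in mind (the paths here are illustrative, not the actual image layout), dask merges configs so that later sources win:

```python
# Minimal sketch of the layered-config idea; file paths are illustrative.
import dask.config

# what we'd bake into the image at {sys.prefix}/etc/dask (or /etc/dask)
system_cfg = {"kubernetes": {"worker-template-path": "/srv/conda/etc/dask/worker_template.yml"}}

# what a user might drop into /home/jovyan/.config/dask
user_cfg = {"kubernetes": {"worker-template-path": "/home/jovyan/.config/dask/worker_template.yml"}}

# dask.config.merge gives later arguments priority, so the user's value wins
merged = dask.config.merge(system_cfg, user_cfg)
print(merged["kubernetes"]["worker-template-path"])
```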
I think without doing this, the current approach fixes the problem of worker images getting "stuck" after you've used a newer image once. BUT it also prevents users from making persistent changes to their specs; those changes would get wiped every time they spin up a notebook image again.
I think we'd need a simultaneous change to `rhg_compute_tools.get_cluster`. I think the current approach of parsing the full yaml in `/home/jovyan` (which I know I started) is probably not the smart way to do it, since dask already knows what to do with these yamls if we've put them in the right spot. So I think we could simplify `get_cluster` to just have a few kwargs that get passed through to the `KubeCluster` and `Client` calls, and not actually do any parsing of yamls ourselves. Not positive, but I think that should work and would make everything a bit less complex.
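Something like this is roughly what I'm picturing for the slimmed-down version (just a sketch, not the actual rhg_compute_tools code):

```python
# Sketch of a slimmed-down get_cluster: no YAML parsing in rhg_compute_tools,
# just let dask-kubernetes pull the worker template from the dask config files
# that already live in the standard search paths.
from dask.distributed import Client
from dask_kubernetes import KubeCluster

def get_cluster(**kwargs):
    # keyword arguments pass straight through to KubeCluster rather than
    # being re-parsed or merged into a yaml by us
    cluster = KubeCluster(**kwargs)
    client = Client(cluster)
    return client, cluster
```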
Ultimately - at least the last time I checked - dask-gateway was becoming the preferred method for cluster creation over dask-kubernetes, so at some point it's a question of how much additional work we put into improving the dask-kubernetes-based rhg_compute_tools before porting everything to dask-gateway. If we ever want to update the base helm chart underlying our own helm chart, which is currently pinned to the last pangeo chart before they switched to dask-gateway, we'll need to make that switch as well. But that's a cluster-level update, so not the easiest switch.
Yeah, might be best to put it at `/etc/dask` or `DASK_ROOT_CONFIG`, if those changes work (test this!).
I worry that `{sys.prefix}/etc/dask` configs might get dropped if users modify/update their miniconda installs. I don't know this for sure, just something to test or consider.
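If it helps when testing, something like this run inside the image should show where dask will actually look (the search order can differ between dask versions, so worth verifying):

```python
# Quick check of dask's config search path inside an image; DASK_ROOT_CONFIG
# would normally be set in the Dockerfile, not at runtime like this.
import os
os.environ.setdefault("DASK_ROOT_CONFIG", "/etc/dask")

import dask.config  # the search path is built when the module is imported
print(dask.config.paths)
# the user-level directory (~/.config/dask by default) should appear after the
# system-level ones, so its values take precedence when configs are merged
```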
Yeah, that sounds like it would make even more sense. On the off chance that someone creates a new conda env during a session, we'd still want this dask config to apply.
Thanks both!
This sounds awesome and I'd love to figure out how to make this work! I think there are no issues here with doing this with the general dask config file. `worker-template.yml` is a bit trickier.
I circumvented the default system in `dask_kubernetes.KubeCluster` so that we could provide the `get_micro_cluster`, `get_standard_cluster`, `get_big_cluster`, ... helper functions, which read the worker-template file, modify the image specs, then return a cluster object created with `dk.KubeCluster.from_dict`. As far as I can tell, there's no good way otherwise to create a cluster with workers which are 1/2 the default size, and I think it's worth the current loss of layered default customizability in exchange for users not having to pass in the explicit memory & cpu specs every time they want something other than the default.
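For reference, the helpers do roughly this (a simplified sketch with illustrative paths and template structure, not the exact rhg_compute_tools code):

```python
# Simplified sketch of the scaled-cluster helpers: read the worker template,
# scale the resource specs, and build the cluster from the modified dict
# rather than from the on-disk defaults.
import yaml
import dask_kubernetes as dk

def _scaled_cluster(scale=1.0, template_path="/home/jovyan/worker-template.yml"):
    with open(template_path) as f:
        template = yaml.safe_load(f)

    # scale the cpu/memory requests and limits of the worker container
    container = template["spec"]["containers"][0]
    for bound in ("requests", "limits"):
        res = container["resources"][bound]
        res["cpu"] = str(float(res["cpu"]) * scale)
        mem = float(str(res["memory"]).rstrip("Gi"))  # e.g. "11.5G" -> 11.5
        res["memory"] = "{:g}G".format(mem * scale)

    # bypass the config-file lookup and build the cluster from the modified dict
    return dk.KubeCluster.from_dict(template)

def get_micro_cluster(**kwargs):
    return _scaled_cluster(scale=0.5, **kwargs)

def get_standard_cluster(**kwargs):
    return _scaled_cluster(scale=1.0, **kwargs)

def get_big_cluster(**kwargs):
    return _scaled_cluster(scale=2.0, **kwargs)
```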
I don't think there's currently any support for nested/overlapping worker-template specs, so I don't think this is necessarily a big problem.
I'm not sure that this is exactly what we're looking for... what I want is to overwrite files in the user directory, which we preserve across sessions on an SSD and mount in the pod. I'm not exactly sure how the jupyter base images create the user directory, but I don't think dropping our data in /etc/skel would do the trick. Does that sound right @brews?
This is my general risk aversion coming in. If we deploy images that work in a totally different way, it will either break our current "stable" images or we'd have to figure out how to be backwards compatible in rhg_compute_tools. Also, if we start deploying our updated configs upstream of the user config, our current configs will still be in the user config location and will override our updates. Fun times.
Thoughts on this? I don't want to be creating technical debt for ourselves down the road, so especially on the dask config file it might be best just to go through a painful change next time we're ready to update the stable images, i.e. commit a change to all image versions at once and ask users to delete old config files.
Sorry I never responded to this. I think this approach is good for now, but I think it still has the issue that it's only going to apply to "latest" images, yeah? So if someone goes to a "latest" image, and then goes back to the "stable" image, they will still have worker conflicts, no? (b/c the worker template from the stable image won't necessarily update the worker template that the latest image placed in a user's home dir). I don't see a great way to get around the fact that the thing we want to change is kind of baked into the stable image now... but I also could be missing something here and it's possible the current approach actually does solve this problem.
I agree that we prob don't want to do anything drastic in the short term, but also that next time we're ready to update the stable image, it might be time for the painful change. I doubt that too many people have customized their worker_template files, so hopefully it's just a matter of telling people to delete them which wouldn't be too painful.
Also, I know it's a big change, but if we're going to go through the effort of migrating settings in a way that requires updating rhg_compute_tools, it might be a good time to move to dask_gateway for controlling our workers. It's simultaneously trying to address some of the issues that we've been addressing with rhg_compute_tools (e.g. how do you make it easier for users to adjust worker specs and images). It might make long-run sense to switch over, though it would certainly require changing rhg_compute_tools simultaneously. https://www.anaconda.com/blog/pangeo-with-dask-gateway
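For a sense of the user-facing side with dask-gateway (a sketch; which options exist and their names depend entirely on how the gateway server is configured, so these are placeholders):

```python
# Rough sketch of cluster creation through dask-gateway, assuming the gateway
# address is already set in config and the server exposes worker-size options.
from dask_gateway import Gateway

gateway = Gateway()
options = gateway.cluster_options()   # options are defined server-side
options.worker_memory = 8             # hypothetical option name exposed by our gateway
cluster = gateway.new_cluster(options)
cluster.scale(10)
client = cluster.get_client()
```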
Summary

Always copy `worker-template.yml` rather than updating it. Fixes the problem of the dask worker getting "stuck" on the most recently updated version of `worker-template.yml`.

Couple outstanding issues: `rhodium:stable` and `pyTC:stable`