rs-station / careless

Merge X-ray diffraction data with Wilson's priors, variational inference, and metadata

requeue compatibility #74

Open dennisbrookner opened 1 year ago

dennisbrookner commented 1 year ago

When submitting a job to a partition such as Harvard's gpu_requeue, the job sometimes gets killed and requeued. In that case it would be desirable for careless to continue where it left off rather than starting over! For example, a flag could be added to the careless call that means, "before starting, inspect the contents of the output directory for a partial run, and if you find one, continue from there."

I have no idea how easy or hard this would be to implement (or if it exists already?). If it does exist, amazing, and if not, I figured I would mention it. I was kind of assuming that this would be the default behavior, and I was a little bummed when my job was killed and started over!

kmdalton commented 1 year ago

I have often thought that I should implement model checkpointing. For a variety of reasons, this has historically been challenging to do. However, as of version 0.2.3, it is possible to save and load structure factors and scale parameters. It would not be overly painful to add a flag that writes the parameters to disk every so often (something like 1,000 training steps seems an okay default). To resume a job, one could then pass the saved files back in via the --scale-file and --structure-factor-file flags. I will note that some state will be lost in the optimizer; I have no idea if that is a material concern.
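To make the periodic-save idea concrete, here is a minimal sketch of a helper that a training loop could call once per step. The function name, file names, and the assumption that the scale model and the structure factor surrogate posterior are Keras objects exposing save_weights() are all illustrative, not careless's actual API; a resumed job would then reload the most recent save through the --scale-file / --structure-factor-file mechanism described above, assuming the saved format matches what those flags expect.

```python
def maybe_checkpoint(step, scale_model, surrogate_posterior, out_dir, interval=1000):
    """Overwrite a checkpoint every `interval` training steps.

    Illustrative only: assumes the scale model and the structure factor
    surrogate posterior are tf.keras Models with a save_weights() method,
    and that the output paths below are placeholders for whatever format
    the resume flags actually expect.
    """
    if step > 0 and step % interval == 0:
        scale_model.save_weights(f"{out_dir}/checkpoint_scale")
        surrogate_posterior.save_weights(f"{out_dir}/checkpoint_structure_factor")
```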

Definitely a good suggestion. I need to think about it more.

kmdalton commented 1 month ago

This would require a lot of work to do in a satisfying way, but the process is pretty much what I've been going through over on the abismal serialization branch. Essentially every layer and model needs to have the following 3 methods

and should be decorated with the

It can be tricky to get this stuff right, but a few pointers
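As a point of reference, the pattern being described is the standard tf.keras serialization mechanism. Below is a minimal sketch under that assumption: a hypothetical layer (MyScaleLayer and its arguments are made up, not careless classes) implements get_config() / from_config() and is registered with tf.keras.utils.register_keras_serializable so it can be rebuilt when a saved model is loaded.

```python
import tensorflow as tf

@tf.keras.utils.register_keras_serializable(package="careless_example")
class MyScaleLayer(tf.keras.layers.Layer):
    """Hypothetical serializable layer, illustrating the tf.keras pattern."""

    def __init__(self, units, prior_scale=1.0, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.prior_scale = prior_scale
        self.dense = tf.keras.layers.Dense(units)

    def call(self, inputs):
        return self.dense(inputs)

    def get_config(self):
        # Everything needed to rebuild the layer from plain Python values.
        config = super().get_config()
        config.update({"units": self.units, "prior_scale": self.prior_scale})
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)
```

With layers registered this way, a saved model containing them can be reloaded (e.g. with tf.keras.models.load_model) without passing custom_objects explicitly.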