Closed · olgabot closed this issue 6 years ago
What's probably happening here is that the new `reflow runbatch` is attaching itself to the previous jobs: Reflow restores state when it can. You should find that new samples get smaller instance types.
Alternatively, you can nuke the current batch state by running `reflow runbatch -reset` instead. This starts the batch anew, and each sample is processed from the start (while still reusing cached data where it can, of course).
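For reference, a minimal sketch of the two invocations described above (only the commands and the `-reset` flag named in this thread):

```
# Resume the existing batch, reusing prior job state where possible
reflow runbatch

# Discard the current batch state and start each sample from scratch
# (cached intermediate data is still reused where possible)
reflow runbatch -reset
```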
This is all too confusing: the UX around the tooling here needs to improve; it's one of our near-term goals.
Turns out I had `@requires(mem := 64*GiB)` on Main, which was the problem -_-
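For anyone hitting the same thing, here is a hypothetical sketch of the fix, assuming the comment-annotation form of `@requires` placed above the `Main` declaration (the surrounding names are illustrative, not from the original workflow file):

```
// Before: this pinned 64GiB of memory for the program
// @requires(mem := 64*GiB)

// After: lowered to match actual peak usage
// @requires(mem := 8*GiB)
val Main = ...
```

Note that after lowering the annotation, a plain `reflow runbatch` may still attach to the previous jobs and their old instance types; `reflow runbatch -reset` forces the new requirement to take effect.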
Hello, I've found that if I edit the workflow file for a `reflow runbatch` job, the change doesn't get propagated unless I add `-cache=off`. Is there a way to reset the cache ONLY for the workflow file and not for the intermediate computation?

Context: I made the memory requirements too high (64GiB) and am now launching very expensive instances, so I lowered the requirement to 8GiB, but Reflow is still allocating 64GiB:
Thanks! Olga