I remember we discussed this as a known limitation back when we implemented caching. For now, we should probably at least document this behavior.
It seems like we should be able to overcome it by hashing the `out_files` when actually running the step and returning `{key: (filename, hash), ...}` as the `out_files` dict of the step. Then, when deciding whether to re-run, we can check the hashes of those `filename`s and force a re-run if they've changed from what's expected. I think it should work and not add much overhead. (Well, the minimum required amount of overhead, I guess.)
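Something like this minimal sketch is what I have in mind (all names here, `hash_file`, `run_step`, and the example filename, are hypothetical and not the pipeline's actual API; MD5 is just an arbitrary choice of hash):

```python
import hashlib
from pathlib import Path


def hash_file(path):
    """Return the hex digest of a file's contents, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def run_step(in_files):
    """Run the step, then record a (filename, hash) pair per output."""
    # ... the step's actual processing happens here, writing output files ...
    produced = {"raw": Path("sub-01_task-rest_raw.fif")}  # hypothetical output
    return {key: (fname, hash_file(fname)) for key, fname in produced.items()}
```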
I was just running some tests; one of them (5) was removing the `derivatives` (by `rm -Rf`ing them).

I think (5) is a bug -- somehow the file being there, its `mtime` or hash or whatever, didn't cause it to run again. This seems weird/bad to me. But I don't think it's too surprising, since I think we only check the `mtime` of the input files and not the output files. Somehow we need to check to make sure that the output files haven't been overwritten (or modified) when determining whether we need to re-run or not.
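A rough sketch of that output-side check, assuming outputs are stored as `{key: (filename, hash)}` as proposed above (`outputs_intact` and `hash_file` are made-up names):

```python
import hashlib
from pathlib import Path


def hash_file(path):  # same hypothetical helper as in the sketch above
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def outputs_intact(out_files):
    """Return True only if every recorded output still exists unmodified."""
    for fname, expected in out_files.values():
        path = Path(fname)
        if not path.exists() or hash_file(path) != expected:
            return False  # deleted or modified output -> must force a re-run
    return True


# At skip-decision time, the existing input-side check alone is not enough;
# the cached outputs need to be verified too:
#     if inputs_unchanged and outputs_intact(cached_out_files):
#         skip the step
#     else:
#         re-run it
```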