The files in the result directories (mostly from preprocessing, but this affects other steps as well) are overwritten file by file when another run is performed. Files from previous runs can therefore survive and may be processed again. This can lead to unpredictable results and errors that are hard to track down, mostly due to the catch-them-all behaviour of some functions, which pick up every file in the directory regardless of which run produced it.
One solution is to delete the result directory before another run. At the moment this has to be done manually to ensure there is no leftover state between two runs!
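A minimal sketch of how the cleanup could be automated at the start of each run (the path `results/preprocessing` and the helper name are just placeholders, not the project's actual layout or API):

```python
import shutil
from pathlib import Path

def clean_results_dir(results_dir: str) -> None:
    """Remove the result directory entirely and recreate it empty,
    so no files from a previous run can leak into the next one."""
    path = Path(results_dir)
    if path.exists():
        shutil.rmtree(path)   # delete the directory and all its contents
    path.mkdir(parents=True)  # recreate it empty

# Called once at the start of every run, before any step writes output.
clean_results_dir("results/preprocessing")
```

Deleting and recreating the whole directory is simpler and safer than trying to remove stale files selectively, since the catch-them-all functions would otherwise still see anything that was missed.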
How can we deal with this? Is there a downside to simply deleting everything before a new run?