Closed muhammad-levi closed 10 months ago
Somehow I got it resolved by rebuilding the Docker image from the modified Dockerfile of pipelines/batch. This is probably a bug caused by a crash in the previous run, which had already created timestamp_start.txt. Maybe the pipeline needs to be more graceful / fail-safe when crashing (e.g. delete timestamp_start.txt)?
Sorry, I missed your original report earlier. This is actually by design: timestamp_start.txt is not deleted, so that we do not overwrite the outputs of a previous run. What the first error is telling you is that your output directory may already contain data (i.e., an old timestamp_start.txt exists in it). You can get rid of that by either removing the contents of the output directory or using a new output directory. The controller handles this automatically, as it uses a new [timestamped] output directory for each run.
I am closing this as WAI.
Given that the pipelines/batch uber JAR executed successfully the first time, when executing it a second time there will be an ERROR log as follows: Is that the expected behaviour? Or should it be able to resync?