For the runner script to detect errors, they must be printed as JSON on the last line of stderr. This constraint may not be ideal, but printing the error payload this way is the simplest solution for now. I took the code below from where we run single pipeline steps:
import json
import sys
import traceback

# Excerpt: args, step_instance, and count_input_reads come from the surrounding runner script.
try:
    if args.step_class == "PipelineStepRunValidateInput":
        count_input_reads(input_files=step_instance.input_files_local,
                          max_fragments=step_instance.additional_attributes["truncate_fragments_to"])
    step_instance.validate_input_files()
    with open(f"{args.step_name}.description.md", "wb") as outfile:
        # write step_description (which subclasses may generate dynamically) to a local file
        outfile.write(step_instance.step_description().encode("utf-8"))
    step_instance.run()
    step_instance.count_reads()
    step_instance.save_counts()
except Exception as e:
    traceback.print_exc()
    # Emit the error payload as JSON on the last line of stderr so the runner can detect it;
    # sys.exit(str) prints the string to stderr and exits with status 1.
    sys.exit(json.dumps(dict(
        wdl_error_message=True,
        error=type(e).__name__,
        cause=str(e),
        step_description_md=step_instance.step_description(),
    )))
and adapted it for our new utils.
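For context, here is a minimal sketch of the detection side, assuming the runner has the step's captured stderr available as a string. The function name and exact checks are illustrative; only the wdl_error_message flag comes from the payload above.

import json

def extract_wdl_error(stderr_text):
    # Take the last non-empty line of the captured stderr.
    lines = [line for line in stderr_text.splitlines() if line.strip()]
    if not lines:
        return None
    try:
        payload = json.loads(lines[-1])
    except json.JSONDecodeError:
        return None
    # Error payloads are dicts flagged with wdl_error_message=True.
    if isinstance(payload, dict) and payload.get("wdl_error_message"):
        return payload
    return None

A runner using something like this could surface payload["error"] and payload["cause"] to the user, and fall back to the raw stderr when the last line is not valid JSON.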