With the latest releases of the AML SDK, we see line 157 of diabetes_regression/training/train_aml.py (the run.parent.log call) failing randomly and silently, which in turn breaks line 122 of diabetes_regression/evaluate/evaluate_model.py. The fix in evaluate_model.py works around the missing metric in the parent run, but it does not address the root cause: the intermittent failure of run.parent.log.
Could an OutputFileDatasetConfig be used to pass a model_metrics.json file containing the newly trained model's metrics between the train and evaluate steps of the pipeline? A sketch of what this might look like follows.
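A minimal sketch of that idea, for discussion. The argument names (`--metrics_output`, `--metrics_input`), the compute target name `"cpu-cluster"`, and the bare `RunConfiguration` are illustrative placeholders, not the repo's actual wiring:

```python
from azureml.core import Workspace
from azureml.core.runconfig import RunConfiguration
from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()
compute_target = ws.compute_targets["cpu-cluster"]  # hypothetical compute name
run_config = RunConfiguration()  # placeholder; real steps would set an environment

# Intermediate location on the default datastore; the train step writes
# model_metrics.json here and the evaluate step mounts the same folder.
metrics_data = OutputFileDatasetConfig(name="model_metrics")

train_step = PythonScriptStep(
    name="Train Model",
    source_directory="diabetes_regression",
    script_name="training/train_aml.py",
    # Passing the OutputFileDatasetConfig as an argument resolves it to a
    # writable directory path on the compute at run time.
    arguments=["--metrics_output", metrics_data],
    compute_target=compute_target,
    runconfig=run_config,
)

evaluate_step = PythonScriptStep(
    name="Evaluate Model",
    source_directory="diabetes_regression",
    script_name="evaluate/evaluate_model.py",
    # as_input() consumes the train step's output, which also makes the
    # pipeline order the evaluate step after the train step.
    arguments=["--metrics_input", metrics_data.as_input("model_metrics")],
    compute_target=compute_target,
    runconfig=run_config,
)

pipeline = Pipeline(workspace=ws, steps=[train_step, evaluate_step])
```

Script-side, the train step would then write the metrics to the resolved directory instead of calling run.parent.log, roughly like this (again assuming hypothetical `args` parsing and an `mse` value computed earlier in the script):

```python
import json
import os

os.makedirs(args.metrics_output, exist_ok=True)
with open(os.path.join(args.metrics_output, "model_metrics.json"), "w") as f:
    json.dump({"mse": mse}, f)
```

and evaluate_model.py would load model_metrics.json from `args.metrics_input` rather than querying run.parent.get_metrics(), taking run.parent.log out of the critical path entirely.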