I use parameters with datasets in Azure Data Factory. What I've noticed is that parameters behave differently in a dataflow sink versus a dataflow source. For example, these are the parameters I set at the pipeline level on a dataflow:
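Roughly like this (a sketch only; the parameter name `fileName` and the timestamp expression are illustrative placeholders, not my exact values):

```
fileName: @concat('extract_', formatDateTime(utcNow(), 'yyyyMMdd_HHmmss'), '.csv')
```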
I provide dynamically generated filenames to both the source and sink. However, while the source picks up the filename from blob storage just fine, the sink does not, and I have to "override" it like so:
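In data flow script terms, the override looks roughly like this (a sketch; `$fileName` stands in for whatever parameter you pass, and the other sink settings are defaults). It collapses the output to a single partition and names the file explicitly instead of letting Spark pick a name:

```
sink(allowSchemaDrift: true,
    validateSchema: false,
    partitionFileNames: [($fileName)],
    partitionBy('hash', 1)) ~> sink1
```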
If I don't, the sink writes the file to the *right* folder, mind you, but with a filename of `part-<guid_value>`, and subsequent steps in the pipeline fail because they can't find the file.