Reasoning: dataset creation is per-channel and could, in principle, be parallelized. We could either write a separate dataset file per channel and merge them afterward, or create several datasets in parallel within the same hdf5 file. Which is better: a pipelined solution or an at-once solution? Does parallel writing require non-standard packages?
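A minimal sketch of the pipelined option, assuming a standard h5py install: each worker process writes its channel to its own temporary file, and the per-channel files are merged serially at the end. The channel names and the compute_channel() helper are hypothetical placeholders, not part of the existing workflow. Writing concurrently into a single hdf5 file from multiple processes generally requires an MPI-enabled build of h5py (parallel HDF5 plus mpi4py), which is a non-standard install, so the separate-files-then-merge route avoids that dependency.

    # Sketch: per-channel files written in parallel, then merged serially.
    # channel names and compute_channel() are hypothetical placeholders.
    import os
    import tempfile
    from concurrent.futures import ProcessPoolExecutor

    import h5py
    import numpy as np


    def compute_channel(name):
        # Placeholder for the real per-channel computation.
        return np.random.default_rng().random(1000)


    def write_channel(args):
        # Each worker owns its own file, so no shared-file locking and
        # no MPI-enabled h5py build is needed.
        name, tmp_dir = args
        path = os.path.join(tmp_dir, f"{name}.h5")
        with h5py.File(path, "w") as f:
            f.create_dataset(name, data=compute_channel(name))
        return path


    def build_dataset(channel_names, out_path):
        with tempfile.TemporaryDirectory() as tmp_dir:
            # Parallel stage: one temporary file per channel.
            with ProcessPoolExecutor() as pool:
                paths = list(pool.map(write_channel,
                                      [(n, tmp_dir) for n in channel_names]))
            # Serial stage: copy every per-channel dataset into the final file.
            with h5py.File(out_path, "w") as out:
                for path in paths:
                    with h5py.File(path, "r") as src:
                        for name in src:
                            src.copy(name, out)


    if __name__ == "__main__":
        build_dataset(["ch0", "ch1", "ch2"], "merged.h5")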
Closing. Planning to abandon the hdf5 dataset and related workflows in future updates. Current functionality is good as-is until the replacement workflow is implemented.
Parallel writing may be possible: https://stackoverflow.com/a/51708858