Currently it is assumed that all samples defined in metadata.csv for a given project will be present in the matched Run/Alignment, but this may not hold, either accidentally (a typo or an incorrect experiment name) or intentionally (experiment metadata shared between multiple runs). This should be checked and logged appropriately: a few missing names should trigger a warning, while a completely mismatched set should set the project status to failed.
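The intended check could be sketched as follows. This is a minimal illustration, not the project's actual code: the function name, arguments, and the `"failed"`/`"ok"` status strings are all hypothetical placeholders for whatever the pipeline actually uses.

```python
import logging

logger = logging.getLogger(__name__)


def validate_sample_names(metadata_samples, run_samples, project_name):
    """Hypothetical check: compare sample names declared in metadata.csv
    against those actually present in the matched Run/Alignment."""
    expected = set(metadata_samples)
    present = expected & set(run_samples)
    missing = expected - present

    if not present:
        # Completely mismatched set: no declared sample exists in the
        # run, so the project should be marked as failed.
        logger.error(
            "No samples from metadata.csv found in run; failing project %s",
            project_name,
        )
        return "failed"

    if missing:
        # Partial mismatch: proceed, but warn about the missing names.
        logger.warning(
            "Samples in metadata.csv not found in run for project %s: %s",
            project_name,
            sorted(missing),
        )

    return "ok"
```

Set intersection and difference keep the comparison order-independent and tolerant of duplicates in either list.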