JanssenBrm opened this issue 2 weeks ago
Created ticket for follow-up: https://github.com/eu-cdse/openeo-cdse-infra/issues/303
As mentioned in that internal ticket, I did a quick experiment:
```
./node_modules/.bin/open-science-catalog-validation /home/lippenss/workspace/2024/infra303-openeo-stac-earthcode-validation/j-24110466a8524212b7f7b7855038f4d7/job-results.json

Validation failed!

> open-science-catalog-validation@1.0.0-rc.4 test
> stac-node-validator --config config.json /home/lippenss/workspace/2024/infra303-openeo-stac-earthcode-validation/j-24110466a8524212b7f7b7855038f4d7/job-results.json

STAC Node Validator v2.0.0-beta.12

/home/lippenss/workspace/2024/infra303-openeo-stac-earthcode-validation/j-24110466a8524212b7f7b7855038f4d7/job-results.json
  Collection: valid
  Extensions
    eo (1.1.0): valid
    file (2.1.0): valid
    processing (1.1.0): valid
    projection (1.1.0): valid
  Custom
    1. File should not exist

Summary (1)
  Valid: 0
  Invalid: 1
  Skipped: 0
```
Note that all standard STAC validation (eo, file, ...) passes fine. The only problem is the custom, Open Science Catalog-specific validation, which gives a rather obscure message: "File should not exist". As far as I understand, this is due to the local (user-side) organization of the result files, not the produced metadata itself. As such, I think this task can be considered done.
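For completeness, here is a minimal sketch of how the core STAC part of this check could be reproduced locally with pystac, independent of the Open Science Catalog custom rules. This is an illustrative sketch, not the project's validation script: it assumes pystac is installed with its validation extra, and the file path is a placeholder.

```python
# Minimal sketch: re-run only the core STAC validation for an openEO
# job-results document, without the Open Science Catalog custom checks.
# Assumes `pip install pystac[validation]`; the file path is illustrative.
import pystac

collection = pystac.Collection.from_file("job-results.json")

# validate() checks the Collection core schema plus any declared
# extension schemas (eo, file, processing, projection, ...) and raises
# pystac.errors.STACValidationError if anything is invalid.
schemas = collection.validate()
print(f"Valid against {len(schemas)} schema(s)")
```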
FYI: I opened a ticket about that obscure validation message at:
@JanssenBrm I don't seem to have the permissions to close this ticket. (I could, however, change the status to "done" on the project board.)
Hi @JanssenBrm,
That looks like great progress.
@Schpidi, can you please confirm whether this validation should be considered successful? And if so, can the script be changed so that it can be used to validate STAC produced by other platforms without errors?
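For illustration, a hedged sketch of what such a platform-agnostic check could look like, using the openEO Python client together with pystac. The backend URL and job id below are placeholders, and the use of `JobResults.get_metadata()` to obtain the raw STAC dict is an assumption about the client API, not a confirmed part of the validation script.

```python
# Hedged sketch of a platform-agnostic check: fetch the job result
# metadata from an openEO backend and validate only the generic STAC
# part, leaving out catalog-specific custom rules.
# Assumptions: the backend URL and job id are placeholders, and
# JobResults.get_metadata() is assumed to return the raw STAC dict.
import openeo
from pystac.validation import validate_dict

connection = openeo.connect("https://openeo.dataspace.copernicus.eu").authenticate_oidc()
job = connection.job("j-24110466a8524212b7f7b7855038f4d7")
stac_dict = job.get_results().get_metadata()

# Raises pystac.errors.STACValidationError if the core STAC document
# (including declared extensions) is invalid.
validate_dict(stac_dict)
print("Core STAC validation passed")
```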
Bram, we probably ought to clarify the mechanism for moving user stories to done. I think the final step should probably be that we (well, probably you) present to ESA in the Scrum of Scrums and agree that it is done.
I think part of this story is also about reviewing this with the other platforms to see how similar (or not) we are.
Having more well-defined "Done Criteria", as we have started to do and as you have suggested, should help in this regard too.
Provide a working validated example