bids-standard / bids-validator

Validator for the Brain Imaging Data Structure
https://bids-standard.github.io/bids-validator/
MIT License

Introspect values in failing checks #1998

Open yarikoptic opened 2 months ago

yarikoptic commented 2 months ago

At the moment the validator reports:

        [ERROR] Participant labels found in this dataset did not match the values in participant_id column
found in the participants.tsv file.
 (PARTICIPANT_ID_MISMATCH)

                ./participants.tsv
                        Evidence: schema.rules.rules.checks.dataset.ParticipantIDMismatch

                1 more files with the same issue

        Please visit https://neurostars.org/search?q=PARTICIPANT_ID_MISMATCH for existing conversations about this issue.

NB: formatting is a bit odd

Unfortunately, it gives no specific information on which particular "labels found in this dataset" did not match which particular "values in participant_id column".

In this particular case we had:

(dev3) yoh@typhon:/data/yoh/1076_spacetop$ diff -Naur <(awk '/^sub-/{print $1;}' participants.tsv | sort) <(/bin/ls -1d sub-* | sort) | less
--- /dev/fd/63  2024-06-12 12:45:45.253486101 -0400
+++ /dev/fd/62  2024-06-12 12:45:45.257486117 -0400
@@ -115,3 +115,4 @@
 sub-0131
 sub-0132
 sub-0133
+sub-0147
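
A minimal Python sketch of the same comparison, assuming it is run from the dataset root; this is exactly the kind of evidence the validator could surface:

# Sketch only: compare participant_id values in participants.tsv against
# the sub-* directories present on disk, from the dataset root.
import csv
from pathlib import Path

with open("participants.tsv", newline="") as f:
    tsv_ids = {row["participant_id"] for row in csv.DictReader(f, delimiter="\t")}

dir_ids = {p.name for p in Path(".").glob("sub-*") if p.is_dir()}

print("in participants.tsv only:", sorted(tsv_ids - dir_ids))
print("on disk only:", sorted(dir_ids - tsv_ids))  # here: ['sub-0147']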

attn @jungheejung

effigies commented 2 months ago

This is a feature request for the validator to introspect the assertions that it's enforcing and show the variables that led to the failure. We would need something like pytest's magic, which would be a significant undertaking.

Not to say we shouldn't do it, but this is a consequence of pushing logic into the schema so that there is less custom code for each issue in the validator.
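
For reference, a toy illustration (hypothetical values) of what that "magic" buys: pytest rewrites a bare assert so that, on failure, both operands and their difference are printed instead of a plain AssertionError.

# Toy example with hypothetical values; under pytest, this failing assert
# prints both sorted lists and points out the extra 'sub-0147' entry.
def test_participant_id_mismatch():
    participant_ids = ["sub-0131", "sub-0132", "sub-0133"]
    sub_dirs = ["sub-0131", "sub-0132", "sub-0133", "sub-0147"]
    assert sorted(participant_ids) == sorted(sub_dirs)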

yarikoptic commented 2 months ago

Could it be made explicit? E.g. (assuming that syntax for set handling is added):

diff --git a/src/schema/rules/checks/dataset.yaml b/src/schema/rules/checks/dataset.yaml
index 91704d32..434fe2fc 100644
--- a/src/schema/rules/checks/dataset.yaml
+++ b/src/schema/rules/checks/dataset.yaml
@@ -11,7 +11,8 @@ SubjectFolders:
   selectors:
     - path == '/dataset_description.json'
   checks:
-    - length(dataset.subjects.sub_dirs) > 0
+    - check: length(dataset.subjects.sub_dirs) > 0
+      hint: length(dataset.subjects.sub_dirs)

 # 49
 ParticipantIDMismatch:
@@ -24,7 +25,8 @@ ParticipantIDMismatch:
   selectors:
     - path == '/participants.tsv'
   checks:
-    - allequal(sorted(columns.participant_id), sorted(dataset.subjects.sub_dirs))
+    - check: allequal(set(columns.participant_id), set(dataset.subjects.sub_dirs))
+      hint: set(columns.participant_id).difference(dataset.subjects.sub_dirs)

 # 51
 PhenotypeSubjectsMissing:

? Although indeed, while writing a few of those, I started to think about pytest's approach ;-)

edit: 1st example I did was quite dumb ;-)
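
For illustration, a rough sketch of how such a hint could be consumed, assuming a checks entry may carry an optional hint expression evaluated in the same context as the check itself; evaluate stands in for the existing expression evaluator, and all names here are hypothetical:

# Sketch only, not the validator's actual API: `check` is one entry of a
# rule's `checks` list in the proposed {check, hint} form.
def run_check(check: dict, context: dict, evaluate) -> str | None:
    if evaluate(check["check"], context):
        return None  # check passed, no issue to report
    evidence = check["check"]
    if "hint" in check:
        evidence += f"\n  Hint: {check['hint']} = {evaluate(check['hint'], context)!r}"
    return evidence  # attached to the emitted issue as its Evidence string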

effigies commented 2 months ago

Whoever implements this gets to decide whether it makes more sense to modify the schema to allow hints to be specified there or to do introspection.

Personally, I don't want to do either. If we do this, I would probably do it in the Python validator, since I've already written a parser for the expression language. With an AST, we will be able to easily introspect every check.
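
For the record, a sketch of what AST-driven introspection could look like, using Python's ast module and SimpleNamespace contexts as stand-ins for the schema expression parser and its evaluation context (the real parser, evaluator, and context objects in the Python validator differ; values here are hypothetical):

# Sketch only: evaluate every call and attribute access inside a failing
# check expression so their values can be attached to the issue.
import ast
from types import SimpleNamespace

def introspect_failure(expression: str, context: dict) -> dict:
    values = {}
    for node in ast.walk(ast.parse(expression, mode="eval")):
        if isinstance(node, (ast.Call, ast.Attribute)):
            source = ast.unparse(node)
            try:
                values[source] = eval(source, {}, context)  # sketch only
            except Exception:
                values[source] = "<unavailable>"
    return values

# Hypothetical context mirroring the ParticipantIDMismatch check:
context = {
    "allequal": lambda a, b: list(a) == list(b),  # stand-in for the schema function
    "columns": SimpleNamespace(participant_id=["sub-0131", "sub-0132", "sub-0133"]),
    "dataset": SimpleNamespace(
        subjects=SimpleNamespace(sub_dirs=["sub-0131", "sub-0132", "sub-0133", "sub-0147"])
    ),
}
print(introspect_failure(
    "allequal(sorted(columns.participant_id), sorted(dataset.subjects.sub_dirs))",
    context,
))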