Open romainVala opened 5 years ago
Hi all,
I would also very much like to know this :-) Also, can someone tell me what -1, 0, and 1 mean? I assume that 0 means 'maybe', but what about -1 (is it reject or keep?) and 1 (again, is it reject or keep?)?
Thank you very much for any help with this. Kind wishes,
Charlotte
Hello, from what I understand reading the code, they validate the classification as follows: -1 = artefacted; 0 or 1 = good (from the rater's point of view, 0 is doubtful and 1 is good).
Concerning the multiple raters, I understood that they randomly choose one of the 3 raters ... it is not easy to deal with a variable ground truth ...
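If that reading of the code is right, the mapping to a binary target could be sketched as below. This is only my interpretation: the column names, the DataFrame contents, and the "pick one rater at random" step are assumptions for illustration, not the repository's actual code or CSV headers.

```python
import numpy as np
import pandas as pd

# Illustrative stand-in for a ratings table; real column names may differ.
ratings = pd.DataFrame({
    "subject_id": ["s1", "s2", "s3", "s4"],
    "rater_1": [1, -1, 0, 1],
    "rater_2": [1, -1, 1, -1],
    "rater_3": [0, 0, 1, 1],
})

def binarize(label):
    """One possible convention: -1 (artefacted) -> 1 (exclude),
    0 (doubtful) or 1 (good) -> 0 (accept)."""
    return 1 if label == -1 else 0

# One interpretation of "randomly choose one of the 3 raters":
rng = np.random.default_rng(0)
rater_cols = ["rater_1", "rater_2", "rater_3"]
chosen = rng.choice(rater_cols, size=len(ratings))
ratings["y"] = [binarize(ratings.loc[i, c]) for i, c in enumerate(chosen)]
```

The key point is only the grouping: -1 on one side, 0 and 1 together on the other, with the rater picked at random per volume.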
Thank you Romain, very helpful! Now we just need to work out which one is the correct csv, y_abide.csv or labels_abide_allraters.csv. I'm assuming it is y_abide, given that the other one is in the archived folder?!
Yes, I made the same assumption, but I am not sure at all ... the strange thing is that in the archived one there is no empty value (so all the raters rated all the volumes). I did not check the exact difference between the two files. It would be nice if @effigies or @oesteban could confirm.
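For comparing the two files, one way to count "consistent" ratings (the 764-out-of-1100 figure mentioned above) would be to binarize each rater's column and check per-row agreement. The toy data and column names below are placeholders, since I have not checked the actual CSV layout:

```python
import pandas as pd

# Toy stand-in for labels_abide_allraters.csv; real headers may differ.
allraters = pd.DataFrame({
    "rater_1": [1, -1, 0, 1, -1],
    "rater_2": [1, -1, 1, -1, -1],
})

# Binarize each column: -1 (artefacted) -> 1, 0 or 1 -> 0.
bin_labels = allraters.apply(lambda col: (col == -1).astype(int))

# A volume is "consistent" when all raters land on the same binary label.
consistent = bin_labels.nunique(axis=1) == 1
print(int(consistent.sum()), "consistent of", len(allraters))
```

With a count like this, one could keep only the consistent rows as a conservative ground truth and drop (or down-weight) the disagreements.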
Many thanks
My thoughts exactly! I have also emailed Dr Esteban directly - will let you know when he gets back to me. Best wishes, C
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi Romain,
I'm sorry but I really don't think you have the right person. I have not yet provided any tools on github.
Good luck though! Charlotte
From: valabregue Sent: Monday, March 14, 2022 8:51 AM To: nipreps/mriqc-learn Subject: [nipreps/mriqc-learn] abide, manual rating ... what the ground truth ? (#12)
Hello,
Thank you for providing this nice tool, and sorry if this is not the right place to ask.
I am trying to replicate the learning on the ABIDE dataset, and I wonder how to use the manual ratings.
First, I do not know which file to choose: y_abide.csv or labels_abide_allraters.csv (in the archive subdir).
I tried with the first one, and I found that the raters disagree on half of the lines ... (it is quite a lot!). With the second one I get 764 consistent ratings out of 1100.
So which one should I use, and what should I do in case of disagreement? Which label should I set?
Since MRIQC performs a binary classification, what should be done with the "doubtful" label? Is it treated as noise?
So I do not see how to deduce a ground-truth label (0/1) for all ABIDE T1w volumes.
Many thanks for your help, and sorry if I missed the explanation in an article.
Romain
PS: what about abide_MS.csv and abide_DB.csv? They seem to contain ratings by a single rater, on a subset only.