bpinsard opened this issue 4 years ago

It would be great to be able to deface any modality. A JSON file with pybids entity filters (like fmriprep --bids-filter-file) could be provided to identify the images to process. Maybe it would also be possible to identify images that share the same field of view from the BIDS sidecars, so that the generated mask could be shared.
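Roughly what I have in mind (just a sketch; the dataset path, filter keys, and entity values are only illustrative, and exact pybids extension handling may differ between versions):

```python
from bids import BIDSLayout

# Illustrative filter-file contents, similar in spirit to fMRIPrep's
# --bids-filter-file: each key maps to a set of pybids entities that
# select the images to deface.
filters = {
    "t1w": {"datatype": "anat", "suffix": "T1w"},
    "flair": {"datatype": "anat", "suffix": "FLAIR"},
}

layout = BIDSLayout("/data/bids_dataset")  # placeholder path

for name, entities in filters.items():
    files = layout.get(
        return_type="filename",
        extension=[".nii", ".nii.gz"],
        **entities,
    )
    print(name, files)
```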
Ahoi hoi @bpinsard,

thank you very much for your post. So far BIDSonym focuses on T1w images, with the option to also deface T2w images using the defaced T1w as a deface mask. What did you have in mind? ASL, PD, FLAIR, SWI, etc.?

A pybids entity filter would be possible, yes. So far we implemented flags (e.g. --deface_t2w).

Evaluating FoV for potential mask sharing is a great idea.
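Just to sketch the FoV idea (untested; file names are placeholders): one could compare matrix size, voxel size, and affine from the NIfTI headers, e.g.:

```python
import numpy as np
import nibabel as nib

def same_fov(path_a, path_b, tol=1e-3):
    """Rough check: same matrix size, voxel size, and orientation/position."""
    a, b = nib.load(path_a), nib.load(path_b)
    same_shape = a.shape[:3] == b.shape[:3]
    same_zooms = np.allclose(a.header.get_zooms()[:3], b.header.get_zooms()[:3], atol=tol)
    same_affine = np.allclose(a.affine, b.affine, atol=tol)
    return same_shape and same_zooms and same_affine

# If this returns True, a defacing mask computed on one image could
# potentially be reused for the other instead of recomputing it.
print(same_fov("sub-01_T1w.nii.gz", "sub-01_FLAIR.nii.gz"))
```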
I just wanted to agree that this would be very helpful. I help out a bit with openneuro.org and am thinking this could be a good tool to recommend to users for defacing their datasets before uploading. But it would be nice if it could handle all anatomical modalities that might need to be defaced.
Thanks for the post @jbwexler! Do y'all have any resource in mind where we could get non-deidentified data including multiple modalities? So far I used scans of me, but I only have T1w and T2w...
I believe we have some, but I'm not sure whether we're allowed to share it... Let me check and get back to you.
Sorry for the delay. I was able to find non-deidentified T2w, T1map, FLAIR, and MEFLASH, though we can't really share them. I tested one image from each of these modalities with pydeface and it seemed to work fine on all of them. I haven't tested them with the other three defacing tools, but I could. I also made a tentative list of modalities I found on OpenNeuro that should generally require defacing. I'm curious what others think of this list:
angio FLAIR FLASH inplaneT1 inplaneT2 MEFLASH mp2rage MTS? PD PDmap? PDT2? T1map T1mw? T1rho? T1w T2map T2star T2w veno
Also, I'm wondering which modalities it should deface by default. Should it just do T1w by default? Should it do all */anat/*.nii* by default? Should it do all the ones from my list above, even though some of those are not actually in the BIDS standard? Should it just do all the anatomical modalities mentioned in BIDS by default?
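For the "all anatomical images" option, the selection itself would be straightforward with pybids. A sketch (the dataset path is a placeholder; the suffix list mirrors the one above minus the question-marked entries, and the exact names/casing would still need checking against the BIDS spec):

```python
from bids import BIDSLayout

# Candidate suffixes to deface by default (illustrative; not all of these
# are valid BIDS suffixes, and casing may need to be adjusted).
DEFACE_SUFFIXES = [
    "angio", "FLAIR", "FLASH", "inplaneT1", "inplaneT2", "MEFLASH",
    "MP2RAGE", "PD", "T1map", "T1w", "T2map", "T2star", "T2w", "veno",
]

layout = BIDSLayout("/data/bids_dataset")  # placeholder path
anat_files = layout.get(
    datatype="anat",
    suffix=DEFACE_SUFFIXES,
    extension=[".nii", ".nii.gz"],
    return_type="filename",
)
print(anat_files)
```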
Hi gang,
finally getting back to this. That's a great list @jbwexler, thanks for that.
I'm currently trying to fix some stuff. While doing so, I thought about renaming the --deface_t2w flag to --deface_multimodal, which would allow users to indicate their modalities or, by default, deface all of them. The way it currently works is that the defaced T1w is used as the defacing mask. Once we have more open, non-deidentified data, we could talk further about @bpinsard's FoV idea.
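To make the T1w-as-mask approach a bit more concrete, roughly this (a sketch only, with placeholder file names and a nilearn resampling step; not the exact BIDSonym code):

```python
import numpy as np
import nibabel as nib
from nilearn.image import resample_to_img

# In the defaced T1w, the removed face voxels are zero, so a binary mask
# can be derived from it and applied to another modality.
t1w_defaced = nib.load("sub-01_T1w_defaced.nii.gz")  # placeholder names
other = nib.load("sub-01_FLAIR.nii.gz")

mask = nib.Nifti1Image(
    (t1w_defaced.get_fdata() != 0).astype(np.uint8), t1w_defaced.affine
)
# Bring the mask onto the target image's grid; nearest neighbour keeps it binary.
mask_on_other = resample_to_img(mask, other, interpolation="nearest")

defaced = nib.Nifti1Image(
    other.get_fdata() * mask_on_other.get_fdata(), other.affine, other.header
)
defaced.to_filename("sub-01_FLAIR_defaced.nii.gz")
```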
That all sounds great to me.
Hi,
any news regarding this issue? Besides T1w, I'd like to deface 3D FLAIR images. This works well with pydeface, but using BIDSonym with its metadata handling for that would be even nicer.
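For reference, what I currently do manually is just call pydeface on the FLAIR directly, which by default writes a *_defaced copy next to the input (from memory, so please double-check the options against your pydeface version):

```bash
pydeface sub-01_FLAIR.nii.gz
```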
Ahoi hoi @m-petersen,

thanks for reviving this conversation. So far there is not really any news here, as it's still super hard to get non-deidentified data to test the different defacing algorithms (which is overall a good thing wrt data privacy, but makes things here a bit difficult). The other thing is that most packages mainly work/work best with T1w images and perform less well on other contrasts/modalities. Furthermore, some are not fully supported by/integrated into BIDS yet. All of these problems are unfortunately things we cannot really address within BIDSonym, as the idea was to bring existing de-identification options together and make them easily applicable to BIDS datasets. That being said, the option for T2w images I described above is still there and could be extended to other modalities (for some contrasts/modalities, images would need to be gathered outside of pybids). However, as said before, the outcomes will vary drastically. I'm more than happy to continue this conversation and evaluate what could be done.
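For the contrasts pybids does not index, something as simple as a glob over the anat folders could do the gathering (purely illustrative, placeholder path and suffix):

```python
from pathlib import Path

# Collect anatomical images with a suffix pybids might not index,
# e.g. a non-standard MEFLASH acquisition.
bids_root = Path("/data/bids_dataset")  # placeholder path
extra_images = sorted(bids_root.glob("sub-*/**/anat/*_MEFLASH.nii*"))
print(extra_images)
```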
What about extending BIDSonym to other modalities with a disclaimer stating that the algorithm performance hasn't been thoroughly tested for them? That way, the user can decide and evaluate. Unfortunately, I won't be able to share my data either. Maybe providing screenshots of the defaced images to assist with assessing the performance is something I can discuss with my supervisors.