mekman closed this issue 6 years ago
I checked the Log*eventlog.csv file and it has some inconsistency right after the MRI-trigger is received. That is a different bug, though: apart from these few weird events the timestamps are naturally increasing (we may still want to check what's going on there). To look into this negative-timestamps issue we'll probably need to trace how exactly the fsl-style event list is generated.
Where do I find those files? Are the timestamps in it at least consistently increasing (suggesting that there's simply a wrong time subtracted from all of them)?
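A quick way to test that "wrong time subtracted from all of them" hypothesis, once we find the files: if a single wrong reference time was subtracted, the generated onsets should still be strictly increasing and should differ from the tsv onsets by one constant. A minimal sketch (the function name and the example values are mine, not from the pipeline):

```python
def check_offset_hypothesis(raw_times, fsl_times, tol=1e-6):
    """Return True if fsl_times look like raw_times shifted by one constant.

    raw_times: onsets from the eventlog/tsv (assumed correct).
    fsl_times: onsets from the generated fsl-style event list.
    """
    # The hypothesis requires the generated onsets to stay strictly increasing.
    increasing = all(b > a for a, b in zip(fsl_times, fsl_times[1:]))
    # All pairwise offsets must be (nearly) identical.
    offsets = [f - r for r, f in zip(raw_times, fsl_times)]
    constant = max(offsets) - min(offsets) < tol
    return increasing and constant

# Hypothetical example: the same events shifted by one wrong reference time.
raw = [27.7357, 31.9889, 36.5]
shifted = [t - 248.055 for t in raw]
print(check_offset_hypothesis(raw, shifted))  # True
```

If the offsets are not constant (as the trial-difference check further down suggests), the bug is more than a wrong subtraction.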
BTW, I also noted that on the VCNIN server there are 2 imaging sessions that haven't been copied from Data_raw to NHP-BIDS. I don't know what file-storage you're working on, or whether you were aware of this, but it could be that there's more curve-tracing data than you thought...
Manually checking in the tsv file when things happen:
First DL trial:
INIT_TRIAL = 26.7358
PRESWITCH = 27.7357
SWITCHED = 30.7756
ResponseGiven = 31.1842 [CORRECT]
POSTSWITCH = 31.2022
I assume this trial should correspond with the first row in:
out_events = {'AttendDL_COR':
   amplitude     dur      time
0        1.0  3.5065 -220.3193
Amplitude is fixed at 1; dur = POSTSWITCH - PRESWITCH = 31.2022 - 27.7357 = 3.4665
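In other words, the per-trial onset/duration I'd expect would be computed like this (a sketch; the dict stands in for one parsed tsv trial, with the event names taken from the listing above):

```python
# One trial's event timestamps, transcribed from the tsv listing above.
trial = {"INIT_TRIAL": 26.7358, "PRESWITCH": 27.7357,
         "SWITCHED": 30.7756, "ResponseGiven": 31.1842,
         "POSTSWITCH": 31.2022}

# Expected fsl-style values: onset at PRESWITCH, duration until POSTSWITCH.
onset = trial["PRESWITCH"]
dur = trial["POSTSWITCH"] - trial["PRESWITCH"]
print(round(onset, 4), round(dur, 4))  # 27.7357 3.4665
```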
This is what curvetracing.py seems to do, but neither the duration (+0.04 off) nor the timestamp (-251.5215 off) matches the output above.
If I look at the difference in SWITCHED timestamps between the first and second AttendDL trials, it's 31.9889 - 27.7357 = 4.2532 in the tsv file and -216.0395 - (-220.3193) = 4.2798 in the event list, so that seems wrong as well.
Taking the last POSTSWITCH timestamp after a correct response + 15 s gives an end-time of 1043.168, so at least that one seems to be calculated correctly.
Maybe we should look at it together next week, or perhaps Jonathan has time to explain what we're missing.
I think the time issue is fixed (I'll keep this issue open until final confirmation from the lisa job)
For the timing of your example I now get the correct output:
In [8]: split_ev['AttendDL_COR'][0].time_s
Out[8]: 27.7357
In [9]: split_ev['AttendDL_COR'][0].dur_s
Out[9]: 3.4665
Regarding the other two scanning sessions, 20180228 and 20180301: the former contains "illegal" NIfTI files (opening them with fslview/fsleyes gives this error). Do you know anything about that? For the latter I see the DICOMs in Data_dcm. How do you convert them to Data_raw?
The illegal NIfTI files could have something to do with the fact that JW manually stopped these acquisitions. This can leave some volumes incompletely sampled, resulting in a different number of slices for different volumes in the same time-series. JW wrote a script to fix this, available here. I haven't used it myself, but you can give it a try.
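I haven't seen JW's script, but the idea is presumably something like the following sketch: group the acquired slices by volume and drop any volume that doesn't have the full slice count (the (volume, slice) pair representation is an assumption; real DICOMs would carry this in header tags):

```python
from collections import defaultdict

def drop_incomplete_volumes(slices, slices_per_volume):
    """slices: iterable of (volume_index, slice_index) pairs.
    Returns only the volumes that were fully sampled."""
    volumes = defaultdict(list)
    for vol, sl in slices:
        volumes[vol].append(sl)
    # Keep a volume only if every expected slice arrived.
    return {vol: sorted(sls) for vol, sls in volumes.items()
            if len(sls) == slices_per_volume}

# Volume 2 was interrupted mid-acquisition: only 2 of its 3 slices arrived.
acquired = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1)]
print(sorted(drop_incomplete_volumes(acquired, 3)))  # [0, 1]
```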
As for the Data_dcm (dicom) to Data_raw (nifti) conversion: you can either use this script or manually use:
dcm2niix -o <output folder> -b y -z y <DICOM folder>
The -b y option gives you the json sidecar files, while -z y compresses the exported .nii to .nii.gz.
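For converting several sessions in one go, a thin wrapper around that same command could look like this (the paths are placeholders; dcm2niix itself must be on the PATH):

```python
import subprocess
from pathlib import Path

def dcm2niix_cmd(dicom_dir, out_dir):
    """Build the dcm2niix call: -b y writes JSON sidecars, -z y gzips."""
    return ["dcm2niix", "-o", str(out_dir), "-b", "y", "-z", "y", str(dicom_dir)]

def convert_session(dicom_dir, out_dir):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(dcm2niix_cmd(dicom_dir, out_dir), check=True)

# Example command for one session (placeholder paths):
print(dcm2niix_cmd("Data_dcm/20180301", "Data_raw/20180301"))
```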
Thanks - I will try that.
Can you wait for a bit? I already started the process, because there seemed to be more wrong than initially thought. There are no dicom files for run-09 (scan 16) of session 20180301. It is also missing on the xnat server, so if it ever existed and 'just' wasn't exported correctly it's gone now. For the other runs, I ran the dicom-restore to account for missing slices, then exported to nifti files which should be on VCNIN now.
I re-downloaded 20180228 from the server as the dcm files weren't present on VCNIN. I'm currently copying to Data_dcm and will convert to nifti (with fixes where required) when it's done. Hopefully these new niftis will be better.
okay - thanks
Fixed. Apart from that one run for which there are no imaging files (20180301, run-09) there should now be readable nifti files for all.
awesome, cheers!
I'm only noticing now that:
NHP_MRI/Data_raw/EDDY/20180228/MRI/NII/801_run003_CurveTrace_TR2.5s_run003_CurveTrace_TR2.5s_20180228100159_801.nii.gz
and:
801_run003_CurveTrace_TR2.5s_run003_CurveTrace_TR2.5s_20180228100159_801a.nii.gz
are still "invalid" and also <30 MB
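For spotting leftover broken runs like these, a quick size scan can flag candidates (small_niftis is my sketch; the 30 MB threshold just mirrors the observation above and should be adjusted per protocol):

```python
from pathlib import Path

def small_niftis(root, threshold_mb=30):
    """List .nii.gz files under root smaller than threshold_mb."""
    limit = threshold_mb * 1024 * 1024
    return sorted(p for p in Path(root).rglob("*.nii.gz")
                  if p.stat().st_size < limit)

# Example path from above; guard so the sketch runs anywhere.
root = Path("NHP_MRI/Data_raw/EDDY/20180228/MRI/NII")
if root.exists():
    for p in small_niftis(root):
        print(p)
```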
You said that you ran the fix-script on these, right?
Hm, I did. But I also don't have a local copy of these datasets anymore, so I guess something may have gone wrong with copying/syncing. I'll look into it and try to fix it. Sorry for whatever happened....
No, no, don't be sorry :) Just trying to understand which files to use. I guess those runs are just lost; I'll continue with the analysis without them. Cheers!
I see what the problem is: the nifti files that start with a number are the old (unfixed) ones. In the Data_dcm folder I could see that I had indeed fixed the dicoms, and after fixing the missing-slices issue at the dicom level I re-converted the functional runs from dicom to nifti. The new ones have filenames that start with 'DICOM'.
If I now look at run003 specifically, I cannot open the version you mentioned above ('801_run003_etc'), but the 'DICOM_run003_etc' version opens just fine in fsleyes. I now deleted the old functional runs to avoid confusion.
Since missing slices are not an issue for the anatomicals, those weren't reconverted and still have their old filename format...
I also checked and fixed the 20180301 session which had similar issues...
Awesome, that works!
res.output lists negative onset times