Open Edouard2laire opened 2 days ago
Hi Edouard,
It was great meeting you at the SfNIRS conference!
In the script "NeuroDOT_PreProcessing_Script_Workshop", we decided to use the dataset "sub-06_ses-01_task-CV002_nirs.mat" for the workshop, to show an example of DOT data collected during a language task -- here, a covert verb generation paradigm. This file is included in the "SfNIRS_24_Example_Data" folder that you downloaded from NITRC.
In the tutorial, the data used was the "NeuroDOT_Data_Sample_CCW1" dataset, which uses a rotating-wedge retinotopy paradigm.
If you would like to replicate the results from the tutorial, use the "NeuroDOT_Data_Sample_CCW1" dataset; you should then be able to create figures identical to those in the PowerPoint. Let me know if you have any other questions about the tutorial or the data being used.
Best, Emma
Thanks a lot for the quick reply!
> If you would like to replicate the results from the tutorial, you should use the "NeuroDOT_Data_Sample_CCW1" dataset, then you should be able to create identical figures to the PowerPoint.
Thanks a lot, it is indeed working. :)
> In the tutorial, the data used was the "NeuroDOT_Data_Sample_CCW1" dataset, which uses a rotating-wedge retinotopy paradigm.
Is this dataset part of the article 'Phase-encoded retinotopy as an evaluation of diffuse optical neuroimaging' (https://www.sciencedirect.com/science/article/pii/S1053811909007939?ref=pdf_download&fr=RR-2&rr=8d3ae7214c9fa269#f0005)?
If yes, I have two follow-up questions:
> In the script "NeuroDOT_PreProcessing_Script_Workshop", we decided to use the dataset called "sub-06_ses-01_task-CV002_nirs.mat", for the sake of the workshop to show an example of DOT data collected during a language task -- here, a covert verb generation paradigm.
Is this part of the article "Mapping cortical activations underlying covert and overt language production using high-density diffuse optical tomography" (https://www.sciencedirect.com/science/article/pii/S1053811923003415?via%3Dihub#refdata001)?
Shouldn't we expect to see a response in this case as well when looking at the block average?
Thanks a lot, Edouard
Edit: Also, what coordinate system is used for the optodes? I tried to convert the data to SNIRF so I could load it in nirstorm, but I have an issue with the registration to the ICBM template:
I think it is because the file doesn't contain the usual landmarks (nasion, left/right ear points) present in SNIRF files, which we use to co-register the montage with the anatomy.
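For what it's worth, once matching fiducials are available in both coordinate systems, the co-registration reduces to a rigid fit. Below is a minimal NumPy sketch of that step using the Kabsch algorithm; it is purely illustrative (the function name and example coordinates are made up), not nirstorm's or NeuroDOT's implementation:

```python
import numpy as np

def fiducial_transform(src, dst):
    """Rigid transform (rotation R, translation t) mapping src points onto dst.

    src, dst: (N, 3) arrays of paired landmarks (e.g. nasion, left ear,
    right ear) in the two coordinate systems. Uses the Kabsch algorithm.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy example: template fiducials vs. the same points shifted by 10 mm in x.
template = np.array([[0.0, 90.0, 0.0],    # nasion
                     [-75.0, 0.0, 0.0],   # left ear
                     [75.0, 0.0, 0.0]])   # right ear
subject = template + np.array([10.0, 0.0, 0.0])
R, t = fiducial_transform(subject, template)
aligned = subject @ R.T + t               # subject fiducials in template space
```

The same R and t can then be applied to every optode position to bring the montage into the template's space.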
Hello,
As we discussed during the fNIRS conference, I am interested in using some of the HD-DOT data to evaluate our source reconstruction algorithm called MEM (https://github.com/multifunkim/best-brainstorm).
I was trying today to reproduce the results from the tutorial, but it seems I can't estimate the response after the preprocessing.
Instead, I get the following response:
Here is what I have done:
1. Downloaded the toolbox from this GitHub repository
2. Executed the script 'NeuroDOT_PreProcessing_Script_Workshop.m' under SfNIRS_24_Scripts
3. Looked at the figure from the block average:
```matlab
badata = BlockAverage(lmdata, info.paradigm.synchpts(info.paradigm.Pulse_2), dt);
badata = bsxfun(@minus, badata, mean(badata, 2));

figure('Position', [100 100 550 780])
subplot(2, 1, 1);
plot(badata(keep, :)');
set(gca, 'XLimSpec', 'tight')
xlabel('Time (samples)'), ylabel('log(\phi/\phi_0)')

m = max(max(abs(badata(keep, :))));
subplot(2, 1, 2);
imagesc(badata(keep, :), [-1, 1] .* m);
colorbar('Location', 'northoutside');
xlabel('Time (samples)');
ylabel('Measurement #')
```
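For comparison, the block-average-and-baseline step above can be sketched in NumPy. This is an illustrative re-implementation with made-up variable names and toy data, not the NeuroDOT `BlockAverage` code:

```python
import numpy as np

def block_average(data, onsets, dt):
    """Average fixed-length epochs of `data` (measurements x time) starting
    at each sample index in `onsets`; epochs running past the end are dropped."""
    epochs = [data[:, s:s + dt] for s in onsets if s + dt <= data.shape[1]]
    avg = np.mean(epochs, axis=0)
    # Remove each measurement's mean, as in the bsxfun(@minus, ...) line above.
    return avg - avg.mean(axis=1, keepdims=True)

# Toy example: two channels with a repeating 5-sample pattern at three onsets.
pattern = np.array([[0, 1, 2, 1, 0],
                    [0, -1, -2, -1, 0]], float)
data = np.zeros((2, 30))
onsets = [2, 12, 22]
for s in onsets:
    data[:, s:s + 5] += pattern
badata = block_average(data, onsets, dt=5)
```

If the paradigm pulses line up with a real evoked response, the averaged epochs should show it clearly after this baseline removal.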