dgai91 opened this issue 1 month ago
Hello, I applied for all of the HCP (Human Connectome Project) Young Adult data and used the 17-network ROIs (regions of interest) to extract features from the fMRI data. I hope to use your model to differentiate between working memory tasks with different loads and categories. However, the performance of the model I trained is poor. Could you provide the parameters and data used to train your model?
Hi, the main parameters are set as follows: the convolution kernel size k is set to 5, the scalar α is set to 0, ε is set to 10^-4, and the kernel width σ is initialized to 0.1; for SGD, the learning rate is 0.005, the weight decay is 10^-5, and the momentum is 0.9. I noticed that you use only 17 ROIs (we used 268), which is quite small and we have not tried it before. You could try adjusting the window size, the convolution kernel, and the number of network layers, and you could also test our other method, "Uncovering shape signatures of resting-state functional connectivity by geometric deep learning on Riemannian manifold", which has fewer parameters.
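For illustration, a minimal sketch of that SGD configuration, assuming PyTorch; the one-layer Conv1d model below is a hypothetical placeholder (268 input channels for the Shen ROIs, kernel size k = 5), not our actual architecture:

import torch

# Hypothetical placeholder network; only the kernel size k = 5 comes from the settings above.
model = torch.nn.Sequential(
    torch.nn.Conv1d(in_channels=268, out_channels=64, kernel_size=5),
    torch.nn.ReLU(),
)
# SGD with the quoted hyperparameters: lr = 0.005, weight decay = 1e-5, momentum = 0.9.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, weight_decay=1e-5, momentum=0.9)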
By the way, we have provided some simulated data. For the real training data, as mentioned, we used the Shen functional atlas [28], so you can process the data following this atlas; otherwise, could you contact the corresponding author (grwu@med.unc.edu) by email? Thank you for your understanding.
Hi Dr. Dan! Thanks for your response. I would like to further understand the process of generating the HCP time series. I have not yet run ICA-AROMA, but I have already constructed the masker according to the description in your paper:
import nilearn as nil
import nilearn.image  # makes nil.image available
from nilearn.maskers import NiftiLabelsMasker
atlas_path = roi_label_root + 'shen_1mm_268_parcellation_MNI152NLin2009cAsym.nii.gz'
atlas_image = nil.image.load_img(atlas_path)
# nearest-neighbour interpolation preserves the integer ROI labels when resampling
atlas_image = nil.image.resample_to_img(atlas_image, target_atlas_image, interpolation='nearest')
atlas_masker = NiftiLabelsMasker(atlas_image, low_pass=0.08, high_pass=0.009, standardize=True, t_r=0.72)
In addition, the data I am using is 'HCP_xxxxxx_tfMRI_WM_LR.nii.gz', and each file has 405 time points. Before processing, I remove the first five dummy scans. However, I noticed that the fMRI data in your paper has only 393 time points. My code for constructing the time series is as follows:
mri_file = 'HCP_xxxxxx_tfMRI_WM_LR.nii.gz'
scan_image = nil.image.load_img(mri_root + mri_file)
# fit_transform returns a (time points, ROIs) array; drop the first 5 dummy scans
roi_timeseries = atlas_masker.fit_transform(scan_image)[5:]
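As a quick sanity check (my own addition, assuming the 268-ROI Shen atlas and the 405-volume LR run), the shape of the resulting array makes the mismatch concrete:

# with 405 volumes minus 5 dummy scans and 268 ROIs, I get:
print(roi_timeseries.shape)  # (400, 268), not the 393 time points reported in the paper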
My question is: apart from ICA-AROMA, is the above processing consistent with the processing in your paper? And how were the 393 time points obtained?
Hi, for the data question, could you please contact the corresponding author (grwu@med.unc.edu)? Prof. Wu will give you a detailed response. Thanks.