Hi Professor Manning,
The figures we generated for the synthetic data tests might be too wide to fit within the required thesis margins. When scaled down to the right size, the annotations and captions are barely legible...
Should I get rid of the annotations on the right and add explanations in text instead?
Thanks, Tom
You may have to play around with font sizes...just do your best based on what seems reasonable to you!
Hi Professor Manning,
As we are not including our toolbox as part of the thesis, I am thinking about taking the benchmark results out, since they don't really fit into the overall narrative...What do you think?
Thanks, Tom
Sure; it's up to you...
Okay, right now I am already at 60 pages, but I am still in the process of fixing the results section and have yet to add to the conclusion section...
First, we should do a thorough exploration of how well we can recover "ground truth" parameters from several illustrative synthetic datasets (for example, the blocked and ramping datasets we've been exploring). We'll also need good benchmark datasets for the "across" version of timecorr. For example, we could generate analogs of the blocked and ramping datasets, but add a little noise to each "subject's" data to generate a simulated multi-subject dataset.
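Here's a minimal sketch of what I have in mind for the multi-subject case (this is not the timecorr API; the function names and parameters below are illustrative assumptions): build one shared "blocked" ground-truth timeseries whose correlations change at block boundaries, then give each simulated subject an independent noise realization of that same signal.

```python
# Sketch (not the timecorr API): a "blocked" multi-subject synthetic dataset.
# One shared ground-truth signal; each "subject" adds independent noise.
import numpy as np

def random_corrmat(k, rng):
    """Draw a random k x k correlation matrix (positive definite, unit diagonal)."""
    a = rng.standard_normal((k, k))
    cov = a @ a.T + k * np.eye(k)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

def simulate_blocked(T=300, k=10, n_blocks=5, seed=0):
    """T x k timeseries whose underlying correlations change at block boundaries."""
    rng = np.random.default_rng(seed)
    block_len = T // n_blocks
    corrs = [random_corrmat(k, rng) for _ in range(n_blocks)]  # ground truth, per block
    data = np.vstack([rng.multivariate_normal(np.zeros(k), c, size=block_len)
                      for c in corrs])
    return data, corrs

# Shared ground truth; each simulated subject is the same signal plus noise.
signal, true_corrs = simulate_blocked()
rng = np.random.default_rng(1)
subjects = [signal + 0.1 * rng.standard_normal(signal.shape) for _ in range(10)]
```

A ramping analog would work the same way, except the per-timepoint covariance would interpolate gradually between two correlation matrices rather than switching at block boundaries.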
Second, we should have some performance benchmarks showing how well the approach scales as a function of dataset size, along the lines of what's described in this issue.
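For the benchmarks, something like the following would be enough (again, a sketch under assumptions: `run_timecorr` is a placeholder to be replaced with the actual toolbox call being timed):

```python
# Sketch: time an analysis function on progressively larger synthetic datasets.
import time
import numpy as np

def benchmark(analysis_fn, timepoints=(100, 250, 500, 1000), features=10, reps=3, seed=0):
    """Return (T, mean seconds) pairs for increasing numbers of timepoints."""
    rng = np.random.default_rng(seed)
    results = []
    for T in timepoints:
        data = rng.standard_normal((T, features))
        times = []
        for _ in range(reps):
            start = time.perf_counter()
            analysis_fn(data)
            times.append(time.perf_counter() - start)
        results.append((T, float(np.mean(times))))
    return results

def run_timecorr(data):
    return np.corrcoef(data.T)  # placeholder; swap in the real timecorr call

for T, secs in benchmark(run_timecorr):
    print(f"T={T}: {secs:.4f} s")
```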
Third, we should report results on real fMRI datasets, along the lines of what's described in this issue. Specifically: how does decoding accuracy change as a function of level?
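To make the decoding analysis concrete, here is a sketch of the timepoint-decoding step only (how each "level" is computed is left to the toolbox; the inputs are assumed to be lists of T x k arrays, one per subject, for two subject groups):

```python
# Sketch: timepoint decoding between two groups of subjects.
import numpy as np

def decoding_accuracy(group_a, group_b):
    """Fraction of timepoints whose group-A pattern best matches the same timepoint in group B."""
    a = np.mean(group_a, axis=0)  # T x k group average
    b = np.mean(group_b, axis=0)
    T = a.shape[0]
    corrs = np.corrcoef(a, b)[:T, T:]          # T x T cross-timepoint correlations
    decoded = np.argmax(corrs, axis=1)         # best-matching timepoint in group B
    return float(np.mean(decoded == np.arange(T)))

# Usage idea: compute accuracy at each level, e.g.
# accuracies = [decoding_accuracy(levels_a[lev], levels_b[lev]) for lev in range(n_levels)]
```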
Throughout, provide enough description to clearly explain the results, including the motivation for each analysis: walk the reader through why each analysis was done and what we found, so the intuition behind the results comes through.