liutianlin0121 closed this issue 7 years ago
First you need to understand that not all the figures can be produced using the code that is in the repo, but the existing code should give you a good enough idea of how to use the different functions.
Ans: `cross_validatie`
Thanks for your responses!
For 5, I don't see why MLS is always negative. MLS is defined as (PreRMSE - PostRMSE) / PreRMSE. Training reduces the RMSE, i.e., PreRMSE > PostRMSE, so the numerator is positive. The denominator is positive by definition. So the MLS is always positive. This implies that the larger the MLS (the closer it is to 1), the better the learning skill.
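To make the sign argument concrete, here is a minimal sketch of the MLS formula as defined above (the function name is my own, not from the repo):

```python
def motor_learning_skill(pre_rmse, post_rmse):
    """Relative motor learning skill: the fraction of the pre-training
    RMSE that is removed by training."""
    return (pre_rmse - post_rmse) / pre_rmse

# Training that reduces the error (PreRMSE > PostRMSE) gives a positive MLS:
print(motor_learning_skill(1.0, 0.6))  # 0.4: 40% of the pre-training error removed
# No improvement (PreRMSE == PostRMSE) gives MLS = 0; a perfect learner approaches 1.
print(motor_learning_skill(1.0, 1.0))  # 0.0
```

So 0 marks "no improvement" and values closer to 1 mark stronger learning, which is why "the smaller the better" in the report reads like a typo.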
For 1, I insist that it does not seem fair to use a cross-validation fold as the "testing set": one may tune model parameters on the training set or a validation set, but definitely not on the testing set. In the current version, testing set = validation set, so you are actually tuning the parameters on the testing set itself. In fact, I know someone who used a similar trick and had to retract his paper from NIPS; see https://www.zhihu.com/question/50508148 . I sincerely suggest that you confirm this treatment with Prof. Jaeger. I'll definitely discuss it with him if I officially embark on this project in the coming semester.
Oops, I think that was another typo in the report, then. I just checked the raw RMSE: PreRMSE is indeed greater than PostRMSE. That also makes sense, because training should give us a smaller RMSE. I will update the equation.
Hmm, that's a good point. I think the best way is to hold out ten percent of the subjects, or an even smaller portion. The cross-validation error should not be taken as the actual testing error.
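The holdout-before-cross-validation protocol being proposed could look something like the sketch below (a minimal illustration, not the repo's actual code; the function name and ten-percent default are my own assumptions):

```python
import numpy as np

def split_subjects(subject_ids, test_fraction=0.1, n_folds=5, seed=0):
    """Hold out a genuine test set of subjects FIRST, then build k-fold
    cross-validation splits from the remaining (development) subjects only.
    The held-out subjects are never seen during parameter tuning."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(np.asarray(subject_ids))
    n_test = max(1, int(round(test_fraction * len(ids))))
    test_ids, dev_ids = ids[:n_test], ids[n_test:]
    # Cross-validation folds are carved out of the development set only.
    folds = np.array_split(dev_ids, n_folds)
    return test_ids, folds

# With the 75 subjects from Table 1:
test_ids, folds = split_subjects(np.arange(75))
print(len(test_ids))  # 8 subjects reserved for the final, untouched evaluation
```

The key property is that the cross-validation error (over the folds) guides hyperparameter choice, while the held-out subjects yield the reported test error exactly once at the end.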
More questions:
It seems that the script trainDriver.m only reconstructs the training errors. Would you extend this script so that the testing errors, as shown in Table 1 of the report, can be reconstructed? In fact, I don't see where the testing samples are in the repo. Correct me if I am wrong, but it seems to me that for each model, say OH vs YH, you use up all the samples under this model to train the ESN. Would you specify how you divided the training and testing sets in the report for each model? Note that the "testing sets" in k-fold cross-validation aren't really testing sets. A genuine testing set must consist of data that the ESN never encounters during the whole training phase.
Running trainDriver.m, I see the training error is approximately 0.18, which matches what Table 2 in your report claims. However, if I use the other datasets, say the first and the third dataset, the training error is approximately 0.3, which is far from what Table 2 in the report claims. I think this is probably due to the parameters: one has to choose a different set of parameters for each dataset. Would you specify what parameters should be used for the first, second, and third datasets respectively, so that the results in Table 2 can be reconstructed?
Some questions I asked last week are not yet addressed in the report. I copy the questions that still puzzle me here.
PreRMSE and PostRMSE are defined as the motor performance before and after the motor learning training. But what exactly do PreRMSE and PostRMSE refer to? Perhaps the RMSE of the first trial and that of the eighth trial in the modulation-task experiment?
The report states: "For the relative MLS, the smaller the value is, the more one's motor skills can improve after training." Perhaps instead of "smaller" you mean "larger". Indeed, suppose one has an MLS of 0, which is small enough; but that is exactly the case where PreRMSE = PostRMSE, implying that she has no improvement at all.
In Table 1, there are 28 + 23 + 24 = 75 subjects in total. But on page 14 you mention that there are 77 samples: "Finally, we get rid of the relative MLS outliers which leave us with 77 training samples." Why are 2 subjects missing? This problem is non-trivial: once the code is released, it may give the reader the impression that you are using more training samples than you claimed. For instance, for model b (OL vs YH), there are 53 names in the dataset thirdGroup.mat, so you are training the ESN with 53 samples. But according to Table 1 in the report, only 28 + 23 = 51 samples should be used.
Thanks!