Closed: mobeets closed this issue 4 years ago
From manuscript:
"We included sessions in which there existed a block of at least 100 consecutive trials that showed both substantial learning of the second mapping and consistent behavior. To identify trials showing substantial learning, we computed the running mean of the target acquisition time, smoothed with a 100-trial boxcar shifted one trial at a time. The smoothed acquisition time for a trial then corresponded to the average acquisition time within a 100-trial window centered on that trial. We then normalized these values so that $1$ corresponded to the largest acquisition time in the first 25 trials using the second mapping, and $0$ corresponded to the smallest acquisition time in the subsequent trials using the second mapping. We defined trials showing substantial learning as those with normalized acquisition times below $0.5$.
"Next, to identify trials with consistent behavior, we computed the running variance of the target acquisition time. This was computed by taking the variance of the smoothed acquisition time above in a 100-trial boxcar, shifted one trial at a time. We then normalized these variances so that $1$ corresponded to the largest variance in the first half of trials using the second mapping, and $0$ corresponded to the smallest variance in any trial using the second mapping. We defined trials showing stable behavior as those with normalized variance below $0.5$. We selected the longest block of at least 100 trials in each session that passed both of these criteria. If no such block of trials was found, we excluded that session from our analyses. This procedure resulted in the 41 sessions across three monkeys that we included in our analyses."
Sessions skipped: 20130527, 20130612, 20131208. What happened?
Also, there are now more sessions I could include...
20120222, 20120303, 20120327, 20130528, 20130619, 20131124, 20131125, 20131211, 20160402, 20131214
Ahh, I see: I was using a file called 'goodTrials_trialLength', but the params in it were actually based on progress (not trial length), with the following opts:
muThresh: 0.5000
varThresh: 0.5000
trialsInARow: 10
groupEvalFcn: @numel
minGroupSize: 150
meanBinSz: 150
varMeanBinSz: 100
varBinSz: 100
behavNm: 'progress'
Note that several of these differ from the manuscript text, which implies minGroupSize=100, meanBinSz=100, trialsInARow=100, and behavNm='trial_length'.
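Might be worth a guard so a stale opts file can't slip through silently again. A hypothetical sketch (`loadedOpts` stands for whatever goodTrials_trialLength actually stores):

```matlab
% Hypothetical sanity check: assert the loaded opts match what the
% manuscript describes before using the saved good-trials file.
expected = struct('muThresh', 0.5, 'varThresh', 0.5, ...
    'meanBinSz', 100, 'minGroupSize', 100, 'behavNm', 'trial_length');
fn = fieldnames(expected);
for k = 1:numel(fn)
    assert(isequal(loadedOpts.(fn{k}), expected.(fn{k})), ...
        'opts mismatch: %s', fn{k});
end
```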
Relevant code in nullSpaceControl:
behav2.plotThreshTrials
tmpShowBehav
behav2.asymptotesAll
plot.plotBehavAllSessions
A few notes:
Updated:
opts =
muThresh: 0.5000
varThresh: 0.5000
meanBinSz: 100
varBinSz: 100
maxTrialSkip: 10
minGroupSize: 100
behavNm: 'trial_length'
lastBaselineTrial: 50
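One parameter worth flagging: maxTrialSkip replaced trialsInARow. My reading, and this is an assumption on my part (the actual logic is in behav2.plotThreshTrials), is that passing trials separated by gaps of at most maxTrialSkip failing trials get merged into one block, roughly:

```matlab
% Assumed reading of maxTrialSkip: merge runs of passing trials
% separated by at most maxTrialSkip failing trials.
maxTrialSkip = 10;
minGroupSize = 100;
okIdx = find(ok);                          % ok = isLearned & isStable, as above
gaps = diff(okIdx);                        % gap g means g-1 skipped trials
brk = [0; find(gaps > maxTrialSkip + 1); numel(okIdx)];
best = [];
for k = 1:numel(brk) - 1
    grp = okIdx(brk(k)+1 : brk(k+1));      % one merged block of trials
    if numel(grp) > numel(best)
        best = grp;
    end
end
if numel(best) < minGroupSize
    best = [];                             % no qualifying block: exclude session
end
```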
[figures: normalized acquisition time; raw acquisition time. Example session: 20131218]
[figure: updated with all sessions]
Should I include incorrect trials or not? I was not before, and I think excluding them makes sense given the way I choose the max time to normalize to: if I include incorrect trials, the max time will always be the max trial length (the timeout), provided there is at least one incorrect trial in the first 50 trials. That would make learning look like it's happening faster than it actually is...
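Toy arithmetic for the concern (numbers made up): with a 6 s timeout, a single incorrect trial in the baseline window pins the normalization max at the timeout, shrinking every normalized value:

```matlab
% Toy illustration of the normalization concern (all values hypothetical).
timeout = 6;                                 % max trial length, s
% Baseline max with correct trials only: say 3.0 s; overall min: 1.2 s
n1 = (1.5 - 1.2) / (3.0 - 1.2);              % a 1.5 s trial -> 0.17
% One timed-out (incorrect) baseline trial pins the max at 6.0 s
n2 = (1.5 - 1.2) / (timeout - 1.2);          % the same trial -> 0.06
% Same behavior, smaller normalized times: more trials clear
% muThresh = 0.5, so learning appears faster.
```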
[figures, not including incorrects: acquisition times; actual progress; inferred WMP progress; inferred WMP progress during WMP only (relative to avg during Intuitive)]