Closed · larsoner closed 4 years ago
Merging #408 into master will increase coverage by 0.50%. The diff coverage is n/a.
```
@@            Coverage Diff             @@
##           master     #408      +/-   ##
==========================================
+ Coverage   87.99%   88.50%    +0.50%
==========================================
  Files          49       49
  Lines        6531     6568       +37
  Branches     1081     1086        +5
==========================================
+ Hits         5747     5813       +66
+ Misses        532      506       -26
+ Partials      252      249        -3
```
I'm getting an `AssertionError` at that new line. I didn't have `stim_rms` set at all in the `ExperimentController` call, but the error is the same with any combination of `stim_db` or `stim_rms` set (or omitted) in the `ec` call.
Also of note: working from one of Maddy's scripts, I also added a missing `check_rms='wholefile'` argument, which cut buffer time by an order of magnitude; it's now 700-800 ms, in line with performance in her code with stimuli of the same duration.
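For context, a minimal sketch of why the two RMS-checking modes differ in cost (this is an illustration in plain numpy, not expyfun's actual implementation; the function names and the 10 ms window length are assumptions): a whole-file check is a single pass over the data, while a windowed check computes an RMS for every short window and reduces over all of them.

```python
import numpy as np

def rms_wholefile(data):
    """RMS over the entire signal at once (one reduction, cheap)."""
    return np.sqrt(np.mean(data ** 2))

def rms_windowed(data, fs, win_dur=0.01):
    """Max RMS over consecutive short windows (many reductions, costlier)."""
    win = int(win_dur * fs)
    n = (data.shape[-1] // win) * win  # drop any ragged tail
    chunks = data[..., :n].reshape(-1, win)
    return np.sqrt(np.mean(chunks ** 2, axis=-1)).max()

fs = 48000
rng = np.random.RandomState(0)
data = 0.01 * rng.randn(fs)  # 1 s of noise with RMS ~0.01
print(rms_wholefile(data))      # close to 0.01
print(rms_windowed(data, fs))   # >= the whole-file value by construction
```

The max-over-windows value is always at least the whole-file RMS, which is why a windowed check is the stricter (and slower) of the two.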
> I'm getting an `AssertionError` at that new line. I didn't have `stim_rms` set at all in the `ExperimentController` call, but the error is the same with any combination of `stim_db` or `stim_rms` set (or omitted) in the `ec` call.
What is the value of `ec._stim_scaler`? If you change the line to this, it should spit out the value on failure:

```python
assert ec._stim_scaler == 1., ec._stim_scaler
```
> Working from one of Maddy's scripts, I also added a missing `check_rms='wholefile'` argument, which cut buffer time by an order of magnitude; it's now 700-800 ms, in line with performance in her code with stimuli of the same duration.
Makes sense it would be the RMS checking. Good to know. I hadn't profiled to see if it would be the memcopies but probably should have:
```python
In [1]: import numpy as np
In [2]: n_ch, dur, fs = 63, 16, 48000
In [3]: n_samp = dur * fs
In [4]: data = np.random.RandomState(0).randn(n_samp, n_ch)
In [5]: %timeit data.copy()
55.8 ms ± 739 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
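The same measurement can be reproduced outside IPython with a plain script (timings are machine-dependent; the loop count here is an arbitrary choice):

```python
import time
import numpy as np

n_ch, dur, fs = 63, 16, 48000  # 63 channels, 16 s at 48 kHz
n_samp = dur * fs
data = np.random.RandomState(0).randn(n_samp, n_ch)  # ~370 MB of float64

n_reps = 3
t0 = time.perf_counter()
for _ in range(n_reps):
    data.copy()
elapsed = (time.perf_counter() - t0) / n_reps
print(f"copy: {elapsed * 1e3:.1f} ms per copy")  # tens of ms on typical hardware
```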
That's pretty quick, but it would still be nice to include these performance enhancements if possible. The copy time scales linearly with duration, so for example a 16-minute stimulus would take about 3 sec per copy. (It would also take many GB of memory, so not too feasible, but still.)
Also pushed a commit that speeds up `windowed` mode with a lot of samples by an order of magnitude.

Okay, we might as well get this in. @drammock can you look and merge if you're happy?
Thank you!
@JustinTFleming can you try adding this to your script after EC initialization:

and then see if 1) stimuli load faster than 7 sec, and 2) they still sound correct? The assertion is necessary to make sure we avoid scaling later.
This PR should eliminate all copies except the last one (concatenating the sound and the trigger values), which triggers a copy that would be difficult to avoid (we would have to tell `rtmixer` to play two sounds at the same time and hope it did so correctly, which is dicey).

If this ends up being the bottleneck, we could make a new triggering mode where you have to supply the sound stimulus with the trigger channels zeroed out (e.g., for 6 channels + 1 trigger channel, you would already supply an array of shape `(7, n_samples)`). But let's see how fast we are already with these changes.

Closes #407