Open stevevanhooser opened 6 months ago
I am sorry for the delayed response. I did not write the Ephys Demo and cannot say anything regarding the design. I do see similarities to the ophys example, where the limitations around keeping all the data in memory multiple times made me write the custom datastore. I also see here that a "generator"-like function (could be a datastore) that feeds "frames" would be better than keeping many copies of the same data in memory. I do not think that the startup time of the `predict` function matters a lot, so I tend to say it is OK to use it in a loop. But take this with a grain of salt: I think the best answer is "it depends" (on the size of the network, for example?) and would have to be tested.
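To illustrate the "generator"-like feeder idea, here is a minimal Python sketch (the actual implementation would be a MATLAB custom datastore; the function name and shapes here are hypothetical). Each yielded frame is a view into the original array, so overlapping windows do not duplicate the underlying samples:

```python
import numpy as np

def sliding_frames(data, window, step=1):
    """Yield overlapping windows of `data` one at a time.

    Each yielded frame is a NumPy view into `data`, so the
    overlapping windows are never all copied into memory at once.
    """
    for start in range(0, len(data) - window + 1, step):
        yield data[start:start + window]  # a view, not a copy

# Example: 10 samples, windows of length 4, one-sample offsets
data = np.arange(10)
frames = list(sliding_frames(data, window=4))
# yields 7 frames: [0..3], [1..4], ..., [6..9]
```

A network-inference loop would then consume one frame at a time, trading the memory cost of a batched call for a per-frame call overhead.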
Thanks, Thomas. I'll leave this here until we have a chance to write some test code (which might be a while :) )
Thanks Steve
@tkuenzmw

Question for you. In the ephys inference demo, you arrange the data so that multiple inputs are in memory at the same time and make a single call to `predict`. Do you know how this compares speed-wise to doing a for loop over multiple calls to `predict`?

The reason I ask is that, for this particular application, where the inputs come from essentially a "sliding window" over the data, putting all the inputs in memory involves making lots of copies of nearly the same data (each offset by one sample). The other way would be a for loop, as I do in `deepinterp.Net`. But presumably there is some startup cost to calling `predict` each time? Do you know if that start-up cost is large?

Do you know of a way to efficiently "stream" sliding inputs to a network, as we need to do here, that is more efficient than a for loop? Or is a for loop fairly efficient?

Thanks
Steve
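As an aside on the copy problem: in NumPy (a sketch only, not the demo's MATLAB code), a strided view can expose every sliding window for a single batched `predict`-style call without actually duplicating the data. `sliding_window_view` is a real NumPy function; the signal length and window size below are made up for illustration:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# 1000-sample signal, sliding windows of 100 samples, one-sample offsets
signal = np.arange(1000, dtype=np.float32)
windows = sliding_window_view(signal, window_shape=100)

# `windows` has shape (901, 100) but shares memory with `signal`,
# so no per-window copy exists until something reads the view.
```

Whether MATLAB's `predict` can be fed an equivalent lazy view, or forces a materialized copy of the batch, is exactly the kind of thing that would have to be tested.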