h-mayorquin closed this 3 days ago
I can make a simulated data set for this condition, but maybe we should just do it for one of the four formats, rhd-fpc, since the rhs will follow the same convention? Then we can add that to gin to make sure it works. Just to make sure we are not making any assumptions, maybe I'll do something like digital in 03 and digital in 08 so that we have a spread and can check that it still works?
Also, the exact same change should be done for rhs, so maybe I should just make examples of both; that way you can fix both in this PR.
Yeah, that would be great. We can make data for both, or we can replace the current 'one-file-per-channel' test files with ones that have this complexity instead of what we have currently.
Data is on this pull request: https://gin.g-node.org/NeuralEnsemble/ephy_testing_data/pulls/133
Good luck with those tests. :) I swear every time I think the fpc will be easy it takes me like 10 commits to get it all sorted.
This is a cache problem. Can you reset the data caches?
Here:
https://github.com/NeuralEnsemble/python-neo/actions/workflows/caches_cron_job.yml
You should be able to trigger this work manually.
Not sure why actual testing is running so slowly.
It is because it is downloading the data, since the cache is failing. See the commented line on the cache.
But shouldn't it have done that during the cache step and not during the pytest step?
You know actions way better than I do so I trust whatever you say for that!
The cache is failing to restore in the 4th step. Then it goes on without cached data, which means all the tests are downloading the data on the spot.
The caches for the new data are being created now:
https://github.com/NeuralEnsemble/python-neo/actions/workflows/caches_cron_job.yml
I guess the cron job was later than I expected.
I see! That makes sense. I thought that for this type of action it would pause, regenerate the cache, and then test, rather than skip the cache and download live, but that makes sense.
@samuelgarcia @alejoe91
let me know if you guys want to take a look. Basically, @h-mayorquin discovered from some neuroconv data that the one-file-per-channel format actually stores dig-in and dig-out data differently, so this patch fixes that. It looks good to me, and we supplied a set of test files which are passing.
merci!
For the header-attached and the one-file-per-signal formats, the digital channels are interleaved and need bitwise operators to separate the channels, but this is not the case for the one-file-per-channel format, where each digital channel is stored separately and is already uint16 with 0s and 1s. This PR patches this behavior so the digital channels can be correctly retrieved for the one-file-per-channel case. Unfortunately, we don't have multiple digital channels in the test examples on gin to test this behavior. Any idea on how to test for this @zm711? I tested this with data that I have and it works.
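To illustrate the difference described above, here is a minimal NumPy sketch (not Neo's actual implementation; the sample values and the helper name are made up for illustration). In the interleaved formats, all 16 digital lines are packed into one uint16 word per sample, so line k has to be pulled out with a shift and a mask; in one-file-per-channel, each line is already its own uint16 stream of 0s and 1s and can be used directly.

```python
import numpy as np

# Hypothetical packed words as found in header-attached / one-file-per-signal:
# each uint16 sample holds all 16 digital lines, one per bit.
packed = np.array([0b0000, 0b0101, 0b0010, 0b0111], dtype=np.uint16)

def extract_digital_channel(packed_words, channel_index):
    """Recover the 0/1 trace of one digital line from interleaved uint16 words."""
    return (packed_words >> channel_index) & 1

ch0 = extract_digital_channel(packed, 0)  # -> [0, 1, 0, 1]
ch1 = extract_digital_channel(packed, 1)  # -> [0, 0, 1, 1]

# In one-file-per-channel, by contrast, the per-channel file already contains
# the 0/1 uint16 stream, so no bitwise unpacking is needed:
per_channel_file = np.array([0, 1, 0, 1], dtype=np.uint16)  # used as-is
```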