Open · jkbhagatio opened 1 year ago
How ultra is ultrasonic? What is the max frequency we would be interested in?
~ 120 kHz
@bruno-f-cruz for context, in the original setup we had a 192 kHz USB ambient microphone recording data already. The issue that @jkbhagatio might be referring to here is that we were timestamping the sample buffers in software, as there were no external I/O lines on the USB mic that we could use for hardware synchronisation, and we left it as future food for thought.
Hey Jai, note that if the sounds you would like to record are at 120 kHz, then 192 kHz sampling is not enough; by the Nyquist criterion we would need at least twice the highest frequency of interest, i.e. 240 kHz.
Yup, good point. I've just read that vocalizations can get up to 120 kHz, but maybe for us recording up to 96 kHz (the Nyquist limit of 192 kHz sampling) is fine?
To synchronize, I wonder if we can inject a super high-frequency short sine wave (a few tens of milliseconds) into the signal and try to recover it later by filtering + convolution, or just a wavelet transform. It should not distort the signal that much (vs. a TTL) and would give us precise alignment of the audio during the session. Otherwise, we can take the approach we settled on for the EmotionalCities project and simply benchmark the jitter of each audio buffer when we timestamp it in Bonsai. With audio cards, this jitter tends to be pretty low (< 5 ms).
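A minimal sketch of that recovery idea, assuming NumPy/SciPy are available (the sampling rate, tone frequency, burst length, and filter bandwidth below are illustrative placeholders, not values we have tested on real recordings):

```python
import numpy as np
from scipy import signal

fs = 192_000       # sampling rate (Hz), placeholder
f_tone = 90_000    # sync tone frequency (Hz), placeholder; ideally above the band of interest
burst_dur = 0.03   # ~30 ms burst, "a few tens of milliseconds"

# Hann-windowed sine burst to limit spectral splatter into the vocalization band
t = np.arange(int(fs * burst_dur)) / fs
burst = np.sin(2 * np.pi * f_tone * t) * np.hanning(t.size)

def find_sync_offset(audio, burst, fs, f_tone, bw=5_000):
    """Return the sample index where the known sync burst starts in `audio`."""
    # Narrow band-pass around the tone to suppress vocalizations and broadband noise
    sos = signal.butter(4, [f_tone - bw, f_tone + bw],
                        btype="bandpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, audio)
    # Cross-correlate with the known template; the peak marks the burst onset
    corr = signal.correlate(filtered, burst, mode="valid")
    return int(np.argmax(np.abs(corr)))

# Toy check: burst embedded 1 s into white noise
audio = 0.1 * np.random.randn(fs * 2)
audio[fs : fs + burst.size] += burst
print(find_sync_offset(audio, burst, fs, f_tone))  # ≈ fs, i.e. 1 s
```

This gives sample-level onset precision as long as the tone sits in a band we can cleanly separate from the vocalizations; if that ever becomes a problem, a short chirp instead of a pure tone would sharpen the correlation peak further.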
Don't remember if I mentioned that there is a microphone from TDT that has a BNC terminal, so it can go straight to an input expander. The sampling frequency would then just be the Harp frequency.
The problem is that there is no way we can sample signals in Harp at 192 kHz with the current generation of devices.
A 192 kHz DAC sound card with line-in and optical-in that we could use: https://www.amazon.co.uk/Creative-External-Multi-Channel-Discrete-Optical-out/dp/B0953LL5R6
Need to revisit this discussion we've informally had before. Not super urgent, but we will want to have this in place for the social experiments starting mid-March.