Open bennniii opened 6 years ago
i may add: i can verify that when sending triggers from Pd to Max/MSP via UDP and creating notes in the latter (then sending them out to e.g. Ableton Live), this fluctuation does not appear.
added two patches for testing. pd max udp.zip
sorry for bugging, but can anybody reproduce this?
a very stripped-down Pd patch for sending MIDI data at a fixed rate
This test patch doesn't actually send any MIDI. What I see is:
I just did a quick test with the Pd-0.49-0test3 release sending midi to Logic Pro X using the following patch instead:
Sending notes every 250 ms with a duration of 125 ms gives me the following in Logic's piano roll after recording the input:
Sending notes every 50 ms with a duration of 125 ms gives me some duration jitter due to the note offs not always happening in time with the longer duration:
Using a duration of 25 ms gives me better output:
but zooming in shows that the durations are not all exactly the same:
Note: measure markers shouldn't line up, as I just chose the tempo at random.
when sending very short notes, this even causes note-off messages to occasionally be sent before their note-on message.
Since you didn't specify the note length you're using, I'm guessing they are similarly short. What you are most likely seeing is the fact that MIDI is an old protocol (early 1980s) and it simply wasn't designed for such short durations. As a result, timing beyond 32nd or 64th notes is not really guaranteed and you're running into a certain amount of granularity.
the MIDI protocol has no notion of time, only now
with slow-speed protocols (such as the original hardware MIDI specs) this would give quite a bit of jitter.
however, with modern transport protocols (such as a virtual MIDI cable between two applications), this is negligible.
so i guess the jitter you are seeing is caused by Pd's block-based processing, which would cause a time quantization of at least ~1.45 ms (64 samples at 44.1 kHz)
(but possibly much more, e.g. if Pd is only called-back every 1024 samples).
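To illustrate that quantization (a toy sketch, not Pd's actual code; the numbers assume Pd's default 64-sample blocks at 44.1 kHz):

```python
import math

SR = 44100   # sample rate in Hz
BLOCK = 64   # Pd's internal block size in samples
BLOCK_MS = BLOCK / SR * 1000  # one logical tick, roughly 1.45 ms

def quantize_to_block(event_ms):
    """Time (ms) of the first block boundary at or after the event:
    an event can only be emitted when the scheduler next wakes up."""
    return math.ceil(event_ms / BLOCK_MS) * BLOCK_MS

# a note scheduled at 10.0 ms effectively goes out at ~10.16 ms
print(round(quantize_to_block(10.0), 2))
```

So even in the best case, output times snap to a ~1.45 ms grid; a larger callback size only coarsens that grid.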
the only way to fix this is to pass time-information (that is: the current logical time) to the MIDI-API (if the API supports that)
the jitter you are seeing is caused by Pd's block-based processing
Right. Lowering Pd's block size does lower the time difference between metro events, but we're talking a difference of 1-2 ms.
As a result, timing beyond 32nd or 64th notes is not really guaranteed and you're running into a certain amount of granularity.
By this, I mean the interval between MIDI clock "ticks", for instance, is around 21 ms at 120 bpm (24 clocks per quarter note). MIDI clock timing is quite jittery but works well enough for lots of things.
EDIT: Generally, what I hear or feel is how I measure MIDI latency, less what I see or measure. :)
EDIT2: And there are 6 ticks per MIDI beat in the clock as well.
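As a sanity check on those figures (assuming the standard 24 MIDI clocks per quarter note, i.e. 6 per MIDI beat):

```python
def midi_clock_interval_ms(bpm, ppqn=24):
    """Milliseconds between successive MIDI beat-clock messages;
    the MIDI spec sends 24 clocks per quarter note."""
    return 60000.0 / (bpm * ppqn)

# at 120 bpm a clock message arrives roughly every 20.8 ms
print(round(midi_clock_interval_ms(120), 1))
```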
We had a couple threads about that issue on facebook and I've put together a simple test patch to test it
Interestingly some experiments went contrary to my expectations.
When, with a block size of 2048, sending MIDI notes of 5 ms duration at 10 ms intervals, I don't see any >40 ms gap in the MIDI stream.
For reference: the highlighted area is 40ms long
Could that mean that the jitter is not caused by the block boundaries?
Sorry if something is obvious I'm definitely not a DSP expert or anything.
I still don't get how a block of 64 samples (which technically lasts for around 1.5ms) can be processed in 5ms (sometimes up to 10ms) and i still don't hear any audio artifacts even with a single [osc~ 440] object. That makes me question if the measurements are correct.
I still don't get how a block of 64 samples (which technically lasts for around 1.5ms) can be processed in 5ms
Pd's blocksize is one thing, the other thing is the blocksize of the audio device, as @umlaeute already mentioned. 64 samples aka 1.45 ms is only the logical time step. say your audio device runs at 1024 samples: whenever it sends a block of audio, Pd will basically advance 16 times by 64 samples as fast as possible (it doesn't sleep between "successful" calls to senddacs()) and then wait until a new block of audio arrives, so I would think that larger blocksizes would give you more jitter...
if DSP is off, Pd advances the scheduler whenever 1.45 ms have passed. note, however, that the granularity here depends on a) sys_sleepgrain and b) the precision of the sleep function of the OS.
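A toy model of that burst behavior (hypothetical numbers, not Pd's scheduler code) shows why a larger device buffer can mean more MIDI jitter: all logical ticks inside one callback run back-to-back, so an event's logical time can drift from its wall-clock send time by almost a full device buffer:

```python
SR = 44100
PD_BLOCK = 64  # Pd's logical tick, ~1.45 ms

def worst_case_jitter_ms(device_buffer):
    """Largest gap between an event's logical time and the wall-clock
    moment its burst actually runs: (ticks_per_callback - 1) ticks."""
    ticks_per_callback = device_buffer // PD_BLOCK
    return (ticks_per_callback - 1) * PD_BLOCK / SR * 1000

for buf in (64, 256, 1024):
    print(buf, "->", round(worst_case_jitter_ms(buf), 2), "ms")
```

Under this model a 1024-sample device buffer allows up to ~22 ms of timing error, which matches the order of magnitude people report in this thread.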
@HenriAugusto i don't know what you're trying to measure with your test patch, but i'm pretty sure that any numbers therein - while interesting - are pretty meaningless...
imagine a non-linear multi-track editor in the mid-90s: bouncing your 4:33-long session to disk might take 15 minutes, during which your system is unresponsive. and yet you don't hear any dropouts when playing back the file. it's not magic.
4:33-long session you don't hear any dropouts
it doesn't count, it's only silence...
yeah, i'm still getting my head around the DSP stuff. Thanks a lot for the info :)
@HenriAugusto i don't know what you're trying to measure with your test patch
it's a [metro 0] banging a [realtime] object.
The results are plotted in the array. The interval between two [metro 0] bangs was measured at around 0.0009 ms to 0.013 ms, but with spikes of 4 ms. It seemed to reflect the [metro] stopping while the DSP was being processed. The spikes did get bigger when increasing the block size.
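For reference, a rough Python analogue of that measurement: a tight loop timing its own wall-clock gaps, loosely like [metro 0] banging [realtime]. The actual spike sizes depend entirely on the OS scheduler, so no specific values are claimed here:

```python
import time

def measure_gaps(n=1000):
    """Wall-clock gaps (ms) between successive loop iterations;
    occasional OS preemptions show up as spikes."""
    gaps = []
    prev = time.perf_counter()
    for _ in range(n):
        now = time.perf_counter()
        gaps.append((now - prev) * 1000.0)
        prev = now
    return gaps

gaps = measure_gaps()
print(len(gaps), min(gaps) >= 0.0)
```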
Why do you say they're meaningless? This is not caused by the block-based processing? :thinking:
"meaningless" is probably too strong a word, sorry.
but i still don't understand what you are trying to measure (not the actual values you are measuring; i can read the [metro] and [realtime] objects just fine).
EDIT: ...and how you think these measurements relate in any way to the timing issues of MIDI messages
Can we close this?
Is it fixed (or unfixable)?
Last time I checked (a year ago) -alsamidi MIDI timing to external hardware was indeed terrible (audio example: https://mathr.co.uk/misc/2017-09-01_i_finally_found_my_novation_bassstation_rack_power_supply.ogg (18MB))
i don't think so. there is a problem with interfacing MIDI, which could be solved by using timestamps when passing midi-messages to the API
Ok, we can leave this open for now. I suppose I don't really see an issue as I don't really do anything that uses notes with a "5ms duration every 10ms." I'm used to latency in the order of 12-16ms when playing guitar anyway... :P
can't believe it... after some more months of research, trial and error, i think i may have fixed the issue! it appears to have something to do with the sleepgrain setting of pd. launching it with the flag "-sleepgrain 0.1" (or 1, as a matter of fact) seems to diminish the midi fluctuation!
very happy right now :)
as I have written in one of the answers above:
if DSP is off, Pd advances the scheduler whenever 1.45 ms have passed. note, however, that the granularity here depends on a) sys_sleepgrain and b) the precision of the sleep function of the OS.
I'm also seeing huge issues with MIDI timing, and they become worse the higher Pd's blocksize is set. It concerns MIDI in and out. As I understand the issue, MIDI processing needs to be threaded and the MIDI events then integrated into Pd's own scheduler, instead of merely being processed at block boundaries. Scheduling them would at least improve timing jitter. Really solving the issue would mean making use of timestamped MIDI messages, such as those available in MIDI 2.0 or JACK MIDI. See also https://github.com/pure-data/pure-data/issues/728
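One way such integration could look (a sketch under assumptions only; none of these names exist in Pd's code): convert each incoming event's timestamp into a sample offset inside the upcoming block, rather than firing everything at the block boundary:

```python
def event_sample_offset(event_time_ms, block_start_ms, sr=44100, block=64):
    """Map a timestamped MIDI event to a sample offset within the block
    starting at block_start_ms; late events are clamped to offset 0."""
    offset = round((event_time_ms - block_start_ms) * sr / 1000.0)
    return max(0, min(block - 1, offset))

# an event 0.5 ms into the block lands at sample 22 (0.5 ms * 44.1 samples/ms)
print(event_sample_offset(10.5, 10.0))
```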
actually it would be enough if the MIDI-API would provide time-stamps when delivering the messages (no need to upgrade to MIDI2)
actually it would be enough if the MIDI-API would provide time-stamps when delivering the messages (no need to upgrade to MIDI2)
Quick clarification: Do you mean the MIDI backend or the part of the MIDI implementation in the Pd code? If it's the former: are there alternatives? If it's the latter, can you point to the code so I can look at it?
no idea.
i guess that some APIs (backends) already could provide timestamps.
e.g. the ALSA seq interface does have a timestamp. i have no idea what it actually contains though.
ah well. the "timestamp" is just the midi tick, which i think is somewhat useless in this context.
Here is an observation from VST3 development:
"VST3 plugins don't receive MIDI directly, the host converts to the Event type which is delivered in the process callback with a sampleOffset field that tells the plugin when to handle the event relative to the start of the buffer." (source)
When using these Events in libpd and simply delaying them (using [pipe]) by the amount defined in sampleOffset, the timing becomes perfect.
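The [pipe] delay time for such an event is just a unit conversion (illustrative helper, not part of the libpd API):

```python
def sample_offset_to_ms(sample_offset, sample_rate):
    """Convert a VST3-style sampleOffset (samples into the current audio
    buffer) into the millisecond delay to give a [pipe]-style object."""
    return sample_offset * 1000.0 / sample_rate

# an event 480 samples into a 48 kHz buffer should be delayed by 10 ms
print(sample_offset_to_ms(480, 48000.0))
```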
This observation gives me hope that this issue can be solved relatively simply, and that the integration of timestamps into Pd's own scheduler can be implemented solely by means of patching. So the remaining obstacle is to get the timestamp into Pd. Since the timestamp feature is supposed to be backwards compatible with MIDI 1.0, there must be an existing path for Pd to read it. MIDI 2.0 Specification
@HenriAugusto For a sane way to measure timing accuracy please read the paper by Chris Chronopoulos and watch the accompanying video from the presentation. Both linked to in #728
Another data point, mentioning this here in case it helps someone put two and two together.
tl;dr: timing seems to be a bit off, and using -sleepgrain 0.1 helps.
I'm on macOS, Pd 0.54.0. I was noticing very subtle but definite jitter in the timing of a simple metronome patch. Now, maybe there's something wrong in the patch, maybe I'm using some object that shouldn't be used for such a task, and if so please let me know.
Assuming the patch is correct for its intended purpose (having a highly accurate metronome), I observed that the metronome is not keeping to the beat exactly.
At this point, I wasn't sure if it is something wrong with the object or just me hearing slightly offset beats when there are none (and I'm still not sure). But still, I searched around here in the GitHub issues and came across the suggestion from @bennniii above to use -sleepgrain 0.1.
And that indeed helps. I added some instrumentation code to the patch to observe the realtime offset in the ticks between the metro object. Not sure if this is an accurate way of doing it, but here goes.
When running with -sleepgrain 0.1, I hear less jitter, and the realtime deltas are also smaller (the print fires much less). I feel the jitter is still there, but it feels much better than running without sleepgrain. In any case, the delta values reported by the [realtime] instrumentation are definitely smaller.
I've attached the patch, in case it's useful: sleepgrain.pd.zip
I think MIDI timing should be significantly better with https://github.com/pure-data/pure-data/pull/1756. Will test in the next few days.
@mnvr no not really. your example does not use any MIDI at all. all timing is done with the internal/ideal/perfect time.
if you hear a jitter, then either [...], or you watch the [realtime] object (with your eyes) and start hearing what you see.
Alright, good to know that at least what I'm trying to do is correct. And I agree, I don't think pd's scheduler is broken, so it's either 2 or 3.
Thank you both for the quick replies!
correct me if I'm wrong, but also [realtime] shouldn't be the same as the logical time (because control messages only get computed every 64 samples). So you have to account for that intended behavior in your error calculation as well.
afaict pd is 'allowed' to compute the messages at any point within a 64-sample block. (1.45 ms at sample rate of 44100 hz)
afaict pd is 'allowed' to compute the messages at any point within a 64-sample block.
not sure what that means. (Pd doesn't do anything "within a 64-sample block"; it does wake up every 64 samples to do the message processing and the DSP-processing (for the next 64 samples), so if we are being anal, that would be between blocks rather than within a block)
also note that Pd doesn't have to do the calculations every 1.333 ms (when running at 48 kHz).
If the buffer is large enough, it could just decide to do all the calculations once per second (that's where -sleepgrain comes back into play).
yes, but whereas the audio samples are output at the moment the soundcard uses them (after they've been put into the buffer), the messages are output when the samples are done being written to the buffer, right? but I had no idea about sleepgrain. edit: I guess I should have re-visited the long thread above, sorry.
there is some very audible fluctuation in timing when sending midi(-notes) out of pd to any destination (virtual ports, interfaces, usb midi devices etc)
when sending very short notes, this even causes note-off messages to occasionally be sent before their note-on message.
i've put together a zip file with:
this has been tested with PD 0.47 and PD 0.48 on macOS 10.12.6 and raspbian stretch (january 2018)
pd midi fluctuation.zip