lv2 / pugl

A minimal portable API for embeddable GUIs
https://gitlab.com/lv2/pugl/
ISC License

Use CLOCK_UPTIME_RAW for macOS time #88

Closed · falkTX closed this issue 1 year ago

falkTX commented 2 years ago

I was having an issue where, in standalone mode under macOS (not hosted by a DAW), some plugin graphics were running very, very slowly. After a bit of debugging it turned out the value returned by puglGetTime was quite different when running as an app bundle / standalone vs hosted in a DAW.

pugl is simply calling mach_absolute_time, so maybe something is wrong there. According to https://developer.apple.com/documentation/kernel/1462446-mach_absolute_time

Prefer to use the equivalent clock_gettime_nsec_np(CLOCK_UPTIME_RAW) in nanoseconds

I tried that, and now the behaviour is consistent both standalone and in a DAW. I do not know why it is that way.
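
Roughly, the replacement looks like this (a sketch of the idea only, not the exact patch):

```c
/* Sketch: uptime in seconds via the 10.12+ API, instead of raw Mach ticks. */
#include <time.h>

static double
getTimeSeconds(void)
{
  return (double)clock_gettime_nsec_np(CLOCK_UPTIME_RAW) / 1e9;
}
```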

PS: I tried mach_continuous_time but that showed the same slow-running issues.

drobilla commented 2 years ago

After a bit of debugging it turned out the value returned by puglGetTime was quite different when running as an app bundle / standalone vs hosted in a DAW.

Can you clarify this? The value itself is different somehow in absolute terms, or it doesn't progress at a correct rate (~1.0 per second)?

falkTX commented 2 years ago

After a bit of debugging it turned out the value returned by puglGetTime was quite different when running as an app bundle / standalone vs hosted in a DAW.

Can you clarify this? The value itself is different somehow in absolute terms, or it doesn't progress at a correct rate (~1.0 per second)?

It takes ~100x longer to reach 1 s. Obviously I didn't measure this exactly, but it rises extremely slowly, to the point that the value becomes meaningless for any time-based calculations.

drobilla commented 2 years ago

Oh weird, I thought the rate of Mach ticks was fixed in practice, but I guess not. Sounds like yours is around a microsecond rather than the nanosecond Pugl expects. I've never seen this before, what machine and OS?

Anyway, clock_gettime_nsec_np seems suitable enough (the docs even point to it), but I don't know its portability situation. The man page says 10.12, so it either needs to be checked for, or everything can just be bumped to 10.12.

I don't particularly care about supporting very old versions of proprietary operating systems, but probably the check, I guess. It's not that onerous, and whatever the current baseline is (I should probably figure out how to check that), it's before 10.10, so we might as well keep that around.
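
Something along these lines would do for the compile-time gate (just a sketch of the check, not necessarily what pugl ends up shipping):

```c
/* Sketch: only use the 10.12+ call when the deployment target allows it,
   otherwise keep the mach_absolute_time() path. */
#include <AvailabilityMacros.h>

#if defined(MAC_OS_X_VERSION_10_12) && \
    MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_12
#  define USE_CLOCK_GETTIME_NSEC_NP 1
#else
#  define USE_CLOCK_GETTIME_NSEC_NP 0
#endif
```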

falkTX commented 2 years ago

I am running this on an M1 Mac mini, macOS version 12.3.1.

I didn't realize this was 10.12+, so indeed perhaps put it behind a version check. I typically support >= 10.8 as a baseline; even new Xcode can still build with it as the deployment target.

drobilla commented 2 years ago

Lowered on M1 to save power perhaps?

As an aside, it kind of annoys me that the absolute value from puglGetTime is essentially meaningless, and especially that it doesn't necessarily correlate to the time stamps for events. The double also means it's approximate at best. Convenient for graphics and simple timers when you don't particularly care, but for most real-time-related things, you're going back to an integer and system clock of some variety at some point. Maybe this "64-bit microseconds" unit is something to consider... overflows after, what, half a million years or so? With an arbitrary epoch that's probably boot time? Probably fine :)
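
(For scale: 2^64 µs is about 1.8 × 10^13 s, roughly 585,000 years; even a 64-bit nanosecond counter only wraps after about 584 years.)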

Anyway, I'll test this on my x64 Mac.

drobilla commented 1 year ago

After a bit of debugging it turned out the value returned by puglGetTime was quite different when running as an app bundle / standalone vs hosted in a DAW.

Didn't catch this bit earlier. I'm officially confused again, how can this be?

drobilla commented 1 year ago

Well, it seems to just be additional hassle to use the new API until we can bump the baseline to that version, so I went with using the old one properly instead.

Issue should be fixed in d5efee7, thanks.
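
For reference, the conversion is roughly the following (a sketch of the idea, not necessarily the exact code in that commit):

```c
/* Sketch: scale Mach ticks to seconds using the timebase.  The numer/denom
   ratio converts ticks to nanoseconds; it happens to be 1/1 on many x64
   Macs, which is why skipping it can appear to work there. */
#include <mach/mach_time.h>
#include <stdint.h>

static double
getTimeSeconds(void)
{
  mach_timebase_info_data_t info;
  mach_timebase_info(&info);

  const uint64_t ticks = mach_absolute_time();
  return (double)ticks * info.numer / info.denom / 1e9;
}
```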

falkTX commented 1 year ago

Huh, so the clock tick rate is different between hardware? Or was it due to x64 vs ARM? Perhaps when hosted in a DAW I was running x64 under Rosetta, and native was ARM...

drobilla commented 1 year ago

No idea why it's different here or there. It was always technically wrong to not use the timebase to convert to standard units, so I assume this will fix it everywhere... assuming I didn't botch the math.

Perhaps when hosted in a DAW I was running x64 under Rosetta, and native was ARM...

Seems reasonable. I can't imagine any way that could make running in a host change the behaviour of these kernel APIs without some extremely horrid things going on in the host.

falkTX commented 1 year ago

So the multiplication factor seems to be constant, right? So there is a little CPU to be saved by caching the numer/denom calculation as a single value.
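
Something like this, as a sketch of the caching idea (illustrative only; it folds numer/denom into one double, which trades away some precision, as the reply below notes):

```c
/* Sketch: compute the tick-to-seconds factor once and reuse it.
   Not thread-safe as written, and less precise than doing the integer
   numer/denom scaling on every call. */
#include <mach/mach_time.h>

static double
getTimeSeconds(void)
{
  static double ticksToSeconds = 0.0;

  if (ticksToSeconds == 0.0) {
    mach_timebase_info_data_t info;
    mach_timebase_info(&info);
    ticksToSeconds = (double)info.numer / info.denom / 1e9;
  }

  return (double)mach_absolute_time() * ticksToSeconds;
}
```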

drobilla commented 1 year ago

Meh. Probably. Saving a division or two isn't significant here and that introduces new precision issues I don't want to deal with.

I'm not sure floating point event timestamps are going to last anyway. Aside from the general sloppiness mentioned above, for clipboard stuff, precise comparison of timestamps is sometimes necessary, and lack of it might cause real problems. I'm not sure about that yet, though.

drobilla commented 1 year ago

(Note that it's entirely possible that I screwed up the conversion (probably by inverting it), since the fact that the previous code worked suggests the timebase makes no difference on my machine anyway, where everything is 1. The timer test will definitely fail for you if so.)