ocornut / imgui

Dear ImGui: Bloat-free Graphical User interface for C++ with minimal dependencies
MIT License

Integrating a libuv loop #7425

Open sphaero opened 3 months ago

sphaero commented 3 months ago

Version/Branch of Dear ImGui:

master

Back-ends:

imgui_impl_opengl3.cpp + imgui_impl_sdl2.cpp

Compiler, OS:

Linux GCC

Full config/build information:

Dear ImGui 1.90.4 (19040)
--------------------------------
sizeof(size_t): 8, sizeof(ImDrawIdx): 2, sizeof(ImDrawVert): 20
define: __cplusplus=201703
define: __linux__
define: __GNUC__=13
--------------------------------

Details:

I was contemplating integrating libuv into an SDL loop with Dear ImGui, e.g. for async IO. I didn't find any info about this, so here's a gist doing just that.

I modified an SDL example by integrating a default loop with a timer running the ui periodically:

uv_timer_t timer;
uv_timer_init(loop, &timer);  // `loop` is the default uv loop, set up earlier
// run the ui once so we know the sync timestamp (syncts, recorded inside run_ui)
run_ui(&timer);
int nextsyncts = (syncts + 16) - SDL_GetTicks();
assert(nextsyncts > 0);
uv_timer_start(&timer, run_ui, nextsyncts, nextsyncts);

The tricky part is getting it in sync with the vsync. This isn't essential, of course: without vsync it runs nicely. With vsync I'm not sure. Basically I'm doing:

int a = SDL_GetTicks();
SDL_GL_SwapWindow(window);
int b = SDL_GetTicks();
printf("swap duration: %ims, next sync in: %ims\n", b-a, (a+16)-b);
if (uv_timer_get_due_in(handle) < 15)
        uv_timer_again(handle); //try to sync the timer with the vsync

which reschedules the timer if we're falling behind.
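For reference, the rescheduling arithmetic above can be factored into a small helper. This is a hypothetical sketch (the names `ms_until_next_sync`, `last_sync_ms`, and `frame_ms` are mine, not from the gist), assuming a fixed 16 ms frame interval on a ~60 Hz display:

```c
#include <assert.h>

/* Hypothetical helper: given the timestamp of the last completed frame
 * (last_sync_ms) and the current tick count (now_ms), return how many
 * milliseconds remain until the next expected vsync, clamped to zero.
 * frame_ms would be 16 for a ~60 Hz display. */
static int ms_until_next_sync(int last_sync_ms, int now_ms, int frame_ms)
{
    int due = (last_sync_ms + frame_ms) - now_ms;
    return due > 0 ? due : 0;
}
```

Clamping to zero is gentler than the `assert(nextsyncts > 0)` in the snippet above, which would abort the program whenever a swap happens to take longer than one frame.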

I'm not sure if this is a sane approach but perhaps others have experiences, insights or suggestions?

Screenshots/Video:

No response

Minimal, Complete and Verifiable Example code:

No response

ocornut commented 3 months ago

I have no idea what libuv is or what this is about, but it doesn't seem like a Dear ImGui question? If so, it's going to be difficult to answer.

sphaero commented 3 months ago

Libuv is a cross-platform async IO library, so it runs its own (reactor) loop. The library is really helpful for preventing the UI from blocking while doing some IO, for example. I know in games this is often delegated to a thread, but that is often not wanted. Libuv is the library on which Node.js is built.

In essence it is no different from an SDL loop; it merely adds polling of file descriptors (or handles on Windows) for IO. This is typically not done in SDL or GLFW because they want to support platforms that do not have these abstractions.

I just posted it here because I couldn't find any references to people integrating ImGui this way and wanted to give it some exposure.

seanmiddleditch commented 1 month ago

The simple answer, if your app is game-like (continuously re-renders frames): have SDL or the platform wait for vsync (presentation), then pump libuv with uv_run(loop, UV_RUN_NOWAIT);. This is the direct analog of how the UI event loop is pumped via SDL_PollEvent, but for IO instead of UI events.
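That suggestion can be sketched as a main loop like the one below. This is a sketch under assumptions, not a full program: it assumes `window`, a GL context, and a libuv `loop` were set up elsewhere, that `SDL_GL_SetSwapInterval(1)` enabled vsync, and it omits the ImGui frame and error handling:

```c
/* Game-like main loop pumping both SDL UI events and libuv IO.
 * SDL_GL_SwapWindow blocks until presentation (vsync), so the libuv
 * callbacks run once per displayed frame. */
int running = 1;
while (running) {
    SDL_Event event;
    while (SDL_PollEvent(&event)) {     /* drain pending UI events */
        if (event.type == SDL_QUIT)
            running = 0;
        /* ...forward events to the ImGui SDL2 backend here... */
    }

    /* ...build the ImGui frame and issue GL draw calls here... */

    SDL_GL_SwapWindow(window);          /* blocks until vsync */
    uv_run(loop, UV_RUN_NOWAIT);        /* run ready IO callbacks, never block */
}
```

UV_RUN_NOWAIT polls for pending events and returns immediately when there are none, which is what keeps the IO pump from stalling the render loop.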

If your application is not game-like (only wants to re-render after receiving user input or some relevant network IO), then unfortunately you'll need to run the libuv reactor on a background thread. The main UI thread would use SDL_WaitEvent, and the IO thread can use SDL_PushEvent to wake the UI thread after receiving any relevant IO.
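A hedged sketch of that non-game-like variant follows. The wiring is my own illustration (the custom event type and thread setup are not from the thread); it relies on the fact that SDL_PushEvent is documented as thread-safe and will wake a thread blocked in SDL_WaitEvent:

```c
static Uint32 my_io_event;   /* custom SDL event type, registered once at startup */

/* Example libuv callback on the IO thread: wake the UI thread. */
static void on_io_ready(uv_async_t *handle)
{
    (void)handle;
    SDL_Event ev;
    SDL_zero(ev);
    ev.type = my_io_event;
    SDL_PushEvent(&ev);      /* thread-safe: wakes SDL_WaitEvent below */
}

/* IO thread entry point: block inside the libuv reactor. */
static int io_thread_main(void *arg)
{
    uv_run((uv_loop_t *)arg, UV_RUN_DEFAULT);
    return 0;
}

void ui_loop(uv_loop_t *loop)
{
    my_io_event = SDL_RegisterEvents(1);
    SDL_CreateThread(io_thread_main, "io", loop);

    SDL_Event event;
    while (SDL_WaitEvent(&event)) {   /* sleeps until input or IO wakeup */
        if (event.type == SDL_QUIT)
            break;
        /* ...on my_io_event or user input: rebuild and present one frame... */
    }
}
```

The UI thread spends its idle time asleep in SDL_WaitEvent rather than spinning, which is the point of this arrangement.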

For background, this is all because some platforms (Windows) lack the kind of unifying "await anything" API needed here, and require entirely different APIs to wait for network IO, file IO, UI events, vsync (presentation), timers, etc. Reactor libraries like libuv already create a number of background threads as needed on those platforms to accommodate those complexities. Unfortunately, libuv has no such accommodations for UI event queues or vsync/presentation, much less SDL-specific abstractions thereof.

It is of course entirely possible to build a reactor-like abstraction layer that handles IO, UI, and presentation. That's what frameworks like Qt do, after all. Some future version of SDL3 may do that as well.

sphaero commented 1 month ago

Thanks for elaborating on this. Polling for IO in a game loop is a common problem, so people often implement a background thread to deal with IO; this is also suggested by the SDL documentation. It all boils down to the vsync event. As far as I know there's no way of knowing when a vsync event happens; I cannot poll for this event? With SDL (or any other library) you just wait for it to happen before continuing. That can be as long as 16 ms in the case of 60 fps, which is a very long time to do nothing.

But the question remains how best to merge these facilities, especially if you don't want to spawn a background thread, since that seems ugly given modern async IO options. I've now tested an approach that uses SDL to draw on vsync while polling for the remaining time until vsync. It kind of works, but I'd rather have a more solid approach like Qt and GTK have.

seanmiddleditch commented 1 month ago

But the question remains how to merge these facilities best.

I answered that question already. :)

As far as I know there's no way of knowing when a vsync event happens. I cannot poll for this event?

No, not directly, and certainly not portably (and hence not in SDL). That's just not how modern hardware or compositors work.

By the time your app could possibly have received any signal about "vsync" it is already done. Back buffers for every visible surface on your device have all been flipped. The new frame is already visible to the user. You can't know when precisely the next vsync is, because you won't know how long it's been since the last one; only that it happened sometime in the recent past. All you can do at that point is prepare the next frame and enqueue it for presentation.

Which is why all the rendering APIs just give you some form of Present() or SwapBuffers() function that can implicitly do the vsync wait by blocking.

(Purely as an academic implementation detail that is of no use to you as an app developer: if you peel away enough layers of abstraction, then on some render APIs on some OSes on some hardware drivers, the "buffers have been flipped" signal might be a generic OS signaling primitive that you could poll in libuv. Again, all that would tell you though is that "vsync" happened sometime in the past, not when it happened, nor when it next might happen again.)

I've now tested an approach using SDL to draw on vsync but poll for the remaining time to vsync.

Be careful, since "remaining time to vsync" isn't even a fully well-posed question: multiple monitors, variable refresh rate, very high refresh rate displays, etc.

nicolasnoble commented 1 month ago

Regarding the "it's a very long time to wait doing nothing" part, it is worth noting that it's possible to run faster than your typical 60 Hz refresh rate.

For instance, GLFW lets you set the swap interval, and setting it to 0 disables vsync: https://www.glfw.org/docs/3.3/group__context.html#ga6d4e0cdf151b5e579bd67f13202994ed

This translates to having your UI run at the maximum possible FPS, possibly up to 3000 fps on beefy machines, making your libuv responses much faster. But it also has the drawback of being extremely taxing for your GPU, turning it into a space heater for no better reason than being slightly more responsive to some network events.
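For the SDL2 backend used in this thread, the corresponding knob is SDL_GL_SetSwapInterval: 0 disables vsync, 1 enables it, and -1 requests adaptive vsync (late swaps happen immediately), which returns -1 on platforms that don't support it. A common idiom, sketched here with the fallback but without further error handling:

```c
/* Prefer adaptive vsync; fall back to regular vsync if unsupported.
 * Pass 0 instead to uncap the frame rate entirely. */
if (SDL_GL_SetSwapInterval(-1) == -1)
    SDL_GL_SetSwapInterval(1);
```

With the interval at 0, SDL_GL_SwapWindow no longer blocks, so the timer-based scheduling discussed earlier in the thread becomes the only thing pacing the loop.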

All in all, you have to ask yourself what exactly you want to do. If waiting 16 ms is unacceptable, then you're designing a multithreaded application anyway: heavy UI rendering on an old integrated GPU will eat into your network response budget regardless.