ocornut / imgui

Dear ImGui: Bloat-free Graphical User interface for C++ with minimal dependencies
MIT License

Multiple instances of Dear ImGui in the same program #586

Open nsf opened 8 years ago

nsf commented 8 years ago

It's not a bug report but more like a question/discussion topic.

So, I have this engine of mine with a rendering thread that runs C++ code and a scripting environment that runs C# code. They talk to each other via a queue and triple buffers to pass rendering lists. Of course I want ImGui to be available on both sides, and I can't really just lock it with a mutex, because it would block quite a lot. So what I did:

Converted GImGui to a thread-local variable, initialized ImGui from the main thread before running Mono scripts, and also forced creation of a font atlas via GetTexDataAsRGBA32 and... it kind of works. Mono renders stuff when it wants to, passes the contents of ImDrawData via triple buffer to the rendering thread, and the rendering thread renders both GUIs: its own and the one from the Mono scripting environment.

But I've noticed there are a few places where functions don't look reentrant; for example, the ImHash function has a static LUT which is initialized on first access. Perhaps there are other places where implicit global state is used?

In general what do you think about that kind of usage (or should I say abusage :D)?

ocornut commented 6 years ago

This is a problem because I might want to "draw" the UI in thread A and "render" it in thread B

Can't thread A just store the data for thread B to use? Thread A calls EndFrame()/Render() without rendering, then stores the ImDrawData contents? You'd only have one mutex, and only for a short period of time.

godlikepanos commented 6 years ago

I have to admit that this was a bad example on my part. What I have in mind is to have some threads that construct parts of the UI in parallel (per panel for example). Then render the command lists in other threads.

Imagine I have a game level with a number of computer screens and I want to use imgui to render stuff on them. Every computer screen is a scene node. On my scene graph update (which is parallel) I populate the UI for each screen. Then the renderer will take those UI contexts and render them. And because this is a Vulkan engine, I might do that in parallel as well. At the end of the day I populate UI widgets in thread A and render them in thread B. In the next frame I may switch threads. It's chaotic.

Another example (which requires custom memory allocators as well) is to create a context per frame, draw the UI, render it, and then destroy the context by freeing the whole memory at once. If I use a linear allocator (every allocation is just a ++offset) then the allocation cost is almost zero.

BTW, I don't want to sound like I'm imposing something here. My only goal is to share a different view towards more flexibility. Continue the great work!

ocornut commented 6 years ago

Imagine I have a game level with a number of computer screens and I want to use imgui to render stuff on them. Every computer screen is a scene node. On my scene graph update (which is parallel) I populate the UI for each screen. Then the renderer will take that UI contexts and render them.

For now you can make GImGui a thread-local-stored variable if you want to achieve that without a lock, but it'll probably be simpler and just as fast with a lock if you don't have many such renders.

Memory allocation

Dear ImGui doesn't reallocate every frame: it only allocates while things are growing, then stays at zero allocations for a typical frame. Aside from rare circumstances it doesn't allocate something that it would free during the same frame (and when that happens it's only one or a few allocations). So using a linear/throwaway allocator shouldn't be needed, and likewise I don't imagine more fine-grained allocation options would be useful.

godlikepanos commented 6 years ago

For now you can make GImGui a thread-local-stored variable if you want to achieve that without a lock, but it'll probably be simpler and just as fast with a lock if you don't have many such renders.

I see you have CreateContext() and SetCurrentContext(). I missed those two. Then yes, thread_local might actually work. The downside is that the perf won't be as great as passing the context around.

Dear ImGui doesn't reallocate every frame: it only allocates while things are growing, then stays at zero allocations for a typical frame. Aside from rare circumstances it doesn't allocate something that it would free during the same frame (and when that happens it's only one or a few allocations). So using a linear/throwaway allocator shouldn't be needed, and likewise I don't imagine more fine-grained allocation options would be useful.

In that case a more fine-grained allocator might not be needed indeed. So I guess the only thing that is missing is to pass a void* in the MemAllocFn and MemFreeFn so I can use my linear allocator?

ocornut commented 6 years ago

I see you have CreateContext() and SetCurrentContext(). I missed those two. Then yes, thread_local might actually work. The downside is that the perf won't be as great as passing the context around.

That's correct. I think changing the context will be a 2.0 thing. Not being able to add methods to a class the same way you can add functions to a namespace makes this change rather unsatisfactory. People have been using GImGui as TLS (you can #define GImGui in imconfig.h for that), so it works.

So I guess the only thing that is missing is to pass a void* in the MemAllocFn and MemFreeFn so I can use my linear allocator?

But you can't use a linear allocator for the general allocations it does. Sorry, my earlier sentence "it only allocates while things are growing, then stays at zero allocations for a typical frame" may be misleading, but imgui uses realloc patterns (alloc new, copy old data, free old). It looks like you are trying to optimize things out of habit, but imgui likely doesn't need those optimizations, at least not for allocations.

godlikepanos commented 6 years ago

But you can't use a linear allocator for the general allocations it does. Sorry, my earlier sentence "it only allocates while things are growing, then stays at zero allocations for a typical frame" may be misleading, but imgui uses realloc patterns (alloc new, copy old data, free old). It looks like you are trying to optimize things out of habit, but imgui likely doesn't need those optimizations, at least not for allocations.

I'll explain my rationale with an example. Imagine I have a level editor. Every panel is a different imgui context (because of parallel building, blah blah). I'm drawing each of those panels to different textures and then I compose them into a final result. In the following frames I update only the panels that have changed. So what I want is to use imgui contexts as throwaway data structures: I create a new one only when the UI needs updating and immediately throw it away.

This example is a bit too extreme but it depicts another use case (throwaway contexts) that might be useful in some cases.

I do believe though that this problem can also be worked around (by me, with some minimal hackery) by having the "allocator" as thread_local.

Thanks for your answers! Food for thought for imgui 2.0 (any timeframe for that?)

ocornut commented 6 years ago

Note that both of my proposed changes implied removing per-context allocators. Your use case is fairly unusual, and between the possibility of using TLS, fully single-threading your imgui rendering (processing it in a thread separate from your scene graph update), or guarding use with a mutex, it looks like you have enough margin.

No timeframe for anything, sorry. I am currently focusing on dear imgui but I don't know how long I will be able to do so. Even thinking about those features or edge cases is derailing me a little from focusing on the more important short-term features.

slembcke commented 6 years ago

+1 for a non-global context of some sort. (Either as an arg or object, don't really care.)

I fall into the one context per thread camp. In my game I have the client and server running in the same process on different threads to make network debugging easier. TLS seems to be mostly working, though it crashes when calling the context destructor. I read somewhere that font atlases are shared?

ocornut commented 6 years ago

I read somewhere that font atlases are shared?

ImFontAtlas is shared by default (and read-only after initialization).

Basically the ImGuiContext constructor does: Fonts = &GImDefaultFontAtlas;

You may create your own font atlas and reassign it in your ImGuiContext. I'll clean all that up and will probably end up requiring at least an explicit initialization call to create default contexts in the future.

dtugend commented 6 years ago

I'd like to share my point of view / use case, this is by no means a feature suggestion or similar, I just want to share some thoughts:

This design with the contexts is interesting (and will be very useful for me), but for my use case there is a drawback: The implementation assumes that the IO and the Render happen on the same context.

My use case is as follows: 1) IO happens on the main thread = window message pump thread. 2) Actual rendering can (and usually does) happen on another thread. (This is the way it currently is in Valve's CS:GO, for example, and similar Source engine games.)

In my case there can be multiple contexts waiting to be actually rendered on the render job thread, while only one is in IO / main thread at a time.

So I am left with one or two okay solutions afaik (if I don't count modifying ImGui core files too much as a solution for now, since that would probably cut me off from future ImGui updates, or at least make them notably more cumbersome):

1) Somehow sync relevant IO state between different contexts on the window message pump thread (this will lose the MetricsRender... info of course, since when the next frame is begun, the render thread doesn't have to have run yet). (Meaning: only call ImGui::Render on a dedicated context on the render thread, and everything else happens on the main thread.) Pros: no render data duplication needed. Cons: metrics lost (hopefully not that much of a problem); also need to take care of everything that I want synced between the contexts.

2) Another solution would be to off-load the drawing data and buffers inside RenderDrawListsFn ("copying" them) for the rendering thread. Pros: no need to sync IO state between contexts. Cons: potentially huge render data copying overhead.

I'll probably go for solution 1 for now :-)

Edit: Actually went for solution 2 for now.

slembcke commented 6 years ago

My game is client/server, and both get their own thread to run their main loops. My rendering is also threaded, so I'm using both solutions 1 and 2. There isn't really that much I/O to buffer on the main thread (mouse, keys, chars), so it's really not that much work. I mean, it's a couple dozen lines of code to gather all the input either way, and another dozen to consume it on the other thread. My rendering uses memory-mapped GL buffers, so there is no (additional) duplication of rendering data. Admittedly, that part is much more involved if you haven't already implemented it.

ocornut commented 6 years ago

Everyone: the changes discussed in #1565 have been merged to master. Just to clarify, those changes DO NOT address the multi-threading issues discussed here; they do however address various issues related to creating/maintaining multiple contexts, sharing font atlases, and setting up memory allocators. The multi-threading issues, through an explicit context->Function(), will possibly be addressed in 2.0.

Copying text from that issue:


The purpose of those changes is:

Changes:

Existing codebase will be broken:


Note that on-going work on virtual viewports (#1542) will reduce the need/usefulness of using multiple contexts, as one context will be able to handle multiple windows.

jeffw387 commented 5 years ago

I'm in a situation where I have an update thread and a render thread. I have essentially triple-buffered a lot of my state so that I can render and update simultaneously without stepping on each other. The ideal for me with ImGui would be an easy way to have separate context per triple-buffered state. That way I could begin a frame in the update thread, and end the same frame in the render thread.

I suppose I could do that with the multiple namespaces/translation units solution, though it seems a bit awkward.

The other way would be to create separate contexts for each state. Is it safe right now to have three separate ImGui contexts and use them in the way I'm hoping? Is it seriously messing with performance to do so?

I know I could just try it and see, but sometimes things are more subtle or complicated than they first appear so I thought it couldn't hurt to ask.

ocornut commented 5 years ago

I'm in a situation where I have an update thread and a render thread. I have essentially triple-buffered a lot of my state so that I can render and update simultaneously without stepping on each other.

It's not clear from your description whether you want to use imgui in your render thread, or only render the imgui output from the update phase of the previous frame. If what you are aiming at is the latter, you absolutely don't need multiple contexts. You can clone the ImDrawData/ImDrawList (there's a CloneOutput helper) and give ownership of that data to the render thread, which will render it later.

Cloning is not as efficient as a hypothetical "flipping" of resources, but you are only going to copy your 200~400 KB worth of mostly contiguous data around, and that cost should be fairly negligible. In the future we could add support for N-buffers in ImDrawList, but that'll be more work.

Is it safe right now to have three separate ImGui contexts right now and use them in the way I'm hoping? Is it seriously messing with performance to do so?

It won't necessarily mess with performance, but inputs/interaction will likely be all broken and hard to fix, as each of the 3 contexts will maintain themselves with slightly time-offset inputs; those are not going to magically sync up.

slembcke commented 5 years ago

You can clone the ImDrawData/ImDrawList (there's a CloneOutput helper) and give ownership of that data to the render thread which will render it later.

Cloning is not as efficient as an hypothetical "flipping" of resources, but you are only going to copy your 200~400 KB worth of mostly contiguous data around and that cost should be fairly negligible.

What I do in my threaded renderer is have a mapped buffer available to my main thread. At the end of the frame, that buffer gets flipped to the render thread to be drawn or to have more stuff appended to it. This way you don't need to copy the ImGui buffer and then later copy it to GPU memory.

If anything, the only change I'd make to ImGui is a setter for the internal buffer pointer, or passing it into the render function. That way it could be buffered however the user wanted, possibly even directly into mapped memory. Maybe a bit overkill though, and it puts the onus on ImGui to play nicely with write-combined memory.

ocornut commented 5 years ago

If anything, the only change I'd make to ImGui is a setter for the internal buffer pointer, or passing it into the render function. That way it could be buffered however the user wanted, possibly even directly into mapped memory. Maybe a bit overkill though, and it puts the onus on ImGui to play nicely with write-combined memory.

Could you elaborate on this because I don't clearly understand this paragraph, thanks.

slembcke commented 5 years ago

As I understand it, ImGui has one big vertex/index buffer internally that it uses.

Roughly speaking now you do:

ImGui::Render();
ImDrawData* draw_data = ImGui::GetDrawData();
for (int i = 0; i < draw_data->CmdListsCount; i++)
{
    ImDrawList* cmd_list = draw_data->CmdLists[i];
    // Use cmd_list->VtxBuffer.Data, etc., to copy into another buffer or GPU memory.
    // Make draw calls.
}

Instead you could do something like:

// Hypothetical API: hand ImGui the output buffers up front.
// These pointers could point directly into mapped GPU memory to avoid an extra memcpy().
ImGui::SetBuffer(vertex_buffer, vertex_buffer_size, index_buffer, index_buffer_size);

ImGui::Render();
ImDrawData* draw_data = ImGui::GetDrawData();
for (int i = 0; i < draw_data->CmdListsCount; i++)
{
    ImDrawList* cmd_list = draw_data->CmdLists[i];
    // Data is already in your buffers (or GPU memory); calculate bind offsets from the pointers.
    // Make draw calls.
}

Though honestly I'm not sure I'd bother with it since ImGui is generally a debug API so the extra buffering/copying doesn't really matter...

ocornut commented 5 years ago

@slembcke

As I understand it, ImGui has one big vertex/index buffer internally that it uses.

That's not the case; we have multiple buffers because they are appended to out of order, and the buffers themselves are based on the z-order of the windows. We also need the multiple meshes to support 16-bit indices. We could in theory have a single vertex buffer and multiple index buffers if we enforced 32-bit indices.

Though honestly I'm not sure I'd bother with it since ImGui is generally a debug API so the extra buffering/copying doesn't really matter...

If we are talking about 200~300 KB worth of data, yes, it doesn't matter. Dense and large (hi-dpi) UIs with rounding and borders enabled can be bigger, but an incoming patch will reduce the vertex cost of rounding and borders.

EDIT: The most approachable and realistic improvement would be for imgui to hold N buffers (where N is specified by the user) so the user doesn't need to do a RAM->RAM->GPU copy in the case of a multi-frame pipeline, only RAM->GPU.

FunMiles commented 4 years ago

This thread hasn't seen any activity in a while, but I want to also vote for having a C++ approach of the type context.Button(...), and it seems to me that refactoring the code for this would be easier than passing the context as an argument. The reason is that, roughly speaking, one can substitute class ImGui for namespace ImGui, the GImGui global variable becomes a member of the class, and compilation goes on without trouble for all the functions that have been neatly defined with a fully qualified name ImGui::function(...). Only a few issues remain: all the static functions will have to be declared in the include file and the word static removed. They can be declared as private methods.

For context, I am new to ImGui, but I have a scientific computing app where it is the perfect type of quick-to-implement interface to the visualization. I do need to be able to open multiple windows each with their own GUI attached to the object they contain and not a global object. That is the reason I looked for a thread discussing global variables and found this one.

@ocornut , this is a great and impressive library!

For my immediate use, I will create my own branch where I will make a class.

FunMiles commented 2 years ago

I'm back on this subject because I started doing work with C++20 coroutines. The reason coroutines have an impact on this subject is due to the use of TLS. A coroutine can switch thread any time it co_await something. Thus there is a danger of calling some ImGui functions on one particular thread and then more functions on another thread from the same coroutine function. In such a case, the context associated with the ImGui calls would have been switched in a hard-to-notice manner.

The flip side is that one could also use something like work_thread = co_await gui_thread; to bring the GUI work to a specific thread with the context and then co_await work_thread; to continue on the original work thread.

Has anyone used ImGui in a coroutine context? I think it is one more reason to really have the context passed to all ImGui calls.

falkTX commented 1 year ago

Hi there! I was just told of this issue, didn't know it was open.

I have been using imgui for audio plugin GUIs for quite a while now, with 2 published projects making use of it so far:

The code for the imgui stuff is part of https://github.com/DISTRHO/DPF-Widgets; nothing special about it. I convert the events coming from DPF (the framework that deals with audio plugin stuff) into imgui ones. There is some seemingly odd/custom code in regards to the final drawing, but only because I wanted imgui-based widgets to be reusable as OpenGL-child things. Basically having a top-level widget based on OpenGL where we can draw anything, then in a subsection of the window drawing the imgui stuff (as done for the master_me histogram).

For this to work it assumes the GUI uses a single thread. Which so far has been the case for all hosts.

Besides the small changes on the rendering side (related to viewport), I am using imgui releases as-is.

falkTX commented 1 year ago

Regarding multiple imgui instances (better named contexts in my case), things do indeed just work. Here it is with a similar setup, but in the Cardinal project.

The outer Rack and menus are based on NanoVG + Blendish, with NanoSVG for SVG background handling. The modules in the screenshot are written for imgui:

[screenshot]

Dragnalith commented 1 year ago

Regarding this issue, I have made PR #5856 as an attempt to tackle the problem.

FunMiles commented 1 year ago

Regarding this issue, I have made PR #5856 as an attempt to tackle the problem.

@Dragnalith I am happy that you started what looks to me like a good attempt at this issue. I am commenting here for now because I don't want to add noise to the PR discussion. I will be trying out your code with Vulkan, as it is now my rendering platform of choice.

perkele1989 commented 1 year ago

The code references this issue with "Future development aims to make this context pointer explicit to all calls" in imgui.cpp:1147 (latest docking branch).

Is this still the plan?

ocornut commented 1 year ago

Is this still the plan?

Probably but not soon as it'll be a largely breaking change, perhaps for 2.0. In the meanwhile #5856 offers a way to convert the codebase with a script. Note that you can also #define GImGui to be a TLS variable which makes it possible to run multiple parallel contexts.

NostraMagister commented 4 months ago

The 'context argument per function' will indeed be an ABI breaking change, and maybe it doesn't have to be.

As an example, a main function with a context as an argument could become:

void RadioButton(context, ...)

Those that want to do the effort, new user or new code could use that new style.

To maintain backwards ABI compatibility with the current function style, the following could be envisaged.

#ifdef IM_KEEP_OLD_CONTEXT_ABI
void RadioButton(...) {
    // Retrieve the context here and pass it as an argument to the new function.
    RadioButton(ctx, ...);
}
#endif

Existing code should be able to run unmodified, IMO, with only the above extra function call as overhead. Existing code could even be partially upgraded, focusing only on functions that are called at very high frequency, if overhead were a problem, which I doubt. IMO, when one starts to upgrade, they will just undefine IM_KEEP_OLD_CONTEXT_ABI and the compiler will bring up all the lines that can be upgraded to the new style.

This is also easy to maintain for the ImGui team, because all new changes are made in the new-style function, and only if an extra function argument is needed does the old-style function need 30 seconds of extra work.

The TLS technique that is currently in place should still work 100%, unaltered, with this approach.

Furthermore, this should continue to work with SetCurrentContext(), because the ctx is retrieved inside the (now wrapper) old function, hence after the application has already called SetCurrentContext() on the outside if it uses multiple contexts.

As far as I have studied the code, there may be some issue to look at related to SetAllocatorFunctions(). Normally this function is called in pairs with SetCurrentContext(), and it is the application that knows when it changes context and the related allocator functions. But the new function style, with a context in each function, is actually a kind of more performant and straightforward SetCurrentContext() alternative, isn't it? Hence the allocator functions should also be set.

The solution could be (an opinion) to associate allocator functions with the context. With that concept the application is still in control of which allocator functions it wants to associate with each context. And that would be a performance gain versus calling SetAllocatorFunctions() all the time.

The above would also allow all ImGui demo code and ImPlot code to remain unchanged if there were a resource problem, because I can imagine that upgrading those would be a serious effort. Then, defining IM_KEEP_OLD_CONTEXT_ABI would be all it takes to keep them running.

From a documentation perspective this needs only a few lines of explanation, as well for existing as for new users.

Well, that is as far as I understood the ImGui code correctly. I am studying it from an SDL3/Vulkan back-end perspective.

With the above approach I am under the impression that ImGui would be fully multi-threaded, multi-context, multi-dll, docking and multi-viewport enabled in a relatively simple way. OK, I know, "simple" is easily said if you need to do the above for many hundreds of functions, but you get the picture.

That would, IMO, make Dear ImGui one of the most complete, performant and versatile immediate-mode GUI implementations.

My 5 cents, for what it's worth.

FunMiles commented 4 months ago

The 'context argument per function' will indeed be an ABI breaking change, and maybe it doesn't have to be.

As an example, a main function with a context as an argument could become:

void RadioButton(context, ...)

Those that want to do the effort, new user or new code could use that new style.

To maintain backwards ABI compatibility with the current function style, the following could be envisaged.

#ifdef IM_KEEP_OLD_CONTEXT_ABI
void RadioButton(...) {
    // Retrieve the context here and pass it as an argument to the new function.
    RadioButton(ctx, ...);
}
#endif

Existing code should be able to run unmodified, IMO, with only the above extra function call as overhead. Existing code could even be partially upgraded only focusing on functions that are called at very high frequency if overhead would be a problem, which I doubt. IMO, when one starts to upgrade they will just undefine IM_KEEP_OLD_CONTEXT_ABI and the compiler will bring up all lines that can be upgraded to the new style.

Using macros and defines is last century's solution 😝. First, this is C++, where overloading exists, which means that both

void RadioButton(Context* ctx, Arg1 arg1, Arg2 arg2);

and

void RadioButton(Arg1 arg1, Arg2 arg2) {
   Context* context = getGlobalContext();
   RadioButton(context, arg1, arg2);
}

can coexist without conflict. Secondly, there are namespaces to offer a cleaner approach, whereby a using namespace back_compat could bring in the context-free functions with a single line of code modification. Omitting the using line for users of the context-aware API would avoid any danger of accidental misuse of an undefined global context.

NostraMagister commented 3 months ago

Using macros adheres to what ImGui currently uses (see imgui.h and imgui.cpp). I suggested something within the style of the library.

ImGui is in a single namespace, and introducing using namespace back_compat would, IMO, break the current easy model where, per the docs, users are even invited to extend the ImGui namespace.

The macro also allows keeping the old-style code completely out of the compilation unit (many hundreds of functions) for new users and new code. ImGui is compiled into most applications as source code.

The new style being the default has, IMO, only advantages. See the many questions about multi-context, multi-thread and multi-dll that all in some way relate to that context pointer (as mentioned by the authors in the in-line docs).

Yet, as written, backwards compatibility must be available for large projects such as the ImGui and ImPlot Demo and existing user projects, allowing their authors to evaluate on a case per case basis if and when to upgrade them.

About your suggested 'misuse': being able to use the old and new style intermixed is IMO a big advantage, and there is no real misuse if everything works in symphony anyway. It is a no-brainer for the users and it allows for partial upgrades.

FunMiles commented 3 months ago

Using macro's is adhering with what ImGui uses currently (see imgui.h and imgui.c). I suggested something within the style of the library.

Macros can be useful, but they are inherently broken: they are not part of the language and can create headaches. It is, IMHO, best to avoid them and only use them for things the language cannot provide. In this case, namespaces are a perfect fit.

ImGui is in a single namespace and introducing using namespace back_compat would, IMO, break the current easy model where, per docs, users are even invited to extend the ImGui namespace.

using namespace ImGui::V1;

The macro also allows to keep the old style code completely out of the compile unit (many hundreds of functions) for new users and new code. ImGui is compiled into most applications as source code.

You can have a header that does not include the compatibility definitions if that is your issue, but reading and parsing all such simple functions as the one I have shown takes no time at all. Not a concern in my book.

The new-style being the default has IMO only advantages. See the many questions about multi-context, multi-thread, multi-dll that all in someway relate to that context pointer (as mentioned by the authors in in-line docs).

I don't see how that goes against the idea of a back-compatibility namespace. The default is the new-style. Just do not use the compatibility namespace.

Yet, as written, backwards compatibility must be available for large projects such as the ImGui and ImPlot Demo and existing user projects, allowing their authors to evaluate on a case per case basis if and when to upgrade them.

Easy with a namespace. If indeed the new style is the default, just add using namespace ImGui::V1; to those projects. You're done. And it is safer than the define. Why? That line can be located at a logical place: after the include, before use of old-style functions. With a macro-style #define, that definition has to be put before including the ImGui files. In my experience, having to make sure a #define appears before a specific #include is asking for problems, particularly in large, complex projects. One day, some other programmer will introduce some file that includes directly without the define, and you'll be wondering for hours where things went wrong and how to fix it.

About your suggested 'misuse'. Being able to use the old and new style intermixed is IMO a big advantage and there is no real misuse if everything works in symphony anyway. It is a no brainer for the users and it allows for partial upgrades.

You misunderstood me. There are cases where you want to mix, and the namespace approach does not impede it, but there are places where you don't. E.g. you can do partial upgrades: when you upgrade some part of your code, it would be best to make sure that within the upgraded parts only strictly explicit context functions are used (that's the avoiding-misuse comment). In such cases, not having a call to the old interface compile is a significant advantage, because if you forget, by accident, to pass the context, the compiler will stop you. In fact you gave me the greatest advantage of the namespace approach: you can, WITHIN A SINGLE FILE, have the old-style functions visible only within a certain scope while in other places they are not visible. That is because the using namespace... can be limited to a scope (within a function, or starting at a certain line, or within the scope of another namespace). With the macro approach, the declaration is either on or off for the whole file.

All in all, my opinions are stylistic, and I am not imposing them. They are, however, informed by 36 years of C++ programming on large, complex codebases.

GamingMinds-DanielC commented 3 months ago

In fact you gave me the greatest advantage of the namespace approach: you can, WITHIN A SINGLE FILE, have the old-style functions visible only within a certain scope while elsewhere they are not visible. That is because the `using namespace ...` can be limited to a scope (within a function, or starting at a certain line, or within the scope of another namespace). With the macro approach, the declaration is either on or off for the whole file.

Many true things here, and namespaces are great, but they do have their limits. While you can import definitions from another (nested or not) namespace into a namespace by using it within the scope of the target namespace, you can't declare or extend a namespace from within the scope of a function. You can't declare a namespace alias that refers to a combination of more than one namespace. So if you need or want to avoid a using-directive, you can't get a single qualifier to refer to ImGui::* and ImGui::compat::* declarations combined. Not in function scope, at least.

Just some nitpicking, not an endorsement of macros. ;)

ocornut commented 3 months ago

The solution could be (an opinion) to associate allocator functions with the context. With that concept the application is still in control of what allocator functions it wants to associate with each context.

Allocators cannot conveniently be associated with a context, as it would mean e.g. ImVector<> or other leaf helpers would need a context, or we would have to stop having them use our allocators.

And that will be a performance gain versus calling SetAllocatorFunctions() all the time.

???

With the above approach I am under the impression that ImGui would be fully multi-threaded, multi-context, multi-dll, and docking and multi-viewport enabled, in a relatively simple way.

To clarify: this would only allow different contexts to be used in parallel in multiple threads, which is already possible with a TLS variable. In no case is there even remotely a possibility that multiple threads would be able to submit or interact simultaneously with the same Dear ImGui context. People seem to be mostly fighting for a feature that's already there, while misunderstanding the actual benefits. The main benefit of explicit contexts is that a few things would be "neater" (by some vague ideal software-engineering definition of neater); it will not enable anything new.

The proposal in #5856 already offers a way to keep an implicit-context API (basically those inline wrappers you suggest), and it is all generated.

The new-style being the default has IMO only advantages.

I personally see one strong disadvantage: it will make Dear ImGui accessible from far fewer leaf points in a codebase. Technically people could have a global function to retrieve their main context, but I know that in reality many people resist this and would therefore reduce their own access to Dear ImGui. I may personally have the urge to help people fight this by keeping an ImGui::GetCurrentContext() function solely for this purpose. I also noticed while debugging third-party codebases that when GImGui was defined as a function, it made debugging noticeably more difficult, since the pointer is not accessible at all times.

Macros are sometimes a better fit for the needs of language-binding generators and use from C, but I haven't explored this particular topic in depth so I don't know what's best. It's a minor detail either way that may be decided when the time comes, but I appreciate the different ideas exposed here.

I think our most likely bet is that we facilitate making #5856 an official thing, and steer toward making backends more usable with multiple contexts.

NostraMagister commented 3 months ago

Yes, https://github.com/ocornut/imgui/pull/5856 is even better than just adding the ctx to every function. Thanks for drawing attention to that post. I will follow that thread instead, as I can see thought and effort have been put into it.

FunMiles commented 3 months ago

Many true things here and namespaces are great, but they do have their limits. While you can import definitions from another (nested or not) namespace into a namespace by using it within the scope of the target namespace, you can't declare or extend a namespace from within the scope of a function.

True, but you cannot add any outwardly visible functions from within the scope of a function anyway. Do you have a concrete example of what you are thinking of? I am not seeing an actual limitation, but I may not be thinking of some of your usage cases.

You can't declare a namespace alias to refer to a combination of more than one namespace. So if you need or want to avoid using a namespace, you can't get ImGui::* to refer to ImGui::* and ImGui::compat::* declarations combined. Not in function scope at least.

Right but you can mimic the same effect with something like this:

```cpp
namespace A {
    int f(int) { return 1; }
}

namespace B {
    int f(int, double) { return 2; }
}

namespace C {
    using namespace A;
    using namespace B;
}

void user() {
    using namespace C;
    f(3);       // calls A::f
    f(3, 1.0);  // calls B::f
}
```

Just some nitpicking, not an endorsement of macros. ;)

Healthy dialog :)

I think our most likely bet is that we facilitate making https://github.com/ocornut/imgui/pull/5856 an official thing, and steer toward making backends more usable with multiple contexts.

Totally agree. I have fully embraced the code from https://github.com/ocornut/imgui/pull/5856 but had to tune the backend. So for me the last part of your statement is the currently incomplete part for a smooth use of multiple contexts in a codebase. I only worked on the backend for Vulkan/glfw and am unable to work on/test all the other combinations.

Could we have https://github.com/ocornut/imgui/pull/5856 merged into the mainline code (maybe in a branch), with the non-explicit-context functions accessible via namespace or any other approach, and then start an evolution of the backend codebase toward a clean context-aware API?

adrianboyko commented 3 months ago

To clarify: this would only allow different contexts to be used in parallel in multiple threads, which is already possible with a TLS variable. In no case is there even remotely a possibility that multiple threads would be able to submit or interact simultaneously with the same Dear ImGui context. People seem to be mostly fighting for a feature that's already there, while misunderstanding the actual benefits. The main benefit of explicit contexts is that a few things would be "neater" (by some vague ideal software-engineering definition of neater); it will not enable anything new.

I have high hopes that I can one day make Dear ImGui the go-to GUI for the Pony programming language. Since Pony is actor-oriented, its threading story is a bit different: Pony actors are single-threaded, but the thread comes from a pool and can differ from invocation to invocation. From the perspective of a high-performance actor-oriented system, I suspect it would be better to avoid TLS if possible. Instead, a Pony actor that has a GUI would save its context in its private state and would simply specify it in calls to Dear ImGui.

But, as you've said, this probably doesn't enable anything new. I suppose the actor could, instead, assign its context to the TLS variable at the beginning of each invocation and null it at the end (to prevent it from leaking out to some other actor). I'm just concerned about the performance impact of frequent TLS assignments that could be avoided.

Just wanted to add this perspective to the discussion in case it makes any difference.

FunMiles commented 3 months ago

To clarify: this would only allow different contexts to be used in parallel in multiple threads, which is already possible with a TLS variable. In no case is there even remotely a possibility that multiple threads would be able to submit or interact simultaneously with the same Dear ImGui context. People seem to be mostly fighting for a feature that's already there, while misunderstanding the actual benefits. The main benefit of explicit contexts is that a few things would be "neater" (by some vague ideal software-engineering definition of neater); it will not enable anything new.

I have high hopes that I can one day make Dear ImGui the go-to GUI for the Pony programming language. Since Pony is actor-oriented, its threading story is a bit different. Pony actors are single threaded but the thread comes from a pool and can differ from invocation to invocation. From the perspective of a high-performance actor-oriented system, I suspect that it would be better to avoid TLS if possible. Instead, a Pony actor that has GUI would save its context in its private state and would simply specify it in calls to Dear ImGui.

I am glad you bring up the issues of TLS for your usage. TLS is also not appropriate in some C++ coroutine systems. A coroutine can easily be moved from thread to thread and thus there is no direct binding of any TLS variable to a running coroutine.

@ocornut is partially right: if you can have several contexts via TLS, you can work with different contexts in parallel. But he totally misses the point for many usages in fully multi-threaded code. GUI operations may need to be done on any thread of a pool, and this is where explicit contexts are the reliable, conceptually clear approach (that's more than "neater" or vague), while TLS is not. But even with explicit contexts, you eventually have to render the data from those contexts. That is where a clear, easy-to-use API must be introduced for the backends. He has mentioned the backends and I am glad for that. I just hope we can move forward with a bit more enthusiasm. The explicit-context PR he mentioned (and which I use) has been in purgatory way too long. Unless it is brought in (with all the care needed) soon, the work on the backends will never take place. It's a chicken-and-egg situation.

ocornut commented 3 months ago

It doesn't matter if something is "conceptually clear" if it also comes with another set of problems or challenges.

There are hundreds of useful topics that have been open for many years. I keep them open because I am interested in them. I would happily merge more third-party PRs if they came without side effects or faults, but guess what: they almost always come with problems, and it almost always takes me more time to finish a PR than it took the submitter to make the initial PR. This is what it takes. It's already my life every day to wade through literally a thousand open topics (not counting another thousand in my own notes) competing for attention. I only have one instance of myself and I don't know how to do better. The endless commentary is not necessarily helping.

FunMiles commented 3 months ago

@ocornut I appreciate all the work you do and understand the difficulty. Wouldn't there be some possible ways to ease this problem? Could others help you do that work? Could bringing what I think is such an important feature for many and for the future of the project into a side branch (like the docking branch), avoiding side effects or faults for users who do not need multiple contexts, be a viable approach?

ocornut commented 3 months ago

Could bringing what I think is such an important feature for many and for the future of the project into a side branch (like the docking branch), avoiding side effects or faults for users who do not need multiple contexts, be a viable approach?

That's close to what #5856 is (and it is a great, polished PR which I appreciate very much, which is why I haven't ignored it). If it gets finished and well tested, I think we can consider making it an official in-repo script, tested on CI. But all of this needs further polishing work (e.g. backends, demos), as you stated yourself. Honestly, it seems a little low-priority to me to move small mountains to avoid what is in most cases a function call. I don't think it is a realistic use case to run dear imgui from coroutines with one dear imgui context per thread allocated to process coroutines; you probably still need some association logic in place. Either you have a single thread running the coroutines, and then there's no need to do anything; or you have multiple threads running the coroutines, which means you have multiple contexts and you are likely to want to associate coroutines with a selected context based on some high-level semantic/classification, at which point it may be affordable to just call a TLS setter if using a manually implemented coroutine system. Either way, I suspect at this point this is all a theoretical problem.

(That said, a possible sponsor will soon bring me in to investigate similar issues so it may justify liberating the time to investigate it)

FunMiles commented 3 months ago

I don't think it is a realistic use case to run dear imgui from coroutines with one dear imgui context per thread allocated to process coroutines; you probably still need some association logic in place. Either you have a single thread running the coroutines, and then there's no need to do anything; or you have multiple threads running the coroutines, which means you have multiple contexts and you are likely to want to associate coroutines with a selected context based on some high-level semantic/classification, at which point it may be affordable to just call a TLS setter if using a manually implemented coroutine system. Either way, I suspect at this point this is all a theoretical problem.

I am using coroutines with imgui. It is unreasonable to make a coroutine system aware of some imgui TLS variable, forcing the coroutine system to set TLS variables for any library (not just imgui) when resuming a coroutine on a thread. A coroutine system and any library's use are orthogonal concerns. A given coroutine is associated with a context simply by having that context as a variable of the coroutine. The following skeletal example shows how the context-aware code can be used cleanly; the comments show where TLS causes trouble:

```cpp
thread_local ImGuiContext* tls_context;

task<void> visualization(Rendering& rendering) {
    auto ctx = ImGui::CreateContext(); // On some thread A

    while (not window_closed) {
        tls_context = ctx; // Has no effect in the explicit-context imgui code; just to illustrate TLS vs coroutines.
        [...]
        // Run calls to the backend on the main thread and wait for them to complete.
        co_await on_main_thread([&] {
            // This is running on the main thread.
            ImGui_ImplVulkan_NewFrame(ctx);
            ImGui_ImplGlfw_NewFrame(ctx);
            ImGui::NewFrame(ctx);
        });
        // The coroutine can now be running on another thread B than the original A.
        assert(tls_context == ctx); // This assert will fail any time the coroutine is moved between threads.
        // We don't want to have to set TLS variables after every `co_await`: it is error-prone and
        // unnecessary if the context is explicit in the function calls that need it.
        [...]
    }
}
```
Dragnalith commented 3 months ago

I am not sure why there are long debates:

So whatever people's opinion on TLS being good enough or not, it has no impact on the current course of action.


zachmmiller commented 3 days ago

I am using version 1.90.0 and I am trying to implement the thread-local wrapper for GImGui (as mentioned here and in imgui.cpp). I have this in my imconfig.h:

```cpp
struct ImGuiContext;
extern thread_local ImGuiContext* MyImGuiTLS;
#define GImGui MyImGuiTLS
```

and I have this define in my main.cpp file:

```cpp
#define MyImGuiTLS
```

but I am getting this linker error:

```
Undefined symbols for architecture arm64:
  "thread-local wrapper routine for MyImGuiTLS", referenced from:
      ImGui::MemAlloc(unsigned long) in imgui.o
      ImGui::MemFree(void*) in imgui.o
      ImFormatStringToTempBufferV(char const, char const, char const, char) in imgui.o
      ImGui::SetNextItemWidth(float) in imgui.o
      ImGui::CalcListClipping(int, float, int, int*) in imgui.o
      GetSkipItemForListClipping() in imgui.o
      ImGui::GetCurrentContext() in imgui.o
      ...
ld: symbol(s) not found for architecture arm64
```

Anyone know what I am doing wrong here?

Thanks in advance!

NostraMagister commented 2 days ago

@zachmmiller

In my .cpp I have this:

```cpp
// Declare ImGuiTLS (Thread Local Storage); it is initialized in ImGuiInitialize (see below).
thread_local ImGuiContext* ImGuiTLS = nullptr;
```

and after initializing like this:

```cpp
// Setup Dear ImGui context
this->gui_context = ImGui::CreateContext();
```

Then I do:

```cpp
// To make ImGui thread-safe for contexts we must copy the context to a thread-local variable.
ImGuiTLS = this->gui_context;
```

That works. Grtz


ocornut commented 2 days ago

and I have this define in my main.cpp file:

```cpp
#define MyImGuiTLS
```

That looks like an unused macro, and it means you are never declaring an instance of your variable.

Your .cpp file should contain the variable instance, aka:

```cpp
thread_local ImGuiContext* MyImGuiTLS;
```
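For reference, the complete wiring then looks like this (a sketch following the comment in imgui.cpp; `MyImGuiTLS` is the name you chose above):

```cpp
// imconfig.h -- declaration only, visible to every translation unit:
struct ImGuiContext;
extern thread_local ImGuiContext* MyImGuiTLS;
#define GImGui MyImGuiTLS

// exactly one .cpp file -- the definition the linker was missing:
thread_local ImGuiContext* MyImGuiTLS = nullptr;
```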

Anyhow, this is OFF TOPIC here; please raise a separate issue to discuss this if you have more problems.