YellowApple opened this issue 7 years ago
Hi @YellowApple! This is actually the first I've heard of the Nuklear egg, but thanks for bringing it to my attention!
I would guess that your first approach is probably the way to go, and I'd be interested in investigating that a bit further. Are you able to give me a concrete example of what you tried, and we can go from there?
I actually managed to narrow it down to the fact that Hypergiant declares window hints for OpenGL 3, while the nuklear egg only exposes a backend for OpenGL 2. I ended up recreating a stripped-down version of `window.scm` to verify this; leaving out the OpenGL-3-related hints seems to do the trick. I haven't tested further yet (namely: whether I'm still able to render scenes/cameras, and whether this messes with capturing non-UI events), but it's a start.
I think I just need to replace the existing `nuklear-glfw-opengl2` backend with a `nuklear-glfw-opengl3` backend; the upstream C headers have very similar APIs, so hopefully it won't require too much mucking about in C.
In the process of investigating the feasibility of the third approach, though, I ran into another question: what's the best way to go about ray-based picking? I think I understand the general concept, but am I able to inspect the camera somehow to get a list of the nodes inside its frustum? I have a sphere/ray intersection test in place, but I can't seem to figure out a good way to query Hyperscene for the nodes that are actually visible. Or do I have to manage this separately from Hyperscene? The example on your blog (for the go board) helps, but it seems to rely on the fact that everything's on the same plane (whereas I need something that works more generally).
Hypergiant should generally work with OpenGL 2 (although not all of its features will). You can modify what sort of context Hypergiant creates when you call `start`, which has the same signature as http://api.call-cc.org/doc/glfw3/make-window . In other words, you should be able to pass `context-version-major: 2` to `start` if that's the sort of context you want. (I actually suspected there might be a bug that prevents one from achieving this, but I looked into it [Update: there is no bug, it just works].)
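A minimal sketch of that (assuming `start` does forward `make-window`'s keyword arguments as described above, and using CHICKEN 4-style `use`; the minor version here is an assumption):

```scheme
;; Ask Hypergiant for an OpenGL 2.1 context instead of its OpenGL 3 default.
;; context-version-minor: 1 is a guess; adjust to whatever your nuklear
;; backend actually needs.
(use hypergiant)

(start 640 480 "nuklear-test"
       context-version-major: 2
       context-version-minor: 1)
```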
Ultimately having an OpenGL 3 backend is probably the preferred route. OpenGL 3 is almost a decade old now, so there's not much reason not to be using it. In fact, in the Apple world there is nothing else to use!
As for picking an object from a point on the screen:
As I'm sure you saw, `get-cursor-world-position` gives you two points (`near` and `far`) that make up a ray that goes through a given viewport position. Figuring out which nodes intersect this ray would indeed be something that Hyperscene would be good at doing (but it doesn't know how to do that yet). Barring adding that feature to Hyperscene, you should probably be keeping track of what nodes you have at the Scheme level of things so that you can... do things to them. Given this list, you could do the intersection calculation yourself (many applications lend themselves to some heuristic for doing so relatively efficiently). Apologies that this isn't a very satisfying answer 😅
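For the intersection test itself, a self-contained sketch (hypothetical names; plain 3-element vectors rather than any particular math egg's point type) could look like:

```scheme
;; Ray/sphere test: does the line through NEAR and FAR (the two points
;; from get-cursor-world-position) pass within RADIUS of CENTER?
;; Note this tests the infinite line; clamp the quadratic's roots to
;; [0, 1] if you only want hits between the near and far planes.
(define (vec- a b)
  (vector (- (vector-ref a 0) (vector-ref b 0))
          (- (vector-ref a 1) (vector-ref b 1))
          (- (vector-ref a 2) (vector-ref b 2))))

(define (dot a b)
  (+ (* (vector-ref a 0) (vector-ref b 0))
     (* (vector-ref a 1) (vector-ref b 1))
     (* (vector-ref a 2) (vector-ref b 2))))

(define (ray-hits-sphere? near far center radius)
  (let* ((d (vec- far near))            ; ray direction (unnormalized)
         (m (vec- near center))         ; from sphere center to ray origin
         (a (dot d d))
         (b (dot m d))
         (c (- (dot m m) (* radius radius)))
         (disc (- (* b b) (* a c))))    ; quarter-discriminant of at^2+2bt+c
    (>= disc 0)))
```

With that in hand, picking is just mapping this predicate over whatever node list you keep on the Scheme side and taking the nearest hit.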
Basically, I'd like to use Nuklear (via the egg that wraps it), but the egg in question seems to be written with the assumption that I'm using the glfw3 egg directly, and I'm not sure exactly where to stick the Nuklear-specific code in the context of a Hypergiant application.
Approaches I've considered:

1. Adding `(backend:new-frame)`, `(nk:window-begin ...)`, `(nk:window-end)`, and `(backend:render!)` calls to `start`'s `post-render` hook (as well as `(backend:init! ...)`/`(backend:init-font!)` calls in `init` and `(backend:shutdown!)` in `cleanup`). This miraculously doesn't crash anything, and Hypergiant seems to have no problems, but Nuklear windows/widgets don't show up this way, and the application spams my STDOUT with `GL error: invalid operation`.
2. The same calls in `pre-render` instead of `post-render`. Same issue.
3. A `ui` scene, which is probably the "right" way to go about this, but I haven't the slightest idea how to bend either `ui` or Nuklear to satisfy the other's expectations. I'm guessing I need to turn a Nuklear context into a Hyperscene node hierarchy somehow, but Nuklear (or at least the particular backend exposed by the `nuklear` egg) wants to render directly to a viewport or something. Maybe I just need to write a different Nuklear backend that turns windows/widgets into Hyperscene/Hypergiant nodes, and then add them to the `ui` scene? Or am I overthinking this?

I'm admittedly pretty new to this OpenGL thing, and I'm sure I'm going about this the entirely wrong way. I'm not even sure if this is the right place to ask this sort of question (maybe I should bug folks on the Nuklear side of the world instead?), but hopefully it's a good start toward getting some kind of documentation or best practices or somesuch.
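For concreteness, the first approach amounts to something like the following (an untested sketch: the `init:`/`post-render:`/`cleanup:` hook keywords are assumed from Hypergiant's `start`, the module names and `nk:`/`backend:` prefixes are guesses at the nuklear egg's layout, and the `...` elisions stand for the backend's actual arguments):

```scheme
;; Sketch only — module names and hook keywords are assumptions.
(use hypergiant
     (prefix nuklear nk:)
     (prefix nuklear-glfw-opengl2 backend:))

(start 640 480 "nuklear-in-hypergiant"
       init: (lambda ()
               (backend:init! ...)       ; args elided, as in the egg's examples
               (backend:init-font!))
       post-render: (lambda ()
                      (backend:new-frame)
                      (nk:window-begin ...)
                      (nk:window-end)
                      (backend:render!))
       cleanup: (lambda ()
                  (backend:shutdown!)))
```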