AlexCharlton / Hypergiant

An OpenGL-based game library for CHICKEN Scheme
BSD 2-Clause "Simplified" License

Recommendations for integrating with UI libraries? #12

Open YellowApple opened 7 years ago

YellowApple commented 7 years ago

Basically, I'd like to use Nuklear (via the egg that wraps it), but the egg in question seems to be written with the assumption that I'm using the glfw3 egg directly, and I'm not sure exactly where to stick the Nuklear-specific code in the context of a Hypergiant application.

Approaches I've considered:

I'm admittedly pretty new to this OpenGL thing, and I'm sure I'm going about this entirely the wrong way. I'm not even sure if this is the right place to ask this sort of question (maybe I should bug folks on the Nuklear side of the world instead?), but hopefully it's a good start to getting some kind of documentation or best practice or somesuch.

AlexCharlton commented 7 years ago

Hi @YellowApple! This is actually the first I've heard of the Nuklear egg, but thanks for bringing it to my attention!

I would guess that your first approach is probably the way to go, and I'd be interested in investigating that a bit further. Can you give me a concrete example of what you tried? We can go from there.

YellowApple commented 7 years ago

I actually managed to narrow it down to the fact that Hypergiant declares window hints for OpenGL 3, while the nuklear egg only exposes a backend for OpenGL 2. I ended up recreating a stripped-down version of window.scm to verify this; leaving out the OpenGL-3-related hints seems to do the trick. I haven't tested further so far (namely: whether or not I'm still able to render scenes/cameras, and whether or not this messes with capturing non-UI events), but it's a start.
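For reference, the stripped-down version boils down to something like this (a minimal sketch; the size and title are made up):

```scheme
(use glfw3)

;; Plain make-window with no context-version-major/minor or
;; core-profile hints, so GLFW hands back a legacy context that the
;; nuklear egg's GL2 backend is happy with.
(make-window 640 480 "nuklear test")
```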

I think I just need to replace the existing nuklear-glfw-opengl2 backend with a nuklear-glfw-opengl3 backend; the upstream C headers have very similar APIs, so hopefully it won't require too much mucking about in C.

In the process of investigating the feasibility of the third approach, though, I ran into another question: what's the best way to go about ray-based picking? I think I understand the general concept, but am I able to inspect the camera somehow to get a list of the nodes inside its frustum? I have a sphere/ray intersection test in place, but I can't seem to figure out a good way to query Hyperscene for the nodes that are actually visible. Or do I have to manage this separately from Hyperscene? The example on your blog (for the go board) helps, but it seems to rely on the fact that everything's on the same plane, whereas I need something that works more generally.
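For concreteness, the test I have looks roughly like this (a minimal sketch; points are plain (x y z) lists, which may not match what Hyperscene uses internally):

```scheme
;; Does the ray from `origin` through `far` (e.g. the two points from
;; get-cursor-world-position) pass within `radius` of `centre`?
;; Standard quadratic discriminant check. Note that this treats the
;; ray as an infinite line; solve for t in [0, 1] if you only want
;; hits between the near and far points.
(define (v- a b) (map - a b))
(define (dot a b) (apply + (map * a b)))

(define (ray-intersects-sphere? origin far centre radius)
  (let* ((d (v- far origin))        ; ray direction
         (m (v- origin centre))     ; origin relative to sphere centre
         (a (dot d d))
         (b (* 2 (dot m d)))
         (c (- (dot m m) (* radius radius))))
    (>= (- (* b b) (* 4 a c)) 0)))  ; discriminant >= 0 means a hit
```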

AlexCharlton commented 7 years ago

Hypergiant should generally work with OpenGL 2 (although not all of its features will). You can modify what sort of context Hypergiant creates when you call start, which has the same signature as glfw3's make-window (http://api.call-cc.org/doc/glfw3/make-window). In other words, you should be able to pass context-version-major: 2 to start if that's the sort of context you want. (I actually suspect there might be a bug that prevents one from achieving this, but I'm looking into it. [Update: there is no bug, it just works])
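Concretely, that would look something like this (a sketch; the size and title are placeholders):

```scheme
(use hypergiant)

;; start forwards make-window's keyword arguments, so requesting a
;; 2.x context is just a matter of passing the version hints:
(start 640 480 "nuklear test"
       context-version-major: 2
       context-version-minor: 1)
```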

Ultimately having an OpenGL 3 backend is probably the preferred route. OpenGL 3 is almost a decade old now, so there's not much reason not to be using it. In fact, in the Apple world there is nothing else to use!

As for picking an object from a point on the screen: as I'm sure you saw, get-cursor-world-position gives you two points (near and far) that make up a ray through a given viewport. Figuring out what nodes intersect this ray would indeed be something that Hyperscene would be good at doing (but it doesn't know how to do that yet). Barring adding that feature to Hyperscene, you should probably be keeping track of what nodes you have at the Scheme level of things so that you can... do things to them. Given this list, you could do the intersection calculation yourself (many applications lend themselves to some heuristic for doing so relatively efficiently). Apologies that this isn't a very satisfying answer 😅
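To sketch what I mean (hypothetical names; ray-intersects-sphere? stands in for whatever sphere/ray test you already have):

```scheme
(use srfi-1) ; first, second, third, filter-map

;; Pair each node with a bounding sphere as you create it, then test
;; every entry against the picking ray.
(define scene-nodes '())

(define (track-node! node centre radius)
  (set! scene-nodes (cons (list node centre radius) scene-nodes)))

(define (nodes-under-cursor near far)
  ;; near/far as returned by get-cursor-world-position
  (filter-map (lambda (entry)
                (and (ray-intersects-sphere? near far
                                             (second entry) (third entry))
                     (first entry)))
              scene-nodes))
```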