cesss opened this issue 5 years ago
You can use JSON-RPC (over HTTP, using civetweb: https://github.com/civetweb/civetweb) or rpclib (https://github.com/rpclib/rpclib) to communicate with another app.
I'm using JSON-RPC + civetweb for our internal projects and it works very well.
A PR adding support for JSON-RPC or rpclib to gltf-insight is always welcome.
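For reference, JSON-RPC over HTTP is just a small JSON envelope posted to the server, so the wire format is trivial to inspect and debug. The method name and parameters below are hypothetical illustrations, not an existing gltf-insight API:

Request:

```json
{ "jsonrpc": "2.0", "method": "get_node_transform", "params": { "node": 3 }, "id": 1 }
```

Response:

```json
{ "jsonrpc": "2.0", "result": { "translation": [0.0, 0.0, 0.0] }, "id": 1 }
```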
> You can use JSON-RPC (over HTTP, using civetweb: https://github.com/civetweb/civetweb) or rpclib (https://github.com/rpclib/rpclib) to communicate with another app. I'm using JSON-RPC + civetweb for our internal projects and it works very well. A PR adding support for JSON-RPC or rpclib to gltf-insight is always welcome.
I was not thinking of inter-application communication, but about making it very easy to add new functionality to gltf-insight without cluttering the main GUI or the source code repository. So, I'm imagining a menu called "Modules" or "Plugins" that you can populate by building your addons at the same time as gltf-insight.
Imagine somebody creates a module called "UV Editor". If you download both gltf-insight and the uveditor module source code, and build both at the same time, when you start gltf-insight you would see that the "Modules" menu has a new entry called "UV Editor". When you click on it, a new window opens, with its own GUI, that lets you edit the UV coordinates of the model (or perhaps of only the geometry selected at that moment, for more flexibility). When you finish editing the UV coordinates, you close the "UV Editor" window and get back to the main gltf-insight window.
In this way, lots of cool functionality could be added: model generators, animation editors, renderers, etc, etc, etc...
Even some of the new feature requests posted today could be good candidates for implementation as modules rather than in the main gltf-insight GUI and code (for example, creating morphs or a material editor could be a task for a module, leaving the main gltf-insight GUI and code very simple).
Of course the inter-application communication you mention could also be used, but it would be harder (if you want to create a UV editor, you would need to create a brand new application from scratch, which is a hard task). OTOH, a module/plugin architecture would give you everything ready to work, even creating a window with an OpenGL 3.x context for you, along with the full API for creating/modifying all the data in the current gltf-insight document.
A related topic would be whether to support dynamically loaded plugins or not, but I think it would be wise to start with static compile-time plugins only, because dynamically loaded ones would add complexity and could be done later if desired (building the plugins as DSOs, implementing multiplatform DSO loading, choosing what folder to put plugins in... all of that adds complexity and can be decided later, if at all). So, I'd prefer to start with static compile-time plugins only (the only added "complexity" would be implementing an API for creating/modifying all the glTF data, and getting the CMake files ready to build user-added modules together with gltf-insight).
Many, many ideas come to my mind, and I would start on them today if this module/plugin architecture were possible. OTOH, with JSON-RPC I would perhaps need to duplicate the gltf-insight code so that I have two different gltf-insights that can talk to each other.
@cesss I think you are putting the cart before the horse here.
This software was thought as a lean and mean tool to inspect the content of glTF assets, and be able to see animation data, and do small manipulation on them.
The need stems from the fact that in a glTF asset, animations are stored in the binary buffers, and thus aren't human readable without a lot of decoding. I don't think a plugin system is a bad idea, but it would require quite a bit of work that is not useful right now. (DSOs are not a problem; it's actually pretty easy to deal with a shared interface. The data layout inside the code, however, was not designed to be shared with other things. Otherwise, the UI would be trivial to extend, as it's 100% built with ImGui.)
IMHO, what you are starting to describe is a fully fledged 3D editor that you would want to bolt onto glTF insight. I would suggest you look at Blender version 2.80. It hasn't been released as "stable" software yet, but the first Release Candidate is out. It integrates a glTF importer and exporter officially developed by members of the Khronos Group, has almost all the features you describe in your post, and is easily extensible via Python scripting.
@Ybalrid I understand. However, I'm going to start a new project, and I really need the system I described above, whether it's done with gltf-insight or any other open source software. Blender is not an option because its design is radically opposed to the Mac-like idea of simplicity that I always try to follow when writing software. Moreover, taking Blender in its current state would be not a cart without horses, but a cart with ten thousand horses, and I'd need months to master its source code design.
I need morph blending, skinning, PBR, animation playback, and everything with a simple, clean, minimal design, in C++. Correct me if I'm wrong, but gltf-insight is the only open source tool that has these features while being simple at the same time.
I understand, however, that implementing what I'd need at this moment would be a big push against the design criteria you are following, so I understand it might not be convenient at this time.
But anyway, I can try advancing in the direction I explained above, and if at some point you consider it worthy of being pushed back into gltf-insight, that would be great.
Yes, what I described is a system that might become a fully fledged 3D suite in the future, but note that the concept is closer to one of those music apps that have a main sequencer where you can plug tons of synths, effects, generators, I/O modules, etc, etc... so, I think the approach is different from the usual 3D suites.
@cesss Don't get me wrong, contributions are always welcomed. I just wanted to clarify the state of this project because you were asking for quite a lot of things in that last post! 😅
> I need morph blending, skinning, PBR, animation playback, and everything with a simple, clean, minimal design, in C++. Correct me if I'm wrong, but gltf-insight is the only open source tool that has these features while being simple at the same time.
Yes indeed. Most tools you'll find out there on GitHub are probably JavaScript applications, not C++. C++ projects with these features are probably more involved and more complex than this program. (There's one "mesh viewer" but it's using Ogre 2.x; I'm quite familiar with it as it uses some of my own code. There's also this sample program, but it's built with Vulkan. You probably need to learn a lot more about Vulkan before that code can be of any use to you.)
If you want something that has all the animation features and a PBR renderer, I can see that glTF-insight is 100% your best option as a base to build something, but right now, please understand that it's still really early in its development, and probably needs some refactoring and documentation work before what you want here is doable.
Considering an eventual "plugin" system... it is not the end of the world. Just so you get a general idea of how it could be done, here are the changes needed in glTF-insight:

- Define interfaces to the `app` class and to the `mesh`/`animation`/`gltf-node` classes.
- (Maybe use SDL2 instead of GLFW as the underlying windowing and OS interaction library, but that's not a huge deal; it's like 20 lines of C++ and `#ifdef __OS_NAME__` to deal with that on Windows and Unix-likes.)
- The `app` class loads the DSO, tries to find a specific "plugin initialization" function, and calls it. The plugin creates instances of its own implementations of these interfaces and returns pointers to them. There are some specific considerations on the Windows platform that need to be respected, but it's pretty easy to make a universal macro to put in front of function declarations.

And here's how involved creating a plugin compatible with this would be:

- Build against the same ImGui (use a `git submodule` to check out ImGui here). A CMake file could be provided in the "template plugin" that uses your checkout of the glTF-insight source code to be sure everything is OK here. I don't know if there's a way to have access to the ImGui headers without carrying the implementation twice; there could be an issue with the static nature of the GUI initialization there.
- Implement the entry point (an `extern "C"` function) that `new`s an instance of the overloaded classes, casts them back to a pointer to the interface type, and returns it.

This is not a huge amount of work on the glTF-insight side of things (one day at most if I had to do it), but I would wait until I have implemented the features that lighttransport actually needs before adding this kind of stuff, and as I'm likely to change things as they are internally, I cannot promise any API stability for this as of now. (I see at least one item on my todo list that may require me to change both some of the code that computes skinning and the way skinning joints and weights are handled.)
My point is, if you can wait a bit I may consider this; I'd actually like to be able to do this kind of stuff too, but I only started working on this 2.5 months ago, and I haven't completed the list of features I need, so I cannot take too many tangents.
P.S.: If you ever want to discuss that new project you want to start with somebody familiar with glTF and related things, you can DM me on Twitter, or use the e-mail address in my GitHub profile.
@Ybalrid I think we are ourselves in a race condition 🤣 because you obviously need to finish your first milestones first, while I need to start my project this week. And it is a "race condition" because in the end we could wind up with repositories that are not API/binary compatible, as you wisely noted.
Anyway, I think it's worth it: I believe the best thing I can do is try to get to a proof-of-concept and then publish it in a repository, so that you can look at it, share your thoughts, and even suggest different directions, or adopt it in your repository if you like it.
I just saw you are implementing selection in your mouse pick branch. Do you plan to merge this branch soon? I'm asking because I'd obviously prefer working when selection is operative (at least in a minimal status).
I'll try to keep my code in separate files from yours, in order to minimize "surgery" and make it easier to remain compatible.
Also, thanks a lot for all the very detailed ideas you posted. I'll try to make some of them even simpler: for example, if I can avoid exposing ImGui to plugins, I'll avoid it (I'm hoping to expose only an AntTweakBar-like API that manages simple ImGui controls in a transparent way... this would mean plugins have a simple GUI, but for the moment I think that could be perfect).
If you merge the mouse pick branch one of these days, I'll start from there. Otherwise, I'll start from the current status and merge later.
Oh, one thing I forgot: is OpenGL code confined to a few files, or is it spread throughout the code? (I ask because it might be a good idea to introduce the idea of a gfx backend, in order to make it possible to support Vulkan in the future, even if today we only have the OpenGL backend.)
A few things I have to say:

- The mouse picking branch will be merged into `devel` once it's actually working. Probably this evening or tomorrow.
- Work from the `devel` branch if you want to be able to contribute. When you give a patch to some project, or make a Pull Request, the responsibility for making sure that the patch can be applied/the Pull Request can be merged is on your side. (Common rule of all open-source software development 😉) Any amount of surgery required to get your changes in is on you. So it's up to you to keep your job easy 😉

On the way things are architectured right now:
> Is OpenGL code confined in a few files, or is it widespread in all the code?
Currently there's no abstraction on top of the 3D rendering API: it's just a program doing GL calls; there is no "graphics backend". The code is written to be lean and mean.
Separating the graphics backend is something that may be done later down the line, either to be able to rewrite this thing in Vulkan, or to keep an OpenGL frontend for the GUI and display a software-raytraced image of the mesh in its current state. This is also why there's both hardware (GPU) skinning like you traditionally have in a video game (it's the fastest technique, and this will be upgraded to ...), and CPU-based skinning, because that way we have direct access to the transformed polygons.
This code was written with a more "data oriented" approach in mind, notably in the way mesh geometry is handled. You will not find a classic "vertex" structure containing one of each position/normal/uv/color/joint/weight attributes, but one array for each. The philosophy is "structures of arrays, not arrays of structures". These are passed to the functions that act on them, and they directly contain the most basic datatype (they are arrays of floats, not arrays of 2/3/4D vectors). This is an attempt to optimize CPU cache usage, as a lot of operations are done linearly on these buffers (notably skinning and morphing). This is also why the glTF loading code may seem a bit intricate for just an OpenGL application: the common approach would be to upload buffers as is and use the accessors/bufferviews to configure vertex inputs for the OpenGL shader, but here, since we need to process and also display the raw vertex data, buffers are explicitly de-interleaved. You probably don't want to touch the definition of the `mesh` class without taking this into account.
Like a lot of open-source software, it's also built like a bazaar, not like a cathedral. Contributions are always welcomed. But contributions that "basically change everything" aren't great, because they are too hard to merge.
> If I can avoid exposing ImGui to plugins, I'll avoid it (I'm hoping to expose only an AntTweakBar-like API that manages simple ImGui controls in a transparent way... this would imply that plugins will have a simple GUI, but for the moment I think it could be perfect).
Having an abstraction layer on top of ImGui sounds like a bad idea, even just for exposing it to plugins. Since code from an eventual DLL (DSO) will be called by `app::main_loop_frame`, it shouldn't be too hard an issue. ImGui seems to have some pre-processor macros that let you flag its functions in its header as being "DLL exported/imported" too. I haven't extensively read the documentation, but the problem I mentioned is not a huge one; it's probably a matter of setting a few flags in an imgui.h file...
I think that's pretty much all the things you should take into account... Happy hacking! ^^"
@Ybalrid Thanks a lot for all the information! Regarding the C++ standard, I never use new C++ features because I don't need them... In fact, I would use Cfront today if it were still maintained, because I truly believe the only critical pieces in a system should be the C compiler (not C++, but C) and the libc runtime (again, not libc++, but libc), and that everything should be built on top of C. That's my view of system design. The only new feature I'd use, if new C++ standards supported it, would be reflection (for querying class members), but the only way of achieving that in C++17 is with dirty hacks. So, no: for my part, no C++17, no C++14, and not even C++11 (in new code, I mean).
Anyway, back to the point: I foresee that we might have different design goals, which in the end may mean that our code will tend to diverge rather than converge. And maybe the main point here is that I don't consider glTF my main goal: I think it is the best 3D file format standard out there right now, but, still, is it enough for the complete serialization of my work? To be honest, it's almost there, but... there are some minor things that I'd like to serialize that are not in the glTF spec right now. So, in my case, I might decide to make glTF an import/export format, and use a custom dump format for serializing the complete data. Correct me if I'm wrong, but that choice wouldn't fit the gltf-insight design criteria (it's in the name!! 😄).
I believe that's a key difference between our approaches, and it may have a big influence on our coding decisions.
I'll start when mouse selection is functional. When I have something to show, I'll inform you.
BTW, is mesh instancing fully functional at this moment? (I'm asking because I consider memory-efficient instancing of meshes an important feature.) If so, do you send the instance data to the GPU once and reference it for each instance, or do you "un-instance" them and send flat vertex arrays only?
> BTW, is mesh instancing fully functional at this moment? (I'm asking because I consider memory-efficient instancing of meshes an important feature.) If so, do you send the instance data to the GPU once and reference it for each instance, or do you "un-instance" them and send flat vertex arrays only?
Mesh data is loaded and sent to the GPU once (unless the mesh is modified by CPU-powered skinning and morphing, in which case the mesh is updated), so what glTF calls "instancing" should work (having one `mesh` object referenced by multiple `nodes` in one `scene`).
On the OpenGL side, they will be drawn using the same set of VAOs (one VAO per glTF "primitive" inside a mesh). So meshes aren't "un-instanced". But their draws are still performed one by one; they aren't instanced or batched in any way.
I actually haven't tested a glTF that instantiates a mesh at multiple scene nodes. The glTF assets we use (and plan to use) probably don't use that specific feature, but this needs to work, so if it's broken I will fix it.
> I think it is the best 3D file format standard out there right now, but, still, is it enough for the complete serialization of my work? To be honest, it's almost there, but... there are some minor things that I'd like to serialize and that are not in the glTF spec now.
You can probably avoid having a custom file format for the missing data. One great thing about glTF is that this particular situation you have is gracefully handled; there are 2 ways to add more data to a glTF asset:

- Extensions: the `extensions` mechanism, for structured additions declared by name in the asset.
- The `extras` property: you have the right to put anything you want there. You will need to update your exporters and importers so that they can write and read it, but this space is just for you. Your asset is still valid, compliant glTF 2.0 even if you have more data than the spec provides for.

If you want an example of how to load optional `extras` parameters using the `tinygltf` library, gltf-insight does it here to load a non-standard (but common) practice of stashing human-readable names for morph targets in an array inside `mesh.extras.targetNames`. (An oversight of the file format that I have a personal grudge against myself; I'm the dude that added an implementation note about it in the spec, to at least document the current practice.)
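For illustration, this is what the `extras` escape hatch looks like in the asset itself, including the `targetNames` practice mentioned above. The `myAppSettings` object is a hypothetical application-specific addition, not part of any spec or convention:

```json
{
  "meshes": [
    {
      "name": "face",
      "extras": {
        "targetNames": [ "smile", "blink" ],
        "myAppSettings": { "locked": false }
      }
    }
  ]
}
```

The asset remains valid glTF 2.0; conformant loaders that don't know about these keys simply ignore them.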
@Ybalrid Thanks a lot for all the information! I don't have instanced models for testing at this moment, but I need to support it from the beginning, so I'm glad it's currently implemented. Regarding the `extras` property, I didn't know about it. It can certainly help me save everything I need.
I'm still brainstorming, but I'll start this week. It's quite likely that I'll try to move to SDL2, as you said: not because of plugins, but because it offers more OS functionality in a multiplatform-compatible way, and it also supports Vulkan (ImGui has a sample that uses SDL2 with Vulkan; I tried it with MoltenVK on Mac, and it works fine 😃). For the moment I don't plan to write in Vulkan, but I'd like to encapsulate all the OpenGL stuff in a backend structure, so that support for Vulkan can be added in the future. Also, SDL2 supports iOS and Android, so... I really want to use it instead of GLFW.
Looking at the new feature requests posted today, I see gltf-insight has a very promising future, but it also runs the risk of becoming cluttered with a complex GUI.
I feel I'd like to add a lot of functionality into gltf-insight, but it could make the application grow exponentially.
Instead, I think it would make a lot of sense to implement a module/plugin architecture, so that you can open "sub-apps" that start a new GUI (even perhaps a new window), and you get back to the main gltf-insight window (with the data updated) when you close your module/plugin/sub-app.
Also, it would allow any developer to contribute modules/plugins/sub-apps as external GitHub repositories that can be easily compiled together with gltf-insight.
I believe the best moment for doing this is now that the application is still in development stage.
For example, I'd like to add an InstantMeshes module inside gltf-insight, but obviously I'd like it to open a new window rather than clutter the main gltf-insight GUI.
What do you think?