vircadia / vircadia-native-core

Vircadia open source agent-based metaverse ecosystem.
https://vircadia.com/

RFC: Renderer Upgrade Project #1390

Closed: digisomni closed this issue 1 year ago

digisomni commented 3 years ago

Overview

This is the first project in a series of projects to bootstrap the Vircadia C++ SDK and its future ecosystem. It also serves to solve some core issues.

Considerations

Possible Candidates: Renderer Only

To be considered, the library must be well supported by multiple contributors.

Possible Candidates: Engine + Renderer

Shader Standard

A primary shader editor and implementation needs to be selected. This will serve as the master, and all other implementations of Vircadia will have it translated for them.

Platforms we intend to support for our own Interfaces as well as for creating SDKs for others to use:

Possible masters:

Questions

odysseus654 commented 3 years ago

Okay, I've been asked why my initial reaction to using IPC to communicate between the rendering engine and the rest of the system has been rather negative. I'm hoping that this doesn't turn into "get off my lawn", but here goes.

Our platform is currently split up into multiple pieces (on the server side) and a unified whole on the client side. These pieces are split up by their function and communicate with each other using (at minimum) IPC; they can be moved to different machines and still function about the same.

Most of what I'm going to say below this point is speculation, since I haven't dived very deeply into any of the technologies involved, but then again that seems to be the problem: we're making design decisions around something that, as far as I've heard, nobody yet knows in enough depth to judge whether it would work for us (my missing today's meeting probably doesn't help here though).

The systems we're discussing here seem to fall into two camps:

So, where does the IPC discussion come in with all this?

If we're discussing IPC with regard to a gaming engine, well then yah, we'll likely need to use IPC (if we're even able to) because they control the gaming process and would never permit themselves to be treated as just a lowly library. The trick is whether we can convince them to do something different from what they were designed to do, to handle low-level commands from us, in such a way that we can have a world from two different engines look anywhere similar, and do it in such a way that it doesn't break on the next version. I have very low hopes of being able to integrate gaming engines into our platform with any real degree of success, and am concerned that if this is our path we may quickly disappear as a project.

If we're discussing IPC with regard to a rendering engine then I would ask what the purpose of IPC is here. The rendering pathway feels to me like the part of the program that we need to optimize the most, keeping latency low and bandwidth high; sticking an IPC barrier here feels like the opposite of what we should be doing. No matter what approach we use IPC is going to be slower than a function call; the fastest method (shared memory) is fairly high bandwidth but is a passive approach that requires a lot of signaling that needs to be found or created through some other means.

I guess the question I would ask is: why are we discussing IPC, and are we doing so for the "right reasons"? If it's because it's the best answer to a difficult problem (see "game engine" above), well then it sounds like something we need to do to get something working. If it's so we can work more independently then... we have a plugin system, let's use it or improve it please?

My personal approach to this would be to avoid the gaming engines (unless we have a really good reason not to) and refactor the rendering logic into a plugin, armed with research and knowledge of what the prospective rendering engines are willing to do and what we would have to do for them. The approach would be nearly identical to #1200, whose purpose is to refactor the script engine code to pass through a common interface that could be implemented by an outside project if need be.

In summary I'm okay with IPC if it turns out to be the best tool for the job. But I haven't heard anything suggesting it is, and the push towards this rather than seeing what the "right" answer would be concerns me.

Penguin-Guru commented 3 years ago

Many specifics were discussed during the meeting, including the latency I.P.C. might introduce and whether existing code would be operable with a game engine. I was also concerned about latency but 74hc595 thought it was worth testing. He was going to look into whether/how certain parts of the current code would be compatible with Godot using I.P.C. (and libraries if they offer any). That's how I would summarise the meeting (someone correct me if I'm wrong).

Don't worry about being negative, I think it's good to be critical. It does feel like this is a big prospect that came out of nowhere but this R.F.C. is here precisely so we can talk about it before rushing to any conclusions. My understanding of the events is that 74hc595 thinks it may be possible to integrate Godot and is motivated to because of several nice features that would become available as a result. He wants to try setting it up himself but has not started yet. I guess Kalila wanted to get other people's opinions on it first, since it does seem like a big prospect. Hopefully this is all accurate.

odysseus654 commented 3 years ago

I was pretty excited about Godot4 coming up, to the point that I was tempted to try integrating it myself. Then I wandered around the website trying to figure out how to download and link it in as a library... and then realized it was a game engine, not a rendering engine. Found this thread, written by someone else with (my concept of) how this might work in mind, and saw a response that seems to read as "don't use us, we're not a good fit": https://godotforums.org/discussion/23089/using-godots-renderer-as-a-library

odysseus654 commented 3 years ago

I'm not going to say people shouldn't be working on this; this is an open source project after all, and the last thing I want to do is claim my perspective is definitively right. As long as we don't commit to migrating the project to any kind of gaming engine without a heck of a lot more research, an idea of how it might work... and an idea of how this would change the project. I'm a bit worried that if we do this wrong, and with enough force, we could dissipate the Vircadia project.

ksuprynowicz commented 3 years ago

You are right that committing to a particular solution at this point would be a bad idea. What I plan to do instead is a quick proof of concept to see if it's viable. I'm thinking about either IPC, or building Interface as a library (I have no idea yet how difficult it would be to set up the build system this way). Any other suggestions would be greatly appreciated too. Either way we would have two separate main threads running, one for the game loop and one for the renderer, so very similar to how it is now.

I have to disagree on your strict distinction between rendering engines and game engines. A good example here is OGRE 3D, which is a purely rendering/graphics engine. It's built as a framework, so it can't be used as a library either.

In the case of Godot, a major rewrite of Interface code would be needed only if we used Godot's features like physics, networking and other stuff. I agree that would be a bad idea due to the amount of work required, and due to being locked in to Godot after making this decision. That's why I'm proposing to keep the current networking, physics, game logic and the rest of the Interface intact. It will save a lot of work and will be consistent across all Interface versions.

HifiExperiments commented 3 years ago

I agree with odysseus, I think we need to take a step back and consider why and how we’re doing something like this.

like everyone else I’d love if we had amazing graphics and could take advantage of the rapid improvements being made to engines like godot. hifi was all about experimentation and rolling their own solutions to things. this gave them tons of freedom at a time when other companies were still scrambling to establish standards. but I think we’re dealing with the fallout from those decisions now, as we struggle to maintain a large and messy codebase with limited resources. I think we all agree the best path forward for this project is through adoption of these standards now to streamline the codebase (gltf/assimp, openxr, a more modern scripting engine), especially since for the first time in history, big companies are finally adopting these standards too.

but real games built on these engines employ a ton of non-standard tricks specific to each game to make things really work. and there are a number of complexities to the rendering engine specifically that make this task particularly hard…and maybe not worth it (at least at this time).

firstly, behavior across renderers. just off the top of my head, we have a lot of things that might not translate easily to other engines: our text font format (which needs to change, yes, but any format will be tricky), basically any entity type other than shape and model (how do we translate polyvox? polyline? particles?), custom shaders (less worried about the shader language than how you actually specify a custom shader in each engine and what features they allow), parabolic picking, renderLayer behavior, zones, secondary camera, material entities…supporting multiple engines sounds cool but what if one supports some feature another doesn’t? to get features like global illumination, we’ll have to add knobs on our side to control them, so we are locking ourselves in. I don’t think this is an insurmountable problem because I simply think we don’t have enough users to care if we break a few existing things (which is why now is the best time to do so!), but we should be really intentional about how we’re solving these problems.

ignoring that, there’s essentially two parts to the rendering engine on our system:

(there’s some complexity where sometimes the render objects actually ARE the game objects, which is necessary for things like models and polyvox where the triangles are needed for raypicking but the game object doesn’t actually know about triangles but whatever)

when we talk about IPC my understanding is that we’d basically send messages to the other engine instead of either of these. so we’d say like “add a new box at this position” or “change the box to blue”. I’m also skeptical that this will be performant enough for pretty much any game/VR experience. it also becomes quite a bit harder to query information back I think, like those raypicking results. I’d be interested in seeing a prototype of this but I have lots of doubts. what about all our debug visualizations? what about UI? these are hard with any solution but if we kept everything in one process we might have easier access to the framebuffer or other ways of doing this

the plugin/module case seems a bit better to me. it’s a lot closer to how our physics works, and more similar to the way I picture the web Interface working already, so we can learn from that. it will certainly be complicated…you’ll basically need to design an isolating interface just like odysseus said and handle swapping them out…except the rendering code is spread all over. instead of initializing/running our render engine you’ll probably set up a scene with whatever engine and add/remove/update objects. we’ll probably pay a cost for mapping from our entity properties to the right properties in the engine we pick…my head hurts just trying to think of all the places this will touch

I definitely agree that we want to leave the “game engine” side of our system alone: our engine (while buggy!) is specifically designed to allow for and scale to big open worlds with any user-designed content in a way that other engines simply aren’t. we need that control over updating, networking, and physics, etc.

so…idk! go for it I guess? I certainly don’t want to dissuade you from trying something cool, and I’m MORE than happy to walk you through specifics of the engine code. I think the biggest benefits of something like this is for rendering effects, but I honestly think it’d be easier for us to just fix our own effects (TAA, shadows, add an OpenXR plugin, etc). but I also have my own selfish reasons for liking the idea of keeping our own engine because it’s a nice playground for rendering fun (a middle ground between having to write low level graphics calls and just moving around objects in a game engine).

in an ideal world I would maybe suggest putting together a proper proposal first and outlining how you’re going to do this. what engine are you picking? how does it work? how will you isolate our code to be compatible? what do you expect to work/not work? it’s extra work for you but might allow us to talk through the specifics more

ksuprynowicz commented 3 years ago

I agree that keeping the current engine would be a good idea, if it's possible to maintain it. As you said, it allows a lot of freedom. Things like automatic LOD generation and occlusion culling could be added to it to make it perform reasonably well on larger worlds. If it's decided that we are keeping the existing engine, I could work on adding automatic LOD generation instead, similar to the one in Godot. I just don't want to see Vircadia move to a proprietary solution like UE5, because through tight integration with it, Vircadia would become a proprietary project itself.

odysseus654 commented 3 years ago

Yah, I think my objections to UE5 would be similar to those for Godot (rather disappointed that's the case, I liked the idea of Godot).

And keep in mind that Babylon is one of our proposed rendering engines that isn't going away at all here, so whatever we come up with has to be compatible with that as well.

I really don't want to dissuade experimentation and exploration with this though, I'm more concerned with the community as a whole jumping on a solution that may not fit us, followed by our cutting off body parts to make it fit.

ksuprynowicz commented 3 years ago

> I really don't want to dissuade experimentation and exploration with this though, I'm more concerned with the community as a whole jumping on a solution that may not fit us, followed by our cutting off body parts to make it fit.

I totally agree with your opinion. If the current engine has a chance of remaining in use, I'd be happy to help make it better instead. In my opinion it mostly needs automatic LOD, automatic occlusion culling, and a global illumination solution that doesn't involve baking. That would allow large worlds with diverse environments. The current engine also seems to not work with some AMD cards on Windows, which would need to be fixed.

odysseus654 commented 3 years ago

I'd like to help however I can (within reason, really badly crashed yesterday after my first week of work).

I will likely have somewhat limited input before #1200 lands though, especially as my thinking would be to create another one like it for the rendering engine and... that's a lot of work that touches a lot of the code. (and an area of the code I'm not that familiar with too). Having too many open PRs that touch "everything" feels like it makes things a bit harder.

HifiExperiments commented 3 years ago

something like LOD generation is a great example of why we should be careful about just jumping into replacing the renderer! what we really want is offline LOD generation, so we should be building it into the baking system (along with other improvements to the baking system), and then a system for switching between the LODs in the engine (which can be a totally separate project). that way we can take advantage of these LODs on other platforms like web

ksuprynowicz commented 3 years ago

True. That would mean that platforms with limited memory and processing power, like standalone headsets, wouldn't have to deal with fully detailed geometry. What is the state of the current baking system? Can it be extended? Is it intended to be included in the web version?

HifiExperiments commented 3 years ago

the baking system is pretty simple, but could use an overhaul. we could make a separate issue for that and discuss it more. I think the basic steps would be:

and then on the Interface side (both for native and web), invent a system (akin to our physics workload system, can even reuse the code!) to swap between the LODs.

this is definitely a MUCH simpler project than refactoring all of the rendering code.

HifiExperiments commented 3 years ago

exactly, baking is intended to generate optimized models for native and android, so this is a natural extension. the baking tool - the oven - is an offline tool so can be shipped separate from the web version, but can also be integrated for ease of use. it’s already integrated into the asset server but doesn’t work great there supposedly

ksuprynowicz commented 3 years ago

From what I know, Godot uses an external library for decimation, and it even supports meshes with armatures. Maybe we could integrate the same tool they use, or maybe there are even better ones. The baked file format should be designed in such a way that it's possible to load only the lower LOD versions, which could be helpful for mobile devices. Downloading low LOD versions first would also help with world loading times on desktop. I'd love to help with integrating a mesh decimation tool.

HifiExperiments commented 3 years ago

yeah!! I’m out of town until tuesday but then I can send you a brain dump on the baking system

odysseus654 commented 3 years ago

FYI, I'm hoping we can leverage something like this to take all of our shaders and convert them into an optimized form for whatever engine (Babylon, hifi, or whatever) we end up choosing. I'm noticing that most of the rendering engines on the table here seem to take HLSL and convert it to whatever shader language their particular runtime requires, and mechanical shader conversion doesn't feel like too significant a barrier to get past.

https://github.com/ozthekoder/glsl-parser

HifiExperiments commented 3 years ago

so for that you can cross-compile shaders using SPIRV-Cross, which is why I’m less concerned about choosing a shading language and more about what custom shader features are exposed and how

odysseus654 commented 3 years ago

I think I've seen spir-v in the compiler stream, but I'm talking about doing this at runtime. Also having it build a dependency list, having it change the names of global variables for compatibility with different environments, only including/defining functions that are required, and so on.

digisomni commented 3 years ago

Lots of good information and discussion on this topic so far.

The current situation can be summarized with two obvious options:

  1. The renderer needs to become more maintainable (for example by using a 3rd party engine).
  2. The renderer needs more maintainers.

In its current state, with the current development resources, it's not sustainable for the future of the project. Is it an immediately pressing issue though? Eh, it all depends on what kinds of user audiences we are targeting in the short, medium, and long term.

Currently we are aiming for consumer adoption (through accessibility) and enterprise adoption, through things like the Web Interface and Web SDK respectively. We will also begin work on something like a C++ SDK in the future.

If we keep the current renderer for the moment, then the goal now should be to figure out the smallest improvements/fixes that can be made to it; it is effectively in maintenance mode. In parallel, we should be introducing specific groups of people to the platform to grow the developer base, which gets more work done on the ecosystem, which grows the user base, which grows the developer base... and so on.

daleglass commented 3 years ago

I'm a bit late to the party here, but I agree with the concerns some people raised here. A radical change to the engine needs to be done with a lot of care.

To that end, I think as a project we'll need to set some conditions for such work to be accepted. I think at a first approximation:

  1. The current engine remains official and continues to be maintained until something better is merged. We don't stop working on it just because somebody has a really cool alternative in the works.
  2. The replacement engine must provide a clear improvement over what we have. It's a lot of work and risk so there must be a clear benefit.
  3. The replacement engine must not lose significant features we currently have.
  4. The replacement engine must be well beyond a proof of concept before it can be merged.
  5. While any such work is ongoing and the new engine isn't ready, we can't accept any significant degradation of the current system.

Meaning, this is complicated and dangerous and for the sake of the project, we'll have to be careful, deliberate and picky about it. This means that anybody taking on such work needs to have in mind that they're likely volunteering to take on months of work that might still not make it in if it fails to perform well enough in the end.

ksuprynowicz commented 3 years ago

I totally agree with Dale's requirements, and I'd like to expand on point 3. One of the most attractive features of Vircadia is that it is open source; it offers a lot of freedom because of this. For this reason I believe that replacing the current renderer with a proprietary engine should not be accepted. Tight integration of the Vircadia codebase with a proprietary engine would make the entire solution proprietary. In the future it might also endanger the project if the proprietary engine's licensing terms change.

ksuprynowicz commented 3 years ago

Right now the greatest problem with the renderer is the fact that it doesn't support AMD GPUs on Windows. Two different users with AMD GPUs were unable to get it working: https://github.com/vircadia/vircadia/issues/1409

ksuprynowicz commented 3 years ago

Tivoli, which is based on the same renderer as Vircadia, flat out refuses to support AMD cards, which in my opinion is unacceptable, even if their drivers are buggy.

stale[bot] commented 2 years ago

Hello! Is this still an issue?