ryevdokimov opened 9 months ago
You propose to run scripts "similar to tool scripts", and those scripts can easily crash the editor with bad code, which is not true when running a game today, as catastrophic failures are very rare.
I believe this point has been argued ad nauseam in the other proposals, and I understand it. The reality is that a stable implementation is possible, because it has been done before in other engines. I will always agree that stability is something to strive for, but not at the cost of innovation. I would argue the process of implementing something like this will yield insights into other aspects of the engine that can further improve stability overall in the long term.
Another thing is that, as far as I understand your proposal, the way you are suggesting to do this brings this instability to everyone, even those not using the feature, as you are adding a lot of new code into the main editor/viewport path.
I am not necessarily proposing removing the ability to view the remote tree of a game running in a separate process. Just like in other engines, users who prefer to keep doing it this way can do so without simulating the scene in the editor. I'm not against having #7213 implemented as well; Unreal gives you several options for how to test your scene.
The reality is that a stable implementation is possible, because it has been done before in other engines.
Certainly not in Unity where a while loop can freeze everything and force you to restart the editor.
Certainly not in Unity where a while loop can freeze everything and force you to restart the editor.
That's true. The same is true of writing plugins within Godot, which is allowed, but I get that it is considered a "more advanced" feature of the engine. This is also true of software like Excel or CAD, where many users write scripts that can lock up the software. My point is more that it's possible to implement this feature so that it isn't inherently dangerous to the engine and its existence isn't a risk to users who want to use Godot the way it is currently used. Like I said, Unreal gives you these options.
I'd love to start thinking about this more seriously and maybe come up with a series of even more granular/practical steps for implementation in the editor codebase itself. For a working proof of concept utilizing the existing IPC method, I think we'd only need to come up with a way to display what's currently in the Remote tab as visual objects in a separate mode within the viewport, then change back to regular mode when play is stopped. Obviously there are a lot more steps, as listed in the OP, before it's viable for actual use, but I think just that would be enough to show that there's something to this (and all along the way, options for enabling/disabling certain features of it so that the original workflow can always be preserved for people).
From what I understand about the remote tab currently (though I might be very wrong so please correct me if I'm mistaken) is that the game process sends change signals to the editor constantly to reflect the current state of the game. So really you'd just have to either hook into those signals to keep the viewport in sync to the remote tab OR just poll the state of the remote hierarchy and render that to the 2D/3D view tab, which is how I imagine that Godot renders the "current scene" in usual operations.
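If the signal-hooking route pans out, a debugger plugin might be the place to experiment. Here is a minimal sketch, with the caveat that `remote_view` is a hypothetical message prefix made up for illustration; the remote tab's real message names are editor internals I haven't verified:

```gdscript
# remote_view_plugin.gd -- sketch only. "remote_view" is a hypothetical
# message prefix, not an existing Godot channel.
extends EditorDebuggerPlugin

func _has_capture(capture: String) -> bool:
	# Claim messages of the form "remote_view:<something>".
	return capture == "remote_view"

func _capture(message: String, data: Array, session_id: int) -> bool:
	if message == "remote_view:node_transform":
		# data: [node_path, transform], sent by an (assumed) runtime helper.
		_update_preview_node(data[0], data[1])
		return true
	return false

func _update_preview_node(path: NodePath, xform: Transform3D) -> void:
	# Apply the runtime transform to the editor-side stand-in node here.
	pass
```

An `EditorPlugin` would register something like this via `add_debugger_plugin()`.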
As for restoring the original scene after play mode is done, saving to a file works, but I was also thinking (depending on a user option perhaps) that we could just leave the original state of the viewport in memory and instead send a new scene (the Remote tab) to be rendered in place of the one being edited. Think of it as essentially opening up a new scene tab in the editor, but it displays everything in the Remote hierarchy. The downside would of course be memory consumption, but I don't think it would be that extreme, and a user option could let you utilize file storage if needed. The upside would be massively increasing the speed it takes to start and end play mode, with no worries about domain reloading or anything like that. I'm making a lot of assumptions here since I haven't yet researched how the Remote tab works in the code, and this is already a pretty hefty task, but imo this seems like a more attainable first step to look into that leaves the old workflow perfectly intact.
@ryevdokimov Have you delved into the Godot source code yet or just worked with plugins to get your demo video working? Would be happy to bounce around some implementation ideas with you if you'd like :)
The videos come from an open-source project that I've been working on here: https://github.com/Open-Industry-Project/Open-Industry-Project. It can reasonably serve as a proof-of-concept for this proposal. It currently uses a combination of plugins and a fork of Godot that has had several modifications pushed upstream, although none are particularly related to this proposal, besides making things a bit more stable when the physics server is enabled in the editor.
As parts of the project become more concrete, I may shift even more of the plugin code to the fork, which will make it possible to push functionality related to this proposal upstream, but it's a slow process since I'm currently the only maintainer for the project.
It can reasonably serve as a proof-of-concept for this proposal.
Thanks for the link! I've had a look through the code, but it seems like in order to utilize the simulation start/stop setup you have going, all scripts need to be labeled as `[Tool]` and hook into callbacks (`OnSimulationStarted` and `OnSimulationEnded`) in order to know when to run, and it seems they each have to handle resetting their own values and such. Looks to be a great setup for your use case, but even with simplifications for a real PR I think it would be tough to go that route.
By "proof of concept" I meant something more fleshed out that can work generically without consideration from the end user and potentially be taken further towards implementation, sorry if that wasn't clear. I was thinking more along the lines of utilizing the existing Remote hierarchy so that we don't have to utilize Tool scripts or run the game in the editor. Even if you did find a way to run any arbitrary script in "tool mode" so that the user doesn't have to consider it, you'd run into the problem of potential desync between the running game instance and the editor. You could run just the tool scripts and rely on that as it seems to be the case in your project, but then you'd lose the ability to run a game window alongside the debugging session.
Hopefully that made sense haha; ultimately my point is that, to be less disruptive to the existing codebase, I've been thinking it may be easier to rely on what already exists, i.e. the IPC signals that drive the Remote tab. The downside to that approach compared to yours is probably bad performance, but maybe there's some wiggle room for improvements in that regard. With this I see a much easier path as opposed to hooking up physics/tool scripts/etc. to the editor, but I guess I won't really know until I look into it further. I'll do some research when I have a chance.
All good, I understood what you meant.
I will admit that I do have to sit down and give some practical consideration to how this could be implemented without being too intrusive to the existing codebase. I suppose having this discussion is one way to start. The way I see it, moving the `OnSimulationStarted`, `OnSimulationEnded`, and scene-resetting logic into the engine code would probably be the easiest task for the proposal. That assumes functionality can be added to the engine that allows scripts to easily become "tool mode" without attributing them as such, which is probably the more difficult task (in terms of avoiding big changes in the engine).
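For concreteness, here is roughly what that PoC pattern looks like as an editor-side autoload, reusing the `OnSimulationStarted`/`OnSimulationEnded` naming from the project. The snippet is illustrative only; under the proposal this bookkeeping would live in the engine itself:

```gdscript
# simulation_events.gd -- illustrative autoload mirroring the PoC's
# OnSimulationStarted/OnSimulationEnded pattern.
@tool
extends Node

signal simulation_started
signal simulation_ended

var running := false

func start_simulation() -> void:
	running = true
	simulation_started.emit()  # tool scripts begin their _process work

func stop_simulation() -> void:
	running = false
	simulation_ended.emit()    # tool scripts reset their own state here
```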
Looks to be a great setup for your use case, but even with simplifications for a real PR I think it would be tough to go that route.
Tough in what sense? A lot of the code for resetting values in the tool scripts is there mostly because I haven't decided how I want to handle live edits while the scene is being simulated. Theoretically, you could take a snapshot of the "pre-simulated" tree and use that to reset everything, but then you would need a mechanism to selectively transfer live edits to that snapshot, which I still don't see as being too crazy.
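A minimal sketch of that snapshot idea, assuming an `EditorPlugin` context and that the scene's nodes have their owner set (`PackedScene` only packs owned nodes); the method names are illustrative:

```gdscript
# Sketch: snapshot the edited scene before simulating, restore after.
@tool
extends EditorPlugin

var _snapshot := PackedScene.new()

func take_snapshot() -> void:
	var root := get_editor_interface().get_edited_scene_root()
	_snapshot.pack(root)  # serialize the pre-simulation state in memory

func restore_snapshot() -> Node:
	# Re-instancing discards everything the simulation changed; selective
	# transfer of live edits into the snapshot would happen before this.
	return _snapshot.instantiate()
```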
You could run just the tool scripts and rely on that as it seems to be the case in your project, but then you'd lose the ability to run a game window alongside the debugging session.
I might be missing something here, but how would the "game window" differ from the bottom viewport in the example video in the original post, assuming it was able to capture input map actions? Any other game-specific things I might be forgetting?
Tough in what sense?
Tough in the sense of what you mentioned above in regards to making "tool mode" work generically. It's true you could build the simulation steps into the editor and then activate/deactivate _process based on running state, but you'd end up with basically two ways of running the game which could result in a lot of complicated changes. There's a lot of stuff that probably won't work out of the box, as you mentioned physics but also animation and any other nodes that need to process over time. With the remote tab idea I think you'd just get that stuff for free since it's never driving the position of anything but simply reflecting changes from the game window.
Theoretically, you could take a snapshot of the "pre-simulated" tree and use that to reset everything, but then you would need a mechanism to selectively transfer live edits to that snapshot, which I still don't see as being too crazy.
Definitely possible, and I don't even think live edits which affect the original state are necessary (at least I don't miss that coming from Unity lol), though I realize this is currently possible if you edit a scene, so it might be expected by users. Again though, you wouldn't need to worry about this so long as the original game window/local hierarchy remains intact and in memory.
I might be missing something here, but how would the "game window" differ from the bottom viewport in the example video in the original post, assuming it was able to capture input map actions? Any other game-specific things I might be forgetting?
Input would've been my first thought, yeah. The others are the aspect ratio and editor overlays, but maybe those are easy to address. Could be other differences too, maybe post processing? Screen space coordinates for mouse input/raycast? I'm not familiar enough with it to know what could crop up, but I worry about introducing issues that are only present in one workflow and not the other, as any inconsistency between the "in editor" view and the game window would be very worrying for the developer. Maybe if I had a better understanding of what would have to change in the editor for this to work I'd feel differently though haha
And btw appreciate the discussion, I think this goes a long way towards understanding all the possible approaches better :)
There's a lot of stuff that probably won't work out of the box, as you mentioned physics but also animation and any other nodes that need to process over time.
Physics itself is pretty straightforward; you just have to enable the physics server in the editor with `PhysicsServer3D.set_active(true)`. It's just that doing so revealed some jankiness that wasn't as easily caught without it active, which actually resulted in a PR merged for 4.3 that made things better overall for the engine. That was my point in the original post: going through the process of implementing this proposal could reveal and improve aspects of the engine outside the proposal. Besides stuff like that, I haven't run into issues with nodes that use `_process` in the editor.
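Hooked into the simulation start/stop events described earlier, that toggle could be as small as this (a sketch; the callback wiring is assumed):

```gdscript
# Sketch: flip the physics server on for an in-editor simulation and
# back off when it stops.
@tool
extends Node

func _on_simulation_started() -> void:
	PhysicsServer3D.set_active(true)   # physics now steps inside the editor

func _on_simulation_ended() -> void:
	PhysicsServer3D.set_active(false)  # editor scene goes back to being static
```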
Input would've been my first thought, yeah. The others are the aspect ratio and editor overlays, but maybe those are easy to address.
I don't think these would be too difficult to address. It probably is important to be able to detach editor viewports, as I mention in one of the tasks, to make this work better though.
Screen space coordinates for mouse input/raycast
Shouldn't be an issue. There is nothing special that I'm aware of about the editor viewport compared to game viewports. Raycasts, for example, are already used in the editor for dragging and dropping scenes into the viewport.
I know this section of documentation can sometimes be misleading, but for the most part it is true: https://docs.godotengine.org/en/stable/getting_started/introduction/godot_design_philosophy.html#the-godot-editor-is-a-godot-game
This solves a lot of the editor/game differences, which I think people overestimate, but I could be wrong. The beauty of currently having tool scripts and an editor that can be easily modified via plugins is that trying out some of these ideas isn't too difficult. If people find other roadblocks, I can add them to the task list in the original post to track them.
I personally miss being able to just select something while the game is running and move/inspect its state.
I see why this was not done, but like mentioned above: a nice middle ground would be that the default behavior (aka the safe behavior) is running it as a separate process, with an extra button to run it in the editor.
Why not just add a display for the remote scene tree that doesn't show the live game, but a representation of the scene tree that's live? The data is already there.
I'm not sure what problem that solves. The data is in the regular scene tree as well, and at least that data is already representative of what is in the editor viewport. Storing the initial state of the regular scene tree isn't particularly difficult, I don't think, so I'm not seeing what we gain by using the remote tree for anything.
I'm not sure what problem that solves. The data is in the regular scene tree as well, and at least that data is already representative of what is in the editor viewport. Storing the initial state of the regular scene tree isn't particularly difficult, I don't think, so I'm not seeing what we gain by using the remote tree for anything.
Ok, imagine this: a game where 30 or more units are instantiated based on a save file, dictionary, etc. The units look the same but have different stats, and not all stats are shown to the player. The Local tree has nothing because the units are generated at runtime. In Remote, when the game is running, all the units appear there, but you CAN'T select any of them like you can in Unity to inspect them. You can select them from the tree, but they are scattered, so you have to select them one by one to discover which unit you want to inspect, and even then there is no feedback.
I personally miss being able to just select something while the game is running and move/inspect its state.
I see why this was not done, but like mentioned above: a nice middle ground would be that the default behavior (aka the safe behavior) is running it as a separate process, with an extra button to run it in the editor.
YES
The Local tree has nothing because the units are generated at runtime. In Remote, when the game is running, all the units appear there, but you CAN'T select any of them like you can in Unity to inspect them.
You can generate these things in the regular scene tree, and they will be selectable in the editor viewport. It's a matter of running scripts as if they were tool scripts so that things happen in the editor viewport as opposed to the runtime.
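As a toy illustration of that point, a script like the following spawns units in the editor scene tree when running in tool mode (here via `@tool`; under the proposal the annotation wouldn't be needed), and each spawned unit is then selectable in the viewport:

```gdscript
# unit_spawner.gd -- illustration only; names are made up.
@tool
extends Node3D

func spawn_units(count: int) -> void:
	for i in count:
		var unit := MeshInstance3D.new()
		unit.name = "Unit%d" % i
		add_child(unit)
		# Setting the owner makes the unit appear in the Scene dock.
		unit.owner = get_tree().edited_scene_root
```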
It's a matter of running scripts as if they were tool scripts so that things happen in the editor viewport as opposed to the runtime.
That right there is the problem though; you have to go out of your way to implement that, which may also have specific requirements per project. Why? Because how do you know when to run the code that spawns the dynamic units in this example?
If we could automatically view the remote tree as objects, nobody would have to use tool scripts and deal with the complications of that; it would be a whole lot easier. Even if it saves just 5 minutes in a simple example, it is worth considering because you remove the learning curve of tool scripts for newcomers to Godot. AND it has the benefit of working everywhere in your project by default, without needing to manage multiple entry points or create an event bus for previewing things in motion. I believe it fits right in with Godot's philosophy of making everything easier to develop, because it simply shows information the engine already has in a cleaner, more user-friendly way.
how do you know when to run the code that spawns the dynamic units in this example?
When the scene is run in "simulated live edit mode", which would be a different function from the existing play button that starts a new runtime.
it is worth considering because you remove the learning curve of tool scripts for newcomers to Godot.
No one should have to manage the tool scripts. If programmed correctly, regular scripts will work "like" tool scripts automatically when run in that mode. Unity runs scripts "like" tool scripts inside the editor when you are running a scene. I'm proposing we do the same.
If we could automatically view the remote tree as objects, nobody would have to use tool scripts and deal with the complications of that...
You're moving the complication to managing the remote tree and having it reflect changes in the editor viewport, which in my opinion is arguably more complicated and riskier. Having scripts run inside the editor is already trivial; I'm just saying we find a way to do it automatically.
When the scene is run in "simulated live edit mode", which would be a different function from the existing play button that starts a new runtime. Having scripts run inside the editor is already trivial; I'm just saying we find a way to do it automatically.
My bad, I thought you were implying that the end user should manage that, and I didn't connect it to your earlier ideas, oops. If it's handled by the engine and users don't have to mess around with tool scripts/entry points themselves, then I agree this would also be pretty easy to use.
It's just, as I mentioned above, I'm not convinced that this way is less complicated than utilizing the remote tree... but ultimately I'm not sure which method is better for the long term. I'm just feeling like the remote tree approach is more immediately achievable since it doesn't create a new way of running the game and might mesh better with the workflow as it's currently built. However, I totally get the benefit of being able to play in the same program that you're previewing for debugging; I just think it'll take a lot more work to get both views on the same screen (ultimately the goal, right?) and pipe in input when focused, etc. Could be worth it though depending on the results.
All good.
I still don't quite understand what the strategy would be to make this work with the remote scene tree. My understanding is that the data for it is fed back to the editor via inter-process communication from the runtime, so showing a live in-editor version of it basically puts you back at square one. You can't interact with the nodes in the remote scene tree in the editor because they're not in the editor, so then you need a mechanism to actually put them in the editor; but then you need scripts attached to them to run them in the editor; but then those scripts need to run like tool scripts. So why bother? Just take the stuff that's already in the editor and make those things work using methods that are already clearly available to us.
Would it be possible to reconstruct the scene, including anything instantiated in runtime, just from the remote scene tree to show a purely visual representation of everything happening in the game? This doesn't solve the overall problem of being able to modify nodes live easily, but seems like it would be a good starting point.
Would it be possible to reconstruct the scene, including anything instantiated in runtime, just from the remote scene tree to show a purely visual representation of everything happening in the game?
I can't think of a straightforward way to do this, and I don't see what benefit it would have over #7213, which is to embed the entire runtime.
I understand that the remote scene tree seems like a natural place to start investigating/implementing things because it represents data from the runtime in the editor, so it seems like some amount of work has already been done, but I think it is ultimately a dead end. Nodes are the fundamental building blocks of Godot, so you want to start solving this problem where they actually exist, not where data that describes their existence is. Reconstructing their existence from that data seems redundant and backwards to me.
Nodes actually exist in two places. In the editor and in the runtime, so in my opinion the only two options are to bring runtime functionality to the editor (this proposal) or to bring editor functionality to the runtime (the other proposal). They both have their pros and cons, and I'm not necessarily against doing both to give users that flexibility.
I understand that the remote scene tree seems like a natural place to start investigating/implementing things because it represents data from the runtime in the editor, so it seems like some amount of work has already been done, but I think it is ultimately a dead end. Nodes are the fundamental building blocks of Godot, so you want to start solving this problem where they actually exist, not where data that describes their existence is. Reconstructing their existence from that data seems redundant and backwards to me.
True, I can see why your proposal eliminates some technical work at the cost of shaking up the existing workflow. From what I understand, the remote tree reflects the game by catching "change signals" from the build that update each item in the list based on new info. To be able to show it visually, I was thinking something like this:
That's all much easier said than done though, and at the end of the day, you may be right that this is more redundant and labor-intensive. Coming at this from a user-workflow perspective, this is what made sense to me, but the more I think about it, I'm not sure how bad this is to implement vs. your suggestion.
I'm currently working on other PRs and learning as much as I can about how the editor works under the hood so I'd like to investigate your tool script idea when I know a bit more and have the time :) In the meantime I appreciate the discussion and would be happy to hear any other specifics that can help clarify the work that has to be done or ideas to lessen the amount of code changes for this (a smaller PR is always easier for them to review and merge)
You're suggesting essentially to have the runtime remotely drive a duplicate scene in the editor (and itself). It's an interesting idea, but I'm suspicious of the complexity of implementation and the quality of the result.
I'm curious what the result would look like in terms of synchronization between the runtime and editor given the additional data layer being transferred and the editor/runtime having their own rendering loops.
* You would have to dynamically associate remote nodes with editor nodes.
* You would have to create a mechanism to create/delete editor nodes when new runtime nodes are created/deleted.
* You would have to create a mechanism for the editor nodes to have their data modified by the remote nodes (transforms, materials, etc.).
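A rough sketch of the bookkeeping those bullets imply; the callback names and payloads here are assumptions for illustration, not the editor's real IPC format:

```gdscript
# Sketch of remote-node -> editor-stand-in synchronization.
extends Node

var remote_to_editor := {}  # remote object ID -> editor stand-in node
var preview_root: Node3D    # hypothetical container for the stand-ins

func _on_remote_node_added(remote_id: int, type_name: String) -> void:
	var stand_in := ClassDB.instantiate(type_name) as Node
	remote_to_editor[remote_id] = stand_in
	preview_root.add_child(stand_in)

func _on_remote_node_removed(remote_id: int) -> void:
	var stand_in := remote_to_editor.get(remote_id) as Node
	if stand_in:
		stand_in.queue_free()
	remote_to_editor.erase(remote_id)

func _on_remote_property_changed(remote_id: int, prop: String, value: Variant) -> void:
	var stand_in := remote_to_editor.get(remote_id) as Node
	if stand_in:
		stand_in.set(prop, value)  # mirror transforms, materials, etc.
```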
These features are only necessary if you need changes in your local scenes to affect the running game, which is somewhat how Unity does it, except there the changes don't save to disk. The suggestion is that local and remote nodes are kept completely separate, as they currently are. Any changes to the local tree will not apply until the game is restarted, and changes to the remote tree will immediately apply to the running game but will not save to disk. That is all exactly how Godot currently works, and I don't think it's necessarily a worse system than how other engines do it.
The main proposal here is that you should be able to open some sort of preview that would just show the contents of the remote scene tree in the editor, and allow you to view it like an open scene. This is somewhat possible currently by using the Project Camera Override button in the viewport, but it would still be a nice feature to have, allowing you to select remote nodes by clicking on them in the viewport and to see a version of the full game scene without post-processing effects.
The main proposal here is that you should be able to open some sort of preview that would just show the contents of the remote scene tree in the editor, and allow you to view it like an open scene.
That's the problem though. The "contents" are nodes, and those nodes don't exist in the editor; only the data for them does, passed back from the runtime through the remote scene tree.
In order to select something through the viewport, you need to either make these nodes exist in the editor or bring selecting functionality to the runtime.
In order to select something through the viewport, you need to either make these nodes exist in the editor or bring selecting functionality to the runtime.
By "exist in the editor", do you just mean that we need to make them visually exist in the 3D/2D view in the editor? We already have the data for all of the nodes in the runtime, as you said, but couldn't it just open a new tab in the scene view titled "remote" or something similar, and use the data received from the runtime of the current state of all nodes to create an editor version of the current runtime?
@ryevdokimov is correct: when mentioning "editor nodes" they meant "the editor representation of the remote tree", as the current tree is just made up of data that reflects the game. So even without making changes back to the scene on disk, you have to sync the tree to the visual nodes, OR, when receiving the change signal, apply it to both the tree and the visual nodes.
I think I've already stated my issues with each approach, but to reiterate: the "run scripts as tool scripts" approach creates an entirely new way of running the game, which may clash with existing workflows (not lining up 100% exactly with a build) and requires adding physics/input/etc. to the editor in specific viewports. The "remote tree visualization" approach keeps the existing workflow but incurs the workload of creating visual nodes, syncing them, and dealing with performance issues, hence why:
curious what the result would look like in terms of synchronization between the runtime and editor given the additional data layer being transferred and the editor/runtime having their own rendering loops.
Is a valid concern, because there already exists a stutter/delay doing IPC and this will surely make it worse. However, I was hoping that if it at least works, we can address IPC performance in the future, and for some people it will be enough to view their runtime-generated scenes even if they can't interact much.
From an implementation and user perspective, I'd prefer either of these approaches over runtime tools, as I believe it will be more beneficial in the long term and much nicer to work with, imo. For the tool script approach, I'm wondering - are you thinking the new "play mode" buttons could exist alongside the existing run buttons? I'm trying to visualize it UX-wise and make sure it could be added in a way that doesn't interfere with existing workflows. Definitely an "experimental" editor setting could be used to enable the buttons, and then some editor-wide indication that it's in run mode would be helpful, right?
@ryevdokimov is correct: when mentioning "editor nodes" they meant "the editor representation of the remote tree", as the current tree is just made up of data that reflects the game. So even without making changes back to the scene on disk, you have to sync the tree to the visual nodes, OR, when receiving the change signal, apply it to both the tree and the visual nodes.
Is a valid concern, because there already exists a stutter/delay doing IPC and this will surely make it worse. However, I was hoping that if it at least works, we can address IPC performance in the future, and for some people it will be enough to view their runtime-generated scenes even if they can't interact much.
I don't understand how this would be any different from when I change the values (for example: position) of a node in a local scene, and the editor viewport updates to show the object modified. This happens in realtime already with no stutter. I understand that IPC would add stuttering/delay, and solving that delay could be improved on later, like @RobProductions said, but rendering a scene tree is by definition the viewport's entire purpose.
I think that making the remote tree visualization would be fairly simple and would mostly be combining other existing Godot features, and the small optimizations on the IPC and rendering could come later.
The difference is that not even the tree for the runtime exists in the editor. What you are seeing in the remote scene tree inspector is just that: data for the inspector, fed back via IPC from the runtime.
Picture a car without an engine being towed by an identical car with an engine. You're sitting inside the engineless car, looking at a mechanical speedometer and seeing that you're going 100 kilometers per hour. Interacting with the pedals and shifters is not going to change your situation. You have no engine.
I change the values (for example: position) of a node in a local scene, and the editor viewport updates to show the object modified
What you can do I suppose, is to flash your high beams at the towing car to signal them to do something.
The suggestion I'm hearing is basically something like trying to now stick an engine in the engineless car while the car is already in motion. I don't think it will be that easy, and I'm not confident the result will be great either. My proposal is basically having both cars parked (you're not running anything yet), borrowing the engine from the car that has one (the runtime), and throwing it inside the other car (the editor).
Now we just use the editor scene tree with typical runtime features to do what we need. Only issue is that if you crash the editor "car", it sucks a little more because that car is not insured as much.
Sorry if these metaphors are trash lol
@RobProductions:
I'm wondering - are you thinking the new "play mode" buttons could exist alongside the existing run buttons?
Yes, you could probably have some setting somewhere to enable this feature, and then maybe these buttons would show up over the editor viewport, which would strictly run the editor scene(s) in the editor.
Inspired by this thread, I spent a bit of time looking at the approach of pushing more features into the runtime, while improving communication back to the editor. A proof of concept addon is here:
https://github.com/bbbscarter/GodotRuntimeDebugTools
Essentially it's a combination of:
It's a slightly hacky proof of concept, but I've found it very useful already.
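For anyone curious about the general shape of this runtime-side approach, the pattern is roughly the following. This is a sketch only; the `debug_tools` prefix and payloads are invented for the example, and the actual addon's protocol differs:

```gdscript
# runtime_debug.gd -- autoload sketch of pushing debug features into the
# runtime and talking to the editor over the debugger connection.
extends Node

func _ready() -> void:
	if not EngineDebugger.is_active():
		return  # only wire this up during a debug session
	EngineDebugger.register_message_capture("debug_tools", _on_editor_message)

func _on_editor_message(message: String, data: Array) -> bool:
	# Assumption: the "debug_tools:" prefix is stripped before we get here.
	if message == "pick":  # editor asks what sits under a screen point
		var node := _pick_node_at(data[0])
		if node:
			EngineDebugger.send_message("debug_tools:picked", [node.get_path()])
		return true
	return false

func _pick_node_at(screen_pos: Vector2) -> Node:
	# A physics raycast / Control hit test would go here.
	return null
```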
Inspired by this thread, I spent a bit of time looking at the approach of pushing more features into the runtime, while improving communication back to the editor. A proof of concept addon is here:
https://github.com/bbbscarter/GodotRuntimeDebugTools
[...]
That was unexpected! I just tried it and it works perfectly. It's definitely super useful. Thank you so much for your effort!
Thanks, looking forward to the 2D support if you implement it.
@mrussogit - FYI I've pushed some basic 2D debugging tools to the version on git; let me know if it works for you.
@mrussogit - FYI I've pushed some basic 2D debugging tools to the version on git; let me know if it works for you.
Works GREAT: when I click an object, it shows in Remote which object it is! Thanks a lot! 5-star add-on!
@mrussogit - FYI I've pushed some basic 2D debugging tools to the version on git; let me know if it works for you.
Works GREAT: when I click an object, it shows in Remote which object it is! Thanks a lot! 5-star add-on!
Great - thanks for confirming!
Hello, any news about this?
Hello, any news about this?
If you're interested in having the ability to interact with a running scene, there is a PR https://github.com/godotengine/godot/pull/97257 that is attempting to implement #7213.
In terms of this specific proposal: I've been implementing it more or less in my fork for my project, which heavily relies on this kind of functionality, but it's a bit fractured between an editor plugin and the modified engine. I do plan on shifting parts of this code into the engine as things get more fleshed out. Personally, my priorities are mainly to have it work for my personal project, but I do try to keep in mind the possibility of submitting work upstream (to Godot) as it's developed.
Here is a preview of some of the progress.
https://github.com/user-attachments/assets/4ecd3e65-15cb-4fde-8911-71d838495d6f
Nice work!
In terms of this specific proposal: I've been implementing it more or less in my fork for my project, which heavily relies on this kind of functionality, but it's a bit fractured between an editor plugin and the modified engine.
How far did you have to modify the game engine, and is it possible for people like me to test it out?
Cheers
Thanks!
How far did you have to modify the game engine, and is it possible for people like me to test it out?
Surprisingly, and to the credit of all those who have contributed to Godot, not a lot. As I've mentioned before, I believe a lot of the pieces to solving this problem are already there in the engine; it's mostly about organizing them in a way that works for the majority of users. To answer the second question: not outside the context of the project, due to the fractured nature I was mentioning, at least not yet anyway.
If you're interested in testing out the project to get an idea of how this proposal will work, it's located here: https://github.com/Open-Industry-Project/Open-Industry-Project
I'm not sure how to show support for proposals, but I'd like to at least say here that this would be simply amazing. I have another use case, one I've encountered multiple times in my game dev career, that I think makes something like this essential for Godot in the long run: caching temporary external resources. Say you have files, resources, etc. that live on an external system that you need to load in, or maybe a separate VM you have to run for a project-specific application (Lua, pipeline applications, etc.). For one reason or another, they can't live on your hard drive or in the project (usually either for security or remote-collaboration reasons until launch). Right now, in most cases, you have to reload/re-download those resources at play time, even if you'd loaded them into the editor with a tool script. It would be much more convenient to have an in-editor, out-of-the-box simulation that can access all the same resources and memory.
Copy pasting a comment I made in https://github.com/godotengine/godot/pull/97257 for posterity.
Might be a stupid idea, but having worked on an "alternative" pathway for this kind of functionality in https://github.com/godotengine/godot-proposals/issues/9142, it sounds like a possible inevitable progression is to just have the play button run another (slightly modified) instance of the Godot editor that can run the scenes in-process, as in this proposal.
This would result in a kind of hybrid with https://github.com/godotengine/godot-proposals/issues/7213, with embedding coming for free, and if you crash the "nested" editor it's no big deal. The beauty of the editor itself is that it's very lightweight, and going this route kind of helps dogfood the editor, just like the editor already dogfoods Godot's own UI.
Just food for thought.
Edit: Also, I think the idea of having the editor run on the game engine while the debug runtime runs in the editor is pretty funny. See: https://docs.godotengine.org/en/stable/getting_started/introduction/godot_design_philosophy.html#the-godot-editor-is-a-godot-game
Describe the project you are working on
N/A
Describe the problem or limitation you are having in your project
I'm just going to quote reduz here because he said it best:
Describe the feature / enhancement and how it helps to overcome the problem or limitation
This is an alternative to #7213 and fleshes out #1864.
I want to create this proposal to avoid blowing up other proposals that want to implement a different solution.
This will bring in functionality that exists in engines like Unity.
I know several proposals like this have been made before, but I will try to provide some technical implementation.
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
This can already kind of be done via scripting, but it requires a lot of setup and could be much more intuitive.
https://github.com/godotengine/godot-proposals/assets/105675984/53a5aabe-3e77-4fbd-a5e9-1db1c52b3ef8
We already have viewports where you can view the scene from the perspective of a game camera. One of these viewports could default to doing that as the "Main Camera".
We could add "Play Mode Buttons" similar to in this video above that basically runs all scripts as if they were tool scripts and enable physics. The downside of this of course is that this is now modifying the real scene. Now we have to create a system to save the original state before the game was run. In the video above this was done by just storing the transforms of all the objects in a scene separately from the TSCN file, so that when it is stopped everything goes back its original position.
Modifying the active scene could be done via a check-in/check-out process, where you select a node that you want to modify and have its state saved into the original state.
Editor viewports acting as the game should be allowed to receive input-mapped actions.
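One conceivable way to get there, sketched under the assumption that replaying events through `Input.parse_input_event()` is enough to trigger Input Map actions in this context (untested):

```gdscript
# Sketch: while simulating, an EditorPlugin claims 3D viewport input and
# replays it through normal input processing.
@tool
extends EditorPlugin

var simulating := false

func _forward_3d_gui_input(camera: Camera3D, event: InputEvent) -> int:
	if not simulating:
		return AFTER_GUI_INPUT_PASS
	Input.parse_input_event(event)  # actions fire as in a running game
	return AFTER_GUI_INPUT_STOP
```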
I would like to be able to detach the editor viewports and make them float like the docks. This would probably require #4565 to be implemented.
I do agree that editor stability is a risk, but all risks can be mitigated if implemented correctly.
There are probably several other caveats that I'm missing; let me know and I'll edit the post to better encompass the full idea.
If this enhancement will not be used often, can it be worked around with a few lines of script?
Definitely not with a few lines of script.
Is there a reason why this should be core and not an add-on in the asset library?
I believe this is one of the most asked for features and would vastly improve development workflow.