willnationsdev opened 5 years ago
You've mentioned tracking as well as executing actions in the proposal. Do you see how EditorCondition could also provide similar functionality?
Also this could be related to "Condition Builder" concept.
@Xrayez Well, by tracking, I also meant users should be able to fetch a list of completed actions, but yes having a dynamic signal-like system would also be an important aspect of this functionality. Something that lets the engine universally declare, from a single point, "X happened", and for users to likewise be able to use this same point to declare that their own things happened.
I do not believe a Condition Builder would be a necessary part of this process as, again, we are talking strictly about backend changes for this issue. Someone might make a plugin that lets users compose boolean expressions through a GUI for conditionally triggering an EditorActions dynamic signal emission. Maybe it would be called EditorActions::Message, with an EditorActions::get_singleton()->dispatch_message("group/message_name") method? Just going off the MessageDispatcher class in godot-next, which likewise provides users with a dynamic signal system.
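To make the idea concrete, a string-keyed dispatcher along those lines might look like the following. This is a minimal standalone C++ sketch using standard-library types in place of Godot's; the names `MessageDispatcher`, `subscribe`, and `dispatch_message` are illustrative, not actual Godot API:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Sketch of a dynamic, string-keyed signal system: listeners subscribe to
// "group/message_name" keys at runtime, and anyone (engine or user script)
// can announce "X happened" through a single point.
class MessageDispatcher {
public:
    using Listener = std::function<void()>;

    // Register a listener for a message; messages need not be pre-declared.
    void subscribe(const std::string &p_message, Listener p_listener) {
        listeners[p_message].push_back(std::move(p_listener));
    }

    // Announce that a message happened; all listeners for it are invoked.
    void dispatch_message(const std::string &p_message) {
        for (auto &l : listeners[p_message]) {
            l();
        }
    }

private:
    std::map<std::string, std::vector<Listener>> listeners;
};
```

Because the keys are plain strings, the engine and user plugins can declare and listen for events through the same entry point without any compile-time registration.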
I recently opened an issue like this, #1689. Is anyone working on this currently? I'm asking this hoping that I could take a shot at it if no one was already working on it.
@firststef I don't think anyone is currently working on this, so go ahead :slightly_smiling_face:
@firststef Yeah, go for it!
I think the first step is to just create a low-level system in the Editor for storing the key-value pairs of names and Callables, create an access point for it in EditorInterface, and then making it all script-accessible.
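As a rough standalone sketch of that first step, using standard-library types in place of Godot's Callable and StringName (the method names `add_action`, `execute`, and `get_action_list` are placeholders, not the final API):

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Sketch of the low-level store: editor operations registered by name,
// to be exposed later through EditorInterface and the scripting API.
class EditorActionsSketch {
public:
    using Callable = std::function<void()>;

    void add_action(const std::string &p_name, Callable p_callable) {
        actions[p_name] = std::move(p_callable);
    }

    // Returns false if no action is registered under that name.
    bool execute(const std::string &p_name) {
        auto it = actions.find(p_name);
        if (it == actions.end()) {
            return false;
        }
        it->second();
        return true;
    }

    // Lets plugins (e.g. a command palette) enumerate what is available.
    std::vector<std::string> get_action_list() const {
        std::vector<std::string> names;
        for (const auto &kv : actions) {
            names.push_back(kv.first);
        }
        return names;
    }

private:
    std::map<std::string, Callable> actions;
};
```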
Once you have that merged in, it will be possible to start porting things, not just by you, but by any given contributor. It'll be a pretty massive effort though, so lots of people will have to chip in over time.
Edit: One thing we'll need to do is start cataloguing which operations need to be registered and made available to scripts (there are bound to be a ton).
@firststef Feel free to message me directly on Discord, Reddit, or Twitter (same username in all communities) if you have any questions about implementing it.
@firststef I actually just remembered that I did previously start working on this, a long time ago. In fact, the low-level bit of it is pretty much done, iirc, though it hasn't been tested. I have an EditorActions class that is created in the EditorNode and is made available through the EditorInterface with all methods exposed to scripting. In fact, that commit alone could probably be submitted as an initial pull request. You are free to take it and build on it, adding integrations for other actions and the like, or anything else you think it should have.
Cool! Very nice! I'll start working on it! I'll message you soon, thx!
This is interesting, but would indeed require huge changes to almost all editor controls. It would also make #1444 a lot easier!
I sort of did some method refactoring in https://github.com/godotengine/godot/pull/43663 where I took all the logic out of the GUI input method on text edit and line edit. This PR would require a lot more of that, which is actually good because it promotes single responsibility for methods, as opposed to a huge GUI input method which holds all the logic.
Overall the editor code is quite messy (gui code often is) and I think this idea would actually go a long way in helping compartmentalise functionality and make it more extensible.
One big issue I see is how context should be handled. How do we handle commands that require specific conditions to be fulfilled? For example: Toggle Visibility of the selected node when no node is selected, or executing a command for the Script Text Editor while you are in the 2D viewport. Would this be allowed?
For this same reason, I think linking keyboard shortcuts directly with actions could be a bad idea unless some context-sensitivity is brought in.
@EricEzaM you'd either have to leave it as a documentation concern, so that people have to know which context they are in (a mental map) before they execute an operation that requires that context, OR you'd have logic within the EditorActions API that takes a requested context into account before attempting to execute any operations. And that API would need to line up correctly with the actual Editor to ensure that it doesn't attempt to access objects in the Editor node tree that haven't been instantiated yet.
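The second option could be sketched as actions carrying an optional availability predicate that is checked before execution. This is a hypothetical standalone illustration (standard-library types stand in for Godot's; `is_available` is an invented name):

```cpp
#include <functional>
#include <map>
#include <string>

// Sketch: each action may carry a context predicate; execute() refuses to
// run (and reports failure) when the editor is not in the required state,
// e.g. "a node is selected" or "the script editor has focus".
class ContextualActions {
public:
    struct Action {
        std::function<void()> callable;
        std::function<bool()> is_available; // empty == always available
    };

    void add_action(const std::string &p_name, Action p_action) {
        actions[p_name] = std::move(p_action);
    }

    bool execute(const std::string &p_name) {
        auto it = actions.find(p_name);
        if (it == actions.end()) {
            return false;
        }
        if (it->second.is_available && !it->second.is_available()) {
            return false; // context not satisfied; caller can report why
        }
        it->second.callable();
        return true;
    }

private:
    std::map<std::string, Action> actions;
};
```

The return value gives a command palette or external caller a way to tell "not found" and "not currently valid" apart from a successful run.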
If you look at add_shortcut, you can bind the existing shortcuts to a dropdown list search database.
```cpp
void EditorSettings::add_shortcut(const String &p_name, Ref<Shortcut> &p_shortcut) {
	shortcuts[p_name] = p_shortcut;
}
```
See also https://github.com/godotengine/godot-proposals/issues/1444.
After starting with @willnationsdev's branch for Editor Actions, I have added a bit more and have a bit of a test going. A few notes:
- Actions are registered with add_action(StringName name, const Vector<Variant> &p_args, Control *p_context). This is needed because basically all the UI code uses something like _menu_option(int p_option), so you need a way of passing params to that, e.g. editor_actions->add_action("quick_open_script", callable_mp(this, &EditorNode::_menu_option), varray(FILE_QUICK_OPEN_SCRIPT));
- There is an execute_action method which invokes the registered Callable with its bound arguments.
So it works pretty well I guess. Super easy to construct a command palette.
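That registration-with-bound-arguments step can be pictured standalone like this, with a plain std::function standing in for callable_mp plus varray, and the argument baked in at registration time (all names here are placeholders; FILE_QUICK_OPEN_SCRIPT is an arbitrary illustrative id):

```cpp
#include <functional>
#include <map>
#include <string>

// Stand-in for a UI class whose menu handler takes an option argument,
// the way EditorNode::_menu_option(int p_option) does.
struct MenuSketch {
    int last_option = -1;
    void _menu_option(int p_option) { last_option = p_option; }
};

// Sketch: the registry stores zero-argument closures, so handlers that
// need parameters get them bound in when the action is registered.
class BoundActions {
public:
    void add_action(const std::string &p_name, std::function<void()> p_bound) {
        actions[p_name] = std::move(p_bound);
    }

    bool execute(const std::string &p_name) {
        auto it = actions.find(p_name);
        if (it == actions.end()) {
            return false;
        }
        it->second();
        return true;
    }

private:
    std::map<std::string, std::function<void()>> actions;
};
```

This is why add_action takes the argument list up front: by the time a palette or shortcut fires the action, no caller needs to know which menu id it maps to.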
Shortcuts/event handling is where it becomes a bit more tricky. In the OP, it says the flow should ideally be:
Control input callback/notification:
- detects the event via EditorActions::is_action(event)
- handles the logic associated with the event by calling EditorActions::get_singleton()->execute("group/behavior", params...)
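Sketched standalone, with the input event reduced to a plain keycode and standard-library types in place of Godot's (`is_action` and `execute_by_event` are placeholder names):

```cpp
#include <functional>
#include <map>
#include <string>

// Sketch of the proposed flow: a control's input callback asks the action
// registry whether an event matches any registered shortcut, then executes
// the associated behavior instead of handling the key itself.
class ShortcutActions {
public:
    void add_action(const std::string &p_name, int p_keycode,
            std::function<void()> p_callable) {
        shortcuts[p_keycode] = p_name;
        actions[p_name] = std::move(p_callable);
    }

    bool is_action(int p_keycode) const {
        return shortcuts.count(p_keycode) != 0;
    }

    // Returns true if the event matched a shortcut and the action ran.
    bool execute_by_event(int p_keycode) {
        auto it = shortcuts.find(p_keycode);
        if (it == shortcuts.end()) {
            return false;
        }
        actions[it->second]();
        return true;
    }

private:
    std::map<int, std::string> shortcuts; // keycode -> action name
    std::map<std::string, std::function<void()>> actions;
};
```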
How would this work given that most (if not all) shortcuts go through Buttons and PopupMenus (indirectly, MenuButtons)? The input methods on those are called, and then they emit signals that something happened: button pressed, popup menu index selected, etc. Then, typically in their parent, that signal is connected and some logic is executed ("Quick Open Script", "Select Rotate Tool", etc.). So how would EditorActions take over that input handling? The input would still need to go via buttons. I don't see how that fits together at the moment.
Unless the plan is to do away with ED_SHORTCUT and replace it with EditorActions, in which case it would just be a straight up replacement of the system:
```cpp
p->add_shortcut(ED_SHORTCUT("editor/quick_open_script", TTR("Quick Open Script..."), KEY_MASK_CMD | KEY_MASK_ALT | KEY_O), FILE_QUICK_OPEN_SCRIPT);

// becomes...

editor_actions->add_action("quick_open_script", TTR("Quick Open Script..."), callable_mp(this, &EditorNode::_menu_option), varray(FILE_QUICK_OPEN_SCRIPT), KEY_MASK_CMD | KEY_MASK_ALT | KEY_O);
p->add_shortcut(editor_actions->get_action_shortcut("quick_open_script"), FILE_QUICK_OPEN_SCRIPT);
```
^ This is a bit lame because the EditorAction is not invoked... rather, the popup would still need to be connected to "id_selected", passing the id to _menu_option, which the EditorAction is already set up to do. So there is some duplication of logic here, which is not ideal.
Some places, like the CodeTextEditor, use ED_IS_SHORTCUT directly. This logic could potentially be simplified:

```cpp
if (ED_IS_SHORTCUT("script_text_editor/move_up", key_event)) {
	move_lines_up();
	accept_event();
	return;
}

// becomes...

bool execute_success = editor_actions->execute_action_by_event("script_text_editor/move_up", key_event);

// elsewhere, the above action was defined:
editor_actions->add_action("script_text_editor/move_up", TTR("Move Lines Up"), callable_mp(this, &CodeTextEditor::move_lines_up), varray() /* no args to pass */, KEY_MASK_CMD | KEY_UP);
```
I think EditorActions could be used to completely remove the ED_SHORTCUT system. After all, all EditorSettings does is store a Map<String, Ref<Shortcut>> for shortcuts, which is what EditorActions would basically be doing anyway.
@EricEzaM AAAAACK, wut. I mean, cool! But also, we should chat some time with you, me, and @firststef since I've helped him a bit with a fuller implementation of what I started with that goes along my own vision for the feature. We should evaluate what we've each done and try to merge the experimental features to ensure that one implementation supports both of our needs.
@willnationsdev sure. I saw @firststef asked on the dev IRC but didn't get much of a response.
I think there is quite a bit of depth to the changes an implementation of this idea would require. The implementation in my previous comment is non-ideal due to the fact that EditorActions could not be bound directly to signals from button, popupmenu, etc, and fixing that could potentially be quite a bit of work.
Additionally, I have pretty much ignored @firststef's initial idea of having chainable or custom-callable editor actions like create_new_scene("myscene").add_node(Node3D, "My3DNode"), etc., kind of like the commands you can use in AutoCAD, if you have ever used that. That would require even more work, because currently "creating a new scene" only opens it in the editor; to name it, it must be saved. Additionally, an action like "Add Node" would (currently) be bound so that it just opens the type selection dialog. A system that would allow adding the node, selecting the type, and giving it a name all in one action would require a lot of work, I think.
Anyway, for now I think it's probably best to aim for the easier target and just get a basic "EditorActions" system implemented and have it feed some sort of command palette. It could also potentially replace the shortcuts system in EditorSettings? I dunno, I would need to play around with it a bit more.
Looks like there's a PR for this now: godotengine/godot#49417.
But we also have a dedicated proposal for this by now as well: #1444.
The linked PR had something; why was it closed?
What do you mean? I only see a proposal that was replaced with a different one.
So https://github.com/godotengine/godot/pull/47002 was closed, and this issue was resolved by the implementation of the command palette?
> The linked PR had something; why was it closed?
I closed it as I did not have the time to improve/finish the feature. If anyone needs anything you can leave me a message.
This is a rewriting of godotengine/godot#31190 as a proposal, updated with more information.
Describe the project you are working on:
Plugins for the Godot ecosystem that would ideally be able to detect editor actions and/or instruct the editor to perform specific actions.
Describe how this feature / enhancement will help your project:
If I wanted to build a tool that manipulated the Godot Editor or reacted to Editor operations, and that also enabled user-added content to control/be controlled via this same tool, I would not have an adequate system for doing so. Examples of plugins that could take advantage of this are...
Vim emulation support for the Godot Editor. You would need to be able to detect and override specific editor actions as well as arbitrarily execute editor actions based on user-defined input sequences. This would extend beyond the capabilities of the current EditorPlugin/EditorInterface API and get into the specifics of bypassing actual input handling (opening and closing dialogs, selecting options in arbitrary in-focus Control nodes) and instead directly executing the behaviors associated with those inputs, much like keyboard shortcuts. (A project I started and stopped on my own because I ran into these roadblocks).
Controlling the Godot Editor's state through an external program, similar to how the GDScript Language Server would give GDScript information to an external program. Only, this would encompass the behavior of the entire editor, which could be useful for more than just a text editing program like VS Code/Atom, etc.
An interactive tutorial system akin to UE4 (can't find an example online, but it also has these built into the editor) and Unity. They offer a more streamlined/involved introduction to engine concepts by familiarizing the user with how to do things inside the Editor itself (no going back and forth between a video/docs and the editor). Someone on Reddit already wants to pursue this idea.
A Command Palette plugin akin to VS Code's, where 1) the palette can automatically populate itself with existing Editor commands and 2) users can add user-defined commands related to their own editor tools, etc. Someone already has a plugin for this, and another one. It shouldn't have to reinvent the wheel for a lot of this, though, and there are many features it simply can't support since they aren't yet exposed to the scripting API through any "arbitrary execution" API.
An editor console that implements a bash-like scripting system for manipulating the editor and its contents, complete with miniature editor programs and inter-program piping (could put the console in the bottom panel). No, EditorScript doesn't count. They do not support piping (no return value), nor can they be executed in sequence. They also cannot support user-defined operations. (A project I started and stopped on my own because I ran into these roadblocks).
Godot currently supports two editor action-mapping systems, each with their own issues:
EditorInterface:
Pro: Collects a number of operations for manipulating the editor from the scripting API.
Con: User plugins do not have the ability to add to its functionality.
Con: Many of the operations it supports are embedded within nested data structures. The full scope of what it can do is not clearly accessible from the top-level of the API.
KeyboardShortcuts:
Pro: Defines a large collection of operations supported by the editor in a list.
Pro: Any statically-defined triggers for operations are tracked and can be updated easily.
Pro: Users can define their own shortcuts.
Con: Operations can only be triggered by using input events.
Con: If one were to try and trigger these behaviors manually, one would have to simulate the input events as if they were applied directly to the editor. This does not work for key remapping systems such as vim bindings or emacs bindings, which necessarily need to override shortcuts with custom functionality.
Show a mock up screenshots/video or a flow diagram explaining how your proposal will work:
This is primarily a backend change. Front-facing changes would be plugin implementations, and mockups of plugins are beyond the scope of this repository.
Currently, the flow of things is like this:
I would prefer something like this:
Describe implementation detail for your proposal (in code), if possible:
If this enhancement will not be used often, can it be worked around with a few lines of script?:
There are a wide variety of plugin contexts in which this set of changes could be applied, all of which would require extensive changes to the editor codebase in order to accommodate their ideal workflow/features.
Is there a reason why this should be core and not an add-on in the asset library?:
The changes involve extensive editor source code modifications. This cannot be accomplished by an add-on in any sufficient capacity.