I've currently got the opportunity to do some work with Unreal, so I'm going to use this issue as a canvas for my thoughts on how Nu relates to it.
Perhaps the deepest way in which Nu differs from Unreal is that Nu is, at its heart, a code-first, language-oriented game engine whereas Unreal is, for lack of a better term, a design-oriented game engine. Whereas Nu gives you the ability to define little languages, sometimes dynamic, sometimes static, for each of your game's domain problems, Unreal gives you a single runtime uber-language (blueprints). The languages exposed and implemented in terms of Nu are interoperable only when explicitly defined as such, whereas systems implemented in terms of Unreal's blueprints have an implicit base level of interoperability. Whereas Nu wants the top level of program control to be driven entirely by code, Unreal wants top-level program control to emerge from dynamically-defined behavior potentially peppered across Actors that are related only implicitly.
There are many trade-offs implied by this contrast:
1) Nu is aimed at making program bugs as obvious as possible, leaning heavily on types to preempt as many of them as possible. Unreal, on the other hand, provides run-time tooling to help surface and diagnose bugs to the extent possible.
2) Nu provides a pretty clear dichotomy between code-time behavior and design-time behavior, stratifying them between procedural F# code and declarative s-expression-based representations. Unreal intentionally blurs the lines heavily - anything that can be done in blueprints can be done in C++, and nearly anything that can be done in C++ can be done in blueprints.
3) Whereas programmers often see the unique value of Nu and the Elmish approach, artists strongly prefer Unreal. Rather than learning programming up-front, many artists learn Unreal and then learn to program it only as necessary. Nu doesn't have any specific appeal to artists uninterested in coding.
So the pedagogical question I can pose here is - if I could go back in time and write Nu all over again AND I had the budget required to implement something at the scale of Unreal, what would I build? How similar in structure would it be to what I have now? How big a factor would the budget be in the imagined differences? Would I still take a language-oriented approach focusing on correctness and ease of reasoning, or would I build a designer-oriented tool closer to what Unreal has grown into?
These are the questions I aim to answer as I continue to explore Unreal.
If you look at Nu in terms of just its classic programming interface, it's not structured all that differently from Unreal. It is component-based (with Facets) and event-driven, just like Unreal. Nu's scripting language could also be seen as equivalent in some ways to Unreal's blueprints.
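To make that structural similarity concrete, here is a self-contained toy sketch of the facet + event shape - the names below (Event, World, Facet, subscribe, publish) are stand-ins for illustration, not Nu's literal API:

```fsharp
// Toy sketch of the facet + event pattern; these types are stand-ins, not Nu's actual API.
type Event = { Address : string; Payload : obj }
type World = { Handlers : (string * (Event -> unit)) list }

let subscribe address handler (world : World) =
    { world with Handlers = (address, handler) :: world.Handlers }

let publish (event : Event) (world : World) =
    for (address, handler) in world.Handlers do
        if address = event.Address then handler event
    world

[<AbstractClass>]
type Facet () =
    // Register / Unregister are the hooks invoked when an entity starts or stops
    // participating in the simulation - the analogues of Unreal's Play / Stop.
    abstract Register : entity : string * world : World -> World
    abstract Unregister : entity : string * world : World -> World
    default this.Unregister (_, world) = world

// A facet composes behavior onto an entity and wires up events when registered.
type DamageableFacet () =
    inherit Facet ()
    override this.Register (entity, world) =
        subscribe (entity + "/Damage") (fun evt -> printfn "%s hit for %A" entity evt.Payload) world

// Usage: register the facet for an entity, then publish an event at it.
let world = DamageableFacet().Register ("Player", { Handlers = [] })
publish { Address = "Player/Damage"; Payload = box 10 } world |> ignore
```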
One interesting difference is that Unreal elides calls to Play and Stop while editing - the analogues of Nu's dispatcher Register and Unregister methods. This raises an interesting question - would it make sense for Gaia to also avoid calling the dispatcher Register and Unregister methods until Ticking is enabled?
To answer the above question - no, but only because that's exactly where the designs of UnrealEd and Gaia differ! UnrealEd, being a design-first tool, does not support in-editor play (EDIT: this is somewhat inaccurate). Gaia, on the other hand, is built not just for in-editor play, but also allows undoing and redoing of gameplay. In that way, they are fundamentally different tools. Whereas Gaia is a real-time live game editor, UnrealEd is more like an IDE with viewports. So if Nu ever wanted to provide a more design-centric editing tool like UnrealEd, it would have to be a different program than Gaia, or Gaia itself would have to be repurposed. THAT tool or repurposing would elide the calls to dispatcher Register and Unregister, spawning off an entirely new process (as is done by UnrealEd) in order to play the game, actuating any static compilation phases in the process.
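As a rough illustration of that architectural difference - the names below are hypothetical stand-ins, not Gaia's or Nu's actual API - the two styles of editor differ in when registration happens:

```fsharp
// Hypothetical sketch only; EditorMode, registerEntity, etc. are illustrative stand-ins.
type EditorMode =
    | LiveEditor   // Gaia-style: entities are simulated (and undo/redo-able) while editing
    | DesignEditor // UnrealEd / "Luna"-style: simulation deferred to a spawned play process

// Placing an entity registers it immediately only in the live editor; the design
// editor leaves registration (and any static compilation) to the play process.
let placeEntity mode registerEntity entity world =
    match mode with
    | LiveEditor -> registerEntity entity world
    | DesignEditor -> world
```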
So what if Nu provided an UnrealEd style of program? It would have the following features -
- A tabbed F# and NuScript code editor (integrate VSCode editing?)
- View ports, each with their own Camera and potentially Screen (all sharing the same World / Game instances)
- A [Compile Project] button
- A [Run Viewport in Gaia] button
- A [Run Game in New Window] button
- An Unreal-style Content Explorer
- NuScript as a substitute for Blueprints (a corresponding node-based visual editor would be straightforward to implement using Nu code)
What would this program be called?
Since the real-time editor is called Gaia, this one might be called Luna?
A bit more on my recent experience working with Unreal -
I hate, hate, HATE working with blueprints. They sucked every last bit of motivation I had out of working on the game. I thought they would be a good alternative to UE3's somewhat shitty scripting language, but no, they are just as bad. I can understand if they are used to configure some isolated component behavior here and there, but when they are used to encode top-level program flow control and other non-trivial emergent behaviors, they seem to me to be very unfit for purpose. At least with UE3's scripting language you had traditional text and symbol searching capabilities to help you correlate and surface emergent program behavior artifacts. With blueprints, you have little to none of that. There does exist a diffing tool - in Unreal - but that's hardly comparable to the raw source diffing one has come to expect. In Unity, you can at least re-org your content folder in a practical manner to keep complexity under control - that barely functions in Unreal. Many artists do like blueprints as an alternative to coding, but one of the nicest properties of such a system - dynamicity - doesn't even seem to be supported: blueprints are compiled and locked down before they can be executed in the program. So with blueprints we have many downsides but very few upsides. It's just not something I can stomach dealing with.
Unreal has a world class renderer and asset streaming system, but from a programmability perspective, it doesn't seem to be something I can personally work with. Unreal has maximized support for the design story but seemingly only by minimizing support for the programmability story. Unity at least has balanced support for both stories.
So what about this potential synthesis -
Designers use blueprints in Unreal because the alternative programming approaches on offer are too difficult for them (C++ or the old C-like UnrealScript language). This is more than understandable because even I as a programmer do not like those approaches for developing gameplay code.
But what if, instead of providing more sophisticated design tools, we dramatically improved the programming story? Improved it so much that designers won't mind learning and doing it?
Well, that's exactly what the Elm / MVU programming style might be able to do. Programming in such a declarative, high-level style could be, in principle, more appropriate for designers than using the monolithic and heavy-weight toolset that Unreal provides. In fact, I think it's likely that this is exactly what would already be happening if the commercial game industry embraced FP and array-based programming rather than OO.
Look back at this in 50 years and see if that isn't exactly what transpires.
I went ahead and renamed layer to group in Nu, even though it ends up contradicting Unreal's nomenclature. No particular reason to be consistent with Unreal anyway, tho.
So, why do Unreal's Blueprints suck so much? I think it boils down to this -
"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious."
In Unreal, the "tables" (data / models) are buried in the editor's content explorer while the primary view is that of the "flowcharts" (blueprints).
In Nu, however, your data / models are defined in statically-typed code before even a lick of algorithm code. While Unreal gets the ideal backwards, Nu gets it right. It's all about the data / models; everything else is happenstance / glue.
If you want your program to be understandable, the key is to keep your data / models defined in your native language, and the earlier they appear in your program's definition the better. Attempting to understand a program by looking primarily at its algorithms is more like reverse engineering than studying structure. This is why I think I hated delving into the Unreal project so much - everything was backward and in no way self-explanatory.
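As a minimal, hypothetical sketch of that data-first style (a generic MVU shape, not Nu's actual API), notice how the model types up front already tell most of the story, while the "flowchart" collapses into one small pure function:

```fsharp
// Hypothetical data-first MVU sketch; the types and names here are illustrative only.
type DoorState = Locked | Closed | Open

// The "tables": the model types come first and carry most of the meaning.
type Model =
    { PlayerHealth : int
      Keys : Set<string>
      Doors : Map<string, DoorState> }

type Message =
    | Damage of amount : int
    | PickUpKey of key : string
    | TryOpenDoor of door : string * requiredKey : string

// The "flowchart": a single pure function from model + message to a new model.
let update message model =
    match message with
    | Damage amount -> { model with PlayerHealth = model.PlayerHealth - amount }
    | PickUpKey key -> { model with Keys = Set.add key model.Keys }
    | TryOpenDoor (door, requiredKey) ->
        if Set.contains requiredKey model.Keys
        then { model with Doors = Map.add door Open model.Doors }
        else model
```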
As usual, Fred Brooks is right, and no end of pain is inflicted by those who failed to leverage his advice.
This issue has been resolved to my satisfaction.
Unreal allows two dimensions of entity stratification via Levels and Layers. Unreal's Levels correspond to Nu's Layers, but currently there is no correspondence between Unreal's Layers and anything Nu provides. If Nu ends up needing something like Unreal's Layers - which, by the way, allow an entity to be a member of multiple Layers at once - we could implement it and call it Groups. Since an entity can be a part of multiple Groups (or none?), I think the name 'Group' would actually be more accurate than Unreal's nomenclature. The only funky thing is that an entity's visibility would have to be controlled by both its parent Layer's visibility and its Groups' visibilities. This might lead to confusion as to why an entity isn't appearing when a designer toggles its parent Layer's visibility on but forgets that its associated Group visibilities are still off.
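A hypothetical sketch of that combined visibility rule (this feature doesn't exist in Nu; the names are illustrative only):

```fsharp
// Illustrative types only - not part of Nu.
type Group = { Name : string; Visible : bool }
type Layer = { Name : string; Visible : bool }
type Entity = { Name : string; Layer : Layer; Groups : Group list }

// An entity renders only if its parent Layer is visible AND all of its Groups are.
// This conjunction is the source of the potential designer confusion noted above.
let entityVisible (entity : Entity) =
    entity.Layer.Visible &&
    entity.Groups |> List.forall (fun group -> group.Visible)
```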
It is an open question whether Nu's Layers should be renamed to something better than either Layer or Level. Layer sort of implies Z-based stratification, but that doesn't always hold. Maybe Partition? EDIT: pretty sure this isn't worth renaming. Hard to find a significantly better name anyway. EDIT2: I ended up renaming Layer to Group rather than Partition. It really needed a nice abstract name. Funny, this is what Layers were originally called in Nu :)