Closed - mitchmindtree closed this issue 10 months ago
@mitchmindtree I've also been diving deep into bevy in the past few months since opening my PR to explore its potential for creative coding, and think that leveraging bevy as the "backend" to nannou is a great idea!
While the bevy ecosystem is still immature, bevy ecs is incredible. The community has some really talented people working on the renderer and other subsystems which would be great to benefit from so we can focus on providing tools for creative coders. The mantra bevy has that "every bevy user is already an engine developer" — i.e. that bevy itself is implemented using the ecs and plugins — seems like a great ethos for creative coding. We can provide a comfortable environment for new developers to get started, but also support advanced users replacing any part of the engine they need.
Thinking about nannou and bevy, a few more random thoughts and questions come to mind:
Short answer: I'd love to be involved! My longer term dream is to build something halfway between touchdesigner and nannou/openframeworks (i.e. node based but code first), and so this project to re-platform on bevy would be a great place to start. :)
My thoughts coming from a heavy user of the framework. I started doing dailies with nannou on January 1st 2019, with no experience in Rust at that time (I started my dailies in 2017 in scala, using the processing core library). I am not going to lie - the first days were hard. Thanks again to @mitchmindtree @JoshuaBatty and @freesig for all the help (and the features that you added for me!).
I love nannou's approach to the whole mess that is "how do I get something to just `draw.ellipse()` when graphics programming is hard and beginners are often confused about some important things (linear/sRGB anyone?)". Nannou has a solution that just works with `draw.ellipse()`. The basic draw example is a wonder of simplicity. (And the more I understand what happens "inside", the more I like what I see.)
Yet you can access all the underlying tools directly. The webgpu `device` is just there. My recent endeavour is a massive particle system (running in a compute shader) with instance rendering and post-processing effects. Each of the steps is a custom webgpu pipeline with wonky shaders (some of them wgsl, some glsl, because that's how long it took me to add a step - and well, you might not want to see the code that does the hot-reloading of shaders, but calling glslangValidator in the background to get spirv from glsl might be happening). And I can just use `draw` in the middle - it just blends in without any issue. I don't think any other framework would let that happen - by design or by nature of the backend.
So here is the thing for me: this flexibility is key, and this modularity around the parts I personally wanted to play with allowed me to develop a better understanding of what I needed. In short, the design of nannou itself made me better as an artist and, I hope, as a programmer.
What I believe is hard is to keep this straightforward approach to creative coding: I argue that most users don't care much about how it works or how "safe" their language/framework is (at least in the beginning). They might just want to call `draw.ellipse()` a lot of times per frame and that's it (and also maybe have a mutable reference to the Model in `view`, but I digress). I don't know Bevy well enough to know how this would work, but I have to say that the homepage description gives a very technical overview of what Bevy can do... When I look at the basic 2D shapes example I'm really confused, as the approach is really different from what I'm used to - and maybe that's because I'm old, but I also think this comes down to a very simple design problem: who is nannou for?
If we're talking beginners (as in, people targeted by most of the videos on "the coding train" youtube channel for example), the workflow should keep in line with the approach of other processing-like frameworks: "create a thing, update the thing, render the thing" without having to think about what happens around or if another approach is better.
If we're talking system programmers where the beauty of the code is as important as the result, does this actually lead to performance improvements over what we (as in this very small niche of graphics programming) usually do? Do we need to define a material when we just want to render flat colors with no lighting system in place? Should beginners be expected to know what diffuse/specular/albedo/BRDF mean when they just want to make a square blink? My point is that the design of a modular rendering engine that aims to do the kind of usual 3D graphics is already miles ahead of what a creative coding framework should start with. I'm not saying it should not have those available, but sometimes you just need one good simple way to render and update a lot of things, and any defaults to suit a more general rendering pipeline might actually be a bottleneck.
I'm sounding harsher than I actually am: I am genuinely interested in the design of a creative framework that would naturally fit within a modern software architecture - on modern hardware (but still beginner-friendly to some extent). I am perfectly aware that the design of processing is really tied to what was available as a Java API back in the day (urh, java3D anyone?)... But this is a very hard question.
Are we merely trying to bring the difference in features from nannou to Bevy? Are we trying to use Bevy to cover what is actually very time-consuming and not very fun, i.e. keeping up with bleeding edge breaking APIs of essential dependencies like winit/wgpu/egui while the whole ecosystem is still being built around that?
I'm not really contributing to the main discussion, so here is what I can actually do to help anyone who is willing to take decisions there and move things forward: I can write examples, I can write tutorials (and I believe I actually should, mind you I might have opinions on a lot of topics) and I can help with the documentation. Does that put me at a maintainer level? Not sure, but I can keep up with the routine maintenance work. I will also follow whatever happens here because right now, I can't imagine not using a framework in rust that lets me do what nannou gives me: access to a `draw`-like API, access to webgpu directly with `device`, screenshot taking (I render each frame, then compile them into videos), and egui as a bonus. That's my whole career right now. (And I haven't even talked about wasm!!!)
Seriously, I owe this project so much it would be painful to not do anything to drive it forward.
Happy new year!
@MacTuitui Although my comment was quite positive, I actually agree with much of your hesitation, and have a deep concern for questions of accessibility and user friendliness. This commitment to education is part of what makes creative coding communities so beautiful. We are not, in fact, game developers and do not share all of their concerns. You're absolutely correct that the average user doesn't care about how cache friendly their rendering algorithm is, or about using super advanced lighting techniques, and I have serious questions about how to provide a good 3d api that abstracts over the significant boilerplate bevy requires to render anything.
The question of whether to use bevy is certainly one of abstraction. But this abstraction is also a strength. Bevy is incredibly modular, for example, they make it incredibly easy to inject the underlying wgpu device anywhere you want:
```rust
use bevy::prelude::*;
use bevy::render::renderer::RenderDevice;

// An incredibly minimal bevy example
fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Update, update)
        .run();
}

fn update(mut device: ResMut<RenderDevice>) {
    // do something with the device
}
```
Here, `RenderDevice` comes from the `RenderPlugin`, which is an item in bevy's `DefaultPlugins`. Everything is a plugin in bevy. The `RenderPlugin` itself is implemented by adding the following plugins:
```rust
app.add_plugins((
    ValidParentCheckPlugin::<view::InheritedVisibility>::default(),
    WindowRenderPlugin,
    CameraPlugin,
    ViewPlugin,
    MeshPlugin,
    GlobalsPlugin,
    MorphPlugin,
));
```
So, for example, we could decide not to use bevy's default renderer at all, but still use the `WindowRenderPlugin`, which wraps winit and provides lots of windowing functionality that nannou has to hand-roll right now.
Still, it's undoubtedly true that the goal of bevy as a framework is to work with higher-level abstractions than just importing the raw wgpu device. Even in this simple example, you have to learn what a `ResMut` is, and how does adding it to our app magically make this resource appear in our function? And it's well worth asking whether the cost of this abstraction is worth it to the user. Can we provide nice apis that allow the user to delay learning about bevy until they really need to?
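To make the "everything is a plugin" point concrete, here is a minimal sketch of what packaging a nannou feature as a bevy plugin could look like. The `NannouDrawPlugin` name and its system are hypothetical illustrations, not an existing API, and the bodies are placeholders:

```rust
use bevy::prelude::*;

// Hypothetical plugin: nannou's draw support packaged the same way as
// bevy's own RenderPlugin, WindowRenderPlugin, etc.
pub struct NannouDrawPlugin;

impl Plugin for NannouDrawPlugin {
    fn build(&self, app: &mut App) {
        // A real implementation would also insert resources (mesh buffers,
        // pipelines) and schedule systems across the main and render worlds.
        app.add_systems(Update, flush_draw_commands);
    }
}

fn flush_draw_commands() {
    // Placeholder: translate queued `draw` commands into renderable data here.
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(NannouDrawPlugin)
        .run();
}
```

The appeal of this shape is that a user who outgrows the simple wrapper can keep the plugin and drop down to plain bevy without rewriting their sketch.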
But there are still real advantages here. Just to use a small example, your nasty shader reloading code? Bevy provides that out of the box.
At this point I'm not familiar enough with bevy to be even remotely relevant to the main discussion (let me try it!), but while the whole beginner-friendly aspect is important (or has to be worked for), I also think there might be a place for an advanced creative coding framework that requires a bit more effort to get running if the gains are there eventually. I'm still not sure what this actually means: can we find where such gains could happen beforehand?
One question that I think gives a good idea of what I am talking about is: should we consider instance rendering a basic function call? How can we design this? `draw.lot_of_ellipses()`? I don't really have any data to back me up, but I am pretty sure anyone who has tried to render a lot of things in a creative coding framework was somehow disappointed by the results.
I believe three.js is one of the few frameworks that has something around instances, but what are we actually trying to do with an API around instances? Lead the user to understand how a shader expects data? What would be the difference between building a wgpu pipeline yourself and being in full control? Should we expect users to code their own shaders at the base level?
In other words, how much grunt work should the framework aim at while helping people navigate around more complex stuff? (I'm at the point personally where I believe I could probably do half of what I do directly with a bare winit+wgpu project, and experience massive pain if I wanted to integrate lyon/egui myself for example)
I know this is not really the focus of the issue, so feel free to ignore this!
There's definitely real friction between nice apis and performance. In some ways, this is like the difference between immediate and retained mode uis, where in order to enable the nice api of "draw this shape, draw that shape", we have to continuously rebuild the entire world rather than having some kind of persistence. Of course, unfortunately for computer graphics, this means we are not just burning cpu cycles, but wasting time copying data back and forth to the gpu. Advanced techniques like indirect rendering rely on living in gpu world as much as possible, so that we are not constantly going back to main memory, which becomes complicated when we want to present the user an expressive api that allows them to simply state what they want to appear on screen and voila, it happens.
To have something like

```rust
for _i in 0..1_000_000 {
    draw.ellipse();
}
```
magically be instanced is, I think, very difficult. Not only do we have to infer that this shape shares some kind of identity (i.e. can be batched), but we also have to figure out how to persist that identity between frames on the gpu while still allowing the user to decide to draw a rectangle or whatever else their heart desires on the next frame. I mean, the example bevy provides here is really complicated!
But, I don't think these things are entirely insurmountable. And while it may not be possible to provide a maximally magical api that "just works" for absolute beginners, part of bridging this complexity is recognizing that we have very different needs than the gamedev folks. They don't care to invest time in making instancing easy, because they're solving really hard problems like rendering a million blades of grass with advanced physics effects that move them individually as your character runs through them. You can't provide a "generic" instancing api for this kind of thing, because it's always going to be so specific to the game you're making. But we're lucky! In some ways, our problems are just easier, because we often just want to make pretty shapes and colors, not model physical reality itself. And I think this presents us with opportunities to build utilities that expose advanced techniques that work for art. The gamedev folks are obsessed with performance, though, and I think that means we can benefit from some of their work and techniques here too.
I agree that magically understanding when to add instancing is very hard, but what about something along the lines of:
`draw.instanced_ellipses().centers(centers).radii(radii).angles(angles).colors(colors);`

where all parameters are `Vec`s of the same length. I argue this by itself is also a very strong learning tool: as a beginner you might not understand the difference between asking lyon to tessellate n circles and asking the GPU to draw the same thing again with different parameters.
Or just drawing a quad with a custom shader.
This of course kills the `draw` design if we introduce new pipelines that need to be drawn in-between different `draw` calls, but I guess this could be managed - and we should make sure the documentation is there to follow.
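As a plain-Rust sketch of the idea (all names hypothetical, no GPU involved): such a builder would mostly collect per-instance attributes and validate that they agree on one instance count before a single instanced draw call is issued.

```rust
// Hypothetical builder: collects per-instance attributes that would be
// uploaded to the GPU as one instance buffer for a single instanced draw.
#[derive(Default)]
struct InstancedEllipses {
    centers: Vec<(f32, f32)>,
    radii: Vec<f32>,
    colors: Vec<(f32, f32, f32, f32)>,
}

impl InstancedEllipses {
    fn centers(mut self, centers: Vec<(f32, f32)>) -> Self {
        self.centers = centers;
        self
    }
    fn radii(mut self, radii: Vec<f32>) -> Self {
        self.radii = radii;
        self
    }
    fn colors(mut self, colors: Vec<(f32, f32, f32, f32)>) -> Self {
        self.colors = colors;
        self
    }
    // Every attribute vector must share one length: the instance count.
    fn instance_count(&self) -> Result<usize, String> {
        let n = self.centers.len();
        if self.radii.len() == n && self.colors.len() == n {
            Ok(n)
        } else {
            Err(format!(
                "mismatched attribute lengths: centers={}, radii={}, colors={}",
                self.centers.len(),
                self.radii.len(),
                self.colors.len()
            ))
        }
    }
}
```

The length check is the kind of error a framework can report clearly at the API boundary, rather than letting it surface as a confusing GPU validation error.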
If we want to explore even more advanced techniques like indirect rendering then it's another layer - not sure this warrants our attention as this is very close to working with the GPU directly and as you said, maybe a different scope. We should however make sure that whatever we do still allows advanced users to tinker there as well.
Apologies for driving the discussion along those lines, but the main purpose of nannou seems to be the key element here. With my basic understanding of wgpu I am completely happy with what nannou offers and I would love for the framework to just keep up with the latest libraries of the ecosystem. But that's extremely selfish and it's indeed the time for me to contribute back. I'm sadly not confident in writing a library of this level, so I can only discuss at this point. Feel free to disregard everything and just do the thing, that could be more efficient.
Just some more notes from my own research for anyone who wants to follow along.
Some proof of the maintenance burden here. I just tried to upgrade #940 with the last two months of deps upgrades, namely winit 0.29, which includes a huge number of breaking changes as part of their "event loop 3.0" refactor. This crate is constantly churning, and while this update is bigger than normal, it requires a huge number of changes to nannou that at best are totally irrelevant to the user and at worst break their code for zero benefit. I could probably get this to compile in another hour or two, but there are a LOT of subtle changes here, which requires paying a ton of attention to the winit dev cycle or pretty extensive manual testing to ensure we didn't break anything.
I was tagged so I'm replying here. I'm going to do my own thing (minimal, lower level), mostly for learning and to make something laser-focused on my own use cases. I appreciate you making nannou years ago - it's one of the things that allowed me to get into generative art. Since I'll be going my own way, I'll be unsubscribing from this thread just to keep my notifications in check.
> Just some more notes from my own research for anyone who wants to follow along.
This is awesome @tychedelia, thanks so much for digging in!
Also ty both for sharing your thoughts and particular areas of concern. I totally agree that the `Draw` API is one of the bigger causes for focus/concern - it's probably the standout feature that nannou provides that bevy currently lacks an easy alternative for?
> How thick or thin of a wrapper do we envision nannou being over bevy? Ideally, users shouldn't need to be familiar with the ecs programming model to get started, but how might we help users bridge nannou's model/view/update api into writing their own ecs systems?
Totally agree, I'm thinking that we might want to aim for two levels of abstraction:

1. Separate dedicated plugins for features like nannou's drawing, laser integration, etc. that allow users to use nannou features in normal bevy apps, leaning into bevy's ECS approach. This will likely be more useful for users working on bigger / longer-term projects who don't mind a bit more verbosity in favour of bevy's modular system approach.
2. A thin wrapper around bevy + the nannou bevy plugins that provides the original sketch experience. This would likely be more useful for folks who are learning how to code through creative coding, and peeps who are interested in getting started quickly, who are less interested in having low-level access to systems provided by bevy, and who are mostly interested in nannou's `draw` API.

I'm less clear on exactly how we'd go about 2. here (e.g. the `Frame` type might need some rethinking), but I imagine this would become clearer while working on bevy plugins for nannou's Draw API and handy wgpu utils for 1. My hope is that 2. might also allow us to keep the existing examples functioning with minimal (if any) changes, but we'll see!
> Bevy has separate 2d and 3d pipelines, which might pose problems for our api. In general, I have questions about what a great 3d api looks like that may be answered from porting the draw api to bevy.
Good question, I haven't looked too closely at bevy's graphics pipelines. I think our existing Draw API assumes 3D, but defaults to an isometric camera view so that everything looks 2Dish until you start playing with the camera. Maybe we can do something similar with bevy where the Draw API uses bevy's 3D pipeline but by default has an isometric camera? Certainly open to suggestions here though, it sounds like you've taken a closer look than I have!
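For reference, a rough sketch of that idea in bevy: spawn a 3d camera with an orthographic projection so that content at z = 0 reads as 2d until the camera moves. Treat this as approximate - bevy's camera API details shift between versions:

```rust
use bevy::prelude::*;
use bevy::render::camera::ScalingMode;

fn setup(mut commands: Commands) {
    // A 3d camera with an orthographic projection: geometry at z = 0 looks
    // "2d" until the user starts translating or rotating the camera.
    commands.spawn(Camera3dBundle {
        projection: OrthographicProjection {
            // One world unit per logical pixel, roughly matching nannou's
            // default window-space coordinates.
            scaling_mode: ScalingMode::WindowSize(1.0),
            ..default()
        }
        .into(),
        transform: Transform::from_xyz(0.0, 0.0, 10.0).looking_at(Vec3::ZERO, Vec3::Y),
        ..default()
    });
}
```

This is close to how nannou's existing default camera behaves, so it may be a natural default for the ported Draw API.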
> Re: isf, bevy actually does support glsl shaders, but wgsl is definitely the idiom.
Yeah nice, maybe it's time for a WGSL alternative to ISF in general :) My memory's a bit foggy on the ISF format and whether it could map directly to WGSL as-is :thinking: Either way, perhaps we can leave this as a kind of stretch goal after the other work, as our existing `nannou_isf` stuff isn't really finished anyways :sweat_smile:
> Bevy is currently in the process of building a new ui system. Egui is great, but we'd probably want to switch once that's ready.
I'm soooo curious about this, excited to see where bevy take their UI stuff! While egui is incredibly handy atm, I'm certainly not married to it.
This is something that's been on my mind since inception! As you mention @tychedelia I think it can be a bit tricky to provide in a way that actually does provide greater efficiency in the general case, but maybe not unachievable.
Another thing that's been on our mind is the idea of providing a signed-distance function oriented alternative to the Draw API, where rather than constructing a mesh with all the required geometry on the CPU, you provide vertices for positions of shapes and use signed distance functions in the fragment shader to represent them. At one point we were considering overhauling the draw API to work like this by default, but I think there are still advantages to the existing approach like supporting more complex shapes, optionally rendering directly to formats like SVG on the CPU, etc. Having an option for working with SDFs in a similar manner to the way Draw currently works could be a lot of fun though, as they're likely a lot more efficient in cases with a lot of shapes, and there's all kinds of interesting operations you can do with SDFs that can be trickier to achieve otherwise.
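To illustrate the SDF idea with plain Rust standing in for what would really live in a fragment shader: a circle's signed distance function returns a negative value inside the shape, zero on its boundary and a positive value outside, so each fragment can be shaded by a simple sign test (or smoothed threshold) with no CPU-side tessellation at all.

```rust
// Signed distance from point (px, py) to a circle centred at (cx, cy)
// with radius r: negative inside, zero on the boundary, positive outside.
fn sd_circle(px: f32, py: f32, cx: f32, cy: f32, r: f32) -> f32 {
    let dx = px - cx;
    let dy = py - cy;
    (dx * dx + dy * dy).sqrt() - r
}

// A fragment shader would turn the distance into coverage, e.g.:
fn inside(px: f32, py: f32, cx: f32, cy: f32, r: f32) -> bool {
    sd_circle(px, py, cx, cy, r) <= 0.0
}
```

The interesting operations mentioned above (unions, intersections, rounding, repetition) fall out of simple arithmetic on these distance values, which is part of what makes SDFs so attractive for generative work.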
In general, it sounds like the sentiment for the bevy approach is generally positive! I'm imagining the path to look something like:

1. Create a `bevy-refactor` branch from `master` that we can collaborate on for this refactor.
2. Update the `master` README and Contributing chapter of the guide with a notice about the refactor, a link to the branch and a link to this discussion.
3. In the `bevy-refactor` branch, set up shell crates for bevy plugins:
   - `bevy-nannou-wgpu` - any useful utils from `nannou-wgpu` that might be beneficial.
   - `bevy-nannou-draw` - the `nannou::Draw` abstraction and related items as a plugin. Might depend on `bevy-nannou-wgpu`?
   - `bevy-nannou-laser` - `nannou_laser` as a plugin.
   - `bevy-nannou-osc` - we should take a look at `bevy-osc` and (if they're still using `nannou-osc`) maybe organise some collab with their maintenance.
   - `bevy-nannou-isf` - not high priority, can revisit after the rest.

   I'm open to better names for these crates! Is this roughly the standard approach to naming bevy plugins?
4. Also, we may wish to start re-working the `nannou` crate in terms of `bevy` and the plugins.
5. We might want to create a github project with dedicated issues to track all this?
@tychedelia I've just sent through a maintainer's invite if you're still interested! I really appreciate the care and attention you've shown to nannou so far and I'm super excited to have you hacking on nannou :heart:
W.r.t. #940, I had a brief look and for the most part it's looking good to me! Are you interested in landing that work before we branch off for `bevy-refactor`? Is it just the `egui` example that's not functioning? I think I ran into the same issue w.r.t. `eframe` in my older attempt #861 :sweat_smile: If so, perhaps we can land your work anyway, but refer users interested in the bevy demo to the previous version pending the `bevy-refactor` work? Alternatively, we don't have to merge it into `master`, but we could still start the `bevy-refactor` branch from your work in #940?
I can make a start on the first steps above later next week, but if you (or anyone) would like to beat me to it feel free!
> I'm less clear on exactly how we'd go about 2. here (e.g. the `Frame` type might need some rethinking), but I imagine this would become clearer while working on bevy plugins for nannou's Draw API and handy wgpu utils for 1. My hope is that 2. might also allow us to keep the existing examples functioning with minimal (if any) changes, but we'll see!
I think keeping (most) of the existing examples functioning is a really good goal.
What to do with `Frame` is an interesting question. The primitive for this in bevy is a 3d camera that has a render target, for example in their multi-window example or rendering a camera view to a texture in a first pass. If the base "2d" view is a 3d orthographic camera, it might be confusing for beginners to understand what "camera" means, and it would be helpful to preserve the illusion of rendering in 2d space, but I'm not sure the "frame" metaphor will work with bevy's idioms.
I think we'll also need to consider what to do with `App`, since some of the lower level details (windowing, access to the wgpu device, etc.) may make sense to leave for more advanced ECS users.
> Another thing that's been on our mind is the idea of providing a signed-distance function
Part of my current workflow requires instancing a particular geometry per pixel of a texture, which makes performance really suffer at higher resolutions. Part of my goal in contributing here is to level up my shader programming and learn some more advanced techniques. :)
> I'm open to better names for these crates! Is this roughly the standard approach to naming bevy plugins?
Yes, as per their book. Our "namespace" would be `bevy_nannou_$crate`.
> We might want to create a github project with dedicated issues to track all this?
That makes sense to me!
> W.r.t. https://github.com/nannou-org/nannou/pull/940, I had a brief look and for the most part it's looking good to me! Are you interested in landing that work before we branch off for bevy-refactor?
I think it could be nice to land that work for the benefit of existing users. This also means (unfortunately) getting up to date on winit changes since I opened the PR. The egui stuff was the primary blocker. I'll try to get my head back into that next week to understand what's necessary to close that out.
Looking forward to discussing this more.
I think this is a great idea, just the other week I was wishing for a nannou-style drawing API in bevy. In my mind, being able to start off with a nannou-style sketch/piece and mix in more bevy-native things such as custom materials, scenes imported from Blender and post-processing effects as needed would be really great! I guess this would require staying highly compatible with bevy's cameras and pipelines? I don't know enough about the internals of either nannou or bevy to visualise how this could mesh together code-wise yet.
> 1. Separate dedicated plugins for features like nannou's drawing, laser integration, etc that allow users to use nannou features in normal bevy apps, leaning into bevy's ECS approach. This will likely be more useful for users working on bigger / longer-term projects who don't mind a bit more verbosity in favour of bevy's modular system approach.
> 2. A thin wrapper around bevy + the nannou bevy plugins that provides the original sketch experience. This would likely be more useful for folks who are learning how to code through creative coding, and peeps who are interested in getting started quickly, who are less interested in having low-level access to systems provided by bevy, and who are mostly interested in nannou's `draw` API.
I like this plan and if I can identify a part where I'm competent enough to contribute I would be happy to help.
Thank you for all your work on nannou so far!
@ErikNatanael I'm definitely interested in tools to help with bevy's high level api! I think our goal at first is going to be limited to porting the existing nannou apis, but there's definitely scope in the future to provide creative tools to work with the entire surface area of bevy.
We're getting started, but you can follow along via the bevy label: https://github.com/nannou-org/nannou/labels/bevy :)
Just a heads up that @JoshuaBatty, @tychedelia and I had a call a couple days ago and we've decided to make a start on this work!
Version `0.19.0` has been published with @tychedelia's great work in #940, and it will likely be the last minor version release before publishing the bevy rework.
See #953 for tracking the bevy rework.
I'll close this issue in favour of discussing the rework there!
Hi folks, just wanted to start with letting you know I really appreciate your concern for the state of nannou. It means a lot that some peeps have found a use for it and care about its direction :heart: A huge thanks to @tychedelia, @zmitchell, @infinity-creative for doing some deep diving and getting to the bottom of the recent egui breakage in #940.
As is obvious, @JoshuaBatty and I have found less and less time to tend to nannou over the past year or two. In the past we’ve been able to use our MindBuffer contract work to guide forward progress on various features, however our more recent software contracting work has left a little less time for playing with LASERs and graphics 🫠 We’ve been chatting about how we can make better use of the time that we do get for nannou and wanted to share some thoughts.
One of the more tedious, time-consuming aspects of nannou maintenance is maintaining the compatibility between wgpu, winit and UI dependencies (previously conrod, now egui) and more generally, the upkeep involved with the custom application event loop, window and graphics wrappers. Back when we kicked off the project ~7 years ago there weren’t many libraries tackling this sort of work, but since then a lot has changed!
Nannou Bevy Plugins?
In particular, Bevy has caught our eye recently. We’ve been doing some experimenting with it over the past few months and we’re thinking it could be a good candidate for taking over some of the heavy lifting previously mentioned. It has multi-window support, supports all platforms nannou already does, provides a more flexible plugin framework (atop the ECS design) and has grown a huge community with some healthy funding. It already has support for many features that nannou is lacking like mobile targets, headless modes, custom event timers and there are plenty of examples.
Perhaps we can leverage bevy’s existing application/events/windowing/audio support to allow us to focus on providing the high-level/fun stuff, and re-orient nannou towards providing a suite of creative-coding focused bevy plugins? These might include:
- `bevy-nannou-draw` - provide nannou's simple `draw` API.
- `bevy-nannou-laser` - basically the existing laser crate as a bevy plugin.
- `bevy-nannou-osc` - providing nannou's simple OSC API (though I think someone's already wrapped it as `bevy-osc`)?
- `bevy-nannou-wgpu` - any of nannou's extra utils that simplify working with wgpu.
- `bevy-nannou-isf` - ISF support, though we might want to reconsider it in favour of a wgsl-focused approach?

For those who appreciate nannou's processing-inspired API (`model`, `update`, `view`), we could consider stripping down the `nannou` crate to be a simple wrapper around `bevy` and the hypothetical `bevy-nannou-draw` plugin. This should allow us to remove the custom application loop, the window wrappers and the simplified event types (in favour of using bevy's), and hopefully only require minimal changes to the examples.

This approach won't entirely remove the aforementioned churn involved with winit, wgpu, egui updates etc, however we'd at least be able to share that load and collaborate upstream with the larger bevy community.
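For anyone unfamiliar, the processing-inspired surface that this thin wrapper would preserve looks roughly like the following today (simplified from nannou's own examples; exact signatures may differ between versions):

```rust
use nannou::prelude::*;

fn main() {
    nannou::app(model).update(update).simple_window(view).run();
}

struct Model;

// Runs once at startup: create windows, load assets, build state.
fn model(_app: &App) -> Model {
    Model
}

// Runs once per frame before `view`: mutate the model here.
fn update(_app: &App, _model: &mut Model, _update: Update) {}

// Runs once per frame: describe what to draw.
fn view(app: &App, _model: &Model, frame: Frame) {
    let draw = app.draw();
    draw.background().color(BLACK);
    draw.ellipse().w_h(200.0, 200.0);
    draw.to_frame(app, &frame).unwrap();
}
```

The hope is that a bevy-backed `nannou` crate could keep this exact shape, with bevy's scheduler driving `update` and `view` under the hood.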
Maintainers Welcome
@tychedelia, @zmitchell, @MacTuitui I'd love to add any of you as maintainers if you're still up for helping out (though we do ask that you get a review from at least one other member if we're absent like we have been) - just leave a comment expressing interest below. Thanks a lot to @zmitchell for reaching out with #939 in the first place.
Next Actions
If the general sentiment around leaning into the bevy ecosystem seems positive, I’ll do up another rough plan of how we might go about this and start a dedicated branch for the work. In the meantime I plan to take a closer look at tychedelia’s epic work in #940!