Vipitis opened 4 months ago
Little update: It seems to sorta work. But a bunch of stuff is still broken:
- `on_resize` function to fill the new space (similar to behaviour when going fullscreen on the website)
- new breaking examples found, which might be unrelated to this PR, but I will note them down for later reference
I think I finally fixed the compatibility issue. There are some small visual issues which look like precision problems to me (not sure yet). And the performance seems horrible... Please let me know if you find any shaders that are broken (not due to missing features or wgpu bugs).
Will work on tests, examples and documentation to hopefully get this ready for next week.
Edit: found this one seemingly broken: `wgpu-shadertoy https://www.shadertoy.com/view/tsKXR3`

A detailed example where the precision of the alpha channel is different: https://www.shadertoy.com/view/wsjSDt
I think this is finally ready for review - and I welcome some feedback.
Cool stuff!
@Korijn will you be able to help with a review with this? Would be great to get this merged and get a v0.2 released in the next couple of weeks.
This branch is pretty huge, I'll resume the review later, sorry, still getting used to my new work rhythm and finding a place for pygfx. Let me know if there are any specific parts of the diff I should focus my attention on first.
No worries, this is sorta a large rewrite. Perhaps others can help too, time permitting.

> Let me know if there are any specific parts of the diff I should focus my attention on first.

The part I am most unsure about is `_update_textures`, since it feels really inefficient to make a new texture for every single frame. That includes a new binding and sampler too. I tried to use `TextureView` instead, which worked much better, but I couldn't get it to work when the Buffer passes also sample buffer inputs. I feel like I am missing something. The added overhead makes the example from the API tests run at like 45 fps, for example.
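For reference, here is a minimal sketch (plain Python, no actual wgpu calls) of the caching pattern that would avoid rebuilding everything per frame: resources are keyed on size/format and only recreated when that key changes. `ChannelCache` and the `create` callback are hypothetical names, not part of this PR.

```python
# Hypothetical sketch: cache GPU-side objects per channel and only
# recreate them when the backing size/format actually changes, instead
# of building a fresh texture + sampler + bind group every frame.

class ChannelCache:
    def __init__(self):
        self._entries = {}  # channel id -> (key, resources)

    def get(self, channel_id, size, fmt, create):
        """Return cached resources for this channel, rebuilding only
        when (size, fmt) differ from the cached key."""
        key = (size, fmt)
        cached = self._entries.get(channel_id)
        if cached is not None and cached[0] == key:
            return cached[1]
        resources = create(size, fmt)  # would wrap device.create_texture(...) etc.
        self._entries[channel_id] = (key, resources)
        return resources


cache = ChannelCache()
calls = []
make = lambda size, fmt: calls.append(size) or {"size": size, "fmt": fmt}
r1 = cache.get("iChannel0", (800, 450), "rgba8unorm", make)
r2 = cache.get("iChannel0", (800, 450), "rgba8unorm", make)  # cache hit, no rebuild
assert r1 is r2 and len(calls) == 1
cache.get("iChannel0", (1024, 576), "rgba8unorm", make)  # resize -> rebuild
assert len(calls) == 2
```

With something like this, the per-frame cost reduces to a dictionary lookup except on resize.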
> The part I am most unsure about is `_update_textures`, since it feels really inefficient to make a new texture for every single frame. That includes a new binding and sampler too. I tried to use `TextureView` instead, which worked much better, but I couldn't get it to work when the Buffer passes also sample buffer inputs. I feel like I am missing something. The added overhead makes the example from the API tests run at like 45 fps, for example.
Maybe @almarklein can weigh in on that issue?
Is `_update_textures` called every frame? It seems to rebuild everything from scratch, from descriptors all the way to the pipeline. I don't have a clear understanding of what happens and the path that leads up to `_update_textures`. But ideally you want to re-use the textures. If that's not possible, you can probably at least re-use the layouts.
> Is `_update_textures` called every frame?
Yeah, it will be called for each renderpass, every frame, and then also iterate through all channels... I think this whole method doesn't actually need to exist. I will try to make some changes that simply use a texture view for all the channels, and then have the buffer render to a temporary render target texture before overwriting the old texture. This was likely the cause of the usage conflicts I had on the previous attempt, as it's common to have the previous frame as one of the inputs.
This already works well, but I will try to run some more examples before I push the commits. It will likely break resizing, for which there are no tests in CI (but resizing in this PR is horrible too).
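A minimal sketch of that ping-pong idea (hypothetical names, no real wgpu objects): the pass samples the front texture while rendering into the back one, and the two are swapped after each frame, so the previous frame stays readable without sampling and rendering the same texture in one pass.

```python
# Hypothetical sketch of the ping-pong pattern for a buffer pass that
# uses its own previous frame as an input: sample `front`, render into
# `back`, then swap. This avoids a read/write usage conflict on one texture.

class PingPongTarget:
    def __init__(self, make_texture):
        self.front = make_texture()  # sampled as input this frame
        self.back = make_texture()   # rendered into this frame

    def swap(self):
        """Call after the pass has rendered; last frame's output
        becomes next frame's input."""
        self.front, self.back = self.back, self.front


counter = iter(range(100))
targets = PingPongTarget(lambda: f"tex{next(counter)}")
previous_input = targets.front
targets.swap()  # after rendering frame 0
assert targets.front == "tex1" and targets.back == previous_input
```

In the real renderer, `make_texture` would be the `device.create_texture(...)` call, done only at startup and on resize.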
Resizing now works again as it should. It's not the cleanest solution but it works. I do feel like the performance is really bad again, but I need some proper ways to test that, as I am also using wgpu-py@main currently...
The two CI failures are sorta unrelated: one is deprecated actions and the other is Python 3.8 not being happy with the excessive typing. Python 3.8 will be EOL in a few days - so I no longer care. Will look at the PRs in wgpu-py that updated all the CI stuff and open something separate tomorrow.
approximately 17.5% of public Shadertoys are multipass. Multipass allows up to 4 buffers (A through D) to be rendered as a texture. These can also be used to store data and enable quite some more experiences.
Some of the challenges include timing as well as cross inputs. Buffer passes can seemingly take the exact same inputs as the main "Image" renderpass, including other buffers (and themselves?)
This PR is starting to bloat a little and contains some refactoring of the whole channel input concept... still in flux.
Instead, I will try to implement BufferTexture as a `ShadertoyChannel` subclass so it can hold, for example, the sampler settings. Additionally, there will likely be a RenderPass base class and subclasses for Image, Buffer (A-D), and later Cube and Sound. So the main Shadertoy class contains several render passes, and all of these get their inputs (channels) attached. I even started to try and sketch it out - but will have to sleep on this for a few more days... my concepts change every day, but I need to just try and work on the ideas for a bit.

The render order should be Buffer A through D and then Image, so you can keep temporal data by using a buffer itself as an input.
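To make the intended structure concrete, here is a toy sketch of that layout (all names hypothetical, no actual rendering involved): a `RenderPass` base class, buffer/image subclasses, and a `Shadertoy` class that draws buffers A through D before the Image pass.

```python
# Hypothetical sketch of the class layout described above; the fixed
# pass order (Buffers A..D, then Image) lets a buffer sample its own
# previous frame before the final image is composed.

class RenderPass:
    def __init__(self, name, channels=()):
        self.name = name
        self.channels = list(channels)  # ShadertoyChannel-like inputs

    def draw(self, frame_log):
        frame_log.append(self.name)  # stand-in for issuing GPU commands


class BufferRenderPass(RenderPass):
    pass


class ImageRenderPass(RenderPass):
    pass


class Shadertoy:
    def __init__(self, buffers, image):
        # Buffers always render before the main Image pass.
        self.passes = [*buffers, image]

    def render_frame(self):
        log = []
        for rp in self.passes:
            rp.draw(log)
        return log


toy = Shadertoy(
    [BufferRenderPass(n) for n in ("Buffer A", "Buffer B")],
    ImageRenderPass("Image"),
)
assert toy.render_frame() == ["Buffer A", "Buffer B", "Image"]
```

Cube and Sound passes would presumably become further `RenderPass` subclasses slotted into the same ordering.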
TODOs:

- additional test cases for inferred input types, empty channels (caching conflict with pytest)
- test coverage for examples in readme! (different PR)
- (maybe) some debug mode where you can render the buffers to canvas? (you can use RenderDoc with "capture child processes")