godotengine / godot-proposals

Godot Improvement Proposals (GIPs)
MIT License

Implement Drawable Textures #7379

Open reduz opened 11 months ago

reduz commented 11 months ago

Describe the project you are working on

Godot

Describe the problem or limitation you are having in your project

One feature that is widely requested in Godot is the ability to easily have a texture and just draw to it (or even run a custom shader on it).

Traditionally this has been done with Viewports, but the whole thing is still relatively limited: you can only write to a single image, writing to alpha is more complex, and you can't run multiple write iterations per frame (ping-pong).

While in Godot 4.0+ users can access compute and RenderingDevice, this is still hard and beyond the experience level of most users, and it won't work on the GLES3 (Compatibility) renderer.

Describe the feature / enhancement and how it helps to overcome the problem or limitation

Ideally, it should be possible to expose a very simple API for this via RenderingServer, with a higher-level DrawableTexture2D exposed for regular usage.

Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

At low level, the following API should be exposed for RenderingServer:

enum TextureDrawableFormat {
   TEXTURE_DRAWABLE_FORMAT_RGBA8,
   TEXTURE_DRAWABLE_FORMAT_RGBA8_SRGB, // Use this if you want to read the result from both 2D (non-hdr) and 3D.
   TEXTURE_DRAWABLE_FORMAT_RGBAH,
   TEXTURE_DRAWABLE_FORMAT_RGBAF,
};

// Create
RID texture_drawable_create(const Size2i &p_size, TextureDrawableFormat p_format, bool p_with_mipmaps = false);

// Draw

// Blit a rect
void texture_drawable_blit_rect(const Vector<RID> &p_textures, const Rect2i &p_rect, RID p_material, const Color &p_modulate, const Vector<RID> &p_source_textures, int p_to_mipmap = 0);
// Blit a polygon: 1 point = point, 2 = line, 3+ = polygon (will be tessellated). If UVs are provided, use them; otherwise UVs come from the destination texture.
void texture_drawable_blit_polygon(const Vector<RID> &p_textures, const PackedVector2Array &p_points, const PackedVector2Array &p_uvs, const PackedColorArray &p_modulate_colors, RID p_material, const Vector<RID> &p_source_textures, int p_to_mipmap = 0);

// Utility functions
void texture_drawable_generate_mipmaps(RID p_texture); // Update mipmaps if modified
RID texture_drawable_get_default_material(); // To use with simplified functions in DrawableTexture2D
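
For illustration, here is a rough GDScript sketch of how these calls might look from script. Nothing below exists yet: the names and signatures simply mirror the proposed declarations above, and source_tex_rid is assumed to be the RID of some existing texture.

# Hypothetical usage of the proposed RenderingServer API (does not exist yet).
var tex := RenderingServer.texture_drawable_create(Vector2i(512, 512), RenderingServer.TEXTURE_DRAWABLE_FORMAT_RGBA8, true)
var material := RenderingServer.texture_drawable_get_default_material()
# Queue a blit of the source into the top-left quadrant, with white modulate.
RenderingServer.texture_drawable_blit_rect([tex], Rect2i(0, 0, 256, 256), material, Color(1, 1, 1, 1), [source_tex_rid])
# Refresh mipmaps afterwards, since the texture was created with them.
RenderingServer.texture_drawable_generate_mipmaps(tex)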

And that's it! We also need to add a new type of shader in Godot: SHADER_TYPE_TEXTURE_BLIT

Example of how to use:

shader_type texture_blit;
// Optional render modes:
render_mode blend_mix /* default , also available: blend_add, blend_sub, blend_mul */;

uniform texture2D source_texture : hint_blit_source;
uniform texture2D source_texture2 : hint_blit_source2;
// up to 4 sources

void blit() {
     // Read sources and blit
     COLOR = texture(source_texture, UV) * MODULATE;
     COLOR2 = texture(source_texture2, UV);
     // up to COLOR4
}

Then assign the shader to a material and use it with the blit functions.
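
As a GDScript sketch of that flow (again hypothetical; blit_shader_code, target_rid, target2_rid, source_rid and source2_rid are assumed placeholders):

# Hypothetical: wrap the texture_blit shader above in a ShaderMaterial.
var shader := Shader.new()
shader.code = blit_shader_code  # The texture_blit shader shown above.
var material := ShaderMaterial.new()
material.shader = shader
# One target per COLOR output; sources presumably bind to the hint_blit_source uniforms.
RenderingServer.texture_drawable_blit_rect([target_rid, target2_rid], Rect2i(0, 0, 512, 512), material.get_rid(), Color.WHITE, [source_rid, source2_rid])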

Higher Level Abstraction

A resource can be provided:

class DrawableTexture2D : public Texture2D {
//...//
   enum DrawableFormat {
       DRAWABLE_FORMAT_RGBA8,
       DRAWABLE_FORMAT_RGBA8_SRGB,
       DRAWABLE_FORMAT_RGBAH,
       DRAWABLE_FORMAT_RGBAF,
   };
    void setup(Size2i p_size, DrawableFormat p_format, bool p_use_mipmaps = false);
    // Simple version without needing to use materials (uses a built-in material). Single source texture.
    void blit_rect(const Rect2i &p_rect, const Ref<Texture2D> &p_source, const Color &p_modulate, int p_mipmap = 0);
    void blit_polygon(const Rect2i &p_rect, const Ref<Texture2D> &p_source, const PackedVector2Array &p_points, const PackedVector2Array &p_uvs, const PackedColorArray &p_modulate, int p_mipmap = 0);

    void blit_rect_multi(const Rect2i &p_rect, const TypedArray<Ref<Texture2D>> &p_sources, const TypedArray<Ref<DrawableTexture2D>> &p_extra_targets, const Color &p_modulate, int p_mipmap = 0);
    void blit_polygon_multi(const Rect2i &p_rect, const TypedArray<Ref<Texture2D>> &p_sources, const TypedArray<Ref<DrawableTexture2D>> &p_extra_targets, const PackedVector2Array &p_points, const PackedVector2Array &p_uvs, const PackedColorArray &p_modulate, int p_mipmap = 0);

   void generate_mipmaps();
};
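
A hypothetical usage sketch of the resource (nothing here exists yet; brush is assumed to be any existing Texture2D):

var tex := DrawableTexture2D.new()
tex.setup(Vector2i(1024, 1024), DrawableTexture2D.DRAWABLE_FORMAT_RGBA8, true)
# Stamp a texture into a sub-rect using the built-in material.
tex.blit_rect(Rect2i(64, 64, 128, 128), brush, Color(1, 1, 1, 1))
tex.generate_mipmaps()
$Sprite2D.texture = tex  # Usable anywhere a Texture2D is accepted.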

If this enhancement will not be used often, can it be worked around with a few lines of script?

N/A

Is there a reason why this should be core and not an add-on in the asset library?

N/A

clayjohn commented 11 months ago

I think this supersedes https://github.com/godotengine/godot-proposals/issues/5272

lyuma commented 11 months ago

Is there a way to support blitting an arbitrary Mesh object, and/or to expand the Vector2 points to allow for some Vector3 data, such as vertices or normals? Clearly this wouldn't have a depth buffer or anything, just access to the mesh data.

As an example, consider a baking process in which we want to paint from a source texture onto a target texture using the mesh's UV layout. In this case, baking texture data might need the mesh's UV layout, but also access to vertex, normal, and tangent data.

(And if 3D meshes are ruled out due to vertex layout constraints and performance tradeoffs, I could ask a related question for meshes in 2D mode which ought to be compatible with this vertex layout)

reduz commented 11 months ago

@lyuma Yes, but to be honest, my thinking is that this is intended to be a simple API, so if you want to do something considerably more complex, it's best to just use RenderingDevice directly.

TokisanGames commented 11 months ago

If I understand it, this would supersede many devs' needs for partial texture updates and high-bit-depth viewports, as we could then do partial updates directly on the texture.

If so, do the same limitations that have hindered implementation of both of those hinder this? Can this be implemented faster than the other two? Is this proposal just a nice-looking API that still has to be figured out underneath, or is it doable sooner rather than later?

reduz commented 11 months ago

@TokisanGames It should not be hard to implement, and it's probably the better way forward, as the alternative means piling more and more hacks onto Viewport, which is not meant for this.

KoBeWi commented 11 months ago

Sounds like it would allow reworking AnimatedTexture instead of removing it.

Zylann commented 11 months ago

Would this API be blocking or asynchronous? I think there are two cases where this can be used (at least from what I would need):

- One-off processing in tools (baking, terrain generation and the like).
- Effects that run every frame in a game (such as post-processing).

Or would the two still require completely different APIs?

In fact, it seems that either way it should not be synchronous until the result is actually needed (like downloading it back to the CPU), but in the second case the time it takes to run matters more than in the first.

bitsawer commented 11 months ago

Instead of limiting texture_drawable_blit_rect() by forcing users to give explicit p_source_textures and modulate arguments, why not leverage the existing ShaderMaterial, so that you can call set_shader_parameter() to declare and set any parameters you need? I believe that would make the API quite flexible for most uses.

So, for example texture_drawable_blit_rect() would then be something like this:

void texture_drawable_blit_rect(const Vector<RID> &p_textures, const Rect2i &p_rect, RID p_material, int p_to_mipmap = 0);

where RID p_material should be a ShaderMaterial. This way you can define and pass whatever arguments you need by calling ShaderMaterial.set_shader_parameter() beforehand. We would not need an explicit modulate argument, either: if you need it, just define a uniform for it and set it using set_shader_parameter(). It is easy to add any other custom data, and this should also be easy to understand, as other shader rendering works this way too.
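
For example (a sketch only: the blit call is still the hypothetical proposal, set_shader_parameter() is the existing ShaderMaterial API, and my_blit_shader, noise_texture and target_rid are assumed placeholders):

var material := ShaderMaterial.new()
material.shader = my_blit_shader
# Any custom data is just declared as a uniform in the shader and set here.
material.set_shader_parameter("strength", 0.75)
material.set_shader_parameter("noise_tex", noise_texture)
RenderingServer.texture_drawable_blit_rect([target_rid], Rect2i(0, 0, 512, 512), material.get_rid())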

I think both this and https://github.com/godotengine/godot/pull/75436 are complementary and have different use cases, although they can overlap and many will be happy with just this API. For example, https://github.com/godotengine/godot/pull/75436 would be very useful for batch rendering octahedral impostors in my current project during runtime.

reduz commented 11 months ago

@bitsawer I think you misunderstood the proposal. If you want to do that, nobody stops you; by the sole fact that the API takes a material, that should work. The option to pass custom textures exists because it can get quite annoying to have to create a ShaderMaterial for every combination of textures you want to pass.

bitsawer commented 11 months ago

Sounds good. The proposal just didn't directly mention any way to pass custom data other than textures and modulate, so I was a bit worried. But if using a ShaderMaterial with it is possible when you need some extra parameters and customization, it sounds good to me.

reduz commented 11 months ago

@Zylann It's synchronous. If you want to get the texture back to the CPU you can, but you will stall rendering until that part is done. Implementing asynchronous texture retrieval is planned, so eventually I guess that could work too.

Zylann commented 11 months ago

@reduz You might have misunderstood me: I meant that the actual rendering should likely not happen literally at the moment the function is called, depending on the use case, because that can have a terrible impact on performance (not sure why it would need to, outside of requesting the data on the CPU, which stalls either way?). For tools it's probably fine, but not in a game if used for post-processing that happens every frame. Rather, I'd expect rendering to be queued and executed at once with the rest (or at a specific time), and only stalled if the user wants to get the result as an image immediately after the call, for example. Hence my question.

Even for tools it could help, in fact: one reason I was interested in a more specialized API is that when I implemented terrain erosion, I had to run the same shader on a texture 30 times, which had to take 30 frames. I don't actually need the image on the CPU between each pass. With the drawable API I could do it in one frame, but if that means 30 synchronous calls, the overhead is multiplied (the same overhead I refer to in the post-processing case, assuming this proposal can also be used for that).
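
As a sketch, the erosion loop with the proposed API could then look like this (assuming blits are queued rather than executed synchronously; erosion_material_rid and the two drawable texture RIDs height_a / height_b are placeholders):

var src := height_a
var dst := height_b
for i in 30:
    # Each pass reads the previous result and writes the next one (ping-pong).
    RenderingServer.texture_drawable_blit_rect([dst], Rect2i(0, 0, 1024, 1024), erosion_material_rid, Color.WHITE, [src])
    var tmp := src
    src = dst
    dst = tmp
# All 30 passes would then run on the GPU within a single frame.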

Although, regarding async download, that would be nice to have as well (and not just with textures).

reduz commented 11 months ago

@Zylann Oh yeah, it won't happen at the time you call it.

clayjohn commented 11 months ago

@reduz Is your thinking that we would call an update function in the main RenderingServerDefault::_draw() loop like we do with particles?

Something like:

RSG::texture_storage->update_drawable_textures();

ucmRich commented 11 months ago

Let me tell you what my brain gets from this feature -- if I understand this right...

--OPTION 1--

We could have a texture, like a 512x512 set of pixels, then at runtime, as the game progresses, draw or change the RGBA values of a single pixel (or a range from one XY to another) in the texture based on events and stats of the game...

e.g. Age of Empires or SNES Zelda...

(Think: every tile in a 2D scene... or in a 3D scene, every flat 2D plane divided up into 2D tiles like a checkerboard, but existing as planes in 3D space... or voxels...)

For instance:

World maps: take a player starting a game with fog of war turned on (like in Age of Empires); as the villagers explore, more of the map gets revealed (in the bottom right corner of the screen).

Level maps: in top-down Zelda, you can explore dungeons and different areas of the overworld. Different maps for 2D tile placements based on the values in RGBA... or even just RGB, with R*G giving 65,535 tiles (0 as null), B as the status of the tile, and maybe A as visibility. With multiple textures stacked on top of each other you could extend the overall level map capabilities. For instance (each pixel == one tile or one voxel): texture 1 for the basic 16,777,215 (RGB) or 4 billion (RGBA) values, then texture 2 for stat types...

Why this would come in handy is saving and loading game data, and the ability to map it or even design levels and world maps, but as PNG/bitmap/JPG or actual textures...

--OPTION 2--

Draw damage and make use of 'decals' to add to existing textures on 3D models... especially a paintball mode :heart:

--OPTION 3--

Make changes at runtime to all the various texture maps used in a material on a 3D model, to change properties such as emissive lighting, diffuse lighting, shininess... even make metals look like they turned into wood because the model got hit with a "terraform gun" of sorts [[ is that a real weapon or did I just invent it? hey, use it, I'll play your game, especially if it's a Mars terraforming themed game :-D ]]

--OPTION 4--

Change terrain at runtime, such as changing a yellow brick road into asphalt or a stone road.

P.S. Sorry this was so long.

ThreeMileJump commented 11 months ago

@reduz How does this affect ImageTextures that are updated from a CPU-side Image? For some effects it is useful to manipulate a bitmap directly and then push that up to an ImageTexture. In many cases, the altered region of the bitmap will be much smaller than the whole image. Godot only supports updating an ImageTexture from an Image source of the same size, whereas other libraries allow a partial update of a texture from a smaller image. This suits effects like brush painting and dirty-rect updates of a large texture.

Being able to draw directly to a texture would improve the update process: use a set of smaller ImageTextures and blit them to the drawable texture when they are updated.

What would be ideal is an ImageTexture that supports partial updates from smaller images and can be drawn to as well. This would very neatly provide a mechanism for a texture to be modified by bitmap editing, GPU-side drawing, and also shaders.
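
As a sketch of that dirty-rect workflow (Image.create(), ImageTexture.create_from_image() and ImageTexture.update() exist today; drawable and its blit_rect() are the hypothetical DrawableTexture2D part proposed above):

var patch_image := Image.create(64, 64, false, Image.FORMAT_RGBA8)
var patch_tex := ImageTexture.create_from_image(patch_image)

func paint_patch(at: Vector2i) -> void:
    patch_image.fill(Color.RED)  # Stand-in for real CPU-side brush work.
    patch_tex.update(patch_image)  # Same-size update, so this is allowed today.
    # GPU side: blit the small patch into the large drawable texture.
    drawable.blit_rect(Rect2i(at, Vector2i(64, 64)), patch_tex, Color(1, 1, 1, 1))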

Support for partial update ImageTexture #4017 https://github.com/godotengine/godot-proposals/discussions/4017

reduz commented 11 months ago

@clayjohn

@reduz Is your thinking that we would call an update function in the main RenderingServerDefault::_draw() loop like we do with particles?

You don't have to; I think this is entirely user-driven: users will call the functions every frame with whatever they need to do.

@ThreeMileJump

@reduz How does this affect imagetextures that are updated from a cpu side image?

That's a different use case and should not be affected. It's kind of similar, except you do the work on the CPU.

clayjohn commented 11 months ago

@reduz Is your thinking that we would call an update function in the main RenderingServerDefault::_draw() loop like we do with particles?

You don't have to; I think this is entirely user-driven: users will call the functions every frame with whatever they need to do.

@reduz In earlier comments you said that the rendering won't happen when the function is called. So when should rendering of the texture happen? It seems reasonable that texture_drawable_blit_rect() would queue up an operation that would take place at the beginning of the RenderingServerDefault::_draw() loop in this case.

Zylann commented 11 months ago

that would take place at the beginning of the RenderingServerDefault::_draw() loop

Still related to my earlier post, I'd be curious to see whether that order can be controlled, because it matters if this API is to be usable for post-processing as well.

reduz commented 11 months ago

@clayjohn Oh, I meant that it does not happen when the function is called in the sense that you can't get the result back on the CPU immediately. The rendering API is called immediately when this function is called, if the render thread is on the main thread.

BastiaanOlij commented 9 months ago

One thing I'm wondering about: is this actually still needed? We already have all the ingredients to code this in GDScript with access to RenderingDevice. For now it's Vulkan-only, but at some point we should do some work on Compatibility to expose things there a little better as well.

Have a look at https://github.com/godotengine/godot-demo-projects/pull/938, which shows how you can render to a buffer using compute shaders and then use a Texture2DRD to access it within your materials.

SlugFiller commented 9 months ago

@BastiaanOlij RenderingDevice isn't available on the Compatibility renderer. At all. Nor will it ever be. This feature is renderer-agnostic, which makes it more suitable for add-ons that don't want to limit their users, or for game devs targeting web export in browsers that exist today.

Even if the Compatibility renderer eventually exposes its own set of classes, it would still be very difficult to create a device-agnostic class like the above in GDScript. Detecting which classes are available, avoiding crashes due to missing classes, and bridging all the tiny differences between the exposed interfaces would simply be too complex.

BastiaanOlij commented 9 months ago

@SlugFiller I know that, but there has been a long-standing wish to expose more parts of the Compatibility renderer in ways similar to what we're now doing with the RenderingDevice-based renderers.

They are structurally different, so you'll always end up implementing things separately if you want to support both renderers, but exposing the ability to write your own shader passes on OpenGL is definitely something we could investigate.

lyuma commented 9 months ago

There's nothing about the basic drawable textures concept that should depend on RenderingDevice-specific functionality. Plus, Texture2DRD requires a lot of setup in GDScript, while the high-level drawable texture would ideally be declarative (assign a material in the inspector and go).

I would also suggest that there's generally no need to use a full compute shader for these tasks: often a big triangle with a vertex/fragment shader ought to suffice (and avoiding compute would make it much simpler to support both RD and Compatibility).

So I can see both approaches being useful for different use cases: compute shader + Texture2DRD for high-end processing, and this proposal's DrawableTexture otherwise, as a full-screen vert + frag.

BastiaanOlij commented 9 months ago

@lyuma you can do both raster and compute through RenderingDevice.

But I get what you're saying: this API makes it much easier to do this, while using RD requires a bit more in-depth knowledge (though that could be wrapped up in a plugin). And indeed, as long as we don't have similar features on the Compatibility renderer, it does make sense to have an alternative that is implemented in all backends.

SubtleMetaphor commented 9 months ago

I want to chime in and say that it would be useful to be able to render any mesh using methods similar to these. One use case would be, for example, to "bake" mesh data to a texture according to some shader. In that case, you'd use the UV as the POSITION output:

shader_type spatial;
render_mode unshaded, cull_disabled;

varying vec3 color;

void vertex() {
    // Output each vertex at its UV coordinate in clip space (flipping Y).
    POSITION = vec4(vec2(UV.x, 1.0 - UV.y) * 2.0 - 1.0, 0.0, 1.0);
    color = (NORMAL + 1.0) * 0.5;
}

void fragment() {
    ALBEDO = color;
}

Here, we bake normals to a texture. This might be useful if we need to do some operations in a compute shader that require the corresponding normals to be available in a texture. More steps would likely be required for the final texture output (whatever that may be), like dilating the texture to cover the UV seams. This is not easy to set up in Godot currently.

In Unity, you are able to set the render target to a RenderTexture and invoke Graphics.DrawMeshNow() with a provided shader, which proves very useful for a scenario such as this. Once the texture has been acquired, you could blit it with some other shader to dilate or do other operations.

SlugFiller commented 9 months ago

Hmm, perhaps adding:

void texture_drawable_blit_mesh(const Vector<RID> &p_textures, const RID &p_mesh, const RID &p_skeleton, const Color &p_modulate, const Vector<RID> &p_source_textures, int p_to_mipmap = 0);

blackears commented 9 months ago

That looks good, but you want to be able to pass in a shader along with the mesh, not just textures and color modulation.

SlugFiller commented 9 months ago

Ah, right, I missed the material parameter. This should be:

void texture_drawable_blit_mesh(const Vector<RID> &p_textures, RID p_mesh, RID p_skeleton, RID p_material, const Color &p_modulate, const Vector<RID> &p_source_textures, int p_to_mipmap = 0);

jordo commented 9 months ago

Can we please support single-channel formats?

enum TextureDrawableFormat {
   TEXTURE_DRAWABLE_FORMAT_RGBA8,
   TEXTURE_DRAWABLE_FORMAT_RGBA8_SRGB, // Use this if you want to read the result from both 2D (non-hdr) and 3D.
   TEXTURE_DRAWABLE_FORMAT_RGBAH,
   TEXTURE_DRAWABLE_FORMAT_RGBAF,
};

We're definitely missing these, as we do a lot of effects with single-channel textures... say R8, RH, RF?

TokisanGames commented 8 months ago

We also need RH, RF, and integer-based formats on these drawable textures, and in Texture and Image. Our terrain heights are painted with RF. We're painting color with RGBA8. We're painting textures using a bit-packed, integer-format control map interpreted through an RF texture; it's clunky. Shaders already support usampler, and the back-end rendering device already supports integer formats. Image and Texture should be completed to support all common OpenGL formats.

If Terrain3D and other projects are to support GPU-based texture modification on all hardware, we either need this proposal or access to the RenderingDevice for texture creation and modification in OpenGL / Compatibility mode.

Which is more likely in the future: access to the RenderingDevice in OpenGL to create and modify uint textures, or integer formats in Texture, Image, and DrawableTexture?

SlugFiller commented 8 months ago

From what I understood, the concern about available formats has to do with GL support. As such, the following should be available at a minimum (based on code here):

FORMAT_R8
FORMAT_RG8
FORMAT_RGB8
FORMAT_RGBA8
FORMAT_RGBA4444
FORMAT_RF
FORMAT_RGF
FORMAT_RGBF
FORMAT_RGBAF
FORMAT_RH
FORMAT_RGH
FORMAT_RGBH
FORMAT_RGBAH
FORMAT_RGBE9995

These are the formats for which support is expected on any device. So, naturally, they should all be supported.

clayjohn commented 8 months ago

These are the formats for which support is expected on any device. So, naturally, they should all be supported.

Those are some of the common formats supported for reading. The list of common formats supported for writing is different.

For OpenGL ES 3.0 the list is:

– RGBA32I, RGBA32UI, RGBA16I, RGBA16UI, RGBA8, RGBA8I, RGBA8UI, SRGB8_ALPHA8, RGB10_A2, RGB10_A2UI, RGBA4, and RGB5_A1.
– RGB8 and RGB565.
– RG32I, RG32UI, RG16I, RG16UI, RG8, RG8I, and RG8UI.
– R32I, R32UI, R16I, R16UI, R8, R8I, and R8UI.

For Desktop GL it is:

– RGBA32F, RGBA32I, RGBA32UI, RGBA16, RGBA16F, RGBA16I, RGBA16UI, RGBA8, RGBA8I, RGBA8UI, SRGB8_ALPHA8, RGB10_A2, and RGB10_A2UI.
– R11F_G11F_B10F.
– RG32F, RG32I, RG32UI, RG16, RG16F, RG16I, RG16UI, RG8, RG8I, and RG8UI.
– R32F, R32I, R32UI, R16F, R16I, R16UI, R16, R8, R8I, and R8UI.

Vulkan supports many more formats on top of these.

SlugFiller commented 8 months ago

GLES has no support for writing to float or half formats? At least that's what I'm getting from the list. Also, what are the I and UI formats? 32-bit integers? I suppose those could be used as an alternative to float writing (by doing float-to-int).

I do kind of agree that the formats should be the lowest common denominator. But that's still far more formats than the four specified.

(Also, I'm not seeing RGBAF among the GLES write formats, but I am seeing it in the proposal, so something doesn't add up.)

(Also, for reference: How many formats are gone if you add GLES2 to the mix as well?)

clayjohn commented 8 months ago

(Also, for reference: How many formats are gone if you add GLES2 to the mix as well?)

The GL ES 2.0 spec only requires the following formats to be supported: RGB565, RGBA4, and RGB5_A1; all others are supported through extensions. The OES_rgb8_rgba8 extension is supported on pretty much all devices, and it provides support for writing to RGBA8 and RGB8. The extensions for half-float and full-float are not as widely supported (~50% vs 99%).

TokisanGames commented 8 months ago

The OpenGL ES 3.0 specification (PDF) lists supported texture formats, with bit depths from 8 to 32 across 1-4 channels for float and integer, on pages 143-145 of this file (marked 130-132).

The first list Clayjohn gave is the p. 141 "Texture and renderbuffer color formats". The second list is part of the "texture-only color formats".

Textures cannot be read in any format unless they can also be written to, so all of these formats should be writable, though only half can be used as a renderbuffer, which we users can't use in Godot anyway.

RGBAF is 32 bits per channel across 4 channels, or 128 bits, aka RGBA32F, which is on the list.

Also, what are the I and UI formats? 32 bit integer? I suppose those can be used as an alternative to float writing (by doing float-to-int).

Yes, signed and unsigned integers of various bit depths, specifically for writing int data. They aren't "an alternative to float writing"; float writing is the clunky alternative when you want to write integers.

clayjohn commented 8 months ago

Textures cannot be read in any format unless they can also be written to. So all of these formats should be writeable, though only half can be used as a renderbuffer, which we users can't use in Godot anyway.

Sorry, that's not what that means. Renderbuffer formats are formats you can render to with a shader. Texture-only formats are formats you can upload from the CPU. You cannot render to texture-only formats.

All the target formats for drawable textures need to be renderable; they cannot be texture-only formats.

nullMolotov commented 1 month ago

Any news on this? I'm as hyped as when I first saw this proposal!

Calinou commented 1 month ago

Any news on this? I'm as hyped as when I first saw this proposal!

To my knowledge, nobody has started working on implementing this feature yet. It's still desired, so contributions are welcome :slightly_smiling_face: