AdamsLair / duality

a 2D Game Development Framework
https://adamslair.github.io/duality
MIT License

Combine Texture and Pixmap #782

Open Shadowblitz16 opened 4 years ago

Shadowblitz16 commented 4 years ago

Summary

Please combine Textures and Pixmaps into one resource.

Analysis

Textures and Pixmaps seem to be two resources that do the same thing. I propose that they be combined into a single Texture resource. The new Texture resource would have the properties of both, yet be less confusing for newer users. All images imported into the editor would become Textures, or at least be serialized as such in the project view.

Pros

- Less confusing for newer users

Cons

- Requires someone to port the existing components

Attachments

deanljohnson commented 4 years ago

It might be less confusing if they were combined; however, separating them leads you toward a more performant design. The big difference between a Pixmap and a Texture is that the Pixmap is stored in system memory while the Texture is stored in video memory. It is crucial that you use these two types of memory differently. Some operations can only be done efficiently while the image data is in system memory, before it has been uploaded to the GPU. If we try to manage this behind the scenes, it is very easy for users to end up forcing the engine to do additional transfers to/from the GPU that they could have avoided with the resources kept separate.

Personally, I would rather have it be obvious where the data is stored so that I can make the proper choices about how to use it. If this were managed internally by the engine, I would need to know many implementation details of the engine's code to know which operations were performant at which time.

Any time I've used an abstraction that tries to hide the separation between CPU and GPU memory, it has gotten in the way. It is slightly simpler in the easy cases but a pain in the complex ones.

Is there something specific about having them as separate resources that has caused you trouble? Perhaps that could be addressed without merging the resources.
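The system-memory vs. video-memory split described above can be modeled with a short runnable sketch (Python for brevity; `Pixmap`, `Texture`, and `upload_count` here are illustrative stand-ins, not Duality's actual API):

```python
# Conceptual model of the Pixmap/Texture split (illustrative names only).
# The Pixmap lives in system memory and is cheap to edit; the Texture
# represents an explicit, expensive upload to GPU memory.

class Pixmap:
    """CPU-side image data: editing a pixel is just an array write."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = [0] * (width * height)

    def set_pixel(self, x, y, color):
        self.pixels[y * self.width + x] = color  # fast: system memory only

class Texture:
    """GPU-side image: created by an explicit, expensive upload step."""
    upload_count = 0  # tracks how often we pay the CPU->GPU transfer cost

    def __init__(self, pixmap):
        Texture.upload_count += 1             # the one expensive operation
        self._gpu_copy = list(pixmap.pixels)  # stands in for glTexImage2D

# Edit as much as you like on the CPU side...
pm = Pixmap(4, 4)
for i in range(16):
    pm.set_pixel(i % 4, i // 4, i)

# ...and pay the transfer cost exactly once, when you choose to.
tex = Texture(pm)
print(Texture.upload_count)  # 1
```

Editing pixels touches only system memory; the single expensive transfer happens exactly where the user chooses to construct the Texture.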

Shadowblitz16 commented 4 years ago

> It might be less confusing if they were combined; however, separating them leads you toward a more performant design. The big difference between a Pixmap and a Texture is that the Pixmap is stored in system memory while the Texture is stored in video memory. It is crucial that you use these two types of memory differently. Some operations can only be done efficiently while the image data is in system memory, before it has been uploaded to the GPU. If we try to manage this behind the scenes, it is very easy for users to end up forcing the engine to do additional transfers to/from the GPU that they could have avoided with the resources kept separate.

What about storing a color array that is easily manipulable on the CPU, and only sending it to the GPU each frame to draw with shaders?

> Is there something specific about having them as separate resources that has caused you trouble? Perhaps that could be addressed without merging the resources.

No, it's just annoying and doesn't seem to make much sense.

deanljohnson commented 4 years ago

Sending it to the GPU every frame is exactly what you want to avoid. That is a relatively expensive operation. One of the key ways to improve rendering performance is to reduce data transfers to/from the GPU. You could check for changes before sending it and only send what’s changed, but what if there were many small changes... at some point batching becomes ineffective and the user would have to understand implementation details (that can change at any time) to make the right choices. The current design makes it simple to understand. If you want to modify the pixel data then you use a Pixmap. When you’re done with it you have a Texture ready for drawing.

Also, what if the user doesn’t need to modify the colors manually - should we always keep that color array allocated in system memory even if it’s never used?
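The trade-off between uploading every frame and uploading only on change can be sketched as follows (Python for brevity; `MergedImage` and its methods are hypothetical, for illustration only):

```python
# Two update strategies for a hypothetical merged image resource:
# re-uploading every frame vs. uploading only when the CPU-side
# data actually changed (dirty-flag tracking).

class MergedImage:
    def __init__(self):
        self.pixels = [0] * 16
        self.dirty = False
        self.uploads = 0   # counts CPU->GPU transfers

    def set_pixel(self, i, color):
        self.pixels[i] = color
        self.dirty = True

    def draw_naive(self):
        self.uploads += 1  # re-transfer every frame, changed or not

    def draw_tracked(self):
        if self.dirty:     # transfer only when something changed
            self.uploads += 1
            self.dirty = False

img = MergedImage()
img.set_pixel(0, 42)           # one change...
for frame in range(100):       # ...over 100 frames
    img.draw_tracked()
print(img.uploads)             # 1 upload instead of 100
```

Even this simple version forces a policy choice (when is the flag checked? what granularity is tracked?) that a merged resource would have to make invisibly on the user's behalf.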


Shadowblitz16 commented 4 years ago

It could be sent over when the data is changed, but only after every frame. That would require keeping track of the color data anyway.

I don't see why keeping the color data is a problem. Technically it already holds onto color data; maybe using temporary metadata for the editor would be the solution.

deanljohnson commented 4 years ago

The issue, in my opinion, is that the performance characteristics of the API become unclear. It only slightly simplifies initial learning, while encouraging users not to be cognizant of what their code is doing internally. Usually that kind of abstraction is good, but I think it is a land mine when we are talking about operations so fundamental to game development. You must understand what memory you are operating on if you want fast code.

If we have a Texture.SetColors method, the user doesn't know its performance implications:

- Are multiple calls buffered together? What if two updates are non-contiguous, contiguous, or overlapping?
- When is the transfer to the GPU done? Is the entire texture retransferred, or just the changed bits? If just the changed bits, does the API use possibly more efficient transfer methods when I update the entire texture?
- What if I change colors, draw, and then change colors again within a single rendering? In some APIs the draw would be affected by the second color change and in others it wouldn't. Which do we choose, how does the user know, and what are its implications?
- What if I call SetColors in a way that doesn't actually change the texture? Is a transfer to the GPU made anyway?
- Right now we can have two textures sharing the same system memory, with some GPU-relevant properties differing between the two instances. If the texture stores the pixel memory, how is that data shared? If I update a texture that is sharing system memory with another, are both textures updated, or is only the video memory changed?

When that SetColors call operates on a resource managing both the system and video memory representations, the user has to understand all of these implementation details, because getting them wrong can have drastic performance impacts. With the current model, many of these questions don't even need to be asked: it is clear the operations work on system memory, which is orders of magnitude faster than performing updates on video memory, to the point that I wouldn't care about all the potential optimizations the engine may or may not be doing.

When there is a separation between the two the user knows that Pixmap update operations are fast because they happen within system memory and the performance sensitive portion comes when creating/updating a texture from this. With that simple bit of knowledge they can write fast code.
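The ambiguity argued above can be made concrete: two equally plausible implementations of a hypothetical merged `set_colors` API, identical in signature, with very different transfer costs (Python sketch; all names are illustrative):

```python
# Why a merged SetColors is ambiguous: the same call sequence can cost
# one GPU transfer or many, depending on an internal policy the method
# signature does not reveal. (Hypothetical classes, for illustration.)

class EagerTexture:
    """Policy A: every set_colors call transfers to the GPU immediately."""
    def __init__(self):
        self.uploads = 0
    def set_colors(self, region, colors):
        self.uploads += 1        # one transfer per call
    def end_frame(self):
        pass

class BatchedTexture:
    """Policy B: calls are buffered; one transfer happens at end of frame."""
    def __init__(self):
        self.uploads = 0
        self.pending = []
    def set_colors(self, region, colors):
        self.pending.append((region, colors))   # CPU-side only
    def end_frame(self):
        if self.pending:
            self.uploads += 1    # single combined transfer
            self.pending.clear()

for tex in (EagerTexture(), BatchedTexture()):
    for region in range(10):     # ten non-contiguous updates
        tex.set_colors(region, [0xFF])
    tex.end_frame()
    print(type(tex).__name__, tex.uploads)
# EagerTexture 10
# BatchedTexture 1
```

The caller's code is identical in both cases; only reading the engine's internals would tell the user which cost they are paying.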

Currently, a Texture doesn't have to store pixel data after the initial upload, and pixel data is not transferred every frame. I don't think the engine actually takes advantage of this ability to unload the Pixmap, but you could do so within your game code. If the resources were merged and you still wanted to support this, more things would have to change, whereas the current model already supports it easily.

If you look at the Texture, Pixmap, and PixelData classes you won’t see a lot of duplicated code because they are doing very different things. If they are all responsible for different things then it seems incorrect to me to combine them into a single resource. Only at the highest level are they similar (they store pixel data). Beyond that they are very different in implementation and purpose.

Some of the points I make here don't necessarily apply to Duality right now, but the current design makes them simple to address, whereas the proposed change, as I see/understand it, does not.

This merger of resources may be possible and beneficial, but I think it is a far more complex subject than it seems on the surface.


Shadowblitz16 commented 4 years ago

I can understand where you're coming from, but at the same time I ask: why should the user have to worry about this?

It's a game engine, and the point of game engines is to make creating games easier. It would be better for the user not to have to worry about low-level stuff and instead use easy, already-optimized abstractions to do what they need.

The way I see it, Pixmaps, Textures, and render textures should all be combined and optimized for all use cases.

deanljohnson commented 4 years ago

You're asking for the impossible: something optimized for all use cases that also offers a high level of abstraction.

Abstractions almost always have a performance cost. Every engine has to choose what is worth abstracting while weighing performance, usability, maintenance, and flexibility. A game made in Unreal Engine will almost always be faster than the same game made in Duality because Unreal uses a lower level of abstraction.

For this feature I think the impacts would be as follows:

> the way I see it pixmaps, textures, and render textures should all be combined and optimized for all use cases.

I'm not sure what you are referring to with render textures - those exist in Unity but as far as I am aware are not a thing in Duality. As far as optimizing for all use cases is concerned, that is simply impossible either because optimizing one use case would hurt another or because no contributors have infinite hours to contribute to the engine.

I think we need more justification for this feature: what is hard to do now that this would make easier? When I asked what issues you had with the existing design, you stated:

> no, it's just annoying and doesn't seem to make much sense

A feature cannot be developed from this opinion alone, as somebody else could just as easily say the current design is perfect in every way (for the record, I agree that the current system is slightly annoying, but disagree that it doesn't make sense). Use cases and examples are needed not just to justify this change but to properly inform the implementer as to what needs to be improved.

Shadowblitz16 commented 4 years ago

So here is an example of what I would use it for...

Setting pixel data on image

public Image   Image;
public Color[] Palette;
public int[]   Data;

public void SomeMethod()
{
    SetRenderImage(Image);
    for (var i = 0; i < Image.Width * Image.Height; i++)
    {
        SetPixel(i % Image.Width, i / Image.Width, Data[i]);
    }
}

Changing image properties

public Image Image { get; set; }

Image.SpriteSheetColumns = 1;
Image.AnisotropicFilter = true;

Drawing onto screen

var image = new Image(filename);
SetRenderImage(Scene.Canvas); // canvas is an image
DrawImage(new Rect(0, 0, Scene.Canvas.Width, Scene.Canvas.Height), new Rect(0, 0, image.Width, image.Height), image);

Drawing onto image

var image1 = new Image(filename1);
var image2 = new Image(filename2);
SetRenderImage(image1);
DrawImage(new Rect(0, 0, image1.Width, image1.Height), new Rect(0, 0, image2.Width, image2.Height), image2);

deanljohnson commented 4 years ago

Thank you for the examples - they should help in informing the requirements of this change!

The first example is counterintuitive to me: you are setting an image as the render target just to modify pixels within it in a simple manner. This mutates global state, which can easily break other rendering in unrelated code/files. In the current design we can modify the image pixels directly without mutating global state. This seems like an example that argues against the proposed change.
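The global-state hazard described above can be sketched as follows (Python for brevity; `set_render_image`/`set_pixel` mirror the hypothetical API from the example, not any real Duality call):

```python
# SetRenderImage mutates a global render target, so unrelated code that
# forgets to restore the previous target silently draws into the wrong
# image. (Hypothetical functions, modeled with plain lists as "images".)

current_render_target = None  # the global mutable state

def set_render_image(img):
    global current_render_target
    current_render_target = img

def set_pixel(i, color):
    current_render_target[i] = color  # writes to whatever is bound

screen = [0] * 4
sprite = [0] * 4

set_render_image(sprite)  # some helper binds its own target...
set_pixel(0, 99)          # ...and never restores the old one

set_pixel(1, 7)           # unrelated code still draws into `sprite`!
print(screen, sprite)     # [0, 0, 0, 0] [99, 7, 0, 0]

# Direct modification has no such coupling:
sprite2 = [0] * 4
sprite2[0] = 99           # touches exactly one image, no global state
```

With direct pixel access, each write names the image it affects; nothing elsewhere in the program can be broken by a forgotten restore.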

The second example seems easy enough to do now, so I don't see the value in changing the current design.

The third example may be easy to implement as a new method on Canvas for drawing a Texture. Would this work for your needs? Note that we would have to consider batching, materials, and source/dest rectangles when implementing this. If you don't care about those things, then I suggest an extension method on Canvas that does exactly what you want.

The fourth example is the one that seems compelling to me, because it is reasonably more difficult in the current design. Rendering to an image/texture is often needed for advanced use cases, which the current design supports well (see the CustomRenderingSetup example). As we still need to support these, I would be hesitant to change the design and make those advanced use cases more difficult than they already are. An API similar to the given example could be implemented while leaving the current design of Pixmap and Texture alone.

From these examples I think the best way to implement this would be a higher level API implemented as a separate plugin. Changing the core implementation would break compatibility, make many things harder to implement, and would lead to an engine implementation that is harder to maintain than the current design. Duality development typically tries to keep the core implementation slim, clean, and flexible so that things can be implemented via plugins. If it can be implemented as a plugin then I think that speaks to the correctness of the core implementation.

Would I be correct in saying that you want a higher level rendering API? If so, Duality's core is the lower level bit on which that higher level API could be implemented.

Shadowblitz16 commented 4 years ago

> Thank you for the examples - they should help in informing the requirements of this change!

> The first example is counterintuitive to me: you are setting an image as the render target just to modify pixels within it in a simple manner. This mutates global state, which can easily break other rendering in unrelated code/files. In the current design we can modify the image pixels directly without mutating global state. This seems like an example that argues against the proposed change.

I'm not sure what you mean by mutating global state.

> The second example seems easy enough to do now, so I don't see the value in changing the current design.

Cool.

> The third example may be easy to implement as a new method on Canvas for drawing a Texture. Would this work for your needs? Note that we would have to consider batching, materials, and source/dest rectangles when implementing this. If you don't care about those things, then I suggest an extension method on Canvas that does exactly what you want.

Yes, that would work. However, I think you should also look into my Utility suggestion; it was meant for easily creating new components.

> The fourth example is the one that seems compelling to me, because it is reasonably more difficult in the current design. Rendering to an image/texture is often needed for advanced use cases, which the current design supports well (see the CustomRenderingSetup example). As we still need to support these, I would be hesitant to change the design and make those advanced use cases more difficult than they already are. An API similar to the given example could be implemented while leaving the current design of Pixmap and Texture alone.

I actually realized that Unity has both images and sprites, so that's my bad, sorry. Maybe instead of merging Pixmap and Texture, Pixmap could be renamed to Image, and Texture and the texture's Material could be combined and renamed to Sprite.

> From these examples I think the best way to implement this would be a higher level API implemented as a separate plugin. Changing the core implementation would break compatibility, make many things harder to implement, and would lead to an engine implementation that is harder to maintain than the current design. Duality development typically tries to keep the core implementation slim, clean, and flexible so that things can be implemented via plugins. If it can be implemented as a plugin then I think that speaks to the correctness of the core implementation.

> Would I be correct in saying that you want a higher level rendering API? If so, Duality's core is the lower level bit on which that higher level API could be implemented.

That sounds good. Again, take a look at my Utility suggestion.