17cupsofcoffee / tetra

🎮 A simple 2D game framework written in Rust

Large canvas/texture is silently dropped and renders as a black square #260

Closed: VictorKoenders closed this issue 3 years ago

VictorKoenders commented 3 years ago

When creating a canvas with a large size, rendering to it, and then saving it as a texture, the texture sometimes ends up being black. On my PC this happens somewhere between 30_000x30_000 and 40_000x40_000 pixels; I'm guessing the exact threshold depends on the user's video card.

I'd expect this code to give an error; instead it silently fails and renders a black square.

use tetra::*;
use tetra::graphics::{self, Texture, Color, Canvas, DrawParams};
use tetra::math::Vec2;

fn main() {
    ContextBuilder::new("Test", 800, 600)
        .show_mouse(true)
        .build()
        .unwrap()
        .run(GameState::new)
        .unwrap();
}

pub struct GameState {
    text1: Texture,
    text2: Texture,
}

impl GameState {
    pub fn new(ctx: &mut Context) -> Result<Self> {
        // 30_000x30_000 renders correctly; 40_000x40_000 comes out black.
        let text1 = generate_texture(ctx, 30_000)?;
        let text2 = generate_texture(ctx, 40_000)?;
        Ok(Self {
            text1,
            text2,
        })
    }
}

impl State for GameState {
    fn draw(&mut self, ctx: &mut Context) -> Result {
        graphics::clear(ctx, Color::rgb(0.392, 0.584, 0.929));
        draw_texture(ctx, Vec2::new(100., 100.), &self.text1);
        draw_texture(ctx, Vec2::new(200., 100.), &self.text2);
        Ok(())
    }
}

fn generate_texture(ctx: &mut Context, size: i32) -> Result<Texture> {
    // Render a solid red fill into a size x size canvas, then take its texture.
    let canvas = Canvas::new(ctx, size, size)?;
    graphics::set_canvas(ctx, &canvas);
    graphics::clear(ctx, Color::rgb(1.0, 0.0, 0.0));
    graphics::flush(ctx);
    graphics::reset_canvas(ctx);

    Ok(canvas.texture().clone())
}

fn draw_texture(ctx: &mut Context, position: Vec2<f32>, texture: &Texture) {
    // Scale the texture so it draws at roughly 75x75 pixels, regardless of its actual size.
    let scale = 1.0 / (texture.width() as f32) * 75.;
    texture.draw(ctx, DrawParams::new().scale(Vec2::broadcast(scale)).position(position));
}

Screenshot: (image attached to the original issue)

17cupsofcoffee commented 3 years ago

Good spot - my suspicion would be that the texture itself is just completely broken, rather than this being specific to canvases, but I'd have to do some playing around with code/RenderDoc to be sure.

In general I think there are probably a few places where we're missing error handling in the GL layer; I need to spend some time cleaning that up a bit :)

VictorKoenders commented 3 years ago

Did some more experimentation:

32766 (i16::max_value - 1) is fine
32767 (i16::max_value) is fine
32768 (i16::max_value + 1) lags my PC
32769 (i16::max_value + 2) creates a black square, but doesn't lag

Additional info:

OS: Ubuntu 20.04
GPU: GeForce GTX 1060 6GB
Driver: nvidia-driver-460, version 460.73.01-0ubuntu0.20.04.1

17cupsofcoffee commented 3 years ago

I love that there's a magic lag number, that's the most OpenGL thing I've ever heard

(but yeah, it being so close to i16::max_value can't be a coincidence, surely)

17cupsofcoffee commented 3 years ago

Replicated the issue locally :+1:

Also weirdly enough, I get no lag running your example on its own, but when I ran it through RenderDoc my PC was very unhappy with me and nearly crashed entirely...

VictorKoenders commented 3 years ago

Found this online: https://community.khronos.org/t/interpreting-gl-max-texture-size/19939

Added this to the tetra internals:

diff --git a/src/platform/device_gl.rs b/src/platform/device_gl.rs
index b15f5c5..6a275db 100644
--- a/src/platform/device_gl.rs
+++ b/src/platform/device_gl.rs
@@ -304,6 +304,9 @@ impl GraphicsDevice {

             self.bind_texture(Some(&texture));

+            let max_texture_size = self.state.gl.get_parameter_i32(glow::MAX_TEXTURE_SIZE);
+            println!("Max texture size: {}", max_texture_size);
+
             self.state.gl.tex_parameter_i32(
                 glow::TEXTURE_2D,
                 glow::TEXTURE_WRAP_S,

And when I ran cargo run --example texture, it did indeed print out 32768.

17cupsofcoffee commented 3 years ago

The GL docs also say:

GL_INVALID_VALUE is generated if width is less than 0 or greater than GL_MAX_TEXTURE_SIZE.

So I think the solution is to just return PlatformError if people try this, either by checking for GL errors after creation or by comparing the requested size against GL_MAX_TEXTURE_SIZE before creation.
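
For illustration, here's a minimal sketch of the "compare the size before creation" option, written directly against glow rather than tetra's actual internals (the helper name and the plain String error type are made up for the example, not tetra's real API):

use glow::HasContext;

// Hypothetical validation helper: reject sizes the driver can't handle
// before the texture is ever created.
fn validate_texture_size(gl: &glow::Context, width: i32, height: i32) -> Result<(), String> {
    // GL_MAX_TEXTURE_SIZE is the largest width/height the driver guarantees to accept.
    let max_size = unsafe { gl.get_parameter_i32(glow::MAX_TEXTURE_SIZE) };

    if width < 0 || height < 0 || width > max_size || height > max_size {
        Err(format!(
            "texture size {}x{} exceeds GL_MAX_TEXTURE_SIZE ({})",
            width, height, max_size
        ))
    } else {
        Ok(())
    }
}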

17cupsofcoffee commented 3 years ago

Fixed in 5ba0c25! Your code example should throw an error now (and does, on my machine).

VictorKoenders commented 3 years ago

I was about to make a PR, you were 6 minutes faster than me: https://github.com/VictorKoenders/tetra/commit/4571fd255bddf25fa8f778b1d343307b7f35c372

17cupsofcoffee commented 3 years ago

😄 I went for the glGetError approach so I could catch stuff like OOM errors at the same time - it doesn't give quite as nice an error message as yours (might tweak that later), but it should hopefully guarantee that if you have a Texture, it's actually valid/usable.
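
For reference, a rough sketch of what a glGetError-style check can look like with glow; this is an assumed shape for illustration, not the actual code from 5ba0c25:

use glow::HasContext;

// Hypothetical post-creation check: read the GL error flag after allocating
// a texture and translate it into a Rust error.
fn check_gl_error(gl: &glow::Context) -> Result<(), String> {
    let error = unsafe { gl.get_error() };

    match error {
        glow::NO_ERROR => Ok(()),
        glow::INVALID_VALUE => {
            Err("GL_INVALID_VALUE (e.g. a texture dimension above GL_MAX_TEXTURE_SIZE)".to_string())
        }
        glow::OUT_OF_MEMORY => Err("GL_OUT_OF_MEMORY".to_string()),
        other => Err(format!("unexpected GL error: 0x{:X}", other)),
    }
}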