gfx-rs / wgpu

A cross-platform, safe, pure-Rust graphics API.
https://wgpu.rs
Apache License 2.0
11.46k stars · 855 forks

Submitting command encoders caused memory leak #5860

Closed tokiSTTK closed 4 days ago

tokiSTTK commented 6 days ago

Description: When I iteratively create command encoders and submit them to the queue, a memory leak occurs. This has been happening since wgpu 0.20; it did not occur up to and including 0.19.

Repro steps

  1. Create an Instance, Adapter, Device, and Queue.
  2. Create a command encoder and submit it in a loop.
  3. The application uses more CPU memory as iterations progress.

Reproducible example:

```rust
#[tokio::main]
async fn main() {
    let instance = wgpu::Instance::default();

    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .unwrap();

    let (device, queue) = adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: None,
                required_features: wgpu::Features::empty(),
                required_limits: wgpu::Limits::downlevel_defaults(),
            },
            None,
        )
        .await
        .unwrap();

    loop {
        let encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
            label: Some("debug encoder"),
        });
        let command_buffer = encoder.finish();

        queue.submit(Some(command_buffer));

        device.poll(wgpu::Maintain::Wait);
    }
}

```

Expected vs observed behavior
Expected: memory usage stays stable.
Observed: memory usage increases over time.

Platform OS: Windows 11 wgpu: 0.20

Wumpf commented 5 days ago

Repros for me. Tried it on Windows as well, and Vulkan was picked (as expected). Ran it for 6 min (in debug) and from the looks of it there's some Vec that keeps doubling in size. [screenshot] Haven't isolated where yet.
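The step-wise growth is consistent with `Vec`'s amortized reallocation strategy: when a `Vec` runs out of capacity it roughly doubles it, so a collection that is appended to on every submit and never drained shows up as periodic jumps in the memory profile. A minimal sketch of that growth pattern (plain Rust, not wgpu internals):

```rust
// Record each distinct capacity a Vec<u64> passes through while pushing
// one element per simulated "submit".
fn capacity_history(pushes: usize) -> Vec<usize> {
    let mut v: Vec<u64> = Vec::new();
    let mut caps = vec![v.capacity()];
    for i in 0..pushes {
        v.push(i as u64);
        if v.capacity() != *caps.last().unwrap() {
            caps.push(v.capacity());
        }
    }
    caps
}

fn main() {
    // Capacity jumps come in doubling steps, matching the step-wise
    // growth seen in the profiler screenshot.
    for w in capacity_history(1_000).windows(2) {
        println!("capacity grew: {} -> {}", w[0], w[1]);
    }
}
```

Only a handful of growth events occur for a thousand pushes, which is why the profile shows discrete jumps rather than a smooth ramp.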

Wumpf commented 5 days ago

The hub keeps accumulating command buffers, apparently; some cleanup isn't happening when it should.
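The failure mode described here — a registry that retains every submitted command buffer instead of releasing it once the GPU is done — produces exactly this kind of unbounded growth. A hypothetical sketch of the pattern (the `Hub` and `CommandBuffer` names below are illustrative stand-ins, not wgpu's actual internals):

```rust
// Illustrative only: a registry that retains every submitted command buffer.
struct CommandBuffer {
    _payload: Vec<u8>, // stand-in for recorded GPU commands
}

#[derive(Default)]
struct Hub {
    command_buffers: Vec<CommandBuffer>, // never drained => leak
}

impl Hub {
    fn submit(&mut self, cb: CommandBuffer) {
        // Bug pattern: the buffer is stored but never removed after the
        // GPU finishes with it, so memory grows with every submission.
        self.command_buffers.push(cb);
    }
}

fn main() {
    let mut hub = Hub::default();
    for _ in 0..10_000 {
        hub.submit(CommandBuffer { _payload: vec![0u8; 64] });
    }
    println!("retained command buffers: {}", hub.command_buffers.len());
}
```

The fix for such a pattern is to drop the stored entries when the corresponding submission is known to have completed (e.g. during `poll`), rather than holding them for the registry's lifetime.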

Wumpf commented 5 days ago

It's fixed here.