Closed sotrh closed 5 years ago
You can't map a buffer that was created without MAP_READ
usage. You are only creating it with BufferUsage::COPY_DST
as far as I see. We have it checked on master
branch, so no fixes are needed on our side.
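For reference, the fix is to create the buffer with both usages OR'd together (e.g. `wgpu::BufferUsage::COPY_DST | wgpu::BufferUsage::MAP_READ`). The validation being described boils down to a bitflag check at map time. Here's a standalone sketch of that idea with made-up bit values (`MAP_READ`/`COPY_DST` constants and `can_map_read` are illustrative, not wgpu's actual internals):

```rust
// Illustrative bit values only; wgpu's real constants may differ.
const MAP_READ: u32 = 1 << 0;
const COPY_DST: u32 = 1 << 3;

/// Mirrors the kind of check wgpu performs before mapping a buffer for
/// read: the buffer must have been created with MAP_READ usage.
fn can_map_read(usage: u32) -> bool {
    usage & MAP_READ != 0
}

fn main() {
    // Created with COPY_DST only: mapping for read is invalid.
    assert!(!can_map_read(COPY_DST));
    // Created with COPY_DST | MAP_READ: mapping is allowed.
    assert!(can_map_read(COPY_DST | MAP_READ));
    println!("ok");
}
```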
So I added the MAP_READ flag, but it's still crashing. I'm going to try pointing to the repository directly instead of pulling from crates.io.
So I tried adding a cargo-patch, but it's still crashing.
[patch.crates-io]
wgpu = { git = "https://github.com/gfx-rs/wgpu-rs", branch = "master"}
I'm going to try making the output_buffer really small, and having the thread sleep.
I reduced texture_size to 2u32, and had the thread sleep for 10 secs, but it's still crashing. Another thing to note is that the map block doesn't seem to be getting run at all.
I've added a device.poll(true) in a few places, and it looks like the following line is the problem.
device.get_queue().submit(&[encoder.finish()]);
I'm not sure why. I'm going to check my encoder code.
Looks like the problem is in my encoding step.
{
    let render_pass_desc = wgpu::RenderPassDescriptor {
        color_attachments: &[
            wgpu::RenderPassColorAttachmentDescriptor {
                attachment: &texture_view,
                resolve_target: None,
                load_op: wgpu::LoadOp::Clear,
                store_op: wgpu::StoreOp::Store,
                clear_color: wgpu::Color::BLACK,
            }
        ],
        depth_stencil_attachment: None,
    };
    let mut render_pass = encoder.begin_render_pass(&render_pass_desc);
}
encoder.copy_texture_to_buffer(
    wgpu::TextureCopyView {
        texture: &texture,
        mip_level: 0,
        array_layer: 1,
        origin: wgpu::Origin3d::ZERO,
    },
    wgpu::BufferCopyView {
        buffer: &output_buffer,
        offset: 0,
        row_pitch,
        image_height: texture_size,
    },
    texture_desc.size,
);
If I comment out these lines, there's no crash.
encoder.copy_texture_to_buffer(
    wgpu::TextureCopyView {
        texture: &texture,
        mip_level: 0,
        array_layer: 1,
        origin: wgpu::Origin3d::ZERO,
    },
    wgpu::BufferCopyView {
        buffer: &output_buffer,
        offset: 0,
        row_pitch,
        image_height: texture_size,
    },
    texture_desc.size,
);
This part seems to be the problem. My guess is that it's the row_pitch and image_height: texture_size fields.
Great investigation, @sotrh ! So this falls under the category of validating all the inputs, which is fairly straightforward for copy operations.
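The validation in question amounts to a bounds check on the copy parameters. A hypothetical sketch of what such a check computes (this is not wgpu's actual code, just the arithmetic it needs to verify):

```rust
/// Hypothetical sketch of the bounds check a texture-to-buffer copy needs:
/// each row must fit `width` pixels of `bytes_per_pixel`, and the buffer
/// must hold `height` rows of `row_pitch` bytes starting at `offset`.
fn copy_is_valid(
    buffer_size: u64,
    offset: u64,
    row_pitch: u64,
    width: u64,
    height: u64,
    bytes_per_pixel: u64,
) -> bool {
    row_pitch >= width * bytes_per_pixel
        && offset + row_pitch * height <= buffer_size
}

fn main() {
    // 256x256 RGBA8 texture: each row is 256 * 4 = 1024 bytes.
    let (w, h, bpp) = (256, 256, 4);
    let buffer_size = 1024 * 256;
    // row_pitch given in pixels instead of bytes: rejected.
    assert!(!copy_is_valid(buffer_size, 0, 256, w, h, bpp));
    // row_pitch given in bytes: the copy fits exactly.
    assert!(copy_is_valid(buffer_size, 0, 1024, w, h, bpp));
}
```

Passing a row pitch that is too small is exactly the kind of input that should be rejected with a validation error rather than crash.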
I read the docs for BufferCopyView, and it turns out I misunderstood what row_pitch meant. I thought it meant something akin to stride, but it's the total bytes in a row. The following code fixes things.
encoder.copy_texture_to_buffer(
    wgpu::TextureCopyView {
        texture: &texture,
        mip_level: 0,
        array_layer: 0,
        origin: wgpu::Origin3d::ZERO,
    },
    wgpu::BufferCopyView {
        buffer: &output_buffer,
        offset: 0,
        row_pitch: u32_size * texture_size,
        image_height: texture_size,
    },
    texture_desc.size,
);
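To make the units concrete: for an RGBA8 texture, u32_size is 4 (one u32 per pixel), so row_pitch is bytes per row, not pixels per row. A quick standalone check of the arithmetic (texture_size = 256 is assumed here purely for illustration):

```rust
fn main() {
    let u32_size = std::mem::size_of::<u32>() as u32; // 4 bytes: one RGBA8 pixel
    let texture_size = 256u32; // illustrative; any size works the same way

    // Bytes in one row of the texture, i.e. what row_pitch expects.
    let row_pitch = u32_size * texture_size;
    assert_eq!(row_pitch, 1024);

    // Total bytes the output buffer must hold for the whole texture.
    let buffer_size = row_pitch * texture_size;
    assert_eq!(buffer_size, 1024 * 256);
}
```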
I'm creating a tutorial site for wgpu at sotrh.github.io/learn-wgpu. As part of my research, I've been trying to write a program that does some rendering and compute work without a window. It works up to the point where I try to pull the data out of the resulting buffer.
As you can see, all I'm doing is creating a texture to render to, clearing it with the color black, copying the texture to a buffer, and trying to save that buffer to a file as a png. Everything seems to work until device.poll(true). I get a panic with the following backtrace. This seems odd, as the example for wgpu-native uses a similar strategy (though it doesn't do anything with the buffer). I'm pretty sure I'm missing something.
I'm on version 3.0.0 from crates.io.