wbrickner opened this issue 2 years ago
Weird! I see that you are setting the copy destination flag correctly, following the execution from `.as_device_boxed_mut()` above. Any idea what the issue is?
Update: removing the

```rust
shapes.set(vec![
    Shape {
        x: 0,
        y: 0,
        w: 100,
        h: 100,
        r: [2, 9]
    };
    1024
])?;
```

call eliminates the error, and things always complete successfully.
Alternatively, leaving the `.set` call and removing the final `.get` call also prevents the panic. Are usage flags being corrupted by `.set`?
Ok, here's the problem: when setting, the existing staging buffer is discarded and replaced with a new staging buffer, which is created with the usage flag `COPY_SRC`. Adding, at the end of the function, the construction of a new staging buffer with `MAP_READ | COPY_DST` (as it usually has, from the `DeviceBox` construction pathway) solves the issue: when it comes time to read, the correct usage is there.
I would like not to do it this way; it seems quite wasteful to prepare a new allocation and immediately discard it. Evidently, mixing the `COPY_SRC`, `COPY_DST`, and `MAP_READ` permissions is not allowed. The usage-flag paradigm is the problem here, and I'm unsure how fundamentally important it really is. Perhaps two staging buffers could be lazily prepared: one for copies to the GPU device and one for copies from it.
@wbrickner Thanks for investigating this issue. I see how the staging buffer creation is problematic...
If anyone has a PR that fixes this, I can review/edit/merge.
Hello, running the compute example:

```rust
use emu_core::prelude::*;
use emu_glsl::*;
use zerocopy::*;

#[repr(C)]
#[derive(AsBytes, FromBytes, Copy, Clone, Default, Debug, GlslStruct)]
struct Shape {
    x: u32,
    y: u32,
    w: i32,
    h: i32,
    r: [i32; 2],
}

fn main() -> Result<(), Box
```

yields
My understanding is that buffers must have their usage declared correctly (with some amount of detail) at construction time through `wgpu`.
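The constraint can be modeled with a small pure-Rust sketch. The `Buffer` type, `copy` function, and flag values here are all hypothetical stand-ins; they only imitate the kind of validation wgpu performs, where a buffer-to-buffer copy requires `COPY_SRC` on the source and `COPY_DST` on the destination, and those usages are fixed when the buffer is created.

```rust
// Illustrative usage bits (not wgpu's actual values).
const MAP_READ: u32 = 1 << 0;
const COPY_SRC: u32 = 1 << 2;
const COPY_DST: u32 = 1 << 3;

// Hypothetical buffer carrying only its declared usage flags.
struct Buffer {
    usage: u32,
}

// Mock validation: a copy succeeds only if both buffers declared the
// matching usage at construction time.
fn copy(src: &Buffer, dst: &Buffer) -> Result<(), String> {
    if src.usage & COPY_SRC == 0 {
        return Err("source buffer lacks COPY_SRC".into());
    }
    if dst.usage & COPY_DST == 0 {
        return Err("destination buffer lacks COPY_DST".into());
    }
    Ok(())
}

fn main() {
    let device = Buffer { usage: COPY_SRC | COPY_DST };

    // A staging buffer recreated with only COPY_SRC (as `.set` does) cannot
    // later be the destination of a device-to-host copy: the `.get` fails.
    let after_set = Buffer { usage: COPY_SRC };
    assert!(copy(&device, &after_set).is_err());

    // With MAP_READ | COPY_DST (the DeviceBox construction pathway),
    // the readback validates fine.
    let readback = Buffer { usage: MAP_READ | COPY_DST };
    assert!(copy(&device, &readback).is_ok());
}
```

This is why recreating the staging buffer inside `.set` with the wrong flags only surfaces as a panic later, at `.get` time: the usage check happens when the copy is recorded, not when the buffer is created.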