parasyte / pixels

A tiny hardware-accelerated pixel frame buffer. 🦀
https://docs.rs/pixels
MIT License

Black screen on Ubuntu 23 within VMWare Workstation 17 #369

Open dbalsom opened 1 year ago

dbalsom commented 1 year ago

Compiling my pixels project on a new install of Ubuntu 23 in VMWare Workstation 17. 3D acceleration is enabled, but I have not installed Vulkan drivers. EGUI is rendered, but the pixel buffer is not.

minimal_egui example has the same behavior:

[screenshot: the egui UI renders, but the pixel buffer area is black]

[2023-07-06T22:47:02Z WARN wgpu_hal::gles::egl] EGL context: -robust access
[2023-07-06T22:47:02Z WARN wgpu_hal::gles::egl] Re-initializing Gles context due to Wayland window
[2023-07-06T22:47:02Z WARN wgpu_hal::gles::egl] EGL context: -robust access
[2023-07-06T22:47:02Z WARN egui_wgpu::renderer] Detected a linear (sRGBA aware) framebuffer Rgba8UnormSrgb. egui prefers Rgba8Unorm or Bgra8Unorm

It looks like installing Vulkan drivers resolves the issue, but I was curious whether this was supposed to work in the default configuration...

parasyte commented 1 year ago

I haven't seen that before. I've tagged this with a few possible causes.

FWIW, Vulkan is actually the "default configuration" expected. OpenGL support is still "best effort". See the supported platforms table at https://github.com/gfx-rs/wgpu#supported-platforms

dbalsom commented 6 months ago

So there are no Vulkan drivers for the guest in VMware; I was mistaken in thinking there were. On my older Ubuntu VM I was using llvmpipe, which provides a software Vulkan target. It works, but it's a bit slow.

Found some time to poke at this a bit and got things rendering. There's something it doesn't like about the 'position' vertex input. That's odd, since egui renders on top of it and seems to use a position input the same way, so it can't be something as simple as vertex buffers being unsupported.

Since we only draw a single triangle, we can get away with statically defining the vertices in the vertex shader:

@vertex
fn vs_main(@builtin(vertex_index) vidx: u32) -> VertexOutput {
    // One oversized triangle that covers the whole viewport in clip space
    var positions = array<vec2<f32>, 3>(
        vec2<f32>(-1.0, -1.0),
        vec2<f32>( 3.0, -1.0),
        vec2<f32>(-1.0,  3.0),
    );

    var output: VertexOutput;
    output.position = r_locals.transform * vec4<f32>(positions[vidx].x, positions[vidx].y, 0.0, 1.0);
    output.tex_coord = fma(positions[vidx], vec2<f32>(0.5, -0.5), vec2<f32>(0.5, 0.5));

    return output;
}
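(For reference, the position-to-UV mapping in that shader can be checked outside of WGSL. This is just an illustrative Python sketch of the same fma() transform, not code from pixels; it shows the visible clip-space square [-1, 1]² landing on UV [0, 1]² with Y flipped, while the triangle's far corners overshoot to 2.0/-1.0 and get clipped.)

```python
def fma(a, b, c):
    """Componentwise fused multiply-add: a * b + c, as in WGSL's fma()."""
    return tuple(x * y + z for x, y, z in zip(a, b, c))

# The three oversized-triangle vertices from the shader
positions = [(-1.0, -1.0), (3.0, -1.0), (-1.0, 3.0)]

# Same mapping as the shader: uv = fma(pos, (0.5, -0.5), (0.5, 0.5))
uvs = [fma(p, (0.5, -0.5), (0.5, 0.5)) for p in positions]
print(uvs)  # [(0.0, 1.0), (2.0, 1.0), (0.0, -1.0)]

# The visible clip-space corners map onto the texture's corners
assert fma((-1.0, -1.0), (0.5, -0.5), (0.5, 0.5)) == (0.0, 1.0)
assert fma((1.0, 1.0), (0.5, -0.5), (0.5, 0.5)) == (1.0, 0.0)
```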

This seems to make wgpu happy on VMware:

[screenshot: the example now renders correctly under VMware]

Not suggesting you adopt this as a fix, but maybe it's a clue. What's different about pixels' VBO from egui's?

parasyte commented 6 months ago

Cool! Thanks for the extra investigation and added details.

What's different about pixels' VBO from egui's?

Nothing that should concern the driver, honestly. In my experience, llvmpipe has been one of the worst Vulkan implementations I have had to deal with. The “not a conformant driver” warning is really an understatement.

I don’t have any concerns about moving the vertex buffer into the shader. Ideally it should live there anyway. It used to be hardcoded that way, but was moved into a vertex buffer as part of #179 in https://github.com/parasyte/pixels/pull/179/commits/a49f489adbbff962b410234d567253af55add138

The commit message is light on details, but apparently the change fixed a WGSL validation error introduced by wgpu 0.9. If changing back doesn’t regress that validation error on modern wgpu, I’ll happily accept a PR that moves the triangle positions and UVs back into the shader.

dbalsom commented 6 months ago

In my experience, llvmpipe has been one of the worst Vulkan implementations I have had to deal with. The “not a conformant driver” warning is really an understatement.

Just for clarity: it is llvmpipe that works fine in this instance; it's the 'SVGA3D' driver from VMware that has the issue.

I’ll happily accept a PR that moves the triangle position and UV back into the shader.

Let me shake it out a bit with my full build on Linux, macOS, and Windows to make sure there are no gotchas, and I'll send one over if it looks stable.

parasyte commented 6 months ago

Just for clarity: it is llvmpipe that works fine in this instance; it's the 'SVGA3D' driver from VMware that has the issue.

Ah, sorry for my confusion. Unfortunately, I don't know anything about VMware or its drivers. But I suspect wgpu is actually selecting the OpenGL backend (based on the logs in the OP). Running with the RUST_LOG='wgpu_core::instance=info' env var will confirm it, and the wgpu-info tool will show you all of the adapters available on the system.
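Concretely, something along these lines should work from a pixels checkout (illustrative commands; the example name is the one mentioned earlier in the thread):

```shell
# Show which adapter/backend wgpu_core selects while running the example
RUST_LOG='wgpu_core::instance=info' cargo run --example minimal_egui

# Install and run wgpu-info to list every adapter available on the system
cargo install wgpu-info
wgpu-info
```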

There is also the possibility that wgpu's OpenGL backend has a bug, rather than the driver itself. I think this deserves some more investigation, and it could end up being an opportunity to improve wgpu's OpenGL support!

Anyway, thanks again for helping out on the pixels side of things!