Ipotrick / Daxa

Daxa is a convenient, simple and modern GPU abstraction built on Vulkan
MIT License
381 stars · 28 forks

No example on sampling an image? #68

Closed by Firestar99 3 months ago

Firestar99 commented 11 months ago

I wanted to explore your library in RenderDoc, specifically how you handle images, for some inspiration on my own rust-based rendering engine. But it seems like you don't have any examples sampling from images, right? I also haven't found any tests regarding that, though even if I did, it would be kind of difficult to get a capture of it with RenderDoc.

Anyway, so far I've only come across two minor issues in the examples:

Thanks for the awesome library :D

Ipotrick commented 11 months ago

Thanks for the feedback! Usually the working directory should be set to the root of the repository. This is also the case for the samples. How did you launch the samples? And if you didn't set the working directory to the root of your cloned Daxa repo, do the tests/samples work?

GabeRundlett commented 11 months ago

Read bindless.md in the wiki. In there you'll find

#include <daxa/daxa.glsl>
...
daxa_ImageViewId img = ...;
daxa_SamplerId smp = ...;
ivec4 v = texture(daxa_isampler3D(img,smp), vec3(...));

daxa_ImageViewId img2 = ...;
imageStore(daxa_image2D(img2), ivec2(...), vec4(...));

daxa_ImageViewId img3 = ...;
uvec2 size = textureSize(daxa_texture1DArray(img3));
...

If you wanted to see it in use within an actual example, look at 9_shader_integration. It samples a texture in bindless_access_followup.glsl.

Like @Ipotrick said, you need to set the working directory to the path/to/Daxa/ folder. This is shown clearly in our .vscode/launch.json for anyone using the same project setup that we use.
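
For reference, a minimal VS Code launch entry that pins the working directory to the repo root might look like this (the configuration name, debugger type, and program path are illustrative guesses, not copied from Daxa's actual .vscode/launch.json):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Run sample (cppdbg)",
            "type": "cppdbg",
            "request": "launch",
            "program": "${workspaceFolder}/build/tests/7_pipeline_manager",
            "cwd": "${workspaceFolder}"
        }
    ]
}
```

The important field is "cwd": "${workspaceFolder}", which makes relative shader paths resolve against the repo root regardless of where the build output lives.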

Firestar99 commented 11 months ago

I'm using CLion on Linux, which I think by default sets the cwd of all launch configurations to the CMake build directory. For now I've just set them all to their respective shader directory, but setting it to the root of the repository works too, and for all samples.

I've noticed that PipelineManager has a special check for whether the shader is found in the current cwd, in which case it does not resolve the full file path but just returns the relative one. I'd assume that's likely the cause of the 7_pipeline_manager example not finding its shaders when the cwd was set to the shader dir. https://github.com/Ipotrick/Daxa/blob/75f4b85da63af1ad01de0934d479a27c8307ed6f/src/utils/impl_pipeline_manager.cpp#L1025-L1030

Good to know that 9_shader_integration does some texture sampling; now I just have to figure out how to capture a compute application with RenderDoc :D

GabeRundlett commented 11 months ago

This full_path_to_file does not return an absolute path unless one of the roots is an absolute path; just bad naming, I guess. It resolves the first relative path that is valid given the set of roots provided. I guess this is a bug for system includes, because those should ONLY check the roots.

Jaisiero commented 11 months ago

(Quoting @GabeRundlett's reply above in full.)

Hi! I was looking for that right now. So, in plain Vulkan I have something like this:

#extension GL_EXT_nonuniform_qualifier : enable
layout(set = 0, binding = 0) uniform sampler2D[] texture_samplers;

vec2 uv = ...;
vec3 diffuse_tex = texture(texture_samplers[nonuniformEXT(txtId)], uv).xyz;

Can I do something like that in daxa? Thanks.

GabeRundlett commented 11 months ago

Obviously you won't use any bindings in Daxa, as it has a bindless architecture. However, if you have an array of images, you can do this just as you'd expect: use daxa_sampler2D to construct the GLSL sampler2D from a daxa_ImageViewId and a daxa_SamplerId. Images and samplers are separate in Daxa, as they really should be. Presumably you just want to index an array of daxa_ImageViewId and only need a single sampler.
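
A minimal shader-side sketch of that pattern, assuming the image-view array and the shared sampler arrive through a push constant (the struct layout and all names here are made up for illustration; check bindless.md for the supported way to pass IDs):

```glsl
#include <daxa/daxa.glsl>
#extension GL_EXT_nonuniform_qualifier : enable

// Hypothetical push-constant layout: many bindless image views,
// one shared sampler, and an index selecting which view to use.
layout(push_constant) uniform Push
{
    daxa_ImageViewId textures[16];
    daxa_SamplerId shared_sampler;
    uint texture_index;
} push;

void main()
{
    vec2 uv = vec2(0.5);
    // Pick a view non-uniformly, then build a GLSL sampler2D from the
    // (view, sampler) pair, analogous to the daxa_isampler3D call above.
    vec3 diffuse = texture(
        daxa_sampler2D(push.textures[nonuniformEXT(push.texture_index)],
                       push.shared_sampler),
        uv).xyz;
}
```

This mirrors the plain-Vulkan `texture_samplers[nonuniformEXT(txtId)]` access, except the descriptor array indexing is hidden behind the daxa ID types.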

Jaisiero commented 11 months ago

All right, so I create an ImageId and a SamplerId, and I just pass them to the shader to access the resource via daxa_sampler2D, then?

What about uploading the image to the GPU? I found this code:

daxa::BufferId gpu_input_buffer = device.create_buffer(daxa::BufferInfo{
    .size = sizeof(GpuInput),
    .name = "gpu_input_buffer",
});
GpuInput gpu_input = {};
daxa::TaskBuffer task_gpu_input_buffer{{.initial_buffers = {.buffers = std::array{gpu_input_buffer}}, .name = "input_buffer"}};

daxa::ImageId render_image = device.create_image(daxa::ImageInfo{
    .format = daxa::Format::R8G8B8A8_UNORM,
    .size = {size_x, size_y, 1},
    .usage = daxa::ImageUsageFlagBits::SHADER_SAMPLED | daxa::ImageUsageFlagBits::SHADER_STORAGE | daxa::ImageUsageFlagBits::TRANSFER_SRC, // TRANSFER_DST for uploaded images?
    .name = "render_image",
});
daxa::TaskImage task_render_image{{.initial_images = {.images = std::array{render_image}}, .name = "render_image"}};
daxa::SamplerId sampler = device.create_sampler({.name = "sampler"});

daxa::TimelineQueryPool timeline_query_pool = device.create_timeline_query_pool({
    .query_count = 2,
    .name = "timeline_query",
});

daxa::TaskGraph loop_task_graph = record_loop_task_graph();

Or is this code for downloading the image from the GPU after each frame? I just need to upload the image once.

Thanks for your time.

GabeRundlett commented 11 months ago

TRANSFER_SRC means the image can be copied from (e.g. blitted) and doesn't imply GPU->CPU or CPU->GPU transfers by itself. You may want to blit from that image to another image.
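
Following on from that: for a one-time CPU-to-GPU upload, the destination image would instead want TRANSFER_DST usage so a staging-buffer copy can target it. A hedged variant of the create_image call quoted earlier in this thread (flag names as they appear in those snippets; verify against the daxa headers):

```cpp
// Sketch only: an image that is written once via copy_buffer_to_image
// and then sampled needs TRANSFER_DST in addition to SHADER_SAMPLED.
daxa::ImageId uploaded_image = device.create_image(daxa::ImageInfo{
    .format = daxa::Format::R8G8B8A8_UNORM,
    .size = {size_x, size_y, 1},
    .usage = daxa::ImageUsageFlagBits::SHADER_SAMPLED |
             daxa::ImageUsageFlagBits::TRANSFER_DST,
    .name = "uploaded_image",
});
```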

Jaisiero commented 11 months ago

I managed to create and upload an image using a command recorder and a staging buffer like this:

auto exec_cmds = [&]()
{
    auto recorder = device.create_command_recorder({});

    recorder.pipeline_barrier({
        .src_access = daxa::AccessConsts::HOST_WRITE,
        .dst_access = daxa::AccessConsts::TRANSFER_READ,
    });

    recorder.copy_buffer_to_image({
        .buffer = image_staging_buffer,
        .image = images.at(0),
        .image_extent = {SIZE_X, SIZE_Y, SIZE_Z},
    });

    recorder.pipeline_barrier({
        .src_access = daxa::AccessConsts::TRANSFER_WRITE,
        .dst_access = daxa::AccessConsts::COMPUTE_SHADER_READ,
    });

    return recorder.complete_current_commands();
}();
device.submit_commands({.command_lists = std::array{exec_cmds}});

Thank you.