KhronosGroup / Vulkan-ValidationLayers

Vulkan Validation Layers (VVL)
https://vulkan.lunarg.com/doc/sdk/latest/linux/khronos_validation_layer.html

Bindless descriptor error message doesn't show offending pipeline ID #8380


JuanDiegoMontoya commented 2 months ago

Environment:

Describe the Issue

I'm currently facing this validation error when running my program:

Validation Error: [ VUID-vkCmdDraw-None-08114 ] Object 0: handle = 0xb9181f0000000029, type = 
VK_OBJECT_TYPE_DESCRIPTOR_SET; | MessageID = 0x2ba3a98e | vkCmdDraw():  the descriptor VkDescriptorSet 
0xb9181f0000000029[] [Set 0, Binding 0, Index 87] is using buffer VkBuffer 0xa25b9c0000000392[ImGui Vertex Buffer] that is invalid 
or has been destroyed. The Vulkan spec states: Descriptors in each bound descriptor set, specified via vkCmdBindDescriptorSets, must 
be valid as described by descriptor validity if they are statically used by the VkPipeline bound to the pipeline bind point used by this 
command and the bound VkPipeline was not created with VK_PIPELINE_CREATE_DESCRIPTOR_BUFFER_BIT_EXT (https://
vulkan.lunarg.com/doc/view/1.3.290.0/windows/1.3-extensions/vkspec.html#VUID-vkCmdDraw-None-08114)

The message complains about an invalid or deleted descriptor being statically used by the VkPipeline bound at the draw call, but it doesn't tell me which pipeline that was. It only tells me the ID of the descriptor set and the ID+name of the buffer being accessed, which don't help because I'm using a single bindless descriptor set across the whole program, with all resources bound to it.

Expected behavior

The message should print the ID and name of the pipeline that statically accesses the invalid/deleted descriptor.

Valid Usage ID VUID-vkCmdDraw-None-08114

Additional context

The stack trace from the thread that invokes the debug callback isn't very helpful. [screenshot: stack trace]

The main thread is executing vkAcquireNextImage2KHR when this message appears, likewise giving no further hints as to which pipeline is causing the error.

JuanDiegoMontoya commented 2 months ago

I see from other issues that GPU-AV is being overhauled, so I think I can wait for that and see if the issue persists.

spencer-lunarg commented 2 months ago

@JuanDiegoMontoya thanks for raising this, yes, GPU-AV was trying to solve stuff with a "big hammer" and was causing many false positives which were just being "fixed" by removing more and more cases instead of fixing the root issue... I will let you know when things are in "better shape"!

arno-lunarg commented 2 months ago

Quickly looking at this, I think it is just an oversight that could be fixed relatively easily. Since we now store the error-printing logic in those error_logger lambdas in cb_state.per_command_error_loggers, we could pass the VkPipeline down at lambda construction time (the current context lets us just grab the currently bound pipeline) and use it when building the error message. This makes me realize that, while we're in there, we could also pass in the current debug marker region; we just have to get it from cb_state, and that would be a quite useful piece of info.

arno-lunarg commented 2 months ago

@spencer-lunarg I can do this when I am done with my current issue if you want me to

spencer-lunarg commented 2 months ago

@arno-lunarg please go ahead, there are a few older issues I will want to take first so you might beat me to this