When compiling the following shader with optimization, glslc throws an internal error. Although the 'float16_t' conversion is redundant, it should be harmless. Compiling without optimization, or using glslang directly (glslang --target-env vulkan1.3 -Os issue2.comp), is OK.
The issue persists if we replace 'float16_t' with other 16-bit types, such as 'uint16_t' or 'int16_t', but it disappears with other types, such as 'float', 'int' or 'uint'.
$ glslc --target-env=vulkan1.3 -O issue2.comp
shaderc: internal error: compilation succeeded but failed to optimize: Expected input to have different bit width from Result Type: FConvert
%33 = OpFConvert %half %31
#version 450
#extension GL_EXT_shader_16bit_storage : require
layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
#define TYPE float16_t
layout(binding = 0) readonly buffer A { TYPE data_a[]; };
layout(binding = 1) writeonly buffer D { TYPE data_d[]; };
void main() {
const uint i = gl_GlobalInvocationID.x;
data_d[i] = TYPE(data_a[i]);
}
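For reference, here is a minimal sketch of a possible workaround (my assumption, not something verified in this report): since data_a[i] already has type float16_t, assigning it directly should avoid emitting the half-to-half OpFConvert that the optimizer rejects.
#version 450
#extension GL_EXT_shader_16bit_storage : require
layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
#define TYPE float16_t
layout(binding = 0) readonly buffer A { TYPE data_a[]; };
layout(binding = 1) writeonly buffer D { TYPE data_d[]; };
void main() {
    const uint i = gl_GlobalInvocationID.x;
    // Direct copy without the TYPE(...) cast, so no redundant OpFConvert should be generated.
    data_d[i] = data_a[i];
}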
The case is minimized from https://github.com/ggerganov/llama.cpp/blob/adc9ff384121f4d550d28638a646b336d051bf42/ggml_vk_generate_shaders.py#L2057C1-L2079C4; I was curious why an optimization workaround exists there.
Version
OS: Arch Linux