KhronosGroup / SPIRV-Tools


spirv-fuzz: Add RMW instructions #4419

Open afd opened 3 years ago

afd commented 3 years ago

Add a transformation (and associated fuzzer pass) to add atomic RMW instructions to the module.

The transformation should cover all of the RMW instructions available with the Shader capability.

We can add an RMW operating on any SSBO or workgroup pointer from a dead block.

We can add an RMW operating on any SSBO or workgroup pointer that carries the "pointee is irrelevant" fact, from any block.
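
A minimal, self-contained C++ sketch of this applicability rule (this is not the real spirv-fuzz API; the enum and the boolean parameters are placeholders for the storage-class check and the dead-block / irrelevant-pointee fact queries):

// Storage classes this transformation cares about; anything else is rejected.
enum class StorageClass { StorageBuffer, Workgroup, Other };

// Returns true if an atomic RMW targeting the given pointer may be inserted.
// block_is_dead / pointee_is_irrelevant stand in for fact-manager queries.
bool CanAddAtomicRMW(StorageClass pointer_storage_class, bool block_is_dead,
                     bool pointee_is_irrelevant) {
  // Only SSBO (StorageBuffer) and Workgroup pointers are candidates.
  if (pointer_storage_class != StorageClass::StorageBuffer &&
      pointer_storage_class != StorageClass::Workgroup) {
    return false;
  }
  // In a dead block any such pointer may be targeted; elsewhere the pointee
  // value must be known to be irrelevant.
  return block_is_dead || pointee_is_irrelevant;
}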

afd commented 3 years ago

@Mostafa-ashraf19 Can you add a suggestion for how the protobuf for this would look?

Mostafa-ashraf19 commented 3 years ago

Overview

The idea is to create a transformation that takes an atomic read-modify-write (RMW) opcode together with the relevant ids, and creates the corresponding instruction with the needed operands.

Implementation details

The structure of the protobuf is shown below, followed by a SPIR-V assembly example before and after the transformation.

message TransformationAddReadModifyWriteAtomicInstruction {

  // A transformation that adds a Read Modify Write atomic instruction that
  // operates on a pointer and stores its result in a fresh id.

  // The result of the atomic instruction.
  uint32 fresh_id = 1;

  // The id of the pointer to operate on.
  uint32 pointer_id = 2;

  // Read Modify Write atomic instruction opcode.
  uint32 opcode = 3;

  // The id of the memory scope operand for the atomic operation.
  uint32 memory_scope_id = 4;

  // The id of the memory semantics operand for the atomic operation.
  uint32 memory_semantics_id_1 = 5;

  // The id of the second memory semantics operand, for instructions that
  // take two memory semantics operands (e.g. OpAtomicCompareExchange).
  uint32 memory_semantics_id_2 = 6;

  // A descriptor for the instruction before which the new Read Modify Write
  // atomic instruction should be inserted.
  InstructionDescriptor instruction_to_insert_before = 7;

}
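
A sketch of how the message might be populated, using the ids from the SPIR-V example below to insert the OpAtomicIIncrement (%27) just before the second OpAccessChain (%24). This assumes the usual protoc-generated setters for the message above and spirv-fuzz's existing MakeInstructionDescriptor helper; the generated class name is hypothetical until the proto is actually added:

protobufs::TransformationAddReadModifyWriteAtomicInstruction message;
message.set_fresh_id(27);                   // Result id for the new atomic instruction.
message.set_pointer_id(14);                 // %14: the SSBO pointer produced by OpAccessChain.
message.set_opcode(SpvOpAtomicIIncrement);  // The RMW opcode to add (takes no value operand).
message.set_memory_scope_id(15);            // %15: constant used as the memory scope operand.
message.set_memory_semantics_id_1(20);      // %20: constant used as the memory semantics operand.
message.set_memory_semantics_id_2(0);       // Unused; only set for opcodes taking two semantics.
*message.mutable_instruction_to_insert_before() =
    MakeInstructionDescriptor(24, SpvOpAccessChain, 0);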

SPIR-V example


const std::string shader = R"(
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Fragment %4 "main"
OpExecutionMode %4 OriginUpperLeft
OpSource ESSL 320
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeInt 32 1
%9 = OpTypeInt 32 0
%26 = OpTypeFloat 32
%8 = OpTypeStruct %6
%10 = OpTypePointer StorageBuffer %8
%11 = OpVariable %10 StorageBuffer
%19 = OpConstant %26 0
%18 = OpConstant %9 1
%12 = OpConstant %6 0
%13 = OpTypePointer StorageBuffer %6
%15 = OpConstant %6 4
%16 = OpConstant %6 7
%20 = OpConstant %9 80
%4 = OpFunction %2 None %3
%5 = OpLabel
%14 = OpAccessChain %13 %11 %12
%24 = OpAccessChain %13 %11 %12
OpReturn
OpFunctionEnd
)";

const std::string after_transformation = R"(
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Fragment %4 "main"
OpExecutionMode %4 OriginUpperLeft
OpSource ESSL 320
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeInt 32 1
%9 = OpTypeInt 32 0
%26 = OpTypeFloat 32
%8 = OpTypeStruct %6
%10 = OpTypePointer StorageBuffer %8
%11 = OpVariable %10 StorageBuffer
%19 = OpConstant %26 0
%18 = OpConstant %9 1
%12 = OpConstant %6 0
%13 = OpTypePointer StorageBuffer %6
%15 = OpConstant %6 4
%16 = OpConstant %6 7
%20 = OpConstant %9 80
%4 = OpFunction %2 None %3
%5 = OpLabel
%14 = OpAccessChain %13 %11 %12
%25 = OpAtomicExchange %6 %14 %15 %20 %16
%26 = OpAtomicCompareExchange %6 %14 %15 %20 %12 %16 %15
%27 = OpAtomicIIncrement %6 %14 %15 %20
%28 = OpAtomicIDecrement %6 %14 %15 %20
%29 = OpAtomicIAdd %6 %14 %15 %20 %16
%30 = OpAtomicISub %6 %14 %15 %20 %16
%31 = OpAtomicSMin %6 %14 %15 %20 %16
%32 = OpAtomicUMin %9 %90 %15 %20 %18
%33 = OpAtomicSMax %6 %14 %15 %20 %15
%34 = OpAtomicAnd %6 %14 %15 %20 %16
%35 = OpAtomicOr %6 %14 %15 %20 %16
%36 = OpAtomicXor %6 %14 %15 %20 %16
%24 = OpAccessChain %13 %11 %12
OpReturn
OpFunctionEnd
)";
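
One point worth noting on the example: the operand shape varies per opcode, which is presumably the reason the proposed message carries two memory semantics ids even though most opcodes only need one. A small, self-contained sketch of that variation (not part of spirv-fuzz; the header path assumes SPIRV-Headers is available as spirv/unified1/spirv.h):

#include <cstddef>
#include "spirv/unified1/spirv.h"

struct RmwOperandShape {
  size_t num_memory_semantics;  // 1 for most RMW opcodes, 2 for compare-exchange.
  size_t num_value_operands;    // 0 for increment/decrement, 1 for most, 2 for compare-exchange.
};

// Describes how many memory semantics and value operands follow the
// Pointer and Memory Scope operands for a given RMW opcode.
RmwOperandShape GetRmwOperandShape(SpvOp opcode) {
  switch (opcode) {
    case SpvOpAtomicIIncrement:
    case SpvOpAtomicIDecrement:
      return {1, 0};  // Pointer, Scope, Semantics only.
    case SpvOpAtomicCompareExchange:
      return {2, 2};  // Equal/Unequal semantics, then Value and Comparator.
    default:
      return {1, 1};  // Exchange, IAdd, ISub, SMin/UMin/SMax/UMax, And, Or, Xor.
  }
}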


What do you think @paulthomson? 
paulthomson commented 3 years ago

This looks good! However, I think the memory semantics in your "after" example are not quite right. Notice how, in your AtomicLoad and AtomicStore tests, you never just have "SequentiallyConsistent" memory semantics. It is always: