Open alphan opened 1 year ago
I suppose that a way to preserve compatibility would be an optional two-stage branch process. So you'd add something like a new `beq_hardened`, which would decode as an illegal instruction unless the previous instruction was some magic `allow_beq` instruction. If we had a different magic instruction to allow each "hardened" instruction, the pairs would create a sort of artificial Hamming distance between them.
An extra plus is that there wouldn't be any new toolchain requirement: a binary compiled with the old toolchain would work just fine (but wouldn't get any new hardening against fault-injection attacks).
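To make the pairing concrete, here's a rough software model of the decoder-side check. The opcodes and the opcode-space split below are made up purely for illustration; nothing here is a real RISC-V encoding:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical encodings, for illustration only: neither instruction exists
 * in the ratified RISC-V ISA. Assume both live in a custom opcode space and
 * are identified by their low 7 opcode bits. */
#define OPCODE_MASK      0x0000007fu
#define ALLOW_BEQ_OPC    0x0bu   /* "arm the next hardened branch" (made up) */
#define BEQ_HARDENED_OPC 0x2bu   /* hardened beq                   (made up) */

/* Decoder-side rule: beq_hardened only decodes as legal when the previous
 * instruction was allow_beq. A fault that corrupts either word breaks the
 * pair and results in an illegal-instruction trap instead of a silently
 * mutated branch. */
bool beq_hardened_legal(uint32_t prev_insn, uint32_t curr_insn) {
    if ((curr_insn & OPCODE_MASK) != BEQ_HARDENED_OPC)
        return true;                             /* rule applies only here */
    return (prev_insn & OPCODE_MASK) == ALLOW_BEQ_OPC;
}
```

In hardware this would presumably just be one bit of decode history, but the model shows the intended failure mode: break either half of the pair and you trap rather than take a corrupted branch.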
Thinking about it, another way to do this would be to add an instruction that says something like "the next instruction will have opcode X" (stored as an immediate). We'd need to check what the existing encoding looks like, but this might be enough to ensure that (e.g.) the next instruction can't be faulted into something other than a `beq`.
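A minimal sketch of that check, assuming a hypothetical prefix instruction that carries the expected opcode/funct3 bits of the following instruction in an immediate (the field layout is invented for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit positions of opcode[6:0] and funct3[14:12] in a 32-bit instruction. */
#define OPCODE_MASK 0x0000007fu
#define FUNCT3_MASK 0x00007000u

/* expected_bits: the opcode and funct3 the (hypothetical) prefix announced,
 * placed at the same positions they occupy in the real instruction word.
 * If a fault flips the branch into (say) bne or bge, the opcode/funct3 bits
 * no longer match what the prefix announced, and the core can trap. */
bool next_insn_matches_prefix(uint32_t expected_bits, uint32_t next_insn) {
    uint32_t mask = OPCODE_MASK | FUNCT3_MASK;
    return (next_insn & mask) == (expected_bits & mask);
}
```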
| instruction | opcode  | funct3 |
|-------------|---------|--------|
| beq         | 1100011 | 000    |
| bne         | 1100011 | 001    |
| bge         | 1100011 | 101    |
Obviously, other instructions can also be critical depending on the context, but the table above shows that the Hamming distances (HDs) between the instructions most often used for hardening in SW are only 1 or 2 bits.
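For reference, a quick way to confirm those distances from the funct3 values in the table (the opcode 1100011 is shared, so it contributes nothing to the distance between these branches):

```c
#include <stdint.h>
#include <stdio.h>

/* Hamming distance between two encodings = number of differing bits. */
static unsigned hamming_distance(uint32_t a, uint32_t b) {
    return (unsigned)__builtin_popcount(a ^ b);
}

int main(void) {
    /* funct3 values from the table above. */
    uint32_t beq = 0x0, bne = 0x1, bge = 0x5;

    printf("HD(beq, bne) = %u\n", hamming_distance(beq, bne)); /* 1 */
    printf("HD(beq, bge) = %u\n", hamming_distance(beq, bge)); /* 2 */
    printf("HD(bne, bge) = %u\n", hamming_distance(bne, bge)); /* 1 */
    return 0;
}
```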
In contexts where active physical attacks such as fault injection are within the scope of the threat model, a larger Hamming distance between opcodes is highly desirable. Such a change, however, is far from trivial and would obviously impact the toolchain and RISC-V ISA compatibility, among other things. I'm creating this issue to explore this idea and understand whether this is something we would like to pursue.
cc @moidx @cfrantz @arunthomas