Closed: pgoodman closed this issue 10 years ago
Another alternative is to dynamically generate exception table entries and make sure that the kernel can see them. It might be necessary in this case for a `struct module` to be allocated for the code cache itself, so that the kernel's search routine finds the dynamic entries. One tricky part here would be keeping the dynamic entries sorted.
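To make the dynamic-entries idea a bit more concrete, here is a minimal sketch (not Granary code; the `CacheExceptionTable` / `CacheExceptionEntry` names and the absolute-address entry layout are illustrative assumptions) of a per-code-cache table that is kept sorted by faulting address so it can be binary searched. For the kernel's own search routine to find such entries, they would additionally have to be exposed through the code cache's `struct module` (its `extable` / `num_exentries` fields), as suggested above, and the real x86-64 `exception_table_entry` of this era stores relative offsets rather than absolute addresses.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One dynamically generated entry: a fault-able code cache address plus the
// address where execution should resume if that instruction faults.
struct CacheExceptionEntry {
  uintptr_t fault_pc;
  uintptr_t fixup_pc;
};

class CacheExceptionTable {
 public:
  // Insert an entry, keeping the table sorted by `fault_pc` so lookups can
  // binary search. Sorted insertion is O(n) per entry, which is the tricky
  // part if entries are generated incrementally as code is translated.
  void Add(uintptr_t fault_pc, uintptr_t fixup_pc) {
    CacheExceptionEntry entry{fault_pc, fixup_pc};
    auto pos = std::lower_bound(
        entries_.begin(), entries_.end(), entry,
        [](const CacheExceptionEntry &a, const CacheExceptionEntry &b) {
          return a.fault_pc < b.fault_pc;
        });
    entries_.insert(pos, entry);
  }

  // Binary search for an exact match on the faulting PC; returns 0 if the
  // faulting address has no registered fixup.
  uintptr_t FindFixup(uintptr_t fault_pc) const {
    auto pos = std::lower_bound(
        entries_.begin(), entries_.end(), fault_pc,
        [](const CacheExceptionEntry &a, uintptr_t pc) {
          return a.fault_pc < pc;
        });
    if (pos != entries_.end() && pos->fault_pc == fault_pc) {
      return pos->fixup_pc;
    }
    return 0;
  }

 private:
  std::vector<CacheExceptionEntry> entries_;
};
```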
I think a nice way to potentially handle this might be to turn a `rep movs`-type instruction into an expanded form like the one shown below. This could be done within `arch/x86-64/early_mangle.cc`.
```
label:          <LabelInstruction>
  mov ...       <app NativeInstruction>
  mov ...       <app NativeInstruction>
  loop* label   <inst BranchInstruction>
```
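For reference, and as my reading of the sketch rather than anything from Granary itself: the expansion mirrors the architectural semantics of `rep movs`, where `rcx` holds the remaining count, each iteration performs one load through `rsi` and one store through `rdi`, and `loop` decrements `rcx` and branches back while it is nonzero. One detail a real expansion would have to handle is that `rep` tests the count before the first iteration (a zero count copies nothing), whereas a bottom-tested `loop` alone would run the body once. The payoff of the loop form is that each iteration contains exactly one fault-able load and one fault-able store, so a fault can be pinned to a specific operand and covered by a precise exception table entry. A C++ rendering of the `movsb` case (direction flag assumed clear):

```cpp
#include <cstdint>

// C++ rendering of `rep movsb` semantics with the direction flag clear:
// RCX is the byte count, RSI the source pointer, RDI the destination
// pointer. Written as an explicit loop, the way the expanded instruction
// sequence above behaves.
void rep_movsb(uint8_t *rdi, const uint8_t *rsi, uint64_t rcx) {
  while (rcx != 0) {  // `rep` checks the count before every iteration.
    *rdi = *rsi;      // One fault-able load (source operand) and one
    ++rsi;            // fault-able store (destination operand) per pass.
    ++rdi;
    --rcx;            // `loop` decrements RCX and branches while nonzero.
  }
}
```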
This was fixed by the recent overhaul of all kernel exception table-related code.
This comes up in the `copy_user_enhanced_fast_string` function.

A key challenge is that, for some arbitrary `REP MOVS`-like instruction, it's not clear whether the faulting access is through the source or the destination memory operand. A potentially reasonable solution is to test the source and destination operands independently; however, Granary does not currently split `REP MOVS`-type instructions into a loop form, and doing so would itself be challenging. There are a few approaches here:

* `copy_user_enhanced_fast_string` doesn't allocate a stack frame, but upon faulting, it tail-calls another function. The challenge here would be to figure out an instruction sequence such that the virtual register system maintains the correct stack pointer through the tail-call / recovery code.
* … `copy_user_enhanced_fast_string` with ones to `copy_user_handle_tail`.

Relevant assembly:
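The assembly listing that followed here hasn't survived in this copy. For reference, the body of `copy_user_enhanced_fast_string` in kernels of roughly this era (arch/x86/lib/copy_user_64.S) looks approximately like the sketch below, reconstructed from memory and simplified: there is no stack frame, a single exception table entry covers the `rep movsb`, and the fixup tail-jumps to `copy_user_handle_tail`. Exact labels, macros, and surrounding annotations vary by kernel version.

```asm
ENTRY(copy_user_enhanced_fast_string)
	ASM_STAC                    /* allow user-space accesses (SMAP) */
	movl %edx,%ecx              /* byte count */
1:	rep
	movsb                       /* the fault-able instruction */
	xorl %eax,%eax              /* success: zero bytes left uncopied */
	ASM_CLAC
	ret

	.section .fixup,"ax"
12:	movl %ecx,%edx              /* remaining byte count */
	jmp copy_user_handle_tail   /* tail-call; no frame to unwind */
	.previous

	_ASM_EXTABLE(1b,12b)        /* fault at 1: resume at fixup 12 */
ENDPROC(copy_user_enhanced_fast_string)
```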