katef / libfsm

DFA regular expression library & friends
BSD 2-Clause "Simplified" License

print/vmc.c: Reduce generated code verbosity for `.{1000,}`. #446

Open silentbicycle opened 11 months ago

silentbicycle commented 11 months ago

There was already an existing special case for repeated `FETCH, STOP, FETCH, STOP, ...` sequences, so this adds a similar case for the repeated `FETCH, BRANCH, ...` sequences produced by `.{1000,}`.

dgryski commented 10 months ago

I wonder if this would be useful for the vmops output format too. Obviously it would mean making the interpreter vm a bit smarter.

katef commented 10 months ago

@dgryski I think we should move this logic up a level, to the vm, and introduce an opcode for this, or replace the existing "match a character" with "match a substring". then everybody benefits

sfstewman commented 10 months ago

> @dgryski I think we should move this logic up a level, to the vm, and introduce an opcode for this, or replace the existing "match a character" with "match a substring". then everybody benefits

If you look, there are/were plans for opcodes for this. It should speed up certain kinds of matches, but I've never been able to get a test set that would tell me when it helped and when it didn't, so I haven't moved forward with it. It's pretty easy to come up with unanchored REs where this wouldn't help much without some serious graph preprocessing, and where a naive approach may be detrimental. There's also some locality trade-offs between match-a-substring and match-a-character that weren't obvious to me.

I've been noodling around with more general graph optimizations over the last few years, but work has been demanding, so I haven't made much progress lately. Sadly, I don't see that changing in the near future.

Which is to say: I think this is an interesting direction to explore, and I'm glad you guys have the data to test whether it has an impact for your problems. I would make this an optional optimization in case there are other use cases where this is detrimental to performance.

As a last thought, I don't think you want to replace "match a character" with "match a substring." I think you want both, so that the failure handling can be different. When a substring match fails, you'd want to do something intelligent with where it failed rather than abort the whole substring. @silentbicycle had suggested this at one point, and it's a good idea.

silentbicycle commented 9 months ago

I'm revisiting this now, and agree that having separate character and substring match instructions makes sense (rather than special-casing it in code generation like this). Conditionally emitting substring match would limit the impact of the change to specific cases like .{1000,} that aren't currently handled very well, but the existing behavior is fine otherwise.

silentbicycle commented 9 months ago

In the specific cases I have in mind (.{1000,} and x{1000,}) it's closer to your potential future FINDB command, in that what we really want is to scan forward while each byte satisfies some condition, until some max bound is reached, and fail otherwise -- something like FETCH-AND-CHECK cond:COND count:USIZE. Those respective cases would be:

```
// .{1000,}
FETCH-AND-CHECK cond:NE-'\n' count:1000
-- to represent 1000x
FETCHF
STOPFEQ 0x0a

// x{1000,}
FETCH-AND-CHECK cond:EQ<'x'> count:1000
-- and 1000x
FETCHF
STOPNE 'x'
```

but that wouldn't extend well to even slightly more complicated inputs, such as [aeiou]{1000,}.
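To make the intended semantics concrete, here is a minimal C sketch of what an interpreter case for a FETCH-AND-CHECK-style opcode might do. The opcode name, the `fac_cmp` enum, and the function are all hypothetical; none of this is libfsm's actual VM code.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical condition kind for a FETCH-AND-CHECK-style opcode:
 * match while the fetched byte is equal (or not equal) to one literal byte. */
enum fac_cmp { FAC_EQ, FAC_NE };

/* Consume up to `count` bytes from buf, failing as soon as a byte does not
 * satisfy the condition or the input runs out. On success, advance *pos
 * past the run and return true. Sketch only, not libfsm's interpreter. */
static bool
fetch_and_check(const unsigned char *buf, size_t len, size_t *pos,
    enum fac_cmp cmp, unsigned char c, size_t count)
{
	size_t i;

	for (i = 0; i < count; i++) {
		if (*pos >= len) {
			return false;
		}
		unsigned char b = buf[*pos];
		if ((cmp == FAC_EQ) ? (b != c) : (b == c)) {
			return false;
		}
		(*pos)++;
	}

	return true;
}
```

Under this reading, `.{1000,}` would compile to something like `fetch_and_check(buf, len, &pos, FAC_NE, '\n', 1000)` followed by the ordinary one-byte loop, and `x{1000,}` to the `FAC_EQ` variant.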

Instead, what do you think about either of these:

  1. LOOPED-FETCH-CHECK-TABLE count table

The count could be the bottom 16 or so bits, and the upper bits an address/table-id for a bool[256] table indicating fail or continue for each byte read. Or the next 8x 32-bit words could be the bool[256] table, I suppose, but with an address the same tables could be reused. It may still be worth having a LOOPED-FETCH-CHECK instruction with a single COND, because checking for a single character (often \n) is a very common case.
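The table-driven variant can be sketched in a few lines of C. The name `looped_fetch_check_table` and the calling convention are assumptions for illustration; the point is that one shared bool[256] table handles classes like [aeiou] that a single-byte condition cannot.

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of a LOOPED-FETCH-CHECK-TABLE-style operation (hypothetical name):
 * consume up to `count` bytes, where table[b] says whether byte b may
 * continue the run. Referencing the table by address/id (rather than
 * storing it inline) lets many instructions share one table. */
static bool
looped_fetch_check_table(const unsigned char *buf, size_t len, size_t *pos,
    const bool table[256], size_t count)
{
	size_t i;

	for (i = 0; i < count; i++) {
		if (*pos >= len || !table[buf[*pos]]) {
			return false;
		}
		(*pos)++;
	}

	return true;
}
```

For [aeiou]{1000,}, the code generator would emit a table with only the vowel entries set, and a count of 1000.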

OR

  2. a pair of instructions:

The first instruction would initialize a new monotonic counter to a max count value. The second would decrement the counter value, then fall through to the next instruction if zero, otherwise jump back to immediately after where the counter was initialized for another repetition. Each id would correspond to a local variable in the code generation output -- at an IR level we could treat this like an unlimited register machine. The output language would either figure out the scoping or could use a stack. In practice both the number of IDs and their scope should be quite small. Nothing else should be able to modify the counter in any way, to avoid unbounded looping.

This would only be used to handle highly repetitive sequences, and any other instructions jumping into the middle of that instruction sequence from outside should be rejected at compile-time, to keep the code using the counter from being abused in some way that the unrolled version couldn't be. A looped regex whose body can match nothing (like (x?){1000,}) would just spin the loop doing nothing, but for a bounded number of cycles, and is probably still no worse than (x?)(x?)(x?)... fully unrolled (ignoring captures for now).

It would also nest, in a way that LOOPED-FETCH-CHECK-TABLE would not -- we could handle e.g. (x{10,}){10,} smoothly that way for >10, whereas right now that unrolls the entire thing. But, is that a priority? Maybe not.
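The two-instruction counter scheme above can be sketched as a pair of helpers. All names here (`counter_init`, `counter_loop`, the fixed-size pool) are hypothetical; the essential properties are that nothing else can write the counters and that total loop work is bounded by the initial count.

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of the proposed counter pair (hypothetical opcodes):
 *   COUNTER-INIT id, n  -- set counter `id` to n
 *   COUNTER-LOOP id, L  -- decrement counter `id`; if it reached zero fall
 *                          through, otherwise jump back to label L.
 * Only these two opcodes touch the counters, so looping is bounded. */
struct counters {
	size_t v[8];  /* small fixed pool for the sketch; IDs would be
	               * allocated like registers in the real IR */
};

static void
counter_init(struct counters *cs, unsigned id, size_t n)
{
	cs->v[id] = n;
}

/* Returns true if control should jump back for another repetition. */
static bool
counter_loop(struct counters *cs, unsigned id)
{
	return --cs->v[id] != 0;
}
```

Nesting falls out naturally: (x{10,}){10,} would use one counter id for the inner loop and a second for the outer, rather than unrolling either.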

What do you think?

(edit: formatting)

dgryski commented 9 months ago

Note that Ragel https://www.colm.net/open-source/ragel/ has some extensions that allow it to create not-strictly-regular languages involving counters (such as nested parentheses). It might be interesting to see how it solves a similar problem.

katef commented 5 months ago

We talked about this; we prefer a single instruction, because it makes the IR rewriting simpler.