hsluoyz opened this issue 6 years ago
Calling out to C carries a cost. We don't want to do it for a basic package like regexp. We're much more interested in speeding up Go's regexp package. If people want to work on that, that would be great.
Note that one reason that Go's regexp package may be slower is that it works on UTF-8 characters, not ASCII bytes. I don't know what Python does.
Also note that Go is committed to using regexps that scale well (see https://swtch.com/~rsc/regexp/). I don't know what Python does.
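A minimal illustrative sketch (my addition, not from the thread): the pattern `(a+)+$` causes catastrophic backtracking in backtracking engines, while Go's RE2-style engine completes in time proportional to the input size.

```go
// Illustrative sketch of the linear-time guarantee: (a+)+$ is a classic
// catastrophic-backtracking pattern, but Go's regexp handles it quickly.
package main

import (
	"fmt"
	"regexp"
	"strings"
	"time"
)

func main() {
	re := regexp.MustCompile(`(a+)+$`)
	input := strings.Repeat("a", 100000) + "!" // trailing "!" forces a failed match
	start := time.Now()
	matched := re.MatchString(input)
	fmt.Println(matched, time.Since(start)) // false, and it completes quickly
}
```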
I'm not sure it's useful to leave a general issue like this open. It doesn't suggest any specific action to take. Are you interested in examining the regexp code to understand why Python does better on this benchmark?
The benchmark code includes compiling the regex. In a common use of regexp, one would compile the regex once and run it many times, so the benchmark numbers aren't very helpful.
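As an illustration of that usage (my addition; the variable name emailRE is invented), the idiomatic pattern is to compile once at package scope and reuse the compiled expression, which is safe for concurrent use:

```go
// Minimal sketch of the compile-once pattern.
package main

import (
	"fmt"
	"regexp"
)

// Compiled once at program startup, reused for every match.
// A *regexp.Regexp is safe for concurrent use by multiple goroutines.
var emailRE = regexp.MustCompile(`[\w\.+-]+@[\w\.-]+\.[\w\.-]+`)

func main() {
	fmt.Println(emailRE.MatchString("123456789 foo@bar.etc")) // true
}
```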
Also note that the benchmark numbers are almost a year old at this point, and Go does two releases per year.
I might be mistaken, but doesn't PCRE have a JIT compiler? That might explain it, at least for a couple of the top ones (I know PHP uses PCRE).
> The benchmark code includes compiling the regex. In a common use of regexp, one would compile the regex once and run it many times, so the benchmark numbers aren't very helpful.
https://github.com/mariomka/regex-benchmark/issues/2 I found an example on the same repository which apparently excludes compilation, but it doesn't look too scientific (only ten executions). It shows more or less the same results (which is odd as I'd thought that compilation would have more of an impact on the times).
The compilation does have a large impact on the speed:
$ cat f_test.go
package p

import (
	"regexp"
	"testing"
)

var Sink bool

func BenchmarkCompileRun(b *testing.B) {
	for i := 0; i < b.N; i++ {
		rx := regexp.MustCompile(`[\w\.+-]+@[\w\.-]+\.[\w\.-]+`)
		Sink = rx.MatchString("123456789 foo@bar.etc")
	}
}

func BenchmarkRun(b *testing.B) {
	rx := regexp.MustCompile(`[\w\.+-]+@[\w\.-]+\.[\w\.-]+`)
	for i := 0; i < b.N; i++ {
		Sink = rx.MatchString("123456789 foo@bar.etc")
	}
}

$ go test -bench=.
goos: linux
goarch: amd64
pkg: mvdan.cc/p
BenchmarkCompileRun-4     100000    14160 ns/op
BenchmarkRun-4           1000000     1121 ns/op
PASS
ok      mvdan.cc/p      2.693s
I presume it doesn't show up in the original numbers because the input data is very large, though.
I agree with @ianlancetaylor that a generic issue like this isn't very helpful. If specific parts of the regexp package could be improved, or certain edge cases are orders of magnitude slower than they should be, we should use separate issues to tackle those. For example, we already have some like #24411 and #21463.
While it's true that this issue is less than actionable as-is, it is also true that the results of the benchmark reported here are not too dissimilar from the ones on https://benchmarksgame-team.pages.debian.net/benchmarksgame/performance/regexredux.html (where they use 1.10). I agree it's unfortunate that all those benchmarks include pattern compilation (although it seems it's not so significant).
My comments on https://github.com/golang/go/issues/26943:
Would it be feasible to move the `syntax.InstRune*` match checks from `step()` to `add()`? A thread failing in `step()` constitutes wasted work – even for a regular expression as simple as `[+-]?[0-9]+`.

Also, what about using slice assignment when a thread is enqueued? Let `cap` have copy-on-write semantics.

Also also, it might be worth evaluating the benefit of using a slice as a stack instead of recursing. Anything to reduce the overhead of `syntax.InstAlt` instructions.
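To make the copy-on-write suggestion concrete, here is a generic sketch (my addition, not the actual regexp internals; the thread type and its fields are invented): each NFA thread shares the backing array of its capture slice when enqueued and copies it only on the first write.

```go
// Generic sketch of copy-on-write capture slices; not the real regexp
// machine, and all names here are invented for illustration.
package main

import "fmt"

type thread struct {
	cap    []int // capture positions, possibly shared with other threads
	shared bool  // true while cap's backing array is shared
}

// fork enqueues a new thread that shares the parent's capture slice.
func (t *thread) fork() *thread {
	t.shared = true
	return &thread{cap: t.cap, shared: true}
}

// setCap records a capture position, copying the slice only on first write.
func (t *thread) setCap(i, pos int) {
	if t.shared {
		c := make([]int, len(t.cap))
		copy(c, t.cap)
		t.cap = c
		t.shared = false
	}
	t.cap[i] = pos
}

func main() {
	parent := &thread{cap: make([]int, 4)}
	child := parent.fork()
	child.setCap(0, 42) // triggers the copy; parent's captures are untouched
	fmt.Println(parent.cap[0], child.cap[0]) // 0 42
}
```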
Change https://golang.org/cl/130417 mentions this issue: regexp/syntax: don't do both linear and binary search in MatchRunePos
Timed out in state WaitingForInfo. Closing.
(I am just a bot, though. Please speak up if this is a mistake or you have the requested information.)
Reopening to prevent @junyer having to relocate his ideas yet again. :)
I think @gopherbot needs to be taught some manners.
That benchmark site has not been updated in a year.
I just ran the input with the tip compiler (go version devel +aa20ae4853 Mon Nov 12 23:07:25 2018 +0530 linux/amd64), after converting the benchmark to an idiomatic one.
Code -
With that, if I compare with 1.10, there is a substantial improvement now:

$ benchstat go1.10.txt tip.txt
name          old time/op  new time/op  delta
All/Email-4   507ms ± 1%   410ms ± 1%   -19.03%  (p=0.008 n=5+5)
All/URI-4     496ms ± 1%   398ms ± 1%   -19.86%  (p=0.008 n=5+5)
All/IP-4      805ms ± 0%   607ms ± 1%   -24.63%  (p=0.008 n=5+5)

The total is also now 1415ms, which puts us ahead of Python 3. If we are to go by the original issue title, I'd say it is pretty much resolved.
Only @junyer's comments here contain concrete suggestions for improvement. I don't know whether they are still applicable at current tip; I will let someone investigate that and repurpose the issue to that effect.
As per https://perf.golang.org/search?q=upload:20181127.2, moving the `syntax.InstRune*` match checks from `step()` to `add()` seems helpful. Note that this is the regular expression used for the `Match/Hard1/*` benchmarks:

{"Hard1", "ABCD|CDEF|EFGH|GHIJ|IJKL|KLMN|MNOP|OPQR|QRST|STUV|UVWX|WXYZ"},

As per https://perf.golang.org/search?q=upload:20181127.3, letting `cap` have copy-on-write semantics seems additionally helpful.

Both of those are quite trivial changes. I also suggested reducing the overhead of `syntax.InstAlt` instructions, but that would require some design discussion, so I'm not tinkering with that tonight.
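Picking up the earlier "slice as a stack instead of recursing" suggestion, here is a generic sketch (my addition, not the actual regexp machine; the instruction graph and names are invented) of following alternation branches with an explicit worklist rather than recursion.

```go
// Generic sketch of replacing recursion with an explicit slice-backed stack
// when following alternation (InstAlt-like) branches. This is not the real
// regexp code; the instruction graph below is invented for illustration.
package main

import "fmt"

func main() {
	// Each "instruction" lists its successor instructions; node 0 is an
	// alternation with two branches that rejoin at node 3.
	succ := map[int][]int{0: {1, 2}, 1: {3}, 2: {3}, 3: nil}

	visited := map[int]bool{}
	stack := []int{0} // explicit stack instead of a recursive add()
	for len(stack) > 0 {
		pc := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if visited[pc] {
			continue
		}
		visited[pc] = true
		stack = append(stack, succ[pc]...)
	}
	fmt.Println("visited", len(visited), "instructions") // visited 4 instructions
}
```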
This may be a bit of a wild idea (it would likely need to be tracked in a separate issue, but since I'm not even sure how feasible it is, and since it's related to this pretty open-ended issue, I'm dumping it here). It came to mind while reading this old article from @dgryski: would it make sense, for regexp source expressions known at compile time, to have the Go compiler compile the regexp and then emit native machine code implementing `doExecute` for that specific expression (kind of like Ragel)? At runtime, `regexp.Compile` would somehow discover that the expression has already been compiled to native code and would use the precompiled `doExecute`.

To avoid having to generate native code directly just for the regexp, it would probably be OK to generate Go code and let the rest of the Go compiler handle that.
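To make the code-generation idea concrete, here is a hand-written sketch (my addition; matchABplusC is an invented name, and this is not what the compiler or regexp package emits) of the kind of specialized matcher such generation might produce for a trivial pattern like `ab+c`.

```go
// Hypothetical sketch of a specialized matcher that code generation might
// emit for the pattern `ab+c`; the name and code are invented for illustration.
package main

import "fmt"

func matchABplusC(s string) bool {
	for i := 0; i+2 < len(s); i++ {
		if s[i] != 'a' || s[i+1] != 'b' {
			continue
		}
		j := i + 2 // consume the required 'b', then any further 'b's
		for j < len(s) && s[j] == 'b' {
			j++
		}
		if j < len(s) && s[j] == 'c' {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(matchABplusC("xxabbbcxx")) // true
	fmt.Println(matchABplusC("xxac"))      // false
}
```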
> having the go compiler compile the regexp and then emit native machine code
There is precedent for this, namely CTRE for C++, and what used to be Rust's compile-time regex macro. The optimization doesn't seem too far-fetched to me, since it's similar to how intrinsics and SSA rules can redirect usage of the standard library to different implementations.

As a side note, it would be an interesting project for someone to put together a `go generate` tool and `ctre`-like package that mirrored regexp's API but generated code at compile time.
CTRE is a veritable nightmare of C++ template metaprogramming. Let us never speak of it again.
The state of the art for code generation is probably Trofimovich's TDFA work for RE2C. See http://re2c.org/2017_trofimovich_tagged_deterministic_finite_automata_with_lookahead.pdf.
This PCRE-based regex package gave better performance than regexp, though PHP's preg_match still outperformed both in my test (Apache access log parsing).
@CAFxX:
> would it make sense, for regexp source expressions known at compile time, having the go compiler compile the regexp and then emit native machine code implementing doExecute for that specific expression
That makes me think of a couple of less extreme optimisations:
- For an expression of the form `regexp.MustCompile(string_literal)`, build the result value at compile time, and either store it immutably in the code segment or clone it at runtime. In fact, I guess this could be extended to any function call which can be marked "pure" and has constant arguments. Unfortunately this won't help benchmarks which loop over lists of strings to test as regexps, and it won't help real-world code much; they can get almost the same benefit if they do `var xxx = regexp.MustCompile(...)` globally.
- Have `regexp.MustCompile` keep a cache, i.e. a `map[string]*Regexp`. It would be interesting to modify the benchmarks to do this explicitly and see what difference it makes (a minimal sketch of this idea follows below). The cache should have a size limit to avoid problems with dynamically generated regexps.
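A minimal sketch of the caching idea mentioned above (my addition; cachedMustCompile and the other names are invented, and a real version would bound the cache size as noted):

```go
// Minimal sketch of a memoizing wrapper around regexp.MustCompile.
// All names are invented for illustration; a production version would
// bound the cache size for dynamically generated patterns.
package main

import (
	"fmt"
	"regexp"
	"sync"
)

var (
	reMu    sync.Mutex
	reCache = map[string]*regexp.Regexp{}
)

func cachedMustCompile(pattern string) *regexp.Regexp {
	reMu.Lock()
	defer reMu.Unlock()
	if re, ok := reCache[pattern]; ok {
		return re
	}
	re := regexp.MustCompile(pattern)
	reCache[pattern] = re
	return re
}

func main() {
	re := cachedMustCompile(`[\w\.+-]+@[\w\.-]+\.[\w\.-]+`)
	fmt.Println(re.MatchString("foo@bar.etc")) // true
	_ = cachedMustCompile(`[\w\.+-]+@[\w\.-]+\.[\w\.-]+`) // second call hits the cache
}
```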
FWIW there's a good number of unreviewed CLs by @bboreham improving regexp performance that have been sitting for a cycle or so:
Of the list, CL 355789 is exceptionally small (take the address of a big struct instead of copying it; 4 line change) compared to its 30-40% perf benefit.
And, the benchstat with those 5 applied:
CL 355789 (the "exceptionally small" yet powerful CL mentioned previously) has been merged for Go 1.19! 🎉
(The CL didn't mention this issue, so the commit didn't trigger a thread update, hence me mentioning since so many are following this thread. The other 4 CLs are still pending review.)
Are there other significant performance improvements in the works, or is the current 1.19 release considered "good enough" compared to other languages, such that speed is no longer an issue?
Of the CLs I listed in https://github.com/golang/go/issues/26623#issuecomment-1033158328, only one has been reviewed and merged; the rest remain unreviewed. There are certainly a few boosts in there.
(I can't answer a question about what defines "good enough".)
https://github.com/golang/go/issues/11646 isn't "in the works" anymore, evidently, which is unfortunate, but unsurprising due to its complexity.
Just out of curiosity, I updated the versions and ran it today:
Language | Email(ms) | URI(ms) | IP(ms) | Total(ms) |
---|---|---|---|---|
Rust | 2.02 | 1.67 | 2.46 | 6.15 |
C++ SRELL | 3.35 | 3.68 | 10.08 | 17.10 |
C# .Net Core | 6.46 | 3.79 | 19.35 | 29.59 |
Nim Regex | 0.94 | 24.68 | 6.52 | 32.14 |
PHP | 15.03 | 17.70 | 3.50 | 36.23 |
Nim | 15.74 | 16.17 | 4.99 | 36.89 |
Julia | 36.72 | 35.87 | 3.81 | 76.39 |
C++ Boost | 35.97 | 35.17 | 11.78 | 82.91 |
Javascript | 47.19 | 35.78 | 1.02 | 84.00 |
Perl | 89.80 | 58.62 | 15.88 | 164.30 |
Crystal | 83.45 | 72.39 | 8.89 | 164.74 |
C PCRE2 | 89.19 | 81.06 | 10.35 | 180.59 |
Dart | 68.68 | 67.70 | 51.04 | 187.43 |
D dmd | 137.56 | 143.07 | 4.91 | 285.54 |
D ldc | 180.59 | 136.11 | 4.46 | 321.16 |
Ruby | 187.45 | 166.59 | 35.49 | 389.53 |
Python PyPy2 | 119.10 | 98.73 | 176.11 | 393.94 |
Java | 120.06 | 162.12 | 206.36 | 488.54 |
Kotlin | 116.40 | 183.73 | 206.46 | 506.59 |
Python PyPy3 | 180.68 | 155.52 | 174.64 | 510.84 |
Python 2 | 175.05 | 122.20 | 238.39 | 535.64 |
Dart Native | 268.49 | 297.90 | 4.43 | 570.81 |
Go | 184.79 | 176.84 | 270.63 | 632.26 |
Python 3 | 235.18 | 179.36 | 265.39 | 679.94 |
C++ STL | 338.12 | 269.98 | 183.98 | 792.08 |
C# Mono | 1994.57 | 1725.52 | 96.81 | 3816.91 |
Languages Regex Benchmark: https://github.com/mariomka/regex-benchmark
In the above benchmark, Go's regex is even slower than Python's. This is not ideal: Python is a scripting language and Go is a statically compiled language, so Go should be faster than Python.
I noticed that there's an issue here: https://github.com/golang/go/issues/19629, where someone said it's because Python uses C for regex, and C is faster than Go. But Python is a cross-platform language, and it can use the C regex implementation on all platforms. Why can't Go do the same thing? This may be a stupid question, but I just don't understand why Go has to use cgo to call C code while Python doesn't have this limitation. Thanks.