slimsag opened this issue 3 years ago
Some other hangs which I've been able to reproduce with our fork of syntect:
path=sgtest-megarepo-11c726f/chromium/third_party/blink/perf_tests/speedometer/resources/flightjs-example-app/components/bootstrap/js/bootstrap.js line=143
path=sgtest-megarepo-11c726f/chromium/third_party/chaijs/chai.js line=7479
path=sgtest-megarepo-11c726f/mongo/src/third_party/mozjs/extract/js/src/builtin/intl/CommonFunctions.js line=473
path=sgtest-megarepo-11c726f/grafana/public/vendor/bootstrap/bootstrap.js line=229
path=sgtest-megarepo-11c726f/sourcegraph/client/shared/dev/jest-environment.js line=62
Examples taken from https://github.com/sgtest/megarepo/zipball/11c726fd66bb6252cb8e9c0af8933f5ba0fb1e8d (warning: 2 GB zip file).
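For anyone trying to reproduce these locally, here is a minimal harness (a sketch assuming syntect 5.x and its bundled default syntaxes, not our actual fork or server code): it parses a file line by line and prints any slow lines, so a hang shows up as the process stalling at a specific line number.

```rust
use std::time::Instant;
use syntect::parsing::{ParseState, SyntaxSet};
use syntect::util::LinesWithEndings;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let path = std::env::args().nth(1).expect("usage: repro <path/to/file>");
    let text = std::fs::read_to_string(&path)?;

    // Bundled defaults; our fork ships extra syntaxes, so results may differ.
    let ss = SyntaxSet::load_defaults_newlines();
    let syntax = ss
        .find_syntax_for_file(&path)?
        .unwrap_or_else(|| ss.find_syntax_plain_text());
    let mut state = ParseState::new(syntax);

    for (i, line) in LinesWithEndings::from(&text).enumerate() {
        let start = Instant::now();
        state.parse_line(line, &ss)?; // a hang shows up as this call never returning
        if start.elapsed().as_millis() > 100 {
            println!("slow line {}: {:?}", i + 1, start.elapsed());
        }
    }
    Ok(())
}
```

Running it against one of the paths above should stall at or near the reported line number if the hang reproduces.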
Most of these seem to be around a `/*` comment, with the `/*` having one or more spaces before it.
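To test that theory in isolation, something like the following could be fed through the parser directly (a hypothetical minimal input; whether it alone reproduces the hang is unverified, and it again assumes syntect 5.x):

```rust
use syntect::parsing::{ParseState, SyntaxSet};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ss = SyntaxSet::load_defaults_newlines();
    let syntax = ss.find_syntax_by_extension("js").expect("JS syntax is bundled");
    let mut state = ParseState::new(syntax);
    // One or more spaces before the `/*`, matching the pattern seen in the hangs.
    state.parse_line("    /* a block comment opener with leading spaces\n", &ss)?;
    println!("parsed without hanging");
    Ok(())
}
```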
I've filed an issue with more details: https://github.com/sourcegraph/syntect/issues/1
Results for the files in the original comment:
- `admin/static/js/dropzone.min.js`: no issues on Sourcegraph.com (no highlighting though).
- `all.fine-uploader.core.min.js`: works fine. No issues on Sourcegraph.com (highlighting only shows for comments).
- `azure.jquery.fine-uploader.min.js`: works fine. No issues on Sourcegraph.com (highlighting only shows for comments).
- `azure.fine-uploader.min.js`: works fine. No issues on Sourcegraph.com (highlighting only shows for comments).

🐢 Slow highlighting for C# (onig = Oniguruma, the default regex engine and supposedly the faster one; fancy-regex = an alternative regex engine in pure Rust)
| File (size) | M1 Max / Onig | GCP / Onig | GCP / fancy-regex |
|---|---|---|---|
| standard/src/platforms/xamarin.tvos/System.cs (636K) | 0.58s | 6.1s | 2.0s |
| standard/src/platforms/net461/System.cs (796K) | 0.66s | 4.4s | 1.6s |
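For context on the two engine columns: syntect picks its regex engine at build time via Cargo features, so comparing onig and fancy-regex means building twice. A sketch of the two configurations (feature names as documented by syntect; the version pin is an assumption):

```toml
[dependencies]
# Default build: the Oniguruma ("onig") C regex engine.
syntect = "5"

# Pure-Rust fancy-regex engine instead (the "GCP / fancy-regex" column):
# syntect = { version = "5", default-features = false, features = ["default-fancy"] }
```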
I can't browse these .cs files at all; viewing them simply doesn't work.
The goal of this issue is to collect known syntax highlighting failures from logs on sourcegraph.com.
The rate of failures can be seen here: https://sourcegraph.com/-/debug/grafana/d/syntect-server/syntect-server?orgId=1&from=now-7d&to=now
The numbers on this dashboard have been pretty stable over the past 30d, so I don't suspect any regression.
The frontend logs every failed highlighting request it sends to the service, with details on which file etc. to repro the issue. Most of these will need to be traced back to the upstream code https://github.com/trishume/syntect and fixed there.
We can find timeouts (maybe issues, maybe not; the only way to tell is to try reproing) by searching frontend logs for `syntax highlighting took longer`.

The really critical ones, however, are the worker timeouts: those indicate one of the server subprocesses died completely. We can find those by searching frontend logs for `HSS worker timeout`, which gives a list of affected files.

We should look at each of these files in e.g. a dev server and see if we can reproduce them there. If we can, they are most likely bugs in the upstream code github.com/trishume/syntect, and we should probably try to track them down there.
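One way to script that triage (a sketch mirroring the worker-timeout idea locally, not the actual frontend or worker code, and again assuming syntect 5.x) is to parse each candidate file in a thread with a time budget and flag the ones that blow it:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;
use syntect::parsing::{ParseState, SyntaxSet};
use syntect::util::LinesWithEndings;

/// Returns true if parsing `path` did not finish within `budget`.
/// The stuck worker thread is leaked, which is fine for a one-off triage run.
fn hangs(path: &str, budget: Duration) -> bool {
    let path = path.to_owned();
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let ss = SyntaxSet::load_defaults_newlines();
        let syntax = ss
            .find_syntax_for_file(&path)
            .ok()
            .flatten()
            .unwrap_or_else(|| ss.find_syntax_plain_text());
        let text = std::fs::read_to_string(&path).unwrap_or_default();
        let mut state = ParseState::new(syntax);
        for line in LinesWithEndings::from(&text) {
            if state.parse_line(line, &ss).is_err() {
                break; // parse errors are worth noting, but they are not hangs
            }
        }
        let _ = tx.send(());
    });
    rx.recv_timeout(budget).is_err()
}

fn main() {
    let path = std::env::args().nth(1).expect("usage: triage <path/to/file>");
    if hangs(&path, Duration::from_secs(10)) {
        println!("{path}: exceeded time budget, likely a hang");
    } else {
        println!("{path}: ok");
    }
}
```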