When 'Search whole file' is disabled, search results are tested for visibility on the screen. There is already an optimization that only checks results on visible lines, but the number of offset-to-XY conversions still scales linearly with the number of results, which can be very large in files with long lines.
A simple observation is that every line has a first and last offset that is visible on the screen (which may be different for each line due to proportional fonts).
This commit caches the visible offset range for every line involved in one search query, so testing the visibility of a search result becomes a check of whether its offset falls inside its line's visible offset range. Finding the visible offset range requires two XY-to-offset conversions per line, so the total number of conversions is bounded by the number of lines that can fit on the screen.
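The idea can be sketched roughly as follows. This is not the editor's actual API; the function names (`xy_to_offset`, `build_visible_range_cache`, `is_result_visible`) and the representation of visible lines as `(line, y)` pairs are assumptions made for illustration:

```python
def build_visible_range_cache(visible_lines, xy_to_offset, left_x, right_x):
    """For each visible line, find the first and last visible offset.

    This costs exactly two XY-to-offset conversions per visible line,
    independent of how many search results the line contains.
    """
    cache = {}
    for line, y in visible_lines:
        first = xy_to_offset(left_x, y)   # leftmost visible offset on this line
        last = xy_to_offset(right_x, y)   # rightmost visible offset on this line
        cache[line] = (first, last)
    return cache


def is_result_visible(cache, line, offset):
    """Visibility of a search result is now a cheap range check,
    with no coordinate conversion per result."""
    rng = cache.get(line)
    return rng is not None and rng[0] <= offset <= rng[1]
```

Because the cache is built once per search query, every subsequent result on a cached line is tested with only two comparisons.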
The worst case for this optimization is when every line has exactly one search result; before, this would lead to one offset-to-XY conversion per line, whereas now it leads to two XY-to-offset conversions per line. However, the maximum number of conversions is twice the number of visible lines, which will generally be very small.
Showing concrete results is difficult, because the difference in scaling means I can construct arbitrarily extreme examples just by making the lines longer. In this example, I happen to have 89 search results per line and 57 lines on the screen, which seems like a reasonable case.
Before:
After:
Because there are search results on every line, the "After" timings are also the worst-case scenario that can occur on my particular setup, since the cost cannot scale any further (besides the cache lookups taking some time for each search result, of course).
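To put the numbers from the example above into perspective (89 results per line, 57 visible lines), a quick back-of-the-envelope comparison of conversion counts:

```python
results_per_line = 89   # from the example above
visible_lines = 57      # from the example above

# Before: one offset-to-XY conversion per result on a visible line.
conversions_before = results_per_line * visible_lines

# After: two XY-to-offset conversions per visible line, regardless of
# how many results each line contains.
conversions_after = 2 * visible_lines

print(conversions_before)  # 5073
print(conversions_after)   # 114
```

So even in this moderate case the change cuts the number of coordinate conversions by more than 40x, and the gap widens with longer lines.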
Creating this as a PR mainly for a sanity check that I didn't miss something.