The current matching algorithm works in an unusual way because of the assumptions that held at the time it was written.
It had to match custom properties, not CSS selectors, so it combines all available selectors for each custom property. Matching all of a property's selectors at once has some benefit, but when the selector lists overlap heavily it becomes very inefficient. Even within that grouping, custom properties on the same selector are evaluated individually.
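In rough terms (the names and shapes below are illustrative, not the actual implementation), the grouping looks something like this:

```ts
// Illustrative shapes only; the real code's names and types differ.
interface CustomPropRule {
  selector: string;
  property: string; // e.g. "--accent-color"
  value: string;
}

// Combine every selector that declares a given custom property, so each
// property can be tested against an element with a single query.
function selectorsByProp(rules: CustomPropRule[]): Map<string, string[]> {
  const byProp = new Map<string, string[]>();
  for (const rule of rules) {
    const selectors = byProp.get(rule.property) ?? [];
    selectors.push(rule.selector);
    byProp.set(rule.property, selectors);
  }
  return byProp;
}

// One combined match per property: cheap for a single property, but when
// many properties share the same selectors, the same selector text gets
// re-evaluated once per property.
function matchesProp(el: Element, selectors: string[]): boolean {
  return el.matches(selectors.join(", "));
}
```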
The first step eliminates a great deal, since each selector is transformed to also match descendants; that is probably still a good idea. Initially this narrowed things down so much that the complexity of the following steps didn't matter.
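A minimal sketch of that widening step, assuming single comma-free selectors (the real transformation may differ):

```ts
// Widen a selector so it also matches descendants of its targets.
// Assumes `selector` is a single complex selector with no top-level commas.
function widenToDescendants(selector: string): string {
  // ".card" becomes ".card, .card *".
  return `${selector}, ${selector} *`;
}

// First-step filter: keep only elements that are, or sit under, a match.
function prune(candidates: Element[], selectors: string[]): Element[] {
  const widened = selectors.map(widenToDescendants).join(", ");
  return candidates.filter((el) => el.matches(widened));
}
```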
However, when extracting every CSS rule with the same algorithm was tested, too many elements remained in the data set. This is partly due to the hack used for that test: raw CSS values were simply used as both the key and the value of a variable. Since many values are eventually used on the root element in some form, this led to a very large set to operate on throughout the whole process of moving up the tree.
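The hack was roughly as follows (names are hypothetical, inferred from the description above):

```ts
// Sketch of the test hack: every raw declaration becomes a pseudo custom
// property whose key and value are both the raw CSS value.
type PseudoProp = { selector: string; key: string; value: string };

function ruleToPseudoProps(
  selector: string,
  declarations: Record<string, string>
): PseudoProp[] {
  return Object.values(declarations).map((value) => ({
    selector,
    key: value,   // the raw value doubles as the variable's key...
    value,        // ...and as its value, so large values inflate both
  }));
}
```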
It becomes slow because 1) it evaluates many selectors multiple times, 2) it still performs some regex transformations that could be avoided, or at least done up front, and 3) it ends up with some particularly complex and slow selectors. Some of the slowness could also be due to the hack sometimes using very large keys as variable values.
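Point 1 in particular looks fixable with memoization; a sketch (not the current implementation) of caching match results per element:

```ts
// Cache match results so each (element, selector) pair is evaluated at most
// once while walking up the tree. Sketch only, not the current code.
const matchCache = new WeakMap<Element, Map<string, boolean>>();

function cachedMatches(el: Element, selector: string): boolean {
  let perElement = matchCache.get(el);
  if (!perElement) {
    perElement = new Map<string, boolean>();
    matchCache.set(el, perElement);
  }
  let result = perElement.get(selector);
  if (result === undefined) {
    result = el.matches(selector);
    perElement.set(selector, result);
  }
  return result;
}
```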
With the recent render optimizations, matching is now usually the bottleneck, though in most cases it's still well below the performance budget. The worst-case scenario (extracting every rule on a very complex page) was still somewhat acceptable at about 150-200ms when inspecting an element with many ancestors. Still, that leaves very little room to expand the tasks involved, and given all the duplicated selector testing it can probably be reduced to almost nothing.
In the process, a less ad-hoc data model should probably be devised: properties are currently defined wherever they're first needed, with no type guarantees.
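One possible shape for such a model, purely as a sketch with hypothetical field names:

```ts
// Every field declared up front instead of attached wherever it's first
// needed; the names and fields here are hypothetical.
interface MatchedProperty {
  name: string;                           // e.g. "--accent-color"
  value: string;                          // value supplied by the matched rule
  selector: string;                       // selector that supplied it
  specificity: [number, number, number];  // precomputed for cheap ordering
  element: Element;                       // ancestor on which the match occurred
}
```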