Closed pschiffmann closed 2 years ago
Ideally Grammophone should be able to handle that. I recall that many of the calculations are written in a somewhat naive way -- very closely following the way they're first presented in a textbook -- and so there is probably a fair bit of room for optimization, as you've seen. I can't promise that I'll be able to address this, but if you decide to look at the problem I welcome pull requests.
One of the biggest bottlenecks that shows up in the Chromium profiler seems to be the repeated sorting of the queue array in one of the functions. I've made a pull request (#12) replacing that with a priority queue, and I've measured a 40x performance improvement on some small grammars I've been working with. I assume the difference would be even more dramatic in your case, although I'm not sure what the memory overhead is like.
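For illustration, here is a minimal sketch of the kind of change described above: replacing "re-sort the whole array every time we need the next item" with a binary min-heap, so each extraction costs O(log n) instead of O(n log n). The class and comparator names are hypothetical, not Grammophone's actual code.

```javascript
// Hypothetical sketch of the optimization: a binary min-heap used
// as a priority queue, instead of sorting the queue array on every
// iteration. Not Grammophone's actual implementation.
class MinHeap {
  constructor(compare) {
    this.items = [];
    this.compare = compare;
  }
  push(item) {
    // Append, then sift up until the heap property holds.
    const items = this.items;
    items.push(item);
    let i = items.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.compare(items[i], items[parent]) >= 0) break;
      [items[i], items[parent]] = [items[parent], items[i]];
      i = parent;
    }
  }
  pop() {
    // Remove the root, move the last element up, then sift down.
    const items = this.items;
    const top = items[0];
    const last = items.pop();
    if (items.length > 0) {
      items[0] = last;
      let i = 0;
      for (;;) {
        const l = 2 * i + 1, r = 2 * i + 2;
        let smallest = i;
        if (l < items.length && this.compare(items[l], items[smallest]) < 0) smallest = l;
        if (r < items.length && this.compare(items[r], items[smallest]) < 0) smallest = r;
        if (smallest === i) break;
        [items[i], items[smallest]] = [items[smallest], items[i]];
        i = smallest;
      }
    }
    return top;
  }
  get size() { return this.items.length; }
}

// Instead of `queue.sort(byPriority); const next = queue.shift();`
// on every iteration, push once and pop in priority order:
const heap = new MinHeap((a, b) => a - b);
[5, 1, 4, 2, 3].forEach((x) => heap.push(x));
const out = [];
while (heap.size > 0) out.push(heap.pop());
// out is now sorted ascending: [1, 2, 3, 4, 5]
```

The asymptotic win is what matters here: with n items processed through the queue, repeated sorting is O(n² log n) overall, while heap push/pop is O(n log n), which is consistent with the large speedup observed on bigger grammars.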
I've made a change that should address this: fadd52228273fb53e3472ee0e7ecfd2ed34f1dd0
The analysis for the C grammar now displays in a few seconds for me, rather than appearing to never finish.
Hi!
I just tried to analyze this C grammar – "just" being an hour ago, and my Chrome tab has by now allocated over 600 MB of JavaScript memory. ;-)
Rationale: I'm a CS student, and I recently heard that (canonical) LR(1) parsers aren't suitable for real-world use because of their huge parser tables. So, to get a better impression of what "huge" actually means, I wanted to look at a real-world example instead of the tiny grammars you see in lectures.
Best regards, Philipp