rgrinberg opened 6 years ago
Now, as you see, it fails in colParser, not colLexer.
All reported cases of stack overflow occur while processing zh_PINYIN.txt. What zh_PINYIN does is reorder all Chinese characters according to their pinyin reading, which results in very long sequences of expressions. So if we are serious about fixing this, we must reproduce the zh_PINYIN processing in a "low stack environment" and see where it fails.
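To make the failure mode concrete — this is an illustrative sketch, not Camomile's actual parser code — a non-tail-recursive traversal keeps one stack frame alive per element, so a sufficiently long input overflows the stack, while an accumulator-based (tail-recursive) version runs in constant stack space:

```ocaml
(* Non-tail-recursive: one stack frame per element, because the cons
   f x :: ... happens *after* the recursive call returns. On a very
   long list (or a very long expression sequence, as zh_PINYIN
   produces) this raises Stack_overflow. *)
let rec naive_map f = function
  | [] -> []
  | x :: xs -> f x :: naive_map f xs

(* Tail-recursive variant: the accumulator carries partial results,
   so the recursion compiles to a loop and uses constant stack. *)
let safe_map f l =
  let rec go acc = function
    | [] -> List.rev acc
    | x :: xs -> go (f x :: acc) xs
  in
  go [] l

let () =
  assert (naive_map succ [1; 2; 3] = [2; 3; 4]);
  (* naive_map would likely overflow here on a small stack;
     safe_map handles a million elements without trouble. *)
  let long = List.init 1_000_000 (fun i -> i) in
  assert (List.length (safe_map succ long) = 1_000_000)
```

A recursive-descent parser has the same shape as naive_map: nesting/sequence depth in the input translates directly into native-stack depth.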
The easy fix would be, as you suggested, to use bytecode mode. But this reduces performance, and unfortunately camomilelocaledef does heavy computation. On modern hardware, though, it would still be a breeze.
First try the bytecode mode and later think about overhauling the parser and lexer?
(BTW, I will be away again until the end of this month. All I can do is browse the Web and press buttons, even if I have time.)
This crash with bytecode-only is still happening today.
This should make the lexer work with a smaller stack.
You can test how well our "low stack mode" works with the following patch:
and by running it with:
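For reference, one way to set up such a "low stack" run (this is my assumption about the setup, not the exact command from above; the executable name and input file are placeholders) is to shrink the stack with ulimit in a subshell:

```shell
# Run in a subshell so the reduced limit doesn't stick to your session.
# 512 KiB is an arbitrary, deliberately small stack; adjust as needed.
bash -c 'ulimit -s 512 && ./camomilelocaledef.exe zh_PINYIN.txt'
```

Note that ulimit only caps the native stack, which is why a native-code binary crashes here while a bytecode one, whose stack lives on the heap, may not.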
Before this patch, this would fail for me on ar.mar. Now it fails on ja.mar, with a stack overflow in the parser:

I wonder if this is a sign that we should just give up and force camomilelocaledef to run in bytecode mode so that we don't have to worry about the stack space.
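If we did force bytecode, a dune stanza along these lines could do it (a sketch; the actual stanza in the camomile repo may differ):

```
; Hypothetical: build camomilelocaledef as bytecode only. The bytecode
; runtime keeps its stack on the heap, and its limit can be raised at
; run time via OCAMLRUNPARAM (the "l" parameter), so the fixed system
; stack stops being the constraint.
(executable
 (name camomilelocaledef)
 (modes byte))
```

The trade-off is the bytecode slowdown mentioned above, which is probably acceptable for a build-time tool.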