michael-buschbeck closed this issue 2 years ago.
Supplemental supplemental experiment: Enclosing an entire chunk of `\n`-separated lines in `{{` and `}}` – which makes it into a single multi-line message as far as Roll20 is concerned – also makes the issue go away.

Since MMM already supports processing multi-line chat messages produced by including `{{` and `}}` pairs in script code (see #26), doing just that for our lengthy autorun macro (after escaping all embedded `{{` and `}}` as `{\{` and `}\}`, respectively) serves as a passable workaround:
```
!rem {{
%{macroSheetLibrary|initGlobals}
!rem }}
```
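For script code that emits a large `\n`-separated payload through `sendChat()` itself, the same enclosure can presumably be applied programmatically. A rough sketch (the helper name, speaker name, and payload below are illustrative placeholders; it simply wraps the chunk in `{{` / `}}` as in the experiment above and escapes embedded braces as described):

```js
// Rough sketch: wrap a large \n-separated payload in {{ ... }} so Roll20
// treats it as a single multi-line message instead of many individual lines.
// Any {{ or }} already inside the payload is escaped as {\{ and }\}.
function sendAsSingleMessage(speakingAs, payload) {
    const escaped = payload.replace(/\{\{/g, "{\\{").replace(/\}\}/g, "}\\}");
    sendChat(speakingAs, "{{\n" + escaped + "\n}}");
}

// Example: 1,000 dummy lines sent as one multi-line message.
const lines = [];
for (let i = 0; i < 1000; i++) {
    lines.push("!foo " + i);
}
sendAsSingleMessage("experiment", lines.join("\n"));
```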
The new autorun macro feature (see #165) has been used to great effect to pre-parse sizable chunks of reusable macro code.
However, it has exposed a crippling inefficiency in Roll20's handling of anything that expands to a large amount of text through the `sendChat()` API function, one that frequently leads to "possible infinite loop detected" API sandbox crashes because the sandbox becomes unresponsive for significant amounts of time (30 seconds or more).

This issue has nothing to do with MMM itself – it can be reproduced without MMM even being installed/enabled simply by using `sendChat()` to send loads of `!foo` dummy lines, which (intentionally) don't address any installed API script at all and are effectively completely ignored. (Those lines still pass by any API scripts' `chat:message` handlers, of course, but it turns out this happens reasonably quickly and the slowdown happens after control has returned from sandbox code to Roll20.)

Experimentation with those `!foo` dummy macro lines above (and MMM disabled) demonstrates that it's not even the sheer total number of lines sent through `sendChat()` – it's in fact the number of lines sent through `sendChat()` before control returns to Roll20 from within the sandbox.

The following experiment sends chunks of lines separated by `\n` in a single `sendChat()` call, then schedules the next chunk using `setTimeout()` with a 1-millisecond delay.
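A minimal sketch of what such a chunked-send experiment could look like (the chunk size, speaker name, and helper function are illustrative placeholders, not the original test code; only the 10,000-line total is taken from the experiments described here):

```js
// Sketch: send TOTAL_LINES dummy "!foo" lines, CHUNK_SIZE lines per sendChat()
// call, yielding back to Roll20 between chunks via setTimeout() with a 1 ms delay.
const TOTAL_LINES = 10000;  // total lines per experiment run
const CHUNK_SIZE  = 100;    // lines per sendChat() call (the variable under test)

function sendChunk(start) {
    if (start >= TOTAL_LINES) return;

    const end = Math.min(start + CHUNK_SIZE, TOTAL_LINES);
    const lines = [];
    for (let i = start; i < end; i++) {
        lines.push("!foo " + i);  // dummy command addressed to no installed API script
    }

    // One sendChat() call per chunk, lines separated by \n...
    sendChat("experiment", lines.join("\n"));

    // ...then schedule the next chunk 1 ms later so control returns to Roll20 in between.
    setTimeout(() => sendChunk(end), 1);
}

sendChunk(0);
```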
Supplemental observation: With larger chunks, the first few chunks fly past quickly and then chunks start taking progressively longer until they essentially come to a standstill.

Supplemental experiment: Separating those `!foo` lines with anything other than `\n` (e.g. with `X`), producing the same amount of data but only a single "physical" line, makes all of the experiments above fly by very quickly without any slowdown at all.

Preliminary hypothesis:
1. Roll20 enqueues all `sendChat()` output received until control returns from the API sandbox to Roll20 code before processing it.
2. Upon return from the API sandbox, enqueued messages are parsed, expanded, split into individual messages, and sent individually to all `chat:message` handlers registered by API scripts. (This happens fairly efficiently.)
3. Some aspect of this handling produces JavaScript memory garbage in significantly greater amounts than O(n = number of lines) – perhaps O(n^2), as if it were re-concatenating all chat lines by repeated concatenation of (immutable) string objects (see the sketch below).
4. After all this is done, the JavaScript GC blocks upcoming API sandbox events, including the watchdog heartbeat that keeps the "possible infinite loop detected" forced shutdown at bay.
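To illustrate the pattern suspected in point 3 (a generic sketch, not Roll20's actual code): rebuilding an ever-growing string by repeated concatenation of immutable strings copies on the order of n^2/2 characters in total and discards one intermediate string per line for the GC to sweep up, whereas a single `join()` touches each character only once.

```js
// Generic illustration of the suspected O(n^2) pattern (NOT Roll20's actual code).
function concatQuadratic(lines) {
    let all = "";
    for (const line of lines) {
        // Each assignment copies everything accumulated so far and discards the
        // previous (immutable) intermediate string, piling up work for the GC.
        all = all + line + "\n";
    }
    return all;
}

// The linear alternative touches each character only once.
function concatLinear(lines) {
    return lines.join("\n") + "\n";
}
```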
Rationale:
The "happens outside/after API sandbox code" aspect is motivated by the observation that all logging from within the sandbox indicates that
chat:message
handlers are called in fairly reasonable time for every chat message resulting from the chunk that was sent, and only after all messages have passed bychat:message
handlers does the sandbox become unresponsive to further events.The "number of lines enqueued" aspect (as opposed to "amount of data enqueued") is motivated by the observation that separating "lines" with anything but a like break, producing the same amount of data but only a single physical line, makes the problem go away completely.
The "GC" aspect is motivated by the observation that it takes a few iterations of lengthy chunks until the slowdown kicks in. If it were simple O(n^2) compute inefficiency, the slowdown should be similar for all similarly-sized chunks.
The "O(more than n)" aspect is motivated by the observation that the same total number of lines sent (10,000 lines each experiment) produce massively different outcomes depending only on how many lines are sent per individual chunk.