blockpy-edu / blockpy

Blockly that's more Python than JavaScript, powered with Skulpt
Apache License 2.0

Performance #69

Closed — acbart closed this issue 3 years ago

acbart commented 4 years ago

Currently, execution can take several whole seconds. Timing tests reveal that the parsing, compilation, and evaluation phases each take about the same amount of time. The major factor comes down to the size/complexity of the Python file being parsed. The biggest offenders are the Pedal heavyweights: tifa.py, stretchy_tree_matching.py, and sandbox.py. A few built-in Python libraries are also taking up more than their fair share (traceback.py and posixpath.py).
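The three phases above can be timed separately. This is a minimal sketch in plain CPython (not BlockPy's actual Skulpt-based harness) just to illustrate the measurement approach:

```python
import ast
import time

SOURCE = "total = sum(i * i for i in range(1000))"

def time_phases(source):
    """Time the parse, compile, and evaluation phases separately."""
    timings = {}

    start = time.perf_counter()
    tree = ast.parse(source)                    # parsing phase
    timings["parse"] = time.perf_counter() - start

    start = time.perf_counter()
    code = compile(tree, "<student>", "exec")   # compilation phase
    timings["compile"] = time.perf_counter() - start

    namespace = {}
    start = time.perf_counter()
    exec(code, namespace)                       # evaluation phase
    timings["evaluate"] = time.perf_counter() - start

    return timings, namespace

timings, namespace = time_phases(SOURCE)
```

In the browser, the same three measurements bracket Skulpt's parse, compile, and run steps instead.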

The crude solution to at least get some speedup is to cache the parsing/compilation phases. This is a trade-off, however, since the compiled versions are quite massive for most of these. Tifa, for instance, is about 10MB when compiled. Just running a bunch of these produced a cache of about 50MB. Plus, we need to be smart about invalidating user-created modules that have changed. I've written code for this, but I've decided against using it since it can balloon instances too easily.
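The invalidation problem can be sidestepped by keying the cache on a hash of the source text, so an edited module never hits a stale entry. A hypothetical sketch (the `CompileCache` class is illustrative, not BlockPy's actual code, which would cache Skulpt output rather than CPython code objects):

```python
import hashlib

class CompileCache:
    """Cache compiled modules keyed by a hash of their source text.

    A changed (user-edited) module hashes differently, so a stale
    entry is simply never looked up again; old entries can also be
    evicted to bound the cache's size.
    """

    def __init__(self):
        self._entries = {}

    def _key(self, filename, source):
        digest = hashlib.sha256(source.encode("utf-8")).hexdigest()
        return (filename, digest)

    def compile(self, filename, source):
        key = self._key(filename, source)
        if key not in self._entries:
            # Cache miss: compile once and remember the result.
            self._entries[key] = compile(source, filename, "exec")
        return self._entries[key]

cache = CompileCache()
first = cache.compile("student.py", "x = 1")
again = cache.compile("student.py", "x = 1")     # cache hit
changed = cache.compile("student.py", "x = 2")   # new hash, recompiled
```

The downside the comment describes remains: every distinct version of a large module adds another multi-megabyte entry, which is why the cache can balloon.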

The best solution would be to convert these massive modules to JS versions. I've been working on a "loose compile" script that tries to convert Python code to Skulpt code in a one-to-one fashion. However, there are a lot of different things involved in a module like Tifa. It'd be smarter to handle this in a more gradual fashion, opportunistically supporting more and more modules as loose compilation becomes more accurate. I do think the potential speed-ups and memory improvements would be massive, though. Optimistically, I would expect we could get close to 1-2 seconds again, if not better. That script can be found in the Skulpt library at the top level. Assuming this approach pans out, we'll need to incorporate it into the compile process.
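To make "one-to-one" concrete, here is a toy stand-in for such a script: it walks the Python AST and emits one JavaScript statement per Python statement, bailing out on anything it doesn't support yet (which is how gradual, opportunistic coverage works). The real loose-compile script targets Skulpt's internal calling conventions; this sketch only emits plain JS-style assignments.

```python
import ast

def loose_compile(source):
    """Translate a tiny subset of Python (simple name assignments)
    into JavaScript, one statement per line.

    Unsupported constructs raise, so a caller can fall back to
    normal Skulpt compilation for modules not yet covered.
    """
    lines = []
    for node in ast.parse(source).body:
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            target = node.targets[0].id
            value = ast.unparse(node.value)   # requires Python 3.9+
            lines.append(f"var {target} = {value};")
        else:
            raise NotImplementedError(ast.dump(node))
    return "\n".join(lines)

js = loose_compile("x = 1\ny = x + 2")
```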

acbart commented 3 years ago

Fortunately, the newest version really helps with performance:

1. Precompiling Pedal and various libraries trades a large amount of space for several seconds of runtime.
2. We keep Pedal around between executions (clearing the report) in order to avoid reloading it into memory. I can't wait to find bugs because of this. But the result is that subsequent executions are instantaneous.
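The second point amounts to a keep-resident pattern: pay the expensive module load once, then reset only the per-execution state between runs. A hypothetical sketch (the `Report` and `ResidentGrader` classes are illustrative stand-ins, not Pedal's actual API):

```python
class Report:
    """Hypothetical stand-in for Pedal's per-execution feedback report."""
    def __init__(self):
        self.feedback = []

    def clear(self):
        self.feedback.clear()

class ResidentGrader:
    """Load the (expensive) grading machinery once and reuse it,
    clearing only the report between student executions.

    The risk noted above: any state NOT cleared here leaks into
    the next run, which is where the bugs will come from.
    """
    def __init__(self):
        self.report = Report()   # expensive setup happens exactly once
        self.runs = 0

    def run(self, submission):
        self.report.clear()      # reset per-run state, keep module loaded
        self.runs += 1
        self.report.feedback.append(f"graded: {submission}")
        return list(self.report.feedback)

grader = ResidentGrader()
grader.run("print('hi')")
second = grader.run("print('bye')")
```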