If a notebook contains a paragraph that uses the Spark interpreter, the interpreter should be started when the note is opened. The only exception is when the notebook contains a Spark configuration paragraph; in that case, the interpreter should remain closed until the paragraphs are run, so that the configuration is applied properly.
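A minimal sketch of that decision, using hypothetical `Paragraph` and `shouldPreloadSpark` names rather than the project's actual classes: preload only when the note uses the Spark interpreter and contains no configuration paragraph.

```java
import java.util.List;

// Minimal sketch of the preload decision, assuming hypothetical Paragraph
// objects that expose their interpreter binding and whether they are a
// Spark configuration paragraph. Not the real notebook API.
final class PreloadDecision {

    static final class Paragraph {
        final String interpreterName;   // e.g. "spark", "md"
        final boolean isConfiguration;  // true for a Spark configuration paragraph

        Paragraph(String interpreterName, boolean isConfiguration) {
            this.interpreterName = interpreterName;
            this.isConfiguration = isConfiguration;
        }
    }

    /**
     * Preload the Spark interpreter only when the note actually uses it
     * and contains no configuration paragraph; a configuration paragraph
     * must run first so its settings are applied before the process starts.
     */
    static boolean shouldPreloadSpark(List<Paragraph> paragraphs) {
        boolean usesSpark = paragraphs.stream()
                .anyMatch(p -> "spark".equals(p.interpreterName));
        boolean hasConfigParagraph = paragraphs.stream()
                .anyMatch(p -> p.isConfiguration);
        return usesSpark && !hasConfigParagraph;
    }
}
```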
[x] Preloader should run automatically when opening a note
[x] Preloader should report back when process is complete
[x] Preloader shouldn't run when the notebook contains a paragraph bound to the configuration interpreter
[x] Preloader should be able to preload a subset of interpreters
[x] Configuration should be done via Interpreter properties
[x] Handle user-scoped interpreters
[x] Handle note-scoped interpreters
[x] Should have logic to avoid multiple concurrent calls to interpreter.open() when several users are present at the same time (see the sketch after this list)
[x] Should run in parallel so the paragraph isn't locked while interpreters are loading
[x] Unit tests present
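The two concurrency items above could be covered by keying in-flight preloads on the exact interpreter ID, so concurrent openers share a single asynchronous open() call. The sketch below assumes a hypothetical openInterpreter helper standing in for the real interpreter.open(); it illustrates one way to meet the requirements, not the project's actual implementation.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the concurrency guard. Keyed by the exact interpreter id so
// user- and note-scoped instances are never conflated.
final class Preloader {

    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final Map<String, CompletableFuture<Void>> inFlight = new ConcurrentHashMap<>();

    /**
     * Starts (or joins) an asynchronous preload for the given interpreter id.
     * computeIfAbsent guarantees that concurrent callers -- e.g. several users
     * opening the same note -- share one open() call instead of racing.
     */
    CompletableFuture<Void> preload(String interpreterId) {
        return inFlight.computeIfAbsent(interpreterId, id ->
                CompletableFuture
                        .runAsync(() -> openInterpreter(id), pool)
                        // report completion and allow a later retry if open() failed
                        .whenComplete((ok, err) -> inFlight.remove(id)));
    }

    private void openInterpreter(String id) {
        // placeholder for the real interpreter.open() call
    }
}
```

Running the open() call on a separate executor keeps the paragraph responsive while interpreters are loading, and the returned future gives the preloader a hook to report back when the process is complete.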
Some interpreters, such as Spark, use a shared process but report whether they have been "opened" separately.
Interpreters are identified by their IDs, which can include user-supplied strings. This means that interpreters must be matched by their exact ID.
Additional information is on the internal page; search for "Spark Preloader".
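A small sketch of the identification and opened-state bookkeeping described above, with the class name and ID handling assumed for illustration: the opened flag is tracked per exact ID even when several IDs share one process, and lookups never rely on prefix or substring matching because IDs can embed user-supplied text.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of per-interpreter "opened" bookkeeping. Several ids (e.g. per user
// or per note) may map to the same shared Spark process, yet each id reports
// its own opened state, and all lookups use the exact id string.
final class InterpreterState {

    // opened-state per exact interpreter id
    private final Map<String, Boolean> openedById = new ConcurrentHashMap<>();

    void markOpened(String interpreterId) {
        openedById.put(interpreterId, Boolean.TRUE);
    }

    boolean isOpened(String interpreterId) {
        // exact-key lookup only; never match on a prefix of the id
        return openedById.getOrDefault(interpreterId, Boolean.FALSE);
    }
}
```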