Hmm, I see that the checks failed. Is this due to the regression in issue 123? Regardless, I am reviewing this now by just reading the code.
Please don't spend much time actually reviewing this. It is just meant for comments and discussion about the direction. The patch is currently not integrated: it focuses exclusively on rewriting the JS worker and Python client, breaking the rest in the process.
This is one major step in my ongoing quest to move to Celery as a queue backend, which will allow all sorts of goodies in TLS Canary 4.0.
Traditionally, the XPCShell JS worker has implemented a stdio-based protocol channel to communicate with the Python world. This turned out to be a fiddly and unstable way of implementing IPC with a sub-process, because it limits communication between JS and Python to the Python process instantiating the worker (usually the main process). As Celery tasks are based on processes, not threads, switching to sockets looks like the simplest and most straightforward way to let each task communicate with its own dedicated JS worker process.
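To give a feel for the direction (none of this is in the patch verbatim), here is a minimal sketch of what the socket side of the Python client could look like, assuming a newline-delimited JSON protocol over a local TCP connection. The class name, method names, port, and message format are made up for illustration, not the actual protocol:

```python
import json
import socket


class WorkerClient:
    """Hypothetical per-task client that talks to one dedicated JS worker
    over its own TCP socket, instead of going through the parent's stdio."""

    def __init__(self, host="127.0.0.1", port=9000):
        self.sock = socket.create_connection((host, port))
        self.buf = self.sock.makefile("rwb")

    def send_command(self, mode, **args):
        # One JSON object per line, terminated by "\n" (assumed framing)
        msg = {"mode": mode, "args": args}
        self.buf.write(json.dumps(msg).encode("utf-8") + b"\n")
        self.buf.flush()

    def read_result(self):
        # Block until the worker sends back one JSON line
        line = self.buf.readline()
        if not line:
            raise ConnectionError("worker closed the connection")
        return json.loads(line)

    def close(self):
        self.buf.close()
        self.sock.close()
```

With something along these lines, each Celery task process could open its own connection to its dedicated worker, rather than funneling everything through the Python process that spawned it.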
So far, this WIP patch adds
I see several immediate advantages of the new code:
- The old code fires `-j` parallel requests at the worker and waits until the whole batch has finished, so one slow host slows down the entire set. The new system allows us to dynamically keep a number of requests "in flight", sending a new request as soon as a result arrives (see the sketch below).

Overall I think that this is something that we should consider landing sooner rather than later. If you agree, mwobensmith, I'll move on with integrating the changes with the rest of the code.
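To make the scheduling idea concrete, here is a rough sketch of the kind of dispatch loop the new client could run. It builds on the hypothetical `WorkerClient` above; the `max_in_flight` parameter, the `"scan"` command, and the `host` field in results are assumptions for illustration:

```python
from collections import deque


def scan_hosts(client, hosts, max_in_flight=25):
    """Keep up to max_in_flight scan requests outstanding at the worker and
    issue a new one as soon as any result comes back, so a single slow host
    no longer stalls an entire batch."""
    queue = deque(hosts)
    pending = set()
    results = {}

    while queue or pending:
        # Top up the in-flight window
        while queue and len(pending) < max_in_flight:
            host = queue.popleft()
            client.send_command("scan", host=host)
            pending.add(host)

        # Wait for whichever host finishes first, then loop to refill
        result = client.read_result()
        host = result["host"]
        pending.discard(host)
        results[host] = result

    return results
```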