Closed spine-o-bot closed 1 year ago
In GitLab by @manuelma on Oct 15, 2020, 01:53
marked this issue as related to #819
In GitLab by @soininen on Oct 15, 2020, 10:22
I'm also out of my comfort zone here, but we could also consider xmlrpc as a basis for the Engine server, as it readily implements a communication protocol between the client and the server.
In GitLab by @manuelma on Oct 15, 2020, 13:49
Ah, xmlrpc indeed looks a bit more appropriate, as it provides support for XML-structured data. Beyond that, what do you think about the concept? Maybe something to discuss in depth at the next engine meeting...
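A minimal sketch of the xmlrpc idea using only the standard library; the `run_workflow` entry point and its return value are hypothetical placeholders, not the actual Engine API:

```python
# Sketch only: a stand-in "engine server" exposing one RPC method.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def run_workflow(workflow_json):
    # Placeholder: a real server would hand the payload to the Spine Engine.
    return f"received {len(workflow_json)} bytes of workflow data"

# Bind to an ephemeral port so the sketch runs anywhere.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False, allow_none=True)
server.register_function(run_workflow)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: Toolbox would connect roughly like this on execute.
proxy = xmlrpc.client.ServerProxy(f"http://localhost:{port}")
result = proxy.run_workflow('{"items": {}}')
print(result)  # received 13 bytes of workflow data
server.shutdown()
```

The appeal is that xmlrpc handles marshalling and the request/response cycle, so the server code reduces to registering plain Python functions.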
In GitLab by @soininen on Oct 15, 2020, 14:26
I think it is a good idea to discuss this in the next meeting. I for sure don't know what options are available out there.
In GitLab by @manuelma on Oct 16, 2020, 04:16
mentioned in merge request !62
In GitLab by @soininen on Oct 16, 2020, 08:11
One design consideration: it might be a good idea to keep the server side of the engine separate from the parts actually executing the DAG, so that (at least theoretically) we could swap the server that communicates over network sockets for a "server" that communicates over pipes, Qt signals, or other protocols. I think separating responsibilities this way might lead to cleaner, more testable, and more flexible code in the end. For instance, you don't need to start the entire server just to test DAG execution.
In GitLab by @manuelma on Oct 16, 2020, 08:19
I think so, yes. The server will only handle communication, not execution; execution should stay as it is now (or as it will be, once we do that new merge).
One complication I foresee is about the Tool, or essentially any executable that needs to write files to disk. How does that work when the execution happens in a remote machine?
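The separation discussed above could look roughly like this; `execute_dag` and `EngineFacade` are illustrative names, not the real Engine classes:

```python
# Sketch: the DAG-executing part knows nothing about transports, so a
# socket server, a Qt-signal "server", or a plain call can all drive it.
import json

def execute_dag(workflow):
    # Stand-in for the real engine: just report the item names in order.
    return list(workflow.get("items", {}))

class EngineFacade:
    """Transport-agnostic entry point; any server is a thin wrapper over this."""

    def run(self, workflow_bytes):
        workflow = json.loads(workflow_bytes)
        return execute_dag(workflow)

# Testing DAG execution then needs no server at all:
facade = EngineFacade()
print(facade.run('{"items": {"Importer": {}, "Tool": {}}}'))  # ['Importer', 'Tool']
```

With this layering, the network server, a local in-process "server", and the unit tests all call the same facade.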
In GitLab by @soininen on Oct 16, 2020, 08:41
> One complication I foresee is about the Tool, or essentially any executable that needs to write files to disk. How does that work when the execution happens in a remote machine?
My first approximation would be to recreate the project directory structure (at least a skeleton one) on the remote machine, as well as the same work directories for Tool etc. as we would use for local execution. Basically, mirror the essential parts of the local execution environment on the remote system. Tool, Exporter and Gimlet could then write their outputs as they would when executing locally.
The big elephant in the room is the input files: how does the remote machine access these files? Do users need to copy the files on the remote system in exactly the same locations as on the user's local system? Is some kind of file transfer needed between the local and remote systems?
If one had all the data files in a self-contained project directory, it would be as easy as moving the project as-is to the remote system and executing it there.
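Mirroring the skeleton could be as simple as the following sketch; the directory list here is a made-up example, since real Toolbox projects have their own layout:

```python
# Sketch: recreate an assumed minimal project skeleton on the "remote" side.
import os
import tempfile

# Hypothetical relative directories a project execution might need.
SKELETON = [".spinetoolbox", "work", "items/tool", "items/exporter"]

def recreate_skeleton(remote_root):
    """Create the essential project directories under remote_root."""
    for rel in SKELETON:
        os.makedirs(os.path.join(remote_root, rel), exist_ok=True)

remote_root = tempfile.mkdtemp()  # stand-in for the remote project root
recreate_skeleton(remote_root)
print(sorted(os.listdir(remote_root)))  # ['.spinetoolbox', 'items', 'work']
```

The open question in the thread, getting the input files into that skeleton, is a separate problem from creating the directories themselves.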
In GitLab by @manuelma on Oct 16, 2020, 08:46
Sounds good. So if an input file lives outside the project folder, when we recreate the project folder on the remote machine, we'd need to copy the outside input file to some dedicated location in the remote project folder. I'm pretty sure we can find a nice definition regarding where these files should go. But my question is, how does file transfer work across the network? I imagine something like shutil.copy (if I remember the syntax correctly) is not an option...
In GitLab by @soininen on Oct 16, 2020, 09:02
> how does file transfer work across the network?
We could transfer files as Base64 data if we end up using xmlrpc, for example, or just leave file transferring to the user in the very first alpha release. This sounds like another topic we should discuss in the engine meeting to assess our options.
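The standard library already does the Base64 part: `xmlrpc.client.Binary` wraps raw bytes and marshals them as a `<base64>` element in the XML payload. A sketch of what that looks like on the wire (the `put_file` method name is hypothetical):

```python
# Sketch: round-trip raw bytes through the xmlrpc marshalling layer.
import xmlrpc.client

payload = b"some input file contents"
wrapped = xmlrpc.client.Binary(payload)

# dumps() shows what actually goes over the wire: a <base64> element.
wire = xmlrpc.client.dumps((wrapped,), methodname="put_file")
print("<base64>" in wire)  # True

# The receiving end gets the original bytes back unchanged.
params, method = xmlrpc.client.loads(wire)
print(params[0].data == payload)  # True
```

So a file-transfer RPC would not need any hand-rolled encoding; the cost is the roughly 33% size overhead Base64 adds, which matters for large input files.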
In GitLab by @manuelma on Oct 16, 2020, 09:30
Let's see, security is also important.
In GitLab by @manuelma on Nov 7, 2020, 14:44
mentioned in commit 945fc44da0647d1b7e95b56fedac1d4a05155974
@pekkapaa This issue should be of interest to you.
The functionality currently:
To be done:
Security support has been added based on Zero-MQ:
To be done:
The current status:
If this issue is merged in the future, the following issues need to be solved:
@pekkapaa Please do a merge from master to the 'server' branch on the Spine Toolbox side. Let's see if the tests run a bit better after that in GitHub. Also, consider doing a WIP PR when you're ready.
master was merged to the server branch. Now the installation and unit tests seem to work in GitHub. However, an error is encountered (at spinetoolbox/project_commands.py) when the toolbox is started: "ImportError: cannot import name 'Jump' from 'spine_engine.project_item.connection'"
In order to integrate/merge this issue, the following should be considered:
I wonder if the interface to the configuration could be similar to the interface of selecting the kernel/console for Python. There could be multiple servers available and you choose which one to use.
This is tricky, since which server to use would ideally be a project- and tool-specific choice. On the other hand, most of the time you're just using the same one for the whole project.
First version has been implemented. To be continued in other issues.
In GitLab by @manuelma on Oct 15, 2020, 01:52
After we manage to get #819 working with the experimental engine, I believe the next step is to create some sort of network server that runs the spine engine. I'm thinking of using socketserver, but I don't know if there's a better solution.
For starters, toolbox can start a local engine server at startup. Then, when the user presses execute, toolbox creates a client that connects to the server and sends the workflow as bytes (similar to what's done when calling SpineEngine.from_json). The engine server receives the workflow, runs it, and sends bytes back to the client so toolbox knows what's happening. How does that sound? Does it make any sense?
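The flow described above can be sketched with `socketserver` from the standard library. The one-JSON-message-per-line protocol, the handler class, and the reply format are all illustrative assumptions, not the real Engine protocol:

```python
# Sketch: a local engine server; the client sends a workflow as bytes
# and the server replies so the client knows what is happening.
import json
import socket
import socketserver
import threading

class EngineRequestHandler(socketserver.StreamRequestHandler):
    def handle(self):
        workflow = json.loads(self.rfile.readline())
        # A real handler would call SpineEngine.from_json(...) here and
        # stream execution events back instead of one summary reply.
        reply = {"event": "done", "item_count": len(workflow.get("items", {}))}
        self.wfile.write(json.dumps(reply).encode() + b"\n")

# Port 0 picks a free ephemeral port for the sketch.
server = socketserver.ThreadingTCPServer(("localhost", 0), EngineRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: what Toolbox would do when the user presses execute.
with socket.create_connection(server.server_address) as conn:
    conn.sendall(b'{"items": {"Tool": {}}}\n')
    reply = json.loads(conn.makefile().readline())

print(reply)  # {'event': 'done', 'item_count': 1}
server.shutdown()
```

Compared to xmlrpc, this gives full control over the protocol (e.g. streaming events during execution) at the cost of defining the message framing yourself.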