Ok, the project is actually ambitious, and there are many gray areas I'm not yet sure how to cover.
But well, I have to try something to see what the actual issues are.
And, as I mentioned above, this is only an initial sketchy plan.
Let's assume for a moment that we have somehow been able to solve #11. Now we're left with the problem of actually choosing a backend to run.
Before, this was done by setting a global object in Python. That's definitely not an option any longer.
The problem: our executable is written in language X, and the backend is in language Y (you can assume that neither of them is Rust, to avoid special cases; that one will also be covered by the general mechanism).
Now, the executable will link or dynamically import `qibo-core`, and create a `qibo-core` circuit. So, whichever language X is, the user can interact with `qibo-core` through the `qibo-core-X` API.

However, `qibo-core` will not know all the possible backends in advance (the backends depend on `qibo-core`, not the opposite), so the circuit will have to find which backend to use during execution. A backend might even have to be spawned in a separate process: e.g. if the executable is in C, but the backend is in Python, either the user should link their program with a Python interpreter (not particularly nice) or a Python interpreter should be launched separately.

So, how could we interact with a backend created in another process? And how do we create it in the first place?
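To make the intended flow a bit more concrete, here is a toy sketch (in Rust, standing in for "language X"; none of the names below are the actual `qibo-core` API): the user only builds the circuit through the API, and the backend is merely named at execution time, so `qibo-core` has to figure out later how to reach it.

```rust
// Illustrative only: a stand-in for the user-side flow, not the real qibo-core API.
struct Circuit {
    gates: Vec<String>,
}

impl Circuit {
    fn new() -> Self {
        Self { gates: Vec::new() }
    }

    fn add(&mut self, gate: &str) {
        self.gates.push(gate.to_string());
    }

    /// The backend is only named here, at execution time: qibo-core has to
    /// figure out *how* to reach it (same process, or a separately spawned one).
    fn execute_on(&self, backend_name: &str) {
        println!("executing {} gates on '{}'", self.gates.len(), backend_name);
    }
}

fn main() {
    let mut circuit = Circuit::new();
    circuit.add("H 0");
    circuit.add("CNOT 0 1");
    circuit.execute_on("numpy");
}
```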
Most likely, we should provide different types of backend instantiation; this is what I have in mind.
## Creating the backend
Here I see essentially two ways:
- the backend is instantiated by the user executable itself, and a handle to it is passed to `qibo-core` (à la `Box<dyn Backend>`), and the general backend interface is called during circuit execution (and whatever else a backend is doing)
- the backend is only identified by its name, and `qibo-core` will fork a process for it

And the way I know to create processes is running executables. Thus, the backends should advertise themselves as executables available on `PATH` (possibly matching some kind of pattern, like `"numpy" -> qibo-backend-numpy`, `"tensorflow" -> qibo-backend-tensorflow`, and so on...).

In this case, another kind of backend object is created in `qibo-core`, holding the PID of the process.

Something like the following:
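A minimal sketch, assuming nothing about the real `qibo-core` types (all names below are illustrative): a single handle type covering both the user-provided instance and a process spawned from an executable found on `PATH`.

```rust
use std::process::{Child, Command};

/// General backend interface, implemented by backends living in the same process.
trait Backend {
    fn apply_gate(&mut self, gate: &str);
}

/// One handle covering both instantiation modes.
enum BackendHandle {
    /// The user instantiated the backend and handed qibo-core a pointer to it.
    UserInstance(Box<dyn Backend>),
    /// The backend lives in another process, spawned by qibo-core
    /// (keeping the child handle, i.e. essentially its PID).
    Spawned { child: Child },
}

impl BackendHandle {
    /// Spawn a backend advertised on PATH, e.g. "numpy" -> qibo-backend-numpy.
    fn spawn(name: &str) -> std::io::Result<Self> {
        let child = Command::new(format!("qibo-backend-{name}")).spawn()?;
        Ok(BackendHandle::Spawned { child })
    }
}
```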
## Executing circuits
Once we're in one of the cases above, we should actually send instructions to the backend to operate.
I still have to think carefully about this part, but the sketchy concept is the following.
In case of `handle: UserInstance`, we just run `handle.backend.apply_gate(gate: Gate)`, and we call it a day (here the challenge is just to be able to pass the typed pointer through the API, since using traits would be a challenge... or just impossible...).

For `Spawned`, we should set up some kind of communication. Since the two processes will actually run on the same machine, the best would be to use some kind of IPC, and implement some kind of client-server communication:

- the `qibo-core` part instantiated by the user executable (through the API) will act as a client, sending instructions
- the `qibo-core` part, instantiated within the spawned backend process, will act as a server, executing on the actual backend (e.g. NumPy, or Qulacs)

This could be implemented while reducing copies, since the two processes could share some memory.
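As a rough sketch of the client side of this split (purely illustrative: the one-instruction-per-line protocol over stdin/stdout pipes and all names are assumptions, and a real implementation could use shared memory or another IPC channel to reduce copies):

```rust
use std::io::{BufRead, BufReader, Write};
use std::process::{Child, ChildStdin, ChildStdout, Command, Stdio};

/// Client side, living in the user executable: it talks to the spawned backend
/// process over its stdin/stdout pipes, one instruction per line.
struct SpawnedBackend {
    child: Child,
    to_backend: ChildStdin,
    from_backend: BufReader<ChildStdout>,
}

impl SpawnedBackend {
    fn spawn(name: &str) -> std::io::Result<Self> {
        let mut child = Command::new(format!("qibo-backend-{name}"))
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()?;
        let to_backend = child.stdin.take().expect("stdin is piped");
        let from_backend = BufReader::new(child.stdout.take().expect("stdout is piped"));
        Ok(Self { child, to_backend, from_backend })
    }

    /// Send one instruction (e.g. "apply_gate H 0") and block until the server
    /// side, running inside the backend process, replies with a result line.
    fn send(&mut self, instruction: &str) -> std::io::Result<String> {
        writeln!(self.to_backend, "{instruction}")?;
        let mut reply = String::new();
        self.from_backend.read_line(&mut reply)?;
        Ok(reply)
    }
}
```

The matching server loop inside the backend process would read instructions line by line and dispatch them to the actual backend (e.g. NumPy), writing back one result per instruction.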
If we succeed with this mechanism, the backends could be completely decoupled from the users' code, and even act as a small tunnel for some remote connection.