OpenFn / apollo

GNU Lesser General Public License v2.1

Service Architecture #41

Closed by josephjclark 6 months ago

josephjclark commented 9 months ago

This is a high-level issue to discuss where the AI gen server fits in OpenFn's architecture.

Overview

I've identified a number of data generation services which we need to support Lightning and the CLI:

It seems to me that one server, sitting at gen.openfn.org or data.openfn.org or something, should provide all these services (or perhaps proxy to them).

Architecture

This is the architecture we'll need:

[architecture diagram]

Editable link: https://excalidraw.com/#json=Z43WyHx3CZcjMrqN1wjfK,5Lf2_ZKsWc168Np7C9Fh0Q

Questions

Is this really all one server?

I think so!

We've talked about building the metadata and docs services inside Lightning. But actually they have nothing to do with Lightning.

But we could have a Python server handling the AI stuff now, and worry about the Lightning stuff as a separate server later on.

It just seems like we should be able to set up one solution now that we can expand on later.

What language should it be in?

Ideally everything would be in one language, probably Node (the metadata and adaptor docs services are JavaScript-based, come to think of it).

The existing AI stuff doesn't really need to be Python - but the ML community is likely to want to use Python to contribute new services.

See https://github.com/OpenFn/gen/issues/39

Can we set up a really strong JS-python interface?

I think the ideal here is a JS server with a really strong pattern for calling out to Python code. So you'd define an endpoint in JavaScript, but you'd call out to a Python module to generate the result.

Maybe we have a python server running internally, which the JS server proxies to for certain calls.

What about fine tuning?

The only thing we have so far that I think MUST be Python is the fine-tuning we've got set up for the llama and gpt models. The training is all written in Python and looks relatively heavyweight.

But I also don't think that's part of the webserver API - it's more like an offline script. So it could sit as a Python module inside a Node repo, for example.

josephjclark commented 9 months ago

It might be worth continuing to develop the AI server in Python, so that we don't alienate the ML crowd, and deploying a prototype of that. We can worry about the other stuff later.

I don't think it's too much effort to get a new python server up and running - maybe a week? Even if it's throwaway that seems acceptable and lets us release a prototype.

josephjclark commented 9 months ago

We could look at node-calls-python as a node-python bridge: https://www.npmjs.com/package/node-calls-python

It seems to have C bindings, so it might be neat.

josephjclark commented 9 months ago

Or python-shell has way more downloads https://github.com/extrabacon/python-shell

But it doesn't have any C bindings and uses stdout to pipe data. Each script is run as a new child process, so it's not much different from calling child_process.spawn yourself.

josephjclark commented 8 months ago

I've done a bit of prototyping and I think this is the right architecture:

That should give us a multi-language server which is easy to add new services to.