ivoa-std / ExecutionBroker

IVOA ExecutionBroker service and data model.
Creative Commons Attribution Share Alike 4.0 International

How to handle networking #71

Open aragilar opened 1 month ago

aragilar commented 1 month ago

Looking at the latest draft, there's nothing about networking yet (I'm guessing, @Zarquan, that's only because you've not had time to push what you've got).

My feeling is this can be summarised into two groups:

- networking between an executable and the outside world (e.g. exposing a port so the user can reach it from the public internet)
- networking between executables running inside the platform (e.g. an application talking to a database sidecar)

I think the former is easy to specify in abstract terms, but the latter feels a bit system-specific to me (if the state of k8s networking is any guide)?

aragilar commented 1 month ago

On this, PAWS (our pipeline orchestrator) would need to talk to a Docker socket (or the k8s API), so that's a somewhat interesting setup for the second case.

Zarquan commented 1 month ago

First part, basic networking. Yep, there are plans to add support for this directly in the ExecutionBroker data model. Probably around 80% of the use cases will just need an external network port that can be accessed from the public internet.

For an HTTP web service:

networking:
  - port: 8080      # HTTP port to be exposed outside the platform
    protocol: tcp

Or a webtop desktop application:

networking:
  - port: 3000      # webtop desktop port to be exposed outside the platform
    protocol: tcp
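
The other half of this is what the platform sends back once it has allocated the port. Purely as a sketch (none of these response fields are in the current draft; the names are assumptions), the accepted offer might echo the networking section with the resolved endpoint:

networking:
  - port: 3000
    protocol: tcp
    # hypothetical field, not in the current draft: filled in by the
    # platform once the external port has been allocated
    address: "webtop-1234.platform.example.org:31443"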

Zarquan commented 1 month ago

For communication between executables (e.g. running some analysis with a database sidecar), we could extend the ExecutionBroker model to handle this. However, it might be better to delegate a lot of the complexity to an orchestration system like Docker Compose or Kubernetes.

For example, if the application and database sidecar could be described in a Docker Compose file, then that would become the executable. See #72.

executable:
- type: "https://..../docker-compose"
  spec:
    file: "https://github.com/.../compose.yaml"
    ....

The compose file would contain details of the application and database sidecar, along with the internal filesystem and network connecting them. This would provide access to all of the Docker Compose tools for orchestrating inter-container filesystem and networking, without us having to re-invent any wheels.
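
As a concrete sketch of that idea (the image names, service names and volumes here are purely illustrative, not part of any draft), such a compose.yaml might look something like:

services:
  analysis:
    image: ghcr.io/example/analysis:latest   # illustrative application image
    depends_on:
      - database
    networks:
      - internal
  database:
    image: postgres:16                        # illustrative database sidecar
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - dbdata:/var/lib/postgresql/data       # internal filesystem for the sidecar
    networks:
      - internal
networks:
  internal: {}                                # private network linking the two containers
volumes:
  dbdata: {}

Docker Compose then handles the inter-container networking and storage, and ExecutionBroker only needs to know about the compose file as a whole.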

For a more complex deployment, we could do something similar using Kubernetes, where the application developer wraps the deployment in a Helm chart and that becomes the executable. See #73.

executable:
  type: "https://..../helm-chart"
  spec:
    file: "https://github.com/.../app-deploy.yaml"
    ....

Here the user is asking the platform for space on a Kubernetes cluster to run a Helm chart. The Helm chart would include all the details of how to map storage volumes to PersistentVolumes and PersistentVolumeClaims, and Services to LoadBalancers and network ports.
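
As a sketch of what that might look like inside the chart (the names and ports are illustrative assumptions, not taken from any existing chart), a Service template exposing the application via a LoadBalancer could be:

# templates/service.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-web
spec:
  type: LoadBalancer            # the cluster maps this to an external address
  selector:
    app: {{ .Release.Name }}
  ports:
    - port: 80                  # externally visible port
      targetPort: 8080          # container port inside the pod
      protocol: TCP

Again, the networking details stay inside the chart, so ExecutionBroker only needs to describe the chart itself as the executable.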