Oakestra

A Lightweight Hierarchical Orchestration Framework for Edge Computing
https://oakestra.io
Apache License 2.0

Oakestra is an orchestration platform designed for Edge Computing. Popular orchestration platforms such as Kubernetes or K3s struggle to maintain workloads across heterogeneous and constrained devices. Oakestra is built from the ground up to support computation at the edge in a flexible way.

🌐 Read more about the project at: oakestra.io

πŸ“š Check out the project wiki at: oakestra.io/docs



🌳 Get Started

Before deploying your first application, we must create a fully functional Oakestra Root πŸ‘‘, to which we attach the clusters πŸͺ΅, and to each cluster we attach at least one worker node πŸƒ.

In this get-started guide, we place everything on the same machine. More complex setups can be composed following our wiki at oakestra.io/docs/getstarted/get-started-cluster.

Requirements

A Linux machine with Docker and Docker Compose v2 installed.

Your first cluster πŸͺ΅

Let's start our Root, the dashboard, and a cluster orchestrator on your machine. We call this setup 1-DOC, which stands for One Device, One Cluster, meaning that all the components are deployed locally.

curl -sfL oakestra.io/getstarted.sh | sh - 

You can turn off the cluster using docker compose -f ~/oakestra/1-DOC.yaml down
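To check that everything came up correctly, you can list the state of the 1-DOC containers (a quick sanity check, assuming the same compose file path as above):

docker compose -f ~/oakestra/1-DOC.yaml ps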

Your first worker node πŸƒ

Download and install the Node Engine and the Network Manager:

curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/InstallOakestraWorker.sh | sh -  

Configure the Network Manager by editing /etc/netmanager/netcfg.json as follows:

{
  "NodePublicAddress": "<IP ADDRESS OF THIS DEVICE>",
  "NodePublicPort": "<PORT REACHABLE FROM OUTSIDE, use 50103 as default>",
  "ClusterUrl": "<IP Address of cluster orchestrator or 0.0.0.0 if deployed on the same machine>",
  "ClusterMqttPort": "10003"
}
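For instance, a single-machine setup could look like the following (the address below is illustrative; replace it with the actual IP of your device):

{
  "NodePublicAddress": "192.168.1.10",
  "NodePublicPort": "50103",
  "ClusterUrl": "0.0.0.0",
  "ClusterMqttPort": "10003"
}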

Start the NetManager on port 6000:

sudo NetManager -p 6000

On a different shell, start the NodeEngine with the -n 6000 parameter to connect to the NetManager:

sudo NodeEngine -n 6000 -a <Cluster Orchestrator IP Address>

If you see the NodeEngine reporting metrics to the Cluster...

πŸ† Success!

βœ¨πŸ†•βœ¨ If the worker node machine has KVM installed and supports nested virtualization, you can add the -u=true flag to the NodeEngine startup command to enable Oakestra Unikernel deployment support on this machine.
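To recap, a typical worker-node bring-up uses two shells (a sketch based on the commands above; the Unikernel flag only applies to KVM-capable machines):

# Shell 1: start the network overlay manager on port 6000
sudo NetManager -p 6000

# Shell 2: start the node engine and attach it to the NetManager and the cluster
sudo NodeEngine -n 6000 -a <Cluster Orchestrator IP Address>

# Optional: enable Unikernel deployments (KVM + nested virtualization required)
sudo NodeEngine -n 6000 -a <Cluster Orchestrator IP Address> -u=true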

Your first application πŸ’»

Let's use the dashboard to deploy your first application.

Navigate to http://SYSTEM_MANAGER_URL and log in with the default credentials:

Deactivate the Organization flag for now (unlike what is depicted in the reference image).

Add a new application, and specify the app name, namespace, and description. N.b.: Max 30 alphanumeric characters. No symbols.

Then, create a new service using the button.

Fill in the form using the following values. N.b.: Max 30 alphanumeric characters. No symbols.

Service name: nginx
Namespace: test
Virtualization: Container
Memory: 100MB
Vcpus: 1
Port: 80
Code: docker.io/library/nginx:latest

Finally, deploy the application using the deploy button.

Check the application status, IP address, and logs.


The Node IP field represents the address where you can reach your service. Let's now use the browser to navigate to the IP used by this application (131.159.24.51 in this example).
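You can verify the same from a terminal, e.g. with curl (substitute the Node IP of your own instance):

curl http://131.159.24.51:80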


🎯 Troubleshoot

πŸ› οΈ How to create a multi-cluster setup

Root Orchestrator

Initialize a standalone root orchestrator.

On a Linux machine, first install Docker and Docker Compose v2.

Then configure the address used by the dashboard to reach your APIs by running:

export SYSTEM_MANAGER_URL=<Address of current machine>
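For example, on most Linux machines you can pick the first non-loopback address automatically (a convenience sketch, not required):

# Use the machine's first non-loopback IP address
export SYSTEM_MANAGER_URL=$(hostname -I | awk '{print $1}')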

To run the Root orchestrator from the pre-compiled images:

If you wish to build the Root Orchestrator by yourself from source code, clone the repo and run:

cd root_orchestrator/
docker-compose up --build 

The following ports are exposed:

Cluster Orchestrator

For each cluster, we need at least one machine running the cluster orchestrator.

## Choose a unique name for your cluster
export CLUSTER_NAME=My_Awesome_Cluster

## Optional: Give a name or geo coordinates to the current location. Default location set to coordinates of your IP
#export CLUSTER_LOCATION=My_Awesome_Apartment

## IP address where the root orchestrator can be reached to access the APIs
export SYSTEM_MANAGER_URL=<IP address>
# Note: Use a non-loopback interface IP (e.g. any of your real interfaces that have internet access).
# "0.0.0.0" leads to server issues

You can run the cluster orchestrator using the pre-compiled images:

If you wish to build the cluster orchestrator yourself, simply clone the repo and run:

export CLUSTER_LOCATION=My_Awesome_Apartment # When building from source, this is no longer optional
cd cluster_orchestrator/
docker-compose up --build 

The following ports are exposed:

Worker nodes

For each worker node, you can either use the pre-compiled binaries (check 🌳 Get Started) as usual or compile them on your own.

Build your node engine

Requirements

Compile and install the binary with:

cd go_node_engine/build
./build.sh
./install.sh $(dpkg --print-architecture)

Then configure the NetManager and perform the startup as usual.

N.b. each worker node can now be configured to work with a different cluster.
N.b. you can disable the Overlay Network (and therefore avoid using the NetManager) by using the -n -1 flag at NodeEngine startup.
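For example, to start a worker node without the overlay network (no NetManager needed):

sudo NodeEngine -n -1 -a <Cluster Orchestrator IP Address>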

🎼 Deployment descriptor

Together with the application, it's possible to perform a deployment by passing a deployment descriptor (or SLA) in .json format to the APIs or the frontend.

Since version 0.4, Oakestra (previously EdgeIO) uses the following deployment descriptor format.

E.g.: deploy_curl_application.yaml

{
  "sla_version" : "v2.0",
  "customerID" : "Admin",
  "applications" : [
    {
      "applicationID" : "",
      "application_name" : "clientsrvr",
      "application_namespace" : "test",
      "application_desc" : "Simple demo with curl client and Nginx server",
      "microservices" : [
        {
          "microserviceID": "",
          "microservice_name": "curl",
          "microservice_namespace": "test",
          "virtualization": "container",
          "cmd": ["sh", "-c", "curl 10.30.55.55 ; sleep 5"],
          "memory": 100,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "storage": 0,
          "code": "docker.io/curlimages/curl:7.82.0",
          "state": "",
          "port": "",
          "added_files": [],
          "constraints":[]
        },
        {
          "microserviceID": "",
          "microservice_name": "nginx",
          "microservice_namespace": "test",
          "virtualization": "container",
          "cmd": [],
          "memory": 100,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "storage": 0,
          "code": "docker.io/library/nginx:latest",
          "state": "",
          "port": "80:80/tcp",
          "addresses": {
            "rr_ip": "10.30.55.55"
          },
          "added_files": []
        }
      ]
    }
  ]
}

This deployment descriptor example describes one application named clientsrvr in the test namespace with two microservices: a curl client that queries the address 10.30.55.55 every 5 seconds, and an nginx server exposed on port 80 at the fixed rr_ip address 10.30.55.55.

This is a detailed description of the deployment descriptor fields currently implemented:

Dashboard SLA descriptor

From the dashboard you can create the application graphically and set the services via SLA. In that case you need to submit a different SLA, containing only the microservice list, e.g.:

{
      "microservices" : [
        {
          "microserviceID": "",
          "microservice_name": "nginx",
          "microservice_namespace": "test",
          "virtualization": "container",
          "cmd": [],
          "memory": 100,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "storage": 0,
          "code": "docker.io/library/nginx:latest",
          "state": "",
          "port": "",
          "addresses": {
            "rr_ip": "10.30.55.55"
          },
          "added_files": [],
          "constraints": []
        }
      ]
}

🩻 Use the APIs to deploy a new application and check clusters status

Login

After running a cluster, you can use the debug OpenAPI page to interact with the APIs and use the infrastructure.

Connect to <root_orch_ip>:10000/api/docs

Authenticate using the following procedure:

  1. Locate the login method and use the try-out button.
  2. Use the default Admin credentials to log in.
  3. Copy the resulting login token.
  4. Go to the top of the page and authenticate with this token.
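The same login flow can be scripted. The sketch below assumes the login route is POST /api/auth/login with the default Admin/Admin credentials and a token field in the response; verify the exact path and payload on the OpenAPI page:

# Log in and extract the token (route, payload, and field name are assumptions; requires jq)
TOKEN=$(curl -s -X POST http://<root_orch_ip>:10000/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"Admin","password":"Admin"}' | jq -r .token)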

Register an application and the services

After you authenticate with the login function, you can try out to deploy the first application.

  1. Upload the deployment descriptor to the system. You can try using the deployment descriptor above.

The response contains the application id and the ids of all the application's services. Now the application and the services are registered to the platform. It's time to deploy the service instances!

You can always remove or create a new service for the application using the /api/services endpoints.
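The registration can also be done via curl. The endpoint below (POST /api/application) and the Bearer auth scheme are assumptions; check the OpenAPI page for the exact route. Save the deployment descriptor above as deployment.json (name is arbitrary) first:

# Register the application described in deployment.json
curl -X POST http://<root_orch_ip>:10000/api/application \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @deployment.json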

Deploy an instance of a registered service

  1. Trigger the deployment of a service instance using POST /api/service/{serviceid}/instance

Each call to this endpoint generates a new instance of the service.
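For example (using the token obtained during login; the Bearer scheme is an assumption):

# Each POST creates one new instance of the registered service
curl -X POST http://<root_orch_ip>:10000/api/service/<serviceid>/instance \
  -H "Authorization: Bearer $TOKEN"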

Monitor the service status

  1. With GET /api/aplications/<userid> (or simply /api/aplications/ if you're admin) you can check the list of deployed applications.
  2. With GET /api/services/<appid> you can check the services attached to an application.
  3. With GET /api/service/<serviceid> you can check the status of all the instances of that service.
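A quick status check from the command line, using the endpoint paths listed above verbatim (auth header scheme as assumed earlier):

# List deployed applications, then drill down into services and instances
curl -H "Authorization: Bearer $TOKEN" http://<root_orch_ip>:10000/api/aplications/
curl -H "Authorization: Bearer $TOKEN" http://<root_orch_ip>:10000/api/services/<appid>
curl -H "Authorization: Bearer $TOKEN" http://<root_orch_ip>:10000/api/service/<serviceid>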

Undeploy

Cluster Status

πŸ•ΈοΈ Networking

The network component is documented at: https://www.oakestra.io/docs/networking

πŸ“ˆ Monitoring

The provided infrastructure monitoring stack is built on the Grafana OSS toolset. It monitors both root and cluster services for comprehensive visibility. The default Grafana dashboard credentials can be used:

The provisioned dashboards can be accessed at:

More details about the monitoring stack can be found in config/README.md.