Oakestra is an orchestration platform designed for Edge Computing. Popular orchestration platforms such as Kubernetes or K3s struggle to maintain workloads across heterogeneous and constrained devices. Oakestra is built from the ground up to support computation at the edge in a flexible way.
Read more about the project at: oakestra.io
Check out the project wiki at: oakestra.io/docs
Before deploying your first application, we must create a fully functional Oakestra Root, to which we attach the clusters, and to each cluster we attach at least one worker node.
In this get-started guide, we place everything on the same machine. More complex setups can be composed following our wiki at oakestra.io/docs/getstarted/get-started-cluster.
Let's start our Root, the dashboard, and a cluster orchestrator on your machine. We call this setup 1-DOC, which stands for One Device One Cluster, meaning that all the components are deployed locally.
curl -sfL oakestra.io/getstarted.sh | sh -
You can turn off the cluster using
docker compose -f ~/oakestra/1-DOC.yaml down
Download and install the Node Engine and the Network Manager:
curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/InstallOakestraWorker.sh | sh -
Configure the Network Manager by editing /etc/netmanager/netcfg.json as follows:
{
"NodePublicAddress": "<IP ADDRESS OF THIS DEVICE>",
"NodePublicPort": "<PORT REACHABLE FROM OUTSIDE, use 50103 as default>",
"ClusterUrl": "<IP Address of cluster orchestrator or 0.0.0.0 if deployed on the same machine>",
"ClusterMqttPort": "10003"
}
Start the NetManager on port 6000
sudo NetManager -p 6000
On a different shell, start the NodeEngine with the NetManager port parameter set to 6000 so it connects to the NetManager.
sudo NodeEngine -a <Cluster Orchestrator IP Address>
If you see the NodeEngine reporting metrics to the Cluster...
Success!
To enable Oakestra Unikernel deployment support for this machine, add -u=true to the NodeEngine startup command.

Let's use the dashboard to deploy your first application.
Navigate to http://SYSTEM_MANAGER_URL
and login with the default credentials:
Admin
Admin
Deactivate the Organization flag for now (unlike what is depicted in the reference image).
Add a new application, and specify the app name, namespace, and description. N.b.: Max 30 alphanumeric characters. No symbols.
Then, create a new service using the button.
Fill the form using the following values: N.b.: Max 30 alphanumeric characters. No symbols.
Service name: nginx
Namespace: test
Virtualization: Container
Memory: 100MB
Vcpus: 1
Port: 80
Code: docker.io/library/nginx:latest
Finally, deploy the application using the deploy button.
Check the application status, IP address, and logs.
The Node IP field represents the address where you can reach your service. Let's now use our browser to navigate to the IP used by this application, 131.159.24.51 in this example.
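If you prefer the terminal, a quick reachability check against the Node IP shown above (131.159.24.51 in this example) could be:

curl http://131.159.24.51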
After exporting the env variables at step 1, if you're using sudo with docker-compose, remember the -E parameter (it preserves the exported environment variables).
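For example, a sketch of bringing the 1-DOC setup up again with the exported variables preserved under sudo:

sudo -E docker compose -f ~/oakestra/1-DOC.yaml up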
Something is off at the root level. Most likely, the cluster network component is not receiving a subnetwork from the root. Make sure all the root components are running.
The cluster network components are not reachable. Either they are not running, or the config file /etc/netmanager/netcfg.json must be updated.
There is no worker node with the specified capacity or no worker node deployed at all. Are you sure the worker node startup was successful?
The node IP is currently reported from the cluster orchestrator's perspective. If it shows a different IP than expected, it's probably the IP of the interface used to reach the cluster orchestrator.
Initialize a standalone root orchestrator.
On a Linux machine, first install Docker and Docker Compose v2.
Then configure the address used by the dashboard to reach your APIs by running:
export SYSTEM_MANAGER_URL=<Address of current machine>
To run the Root orchestrator from the pre-compiled images:
(Optional) export OAKESTRA_BRANCH=develop to use the develop images; the default branch is main.
(Optional) export OVERRIDE_FILES=override-alpha-versions.yaml to apply the alpha-version overrides.
curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/StartOakestraRoot.sh | sh -
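Putting the exports and the startup script together, a full sequence might look like this (the IP is a placeholder for your machine's address; the two optional exports can be omitted):

export SYSTEM_MANAGER_URL=192.168.0.42
export OAKESTRA_BRANCH=develop
export OVERRIDE_FILES=override-alpha-versions.yaml
curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/StartOakestraRoot.sh | sh -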
If you wish to build the Root Orchestrator yourself from source code, clone the repo and run:
cd root_orchestrator/
docker-compose up --build
The following ports are exposed:
For each cluster, we need at least one machine running the cluster orchestrator.
## Choose a unique name for your cluster
export CLUSTER_NAME=My_Awesome_Cluster
## Optional: Give a name or geo coordinates to the current location. Default location set to coordinates of your IP
#export CLUSTER_LOCATION=My_Awesome_Apartment
## IP address where this root component can be reached to access the APIs
export SYSTEM_MANAGER_URL=<IP address>
# Note: Use a non-loopback interface IP (e.g. any of your real interfaces that have internet access).
# "0.0.0.0" leads to server issues
You can run the cluster orchestrator using the pre-compiled images:
(Optional) export OAKESTRA_BRANCH=develop to use the develop images; the default branch is main.
(Optional) export OVERRIDE_FILES=override-alpha-versions.yaml to apply the alpha-version overrides.
(Optional) export CLUSTER_LOCATION=<latitude>,<longitude>,<radius>; by default the location is automatically inferred from the public IP address of the machine.

curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/StartOakestraCluster.sh | sh -
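A similar sketch for a cluster, with the optional location set explicitly (the IP and coordinates are placeholders):

export CLUSTER_NAME=My_Awesome_Cluster
export SYSTEM_MANAGER_URL=192.168.0.42
export CLUSTER_LOCATION=48.1351,11.5820,1000
curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/StartOakestraCluster.sh | sh -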
If you wish to build the cluster orchestrator yourself, simply clone the repo and run:
export CLUSTER_LOCATION=My_Awesome_Apartment # If building from source, this is no longer optional
cd cluster_orchestrator/
docker-compose up --build
The following ports are exposed:
For each worker node, you can either use the pre-compiled binaries (see the Get Started section) as usual or compile them on your own.
Requirements
Compile and install the binary with:
cd go_node_engine/build
./build.sh
./install.sh $(dpkg --print-architecture)
Then configure the NetManager and perform the startup as usual.
N.b. each worker node can now be configured to work with a different cluster.
N.b. you can disable the Overlay Network (and therefore avoid using the NetManager) using the -n -1 flag at NodeEngine startup.
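For example, a sketch of a NodeEngine startup without the overlay network, reusing the flags shown earlier:

sudo NodeEngine -n -1 -a <Cluster Orchestrator IP Address>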
Together with the application, it's possible to pass a deployment descriptor (or SLA) in .json format to the APIs or the frontend.
Since version 0.4, Oakestra (previously EdgeIO) uses the following deployment descriptor format.
E.g.: deploy_curl_application.yaml
{
"sla_version" : "v2.0",
"customerID" : "Admin",
"applications" : [
{
"applicationID" : "",
"application_name" : "clientsrvr",
"application_namespace" : "test",
"application_desc" : "Simple demo with curl client and Nginx server",
"microservices" : [
{
"microserviceID": "",
"microservice_name": "curl",
"microservice_namespace": "test",
"virtualization": "container",
"cmd": ["sh", "-c", "curl 10.30.55.55 ; sleep 5"],
"memory": 100,
"vcpus": 1,
"vgpus": 0,
"vtpus": 0,
"bandwidth_in": 0,
"bandwidth_out": 0,
"storage": 0,
"code": "docker.io/curlimages/curl:7.82.0",
"state": "",
"port": "",
"added_files": [],
"constraints":[]
},
{
"microserviceID": "",
"microservice_name": "nginx",
"microservice_namespace": "test",
"virtualization": "container",
"cmd": [],
"memory": 100,
"vcpus": 1,
"vgpus": 0,
"vtpus": 0,
"bandwidth_in": 0,
"bandwidth_out": 0,
"storage": 0,
"code": "docker.io/library/nginx:latest",
"state": "",
"port": "80:80/tcp",
"addresses": {
"rr_ip": "10.30.55.55"
},
"added_files": []
}
]
}
]
}
This deployment descriptor example describes one application named clientsrvr in the test namespace, with two microservices:
clientsrvr.test.nginx.test
clientsrvr.test.curl.test
This is a detailed description of the deployment descriptor fields currently implemented:

customerID: id of the user, default is Admin.
application_namespace: default, production, or test (max 30 alphanumeric characters).
microservice_namespace: default, production, or test (max 30 alphanumeric characters).
virtualization: container or unikernel.
code: address of the container image (e.g. docker.io/library/nginx:latest) or link to a unikernel image in .tar.gz format (e.g. http://<hosting-url-and-port>/nginx_x86.tar.gz).
addresses.rr_ip: a fixed service address belonging to 10.30.x.y; it must not collide with any other Instance Address or Service IP in the system, otherwise an error will be returned. If you don't set this field, a new address will be generated by the system.
one_shot: with "one_shot": true in the SLA it is possible to deploy a one-shot service, i.e. a service that, when it terminates with exit status 0, is marked as completed and not re-deployed.
constraints of type direct: send a deployment to a specific cluster and a specific list of eligible nodes. "node":"node1;node2;...;noden" is a list of node hostnames; these are the only eligible worker nodes. "cluster":"cluster_name" is the name of the cluster where this service must be scheduled. E.g.:
"constraints":[
{
"type":"direct",
"node":"xavier1",
"cluster":"gpu"
}
]
From the dashboard you can create the application graphically and set the services via SLA. In that case you need to submit a different SLA, containing only the microservice list, e.g.:
{
"microservices" : [
{
"microserviceID": "",
"microservice_name": "nginx",
"microservice_namespace": "test",
"virtualization": "container",
"cmd": [],
"memory": 100,
"vcpus": 1,
"vgpus": 0,
"vtpus": 0,
"bandwidth_in": 0,
"bandwidth_out": 0,
"storage": 0,
"code": "docker.io/library/nginx:latest",
"state": "",
"port": "",
"addresses": {
"rr_ip": "10.30.55.55"
},
"added_files": [],
"constraints": []
}
]
}
After running a cluster, you can use the debug OpenAPI page to interact with the APIs and use the infrastructure.
connect to <root_orch_ip>:10000/api/docs
Authenticate using the following procedure:
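The exact login steps are shown on the OpenAPI page itself; as a rough command-line sketch using the default credentials (assuming the login route exposed there is /api/auth/login — verify the actual path and payload on the page), obtaining a token could look like:

curl -X POST http://<root_orch_ip>:10000/api/auth/login -H "Content-Type: application/json" -d '{"username": "Admin", "password": "Admin"}'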
After you authenticate with the login function, you can try deploying your first application.
The response contains the Application id and the id for all the application's services. Now the application and the services are registered to the platform. It's time to deploy the service instances!
You can always remove or create a new service for the application using the /api/services endpoints.
POST /api/service/{serviceid}/instance: each call to this endpoint generates a new instance of the service.
GET /api/aplications/<userid> (or simply /api/aplications/ if you're admin): check the list of deployed applications.
GET /api/services/<appid>: check the services attached to an application.
GET /api/service/<serviceid>: check the status of all the instances of a service.
DELETE /api/service/<serviceid>: delete all the instances of a service.
DELETE /api/service/<serviceid>/instance/<instance number>: delete a specific instance of a service.
DELETE /api/application/<appid>: delete an application together with all its services and instances.
GET /api/clusters/: get all the registered clusters.
GET /api/clusters/active: get all the clusters currently active and their resources.
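As a rough sketch of calling these endpoints from the command line (assuming the API expects the JWT obtained at login in an Authorization: Bearer header; $TOKEN below is a placeholder for that token):

# list active clusters and their resources
curl -H "Authorization: Bearer $TOKEN" http://<root_orch_ip>:10000/api/clusters/active

# create a new instance of an already registered service
curl -X POST -H "Authorization: Bearer $TOKEN" http://<root_orch_ip>:10000/api/service/<serviceid>/instance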
It is also possible to use Unikernels by changing the virtualization field of the microservice:
{
"sla_version": "v2.0",
"customerID": "Admin",
"applications": [{
"applicationID": "",
"application_name": "nginx",
"application_namespace": "test",
"application_desc": "Simple demo of an Nginx server Unikernel",
"microservices": [{
"microserviceID": "",
"microservice_name": "nginx",
"microservice_namespace": "test",
"virtualization": "unikernel",
"cmd": [],
"memory": 100,
"vcpus": 1,
"vgpus": 0,
"vtpus": 0,
"bandwidth_in": 0,
"bandwidth_out": 0,
"storage": 0,
"code": "https://github.com/Sabanic-P/app-nginx/releases/download/v1.0/kernel.tar.gz",
"arch": ["amd64"],
"state": "",
"port": "80:80",
"addresses": {
"rr_ip": "10.30.30.26"
},
"added_files": []
}]
}]
}
Differences to Container Deployment:
virtualization is set to unikernel instead of container.
code points to a unikernel image in .tar.gz format instead of a container image.
arch lists the architectures for which a unikernel image is provided.
The network component is maintained separately and documented at: https://www.oakestra.io/docs/networking
The infrastructure monitoring stack provided is built on the Grafana OSS toolset. It monitors both root and cluster services for comprehensive visibility. The default Grafana dashboard credentials can be used:
admin
admin
Access the provisioned dashboards at:
<root_orch_ip>:3000
<cluster_orch_ip>:3001
More details about the monitoring stack can be found in config/README.md.