Aquarium Fish

The main part of the Aquarium distributed p2p system to manage resources. It was primarily developed to manage dynamic Jenkins CI agents in heterogeneous environments and to simplify infrastructure management, but it can be used in various applications that need self-managed resources with a simple REST API for operating the p2p cluster.

Eventually it becomes an internal cloud or pool of resources with high-availability and business-continuity features - an essential part of modern infrastructure in international companies. It allows building automation without the issues of centralization (by proxying requests to nearby services), with complete control of the environments and with security provided by sandboxing and the dynamic nature of the environments.

The Aquarium system makes resource management as simple as possible and unifies dynamic resource management by integrating multiple environment providers (VM, container, native, clouds, etc.) into one entry point for allocating devices that can be used across the organization.

Requirements

In general it can be built and used on any OS/architecture, but for now the primary ones are:

To run the Node itself you need nothing, but the drivers usually require some applications to be installed in the environment.

Goals

Usage

To use Aquarium Fish you just need to follow these steps:

To run locally

In order to test the Fish locally with just one node or multiple local nodes:

$ ./aquarium-fish

There are a number of options you can pass to the application (check --help to see them all), but the most important ones are:

If you want to run a secondary node on the same host, provide a simple config with an overridden node name, because the first node uses the hostname as its node name:

$ ./aquarium-fish --cfg local2.yml
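For illustration, a minimal override config could look like the sketch below. The key names shown (node_name, api_address) are assumptions - verify the exact option names against --help and the config documentation:

$ cat > local2.yml <<EOF
# Example only: key names are assumed, check the actual config options
node_name: fish-node-2
api_address: 0.0.0.0:8002
EOF
$ ./aquarium-fish --cfg local2.yml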

Security

By default Fish generates a simple CA and a key/cert pair for server & client auth - this just shows an example of protecting the cluster communication transport with TLS, and uses the certificate public key as the identifier of the cluster node. If the CA certificate does not exist, it will be generated. If the node certificate and key exist, they will be used; if not, Fish will try to generate them from the CA cert and key. So the CA key is not needed on the node if you have already generated the node certificate yourself.

TLS encryption is a must, so make sure you know how to generate a CA certificate and how to use the CA to issue the node certificates. Today it's the most secure way to ensure that no one joins your cluster without your permission and that no one intercepts the API & sync communication. A separate CA is used to check that the server (or client) is one approved in the cluster.

Maybe in the future Fish will allow managing the cluster CA and issuing certificates for new nodes, but for now just check openssl and https://github.com/jcmoraisjr/simple-ca for reference.
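For example, a plain openssl flow could look like the sketch below. The file names (ca.key, ca.crt, node.key, node.crt) and subjects are just placeholders - point Fish at the files it expects according to its configuration:

# Generate the CA key and a self-signed CA certificate
$ openssl req -x509 -newkey rsa:4096 -nodes -keyout ca.key -out ca.crt -subj "/CN=AquariumCA" -days 3650

# Generate the node key and a certificate signing request
$ openssl req -newkey rsa:4096 -nodes -keyout node.key -out node.csr -subj "/CN=fish-node-1"

# Issue the node certificate with the CA
$ openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out node.crt -days 365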

To run as a cluster

TODO #30: This functionality is in active development - the available logic can't handle the cluster yet.

Just make sure there is a path from each node to at least one other node - a node is not required to see the entire cluster, but it needs to be able to connect to at least one other. More visibility is better, up to 8 connections in total, because that is the default limit of cluster connections for a node.

Cluster usage

To initialize the cluster you need to create users with the admin account and create the Labels you want to use. In order to use the resource manager manually, check the API section and follow these general directions (a curl sketch follows the list):

  1. Get your user and its token
  2. Check the Labels available on the cluster (and create some if you need them)
  3. Create an Application describing what kind of resource you need
  4. Check the status of your Application and wait for the "ALLOCATED" status
  5. Now the resource is allocated - it's all yours and has probably already pinged you
  6. When you're done, request the Application to deallocate the resource
  7. Make sure the Application status is "DEALLOCATED"
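As a rough curl sketch of the directions above - the application endpoints and payload fields shown here are assumptions modeled on the label endpoint, so check the OpenAPI spec for the exact paths and request bodies:

# 2. List the Labels available on the cluster
$ curl -u "admin:YOUR_TOKEN" -X GET 127.0.0.1:8001/api/v1/label/

# 3. Create an Application that requests a resource by Label UID (field name assumed)
$ curl -u "admin:YOUR_TOKEN" -X POST -H "Content-Type: application/json" -d '{"label_UID": "<LABEL_UID>"}' 127.0.0.1:8001/api/v1/application/

# 4. Poll the Application state until it becomes ALLOCATED (path assumed)
$ curl -u "admin:YOUR_TOKEN" -X GET 127.0.0.1:8001/api/v1/application/<APP_UID>/state

# 6. Request deallocation when you are done (path assumed)
$ curl -u "admin:YOUR_TOKEN" -X GET 127.0.0.1:8001/api/v1/application/<APP_UID>/deallocate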

To use it with Jenkins, you can install the Aquarium Net Jenkins cloud plugin to dynamically allocate the required resources. Don't forget to add the served Labels to the cluster and you will be ready to go.

Users policy

For now the policy is quite simple - the admin user can do anything, while regular users can just use the cluster (create Applications, list their resources and so on). Applications & resources can contain sensitive information (like the Jenkins agent secret), so users can see only the Applications they own and can control only them.

Implementation

Go was initially chosen because of go-dqlite, but it turned out to be quite a useful and modern way of building a self-sufficient, single-executable service that can cover multiple areas without sacrificing performance. The way it manages dependencies and subroutines and structures logic makes it much better suited than Python for this purpose. Eventually we moved away from dqlite (adobe/aquarium-fish#1) but stuck with Go for good.

Resource drivers are the way nodes manage resources. For example, if I have VMware Fusion installed on my machine, I can run Fish and its VMX driver will automatically detect that it can run VMX images. If I have Docker installed too, I can use both for different workloads, or select the ones I actually want to use via the --drivers option or via the API.

In the event you need more than one configuration for a given driver, you can add a /<name> suffix. For example, aws and aws/dev will both utilize the AWS driver, but with different configurations. In this example, Labels created will need to specify either driver: aws or driver: aws/dev to select which configuration to run.
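For instance, a Label pinned to the aws/dev configuration might be created along these lines - the Label fields shown (name, version, definitions) are assumptions, so take the exact schema from the OpenAPI spec:

# Hypothetical Label selecting the "aws/dev" driver configuration
$ curl -u "admin:YOUR_TOKEN" -X POST -H "Content-Type: application/json" -d '{"name": "ubuntu-dev", "version": 1, "definitions": [{"driver": "aws/dev"}]}' 127.0.0.1:8001/api/v1/label/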

Internal DB structure

The cluster supports an internal SQL database, which provides common storage for the node & cluster data. The current schema can be found in OpenAPI format here:

How the cluster chooses a node for resource allocation

The cluster can't force any node to follow the majority decision, so the rules are designed to provide full consensus.

For now the rule is simple - once all the nodes have voted, each node can find the first node in the vote table that answered "yes". There are a couple of protection mechanisms, such as "CreateAt" to find the actual first vote and the "Rand" field as a last resort (if the other parameters are identical).

In the future, to allow updating the cluster with new rules, a Rules table will be created, and different versions of Aquarium Fish will be able to find the common rules and switch between them depending on the Application request. Rules will be able to build on top of any information about the node #15.

The election process:

UI

TODO

Simplify the cluster management, for example adding Labels or checking the status #8.

Development

It is relatively easy - you change the logic, run ./build.sh to create a binary, test it, and send a PR when you think it's polished enough. It would be great if you could ask in the discussions or create an issue on GitHub to align with the current direction and plans.

Integration tests

To verify that everything works as expected, you can run the integration tests like this:

$ FISH_PATH=$PWD/aquarium-fish.darwin_amd64 go test -v -failfast -parallel 4 ./tests/...

Profiling

It is available through pprof like this:

$ go tool pprof 'https+insecure://<USER>:<TOKEN>@localhost:8001/api/v1/node/this/profiling/heap'
$ curl -ku "<USER>:<TOKEN>" 'https://localhost:8001/api/v1/node/this/profiling/?debug=1'

Or you can open https://localhost:8001/api/v1/node/this/profiling/ in a browser to see the index.

API

There are a number of ways to communicate with the Fish cluster, and the most important one is the API.

You can use curl, for example, to do that:

$ curl -u "admin:YOUR_TOKEN" -X GET 127.0.0.1:8001/api/v1/label/
{...json data...}

The current API can be found in OpenAPI format here:

Also check the example and tests folders to get more info about typical API usage.