Qubernetes was deprecated on December 31st, 2021, and we are no longer supporting the project.
It has been replaced by quorum-kubernetes, which offers wider compatibility with Quorum products and cloud providers.
We encourage all users with active projects to migrate to quorum-kubernetes.
If you have any questions or concerns, please reach out to the ConsenSys protocol engineering team on Discord or by email.
Quorum on Kubernetes, including:
Quickest Start:
To deploy the 7nodes example with Tessera and IBFT, run `./quickest-start.sh`.
To create and deploy an N node Quorum network, run `./quickest-start.sh $NUM`.
To terminate the network, run `./quickest-stop.sh`.
qctl: :star2:
The Qubernetes command line tool, and the most comprehensive way to create and interact with a Quorum K8s network: initializing, deploying, modifying, and interacting with it.
```shell
> qctl init
> qctl generate network --create
> qctl deploy network
```
see qctl for full set of commands.
7 Node Example On K8s: runs quorum-examples on K8s.
🎬 7nodes Demo
N Node Quorum Network On K8s:
Generates the necessary Quorum resources (keys, configs: genesis, istanbul, etc.) and Kubernetes API resource yaml for a configurable N node Quorum network, based on a minimal config `qubernetes.yaml`.
Requires docker to be running on your machine with sufficient memory (~8GB for a 7 node cluster).
```shell
# default 4 nodes IBFT network
$> ./quickest-start.sh

# N node network
$> ./quickest-start.sh 4

# terminate
$> ./quickest-stop.sh
```
This runs the `quickest-qube` container, using the local `quickest-qube` image if it exists locally.
⭕️ note: if you experience issues with the nodes starting up, check docker's memory settings and/or try running a smaller network: `./quickest-start.sh 3`.
```shell
> qctl init
> qctl generate network --create
> qctl deploy network
```
Quickstart With Minikube: Quickstart for running a Quorum network on minikube.
Quorum Network From Existing Quorum Resources:
Generates Kubernetes API resources from existing Quorum resources: keys, config, etc.
e.g. The Quorum and Transaction Manager Containers
Note: The below commands assume that the quorum deployment was deployed to the `default` namespace.
```shell
$> kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
quorum-node1-deployment-57b6588b6b-5tqdr   1/2     Running   1          40s
quorum-node2-deployment-5f776b479c-f7kxs   2/2     Running   2          40s
....

# connect to the running transaction manager on node1 (quorum-node1-deployment-57b6588b6b-5tqdr).
# assuming tessera was deployed as the transaction manager.
$> ./connect.sh node1 tessera
connecting to POD [quorum-node1-deployment-676684fddf-9gwxk]
/ >

# connect to the running quorum container
$> ./connect.sh node1 quorum
connecting to POD [quorum-node1-deployment-676684fddf-9gwxk]
/ >
```
```shell
# once inside the quorum container you can run transactions and connect to the geth console.
/ > geth attach $QHOME/dd/geth.ipc
> eth.blockNumber
0
> exit

# create some contracts (public and private)
/ > cd $QHOME/contracts
/ > ./runscript.sh public_contract.js
/ > ./runscript.sh private_contract.js

# you should now see the transactions go through
# note: if you are running IBFT (Istanbul BFT consensus) the blockNumber will increment at the user defined
# (configurable) time interval.
/ > geth attach $QHOME/dd/geth.ipc
> eth.blockNumber
2

# show connected peers
> admin.peers.length
6
> exit
```
There is also a helper to attach to the geth console directly.
🎬 Geth Attach Demo
```shell
# from the root of the qubernetes repository
qubernetes $> ./geth-attach node1
datadir: /etc/quorum/qdata/dd
modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
> eth.blockNumber
2
```
Starts a Kind K8s cluster (bottom screen) & deploys 7nodes (IBFT & Tessera).
Continued from above:
part 1: attach to geth from inside the container.
part 2: use the helper `./geth-attach node1`.
Qubernetes enables the creation of customized Quorum networks run on Kubernetes, providing a configurable number of Quorum and Transaction Manager nodes, and creating the associated genesis config, transaction manager config, permissioned-nodes.json, required keys, services, etc. to start the network.
If you have Docker installed, you are all set! Use the Docker Bootstrap Container.
If you do not wish to install Docker, follow the instructions in Install Prerequisites without Docker.
Once you have the prerequisites set up see Modifying The Qubernetes Config File for more information about configuring a custom deployment.
The Docker container `quorumengineering/qubernetes` has the necessary binaries installed to generate the necessary Quorum resources.
If you have docker running, you don't have to worry about installing anything else.
Usage:
Note: `qubernetes.yaml` is not added to the docker container, as this file will change between various deployments.
The `qubernetes.yaml` file and the desired `out` directory will need to be mounted on the `quorumengineering/qubernetes` container using `-v $PATH/ON/HOST/qubernetes.yaml:$PATH/ON/CONTAINER/qubernetes.yaml`, e.g. `-v $(pwd)/cool-qubernetes.yaml:/qubernetes/qubernetes.yaml`, see below:
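To make the host-to-container mapping concrete, here is a sketch of a hypothetical helper (not part of the qubernetes repo) that builds the two `-v` flags for a given config file:

```shell
# mount_flags: build the docker -v flags that mount a config file and the
# out directory into the quorumengineering/qubernetes container.
# (hypothetical helper for illustration; not part of the qubernetes repo)
mount_flags() {
  cfg="$1"
  name="$(basename "$cfg")"
  host_dir="$(cd "$(dirname "$cfg")" && pwd)"
  printf -- '-v %s/%s:/qubernetes/%s -v %s/out:/qubernetes/out\n' \
    "$host_dir" "$name" "$name" "$host_dir"
}

# e.g. docker run --rm -it $(mount_flags ./cool-qubernetes.yaml) \
#        quorumengineering/qubernetes ./qube-init cool-qubernetes.yaml
mount_flags ./cool-qubernetes.yaml
```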
1. Use the default `qubernetes.yaml` in the base of the qubernetes repository. You may edit this file to create your custom quorum network.
```shell
$> git clone https://github.com/ConsenSys/qubernetes.git
$> cd qubernetes
qubernetes $> docker run --rm -it -v $(pwd)/qubernetes.yaml:/qubernetes/qubernetes.yaml -v $(pwd)/out:/qubernetes/out quorumengineering/qubernetes ./qube-init qubernetes.yaml
qubernetes $> ls out
```
2. If you have your own config file, e.g. `cool-qubernetes.yaml`, you do not need to clone the repo, but mount the file `cool-qubernetes.yaml` and the `out` directory on the `quorumengineering/qubernetes` container, so the resources will be available after the container exits.
```shell
# from some directory containing a config file cool-qubernetes.yaml
myDir$> ls
cool-qubernetes.yaml
myDir$> docker run --rm -it -v $(pwd)/cool-qubernetes.yaml:/qubernetes/cool-qubernetes.yaml -v $(pwd)/out:/qubernetes/out quorumengineering/qubernetes ./qube-init cool-qubernetes.yaml
using config file: cool-qubernetes.yaml
The 'out' directory already exist. Please select the action you wish to take:
[1] Delete the 'out' directory and generate new resources.
[2] Update / add nodes that don't already exist.
[3] Cancel.
1
myDir$> ls
cool-qubernetes.yaml out
```
[![docker-qubernetes-boot-2](docs/resources/docker-qubernetes-boot-2-play.png)](https://ConsenSys.github.io/qubernetes/resources/docker-qubernetes-boot-2.webm)
3. Exec into the `quorumengineering/qubernetes` container to run commands inside. This is useful for testing changes
to the local ruby generator files.
In this example, we are running the container from inside the base qubernetes directory and mounting the entire directory,
so it is as if we were running on our local host: the files from the host will be used, and generated files will continue to exist after the container exits.
```shell
$> git clone https://github.com/ConsenSys/qubernetes.git
$> cd qubernetes
qubernetes $> docker run --rm -it -v $(pwd):/qubernetes quorumengineering/qubernetes
root@4eb772b14086:/qubernetes# ./qube-init
root@4eb772b14086:/qubernetes# ls out/
00-quorum-persistent-volumes.yaml  01-quorum-genesis.yaml  02-quorum-shared-config.yaml  03-quorum-services.yaml  04-quorum-keyconfigs.yaml  config  deployments
```
[![docker-qubernetes-boot-3](docs/resources/docker-qubernetes-boot-3-play.png)](https://ConsenSys.github.io/qubernetes/resources/docker-qubernetes-boot-3.webm)
### Modifying The Qubernetes Config File
The example [qubernetes.yaml](qubernetes.yaml) is the simplest config and has many defaults set for you, which can be overridden; see [More Qubernetes Config Options](#more-qubernetes-config-options).
![qubernetes-yaml-marked](docs/resources/qubernetes-yaml-marked.png)
For starters, let's see how to modify [`qubernetes.yaml`](qubernetes.yaml) to change the number of nodes deployed in your network:
```yaml
nodes:
  - Node_UserIdent: quorum-node1
    Key_Dir: key1
    quorum:
      quorum:
        # supported: (raft | istanbul)
        consensus: istanbul
        Quorum_Version: 21.7.1
      tm:
        # (tessera|constellation)
        Name: tessera
        Tm_Version: 21.7.2
  - Node_UserIdent: quorum-node2
    Key_Dir: key2
    quorum:
      quorum:
        # supported: (raft | istanbul)
        consensus: istanbul
        Quorum_Version: 21.7.1
      tm:
        # (tessera|constellation)
        Name: tessera
        Tm_Version: 21.7.2
  # add more nodes if you'd like
  # - Node_UserIdent: quorum-node5
  #   Key_Dir: key5
  #   quorum:
  #     quorum:
  #       # supported: (raft | istanbul)
  #       consensus: istanbul
  #       Quorum_Version: 21.7.1
  #     tm:
  #       # (tessera|constellation)
  #       Name: tessera
  #       Tm_Version: 21.7.2
```
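Since the per-node entries differ only by index, a small script can emit them. Here is a sketch of a hypothetical helper (not part of the qubernetes repo) that prints a `nodes:` list in the schema shown above:

```shell
# emit_nodes: print an N-entry nodes: list in the qubernetes.yaml schema.
# (hypothetical helper for illustration; not part of the qubernetes repo)
emit_nodes() {
  n="$1"
  echo "nodes:"
  i=1
  while [ "$i" -le "$n" ]; do
    cat <<EOF
  - Node_UserIdent: quorum-node$i
    Key_Dir: key$i
    quorum:
      quorum:
        consensus: istanbul
        Quorum_Version: 21.7.1
      tm:
        Name: tessera
        Tm_Version: 21.7.2
EOF
    i=$((i + 1))
  done
}

# print a 4-node list to paste into a config file
emit_nodes 4
```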
1. Use the `./quick-start-gen` command to generate the core config:
```shell
$> ./quick-start-gen --help
Usage: ./quick-start [options]
        --consensus[ACTION]        The consensus to use for the network (raft or istanbul), default istanbul
    -q, --quorum-version[ACTION]   The version of quorum to deploy, default 21.7.1
    -t, --tm-version[ACTION]       The version of the transaction manager to deploy, default 21.7.2
        --tm-name[ACTION]          The transaction manager (tessera|constellation) for the network, default tessera
    -c, --chain_id[ACTION]         The chain id for the network, default 1000
    -n, --num-nodes[ACTION]        The number of nodes to deploy, default 4
    -h, --help                     prints this help

$> ./quick-start-gen --chain-id=10 --consensus=raft --quorum-version=21.7.1 --tm-version=21.7.2 --tm-name=tessera --num-nodes=7
```
2. Once you have your core config, e.g. qubernetes.yaml configured with your desired parameters:
Run `./qube-init` to generate everything needed for the quorum deployment: quorum keys, genesis.json, istanbul-config.json, permissioned-nodes.json, etc.
These resources will be written and read from the directories specified in the `qubernetes.yaml` file.
The default [`qubernetes.yaml`](qubernetes.yaml) is configured to write these to the `./out/config` directory.
```yaml
Key_Dir_Base: out/config
Permissioned_Nodes_File: out/config/permissioned-nodes.json
Genesis_File: out/config/genesis.json
```

```shell
## in this case, an out directory exists, so select `1`.
$> ./qube-init qubernetes.yaml
The 'out' directory already exist.
Please select the action you wish to take:
[1] Delete the 'out' directory and generate new resources.
[2] Update / add nodes that don't already exist.
[3] Cancel.
..
Creating all new resources.
Generating keys...
INFO [01-14|17:05:09.402] Maximum peer count ETH=25 LES=0 total=25
INFO [01-14|17:05:11.302] Maximum peer count ETH=25 LES=0 total=25
INFO [01-14|17:05:13.160] Maximum peer count ETH=25 LES=0 total=25
```
After the Quorum resources have been generated, the necessary K8s resources will be created from them, and all generated files will be in the `out` directory:
```shell
# list the generated Quorum resources
$> ls out/config
genesis.json                    key2  key5  key8                     tessera-config-9.0.json
istanbul-validator-config.toml  key3  key6  nodes.yaml               tessera-config-enhanced.json
key1                            key4  key7  permissioned-nodes.json  tessera-config.json

# list the Kubernetes yaml files
$> ls out
00-quorum-persistent-volumes.yaml  02-quorum-shared-config.yaml  04-quorum-keyconfigs.yaml  config
01-quorum-genesis.yaml             03-quorum-services.yaml       deployments

# list the k8s deployment files
$> ls out/deployments
01-quorum-single-deployment.yaml  03-quorum-single-deployment.yaml  05-quorum-single-deployment.yaml  07-quorum-single-deployment.yaml
02-quorum-single-deployment.yaml  04-quorum-single-deployment.yaml  06-quorum-single-deployment.yaml

# deploy the resources
$> kubectl apply -f out -f out/deployments
```
Once the Quorum resources exist, the `./qubernetes` command can be run to generate variations of the Kubernetes resources using those resources, e.g. `ClusterIP` vs `NodePort`. The `./qubernetes` command can be run multiple times and is idempotent as long as the underlying Quorum resources and your core configuration file do not change.

```shell
# generate the Kubernetes resources necessary to support a Quorum deployment
# this will be written to the `out` dir.
$> ./qubernetes qubernetes.yaml

# apply all the generated .yaml files that are in the ./out and ./out/deployments directory.
$> kubectl apply -f out -f out/deployments

# tear the network down
$> kubectl delete -f out -f out/deployments
```
The directory `examples/config` contains various qubernetes config examples, such as adding K8s Ingress, K8s security context, etc. See the example `qubes-full.yaml`.
Thanks to Maximilian Meister's blog and code, which provided an awesome starting point and is a good read to understand the different components.
Stuck at some step? Please join our slack community for support.