> [!WARNING]
> SPECIAL NOTICE: Introducing v1.0.0 comes with BREAKING CHANGES. We have removed caching of the flags in the solo config file. All commands now require their flags to be provided explicitly, or the user will need to answer the prompts. See more details in our release notes: release/tag/v1.0.0
An opinionated CLI tool to deploy and manage standalone test networks.
The table below lists the supported version combinations:

| Solo Version | Node.js | Kind | Solo Chart | Hedera | Kubernetes | Kubectl | Helm | k9s | Docker Resources | Java |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.29.0 | >= 20.14.0 (lts/hydrogen) | >= v1.29.1 | v0.30.0 | v0.53.0+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 | >= 21.0.1+12 |
| 0.30.0 | >= 20.14.0 (lts/hydrogen) | >= v1.29.1 | v0.30.0 | v0.54.0+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 | >= 21.0.1+12 |
| 0.31.4 | >= 20.18.0 (lts/iron) | >= v1.29.1 | v0.31.4 | v0.54.0+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 | >= 21.0.1+12 |
Install Node.js (for example via `nvm`) and then install Solo with npm:
nvm install lts/hydrogen
nvm use lts/hydrogen
npm install -g @hashgraph/solo
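To confirm the CLI is on your PATH, you can ask it to print its version. This assumes the standard `--version` flag exposed by most Node.js CLIs; `solo --help` should work as well:

```bash
# print the installed Solo CLI version
solo --version
```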
Select the Kubernetes context that points at the cluster you want to deploy into:
kubectl config use-context <context-name>
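If you are unsure which contexts exist on your machine, `kubectl` can list them and show which one is current:

```bash
# list all configured contexts; the current one is marked with '*'
kubectl config get-contexts

# print only the current context
kubectl config current-context
```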
Alternatively, you may use `kind` to create a cluster. First, use the following commands to set up the environment variables:
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
Then run the following command to create the cluster and set the kubectl context to it:
kind create cluster -n "${SOLO_CLUSTER_NAME}"
Example output
Creating cluster "solo" ...
• Ensuring node image (kindest/node:v1.27.3) 🖼 ...
✓ Ensuring node image (kindest/node:v1.27.3) 🖼
• Preparing nodes 📦 ...
✓ Preparing nodes 📦
• Writing configuration 📜 ...
✓ Writing configuration 📜
• Starting control-plane 🕹️ ...
✓ Starting control-plane 🕹️
• Installing CNI 🔌 ...
✓ Installing CNI 🔌
• Installing StorageClass 💾 ...
✓ Installing StorageClass 💾
Set kubectl context to "kind-solo"
You can now use your cluster with:
kubectl cluster-info --context kind-solo
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
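Before moving on, you can verify the new cluster is reachable and its control-plane node is Ready:

```bash
# list the kind cluster's nodes; the STATUS column should read "Ready"
kubectl get nodes --context kind-solo
```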
You may now view the pods in your cluster using `k9s -A`, as below:
Context: kind-solo <0> all <a> Attach <ctr… ____ __.________
Cluster: kind-solo <ctrl-d> Delete <l> | |/ _/ __ \______
User: kind-solo <d> Describe <p> | < \____ / ___/
K9s Rev: v0.32.5 <e> Edit <shif| | \ / /\___ \
K8s Rev: v1.27.3 <?> Help <z> |____|__ \ /____//____ >
CPU: n/a <shift-j> Jump Owner <s> \/ \/
MEM: n/a
────────────────────────────────────────────────── Pods(all)[11] ──────────────────────────────────────────────────
│ NAMESPACE↑ NAME PF READY STATUS RESTARTS IP NODE │
│ solo-setup console-557956d575-4r5xm ● 1/1 Running 0 10.244.0.5 solo-con │
│ solo-setup minio-operator-7d575c5f84-8shc9 ● 1/1 Running 0 10.244.0.6 solo-con │
│ kube-system coredns-5d78c9869d-6cfbg ● 1/1 Running 0 10.244.0.4 solo-con │
│ kube-system coredns-5d78c9869d-gxcjz ● 1/1 Running 0 10.244.0.3 solo-con │
│ kube-system etcd-solo-control-plane ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kindnet-k75z6 ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kube-apiserver-solo-control-plane ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kube-controller-manager-solo-control-plane ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kube-proxy-cct7t ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kube-scheduler-solo-control-plane ● 1/1 Running 0 172.18.0.2 solo-con │
│ local-path-storage local-path-provisioner-6bc4bddd6b-gwdp6 ● 1/1 Running 0 10.244.0.2 solo-con │
│ │
│ │
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Solo deploys Hedera platform version v0.54.0-alpha.4 by default (this is the `platformVersion` shown in the node setup output below).
Initialize the `solo` directories:
# reset .solo directory
rm -rf ~/.solo
solo init
Example output
******************************* Solo *********************************************
Version : 0.31.1
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : undefined
**********************************************************************************
❯ Setup home directory and cache
✔ Setup home directory and cache
❯ Check dependencies
❯ Check dependency: helm [OS: darwin, Release: 23.6.0, Arch: arm64]
✔ Check dependency: helm [OS: darwin, Release: 23.6.0, Arch: arm64]
✔ Check dependencies
❯ Setup chart manager
✔ Setup chart manager
❯ Copy templates in '/Users/user/.solo/cache'
***************************************************************************************
Note: solo stores various artifacts (config, logs, keys etc.) in its home directory: /Users/user/.solo
If a full reset is needed, delete the directory or relevant sub-directories before running 'solo init'.
***************************************************************************************
✔ Copy templates in '/Users/user/.solo/cache'
Generate `pem` formatted node keys:
solo node keys --gossip-keys --tls-keys -i node1,node2,node3
Example output
******************************* Solo *********************************************
Version : 0.31.1
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : undefined
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Generate gossip keys
❯ Backup old files
✔ Backup old files
❯ Gossip key for node: node1
✔ Gossip key for node: node1
❯ Gossip key for node: node2
✔ Gossip key for node: node2
❯ Gossip key for node: node3
✔ Gossip key for node: node3
✔ Generate gossip keys
❯ Generate gRPC TLS keys
❯ Backup old files
❯ TLS key for node: node1
❯ TLS key for node: node2
❯ TLS key for node: node3
✔ Backup old files
✔ TLS key for node: node3
✔ TLS key for node: node2
✔ TLS key for node: node1
✔ Generate gRPC TLS keys
❯ Finalize
✔ Finalize
PEM key files are generated in the `~/.solo/keys` directory:
hedera-node1.crt hedera-node3.crt s-private-node1.pem s-public-node1.pem unused-gossip-pem
hedera-node1.key hedera-node3.key s-private-node2.pem s-public-node2.pem unused-tls
hedera-node2.crt hedera-node4.crt s-private-node3.pem s-public-node3.pem
hedera-node2.key hedera-node4.key s-private-node4.pem s-public-node4.pem
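If you want to sanity-check one of the generated certificates (its subject and validity window), OpenSSL can inspect it. This step is optional and assumes `openssl` is installed and that the `.crt` files are PEM-encoded X.509 certificates, as their pairing with `.key` files suggests:

```bash
# print the subject and validity period of node1's certificate
openssl x509 -in ~/.solo/keys/hedera-node1.crt -noout -subject -dates
```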
Set up the cluster with shared components:
solo cluster setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
Example output
******************************* Solo *********************************************
Version : 0.31.1
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : undefined
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Prepare chart values
✔ Prepare chart values
❯ Install 'solo-cluster-setup' chart
✔ Install 'solo-cluster-setup' chart
In a separate terminal, you may run `k9s` to view the pod status.
Deploy the Helm chart with the Hedera network components:
solo network deploy -i node1,node2,node3 -n "${SOLO_NAMESPACE}"
Example output
******************************* Solo *********************************************
Version : 0.31.1
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : solo
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Prepare staging directory
❯ Copy Gossip keys to staging
✔ Copy Gossip keys to staging
❯ Copy gRPC TLS keys to staging
✔ Copy gRPC TLS keys to staging
✔ Prepare staging directory
❯ Copy node keys to secrets
❯ Copy TLS keys
❯ Node: node1
❯ Node: node2
❯ Node: node3
❯ Copy Gossip keys
❯ Copy Gossip keys
❯ Copy Gossip keys
✔ Copy Gossip keys
✔ Node: node3
✔ Copy Gossip keys
✔ Node: node2
✔ Copy Gossip keys
✔ Node: node1
✔ Copy TLS keys
✔ Copy node keys to secrets
❯ Install chart 'solo-deployment'
✔ Install chart 'solo-deployment'
❯ Check node pods are running
❯ Check Node: node1
✔ Check Node: node1
❯ Check Node: node2
✔ Check Node: node2
❯ Check Node: node3
✔ Check Node: node3
✔ Check node pods are running
❯ Check proxy pods are running
❯ Check HAProxy for: node1
❯ Check HAProxy for: node2
❯ Check HAProxy for: node3
❯ Check Envoy Proxy for: node1
❯ Check Envoy Proxy for: node2
❯ Check Envoy Proxy for: node3
✔ Check Envoy Proxy for: node2
✔ Check Envoy Proxy for: node1
✔ Check Envoy Proxy for: node3
✔ Check HAProxy for: node1
✔ Check HAProxy for: node3
✔ Check HAProxy for: node2
✔ Check proxy pods are running
❯ Check auxiliary pods are ready
❯ Check MinIO
✔ Check MinIO
✔ Check auxiliary pods are ready
Set up the network nodes with the Hedera platform software:
solo node setup -i node1,node2,node3 -n "${SOLO_NAMESPACE}"
Example output
******************************* Solo *********************************************
Version : 0.31.1
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : solo
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Identify network pods
❯ Check network pod: node1
❯ Check network pod: node2
❯ Check network pod: node3
✔ Check network pod: node1
✔ Check network pod: node2
✔ Check network pod: node3
✔ Identify network pods
❯ Fetch platform software into network nodes
❯ Update node: node1 [ platformVersion = v0.54.0-alpha.4 ]
❯ Update node: node2 [ platformVersion = v0.54.0-alpha.4 ]
❯ Update node: node3 [ platformVersion = v0.54.0-alpha.4 ]
✔ Update node: node3 [ platformVersion = v0.54.0-alpha.4 ]
✔ Update node: node2 [ platformVersion = v0.54.0-alpha.4 ]
✔ Update node: node1 [ platformVersion = v0.54.0-alpha.4 ]
✔ Fetch platform software into network nodes
❯ Setup network nodes
❯ Node: node1
❯ Node: node2
❯ Node: node3
❯ Set file permissions
❯ Set file permissions
❯ Set file permissions
✔ Set file permissions
✔ Node: node3
✔ Set file permissions
✔ Node: node1
✔ Set file permissions
✔ Node: node2
✔ Setup network nodes
Start the nodes:
solo node start -i node1,node2,node3 -n "${SOLO_NAMESPACE}"
Example output
******************************* Solo *********************************************
Version : 0.31.1
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : solo
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Identify existing network nodes
❯ Check network pod: node1
❯ Check network pod: node2
❯ Check network pod: node3
✔ Check network pod: node1
✔ Check network pod: node3
✔ Check network pod: node2
✔ Identify existing network nodes
❯ Starting nodes
❯ Start node: node1
❯ Start node: node2
❯ Start node: node3
✔ Start node: node1
✔ Start node: node2
✔ Start node: node3
✔ Starting nodes
❯ Enable port forwarding for JVM debugger
↓ Enable port forwarding for JVM debugger [SKIPPED: Enable port forwarding for JVM debugger]
❯ Check nodes are ACTIVE
❯ Check network pod: node1
❯ Check network pod: node2
❯ Check network pod: node3
✔ Check network pod: node1 - status ACTIVE, attempt: 17/120
✔ Check network pod: node2 - status ACTIVE, attempt: 17/120
✔ Check network pod: node3 - status ACTIVE, attempt: 17/120
✔ Check nodes are ACTIVE
❯ Check node proxies are ACTIVE
❯ Check proxy for node: node1
✔ Check proxy for node: node1
❯ Check proxy for node: node2
✔ Check proxy for node: node2
❯ Check proxy for node: node3
✔ Check proxy for node: node3
✔ Check node proxies are ACTIVE
❯ Add node stakes
❯ Adding stake for node: node1
✔ Adding stake for node: node1
❯ Adding stake for node: node2
✔ Adding stake for node: node2
❯ Adding stake for node: node3
✔ Adding stake for node: node3
✔ Add node stakes
Deploy the mirror node:
solo mirror-node deploy -n "${SOLO_NAMESPACE}"
Example output
******************************* Solo *********************************************
Version : 0.31.1
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : solo
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Enable mirror-node
❯ Prepare address book
✔ Prepare address book
❯ Deploy mirror-node
✔ Deploy mirror-node
✔ Enable mirror-node
❯ Check pods are ready
❯ Check Postgres DB
❯ Check REST API
❯ Check GRPC
❯ Check Monitor
❯ Check Importer
❯ Check Hedera Explorer
✔ Check Hedera Explorer
✔ Check Postgres DB
✔ Check Monitor
✔ Check GRPC
✔ Check Importer
✔ Check REST API
✔ Check pods are ready
❯ Seed DB data
❯ Insert data in public.file_data
✔ Insert data in public.file_data
✔ Seed DB data
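The mirror node components are exposed as Kubernetes services in the same namespace. To see what was created (and what you may want to port-forward later), you can list the services; the exact names can vary with the chart release, so verify them in your own cluster:

```bash
# list all services created in the solo namespace
kubectl get svc -n "${SOLO_NAMESPACE}"
```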
Deploy a JSON RPC relay:
solo relay deploy -i node1 -n "${SOLO_NAMESPACE}"
Example output
******************************* Solo *********************************************
Version : 0.31.1
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : solo
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Prepare chart values
✔ Prepare chart values
❯ Deploy JSON RPC Relay
✔ Deploy JSON RPC Relay
❯ Check relay is ready
✔ Check relay is ready
You may view the list of pods using `k9s`, as below:
Context: kind-solo <0> all <a> Attach <ctr… ____ __.________
Cluster: kind-solo <ctrl-d> Delete <l> | |/ _/ __ \______
User: kind-solo <d> Describe <p> | < \____ / ___/
K9s Rev: v0.32.5 <e> Edit <shif| | \ / /\___ \
K8s Rev: v1.27.3 <?> Help <z> |____|__ \ /____//____ >
CPU: n/a <shift-j> Jump Owner <s> \/ \/
MEM: n/a
────────────────────────────────────────────────── Pods(all)[31] ──────────────────────────────────────────────────
│ NAMESPACE↑ NAME PF READY STATUS RESTARTS I │
│ kube-system coredns-5d78c9869d-994t4 ● 1/1 Running 0 1 │
│ kube-system coredns-5d78c9869d-vgt4q ● 1/1 Running 0 1 │
│ kube-system etcd-solo-control-plane ● 1/1 Running 0 1 │
│ kube-system kindnet-q26c9 ● 1/1 Running 0 1 │
│ kube-system kube-apiserver-solo-control-plane ● 1/1 Running 0 1 │
│ kube-system kube-controller-manager-solo-control-plane ● 1/1 Running 0 1 │
│ kube-system kube-proxy-9b27j ● 1/1 Running 0 1 │
│ kube-system kube-scheduler-solo-control-plane ● 1/1 Running 0 1 │
│ local-path-storage local-path-provisioner-6bc4bddd6b-4mv8c ● 1/1 Running 0 1 │
│ solo envoy-proxy-node1-65f8879dcc-rwg97 ● 1/1 Running 0 1 │
│ solo envoy-proxy-node2-667f848689-628cx ● 1/1 Running 0 1 │
│ solo envoy-proxy-node3-6bb4b4cbdf-dmwtr ● 1/1 Running 0 1 │
│ solo solo-deployment-grpc-75bb9c6c55-l7kvt ● 1/1 Running 0 1 │
│ solo solo-deployment-hedera-explorer-6565ccb4cb-9dbw2 ● 1/1 Running 0 1 │
│ solo solo-deployment-importer-dd74fd466-vs4mb ● 1/1 Running 0 1 │
│ solo solo-deployment-monitor-54b8f57db9-fn5qq ● 1/1 Running 0 1 │
│ solo solo-deployment-postgres-postgresql-0 ● 1/1 Running 0 1 │
│ solo solo-deployment-redis-node-0 ● 2/2 Running 0 1 │
│ solo solo-deployment-rest-6d48f8dbfc-plbp2 ● 1/1 Running 0 1 │
│ solo solo-deployment-restjava-5d6c4cb648-r597f ● 1/1 Running 0 1 │
│ solo solo-deployment-web3-55fdfbc7f7-lzhfl ● 1/1 Running 0 1 │
│ solo haproxy-node1-785b9b6f9b-676mr ● 1/1 Running 1 1 │
│ solo haproxy-node2-644b8c76d-v9mg6 ● 1/1 Running 1 1 │
│ solo haproxy-node3-fbffdb64-272t2 ● 1/1 Running 1 1 │
│ solo minio-pool-1-0 ● 2/2 Running 1 1 │
│ solo network-node1-0 ● 5/5 Running 2 1 │
│ solo network-node2-0 ● 5/5 Running 2 1 │
│ solo network-node3-0 ● 5/5 Running 2 1 │
│ solo relay-node1-node2-node3-hedera-json-rpc-relay-ddd4c8d8b-hdlpb ● 1/1 Running 0 1 │
│ solo-cluster console-557956d575-c5qp7 ● 1/1 Running 0 1 │
│ solo-cluster minio-operator-7d575c5f84-xdwwz ● 1/1 Running 0 1 │
│ │
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Once the nodes are up, you may expose various services (using `k9s` (shift-f) or `kubectl port-forward`) and access them from your host. Below are the most commonly used services that you may want to expose.
Node services: `network-<node ID>-svc`
HAProxy: `haproxy-<node ID>-svc`
# enable port forwarding for haproxy
# node1 grpc port accessed via localhost:50211
kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 50211:50211 &
# node2 grpc port accessed via localhost:51211
kubectl port-forward svc/haproxy-node2-svc -n "${SOLO_NAMESPACE}" 51211:50211 &
# node3 grpc port accessed via localhost:52211
kubectl port-forward svc/haproxy-node3-svc -n "${SOLO_NAMESPACE}" 52211:50211 &
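To quickly confirm the forwards are listening before pointing an SDK client at them, you can probe the local ports (this assumes `nc`/netcat is available on your machine):

```bash
# each check should report success once the corresponding forward is active
nc -zv localhost 50211
nc -zv localhost 51211
nc -zv localhost 52211
```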
Envoy Proxy: `envoy-proxy-<node ID>-svc`
# enable port forwarding for envoy proxy
kubectl port-forward svc/envoy-proxy-node1-svc -n "${SOLO_NAMESPACE}" 8181:8080 &
kubectl port-forward svc/envoy-proxy-node2-svc -n "${SOLO_NAMESPACE}" 8281:8080 &
kubectl port-forward svc/envoy-proxy-node3-svc -n "${SOLO_NAMESPACE}" 8381:8080 &
Hedera explorer: `solo-deployment-hedera-explorer`
# enable port forwarding for the Hedera explorer; it can be accessed at http://localhost:8080/
kubectl port-forward svc/solo-deployment-hedera-explorer -n "${SOLO_NAMESPACE}" 8080:80 &
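A quick way to confirm the explorer is reachable on the forwarded port before opening it in a browser:

```bash
# expect an HTTP 200 (or a redirect) status line from the explorer front end
curl -sI http://localhost:8080/ | head -n 1
```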
JSON RPC relay: you can deploy a JSON RPC relay for one or more nodes and expose it as below:
solo relay deploy -i node1
# enable port forwarding for the node1 relay
kubectl port-forward svc/relay-node1-hedera-json-rpc-relay -n "${SOLO_NAMESPACE}" 7546:7546 &
Example output
******************************* Solo *********************************************
Version : 0.31.1
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : solo
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Prepare chart values
✔ Prepare chart values
❯ Deploy JSON RPC Relay
✔ Deploy JSON RPC Relay
❯ Check relay is ready
✔ Check relay is ready
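Once the relay is ready and the port-forward shown above is active, you can exercise the endpoint with a standard Ethereum JSON RPC call such as `eth_chainId`, which the Hedera JSON RPC relay supports:

```bash
# send an eth_chainId request to the forwarded relay port; expect a JSON result
curl -s -X POST http://localhost:7546 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'
```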
To run the network with a locally built platform, first clone the Hedera services repo https://github.com/hashgraph/hedera-services/ and build the code with `./gradlew assemble`. If you need to run nodes with different versions or releases, duplicate the repo (or the build directories) into multiple directories, check out the respective version in each, and build the code.
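For example, a minimal sketch of the clone-and-build steps described above:

```bash
git clone https://github.com/hashgraph/hedera-services.git
cd hedera-services

# optionally check out the release or tag you want to run, e.g.:
# git checkout <tag-or-branch>

./gradlew assemble
```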
To use a customized `settings.txt` file, edit `~/.solo/cache/templates/settings.txt` after running the `solo init` command.
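For example, you could append an entry to the cached template before deploying. This is only a sketch: the `<settingName>, <value>` line is a placeholder, and it assumes the platform's usual comma-separated `name, value` format for `settings.txt`:

```bash
# back up the template, then append a custom setting (placeholder values)
cp ~/.solo/cache/templates/settings.txt ~/.solo/cache/templates/settings.txt.bak
echo "<settingName>, <value>" >> ~/.solo/cache/templates/settings.txt
```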
Then you can start the custom-built Hedera network with the following command:
solo node setup -i node1,node2,node3 -n "${SOLO_NAMESPACE}" --local-build-path <default path to hedera repo>,node1=<custom build hedera repo>,node2=<custom build repo>
# example: solo node setup -i node1,node2,node3 -n "${SOLO_NAMESPACE}" --local-build-path node1=../hedera-services/hedera-node/data/,../hedera-services/hedera-node/data,node3=../hedera-services/hedera-node/data
To deploy nodes with locally built PTT (PlatformTestingTool) jar files, run the following command:
solo node setup -i node1,node2,node3 -n "${SOLO_NAMESPACE}" --local-build-path <default path to hedera repo>,node1=<custom build hedera repo>,node2=<custom build repo> --app PlatformTestingTool.jar --app-config <path-to-test-json1,path-to-test-json2>
# example: solo node setup -i node1,node2,node3 -n "${SOLO_NAMESPACE}" --local-build-path ../hedera-services/platform-sdk/sdk/data,node1=../hedera-services/platform-sdk/sdk/data,node2=../hedera-services/platform-sdk/sdk/data --app PlatformTestingTool.jar --app-config ../hedera-services/platform-sdk/platform-apps/tests/PlatformTestingTool/src/main/resources/FCMFCQ-Basic-2.5k-5m.json
You can find the logs for solo command runs under the `~/.solo/logs/` directory. The file `solo.log` contains the logs for the solo command itself, and the file `hashgraph-sdk.log` contains the logs from the Solo client when sending transactions to network nodes.
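For example, to follow the log of the current run or search past runs for errors:

```bash
# follow the main solo log while a command runs
tail -f ~/.solo/logs/solo.log

# search both log files for errors
grep -i error ~/.solo/logs/solo.log ~/.solo/logs/hashgraph-sdk.log
```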
NOTE: the hedera-services path referenced ('../hedera-services/hedera-node/data') may need to be updated based on the directory you are currently in. This also assumes that you have done an assemble/build and that the directory contents are up to date.
Example 1: attach a JVM debugger to a Hedera node
./test/e2e/setup-e2e.sh
solo node keys --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy -i node1,node2,node3 --debug-node-alias node2 -n "${SOLO_NAMESPACE}"
solo node setup -i node1,node2,node3 --local-build-path ../hedera-services/hedera-node/data -n "${SOLO_NAMESPACE}"
solo node start -i node1,node2,node3 --debug-node-alias node2 -n "${SOLO_NAMESPACE}"
Once you see the following message, you can launch the JVM debugger from IntelliJ:
❯ Check all nodes are ACTIVE
Check node: node1,
Check node: node3, Please attach JVM debugger now.
Check node: node4,
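If you prefer a command-line debugger over IntelliJ, you can attach `jdb` instead. This is only a sketch: it assumes solo forwards the node's JVM debug agent to the default remote-debug port 5005 on localhost, so adjust the port if your setup differs:

```bash
# attach the JDK's command-line debugger to the forwarded JVM debug port
jdb -attach localhost:5005
```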
Example 2: attach a JVM debugger during a node add operation
./test/e2e/setup-e2e.sh
solo node keys --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy -i node1,node2,node3 --pvcs -n "${SOLO_NAMESPACE}"
solo node setup -i node1,node2,node3 --local-build-path ../hedera-services/hedera-node/data -n "${SOLO_NAMESPACE}"
solo node start -i node1,node2,node3 -n "${SOLO_NAMESPACE}"
solo node add --gossip-keys --tls-keys --node-alias node4 --debug-node-alias node4 --local-build-path ../hedera-services/hedera-node/data -n "${SOLO_NAMESPACE}"
Example 3: attach a JVM debugger during a node update operation
./test/e2e/setup-e2e.sh
solo node keys --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy -i node1,node2,node3 -n "${SOLO_NAMESPACE}"
solo node setup -i node1,node2,node3 --local-build-path ../hedera-services/hedera-node/data -n "${SOLO_NAMESPACE}"
solo node start -i node1,node2,node3 -n "${SOLO_NAMESPACE}"
solo node update --node-alias node2 --debug-node-alias node2 --local-build-path ../hedera-services/hedera-node/data --new-account-number 0.0.7 --gossip-public-key ./s-public-node2.pem --gossip-private-key ./s-private-node2.pem --agreement-public-key ./a-public-node2.pem --agreement-private-key ./a-private-node2.pem -n "${SOLO_NAMESPACE}"
Example 4: attach a JVM debugger during a node delete operation
./test/e2e/setup-e2e.sh
solo node keys --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy -i node1,node2,node3,node4 -n "${SOLO_NAMESPACE}"
solo node setup -i node1,node2,node3,node4 --local-build-path ../hedera-services/hedera-node/data -n "${SOLO_NAMESPACE}"
solo node start -i node1,node2,node3,node4 -n "${SOLO_NAMESPACE}"
solo node delete --node-alias node2 --debug-node-alias node3 -n "${SOLO_NAMESPACE}"
If you have a question on how to use the product, please see our support guide.
Contributions are welcome. Please see the contributing guide to see how you can get involved.
This project is governed by the Contributor Covenant Code of Conduct. By participating, you are expected to uphold this code of conduct.