This project enables provisioning of a Hyperledger Fabric (HLF) [https://www.hyperledger.org/projects/fabric] cluster over a set of machines managed by Docker Swarm [https://docs.docker.com/engine/swarm/].
It offers an easily configurable mechanism for creating a custom Hyperledger Fabric blockchain deployment architecture.
Current Blockchain-as-a-Service offerings from IBM, Amazon, Microsoft and others tie you and your consortium to their infrastructure and ecosystem. The presented solution is cloud agnostic and can be deployed on any cloud provider or in private data centers. Each organization that is part of your blockchain can therefore choose its own infrastructure provider and, by using the fabric-as-code solution, seamlessly deploy a Hyperledger Fabric blockchain.
Currently it supports spinning up an HLF cluster for just one organization; however, we are working towards a mechanism for easily adding new organizations to an existing cluster.
Please see the Overview and TODO sections below.
ansible --version
in your bash shell. You should receive an output such as this:
ansible 2.9.1
config file = /Users/antorweep/Documents/dev/mysome_glusterfs/ansible.cfg
configured module search path = ['/Users/antorweep/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-1001.0.46.4)]
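If the command is not found, Ansible first needs to be installed on your workstation. One common way (an assumption — any installation method from the Ansible documentation works equally well, assuming Python 3 and pip are available) is via pip:

```shell
# Install an Ansible 2.9.x release for the current user via pip.
python3 -m pip install --user "ansible>=2.9,<2.10"
# Verify the installation:
ansible --version
```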
These instructions are to be used only when utilising ansible-semaphore for deployment.
Refer : ansible-semaphore-setup-instructions
For normal deployment process, ignore this and follow the instructions below.
There are very few parameters to configure currently. All configurations are made inside group_vars/all.yml.
gluster_cluster_volume
specifies the name of the created GlusterFS volume. It should be the same value as the one used for creating the GlusterFS cluster. See pre-requisites step #2 about GlusterFS.
peer3_user: "peer3"
peer3_password: "peer3pw"
peer3: { switch: "on", image: "hyperledger/fabric-peer", tag: "2.2", replicas: -1, port: 8054,
caname: "{{orgca.name}}", path: "/root/{{peer3_user}}", bootstrap: "",
dbtype: "goleveldb",
name: "{{peer3_user}}", password: "{{peer3_password}}", type: "peer",
leader: "{{peer1_user}}"
}
peerservices:
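The peer entries follow a common schema, so additional peers can be declared the same way. As a sketch, a hypothetical fourth peer could look like the following (peer4, its password and its port are placeholders of our choosing; it would presumably also need to be registered in the peerservices list above):

```yaml
# Hypothetical example: an additional peer "peer4" declared with the same
# schema as peer1-peer3. All concrete values are placeholders to adapt.
peer4_user: "peer4"
peer4_password: "peer4pw"
peer4: { switch: "on", image: "hyperledger/fabric-peer", tag: "2.2", replicas: -1, port: 9054,
         caname: "{{orgca.name}}", path: "/root/{{peer4_user}}", bootstrap: "",
         dbtype: "goleveldb",
         name: "{{peer4_user}}", password: "{{peer4_password}}", type: "peer",
         leader: "{{peer1_user}}"
       }
```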
In order to set up the HLF cluster, we need a set of host machines. Ansible will communicate with these machines and set up your cluster.
inventory/hosts_template
[all:children]
swarm_manager_prime
swarm_managers
swarm_workers
[swarm_manager_prime]
[swarm_managers]
[swarm_workers]
inventory/hosts
with the names of the hosts that you want to create. Each line/row in the file represents a host machine. The lines with square brackets []
represent groups for internal reference in the project and must not be changed. Please fill each line under a group in the format:
hostname ansible_host=remote.machine1.ip.address ansible_python_interpreter="/usr/bin/python3"
hostname
: can be any name. It must be unique for each machine. The project will internally refer to the machines by this name.
ansible_host
: the IP address of the remote host. This machine should be accessible over the network at this IP address.
ansible_python_interpreter
: in order for Ansible to work, we need Python 2.7.x or above available on each remote machine. Here we specify the path of Python on the remote machine so that our local Ansible project knows where to find Python on these machines.
[all:children]
swarm_manager_prime
swarm_managers
swarm_workers
[swarm_manager_prime]
hlf0 ansible_host=147.182.121.59 ansible_python_interpreter=/usr/bin/python3
[swarm_managers]
hlf0 ansible_host=147.182.121.59 ansible_python_interpreter=/usr/bin/python3
hlf1 ansible_host=117.247.73.159 ansible_python_interpreter=/usr/bin/python3
[swarm_workers]
hlf2 ansible_host=157.245.79.195 ansible_python_interpreter=/usr/bin/python3
hlf3 ansible_host=157.78.79.201 ansible_python_interpreter=/usr/bin/python3
hlf4 ansible_host=157.190.65.188 ansible_python_interpreter=/usr/bin/python3
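Before running any playbook, it is worth verifying that Ansible can reach every host in the inventory. A minimal sketch, assuming the inventory file above and the same root user used throughout this guide:

```shell
# Ping every host defined in inventory/hosts; each reachable machine with a
# working Python interpreter should answer "pong".
ansible all -i inventory/hosts -m ping -u root
```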
Setting up the Hyperledger Fabric cluster requires the following steps: creating the infrastructure with all dependencies installed, starting the HLF services on all the host machines, and finally mounting the GlusterFS mount point.
!!! In our case, the user root has passwordless SSH access to all the remote machines. If yours is different, please change the value of the -u argument to the appropriate user.
Playbook: 011.initialize_hosts.yml
ansible-playbook -v 011.initialize_hosts.yml -u root
Playbook: 012.prepare_docker_images.yml
ansible-playbook -v 012.prepare_docker_images.yml -u root
Playbook: 013.mount_fs.yml
ansible-playbook -v 013.mount_fs.yml -u root
Playbook: 014.spawn_swarm.yml
ansible-playbook -v 014.spawn_swarm.yml -u root
inventory/hosts
Playbook: 015.deploy_swarm_visualizer.yml
ansible-playbook -v 015.deploy_swarm_visualizer.yml -u root
Playbook: 016.deploy_portainer.yml
ansible-playbook -v 016.deploy_portainer.yml -u root
This will list all swarm information. Almost all swarm management operations are supported.
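Independently of the visualizer and Portainer, the swarm state can also be checked from the command line. A sketch, assuming SSH access as root to the prime manager:

```shell
# On the swarm prime manager: list all nodes with their manager/worker roles,
# then list the services currently running on the swarm.
docker node ls
docker service ls
```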
Playbook: 100.deploy_ca.yml
ansible-playbook -v 100.deploy_ca.yml -u root
Playbook: 101.deploy_orderer.yml
ansible-playbook -v 101.deploy_orderer.yml -u root
appchannel
Playbook: 102.deploy_peers.yml
ansible-playbook -v 102.deploy_peers.yml -u root
appchannel
and joins each peer to this channel. It also updates the channel with the anchor peer transaction.
Playbook: 103.deploy_cli.yml
Execute: ansible-playbook -v 103.deploy_cli.yml -u root
Contains mounts of MSPs for all agents (admin, orderer, peers, ...)
Can perform any and all operations on the blockchain by changing its profile to any of the mounted agents
Mounts a test chaincode under /root/CLI/chaincodes/test_chaincode
Sanity Check the working of the cluster
Install, Instantiate and Test Chaincode
docker exec -it <<CLI_ID>> bash
PEER_HOST=peer2
CORE_PEER_ADDRESS=${PEER_HOST}:7051
CORE_PEER_MSPCONFIGPATH=/root/CLI/${ORGCA_HOST}/${ADMIN_USER}/msp
CORE_PEER_TLS_ROOTCERT_FILE=/root/CLI/${ORGCA_HOST}/${PEER_HOST}/msp/tls/ca.crt
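The variables above compose the CLI's target-peer settings; ORGCA_HOST and ADMIN_USER are assumed to be pre-set inside the CLI container, so this sketch only shows the peer-address part, which can be checked anywhere:

```shell
# Selecting a target peer: the peer endpoint is derived from the peer host
# name plus the standard Fabric peer port 7051.
PEER_HOST=peer2
CORE_PEER_ADDRESS=${PEER_HOST}:7051
echo "$CORE_PEER_ADDRESS"   # prints peer2:7051
```

Switching the target peer is then just a matter of changing PEER_HOST; the MSP and TLS paths update through the same variable.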
Install the chaincode on peer 2
CORE_PEER_ADDRESS=$CORE_PEER_ADDRESS CORE_PEER_MSPCONFIGPATH=$CORE_PEER_MSPCONFIGPATH CORE_PEER_TLS_ROOTCERT_FILE=$CORE_PEER_TLS_ROOTCERT_FILE peer chaincode install -n testcc -v 1.0 -l node -p /root/CLI/chaincodes/test_chaincode/node
CORE_PEER_ADDRESS=$CORE_PEER_ADDRESS CORE_PEER_MSPCONFIGPATH=$CORE_PEER_MSPCONFIGPATH CORE_PEER_TLS_ROOTCERT_FILE=$CORE_PEER_TLS_ROOTCERT_FILE peer chaincode instantiate -C appchannel -n testcc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -o ${ORDERER_HOST}:7050 --tls --cafile ${CORE_PEER_TLS_ROOTCERT_FILE}
CORE_PEER_ADDRESS=$CORE_PEER_ADDRESS CORE_PEER_MSPCONFIGPATH=$CORE_PEER_MSPCONFIGPATH CORE_PEER_TLS_ROOTCERT_FILE=$CORE_PEER_TLS_ROOTCERT_FILE peer chaincode list --installed
CORE_PEER_ADDRESS=$CORE_PEER_ADDRESS CORE_PEER_MSPCONFIGPATH=$CORE_PEER_MSPCONFIGPATH CORE_PEER_TLS_ROOTCERT_FILE=$CORE_PEER_TLS_ROOTCERT_FILE peer chaincode list --instantiated -C appchannel
CORE_PEER_ADDRESS=$CORE_PEER_ADDRESS CORE_PEER_MSPCONFIGPATH=/root/CLI/${ORGCA_HOST}/${PEER_HOST}/msp CORE_PEER_TLS_ROOTCERT_FILE=$CORE_PEER_TLS_ROOTCERT_FILE peer chaincode query -C appchannel -n testcc -c '{"Args":["query","a"]}'
CORE_PEER_ADDRESS=$CORE_PEER_ADDRESS CORE_PEER_MSPCONFIGPATH=/root/CLI/${ORGCA_HOST}/${PEER_HOST}/msp CORE_PEER_TLS_ROOTCERT_FILE=$CORE_PEER_TLS_ROOTCERT_FILE peer chaincode invoke -C appchannel -n testcc -c '{"Args":["invoke","a","b","10"]}' -o ${ORDERER_HOST}:7050 --tls --cafile ${CORE_PEER_TLS_ROOTCERT_FILE}
CORE_PEER_ADDRESS=$CORE_PEER_ADDRESS CORE_PEER_MSPCONFIGPATH=/root/CLI/${ORGCA_HOST}/${PEER_HOST}/msp CORE_PEER_TLS_ROOTCERT_FILE=$CORE_PEER_TLS_ROOTCERT_FILE peer chaincode query -C appchannel -n testcc -c '{"Args":["query","a"]}'
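Assuming the mounted test chaincode implements the classic two-account transfer example (init sets a=100 and b=200; each invoke moves the given amount from a to b — an assumption about this repository's sample), the expected ledger state after the sequence above can be computed:

```shell
# Expected state of the transfer example after one invoke:
a=100; b=200          # values from the instantiate Args above
amount=10             # amount moved by the invoke Args above
a=$((a - amount))     # "a" loses the amount
b=$((b + amount))     # "b" gains the amount
echo "a=$a b=$b"      # prints a=90 b=210
```

So the first query of "a" should return 100 and the second, after the invoke, 90.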
group_vars/all.yml
as shown below (default port: 3003):
######################################### CLI #############################################
cli: { switch: "on", image: "hyperledger/fabric-tools", tag: "2.2", port: 3003 }
Playbook: 104.deploy_hlf_explorer.yml
ansible-playbook -v 104.deploy_hlf_explorer.yml --flush-cache -u root
Hyperledger Explorer Login Credentials
File Configuration Explanations
Service Configuration Explanations
The current commit specifies that all the explorer services are started as swarm services on the prime manager.
Both services, 1) hlf_explorer_db (a PostgreSQL db) and 2) hlf_explorer, are started on the prime manager.
The playbook also supports deploying the hlf_explorer services using a docker compose file and docker stack deploy.
These features are currently commented out; only swarm service deployment is enabled in this commit.
However, a docker-compose.yaml to deploy the hlf_explorer service is templated and configured dynamically for additional support.
This file will be available at "/root/hlf-explorer/hlf-explorer-docker-compose.yaml" on the prime manager machine.
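As a sketch of that commented-out alternative, the templated compose file could be deployed manually from the prime manager (the stack name hlf_explorer is a placeholder of our choosing, and the path assumes the compose file lands under /root as stated above):

```shell
# On the prime manager: deploy the explorer services as a named stack from the
# dynamically templated compose file.
docker stack deploy -c /root/hlf-explorer/hlf-explorer-docker-compose.yaml hlf_explorer
# Check that the stack's services came up:
docker stack services hlf_explorer
```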
a) Deploying Demo Bank App Chaincode
The chaincode will be auto-installed when the CLI service starts.
Sample Bank App Chaincode Repository : https://github.com/bityoga/articonf-bank-chaincode.git
b) Deploying Demo Bank App Application
105.deploy_bank_app.yml
ansible-playbook -v 105.deploy_bank_app.yml --flush-cache -u root
106.deploy_rest_api.yml
ansible-playbook -v 106.deploy_rest_api.yml --flush-cache -u root