The Kube test network provides a good opportunity to showcase and promote best practices for building cloud-native Fabric applications using the new Gateway and Chaincode-as-a-Service SDKs. This can be further improved by setting up an automated CI pipeline for an initial test and validation of basic-asset-transfer on Kubernetes.
The scope for this issue is to set up ONE CI test suite, using it as an opportunity to build up the framework and tooling such that it has a good chance of applying to all of the Fabric samples, without forcing a refactoring of all of the sample code. The long-term vision is to establish a CI flow supporting a mix of remote Kubernetes (AKS, EKS, GKE, IKS, etc.), local Kubernetes (KIND, Rancher, minikube, etc.), and legacy Docker Compose test networks. This issue is NOT an opportunity to refactor the entire samples project to align with the Gateway and Kube platforms - it's just working through the mechanics of getting ONE test suite up and running, exercising the parts, and setting up for a long-term alignment with Fabric 3.
The scope of work in this issue involves:
- Ensuring an Azure image includes the necessary prerequisites for a test run (`kind`, `kubectl`, `docker`, `jq`, etc.)
- Creating a `ci/scripts/run-k8s-test-network-basic.sh` script and linking it into the CI / merge pipeline.
- Setting up / tearing down an ephemeral KIND cluster for the scope of a suite; setting up / tearing down a Fabric network for the scope of a test.
- Compiling, building, and tagging a Docker image using `/asset-transfer-basic/chaincode-external` (or some suitable CC dialect).
- Deploying the chaincode to Kubernetes using the "Chaincode-as-a-Service" pattern. The connection and metadata JSON files should be moved from the `test-network-k8s/chaincode` folder over to the external chaincode folder. Each "externally built" chaincode project should contain a fully self-describing environment for building, deploying, and testing the CC in Kubernetes (or within a local IDE/debugger).
- Deploying an ingress controller OR port-forwarding to expose a gateway peer on a host-local port. Some consideration may be necessary to align with DNS, e.g. ensuring that the peer TLS certificate CSRs include a `localhost`, `*.vcap.me`, or `*.nip.io` host alias for the gateway peer.
- Extracting the gateway client and/or Admin certificates in a manner that is amenable to running the application on the local host OS. One challenge in this area is that ALL of the Fabric samples hard-code a path structure within `/test-network/organizations/*`, assuming that certificates were created using `cryptogen` and the Compose `test-network`. The plan for this issue is to overlay certificate structures from the Kube test network into the target folder structure of the `test-network`. While this is a bizarre technique, it postpones the need to refactor any of the client application logic as part of this PR / issue. (We will need to address this as part of a touch-up of the fabric-samples when making a concerted effort to update everything to the Gateway SDKs.)
- Running a gateway client application, validating the exit code and/or system output. Use @sapthasurendran's new `application-gateway-typescript` as the reference gateway application. Note that the gateway client application will build and run locally on the host OS, tunnel through the k8s ingress, and connect to the remote CC running adjacent to the peer as a pod in Kubernetes.
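The suite/test lifecycle described above could be sketched roughly as below. This is a hypothetical outline of `ci/scripts/run-k8s-test-network-basic.sh`, not a settled implementation: the cluster name, the `./network` helper, the service name, and the port numbers are all illustrative assumptions. The functions are only executed when a CI guard variable is set, so the sketch can be sourced safely.

```shell
#!/usr/bin/env bash
#
# Hypothetical sketch of ci/scripts/run-k8s-test-network-basic.sh.
# Cluster name, ./network helper, service name, and ports are assumptions.
set -euo pipefail

suite_setup() {
  # One ephemeral KIND cluster for the whole suite
  kind create cluster --name fabric-ci
}

suite_teardown() {
  kind delete cluster --name fabric-ci
}

test_setup() {
  # Fresh Fabric network for each test
  ./network up
  ./network channel create
}

test_teardown() {
  ./network down
}

run_basic_asset_transfer() {
  # Expose the gateway peer on a host-local port, run the client, rely on its exit code
  kubectl port-forward svc/org1-peer1 7051:7051 &
  local pf_pid=$!
  (cd asset-transfer-basic/application-gateway-typescript && npm ci && npm start)
  kill "$pf_pid"
}

# Only execute when explicitly invoked from CI, so this sketch is safe to source.
if [[ "${RUN_K8S_CI:-0}" == "1" ]]; then
  trap suite_teardown EXIT
  suite_setup
  test_setup
  run_basic_asset_transfer
  test_teardown
fi
```

Keeping suite-scoped steps (cluster) separate from test-scoped steps (Fabric network) keeps a later migration to per-test network teardown cheap.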
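For the Chaincode-as-a-Service packaging step, the connection and metadata files are small JSON documents that point the peer at the chaincode's network address rather than bundling code. A minimal sketch of assembling them in a scratch folder follows; the service address, label, and `type` string are illustrative assumptions (the exact `type` value depends on which external builder the peer is configured with):

```shell
# Sketch: assemble CCaaS package files in a scratch directory.
# Address, label, and "type" below are assumptions for illustration.
PKG_DIR=$(mktemp -d)

cat > "$PKG_DIR/connection.json" <<'EOF'
{
  "address": "asset-transfer-basic.default.svc.cluster.local:9999",
  "dial_timeout": "10s",
  "tls_required": false
}
EOF

cat > "$PKG_DIR/metadata.json" <<'EOF'
{
  "type": "external",
  "label": "asset-transfer-basic"
}
EOF

# Peers expect the connection.json wrapped in an inner code.tar.gz,
# which sits alongside metadata.json in the outer package archive.
tar -C "$PKG_DIR" -czf "$PKG_DIR/code.tar.gz" connection.json
tar -C "$PKG_DIR" -czf "$PKG_DIR/asset-transfer-basic.tgz" metadata.json code.tar.gz

ls "$PKG_DIR"
```

Co-locating these files with the external chaincode project (rather than under `test-network-k8s/chaincode`) is what makes each chaincode folder self-describing.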
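The certificate-overlay idea is mechanical: copy the enrollments produced by the Kube test network into the folder layout the sample applications hard-code. A toy sketch using scratch directories is below; the source layout and the user/org names are placeholders, not the real paths:

```shell
# Sketch of the certificate overlay, using scratch directories so it runs anywhere.
# SRC stands in for crypto material extracted from the Kube test network;
# DEST stands in for the hard-coded test-network/organizations layout.
SRC=$(mktemp -d)
DEST=$(mktemp -d)

# Pretend the Kube network wrote a user MSP here (placeholder content)
mkdir -p "$SRC/org1/users/org1admin/msp/signcerts"
echo "fake-cert" > "$SRC/org1/users/org1admin/msp/signcerts/cert.pem"

# Overlay into the path structure the sample applications expect
TARGET="$DEST/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp"
mkdir -p "$(dirname "$TARGET")"
cp -r "$SRC/org1/users/org1admin/msp" "$TARGET"

find "$DEST" -name cert.pem
```

Because the overlay only touches the filesystem, none of the client application code needs to change in this issue; the hard-coded paths simply resolve to Kube-issued material.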
Start with just this one sample application - get it working, and review with the team on outcomes and next steps. If it pans out well then consider adding a supplemental path to the newbie app developer guide.
cc: @denyeart @bestbeforetoday @mbwhite