Deploy a Hyperledger Fabric network on a Kubernetes cluster
This tutorial covers setting up Hyperledger Fabric v1.4 on a Kubernetes cluster.
Prerequisites
You should have a basic understanding of containerization technology and of how Kubernetes works, plus the following:
- A Kubernetes cluster up and running, with access to it via kubectl. If you wish to set one up from scratch, please follow the tutorial here: “Setup a Kubernetes cluster from scratch”
- An NFS server or equivalent. You can set up an NFS server using the tutorial here. You only need the server-side setup, as the Kubernetes cluster nodes will act as clients.
- Network artifacts (crypto material and genesis block) generated for the network you wish to deploy. You can find the artifacts used in this tutorial here.
- Knowledge of common operations using kubectl commands is highly recommended
Configuration
The tutorial will deploy a network of two organisations, P1 and P2, with a five-node ordering service using Raft consensus.
All the boxes in Fig 1. will run inside Docker containers. The peers use LevelDB as their state database. The large dark blue box is the NFS server that all the containers share.
Setup
Step 1: Clone the GitHub repository
https://github.com/harishgupta/fabric-k8s
Step 2: Open a command line in the cloned directory. Create a ConfigMap from the genesis.block file using the command below
kubectl create configmap kubetest-genesis --from-file=genesis.block
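To give a sense of how this ConfigMap might be consumed, here is an illustrative fragment of an orderer pod spec that mounts it and points the orderer at the genesis block. The volume name, mount path, and image tag below are assumptions for illustration; the actual manifests in kube.yaml may differ.

```yaml
# Illustrative fragment only -- consult kube.yaml for the real spec
spec:
  containers:
    - name: orderer
      image: hyperledger/fabric-orderer:1.4
      env:
        - name: ORDERER_GENERAL_GENESISMETHOD
          value: file
        - name: ORDERER_GENERAL_GENESISFILE
          value: /var/hyperledger/orderer/genesis.block
      volumeMounts:
        - name: genesis           # mounts the ConfigMap created above
          mountPath: /var/hyperledger/orderer
  volumes:
    - name: genesis
      configMap:
        name: kubetest-genesis
```

Mounting the ConfigMap as a volume makes genesis.block appear as a regular file inside the container, which is what the orderer expects.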
Step 3: Copy the network artifacts (the fabric-files directory from the cloned repository) to the NFS file server. Note the path of the directory where the folder was stored, e.g.
/var/nfs/general/fabric-files
Step 4: Create a Persistent Volume and Persistent Volume Claim. Edit the file pv-pvc.yaml to set the folder path and server address.
nfs:
  path: /var/nfs/general/fabric-files   # as per your folder location
  server: <your nfs server ip/host name>
  readOnly: false
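For reference, a minimal PV/PVC pair for this setup might look like the sketch below. The names pv-fabric and pvc-fabric match the describe commands used later in this tutorial; the capacity and access mode are illustrative assumptions, and the repository's pv-pvc.yaml may differ.

```yaml
# Minimal sketch of an NFS-backed PV and a matching PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-fabric
spec:
  capacity:
    storage: 5Gi              # illustrative size
  accessModes:
    - ReadWriteMany           # NFS allows shared read-write access
  nfs:
    path: /var/nfs/general/fabric-files
    server: <your nfs server ip/host name>
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-fabric
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```

The claim binds to the volume when its access mode and requested size are compatible; pods then reference the claim by name rather than the NFS server directly.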
If you are not familiar with the Persistent Volume and Persistent Volume Claim concepts, think of them as a way of telling Kubernetes about the shared storage. Once you have updated pv-pvc.yaml, run the following command
kubectl apply -f pv-pvc.yaml
You can run the following commands to check that everything went fine
kubectl describe pv pv-fabric
kubectl describe pvc pvc-fabric
If you don’t see any errors, you are good to go.
Step 5: Set up the orderers, peers, and their services. Once the above steps are done, we can finally start the network. Run the command below to set up the five-node ordering service and one peer for each of the organisations P1 and P2
kubectl apply -f kube.yaml
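To give a sense of what kube.yaml contains, a peer Deployment typically wires Fabric's core environment variables to the shared storage along these lines. The names, MSP ID, and paths below are illustrative assumptions; consult the repository's kube.yaml for the real values.

```yaml
# Illustrative sketch of a peer Deployment -- not the repository's actual manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: peer0-p1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: peer0-p1
  template:
    metadata:
      labels:
        app: peer0-p1
    spec:
      containers:
        - name: peer
          image: hyperledger/fabric-peer:1.4
          env:
            - name: CORE_PEER_ID
              value: peer0-p1
            - name: CORE_PEER_LOCALMSPID
              value: P1MSP              # illustrative MSP ID
            - name: CORE_LEDGER_STATE_STATEDATABASE
              value: goleveldb          # LevelDB as the state database
          volumeMounts:
            - name: fabric-files        # crypto material from the NFS share
              mountPath: /etc/hyperledger/fabric-files
      volumes:
        - name: fabric-files
          persistentVolumeClaim:
            claimName: pvc-fabric
```

Because every container mounts the same claim, the crypto material copied to the NFS server in Step 3 is visible to all peers and orderers.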
Run the command below to check the status of the deployed pods
kubectl get po
You should see something similar to this
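The output will have roughly the shape below: five orderer pods and one peer pod per organisation, all eventually reaching Running status. The pod names and hash suffixes are illustrative and will differ on your cluster.

```
NAME                            READY   STATUS    RESTARTS   AGE
orderer0-app-68bb8d49dc-wfwvj   1/1     Running   0          2m
orderer1-app-<hash>             1/1     Running   0          2m
...
peer0-p1-app-<hash>             1/1     Running   0          2m
peer0-p2-app-<hash>             1/1     Running   0          2m
```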
If any of the pods are not in Running status, check the logs of that pod using the command below. This example checks the orderer0 logs
kubectl logs orderer0-app-68bb8d49dc-wfwvj
If you see “Raft leader changed” in the logs, it is an indicator that the orderer nodes have started talking to each other
[orderer.consensus.etcdraft] serveRequest -> INFO 03d Raft leader changed: 0 -> 5 channel=byfn-sys-channel node=1
If any of the pods are not in “Running” status and have failed to start altogether, there may not be any logs available. In that case, run the command below to check the errors preventing the containers/pods from starting.
kubectl describe pod <pod-name>
If you wish to connect to the network for further operations, like creating a channel or chaincode operations, you need to make use of the exposed services. You can see the list of services and their exposed ports using the command below
kubectl get svc
The output should look something like this
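The exact service names and ports depend on kube.yaml, but the listing will have this general shape. Everything below (names, IPs, NodePort numbers) is illustrative.

```
NAME        TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
orderer0    NodePort   10.0.12.34    <none>        7050:30750/TCP   5m
peer0-p1    NodePort   10.0.56.78    <none>        7051:30751/TCP   5m
peer0-p2    NodePort   10.0.90.12    <none>        7051:30752/TCP   5m
```

For NodePort services, clients outside the cluster reach a service at any node's IP on the high-numbered port shown after the colon.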
Channel and chaincode operations are a separate topic altogether, which I will cover in a separate tutorial soon.