I recently had the “amazing 🙄” opportunity to spend two weeks figuring out how to install Hyperledger Fabric on Kubernetes. There aren’t many guides on the internet besides this one. Unfortunately, I found it to be very vague and missing some key steps.
For this tutorial, let’s go step by step through installing Hyperledger Fabric on Kubernetes. We are going to use some information from the guide linked above, along with some things my team and I figured out along the way. For this setup, we will be creating a Kubernetes cluster with two nodes.
Environment
We are going to start by installing Kubernetes on Ubuntu. I tried some of the other approaches, like Microk8s and Minikube, but installing barebones kubectl and kubeadm was the only approach that worked for my environment. I am following the instructions outlined on the official Kubernetes install page.
Install Python 3.5
There are lots of ways to install Python; here’s the quickest:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.5
Install Docker and Docker-Compose
Begin by installing docker and docker-compose on your system.
sudo apt-get -y remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Make sure docker and docker-compose are properly installed
docker --version
docker-compose --version
If they return a version number then you are good to go.
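If you want to check the versions in a script rather than by eye, you can parse them out of the output. This is just a sketch: the version string below is a sample, and the exact `docker --version` output format can vary between releases.

```shell
# Sample output line from `docker --version` (format may vary by release).
line='Docker version 19.03.5, build 633a0ea838'

# The version is the third whitespace-separated field; strip the trailing comma.
version=$(echo "$line" | awk '{print $3}' | tr -d ',')
echo "$version"
```

In practice you would replace the hardcoded line with `line=$(docker --version)`.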
Install Kubernetes
Now let’s move on to installing Kubernetes. Download and install kubeadm and kubectl. You will need to run the following as root.
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
Disable Swap
Kubernetes will not run unless swap is disabled.
sudo swapoff -a
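Note that swapoff -a only lasts until the next reboot. To disable swap permanently, comment out the swap entry in /etc/fstab. Here is a sketch of how to do that with sed, demonstrated on a scratch copy of a sample fstab so nothing on your system is touched; to apply it for real, run the same sed against /etc/fstab.

```shell
# Build a scratch fstab with a sample swap entry (illustrative content only).
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF

# Comment out any line whose filesystem type is "swap".
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' "$fstab"
cat "$fstab"
```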
Initialize Network
We will need our pods to communicate with each other. In order to do this, we will need to install a network add-on. I used the flannel add-on. In order for the flannel add-on to work properly, we will need to pass some extra parameters when we init Kubeadm.
sysctl net.bridge.bridge-nf-call-iptables=1
kubeadm init --pod-network-cidr=10.244.0.0/16
After initialization, you should get some extra instructions (posted below) detailing how to run kubectl as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
It’s important that you run the above commands; otherwise you will get an error saying kubectl can’t connect to localhost whenever you run any kubectl command.
If you are the root user, use this command instead of the above commands:
export KUBECONFIG=/etc/kubernetes/admin.conf
At the end of the kubeadm init output you will see instructions for how to join another node. You should see something like the following:
kubeadm join 10.17.189.209:6443 --token oyu3i2.md9tnvyp31f5b7ju --discovery-token-ca-cert-hash sha256:2960349cff48d1041ff087735b3dbe995642dad557098bca64717f06959890e7
Copy and paste this command into notepad or something. We will use this in a bit.
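If you would rather script the join than paste the whole line around, you can pull the token and hash out of the printed command. The join line below uses the sample values from above; yours will differ.

```shell
# The join line kubeadm prints (token and hash here are the sample values from above).
join_cmd='kubeadm join 10.17.189.209:6443 --token oyu3i2.md9tnvyp31f5b7ju --discovery-token-ca-cert-hash sha256:2960349cff48d1041ff087735b3dbe995642dad557098bca64717f06959890e7'

# Extract the value that follows each flag.
token=$(echo "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
hash=$(echo "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "$token"
echo "$hash"
```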
Install Flannel Addon
Now you need to install the flannel add-on that I mentioned earlier. Run the following in the command line:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
This will install the flannel add-on.
If you don’t plan on adding any additional nodes to your setup, then you can run the following command:
kubectl taint nodes --all node-role.kubernetes.io/master-
This will allow your master node to schedule pods (it can’t by default).
Setting up additional nodes
If you have another node that you want to set up, you will need to repeat the steps to install Docker and Kubernetes on it. Afterwards, paste in that join command that I told you to copy down earlier:
kubeadm join 10.17.189.209:6443 --token oyu3i2.md9tnvyp31f5b7ju --discovery-token-ca-cert-hash sha256:2960349cff48d1041ff087735b3dbe995642dad557098bca64717f06959890e7
This will join the other node to the network. If you forget the token, you can run the following command on the master node to retrieve it:
kubeadm token list
Note that kubeadm token list does not show the CA cert hash. Running kubeadm token create --print-join-command on the master node will print a complete, ready-to-use join command instead.
On the master node, you should be able to see both nodes in the Ready state using this command:
kubectl get nodes
You won’t be able to issue kubectl commands from the worker nodes unless you copy admin.conf over from the Kubernetes directory on the master. You can find instructions on how to do that on the Kubernetes website.
Set up an NFS Share
In the example network that we will create, there is a requirement to mount a PersistentVolume over NFS. If you’re using a single node, you could change that to hostPath and use your local file system; otherwise you will have to set up an NFS share so the nodes can access the necessary data. On your master node, install the NFS server:
sudo apt-get install nfs-kernel-server
On your worker node, install the NFS client:
sudo apt-get install nfs-common
On your master node, create the directory /opt/share:
sudo mkdir -p /opt/share
Relax the permissions on that folder so that it can be read and written by anybody:
sudo chown nobody:nogroup /opt/share
sudo chmod 777 /opt/share
Add an entry to your NFS exports file so that clients can access the share:
sudo nano /etc/exports
Add the following line:
/opt/share *(rw,sync,no_subtree_check,no_root_squash)
Save the file (CTRL + O). Now export the shares and restart the server:
sudo exportfs -a
sudo systemctl restart nfs-kernel-server
On the worker node, mount the NFS using the following command:
sudo mount 10.62.157.209:/opt/share /opt/share
The IP address is the IP address of your master node. To mount the share automatically on every boot, add the following to your fstab:
sudo nano /etc/fstab
10.62.157.209:/opt/share /opt/share nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
Save that file and you’re good. Try creating a test file in the NFS folder and make sure you can see it on both systems.
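One easy mistake here is leaving out a field in the fstab entry: a valid line has exactly six whitespace-separated fields (device, mount point, filesystem type, options, dump, pass), and for an NFS mount the third field must be nfs. A quick sanity check on the entry above:

```shell
# The fstab entry for the NFS share (IP is the master node's address from above).
entry='10.62.157.209:/opt/share /opt/share nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0'

# Count the whitespace-separated fields; a well-formed fstab line has six.
fields=$(echo "$entry" | awk '{print NF}')
echo "$fields"
```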
Hyperledger Fabric on Kubernetes
Now let’s get to the fun part. The configuration of our network will look like the following:
crypto-config
|--- ordererOrganizations
| |--- orgorderer1
| |--- msp
| |--- ca
| |--- tlsca
| |--- users
| |--- orderers
| |--- orderer0.orgorderer1
| |--- msp
| |--- tls
|
|--- peerOrganizations
|--- org1
| |--- msp
| |--- ca
| |--- tlsca
| |--- users
| |--- peers
| |--- peer0.org1
| | |--- msp
| | |--- tls
| |--- peer1.org1
| |--- msp
| |--- tls
|--- org2
|--- msp
|--- ca
|--- tlsca
|--- users
|--- peers
|--- peer0.org2
| |--- msp
| |--- tls
|--- peer1.org2
|--- msp
|--- tls
You can read more about this layout in the guide linked at the top.
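Once the crypto material has been generated, it is worth sanity-checking that the key directories from the layout above actually exist before deploying anything. Here is a sketch of such a check; it builds the expected tree in a scratch directory purely for demonstration, so point ROOT at your real crypto-config directory to check generated material the same way.

```shell
# Scratch directory standing in for the real crypto-config folder.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/ordererOrganizations/orgorderer1/orderers/orderer0.orgorderer1/msp"
mkdir -p "$ROOT/peerOrganizations/org1/peers/peer0.org1/msp"
mkdir -p "$ROOT/peerOrganizations/org2/peers/peer0.org2/msp"

# Verify the MSP directories from the layout above are present.
missing=0
for d in \
  "ordererOrganizations/orgorderer1/orderers/orderer0.orgorderer1/msp" \
  "peerOrganizations/org1/peers/peer0.org1/msp" \
  "peerOrganizations/org2/peers/peer0.org2/msp"
do
  [ -d "$ROOT/$d" ] || { echo "missing: $d"; missing=1; }
done
echo "missing=$missing"
```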
All of these steps will be performed on your master node. Start off in your home directory and clone the following git repo:
git clone https://github.com/hainingzhang/articles.git
Template Changes
Change to the templates directory (inside of Fabric-on-K8s/SetupCluster). You will need to make the following changes:
In the fabric_1_0_template_pod_cli.yaml file, add the following in the env section under the container section:
- name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
value: bridge
Next, change the address of the NFS file share so that it points to your master node’s IP address:
nfs:
path: /opt/share/channel-artifacts
server: 10.62.157.209
Save that file. Next in the fabric_1_0_template_pod_namespace.yaml file, change the address of the NFS file share so that it points to your master node IP address.
nfs:
path: $path
server: 10.62.157.209
Save that file. Next in the fabric_1_0_template_pod_orderer.yaml file, add the following in the env section under the container section:
- name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
value: bridge
Save that file. Finally in the fabric_1_0_template_pod_peer.yaml file, add the following lines in the env section under the container section:
- name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
value: bridge
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
Save that file.
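Since the same NFS server address has to be changed in several templates, you can patch them all with sed instead of editing each file by hand. The snippet below is a sketch demonstrated on a scratch file; to apply it for real, run the same sed over the yaml files in the templates directory with your own master IP.

```shell
MASTER_IP=10.62.157.209   # replace with your master node's IP address

# Scratch file standing in for one of the templates.
f=$(mktemp)
cat > "$f" <<'EOF'
nfs:
  path: /opt/share/channel-artifacts
  server: 0.0.0.0
EOF

# Rewrite the server line to point at the master node.
sed -i "s/server: .*/server: $MASTER_IP/" "$f"
grep 'server:' "$f"
```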
Building the Containers
Go back to the Fabric-on-K8s folder. We will proceed to download the Hyperledger Fabric binaries. For this tutorial, we are using Fabric version 1.1. If you want to use a newer version, you will need to update configtx.yaml to support newer keywords and formatting. Slight warning: it becomes a pain to do so.
Download the hyperledger binaries in the Fabric-on-K8s directory:
curl -sSL http://bit.ly/2ysbOFE | bash -s -- 1.1.0
Once the download completes, you will see a folder called fabric-samples. Inside that folder is another folder called bin. Move that bin folder outside of the fabric-samples folder (one level up):
mv ./fabric-samples/bin ./
Now go to the SetupCluster folder and run the generateAll.sh script:
chmod a+x ./generateAll.sh
./generateAll.sh
Next, run the python script to build the containers:
python transform/run.py
All of the containers should now be created. You should see a list of running pods that includes peer0, peer1, ca, and cli when issuing the following command:
kubectl get pods --all-namespaces
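A quick way to spot pods that are not yet up is to filter the STATUS column. The snippet below runs against a sample of `kubectl get pods --all-namespaces` output (pod names and suffixes are illustrative); in your cluster, pipe the real command into the same awk.

```shell
# Sample `kubectl get pods --all-namespaces` output (names are illustrative).
sample='NAMESPACE   NAME                     READY   STATUS    RESTARTS   AGE
org1        peer0-3017859437-x8zzs   1/1     Running   0          2m
org1        cli-2586535453-yclmr     1/1     Running   0          2m
org2        peer0-1234567890-abcde   0/1     Pending   0          2m'

# Skip the header, then print the name of every pod whose status is not Running.
not_running=$(echo "$sample" | awk 'NR>1 && $4 != "Running" {print $2}')
echo "$not_running"
```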
Creating Channels
Now you need to create the channel creation transaction. Go back to your SetupCluster folder and run the following command:
../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
It should output channel.tx into the channel-artifacts folder. Now create an update transaction that will set the anchor peer of Org1:
../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
Now do the same thing with Org2:
../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP
Now copy the channel-artifacts folder to the /opt/share directory so that it is accessible from the CLI:
sudo cp -r ./channel-artifacts /opt/share
Installing and Instantiating Chaincode
We will need to manually enter into the CLI container to install the chaincode. First, find the ID of your CLI pod:
kubectl get pods --namespace org1
You should get a list of the pods in org1. Find the one with cli in its name and exec into it:
kubectl exec -it cli-2586535453-yclmr bash --namespace=org1
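The random suffix on the cli pod’s name changes every deployment, so it can be handy to grab the name programmatically rather than copying it by hand. This is simulated on sample pod names below; in the cluster, replace the hardcoded list with the real `kubectl get pods --namespace org1 -o name` output.

```shell
# Sample pod names from the org1 namespace (suffixes are illustrative).
pods='peer0-3017859437-x8zzs
peer1-3017859437-9k2lm
cli-2586535453-yclmr'

# Pick out the pod whose name starts with "cli-".
cli_pod=$(echo "$pods" | grep '^cli-')
echo "$cli_pod"

# In the cluster you would then run:
# kubectl exec -it "$cli_pod" --namespace=org1 bash
```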
Once you enter into the pod’s terminal, create a channel:
peer channel create -o orderer0.orgorderer1:7050 -c mychannel -f ./channel-artifacts/channel.tx
Then copy the channel block to channel artifacts:
cp mychannel.block ./channel-artifacts
Finally, join this peer to the channel:
peer channel join -b ./channel-artifacts/mychannel.block
Now update the anchor peer. You will need to do this once for each organization:
peer channel update -o orderer0.orgorderer1:7050 -c mychannel -f ./channel-artifacts/Org1MSPanchors.tx
Next, change to the channel-artifacts directory inside of that pod. It should be one level up from where you currently are. Once you are inside that directory, download the fabric-samples repo:
git clone https://github.com/hyperledger/fabric-samples.git
Now install the chaincode:
peer chaincode install -n mybb -v 1.0 -p github.com/hyperledger/fabric/peer/channel-artifacts/fabric-samples/chaincode/chaincode_example02/go
Afterwards, instantiate the chaincode:
peer chaincode instantiate -o orderer0.orgorderer1:7050 -C mychannel -n mybb -v 1.0 -c '{"Args":["init","a","100","b","200"]}'
Finally, test a query.
peer chaincode query -C mychannel -n mybb -c '{"Args":["query","a"]}'
If you get back 100, then everything works! You can repeat the channel join, install, and instantiation steps in the CLI pod of Org2.
So that’s it! From this point you are on your own. Hopefully this helps someone. Like I said, there were only a couple of guides on the internet, and I ran into a lot of hiccups and quirks trying to get this basic demo working, so hopefully this saves you some time.