On-Premises Deployment
1. Wireguard
The Wireguard bastion server provides a secure private channel to access the MOSIP cluster.
The bastion server restricts public access and allows only those clients whose public keys are listed on the Wireguard server.
The bastion server listens on UDP port 51820.
If you already have a VPN configured to access the nodes privately, skip the Wireguard installation and continue to use that VPN.
Setup Wireguard VM and wireguard bastion server
Create a Wireguard server VM as per the 'Hardware and Network Requirements'.
Open the required ports in the bastion server VM.
```
cd $K8_ROOT/wireguard/
```
Create a copy of `hosts.ini.sample` as `hosts.ini` and update the required details for the Wireguard VM:
```
cp hosts.ini.sample hosts.ini
```
Note:
Remove the complete `[Cluster]` section from the copied `hosts.ini` file.
Add the below mentioned details:
ansible_host : public IP of Wireguard Bastion server. eg. 100.10.20.56
ansible_user : user to be used for installation. In this ref-impl we use Ubuntu user.
ansible_ssh_private_key_file : path to pem key for ssh to wireguard server. eg.
~/.ssh/wireguard-ssh.pem
Execute ports.yaml to enable ports at the VM level using ufw:
ansible-playbook -i hosts.ini ports.yaml
Note:
The pem files used to access the nodes should have 400 permissions:
```
sudo chmod 400 ~/.ssh/privkey.pem
```
These ports only need to be opened for sharing packets over UDP.
Take necessary measures at the firewall level so that the Wireguard server is reachable on 51820/udp.
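To confirm the port is actually open, a quick check such as the sketch below can help (the UDP probe is best-effort, since UDP gives no handshake; the IP is a placeholder):

```bash
# On the bastion server: confirm ufw allows the Wireguard port.
sudo ufw status | grep 51820

# From your PC: best-effort UDP probe (no response usually means the packet was not rejected).
nc -u -z -v <bastion-public-ip> 51820
```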
Install docker
Execute docker.yaml to install Docker and add the user to the docker group:
ansible-playbook -i hosts.ini docker.yaml
Setup Wireguard server
SSH to the Wireguard VM:
```
ssh -i <path to .pem> ubuntu@<Wireguard server public ip>
```
Create a directory for storing Wireguard config files:
```
mkdir -p wireguard/config
```
Install and start the Wireguard server using Docker as given below:
```
sudo docker run -d \
  --name=wireguard \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_MODULE \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Asia/Calcutta \
  -e PEERS=30 \
  -p 51820:51820/udp \
  -v /home/ubuntu/wireguard/config:/config \
  -v /lib/modules:/lib/modules \
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
  --restart unless-stopped \
  ghcr.io/linuxserver/wireguard
```
Note:
Increase the no. of peers above in case more than 30 wireguard client confs (-e PEERS=30) are needed.
Change the directory mounted into the Wireguard container as per need. All Wireguard peer confs will be generated in the mounted directory (`-v /home/ubuntu/wireguard/config:/config`).
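After starting the container, a quick sanity check like the sketch below confirms it is running and that peer configs were generated in the mounted directory (paths match the docker run command above):

```bash
# Container should be 'Up'; the config directory should contain peer1, peer2, ... folders.
sudo docker ps --filter name=wireguard
sudo docker logs wireguard | tail -n 20
ls /home/ubuntu/wireguard/config
```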
Setup Wireguard Client in your PC
Install the Wireguard client on your PC.
Assign a `wireguard.conf`:
SSH to the Wireguard server VM:
```
cd /home/ubuntu/wireguard/config
```
Assign one of the peer confs to yourself and use the same from your PC to connect to the server.
Create an `assigned.txt` file to keep track of the peer files allocated, and update it every time a peer is allocated to someone:
```
peer1 : peername
peer2 : xyz
```
Use the `ls` command to see the list of peers.
Go inside your selected peer directory and make the below mentioned changes in `peer.conf`:
```
cd peer1
nano peer1.conf
```
Delete the DNS IP.
Update the allowed IPs to the subnet CIDR. e.g. 10.10.20.0/23
Note:
CIDR Range will be shared by the Infra provider.
Make sure all the nodes are covered in the provided CIDR range. (nginx server, K8 cluster nodes for observation as well as mosip).
Share the updated `peer.conf` with the respective peer to connect to the Wireguard server from their personal PC.
Add `peer.conf` in your PC's `/etc/wireguard` directory as `wg0.conf`.
Start the Wireguard client and check the status:
```
sudo systemctl start wg-quick@wg0
sudo systemctl status wg-quick@wg0
```
Once connected to Wireguard, you should now be able to log in to the nodes using their private IPs.
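On a Linux client you can verify the tunnel with the sketch below (the node IP is a placeholder from your CIDR range):

```bash
# A recent handshake and non-zero transfer counters indicate the tunnel is up.
sudo wg show wg0

# Private IPs of the cluster/nginx nodes should now be reachable.
ping -c 3 <private-ip-of-a-node>
```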
2. Observation K8s Cluster setup and configuration
Install all the required tools mentioned in the 'Personal Computer Setup' section.
Setup Observation Cluster node VM’s hardware and network configuration as per (requirements).
Setup passwordless SSH into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).
Generate keys on your PC:
```
ssh-keygen -t rsa
```
Copy the keys to the remote observation node VMs:
```
ssh-copy-id <remote-user>@<remote-ip>
```
SSH into the node to check password-less SSH:
```
ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>
```
Note:
Make sure the permission of `privkey.pem` used for SSH is set to 400.
Open ports and install docker on Observation K8 Cluster node VM’s.
```
cd $K8_ROOT/rancher/on-prem
```
Copy `hosts.ini.sample` to `hosts.ini` and update the required details:
```
cp hosts.ini.sample hosts.ini
```
Note:
Ensure you are inside the `on-prem` directory as mentioned above.
ansible_host : internal IP of nodes. eg. 100.10.20.56, 100.10.20.57 ...
ansible_user : user to be used for installation. In this ref-implementation we use Ubuntu user.
ansible_ssh_private_key_file : path to pem key for ssh to the cluster nodes. eg.
~/.ssh/nodes-ssh.pem
Update the `vpc_ip` variable in `ports.yaml` with the VPC CIDR to allow access only from machines inside the same VPC.
Note:
CIDR Range will be shared by the Infra provider.
Make sure all the nodes are covered in the provided CIDR range. (nginx server, K8 cluster nodes for observation as well as mosip).
Execute ports.yaml to enable ports at the VM level using ufw:
```
ansible-playbook -i hosts.ini ports.yaml
```
Disable swap in the cluster nodes (ignore if swap is already disabled):
```
ansible-playbook -i hosts.ini swap.yaml
```
Caution: Always verify swap status with `swapon --show` before running the playbook to avoid unnecessary operations.
Execute docker.yaml to install Docker and add the user to the docker group:
```
ansible-playbook -i hosts.ini docker.yaml
```
Creating RKE Cluster Configuration file
```
rke config
```
The command will prompt for node details related to the cluster. Provide inputs for the below mentioned points:
SSH Private Key Path:
Number of Hosts:
SSH Address of host:
SSH User of host:
```
Is host (<node1-ip>) a Control Plane host (y/n)? [y]: y
Is host (<node1-ip>) a Worker host (y/n)? [n]: y
Is host (<node1-ip>) an etcd host (y/n)? [n]: y
```
Make all the nodes `Worker host` by default.
To create an HA cluster, specify more than one host with the `Control Plane` and `etcd host` roles.
Network Plugin Type: continue with canal as the default network plugin.
For the rest of the configuration, choose the required or default values.
As a result of the `rke config` command, a `cluster.yml` file will be generated inside the same directory. Update the below mentioned fields:
```
nano cluster.yml
```
Remove the default Ingress install:
```
ingress:
  provider: none
```
Update the name of the kubernetes cluster in `cluster.yml`:
```
cluster_name: observation-cluster
```
For production deployments, edit `cluster.yml` according to the RKE Cluster Hardening Guide. A trimmed example of the edited fields is sketched below.
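For illustration only, a trimmed `cluster.yml` combining the edits above might look like this sketch; node IPs, user and key path are placeholders, and the real file generated by `rke config` contains many more fields that should be kept.

```bash
# Sketch of the relevant cluster.yml fields (do not replace the generated file with this).
cat <<'EOF' > cluster.yml.sketch
nodes:
  - address: 10.0.0.1
    user: ubuntu
    ssh_key_path: ~/.ssh/nodes-ssh.pem
    role: [controlplane, etcd, worker]
  - address: 10.0.0.2
    user: ubuntu
    ssh_key_path: ~/.ssh/nodes-ssh.pem
    role: [worker]
network:
  plugin: canal
ingress:
  provider: none
cluster_name: observation-cluster
EOF
```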
Set up the cluster:
Once `cluster.yml` is ready, you can bring up the kubernetes cluster using a simple command. This command assumes the `cluster.yml` file is in the same directory where you are running the command:
```
rke up
```
```
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [10.0.0.1]
INFO[0000] [network] Deploying port listener containers
INFO[0000] [network] Pulling image [alpine:latest] on host [10.0.0.1]
...
INFO[0101] Finished building Kubernetes cluster successfully
```
The last line should read `Finished building Kubernetes cluster successfully` to indicate that your cluster is ready to use.
Note:
In case the `rke up` command is unsuccessful due to any underlying error, fix the issue by checking the logs.
Once the issue is fixed, remove the cluster using `rke remove`.
Once `rke remove` is executed successfully, delete the incomplete cluster-related configuration using:
```
ansible-playbook -i hosts.ini ../../utils/rke-components-delete.yaml
```
As part of the Kubernetes creation process, a kubeconfig file has been created and written at `kube_config_cluster.yml`, which can be used to start interacting with your Kubernetes cluster.
Copy the kubeconfig files:
```
cp kube_config_cluster.yml $HOME/.kube/<cluster_name>_config
chmod 400 $HOME/.kube/<cluster_name>_config
```
To access the cluster using the kubeconfig file, use any one of the below methods:
```
cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config
```
Alternatively:
```
export KUBECONFIG="$HOME/.kube/<cluster_name>_config"
```
Test cluster access:
```
kubectl get nodes
```
The command will list the details of the nodes of the Observation cluster.
Save your files
Save a copy of the following files in a secure location; they are needed to maintain, troubleshoot and upgrade your cluster.
`cluster.yml`: The RKE cluster configuration file.
`kube_config_cluster.yml`: The kubeconfig file for the cluster; this file contains credentials for full access to the cluster.
`cluster.rkestate`: The Kubernetes cluster state file; this file contains credentials for full access to the cluster.
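One simple way to keep them together (a sketch; choose an archive name and storage location that fit your own backup process, and treat the archive as a secret):

```bash
# Bundle the three files; the archive grants full admin access to the cluster.
tar czf observation-cluster-backup-$(date +%F).tgz \
  cluster.yml kube_config_cluster.yml cluster.rkestate
```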
3. Observation K8s Cluster Ingress, Storageclass setup
Once the rancher cluster is ready, we need ingress and storage class to be set for other applications to be installed.
3.a. Nginx Ingress Controller: used for ingress in the rancher cluster.
cd $K8_ROOT/rancher/on-prem
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install \
ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--version 4.0.18 \
--create-namespace \
-f ingress-nginx.values.yaml
Note:
This will install the ingress controller in the `ingress-nginx` namespace of the Observation cluster.
Crosscheck using the below mentioned command:
```
kubectl get all -n ingress-nginx
```
The command should list all the pods, deployments etc. in the ingress-nginx namespace.
3.b. Storage classes
Multiple storage class options are available for an on-prem K8s cluster. This reference deployment continues to use NFS as the storage class.
Move to the nfs directory on your personal computer:
```
cd $K8_ROOT/mosip/nfs
```
Create a copy of hosts.ini.sample as hosts.ini:
```
cp hosts.ini.sample hosts.ini
```
Update the NFS machine details in the `hosts.ini` file.
Note:
Add below mentioned details:
ansible_host : internal IP of NFS server. eg. 10.12.23.21
ansible_user : user to be used for installation, in this ref-impl we use Ubuntu user.
ansible_ssh_private_key_file : path to pem key for ssh to the NFS server. eg.
~/.ssh/wireguard-ssh.pem
Make sure the kubeconfig file is set correctly to point to the required Observation cluster:
```
kubectl config view
```
Note:
The output should show the cluster name, confirming you are pointing to the right kubernetes cluster.
If not pointing to the right K8s cluster, change the kubeconfig to connect to the right cluster.
Enable firewall with required ports:
```
ansible-playbook -i ./hosts.ini nfs-ports.yaml
```
SSH to the NFS node:
```
ssh -i ~/.ssh/nfs-ssh.pem ubuntu@<internal ip of nfs server>
```
Clone the `k8s-infra` repo in the NFS VM:
```
git clone https://github.com/mosip/k8s-infra -b v1.2.0.1
```
Move to the nfs directory:
```
cd /home/ubuntu/k8s-infra/mosip/nfs/
```
Execute the script to install the NFS server:
```
sudo ./install-nfs-server.sh
```
Note:
The script prompts for the below mentioned user inputs:
```
.....
Please Enter Environment Name: <envName>
.....
[ Export the NFS Share Directory ]
exporting *:/srv/nfs/mosip/<envName>
NFS Server Path: /srv/nfs/mosip/<envName>
```
envName: environment name, eg. dev/qa/uat...
Switch to your personal computer and execute the below mentioned commands:
```
cd $K8_ROOT/mosip/nfs/
./install-nfs-client-provisioner.sh
```
Note:
Script prompts for:
NFS Server: NFS server ip for persistence.
NFS Path : NFS path for storing the persisted data. eg. /srv/nfs/mosip/
Post installation check:
Check the status of the NFS Client Provisioner:
```
kubectl -n nfs get deployment.apps/nfs-client-provisioner
```
Check the status of the nfs-client storage class:
```
kubectl get storageclass
NAME                 PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)   driver.longhorn.io                     Delete          Immediate           true                   57d
nfs-client           cluster.local/nfs-client-provisioner   Delete          Immediate           true                   40s
```
Set the `nfs-client` storage class as default:
```
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
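To confirm dynamic provisioning works end to end, you can create a throwaway PVC against the new default class and delete it once it binds (the claim name and size below are placeholders):

```bash
# Create a small test claim, check it reaches 'Bound', then clean up.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-client-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: nfs-client
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc nfs-client-test
kubectl delete pvc nfs-client-test
```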
4. Setting up nginx server for Observation K8s Cluster
4.a. SSL Certificate setup for TLS termination
For the Nginx server setup we need an SSL certificate; add the same to the Nginx server.
In case a valid SSL certificate is not available, generate one using letsencrypt:
SSH into the nginx server
Install prerequisites:
```
sudo apt update -y
sudo apt-get install software-properties-common -y
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update -y
sudo apt-get install python3.8 -y
sudo apt install letsencrypt -y
sudo apt install certbot python3-certbot-nginx -y
```
Generate wildcard SSL certificates for your domain name:
```
sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.org.net
```
Replace `org.net` with your domain.
The default HTTP challenge is changed to a DNS challenge, as we require wildcard certificates.
Create a DNS record of type TXT in your DNS service with host `_acme-challenge.org.net` and the string prompted by the script.
Wait for a few minutes for the above entry to take effect. Verify:
```
host -t TXT _acme-challenge.org.net
```
Press enter in the `certbot` prompt to proceed.
Certificates are created in `/etc/letsencrypt` on your machine.
Certificates created are valid for 3 months only.
Wildcard SSL certificate renewal will extend the validity of the certificate for the next 3 months, as sketched below.
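Because the certificate was issued with the manual DNS challenge, `certbot renew` cannot complete the DNS step unattended; a simple approach, used here as an assumption rather than a mandated procedure, is to re-run the issuance command before expiry and reload Nginx:

```bash
# Re-issue the wildcard certificate (repeats the DNS TXT prompt), then reload Nginx
# so it picks up the refreshed files under /etc/letsencrypt.
sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.org.net
sudo systemctl reload nginx
```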
4.b. Install Nginx :
Move to the nginx directory in your local:
```
cd $K8_ROOT/mosip/on-prem/nginx/
```
Open required ports:
Use any editor to create a new `hosts.ini` file:
```
nano hosts.ini
```
Add the below mentioned lines with updated details of the nginx server to the `hosts.ini` and save:
```
[nginx]
node-nginx ansible_host=<internal ip> ansible_user=root ansible_ssh_private_key_file=<pvt .pem file>
```
Execute the below mentioned command to open the required ports:
ansible-playbook -i hosts.ini mosip/on-prem/nginx/nginx_ports.yaml
Login to the nginx server node:
```
ssh -i ~/.ssh/<pem to ssh> ubuntu@<nginx server ip>
```
Clone k8s-infra:
```
cd $K8_ROOT/rancher/on-prem/nginx
sudo ./install.sh
```
Provide the below mentioned inputs as and when prompted:
Rancher nginx ip : internal ip of the nginx server VM.
SSL cert path : path of the ssl certificate to be used for ssl termination.
SSL key path : path of the ssl key to be used for ssl termination.
Cluster node ip's : ip’s of the rancher cluster node
Post installation check:
```
sudo systemctl status nginx
```
Steps to uninstall nginx (in case required):
```
sudo apt purge nginx nginx-common
```
DNS mapping: Once the nginx server is installed successfully, create DNS mappings for the rancher cluster related domains as mentioned in the DNS requirements section. (rancher.org.net, keycloak.org.net)
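A quick way to confirm the DNS mapping and TLS termination from your PC once connected to Wireguard (rancher.org.net is the illustrative domain used above):

```bash
# DNS should resolve to the nginx server; HTTPS should answer with the expected certificate.
nslookup rancher.org.net
curl -sk -o /dev/null -w '%{http_code}\n' https://rancher.org.net
openssl s_client -connect rancher.org.net:443 -servername rancher.org.net </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```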
5. Observation K8's Cluster Apps Installation
5.a. Rancher UI
Rancher provides full CRUD capability of creating and managing kubernetes cluster.
Install rancher using Helm: update `hostname` in `rancher-values.yaml` and run the following commands to install:
```
cd $K8_ROOT/rancher/rancher-ui
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
helm install rancher rancher-latest/rancher \
  --version 2.6.9 \
  --namespace cattle-system \
  --create-namespace \
  -f rancher-values.yaml
```
Login:
Connect to Wireguard (in case you are using Windows via WSL, make sure to connect to the Wireguard server from Windows instead of WSL).
Open the Rancher page.
Get the bootstrap password using:
```
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{ .data.bootstrapPassword|base64decode}}{{ "\n" }}'
```
Note: Assign a password. IMPORTANT: make sure this password is securely saved and retrievable by the Admin.
5.b. Keycloak
Keycloak is an OAuth 2.0 compliant Identity Access Management (IAM) system used to manage the access to Rancher for cluster controls.
```
cd $K8_ROOT/rancher/keycloak
./install.sh <iam.host.name>
```
Post installation, access Keycloak using `iam.mosip.net` and get the credentials as per the post installation steps defined.
5.c. Keycloak - Rancher UI Integration
Login as the `admin` user in Keycloak and make sure the `email` and `firstName` fields are populated for the admin user. These are required for Rancher authentication to work properly.
In Keycloak (in the `master` realm), create a new client with the following values:
Client ID: `https://<your-rancher-host>/v1-saml/keycloak/saml/metadata`
Client Protocol: `saml`
Root URL: (leave empty)
After saving, configure the client with:
Name: `rancher`
Enabled: ON
Login Theme: `keycloak`
Sign Documents: ON
Sign Assertions: ON
Encrypt Assertions: OFF
Client Signature Required: OFF
Force POST Binding: OFF
Front Channel Logout: OFF
Force Name ID Format: OFF
Name ID Format: `username`
Valid Redirect URIs: `https://<your-rancher-host>/v1-saml/keycloak/saml/acs`
IDP Initiated SSO URL Name: `IdPSSOName`
Save the client
In the same client, go to the `Mappers` tab and create the following:
Mapper 1:
Protocol: `saml`
Name: `username`
Mapper Type: `User Property`
Property: `username`
Friendly Name: `username`
SAML Attribute Name: `username`
SAML Attribute NameFormat: `Basic`
Mapper 2:
Protocol: `saml`
Name: `groups`
Mapper Type: `Group List`
Group Attribute Name: `member`
Friendly Name: (leave empty)
SAML Attribute NameFormat: `Basic`
Single Group Attribute: ON
Full Group Path: OFF
Click `Add Builtin` → select all → `Add Selected`
Download the Keycloak SAML descriptor XML file from:
https://<your-keycloak-host>/auth/realms/master/protocol/saml/descriptor
Generate a self-signed SSL certificate and private key (if not already available):
```
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout myservice.key -out myservice.cert
```
Rancher UI Configuration
In the Rancher UI, go to: `Users & Authentication` → `Auth Providers` → `Keycloak (SAML)`
Configure the fields as follows:
Display Name Field: `givenName`
User Name Field: `email` or `uid`
UID Field: `username`
Groups Field: `member`
Entity ID Field: (leave empty)
Rancher API Host: `https://<your-rancher-host>`
Upload the following files:
Private Key: `myservice.key`
Certificate: `myservice.cert`
SAML Metadata XML: (from the Keycloak descriptor link)
Click Enable to activate Keycloak authentication.
After successful integration, Rancher users should be able to log in using their Keycloak credentials.
5.d. RBAC for Rancher using Keycloak
For users in Keycloak, assign roles in Rancher - cluster and project roles. Under the `default` project, add all the namespaces. Then, to a non-admin user, you may provide a Read-Only role (under projects).
If you want to create custom roles, you can follow the steps given here.
Add a member to cluster/project in Rancher:
Navigate to RBAC cluster members
Add the member name exactly as the `username` in Keycloak.
Assign an appropriate role like Cluster Owner, Cluster Viewer etc.
You may create a new role with fine-grained access control.
Add a group to the cluster/project in Rancher:
Navigate to RBAC cluster members
Click on `Add` and select a group from the displayed drop-down.
Assign an appropriate role like Cluster Owner, Cluster Viewer etc.
To add groups, the user must be a member of the group.
Creating a Keycloak group involves the following steps:
Go to the "Groups" section in Keycloak and create groups with default roles.
Navigate to the "Users" section in Keycloak, select a user, and then go to the "Groups" tab. From the list of groups, add the user to the required group.
6. MOSIP K8s Cluster setup
Pre-requisites:
Install all the required tools mentioned in Pre-requisites for PC.
kubectl
helm
```
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add mosip https://mosip.github.io/mosip-helm
```
ansible
rke (version 1.3.10)
Setup MOSIP K8 Cluster node VM’s as per 'Hardware and Network Requirements'.
Run `env-check-setup.yaml` to check that the cluster nodes are fine and don't have known issues:
```
cd $K8_ROOT/rancher/on-prem
```
Create a copy of `hosts.ini.sample` as `hosts.ini` and update the required details for the MOSIP K8s cluster nodes:
```
cp hosts.ini.sample hosts.ini
```
Note:
Ensure you are inside the `on-prem` directory as mentioned above.
ansible_host : internal IP of nodes. eg. 100.10.20.56, 100.10.20.57 ...
ansible_user : user to be used for installation. In this ref-implementation we use Ubuntu user.
ansible_ssh_private_key_file : path to pem key for ssh to the cluster nodes. eg.
~/.ssh/nodes-ssh.pem
```
ansible-playbook -i hosts.ini env-check-setup.yaml
```
This playbook checks whether the localhost mapping is already present in the `/etc/hosts` file on all cluster nodes; if not, it adds the same.
Setup passwordless ssh into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).
Generate keys on your PC
ssh-keygen -t rsa
Copy the keys to the remote MOSIP cluster node VMs:
ssh-copy-id <remote-user>@<remote-ip>
SSH into the node to check password-less SSH
ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>
Rancher UI : (deployed in Observation K8 cluster)
Open ports and Install docker on MOSIP K8 Cluster node VM’s.
```
cd $K8_ROOT/mosip/on-prem
```
Create a copy of `hosts.ini.sample` as `hosts.ini` and update the required details for the MOSIP cluster nodes:
```
cp hosts.ini.sample hosts.ini
```
Update the `vpc_ip` variable in `ports.yaml` with the VPC CIDR to allow access only from machines inside the same VPC.
Note:
CIDR Range will be shared by the Infra provider.
Make sure all the nodes are covered in the provided CIDR range. (nginx server, K8 cluster nodes for observation as well as mosip).
Execute ports.yaml to enable ports at the VM level using ufw:
```
ansible-playbook -i hosts.ini ports.yaml
```
Disable swap in the cluster nodes (ignore if swap is already disabled):
```
ansible-playbook -i hosts.ini swap.yaml
```
Caution: Always verify swap status with `swapon --show` before running the playbook to avoid unnecessary operations.
Execute docker.yaml to install Docker and add the user to the docker group:
```
ansible-playbook -i hosts.ini docker.yaml
```
Creating RKE Cluster Configuration file
```
rke config
```
The command will prompt for node details related to the cluster. Provide inputs for the below mentioned points:
SSH Private Key Path:
Number of Hosts:
SSH Address of host:
SSH User of host:
```
Is host (<node1-ip>) a Control Plane host (y/n)? [y]: y
Is host (<node1-ip>) a Worker host (y/n)? [n]: y
Is host (<node1-ip>) an etcd host (y/n)? [n]: y
```
Make all the nodes `Worker host` by default.
To create an HA cluster, specify more than one host with the `Control Plane` and `etcd host` roles.
Network Plugin Type: continue with canal as the default network plugin.
For the rest of the configuration, choose the required or default values.
As a result of the `rke config` command, a `cluster.yml` file will be generated inside the same directory. Update the below mentioned fields:
```
nano cluster.yml
```
Remove the default Ingress install:
```
ingress:
  provider: none
```
Update the name of the kubernetes cluster in `cluster.yml`:
```
cluster_name: sandbox-name
```
For production deployments, edit `cluster.yml` according to the RKE Cluster Hardening Guide.
Set up the cluster:
Once `cluster.yml` is ready, you can bring up the kubernetes cluster using a simple command. This command assumes the `cluster.yml` file is in the same directory where you are running the command:
```
rke up
```
```
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [10.0.0.1]
INFO[0000] [network] Deploying port listener containers
INFO[0000] [network] Pulling image [alpine:latest] on host [10.0.0.1]
...
INFO[0101] Finished building Kubernetes cluster successfully
```
The last line should read `Finished building Kubernetes cluster successfully` to indicate that your cluster is ready to use.
Copy the kubeconfig files:
```
cp kube_config_cluster.yml $HOME/.kube/<cluster_name>_config
chmod 400 $HOME/.kube/<cluster_name>_config
```
To access the cluster using the kubeconfig file, use any one of the below methods:
```
cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config
```
Alternatively:
```
export KUBECONFIG="$HOME/.kube/<cluster_name>_config"
```
Test cluster access:
```
kubectl get nodes
```
The command will list the details of the nodes of the MOSIP cluster.
Save Your files
Save a copy of the following files in a secure location; they are needed to maintain, troubleshoot and upgrade your cluster.
`cluster.yml`: The RKE cluster configuration file.
`kube_config_cluster.yml`: The kubeconfig file for the cluster; this file contains credentials for full access to the cluster.
`cluster.rkestate`: The Kubernetes cluster state file; this file contains credentials for full access to the cluster.
7. MOSIP K8 Cluster Global configmap, Ingress and Storage Class setup
7.a. Global configmap:
The global configmap contains the list of necessary details to be used throughout the namespaces of the cluster for common configuration.
```
cd $K8_ROOT/mosip
```
Copy `global_configmap.yaml.sample` to `global_configmap.yaml`:
```
cp global_configmap.yaml.sample global_configmap.yaml
```
Update the domain names in `global_configmap.yaml` and run:
```
kubectl apply -f global_configmap.yaml
```
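To verify the configmap was applied, list it and review the domain entries; this sketch assumes the sample manifest creates a configmap named `global` in the `default` namespace, so adjust if your copy differs:

```bash
# The configmap should exist and contain the domain names you just configured.
kubectl -n default get configmap global
kubectl -n default get configmap global -o yaml
```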
7.b. Istio Ingress setup:
It is a service mesh for the MOSIP K8 cluster which provides transparent layers on top of existing microservices along with powerful features enabling a uniform and more efficient way to secure, connect, and monitor services.
```
cd $K8_ROOT/mosip/on-prem/istio
./install.sh
```
This will bring up all the Istio components and the Ingress Gateways.
Check Ingress Gateway services:
kubectl get svc -n istio-system
Note: Response should contain service names as mentioned below.
istio-ingressgateway: external facing istio service.
istio-ingressgateway-internal: internal facing istio service.
istiod: Istio daemon for replicating the changes to all envoy filters.
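A quick health check of the mesh after the install script finishes (service names as listed above):

```bash
# All istio-system pods should be Running; both gateways should expose services.
kubectl -n istio-system get pods
kubectl -n istio-system get svc istio-ingressgateway istio-ingressgateway-internal
```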
7.c. Storage classes
Multiple storage class options are available for an on-prem K8s cluster. This reference deployment continues to use NFS as the storage class.
Move to the nfs directory on your personal computer:
```
cd $K8_ROOT/mosip/nfs
```
Create a copy of hosts.ini.sample as hosts.ini:
```
cp hosts.ini.sample hosts.ini
```
Update the NFS machine details in the `hosts.ini` file.
Note:
Add below mentioned details:
ansible_host : internal IP of NFS server. eg. 10.12.23.21
ansible_user : user to be used for installation, in this ref-impl we use Ubuntu user.
ansible_ssh_private_key_file : path to pem key for ssh to the NFS server. eg.
~/.ssh/wireguard-ssh.pem
Make sure the kubeconfig file is set correctly to point to the required MOSIP cluster:
```
kubectl config view
```
Note:
The output should show the cluster name, confirming you are pointing to the right kubernetes cluster.
If not pointing to the right K8s cluster, change the kubeconfig to connect to the right cluster.
Enable firewall with required ports:
```
ansible-playbook -i ./hosts.ini nfs-ports.yaml
```
SSH to the NFS node:
```
ssh -i ~/.ssh/nfs-ssh.pem ubuntu@<internal ip of nfs server>
```
Clone the `k8s-infra` repo in the NFS VM:
```
git clone https://github.com/mosip/k8s-infra -b v1.2.0.1
```
Move to the nfs directory:
```
cd /home/ubuntu/k8s-infra/mosip/nfs/
```
Execute the script to install the NFS server:
```
sudo ./install-nfs-server.sh
```
Note:
The script prompts for the below mentioned user inputs:
```
.....
Please Enter Environment Name: <envName>
.....
[ Export the NFS Share Directory ]
exporting *:/srv/nfs/mosip/<envName>
NFS Server Path: /srv/nfs/mosip/<envName>
```
envName: environment name, eg. dev/qa/uat...
Switch to your personal computer and execute the below mentioned commands:
```
cd $K8_ROOT/mosip/nfs/
./install-nfs-client-provisioner.sh
```
Note:
Script prompts for:
NFS Server: NFS server ip for persistence.
NFS Path : NFS path for storing the persisted data. eg. /srv/nfs/mosip/
Post installation check:
Check the status of the NFS Client Provisioner:
```
kubectl -n nfs get deployment.apps/nfs-client-provisioner
```
Check the status of the nfs-client storage class:
```
kubectl get storageclass
NAME                 PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)   driver.longhorn.io                     Delete          Immediate           true                   57d
nfs-client           cluster.local/nfs-client-provisioner   Delete          Immediate           true                   40s
```
8. Import MOSIP Cluster into Rancher UI
Login as admin in Rancher console
Select `Import Existing` for cluster addition.
Select `Generic` as the cluster type to add.
Fill the `Cluster Name` field with a unique cluster name and select `Create`.
You will get the kubectl commands to be executed in the kubernetes cluster. Copy the command and execute it from your PC (make sure your kubeconfig file is correctly set to the MOSIP cluster).
e.g.:
```
kubectl apply -f https://rancher.e2e.mosip.net/v3/import/pdmkx6b4xxtpcd699gzwdtt5bckwf4ctdgr7xkmmtwg8dfjk4hmbpk_c-m-db8kcj4r.yaml
```
Wait for a few seconds after executing the command for the cluster to get verified.
Your cluster is now added to the rancher management server.
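If you want to confirm the import from the kubectl side as well, the imported-cluster agent runs in the `cattle-system` namespace (standard Rancher behaviour; the deployment name below is the usual default):

```bash
# The cattle-cluster-agent should come up and connect back to the Rancher server.
kubectl -n cattle-system get pods
kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=20
```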
9. MOSIP K8 cluster Nginx server setup
9.a. SSL certificates creation
For the Nginx server setup we need an SSL certificate; add the same to the Nginx server.
In case a valid SSL certificate is not available, generate one using letsencrypt:
SSH into the nginx server
Install Pre-requisites:
```
sudo apt update -y
sudo apt-get install software-properties-common -y
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update -y
sudo apt-get install python3.8 -y
sudo apt install letsencrypt -y
sudo apt install certbot python3-certbot-nginx -y
```
Generate wildcard SSL certificates for your domain name:
```
sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.sandbox.mosip.net -d sandbox.mosip.net
```
Replace `sandbox.mosip.net` with your domain.
The default HTTP challenge is changed to a DNS challenge, as we require wildcard certificates.
Create a DNS record of type TXT in your DNS service with host `_acme-challenge.sandbox.xyz.net` and the string prompted by the script.
Wait for a few minutes for the above entry to take effect. Verify:
```
host -t TXT _acme-challenge.sandbox.mosip.net
```
Press enter in the `certbot` prompt to proceed.
Certificates are created in `/etc/letsencrypt` on your machine.
Certificates created are valid for 3 months only.
Wildcard SSL certificate renewal will extend the validity of the certificate for the next 3 months.
9.b. Nginx server setup for MOSIP K8's cluster
Move to the nginx directory in your local:
```
cd $K8_ROOT/mosip/on-prem/nginx/
```
Open required ports:
Use any editor to create a new `hosts.ini` file:
```
nano hosts.ini
```
Add the below mentioned lines with updated details of the nginx server to the `hosts.ini` and save:
```
[nginx]
node-nginx ansible_host=<internal ip> ansible_user=root ansible_ssh_private_key_file=<pvt .pem file>
```
Execute the below mentioned command to open the required ports:
ansible-playbook -i hosts.ini mosip/on-prem/nginx/nginx_ports.yaml
Login to the nginx server node.
Clone k8s-infra
```
cd $K8_ROOT/mosip/on-prem/nginx
sudo ./install.sh
```
Provide the below mentioned inputs as and when prompted:
MOSIP nginx server internal ip
MOSIP nginx server public ip
Publicly accessible domains (comma separated with no whitespaces)
SSL cert path
SSL key path
Cluster node ip's (comma separated, no whitespace)
Post installation check
```
sudo systemctl status nginx
```
Steps to uninstall nginx (in case required):
```
sudo apt purge nginx nginx-common
```
DNS mapping: Once the nginx server is installed successfully, create DNS mappings for the MOSIP cluster related domains as mentioned in the DNS requirements section.
9.c. Check Overall nginx and istio wiring
Install `httpbin`: this utility container returns the HTTP headers it receives inside the cluster. `httpbin` can be used for general debugging - to check ingress, headers etc.
```
cd $K8_ROOT/utils/httpbin
./install.sh
```
To see what is reaching httpbin (example, replace with your domain name):
```
curl https://api.sandbox.xyz.net/httpbin/get?show_env=true
curl https://api-internal.sandbox.xyz.net/httpbin/get?show_env=true
```
10. Monitoring module deployment
Note :
Monitoring in the sandbox environment is optional and can be deployed if required.
For production environments, alternative monitoring tools can be used.
These steps can also be skipped in development environments if monitoring is not needed.
In case you are skipping monitoring, execute the below commands to install the monitoring CRDs, as the same are required by MOSIP services:
```
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create ns cattle-monitoring-system
helm -n cattle-monitoring-system install rancher-monitoring-crd mosip/rancher-monitoring-crd
```
Prometheus, Grafana and Alertmanager are used for cluster monitoring.
Select the 'Monitoring' app from the Rancher console -> `Apps & Marketplaces`.
In the Helm options, open the YAML file and disable Nginx Ingress:
```
ingressNginx:
  enabled: false
```
Click on `Install`.
11. Alerting setup
Note :
Alerting in the sandbox environment is optional and can be deployed if required.
For production environments, alternative alerting tools can be used.
These steps can also be skipped in development environments if alerting is not needed.
Alerting is part of cluster monitoring, where alert notifications are sent to the configured email or slack channel.
Monitoring should be deployed which includes deployment of prometheus, grafana and alertmanager.
Create slack incoming webhook.
After setting up the Slack incoming webhook, update `slack_api_url` and `slack_channel_name` in `alertmanager.yml`:
```
cd $K8_ROOT/monitoring/alerting/
nano alertmanager.yml
```
Update:
```
global:
  resolve_timeout: 5m
  slack_api_url: <YOUR-SLACK-API-URL>
...
    slack_configs:
    - channel: '<YOUR-CHANNEL-HERE>'
      send_resolved: true
```
Update `Cluster_name` in `patch-cluster-name.yaml`:
```
cd $K8_ROOT/monitoring/alerting/
nano patch-cluster-name.yaml
```
Update:
```
spec:
  externalLabels:
    cluster: <YOUR-CLUSTER-NAME-HERE>
```
Install the default alerts along with some of the defined custom alerts:
```
cd $K8_ROOT/monitoring/alerting/
./install.sh
```
Alerting is installed.
12. Logging module setup and installation
Note :
Logging in the sandbox environment is optional and can be deployed if required.
For production environments, alternative logging tools can be used.
These steps can also be skipped in development environments if logging is not needed.
MOSIP uses Rancher Fluentd and elasticsearch to collect logs from all services and reflect the same in Kibana Dashboard.
Install the Rancher FluentD system: required for scraping logs out of all the microservices in the MOSIP K8s cluster.
Install Logging from Apps and marketplace within the Rancher UI.
Select chart version `100.1.3+up3.17.7` from the Rancher console -> Apps & Marketplaces.
Configure Rancher FluentD
Create `clusteroutput`:
```
kubectl apply -f clusteroutput-elasticsearch.yaml
```
Start `clusterflow`:
```
kubectl apply -f clusterflow-elasticsearch.yaml
```
Install elasticsearch, kibana and Istio addons:
```
cd $K8_ROOT/logging
./install.sh
```
Set `min_age` in `elasticsearch-ilm-script.sh` and execute the same. `min_age` is the minimum number of days for which indices will be stored in elasticsearch.
```
cd $K8_ROOT/logging
./elasticsearch-ilm-script.sh
```
MOSIP provides a set of Kibana dashboards for checking logs and throughput.
Brief description of these dashboards are as follows:
01-logstash.ndjson contains the logstash Index Pattern required by the rest of the dashboards.
02-error-only-logs.ndjson contains a Search dashboard which shows only the error logs of the services, called the `MOSIP Error Logs` dashboard.
03-service-logs.ndjson contains a Search dashboard which shows all logs of a particular service, called the MOSIP Service Logs dashboard.
04-insight.ndjson contains dashboards which show insights into MOSIP processes, like the number of UINs generated (total and per hour), the number of biometric deduplications processed, the number of packets uploaded etc., called the `MOSIP Insight` dashboard.
05-response-time.ndjson contains dashboards which show how quickly different MOSIP services are responding to different APIs over time, called the `Response Time` dashboard.
Import dashboards:
```
cd $K8_ROOT/logging
./load_kibana_dashboards.sh ./dashboards <cluster-kube-config-file>
```
View dashboards
Open kibana dashboard from https://kibana.sandbox.xyz.net.
Kibana --> Menu (on top left) --> Dashboard --> Select the dashboard.
13. MOSIP External Dependencies setup
External Dependencies are set of external requirements that are needed for functioning of MOSIP’s core services like DB, Object Store, HSM etc.
```
cd $INFRA_ROOT/deployment/v3/external/all
./install-all.sh
```
Click here to check the detailed installation instructions of all the external components.
Note:
Connect to the `mosip_pms` DB in postgres and execute the query to change `valid_to_date` for `mpolicy-default-mobile` in the `pms.auth_policy` table.
Open the terminal.
Use the psql command to connect to the PostgreSQL server. The general syntax is:
psql -h <host> -p 5432 -U postgres -d mosip_pms
`<host>`: the server address (e.g., localhost or an IP address).
Assuming other details like port and user remain the same.
UPDATE pms.auth_policy SET valid_to_date = valid_to_date + interval '1 year' WHERE name = 'mpolicy-default-mobile';
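If you prefer to run the update non-interactively, the same statement can be passed with `psql -c`; the password handling below is an assumption, so adapt it to how your Postgres credentials are managed:

```bash
# Run the policy-validity update in one shot from any machine that can reach Postgres.
export PGPASSWORD='<postgres-password>'   # or use a ~/.pgpass entry instead
psql -h <host> -p 5432 -U postgres -d mosip_pms \
  -c "UPDATE pms.auth_policy SET valid_to_date = valid_to_date + interval '1 year' WHERE name = 'mpolicy-default-mobile';"
```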
14. MOSIP Modules Deployment
Now that the Kubernetes clusters and external dependencies are installed, we will continue with MOSIP service deployment.
```
cd $INFRA_ROOT/deployment/v3/mosip/all
./install-all.sh
```
Note:
In case of failure during execution of `install-all.sh`, follow the MOSIP Modules Deployment installation steps from the failure point.
Installation of config-server and the admin service faces delays in this version, hence the below mentioned commands need to be executed for config-server and admin-service respectively once the script fails.
For config server:
Update `failureThreshold` for `startupProbe` to 60:
```
kubectl -n config-server edit deployment config-server
```
For admin service:
Update `failureThreshold` for `startupProbe` to 60:
```
kubectl -n admin edit deployment admin-service
```
Once the admin service is up and running, re-execute the `install.sh` script after commenting out the below mentioned commands:
```
#echo Installing Admin-Proxy into Masterdata and Keymanager.
#kubectl -n $NS apply -f admin-proxy.yaml
#echo Installing admin hotlist service.
#helm -n $NS install admin-hotlist mosip/admin-hotlist --version $CHART_VERSION
#echo Installing admin service. Will wait till service gets installed.
#helm -n $NS install admin-service mosip/admin-service --set istio.corsPolicy.allowOrigins\[0\].prefix=https://$ADMIN_HOST --wait --version $CHART_VERSION
```
15. API Testrig
MOSIP’s successful deployment can be verified by comparing the results of the API testrig with the testrig benchmark.
Navigate to the Infra Root Directory:
```
cd $INFRA_ROOT
```
Clone the functional tests repository:
```
git clone -b v1.3.3 https://github.com/mosip/mosip-functional-tests.git
```
After the clone completes successfully, install apitestrig:
```
cd $INFRA_ROOT/mosip-functional-tests/deploy/apitestrig
```
Make the script executable:
```
chmod +x copy_cm_func.sh
```
Run the installer script:
```
./install.sh
```
Note:
The script prompts for the below mentioned inputs; please provide them as and when needed:
Enter the time (hr) to run the cronjob every day (0–23): Specify the hour you want the cronjob to run (e.g., 6 for 6 AM)
Do you have a public domain and valid SSL certificate? (Y/n):
Y – If you have a public domain and valid SSL certificate
n – If you do not have one (recommended only for development environments)
Retention days to remove old reports (Default: 3): Press Enter to accept the default or specify another value (e.g., 5).
Provide the Slack webhook URL to notify server issues on your Slack channel (change the URL to your own channel's):
https://hooks.slack.com/services/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
Is the eSignet service deployed? (yes/no):
no – If eSignet is not deployed, related test cases will be skipped.
Is values.yaml for the apitestrig chart set correctly as part of the prerequisites? (Y/n): Enter Y if this step is already completed.
Do you have S3 details for storing API-Testrig reports? (Y/n):
Enter Y to proceed with S3 configuration.
S3 Host: eg. http://minio.minio:9000
S3 Region: (leave blank or enter your specific region, if applicable)
S3 Access Key: admin
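After the installer finishes, you can confirm the scheduled run from kubectl; the `apitestrig` namespace below is an assumption about where the chart was installed, so adjust it to whatever the script reports:

```bash
# The cronjob should exist with the hour you chose; jobs/pods appear after the first run.
kubectl get cronjob -A | grep -i apitestrig
kubectl -n apitestrig get cronjob,jobs,pods
```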
16. DSL Rig
Install Packetcreator
Navigate to the packetcreator directory:
```
cd $INFRA_ROOT/deployment/v3/testrig/packetcreator
```
Run the installation script:
```
./install.sh
```
When prompted for NFS server details, provide the following inputs:
NFS Host:
NFS PEM File :
User for SSH Login: ubuntu
Select the Ingress Controller Type:
1. Ingress
2. Istio (choose 2)
Install DSLrig
Navigate to the dslrig directory:
cd $INFRA_ROOT/deployment/v3/testrig/dslrig
Run the installation script:
```
./install.sh
```
Note: Before running install.sh, please ensure the following is added in the Helm install command:
```
--set dslorchestrator.configmaps.dslorchestrator.servicesNotDeployed="esignet" \
```
After adding the above, the full Helm installation command should look like this:
```
helm -n $NS install dslorchestrator mosip/dslorchestrator \
  --set crontime="0 $time * * *" \
  --version $CHART_VERSION \
  --set dslorchestrator.configmaps.s3.s3-host='http://minio.minio:9000' \
  --set dslorchestrator.configmaps.s3.s3-user-key='admin' \
  --set dslorchestrator.configmaps.s3.s3-region='' \
  --set dslorchestrator.configmaps.db.db-server="$DB_HOST" \
  --set dslorchestrator.configmaps.db.db-su-user="postgres" \
  --set dslorchestrator.configmaps.db.db-port="5432" \
  --set dslorchestrator.configmaps.dslorchestrator.USER="$USER" \
  --set dslorchestrator.configmaps.dslorchestrator.ENDPOINT="https://$API_INTERNAL_HOST" \
  --set dslorchestrator.configmaps.dslorchestrator.packetUtilityBaseUrl="$packetUtilityBaseUrl" \
  --set persistence.nfs.server="$NFS_HOST" \
  --set persistence.nfs.path="/srv/nfs/mosip/dsl-scenarios/$ENV_NAME" \
  --set dslorchestrator.configmaps.dslorchestrator.reportExpirationInDays="$reportExpirationInDays" \
  --set dslorchestrator.configmaps.dslorchestrator.NS="$NS" \
  --set dslorchestrator.configmaps.dslorchestrator.servicesNotDeployed="esignet" \
  $ENABLE_INSECURE
```
When prompted, provide the following inputs:
NFS Host:
NFS PEM :
User for SSH Login: ubuntu
Cronjob Time (hour of the day, 0–23): (e.g., enter 6 for 6 AM)
Do you have a public domain and valid SSL certificate? (Y/n):
Y: If you have a valid public domain and SSL certificate
n: Use only in development environments
Packet Utility Base URL: https://packetcreator.packetcreator:80/v1/packetcreator
Retention Days to Remove Old Reports (default is 3): (Press Enter to accept the default or provide a custom value)