
V3 installation

Version: 1.2.0.1-B3 (Latest stable release)

  • Release date: 14 April 2023

  • Release notes

  • On-Prem Installation Guidelines

Version: 1.2.0.1-B2

  • Release date: 8 January 2023

Version: 1.2.0.1-B1

  • Release date: 14 October 2022

Deployment Instructions

MOSIP External Dependencies

External dependencies are a set of external requirements that are needed for the functioning of MOSIP’s core services, such as the database, object store, HSM, etc.

List of external dependencies:

  • Postgres: Relational database system used for storing data in MOSIP.

  • IAM: The IAM tool is used for authentication and authorization. The reference implementation here uses Keycloak for this purpose.

  • HSM: A Hardware Security Module (HSM) stores the cryptographic keys used in MOSIP. The reference implementation provided here is SoftHSM.

  • Object Store: MOSIP uses an S3 API compliant object store for storing biometric and other data. The reference implementation here uses MinIO.

  • Anti-virus: Used for document and packet scanning throughout the MOSIP modules. The reference implementation uses a dockerised version of ClamAV.

  • Queuing tool: Used for queuing messages to external MOSIP components. The reference implementation uses ActiveMQ.

  • Event Publisher/streamer: MOSIP uses Kafka for publishing events to its internal as well as external partner modules.

  • BioSDK: Biometric SDK used for quality checks and authentication using biometrics.

  • ABIS: Performs the de-duplication of a resident's biometric data.

  • Message Gateway: Used for notifying residents about OTPs and other information.

Installation

Postgres

  • Install Postgres

  • Initialize Postgres DB

Opt for yes and enter Y.

Keycloak

  • Install Keycloak

  • Initialize Keycloak

Setup SoftHSM

Setup Object store

MinIO installation

S3 Credentials setup

  • Opt 1 for MinIO

  • Opt 2 for S3 (in case you are not going with the MinIO installation and want to use S3 instead)

    • Enter the prompted details.

ClamAV setup

ActiveMQ setup

Kafka setup

BioSDK Server setup

The reference implementation of the Biometric SDK server will be installed separately in the MOSIP services installation section, as it depends on Artifactory, which is a MOSIP component.

ABIS

ABIS needs to be up and running outside the MOSIP cluster and should be able to connect to ActiveMQ. For testing purposes, MOSIP provides a mock simulator named mock-abis, which is deployed as part of the MOSIP services installation.

MSG Gateway

  • MOSIP provides a mock SMTP server which is installed as part of the default installation; opt for Y.

Docker Secrets

In case the images are being pulled from private repositories.
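The install.sh prompts for the private registry details. Purely for reference, a manually created image pull secret would look roughly like the sketch below; the secret name, namespace and registry values are placeholders, not something the script defines:

    kubectl -n <namespace> create secret docker-registry regcred \
      --docker-server=<private-registry-url> \
      --docker-username=<username> \
      --docker-password=<password>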

Captcha

To set up the captcha for the pre-reg and resident domains.

Landing page setup

cd $INFRA_ROOT/deployment/v3/external/postgres
./install.sh
cd $INFRA_ROOT/deployment/v3/external/postgres
./init_db.sh
cd $INFRA_ROOT/deployment/v3/external/iam
./install.sh
cd $INFRA_ROOT/deployment/v3/external/iam
./keycloak_init.sh
cd $INFRA_ROOT/deployment/v3/external/hsm/softhsm
./install.sh
cd $INFRA_ROOT/deployment/v3/external/object-store/minio
./install.sh
cd $INFRA_ROOT/deployment/v3/external/object-store/
./cred.sh
cd $INFRA_ROOT/deployment/v3/external/antivirus/clamav
./install.sh
cd $INFRA_ROOT/deployment/v3/external/activemq
./install.sh
cd $INFRA_ROOT/deployment/v3/external/kafka
./install.sh
cd $INFRA_ROOT/deployment/v3/external/msg-gateway
./install.sh
cd $INFRA_ROOT/deployment/v3/external/docker-secrets
./install.sh
cd $INFRA_ROOT/deployment/v3/external/msg-gateway
./install.sh
cd $INFRA_ROOT/deployment/v3/external/landing-page
./install.sh

Testrig

Authdemo Service

  • The Authdemo service is used to execute the IDA APIs that are employed by the API-testrig and DSLrig.

  • The purpose of the Authdemo service is to showcase the authentication functionality.

  • It can be considered a simplified version of an authentication service, serving as a mock or prototype for testing purposes.

  • When prompted, input the NFS host, its pem key, and the SSH login user of the NFS server.

  • The install script will create the NFS directory /srv/nfs/mosip/packetcreator-authdemo-authcerts to store the certificates generated by the Authdemo service.

These certificates will be used by the API-testrig, orchestrator, and packetcreator. A quick way to confirm the directory exists is shown below.
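For example (illustrative only; substitute the values you entered at the prompts):

    ssh -i ~/.ssh/privkey.pem <ssh-user>@<nfs-host> 'ls -ld /srv/nfs/mosip/packetcreator-authdemo-authcerts'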

API-testrig

The API-testrig tests the working of the APIs of the MOSIP modules.

MOSIP’s successful deployment can be verified by comparing the results of the API-testrig with the testrig benchmark.

  • When prompted, input the hour of the day at which to execute the API-testrig.

  • The daily API-testrig cron job will be executed at the chosen hour of the day.

  • The reports will be moved to the object store (i.e., S3/MinIO) under the automationtests bucket; one way to fetch them is sketched below.
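For example, the reports can be pulled down with the MinIO client (mc); the alias name, endpoint and credentials below are assumptions for illustration, not values created by the installer:

    mc alias set sandbox-minio <minio-endpoint-url> <access-key> <secret-key>
    mc ls sandbox-minio/automationtests
    mc cp --recursive sandbox-minio/automationtests/ ./apitestrig-reports/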

Packetcreator

Packetcreator will create packets for DSL orchestrator.

Note: It is recommended to deploy the packetcreator on a separate server/cluster from which the DSL orchestrators can access this service.

  • When prompted, input the NFS host, its pem key, and the SSH login user of the NFS server.

  • The install script will create two NFS directories, i.e., /srv/nfs/mosip/packetcreator_data and /srv/nfs/mosip/packetcreator-authdemo-authcerts.

  • packetcreator_data contains the biometric data used to create packets.

DSLrig/ DSLOrchestrator

  • DSLrig will test end-to-end functional flows involving multiple MOSIP modules.

  • The Orchestrator utilizes the Packet Creator to generate packets according to the defined specifications. It then communicates with the Authdemo Service via REST API calls to perform authentication-related actions or retrieve the necessary information.

  • When prompted, input the NFS host, its pem key, and the SSH login user of the NFS server.

  • The install script will create the NFS directory /srv/nfs/mosip/dsl-scenarios/sandbox.xyz.net to store the DSL scenario sheets.

  • The default template for the DSL scenario sheet can be accessed from here.

The default packetcreator_data can be accessed from here.

  • Copy the packetcreator_data from the link mentioned above to the NFS directory /srv/nfs/mosip/packetcreator_data.

  • Ensure you use the same NFS host and path, i.e., /srv/nfs/mosip/packetcreator-authdemo-authcerts, for the Authdemo and packetcreator services.

  • When prompted, input the Kubernetes ingress type (i.e., Ingress/Istio) and the DNS, as required, if you are using Ingress-nginx.

  • Copy the scenario CSV from the above link to the NFS directory /srv/nfs/mosip/dsl-scenarios/sandbox.xyz.net. Make sure to rename the CSV files by replacing env with your domain, e.g., sandbox.xyz.net.
  • To run the dslorchestrator for sanity only, update the dslorchestrator configmap TESTLEVEL key to sanity (see the sketch after this list).

  • The reports will be moved to the object store (i.e., S3/MinIO) under the dslreports bucket.
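A minimal sketch of the TESTLEVEL change mentioned above, assuming the configmap is named dslorchestrator and lives in a namespace of the same name (adjust -n to wherever it was actually deployed):

    kubectl -n dslorchestrator patch configmap dslorchestrator \
      --type merge -p '{"data":{"TESTLEVEL":"sanity"}}'
    kubectl -n dslorchestrator get configmap dslorchestrator -o yaml | grep TESTLEVEL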

    cd $INFRA_ROOT/deployment/v3/testrig/authdemo
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/testrig/apitestrig
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/testrig/packetcreator
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/testrig/dslrig
    ./install.sh

    MOSIP Modules Deployment

    Below is the installation sequence of the MOSIP modules; this sequence must be followed to resolve all interdependencies.

    1. Config Server Secrets

    2. Config Server

    3. Artifactory

    Installation

    • Conf secrets

    • Config Server

    • Artifactory

    • Keymanager

    • WebSub

    • Mock-SMTP

    • Kernel

    • Masterdata-loader

    • Mock-biosdk

    • Packetmanager

    • Datashare

    • Pre-reg

    • Idrepo

    • Partner Management Services

    • Mock ABIS

    • Mock-mv

    • Registration Processor

    • Admin

    • ID Authentication

    • Print

    • Partner Onboarder

    • MOSIP File Server

    • Resident services

    • Registration Client

    cd $INFRA_ROOT/deployment/v3/mosip/conf-secrets
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/config-server
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/artifactory
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/keymanager
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/websub
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/mock-smtp
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/kernel
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/masterdata-loader
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/biosdk
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/packetmanager
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/datashare
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/prereg
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/idrepo
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/pms
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/mock-abis
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/mock-mv
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/regproc
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/admin
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/ida
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/print
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/partner-onboarder
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/mosip-file-server
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/resident
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/mosip/regclient
    sudo apt-get update
    sudo apt-get install jq
    ./install.sh

    On-Prem Installation Guidelines

    Overview

    • MOSIP modules are deployed as microservices in a Kubernetes cluster.

    • Wireguard is used as a trust network extension to access the admin, control, and observation plane.

    • It is also used for the on-the-field registrations.

    • MOSIP uses an Nginx server for:

      • SSL termination

      • Reverse Proxy

      • CDN/Cache management

    • The Kubernetes cluster is administered using the Rancher and rke tools.

    • In V3, we have two Kubernetes clusters:

    Observation cluster - This cluster is a part of the observation plane and it helps with administrative tasks. By design, this is kept independent of the actual cluster as a good security practice and to ensure clear segregation of roles and responsibilities. As a best practice, this cluster or its services should be internal and should never be exposed to the external world.

    • Rancher is used for managing the MOSIP cluster.

    • Keycloak in this cluster is used for cluster user access management.

    • It is recommended to configure log monitoring and network monitoring in this cluster.

    • In case you have an internal container registry, then it should run here.

    MOSIP cluster - This cluster runs all the MOSIP components and certain third party components to secure the cluster, API’s and data.

    Architecture

    Deployment repos

    • k8s-infra: contains the scripts to install and configure the Kubernetes clusters with the required monitoring, logging and alerting tools.

    • mosip-infra: contains the deployment scripts to run charts in a defined sequence.

    • mosip-config: contains all the configuration files required by the MOSIP modules.

    Pre-requisites

    Hardware requirements

    • The required VMs can run any OS as per convenience.

    • Here, we are referring to Ubuntu OS throughout this installation guide.

    1. Wireguard Bastion Host: 2 vCPUs, 4 GB RAM, 8 GB HDD, 1 VM, HA: ensure to set up active-passive.

    2. Observation Cluster nodes: 2 vCPUs, 8 GB RAM, 32 GB HDD, 2 VMs, HA: 2.

    3. Observation Nginx server (use a Loadbalancer if required): 2 vCPUs, 4 GB RAM, 16 GB HDD, 2 VMs, HA: Nginx+.

    4. MOSIP Cluster nodes: 12 vCPUs, 32 GB RAM, 128 GB HDD, 6 VMs, HA: 6.

    5. MOSIP Nginx server (use a Loadbalancer if required): 2 vCPUs, 4 GB RAM, 16 GB HDD, 1 VM, HA: Nginx+.

    Network requirements

    • All the VM's should be able to communicate with each other.

    • Need stable Intra network connectivity between these VM's.

    • All the VM's should have stable internet connectivity for docker image download (in case of a local setup, ensure you have a locally accessible docker registry).

    • Server Interface requirement as mentioned in below table:

    1. Wireguard Bastion Host: one private interface on the same network as all the rest of the nodes (e.g., inside the local NAT network), and one public interface that either has a direct public IP or a firewall NAT (global address) rule forwarding traffic on port 51820/udp to this interface IP.

    2. K8 Cluster nodes: one internal interface with internet access, on the same network as all the rest of the nodes (e.g., inside the local NAT network).

    3. Observation Nginx server: one internal interface with internet access, on the same network as all the rest of the nodes (e.g., inside the local NAT network).

    4. MOSIP Nginx server: one internal interface on the same network as all the rest of the nodes (e.g., inside the local NAT network), and one public interface that either has a direct public IP or a firewall NAT (global address) rule forwarding traffic on port 443/tcp to this interface IP.

    DNS requirements

    1. rancher.xyz.net: maps to the private IP of the Nginx server (or load balancer) for the Observation cluster. Purpose: Rancher dashboard to monitor and manage the Kubernetes clusters.

    2. keycloak.xyz.net: maps to the private IP of the Nginx server for the Observation cluster. Purpose: administrative IAM tool (Keycloak), used for Kubernetes administration.

    3. sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: index page with links to the different dashboards of the MOSIP environment. (This is just for reference; please do not expose this page in a real production or UAT environment.)

    4. api-internal.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: internal APIs are exposed through this domain; they are accessible privately over the Wireguard channel.

    5. api.sandbox.xyz.net: maps to the public IP of the Nginx server for the MOSIP cluster. Purpose: all publicly usable APIs are exposed using this domain.

    6. prereg.sandbox.xyz.net: maps to the public IP of the Nginx server for the MOSIP cluster. Purpose: domain name for MOSIP's pre-registration portal; the portal is accessible publicly.

    7. activemq.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: provides direct access to the ActiveMQ dashboard; it is limited and can be used only over Wireguard.

    8. kibana.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: optional installation; used to access the Kibana dashboard over Wireguard.

    9. regclient.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: the Registration Client can be downloaded from this domain; it should be used over Wireguard.

    10. admin.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: MOSIP's admin portal is exposed using this domain; it is an internal domain and is restricted to access over Wireguard.

    11. object-store.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: optional; this domain is used to access the object store. Map this domain according to the object store you choose. In our reference implementation, MinIO is used and this domain lets you access MinIO’s console over Wireguard.

    12. kafka.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: Kafka UI is installed as part of MOSIP’s default installation; it can be accessed over Wireguard, mostly for administrative needs.

    13. iam.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: MOSIP uses an OpenID Connect server to limit and manage access across all the services; the default installation comes with Keycloak, and this domain is used to access the Keycloak server over Wireguard.

    14. postgres.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: this domain points to the Postgres server; you can connect to Postgres via port forwarding over Wireguard.

    15. pmp.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: MOSIP’s Partner Management Portal, accessed over Wireguard.

    16. onboarder.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: accessing the reports of MOSIP partner onboarding over Wireguard.

    17. resident.sandbox.xyz.net: maps to the public IP of the Nginx server for the MOSIP cluster. Purpose: accessing the Resident portal publicly.

    18. idp.sandbox.xyz.net: maps to the public IP of the Nginx server for the MOSIP cluster. Purpose: accessing the IDP publicly.

    19. smtp.sandbox.xyz.net: maps to the private IP of the Nginx server for the MOSIP cluster. Purpose: accessing the mock-SMTP UI over Wireguard.

    Certificate requirements

    As only secure HTTPS connections are allowed via the Nginx server, you will need the below mentioned valid SSL certificates:

    • One valid wildcard SSL certificate for the domain used to access the Observation cluster; this needs to be stored inside the Nginx server VM for the Observation cluster. In the above example, *.org.net is the corresponding domain.

    • One valid wildcard SSL certificate for the domain used to access the MOSIP cluster; this needs to be stored inside the Nginx server VM for the MOSIP cluster. In the above example, *.sandbox.xyz.net is the corresponding domain.

    Tools to be installed on personal computers for the complete deployment

    • kubectl - any client version above 1.19

    • helm - any client version above 3.0.0, and add the below repos as well:

    • istioctl: version 1.15.0

    • rke: version 1.3.10

    • Ansible (https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html): version > 2.12.4

    Installation

    Wireguard

    A Wireguard bastion host (Wireguard server) provides a secure private channel to access the MOSIP cluster. The host restricts public access and enables access only for those clients who have their public key listed in the Wireguard server. Wireguard listens on UDP port 51820.

    Setup Wireguard VM and wireguard bastion server

    • Create a Wireguard server VM with above mentioned Hardware and Network requirements.

    • Open ports and Install docker on Wireguard VM.

      • cd $K8_ROOT/wireguard/

    Note:

    • The pem files used to access the nodes should have 400 permissions: sudo chmod 400 ~/.ssh/privkey.pem

    • These ports are only needed to be opened for sharing packets over UDP.

    • Take necessary measure on firewall level so that the Wireguard server can be reachable on 51820/udp.

    • Setup Wireguard server

      • SSH to wireguard VM

      • Create directory for storing wireguard config files. mkdir -p wireguard/config

    Note:

    • Increase the no. of peers above in case more than 30 wireguard client confs (-e PEERS=30) are needed.

    • Change the directory to be mounted to wireguard docker as per need. All your wireguard confs will be generated in the mounted directory (-v /home/ubuntu/wireguard/config:/config).

    Setup Wireguard Client in your PC

    • Install Wireguard client in your PC.

    • Assign wireguard.conf:

      • SSH to the wireguard server VM.

    • Once connected to wireguard, you should now be able to log in using private IPs (a quick check is shown below).
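A quick, illustrative check that the tunnel is up (the private IP is whichever cluster node you expect to reach):

    sudo wg show                          # wg0 should list the server peer with a recent handshake
    ping <private-ip-of-a-cluster-node>   # private IPs should now respond over the tunnel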

    Observation K8s Cluster setup and configuration

    Observation K8s Cluster setup

    • Install all the required tools mentioned in pre-requisites for PC.

      • kubectl

      • helm

      • ansible

    Note:

    • Make sure the permission for privkey.pem for ssh is set to 400.

    • Run env-check.yaml to check if cluster nodes are fine and do not have known issues in it.

      • cd $K8_ROOT/rancher/on-prem

      • create copy of hosts.ini.sample as hosts.ini and update the required details for Observation k8 cluster nodes.

    Observation K8s Cluster Ingress and Storage class setup

    Once the rancher cluster is ready, we need ingress and storage class to be set for other applications to be installed.

    • Nginx Ingress Controller: used for ingress in the rancher cluster.

    This will install the ingress controller in the ingress-nginx namespace of the rancher cluster.

    Storage classes

    The following storage classes can be used:

    • vSphere storage class: If you are already using VMware virtual machines, you can proceed with the vSphere storage class.

    • NFS client provisioner storage class.

    • ceph-csi (TODO: implementation in progress)

    Pre-requisites:

    Install Longhorn via helm

    • ./install.sh

    • Note: Values of the below mentioned parameters are set by default by the Longhorn installation script:

      • PV replica count is set to 1. Set the replicas for the storage class appropriately.
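Once Longhorn is installed, workloads consume it through its storage class. A minimal PVC sketch, assuming the default storage class name longhorn and a hypothetical claim name:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc            # hypothetical name, for illustration only
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: longhorn
      resources:
        requests:
          storage: 1Gi
    EOF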

    Setting up nginx server for Observation K8s Cluster

    • For the Nginx server setup we need an SSL certificate; add the same to the Nginx server.

    • In case a valid SSL certificate is not available, generate one using Letsencrypt:

      • SSH into the nginx server

    Observation K8's Cluster Apps Installation

    Rancher UI

    • Rancher provides full CRUD capability of creating and managing kubernetes cluster.

    • Install rancher using Helm, update hostname in rancher-values.yaml and run the following command to install.

    • Login:

      • Open the Rancher UI page.

      • Get the bootstrap password using

      Assign a password. IMPORTANT: make sure this password is securely saved and retrievable by the Admin.

    Keycloak

    • Keycloak is an OAuth 2.0 compliant Identity Access Management (IAM) system used to manage the access to Rancher for cluster controls.

    keycloak_client.json: Used to create SAML client on Keycloak for Rancher integration.

    Keycloak - Rancher UI Integration

    • Login as admin user in Keycloak and make sure an email id, and first name field is populated for admin user. This is important for Rancher authentication as given below.

    • Enable authentication with Keycloak using the steps given here.

    • In Keycloak add another Mapper for the rancher client (in Master realm) with following fields:

    RBAC for Rancher using Keycloak

    • For users in Keycloak assign roles in Rancher - cluster and project roles. Under default project add all the namespaces. Then, to a non-admin user you may provide Read-Only role (under projects).

    • If you want to create custom roles, you can follow the steps given here.

    • Add a member to cluster/project in Rancher:

    Certificates expiry

    In case you see certificate expiry message while adding users, on local cluster run these commands:

    MOSIP K8s Cluster setup

    • Pre-requisites:

    • Install all the required tools mentioned in Pre-requisites for PC.

      • kubectl

      • helm

    • ansible

    • rke (version 1.3.10)

    • Setup MOSIP K8 Cluster node VM’s as per the hardware and network requirements as mentioned above.

    • Run env-check.yaml

    Alternatively

    • Test cluster access:

      • kubectl get nodes

      • Command will result in details of the nodes of the rancher cluster.

    MOSIP K8 Cluster Global configmap, Ingress and Storage Class setup

    Global configmap: The global configmap contains the list of necessary details to be used throughout the namespaces of the cluster for common details.

    • cd $K8_ROOT/mosip

    • Copy global_configmap.yaml.sample to global_configmap.yaml.

    • Update the domain names in global_configmap.yaml and run.

    Storage classes

    The following storage classes can be used:

    • vSphere storage class: If you are already using VMware virtual machines, you can proceed with the vSphere storage class.

    • NFS client provisioner storage class.

    • ceph-csi (TODO: implementation in progress)

    For the time being, we are considering Longhorn as the storage class.

    • Storage class setup: Longhorn creates a storage class in the cluster for creating pv (persistence volume) and pvc (persistence volume claim).

      • Pre-requisites:

      • Install Longhorn via helm

    Import MOSIP Cluster into Rancher UI

    • Login as admin in Rancher console

    • Select Import Existing for cluster addition.

    • Select Generic as cluster type to add.

    • Wait for few seconds after executing the command for the cluster to get verified.

    • Your cluster is now added to the rancher management server.

    MOSIP K8 cluster Nginx server setup

    • For Nginx server setup, we need ssl certificate, add the same into Nginx server.

    • Incase valid ssl certificate is not there generate one using letsencrypt:

      • SSH into the nginx server

    Monitoring module deployment

    • Prometheus and Grafana and Alertmanager tools are used for cluster monitoring.

    • Select 'Monitoring' App from Rancher console -> Apps & Marketplaces.

    • In Helm options, open the YAML file and disable Nginx Ingress.

    Alerting setup

    Alerting is part of cluster monitoring, where alert notifications are sent to the configured email or slack channel.

    • Monitoring should be deployed which includes deployment of prometheus, grafana and alertmanager.

    • Create a slack incoming webhook.

    • After setting slack incoming webhook update slack_api_url and slack_channel_name in alertmanager.yml.

    • Install the default alerts along with some of the defined custom alerts:

    • Alerting is installed.

    Logging module setup and installation

    MOSIP uses Rancher FluentD and Elasticsearch to collect logs from all services and reflect the same in the Kibana dashboard.

    • Install the Rancher FluentD system: for scraping logs out of all the microservices of the MOSIP K8 cluster.

      • Install Logging from Apps and marketplace within the Rancher UI.

      • Select Chart Version 100.1.3+up3.17.7 from Rancher console -> Apps & Marketplaces.

    Open kibana dashboard from https://kibana.sandbox.xyz.net.

    Kibana --> Menu (on top left) --> Dashboard --> Select the dashboard.

    MOSIP External Dependencies setup

    External dependencies are a set of external requirements that are needed for the functioning of MOSIP’s core services, such as the database, object store, HSM, etc.

    See the MOSIP External Components page for the detailed installation instructions of all the external components.

    MOSIP Modules Deployment

    Now that the Kubernetes clusters and external dependencies are installed, we will continue with the MOSIP service deployment.

    Check the detailed installation steps in the MOSIP Modules Deployment section.

    Loadbalancing

    mosip-helm : contains packaged helm charts for all the MOSIP modules.


    Create a directory named mosip in your PC, and:
    • Clone the k8s-infra repo with tag 1.2.0.1-B2 (or whichever is the latest version) inside the mosip directory: git clone https://github.com/mosip/k8s-infra -b v1.2.0.1-B2

    • Clone mosip-infra with tag 1.2.0.1-B2 (or whichever is the latest version) inside the mosip directory: git clone https://github.com/mosip/mosip-infra -b v1.2.0.1-B2

    • Set the below mentioned variables in bashrc

    source .bashrc

    Note: The above mentioned environment variables will be used throughout the installation to move between directories while running the install scripts.

    Create a copy of hosts.ini.sample as hosts.ini and update the required details for the wireguard VM:

    cp hosts.ini.sample hosts.ini

  • execute ports.yml to enable ports on VM level using ufw:

    ansible-playbook -i hosts.ini ports.yaml

  • Install and start wireguard server using docker as given below:
    cd /home/ubuntu/wireguard/config
  • Assign one of the peer configs for yourself and use the same from your PC to connect to the server.

    • Create an assigned.txt file to keep track of the peer files allocated, and update it every time a peer is allocated to someone.

    • Use the ls command to see the list of peers.

    • Get inside your selected peer directory and make the below mentioned changes in peer.conf:

      • cd peer1

      • nano peer1.conf

  • add peer.conf in your PC’s /etc/wireguard directory as wg0.conf.

  • start the wireguard client and check the status:

  • rke (version 1.3.10)

  • Setup Observation Cluster node VM’s as per the hardware and network requirements as mentioned above.

  • Setup passwordless SSH into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).

    • Generate keys on your PC ssh-keygen -t rsa

    • Copy the keys to remote observation node VM’s ssh-copy-id <remote-user>@<remote-ip>

    • SSH into the node to check password-less SSH ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>

    • cp hosts.ini.sample hosts.ini

    • ansible-playbook -i hosts.ini env-check.yaml

    • This ansible checks if localhost mapping is already present in /etc/hosts file in all cluster nodes, if not it adds the same.

  • Open ports and install docker on Observation K8 Cluster node VM’s.

    • cd $K8_ROOT/rancher/on-prem

    • Ensure that hosts.ini is updated with nodal details.

    • Update vpc_ip variable in ports.yaml with vpc CIDR ip to allow access only from machines inside same vpc.

    • Execute ports.yml to enable ports on VM level using ufw:

      • ansible-playbook -i hosts.ini ports.yaml

    • Disable swap in cluster nodes. (Ignore if swap is already disabled)

      • ansible-playbook -i hosts.ini swap.yaml

    • execute docker.yml to install docker and add user to docker group:

      • ansible-playbook -i hosts.ini docker.yaml

  • Creating RKE Cluster Configuration file

    • rke config

    • Command will prompt for nodal details related to cluster, provide inputs w.r.t below mentioned points:

      • SSH Private Key Path :

      • Number of Hosts:

      • SSH Address of host :

      • SSH User of host :

      • Make all the nodes Worker host by default.

      • To create an HA cluster, specify more than one host with role Control Plane and etcd host.

    • Network Plugin Type : Continue with canal as default network plugin.

    • For the rest of the configuration, opt for the required or default values.

  • As a result of the rke config command, a cluster.yml file will be generated inside the same directory; update the below mentioned fields:

    • nano cluster.yml

      • Remove the default Ingress install

    • Add the name of the kubernetes cluster

      • cluster_name: sandbox-name

  • For production deployments, edit the cluster.yml according to this RKE Cluster Hardening Guide.

  • Setup up the cluster:

    • Once cluster.yml is ready, you can bring up the kubernetes cluster using simple command.

      • This command assumes the cluster.yml file is in the same directory as where you are running the command.

      • rke up

    • As part of the Kubernetes creation process, a kubeconfig file has been created and written at kube_config_cluster.yml, which can be used to start interacting with your Kubernetes cluster.

    • Copy the kubeconfig files

    • To access the cluster using kubeconfig file use any one of the below method:

      • cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config Alternatively

      • export KUBECONFIG="$HOME/.kube/<cluster_name>_config"

  • Storage class setup: Longhorn creates a storage class in the cluster for creating pv (persistence volume) and pvc (persistence volume claim).

  • Total available node CPU allocated to each instance-manager pod in the longhorn-system namespace.
    • The value "5" means 5% of the total available node CPU.

    • This value should be fine for sandbox and pilot but you may have to increase the default to "12" for production.

    • The value can be updated on Longhorn UI after installation.

  • Access the Longhorn dashboard from Rancher UI once installed.

  • Setup Backup: In case you want to back up the pv data from Longhorn to S3 periodically, follow the instructions here. (Optional, ignore if not required)

  • Install Pre-requisites
  • Generate wildcard SSL certificates for your domain name.

    • sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.org.net

      • replace org.net with your domain.

      • The default challenge HTTP is changed to DNS challenge, as we require wildcard certificates.

      • Create a DNS record in your DNS service of type TXT with host _acme-challenge.org.net, with the string prompted by the script.

      • Wait for a few minutes for the above entry to get into effect.

    Verify:

    host -t TXT _acme-challenge.org.net

    • Press enter in the certbot prompt to proceed.

    • Certificates are created in /etc/letsencrypt on your machine.

    • Certificates created are valid for 3 months only.

  • Wildcard SSL certificate renewal. This will increase the validity of the certificate for next 3 months.

  • Clone k8s-infra

  • Provide the below mentioned inputs as and when prompted

    • Rancher nginx ip : internal ip of the nginx server VM.

    • SSL cert path : path of the ssl certificate to be used for ssl termination.

    • SSL key path : path of the ssl key to be used for ssl termination.

    • Cluster node ip's : ip’s of the rancher cluster node

  • Post installation check:

    • sudo systemctl status nginx

    • Steps to Uninstall nginx (in case required)

    sudo apt purge nginx nginx-common

    DNS mapping: Once the nginx server is installed successfully, create DNS mappings for the rancher cluster related domains as mentioned in the DNS requirements section. (rancher.org.net, keycloak.org.net)

  • Protocol: saml

  • Name: username

  • Mapper Type: User Property

  • Property: username

  • Friendly Name: username

  • SAML Attribute Name: username

  • SAML Attribute NameFormat: Basic

  • Specify the following mappings in Rancher's Authentication Keycloak form:

    • Display Name Field: givenName

    • User Name Field: email

    • UID Field: username

    • Entity ID Field: https://your-rancher-domain/v1-saml/keycloak/saml/metadata

    • Rancher API Host: https://your-rancher-domain

    • Groups Field: member

  • Navigate to RBAC cluster members

  • Add member name exactly as username in Keycloak

  • Assign appropriate role like Cluster Owner, Cluster Viewer etc.

  • You may create a new role with fine grained access control.

  • Add group to to cluster/project in Rancher:

    • Navigate to RBAC cluster members

    • Click on Add and select a group from the displayed drop-down.

    • Assign appropriate role like Cluster Owner, Cluster Viewer etc.

    • To add groups, the user must be a member of the group.

  • Creating a Keycloak group involves the following steps:

    • Go to the "Groups" section in Keycloak and create groups with default roles.

    • Navigate to the "Users" section in Keycloak, select a user, and then go to the "Groups" tab. From the list of groups, add the user to the required group.

  • to check if the cluster nodes are fine and don't have known issues in them.
    • cd $K8_ROOT/rancher/on-prem

    • create copy of hosts.ini.sample as hosts.ini and update the required details for MOSIP k8 cluster nodes.

      • cp hosts.ini.sample hosts.ini

      • ansible-playbook -i hosts.ini env-check.yaml

      • This ansible checks if the localhost mapping is already present in the /etc/hosts file on all cluster nodes; if not, it adds the same.

  • Setup passwordless ssh into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).

    • Generate keys on your PC

      • ssh-keygen -t rsa

    • Copy the keys to remote rancher node VM’s:

      • ssh-copy-id <remote-user>@<remote-ip>

    • SSH into the node to check password-less SSH

      • ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>

    • Rancher UI : (deployed in Rancher K8 cluster)

  • Open ports and Install docker on MOSIP K8 Cluster node VM’s.

    • cd $K8_ROOT/mosip/on-prem

    • Create a copy of hosts.ini.sample as hosts.ini and update the required details for the MOSIP K8 cluster nodes.

      • cp hosts.ini.sample hosts.ini

    • Update vpc_ip variable in ports.yaml with vpc CIDR ip to allow access only from machines inside same vpc.

    • execute ports.yml to enable ports on VM level using ufw:

      • ansible-playbook -i hosts.ini ports.yaml

    • Disable swap in cluster nodes. (Ignore if swap is already disabled)

      • ansible-playbook -i hosts.ini swap.yaml

    • execute docker.yml to install docker and add user to docker group:

      • ansible-playbook -i hosts.ini docker.yaml

  • Creating RKE Cluster Configuration file

    • rke config

    • Command will prompt for nodal details related to cluster, provide inputs w.r.t below mentioned points:

      • SSH Private Key Path :

      • Number of Hosts:

      • SSH Address of host :

      • SSH User of host :

      • Make all the nodes Worker host by default.

      • To create an HA cluster, specify more than one host with role Control Plane and etcd host.

    • Network Plugin Type : Continue with canal as default network plugin.

    • For the rest of the configuration, opt for the required or default values.

  • As a result of the rke config command, a cluster.yml file will be generated inside the same directory; update the below mentioned fields:

    • nano cluster.yml

    • Remove the default Ingress install

    • Add the name of the kubernetes cluster

    • For production deployments, edit the cluster.yml according to the RKE Cluster Hardening Guide.

  • Setup up the cluster:

    • Once cluster.yml is ready, you can bring up the kubernetes cluster using simple command.

      • This command assumes the cluster.yml file is in the same directory as where you are running the command.

        • rke up

      • The last line should read Finished building Kubernetes cluster successfully to indicate that your cluster is ready to use.

      • Copy the kubeconfig files

    • To access the cluster using the kubeconfig file, use any one of the below methods:

    • cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config

  • Save your files
    • Save a copy of the following files in a secure location; they are needed to maintain, troubleshoot and upgrade your cluster:

      • cluster.yml: The RKE cluster configuration file.

      • kube_config_cluster.yml: The Kubeconfig file for the cluster; this file contains credentials for full access to the cluster.

      • cluster.rkestate: The Kubernetes Cluster State file; this file contains credentials for full access to the cluster.

    kubectl apply -f global_configmap.yaml

  • Istio Ingress setup: It is a service mesh for the MOSIP K8 cluster which provides transparent layers on top of existing microservices along with powerful features enabling a uniform and more efficient way to secure, connect, and monitor services.

    • cd $K8_ROOT/mosip/on-prem/istio

    • ./install.sh

    • This will bring up all the Istio components and the Ingress Gateways.

    • Check Ingress Gateway services:

      • kubectl get svc -n istio-system

        • istio-ingressgateway: external facing istio service.

  • ./install.sh

  • Note: Values of the below mentioned parameters are set by default by the Longhorn installation script:

    • PV replica count is set to 1. Set the replicas for the storage class appropriately.

    • Total available node CPU allocated to each instance-manager pod in the longhorn-system namespace.

    • The value "5" means 5% of the total available node CPU

    • This value should be fine for sandbox and pilot but you may have to increase the default to "12" for production.

    • The value can be updated on Longhorn UI after installation.
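For reference, a manual helm install with the values listed above would look roughly like the sketch below, assuming the upstream Longhorn chart repository; the exact value paths can differ between chart versions, so treat the bundled install.sh as the source of truth:

    helm repo add longhorn https://charts.longhorn.io
    helm repo update
    helm install longhorn longhorn/longhorn \
      --namespace longhorn-system --create-namespace \
      --set persistence.defaultClassReplicaCount=1 \
      --set defaultSettings.defaultReplicaCount=1 \
      --set defaultSettings.guaranteedEngineManagerCPU=5 \
      --set defaultSettings.guaranteedReplicaManagerCPU=5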

  • Fill the Cluster Name field with unique cluster name and select Create.
  • You will get the kubectl commands to be executed in the Kubernetes cluster. Copy the command and execute it from your PC (make sure your kube-config file is correctly set to the MOSIP cluster).

  • Install Pre-requisites:
  • Generate wildcard SSL certificates for your domain name.

    • sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.sandbox.mosip.net -d sandbox.mosip.net

      • replace sandbox.mosip.net with your domain.

      • The default challenge HTTP is changed to DNS challenge, as we require wildcard certificates.

      • Create a DNS record in your DNS service of type TXT with host _acme-challenge.sandbox.xyz.net, with the string prompted by the script.

      • Wait for a few minutes for the above entry to take effect. Verify: host -t TXT _acme-challenge.sandbox.mosip.net

      • Press enter in the certbot prompt to proceed.

      • Certificates are created in /etc/letsencrypt on your machine.

      • Certificates created are valid for 3 months only.

  • Wildcard SSL certificate renewal. This will increase the validity of the certificate for next 3 months.

  • Clone k8s-infra

  • Provide below mentioned inputs as and when prompted

    • MOSIP nginx server internal ip

    • MOSIP nginx server public ip

    • Publicly accessible domains (comma separated with no whitespace)

    • SSL cert path

    • SSL key path

    • Cluster node IPs (comma separated, no whitespace)

  • Post installation check

    • sudo systemctl status nginx

    • Steps to uninstall nginx (in case required): sudo apt purge nginx nginx-common

    • DNS mapping: Once the nginx server is installed successfully, create DNS mappings for the MOSIP cluster related domains as mentioned in the DNS requirements section.

  • Check Overall if nginx and istio wiring is set correctly

    • Install httpbin: This utility docker returns http headers received inside the cluster. You may use it for general debugging - to check ingress, headers etc.

    • To see what is reaching the httpbin (example, replace with your domain name):

  • Click on Install.
  • cd $K8_ROOT/monitoring/alerting/

  • nano alertmanager.yml

  • Update:

  • Update Cluster_name in patch-cluster-name.yaml.

  • cd $K8_ROOT/monitoring/alerting/

  • nano patch-cluster-name.yaml

  • Update:

  • Configure Rancher FluentD

    • Create clusteroutput

      • kubectl apply -f clusteroutput-elasticsearch.yaml

    • Start clusterFlow

      • kubectl apply -f clusterflow-elasticsearch.yaml

    • Install elasticsearch, kibana and Istio addons

    • set min_age in elasticsearch-ilm-script.sh and execute the same.

    • min_age : is the minimum no. of days for which indices will be stored in elasticsearch.

    • MOSIP provides set of Kibana Dashboards for checking logs and throughputs.

      • Brief description of these dashboards are as follows:

        • 01-logstash.ndjson contains the logstash Index Pattern required by the rest of the dashboards.

  • Import dashboards:

    • cd $K8_ROOT/logging

    • ./load_kibana_dashboards.sh ./dashboards <cluster-kube-config-file>

  • View dashboards




    peer1 :   peername
    peer2 :   xyz
    ingress:
    provider: none
    cd $K8_ROOT/rancher/on-prem/nginx
    sudo ./install.sh
    ingress:
    provider: none
    persistence.defaultClassReplicaCount=1
    defaultSettings.defaultReplicaCount=1
    guaranteedEngineManagerCPU: 5
    guaranteedReplicaManagerCPU: 5   
    cd $K8_ROOT/mosip/on-prem/nginx
    sudo ./install.sh
    cd $K8_ROOT/utils/httpbin
    ./install.sh
    curl https://api.sandbox.xyz.net/httpbin/get?show_env=true
    curl https://api-internal.sandbox.xyz.net/httpbin/get?show_env=true
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo add mosip https://mosip.github.io/mosip-helm
    • Execute docker.yml to install docker and add the user to the docker group:

        ansible-playbook -i hosts.ini docker.yaml
    sudo systemctl start wg-quick@wg0
    sudo systemctl status wg-quick@wg0
    cd $K8_ROOT/rancher/on-prem
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm install \
      ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx \
      --version 4.0.18 \
      --create-namespace  \
      -f ingress-nginx.values.yaml
    cd $K8_ROOT/longhorn
    ./pre_install.sh
    persistence.defaultClassReplicaCount=1
    defaultSettings.defaultReplicaCount=1
    cd $K8_ROOT/rancher/rancher-ui
    helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
    helm repo update
    helm install rancher rancher-latest/rancher \
    --namespace cattle-system \
    --create-namespace \
    -f rancher-values.yaml
    kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{ .data.bootstrapPassword|base64decode}}{{ "\n" }}'
    cd $K8_ROOT/rancher/keycloak
    ./install.sh <iam.host.name>
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo add mosip https://mosip.github.io/mosip-helm
    • export KUBECONFIG="$HOME/.kube/<cluster_name>_config"
    cd $K8_ROOT/longhorn
    ./pre_install.sh
    e.g.:
    kubectl apply -f https://rancher.e2e.mosip.net/v3/import/pdmkx6b4xxtpcd699gzwdtt5bckwf4ctdgr7xkmmtwg8dfjk4hmbpk_c-m-db8kcj4r.yaml
     ingressNginx:
     enabled: false
    spec:
    externalLabels:
    cluster: <YOUR-CLUSTER-NAME-HERE>
    cd $K8_ROOT/monitoring/alerting/
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/external/all
    ./install-all.sh
    cd $INFRA_ROOT/deployment/v3/mosip/all
    ./install-all.sh
    export MOSIP_ROOT=<location of mosip directory>
    export K8_ROOT=$MOSIP_ROOT/k8s-infra
    export INFRA_ROOT=$MOSIP_ROOT/mosip-infra
    sudo docker run -d \
    --name=wireguard \
    --cap-add=NET_ADMIN \
    --cap-add=SYS_MODULE \
    -e PUID=1000 \
    -e PGID=1000 \
    -e TZ=Asia/Calcutta \
    -e PEERS=30 \
    -p 51820:51820/udp \
    -v /home/ubuntu/wireguard/config:/config \
    -v /lib/modules:/lib/modules \
    --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
    --restart unless-stopped \
    ghcr.io/linuxserver/wireguard
    guaranteedEngineManagerCPU: 5
    guaranteedReplicaManagerCPU: 5   
    sudo apt update -y
    sudo apt-get install software-properties-common -y
    sudo add-apt-repository ppa:deadsnakes/ppa
    sudo apt-get update -y
    sudo apt-get install python3.8 -y
    sudo apt install letsencrypt -y
    sudo apt install certbot python3-certbot-nginx -y
    sudo apt update -y
    sudo apt-get install software-properties-common -y
    sudo add-apt-repository ppa:deadsnakes/ppa
    sudo apt-get update -y
    sudo apt-get install python3.8 -y
    sudo apt install letsencrypt -y
    sudo apt install certbot python3-certbot-nginx -y
    global:
    resolve_timeout: 5m
    slack_api_url: <YOUR-SLACK-API-URL>
    ...
    slack_configs:
    - channel: '<YOUR-CHANNEL-HERE>'
    send_resolved: true
    Delete the DNS IP.
  • Update the allowed IPs to the subnet's CIDR, e.g., 10.10.20.0/23 (see the illustrative peer.conf below).

  • Share the updated peer.conf with the respective peer to connect to the wireguard server from their personal PC.
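An illustrative peer.conf after the edits above (keys, address and endpoint are placeholders; the generated file will already contain real values):

    [Interface]
    Address = <peer-address-assigned-by-the-container>
    PrivateKey = <peer-private-key>
    # DNS line removed as instructed above

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = <wireguard-server-public-ip>:51820
    AllowedIPs = 10.10.20.0/23    # subnet CIDR of the cluster network (example)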

  • Test cluster access:

    • kubectl get nodes

      • Command will result in details of the nodes of the Observation cluster.

  • Save your files

    • Save a copy of the following files in a secure location, they are needed to maintain, troubleshoot and upgrade your cluster.

      • cluster.yml: The RKE cluster configuration file.

      • kube_config_cluster.yml: The Kubeconfig file for the cluster; this file contains credentials for full access to the cluster.

      • cluster.rkestate: The Kubernetes Cluster State file; this file contains credentials for full access to the cluster.

  • istio-ingressgateway-internal: internal facing istio service.

  • istiod: Istio daemon for replicating the changes to all envoy filters.

  • 02-error-only-logs.ndjson contains a Search dashboard which shows only the error logs of the services, called MOSIP Error Logs dashboard.
  • 03-service-logs.ndjson contains a Search dashboard which show all logs of a particular service, called MOSIP Service Logs dashboard.

  • 04-insight.ndjson contains dashboards which show insights into MOSIP processes, like the number of UINs generated (total and per hr), the number of Biometric deduplications processed, number of packets uploaded etc, called MOSIP Insight dashboard.

  • 05-response-time.ndjson contains dashboards which show how quickly different MOSIP Services are responding to different APIs, over time, called Response Time dashboard.

    Is host (<node1-ip>) a Control Plane host (y/n)? [y]: y
    Is host (<node1-ip>) a Worker host (y/n)? [n]: y
    Is host (<node1-ip>) an etcd host (y/n)? [n]: y
    INFO[0000] Building Kubernetes cluster
    INFO[0000] [dialer] Setup tunnel for host [10.0.0.1]
    INFO[0000] [network] Deploying port listener containers   
    INFO[0000] [network] Pulling image [alpine:latest] on host [10.0.0.1]
    ...
    INFO[0101] Finished building Kubernetes cluster successfully
    • The last line should read Finished building Kubernetes cluster successfully to indicate that your cluster is ready to use.
    cp kube_config_cluster.yml $HOME/.kube/<cluster_name>_config
    chmod 400 $HOME/.kube/<cluster_name>_config
    Is host (<node1-ip>) a Control Plane host (y/n)? [y]: y
    Is host (<node1-ip>) a Worker host (y/n)? [n]: y
    Is host (<node1-ip>) an etcd host (y/n)? [n]: y
    `cluster_name: sandbox-name`
    INFO[0000] Building Kubernetes cluster
    INFO[0000] [dialer] Setup tunnel for host [10.0.0.1]
    INFO[0000] [network] Deploying port listener containers
    INFO[0000] [network] Pulling image [alpine:latest] on host [10.0.0.1]
    ...
    INFO[0101] Finished building Kubernetes cluster successfully
    cp kube_config_cluster.yml $HOME/.kube/<cluster_name>_config
    chmod 400 $HOME/.kube/<cluster_name>_config
    cd $K8_ROOT/logging
    ./install.sh
     cd $K8_ROOT/logging
    
    ./elasticsearch-ilm-script.sh

    On-Prem without DNS Installation Guidelines

    Overview

    • MOSIP modules are deployed as microservices in a Kubernetes cluster.

    • Wireguard is used as a trust network extension to access the admin, control, and observation plane.

    • It is also used for the on-the-field registrations.

    • MOSIP uses an Nginx server for:

      • SSL termination

      • Reverse Proxy

      • CDN/Cache management

    • The Kubernetes cluster is administered using the Rancher and rke tools.

    • In V3, we have two Kubernetes clusters:

    Observation cluster - This cluster is a part of the observation plane and it helps with administrative tasks. By design, this is kept independent of the actual cluster as a good security practice and to ensure clear segregation of roles and responsibilities. As a best practice, this cluster or its services should be internal and should never be exposed to the external world.

    • Rancher is used for managing the MOSIP cluster.

    • Keycloak in this cluster is used for cluster user access management.

    • It is recommended to configure log monitoring and network monitoring in this cluster.

    • In case you have an internal container registry, then it should run here.

    MOSIP cluster - This cluster runs all the MOSIP components and certain third party components to secure the cluster, API’s and data.

    Architecture diagram

    Deployment repos

    • k8s-infra: contains the scripts to install and configure the Kubernetes clusters with the required monitoring, logging and alerting tools.

    • mosip-infra: contains the deployment scripts to run charts in a defined sequence.

    • mosip-config: contains all the configuration files required by the MOSIP modules.

    Pre-requisites

    Hardware requirements

    • The required VMs can run any OS as per convenience.

    • Here, we are referring to Ubuntu OS throughout this installation guide.

    Sl no.
    Purpose
    vCPU's
    RAM
    Storage (HDD)
    no. ofVM's
    HA

    Network requirements

    • All the VM's should be able to communicate with each other.

    • Need stable Intra network connectivity between these VM's.

    • All the VM's should have stable internet connectivity for docker image download (in case of local setup ensure to have a locally accessible docker registry).

    • Server Interface requirement as mentioned in below table:

    Sl no.
    Purpose
    Network Interfaces

    DNS requirements

    Domain Name
    Mapping details
    Purpose

    Certificate requirements

    As only secure HTTPS connections are allowed via the Nginx server, you will need the below mentioned valid SSL certificates:

    • One valid wildcard SSL certificate for the domain used to access the Observation cluster; this needs to be stored inside the Nginx server VM for the Observation cluster. In the above example, *.org.net is the corresponding domain.

    • One valid wildcard SSL certificate for the domain used to access the MOSIP cluster; this needs to be stored inside the Nginx server VM for the MOSIP cluster. In the above example, *.sandbox.xyz.net is the corresponding domain.

    Tools to be installed on personal computers for the complete deployment

    • kubectl - any client version above 1.19

    • helm - any client version above 3.0.0, and add the below repos as well:

    • istioctl: version 1.15.0

    • rke: version 1.3.10

    • Ansible (https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html): version > 2.12.4

    Installation

    Wireguard

    A Wireguard bastion host (Wireguard server) provides a secure private channel to access the MOSIP cluster. The host restricts public access and enables access only for those clients who have their public key listed in the Wireguard server. Wireguard listens on UDP port 51820.

    Setup Wireguard VM and wireguard bastion server

    • Create a Wireguard server VM with above mentioned Hardware and Network requirements.

    • Open ports and Install docker on Wireguard VM.

      • cd $K8_ROOT/wireguard/

    Note:

    • The pem files used to access the nodes should have 400 permissions: sudo chmod 400 ~/.ssh/privkey.pem

    • These ports are only needed to be opened for sharing packets over UDP.

    • Take necessary measure on firewall level so that the Wireguard server can be reachable on 51820/udp.

    • Setup Wireguard server

      • SSH to wireguard VM

      • Create directory for storing wireguard config files. mkdir -p wireguard/config

    Note:

    • Increase the no. of peers above in case more than 30 wireguard client confs (-e PEERS=30) are needed.

    • Change the directory to be mounted to wireguard docker as per need. All your wireguard confs will be generated in the mounted directory (-v /home/ubuntu/wireguard/config:/config).

    Setup Wireguard Client in your PC

    • Install Wireguard client in your PC.

    • Assign wireguard.conf:

      • SSH to the wireguard server VM.

    • Once connected to wireguard, you should be now able to login using private IP’s.

    Observation K8s Cluster setup and configuration

    Observation K8s Cluster setup

    • Install all the required tools mentioned in pre-requisites for PC.

      • kubectl

      • helm

      • ansible

    Note:

    • Make sure the permission for privkey.pem for ssh is set to 400.

    • Run env-check.yaml to check if cluster nodes are fine and do not have known issues in it.

      • cd $K8_ROOT/rancher/on-prem

      • create copy of hosts.ini.sample as hosts.ini and update the required details for Observation k8 cluster nodes.

    Observation K8s Cluster Ingress and Storage class setup

    Once the rancher cluster is ready, we need ingress and storage class to be set for other applications to be installed.

    • Nginx Ingress Controller: used for ingress in the rancher cluster.

    This will install the ingress controller in the ingress-nginx namespace of the rancher cluster.

    • Storage class setup: Longhorn creates a storage class in the cluster for creating pv (persistence volume) and pvc (persistence volume claim).

    Pre-requisites:

    Install Longhorn via helm

    • ./install.sh

• Note: Values of the below mentioned parameters are set by default by the Longhorn installation script:

      • PV replica count is set to 1. Set the replicas for the storage class appropriately.

    Setting up nginx server for Observation K8s Cluster

• For the Nginx server setup we need an SSL certificate; add the same into the Nginx server.

• SSL certificates can be generated in multiple ways: either via Let's Encrypt if you have a public DNS, or via OpenSSL certs when you don't have a public DNS.

      • Letsencrypt: Generate wildcard ssl certificate having 3 months validity when you have public DNS system using below steps.

    • Post installation check:

      • sudo systemctl status nginx

    • Steps to Uninstall nginx (in case required) sudo apt purge nginx nginx-common

Observation K8s Cluster Apps Installation

    Rancher UI: Rancher provides full CRUD capability of creating and managing kubernetes cluster.

• Install rancher using Helm: update the hostname and set privateCA to true in rancher-values.yaml, and run the following command to install.

    • Login:

• Open the Rancher page.

  • Get the Bootstrap password using

  Assign a password. IMPORTANT: make sure this password is securely saved and retrievable by the Admin.

    keycloak_client.json: Used to create SAML client on Keycloak for Rancher integration.

    Keycloak - Rancher UI Integration

• Log in as the admin user in Keycloak and make sure the email id and first name fields are populated for the admin user. This is important for Rancher authentication as given below.

• Enable authentication with Keycloak using the steps given here.

    • In Keycloak add another Mapper for the rancher client (in Master realm) with following fields:

    RBAC :

    • For users in Keycloak assign roles in Rancher - cluster and project roles. Under default project add all the namespaces. Then, to a non-admin user you may provide Read-Only role (under projects).

• If you want to create custom roles, you can follow the steps given here.

    • Add a member to cluster/project in Rancher:

    Certificates expiry

In case you see a certificate expiry message while adding users, run these commands on the local cluster:
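A minimal sketch of those commands, based on the Rancher v2.6 troubleshooting guide for expired webhook certificates (verify against the Rancher documentation for your exact version before running):

```bash
# Delete the expired webhook TLS secret and webhook configuration so Rancher recreates them.
kubectl delete secret -n cattle-system cattle-webhook-tls
kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io \
  --ignore-not-found=true rancher.cattle.io
# Restart the rancher-webhook pod so fresh certificates are generated.
kubectl delete pod -n cattle-system -l app=rancher-webhook
```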

    MOSIP K8s Cluster setup

    • Pre-requisites:

    • Install all the required tools mentioned in Pre-requisites for PC.

      • kubectl

      • helm

    • ansible

    • rke (version 1.3.10)

    • Setup MOSIP K8 Cluster node VM’s as per the hardware and network requirements as mentioned above.

    • Run env-check.yaml

    Alternatively

    • Test cluster access:

• kubectl get nodes

  • The command will list the details of the nodes of the cluster.

    MOSIP K8 Cluster Global configmap, Ingress and Storage Class setup

Global configmap: the global configmap contains the list of necessary details to be used throughout the namespaces of the cluster for common details.

    • cd $K8_ROOT/mosip

    • Copy global_configmap.yaml.sample to global_configmap.yaml.

    • Update the domain names in global_configmap.yaml and run.

    Import MOSIP Cluster into Rancher UI

    • Login as admin in Rancher console

    • Select Import Existing for cluster addition.

    • Select Generic as cluster type to add.

• Wait for a few seconds after executing the command for the cluster to get verified.

    • Your cluster is now added to the rancher management server.

    MOSIP K8 cluster Nginx server setup

• For the Nginx server setup we need an SSL certificate; add the same into the Nginx server.

• SSL certificates can be generated in multiple ways: either via Let's Encrypt if you have a public DNS, or via OpenSSL certs when you don't have a public DNS.

      • Letsencrypt: Generate wildcard ssl certificate having 3 months validity when you have public DNS system using below steps.

    Monitoring module deployment

    • Prometheus and Grafana and Alertmanager tools are used for cluster monitoring.

    • Select 'Monitoring' App from Rancher console -> Apps & Marketplaces.

    • In Helm options, open the YAML file and disable Nginx Ingress.

    Alerting setup

    Alerting is part of cluster monitoring, where alert notifications are sent to the configured email or slack channel.

    • Monitoring should be deployed which includes deployment of prometheus, grafana and alertmanager.

• Create a slack incoming webhook.

• After setting up the slack incoming webhook, update slack_api_url and slack_channel_name in alertmanager.yml.

• Install the default alerts along with some of the defined custom alerts:

    • Alerting is installed.

    Logging module setup and installation

MOSIP uses Rancher Fluentd and elasticsearch to collect logs from all services and reflect the same in the Kibana Dashboard.

• Install Rancher FluentD system: for scraping logs out of all the microservices from the MOSIP K8s cluster.

      • Install Logging from Apps and marketplace within the Rancher UI.

      • Select Chart Version 100.1.3+up3.17.7 from Rancher console -> Apps & Marketplaces.

    Open kibana dashboard from https://kibana.sandbox.xyz.net.

    Kibana --> Menu (on top left) --> Dashboard --> Select the dashboard.

    MOSIP External Dependencies setup

    External Dependencies are set of external requirements that are needed for functioning of MOSIP’s core services like DB, Object Store, HSM etc.

Click here to check the detailed installation instructions of all the external components.

    Configurational change in case using Openssl wildcard ssl certificate. (Only advised in development env, not recommended for Production setup)

• Add/update the below property in application-default.properties and comment out the below property in the *-default.properties file in the config repo.

    • Add/ Update the below property in the esignet-default.properties file in the config repo.

    MOSIP Modules Deployment

• Now that the Kubernetes clusters and external dependencies are installed, we will continue with the MOSIP service deployment.

• While installing a few modules, the installation script prompts to check whether you have a public domain and valid SSL certificates on the server. Opt for option n as we are using self-signed certificates. For example:

    • Start installing mosip modules:

    Check detailed installation steps.

    Loadbalancing

    mosip-helm : contains packaged helm charts for all the MOSIP modules.

| Sl no. | Purpose | vCPUs | RAM | Storage (HDD) | No. of VMs | HA |
| --- | --- | --- | --- | --- | --- | --- |
| 2. | Observation Cluster nodes | 2 | 8 GB | 32 GB | 2 | 2 |
| 3. | Observation Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 2 | Nginx+ |
| 4. | MOSIP Cluster nodes | 12 | 32 GB | 128 GB | 6 | 6 |
| 5. | MOSIP Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 1 | Nginx+ |

| Purpose | Network interfaces |
| --- | --- |
| MOSIP Nginx server | One internal interface: that is on the same network as all the rest of the nodes (e.g.: inside local NAT Network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on 443/tcp port to this interface IP. |

| Sl no. | Domain name | Mapping | Purpose |
| --- | --- | --- | --- |
| 3. | sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Index page for links to different dashboards of the MOSIP env. (This is just for reference; please do not expose this page in a real production or UAT environment.) |
| 4. | api-internal.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Internal APIs are exposed through this domain. They are accessible privately over the wireguard channel. |
| 5. | api.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | All the APIs that are publicly usable are exposed using this domain. |
| 6. | prereg.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Domain name for MOSIP's pre-registration portal. The portal is accessible publicly. |
| 7. | activemq.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Provides direct access to the activemq dashboard. It is limited and can be used only over wireguard. |
| 8. | kibana.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Optional installation. Used to access the kibana dashboard over wireguard. |
| 9. | regclient.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | The Registration Client can be downloaded from this domain. It should be used over wireguard. |
| 10. | admin.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP's admin portal is exposed using this domain. This is an internal domain and access is restricted to wireguard. |
| 11. | object-store.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Optional. This domain is used to access the object server. Based on the object server that you choose, map this domain accordingly. In our reference implementation, MinIO is used and this domain lets you access MinIO's console over wireguard. |
| 12. | kafka.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Kafka UI is installed as part of MOSIP's default installation. We can access the Kafka UI over wireguard. Mostly used for administrative needs. |
| 13. | iam.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP uses an OpenID Connect server to limit and manage access across all the services. The default installation comes with Keycloak. This domain is used to access the Keycloak server over wireguard. |
| 14. | postgres.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | This domain points to the postgres server. You can connect to postgres via port forwarding over wireguard. |
| 15. | pmp.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP's partner management portal is used to manage partners; the portal is accessed over wireguard. |
| 16. | onboarder.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Accessing reports of MOSIP partner onboarding over wireguard. |
| 17. | resident.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Accessing the resident portal publicly. |
| 18. | idp.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Accessing IDP publicly. |
| 19. | smtp.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Accessing the mock-smtp UI over wireguard. |

Create a directory named mosip on your PC and:
• Clone the k8s-infra repo with tag 1.2.0.1-B2 (whichever is the latest version) inside the mosip directory: git clone https://github.com/mosip/k8s-infra -b v1.2.0.1-B2

• Clone mosip-infra with tag 1.2.0.1-B2 (whichever is the latest version) inside the mosip directory: git clone https://github.com/mosip/mosip-infra -b v1.2.0.1-B2

    • Set below mentioned variables in bashrc

    source .bashrc

    Note: Above mentioned environment variables will be used throughout the installation to move between one directory to other to run install scripts.

Create a copy of hosts.ini.sample as hosts.ini and update the required details for the wireguard VM.

    cp hosts.ini.sample hosts.ini

  • execute ports.yml to enable ports on VM level using ufw:

    ansible-playbook -i hosts.ini ports.yaml

  • Install and start wireguard server using docker as given below:
    cd /home/ubuntu/wireguard/config
• Assign one of the peers for yourself and use the same from your PC to connect to the server.

    • Create an assigned.txt file to keep track of the peer files allocated, and update it every time a peer is allocated to someone.

    • use ls cmd to see the list of peers.

    • get inside your selected peer directory, and add mentioned changes in peer.conf:

      • cd peer1

      • nano peer1.conf

  • add peer.conf in your PC’s /etc/wireguard directory as wg0.conf.

  • start the wireguard client and check the status:

  • rke (version 1.3.10)

  • Setup Observation Cluster node VM’s as per the hardware and network requirements as mentioned above.

  • Setup passwordless SSH into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).

    • Generate keys on your PC ssh-keygen -t rsa

    • Copy the keys to remote observation node VM’s ssh-copy-id <remote-user>@<remote-ip>

    • SSH into the node to check password-less SSH ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>

    • cp hosts.ini.sample hosts.ini

    • ansible-playbook -i hosts.ini env-check.yaml

    • This ansible checks if localhost mapping is already present in /etc/hosts file in all cluster nodes, if not it adds the same.

  • Open ports and install docker on Observation K8 Cluster node VM’s.

    • cd $K8_ROOT/rancher/on-prem

    • Ensure that hosts.ini is updated with nodal details.

    • Update vpc_ip variable in ports.yaml with vpc CIDR ip to allow access only from machines inside same vpc.

    • Execute ports.yml to enable ports on VM level using ufw:

      • ansible-playbook -i hosts.ini ports.yaml

    • Disable swap in cluster nodes. (Ignore if swap is already disabled)

      • ansible-playbook -i hosts.ini swap.yaml

    • execute docker.yml to install docker and add user to docker group:

      • ansible-playbook -i hosts.ini docker.yaml

  • Creating RKE Cluster Configuration file

    • rke config

    • Command will prompt for nodal details related to cluster, provide inputs w.r.t below mentioned points:

      • SSH Private Key Path :

      • Number of Hosts:

      • SSH Address of host :

      • SSH User of host :

      • Make all the nodes Worker host by default.

      • To create an HA cluster, specify more than one host with role Control Plane and etcd host.

    • Network Plugin Type : Continue with canal as default network plugin.

• For the rest of the configurations, opt for the required or default value.

  • As a result of the rke config command, a cluster.yml file will be generated inside the same directory; update the below mentioned fields:

    • nano cluster.yml

      • Remove the default Ingress install

    • Add the name of the kubernetes cluster

      • cluster_name: sandbox-name

  • For production deployments edit the cluster.yml, according to this RKE Cluster Hardening Guide.

  • Setup up the cluster:

    • Once cluster.yml is ready, you can bring up the kubernetes cluster using simple command.

      • This command assumes the cluster.yml file is in the same directory as where you are running the command.

      • rke up

    • As part of the Kubernetes creation process, a kubeconfig file has been created and written at kube_config_cluster.yml, which can be used to start interacting with your Kubernetes cluster.

    • Copy the kubeconfig files

• To access the cluster using the kubeconfig file, use any one of the below methods:

      • cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config

      Alternatively

      • export KUBECONFIG="$HOME/.kube/<cluster_name>_config"

  • Total available node CPU allocated to each instance-manager pod in the longhorn-system namespace.
    • The value "5" means 5% of the total available node CPU.

    • This value should be fine for sandbox and pilot but you may have to increase the default to "12" for production.

    • The value can be updated on Longhorn UI after installation.

  • Access the Longhorn dashboard from Rancher UI once installed.

• Setup Backup: in case you want to back up the PV data from Longhorn to S3 periodically, follow the instructions here. (Optional, ignore if not required; a minimal example of the S3 credential secret is sketched below.)
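Based on the Longhorn documentation (not a MOSIP-specific script), a periodic S3 backup needs an S3 credential secret in the longhorn-system namespace; the backup target (s3://<bucket>@<region>/) and the secret name are then set in the Longhorn UI under Settings. The secret name below is an arbitrary example:

```bash
# Credentials used by Longhorn to push PV backups to your S3 bucket.
kubectl -n longhorn-system create secret generic longhorn-s3-backup-secret \
  --from-literal=AWS_ACCESS_KEY_ID=<your-access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
```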

  • SSH into the nginx server node.

  • Install Pre-requisites

    • Generate wildcard SSL certificates for your domain name.

    • sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.org.net

    • replace org.net with your domain.

    • The default challenge HTTP is changed to DNS challenge, as we require wildcard certificates.

    • Create a DNS record in your DNS service of type TXT with host _acme-challenge.org.net, with the string prompted by the script.

    • Wait for a few minutes for the above entry to get into effect. Verify: host -t TXT _acme-challenge.org.net

    • Press enter in the certbot prompt to proceed.

    • Certificates are created in /etc/letsencrypt on your machine.

    • Certificates created are valid for 3 months only.

• Wildcard SSL certificate renewal. This will increase the validity of the certificate for the next 3 months (one possible renewal approach is sketched below).
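Because the certificate above was issued with the manual DNS-01 challenge, it does not auto-renew; one possible approach (an assumption, not an official MOSIP step) is to re-run the same certbot command close to expiry and answer the DNS TXT challenge again:

```bash
# Re-issue the wildcard certificate; replace org.net with your domain and
# update the _acme-challenge TXT record when certbot prompts for it.
sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.org.net
# Reload nginx so it serves the renewed certificate from /etc/letsencrypt.
sudo systemctl reload nginx
```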

  • Openssl : Generate wildcard ssl certificate using openssl in case you don't have public DNS using below steps. (Ensure to use this only in development env, not suggested for Production env).

    • Install docker on nginx node.

    • Generate a self-signed certificate for your domain, such as *.sandbox.xyz.net.

    • Execute the following command to generate a self-signed SSL certificate. Prior to execution, kindly ensure to update environmental variables & rancher domain passed to openssl command:

    • Above command will generate certs in below specified location. Use it when prompted during nginx installation.

      • fullChain path: /etc/ssl/certs/tls.crt.

      • privKey path: /etc/ssl/private/tls.key.

  • Install nginx:

    • Login to nginx server node.

    • Clone k8s-infra

• Provide the below mentioned inputs as and when prompted

      • Rancher nginx ip : internal ip of the nginx server VM.

      • SSL cert path : path of the ssl certificate to be used for ssl termination.

      • SSL key path : path of the ssl key to be used for ssl termination.

      • Cluster node ip's : ip’s of the rancher cluster node

  • Restart nginx service.

  • DNS mapping:

    • Once nginx server is installed successfully, create DNS mapping for rancher cluster related domains as mentioned in DNS requirement section. (rancher.org.net, keycloak.org.net)

• In case you used OpenSSL for the wildcard SSL certificate, add DNS entries in the local hosts file of your system.

      • For example: /etc/hosts files for Linux machines.

    Keycloak: Keycloak is an OAuth 2.0 compliant Identity Access Management (IAM) system used to manage the access to Rancher for cluster controls.

    Protocol: saml

  • Name: username

  • Mapper Type: User Property

  • Property: username

  • Friendly Name: username

  • SAML Attribute Name: username

  • SAML Attribute NameFormat: Basic

  • Specify the following mappings in Rancher's Authentication Keycloak form:

    • Display Name Field: givenName

    • User Name Field: email

    • UID Field: username

    • Entity ID Field: https://your-rancher-domain/v1-saml/keycloak/saml/metadata

    • Rancher API Host: https://your-rancher-domain

    • Groups Field: member

  • Give member name exactly as username in Keycloak

  • Assign appropriate role like Cluster Owner, Cluster Viewer etc.

  • You may create new role with fine grained access control.

• Run env-check.yaml to check if the cluster nodes are fine and do not have known issues in them.
    • cd $K8_ROOT/rancher/on-prem

    • create copy of hosts.ini.sample as hosts.ini and update the required details for MOSIP k8 cluster nodes.

      • cp hosts.ini.sample hosts.ini

      • ansible-playbook -i hosts.ini env-check.yaml

      • This ansible checks if localhost mapping is already present in /etc/hosts file in all cluster nodes, if not it adds the same.

  • Setup passwordless ssh into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).

    • Generate keys on your PC

      • ssh-keygen -t rsa

    • Copy the keys to remote rancher node VM’s:

      • ssh-copy-id <remote-user>@<remote-ip>

    • SSH into the node to check password-less SSH

      • ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>

    • Rancher UI : (deployed in Rancher K8 cluster)

  • Open ports and Install docker on MOSIP K8 Cluster node VM’s.

    • cd $K8_ROOT/mosip/on-prem

• Create a copy of hosts.ini.sample as hosts.ini and update the required details for the MOSIP K8s cluster nodes.

      • cp hosts.ini.sample hosts.ini

    • Update vpc_ip variable in ports.yaml with vpc CIDR ip to allow access only from machines inside same vpc.

    • execute ports.yml to enable ports on VM level using ufw:

      • ansible-playbook -i hosts.ini ports.yaml

    • Disable swap in cluster nodes. (Ignore if swap is already disabled)

      • ansible-playbook -i hosts.ini swap.yaml

    • execute docker.yml to install docker and add user to docker group:

      • ansible-playbook -i hosts.ini docker.yaml

  • Creating RKE Cluster Configuration file

    • rke config

    • Command will prompt for nodal details related to cluster, provide inputs w.r.t below mentioned points:

      • SSH Private Key Path :

      • Number of Hosts:

      • SSH Address of host :

      • SSH User of host :

      • Make all the nodes Worker host by default.

      • To create an HA cluster, specify more than one host with role Control Plane and etcd host.

    • Network Plugin Type : Continue with canal as default network plugin.

• For the rest of the other configurations, opt for the required or default value.

  • As a result of the rke config command, a cluster.yml file will be generated inside the same directory; update the below mentioned fields:

    • nano cluster.yml

    • Remove the default Ingress install

    • Add the name of the kubernetes cluster

• For production deployments edit the cluster.yml according to this RKE Cluster Hardening Guide.

  • Setup up the cluster:

    • Once cluster.yml is ready, you can bring up the kubernetes cluster using simple command.

      • This command assumes the cluster.yml file is in the same directory as where you are running the command.

        • rke up

      • The last line should read Finished building Kubernetes cluster successfully to indicate that your cluster is ready to use.

      • Copy the kubeconfig files

• To access the cluster using the kubeconfig file, use any one of the below methods:

    • cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config

  • Save Your files
• Save a copy of the following files in a secure location; they are needed to maintain, troubleshoot and upgrade your cluster:

      • cluster.yml: The RKE cluster configuration file.

      • kube_config_cluster.yml: The Kubeconfig file for the cluster, this file contains credentials for full access to the cluster.

• cluster.rkestate: The Kubernetes Cluster State file; this file contains credentials for full access to the cluster.

• In case you do not have a public DNS system, add the custom DNS configuration for the cluster.

    • Check whether coredns pods are up and running in your cluster via the below command:

    • Update the IP address and domain name in the below DNS hosts template and add it in the coredns configmap Corefile key in the kube-system namespace.

    • Update coredns configmap via below command.

    example:

    • Check whether the DNS changes are correctly updated in coredns configmap.

    • Restart the coredns pod in the kube-system namespace.

    • Check status of coredns restart.

  • kubectl apply -f global_configmap.yaml

  • Istio Ingress setup: It is a service mesh for the MOSIP K8 cluster which provides transparent layers on top of existing microservices along with powerful features enabling a uniform and more efficient way to secure, connect, and monitor services.

    • cd $K8_ROOT/mosip/on-prem/istio

    • ./install.sh

    • This will bring up all the Istio components and the Ingress Gateways.

    • Check Ingress Gateway services:

      • kubectl get svc -n istio-system

        • istio-ingressgateway: external facing istio service.

  • Storage class setup: Longhorn creates a storage class in the cluster for creating pv (persistence volume) and pvc (persistence volume claim).

    • Pre-requisites:

    • Install Longhorn via helm

      • ./install.sh

• Note: Values of the below mentioned parameters are set by default by the Longhorn installation script:

        • PV replica count is set to 1. Set the replicas for the storage class appropriately.

        • Total available node CPU allocated to each instance-manager pod in the longhorn-system namespace.

  • Fill the Cluster Name field with unique cluster name and select Create.
• You will get the kubectl commands to be executed in the kubernetes cluster. Copy the command and execute it from your PC (make sure your kube-config file is correctly set to the MOSIP cluster).

  • SSH into the nginx server node.

  • Install Pre-requisites

    • Generate wildcard SSL certificates for your domain name.

    • sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.org.net

    • replace org.net with your domain.

    • The default challenge HTTP is changed to DNS challenge, as we require wildcard certificates.

    • Create a DNS record in your DNS service of type TXT with host _acme-challenge.org.net, with the string prompted by the script.

    • Wait for a few minutes for the above entry to get into effect. Verify: host -t TXT _acme-challenge.org.net

    • Press enter in the certbot prompt to proceed.

    • Certificates are created in /etc/letsencrypt on your machine.

    • Certificates created are valid for 3 months only.

• Wildcard SSL certificate renewal. This will increase the validity of the certificate for the next 3 months.

  • Openssl : Generate wildcard ssl certificate using openssl in case you don't have public DNS using below steps. (Ensure to use this only in development env, not suggested for Production env).

    • Install docker on nginx node.

    • Generate a self-signed certificate for your domain, such as *.sandbox.xyz.net.

    • Execute the following command to generate a self-signed SSL certificate. Prior to execution, kindly ensure that the environmental variables passed to the OpenSSL Docker container have been properly updated:

    • Above command will generate certs in below specified location. Use it when prompted during nginx installation.

      • fullChain path: /etc/ssl/certs/nginx-selfsigned.crt.

      • privKey path: /etc/ssl/private/nginx-selfsigned.key.

  • Install nginx:

    • Login to nginx server node.

    • Clone k8s-infra

    • Provide below mentioned inputs as and when prompted

      • MOSIP nginx server internal ip

      • MOSIP nginx server public ip

      • Publically accessible domains (comma separated with no whitespaces)

      • SSL cert path

  • When utilizing an openssl wildcard SSL certificate, please add the following server block to the nginx server configuration within the http block. Disregard this if using SSL certificates obtained through letsencrypt or for publicly available domains. Please note that this should only be used in a development environment and is not recommended for production environments.

    • nano /etc/nginx/nginx.conf

    • Note: HTTP access is enabled for IAM because MOSIP's keymanager expects to have valid SSL certificates. Ensure to use this only for development purposes, and it is not recommended to use it in production environments.

    • Restart nginx service.

  • Post installation check:

    • sudo systemctl status nginx

  • Steps to Uninstall nginx (in case required) sudo apt purge nginx nginx-common

  • DNS mapping:

• Once the nginx server is installed successfully, create DNS mapping for the MOSIP cluster related domains as mentioned in the DNS requirement section (e.g. api.sandbox.xyz.net, api-internal.sandbox.xyz.net).

• In case you used OpenSSL for the wildcard SSL certificate, add DNS entries in the local hosts file of your system.

      • For example: /etc/hosts files for Linux machines.

  • Check Overall if nginx and istio wiring is set correctly

    • Install httpbin: This utility docker returns http headers received inside the cluster. You may use it for general debugging - to check ingress, headers etc.

    • To see what is reaching the httpbin (example, replace with your domain name):

  • Click on Install.
  • cd $K8_ROOT/monitoring/alerting/

  • nano alertmanager.yml

  • Update:

  • Update Cluster_name in patch-cluster-name.yaml.

  • cd $K8_ROOT/monitoring/alerting/

  • nano patch-cluster-name.yaml

  • Update:

  • Configure Rancher FluentD

    • Create clusteroutput

      • kubectl apply -f clusteroutput-elasticsearch.yaml

    • Start clusterFlow

      • kubectl apply -f clusterflow-elasticsearch.yaml

    • Install elasticsearch, kibana and Istio addons\

    • set min_age in elasticsearch-ilm-script.sh and execute the same.

    • min_age : is the minimum no. of days for which indices will be stored in elasticsearch.

• MOSIP provides a set of Kibana dashboards for checking logs and throughput.

      • A brief description of these dashboards is as follows:

        • 01-logstash.ndjson contains the logstash Index Pattern required by the rest of the dashboards.

  • Import dashboards:

• cd $K8_ROOT/logging

    • ./load_kibana_dashboards.sh ./dashboards <cluster-kube-config-file>

  • View dashboards

| Sl no. | Purpose | vCPUs | RAM | Storage (HDD) | No. of VMs | HA |
| --- | --- | --- | --- | --- | --- | --- |
| 1. | Wireguard Bastion Host | 2 | 4 GB | 8 GB | 1 | (ensure to setup active-passive) |

| Sl no. | Purpose | Network interfaces |
| --- | --- | --- |
| 1. | Wireguard Bastion Host | One private interface: that is on the same network as all the rest of the nodes (e.g.: inside local NAT Network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on 51820/udp port to this interface IP. |
| 2. | K8 Cluster nodes | One internal interface: with internet access and that is on the same network as all the rest of the nodes (e.g.: inside local NAT Network). |
| 3. | Observation Nginx server | One internal interface: with internet access and that is on the same network as all the rest of the nodes (e.g.: inside local NAT Network). |

| Sl no. | Domain name | Mapping | Purpose |
| --- | --- | --- | --- |
| 1. | rancher.xyz.net | Private IP of Nginx server or load balancer for Observation cluster | Rancher dashboard to monitor and manage the kubernetes cluster. |
| 2. | keycloak.xyz.net | Private IP of Nginx server for Observation cluster | Administrative IAM tool (Keycloak). This is for the kubernetes administration. |


    peer1 :   peername
    peer2 :   xyz
    ingress:
    provider: none
    sudo apt-get update --fix-missing
    sudo apt install docker.io -y
    sudo systemctl restart docker
    cd $K8_ROOT/rancher/on-prem/nginx
    sudo ./install.sh
     <INTERNAL_IP_OF_OBS_NGINX_NODE>    rancher.xyz.net keycloak.xyz.net
    ingress:
    provider: none
    kubectl -n kube-system get pods -l k8s-app=kube-dns
    hosts {
      <PUBLIC_IP_OF_MOSIP_NGINX_NODE>    api.sandbox.xyz.net resident.sandbox.xyz.net esignet.sandbox.xyz.net prereg.sandbox.xyz.net healthservices.sandbox.xyz.net
      <INTERNAL_IP_OF_MOSIP_NGINX_NODE>  sandbox.xyz.net api-internal.sandbox.xyz.net activemq.sandbox.xyz.net kibana.sandbox.xyz.net regclient.sandbox.xyz.net admin.sandbox.xyz.net minio.sandbox.xyz.net iam.sandbox.xyz.net kafka.sandbox.xyz.net postgres.sandbox.xyz.net pmp.sandbox.xyz.net onboarder.sandbox.xyz.net smtp.sandbox.xyz.net compliance.sandbox.xyz.net
      fallthrough
    }
    cd $K8_ROOT/longhorn
    ./pre_install.sh
    sudo apt-get update --fix-missing
    sudo apt install docker.io -y
    sudo systemctl restart docker
    cd $K8_ROOT/mosip/on-prem/nginx
    sudo ./install.sh
    server{
       listen <cluster-nginx-internal-ip>:80;
       server_name iam.sandbox.xyz.net;
       location /auth/realms/mosip/protocol/openid-connect/certs {
            proxy_pass                      http://myInternalIngressUpstream;
            proxy_http_version              1.1;
            proxy_set_header                Upgrade $http_upgrade;
            proxy_set_header                Connection "upgrade";
            proxy_set_header                Host $host;
            proxy_set_header                Referer $http_referer;
            proxy_set_header                X-Real-IP $remote_addr;
            proxy_set_header                X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header                X-Forwarded-Proto $scheme;
            proxy_pass_request_headers      on;
            proxy_set_header  Strict-Transport-Security "max-age=0;";
       }
       location / { return 301 https://iam.sandbox.xyz.net; }
      }
    cd $K8_ROOT/utils/httpbin
    ./install.sh
    curl https://api.sandbox.xyz.net/httpbin/get?show_env=true
    curl https://api-internal.sandbox.xyz.net/httpbin/get?show_env=true
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo add mosip https://mosip.github.io/mosip-helm
    * execute docker.yml to install docker and add user to docker group:
    
        `ansible-playbook -i hosts.ini docker.yaml`
        
    sudo systemctl start wg-quick@wg0
    sudo systemctl status wg-quick@wg0
    cd $K8_ROOT/rancher/on-prem
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm install \                                                                                                             
      ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx \
      --version 4.0.18 \
      --create-namespace  \
      -f ingress-nginx.values.yaml
    cd $K8_ROOT/longhorn
    ./pre_install.sh
    persistence.defaultClassReplicaCount=1
    defaultSettings.defaultReplicaCount=1
    sudo systemctl restart nginx
    cd $K8_ROOT/rancher/rancher-ui
    helm repo add rancher https://releases.rancher.com/server-charts/stable
    helm repo update
    kubectl create ns cattle-system
    kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem=./tls.crt
    helm install rancher rancher/rancher --version 2.6.3 \
    --namespace cattle-system \
    --create-namespace \
    --set privateCA=true \
    -f rancher-values.yaml
    kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{ .data.bootstrapPassword|base64decode}}{{ "\n" }}'
    cd $K8_ROOT/rancher/keycloak
    ./install.sh <iam.host.name>
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo add mosip https://mosip.github.io/mosip-helm
* `export KUBECONFIG="$HOME/.kube/<cluster_name>_config"`
    e.g.:
    kubectl apply -f https://rancher.e2e.mosip.net/v3/import/pdmkx6b4xxtpcd699gzwdtt5bckwf4ctdgr7xkmmtwg8dfjk4hmbpk_c-m-db8kcj4r.yaml
     ingressNginx:
     enabled: false
    spec:
    externalLabels:
    cluster: <YOUR-CLUSTER-NAME-HERE>
    cd $K8_ROOT/monitoring/alerting/
    ./install.sh
    cd $INFRA_ROOT/deployment/v3/external/all
    ./install-all.sh
    mosip.iam.certs_endpoint=http://${keycloak.external.host}/auth/realms/mosip/protocol/openid-connect/certs
    spring.security.oauth2.resourceserver.jwt.jwk-set-uri=http://${keycloak.external.host}/auth/realms/mosip/protocol/openid-connect/certs
    ./install.sh
    Do you have public domain & valid SSL? (Y/n) 
     Y: if you have public domain & valid ssl certificate
     n: If you don't have a public domain and a valid SSL certificate. Note: It is recommended to use this option only in development environments.
    cd $INFRA_ROOT/deployment/v3/mosip/all
    ./install-all.sh
    export MOSIP_ROOT=<location of mosip directory>
    export K8_ROOT=$MOSIP_ROOT/k8s-infra
    export INFRA_ROOT=$MOSIP_ROOT/mosip-infra
    sudo docker run -d \
    --name=wireguard \
    --cap-add=NET_ADMIN \
    --cap-add=SYS_MODULE \
    -e PUID=1000 \
    -e PGID=1000 \
    -e TZ=Asia/Calcutta \
    -e PEERS=30 \
    -p 51820:51820/udp \
    -v /home/ubuntu/wireguard/config:/config \
    -v /lib/modules:/lib/modules \
    --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
    --restart unless-stopped \
    ghcr.io/linuxserver/wireguard
    guaranteedEngineManagerCPU: 5
    guaranteedReplicaManagerCPU: 5   
    sudo apt update -y
    sudo apt-get install software-properties-common -y
    sudo add-apt-repository ppa:deadsnakes/ppa
    sudo apt-get update -y
    sudo apt-get install python3.8 -y
    sudo apt install letsencrypt -y
    sudo apt install certbot python3-certbot-nginx -y
    sudo apt update -y
    sudo apt-get install software-properties-common -y
    sudo add-apt-repository ppa:deadsnakes/ppa
    sudo apt-get update -y
    sudo apt-get install python3.8 -y
    sudo apt install letsencrypt -y
    sudo apt install certbot python3-certbot-nginx -y
    global:
    resolve_timeout: 5m
    slack_api_url: <YOUR-SLACK-API-URL>
    ...
    slack_configs:
    - channel: '<YOUR-CHANNEL-HERE>'
    send_resolved: true
Delete the DNS IP.
  • Update the allowed IPs to the subnet's CIDR IP, e.g. 10.10.20.0/23 (a sample peer configuration is shown below).

  • Share the updated peer.conf with the respective peer to connect to the wireguard server from their personal PC.
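For reference, after those edits a peer configuration might look roughly like the sketch below; the keys and addresses are placeholders generated by the wireguard container, and the only intentional changes are the removed DNS line and the AllowedIPs value:

```
[Interface]
Address = 10.13.13.2
PrivateKey = <peer-private-key>
ListenPort = 51820
# DNS entry deleted as per the step above

[Peer]
PublicKey = <wireguard-server-public-key>
Endpoint = <wireguard-server-public-ip>:51820
AllowedIPs = 10.10.20.0/23
```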

  • Test cluster access:

• kubectl get nodes

      • The command will list the details of the nodes of the Observation cluster.

  • Save your files

    • Save a copy of the following files in a secure location, they are needed to maintain, troubleshoot and upgrade your cluster.

      • cluster.yml: The RKE cluster configuration file.

• kube_config_cluster.yml: The Kubeconfig file for the cluster; this file contains credentials for full access to the cluster.

      • cluster.rkestate: The Kubernetes Cluster State file; this file contains credentials for full access to the cluster.

• In case you do not have a public DNS system, add the custom DNS configuration for the cluster.

    • Check whether coredns pods are up and running in your cluster via the below command:

    • Update the IP address and domain name in the below DNS hosts template and add it in the coredns configmap Corefile key in the kube-system namespace.

    • Update coredns configmap via below command.

    example:

    • Check whether the DNS changes are correctly updated in coredns configmap.

    • Restart the coredns pod in the kube-system namespace.

    • Check status of coredns restart.

  • istio-ingressgateway-internal: internal facing istio service.

  • istiod: Istio daemon for replicating the changes to all envoy filters.

  • The value "5" means 5% of the total available node CPU

  • This value should be fine for sandbox and pilot but you may have to increase the default to "12" for production.

  • The value can be updated on Longhorn UI after installation.

  • SSL key path

  • Cluster node ip's (comma separated no whitespace)

  • 02-error-only-logs.ndjson contains a Search dashboard which shows only the error logs of the services, called MOSIP Error Logs dashboard.
  • 03-service-logs.ndjson contains a Search dashboard which show all logs of a particular service, called MOSIP Service Logs dashboard.

  • 04-insight.ndjson contains dashboards which show insights into MOSIP processes, like the number of UINs generated (total and per hr), the number of Biometric deduplications processed, number of packets uploaded etc, called MOSIP Insight dashboard.

  • 05-response-time.ndjson contains dashboards which show how quickly different MOSIP Services are responding to different APIs, over time, called Response Time dashboard.

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    hosts {
      <INTERNAL_IP_OF_OBS_NGINX_NODE>    rancher.xyz.net keycloak.xyz.net
      fallthrough
    }
    Is host (<node1-ip>) a Control Plane host (y/n)? [y]: y
    Is host (<node1-ip>) a Worker host (y/n)? [n]: y
    Is host (<node1-ip>) an etcd host (y/n)? [n]: y
    INFO[0000] Building Kubernetes cluster
    INFO[0000] [dialer] Setup tunnel for host [10.0.0.1]
    INFO[0000] [network] Deploying port listener containers   
    INFO[0000] [network] Pulling image [alpine:latest] on host [10.0.0.1]
    ...
    INFO[0101] Finished building Kubernetes cluster successfully
* The last line should read `Finished building Kubernetes cluster successfully` to indicate that your cluster is ready to use.
    cp kube_config_cluster.yml $HOME/.kube/<cluster_name>_config
    chmod 400 $HOME/.kube/<cluster_name>_config
     mkdir -p /etc/ssl/certs/
     export VALIDITY=700 
     export COUNTRY=IN 
     export STATE=KAR
     export LOCATION=BLR
     export ORG=MOSIP
     export ORG_UNIT=MOSIP
     export COMMON_NAME=*.xyz.net
     
     openssl req -x509 -nodes -days $VALIDITY \
       -newkey rsa:2048 -keyout /etc/ssl/certs/tls.key -out /etc/ssl/certs/tls.crt \
       -subj "/C=$COUNTRY/ST=$STATE/L=$LOCATION/O=$ORG/OU=$ORG_UNIT/CN=$COMMON_NAME" \
       -addext "subjectAltName = DNS:rancher.xyz.net, DNS:*.xyz.net"
    Is host (<node1-ip>) a Control Plane host (y/n)? [y]: y
    Is host (<node1-ip>) a Worker host (y/n)? [n]: y
    Is host (<node1-ip>) an etcd host (y/n)? [n]: y
    `cluster_name: sandbox-name`
    INFO[0000] Building Kubernetes cluster
    INFO[0000] [dialer] Setup tunnel for host [10.0.0.1]
    INFO[0000] [network] Deploying port listener containers
    INFO[0000] [network] Pulling image [alpine:latest] on host [10.0.0.1]
    ...
    INFO[0101] Finished building Kubernetes cluster successfully
    cp kube_config_cluster.yml $HOME/.kube/<cluster_name>_config
    chmod 400 $HOME/.kube/<cluster_name>_config
    kubectl -n kube-system edit cm coredns
    kubectl -n kube-system get cm coredns -o yaml
    kubectl -n kube-system rollout restart deploy coredns coredns-autoscaler
    kubectl -n kube-system rollout status deploy coredns
    kubectl -n kube-system rollout status coredns-autoscaler
    persistence.defaultClassReplicaCount=1
    defaultSettings.defaultReplicaCount=1
    guaranteedEngineManagerCPU: 5
    guaranteedReplicaManagerCPU: 5   
    docker volume create --name gensslcerts --opt type=none --opt device=/etc/ssl --opt o=bind
    docker run -it --mount type=volume,src='gensslcerts',dst=/home/mosip/ssl,volume-driver=local \
    -e VALIDITY=700        \
    -e COUNTRY=IN          \
    -e STATE=KAR           \
    -e LOCATION=BLR        \
    -e ORG=MOSIP           \
    -e ORG_UNIT=MOSIP      \
    -e COMMON_NAME=*.sandbox.xyz.net \
    mosipdev/openssl:latest
    sudo systemctl restart nginx
     <PUBLIC_IP>    api.sandbox.xyz.net resident.sandbox.xyz.net esignet.sandbox.xyz.net prereg.sandbox.xyz.net healthservices.sandbox.xyz.net
     <INTERNAL_IP>  sandbox.xyz.net api-internal.sandbox.xyz.net activemq.sandbox.xyz.net kibana.sandbox.xyz.net regclient.sandbox.xyz.net admin.sandbox.xyz.net minio.sandbox.xyz.net iam.sandbox.xyz.net kafka.sandbox.xyz.net postgres.sandbox.xyz.net pmp.sandbox.xyz.net onboarder.sandbox.xyz.net smtp.sandbox.xyz.net compliance.sandbox.xyz.net
    cd $K8_ROOT/logging
./install.sh
     cd $K8_ROOT/logging
    
    ./elasticsearch-ilm-script.sh
    kubectl -n kube-system edit cm coredns
    kubectl -n kube-system get cm coredns -o yaml
    kubectl -n kube-system rollout restart deploy coredns coredns-autoscaler
    kubectl -n kube-system rollout status deploy coredns
    kubectl -n kube-system rollout status coredns-autoscaler

    AWS Installation Guidelines

    Overview

    • MOSIP modules are deployed in the form of microservices in a Kubernetes cluster.

• Wireguard is used as a trust network extension to access the admin, control, and observation planes.

    • It is also used for on-the-field registrations.

    • MOSIP uses AWS load balancers for:

      • SSL termination

      • Reverse Proxy

      • CDN/Cache management

• The Kubernetes cluster is administered using Rancher and kubectl.

    • In V3, we have two Kubernetes clusters:

      • Observation Cluster - This cluster is a part of the observation plane and it helps in administrative tasks. By design, this is kept independent of the actual cluster as a good security practice and to ensure clear segregation of roles and responsibilities. As a best practice, this cluster or its services should be internal and should never be exposed to the external world.

• Rancher is used for managing the MOSIP cluster.

    Deployment Repos

• k8s-infra: contains scripts to install and configure the Kubernetes cluster with the required monitoring, logging and alerting tools.

• mosip-infra: contains deployment scripts to run charts in a defined sequence.

• mosip-config: contains all the configuration files required by the MOSIP modules.

    Pre-requisites:

    Hardware Requirements

The VMs can run any operating system and can be selected as per convenience. In this installation guide, we are referring to Ubuntu OS throughout.

    Network Requirements

    • All the VM's should be able to communicate with each other.

    • Need stable Intra network connectivity between these VM's.

• All the VM's should have stable internet connectivity for docker image download (in case of a local setup, ensure you have a locally accessible docker registry).

    • During the process, we will be creating two loadbalancers as mentioned in the first table below:

    DNS Requirements

    Note:

    • Only proceed to DNS mapping after the ingressgateways are installed and the load balancer is already configured.

• The above table is just a placeholder for hostnames; the actual names vary from organisation to organisation.

    Certificate requirements

    As only secured https connections are allowed via nginx server, you will need the below mentioned valid ssl certificates:

• One valid wildcard SSL certificate for the domain used to access the Observation cluster, which will be created using ACM (Amazon Certificate Manager). In the above e.g., *. is the example domain.

• One valid wildcard SSL certificate for the domain used to access the MOSIP cluster, which will be created using ACM (Amazon Certificate Manager). In the above e.g., *. is the example domain.

    Prerequisite for complete deployment in Personal Computer

• kubectl: client version 1.23.6

• helm: client version 3.8.2 and add below repos as well:

• istioctl: version: 1.15.0

    Installation

    A Wireguard bastion host (Wireguard server) provides secure private channel to access MOSIP cluster. The host restricts public access, and enables access to only those clients who have their public key listed in Wireguard server. Wireguard listens on UDP port 51820.

    Architecture diagram

Setup Wireguard VM and wireguard bastion server:

    • Create a Wireguard server VM in aws console with above mentioned Hardware and Network requirements.

    • Edit the security group and add the following inbound rules in aws console

      • type ‘custom TCP', port range ‘51820’ and source '0.0.0.0/0’
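If you prefer the AWS CLI over the console, an inbound rule can be added roughly as follows (shown for 51820/udp, the port Wireguard listens on; the security group ID is a placeholder):

```bash
# Allow inbound Wireguard traffic (UDP 51820) from anywhere to the Wireguard server's security group.
aws ec2 authorize-security-group-ingress \
  --group-id <wireguard-server-sg-id> \
  --protocol udp \
  --port 51820 \
  --cidr 0.0.0.0/0
```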

    Setup Wireguard Client in your PC

• Install the Wireguard client in your PC.

    • Assign wireguard.conf:

      • SSH to the wireguard server VM.

    Observation K8s Cluster setup and configuration

    Observation K8s Cluster setup

    • Setup rancher cluster,

      • cd $K8_ROOT/rancher/aws

      • Copy rancher.cluster.config.sample to rancher.cluster.config.

    Observation K8s Cluster’s Ingress and Storage class setup

    Once the rancher cluster is ready we need ingress and storage class to be set for other applications to be installed.

• Nginx Ingress Controller: used for ingress in the rancher cluster.

      • The above will automatically spawn an internal AWS Network Load Balancer (NLB).

    • Check the following on AWS console:

    Domain name

    Create the following domain names:

    • Rancher: rancher.xyz.net

    • Keycloak: keycloak.xyz.net

Point the above to the internal IP address of the NLB. This assumes that a private DNS has been set up. On AWS, this is done on the Route 53 console.

    Rancher K8s Cluster Apps Installation

    • Rancher UI : Rancher provides full CRUD capability of creating and managing kubernetes cluster.

      • Install rancher using Helm, update hostname in rancher-values.yaml and run the following command to install.

    MOSIP K8s Cluster setup

    • Setup mosip cluster

      • cd $K8_ROOT/mosip/aws

      • Copy cluster.config.sample to mosip.cluster.config.
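As with the Observation cluster, the MOSIP cluster would then be created from this config with eksctl (a sketch; review and update the name, region, version, instance details and subnets in mosip.cluster.config first):

```bash
# Create the MOSIP EKS cluster from the edited config; this typically takes ~30 minutes.
eksctl create cluster -f mosip.cluster.config
```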

    Import Mosip Cluster into Rancher UI

    • Login as admin in Rancher console

    • Select Import Existing for cluster addition.

    • Select the Generic as cluster type to add.

    • Wait for few seconds after executing the command for the cluster to get verified.

    • Your cluster is now added to the rancher management server.

    MOSIP K8 Cluster Global configmap, Ingress and Storage Class setup

• Global configmap: the global configmap contains the list of necessary details to be used throughout the namespaces of the cluster for common details.

      • cd $K8_ROOT/mosip

      • Copy global_configmap.yaml.sample to global_configmap.yaml.

    Monitoring Module deployment

    Prometheus and Grafana and Alertmanager tools are used for cluster monitoring.

    • Select 'Monitoring' App from Rancher console -> Apps & Marketplaces.

    • In Helm options, open the YAML file and disable Nginx Ingress.

    • Click on Install

    Alerting Setup

    Alerting is part of cluster monitoring, where alert notifications are sent to the configured email or slack channel.

    • Monitoring should be deployed which includes deployment of prometheus, grafana and alertmanager.

    • Create .

    • After setting slack incoming webhook update slack_api_url and slack_channel_name in alertmanager.yml.

    Logging Module Setup and Installation

MOSIP uses Rancher Fluentd and elasticsearch to collect logs from all services and reflect the same in the Kibana Dashboard.

• Install Rancher FluentD system: for scraping logs out of all the microservices from the MOSIP K8s cluster.

      • Install Logging from Apps and marketplace within the Rancher UI.

      • Select Chart Version 100.1.3+up3.17.7 from Rancher console -> Apps & Marketplaces.

MOSIP External Dependencies setup

• External Dependencies are a set of external requirements needed for the functioning of MOSIP's core services like DB, object store, HSM etc.

• Check the detailed installation instructions of all the MOSIP external components.

    MOSIP Modules Deployment

    • Now that all the Kubernetes cluster and external dependencies are already installed, you can continue with MOSIP service deployment.

    • Check the detailed MOSIP Modules Deployment steps.

    API Testrig

• MOSIP’s successful deployment can be verified by comparing the results of the API testrig with the testrig benchmark.

        • When prompted, input the hour of the day to execute the api-testrig.

        • The daily API testrig cron job will be executed at the opted hour of the day.

    Loadbalancing

    Keycloak in this cluster is used for cluster user access management.

  • It is recommended to configure log monitoring and network monitoring in this cluster.

• In case you have an internal container registry, then it should run here.

  • MOSIP Cluster - This cluster runs all the MOSIP components and certain third party components to secure the cluster, API’s and Data.

    • MOSIP External Components

    • Mosip Services

  • mosip-helm : contains packaged helm charts for all the MOSIP modules.

| Sl no. | Purpose | vCPUs | RAM | Storage (HDD) | No. of VMs | HA |
| --- | --- | --- | --- | --- | --- | --- |
| 2 | Rancher Cluster nodes (EKS managed) | 2 | 8 GB | 32 GB | 2 | 2 |
| 3 | Mosip Cluster nodes (EKS managed) | 8 | 32 GB | 64 GB | 6 | 6 |

  • Server Interface requirement as mentioned in the second table:

| Sl no. | Mapping | Purpose |
| --- | --- | --- |
| 3 | Private Load balancer of MOSIP cluster | Index page for links to different dashboards of the MOSIP env. (This is just for reference; please do not expose this page in a real production or UAT environment.) |
| 4 | Private Load balancer of MOSIP cluster | Internal APIs are exposed through this domain. They are accessible privately over the wireguard channel. |
| 5 | Public Load balancer of MOSIP cluster | All the APIs that are publicly usable are exposed using this domain. |
| 6 | Public Load balancer of MOSIP cluster | Domain name for MOSIP's pre-registration portal. The portal is accessible publicly. |
| 7 | Private Load balancer of MOSIP cluster | Provides direct access to the activemq dashboard. It is limited and can be used only over wireguard. |
| 8 | Private Load balancer of MOSIP cluster | Optional installation. Used to access the kibana dashboard over wireguard. |
| 9 | Private Load balancer of MOSIP cluster | Regclient can be downloaded from this domain. It should be used over wireguard. |
| 10 | Private Load balancer of MOSIP cluster | MOSIP's admin portal is exposed using this domain. This is an internal domain and access is restricted to wireguard. |
| 11 | Private Load balancer of MOSIP cluster | Optional. This domain is used to access the object server. Based on the object server that you choose, map this domain accordingly. In our reference implementation, MinIO is used and this domain lets you access MinIO's console over wireguard. |
| 12 | Private Load balancer of MOSIP cluster | Kafka UI is installed as part of MOSIP's default installation. We can access the Kafka UI over wireguard. Mostly used for administrative needs. |
| 13 | Private Load balancer of MOSIP cluster | MOSIP uses an OpenID Connect server to limit and manage access across all the services. The default installation comes with Keycloak. This domain is used to access the Keycloak server over wireguard. |
| 14 | Private Load balancer of MOSIP cluster | This domain points to the postgres server. You can connect to postgres via port forwarding over wireguard. |
| 15 | Public Load balancer of MOSIP cluster | MOSIP's partner management portal is used to manage partners; the portal is accessed over wireguard. |
| 16 | Public Load balancer of MOSIP cluster | Accessing the resident portal publicly. |
| 17 | Public Load balancer of MOSIP cluster | Accessing IDP publicly. |
| 18 | Private Load balancer of MOSIP cluster | Accessing the mock-smtp UI over wireguard. |

    eksctl : version: 0.121.0

  • AWS account and credentials with permissions to create EKS cluster.

• AWS credentials in ~/.aws/ folder as given here (see the sketch after these prerequisites).

  • Save ~/.kube/config file with another name. (IMPORTANT. As in this process your existing ~/.kube/config file will be overridden).

  • Save .pem file from AWS console and store it in ~/.ssh/ folder. (Generate a new one if you do not have this key file).
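A minimal sketch of the standard AWS CLI credentials file referenced in the prerequisites above (values are placeholders; the region goes into ~/.aws/config, and running aws configure will write both files for you):

```
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```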

  • Create a directory as mosip in your PC and

    • clone k8’s infra repo with tag : 1.2.0.1-B2 inside mosip directory. git clone https://github.com/mosip/k8s-infra -b v1.2.0.1-B2

    • clone mosip-infra with tag : 1.2.0.1-B2 inside mosip directory git clone https://github.com/mosip/mosip-infra -b v1.2.0.1-B2

    • Set below mentioned variables in bashrc

      • source .bashrc Note: Above mentioned environment variables will be used throughout the installation to move between one directory to other to run install scripts.

  • type ‘custom UDP', port range ‘51820’ and source '0.0.0.0/0’
  • Install docker in the Wireguard machine as given here.

  • Setup Wireguard server

    • SSH to wireguard VM

    • Create directory for storing wireguard config files. mkdir -p wireguard/config

    • Install and start wireguard server using docker as given below:

      Note: * Increase the no of peers above in case needed more than 30 wireguard client confs. (-e PEERS=30) * Change the directory to be mounted to wireguard docker in case needed. All your wireguard confs will be generated in the mounted directory. (-v /home/ubuntu/wireguard/config:/config)

  • cd /home/ubuntu/wireguard/config
• Assign one of the peers for yourself and use the same from your PC to connect to the server.

    • Create an assigned.txt file to keep track of the peer files allocated, and update it every time a peer is allocated to someone.

    • Use ls cmd to see the list of peers.

    • get inside your selected peer directory, and add mentioned changes in peer.conf:

      • cd peer1

      • nano peer1.conf

  • add peer.conf in your PC’s /etc/wireguard directory as wg0.conf.

  • start the wireguard client and check the status:

• Once connected to wireguard, you should now be able to log in using private IPs.

  • Review and update the below mentioned parameters of rancher.cluster.config carefully.

    • name

    • region

    • version: “1.24“

    • instance related details

      • instanceName

      • instanceType

      • desiredcapacity

    • update the details of the subnets to be used from vpc

  • Install

    • eksctl create cluster -f rancher.cluster.config

  • Wait for the cluster creation to complete, generally it takes around 30 minutes to create or update cluster.

  • Once EKS K8 cluster is ready below mentioned output will be displayed in the console screen. EKS cluster "my-cluster" in "region-code" region is ready

  • The config file for the new cluster will be created on ~/.kube/config

  • Make sure to backup and store the ~/.kube/config with new name. e.g. ~/.kube/obs-cluster.config.

  • Change file permission using below command: chmod 400 ~/.kube/obs-cluster.config

  • Set the KUBECONFIG properly so that you can access the cluster. export KUBECONFIG=~/.kube/obs-cluster.config

  • Test cluster access:

    • kubectl get nodes

      • The command will list details of the nodes of the Observation (Rancher) cluster.

  • An NLB has been created. You may also see the DNS name of the NLB with the kubectl -n ingress-nginx get svc command.

  • Obtain AWS TLS certificate as given here

  • Edit listener "443". Select "TLS".

  • Note the target group name of listener 80. Set the target group of 443 to the target group of 80. Basically, we want TLS termination at the LB and it must forward HTTP traffic (not HTTPS) to port 80 of the ingress controller. So:

    • Input of LB: HTTPS

    • Output of LB: HTTP --> port 80 of ingress nginx controller

  • Enable "Proxy Protocol v2" in the target group settings.

  • Make sure all subnets are selected in LB --> Description --> Edit subnets.

  • Check the health checks of the target groups.

  • Remove listener 80 from the LB, as we will receive traffic only on 443.
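    The same listener and target group changes can also be made with the AWS CLI instead of the console; the commands below are only a sketch, and every ARN is a placeholder you must look up in your own account:

    # a sketch of the console steps above using the AWS CLI (all ARNs are placeholders)
    # switch listener 443 to TLS with your ACM certificate and forward it to the port-80 target group
    aws elbv2 modify-listener --listener-arn <listener-443-arn> \
      --protocol TLS --port 443 \
      --certificates CertificateArn=<acm-certificate-arn> \
      --default-actions Type=forward,TargetGroupArn=<port-80-target-group-arn>
    # enable Proxy Protocol v2 on the target group
    aws elbv2 modify-target-group-attributes --target-group-arn <port-80-target-group-arn> \
      --attributes Key=proxy_protocol_v2.enabled,Value=true
    # remove the plain port 80 listener
    aws elbv2 delete-listener --listener-arn <listener-80-arn>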

  • Storage class setup:

    • The default storage class on EKS is gp2. gp2 is by default in Delete mode, which means that if a PVC is deleted, the underlying PV is also deleted.

    • To enable volume expansion for the existing gp2 storage class, modify the YAML configuration by adding allowVolumeExpansion: true to the gp2 storage class configuration.

      • kubectl edit sc gp2 : to edit the YAML configuration (a non-interactive alternative is sketched after this list).

    • Create the gp2-retain storage class by applying sc.yaml; it creates PVs in Retain mode. Set the storage class to gp2-retain in case you want to retain the PV.

    • We need the EBS CSI driver for our storage class to work; follow the steps to set up the EBS driver.
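    If you prefer a non-interactive alternative to kubectl edit for the volume-expansion change, a patch like the sketch below gives the same result:

    # a sketch: enable volume expansion on the existing gp2 storage class without opening an editor
    kubectl patch sc gp2 -p '{"allowVolumeExpansion": true}'
    # verify the change
    kubectl get sc gp2 -o yaml | grep allowVolumeExpansion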

  • Login:

    • Open Rancher page https://rancher.org.net.

    • Get the bootstrap password using:

    • Assign a password. IMPORTANT: make sure this password is securely saved and retrievable by the Admin.

  • Keycloak: Keycloak is an OAuth 2.0 compliant Identity and Access Management (IAM) system used to manage access to Rancher for cluster controls.

    • keycloak_client.json: Used to create SAML client on Keycloak for Rancher integration.

  • Keycloak - Rancher Integration

    • Log in as the admin user in Keycloak and make sure the email ID and first name fields are populated for the admin user. This is important for the Rancher authentication given below.

    • Enable authentication with Keycloak using the steps given here.

    • In Keycloak add another Mapper for the rancher client (in Master realm) with following fields:

      • Protocol: saml

      • Name: username

      • Mapper Type: User Property

    • Specify the following mappings in Rancher's Authentication Keycloak form:

      • Display Name Field: givenName

      • User Name Field: email

      • UID Field: username

  • RBAC :

    • For users in Keycloak, assign roles in Rancher - cluster and project roles. Under the default project, add all the namespaces. Then you may give a non-admin user the Read-Only role (under projects).

    • If you want to create custom roles, you can follow the steps given here.

    • Add a member to cluster/project in Rancher:

      • Give member name exactly as username in Keycloak

      • Assign appropriate role like Cluster Owner, Cluster Viewer etc.

      • You may create a new role with fine-grained access control.

  • Certificates expiry

    • In case you see a certificate expiry message while adding users, run these commands on the local cluster:

      https://rancher.com/docs/rancher/v2.6/en/troubleshooting/expired-webhook-certificates/
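      The linked page essentially regenerates the rancher-webhook certificate; a sketch of those commands is given below, but verify them against the page for your Rancher version before running:

      # a sketch based on the Rancher 2.6 troubleshooting page linked above; confirm before running
      kubectl delete secret -n cattle-system cattle-webhook-tls
      kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io
      # restart the webhook pod so a fresh certificate is created
      kubectl delete pod -n cattle-system -l app=rancher-webhook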

  • Review and update the below mentioned parameters of cluster.config.sample carefully.

    • name

    • region

    • version: "1.24"

    • instance related details

      • instanceName

      • instanceType

      • desiredcapacity

    • update the details of the subnets to be used from vpc

  • Install

  • eksctl create cluster -f mosip.cluster.config

    • Wait for the cluster creation to complete; it generally takes around 30 minutes to create or update a cluster.

    • Once the EKS cluster is ready, the output below will be displayed on the console: EKS cluster "my-cluster" in "region-code" region is ready

    • The config file for the new cluster will be created at ~/.kube/config

    • Make sure to back up ~/.kube/config and store it under a new name, e.g. ~/.kube/mosip-cluster.config.

    • Change file permission using below command: chmod 400 ~/.kube/mosip-cluster.config

    • Set the KUBECONFIG properly so that you can access the cluster. export KUBECONFIG=~/.kube/mosip-cluster.config

    • Test cluster access:

      • kubectl get nodes

        • The command will list details of the nodes of the MOSIP cluster.

    Fill the Cluster Name field with a unique cluster name and select Create.
  • You will get the kubectl command to be executed in the Kubernetes cluster. Copy the command and execute it from your PC (make sure your kubeconfig file is correctly set to the MOSIP cluster).

  • Update the domain names in global_configmap.yaml and run.

  • kubectl apply -f global_configmap.yaml
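    If your domain names follow the sandbox.xyz.net pattern used elsewhere in this guide, a quick search-and-replace before applying may be convenient (a sketch; it assumes the file uses that placeholder and keeps a .bak backup):

    # a sketch: swap the placeholder domain for your own, then apply
    sed -i.bak 's/sandbox\.xyz\.net/sandbox.mydomain.net/g' global_configmap.yaml
    kubectl apply -f global_configmap.yaml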

  • Storage class setup:

    • The default storage class on EKS is gp2. gp2 is by default in Delete mode, which means that if a PVC is deleted, the underlying PV is also deleted.

    • To enable volume expansion for the existing gp2 storage class, modify the YAML configuration by adding allowVolumeExpansion: true to the gp2 storage class configuration.

      • kubectl edit sc gp2 : to edit the YAML configuration.

    • Create the gp2-retain storage class by applying sc.yaml; it creates PVs in Retain mode. Set the storage class to gp2-retain in case you want to retain the PV.

    • We need the EBS CSI driver for our storage class to work; follow the steps to set up the EBS driver (one possible add-on-based route is sketched after this list).

    • We also need the EFS CSI driver for the regproc services, because the EBS driver supports only RWO volumes and we need RWX; follow these steps to set up the EFS CSI driver.
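    One possible way to install the EBS CSI driver on an eksctl-managed cluster is the managed add-on route sketched below (cluster name and region are placeholders, and the driver's IAM permissions/IRSA must already be in place; the linked steps remain the authoritative procedure):

    # a sketch: install the EBS CSI driver as an EKS managed add-on
    eksctl create addon --name aws-ebs-csi-driver --cluster my-cluster --region region-code
    # check that the add-on reaches ACTIVE status
    eksctl get addon --name aws-ebs-csi-driver --cluster my-cluster --region region-code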

  • Ingress and load balancer (LB) :

    • Ingress is not installed by default on EKS. We use Istio ingress gateway controller to allow traffic in the cluster. Two channels are created - public and internal. See architecture.

    • Install istioctl as given here in your system.

    • Install ingresses as given here:

  • Load Balancers setup for istio-ingress.

    • The above istio installation will automatically spawn an Internal AWS Network Load Balancer (L4).

    • These may be also seen with

    • You may view them on AWS console in Loadbalancer section.

    • TLS termination is supposed to be on LB. So all our traffic coming to ingress controller shall be HTTP.

    • Obtain AWS TLS certificate as given

    • Add the certificates and 443 access to the LB listener.

    • Update listener TCP->443 to TLS->443 and point to the certificate of domain name that belongs to your cluster.

    • Forward TLS->443 listener traffic to the target group that corresponds to the listener on port 80 of the respective load balancers. This is because after TLS termination the protocol is HTTP, so we must point the LB to the HTTP port of the ingress controller.

    • Update the health check ports of the LB target groups to the node port corresponding to port 15021. You can see the node ports with:

    • Enable Proxy Protocol v2 on target groups.

    • Make sure all subnets are included in Availability Zones for the LB. Description --> Availability Zones --> Edit Subnets

    • Make sure to delete the listeners for ports 80 and 15021 from each of the load balancers, as we restrict unsecured HTTP access over port 80.
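    As with the Observation cluster, these listener and health check adjustments can be scripted with the AWS CLI; the sketch below only covers the parts not shown earlier (ARNs and the node port are placeholders to look up in your account and cluster):

    # a sketch: point the health check at the istio status node port and drop the unwanted listeners
    aws elbv2 modify-target-group --target-group-arn <target-group-arn> \
      --health-check-port <nodeport-for-15021>
    aws elbv2 delete-listener --listener-arn <port-80-listener-arn>
    aws elbv2 delete-listener --listener-arn <port-15021-listener-arn>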

  • DNS mapping:

    • Initially, all the services will be accessible only over the internal channel.

    • Point all your domain names to the internal load balancer's DNS/IP initially, till testing is done.

    • On AWS this may be done on the Route 53 console.

    • After the go-live decision, enable public access.
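    With the AWS CLI, one of these mappings might look like the sketch below (the hosted zone ID and the internal NLB DNS name are placeholders; the Route 53 console achieves the same thing):

    # a sketch: point api-internal.sandbox.xyz.net at the internal NLB via a CNAME record
    aws route53 change-resource-record-sets --hosted-zone-id <hosted-zone-id> --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "api-internal.sandbox.xyz.net",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "<internal-nlb-dns-name>"}]
        }
      }]
    }'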

  • Check overall whether the nginx and Istio wiring is set up correctly:

    • Install httpbin: this utility container returns the HTTP headers received inside the cluster. You may use it for general debugging - to check ingress, headers, etc.

      • To see what's reaching httpbin (example, replace with your domain name):

  • cd $K8_ROOT/monitoring/alerting/

  • nano alertmanager.yml

  • Update the slack_api_url and the Slack channel (slack_configs) in alertmanager.yml.

  • Update Cluster_name in patch-cluster-name.yaml.

    • cd $K8_ROOT/monitoring/alerting/

    • nano patch-cluster-name.yaml

    • Update the cluster name (the cluster value under spec.externalLabels).

  • Install the default alerts along with some of the defined custom alerts:

  • Alerting is installed.

  • Configure Rancher FluentD

    • Create clusteroutput :

      • kubectl apply -f clusteroutput-elasticsearch.yaml

    • start clusterFlow

      • kubectl apply -f clusterflow-elasticsearch.yaml

  • Install elasticsearch, kibana and Istio addons

    • cd $K8_ROOT/logging

    • ./install.sh

  • Set min_age in elasticsearch-ilm-script.sh and execute it. min_age is the minimum number of days for which indices will be stored in Elasticsearch.

    • cd $K8_ROOT/logging

    • ./elasticsearch-ilm-script.sh

  • MOSIP provides a set of Kibana dashboards for checking logs and throughput.

    • A brief description of these dashboards is as follows:

      • 01-logstash.ndjson contains the logstash Index Pattern required by the rest of the dashboards.

      • 02-error-only-logs.ndjson contains a Search dashboard which shows only the error logs of the services, called the MOSIP Error Logs dashboard.

      • 03-service-logs.ndjson contains a Search dashboard which shows all logs of a particular service, called the MOSIP Service Logs dashboard.

      • 04-insight.ndjson contains dashboards which show insights into MOSIP processes, like the number of UINs generated (total and per hour), the number of biometric deduplications processed, the number of packets uploaded, etc., called the MOSIP Insight dashboard.

      • 05-response-time.ndjson contains dashboards which show how quickly different MOSIP services are responding to different APIs, over time, called the Response Time dashboard.

    • Import dashboards:

      • cd $K8_ROOT/logging/dashboard

      • ./load_kibana_dashboards.sh ./dashboards <cluster-kube-config-file>

    • View dashboards

      • Open kibana dashboard from: https://kibana.sandbox.xyz.net.

      • Kibana --> Menu (on top left) --> Dashboard --> Select the dashboard.

    Purpose                      vCPU's   RAM     Storage (HDD)   No. of VM's   HA
    1. Wireguard Bastion Host    2        4 GB    8 GB            1             (ensure to setup active-passive)

    Loadbalancer

    Purpose

    Private loadbalancer Observation cluster

    This will be used to access Rancher dashboard and keycloak of observation cluster.

    Note: access to this will be restricted only with wireguard key holders.

    Public loadbalancer MOSIP cluster

    This will be used to access below mentioned services:

    • Pre-registration

    • Esignet

    • IDA

    • Partner management service api’s

    • Mimoto

    • Mosip file server

    • Resident

    Private loadbalancer MOSIP cluster

    This will be used to access all the services deployed as part of the setup, including external components as well as all the MOSIP services.

    Note: access to this will be restricted only with wireguard key holders.

    Purpose VM

    Network Interfaces

    1

    Wireguard Bastion Host

    • One private interface: on the same network as all the rest of the nodes (e.g., inside the local NAT network).

    • One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 51820/udp to this interface's IP.

    Domain name

    Mapping details

    Purpose

    1

    rancher.xyz.net

    Load balancer of Observation cluster

    Rancher dashboard to monitor and manage the Kubernetes cluster. You can share an existing Rancher cluster.

    2

    keycloak.xyz.net

    Load balancer of Observation cluster

    Administrative IAM tool (keycloak). This is for the kubernetes administration.

    3

    Rancher
    EKS
    Rancher
    k8s-infra
    mosip-infra
    mosip-config
    org.net
    sandbox.xyz.net
    kubectl
    helm
    istioctl
    Wireguard
    Wireguard client
    Nginx Ingress Controller
    Internal AWS Network Load Balancer (L4)
    Wireguard Bastion Host
    slack incoming webhook
    external componets
    MOSIP Modular installation

    2

    peer1 :   peername
    peer2 :   xyz
    sudo systemctl start wg-quick@wg0
    sudo systemctl status wg-quick@wg0
    kubectl -n ingress-nginx get svc
    kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{ .data.bootstrapPassword|base64decode}}{{ "\n" }}'
    cd $K8_ROOT/rancher/keycloak
    ./install.sh <iam.host.name>
    kubectl -n istio-system get svc
    kubectl -n istio-system get svc
    cd $K8_ROOT/utils/httpbin
    ./install.sh
    curl https://api-internal.sandbox.xyz.net/httpbin/get?show_env=true
    Once public access is enabled also check this:
    curl https://api.sandbox.xyz.net/httpbin/get?show_env=true
    global:
      resolve_timeout: 5m
      slack_api_url: <YOUR-SLACK-API-URL>
    ...
      slack_configs:
      - channel: '<YOUR-CHANNEL-HERE>'
        send_resolved: true
    cd $K8_ROOT/monitoring/alerting/
    ./install.sh
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo add mosip https://mosip.github.io/mosip-helm
    cd $K8_ROOT/rancher/aws
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm install \
      ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx \
      --create-namespace \
      -f nginx.values.yaml
    cd $K8_ROOT/rancher/rancher-ui
    helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
    helm repo update
    helm install rancher rancher-latest/rancher \
      --namespace cattle-system \
      --create-namespace \
      -f rancher-values.yaml
    eg.
    kubectl apply -f https://rancher.e2e.mosip.net/v3/import/pdmkx6b4xxtpcd699gzwdtt5bckwf4ctdgr7xkmmtwg8dfjk4hmbpk_c-m-db8kcj4r.yaml
    ingressNginx:
      enabled: false
    cd $INFRA_ROOT/deployment/v3/external/all
    ./install-all.sh
    cd $INFRA_ROOT/deployment/v3/mosip/all
    ./install-all.sh
    cd $INFRA_ROOT/deployment/v3/apitestrig
    ./install.sh

    Delete the DNS IP.

  • Update the allowed IPs to the subnet's CIDR, e.g. 10.10.20.0/23.

  • Share the updated peer.conf with the respective peer so they can connect to the WireGuard server from their personal PC.

  • volumeSize
  • volumeType

  • publicKeyName.

  • Property: username
  • Friendly Name: username

  • SAML Attribute Name: username

  • SAML Attribute NameFormat: Basic

  • Entity ID Field: https://your-rancher-domain/v1-saml/keycloak/saml/metadata

  • Rancher API Host: https://your-rancher-domain

  • Groups Field: member

  • volumeSize
  • volumeType

  • publicKeyName.

  • here
    here
    steps
    here
    public access
    02-error-only-logs.ndjson
    03-service-logs.ndjson
    04-insight.ndjson
    05-response-time.ndjson
    sandbox.xyz.net
    api-internal.sandbox.xyz.net
    api.sandbox.xyz.net
    prereg.sandbox.xyz.net
    activemq.sandbox.xyz.net
    kibana.sandbox.xyz.net
    regclient.sandbox.xyz.net
    admin.sandbox.xyz.net
    object-store.sandbox.xyz.net
    kafka.sandbox.xyz.net
    iam.sandbox.xyz.net
    postgres.sandbox.xyz.net
    pmp.sandbox.xyz.net
    resident.sandbox.xyz.net
    esignet.sandbox.xyz.net
    smtp.sandbox.xyz.net
    export MOSIP_ROOT=<location of mosip directory>
    export K8_ROOT=$MOSIP_ROOT/k8s-infra
    export INFRA_ROOT=$MOSIP_ROOT/mosip-infra
    sudo docker run -d \
       --name=wireguard \
       --cap-add=NET_ADMIN \
       --cap-add=SYS_MODULE \
       -e PUID=1000 \
       -e PGID=1000 \
       -e TZ=Asia/Calcutta \
       -e PEERS=30 \
       -p 51820:51820/udp \
       -v /home/ubuntu/wireguard/config:/config \
       -v /lib/modules:/lib/modules \
       --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
       --restart unless-stopped \
       ghcr.io/linuxserver/wireguard
    cd $K8_ROOT/utils/misc/
    kubectl apply -f sc.yaml
    cd $K8_ROOT/utils/misc/
    kubectl apply -f sc.yaml
    cd $K8_ROOT/istio
    ./install.sh
    spec:
      externalLabels:
        cluster: <YOUR-CLUSTER-NAME-HERE>
    (figure: mosip-without-dns-1.png)