Effortlessly deploy and configure MOSIP with installation guides, upgrades and more.
MOSIP is deployed as a combination of microservices. Deploying MOSIP as microservices adds scalability, robustness, resilience, and fault tolerance. MOSIP services are grouped into modules such as Kernel, Pre-registration (prereg), and Registration Processor (regproc). Each module can be deployed independently, with all its dependencies, as per business needs.
MOSIP releases the below mentioned artifacts in open source:
Jars: Artifacts released to the Maven Central Repository after successful compilation of tagged code from all MOSIP repositories.
Docker images : Docker images for all MOSIP services.
MOSIP helm: Packaged charts for every MOSIP service.
helm repo add mosip https://mosip.github.io/mosip-helm
K8s Infra: Scripts to create a Kubernetes cluster and configure it for MOSIP.
Deployment scripts: These scripts are part of mosip-infra and install the helm charts in the desired sequence.
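As a minimal illustration of how the published charts are consumed (the chart name and namespace below are illustrative assumptions, not a prescribed sequence), a single module can be installed with Helm:
# Assumes the "mosip" repo added above; chart and namespace names are illustrative
helm repo update
helm -n config-server install config-server mosip/config-server --create-namespace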
MOSIP by default supports two ways of installation:
V2 - A simple sandbox-based implementation with very low resource requirements to help understand the working modules (deprecated).
V3 - A scalable deployment with a service mesh, HA, and other security protections (supported from v1.2.0.1-B1).
Apart from these installation models, countries can adopt a model of their choice. We recommend using Kubernetes or equivalent container orchestration tools for better management of the microservices.
Set up your environment with deployment options, installation guides and more.
Welcome to the Setup section! This section will help you get started with installing, configuring, and maintaining the system. Whether you’re deploying for the first time, implementing specific configurations, or upgrading an existing setup, you’ll find step-by-step instructions in the following sections:
A Wireguard bastion host (Wireguard server) provides a secure private channel to access the MOSIP cluster. The host restricts public access and enables access only to those clients whose public key is listed in the Wireguard server. Wireguard listens on UDP port 51820.
Install Docker and make sure you add $USER to the docker group:
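For example, on Ubuntu this is typically done with the standard Docker post-install steps:
sudo usermod -aG docker $USER
# log out and back in, or run `newgrp docker`, for the group change to take effect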
If you already have a config file, you may mount it with -v <your host path>:/config.
You may increase the number of peers by keeping the above mounted folders intact, stopping the docker container, and running it again with -e PEERS=<number of peers>.
Install a Wireguard app on your machine. For MacOS there is a Wireguard app on the App Store.
Enter the server docker container and cd to the /config folder. Here you will find the config files for the peers. You may add the corresponding peer.conf file to the client Wireguard config.
Make sure the Endpoint mentioned for the client is the Wireguard bastion host's IP address.
Modify the Allowed IPs of the client to the private IP addresses of the Internal Load Balancers of your clusters. Here, it is assumed that all your clusters run in the same VPC so that the bastion host is able to reach all of them.
The following versioning conventions are followed for repos related to deployment:
MOSIP release version: w.x.y.z. Example: 1.2.0.1.
Helm chart version: wx.y.z. Example: 12.0.1 (as Helm follows 3-digit versioning).
In the case of the develop branch of mosip-helm, version in Chart.yaml points to the next planned release version of MOSIP (as Helm does not allow a version like develop).
Helm charts contain the default compatible docker image tag.
The appVersion field in Chart.yaml is not used.
Provision a Virtual Machine (VM) and make sure it has access to the internal load balancer. The recommended configuration of the VM is 2 vCPU, 4 GB RAM, 16 GB storage. While this configuration should work for small-scale deployments, it must be scaled up if the host becomes a bottleneck under high load.
Install Wireguard on the VM using Docker as described in the Wireguard setup. Sample config:
mosip-helm branch == MOSIP release version == mosip-infra branch.
k8s-infra is on the main branch.
Helm chart version ~= MOSIP release version, but with the versioning convention noted above.
The docker image tag in values.yaml of a Helm chart == MOSIP release version.
Helm chart version: the version field in Chart.yaml.
MOSIP release version: the version as published. If a release is w.x.y, it implies w.x.y.0. Patch releases may have 4 digits like w.x.y.z.
Docker image tag: the version of a MOSIP service/module published as a docker image on Docker Hub.
The table below lists various checks that must be performed before the actual rollout of a deployment. This list is not exhaustive; it is expected that SIs use it as a reference and augment it with their own hardening procedures.
In a V3 installation, the cluster can be administered by logging into the organisation-wide Rancher setup. Rancher is integrated with Keycloak for authentication. To provide cluster access to a user, perform the following steps as administrator:
Log in to the organisation-wide Keycloak, e.g., https://iam.xyz.net. It is assumed that you have the admin role in Keycloak.
Create a new user.
Make sure a strong password is set for the user under the Credentials tab.
On the Details tab you should see the Update Password flag under Required User Actions. This will prompt the user to change the password during first login. Disable it only if you are sure you don't want the user to change the password.
Log in to Rancher as administrator, e.g., https://rancher.xyz.net.
Select a cluster for which you would like to enable access to the user.
Add the user as member of the cluster.
Assign a role, e.g., Cluster Owner or Cluster Viewer.
MOSIP deployment is split into two distinct parts:
Pre-registration
Registration
The server-side hardware estimates for the above are specified at a high level in terms of compute (Virtual CPU, RAM) and storage requirements. We provide estimates for MOSIP core modules only. External components are not in the scope. See Exclusions.
The variables that largely determine the hardware requirements are:
The population of the country
Rate of enrolment
Usage of foundation ID by various services
Refer to Pre-registration Resource Calculator XLS
Allow for 20% additional compute and storage for monitoring and any overheads.
The registration compute resources are related to the max rate of enrolment desired. The processing throughput must match the enrolment rate to avoid a pile-up of pending registration packets.
The data here is based on actual field data of a MOSIP deployment.
Assumptions:
Rate of enrolment: 216000 per day
Average packet size: 2MB
Biometric modalities: Finger, iris, face
Pod replication as given here. (TBD)
Configuration of compute node: 12 VCPU, 64GB RAM, 64GB disk store.
Number of nodes: 21
VCPU: 12 per node × 21 nodes = 252
RAM (GB): 64 per node × 21 nodes = 1344
Node disk (GB): 64 per node × 21 nodes = 1344
Storage is dependent on the population of a country (i.e. the number of UINs to be issued). Storage requirements for various types of data are listed below.
Object store: 3200 GB/million packets/replication. The replication factor is to be applied based on the replication strategy.
Postgres storage: 30 GB/million packets. Includes all databases.
Landing zone: unprocessed packets × avg packet size. The size of the landing zone depends on the estimated lag between packet uploads and packet processing. Once UINs are issued, the packets may be removed from the landing zone as a copy is already saved in the Object Store.
Logs (Elasticsearch): 80 GB/day. Logs may be archived after, say, 2 weeks.
Monitoring (Prometheus): 1.2 GB/day.
Kafka: NA. Resource allocation is part of the cluster nodes.
ActiveMQ: NA. Resource allocation depends on the deployment - standalone or part of the cluster.
Redis: a single VM with RAM = cache size × 1.5 and 4 to 16 vCPU, depending on the number of packets processed per minute (minimum hardware). Cache size = avg. packet size × no. of packets processed per minute × no. of minutes a packet is to be stored in cache.
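As a worked example with assumed numbers (2 MB average packet, 150 packets processed per minute, packets cached for 30 minutes):
# Cache size = avg packet size * packets per minute * minutes in cache
# 2 MB * 150 * 30 = 9000 MB (~9 GB); RAM = 9000 MB * 1.5 ≈ 13.5 GB
echo "Cache size: $(( 2 * 150 * 30 )) MB"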
Allow for 20% additional compute and storage for monitoring and any overheads.
Refer to IDA Resource Calculator XLS
Allow for 20% additional compute and storage for monitoring and any overheads.
The compute and storage estimates for the following components are not included:
Postgres
Only storage estimated above.
Object store
Only storage estimated above.
Bio SDK
Antivirus (AV)
The default scanner (ClamAV) is included; however, if you integrate your own AV, it needs to be estimated separately.
Load balancers
External IAM (for Rancher)
Disaster recovery (DR)
DR would significantly increase compute and storage requirements. It is expected that the System Integrator works out an appropriate DR strategy and arrives at an estimate.
The Authdemo service is used to execute the IDA APIs that are employed by the API-testrig and DSLrig.
The purpose of the Authdemo service is to showcase the functionality of authentication.
It can be considered a simplified iteration of an authentication service, serving as a mock or prototype for testing purposes.
When prompted, input the NFS host, its PEM key, and the SSH login user of the NFS server.
The install script will create the NFS directory /srv/nfs/mosip/packetcreator-authdemo-authcerts to store the certificates generated by the Authdemo service.
These certificates will be used by API-testrig, orchestrator and packetcreator.
API-testRig tests the working of APIs of the MOSIP modules.
MOSIP's successful deployment can be verified by comparing the results of the API-testrig with the testrig benchmark.
When prompted, input the hour of the day at which to execute the API-testrig.
The daily API-testrig cron job will be executed at the chosen hour of the day.
The reports will be moved to the object store (i.e., S3/MinIO) under the automationtests bucket.
Packetcreator will create packets for DSL orchestrator.
Note: It is recommended to deploy the packetcreator on a separate server/cluster from where the other DSL orchestrators can access this service.
When prompted, input the NFS host, its PEM key, and the SSH login user of the NFS server.
The install script will create two NFS directories: /srv/nfs/mosip/packetcreator_data and /srv/nfs/mosip/packetcreator-authdemo-authcerts.
packetcreator_data contains the biometric data used to create packets.
Copy the packetcreator_data from the link mentioned above to the NFS directory /srv/nfs/mosip/packetcreator_data.
Ensure you use the same NFS host and path, i.e., /srv/nfs/mosip/packetcreator-authdemo-authcerts, for both the Authdemo and packetcreator services.
When prompted, input the Kubernetes ingress type (i.e., Ingress/Istio) and the DNS, as required, if you are using Ingress-nginx.
DSLrig will test end-to-end functional flows involving multiple MOSIP modules.
The Orchestrator utilizes the Packet Creator to generate packets according to the defined specifications. It then communicates with the Authdemo Service making REST API calls to perform authentication-related actions or retrieve the necessary information.
When prompted, input the NFS host, its PEM key, and the SSH login user of the NFS server.
The install script will create the NFS directory /srv/nfs/mosip/dsl-scenarios/sandbox.xyz.net to store the DSL scenario sheet.
Copy the scenario CSV from the above link to the NFS directory /srv/nfs/mosip/dsl-scenarios/sandbox.xyz.net. Make sure to rename the CSV files by replacing env with your domain, e.g., sandbox.xyz.net.
To run the dslorchestrator for sanity only, update the TESTLEVEL key of the dslorchestrator configmap to sanity, as sketched below.
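One way to do this with kubectl (the namespace is not specified here and must match your deployment):
# Patch the dslorchestrator configmap so only sanity tests run (namespace is a placeholder)
kubectl -n <dsl-namespace> patch configmap dslorchestrator --type merge -p '{"data":{"TESTLEVEL":"sanity"}}'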
The reports will be moved to the object store (i.e., S3/MinIO) under the dslreports bucket.
External dependencies are the set of external requirements needed for the functioning of MOSIP's core services, like the DB, Object Store, HSM, etc.
List of external dependencies:
Event publisher/streamer: MOSIP uses Kafka for publishing events to its internal as well as external partner modules.
BioSDK: Biometric SDK for quality check and authentication purpose using biometrics.
Message Gateway: This is for notifying residents about different OTPs and other information.
Install Postgres
Initialize Postgres DB
Opt for yes and enter Y.
Install Keycloak
Initialize Keycloak
Opt 1 for MinIO.
Opt 2 for S3 (in case you are not installing MinIO and want to use S3 instead).
Enter the prompted details.
The reference implementation of the Biometric SDK server will be installed separately in the MOSIP service installation section, as it depends on Artifactory, which is a MOSIP component.
ABIS needs to be up and running outside the MOSIP cluster and should be able to connect to ActiveMQ. For testing purposes, MOSIP provides a mock simulator named mock-abis, which is deployed as part of the MOSIP services installation.
MOSIP provides a mock SMTP server which will be installed as part of the default installation; opt for Y.
In case the images are pulled from private repositories.
To set up the captcha for the pre-reg and resident domains.
Below is the sequence of installation of the MOSIP modules; this sequence must be followed to resolve all interdependencies.
Conf secrets
Config Server
Artifactory
Keymanager
WebSub
Mock-SMTP
Kernel
Masterdata-loader
Mock-biosdk
Packetmanager
Datashare
Pre-reg
Idrepo
Partner Management Services
Mock ABIS
Mock-mv
Registration Processor
Admin
ID Authentication
Partner Onboarder
MOSIP File Server
Resident services
Registration Client
Change the log level to INFO in the application properties.
Disable the Registration Processor External Stage if not required.
Reprocessor cronjob frequency and other settings.
All cronjob timings according to the country (check property files).
Disable the '111111' default OTP.
Review idschema attribute names against the names in the Datashare policy and Auth policy for all partners (including IDA).
Review the attributes specified in ida-zero-knowledge-unencrypted-credential-attributes.
Review id-authentication-mapping.json in config vis-a-vis attribute names in the idschema.
Kafka: disable the option to delete a topic (this is set while installing).
Backup
Set up backup for Longhorn.
Backup of Postgres db.
Replication factor in Minio.
Cluster hardening
On-prem K8s cluster production configuration as given in the referenced guide.
Archival
Archival of logs: since log data grows at a rapid pace, it needs to be archived frequently. Set up an archival process.
Keycloak
Keycloak Realm connection timeout settings - review all.
Valid urls redirect in Keycloak - set specific urls.
Postgres
Backup
Secure admin password
Access control
Multi-factor authentication for Rancher and Keycloak.
Review all Wireguard keys. Are all keys accounted for? Do the machines with Wireguard keys have sufficient protection, such as firewalls and password/biometric login?
Are the correct cluster roles assigned to users in Rancher? Is access control set properly?
Do the users of Rancher have strong passwords only known to them?
Is Rancher and Keycloak accessible only on Wireguard and not on public net?
Who holds the Keycloak Admin credentials? Are the credentials secure?
Any stray passwords lying on the disks?
Cluster setup
Increase the number of nodes in the cluster according to expected load.
Set rate control (throttling) parameters for PreReg.
Scripts to clean up processed packets in landing zone.
Review pod replication factors for all modules. E.g ClamAV.
Persistence
Enable persistence in all modules. On cloud, change the storage class reclaim policy from 'Delete' to 'Retain'. If you already have a PV with 'Delete', you can edit the PV config and change it to 'Retain' (without having to change the storage class).
Make sure storage class allows expansion of storage.
allowVolumeExpansion: true
Review size of persistent volumes and update.
Increase MinIO persistent volume size based on your estimations.
Review the production settings of the object store (S3/MinIO).
The default packetcreator_data is accessible from the linked location.
The default template for the DSL scenario sheet is accessible from the linked location.
Postgres: relational database system used for storing data in MOSIP.
IAM: the IAM tool is used for authentication and authorization. The reference implementation here uses Keycloak for the same purpose.
HSM: a Hardware Security Module (HSM) stores the cryptographic keys used in MOSIP. The reference implementation is provided as SoftHSM here.
Object Store: MOSIP uses an S3 API compliant object store for storing biometric and other data. The reference implementation here uses MinIO.
Anti-virus: used for document and packet scanning throughout the MOSIP modules. The reference implementation uses a dockerised version of ClamAV.
Queuing tool: tool used for queuing messages to external MOSIP components. The reference implementation uses ActiveMQ.
ABIS: performs the de-duplication of a resident's biometric data.
This document outlines the steps required for migrating the deployment architecture from V2 to V3.
This is required for migration from V2 to V3 architecture
Make sure to have all the pre-requisites ready as per the details present in the section pre-requisites
Setup Wireguard Bastion host
Setup wireguard client in your local and complete the configuration
Setup Observation K8 cluster
Configure Observation k8 cluster
Observation cluster’s nginx setup
Observation cluster applications setup
Observation cluster keycloak-rancher integration
Setup new MOSIP k8 cluster
MOSIP k8 cluster configuration
MOSIP cluster nginx setup
Setting up Monitoring for MOSIP cluster
Setting up Alerting for MOSIP cluster
Setting up Logging for MOSIP cluster
(Required for V2 to V3 architecture migration)
Setup postgres server
Note:
i. Deploy the postgres server on a separate node.
ii. Make sure postgres initialisation is not done (only install postgres).
Setup Keycloak server
Note: Make sure keycloak initialisation is not done (only install keycloak).
Setup Softhsm
Setup Minio server
Setup ClamAV
Setup ActiveMQ
Setup Message Gateway
Setup docker registry secrets if you are using private dockers.
Note: These instructions are only applicable if you need to access Private Docker Registries. You may disregard them if all of your Docker containers are downloaded from the public Docker Hub.
Setup Captcha for the required domains.
Setup Landing page for new MOSIP cluster.
This step is required for V2 to V3 architecture migration.
Softhsm (only required if softhsm is used instead of real HSM)
i. Backup keys
ii. Restore old key
iii. Update softhsm ida and softhsm kernel security pin
Kafka
i. setup external minio for backup.
ii. backup kafka
iii. restore kafka
Conf-secrets
Update the existing secrets in the conf-secrets namespace.
Packets in the landing zone are to be copied from the old environment to the upgraded environment, or the same NFS folder can be mounted to the regproc packet server and the group 1 stage group. Refer here for more details.
dmz-sc.yaml
dmz-pkt-pv.yaml
dmz-pkt-pvc.yaml
dmz-landing-pv.yaml
dmz-landing-pvc.yaml
Reference implementations of modules or components are non production grade implementations that are meant to showcase a reference design or architecture. They can be used as references or starting points for customization and actual implementations.
Pre-registration portal
Admin portal
Partner Management portal
Authentication Application
Registration Client
ID object validator (kernel-ref-idobjectvalidator)
Integration with the SMS service provider (kernel-smsserviceprovider-msg91)
Integration with a Virus Scanner (kernel-virusscanner-clamav)
HSM Keystore Implementation (hsm-keystore-impl)
IAM: Keycloak
External Stage
Demo deduplication
Hazelcast Cache Provider
Demographic authentication (normalization)
Child Authentication Filter
Booking Service
This is a guide to implement MOSIP for a country. It is advised that Government and System Integrators (SI) study the recommended steps to work out an appropriate implementation strategy. The items are in "near-chronological order" and may differ for an implementation.
Choice of deployment of Pre-registration.
Rate of enrolment desired.
Rate of authentication expected.
Customisation and procurement of components as given here.
ID schema (as prescribed by the country's regulatory authority).
Hardware requirements estimate.
ID Card print design.
MOSIP versions.
MOSIP support (scope).
Disaster recovery strategy.
A phased approach for rollout.
Engagement with an SI - terms and conditions.
Procurement of biometrics and other external components.
HSM
Computer hardware
Customisation of components as decided in step 5 of Key decisions above.
Disaster recovery setup
Biometric thresholding
Phased implementation
Sandbox, staging, development setups.
IDA installation
Onboarding of trusted partners
Print partner
Set up of registration centers
Onboarding of officers and supervisors
Training
MOSIP periodically releases new versions of the platform, tagged with their version numbers.
Asymmetric Amoeba is the latest stable Long Term Support (LTS 1.2.0) version. This release focuses on easy manageability, usability, enhanced performance, robustness, security, inclusivity, and comprehensive documentation. Additionally, multiple languages are now supported across modules.
LTS versions get patch releases with minor and major updates, as and when required.
SUPPORTED FOR A MINIMUM OF 5 YEARS
LTS 1.2.0 was released in February 2022 and will be supported at the minimum until February 2027.
What happens after five years?
Support will be available for migration to the next LTS version for two years following its release.
Adopter inputs and experiences will be factored in to fine-tune subsequent versions.
LTS releases offer:
Completely implemented roadmap features
Frozen API and data formats
Tooling, add-ons and extensions
Compatible components and solutions in the marketplace
Compliance and certification programs
Active support
Proactive security updates
Patches for bugs
Periodical cumulative updates of functional, non-functional fixes and patches
Support for ecosystem partners for integration and implementation
Additional support to adopting countries under MOU for versions under active support
L3 support to adopting countries
Training and capacity building
Technical advisory on ID and use case implementations
MOSIP's current LONG TERM SUPPORT RELEASE is v1.2.0. The section below highlights the key features, benefits, and new modules added.
Functional Benefits
New Admin UI with robust APIs
New Partner Management Portal UI with robust APIs
New Resident Portal with robust APIs
Other Benefits
Enhanced security
Finer documentation
Enhanced mechanism to evaluate performance
Improved service-level performance
Standalone stages of registration processing
Tools & Add-ons
Anonymous profiling to cater to analytical needs
Improved reporting for better forecasting and efficient decision-making
V3 Deployment Architecture
Dockerized test automation
Newer Modules Compatible with LTS
eSignet
Inji
OpenG2P
OpenCRVS
Compliance Tool Kit
Android RegClient
The existing adopters of pre-LTS stable versions can:
Get access to the full feature set, latest tools, and add-on modules
Get access to the best tier of support
Get access to periodic updates and fixes
Utilise the upgrade window to avoid falling into unsupported mode
Standard Procedure for LTS Migration
Discussing migration and communication strategies (relying parties, stakeholders etc.)
Prioritising a list of issues that must be fixed before migration
Understanding and analysing, in detail, latest changes and customised features
Identifying sequencing of components and infra to be migrated
Discussing ways and means to automate specific steps
Migrating using upgrade scripts: DB, template, config, seed data, etc.
Phased migration: sandbox, staging, production
Hardening security
Marking off checklist template items
Note:
The time and effort involvement depends on customisation requirements.
Without customisations, the time needed would be about 2-3 months.
Time to deploy: 2 weeks
Time to execute upgrade scripts: 2-3 weeks
Time taken to test/ verify:
Two days for environment sanity using test rig automation
3-4 weeks for full blown testing for all the modules
For detailed information on all the enhancements in the LTS 1.2.0 version, refer the Release Notes.
See below for a brief overview of features and newly-added modules.
LTS 1.2.0 features a new full-blown admin console with robust APIs
Admin application is a web-based application used by a privileged group of administrative personnel to manage various master data and resources like centres, devices, machines, users.
Along with the resource and data management, the admin can generate master keys, check registration status, retrieve lost RID, resume processing of paused packets etc.
To know more, read through the Admin Portal User Guide.
A control panel has been added in the Registration Client for biometric and non-biometric devices, along with an interface for configurable settings. These settings give the operator/supervisor better control of the system.
The Registration Client is now capable of handling multiple time zones, including the local time zone.
A feature to retrieve lost RID has been added.
To know more, read the Registration Client Settings page.
Authentication filters have been added to ID Authentication.
IDA is externalized through data sync via WebSub. This allows the relying parties to create their own IDA module or extend the existing IDA module.
The ID Authentication module has been enhanced to retrieve any missing data such as credentials, partner data, and policies, in the event of a crash or time-out.
The IDA database now stores the partner details, which includes data like the partner ID, policy ID, MISP ID, and API keys, to authenticate the above entities directly from IDA.
A new Partner Management Portal is available with robust APIs. This portal can be used by the partner admin, policy admin, and various MOSIP partners like authentication partners, device providers, FTM providers, and credential partners.
The partner portal has the following features:
Self-registration of partners
Partner-specific certificate upload
Partner can add device models, FTM models, policy mapping, SBI details
Partner and policy admins can approve/reject partner requests, create policies, and can add MOSIP-compliant CA (certificate authorities)
To know more, read through the Partner Management Portal User Guide.
Resident portal is an online ID management system where the residents can avail services like download their UIN card, view history, lock/ unlock authentication types, generate/revoke VID, grievance redressal etc.
To know more, read through Resident Services.
Upgrade scripts are available to help with hassle-free migration from 1.1.5.x to 1.2.0.x. These include the DB upgrade scripts, template upgrade scripts, config upgrade scripts, MOSIP seed data upgrade scripts, etc.
In LTS, support for the legacy V2 deployment has been deprecated; the updated deployment method is V3 deployment, which promises:
Enhanced security
High availability (owing to better load balancing)
Highly recommended for production
Performance enhancements
To learn more, read through Deployment.
LTS 1.2.0 provides a reporting framework for real-time streaming of data visualization on dashboards that gives a visual display of metrics and important data to track the status of various pre and post-enrolment processes. This data helps the ID issuers in improving efficiency, forecasting, and better decision-making. The framework has been used to create a set of default dashboards using Kibana.
To know more, read through Reporting.
We have curated a data set called anonymous profile to cater to the analytic needs of the adopting countries which will help to assess the progress of ID programs.
It is accessible to search engines such as elasticsearch.
To know more, read through the Anonymous Profiling Support.
We now have dockerized API test automation as opposed to jar file execution, due to which the entire automation process has become faster owing to:
Cron jobs that handle daily automation reports eliminating any need for manual trigger for test report generation.
Automatic language, environment, and secret key configuration.
To know more, read through Automation Testing.
Security should be built-in and not bolt-on. Taking our security checks to the next level, we completed a security audit as part of MOSIP 1.2.0 LTS (Asymmetric Amoeba), which was certified by Aujas.
The components have been significantly tested for scale and performance. The adopting countries can now cater to their millions of customers with confidence.
Documentation has evolved, thereby making a huge difference in resolving the issues of accessibility and assistive technology.
The availability of comprehensive and well-devised user guides for all modules has helped the community to move one step closer towards simplified ways of working autonomously.
With the LTS 1.2.0 version, performance has been significantly improved.
To know more, read through Performance Test Reports.
eSignet allows easy login to any government service using a single credential, and passwordless login using the supported authentication factors.
To know more, read through the eSignet documentation.
A safe, trusted & inclusive mobile wallet and authenticator, that enables you to carry your digital IDs, prove your presence, (offline and online), and avail services in a snap.
To know more, read through the Inji documentation.
The Compliance Tool Kit (CTK) is an online portal that can be used by MOSIP partners to test the compliance of their products developed as per the specifications (specs) published/adopted by MOSIP.
To know more, read through Compliance Tool Kit documentation.
OpenG2P is an open source platform upon which government-to-person (G2P) solutions can be built.
The platform offers people facing processes such as onboarding into schemes, identity verification, and cash transfers to their bank accounts along with a self-serviced beneficiary portal.
It also incorporates the government department facing features such as creation of registries and beneficiary lists, eligibility checks, scheme definition, payment disbursement and reconciliation.
To know more, read through OpenG2P documentation.
OpenCRVS is a digital platform for recording a person's major life events like births, deaths, marriages, and divorces.
It is a customisable open source solution designed for civil registration and its essential services such as social protection, health care, education, and economic and social opportunities.
Version: 1.2.0.1 (Latest stable release)
Release date: 19th March 2024
Version: 1.2.0.1-B4
Release date: 12th January 2024
Version: 1.2.0.1-B3
Release date: 14th April 2023
Version: 1.2.0.1-B2
Release date: 8th Jan 2023
Version: 1.2.0.1-B1
Release date: 14th Oct 2022
MOSIP has recently announced the release of its latest version, 1.2.0.1. This new release brings about an architectural upgrade and addresses multiple bugs. To assist users in migrating from the MOSIP version 1.1.5.x to the latest version 1.2.0.1, a detailed Runbook has been provided, offering comprehensive guidance.
To set up the new environment and deploy the upgraded version of MOSIP, carefully follow the procedures outlined below step-by-step.
The migration will only be supported if the latest version deployed is 1.1.5.x (mostly 1.1.5.5-P1 and above). Countries should ensure that MOSIP is updated to version 1.1.5.x before migrating to the newer version.
Once the upgrade is completed, all the modules running on the server will be upgraded together.
The IDA Database will not have any entries from version 1.1.4.
Migration of WebSub events is not supported for countries transitioning from db-based WebSub to kafka-based WebSub.
After migration, if a new machine is added for the Registration Client, it must be installed with the latest version of the Registration Client. We will not provide support for the old version.
Partners who decrypt data encrypted by the key manager will support decryption with thumbprint support after upgrading to version 1.2.0.1. Therefore, backward support to encrypt without thumbprint is not available.
Prior to migration, countries should prioritize packets stuck in between stages using the registration processor and complete the processing.
Any third-party subsystems such as Manual adjudication/ABIS will not respond after migration for a request received before migration. Therefore, it is suggested that all subsystems, such as ABIS and manual adjudication, consume all the messages from ActiveMQ, complete the processing, mark all for reprocess, and respond back to registration processor before migration.
Default resource allocation (CPU and memory) has been added to all the pods in version 1.2.0.1, so additional nodes may be required after the upgrade.
The rollback process during the migration needs to be handled by the respective system integrators (SI).
All the server services and registration clients will be running version 1.1.5.x.
Registration packets available in MinIO and the landing zone can be version 1.1.4.x / 1.1.5.x.
Existing Grafana and Kibana reports from version 1.1.5 are not required to be viewable in the upgraded environment.
If there are additional attributes in the data share for manual adjudication, the manual adjudication system will ignore them and not fail.
The first global admin user login does not need to be handled during the upgrade.
Out of scope for migration:
Configuration of correction flow and related changes are not included in this migration.
Anonymous profiles for existing data are not handled in this upgrade.
Support for the Lost RID/AID search feature for packets synced before the migration is out of scope.
This comprehensive upgrade process entails the deployment architecture upgrade from V2 to V3, as well as the MOSIP platform upgrade from version 1.1.5.5-P1 to 1.2.0.1. The various tasks involved in this process are organized into the following categories.
Installation and configuration of new environment with V3 architecture.
Deployment of external services.
Backup and restoration of external services.
Upgrade of necessary external services.
Migration of properties.
Upgrade of MOSIP services.
Execution of activities once all upgraded services are operational.
Carrying out activities after completion of initial round of testing.
Let us go through the processes discussed above in detail.
MOSIP modules are deployed in the form of microservices in a Kubernetes cluster.
Wireguard is used as a trust network extension to access the admin, control, and observation planes.
It is also used for on-the-field registrations.
MOSIP uses AWS load balancers for:
SSL termination
Reverse Proxy
CDN/Cache management
Loadbalancing
In V3, we have two Kubernetes clusters:
Observation Cluster - This cluster is a part of the observation plane and it helps in administrative tasks. By design, this is kept independent of the actual cluster as a good security practice and to ensure clear segregation of roles and responsibilities. As a best practice, this cluster or its services should be internal and should never be exposed to the external world.
MOSIP Cluster - This cluster runs all the MOSIP components and certain third party components to secure the cluster, API’s and Data.
k8s-infra : contains scripts to install and configure Kubernetes cluster with required monitoring, logging and alerting tools.
mosip-infra : contains deployment scripts to run charts in defined sequence.
mosip-config : contains all the configuration files required by the MOSIP modules.
mosip-helm : contains packaged helm charts for all the MOSIP modules.
The required VMs can run any operating system, selected as per convenience. In this installation guide, we refer to Ubuntu OS throughout.
Purpose, vCPUs, RAM, storage (HDD), number of VMs, and HA:
1. Wireguard Bastion Host: 2 vCPU, 4 GB RAM, 8 GB storage, 1 VM (HA: ensure an active-passive setup).
2. Rancher Cluster nodes (EKS managed): 2 vCPU, 8 GB RAM, 32 GB storage, 2 VMs (HA: 2).
3. MOSIP Cluster nodes (EKS managed): 8 vCPU, 32 GB RAM, 64 GB storage, 6 VMs (HA: 6).
All the VMs should be able to communicate with each other.
Stable intra-network connectivity is needed between these VMs.
All the VMs should have stable internet connectivity for docker image downloads (in case of a local setup, ensure you have a locally accessible docker registry).
During the process, we will be creating the two load balancers mentioned in the first table below.
The server interface requirement is mentioned in the second table.
Private load balancer (Observation cluster): used to access the Rancher dashboard and Keycloak of the Observation cluster. Note: access will be restricted to Wireguard key holders only.
Public load balancer (MOSIP cluster): used to access the below mentioned services:
Pre-registration
eSignet
IDA
Partner Management Service APIs
Mimoto
MOSIP file server
Resident
Private load balancer (MOSIP cluster): used to access all the services deployed as part of the setup, including the external components as well as all the MOSIP services. Note: access will be restricted to Wireguard key holders only.
Purpose VM: Wireguard Bastion Host
Network interfaces:
One private interface: on the same network as all the rest of the nodes (e.g., inside a local NAT network).
One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 51820/udp to this interface IP.
Domain names (placeholders; see the note below), their mapping, and purpose:
1. Load balancer of Observation cluster - Rancher dashboard to monitor and manage the Kubernetes cluster. You can share an existing Rancher cluster.
2. Load balancer of Observation cluster - Administrative IAM tool (Keycloak). This is for Kubernetes administration.
3. Private load balancer of MOSIP cluster - Index page with links to the different dashboards of the MOSIP environment. (This is just for reference; please do not expose this page in a real production or UAT environment.)
4. Private load balancer of MOSIP cluster - Internal APIs are exposed through this domain. They are accessible privately over the Wireguard channel.
5. Public load balancer of MOSIP cluster - All publicly usable APIs are exposed using this domain.
6. Public load balancer of MOSIP cluster - Domain name for MOSIP's pre-registration portal. The portal is accessible publicly.
7. Private load balancer of MOSIP cluster - Provides direct access to the ActiveMQ dashboard. It is limited and can be used only over Wireguard.
8. Private load balancer of MOSIP cluster - Optional installation. Used to access the Kibana dashboard over Wireguard.
9. Private load balancer of MOSIP cluster - The regclient can be downloaded from this domain. It should be used over Wireguard.
10. Private load balancer of MOSIP cluster - MOSIP's admin portal is exposed using this domain. This is an internal domain and access is restricted to Wireguard.
11. Private load balancer of MOSIP cluster - Optional. This domain is used to access the object server. Map this domain based on the object server you choose. In our reference implementation, MinIO is used and this domain lets you access MinIO's console over Wireguard.
12. Private load balancer of MOSIP cluster - Kafka UI is installed as part of MOSIP's default installation. We can access the Kafka UI over Wireguard; it is mostly used for administrative needs.
13. Private load balancer of MOSIP cluster - MOSIP uses an OpenID Connect server to limit and manage access across all the services. The default installation comes with Keycloak. This domain is used to access the Keycloak server over Wireguard.
14. Private load balancer of MOSIP cluster - This domain points to the postgres server. You can connect to postgres via port forwarding over Wireguard.
15. Public load balancer of MOSIP cluster - MOSIP's Partner Management portal, used to manage partners; the portal is accessed over Wireguard.
16. Public load balancer of MOSIP cluster - Accessing the Resident portal publicly.
17. Public load balancer of MOSIP cluster - Accessing the IDP publicly.
18. Private load balancer of MOSIP cluster - Accessing the mock-SMTP UI over Wireguard.
Note:
Only proceed to DNS mapping after the ingress gateways are installed and the load balancer is already configured.
The above table is just a placeholder for hostnames; the actual names vary from organisation to organisation.
As only secured HTTPS connections are allowed via the nginx server, you will need the below mentioned valid SSL certificates:
One valid wildcard SSL certificate for the domain used to access the Observation cluster, which will be created using ACM (Amazon Certificate Manager). In the above example, *.org.net is a similar example domain.
One valid wildcard SSL certificate for the domain used to access the MOSIP cluster, which will be created using ACM (Amazon Certificate Manager). In the above example, *.sandbox.xyz.net is a similar example domain.
kubectl client version 1.23.6
helm client version 3.8.2; add the below repos as well:
istioctl version 1.15.0
eksctl version 0.121.0
An AWS account and credentials with permissions to create an EKS cluster.
AWS credentials in the ~/.aws/ folder as given here.
Save your existing ~/.kube/config file under another name (IMPORTANT: during this process your existing ~/.kube/config file will be overridden).
Save the .pem file from the AWS console and store it in the ~/.ssh/ folder (generate a new one if you do not have this key file).
Create a directory named mosip on your PC and:
Clone the k8s-infra repo with tag 1.2.0.1-B2 inside the mosip directory:
git clone https://github.com/mosip/k8s-infra -b v1.2.0.1-B2
Clone mosip-infra with tag 1.2.0.1-B2 inside the mosip directory:
git clone https://github.com/mosip/mosip-infra -b v1.2.0.1-B2
Set the below mentioned variables in .bashrc and reload it:
source .bashrc
Note:
The above environment variables will be used throughout the installation to move from one directory to another while running the install scripts.
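A minimal sketch of such variables, assuming the repositories were cloned under ~/mosip (K8_ROOT is used by the scripts referenced below; the other names are assumptions and should be adapted to your layout):
# Append to ~/.bashrc, then run: source ~/.bashrc
export K8_ROOT=~/mosip/k8s-infra        # path to the cloned k8s-infra repo
export INFRA_ROOT=~/mosip/mosip-infra   # variable name is an assumption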
A Wireguard bastion host (Wireguard server) provides a secure private channel to access the MOSIP cluster. The host restricts public access and enables access only to those clients whose public key is listed in the Wireguard server. Wireguard listens on UDP port 51820.
Set up the Wireguard VM and Wireguard bastion server:
Create a Wireguard server VM in the AWS console with the above mentioned hardware and network requirements.
Edit the security group and add the following inbound rules in the AWS console:
Type 'Custom TCP', port range '51820', source '0.0.0.0/0'
Type 'Custom UDP', port range '51820', source '0.0.0.0/0'
Install docker in the Wireguard machine as given here.
Setup Wireguard server
SSH to wireguard VM
Create a directory for storing the Wireguard config files:
mkdir -p wireguard/config
Install and start the Wireguard server using Docker as given below (a sketch follows the note):
Note:
* Increase the number of peers above in case more than 30 Wireguard client confs are needed (-e PEERS=30).
* Change the directory to be mounted to the Wireguard docker container if needed. All your Wireguard confs will be generated in the mounted directory (-v /home/ubuntu/wireguard/config:/config).
Setup Wireguard Client in your PC
Install Wireguard client in your PC.
Assign wireguard.conf:
SSH to the wireguard server VM.
cd /home/ubuntu/wireguard/config
Assign one of the peer configs to yourself and use the same from your PC to connect to the server.
Create an assigned.txt file to keep track of the peer files allocated, and update it every time a peer is allocated to someone.
Use the ls command to see the list of peers.
Go inside your selected peer directory and make the below mentioned changes in peer.conf:
cd peer1
nano peer1.conf
Delete the DNS IP.
Update the allowed IPs to the subnet CIDR, e.g., 10.10.20.0/23.
Share the updated peer.conf with the respective peer to connect to the Wireguard server from their personal PC.
Add peer.conf to your PC's /etc/wireguard directory as wg0.conf.
Start the Wireguard client and check the status:
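On a Linux client, for instance, this can be done with the standard wg-quick tooling (assuming the config was saved as /etc/wireguard/wg0.conf as above):
sudo wg-quick up wg0
sudo wg show
# or via systemd: sudo systemctl enable --now wg-quick@wg0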
Once connected to Wireguard, you should now be able to log in using private IPs.
Observation K8s Cluster setup
Set up the Rancher cluster:
cd $K8_ROOT/rancher/aws
Copy rancher.cluster.config.sample to rancher.cluster.config.
Review and update the below mentioned parameters of rancher.cluster.config carefully:
name
region
version: “1.24“
instance related details
instanceName
instanceType
desiredcapacity
volumeSize
volumeType
publicKeyName.
Update the details of the subnets to be used from the VPC.
Install:
eksctl create cluster -f rancher.cluster.config
Wait for the cluster creation to complete; it generally takes around 30 minutes to create or update a cluster.
Once the EKS K8s cluster is ready, the below mentioned output will be displayed on the console screen:
EKS cluster "my-cluster" in "region-code" region is ready
The config file for the new cluster will be created at ~/.kube/config.
Make sure to back up and store ~/.kube/config under a new name, e.g., ~/.kube/obs-cluster.config.
Change the file permission using the below command:
chmod 400 ~/.kube/obs-cluster.config
Set KUBECONFIG properly so that you can access the cluster:
export KUBECONFIG=~/.kube/obs-cluster.config
Test cluster access:
kubectl get nodes
The command will list the details of the nodes of the rancher cluster.
Once the rancher cluster is ready, the ingress and storage class need to be set up before other applications are installed.
Nginx Ingress Controller: used for ingress in the rancher cluster.
The above will automatically spawn an internal AWS Network Load Balancer (L4).
Check the following on the AWS console:
An NLB has been created. You may also see the DNS of the NLB with the command sketched below.
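For example (the ingress-nginx namespace is the usual default for this controller; adjust if yours differs):
kubectl -n ingress-nginx get svc
# the EXTERNAL-IP/hostname of the controller service is the NLB DNS name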
Obtain AWS TLS certificate as given here
Edit listener "443" and select "TLS".
Note the target group name of listener 80. Set the target group of 443 to the target group of 80. Basically, we want TLS termination at the LB, and it must forward HTTP traffic (not HTTPS) to port 80 of the ingress controller. So:
Input of LB: HTTPS
Output of LB: HTTP --> port 80 of the ingress nginx controller
Enable "Proxy Protocol v2" in the target group settings.
Make sure all subnets are selected in LB --> Description --> Edit subnets.
Check the health checks of the target groups.
Remove listener 80 from the LB as we will receive traffic only on 443.
Storage class setup:
The default storage class on EKS is gp2. gp2 by default is in Delete mode, which means that if a PVC is deleted, the underlying PV is also deleted.
To enable volume expansion for the existing gp2 storage class, modify its YAML configuration by adding allowVolumeExpansion: true.
Use kubectl edit sc gp2 to edit the YAML configuration.
Create the storage class gp2-retain by running sc.yaml, which keeps PVs in Retain mode. Set the storage class to gp2-retain in case you want to retain PVs. A sketch of both steps is given below.
We need the EBS CSI driver for our storage class to work; follow the steps here to set up the EBS driver.
Create the following domain names:
Rancher: rancher.xyz.net
Keycloak: keycloak.xyz.net
Point the above to the internal IP address of the NLB. This assumes that a Wireguard bastion host has been installed. On AWS, this is done on the Route 53 console.
Rancher UI: Rancher provides full CRUD capability for creating and managing Kubernetes clusters.
Install Rancher using Helm: update hostname in rancher-values.yaml and run the following command to install.
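A sketch of the Helm commands, following Rancher's standard chart install flow (the rancher-latest repo and cattle-system namespace are Rancher's documented defaults; prerequisites such as cert-manager are assumed to be in place):
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
helm install rancher rancher-latest/rancher \
  --namespace cattle-system --create-namespace \
  -f rancher-values.yaml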
Login: open the Rancher page https://rancher.org.net.
Get the bootstrap password using:
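The command below is the one documented for Rancher's Helm install; verify it against the Rancher version you deploy:
kubectl get secret --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}'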
Assign a password. IMPORTANT: make sure this password is securely saved and retrievable by the Admin.
Keycloak: Keycloak is an OAuth 2.0 compliant Identity and Access Management (IAM) system used to manage access to Rancher for cluster controls.
keycloak_client.json: used to create the SAML client on Keycloak for Rancher integration.
Keycloak - Rancher Integration
Log in as the admin user in Keycloak and make sure an email ID and first name are populated for the admin user. This is important for Rancher authentication as given below.
Enable authentication with Keycloak using the steps given here.
In Keycloak add another Mapper for the rancher client (in Master realm) with following fields:
Protocol: saml
Name: username
Mapper Type: User Property
Property: username
Friendly Name: username
SAML Attribute Name: username
SAML Attribute NameFormat: Basic
Specify the following mappings in Rancher's Authentication Keycloak form:
Display Name Field: givenName
User Name Field: email
UID Field: username
Entity ID Field: https://your-rancher-domain/v1-saml/keycloak/saml/metadata
Rancher API Host: https://your-rancher-domain
Groups Field: member
RBAC :
For users in Keycloak, assign roles in Rancher - cluster and project roles. Under the default project, add all the namespaces. Then, to a non-admin user, you may provide a Read-Only role (under projects).
If you want to create custom roles, you can follow the steps given here.
Add a member to cluster/project in Rancher:
Give the member name exactly as the username in Keycloak.
Assign an appropriate role like Cluster Owner, Cluster Viewer etc.
You may create a new role with fine-grained access control.
Certificate expiry
In case you see a certificate expiry message while adding users, run these commands on the local cluster:
https://rancher.com/docs/rancher/v2.6/en/troubleshooting/expired-webhook-certificates/
Set up the MOSIP cluster:
cd $K8_ROOT/mosip/aws
Copy cluster.config.sample to mosip.cluster.config.
Review and update the below mentioned parameters of mosip.cluster.config carefully:
name
region
version: “1.24“
instance related details
instanceName
instanceType
desiredcapacity
volumeSize
volumeType
publicKeyName.
Update the details of the subnets to be used from the VPC.
Install:
eksctl create cluster -f mosip.cluster.config
Wait for the cluster creation to complete; it generally takes around 30 minutes to create or update a cluster.
Once the EKS K8s cluster is ready, the below mentioned output will be displayed on the console screen:
EKS cluster "my-cluster" in "region-code" region is ready
The config file for the new cluster will be created at ~/.kube/config.
Make sure to back up and store ~/.kube/config under a new name, e.g., ~/.kube/mosip-cluster.config.
Change the file permission using the below command:
chmod 400 ~/.kube/mosip-cluster.config
Set KUBECONFIG properly so that you can access the cluster:
export KUBECONFIG=~/.kube/mosip-cluster.config
Test cluster access:
kubectl get nodes
The command will list the details of the nodes of the MOSIP cluster.
Log in as admin in the Rancher console.
Select Import Existing for cluster addition.
Select Generic as the cluster type to add.
Fill the Cluster Name field with a unique cluster name and select Create.
You will get the kubectl commands to be executed in the Kubernetes cluster. Copy the commands and execute them from your PC (make sure your kubeconfig file is correctly set to the MOSIP cluster).
Wait for a few seconds after executing the command for the cluster to get verified.
Your cluster is now added to the Rancher management server.
Global configmap: the global configmap contains the list of necessary details to be used throughout the namespaces of the cluster for common configuration.
cd $K8_ROOT/mosip
Copy global_configmap.yaml.sample to global_configmap.yaml.
Update the domain names in global_configmap.yaml and run:
kubectl apply -f global_configmap.yaml
Storage class setup:
The default storage class on EKS is gp2. gp2 by default is in Delete mode, which means that if a PVC is deleted, the underlying PV is also deleted.
To enable volume expansion for the existing gp2 storage class, modify its YAML configuration by adding allowVolumeExpansion: true.
Use kubectl edit sc gp2 to edit the YAML configuration.
Create the storage class gp2-retain by running sc.yaml, which keeps PVs in Retain mode. Set the storage class to gp2-retain in case you want to retain PVs (see the sketch shown earlier).
We need the EBS CSI driver for our storage class to work; follow the steps here to set up the EBS driver.
We also need the EFS CSI driver for the regproc services, because the EBS driver only supports RWO while we need RWX; follow these steps to set up the EFS CSI driver.
Ingress and load balancer (LB):
Ingress is not installed by default on EKS. We use the Istio ingress gateway controller to allow traffic into the cluster. Two channels are created - public and internal. See architecture.
Install istioctl as given here in your system.
Install ingresses as given here:
Load balancer setup for istio-ingress:
The above Istio installation will automatically spawn internal AWS Network Load Balancers (L4).
These may also be seen with:
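For example:
kubectl -n istio-system get svc
# the ingress gateway services list the NLB DNS names (service names may differ in your setup)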
You may view them on AWS console in Loadbalancer section.
TLS termination is supposed to happen at the LB, so all traffic coming to the ingress controller shall be HTTP.
Obtain AWS TLS certificate as given here
Add the certificates and 443 access to the LB listener.
Update the listener TCP->443 to TLS->443 and point it to the certificate of the domain name that belongs to your cluster.
Forward TLS->443 listener traffic to the target group that corresponds to the listener on port 80 of the respective load balancer. This is because, after TLS termination, the protocol is HTTP, so we must point the LB to the HTTP port of the ingress controller.
Update the health check ports of the LB target groups to the node port corresponding to port 15021. You can see the node ports with:
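For example (the service name below is illustrative; use the ingress gateway services present in your cluster):
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.port==15021)].nodePort}'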
Enable Proxy Protocol v2 on target groups.
Make sure all subnets are included in the Availability Zones for the LB: Description --> Availability Zones --> Edit Subnets.
Make sure to delete the listeners for ports 80 and 15021 from each of the load balancers, as we restrict unsecured port 80 access over HTTP.
DNS mapping:
Initially, all the services will be accessible only over the internal channel.
Point all your domain names to the internal load balancers' DNS/IP initially, till testing is done.
On AWS, this may be done on the Route 53 console.
After the go-live decision, enable public access.
Check overall whether the nginx and Istio wiring is set up correctly.
Install httpbin: this utility docker returns the HTTP headers received inside the cluster. You may use it for general debugging - to check ingress, headers etc.
To see what's reaching httpbin (an example; replace with your domain name):
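A hypothetical check, assuming httpbin has been exposed under an /httpbin path on your internal API host (both the domain and the path here are assumptions; adjust them to your routing):
curl -s 'https://api-internal.sandbox.xyz.net/httpbin/get?show_env=true'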
Prometheus, Grafana, and Alertmanager tools are used for cluster monitoring.
Select 'Monitoring' App from Rancher console -> Apps & Marketplaces.
In Helm options, open the YAML file and disable Nginx Ingress.
Click on Install
.
Alerting is part of cluster monitoring, where alert notifications are sent to the configured email or slack channel.
Monitoring should be deployed first, which includes the deployment of Prometheus, Grafana, and Alertmanager.
Create a Slack incoming webhook.
After setting up the Slack incoming webhook, update slack_api_url and slack_channel_name in alertmanager.yml.
cd $K8_ROOT/monitoring/alerting/
nano alertmanager.yml
Update
Update Cluster_name in patch-cluster-name.yaml.
cd $K8_ROOT/monitoring/alerting/
nano patch-cluster-name.yaml
Update
Install the default alerts along with some of the defined custom alerts:
Alerting is installed.
MOSIP uses Rancher FluentD and Elasticsearch to collect logs from all services and shows them in the Kibana dashboard.
Install the Rancher FluentD system: for scraping logs out of all the microservices in the MOSIP K8s cluster.
Install Logging from Apps & Marketplace within the Rancher UI.
Select chart version 100.1.3+up3.17.7 from the Rancher console -> Apps & Marketplaces.
Configure Rancher FluentD:
Create the clusteroutput:
kubectl apply -f clusteroutput-elasticsearch.yaml
Start the clusterflow:
kubectl apply -f clusterflow-elasticsearch.yaml
Install Elasticsearch, Kibana, and the Istio addons:
cd $K8_ROOT/logging
./install.sh
Set min_age in elasticsearch-ilm-script.sh and execute the same. min_age is the minimum number of days for which indices will be stored in Elasticsearch.
cd $K8_ROOT/logging
./elasticsearch-ilm-script.sh
MOSIP provides a set of Kibana dashboards for checking logs and throughput.
A brief description of these dashboards is as follows:
01-logstash.ndjson contains the logstash index pattern required by the rest of the dashboards.
02-error-only-logs.ndjson contains a Search dashboard which shows only the error logs of the services, called the MOSIP Error Logs dashboard.
03-service-logs.ndjson contains a Search dashboard which shows all logs of a particular service, called the MOSIP Service Logs dashboard.
04-insight.ndjson contains dashboards which show insights into MOSIP processes, like the number of UINs generated (total and per hour), the number of biometric deduplications processed, the number of packets uploaded etc., called the MOSIP Insight dashboard.
05-response-time.ndjson contains dashboards which show how quickly different MOSIP services respond to different APIs over time, called the Response Time dashboard.
Import dashboards:
cd $K8_ROOT/logging/dashboard
./load_kibana_dashboards.sh ./dashboards <cluster-kube-config-file>
View dashboards:
Open the Kibana dashboard from https://kibana.sandbox.xyz.net.
Kibana --> Menu (on top left) --> Dashboard --> Select the dashboard.
External dependencies are the set of external requirements needed for the functioning of MOSIP's core services, like DB, object store, HSM etc.
Check the detailed installation instructions of all the external components.
Now that the Kubernetes cluster and external dependencies are installed, you can continue with MOSIP service deployment.
Check the detailed MOSIP modular installation steps for MOSIP module deployment.
MOSIP's successful deployment can be verified by comparing the results of the API testrig with the testrig benchmark.
When prompted, input the hour of the day at which to execute the API testrig.
The daily API testrig cron job will be executed at the chosen hour of the day.
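A hedged check that the scheduled job was created (the namespace name here is an assumption; adjust it to your deployment):

```
# List the testrig CronJob and confirm its schedule matches the hour you chose.
kubectl get cronjob -n apitestrig
```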
MOSIP modules are deployed in the form of microservices in kubernetes cluster.
It is also used for the on-the-field registrations.
SSL termination
Reverse Proxy
CDN/Cache management
Loadbalancing
In V3, we have two Kubernetes clusters:
Observation cluster - This cluster is a part of the observation plane and it helps in administrative tasks. By design, this is kept independent of the actual cluster as a good security practice and to ensure clear segregation of roles and responsibilities. As a best practice, this cluster or its services should be internal and should never be exposed to the external world.
It is recommended to configure log monitoring and network monitoring in this cluster.
In case you have an internal container registry, then it should run here.
MOSIP cluster - This cluster runs all the MOSIP components and certain third party components to secure the cluster, API’s and data.
The required VMs can run any OS as per convenience; we refer to Ubuntu OS throughout this installation guide.
All the VMs should be able to communicate with each other.
Stable intra-network connectivity is needed between these VMs.
All the VMs should have stable internet connectivity for docker image downloads (in case of a local setup, ensure a locally accessible docker registry is available).
Server interface requirements are as mentioned in the table below:
As only secure HTTPS connections are allowed via the Nginx server, the below mentioned valid SSL certificates are needed:
One valid wildcard SSL certificate related to the domain used for accessing the Observation cluster; this needs to be stored inside the Nginx server VM for the Observation cluster. In the example above, *.org.net is the corresponding domain.
One valid wildcard SSL certificate related to the domain used for accessing the MOSIP cluster; this needs to be stored inside the Nginx server VM for the MOSIP cluster. In the example above, *.sandbox.xyz.net is the corresponding domain.
Tools to be installed in Personal Computers for complete deployment
[Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html): version > 2.12.4
Create a directory named mosip on your PC and:
Clone the k8s-infra repo with tag 1.2.0.1 (or whichever is the latest version) inside the mosip directory: git clone https://github.com/mosip/k8s-infra -b v1.2.0.1
Clone mosip-infra with tag 1.2.0.1 (or whichever is the latest version) inside the mosip directory: git clone https://github.com/mosip/mosip-infra -b v1.2.0.1
Set the below mentioned variables in .bashrc:
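An illustrative sketch, assuming both repos were cloned under ~/mosip as described above (the K8_ROOT name follows its usage in this guide; INFRA_ROOT is an assumed convention for mosip-infra):

```
# Append to ~/.bashrc so the paths are available in every shell.
export K8_ROOT=$HOME/mosip/k8s-infra
export INFRA_ROOT=$HOME/mosip/mosip-infra
```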
source .bashrc
Note: The above environment variables will be used throughout the installation to move from one directory to another while running the install scripts.
A Wireguard bastion host (Wireguard server) provides a secure private channel to access the MOSIP cluster. The host restricts public access and enables access only for those clients who have their public key listed on the Wireguard server. Wireguard listens on UDP port 51820.
Create a Wireguard server VM with above mentioned Hardware and Network requirements.
Open ports and Install docker on Wireguard VM.
cd $K8_ROOT/wireguard/
Create a copy of hosts.ini.sample as hosts.ini and update the required details for the Wireguard VM.
cp hosts.ini.sample hosts.ini
Execute ports.yaml to enable ports at the VM level using ufw:
ansible-playbook -i hosts.ini ports.yaml
Note:
The pem files used to access the nodes should have 400 permissions.
sudo chmod 400 ~/.ssh/privkey.pem
These ports only need to be opened for sharing packets over UDP.
Take the necessary measures at the firewall level so that the Wireguard server is reachable on 51820/udp.
Setup Wireguard server
SSH to wireguard VM
Create directory for storing wireguard config files.
mkdir -p wireguard/config
Install and start the Wireguard server using Docker as given below:
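A sketch of the server start command, assuming the commonly used linuxserver/wireguard image; the PEERS count, mounted config path and UDP port mirror the values referenced in the notes below:

```
# Run the Wireguard server container; peer configs are generated under the mounted /config path.
docker run -d \
  --name=wireguard \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_MODULE \
  -e PUID=1000 -e PGID=1000 \
  -e PEERS=30 \
  -p 51820:51820/udp \
  -v /home/ubuntu/wireguard/config:/config \
  -v /lib/modules:/lib/modules \
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
  --restart unless-stopped \
  linuxserver/wireguard
```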
Note:
Increase the no. of peers above in case more than 30 wireguard client confs (-e PEERS=30) are needed.
Change the directory to be mounted to the Wireguard docker as per need. All your Wireguard configs will be generated in the mounted directory (-v /home/ubuntu/wireguard/config:/config).
Install Wireguard client in your PC.
Assign wireguard.conf:
SSH to the wireguard server VM.
cd /home/ubuntu/wireguard/config
Assign one of the peer configs to yourself and use the same from your PC to connect to the server.
Create an assigned.txt file to keep track of the peer files allocated, and update it every time a peer is allocated to someone.
Use the ls command to see the list of peers.
Get inside your selected peer directory, and make the below mentioned changes in peer.conf:
cd peer1
nano peer1.conf
Delete the DNS IP.
Update the allowed IPs to the subnet's CIDR, e.g. 10.10.20.0/23.
Share the updated peer.conf with the respective peer to connect to the Wireguard server from their personal PC.
Add peer.conf to your PC's /etc/wireguard directory as wg0.conf.
Start the Wireguard client and check the status:
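Typical client-side commands on a Linux PC, assuming the config was saved as /etc/wireguard/wg0.conf as described above:

```
sudo systemctl enable --now wg-quick@wg0   # bring up the tunnel
sudo wg show                               # verify the latest handshake with the server
```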
Once connected to Wireguard, you should now be able to log in using private IPs.
Observation K8s Cluster setup
Install all the required tools mentioned in pre-requisites for PC.
kubectl
helm
ansible
rke (version 1.3.10)
Setup Observation Cluster node VM’s as per the hardware and network requirements as mentioned above.
Setup passwordless SSH into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).
Generate keys on your PC ssh-keygen -t rsa
Copy the keys to remote observation node VM’s ssh-copy-id <remote-user>@<remote-ip>
SSH into the node to check password-less SSH ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>
Note:
Make sure the permission for
privkey.pem
for ssh is set to 400.
Run env-check-setup.yaml to check that the cluster nodes are fine and do not have known issues.
cd $K8_ROOT/rancher/on-prem
Create a copy of hosts.ini.sample as hosts.ini and update the required details for the Observation K8s cluster nodes.
cp hosts.ini.sample hosts.ini
ansible-playbook -i hosts.ini env-check-setup.yaml
This Ansible playbook checks whether the localhost mapping is already present in the /etc/hosts file on all cluster nodes; if not, it adds it.
Open ports and install docker on Observation K8 Cluster node VM’s.
cd $K8_ROOT/rancher/on-prem
Ensure that hosts.ini is updated with the node details.
Update the vpc_ip variable in ports.yaml with the VPC CIDR to allow access only from machines inside the same VPC.
Execute ports.yaml to enable ports at the VM level using ufw:
ansible-playbook -i hosts.ini ports.yaml
Disable swap in cluster nodes. (Ignore if swap is already disabled)
ansible-playbook -i hosts.ini swap.yaml
Execute docker.yaml to install Docker and add the user to the docker group:
ansible-playbook -i hosts.ini docker.yaml
Creating RKE Cluster Configuration file
rke config
The command will prompt for node details related to the cluster; provide inputs w.r.t. the below mentioned points:
SSH Private Key Path:
Number of Hosts:
SSH Address of host:
SSH User of host:
Make all the nodes Worker host by default.
To create an HA cluster, specify more than one host with the roles Control Plane and etcd host.
Network Plugin Type: Continue with canal as the default network plugin.
For the rest of the configuration, opt for the required or default values.
As a result of the rke config command, a cluster.yml file will be generated inside the same directory; update the below mentioned fields:
nano cluster.yml
Remove the default Ingress install
Add the name of the kubernetes cluster
cluster_name: sandbox-name
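A quick sanity check after editing, assuming the standard RKE cluster.yml keys (ingress provider set to none and a cluster_name entry):

```
# Both lines should appear in the generated cluster.yml after the edits above.
grep -E 'provider: none|cluster_name:' cluster.yml
```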
Set up the cluster:
Once cluster.yml is ready, you can bring up the Kubernetes cluster using a simple command.
This command assumes the cluster.yml file is in the same directory as where you are running the command.
rke up
As part of the Kubernetes creation process, a kubeconfig file has been created and written at kube_config_cluster.yml, which can be used to start interacting with your Kubernetes cluster.
Copy the kubeconfig files
To access the cluster using the kubeconfig file, use any one of the below methods:
cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config
Alternatively
export KUBECONFIG="$HOME/.kube/<cluster_name>_config"
Test cluster access:
kubectl get nodes
Command will result in details of the nodes of the Observation cluster.
Save your files
Save a copy of the following files in a secure location; they are needed to maintain, troubleshoot and upgrade your cluster.
cluster.yml: The RKE cluster configuration file.
Once the Rancher cluster is ready, we need the ingress and storage class to be set up before other applications can be installed.
This will install the ingress in the ingress-nginx namespace of the Rancher cluster.
The following storage classes can be used:
MOSIP uses NFS as the storage class for the reference architecture.
For the Nginx server setup we need an SSL certificate; add the same to the Nginx server.
In case a valid SSL certificate is not available, generate one using Let's Encrypt:
SSH into the nginx server
Install Pre-requisites
Generate wildcard SSL certificates for your domain name.
sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.org.net
Replace org.net with your domain.
The default challenge HTTP is changed to DNS challenge, as we require wildcard certificates.
Create a DNS record in your DNS service of type TXT with host _acme-challenge.org.net, with the string prompted by the script.
Wait for a few minutes for the above entry to get into effect.
Verify:
host -t TXT _acme-challenge.org.net
Press enter in the certbot prompt to proceed.
Certificates are created in /etc/letsencrypt on your machine.
Certificates created are valid for 3 months only.
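Since certificates issued through the manual DNS challenge are not auto-renewed, a hedged way to keep track of expiry is:

```
# List issued certificates and their expiry dates; re-run the certbot command above before they lapse.
sudo certbot certificates
```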
Provide the below mentioned inputs as and when prompted:
Rancher nginx ip : internal ip of the nginx server VM.
SSL cert path : path of the ssl certificate to be used for ssl termination.
SSL key path : path of the ssl key to be used for ssl termination.
Cluster node IPs: IPs of the Rancher cluster nodes.
Post installation check:
sudo systemctl status nginx
Steps to Uninstall nginx (in case required)
sudo apt purge nginx nginx-common
DNS mapping: Once the Nginx server is installed successfully, create DNS mappings for the Rancher cluster related domains as mentioned in the DNS requirements section (rancher.org.net, keycloak.org.net).
Rancher provides full CRUD capability for creating and managing Kubernetes clusters.
Install Rancher using Helm: update hostname in rancher-values.yaml and run the following command to install.
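A sketch of the install, assuming the standard rancher-latest Helm repository and the cattle-system namespace; rancher-values.yaml is the values file edited above:

```
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
# Install Rancher with the hostname set in rancher-values.yaml.
helm install rancher rancher-latest/rancher \
  --namespace cattle-system --create-namespace \
  -f rancher-values.yaml
```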
Login:
Get the bootstrap password using:
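Assuming Rancher was installed into the cattle-system namespace as sketched above, the bootstrap password can be read from the generated secret:

```
kubectl get secret --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}'
```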
Assign a password. IMPORTANT: make sure this password is securely saved and retrievable by the admin.
keycloak_client.json: Used to create the SAML client on Keycloak for Rancher integration.
Log in as the admin user in Keycloak and make sure the email ID and first name fields are populated for the admin user. This is important for Rancher authentication as given below.
In Keycloak add another Mapper for the rancher client (in Master realm) with following fields:
Protocol: saml
Name: username
Mapper Type: User Property
Property: username
Friendly Name: username
SAML Attribute Name: username
SAML Attribute NameFormat: Basic
Specify the following mappings in Rancher's Authentication Keycloak form:
Display Name Field: givenName
User Name Field: email
UID Field: username
Entity ID Field: https://your-rancher-domain/v1-saml/keycloak/saml/metadata
Rancher API Host: https://your-rancher-domain
Groups Field: member
For users in Keycloak, assign cluster and project roles in Rancher. Under the default project add all the namespaces. Then, to a non-admin user, you may provide the Read-Only role (under projects).
Add a member to cluster/project in Rancher:
Navigate to RBAC cluster members
Add the member name exactly as the username in Keycloak.
Assign appropriate role like Cluster Owner, Cluster Viewer etc.
You may create a new role with fine-grained access control.
Add a group to a cluster/project in Rancher:
Navigate to RBAC cluster members
Click on Add and select a group from the displayed drop-down.
Assign appropriate role like Cluster Owner, Cluster Viewer etc.
To add groups, the user must be a member of the group.
Creating a Keycloak group involves the following steps:
Go to the "Groups" section in Keycloak and create groups with default roles.
Navigate to the "Users" section in Keycloak, select a user, and then go to the "Groups" tab. From the list of groups, add the user to the required group.
Certificates expiry
In case you see a certificate expiry message while adding users, run these commands on the local cluster:
Pre-requisites:
Install all the required tools mentioned in Pre-requisites for PC.
kubectl
helm
ansible
rke (version 1.3.10)
Setup MOSIP K8 Cluster node VM’s as per the hardware and network requirements as mentioned above.
Run env-check-setup.yaml to check that the cluster nodes are fine and do not have known issues.
cd $K8_ROOT/rancher/on-prem
Create a copy of hosts.ini.sample as hosts.ini and update the required details for the MOSIP K8s cluster nodes.
cp hosts.ini.sample hosts.ini
ansible-playbook -i hosts.ini env-check-setup.yaml
This Ansible playbook checks whether the localhost mapping is already present in the /etc/hosts file on all cluster nodes; if not, it adds it.
Setup passwordless ssh into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).
Generate keys on your PC
ssh-keygen -t rsa
Copy the keys to remote rancher node VM’s:
ssh-copy-id <remote-user>@<remote-ip>
SSH into the node to check password-less SSH
ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>
Rancher UI : (deployed in Rancher K8 cluster)
Open ports and Install docker on MOSIP K8 Cluster node VM’s.
cd $K8_ROOT/mosip/on-prem
Create a copy of hosts.ini.sample as hosts.ini and update the required details for the MOSIP K8s cluster nodes.
cp hosts.ini.sample hosts.ini
Update the vpc_ip variable in ports.yaml with the VPC CIDR to allow access only from machines inside the same VPC.
Execute ports.yaml to enable ports at the VM level using ufw:
ansible-playbook -i hosts.ini ports.yaml
Disable swap in cluster nodes. (Ignore if swap is already disabled)
ansible-playbook -i hosts.ini swap.yaml
Execute docker.yaml to install Docker and add the user to the docker group:
ansible-playbook -i hosts.ini docker.yaml
Creating RKE Cluster Configuration file
rke config
The command will prompt for node details related to the cluster; provide inputs w.r.t. the below mentioned points:
SSH Private Key Path:
Number of Hosts:
SSH Address of host:
SSH User of host:
Make all the nodes Worker host by default.
To create an HA cluster, specify more than one host with the roles Control Plane and etcd host.
Network Plugin Type: Continue with canal as the default network plugin.
For the rest of the configuration, opt for the required or default values.
As a result of the rke config command, a cluster.yml file will be generated inside the same directory; update the below mentioned fields:
nano cluster.yml
Remove the default Ingress install
Add the name of the kubernetes cluster
Set up the cluster:
Once cluster.yml is ready, you can bring up the Kubernetes cluster using a simple command.
This command assumes the cluster.yml file is in the same directory as where you are running the command.
rke up
The last line should read Finished building Kubernetes cluster successfully to indicate that your cluster is ready to use.
Copy the kubeconfig files
To access the cluster using the kubeconfig file, use any one of the below methods:
cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config
Alternatively: export KUBECONFIG="$HOME/.kube/<cluster_name>_config"
Test cluster access:
kubectl get nodes
The command will list the details of the nodes of the MOSIP cluster.
Save your files
Save a copy of the following files in a secure location; they are needed to maintain, troubleshoot and upgrade your cluster.
cluster.yml: The RKE cluster configuration file.
Global configmap: the global configmap contains the list of necessary details to be used throughout the namespaces of the cluster for common configuration.
cd $K8_ROOT/mosip
Copy global_configmap.yaml.sample to global_configmap.yaml.
Update the domain names in global_configmap.yaml and run:
kubectl apply -f global_configmap.yaml
cd $K8_ROOT/mosip/on-prem/istio
./install.sh
This will bring up all the Istio components and the Ingress Gateways.
Check Ingress Gateway services:
kubectl get svc -n istio-system
istio-ingressgateway: external-facing Istio service.
istio-ingressgateway-internal: internal-facing Istio service.
istiod: Istio daemon for replicating the changes to all envoy filters.
The following storage classes can be used:
MOSIP uses NFS as the storage class for the reference architecture.
Login as admin in Rancher console
Select Import Existing for cluster addition.
Select Generic as the cluster type to add.
Fill the Cluster Name field with a unique cluster name and select Create.
You will get the kubectl commands to be executed in the Kubernetes cluster. Copy the command and execute it from your PC (make sure your kube-config file is correctly set to the MOSIP cluster).
Wait for a few seconds after executing the command for the cluster to get verified.
Your cluster is now added to the rancher management server.
For the Nginx server setup, we need an SSL certificate; add the same to the Nginx server.
In case a valid SSL certificate is not available, generate one using Let's Encrypt:
SSH into the nginx server
Install Pre-requisites:
Generate wildcard SSL certificates for your domain name.
sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.sandbox.mosip.net -d sandbox.mosip.net
Replace sandbox.mosip.net with your domain.
The default challenge HTTP is changed to DNS challenge, as we require wildcard certificates.
Create a DNS record in your DNS service of type TXT with host _acme-challenge.sandbox.xyz.net, with the string prompted by the script.
Wait for a few minutes for the above entry to get into effect.
Verify: host -t TXT _acme-challenge.sandbox.mosip.net
Press enter in the certbot prompt to proceed.
Certificates are created in /etc/letsencrypt on your machine.
Certificates created are valid for 3 months only.
Clone k8s-infra
Provide below mentioned inputs as and when prompted
MOSIP nginx server internal ip
MOSIP nginx server public ip
Publicly accessible domains (comma separated, with no whitespace)
SSL cert path
SSL key path
Cluster node IPs (comma separated, no whitespace)
Post installation check:
sudo systemctl status nginx
Steps to uninstall nginx (in case it is required)
sudo apt purge nginx nginx-common
DNS mapping: Once the Nginx server is installed successfully, create DNS mappings for the MOSIP cluster related domains as mentioned in the DNS requirements section.
Check overall whether the Nginx and Istio wiring is set up correctly.
Install httpbin: This utility container returns the HTTP headers received inside the cluster. You may use it for general debugging - to check ingress, headers etc.
To see what is reaching the httpbin (example, replace with your domain name):
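A hedged example, assuming httpbin is reachable on the MOSIP internal API domain under a /httpbin path (adjust to your ingress configuration):

```
# The /headers endpoint echoes back the request headers received inside the cluster.
curl -s https://api-internal.sandbox.xyz.net/httpbin/headers
```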
Prometheus, Grafana and Alertmanager are used for cluster monitoring.
Select the 'Monitoring' App from Rancher console -> Apps & Marketplaces.
In Helm options, open the YAML file and disable Nginx Ingress.
Click on Install
.
Alerting is part of cluster monitoring, where alert notifications are sent to the configured email or slack channel.
Monitoring should already be deployed, which includes the deployment of Prometheus, Grafana and Alertmanager.
After setting up the Slack incoming webhook, update slack_api_url and slack_channel_name in alertmanager.yml.
cd $K8_ROOT/monitoring/alerting/
nano alertmanager.yml
Update:
Update Cluster_name in patch-cluster-name.yaml.
cd $K8_ROOT/monitoring/alerting/
nano patch-cluster-name.yaml
Update:
Install the default alerts along with the defined custom alerts:
Alerting is installed.
Install the Rancher FluentD system: for scraping logs out of all the microservices in the MOSIP K8s cluster.
Install Logging from Apps and marketplace within the Rancher UI.
Select Chart Version 100.1.3+up3.17.7 from Rancher console -> Apps & Marketplaces.
Configure Rancher FluentD
Create clusteroutput
kubectl apply -f clusteroutput-elasticsearch.yaml
Start clusterFlow
kubectl apply -f clusterflow-elasticsearch.yaml
Install elasticsearch, kibana and Istio addons
Set min_age in elasticsearch-ilm-script.sh and execute it. min_age is the minimum number of days for which indices will be retained in Elasticsearch.
MOSIP provides a set of Kibana dashboards for checking logs and throughput.
A brief description of these dashboards is as follows:
Import dashboards:
cd $K8_ROOT/logging
./load_kibana_dashboards.sh ./dashboards <cluster-kube-config-file>
View dashboards
Open kibana dashboard from https://kibana.sandbox.xyz.net
.
Kibana --> Menu (on top left) --> Dashboard --> Select the dashboard.
External dependencies are the set of external requirements needed for the functioning of MOSIP's core services, like DB, object store, HSM etc.
Now that the Kubernetes cluster and external dependencies are installed, we will continue with MOSIP service deployment.
This document outlines the necessary steps for upgrading the Platform from version 1.1.5.5-P1 to 1.2.0.1.
Postgres:
Change shareDomain in all the relevant policies to point to the latest datashare: change shareDomain's value from datashare-service to datashare.datashare in the policy_file_id column for each partner.
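A hedged sketch of the update (the pms.auth_policy table and mosip_pms database names are assumptions; the column and values follow the instruction above). Review the matched rows before committing:

```
# Replace the old datashare host in every policy document that still references it.
psql -h <postgres-host> -U postgres -d mosip_pms -c \
  "UPDATE pms.auth_policy
      SET policy_file_id = REPLACE(policy_file_id, 'datashare-service', 'datashare.datashare')
    WHERE policy_file_id LIKE '%datashare-service%';"
```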
Keycloak:
In Keycloak, it is important to ensure that the VID / UIN of each operator and supervisor is collected and updated in the individualId field. Failure to do so may cause complications during the onboarding or re-onboarding processes to new or existing machines, as well as during the biometrics update process for these users.
Activemq:
Clear all the objects along with the topics in ActiveMQ, or deploy a fresh instance of ActiveMQ with no previous data.
ABIS:
Stop and clear all the in-progress items, as they will be reprocessed freshly.
Review the queue names and update if required (mosip-to-abis and abis-to-mosip).
Manual adjudication system:
Stop and clear all the in-progress items as it will be reprocessed freshly.
Review the queue names and update if required (mosip-to-adjudication and adjudication-to-mosip).
Manual verification system:
Stop and clear all in-progress items as it will be reprocessed freshly.
Review the queue names and update if required (mosip-to-verification and verification-to-mosip).
Update the registration-processor-default.properties reprocess elapse time to a larger value to avoid reprocessing before migration is fully complete (registration.processor.reprocess.elapse.time=315360000).
Add the below properties to the syncdata-default.properties file if reg-client versions 1.1.5.4 and below are to be supported additionally.
Configuration property files are required to be updated for language-specific deployments. Please follow the below snippet.
Note: Ensure that the transliteration line is not commented out, even for a single language.
Please ensure that the mosip.regproc.packet.classifier.tagging.agegroup.ranges property is aligned with the camel route.xml file.
To begin, set up the Configuration server.
Next, configure and setup the Artifactory.
Execute the salt generation job to generate salts for the newly created table in regproc.
Run the key generation job to ensure that all new module keys comply with the key_policy_def table.
Note: Disable the masterdata loader and regproc-reprocessor.
Finally, restart all the services to take care of old data caching.
Initiate the regproc reprocessor.
Backup and delete any unnecessary tables and databases.
Manually remove the "mosip_regdevice" and "mosip_authdevice" databases, as they have been moved to "mosip_pms".
Delete all tables ending with "<table_name>_to_be_deleted" and "<table_name>_migr_bkp".
Remove any unnecessary roles for clients and users.
Below is the list of admin roles:
GLOBAL_ADMIN
ZONAL_ADMIN
REGISTRATION_ADMIN
MASTERDATA_ADMIN
KEY_MAKER
Here:
Green-colored cells represent persisted roles.
Blue-colored cells represent newly added roles.
Red-colored cells represent removed roles.
How to adjust the role accessibilities for existing users after upgrading to 1.2.0.1-x from 1.1.5.5-P1?
For a user having the GLOBAL_ADMIN role:
If a GLOBAL_ADMIN user is performing certificate related operations, then the KEY_MAKER role needs to be added to that user.
If a GLOBAL_ADMIN user is performing Packet Bulk Upload, then the REGISTRATION_ADMIN role needs to be added to that user.
For a user having the ZONAL_ADMIN role:
If a ZONAL_ADMIN user is performing certificate related operations, then the KEY_MAKER role needs to be added to that user.
If a ZONAL_ADMIN user is performing Packet Bulk Upload, then the REGISTRATION_ADMIN role needs to be added to that user.
For a user having the REGISTRATION_ADMIN role:
If a REGISTRATION_ADMIN user is performing certificate related operations, then the KEY_MAKER role needs to be added to that user.
For a user having the MASTERDATA_ADMIN role:
If a MASTERDATA_ADMIN user is performing GenerateCSR, then the KEY_MAKER role needs to be added to that user.
If a MASTERDATA_ADMIN user is performing Packet Bulk Upload, then the REGISTRATION_ADMIN role needs to be added to that user.
Note: A few new permissions were added to the MASTERDATA_ADMIN and KEY_MAKER roles. Please refer to the above role matrix table, and if there is any inconsistency in the accessibility or roles of an existing user, please reassign the roles to the user accordingly.
Applicant-type MVEL script usages in MOSIP modules :
This MVEL script is used to determine the type of applicant based on the captured demographic data during the registration process.
→ The set of rules to determine the type of applicant is written as an MVEL script.
→ The applicant data required for the evaluation is passed as the “identity” map to the MVEL context.
→ A def getApplicantType() method MUST be defined in the script. The string returned from this method should be a valid applicant type code or error code (KER-MSD-151, KER-MSD-147).
applicant_type_code - must be one of the values in the apptyp_code column of the “master.applicant_valid_document” table.
"KER-MSD-147" - returned when any of the demographics that are required for the script to return a code are empty (as per the default script, it throws this exception if gender, residenceStatus or age is not filled / empty).
"KER-MSD-151" - returned when the DOB exceeds the current date.
→ Data in the “identity” map are key-value pairs. The key is the field ID in the ID schema.
→ For fields which are based on dynamic field values (for example, gender), the “identity” map will have 2 mappings, genderCode and gender, where
identity.genderCode = “FLE”
identity.gender = “Female”
→ Age group configuration is also passed in the MVEL context as below
{ “ageGroups” : {'INFANT':'0-5','MINOR':'6-17','ADULT':'18-200'} }
and will be accessible as below in the script.
ageGroups.INFANT = “0-5”
ageGroups.MINOR = “6-17”
ageGroups.ADULT = “18-200”
Sample MVEL script is defined here https://github.com/mosip/mosip-config/blob/master/applicanttype.mvel
Note: In Pre-registration and Registration Client, the applicant-type code is used to control the list of documents to be collected during the registration process.
The applicant_type_code returned from this MVEL script will then be used to fetch the required documents from the master.applicant_valid_document table.
For example, if the script returns applicant_type_code as “001”, all those entries in the applicant_valid_document table with app_typ_code as “001” will be picked and shown in the respective document dropdowns.
The sample CSV file attached below lists the required entries for the master.applicant_valid_document table.
We can upload this default data from Admin Portal through Bulk Upload feature.
The steps to be followed are mentioned below:
Login to Admin Portal.
Navigate to Bulk Upload → Master Data.
Select the Insert operation, select the table name (ApplicantValidDocument) from the dropdown and upload the CSV file.
Click on Upload, which saves the uploaded data to the server DB.
Attaching screenshot for reference:
Modular Open Source Identity Platform (MOSIP) integrates a suite of Mock Services designed to emulate key functionalities of MOSIP services within the framework. In the development, testing, and demonstration phases, Mock Services will make available a controlled environment to evaluate MOSIP features.
This document details each of the mock services and explains its significance within the MOSIP architecture.
Below are the current set of Mock Services available in MOSIP, the services are subject to modifications, as may be planned, in future releases. Please refer to the latest available version of the document.
Simulates device services for testing, authentication and delete registration functionalities.
Allows developers to interact with a device-service environment without a physical device.
Mock MV (Manual Verification)
Reproduces the manual verification process for testing and validation purposes.
Enables the testing of manual verification workflows without human intervention.
Simulates the functionality of the Automated Biometric Identification System (ABIS).
Facilitates testing of biometric matching, search, and integration with ABIS without accessing production data.
Maintains resident biometric uniqueness through de-duplication.
Interfaces with MOSIP via message queues in JSON format.
Supports 1:N de-duplication and adheres to ABIS API Specifications.
Replicates MOSIP's Biometric Software Development Kit (SDK) for testing and debugging purposes.
Allows developers to integrate biometric functionalities into applications without connecting to physical device.
Used for 1:N match, quality, extraction, etc.
Simulation is available as Mock BioSDK, installed in the MOSIP sandbox.
Exposes REST APIs for 1:1 match and quality check at the MOSIP backend.
This document outlines the changes made to the camel route file following the migration.
In the 1.2.0.1 release, there is a default camel route file for each registration type, without any differentiation between the dmz and mz concepts. This is due to the transition from V2 to V3 deployment, which is mandatory.
Workflow commands are implemented to handle the isValid and internal error, with the primary purpose of making important decisions regarding the overall workflow state. Previously, these decisions were made within each individual stage, but now we are transferring them to the camel route, allowing for easier customization by different countries. This change grants more flexibility in controlling the workflow and reduces the reliance on specific stages for decision-making. It is mandatory for the registration table to be updated with packet processing results in each stage, whether successful or failed, excluding the status code. The example below demonstrates one of the workflow commands utilized in routes.
<to uri="workflow-cmd://complete-as-failed" />
Below are the workflow commands:
The OSI validator stage is divided into four stages: operator-validator stage, supervisor-validator stage, introducer-validator stage, and cmd-validator stage. If a packet is determined to be valid, it will be directed from the packet classifier to the cmd-validator stage. This step is mandatory.
Tags will be created in the packet classifier stage. Depending on the tags, the packet will be transferred from the command validator stage to either the supervisor or operator stage. In order to introduce validation and check tags, packets will be moved accordingly. The availability of tags allows us to modify camel routes. (Mandatory)
The quality checker stage has been updated to the quality classifier stage. Packets are now transferred from the supervisor, operator, and introducer stages to the quality classifier stage, depending on the designated route. This change is mandatory.
Instead of manual verification in section 1.1.5.5, it is now replaced with manual adjudication in section 1.2.0.1. Additionally, a new route has been specified from the manual adjudication stage to the UIN generator stage in the XML route. In cases where duplicates are identified, the manual adjudication stage is added to the route after the demo dedupe and bio dedupe processes. This change is mandatory.
New route has been added from UIN generator stage to biometric-extraction stage. This stage fetches biometric extraction policy from PMS and sends to ID Repository (mandatory).
New route has been added from biometric-extraction stage to Finalization stage. This stage publishes draft version to ID Repository DB (Mandatory).
New route has been added from Finalization stage to Printing stage. This stage creates Credential Request for printing systems (mandatory).
Based on tags related to the quality score, the packets will move from the quality classifier to workflow-cmd://pause-and-request-additional-info or demodedupe (optional).
We can also use JSON path and update the conditions. If we want to update the route based on isValid and internalError, then follow the below syntax (optional).
If we wish to use a check condition based on address, then follow the below syntax (optional).
If we wish to use a check condition based on tags, then follow the below syntax (optional):
If we want to set a property in the camel route, then follow the steps below.
This property is used for the PAUSE and RESUME feature. We cannot set application properties here (optional).
As a part of the 1.2.0.1 update, if no biometric data is available, the system will proceed to the verification stage. This stage is only relevant in cases where the required biometrics are missing from the packet. The system will send the applicant's demographic and biometric information to the external Verification System (VS) through a queue and Datashare. Upon receiving the decision from the VS, the system will proceed accordingly and forward the packets. In case of rejection, the applicant will be notified (optional).
A new route is specified from the verification stage to the UIN generator stage in route.xml.
Now the securezone-notification stage can consume from the securezone-notification-bus-in address, which is from the packet receiver stage. We can even use http://regproc-group2.regproc/registrationprocessor/v1/securezone/notification instead of http://mz.ingress:30080/registrationprocessor/v1/securezone/notification (optional).
1.1.5.5
1.2.0.1
Wireguard is used as a trust network extension to access the admin, control, and observation plane.
MOSIP uses an Nginx server for:
The Kubernetes cluster is administered using the Rancher and kubectl tools.
Rancher is used for managing the MOSIP cluster.
Keycloak in this cluster is used for cluster user access management.
k8s-infra: contains the scripts to install and configure the Kubernetes cluster with the required monitoring, logging and alerting tools.
mosip-infra: contains the deployment scripts to run charts in a defined sequence.
mosip-config: contains all the configuration files required by the MOSIP modules.
mosip-helm: contains packaged helm charts for all the MOSIP modules.
kubectl - any client version above 1.19
helm - any client version above 3.0.0; add the below repos as well:
: version: 1.15.0
: version:
For production deployments, edit the cluster.yml according to the referenced documentation.
kube_config_cluster.yml: the kubeconfig file for the cluster; this file contains credentials for full access to the cluster.
cluster.rkestate: the cluster state file; this file contains credentials for full access to the cluster.
The Nginx ingress controller is used for ingress in the Rancher cluster.
If you are already using VMware virtual machines, you can proceed with the vSphere storage class.
Renew the wildcard SSL certificate. This will increase the validity of the certificate for the next 3 months.
Clone k8s-infra.
Open page.
Keycloak is an OAuth 2.0 compliant Identity Access Management (IAM) system used to manage the access to Rancher for cluster controls.
Enable authentication with Keycloak using the steps given in the referenced guide.
If you want to create custom roles, you can follow the steps given in the referenced guide.
For production deployments, edit the cluster.yml according to the referenced documentation.
kube_config_cluster.yml: the kubeconfig file for the cluster; this file contains credentials for full access to the cluster.
cluster.rkestate: the cluster state file; this file contains credentials for full access to the cluster.
Ingress setup: Istio is used as the service mesh for the MOSIP K8s cluster; it provides transparent layers on top of existing microservices along with powerful features enabling a uniform and more efficient way to secure, connect, and monitor services.
If you are already using VMware virtual machines, you can proceed with the vSphere storage class.
Renew the wildcard SSL certificate. This will increase the validity of the certificate for the next 3 months.
Create .
MOSIP uses Rancher Fluentd and Elasticsearch to collect logs from all services and reflects the same in the Kibana dashboard.
01-logstash.ndjson contains the logstash index pattern required by the rest of the dashboards.
02-error-only-logs.ndjson contains a Search dashboard which shows only the error logs of the services, called the MOSIP Error Logs dashboard.
03-service-logs.ndjson contains a Search dashboard which shows all logs of a particular service, called the MOSIP Service Logs dashboard.
04-insight.ndjson contains dashboards which show insights into MOSIP processes, like the number of UINs generated (total and per hour), the number of biometric deduplications processed, the number of packets uploaded etc., called the MOSIP Insight dashboard.
05-response-time.ndjson contains dashboards which show how quickly different MOSIP services respond to different APIs over time, called the Response Time dashboard.
Check the detailed installation instructions of all the external components.
Check the detailed installation steps.
Check and remove the duplicate thumbprint entries in the keymanager ca_cert_store. Refer to the linked document to know more.
Refer to the linked document for the DB upgrade scripts to update the DB.
Check and rectify the partner name mismatch issue for certificate renewal. To know more, refer to the linked document.
Follow the linked guide to check the validity of the partner certificate and for renewal/extension if required.
Check the MVEL expression, ID schema and document mappings, and add the required applicant document mappings. Refer to the linked document to know more.
Follow the steps mentioned in the linked guide to execute the upgrade keycloak init with import-init.yaml.
Verify all the existing users of admin and update the roles according to the latest role matrix. To know more about the existing users, refer to the linked document.
Manually update roles for client IDs that have been added as part of customization. For more information about the changes, please refer to the linked document.
Run the data movement to the necessary three tables using the provided script. Afterward, run the migration script to re-encrypt the data and perform the movement of data from the bucket to the folder (this step is only necessary if pre-registration has been upgraded from version 1.1.3.x). Please consult the provided documentation for detailed instructions on how to carry out the data movement process.
Refer to the linked guide to run the property migration script.
Take the latest version of the identity-mapping.json file (1.2.0.1) from mosip-config and update the mapping values based on the country's ID schema. Please refer to the linked document for instructions on making the necessary updates.
Additionally, make adjustments to the mvel config file for the application type according to each country's specific requirements. For more details on how to modify the mvel config file, please refer to the linked document.
The camel routes need to be modified to accommodate the new workflow commands and ensure proper integration with external subsystems such as manual adjudication and manual verification. To understand the specific changes required, refer to the linked document.
Proceed with the installation in the specified sequence. Refer to the provided documentation for the correct order.
To resend the partner and policy details to IDA, please run the PMS utility job once. You can find the steps to run the job in the linked document.
The UI specs for pre-registration should be published via the MasterData API in version 1.2.0. Previously, in version 1.1.5, the UI specs were saved in the config server. To upgrade the UI specs, please refer to the linked document.
To proceed with the masterdata country-specific upgrade scripts, please follow the instructions outlined in the linked document.
Please create all the required applicant type details according to the applicanttype.mvel file created in the property migration section. For more information, please refer to the linked document.
Starting from version 1.2.0.1, it is mandatory to prepend the thumbprint for all encryptions. Therefore, we need to ensure that the certificate thumbprint for a particular partner exactly matches in both the keymanager and IDA key_alias tables. To learn how to check thumbprints and for further steps, please refer to the linked document.
Please check and rectify any mixed case user names in the user details and zone mapping. For more information, refer to the linked document.
Configure the Registration Client upgrade at the server side. Please refer to the linked document for further instructions.
Run the query to identify all the packets stuck between the stages. Use the manual reprocess utility to reprocess all the RIDs found using the above query. Please refer to the linked document to carry out the reprocessing.
In case packets continue to fail due to performance issues, follow the steps mentioned in the linked document to process packets from the beginning.
Perform the ID Repository tasks. Run the archival script and reprocess SQL script on the credential transaction table as specified in the linked document.
Ensure that the datashare property is properly configured in the ABIS policy for the domain. Please refer to the linked document for more detailed information.
When the admin portal becomes accessible, the admin user should generate the master keys that have been recently added to the key_policy_def table. This can be done using the admin UI master key generation page (Keymanager) for the ADMIN_SERVICES and RESIDENT roles. Only proceed with this step if the corresponding entries are not already available in the key_alias table of keymanager. For more detailed instructions, please consult the provided documentation.
During the pre-registration upgrade process, if the encryption key is REGISTRATION, which is an old key, it must be updated. To update the encryption key, please refer to the migration utility process in the linked document.
Location: repository
| Sl. No. | Purpose | vCPUs | RAM | Storage (HDD) | No. of VMs | HA |
| --- | --- | --- | --- | --- | --- | --- |
| 1. | Wireguard Bastion Host | 2 | 4 GB | 8 GB | 1 | (ensure to setup active-passive) |
| 2. | Observation Cluster nodes | 2 | 8 GB | 32 GB | 2 | 2 |
| 3. | Observation Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 2 | Nginx+ |
| 4. | MOSIP Cluster nodes | 12 | 32 GB | 128 GB | 6 | 6 |
| 5. | MOSIP Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 1 | Nginx+ |
| Sl. No. | Purpose | Network interfaces |
| --- | --- | --- |
| 1. | Wireguard Bastion Host | One private interface: on the same network as all the rest of the nodes (e.g. inside the local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 51820/udp to this interface IP. |
| 2. | K8 Cluster nodes | One internal interface: with internet access and on the same network as all the rest of the nodes (e.g. inside the local NAT network). |
| 3. | Observation Nginx server | One internal interface: with internet access and on the same network as all the rest of the nodes (e.g. inside the local NAT network). |
| 4. | MOSIP Nginx server | One internal interface: on the same network as all the rest of the nodes (e.g. inside the local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 443/tcp to this interface IP. |
| Sl. No. | Domain name | Mapping details | Purpose |
| --- | --- | --- | --- |
| 1. | rancher.xyz.net | Private IP of Nginx server or load balancer for Observation cluster | Rancher dashboard to monitor and manage the Kubernetes cluster. |
| 2. | keycloak.xyz.net | Private IP of Nginx server for Observation cluster | Administrative IAM tool (Keycloak). This is for the Kubernetes administration. |
| 3. | sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Index page for links to different dashboards of the MOSIP env. (This is just for reference; please do not expose this page in a real production or UAT environment.) |
| 4. | api-internal.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Internal APIs are exposed through this domain. They are accessible privately over the Wireguard channel. |
| 5. | api.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | All the APIs that are publicly usable are exposed using this domain. |
| 6. | prereg.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Domain name for MOSIP's pre-registration portal. The portal is accessible publicly. |
| 7. | activemq.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Provides direct access to the ActiveMQ dashboard. It is limited and can be used only over Wireguard. |
| 8. | kibana.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Optional installation. Used to access the Kibana dashboard over Wireguard. |
| 9. | regclient.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | The Registration Client can be downloaded from this domain. It should be used over Wireguard. |
| 10. | admin.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP's admin portal is exposed using this domain. This is an internal domain and access is restricted to Wireguard. |
| 11. | object-store.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Optional. This domain is used to access the object server. Based on the object server that you choose, map this domain accordingly. In our reference implementation, MinIO is used and this domain lets you access MinIO's console over Wireguard. |
| 12. | kafka.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Kafka UI is installed as part of MOSIP's default installation. We can access the Kafka UI over Wireguard. Mostly used for administrative needs. |
| 13. | iam.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP uses an OpenID Connect server to limit and manage access across all the services. The default installation comes with Keycloak. This domain is used to access the Keycloak server over Wireguard. |
| 14. | postgres.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | This domain points to the Postgres server. You can connect to Postgres via port forwarding over Wireguard. |
| 15. | pmp.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP's partner management portal, used to manage partners; access the partner management portal over Wireguard. |
| 16. | onboarder.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Accessing reports of MOSIP partner onboarding over Wireguard. |
| 17. | resident.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Accessing the resident portal publicly. |
| 18. | idp.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Accessing IDP publicly. |
| 19. | smtp.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Accessing the mock-smtp UI over Wireguard. |
Centers
Centers
Packet Status
Devices
GenerateMasterKey
User Zone Mapping
Devices
Pause/ Resume RID
Machines
GenerateCSR
All Master Data
Machines
Retrieve Lost RID
All Master Data
GetCertificate
Masterdata Bulk Upload
User Zone Mapping
Packet Bulk Upload
Masterdata Bulk Upload
UploadCertificate
Packet Bulk Upload
User Center Mapping
UploadCertificate
GenerateCSR
UploadOtherDomainCertificate
GenerateCSR
All Master Data
Upload OtherDomainCertificate
Devices
GetCertificate
Masterdata Bulk Upload
Machines
UploadCertificate
GenerateCSR
Upload OtherDomainCertificate
UploadCertificate
Upload OtherDomainCertificate
Packet Bulk Upload
| Module Name | Before LTS | LTS |
| --- | --- | --- |
| Pre-registration | Yes | Yes |
| Registration Client | No | No |
| Workflow command | Description |
| --- | --- |
| workflow-cmd://complete-as-processed | The status code will be updated to "PROCESSED" and a websub event will be sent to the notification service for notification purposes. Additionally, it will check if there is an additional request ID present. If so, a tag will be created with a specific registration type and flow status set as "PROCESSED". Furthermore, the latest transaction status code of the main flow will be updated to "REPROCESS" in order to resume processing. Lastly, notifications will be added for failed, rejected, processed, and pause-and-request-additional-info records within the workflow. |
| workflow-cmd://complete-as-rejected | The status code will be updated to "REJECTED" and a websub event will be sent to the notification service for notification. Additionally, it will verify if there are any additional request IDs present. If so, a tag will be created for the specific registration type with a flow status of "REJECTED", and the processing of the main flow will be resumed. |
| workflow-cmd://complete-as-failed | The status code will be updated to FAILED, and a websub event will be sent to the notification service for notifying relevant parties. Additionally, if there is an additional request ID present, a tag will be created with the corresponding registration type and flow status as FAILED. Following this, the processing of the main flow will resume. |
| workflow-cmd://mark-as-reprocess | It will update the status code to REPROCESS. It will create a tag with the particular registration type with flow status as FAILED. |
| workflow-cmd://anonymous-profile | Stores packet details in the anonymous profile table. |
| workflow-cmd://pause-and-request-additional-info | It will verify if there is an additional request ID. If there is, it will update the status code to "FAILED" and the latest transaction status code of the main workflow to "REPROCESS" in order to resume processing of the main workflow. If there is no additional request ID, it will update the status code to "PAUSED_FOR_ADDITIONAL_INFO" and create the additional request ID. It will then send a websub event to the notification service to trigger a notification. |
All MOSIP services are packaged as Helm charts for ease of installation on the Kubernetes cluster. The source code of the Helm charts is available in the mosip-helm repository. The packaged charts (*.tgz) are checked into the gh-pages branch of the mosip-helm repo. GitHub automatically hosts them at https://mosip.github.io/mosip-helm/index.yaml. See the sections below for further details.
Refer Versioning.
Make sure the version in Chart.yaml is updated for all charts when a new branch is created on mosip-helm.
To install the charts, add the repository as below:
To publish charts manually follow these steps:
In the branch where changes have been made, run the following from the mosip-helm folder:
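A sketch of the packaging step (the charts/ directory layout is an assumption; adjust the path to the repo structure):

```
# Package every chart; this produces one <chart>-<version>.tgz per chart in the current directory.
helm package charts/*
```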
You will see packaged .tgz files created in the current directory.
Copy the .tgz files to the gh-pages branch of the mosip-helm repo. You can clone another copy of the repo and check out the gh-pages branch to achieve this.
Run:
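A hedged sketch of the publish step, run from the gh-pages working copy after the .tgz files are copied in (the index URL is the hosting location mentioned above):

```
# Regenerate index.yaml so it references the newly added packages, then push.
helm repo index . --url https://mosip.github.io/mosip-helm
git add . && git commit -m "Publish updated charts" && git push origin gh-pages
```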
If a country desires to designate specific data sharing to be transmitted on HTTP or HTTPS endpoints, they should accomplish this by including the Domain URL of the data sharing in the policy field itself. This will have priority over any other settings.
Furthermore, two new fields have been incorporated into the policy:
shareDomainUrlWrite: This field should be employed by individual modules when calling the data sharing functionality to write data.
shareDomainUrlRead: This field should be used by the data sharing functionality when generating a URL to share with modules for reading data.
Note: It is important to note that these fields are compatible with previous versions and are not obligatory to include in all policies. They can be utilized only if a country sees a need for the new features.
First, we will compare the thumbprints in the key_alias tables' thumbprint column of the mentioned IDA and Keymanager DB.
To check if the thumbprints are the same in both databases, we can follow these steps. For demonstration purposes, we will use 'mpartner-default-auth' as an example.
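A hedged sketch of the comparison (database and schema names are assumptions; the key_alias table and thumbprint column are the ones referenced above):

```
# Compare the thumbprint stored by Keymanager with the one stored by IDA for the same partner.
psql -d mosip_keymgr -c \
  "SELECT app_id, ref_id, thumbprint FROM keymgr.key_alias WHERE ref_id = 'mpartner-default-auth';"
psql -d mosip_ida -c \
  "SELECT app_id, ref_id, thumbprint FROM ida.key_alias WHERE ref_id = 'mpartner-default-auth';"
```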
In the results of the above query, if it is found that the thumbprints do not match, the next objective is to take the MOSIP signed certificate from keymanager and store it in IDA manually, so that they match.
Here is a simple method to accomplish that task.
A. Perform the required authentication at authmanager portal using the below swagger URL
Sample request body:
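A hedged example of the authentication call (the endpoint path follows the common authmanager clientidsecretkey route; the appId, clientId and secret are placeholders to be replaced with your environment's values):

```
curl -s -X POST "https://api-internal.dev.mosip.net/v1/authmanager/authenticate/clientidsecretkey" \
  -H "Content-Type: application/json" \
  -d '{
        "id": "string",
        "version": "string",
        "requesttime": "2023-01-01T00:00:00.000Z",
        "metadata": {},
        "request": {
          "appId": "<app-id>",
          "clientId": "<client-id>",
          "secretKey": "<client-secret>"
        }
      }'
```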
B. Get the certificate using following swagger URL
In the app_id field use PARTNER; in the ref_id field use the name of the partner whose cert thumbprints are mismatching, such as mpartner-default-auth.
Sample response:
C. Now, reauthenticate in the same authmanager URL (note the different clientId , appId and corresponding secret key changes )
https://api-internal.dev.mosip.net/v1/authmanager/swagger-ui/index.html?configUrl=/v1/authmanager/v3/api-docs/swagger-config#/authmanager/clientIdSecretKey
Sample Request
D. After getting the certificate through step B mentioned above, copy it and use it in the following POST request in the below swagger URL:
https://api-internal.dev.mosip.net/idauthentication/v1/internal/swagger-ui/index.html?configUrl=/idauthentication/v1/internal/v3/api-docs/swagger-config#/keymanager/uploadCertificate
In the applicationId field use IDA, and in the referenceId field use the name of the partner whose cert thumbprints are mismatching, such as mpartner-default-auth.
Sample request
After successfully completing this final step, we can proceed to the SQL cmd check mentioned at the beginning of this document and ensure that the thumbprints now match.
Always ensure that you are using the correct base-url for your environment. In our case, it is dev.mosip.net and this should be used in all swagger links. Make sure to change it according to your requirement.
If you encounter an error code such as "errorCode": "500", "message": "401 Unauthorized", please re-authenticate using the authmanager token provided and ensure that you are using the proper credentials.
If you receive a 400 Bad request error, please resend your request with the correct time format and verify that your request JSON is in the specified format.
If you encounter any other issues, please remember to post your queries on the MOSIP Community.
As part of the migration process, we will be updating to the latest version of the identity-mapping.json file (1.2.0.1) from mosip-config. This update involves modifying the mapping values to align with the ID schema of the respective country.
To guide you through the updating process, please refer to the following information:
In the provided sample identity-mapping.json, the focus will be solely on modifying the mapper values to match the ID schema of the country.
According to the identity-mapping.json file mentioned above, we need to verify if a value is present in the country's ID schema. If the value exists, we can retain it as is. Otherwise, we should update the value in the identity-mapping.json file.
To illustrate, let's consider a few examples:
The fullName field is not included in the ID schema. Instead, it consists of firstName, middleName, and lastName. Therefore, we should replace the fullName with firstName, middleName, and lastName in the identity-mapping.json file.
Similarly, the introducerUIN field is not present in the schema, but instead, it has introducerCredentialID. Hence, we need to substitute introducerUIN with introducerCredentialID in the mapping.json file.
Additionally, since addressLine1 is not part of the schema, we should replace its value with presentAddressLine1, which is present.
Lastly, the phone field is not found in the schema, but mobileno is present. Thus, we need to replace phone with mobileno in the mapping.json file.
Our task is to compare each field value in the identity-mapping.json file with the ID schema and update it with the appropriate value based on the schema.
After running the reprocess utility and regproc reprocessor, it is possible that a few packets may end up in the FAILED status. This can occur due to various reasons such as environment instability or high parallel processing of packets. The following steps will help in identifying if such packets exist and how to handle them.
To handle these packets, please follow the below steps:
Use the sample query below to find out if there are packets in a non-recoverable state:
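A hedged sample (the mosip_regprc database and the table/column names are assumptions; substitute the two timestamps as explained below):

```
psql -d mosip_regprc -c \
  "SELECT id, status_code, latest_trn_status_code
     FROM regprc.registration
    WHERE status_code = 'FAILED'
      AND cr_dtimes < '<upgrade-completion-time>'
      AND latest_trn_dtimes > '<upgrade-completion-time>';"
```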
Here, cr_Dtimes should be less than the time of upgrade completion and the processing of the first packet. latest_trn_Dtimes should be greater than the time of upgrade completion and the processing of the first packet. If there are no packets in this status, no further action is required. If any packets are found with the above status, proceed to step 2.
Before running the reprocess utility to process the packet from the beginning as per the APPROACH 1 in document, update the DEFAULT QUERY in the config.py file as per the requirements to process non-recoverable records.
The sample query is as follows:
Here, the status code should be set to FAILED. cr_Dtimes should be less than the time of upgrade completion and the processing of the first packet. latest_trn_Dtimes should be greater than the time of upgrade completion and the processing of the first packet.
After changing the query in config.py, please refer to the documentation on how to set up and run the reprocessor script.
To reprocess packets, use the following command:
In previous versions (1.1.5.x) of our system, we utilized the mosip-partner-client for Partner Management Services (PMS). However, starting from version 1.2.0.1 onwards, we have implemented the use of mosip-pms-client instead. This transition has led to updates in service account roles, client scopes, and client configurations.
Please find below the details of the changes made to service account roles and client scopes.
offline access
CREATE_SHARE
REGISTRATION_PROCESSOR
default_roles_mosip
uma_authorization
DEVICE_PROVIDER
PARTNER
PARTNER_ADMIN
PMS_ADMIN
PMS_USER
PUBLISH_APIKEY_APPROVED_GENERAL
PUBLISH_APIKEY_UPDATED_GENERAL
PUBLISH_CA_CERTIFICATE_UPLOADED_GENERAL
PUBLISH_MISP_LICENSE_GENERATED_GENERAL
PUBLISH_MISP_LICENSE_UPDATED_GENERAL
PUBLISH_OIDC_CLIENT_CREATED_GENERAL
PUBLISH_OIDC_CLIENT_UPDATED_GENERAL
PUBLISH_PARTNER_UPDATED_GENERAL
PUBLISH_POLICY_UPDATED_GENERAL
REGISTRATION_PROCESSOR
SUBSCRIBE_CA_CERTIFICATE_UPLOADED_GENERAL
ZONAL_ADMIN
add_oidc_client
profile
roles
get_certificate
web-origins
profile
roles
send_binding_otp
update_oidc_client
uploaded_certificate
wallet_binding
web_origins
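If you want to verify the roles now attached to the service account of mosip-pms-client, Keycloak's admin CLI can be used; a sketch is given below (the IAM URL and the mosip realm name are assumptions, adjust to your deployment):

```bash
# Authenticate the Keycloak admin CLI (drop /auth for Keycloak 17+).
./kcadm.sh config credentials --server https://iam.sandbox.xyz.net/auth --realm master --user admin

# Locate the client, then inspect its service account and role mappings.
./kcadm.sh get clients -r mosip -q clientId=mosip-pms-client --fields id,clientId
./kcadm.sh get clients/<client-uuid>/service-account-user -r mosip
./kcadm.sh get users/<service-account-user-id>/role-mappings -r mosip
```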
In version 1.1.5.x, the mosip-admin-client
was utilized for administrative services. We are also continuing to utilize the same client in version 1.2.0.1. While there have been modifications to the service account roles, the Client scopes have remained unchanged. Please find below the updated service account role adjustments. Additionally, it is worth noting that MOSIP Commons is also utilizing this client.
Service account roles for Admin-Services:
MASTERDATA_ADMIN
Default-roles-mosip
offline_access
ZONAL_ADMIN
uma_authorization
offline-access
PUBLISH_MASTERDATA_IDAUTHENTICATION_TEMPLATES_GENERAL
PUBLISH_MASTERDATA_TITLES_GENERAL
PUBLISH_MOSIP_HOTLIST_GENERAL
uma_authorization
Client scopes are the same for mosip-admin-client in 1.2.0.1 & 1.1.5.1
profile
roles
web-origins
In version 1.1.5.x, we utilized the 'mosip-prereg-client' for Pre-Registration. This client is also utilized in version 1.2.0.1. There have been modifications in the service account roles, while the client scopes have remained unchanged. Please find below the updated service account roles.
Service account roles for Pre-Registration:
INDIVIDUAL
offline_access
PRE_REGISTRATION_ADMIN
PREREG
REGISTRATION_PROCESSOR
uma_authorization
default_roles_mosip
PRE_REGISTRATION_ADMIN
PREREG
REGISTRATION_PROCESSOR
Note: Prior to proceeding with the upgrade, please ensure that the INDIVIDUAL
role has been removed.
Client scopes are the same for mosip-prereg-client in 1.2.0.1 & 1.1.5.1
profile
roles
web-origins
In the previous version 1.1.5.x, the mosip-ida-client
module was responsible for handling ID authentication. However, starting from version 1.2.0.1, we have switched to using mpartner-default-auth
for this purpose. This transition has brought about several changes, including modifications to service account roles, client scopes, and client configurations. Below is an overview of the changes in service account roles and client scopes.
Service account roles for id-authentication:
AUTH
AUTH_PARTNER
ID_AUTHENTICATION
offline_access
uma_authorization
CREDENTIAL_REQUEST
default_roles_mosip
ID_AUTHENTICATION
offline_access
PUBLISH_ANONYMOUS_PROFILE_GENERAL
PUBLISH_AUTH_TYPE_STATUS_UPDATE_ACK_GENERAL
PUBLISH_AUTHENTICATION_TRANSACTION_STATUS_GENERAL
PUBLISH_CREDENTIAL_STATUS_UPDATE_GENERAL
PUBLISH_IDA_FRAUD_ANALYTICS_GENERAL
SUBSCRIBE_ACTIVATE_ID_INDIVIDUAL
SUBSCRIBE_APIKEY_APPROVED_GENERAL
SUBSCRIBE_APIKEY_UPDATED_GENERAL
SUBSCRIBE_AUTH_TYPE_STATUS_UPDATE_ACK_GENERAL
SUBSCRIBE_AUTH_TYPE_STATUS_UPDATE_INDIVIDUAL
SUBSCRIBE_CA_CERTIFICATE_UPLOADED_GENERAL
SUBSCRIBE_CREDENTIAL_ISSUED_INDIVIDUAL
SUBSCRIBE_DEACTIVATE_ID_INDIVIDUAL
SUBSCRIBE_MASTERDATA_IDAUTHENTICATION_TEMPLATES_GENERAL
SUBSCRIBE_MASTERDATA_TITLES_GENERAL
SUBSCRIBE_MISP_LICENSE_GENERATED_GENERAL
SUBSCRIBE_MISP_LICENSE_UPDATED_GENERAL
SUBSCRIBE_MOSIP_HOTLIST_GENERAL
SUBSCRIBE_OIDC_CLIENT_CREATED_GENERAL
SUBSCRIBE_OIDC_CLIENT_UPDATED_GENERAL
SUBSCRIBE_PARTNER_UPDATED_GENERAL
SUBSCRIBE_POLICY_UPDATED_GENERAL
SUBSCRIBE_REMOVE_ID_INDIVIDUAL
uma_authorization
Client Scopes for id-authentication:
profile
roles
web-origins
add_oidc_client
profile
roles
update_oidc_client
web-origins
In the previous version, 1.1.5.x, we did not employ any clients for our digital card service. However, in the latest version, 1.2.0.1, we have implemented the use of the mpartner-default-digitalcard
client. Please find below the service account roles and client scopes associated with the mpartner-default-digitalcard
client.
Service account roles assigned to mpartner-default-digitalcard in 1.2.0.1:
CREATE_SHARE
CREDENTIAL_REQUEST
default_roles_mosip
PRINT_PARTNER
PUBLISH_CREDENTIAL_STATUS_UPDATE_GENERAL
SUBSCRIBE_CREDENTIAL_ISSUED_INDIVIDUAL
SUBSCRIBE_IDENTITY_CREATED_GENERAL
SUBSCRIBE_IDENTITY_UPDATED_GENERAL
Client scopes assigned to mpartner-default-digitalcard in 1.2.0.1:
profile
roles
web-origins
In version 1.1.5.x, we did not employ any clients for printing. However, beginning from version 1.2.0.1, we utilize the mpartner-default-print
client. Please find below the service account roles and client scopes associated with the mpartner-default-print
client.
Service account roles assigned to mpartner-default-print in 1.2.0.1:
CREATE_SHARE
default_roles_mosip
PUBLISH_CREDENTIAL_STATUS_UPDATE_GENERAL
SUBSCRIBE_CREDENTIAL_ISSUED_INDIVIDUAL
Client scopes assigned to mpartner-default-print in 1.2.0.1:
profile
roles
web-origins
In version 1.1.5.x, we utilized the mosip-regproc-client
for id-repository. Starting from version 1.2.0.1, we have transitioned to using mosip-idrepo-client
. This switch has led to modifications in service account roles, client scopes, and client settings. Below are the details of the changes in service account roles and client scopes.
Client Scopes for id-repository:
profile
roles
web-origins
profile
roles
web-origins
Service account roles for id-repository:
ABIS_PARTNER
CENTRAL_ADMIN
CENTRAL_APPROVER
CREDENTIAL_INSURANCE
CREDENTIAL_PARTNER
Default
DEVICE_PROVIDER
DIGITAL_CARD
FTM_PROVIDER
GLOBAL_ADMIN
INDIVIDUAL
KEY_MAKER
MASTERDATA_ADMIN
MISP
MISP_PARTNER
ONLINE_VERIFICATION_PARTNER
POLICYMANAGER
PRE_REGISTRATION
PRE_REGISTRATION_ADMIN
PREREG
REGISTRATION_ADMIN
REGISTRATION_OFFICER
REGISTRATION_OPERATOR
REGISTRATION_SUPERVISOR
ZONAL_ADMIN
ZONAL_APPROVER
default_roles_mosip
ID_REPOSITORY
offline_access
PUBLISH_ACTIVATE_ID_ALL_INDIVIDUAL
PUBLISH_AUTH_TYPE_STATUS_UPDATE_ALL_INDIVIDUAL
PUBLISH_AUTHENTICATION_TRANSACTION_STATUS_GENERAL
PUBLISH_DEACTIVATE_ID_ALL_INDIVIDUAL
PUBLISH_IDENTITY_CREATED_GENERAL
PUBLISH_IDENTITY_UPDATED_GENERAL
PUBLISH_REMOVE_ID_ALL_INDIVIDUAL
PUBLISH_VID_CRED_STATUS_UPDATE_GENERAL
SUBSCRIBE_VID_CRED_STATUS_UPDATE_GENERAL
uma_authorization
In version 1.1.5.x, we utilized the mosip-resident-client
for Resident Services. This client is also employed in version 1.2.0.1. Although there were modifications in service account roles, the client scopes remain unchanged. Below the details of the alterations made in service account roles.
Service account roles for Resident-Services:
CREDENTIAL_ISSUANCE
CREDENTIAL_REQUEST
offline_access
RESIDENT
uma_authorization
CREDENTIAL_REQUEST
default_roles_mosip
offline_access
RESIDENT
SUBSCRIBE_AUTH_TYPE_STATUS_UPDATE_ACK_GENERAL
SUBSCRIBE_AUTHENTICATION_TRANSACTION_STATUS_GENERAL
SUBSCRIBE_CREDENTIAL_STATUS_UPDATE_GENERAL
uma_authorization
Client Scopes for Resident-Services:
profile
roles
web-origins
ida_token
individual_id
profile
roles
web-origins
In previous iterations (1.1.5.x) of our system, we did not employ any clients for the compliance toolkit. However, beginning with version 1.2.0.1, we have implemented the use of mosip_toolkit_client
. The following information outlines the service account roles and client scopes associated with mosip_toolkit_client
.
Service account roles assigned to mosip_toolkit_client in 1.2.0.1:
default_roles_mosip
Client scopes assigned to mosip_toolkit_client in 1.2.0.1:
profile
roles
web-origins
Here's how to fix it.
The key point to note here is that MOSIP only accepts partners whose client certificates have a minimum of 1 year of validity remaining. MOSIP re-signs the client certificate with a 1-year validity. The respective partner must renew their certificate before the MOSIP-signed certificate expires in order to continue communication with MOSIP.
However, if this renewal is not done, the certificates will expire.
Here is a three-step process to address this scenario:
a. How to check the validity of your partner's certificate?
To check the validity, the user must have access to the database.
The user can access the mosip_keymanager
database and open two tables: ca_cert_store
and partner_cert_store
.
The ca_cert_store
table stores the CA and SUBCA/Intermediate CA certificates, while the partner_cert_store
table stores the PARTNER/CLIENT certificates. In both of these tables, there are columns named cert_not_before
and cert_not_after
which provide details about the validity of the partner's certificate.
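A minimal query sketch for this check (the keymgr schema name is an assumption; adjust to your deployment):

```sql
-- Sketch: inspect the validity window of CA and partner/client certificates.
SELECT cert_not_before, cert_not_after FROM keymgr.ca_cert_store ORDER BY cert_not_after;
SELECT cert_not_before, cert_not_after FROM keymgr.partner_cert_store ORDER BY cert_not_after;
```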
b. How to extend the validity of the partner's certificate?
There are two categories of partners: MOSIP Internal partners (e.g., IDA) and External partners (e.g., Device Partners, MISP partners).
For MOSIP Internal partners, if the validity of their certificate needs to be increased, the onboarder script can be run again for that specific partner. The onboarder will create fresh certificates and upload them to extend the validity.
For External Partners, they will need to obtain fresh certificates from their CA and upload them to extend their validity.
c. Troubleshooting common errors:
KER-ATH-401 Authentication Failed
If the user encounters this error code, it means that the user is not authenticated. The user should authenticate first before using the API.
KER-PCM-005 Root CA Certificate not found
.
If the user encounters this error code, it means that there is an issue with either the CA or SUBCA certificate. The user must resolve the CA/SUBCA issue first.
In the registration processor, there was an issue where packets were failing at the supervisor validation stage when the username of the supervisor was entered in a different case than it appeared in the database. To resolve this issue, a case insensitive search will be implemented to retrieve the username of the supervisor during the validation stage.
In order for this fix to work properly, it is necessary for the user_id
column of the user_details
table in the master database to not contain any case insensitive duplicates.
If there are any duplicates in this table, packets will fail at the supervisor validation stage once again. Therefore, it is important to deactivate or delete these case insensitive duplicates. It should be noted that this action will not have any impact on other areas, as the mapping between keycloak users and the user_id
of the user_details
table is one-to-one and case insensitive.
To find case-insensitive duplicates in the user_id
column of the user_details
table, follow these steps:
Log in to the master schema of the mosip_master
database.
Open a query tool.
Execute the following SQL command in the query tool:
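A query along these lines lists the case-insensitive duplicates (table name as referenced above; adjust the schema or table name if yours differs):

```sql
-- Sketch: group user ids case-insensitively and keep only groups with more than one entry.
SELECT LOWER(user_id) AS user_id_lower, COUNT(*) AS occurrences
FROM master.user_details
GROUP BY LOWER(user_id)
HAVING COUNT(*) > 1;
```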
Make sure to copy the output to a text file to manage the duplicate data effectively.
Log in to the admin portal with a user having the ZONAL_ADMIN
role.
On the left pane, click on Resources
in the side-menu.
Select User Center Mapping
under Resources in the side-menu.
Click Filter on the User Center Mapping
page.
Enter the user_id
that was retrieved from the database and copied into the text file. After entering the user_id
, click on the Apply button.
Now, on the User Center Mapping
page, case insensitive duplicates of user_id
would be displayed.
Based on the Center, choose the entry that can be deactivated/deleted.
Now click on the ellipsis of the selected entry.
Select the appropriate action (Delete/ Deactivate) on that entry.
MOSIP modules are deployed in the form of microservices in kubernetes cluster.
Wireguard is used as a trust network extension to access the admin, control, and observation pane.
It is also used for the on-the-field registrations.
MOSIP uses Nginx server for:
SSL termination
Reverse Proxy
CDN/Cache management
Load balancing
In V3, we have two Kubernetes clusters:
Observation cluster - This cluster is a part of the observation plane, and it helps in administrative tasks. By design, this is kept independent of the actual cluster as a good security practice and to ensure clear segregation of roles and responsibilities. As a best practice, this cluster or its services should be internal and should never be exposed to the external world.
Rancher is used for managing the MOSIP cluster.
Keycloak in this cluster is used for cluster user access management.
It is recommended to configure log monitoring and network monitoring in this cluster.
In case you have an internal container registry, then it should run here.
MOSIP cluster - This cluster runs all the MOSIP components and certain third party components to secure the cluster, APIs and data.
k8s-infra : contains the scripts to install and configure Kubernetes cluster with required monitoring, logging and alerting tools.
mosip-infra : contains the deployment scripts to run charts in defined sequence.
mosip-config : contains all the configuration files required by the MOSIP modules.
mosip-helm : contains packaged helm charts for all the MOSIP modules.
The required VMs can run any OS as per convenience.
Here, we refer to Ubuntu OS throughout this installation guide.
| Sl No. | Purpose | vCPUs | RAM | Storage (HDD) | No. of VMs | HA |
| --- | --- | --- | --- | --- | --- | --- |
| 1. | Wireguard Bastion Host | 2 | 4 GB | 8 GB | 1 | (ensure to set up active-passive) |
| 2. | Observation Cluster nodes | 2 | 8 GB | 32 GB | 2 | 2 |
| 3. | Observation Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 2 | Nginx+ |
| 4. | MOSIP Cluster nodes | 12 | 32 GB | 128 GB | 6 | 6 |
| 5. | MOSIP Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 1 | Nginx+ |
All the VMs should be able to communicate with each other.
Stable intra-network connectivity is needed between these VMs.
All the VMs should have stable internet connectivity for docker image downloads (in case of a local setup, ensure a locally accessible docker registry is available).
Server network interface requirements are mentioned in the below table:

| Sl No. | Purpose | Network Interfaces |
| --- | --- | --- |
| 1. | Wireguard Bastion Host | One private interface: on the same network as all the rest of the nodes (e.g., inside the local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 51820/udp to this interface IP. |
| 2. | K8 Cluster nodes | One internal interface: with internet access and on the same network as all the rest of the nodes (e.g., inside the local NAT network). |
| 3. | Observation Nginx server | One internal interface: with internet access and on the same network as all the rest of the nodes (e.g., inside the local NAT network). |
| 4. | MOSIP Nginx server | One internal interface: on the same network as all the rest of the nodes (e.g., inside the local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 443/tcp to this interface IP. |
The DNS mapping requirements are listed below:

| Sl No. | Domain name | Maps to | Purpose |
| --- | --- | --- | --- |
| 1. | rancher.xyz.net | Private IP of Nginx server or load balancer for Observation cluster | Rancher dashboard to monitor and manage the kubernetes cluster. |
| 2. | keycloak.xyz.net | Private IP of Nginx server for Observation cluster | Administrative IAM tool (Keycloak), used for kubernetes administration. |
| 3. | sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Index page with links to the different dashboards of the MOSIP environment. (This is just for reference; please do not expose this page in a real production or UAT environment.) |
| 4. | api-internal.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Internal APIs are exposed through this domain. They are accessible privately over the Wireguard channel. |
| 5. | api.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | All publicly usable APIs are exposed using this domain. |
| 6. | prereg.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Domain name for MOSIP's pre-registration portal. The portal is accessible publicly. |
| 7. | activemq.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Provides direct access to the ActiveMQ dashboard. Access is limited and available only over Wireguard. |
| 8. | kibana.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Optional installation. Used to access the Kibana dashboard over Wireguard. |
| 9. | regclient.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | The Registration Client can be downloaded from this domain. It should be used over Wireguard. |
| 10. | admin.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP's admin portal is exposed using this domain. This is an internal domain, and access is restricted to Wireguard. |
| 11. | object-store.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Optional. This domain is used to access the object server. Map this domain based on the object server you choose. In our reference implementation, MinIO is used, and this domain lets you access MinIO's console over Wireguard. |
| 12. | kafka.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Kafka UI is installed as part of MOSIP's default installation. It can be accessed over Wireguard. Mostly used for administrative needs. |
| 13. | iam.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP uses an OpenID Connect server to limit and manage access across all the services. The default installation comes with Keycloak. This domain is used to access the Keycloak server over Wireguard. |
| 14. | postgres.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | This domain points to the Postgres server. You can connect to Postgres via port forwarding over Wireguard. |
| 15. | pmp.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP's Partner Management Portal, used to manage partners. Accessed over Wireguard. |
| 16. | onboarder.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Used to access the reports of MOSIP partner onboarding over Wireguard. |
| 17. | resident.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Used to access the Resident portal publicly. |
| 18. | idp.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Used to access the IDP publicly. |
| 19. | smtp.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Used to access the mock-SMTP UI over Wireguard. |
Since only secured HTTPS connections are allowed via the Nginx server, the below-mentioned valid SSL certificates are needed:
One valid wildcard SSL certificate for the domain used to access the Observation cluster; this needs to be stored inside the Nginx server VM for the Observation cluster. In the above example, *.org.net is the example domain.
One valid wildcard SSL certificate for the domain used to access the MOSIP cluster; this needs to be stored inside the Nginx server VM for the MOSIP cluster. In the above example, *.sandbox.xyz.net is the example domain.
Tools to be installed in Personal Computers for complete deployment
kubectl - any client version above 1.19
helm - any client version above 3.0.0 and add below repos as well:
Istioctl : version: 1.15.0
Ansible: version > 2.12.4
Create a directory as MOSIP in your PC and:
Clone k8’s infra repo with tag : 1.2.0.1-B2 (whichever is the latest version) inside mosip directory.
git clone https://github.com/mosip/k8s-infra -b v1.2.0.1-B2
Clone mosip-infra with tag : 1.2.0.1-B2 (whichever is the latest version) inside mosip directory.
git clone https://github.com/mosip/mosip-infra -b v1.2.0.1-B2
Set below mentioned variables in bashrc
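For example, entries along these lines can be added to ~/.bashrc (paths are placeholders; K8_ROOT is the variable referenced by the commands later in this guide, the other names are illustrative):

```bash
# Location of the MOSIP directory created above (adjust the path).
export MOSIP_ROOT=$HOME/MOSIP
# Repositories cloned in the previous steps.
export K8_ROOT=$MOSIP_ROOT/k8s-infra
export INFRA_ROOT=$MOSIP_ROOT/mosip-infra
```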
Note: The above-mentioned environment variables will be used throughout the installation to move from one directory to another when running the install scripts.
A Wireguard bastion host (Wireguard server) provides a secure private channel to access the MOSIP cluster. The host restricts public access and enables access only to those clients whose public key is listed in the Wireguard server. Wireguard listens on UDP port 51820.
Create a Wireguard server VM with above-mentioned Hardware and Network requirements.
Open ports and Install docker on Wireguard VM.
cd $K8_ROOT/wireguard/
Create copy of hosts.ini.sample
as hosts.ini
and update the required details for wireguard VM
cp hosts.ini.sample hosts.ini
Execute ports.yml to enable ports on VM level using ufw:
ansible-playbook -i hosts.ini ports.yaml
Note:
The pem files used to access the nodes should have 400 permissions.
sudo chmod 400 ~/.ssh/privkey.pem
These ports are only needed to be opened for sharing packets over UDP.
Take necessary measure on firewall level so that the Wireguard server can be reachable on 51820/udp.
Execute docker.yml to install docker and add user to docker group:
ansible-playbook -i hosts.ini docker.yaml
Setup Wireguard server
SSH to wireguard VM
Create directory for storing wireguard config files.
mkdir -p wireguard/config
Install and start wireguard server using docker as given below:
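A typical invocation using the linuxserver/wireguard image is sketched below; the PEERS count and the mounted config path are the ones referred to in the notes that follow, and the timezone and paths should be adjusted to your setup:

```bash
sudo docker run -d \
  --name=wireguard \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_MODULE \
  -e PUID=1000 -e PGID=1000 \
  -e TZ=UTC \
  -e PEERS=30 \
  -p 51820:51820/udp \
  -v /home/ubuntu/wireguard/config:/config \
  -v /lib/modules:/lib/modules \
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
  --restart unless-stopped \
  ghcr.io/linuxserver/wireguard
```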
Note:
Increase the no. of peers above in case more than 30 wireguard client confs (-e PEERS=30) are needed.
Change the directory to be mounted to wireguard docker as per need. All your wireguard confs will be generated in the mounted directory (
-v /home/ubuntu/wireguard/config:/config
).
Install Wireguard client in your PC.
Assign wireguard.conf
:
SSH to the wireguard server VM.
cd /home/ubuntu/wireguard/config
Assign one of the peer configs to yourself and use the same from your PC to connect to the server.
Create assigned.txt
file to keep track of the peer files already allocated, and update it every time a peer is allocated to someone.
Use ls
cmd to see the list of peers.
Get inside your selected peer directory, and add mentioned changes in peer.conf
:
cd peer1
nano peer1.conf
Delete the DNS IP.
Update the allowed IPs to the subnet's CIDR IP, e.g., 10.10.20.0/23.
Share the updated peer.conf
with the respective peer to connect to the Wireguard server from their personal PC.
Add peer.conf
in your PC’s /etc/wireguard
directory as wg0.conf
.
Start the wireguard client and check the status:
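On a Linux client, assuming the config was saved as /etc/wireguard/wg0.conf as described above:

```bash
# Start the tunnel defined in /etc/wireguard/wg0.conf and verify the handshake.
sudo systemctl start wg-quick@wg0
sudo systemctl status wg-quick@wg0
sudo wg show
```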
Once connected to wireguard, you should be now able to login using private IPs.
Observation K8s Cluster setup
Install all the required tools mentioned in pre-requisites for PC.
kubectl
helm
ansible
rke (version v1.3.10 )
istioctl (istioctl version: v1.15.0)
Setup Observation Cluster node VM’s as per the hardware and network requirements as mentioned above.
Setup passwordless SSH into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).
Generate keys on your PC
Copy the keys to remote observation node VM’s
SSH into the node to check password-less SSH (see the command sketch below).
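These are the same commands used later for the MOSIP cluster nodes:

```bash
# Generate a key pair on your PC (skip if one already exists).
ssh-keygen -t rsa
# Copy the public key to each Observation cluster node.
ssh-copy-id <remote-user>@<remote-ip>
# Verify password-less SSH into the node.
ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>
```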
Note:
Make sure the permission for
privkey.pem
for ssh is set to 400.
Run env-check.yaml
to check if cluster nodes are fine and do not have known issues in it.
cd $K8_ROOT/rancher/on-prem/
create copy of hosts.ini.sample
as hosts.ini
and update the required details for Observation k8 cluster nodes.
cp hosts.ini.sample hosts.ini
ansible-playbook -i hosts.ini env-check.yaml
This ansible checks if localhost mapping is already present in /etc/hosts
file in all cluster nodes, if not it adds the same.
Open ports and install docker on Observation K8 Cluster node VM’s.
cd $K8_ROOT/rancher/on-prem/
Ensure that hosts.ini
is updated with nodal details.
Update vpc_ip
variable in ports.yaml
with vpc CIDR ip to allow access only from machines inside same vpc.
Execute ports.yaml
to enable ports on VM level using ufw:
Disable swap in cluster nodes. (Ignore if swap is already disabled)
Execute docker.yml
to install docker and add the user to the docker group (the ansible-playbook commands for these steps are shown below):
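The corresponding ansible-playbook commands (the same playbooks are used for the MOSIP cluster later in this guide; the swap playbook name is assumed to match):

```bash
# Open the required ports via ufw.
ansible-playbook -i hosts.ini ports.yaml
# Disable swap on the cluster nodes.
ansible-playbook -i hosts.ini swap.yaml
# Install docker and add the user to the docker group.
ansible-playbook -i hosts.ini docker.yaml
```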
Creating RKE Cluster Configuration file
rke config
Command will prompt for nodal details related to cluster, provide inputs w.r.t below mentioned points:
SSH Private Key Path
:
Number of Hosts
:
SSH Address of host
:
SSH User of host
:
Make all the nodes Worker host
by default.
To create an HA cluster, specify more than one host with role Control Plane
and etcd host
.
Network Plugin Type
: Continue with canal as default network plugin.
For the rest of the configuration, choose the required or default values.
As result of rke config
command cluster.yml
file will be generated inside same directory, update the below mentioned fields:
nano cluster.yml
Remove the default Ingress install
Add the name of the kubernetes cluster
cluster_name: sandbox-name
For production deployments edit the cluster.yml
, according to this RKE Cluster Hardening Guide.
Setup up the cluster:
Once cluster.yml
is ready, you can bring up the kubernetes cluster using simple command.
This command assumes the cluster.yml
file is in the same directory as where you are running the command.
rke up
The last line should read Finished building Kubernetes cluster
successfully to indicate that your cluster is ready to use.
As part of the Kubernetes creation process, a kubeconfig
file has been created and written at kube_config_cluster.yml
, which can be used to start interacting with your Kubernetes cluster.
Copy the kubeconfig files.
To access the cluster using kubeconfig
file use any one of the below method:
cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config
Alternatively
export KUBECONFIG="$HOME/.kube/<cluster_name>_config
Test cluster access:
kubectl get nodes
Command will result in details of the nodes of the Observation cluster.
Save your files
Save a copy of the following files in a secure location, they are needed to maintain, troubleshoot and upgrade your cluster.
cluster.yml
: The RKE cluster configuration file.
kube_config_cluster.yml
: The Kubeconfig file for the cluster, this file contains credentials for full access to the cluster.
cluster.rkestate
: The Kubernetes Cluster State file, this file contains credentials for full access to the cluster.
In case not having Public DNS system add the custom DNS configuration for the cluster.
Check whether coredns pods are up and running in your cluster via the below command:
Update the IP address and domain name in the below DNS hosts template and add it in the coredns configmap Corefile key in the kube-system namespace.
To update the coredns configmap, use the below command.
Check whether the DNS changes are correctly updated in coredns configmap.
Restart the coredns
pod in the kube-system
namespace.
Check the status of the coredns restart (a consolidated kubectl sketch for these steps is shown below).
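A consolidated sketch of the kubectl commands for the steps above (standard kubectl operations; label selectors and deployment names may vary slightly per cluster):

```bash
# Check that the coredns pods are running.
kubectl -n kube-system get pods -l k8s-app=kube-dns

# Edit the coredns configmap and add the DNS hosts entries under the Corefile key.
kubectl -n kube-system edit configmap coredns

# Verify the configmap changes.
kubectl -n kube-system get configmap coredns -o yaml

# Restart coredns and watch the rollout.
kubectl -n kube-system rollout restart deployment coredns
kubectl -n kube-system rollout status deployment coredns
```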
Once the rancher cluster is ready, we need ingress and storage class to be set for other applications to be installed.
Nginx Ingress Controller: used for ingress in rancher cluster.
this will install ingress in ingress-nginx namespace of rancher cluster.
Storage class setup: Longhorn creates a storage class in the cluster for creating pv (persistence volume) and pvc (persistence volume claim).
Pre-requisites:
Note: Values of below mentioned parameters are set as by default Longhorn installation script:
PV replica count is set to 1. Set the replicas for the storage class appropriately.
Total available node CPU allocated to each instance-manager
pod in the longhorn-system
namespace.
The value 5
means 5% of the total available node CPU.
This value should be fine for sandbox and pilot, but you may have to increase the default to "12" for production.
The value can be updated on Longhorn UI after installation.
Access the Longhorn dashboard from Rancher UI once installed.
Setup Backup : In case you want to back up the pv data from longhorn to s3 periodically follow instructions. (Optional, ignore if not required)
For the Nginx server setup we need an SSL certificate; add the same into the Nginx server.
SSL certificates can be generated in multiple ways: either via Let's Encrypt if you have a public DNS, or via OpenSSL certificates when you don't have a public DNS.
Letsencrypt: Generate wildcard ssl certificate having 3 months validity when you have public DNS system using below steps.
SSH into the nginx server node.
Install Pre-requisites
Generate wildcard SSL certificates for your domain name.
sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.org.net
replace org.net
with your domain.
The default challenge HTTP is changed to DNS challenge, as we require wildcard certificates.
Create a DNS record in your DNS service of type TXT with host _acme-challenge.org.net
, with the string prompted by the script.
Wait for a few minutes for the above entry to get into effect. Verify: host -t TXT _acme-challenge.org.net
Press enter in the certbot
prompt to proceed.
Certificates are created in /etc/letsencrypt
on your machine.
Certificates created are valid for 3 months only.
Wildcard SSL certificate renewal. This will increase the validity of the certificate for next 3 months.
Openssl : Generate wildcard ssl certificate using openssl in case you don't have public DNS using below steps. (Ensure to use this only in development env, not suggested for Production env).
Generate a self-signed certificate for your domain, such as *.sandbox.xyz.net.
Execute the following command to generate a self-signed SSL certificate. Prior to execution, kindly ensure to update environmental variables & rancher domain passed to openssl command:
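A sketch of such a command (requires OpenSSL 1.1.1+ for -addext; replace the wildcard domain with yours):

```bash
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/tls.key \
  -out /etc/ssl/certs/tls.crt \
  -subj "/CN=*.org.net" \
  -addext "subjectAltName=DNS:*.org.net"
```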
Above command will generate certs in below specified location. Use it when prompted during nginx installation.
fullChain path: /etc/ssl/certs/tls.crt
.
privKey path: /etc/ssl/private/tls.key
.
Install nginx:
Login to nginx server node.
Clone k8s-infra
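A sketch of the clone-and-install steps (the nginx install script path under k8s-infra is an assumption; verify it in your checked-out repository):

```bash
git clone https://github.com/mosip/k8s-infra -b v1.2.0.1-B2
cd k8s-infra/rancher/on-prem/nginx
sudo ./install.sh
```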
Provide the below-mentioned inputs as and when prompted:
Rancher nginx ip : internal ip of the nginx server VM.
SSL cert path : path of the ssl certificate to be used for ssl termination.
SSL key path : path of the ssl key to be used for ssl termination.
Cluster node IPs : IPs of the rancher cluster node
Restart nginx service.
Post installation check:
sudo systemctl status nginx
Steps to uninstall nginx (in case required): sudo apt purge nginx nginx-common
DNS mapping:
Once nginx server is installed successfully, create DNS mapping for rancher cluster related domains as mentioned in DNS requirement section. (rancher.org.net, keycloak.org.net)
In case used Openssl for wildcard ssl certificate add DNS entries in local hosts file of your system.
For example: /etc/hosts
files for Linux machines.
Rancher UI: Rancher provides full CRUD capability of creating and managing kubernetes cluster.
Install rancher using Helm, update hostname
, & add privateCA
to true
in rancher-values.yaml
, and run the following command to install.
cd $K8_ROOT/rancher/rancher-ui
helm repo add rancher https://releases.rancher.com/server-charts/stable
helm repo update
kubectl create ns cattle-system
Create a secret containing the observation nginx self-signed public certificate (i.e. tls.crt
) generated in openssl section.
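A sketch of the secret creation and install commands (the tls-ca secret name and cacerts.pem key follow Rancher's privateCA convention; copy the tls.crt generated earlier to your working directory first):

```bash
# Create the CA secret expected by Rancher when privateCA is set to true.
kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem=tls.crt

# Install Rancher using the values file mentioned above.
helm install rancher rancher/rancher -n cattle-system -f rancher-values.yaml
```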
Login:
Open Rancher page.
Get Bootstrap password using
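The standard Rancher command for retrieving it is:

```bash
kubectl get secret --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
```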
Assign a password. IMPORTANT: make sure this password is securely saved and retrievable by the Admin.
Keycloak: Keycloak is an OAuth 2.0 compliant Identity Access Management (IAM) system used to manage the access to Rancher for cluster controls.
keycloak_client.json
: Used to create SAML client on Keycloak for Rancher integration.
Login as admin
user in Keycloak and make sure an email id, and first name field is populated for admin user. This is important for Rancher authentication as given below.
Enable authentication with Keycloak using the steps given here.
In Keycloak add another Mapper for the rancher client (in Master realm) with following fields:
Protocol: saml
Name: username
Mapper Type: User Property
Property: username
Friendly Name: username
SAML Attribute Name: username
SAML Attribute NameFormat: Basic
Specify the following mappings in Rancher's Authentication Keycloak form:
Display Name Field: givenName
User Name Field: email
UID Field: username
Entity ID Field: https://your-rancher-domain/v1-saml/keycloak/saml/metadata
Rancher API Host: https://your-rancher-domain
Groups Field: member
RBAC :
For users in Keycloak assign roles in Rancher - cluster and project roles. Under default
project add all the namespaces. Then, to a non-admin user you may provide Read-Only role (under projects).
If you want to create custom roles, you can follow the steps given here.
Add a member to cluster/project in Rancher:
Give member name exactly as username
in Keycloak
Assign appropriate role like Cluster Owner, Cluster Viewer etc.
You may create new role with fine grained access control.
Certificates expiry
In case you see certificate expiry message while adding users, on local cluster run these commands:
Pre-requisites:
Install all the required tools mentioned in Pre-requisites for PC.
kubectl
helm
ansible
rke (version 1.3.10)
Setup MOSIP K8 Cluster node VM’s as per the hardware and network requirements as mentioned above.
Run env-check.yaml
to check if cluster nodes are fine and don't have known issues in it.
cd $K8_ROOT/rancher/on-prem
create copy of hosts.ini.sample
as hosts.ini
and update the required details for MOSIP k8 cluster nodes.
cp hosts.ini.sample hosts.ini
ansible-playbook -i hosts.ini env-check.yaml
This ansible checks if localhost mapping is already present in /etc/hosts
file in all cluster nodes, if not it adds the same.
Setup passwordless ssh into the cluster nodes via pem keys. (Ignore if VM’s are accessible via pem’s).
Generate keys on your PC
ssh-keygen -t rsa
Copy the keys to remote rancher node VM’s:
ssh-copy-id <remote-user>@<remote-ip>
SSH into the node to check password-less SSH
ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>
Rancher UI : (deployed in Rancher K8 cluster)
Open ports and Install docker on MOSIP K8 Cluster node VM’s.
cd $K8_ROOT/mosip/on-prem
create copy of hosts.ini.sample
as hosts.ini
and update the required details for wireguard VM.
cp hosts.ini.sample hosts.ini
Update vpc_ip
variable in ports.yaml
with vpc CIDR ip
to allow access only from machines inside same vpc.
execute ports.yml
to enable ports on VM level using ufw:
ansible-playbook -i hosts.ini ports.yaml
Disable swap in cluster nodes. (Ignore if swap is already disabled)
ansible-playbook -i hosts.ini swap.yaml
execute docker.yml
to install docker and add user to docker group:
ansible-playbook -i hosts.ini docker.yaml
Creating RKE Cluster Configuration file
rke config
Command will prompt for nodal details related to cluster, provide inputs w.r.t below mentioned points:
SSH Private Key Path
:
Number of Hosts
:
SSH Address of host
:
SSH User of host
:
Make all the nodes Worker host
by default.
To create an HA cluster, specify more than one host with role Control Plane
and etcd host
.
Network Plugin Type
: Continue with canal as default network plugin.
For the rest of the configuration, choose the required or default values.
As a result of the rke config command, a cluster.yml
file will be generated inside the same directory; update the below-mentioned fields:
nano cluster.yml
Remove the default Ingress install
Add the name of the kubernetes cluster
For production deployments edit the cluster.yml
, according to this RKE Cluster Hardening Guide.
Setup up the cluster:
Once cluster.yml
is ready, you can bring up the kubernetes cluster using simple command.
This command assumes the cluster.yml
file is in the same directory as where you are running the command.
rke up
The last line should read Finished building Kubernetes cluster successfully
to indicate that your cluster is ready to use.
Copy the kubeconfig files
To access the cluster using the kubeconfig file, use any one of the below methods:
cp $HOME/.kube/<cluster_name>_config $HOME/.kube/config
Alternatively
Test cluster access:
kubectl get nodes
Command will result in details of the nodes of the rancher cluster.
Save your files
Save a copy of the following files in a secure location; they are needed to maintain, troubleshoot and upgrade your cluster:
cluster.yml
: The RKE cluster configuration file.
kube_config_cluster.yml
: The Kubeconfig file for the cluster, this file contains credentials for full access to the cluster.
cluster.rkestate
: The Kubernetes Cluster State file, this file contains credentials for full access to the cluster.
In case not having Public DNS system add the custom DNS configuration for the cluster.
Check whether coredns pods are up and running in your cluster via the below command:
Update the IP address and domain name in the below DNS hosts template and add it in the coredns configmap Corefile key in the kube-system namespace.
To update the coredns configmap, use the below command.
Check whether the DNS changes are correctly updated in coredns configmap.
Restart the coredns
pod in the kube-system
namespace.
Check status of coredns restart.
Global configmap: The global configmap contains the list of necessary details to be used throughout the namespaces of the cluster for common configuration.
cd $K8_ROOT/mosip
Copy global_configmap.yaml.sample
to global_configmap.yaml
.
Update the domain names in global_configmap.yaml
and run.
kubectl apply -f global_configmap.yaml
Istio Ingress setup: It is a service mesh for the MOSIP K8 cluster which provides transparent layers on top of existing microservices along with powerful features enabling a uniform and more efficient way to secure, connect, and monitor services.
cd $K8_ROOT/mosip/on-prem/istio
./install.sh
This will bring up all the Istio components and the Ingress Gateways.
Check Ingress Gateway services:
kubectl get svc -n istio-system
istio-ingressgateway
: external facing istio service.
istio-ingressgateway-internal
: internal facing istio service.
istiod
: Istio daemon for replicating the changes to all envoy filters.
Storage class setup: Longhorn creates a storage class in the cluster for creating pv (persistence volume) and pvc (persistence volume claim).
Pre-requisites:
Install Longhorn via helm
./install.sh
Note: Values of below mentioned parameters are set as by default Longhorn installation script:
PV replica count is set to 1. Set the replicas for the storage class appropriately.
Total available node CPU allocated to each instance-manager
pod in the longhorn-system
namespace.
The value 5
means 5% of the total available node CPU
This value should be fine for sandbox and pilot, but you may have to increase the default to 12
for production.
The value can be updated on Longhorn UI after installation.
Login as admin in Rancher console
Select Import
Existing for cluster addition.
Select Generic
as cluster type to add.
Fill the Cluster Name
field with unique cluster name and select Create
.
You will get the kubectl commands to be executed in the kubernetes cluster. Copy the command and execute from your PC (make sure your kube-config
file is correctly set to MOSIP cluster).
Wait for few seconds after executing the command for the cluster to get verified.
Your cluster is now added to the rancher management server.
For the Nginx server setup, we need an SSL certificate; add the same into the Nginx server.
SSL certificates can be generated in multiple ways: either via Let's Encrypt if you have a public DNS, or via OpenSSL certificates when you don't have a public DNS.
Letsencrypt: Generate wildcard ssl certificate having 3 months validity when you have public DNS system using below steps.
SSH into the nginx server node.
Install Pre-requisites
Generate wildcard SSL certificates for your domain name.
sudo certbot certonly --agree-tos --manual --preferred-challenges=dns -d *.org.net
replace org.net
with your domain.
The default challenge HTTP is changed to DNS challenge, as we require wildcard certificates.
Create a DNS record in your DNS service of type TXT with host _acme-challenge.org.net
, with the string prompted by the script.
Wait for a few minutes for the above entry to get into effect. Verify: host -t TXT _acme-challenge.org.net
Press enter in the certbot
prompt to proceed.
Certificates are created in /etc/letsencrypt
on your machine.
Certificates created are valid for 3 months only.
Wildcard SSL certificate renewal. This will increase the validity of the certificate for next 3 months.
Openssl : Generate wildcard ssl certificate using openssl in case you don't have public DNS using below steps. (Ensure to use this only in development env, not suggested for Production env).
Install docker on nginx node.
Generate a self-signed certificate for your domain, such as *.sandbox.xyz.net.
Execute the following command to generate a self-signed SSL certificate. Prior to execution, kindly ensure that the environmental variables passed to the OpenSSL Docker container have been properly updated:
Above command will generate certs in below specified location. Use it when prompted during nginx installation.
fullChain path: /etc/ssl/certs/nginx-selfsigned.crt.
privKey path: /etc/ssl/private/nginx-selfsigned.key.
Install nginx:
Login to nginx server node.
Clone k8s-infra
Provide below mentioned inputs as and when prompted
MOSIP nginx server internal ip
MOSIP nginx server public ip
Publicly accessible domains (comma separated with no whitespaces)
SSL cert path
SSL key path
Cluster node ip's (comma separated no whitespace)
When utilizing an openssl wildcard SSL certificate, please add the following server block to the nginx server configuration within the http block. Disregard this if using SSL certificates obtained through letsencrypt or for publicly available domains. Please note that this should only be used in a development environment and is not recommended for production environments.
nano /etc/nginx/nginx.conf
Note: HTTP access is enabled for IAM because MOSIP's keymanager expects to have valid SSL certificates. Ensure to use this only for development purposes, and it is not recommended to use it in production environments.
Restart nginx service.
Post installation check:
sudo systemctl status nginx
Steps to Uninstall nginx (in case required)
sudo apt purge nginx nginx-common
DNS mapping:
Once nginx server is installed successfully, create DNS mapping for rancher cluster related domains as mentioned in DNS requirement section. (rancher.org.net, keycloak.org.net)
In case used Openssl for wildcard ssl certificate add DNS entries in local hosts file of your system.
For example: /etc/hosts
files for Linux machines.
Check Overall if nginx and istio wiring is set correctly
Install httpbin
: This utility docker returns http headers received inside the cluster. You may use it for general debugging - to check ingress, headers etc.
To see what is reaching the httpbin (example, replace with your domain name):
Prometheus and Grafana and Alertmanager tools are used for cluster monitoring.
Select 'Monitoring' App from Rancher console -> Apps & Marketplaces
.
In Helm options, open the YAML file and disable Nginx Ingress.
Click on Install
.
Alerting is part of cluster monitoring, where alert notifications are sent to the configured email or slack channel.
Monitoring should be deployed which includes deployment of prometheus, grafana and alertmanager.
Create slack incoming webhook.
After setting slack incoming webhook update slack_api_url
and slack_channel_name
in alertmanager.yml
.
cd $K8_ROOT/monitoring/alerting/
nano alertmanager.yml
Update:
Update Cluster_name
in patch-cluster-name.yaml
.
cd $K8_ROOT/monitoring/alerting/
nano patch-cluster-name.yaml
Update:
Install Default alerts along some of the defined custom alerts:
Alerting is installed.
MOSIP uses Rancher Fluentd and elasticsearch to collect logs from all services and reflect the same in Kibana Dashboard.
Install Rancher FluentD system : for scraping logs outs of all the microservices from MOSIP k8 cluster.
Install Logging from Apps and marketplace within the Rancher UI.
Select Chart Version 100.1.3+up3.17.7
from Rancher console -> Apps & Marketplaces.
Configure Rancher FluentD
Create clusteroutput
kubectl apply -f clusteroutput-elasticsearch.yaml
Start clusterFlow
kubectl apply -f clusterflow-elasticsearch.yaml
Install elasticsearch, kibana and Istio addons.
set min_age
in elasticsearch-ilm-script.sh
and execute the same.
min_age
: is the minimum no. of days for which indices will be stored in elasticsearch.
MOSIP provides a set of Kibana dashboards for checking logs and throughput.
Brief description of these dashboards are as follows:
01-logstash.ndjson contains the logstash Index Pattern required by the rest of the dashboards.
02-error-only-logs.ndjson contains a Search dashboard which shows only the error logs of the services, called MOSIP Error Logs
dashboard.
03-service-logs.ndjson contains a Search dashboard which show all logs of a particular service, called MOSIP Service Logs dashboard.
04-insight.ndjson contains dashboards which show insights into MOSIP processes, like the number of UINs generated (total and per hr), the number of Biometric deduplications processed, number of packets uploaded etc, called MOSIP Insight
dashboard.
05-response-time.ndjson contains dashboards which show how quickly different MOSIP Services are responding to different APIs, over time, called Response Time
dashboard.
Import dashboards:
cd K8_ROOT/logging
./load_kibana_dashboards.sh ./dashboards <cluster-kube-config-file>
View dashboards
Open kibana dashboard from https://kibana.sandbox.xyz.net
.
Kibana --> Menu (on top left) --> Dashboard --> Select the dashboard.
External Dependencies are set of external requirements that are needed for functioning of MOSIP’s core services like DB, Object Store, HSM etc.
Click here to check the detailed installation instructions of all the external components.
Add/Update the below property in application-default.properties and comment out the below property in the *-default.properties file in the config repo.
Add/Update the below property in the esignet-default.properties file in the config repo.
Now that all the Kubernetes cluster and external dependencies are already installed, will continue with MOSIP service deployment.
While installing a few modules, installation script prompts to check if you have public domain and valid SSL certificates on the server. Opt option n as we are using self-signed certificates. For example:
Start installing mosip modules:
Check detailed MOSIP Modules Deployment installation steps.
The Registration Client Docker serves as a registration client zip downloader and upgrade server. The Nginx server within the Registration Client Docker container provides all necessary artifacts and resources during the upgrade process.
Patch updates:
When the registration client is launched, it downloads the manifest file from the upgrade server if the machine is online. Otherwise, it uses the local manifest file.
The client compares the checksum of each JAR file in the lib directory with the checksum stored in the manifest file. If there's a mismatch, the client considers the file invalid and deletes it before downloading it from the client upgrade server.
A checksum mismatch may be intentional for the rollout of hotfixes in the libraries used by the registration client or in the registration-client and registration-service module.
Patch updates do not support the upgrade of local Derby DB.
Assumption:
No major or minor version changes should occur for registration-client and registration-services modules.
Registration clients must be online to receive patch updates.
To roll out patches:
Rebuild the Registration Client Dockerfile and publish it with the same version.
Restart the registration client pod in Rancher.
For slow connections or connection failures:
If the client fails to download the manifest file when the machine is online, the registration client application will exit and report a build status check failure in the pre-loader screen.
If the latest manifest file is successfully downloaded but fails to download all the patches, the registration client application will exit and report a patch download failure.
In both cases, the operator/supervisor must restart the registration client application with a stable network connection. Upon restart, the client application will repeat the check from the server and continue the patch update.
Patch updates can include updates to existing libraries and the addition of new files. However, they are only applied to files in the
lib
directory.
Note: Deleting the registration-client.jar
or registration-services.jar
is not recoverable.
This procedure entails upgrading from one version of the software to the next iteration.
Additionally, this may involve upgrading local derby databases.
Upon each launch of the registration client, the client retrieves the maven-metadata.xml
file from the client upgrade server. The version specified in the local manifest file is then compared to the initial version element found in the maven-metadata.xml
.
Should the version values differ, the client recognizes it as an available upgrade to a new version.
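A representative example is shown below (standard Maven metadata layout; the group/artifact ids and version values are illustrative):

```xml
<metadata>
  <groupId>io.mosip.registration</groupId>
  <artifactId>registration-client</artifactId>
  <versioning>
    <versions>
      <!-- The first version element is compared with the version in the local manifest. -->
      <version>1.2.0.1</version>
    </versions>
    <lastUpdated>20230530130027</lastUpdated>
  </versioning>
</metadata>
```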
Above is the sample content of maven-metadata.xml
.
The version upgrades are not performed automatically and must be initiated by the operator or supervisors. The process for the version upgrade can be outlined as follows:
Backup the database, libraries, binary, and manifest files and folders.
Download the latest JAR files and manifest files from the upgrade server to the local machine.
Prompt the operator/supervisor to restart the registration client with the upgraded JAR files.
Upon restart, the application executes the database upgrade scripts.
If the execution of the upgrade scripts is successful, the registration client starts and the version upgrade process is considered complete.
In the event of failure, rollback scripts for the database are executed and the registration client application exits. The operator/supervisor must rerun the registration client to initiate the execution of the upgrade database scripts.
Once the version upgrade process is successfully completed, the backup folder is removed and the registration client is fully functional for use.
From version 1.1.5.5 and above, the registration client now has the ability to upgrade directly from one version to another without going through the versions in between.
To enable this upgrade process, a new configuration has been introduced. This configuration is required for the upgrade to any higher version. The configuration key that needs to be set is mosip.registration.verion.upgrade.version-mappings
.
The version-mappings configuration specifies the list of released versions of the registration client, their respective release order, and the database scripts folder name.
For example, to upgrade from version 1.1.4 to version 1.1.5.5, the configuration should be specified as follows:
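A sketch of the property is shown below; the value syntax is illustrative, so take the exact format from the spring.properties shipped with your release, keeping the dbVersion and releaseOrder fields described below:

```properties
# Illustrative only - verify the exact value format against the released spring.properties.
mosip.registration.verion.upgrade.version-mappings={'1.1.4':{'dbVersion':'1.1.4','releaseOrder':1},'1.1.5.5':{'dbVersion':'1.1.5.5','releaseOrder':2}}
```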
This configuration needs to be specified in the spring.properties
file of the registration-services.
During the registration client upgrade process, the application retrieves the above configuration and initiates the database upgrade based on the specified list of versions.
The upgrade progresses according to the releaseOrder
and executes the necessary database scripts based on the dbVersion
value.
How to roll out version upgrades?
In rancher,
Delete the existing version helm deployment of the registration client.
Deploy the new version of the registration client helm chart.
Example:
Let us assume that the registration-client version 1.1.5.5 is currently running, and we have to upgrade to version 1.2.0.1.
What happens after deploying the 1.2.0.1 version registration client in rancher?
Registration clients downloading the maven-metadata.xml
from the upgrade server will be aware of the new version availability.
content:
Based on the first version available in the maven-metadata.xml
, registration client will next download the MANIFEST.MF
.
After the successful download of the manifest file, the registration client will start the download of all the new files, updates the existing file if the hash is mismatched, and deletes the unused files from the lib directory.
After completing the download, operator/supervisor will be prompted to restart the registration client.
Next restart, will start the registration client as version 1.2.0.1 and starts the DB upgrade script execution if present based on the version-mappings configuration available.
The status of the DB script execution is printed on the preloader screen and also logged into registration.log
.
During the version upgrade process, we create backups of the manifest, db, lib, and bin folders in the designated backup directory. Once the upgrade is completed successfully, the backups are cleared. However, if the upgrade fails, we only roll back the changes made to the database.
In the event that the registration client application gets stuck during the upgrade due to errors or failures in the background, it is necessary for the operator or supervisor to manually roll back to the previous version of the registration client.
Below are the steps for manually rolling back:
Close the registration client application.
Delete the db
and .mosipkeys
folders in the reg-client working directory.
Navigate to the designated backup directory, where you will find a folder named after the previous version and the timestamp when it was created. For example, "1.1.4.4_2023-05-30 13-00-27.238Z".
Copy all the files and folders (lib, bin, MANIFEST.MF) from the backup to the registration client working directory, except for the .mosipkeys
folder.
Copy the .mosipkeys
folder from the backup to the home directory of the current user.
Launch the registration client application again.
Close the registration client application.
Delete the db
folder, if it exists, in the registration client working directory.
Navigate to the designated backup directory, where you will find a folder named after the previous version and the timestamp when it was created. For example, "1.1.5.5_2023-05-30 13-00-27.238Z".
Copy all the files and folders (lib, bin, MANIFEST.MF
) from the backup to the registration client working directory.
Launch the registration client application again.
By following these steps, the registration client application will be successfully rolled back to its previous version.
This document helps in addressing an error related to partner organization name mismatch.
How do we handle this error?
During the partner certificate renewal process, users may encounter an error message while uploading the partner certificate.
The error code KER-PCM-008 indicates a partner organization name mismatch.
This error suggests that the organization name on the partner's certificate is different from the one originally registered with.
To resolve this issue, users need to manually update the name
column in the partner
table of the mosip_pms
database with the new organization name from the fresh certificates.
After successfully uploading the partner certificate, it is important to restart the partner-management-service
pod.
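A sketch of the manual update and restart described above (the pms schema, Postgres host, and deployment names are assumptions; adjust to your environment):

```bash
# Update the organisation name stored for the partner.
psql -h postgres.sandbox.xyz.net -U postgres -d mosip_pms \
  -c "UPDATE pms.partner SET name = '<new organisation name>' WHERE id = '<partner id>';"

# Restart the partner-management-service pod so the change is picked up.
kubectl -n pms rollout restart deployment <partner-management-service-deployment>
```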
This document provides instructions on manually reprocessing all packets from the beginning after migration. The 1.2.0.1 release introduces multiple new stages and a new tagging mechanism. All packets that have not been processed before migration will be reprocessed to ensure they go through the new stages.
Note: This script is highly customizable, and each country can modify it according to their specific requirements. This document outlines the general approach for reprocessing packets. If a country has special needs, the query will need to be adjusted accordingly.
The following command should be used to reprocess packets:
It is important to first reprocess just one packet after migration to ensure that all stages are functioning correctly. This can be accomplished by setting the limit to 1. Please refer to the explanation below for instructions on changing the limit.
APPROACH 1
DEFAULT QUERY
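The DEFAULT QUERY itself should be taken from config.py; as a rough sketch of its shape (table, columns and the pending status are assumptions), it selects something like:

```sql
-- Sketch: packets created before migration that have not been touched for at least a day.
SELECT id
FROM regprc.registration
WHERE status_code = 'PROCESSING'   -- adjust to the set of pending statuses in your deployment
  AND latest_trn_dtimes < (SELECT NOW() - INTERVAL '1 DAY')
ORDER BY cr_dtimes
LIMIT 1000;
```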
This query also selects packets that are one day old (latest_trn_dtimes < (SELECT NOW() - INTERVAL '1 DAY')). This ensures that the script does not reprocess the same packets repeatedly. The time frame should be adjusted according to the system downtime caused by migration.
A country can determine the number of packets to be reprocessed in each batch and set the limit accordingly. The script should be executed the necessary number of times. For example, if there are 10000 pending packets and the limit is set to 1000, the script should be run 10 times.
APPROACH 2
This approach is designed for countries where packets are not directly routed from the securezone. In cases where the country has disabled routing from the securezone by setting the below property to false, the securezone notification stage should be disregarded. This is because any packets that have not moved beyond the securezone will be taken care of by the automated reprocessor.
Property: securezone.routing.enabled=false
This approach is similar to APPROACH 1 with one key difference. It utilizes the latest_trn_type_code
in the query to specifically target packets that are stuck in these stages for reprocessing. It will disregard packets stuck in other stages.
Note: If any custom stage is introduced by the country, the latest_trn_type_code
should be added to the query.
Query:
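A sketch of the same query narrowed by latest_trn_type_code (the stage type codes shown are placeholders; include the stages relevant to your flow, plus any custom stages):

```sql
-- Sketch: reprocess only packets stuck at specific stages.
SELECT id
FROM regprc.registration
WHERE status_code = 'PROCESSING'   -- adjust as in APPROACH 1
  AND latest_trn_type_code IN ('<stage-type-code-1>', '<stage-type-code-2>')
  AND latest_trn_dtimes < (SELECT NOW() - INTERVAL '1 DAY')
LIMIT 1000;
```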
This document is designed to be a comprehensive resource for users who have deployed the latest versions of the Modular Open Source Identity Platform (MOSIP) compatible with Java 11 and are preparing to upgrade their systems to Java 21. It provides a detailed, step-by-step migration process to facilitate a seamless and efficient transition. By adhering to the guidelines in this document, users can modernize their MOSIP environments to take full advantage of Java 21's improved performance, advanced security features, and enhanced functionality. The guide also emphasizes best practices to minimize disruptions, maintain system stability, and ensure compliance throughout the upgrade process.
JDK 21: Ensure Java Development Kit (JDK) 21 is installed and configured in your system's environment variables.
Maven (Latest Version): To build and manage dependencies, use the latest version of Maven, such as 3.9.6.
Optional:
Use a modern IDE (e.g., Eclipse, IntelliJ IDEA, or others) that is compatible with Java 21 to streamline coding and debugging.
Ensure the Lombok library version is compatible with your IDE. For instance, Lombok 1.18.30 works seamlessly with the latest IDE versions.
Note: After adding Lombok to your project, ensure it is correctly set up in your IDE to avoid compilation issues. This typically involves running the Lombok installer or manually enabling it in the IDE settings.
Java applications compiled with older versions are compatible with the Java 21 runtime. However, to ensure these application JARs run correctly in Java 21, additional JVM arguments may need to be specified when running the applications.
The libraries must be API-compatible and should not rely on deprecated or removed APIs.
The libraries should not depend on older Spring Boot versions (before 3.x), as the newer Spring Boot versions introduce significant API changes. Failure to meet this requirement can lead to compile-time or runtime issues, such as errors during class loading, bean initialization, or method invocation.
The dependent libraries of any module that have a dependency on any other MOSIP library (such as kernel-core) or older Spring Boot version (older than 3.x) need to be migrated before migrating the specific module. This applies not only to the static dependencies mentioned in the POM file, but also to the dynamic dependencies loaded from the classpath such as Kernel Auth Adaptor, BioSDK client, or any such libraries.
All POM versions of the modules and their dependency modules should be updated to reference the Java 21 migrated version.
Change the source and target compiler versions to 21:
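For example, the compiler properties (or the equivalent maven-compiler-plugin configuration) should point both source and target to 21; the snippet below is a typical sketch.

```xml
<properties>
    <!-- Compile with Java 21 as both source and target -->
    <maven.compiler.source>21</maven.compiler.source>
    <maven.compiler.target>21</maven.compiler.target>
</properties>
```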
Jacoco-plugin version needs to be updated to 0.8.11:
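A typical sketch of the plugin entry in the build section:

```xml
<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.11</version>
</plugin>
```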
Note: A new kernel-bom file has been introduced as part of this release in the commons repo which contains all the latest version changes to the spring-boot and other dependencies. Here spring-boot:3.2.3 is used.
Unless there is a compelling reason to use a different version of a library than the one defined in kernel-bom, do not specify a version on that dependency; doing so overrides the kernel-bom version with the specified one.
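A minimal sketch of importing the kernel-bom in a module's dependencyManagement section is shown below; the exact coordinates and the version property are assumptions and should match the released kernel-bom.

```xml
<dependencyManagement>
    <dependencies>
        <!-- Import kernel-bom so dependency versions are inherited instead of hardcoded -->
        <dependency>
            <groupId>io.mosip.kernel</groupId>
            <artifactId>kernel-bom</artifactId>
            <version>${kernel.bom.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```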
Remove any unused version properties from the pom.
2. Remove Deprecated Dependency
Remove any reference to the springdoc-openapi-ui dependency to prevent conflicts.
Note: If Swagger 2 was already used in the module, change it to Swagger 3 and also make the above change.
Any exclusions specified for a library in the POM can be retained. However, the version mentioned in that dependency can be removed, allowing it to inherit the version defined in the kernel-bom or another POM file.
Always make it a practice to keep the versions in the properties instead of hardcoding.
Even if the version is changed in the properties of pom.xml, make sure those properties are used in those dependencies/plugins instead of hardcoding the version.
For example:
maven-javadoc-plugin dependency should refer to ${maven.javadoc.version} property in the POM file.
Check and remove any unused version properties. With kernel-bom in use, versions generally do not need to be specified on dependencies unless a version must be overridden or a different version is required.
POM files should not include duplicate version properties, as this can lead to errors. For example, even if the version property is updated correctly in its first occurrence, subsequent occurrences may override it with an outdated or incorrect version. This behavior can go unnoticed and may cause unexpected errors or functionality issues. To avoid such problems, carefully review POM files to identify and remove any repeated version properties. This ensures consistent version management and prevents overriding conflicts.
Below are the package changes that need to be applied in Java files.
Postgres Hibernate Dialect: Instead of using specific version dialects for PostgreSQL, such as org.hibernate.dialect.PostgreSQL95Dialect or org.hibernate.dialect.PostgreSQL92Dialect, it should be org.hibernate.dialect.PostgreSQLDialect. This is applicable for properties referred to in Java code and any properties file as well.
The Sleuth configuration has been migrated to the Micrometer Tracer. Code changes were also necessary for the migration:
Import BraveAutoConfiguration.class instead of using a component scan of org.springframework.cloud.sleuth.autoconfig.*
Use the AccessLogValve class as the base class for the SleuthValve class instead of ValveBase.
Replace the following code:
tracer.newTrace() → tracer.nextSpan()
span.context().traceIdString() → span.context().traceId()
span.context().spanIdString() → span.context().spanId()
Refer to the details below for the changes made.
HTTP Connection Manager changes:
The following changes have been introduced related to the HTTP Connection Manager:
Deprecation of Existing Configuration Methods
The previously existing configuration methods have been deprecated and removed.
New configuration methods must be used, utilizing updated classes such as ReactorLoadBalancerExchangeFilterFunction and PoolingHttpClientConnectionManagerBuilder.
Direct Auto-Wiring of ReactorLoadBalancerExchangeFilterFunction
Instead of using LoadBalancerClient with LoadBalancerExchangeFilterFunction, directly auto-wire ReactorLoadBalancerExchangeFilterFunction in your implementation.
New Way to Set setMaxConnPerRoute
The method for setting setMaxConnPerRoute in the HTTP connection manager now uses PoolingHttpClientConnectionManagerBuilder.
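A minimal sketch of building a connection manager with the new builder is shown below; the limits used (20 per route, 100 total) are illustrative assumptions.

```java
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;

public class HttpClientConfigSketch {

    CloseableHttpClient buildClient() {
        // Per-route and total connection limits are illustrative values.
        PoolingHttpClientConnectionManager connectionManager =
                PoolingHttpClientConnectionManagerBuilder.create()
                        .setMaxConnPerRoute(20)
                        .setMaxConnTotal(100)
                        .build();

        return HttpClients.custom()
                .setConnectionManager(connectionManager)
                .build();
    }
}
```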
Spring Security Changes: The old approach of extending WebSecurityConfigurerAdapter for SecurityConfig is deprecated; the configuration now needs to be written in the new style described below.
Remove extends WebSecurityConfigurerAdapter
Replace @EnableGlobalMethodSecurity(prePostEnabled = true) with @EnableMethodSecurity
Replace the .antMatchers() method with .requestMatchers()
Replace the @Override on the configure method with a @Bean method that returns a SecurityFilterChain instance as the result of http.build()
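A minimal sketch of the new style is shown below; the CSRF setting and the permitted path pattern are illustrative assumptions and should mirror the rules from the module's original configure method.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableMethodSecurity
public class SecurityConfigSketch {

    // Replaces the overridden configure(HttpSecurity) method from WebSecurityConfigurerAdapter.
    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http.csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(auth -> auth
                    .requestMatchers("/actuator/**").permitAll()  // illustrative pattern
                    .anyRequest().authenticated());
        return http.build();
    }
}
```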
Spring Batch Migration: Spring Batch has been migrated to version 5.x. The following changes need to be applied:
DB Changes:
If a Spring datasource is already being used in the Spring Batch job application, the Batch Job-related tables must be created in that datasource (i.e., database) using the aforementioned DB script.
The existing Spring Batch Job tables have to be applied with the below upgrade script:
The rollback script for the above is given below:
POM Changes:
Add spring-boot-starter-batch if it does not already exist. The version will be 3.x if kernel-bom is used, as discussed in the previous sections.
Add the hibernate-validator dependency if it does not exist. The version will be 8.x if kernel-bom is used, as discussed in the previous sections.
Java Code changes:
Remove the @EnableBatchProcessing annotation.
The BatchConfigurer and DefaultBatchConfigurer classes are deprecated, and any references to them must be removed. For example, in kernel-salt-generator, these classes were used to implement a Map-based Job Repository. However, since the Map-based Job Repository is no longer supported and database tables are now required, the only option is to remove references to these classes and create the necessary batch job tables.
JobBuilderFactory and StepBuilderFactory are deprecated. Instead, use JobBuilder and StepBuilder.
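A minimal sketch of the new builders is shown below; the job and step names, chunk size, and the reader/processor/writer beans are placeholders for illustration.

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class BatchConfigSketch {

    // JobBuilder replaces JobBuilderFactory; the JobRepository is passed explicitly.
    @Bean
    public Job sampleJob(JobRepository jobRepository, Step sampleStep) {
        return new JobBuilder("sampleJob", jobRepository)
                .start(sampleStep)
                .build();
    }

    // StepBuilder replaces StepBuilderFactory; the transaction manager is passed to chunk().
    @Bean
    public Step sampleStep(JobRepository jobRepository,
                           PlatformTransactionManager transactionManager,
                           ItemReader<String> reader,
                           ItemProcessor<String, String> processor,
                           ItemWriter<String> writer) {
        return new StepBuilder("sampleStep", jobRepository)
                .<String, String>chunk(10, transactionManager)
                .reader(reader)
                .processor(processor)
                .writer(writer)
                .build();
    }
}
```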
Since JobExecutionListenerSupport is deprecated in favor of the JobExecutionListener interface, update the Batch Job Listener class to implement JobExecutionListener instead of extending JobExecutionListenerSupport.
The write method parameter in the org.springframework.batch.item.ItemWriter interface has been changed from List to Chunk.
Key Changes:
To obtain a stream from a Chunk, use the StreamSupport utility class as follows:
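A short sketch, assuming an ItemWriter of String items:

```java
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemWriter;

public class ChunkWriterSketch implements ItemWriter<String> {

    @Override
    public void write(Chunk<? extends String> chunk) throws Exception {
        // Chunk is Iterable, so StreamSupport can turn it into a Stream.
        String joined = StreamSupport.stream(chunk.spliterator(), false)
                .collect(Collectors.joining(","));
        System.out.println("Writing chunk: " + joined);
    }
}
```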
Similar to the List.of() method used for creating a List, you can use the Chunk.of() method to create a Chunk.
References:
In @RestControllerAdvice, if the response content type is not explicitly set to application/json, it might default to returning an XML response. To prevent this, ensure the contentType is specified in the ResponseEntity as shown below:
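A minimal sketch of an exception handler that pins the content type to JSON; the exception type and response body are illustrative.

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class ApiExceptionHandlerSketch {

    @ExceptionHandler(RuntimeException.class)  // illustrative exception type
    public ResponseEntity<String> handleRuntimeException(RuntimeException ex) {
        // Explicitly set the content type so the response is serialized as JSON, not XML.
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .contentType(MediaType.APPLICATION_JSON)
                .body("{\"errorMessage\": \"" + ex.getMessage() + "\"}");
    }
}
```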
In org.apache.commons.lang3.time.DateUtils, null parameters now throw a NullPointerException instead of an IllegalArgumentException. To prevent breaking functionality, handle the exception as shown below.
The impact is observed in utility methods for the following classes:
You can handle the NullPointerException appropriately by adding a null check or by using a try-catch block.
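A short sketch of both options, using DateUtils.parseDate as an illustrative call:

```java
import java.text.ParseException;
import java.util.Date;

import org.apache.commons.lang3.time.DateUtils;

public class DateParsingSketch {

    public static Date parseOrNull(String value) {
        // Option 1: guard with a null check before calling DateUtils.
        if (value == null) {
            return null;
        }
        try {
            return DateUtils.parseDate(value, "yyyy-MM-dd");
        } catch (NullPointerException | ParseException e) {
            // Option 2: catch the NullPointerException (and parse failures) explicitly.
            return null;
        }
    }
}
```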
In a JPA repository, for non-native query methods, the column names should be based on the entity's field names rather than the actual column names. For example, if an entity has a column lang_code mapped to a field private String langCode, the non-native query should use langCode instead of lang_code.
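A minimal sketch, assuming a hypothetical Template entity with a langCode field mapped to the lang_code column:

```java
import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

// Template and its id type are assumptions for illustration.
public interface TemplateRepositorySketch extends CrudRepository<Template, String> {

    // JPQL (non-native) query: refer to the entity field name langCode, not the column lang_code.
    @Query("SELECT t FROM Template t WHERE t.langCode = :langCode")
    List<Template> findByLangCode(@Param("langCode") String langCode);
}
```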
Since Joda-Time is deprecated, the related date formatting classes have been removed from spring-context, which may cause compatibility issues. Therefore, replace Joda-Time-based logic with java.time-based logic to ensure compatibility and maintainability.
Hibernate CriteriaBuilder: Hibernate's CriteriaBuilder can no longer be reused to create multiple CriteriaQuery instances through the createQuery method on the same builder instance. Each additional CriteriaQuery must be associated with a new CriteriaBuilder instance; otherwise Hibernate throws java.lang.IllegalArgumentException: Already registered a copy: SqmBasicValuedSimplePath.
JPA Naming Strategy: The SpringPhysicalNamingStrategy class is no longer available in the latest Spring Boot jar. Therefore, any JPA configuration related to it should refer to the class org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl instead.
For example:
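The exact property key depends on how the module wires Hibernate, so treat the following as a sketch of the two common forms:

```properties
# Spring Boot style
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
# Plain Hibernate/JPA property style
hibernate.physical_naming_strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
```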
If component scanning does not work with the scanBasePackages attribute within the @SpringBootApplication annotation, move that definition inside the @ComponentScan annotation's basePackages attribute. The same applies to any exclusions. When using the exclusion attribute in the @ComponentScan annotation, apply the AspectJ filter type to conveniently exclude a list of packages.
Spring Boot Test Error Fix:
If you encounter the following error:
Add the exclusion in the Spring Boot Test application as shown below:
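One common sketch is to exclude DataSourceAutoConfiguration on the application (or test application) class; the class name here is a placeholder.

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// Exclude DataSource auto-configuration so the test context does not require a database.
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class TestBootApplicationSketch {
}
```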
This will exclude DataSourceAutoConfiguration.class, resolving the issue.
Mockito-related errors: The Mockito dependencies in the spring-boot-bom were incorrect, but this has now been corrected in kernel-bom. The Mockito version is now correctly set to 3.4.3.
Power-Mockito related issues:
a. Mockito Core Version Issue:
If you encounter the following error:
Solution: Ensure that the mockito-core version is set to 3.4.3.
b. Maven Build Access Issues:
If the build fails due to access-related issues, add the following argLine configuration in the maven-surefire-plugin section of the pom.xml:
For example, if you are facing the following error:
then add the below command to the argument line.
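The module that needs opening depends on the error reported; the sketch below shows a typical surefire configuration, where the --add-opens value is an assumption to be adapted to the package named in the failure.

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- Open the JDK package reported in the access error to unnamed modules (illustrative value) -->
        <argLine>--add-opens java.base/java.lang=ALL-UNNAMED</argLine>
    </configuration>
</plugin>
```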
If you encounter the PBKDF2WithHmacSHA256 algorithm not supported error, add javax.crypto.* to @PowerMockIgnore as shown below:
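A sketch of the annotation on a PowerMock-based test class; the additional ignored packages are common additions and are illustrative.

```java
import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PowerMockIgnore;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
// javax.crypto.* is the entry that fixes the PBKDF2 error; the others are illustrative.
@PowerMockIgnore({"javax.crypto.*", "javax.net.ssl.*", "javax.security.*"})
public class CryptoServiceTestSketch {
    // @PrepareForTest(...) and test methods as needed
}
```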
If you get the error PowerMockitoInjectingAnnotationEngine.injectMocks() is not accessible because it is a private method, use mockito-core version 3.11.2 in that module specifically.
Test Security Config: To configure test security, a SecurityFilterChain bean is required. The bean can be implemented as shown below:
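A minimal permit-all sketch for tests; whether to disable CSRF and which requests to permit are assumptions that depend on the module's tests.

```java
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@TestConfiguration
public class TestSecurityConfigSketch {

    // Permit every request during JUnit tests so controllers can be exercised without authentication.
    @Bean
    public SecurityFilterChain testSecurityFilterChain(HttpSecurity http) throws Exception {
        http.csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(auth -> auth.anyRequest().permitAll());
        return http.build();
    }
}
```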
If you encounter the following error:
Use the -e flag with the mvn clean install command to enable stack trace printing:
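For example, the command below runs the build with error stack traces enabled:

```bash
mvn clean install -e
```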
After running the command, scroll to the top of the errors to locate the real root cause, as this error is not the original issue.
Dependency Version Issue: In kernel-pdfgenerator-itext, downgrade the itext-core version from 7.2.0 to 7.1.0 to fix test case issues.
Error: MockMvc - 401 or 403 Status
If using mockMvc results in a 401 or 403 status as described above, check the TestSecurityConfig to ensure that the URLs are permitted. Configure it to allow all requests during JUnit tests, as in the Test Security Config sketch shown earlier.
Component Scanning Fix: If component scanning does not work with the scanBasePackages attribute within the @SpringBootApplication annotation, follow these steps:
Move the scanBasePackages definition to the @ComponentScan annotation's basePackages attribute.
The same approach applies to any exclusions.
When using the exclusion attribute in the @ComponentScan annotation, use the ASPECTJ filter type to conveniently exclude specific packages.
Example:
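A sketch of the resulting application class; the package names and the excluded pattern are placeholders.

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;

@SpringBootApplication
@ComponentScan(
        basePackages = {"io.mosip.example"},  // placeholder base package
        excludeFilters = @ComponentScan.Filter(
                type = FilterType.ASPECTJ,
                pattern = {"io.mosip.example.legacy..*"}))  // placeholder exclusion pattern
public class ExampleApplicationSketch {
}
```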
Test Properties Not Loaded: If JUnit tests are not loading the test properties, ensure that the following annotation is added to the test class:
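For example, on the test class (the properties file name is an assumption):

```java
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.TestPropertySource;

@SpringBootTest
@TestPropertySource("classpath:application.properties")  // point to the test properties file
public class ExampleServiceTestSketch {
    // test methods ...
}
```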
Replace application.properties with the correct properties file name, if different.
Note: Avoid Mixing JUnit 4 and JUnit 5
While this may not be directly related to Java migration, it is strongly recommended to use either JUnit 4 or JUnit 5 consistently throughout your test classes. Mixing the two versions can cause compatibility issues, leading to test failures or unexpected behavior.
Key Differences and Equivalents:
Assertions
JUnit 4: org.junit.Assert
JUnit 5: org.junit.jupiter.api.Assertions
Lifecycle Annotations
JUnit 4: @Before
JUnit 5: @BeforeEach
1. Explicitly Configuring Ant Path Matcher
In Spring MVC, the default path-matching strategy has changed to PathPatternParser. This can cause failures when processing Ant-style patterns. To avoid such issues, explicitly configure the application to use AntPathMatcher by setting the appropriate property in your configuration file.
For example:
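A typical property setting in the module's application/config properties:

```properties
# Fall back to the Ant-style path matcher instead of PathPatternParser
spring.mvc.pathmatch.matching-strategy=ant_path_matcher
```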
2. Updated HTTP Header Size Property
The server.max-http-header-size property is now deprecated. Use server.max-http-request-header-size instead to configure the maximum HTTP request header size.
For example:
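A sketch; the 10KB limit is an illustrative value.

```properties
# Replaces the deprecated server.max-http-header-size property
server.max-http-request-header-size=10KB
```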
The following steps can be followed to configure Artifactory during the Java migration for the modules:
If a migrated dynamic dependency needs to be loaded from Artifactory (either downloaded directly as a JAR file or packaged in a ZIP file), add it as a new entry in the Artifactory pom.xml, with a version and path that differ from the existing entry.
The idea is to leave the existing artifacts unchanged so that the existing services remain unaffected, and to add only new entries that are differentiated by the new version in the JAR file or the containing folder. Once the migration of all modules is complete, the old version artifact entries can be removed.
While deploying a migrated service, it is essential to update the Docker run command or Helm chart configuration to include the appropriate argument for loading the newer version of the dynamic library from the Artifactory server.
The following updates are done in the docker file.
If the new branch develop-java21 is not included in the GitHub workflow push trigger, ensure that it is added.
Key Updates:
In the push trigger, the Java version was updated from java-version: 11 → java-version: 21
Please refer to the changes made for running the audit service in the following pull requests (PRs):
While these changes apply generally to most modules, specific dependencies or code used by other modules may require additional, module-specific modifications.
Perform the following additional steps if you encounter the issue mentioned below during migration:
Interceptor Issue: An empty interceptor does not work with Hibernate 6. To resolve this, you need to implement the Interceptor interface.
As part of the Java upgrade from an older version, the above method is deprecated. Therefore, use the following method instead:
The registration-client internally uses Derby as the local database. As part of the migration to Java 21, the Derby dialect has been updated from DerbyTenSevenDialect to DerbyDialect to ensure compatibility.
During the migration, issues were observed when converting request and response objects to String or Map with registration-processor APIs. To address this:
The requestType for some APIs was explicitly set to String.
The response is now received as an Object and converted to a Map to handle exceptions.
Due to this change:
A non-migrated registration-client will not work with a migrated registration-processor.
However, a migrated registration-client will work with a non-migrated registration-processor, or both modules must be migrated together.
As part of the Java migration, JavaFX has been upgraded to version 21.0.3 to support the latest features. When setting up the Java 21-migrated registration-client repository in your IDE, you will need to:
Download the required JavaFX ZIP file.
While running or debugging the Initialization.java class to start the registration-client application, certain VM arguments are passed to ensure the application runs correctly. One of these arguments specifies the path to the OpenJFX ZIP file; update this path in the VM arguments to point to the latest JavaFX ZIP file downloaded in the previous step. Additionally, a few changes have been made to the existing VM arguments to support Java 21.
The updated VM arguments are listed below:
The base image in the Dockerfile has been updated to mosipdev/openjdk-21-jdk:latest to support Java 21.
The registration-api-stub-impl JAR dependency has been added to the Artifactory POM file. During the deployment of the registration client, this dependency is pulled from Artifactory and bundled with other JAR files in the lib folder. This approach avoids adding the dependency directly to the registration-services POM file. If custom implementations related to document scanning or geo-positioning are required, they can be pulled from Artifactory and bundled without modifying the registration-client codebase.
Since JavaFX has been migrated to version 21.0.3, the JavaFX-related files, specifically zulu21.34.19-ca-fx-jre21.0.3-win_x64.zip, have been added to Artifactory. This ZIP file is used when preparing the registration-client downloadable ZIP file.
Modifications have been made to the configure.sh script to support the above two changes.
Please refer to the below points for additional steps for the deployment of the pre-registration module:
Pre-registration Batch Job Upgrade Scripts for PostgreSQL DB:
Run the scripts in the respective pre-registration batch job tables.
Pre-reg Service Migration:
Java 21 Jars:
Note:
Keycloak Role Removal: Please remove the INDIVIDUAL role from the mosip_prereg_client Available Roles in Keycloak.
Please refer to the following points while migrating the Web-Sub Java version:
The Web-Sub repository contains a Java-based module called kafka-admin-client.
As part of the migration process, the websub/kafka-admin-client module has been updated to Java 17, the latest version supported by Ballerina.
Additionally, Ballerina has been upgraded to the latest version, 2201.9.0 (Swan Lake Update 9), to ensure compatibility with Java 17. This upgrade enables the module to leverage the new features and improvements introduced in both Java 17 and the updated Ballerina version.
The Pre-Registration UI-spec file pre-registration-demographic.json was previously included in the mosip-config repository in version 1.1.5.*, but starting from version 1.2.0, it should be manually published using the master data UI-spec API.
Go to the Swagger clientIdSecretKey endpoint to get the authentication token.
Go to the Swagger defineUISpec endpoint to define the new UI specifications.
Go to publishUISpec to publish the newly defined UI spec.
Once done, check the master.ui_spec table.
The following new attributes have been added:
subType (optional - for dynamic dropdowns)
transliteration (mandatory to enable transliteration)
locationHierarchyLevel (mandatory to be added in each location dropdown to indicate the location hierarchy level)
parentLocCode (mandatory to be added in the topmost dropdown in the location hierarchy to indicate the parent for it. It can also be omitted, in which case the mosip.country.code property will be used)
gender (the attribute should be mandatory, and the parameter required should be true)
The control type for the date of birth should be changed to ageDate
The labelName should be provided with the languageCode as the key and the label as the value. Example: {"labelName": { "eng": "Date Of Birth", "ara": "تاريخ الولادة", "fra": "Date de naissance" }}
visibleCondition (optional)
requiredCondition (optional)
alignmentGroup (optional)
containerStyle (optional)
headerStyle (optional)
changeAction (optional)
To facilitate packet reprocessing, MOSIP provides a Python script. This approach involves fetching all RIDs from the database using a query and processing them from the beginning. Please consult the documentation for . The query can be found in the file.
The default query reprocesses all packets that were not "PROCESSED" or "REJECTED" before migration. The query uses a limit of 1000 packets and a 1-second delay between each packet. This means that when the script is executed, it will reprocess 1000 packets one by one with a 1-second interval. These settings can be adjusted if necessary in the file:
Please refer to this to learn about the dependencies and versions. This file was created to remove repetitiveness in defining the dependencies. If you have other repositories with repeated dependency definitions, create a new BOM file for that specific repository that imports the kernel-bom in its dependencyManagement section, add the extra dependencies with appropriate versions, and then use that BOM file in the respective modules' pom files.
Any module that needs the predefined versions for any of its dependencies should import the kernel-bom file into the module pom file's dependencyManagement section. Remove all the versions from the properties and dependencies sections for which kernel-bom has already defined the version. Please refer to the example in the . (You will need to include the latest link once tagging is done). If a module does not need any version from the kernel-bom, it does not need to import it.
A module pom or bom file can import one or more bom files, and if there is a dependency referred from more than one bom file, it will be referred from the last bom file. Please refer .
Swagger UI update:
To upgrade to Swagger-UI version 3, make the following changes:
1. Update the Dependency
Add the springdoc-openapi-starter-webmvc-ui dependency to your pom.xml file.
Bouncy Castle Version update: Remove reference to an older version of the bouncycastle library and use the below one. The same is applied in kernel-bom and kernel-core. Please refer to the file.
Note: While doing a text replacement for the above packages, you might accidentally rename properties that share the same naming, for example javax.persistence.jdbc.driver. Please make sure this does not happen. Please refer to such fixes.
This required the addition of micrometer tracing dependencies and quartz scheduler dependency which is already included in
For detailed implementation and examples, refer to this .
Replace the existing deprecated way of HttpSecurity configurations with lambda based configuration. Please refer to this file present .
HttpStatusCodeException.getStatusCode() should be replaced with HttpStatus.valueOf(statusCodeException.getStatusCode().value())
Please refer to where all above stated changes have been made:
The existing bootstrap.properties file continues to function only if the spring-cloud-starter-bootstrap dependency is added. This dependency is currently included in kernel-core, as mentioned in the file. If the dependency is not added, you will need to use application.properties instead.
The Map-based job repository is now deprecated. Therefore, any application running a batch job must have a database with Batch Job-related tables. If a database is not feasible, at least an in-memory database must be configured. Refer to the for creating the tables.
Create a bean definition for PlatformTransactionManager. Please refer to the file; below is the relevant code snippet from the file:
For reference, please refer to the file.
For reference, see the relevant code in the file.
For reference, please refer to the file.
Spring Batch 5.0 Migration Guide:
In the org.apache.commons.lang3.StringUtils.join method, invalid arguments now throw an IllegalArgumentException instead of an ArrayIndexOutOfBoundsException. To prevent breaking functionality, handle this exception as shown in the .
Symmetric Algorithm AES/GCM/PKCS5Padding is no longer supported and it should be replaced with AES/GCM/NoPadding. Please refer to the CryptoUtil changes mentioned in the file.
JUnit Test Dependency: To run the JUnit tests, the following dependency is required and has now been added to kernel-core. For reference, see:
MVEL Dependency Update: If MVEL-related test cases are failing, update the MVEL dependency as per the file.
For reference, please refer to the file.
For reference, see the implementation in the file (lines 34-38)
Refer to the implementation here:
Refer to the implementation here:
To see the changes made please refer to .
For PR checks, any references to the kattu repository in the GitHub workflow YAML files should now point to the master-java21 branch.
Please refer to the related commit .
Migrate the pre-reg service from Java 11 to Java 21. For more details, refer to the in the MOSIP repository.
Java 21 JARs can be taken from .
The document provides details about all UI spec attributes. This document can be referred to in order to identify the changes between versions 1.1.5 and 1.2.0.1.
javax.servlet.* → jakarta.servlet.*
javax.annotation.* → jakarta.annotation.*
javax.activation.* → jakarta.activation.*
javax.persistence.* → jakarta.persistence.*
javax.validation.* → jakarta.validation.*
javax.mail.* → jakarta.mail.*
org.apache.http.impl.client.* → org.apache.hc.client5.http.impl.classic.*
org.apache.http.conn.ssl.SSLConnectionSocketFactory → org.apache.hc.client5.http.ssl.SSLConnectionSocketFactory
FROM openjdk:11 → FROM eclipse-temurin:21-jre-alpine
apt-get -y update → apk -q update
apt-get update -y → apk -q update
apt-get install -y → apk add -q
apk add -q unzip \ → apk add -q unzip wget \
apt-get -y install → apk add -q
groupadd -g ${container_user_gid} ${container_user_group} → addgroup -g ${container_user_gid} ${container_user_group}
useradd -u ${container_user_uid} -g ${container_user_group} -s /bin/sh -m ${container_user} → adduser -s /bin/sh -u ${container_user_uid} -G ${container_user_group} -h /home/${container_user} --disabled-password ${container_user}
ARG container_user_uid=1001 → ARG container_user_uid=1002
This document addresses how an error in the database upgrade script can be managed effectively.
If the error below is encountered while attempting to execute the upgrade script, it can be resolved by following the steps mentioned in this document.
Error message
The error message states that a unique index named cert_thumbprint_unique cannot be created due to a duplicate entry. Specifically, the values for the cert_thumbprint and partner_domain columns, which are (231bd472ab24ef60ec6*******2cace89c34, AUTH), already exist in the database. This duplicate entry violates the unique constraint defined for the ca_cert_store table in the mosip_keymanager database.
To address and successfully execute the DB upgrade script, the following steps can be taken:
Identify the duplicate entries in the ca_cert_store table of the mosip_keymanager database.
To accomplish this, use the provided SQL query:
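A minimal sketch of such a query, grouping on the two columns that form the unique index:

```sql
-- Find cert_thumbprint/partner_domain combinations that occur more than once in ca_cert_store
SELECT cert_thumbprint, partner_domain, COUNT(*) AS duplicate_count
FROM ca_cert_store
GROUP BY cert_thumbprint, partner_domain
HAVING COUNT(*) > 1;
```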
This query will retrieve the rows of data that contain duplicate entries.
As a precautionary measure, it is advisable to create a backup of all the duplicate values.
Remove the duplicate entries so that only one composite key remains. The aforementioned SQL script can be reused to verify that the duplicates have been successfully deleted. If the result is empty, then all duplicates have been removed.
By following these steps, the problem should be resolved, and the DB upgrade script can be executed without any further issues.