Sandbox Installer
Overview
The Sandbox is a safe environment isolated from the underlying host system. You can use the sandbox to execute files without worrying about malicious files or unstable programs impacting data on the system.
System Requirements
This section describes the configuration under which the Sandbox Installer has been tested:
Component | Number of VMs | Configuration | Persistence |
---|---|---|---|
Console | 1 | 4 vCPU*, 16 GB RAM | 128 GB SSD** |
K8s MZ master | 1 | 4 vCPU, 8 GB RAM | 32 GB |
K8s MZ workers | 9 | 4 vCPU, 16 GB RAM | 32 GB |
K8s DMZ master | 1 | 4 vCPU, 8 GB RAM | 32 GB |
K8s DMZ workers | 1 | 4 vCPU, 16 GB RAM | 32 GB |
*vCPU: Virtual CPU
** The console stores all persistent data under /srv/nfs. Recommended storage is SSD or another high-IOPS disk for better performance.
Introduction
The Multi-VM Sandbox Installer is a fully automated deployer that installs all MOSIP modules on a cluster of virtual machines, either on the cloud or on-premise.
Ansible scripts deploy MOSIP on a multi-virtual-machine (VM) setup; the sandbox can be used for development and testing.
Caution - The sandbox is not intended for serious pilots or production use. Also, do not run the sandbox with any confidential data.
Minibox
It is possible to bring up the MOSIP modules with fewer VMs, as below. Note that this may not be sufficient for load scenarios or multiple pod replication.
Component | Number Of VMs | Configuration | Persistence |
---|---|---|---|
Console | 1 | 4 vCPU*, 16 GB RAM | 128 GB SSD |
K8s MZ master | 1 | 4 vCPU, 8 GB RAM | 32 GB |
K8s MZ workers | 9 | 4 vCPU, 16 GB RAM | 32 GB |
K8s DMZ master | 1 | 4 vCPU, 8 GB RAM | 32 GB |
K8s DMZ workers | 1 | 4 vCPU, 16 GB RAM | 32 GB |
Preconditions
Terraform is a tool to develop, edit, and update infrastructure securely and efficiently. The initial infrastructure setup is achieved using the Terraform scripts available in `terraform/`. At present, only the AWS scripts are used and maintained.
It is strongly recommended that the scripts be analyzed in depth before running them.
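For example, with the AWS scripts (the directory layout is assumed; review the variables before applying):

```sh
cd terraform/aws
terraform init    # download providers
terraform plan    # review the planned changes
terraform apply   # create the VMs
```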
Virtual Machines (VMs) Setup
Before the MOSIP modules can be installed, the VMs need to be set up. Ensure that CentOS 7.8 is installed on all the machines, then:
- Create user `mosipuser` on the console machine with password-less `sudo su`.
- Set the hostname on all machines to match the hostnames in `hosts.ini`.
- Enable Internet access on all machines.
- Disable `firewalld` on all machines.
- Exchange `ssh` keys between the console machine and the K8s cluster machines so that ssh from the console machine is password-less (see the sketch below).
- Make the console machine available via a public domain name (e.g. sandbox.mycompany.com). This step can be skipped if you do not intend to access the sandbox externally.
- Ensure the date/time is in UTC on all machines.
- Open ports 80, 443, 30090 (postgres), 30616 (activemq), 53 (coredns) on the console machine for external access.
- Make sure your firewall doesn't block the UDP port(s).
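A minimal sketch of the ssh key exchange (the hostnames are illustrative; use the ones in your `hosts.ini`):

```sh
# On the console machine, as mosipuser: create a key pair if none exists
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa

# Copy the public key to every cluster machine
for host in mzmaster.sb mzworker0.sb dmzmaster.sb dmzworker0.sb; do
  ssh-copy-id "$host"
done
```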
Software Prerequisites
To ensure proper installation, install these pre-requisites manually:
- Git: install Git on your machine.
- Git clone: clone the repository in the user home directory and switch to the appropriate branch.
- Ansible: install Ansible and create shortcuts.
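A sketch of these steps (the repo URL and branch are illustrative; pick the release branch you need):

```sh
# Install git
sudo yum install -y git

# Clone the sandbox repo in the user home directory and switch branch
cd ~
git clone https://github.com/mosip/mosip-infra
cd mosip-infra/deployment/sandbox-v2
git checkout 1.1.3   # example release branch

# Install Ansible and the shortcut commands
./preinstall.sh
source ~/.bashrc
```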
Sandbox Architectural View
Installing MOSIP
This section helps you plan an installation of MOSIP suited to your environment. It is recommended that you analyze the scripts in depth before running them.
Site Settings
Update `hosts.ini` to suit your setup. Make sure the hostnames and IP addresses match those of your machines.
In `group_vars/all.yml`, change `sandbox_domain_name` to the domain name of the console machine.
By default, the installation scripts try to fetch a fresh SSL certificate from Letsencrypt for the above domain. If you already have one, set the following variables in `group_vars/all.yml`:
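A sketch of the relevant variables (the key names here are assumptions; check the comments in `group_vars/all.yml` for the exact keys):

```yaml
sandbox_domain_name: sandbox.mycompany.com
ssl:
  get_certificate: false               # skip Letsencrypt
  crt: /etc/ssl/certs/sandbox.crt      # your existing certificate
  key: /etc/ssl/private/sandbox.key    # and its private key
```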
Network Interface
The network interface connects a machine to a public or private network. If the interface on your cluster machines is anything other than "eth0", update it in `group_vars/k8s.yml`.
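For example (the key name is assumed for illustration):

```yaml
network_interface: "ens3"
```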
Ansible Vault
All the secrets (passwords) used in the automation are stored in the Ansible Vault file `secrets.yml`. The default password to access the file is 'foo'. It is recommended to change this password with the following command:
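Using the standard `ansible-vault` CLI:

```sh
ansible-vault rekey secrets.yml   # prompts for the old and the new vault password
```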
The contents of `secrets.yml` can be viewed and edited with the following commands:
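Both are standard `ansible-vault` subcommands:

```sh
ansible-vault view secrets.yml   # view the decrypted contents
ansible-vault edit secrets.yml   # edit in place
```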
Install MOSIP
Once the machines are set up, install the MOSIP modules using the command below.
If prompted for a password, enter the default vault password 'foo' to proceed with the installation.
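A sketch (the `an` shortcut is one of the shortcuts set up by `preinstall.sh`; the long form below is an assumption of what it expands to):

```sh
an site.yml
# roughly: ansible-playbook -i hosts.ini --ask-vault-pass site.yml
```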
MOSIP Configuration
This section describes how to configure MOSIP. The sandbox installs with a default general configuration; to configure MOSIP differently, refer to the following sections:
Domain Name System (DNS)
DNS translates human-readable domain names to machine-readable IP addresses. By default, a private DNS (CoreDNS) is installed on the console machine, and `/etc/resolv.conf` on all machines points to this DNS.
However, if you want to use a cloud provider's DNS (like Route53 on AWS), disable the installation of the private DNS by setting the following flag:
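For illustration (the exact flag name in `group_vars/all.yml` may differ):

```yaml
coredns:
  install: false   # assumed key; disables the private CoreDNS installation
```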
Ensure DNS routing is taken care of by your cloud deployment. For AWS, uncomment the Route53 code provided in the scripts.
The `coredns.yml` playbook configures CoreDNS and updates the `/etc/resolv.conf` file on all machines. If a machine is restarted, re-run the playbook to restore `/etc/resolv.conf`.
Local Docker Registry
This part describes how to host your own local Docker registry.
Local Registry on Console
Instead of using the default Docker Hub, you may run a local Docker registry. This is particularly useful when the Kubernetes cluster is sealed off from the Internet for protection. A sample Docker registry implementation ships with this sandbox; you can enable it with the following setting in `group_vars/all.yml`:
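For illustration (the key name is assumed):

```yaml
registry:
  local: true   # assumed key; enables the sample registry on the console
```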
Note that this registry runs on the console machine and can be accessed as `console.sb:5000`. Access is over http, not https.
Ensure that you pull all the required Docker images into this registry and update `versions.yml` accordingly.
Caution: If you delete/reset this registry or restart the console machine, all the registry contents will be lost and the images will have to be pulled again.
Additional Local Registries:
If you wish to have additional local registries for Docker images, list them here:
The list is necessary to ensure that http access from the cluster machines is allowed for the above registries.
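For illustration (the key name is assumed; these effectively become Docker "insecure registries" on the cluster machines):

```yaml
insecure_registries:
  - myregistry.internal:5000
```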
Private Dockers
A private registry requires credentials to pull images. If you are pulling private images from Docker Hub, provide the Docker Hub credentials in `secrets.yml` and set the corresponding flag in `group_vars/all.yml`. Update your Docker image versions in `versions.yml`.
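For illustration (the key names are assumed; the credentials themselves belong in `secrets.yml`):

```yaml
docker:
  private: true   # pull images from Docker Hub with credentials
```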
Sandbox Access
The default sandbox installation expects a public domain name that points to the console machine. However, if you want to access the sandbox only within your internal network (for example, via VPN), set the following in `group_vars/all.yml`:
A self-signed certificate is created and the sandbox access URL is `https://{{inventory_hostname}}`.
Secrets
All secrets are stored in `secrets.yml`. Edit the file and change all of the passwords for a secure sandbox. Defaults will do for development and testing, but be aware that the sandbox is not secure with default passwords. To edit `secrets.yml`, use `ansible-vault edit` as described above.
If you update the Postgres passwords, you will also need to update their ciphers in the property files; see the section below on Config Server. All the passwords used in `.properties` files have also been added to `secrets.yml` so that the plain-text passwords can be looked up - some of them purely for information.
Caution: Make sure `secrets.yml` is updated whenever you change any password in `.properties`.
Config Server
Config Server is one of the more popular centralized configuration servers used in microservice-based applications. Configurations for all modules are defined in property files located in a GitHub repository; for this sandbox, the properties are located in the sandbox folder at https://github.com/mosip/mosip-config.
You can have your own repository with a folder containing the property files. The repo may be private on GitHub. In `group_vars/all.yml`, configure the following parameters (example below). If `private: true`, update your GitHub username in `group_vars/all.yml` and set the password in `secrets.yml`:
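A sketch of the parameters (the key names are assumptions; match them against the comments in `group_vars/all.yml`):

```yaml
config_repo:
  uri: https://github.com/mosip/mosip-config
  version: master
  search_folders: sandbox
  private: true
  username: mygithubuser   # the password goes in secrets.yml
```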
If the local git repo option is enabled, the repo is cloned to the NFS-mounted folder and the Config Server pulls the properties locally. This option is useful if the sandbox must be secured without Internet access. Check any changes into the local git repo; remember, however, that you will have to push them manually if you want the changes reflected in the parent GitHub repo. There is no need to restart the config-server pod when making changes to the configuration repo.
If you have updated the default passwords in `secrets.yml`, create the corresponding password ciphers and update them in the changed property files. After the Config Server is up, the ciphers can be created from the console machine using the following curl command:
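A sketch using Spring Cloud Config Server's standard `/encrypt` endpoint (the host and path are placeholders; use your sandbox's Config Server URL):

```sh
curl -X POST http://<config-server-host>/encrypt -d 'my-new-password'
```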
The above command connects to the Config Server pod on the MZ cluster. You may also encrypt all the secrets at once with a script, as follows:
Several secrets from Ansible's `secrets.yml` are needed in the Config Server property files. To avoid plain-text secrets in the properties, they are encrypted with Config Server encryption using the command above. The script here converts all the secrets in `secrets.yml` using that command, implemented in Python.
Prerequisites:
- Install the required modules.
- Ensure the Config Server is running.
- Set the server URL in `config.py`.
- If the server URL uses HTTPS with a self-signed certificate, set the corresponding SSL flag in `config.py`.
Run the following command:
In this sandbox, `secrets_file_path` is `/home/mosipuser/mosip-infra/deployment/sandbox-v2/secrets.yml`. Output is saved in `out.yaml`.
Pre-Reg Captcha
Captcha protects your website from fraud and abuse. It uses an advanced risk-analysis engine and adaptive challenges to keep malicious software at bay.
If you would like to enable Captcha for the Pre-Reg UI, get a Captcha for the sandbox domain from the Google reCAPTCHA Admin console: get reCAPTCHA v2 keys for "I'm not a robot". Set the Captcha as:
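For illustration (the property names here are hypothetical; check the Pre-Reg property file for the real keys):

```properties
mosip.preregistration.captcha.enable=true
mosip.preregistration.captcha.sitekey=<your reCAPTCHA site key>
mosip.preregistration.captcha.secretkey=<your reCAPTCHA secret key>
```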
OTP Setting
To receive OTPs (one-time passwords) over email and SMS, set the properties as below:
SMS
File:
kernel-mz.properties
Properties:
kernel.sms
Email
File:
kernel-mz.properties
Properties
If you do not have access to email and SMS gateways, you may want to run MOSIP in Proxy OTP mode, in which case you can skip the above settings.
To run MOSIP in Proxy OTP mode, set the following:
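A sketch (the property names here are hypothetical; look for the proxy OTP flags in `kernel-mz.properties`):

```properties
mosip.kernel.sms.proxy-sms=true    # do not call a real SMS gateway
mosip.kernel.auth.proxy-otp=true   # accept the fixed default OTP
```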
Note : The default OTP is set to 111111.
Master Data
Before you start installing the sandbox, load country-specific master data:
- Ensure the Master Data `.csv` files are available in a folder, say `my_dml`.
- Add the following line in `group_vars/all.yml` -> `databases` -> `mosip_master`:
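For illustration (the key name is assumed):

```yaml
databases:
  mosip_master:
    dml_path: /home/mosipuser/my_dml   # folder containing your CSV files
```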
Pod Replication
For production setups, you may want to replicate pods beyond the default replication factor of 1. Update `podconfig.yml` accordingly. A separate production file can be created and pointed to from `group_vars/all.yml` -> `podconfig_file`.
Taints
A taint lets a node refuse pods unless a pod carries a matching toleration; Kubernetes offers taints to run a pod exclusively on a node. This is especially useful during performance tests where you would like to dedicate nodes to non-MOSIP components.
By default, no taints are added in the sandbox. Provisions to enable taints are available for the following modules:
Postgres
Minio
HDFS
Set the following in `group_vars/all.yml` to enable a taint (example below). The node is the machine on which you would like to exclusively run the module.
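For illustration (the key names are assumed):

```yaml
postgres:
  taint:
    enabled: true
    node: mzworker5.sb   # the node dedicated to Postgres
```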
Ensure the above setting is done before you install the sandbox.
TPM for Reg Client
By default, the sandbox installs the Reg Client Downloader with the Trusted Platform Module (TPM) disabled.
Reg Client Downloader:
Convert the helm template to helm values:
To enable TPM, so that the trusted private/public keys of the Reg Client machine are used, do the following:
Update the Reg Client Downloader TPM environment variable:
If you did the above before installing the sandbox, you may skip this step. Otherwise, if the Reg Client Downloader is already running on your sandbox, delete it and restart it as follows:
(Wait for all resources to get terminated)
Add the name and public key of the Reg Client machine to the `machine_master` and `machine_master_h` tables of the `mosip_master` DB. You can get your machine's public key using the TPM utility below.
A utility to obtain the TPM public keys along with the machine name:
Prerequisites:
Build:
Run:
(Use the jar-with-dependencies found under the target folder)
Machine Master Table:
The publicKey, signingPublicKey, keyIndex and signingKeyIndex must all be populated in the `machine_master` table of the `mosip_master` DB.
Download the Reg Client from https://{sandbox domain name}/registration-client/1.1.3/reg-client.zip
Configure Pre-Reg for ID Schema
The sandbox comes with a default ID Schema (in the Master DB, `identity_schema` table) and Pre-Reg UI Schema `pre-registration-demographic.json`. In order to use different schemas, do the following:
- Ensure the new ID Schema is updated in the Master DB, `identity_schema` table.
- Replace `mosip-config/sandbox/pre-registration-demographic.json` with the new Pre-Reg UI Schema.
- Map values in `pre-registration-identity-mapping.json` to `pre-registration-demographic.json` as below:
- Update the following properties in `pre-registration-mz.properties`:
  preregistration.identity.name=<identity.name.value (above)>
  preregistration.notification.nameFormat=<identity.name.value>
- Restart the Pre-Reg Application service.
Registration Client with Mock MDS and Mock SDK
Download Reg Client:
- Download the zip file from:
- Unzip the file and launch the Reg Client by running `run.bat`.
- The Reg Client will generate public/private keys in the following folder:
- You will need the public key and key index mentioned in `readme.txt` in a later step, to update the Master DB.
Run MDS:
- Run the mock MDS as per the procedure given here: Mock MDS
- Pick up the device details from this repo; you will need them for the device info updates in a later step.
Add Users in Keycloak:
- Make sure the Keycloak admin credentials are updated in `config.py`.
- Add users such as registration officers and supervisors, with their roles, in `csv/keycloak_users.csv`.
- Run:
Update Master Data:
Update the following CSVs in the Master DB DML directory. On the sandbox, the DMLs are located at `/home/mosipuser/mosip-infra/deployment/sandbox-v2/tmp/commons/db-scripts/mosip-master/dml`:
master-device_type.csv
master-device_spec.csv
master-device_master.csv
master-device_master_h.csv
master-machine_master.csv
master-machine_master_h.csv
master-user_detail.csv
master-user_detail_h.csv
master-zone_user.csv
master-zone_user_h.csv
Run the Master DB update script. Example:
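A sketch (the script name `update_masterdb.sh` is an assumption, by analogy with `update_pmsdb.sh` below):

```sh
./update_masterdb.sh /home/mosipuser/mosip-infra/deployment/sandbox-v2/tmp/commons/db-scripts/mosip-master/dml
```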
CAUTION: The above will reset the entire DB and load it fresh.
You may want to maintain the DML directory separately in your repo
It is assumed that all other tables of master DB are already updated
Device Provider Partner Registration:
Update the following CSVs in the PMS DML directory. On the sandbox, the DMLs are located at `/home/mosipuser/mosip-infra/deployment/sandbox-v2/tmp/partner-management-services/db_scripts/mosip_pms/dml`:
pms-partner.csv
pms-partner_h.csv
pms-policy_group.csv
Run `update_pmsdb.sh`. Example:
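For example, with the DML path from above:

```sh
./update_pmsdb.sh /home/mosipuser/mosip-infra/deployment/sandbox-v2/tmp/partner-management-services/db_scripts/mosip_pms/dml
```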
CAUTION: The above will reset the entire DB and load it fresh.
Some example CSVs are located at
csv/regdevice
IDA Check:
Disable the IDA check in `registration-mz.properties`:
Launch Reg Client:
- Set the environment variable `mosip.hostname` to {sandbox domain name}.
- Login as a user (e.g. 110011) with password (MOSIP) to log into the client.
Integrations
Guide to Work with Real HSM
Introduction:
The default sandbox uses an HSM simulator called SoftHSM. To connect a real HSM, you need to do the following:
- Create `client.zip`
- Update MOSIP properties
- Point MOSIP services to the HSM
client.zip:
The HSM is connected to over the network. `client.zip` is a self-contained package of the PKCS11 client needed to talk to the HSM; the HSM vendor should provide this library. The zip is pulled from the Artifactory when the Dockers launch, unzipped, and its `install.sh` executed.
The zip must fulfil the following:
- Contain an `install.sh`
- Be available in the Artifactory
install.sh
This script must fulfil the following:
- Have executable permission
- Set up everything needed to connect to the HSM
- Be able to run inside Dockers based on Debian, inherited from the OpenJDK Dockers
- Place the HSM client configuration file at `mosip.kernel.keymanager.softhsm.config-path` (see below)
- Not set any environment variables; if needed, these should be passed while running the MOSIP service Dockers
Properties:
Update the following properties in Kernel and IDA property files:
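For illustration (the `config-path` property is referenced above; the keystore password key and value are assumptions):

```properties
mosip.kernel.keymanager.softhsm.config-path=/config/hsm_client.conf
mosip.kernel.keymanager.softhsm.keystore-pass={cipher}<encrypted-password>
```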
Ensure you restart the services after this change.
Caution: The password is highly critical. Use a really strong password and encrypt it using Config Server encryption. In addition, access to the Config Server should be very tightly regulated.
Artifactory:
Artifactory runs as a Docker in the sandbox and is accessed via a service. Replace the `client.zip` inside that Docker. The changed Docker image can be uploaded to your own registry for subsequent use.
HSM URL
The HSM is used by the Kernel and IDA services. Point the TCP URL of these services to the new HSM host and port:
The above parameter is available in the Helm Chart of respective service.
Integrating Antivirus Scanner
In MOSIP, virus scanners can be integrated at different levels. By default, ClamAV is used as the antivirus scanner. If you want to integrate your own antivirus (AV), it can be achieved as follows:
Registration Client
Running your AV on the Registration Client machine is sufficient; no integration with MOSIP is required.
Server
Server-side scanning is implemented as part of the Kernel ClamAV project. MOSIP uses this project to scan registration packets. You may integrate your AV in one of the following ways:
Option 1
The registration packets are stored in Minio. Several AVs provide inline analysis of network traffic to defend against threats. Such a network-based integration can be carried out without any alteration of the MOSIP code, but careful network configuration is required to ensure the traffic passes through your AV.
Option 2
To integrate your AV at the code level, the following Java code has to be altered: in `VirusScannerImpl.java`, the `scanFile`/`scanFolder`/`scanDocument` APIs must be implemented with your AV SDK.
BioSDK Integration
The BioSDK library is integrated in `reg client`, `reg proc`, and `ida`. This guide offers the steps to enable these integrations.
Integration with IDA
For IDA, the BioSDK is expected to be available as an HTTP service, which the ID Authentication module calls. To build such a service, refer to the reference implementation: `/service` contains the service code, while `/client` contains the client code that is bundled with IDA and connects to the service. The service can run as a pod within the Kubernetes cluster or be hosted outside it. The client code must be compiled into `biosdk.zip` and copied to the Artifactory; it is currently available at `/artifactory/libs-release-local/biosdk/mock/0.9/biosdk.zip`. This zip is downloaded by the IDA Dockers and installed during Docker startup.
Integration with Reg Proc
The above service works for `regproc` as well.
Integration of External Postgres DB
Sandbox Parameters
Make sure the Postgres time zone is configured as 'UTC'. This is set in `postgresql.conf` when you install Postgres.
Integration with External Print Service
Introduction
MOSIP provides a reference implementation of print service that interfaces with the MOSIP system.
Integration Steps
Ensure the following:
- Websub runs at `https://{sandbox domain name}/websub` on MOSIP and is accessible externally via Nginx. In the sandbox, Websub runs on DMZ and Nginx is configured for this access.
- Your service is able to register a topic with Websub, along with a callback URL.
- The callback URL is accessible from the MOSIP Websub.
- The print policy has been created (be careful about enabled/disabled encryption).
- The print partner has been created and its certs uploaded.
- The private key and certificate of the print partner are converted to p12 keystore format. You may use the command below.
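For instance, with openssl (the file names are illustrative):

```sh
openssl pkcs12 -export -in print_partner.crt -inkey print_partner.key \
  -out print_partner.p12 -name printpartner
# you will be prompted for an export password; use it in your print service
```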
- This p12 keystore and its password are used in your print service.
- Your print service reads the relevant (expected) fields from the received credentials.
- Your print service updates the MOSIP data share service after successfully reading the credentials.
Dashboards Guide
This guide includes numerous tips for using various dashboards made available as part of the default installation of the sandbox. The links to various dashboards are available at:
Kibana
A default dashboard to display the logs of all MOSIP services is installed as part of the sandbox installation. To view the Dashboard:
Go to Kibana Home
From the dropdown on the top left, select Kibana -> Dashboard
In the list of dashboards search for "MOSIP Service Logs"
Select the dashboard
Kubernetes Dashboard
Dashboard links:
MZ: https://{sandbox domain name}/mz-dashboard
DMZ: https://{sandbox domain name}/dmz-dashboard
On the console machine, the tokens for the above dashboards are available under `/home/mosipuser/mosip-infra/deployment/sandbox-v2/tmp`. For each dashboard, two tokens are created: admin and view-only. The view-only token has restricted privileges.
Grafana
Link: https://{sandbox domain name}/grafana
Recommended charts:
11074 (for node level stats)
4784 (for container level stats)
Admin
Open the MOSIP Admin portal from the home page of the sandbox. Log in with username superAdmin and password MOSIP.
Sanity Checks
Sanity check procedures verify that an installation is ready to be tested by a system administrator. In quality audits, a sanity check is considered a major activity: it is a quick test of the main functionality of the software.
Checks while Deployment
During deployment, all pods should be 'green' on the Kubernetes dashboard; on the command line, pods should display 1/1 or 2/2 state. Pods that show status 0/1 Complete are Kubernetes jobs - they will not turn 1/1.
Note the following namespaces:
Module | Namespace |
---|---|
MOSIP modules | default |
Kubernetes dashboard | kubernetes-dashboard |
Grafana | monitoring |
Prometheus | monitoring |
Filebeat | monitoring |
Ingress Controller | ingress-nginx |
To check the pods in a particular namespace, for example:
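Using the namespaces from the table above:

```sh
kubectl -n default get pods      # MOSIP modules
kubectl -n monitoring get pods   # Grafana, Prometheus, Filebeat
```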
If any pod stays at 0/1, the Helm install command times out after 20 minutes.
Following are some useful commands:
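For instance, with standard kubectl:

```sh
kubectl get pods --all-namespaces          # overview of all pods
kubectl describe pod <pod-name>            # details and events of a pod
kubectl logs <pod-name>                    # service logs
kubectl logs <pod-name> -c logger-sidecar  # application logs, where available
```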
Some pods have logs available in logger-sidecar as well. These are application logs.
To re-run a module, delete it with helm and then run its playbook again. Example:
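For illustration (the release and playbook names are assumed):

```sh
helm delete prereg
an playbooks/prereg.yml
```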
Module Basic Sanity Checks
Quick Sanity Check of Pre-Registration
Open Pre-Reg home page:
https://{sandbox domain name}/pre-registration-ui/
Enter your email or phone number to create an account
Enter the OTP you received via email/SMS, or enter 111111 in Proxy OTP mode
Accept the Terms and Conditions, fill in the demographic data and CONTINUE
Enter your DOB or age
Select any of the Region, Province, City, Zone from the dropdown
Select any pin code from the dropdown
Phone number should be 10 digits and must not start with 0
CONTINUE after uploading required document of given size and type or skip the document upload process. (Recommended: upload any one document for testing purposes.)
Verify the demographic data and document uploaded previously and CONTINUE. You may edit with BACK if required
Choose any of the recommended Registration Centres and CONTINUE
Select date and time-slot for Registration and add it to Available Applicants by clicking on + and CONTINUE
Now your first Appointment booking is done. You may view or modify your application in Your Application section
Registration Processor Test Packet Uploader
Prerequisites
Auth Partner Onboarding
Packet Creation
Refer to the notes in `config.py` and `data/packet*/ptkconf.py` for the various parameters of a packet. The parameters must match records in the Master DB.
The following example packets are provided; all are for new registration:
Packet1: Individual 1 biometrics, no operator biometrics
Packet2: Individual 2 biometrics, different from above, no operator biometrics
Packet3: Individual 2 biometrics with operator biometrics of Individual 1
Clearing the DB
This is optional. To see your packet's progress clearly, you may want to clear all records of previous packets from the `mosip_regprc` tables:
Provide your `postgres` password.
Caution: Ensure that you want to clear the DB. Delete this script if you are in production setup.
Upload Registration Packet
Use the following command:
Verify
Verify the transactions as below:
- Provide the `postgres` password. Note that it may take several seconds for a packet to go through all the stages. You must see SUCCESS for all stages.
- A UIN should have been generated.
- The latest transaction must be seen in the `credential_transaction` table of the `mosip_credential` DB.
- Further, the `identity_cache` table of the `mosip_ida` DB should have fresh entries corresponding to the timestamp of the generated UIN.
Reset
Before resetting the installation, ensure you have a recent backup of the clusters and persistent data. A reset wipes out all your clusters and deletes all persistent data.
To reset your machines back to a fresh install, run the following script:
If prompted to confirm the reset, select 'Yes' to proceed.
Persistence
Persistent data is available over a Network File System (NFS) hosted on the console at `/srv/nfs/mosip`. All pods write their persistent data to this location. If required, you can back up this folder.
Note:
- Postgres includes data from Keycloak. `keycloak-init` does not overwrite any data; it only updates and adds. To clean up Keycloak data, clean it up manually or reset all of Postgres.
- Postgres is initialized and populated only once; it is not re-initialized if persistent data is present in `/srv/nfs/mosip/postgres`. To force an init, execute the following:
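A sketch of the forced init (this follows from the note above; removing the persistent folder makes the next install run re-initialize Postgres):

```sh
sudo rm -rf /srv/nfs/mosip/postgres
```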
Useful Tools
Plenty of tools are installed with `preinstall.sh`: shortcut commands to troubleshoot and diagnose technical issues, and little hacks that make tasks quicker.
Shortcut Commands
These shortcuts are available after running the install above:
Tmux
Tmux lets you start a session and open multiple windows inside it. To enable the provided configuration, copy the config file as follows:
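For illustration (assuming a `tmux.conf` shipped with the sandbox scripts):

```sh
cp tmux.conf ~/.tmux.conf   # then start a session with: tmux
```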
Property File Comparator
This tool compares two property files (*.properties) and reports the differences: