Running a national ID system is no mean task and involves numerous challenging aspects. The software system at its core is critical infrastructure and needs to address high availability, reliability, scalability, security, resilience, and manageability. Choosing the right deployment architecture plays an important role in achieving these architectural goals while also catering to the law of the land. The cost of implementing such an architecture also matters.
MOSIP has a micro-services architecture that organizes functionality into myriad small services and execution units. Each of these can be scaled separately as well as replaced or upgraded independently. This makes the platform powerful and puts plenty of flexibility and configurability in the hands of the implementor. There is a corresponding complexity in dealing with a higher number of components in the system in the areas of configuration, security, deployment, dependency management, monitoring and testing.
To get the best out of MOSIP and keep manageability high, the deployment architecture plays a crucial role. Let us take a look at a few of the common deployment architecture options from various perspectives.
Packaging choices
Option 'Jar' - Spring Boot services in virtual machines
Option 'Docker' - Docker containers on a Kubernetes container management setup
Infrastructure choices
Option 'On-Premise' - Deploy in a private or own data center
Option 'Cloud' - Deploy in a cloud
Option 'Hybrid' - Cloud + On Premises
Platform choices
Option 'Open Source' - Proven community favored platforms
Option 'Cloud Native' - Cutting-edge, supported cloud technologies from AWS, Azure, GCP et al
Option 'Commercial' - Established and well supported priced packages
The architecture proposed may be deployed on-premise or cloud. Here, all MOSIP modules are installed with clear separation between militarised and demilitarised zones.
For linear scaling of capacity and provisioning of hardware, a cell based architecture (along with secure zones) may be preferred.
A hybrid architecture may be considered where benefits of cloud and on-premise are leveraged. While cloud provides rapid deployment and ease of management, on-premise can facilitate data localization and any other policy requirements.
An example of hybrid architecture is given below:
This documentation describes setting up an HDFS (v2.8.1) cluster with one namenode and one datanode.
Create 2 VMs. They’ll be referred to throughout this guide as,
Install Java (java-8-openjdk) on all the machines in the cluster and set up the JAVA_HOME environment variable for the same.
Get your Java installation path.
Note: Take the value of the current link and remove the trailing /bin/java.
For example, on RHEL 7 the link is /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/bin/java, so JAVA_HOME should be /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre.
Export JAVA_HOME={path-to-java} with your actual Java installation path. For example, on RHEL 7 with openjdk-8: export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
Note: In the further steps, when you log in to the hadoop account, set the Java path in ~/hadoop/etc/hadoop/hadoop-env.sh as well.
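For illustration, a minimal sequence for locating the path and setting the variable (the RHEL 7 path below is the example from above; substitute your own):

```bash
# Resolve the real path of the java binary and drop the trailing /bin/java
readlink -f /usr/bin/java
# e.g. /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/bin/java

# Set JAVA_HOME accordingly (add the line to ~/.bashrc to make it permanent)
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
```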
Get the IP of master and slave nodes using:
Adjust /etc/hosts on all nodes according to your configuration.
Note:
* While adding the IP of the same machine to its own /etc/hosts, use the private IP of that machine instead of the public IP. For the other machines in the cluster, use their public IPs.
* Edit the master node VM's /etc/hosts file using the private IP of the master node and the public IP of the slave node.
* Edit the slave node VM's /etc/hosts file using the private IP of the slave node and the public IP of the master node.
* Example:
  10.0.22.11 node-master.example.com
  10.0.3.12 node-slave1.example.com
Create a hadoop user on every machine in the cluster to follow this documentation as-is, or replace the hadoop user in the documentation with your own user.
Log in to the system as the root user.
Create a hadoop user account using the useradd command.
Set a password for the new hadoop user using the passwd command.
Add the hadoop user to the wheel group using the usermod command.
Test that the updated configuration allows the user you created to run commands using sudo.
Use su to switch to the new user account that you created.
Use the groups command to verify that the user is in the wheel group.
Use the sudo command to run the whoami command. As this is the first time you have run a command using sudo from the hadoop user account, the banner message will be displayed. You will also be prompted to enter the password for the hadoop account.
The last line of the output is the user name returned by the whoami command. If sudo is configured correctly this value will be root.
You have successfully configured a hadoop user with sudo access. You can now log in to this hadoop account and use sudo to run commands as if you were logged in to the account of the root user.
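A minimal sketch of the above sequence (run the first three commands as root; the wheel group grants sudo on RHEL/CentOS, so adjust if your sudoers policy differs):

```bash
useradd hadoop            # create the hadoop user
passwd hadoop             # set a password for it
usermod -aG wheel hadoop  # add the user to the wheel group for sudo access

su - hadoop               # switch to the new account
groups                    # 'wheel' should be listed
sudo whoami               # should print 'root' after the password prompt
```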
The master node will use an ssh-connection to connect to other nodes with key-pair authentication, to manage the cluster.
Login to node-master as the hadoop user, and generate an ssh-key:
The generated public key will be in id_rsa.pub.
Copy the public key to all the other nodes.
or
Alternatively, manually append the contents of the slave node's $HOME/.ssh/id_rsa.pub to the master node's $HOME/.ssh/authorized_keys file, and the contents of the master node's $HOME/.ssh/id_rsa.pub to the slave node's $HOME/.ssh/authorized_keys file.
Note: If ssh fails, set up authorized_keys for the machine again.
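A typical sequence, assuming the hostnames used in this guide:

```bash
# On node-master.example.com, as the hadoop user
ssh-keygen -t rsa -b 4096          # accept the defaults; the public key lands in ~/.ssh/id_rsa.pub

# Copy the public key to the slave node (repeat in the other direction from the slave)
ssh-copy-id hadoop@node-slave1.example.com

# Verify password-less login
ssh hadoop@node-slave1.example.com hostname
```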
Login to node-master as the hadoop user, download the Hadoop tarball from the Hadoop project page, and unzip it:

    cd
    wget https://archive.apache.org/dist/hadoop/core/hadoop-2.8.1/hadoop-2.8.1.tar.gz
    tar -xzf hadoop-2.8.1.tar.gz
    mv hadoop-2.8.1 hadoop
Add Hadoop binaries to your PATH. Edit /home/hadoop/.bashrc or /home/hadoop/.bash_profile and add the following lines:

    export HADOOP_HOME=$HOME/hadoop
    export HADOOP_CONF_DIR=$HOME/hadoop/etc/hadoop
    export HADOOP_MAPRED_HOME=$HOME/hadoop
    export HADOOP_COMMON_HOME=$HOME/hadoop
    export HADOOP_HDFS_HOME=$HOME/hadoop
    export YARN_HOME=$HOME/hadoop
    export PATH=$PATH:$HOME/hadoop/bin
    export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre

Apply the environment variable changes using the source command: source /home/hadoop/.bashrc or source /home/hadoop/.bash_profile
Configuration will be done on node-master and replicated to other slave nodes.
Update ~/hadoop/etc/hadoop/core-site.xml:

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node-master.example:51000</value>
      </property>
    </configuration>
Edit ~/hadoop/etc/hadoop/hdfs-site.xml
:
Create directories
Edit ~/hadoop/etc/hadoop/masters
to be: node-master.example.com
Edit ~/hadoop/etc/hadoop/slaves to be: node-slave1.example.com. The slaves file specifies on which machines the datanodes are set up.
Copy the hadoop binaries to slave nodes:
or copy each configured files to other nodes
Connect to node-slave1.example.com via ssh. A password isn’t required, thanks to the ssh keys copied above:
Unzip the binaries, rename the directory, and exit node-slave1.example.com to get back on the node-master.example.com:
Copy the Hadoop configuration files to the slave nodes:
HDFS needs to be formatted like any classical file system. On node-master, run the following command: hdfs namenode -format
Your Hadoop installation is now configured and ready to run.
Start HDFS by running the following script from node-master: start-dfs.sh. The start-dfs.sh and stop-dfs.sh script files are present in hadoop_Installation_Dir/sbin/.
It’ll start NameNode and SecondaryNameNode on node-master.example.com
, and DataNode on node-slave1.example.com
, according to the configuration in the slaves config file.
Check that every process is running with the jps command on each node. You should get on node-master.example.com
(PID will be different):
and on node-slave1.example.com
:
HDFS has been configured successfully.
Note: If the datanode and namenode have not started, look into the HDFS logs to debug: $HOME/hadoop/logs/
To create users for hdfs (regprocessor, prereg, idrepo), run this command:
Note: Configure the user in the module-specific properties file (e.g. pre-registration-qa.properties) as mosip.kernel.fsadapter.hdfs.user-name=prereg
Create a directory and give permission for each user
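A sketch of the above, assuming the hadoop user has HDFS superuser rights and one directory per user (the names below mirror the users listed above):

```bash
# Create the OS user on the namenode (one per HDFS user; repeat for prereg and idrepo)
sudo useradd regprocessor

# Create a working directory in HDFS for the user and hand over ownership
hdfs dfs -mkdir -p /regprocessor
hdfs dfs -chown -R regprocessor:regprocessor /regprocessor
hdfs dfs -chmod 750 /regprocessor
```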
Note: If a different port has been configured, enable that port.
The Kerberos server (KDC) and client need to be installed. Install the client on both master and slave nodes; the KDC server is installed on the master node.
To install packages for a Kerberos server:
To install packages for a Kerberos client:
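On RHEL/CentOS 7 the usual packages are the following (a sketch; package names may differ on other distributions):

```bash
# On the master node (KDC server and client)
sudo yum install -y krb5-server krb5-libs krb5-workstation

# On the slave node(s) (client only)
sudo yum install -y krb5-workstation krb5-libs
```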
Edit /etc/krb5.conf. Configuration snippets may also be placed in the directory /etc/krb5.conf.d/ (includedir /etc/krb5.conf.d/).
Note: Place this krb5.conf file in /kernel/kernel-fsadapter-hdfs/src/main/resources and set mosip.kernel.fsadapter.hdfs.krb-file=classpath:krb5.conf. If the file is kept outside the resources folder, give the absolute path instead, e.g. mosip.kernel.fsadapter.hdfs.krb-file=file:/opt/kdc/krb5.conf
Edit /var/kerberos/krb5kdc/kdc.conf
Create the database using the kdb5_util
utility.
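For example, creating the KDC database with a stash file (the realm name is an assumption matching the example domain used in this guide):

```bash
sudo kdb5_util create -s -r NODE-MASTER.EXAMPLE.COM
```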
Edit the /var/kerberos/krb5kdc/kadm5.acl
Create the first principal using kadmin.local at the KDC terminal:
Start Kerberos using the following commands:
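On a systemd-based system (RHEL/CentOS 7), the sequence typically looks like this (a sketch; the principal name is illustrative):

```bash
# Create the first admin principal from the KDC terminal
sudo kadmin.local -q "addprinc admin/admin"

# Start the KDC and the kadmin service
sudo systemctl start krb5kdc
sudo systemctl start kadmin
```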
To set up the KDC server to auto-start on boot (RHEL/CentOS/Oracle Linux 6):
Verify that the KDC is issuing tickets. First, run kinit to obtain a ticket and store it in a credential cache file.
Next, use klist to view the list of credentials in the cache.
Use kdestroy to destroy the cache and the credentials it contains.
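A quick verification sequence, using the admin principal created above as an example:

```bash
kinit admin/admin   # obtain a ticket and store it in the credential cache
klist               # view the credentials in the cache
kdestroy            # destroy the cache and the credentials it contains
```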
If you have root access to the KDC machine, use kadmin.local, else use kadmin. To start kadmin.local
(on the KDC machine), run this command: sudo kadmin.local
Do the following steps for masternode.
In the kadmin.local or kadmin shell, create the hadoop principal. This principal is used for the NameNode, Secondary NameNode, and DataNodes.
Create the HTTP principal.
Create principals for all users of HDFS (regprocessor, prereg, idrepo).
Create the hdfs keytab file that will contain the hdfs principal and HTTP principal. This keytab file is used for the NameNode, Secondary NameNode, and DataNodes. kadmin: xst -norandkey -k hadoop.keytab hadoop/admin HTTP/admin
Use klist to display the keytab file entries; a correctly-created hdfs keytab file should look something like this: $ klist -k -e -t hadoop.keytab Keytab name: FILE:hadoop.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 1 02/11/2019 08:53:51 hadoop/admin@NODE-MASTER.EXAMPLE.COM (aes256-cts-hmac-sha1-96) 1 02/11/2019 08:53:51 hadoop/admin@NODE-MASTER.EXAMPLE.COM (aes128-cts-hmac-sha1-96) 1 02/11/2019 08:53:51 hadoop/admin@NODE-MASTER.EXAMPLE.COM (des3-cbc-sha1) 1 02/11/2019 08:53:51 hadoop/admin@NODE-MASTER.EXAMPLE.COM (arcfour-hmac) 1 02/11/2019 08:53:51 hadoop/admin@NODE-MASTER.EXAMPLE.COM (camellia256-cts-cmac) 1 02/11/2019 08:53:51 hadoop/admin@NODE-MASTER.EXAMPLE.COM (camellia128-cts-cmac) 1 02/11/2019 08:53:51 hadoop/admin@NODE-MASTER.EXAMPLE.COM (des-hmac-sha1) 1 02/11/2019 08:53:51 hadoop/admin@NODE-MASTER.EXAMPLE.COM (des-cbc-md5) 1 02/11/2019 08:53:51 HTTP/admin@NODE-MASTER.EXAMPLE.COM (aes256-cts-hmac-sha1-96) 1 02/11/2019 08:53:51 HTTP/admin@NODE-MASTER.EXAMPLE.COM (aes128-cts-hmac-sha1-96) 1 02/11/2019 08:53:51 HTTP/admin@NODE-MASTER.EXAMPLE.COM (des3-cbc-sha1) 1 02/11/2019 08:53:51 HTTP/admin@NODE-MASTER.EXAMPLE.COM (arcfour-hmac) 1 02/11/2019 08:53:51 HTTP/admin@NODE-MASTER.EXAMPLE.COM (camellia256-cts-cmac) 1 02/11/2019 08:53:51 HTTP/admin@NODE-MASTER.EXAMPLE.COM (camellia128-cts-cmac) 1 02/11/2019 08:53:51 HTTP/admin@NODE-MASTER.EXAMPLE.COM (des-hmac-sha1) 1 02/11/2019 08:53:51 HTTP/admin@NODE-MASTER.EXAMPLE.COM (des-cbc-md5)
Creating keytab [mosip.keytab]
file for application to authenticate with HDFS cluster
To view the principals in keytab
On every node in the cluster, copy or move the keytab file to a directory that Hadoop can access, such as /home/hadoop/hadoop/etc/hadoop/hadoop.keytab
.
Place this mosip.keytab file in /kernel/kernel-fsadapter-hdfs/src/main/resources
and update the application properties:

    mosip.kernel.fsadapter.hdfs.keytab-file=classpath:mosip.keytab
    mosip.kernel.fsadapter.hdfs.authentication-enabled=true
    mosip.kernel.fsadapter.hdfs.kdc-domain=NODE-MASTER.EXAMPLE.COM
    mosip.kernel.fsadapter.hdfs.name-node-url=hdfs://host-ip:port
Note: Configure the user in module specific properties file (example: pre-registration-qa.properties
as mosip.kernel.fsadapter.hdfs.user-name=prereg
).
To enable security in hdfs, you must stop all Hadoop daemons in your cluster and then change some configuration properties. sh hadoop/sbin/stop-dfs.sh
To enable Hadoop security, add the following properties to the ~/hadoop/etc/hadoop/core-site.xml
file on every machine in the cluster:
Add the following properties to the ~/hadoop/etc/hadoop/hdfs-site.xml
file on every machine in the cluster.
The first step of deploying HTTPS is to generate the key and the certificate for each machine in the cluster. You can use Java’s keytool utility to accomplish this task: Ensure that firstname/lastname OR common name (CN) matches exactly with the fully qualified domain name (e.g. node-master.example.com) of the server. keytool -genkey -alias localhost -keyalg RSA -keysize 2048 -keystore keystore.jks
We use openssl to generate a new CA certificate: openssl req -new -x509 -keyout ca-key.cer -out ca-cert.cer -days 365
The next step is to add the generated CA to the clients’ truststore so that the clients can trust this CA: keytool -keystore truststore.jks -alias CARoot -import -file ca-cert.cer
The next step is to sign all certificates generated with the CA. First, you need to export the certificate from the keystore: keytool -keystore keystore.jks -alias localhost -certreq -file cert-file.cer
Then sign it with the CA: openssl x509 -req -CA ca-cert.cer -CAkey ca-key.cer -in cert-file.cer -out cert-signed.cer -days 365 -CAcreateserial -passin pass:12345678
Finally, you need to import both the certificate of the CA and the signed certificate into the keystore keytool -keystore keystore.jks -alias CARoot -import -file ca-cert.cer keytool -keystore keystore.jks -alias localhost -import -file cert-signed.cer
Change the ssl-server.xml and ssl-client.xml on all nodes to tell HDFS about the keystore and the truststore
Edit ~/hadoop/etc/hadoop/ssl-server.xml
Edit ~/hadoop/etc/hadoop/ssl-client.xml
After restarting the HDFS daemons (NameNode, DataNode and JournalNode), you should have successfully deployed HTTPS in your HDFS cluster.
Following configuration is required to run HDFS in secure mode. Read more about kerberos here:
Install Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy File on all cluster and Hadoop user machines. Follow this
For more information, check here:
If you face errors during Kerberos setup, check this:
PostgreSQL, often simply Postgres, is an object-relational database management system (ORDBMS) with an emphasis on extensibility and standards compliance. It can handle workloads ranging from small single-machine applications to large Internet-facing applications (or data warehousing) with many concurrent users. Prerequisites: on a Linux or Mac system, you must have superuser privileges to perform a PostgreSQL installation; on a Windows system, you must have administrator privileges.
To change the default port 5432 to 9001, and to tune connection and buffer sizes, edit the postgresql.conf file at the path below. PostgreSQL runs on the default port 5432; if you decide to change the default port, ensure that the new port number does not conflict with any services running on that port.
It will ask for a new password to log in to PostgreSQL.
Example of sourcing the SQL file from the command line: $ psql --username=postgres --host=<server ip> --port=9001 --dbname=postgres
Default lines present in the pg_hba.conf file:
local   all             all                                     peer
host    all             all             127.0.0.1/32            ident
host    all             all             ::1/128                 ident
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            ident
host    replication     all             ::1/128                 ident
Modify the file /var/lib/pgsql/10/data/pg_hba.conf with the below changes:
local   all             all                                     md5
host    all             all             127.0.0.1/32            ident
host    all             all             0.0.0.0/0               md5
host    all             all             ::1/128                 ident
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            ident
host    replication     all             ::1/128                 ident
Reference link: https://www.tecmint.com/install-postgresql-on-centos-rhel-fedora
HSM stands for Hardware Security Module and is an incredibly secure physical device specifically designed and used for crypto processing and strong authentication. It can encrypt, decrypt, create, store and manage digital keys, and be used for signing and authentication. The purpose is to safeguard and protect keys.
MOSIP highly recommends the following specifications for HSM:
Must support cryptographic offloading and acceleration.
Should provide Authenticated multi-role access control.
Must have strong separation of administration and operator roles.
Capability to support client authentication.
Must have secure key wrapping, backup, replication and recovery.
Must support 2048, 4096 bit RSA Private Keys, 256 bit AES keys on FIPS 140-2 Level 3 Certified Memory of Cryptographic Module.
Must support at least 10000+ 2048 RSA Private Keys on FIPS 140-2 Level 3 Certified Memory of Cryptographic Module.
Must support clustering and load balancing.
Should support cryptographic separation of application keys using logical Partitions.
Must support M of N multi-factor authentication.
PKCS#11, OpenSSL, Java (JCE), Microsoft CAPI and CNG.
Minimum Dual Gigabit Ethernet ports (to service two network segments) and 10G Fibre port should be available.
Asymmetric public key algorithms: RSA, DiffieHellman, DSA, KCDSA, ECDSA, ECDH, ECIES.
Symmetric algorithms: AES, ARIA, CAST, HMAC, SEED, Triple DES, DUKPT, BIP32.
Hash/message digest: SHA-1, SHA-2 (224, 256, 384, 512 bit).
Full Suite B implementation with fully licensed ECC including Brainpool, custom curves and safe curves.
Safety and environmental compliance
Compliance to UL, CE, FCC part 15 class B.
Compliance to RoHS2, WEEE.
Management and monitoring
Support remote administration (including adding applications, updating firmware, and checking the status) from the NOC.
Syslog diagnostics support.
Command line interface (CLI)/graphical user interface (GUI).
Support SNMP monitoring agent.
Physical characteristics
Standard 1U 19in. rack mount with integrated PIN ENTRY Device.
Performance
RSA 2048 Signing performance – 10000 per second.
RSA 2048 Key generation performance – 10+ per second.
RSA 2048 encryption/decryption performance - 20000+.
RSA 4096 Signing performance - 5000 per second.
RSA 4096 Key generation performance - 2+ per second.
RSA 4096 encryption/decryption performance - 20000+.
Should have the ability to backup keys, replicate keys, and store keys in offline locker facilities for DR. The total capacity is in line with the total number of keys prescribed.
Clustering minimum of 20 HSMs.
Less than 30 seconds for key replication across the cluster.
A minimum of 30 logical partitions and their license should be included in the cost.
The hardware compute and storage requirements for MOSIP core platform are estimated as below.
Compute hardware estimates for a production deployment:
| Module | Load | Number of VMs | VM configuration |
| --- | --- | --- | --- |
| Pre-registration | 7200 pre-regs/hour* | 10 | 4 VCPU**, 16 GB RAM |
| Registration Processor | 200,000 registrations per day | 80 | 4 VCPU, 16 GB RAM |
| ID Authentication | 2,000,000 auth requests per day | 20 | 4 VCPU, 16 GB RAM |
| Resident Services | 7200 resident services/hour* | 10 | 4 VCPU, 16 GB RAM |
* Average throughput
** VCPU: Virtual CPU
We estimate approximately 30% additional compute capacity for administration, monitoring and maintenance. This may be optimized by the System Integrator.
Notes
High availability is taken into consideration with an assumed replication factor of 2 per service pod/docker.
Storage estimates for production deployment:
Database and HDFS/CEPH: refer to the MOSIP Storage Requirement Calculator XLS.
Application and system logs
Application logs:

| Module | Load | Log size |
| --- | --- | --- |
| Pre-Reg | 100 pre-regs | 20 MB |
| Reg Proc | 100 registrations | 200 MB |
The above estimates are approximate, and may inflate if, for example, there are too many exception traces.
The logs may be compressed and archived after a week or so. The compression ratio achieved with tar+gz utility is 15-20.
System logs
To be estimated by System Integrator according to the deployment
Additional compute and storage is needed for the following setups.
| Environment | Setup | Number of VMs | VM configuration | Storage |
| --- | --- | --- | --- | --- |
| Dev | Sandbox | 13 | 4 VCPU, 16 GB RAM | 128 GB SSD |
| QA | Sandbox | 13 | 4 VCPU, 16 GB RAM | 128 GB SSD |
| Staging | Sandbox | 13 | 4 VCPU, 16 GB RAM | 128 GB SSD |
| Pre-production | Cell | * | 4 VCPU, 16 GB RAM | * |
* To be decided by the country/System Integrator.
Install Java (java-8-openjdk) on all the machines in the cluster and set up the JAVA_HOME environment variable for the same.
Get your Java installation path: take the value of the current link and remove the trailing /bin/java. For example, on RHEL 7 the link is /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/bin/java, so JAVA_HOME should be /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre.
Export JAVA_HOME={path-to-java} with your actual Java installation path. For example, with openjdk-8:
Download and unzip Keycloak
We have installed postgres as the database for keycloak; you can use any database supported by Keycloak.
Documentation for Keycloak Database Setup is available here.
Install Postgres in your VM. Guide to install PostgreSQL is available here.
Within the …/modules/ directory of your Keycloak distribution, you need to create a directory structure to hold your module definition. The convention is to use the Java package name of the JDBC driver as the name of the directory structure. For PostgreSQL, create the directory org/postgresql/main. Copy your database driver JAR into this directory and create an empty module.xml file within it too.
Module Directory
After you have done this, open up the module.xml
file and create the following XML:
Module XML
The module name should match the directory structure of your module. So, org/postgresql
maps to org.postgresql
. The resource-root path
attribute should specify the JAR filename of the driver. The rest are just the normal dependencies that any JDBC driver JAR would have.
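For reference, a minimal module.xml along these lines might look as follows (the driver JAR filename is illustrative; use the name of the JAR you actually copied in):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.postgresql">
    <resources>
        <!-- must match the driver JAR placed in org/postgresql/main -->
        <resource-root path="postgresql-42.2.5.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```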
To enable SSL we need a certificate; in this example we will use Let's Encrypt.
Follow the steps in this link to create a certificate for your domain.
We will create a keystore in which we will store the certificate chain and private key, and give them an alias.
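For example, assuming Let's Encrypt produced fullchain.pem and privkey.pem for your domain, they can be bundled into a PKCS12 file and imported into a JKS keystore under an alias (paths, alias and passwords below are placeholders):

```bash
# Bundle the certificate chain and private key into a PKCS12 keystore
openssl pkcs12 -export \
  -in /etc/letsencrypt/live/example.com/fullchain.pem \
  -inkey /etc/letsencrypt/live/example.com/privkey.pem \
  -out keycloak.p12 -name keycloak -passout pass:changeit

# Convert the PKCS12 file into a JKS keystore that Keycloak can reference
keytool -importkeystore -srckeystore keycloak.p12 -srcstoretype PKCS12 \
  -destkeystore keycloak.jks -deststoretype JKS \
  -srcstorepass changeit -deststorepass changeit -alias keycloak
```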
Go to {{keycloak folder}}/standalone/configuration
Open standalone.xml and make the following changes:
Add a driver for Postgres (or your database).
Change the datasource properties.
Register the datasource. While registering, change the schema name if you want.
Change the network configuration:
Set the inet address for both the public and management interfaces so Keycloak can be accessed remotely.
Change the default ports from 8080 to 80 and from 8443 to 443, so that ports need not be specified when accessing Keycloak.
Add the SSL certificate to Keycloak: here we provide the keystore we created to Keycloak.
Add a Keycloak admin user; from the Keycloak bin directory run:
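For instance, using the add-user-keycloak.sh script that ships in the Keycloak bin directory (username and password below are placeholders):

```bash
cd {{keycloak folder}}/bin
./add-user-keycloak.sh -u admin -p <admin-password>
```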
Create a new Realm (eg. mosip).
Create clients for every module (i.e. ida, pre-registration, registration-processor, registration-client, auth, resident, mosip-client).
Enable authorization and service account for every client and provide valid redirect uri. These clients will be used by all modules to get client tokens.
For this example we will be configuring LDAP as user federation
Go to "User Federation".
Create a new User Federation for LDAP.
Make Edit Mode Writable.
Configure the fields based on your LDAP (there are many LDAP vendors; you can connect to any of them based on the configuration).
Go to Mappers and Create mappers for each field you want keycloak to take from LDAP.
Sync Users and Roles from LDAP .
Create INDIVIDUAL, RESIDENT Role from Keycloak in Realm Roles
Assign Roles from LDAP and Keycloak to All Clients
If a particular service takes longer than the stipulated time period to complete its process, your token may get invalidated. Use the refresh token mechanism to get a fresh token, or if that is not implemented, increase the access token lifespan at the client level or realm level.
SSL in Keycloak is enabled by default, but it can be set to all requests, external requests only, or none.
Properties marked with <> are variables and need to be updated.
MOSIP offers high configurability to customise and deploy for a country. Many components are available out of the box. However, for a specific deployment certain customisations and additions may be needed as follows:
ID Object
Schema and field custom validations on Reg Client and Reg Processor. (Ref validator)
Languages
Defining primary and secondary languages
Transliteration libraries integration in Reg Client
Messaging templates (master data and configuration files)
Master Data: Country specific master data
Adding/modifying Reg Processor flow
Adding a new stage (e.g. fetch data from CRVS system). (Camel configurations)
Remove or re-arrange the stages
Demographic dedup logic
Registration Client App
Fields as per ID Object
Labels in preferred languages
Field validations
Screen flow changes
Integration with MDS
Residents Portal: UI implementation
Admin portal: UI modifications (if needed)
Integration with external components
Virus scanner
ABIS
Biometric SDKs (in Registration Client, Registration Processor & ID Authentication)
Manual Adjudication
IAM (OAuth 2.0 compliant)
HSM
Postal service
Email/SMS gateway
UIN Generator
Yes
Token Generator
Yes
Partner Management
Yes
Device Management
Yes
Admin portal allows managing registered devices. Device registration API is available. Device vendor to provide Device Management Server which takes care of registering devices and Key rotation
SMS Notification
Yes
Interface available. SMS Gateway/service to be provided by SI
Email Notification
Yes
Interface available. Email gateway/service to be provided by SI
Audit Trail
Yes
Technical Help Desk
No
Customer Relationship Management
No
Backup/Restore Management
No
Manual Adjudication
No
APIs available to retrieve data and approve/reject a packet when a Biometric Duplicate is found
Manual Verification
No
Analytics
No
Authentication OTP
Yes
Authentication Biometrics
Yes
Knowledge Management System
No
Payment Gateway
No
Card Production
No
Card Management
No
Implementation available for sending cards in queue to print and forward to postal system
Fraud Management
No
Supporting Document Retrieval
No
Registration processor may be customized for the same
Token Management at Registration Center
No
Registration of Pre-registered/Appointment
Yes
If MOSIP PreReg module is deployed
UIN Retrieval (Lost UIN)
Yes
Update of Demographic Information
Yes
Update of Biometric data
Yes
Grievance Reporting
No
Lock UIN against Auth
Yes
Transaction History Generator
No
Audit logs, DB records, and Resident Services APIs available
Enrollment Status/Update
Yes
Payment Gateway
No
Mobile/Tablet Registration App
No
Virus Scanner
No
Integration hooks provided. SI to procure and integrate
This document defines the public and private services of MOSIP.
Public Services: MOSIP services available to the general public and can be accessed by UI or user token.
Private Services: MOSIP services available for service to service call and should be accessed by service token or restricted user.
Admin /Bulk Upload
Admin /Login
Admin /AuditManager
Admin /PacketUpdateStatus
Commons /PacketReader-Writer
Kernel /AuditManager
Kernel /AuthManager
Kernel /Login
Kernel /Refresh
Kernel /Jasperreport
Kernel /ClientCrypto
Kernel /CryptoManager
Kernel /KeyManager
Kernel /LicenceKey
Kernel /PartnerCertManager
Kernel /Signature
Kernel /TokenIDGenerator
Kernel /ZKCryptoManager
Kernel /ApplicantType
Kernel /ApplicantValidDocument
Kernel /Application
Kernel /BiometricAttribute
Kernel /BiometricType
Kernel /BlacklistedWords
Kernel /Device
Kernel /DeviceHistory
Kernel /DeviceProvider
Kernel /DeviceProviderManagement
Kernel /DeviceRegister
Kernel /DeviceSpecification
Kernel /DeviceType
Kernel /DocumentCategory
Kernel /DocumentType
Kernel /DynamicField
Kernel /ExceptionalHoliday
Kernel /FoundationalTrustProvider
Kernel /GenderType
Kernel /Holiday
Kernel /IdType
Kernel /IndividualType
Kernel /Language
Kernel /Location
Kernel /LocationHierarchy
Kernel /Machine
Kernel /MachineHistory
Kernel /MachineSpecification
Kernel /MachineType
Kernel /Module
Kernel /MOSIPDeviceService
Kernel /PacketRejectionReason
Kernel /RegisteredDevice
Kernel /RegistrationCenter
Kernel /RegistrationCenterDevice
Kernel /RegistrationCenterHistory
Kernel /RegistrationCenterType
Kernel /RegistrationCenterUserMachineHistory
Kernel /Schema
Kernel /Template
Kernel /TemplateFileFormat
Kernel /TemplateType
Kernel /Title
Kernel /UserDetailsHistory
Kernel /ValidDocument
Kernel /WorkingDay
Kernel /Zone
Kernel /EmailNotification
Kernel /SmsNotification
Kernel /OtpGenerator
Kernel /OtpValidator
Kernel /RidGenerator
Kernel /SyncData
ID Authentication /AuditTest
ID Authentication /Test
ID Authentication /CredentialIssueanceCallback
ID Authentication /Cryptomanager
ID Authentication /InternalAuth
ID Authentication /InternalAuthTxn
ID Authentication /InternalOTP
ID Authentication /InternalUpdateAuthType
ID Authentication /Keymanager
ID Authentication /Signature
ID Authentication /WebSub
ID Authentication /KycAuth
ID Authentication /OTP
ID Authentication /Auth
ID Authentication /StaticPin
ID Authentication /VID
ID Repository /BiometricExtractor
ID Repository /CredentialRequestGenerator
ID Repository /CredentialStore
ID Repository /ID Repository
ID Repository /Vid
Partner Management Service /Misp
Partner Management Service /PartnerManagement
Partner Management Service /DeviceDetail
Partner Management Service /FTPChipDetail
Partner Management Service /RegisteredDevice
Partner Management Service /SecureBiometricInterface
Partner Management Service /PartnerService
Partner Management Service /PolicyManagement
Pre Registration /Demographic
Pre Registration /Document
Pre Registration /GenerateQRcode
Pre Registration /Login
Pre Registration /Notification
Pre Registration /Transliteration
Pre Registration /Booking
Pre Registration /Captcha
Pre Registration /DataSync
Registration Processor /BioDedupe
Registration Processor /RegistrationStatus
Registration Processor /RegistrationSync
Registration Processor /PrintApi
Registration Processor /RegistrationTransaction
Registration Processor /External
Registration Processor /QCUsers
Registration Processor /QualityChecker
Resident Services /Resident
Resident Services /ResidentVid
The Sandbox is a safe environment isolated from your PC's underlying environment. You may use the sandbox to execute files without having to worry about malicious files or unstable programs impacting data on the system.
This section describes handy information that is useful to know when operating the Sandbox Installer which is tested under mentioned configuration.
| Machine | Count | Configuration | Storage |
| --- | --- | --- | --- |
| Console | 1 | 4 vCPU*, 16 GB RAM | 128 GB SSD** |
| K8s MZ master | 1 | 4 vCPU, 8 GB RAM | 32 GB |
| K8s MZ workers | 9 | 4 vCPU, 16 GB RAM | 32 GB |
| K8s DMZ master | 1 | 4 vCPU, 8 GB RAM | 32 GB |
| K8s DMZ workers | 1 | 4 vCPU, 16 GB RAM | 32 GB |
*vCPU: Virtual CPU
** Console has all the persistent data stored under /srv/nfs
. Recommended storage here is SSD or any other high IOPS disk for better performance
The Multi-VM Sandbox Installer is a fully automated deployer that installs all MOSIP modules onto a virtual machine cluster, either on cloud or on premise.
The sandbox can be used for development and testing, while the Ansible scripts run MOSIP on a multi-virtual machine (VM) setup.
Caution - The sandbox is not intended for use by serious pilots or for production purposes. Also, do not run the sandbox with any confidential data.
Minibox: note that for any form of load or multiple-pod replication scenarios, this may not be sufficient. It is possible, however, to bring up the MOSIP modules with fewer VMs, as below:
| Machine | Count | Configuration | Storage |
| --- | --- | --- | --- |
| Console | 1 | 4 vCPU*, 16 GB RAM | 128 GB SSD |
| K8s MZ master | 1 | 4 vCPU, 8 GB RAM | 32 GB |
| K8s MZ workers | 9 | 4 vCPU, 16 GB RAM | 32 GB |
| K8s DMZ master | 1 | 4 vCPU, 8 GB RAM | 32 GB |
| K8s DMZ workers | 1 | 4 vCPU, 16 GB RAM | 32 GB |
Terraform is a tool to build, change, and update infrastructure securely and efficiently. The initial installation stage is achieved using the Terraform scripts available in terraform/. AWS scripts are being used and maintained at present.
It is strongly recommended that the scripts be analyzed in depth before running them.
Before starting the MOSIP modules installation, VMs need to be set up on all machines. The user must ensure that the CentOS 7.8 OS is installed on all the machines:
Create user 'mosipuser' on console machine with password-less sudo su
The hostname must match hostnames in hosts.ini
on all machines. Set the same with
Enable Internet access on all machines
Disable firewalld on all machines
Exchange ssh
keys between console machines and K8s cluster machines so that ssh is password-less from console machines:
Make the console machine available via a public domain name (e.g. sandbox.mycompany.com)
(When you do not intend to access the sandbox externally, this step can be skipped)
Ensure the date/time is in UTC on all machines
Open ports 80, 443, 30090 (postgres), 30616 (activemq), 53 (coredns) on console machine for external access
Make sure your firewall doesn't block the UDP port(s)
To ensure proper installation, install these pre-requisites manually:
Git
Git Clone
Ansible
On the Installation Options, click git to install on your machine:
In User Home Directory, select Git Clone and switch to appropriate branch:
Install Ansible and create shortcuts:
This section helps you to plan an installation of MOSIP suited to your environment. Before installing MOSIP, it is recommended that the scripts be analyzed in depth before running them.
Update hosts.ini to suit your configuration. Make sure it matches your machine hostnames and IP addresses
In group_vars/all.yml
change sandbox_domain_name
to domain name of the console machine
By default, the installation scripts will try to fetch a fresh Letsencrypt SSL certificate for the above domain. If you already have one, however, then set the following variables in the file group_vars/all.yml
:
The network interface is the interconnection between a machine and a public or private network. If the interface on your cluster machines is something other than “eth0”, update it in group_vars/k8s.yml
All the secrets (passwords) used in automation are stored in the Ansible vault file secrets.yml. The default password to access the file is 'foo'. Changing this password with the following command is recommended:
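The vault password can be changed with ansible-vault (run from the sandbox directory containing secrets.yml):

```bash
ansible-vault rekey secrets.yml   # prompts for the old password ('foo') and a new one
```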
The contents of secrets.yml can be viewed and edited using the following commands:
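For example:

```bash
ansible-vault view secrets.yml    # view the decrypted contents
ansible-vault edit secrets.yml    # edit in place; the file is re-encrypted on save
```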
With the above in place, you can install the MOSIP modules using the command:
If a message prompts you for a password, enter the default vault password "foo" to proceed with the installation.
The sandbox installs with a default general configuration. To configure MOSIP differently, refer to the following sections:
DNS translates human readable domain names to machine readable IP addresses. A private DNS (CoreDNS) is mounted on the console machine by default, and /etc/resolv.conf
refers to this DNS on all machines.
However, if you want to use DNS cloud providers (like Route53 on AWS), disable the installation of a private DNS by setting the following flag:
Ensure your DNS routing is taken care of by your cloud deployment. Uncomment the Route53 code for AWS in the scripts given in the:
The coredns.yml playbook configures CoreDNS and updates the /etc/resolv.conf file on all machines. If a machine is restarted, re-run the playbook to restore /etc/resolv.conf.
This part contains information about hosting your own registry using the Local Docker Registry.
Local Registry on Console
Instead of using the default Docker Hub, you may run a local Docker registry. This is particularly useful when the Kubernetes cluster is sealed off from the Internet for security. A sample Docker registry implementation is available with this sandbox, and you can run it by enabling the following in group_vars/all.yml.
Note that this registry runs on the console machine and can be accessed as console.sb:5000. Access is over http, not https.
Ensure that you pull all the appropriate Docker images into this registry and update versions.yml.
Caution: If you delete/reset this registry or restart the console machine, all the registry contents will be lost and the Docker images will have to be pulled again.
Additional Local Registries:
If you wish to have additional local registries for Dockers, then list them here:
The list here is necessary to ensure that http access from cluster machines is allowed for the above registries.
When you set up a private registry, you assign a server to communicate with Docker Hub over the internet. If you are pulling Dockers in Docker Hub from the private registry, then provide secrets.yml
with the Docker Hub credentials and set the following flag in:
Update your Docker image versions in versions.yml.
When installing the default Sandbox, you must have a public domain name, so that the domain name refers to the console computer. However, if you want to access your internal network's Sandbox (for example via VPN), set the following in group_vars/all.yml
:
A self-signed certificate is created and the sandbox access URL is https://{{inventory hostname}}
All secrets are stored in secrets.yml. Edit the file and change all of the passwords for a secure sandbox. Defaults may be used for development and testing, but be aware that the sandbox will not be secure with defaults. To edit secrets.yml:
If you update Postgres passwords, you will need to update their ciphers in the property files (see the section below on Config Server). To keep track of the plain-text passwords, all the passwords used in .properties files were added to secrets.yml, some of them purely for information.
Caution: Make sure that secrets.yml is updated whenever you change any password in .properties files.
Config server is one of the more popular centralized configuration servers used in a micro service-based application. For all modules, configurations are defined through property files located in the GitHub repository. For example, for this sandbox, the properties are located within the sandbox folder at https://github.com/mosip/mosip-config.
You can have a repository of your own with a folder containing the property files. The repo may be private on GitHub. In group_vars/all.yml, configure the following parameters as below (example):
If private: true, then update your GitHub username in group_vars/all.yml as above, and set the password in secrets.yml:
If the local git repo option is enabled, the repo is cloned to the NFS-mounted folder and the config server pulls the properties locally. This option is useful if the sandbox is secured without access to the Internet. Check any changes in to the local git repo; remember, however, that you will have to push them manually if you want the changes to be reflected in the parent GitHub repo. When making changes to the configuration repo, there is no need to restart the config-server pod.
If you have updated the default passwords in secrets.yml
, create these password ciphers and update the changed password property files. After the config server is up, the ciphers can be created from the console machine using the following curl command:
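A sketch of such a call against the Spring Cloud Config Server /encrypt endpoint (the config server host/port and the sample secret are placeholders):

```bash
curl -s http://<config-server-host>:<port>/encrypt -d 'my-new-password'
# returns the cipher text; use it in the property file as {cipher}<returned-value>
```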
The above command connects via input to the Config server pod of the MZ cluster. You may also use the script to encrypt all the secrets at once by the following methods:
Several secrets from Ansible's secrets.yml are required in the config server property files. We use config server encryption to avoid plain-text secrets in the properties, using the following command:
The script here converts all secrets in secrets.yml
using above command implemented in Python.
Prerequisites:
Install required modules using
Ensure config server is running
Set the server URL in config.py
If the URL is HTTPS and the server's SSL certificate is self-signed, set
Run the following command:
In this sandbox secrets_file_path
is /home/mosipuser/mosip-infra/deployment/sandbox-v2/secrets.yml
Output is saved in out.yaml
.
Captcha protects your website from fraud and abuse. It uses an advanced risk analysis engine and adaptive challenges to keep malicious software at bay.
Get Captcha for the sandbox domain from "Google Re-captcha Admin" if you would like to allow Captcha for Pre-Reg UI. Get reCAPTCHA v2 keys for "I'm not a robot"
Set Captcha as:
To receive OTP (one-time password) over email and SMS, set the properties as below:
SMS
File:
kernel-mz.properties
Properties:
kernel.sms
File:
kernel-mz.properties
Properties
You may want to run MOSIP in Proxy OTP mode if you do not have access to Email and SMS gateways; otherwise you can skip the Proxy OTP settings.
To run MOSIP in Proxy OTP mode set the following:
Note : The default OTP is set to 111111.
Before you start installing the sandbox, load country-specific master data:
Ensure the Master Data .csv
files are available in a folder, say my_dml
Add the following line in group_vars/all.yml
``-> databases -> mosip_master
For production setups you may want to replicate pods beyond the default replication factor of 1. Update podconfig.yml accordingly. A separate production file can be created and pointed to from the group_vars/all.yml --> podconfig setting.
A taint allows a node to refuse pod to be scheduled unless that pod has a matching toleration. Kubernetes offers the functionality of taints to run a pod solely on a node. This is especially useful during performance tests where you would like to assign different nodes to non-MOSIP components.
By default, in the sandbox, taints are not added. The following modules have been provided with provisions to allow taints for:
Postgres
Minio
HDFS
Set the following in group_vars/all.yml to enable taints. Example:
The node here is the machine on which you would like to exclusively run the module.
Ensure the above setting is done before you install the sandbox.
By default, the sandbox installs the Reg Client Downloader with the Trusted Platform Module (TPM) disabled.
Reg Client Downloader:
Convert helm template to helm values:
To enable TPM to use trusted private/public Reg client machine private/public keys, do the following:
Update the registered client downloader TPM environment variable:
If, before installing the sandbox, you have done the above, then you may skip this step. Otherwise, if the downloader reg client is already running on your sandbox, delete it and restart as follows:
(Wait for all resources to get terminated)
Add the name and public key of the reg client machine in the machine_master table of the mosip_master DB. Using the TPM utility, you can get your machine's public key.
Utility to obtain public TPM keys along with the name of the computer
Prerequisites:
Build:
Run:
(Use jar-with-dependencies to run under target folder)
Machine Master Table:
The publicKey, signingPublicKey, keyIndex and signingKeyIndex - all of them to be populated in the machine_master
table of mosip_master
DB.
Download the registered client from https://{{sandbox domain name}}/registration-client/1.1.3/reg-client.zip
The sandbox comes with its default ID Schema (in Master DB, identity_schema
table) and Pre-Reg UI Schema pre-registration-demographic.json
. In order to use different schemas, do the following:
Ensure new ID Schema is updated in Master DB, identity_schema
table
Replace mosip-config/sandbox/pre-registration
-demographic.json
with new Pre-Reg UI Schema
Map values in pre-registration-identity-mapping.json
to pre-registration-demographic.json
as below:
Update the following properties in pre-registration-mz.properties preregistartion.identity.name=< identity.name.value (above)> preregistration.notification.nameFormat=< identity.name.value>
Restart the Pre-Reg Application service
Download Reg Client:
Download zip file from:
Unzip the file and launch the registration client by running run.bat
Reg client will generate public/private keys in the following folder
You will need the public key and key index mentioned in readme.txt
for the later step to update master DB
Run MDS:
Run mock MDS as per the procedure given here: Mock MDS
Pick up device details from this repo. You will need them for device info updates in a later step
Add Users in Keycloak:
Make sure keycloak admin credentials are updated in config.py
Add users like registration officers and supervisors in csv/keycloak_users.csv
with their roles
Run
Update Master Data:
In the master DB DML directory, change the following CSVs. The DMLs are located in the sandbox at /home/mosipuser/mosip-infra/deployment/sandbox-v2/tmp/commons/db-scripts/mosip-master/dml
master-device_type.csv
master-device_spec.csv
master-device_master.csv
master-device_master_h.csv
master-machine_master.csv
master-machine_master_h.csv
master-user_detail.csv
master-user_detail_h.csv
master-zone_user.csv
master-zone_user_h.csv
Run
Example:
CAUTION: The above will reset the entire DB and load it afresh
You may want to maintain the DML directory separately in your repo
It is assumed that all other tables of master DB are already updated
Device Provider Partner Registration:
Update the following CSVs in PMS DML directory. On sandbox the DMLs are located at /home/mosipuser/mosip-infra/deployment/sandbox-v2/tmp/partner-management-services/db_scripts/mosip_pms/dml
pms-partner.csv
pms-partner_h.csv
pms-policy_group.csv
Run update_pmsdb.sh
. Example:
CAUTION: The above will reset the entire DB and load it afresh
Some example CSVs are located at csv/regdevice
IDA Check:
Disable IDA check in registration-mz.properties
:
Launch Reg Client:
Set Environment Variable mosip.hostname
to {sandbox domain name}
Log in as a user (e.g. 110011) with password (MOSIP) to log in to the client
Integrations
Guide to Work with Real HSM
Introduction:
The default sandbox uses an HSM simulator called SoftHSM. To connect to a real HSM you need to do the following:
Create client.zip
Update MOSIP properties
Point MOSIP services to HSM
client.zip:
The HSM is connected over the network. client.zip is a self-sufficient package of PKCS#11-compliant libraries required by MOSIP services to connect to the HSM; the HSM vendor must provide this library, and MOSIP services install it before the services start. The client.zip file is fetched from the artifactory when the Dockers launch, unzipped, and install.sh is executed.
The zip must fulfil the following:
Contain an install.sh
Available in the artifactory
install.sh
This script must fulfil the following:
Have executable permission
Set up all that is needed to connect to HSM
Able to run inside Dockers that are based on Debian, inherited from OpenJDK Dockers
Place HSM client configuration file in mosip.kernel.keymanager.softhsm.config-path
(see below)
Not set any environment variables. If needed, they should be passed while running the MOSIP service Dockers
Properties:
Update the following properties in Kernel and IDA property files:
Ensure you restart the services after this change.
Caution: The password is highly critical. To encrypt it, make sure you use a really strong password (using Config Server encryption). In addition, Config Server access should be very tightly regulated.
Artifactory:
Artifactory is built as a Docker in the sandbox and accessed via services. In that Docker, replace the client.zip
. The changed Docker can be uploaded to your own Docker Hub registry for subsequent use.
HSM URL
HSM is used by Kernel and IDA services. Point the TCP URL of these services to new HSM host and port:
The above parameter is available in the Helm Chart of respective service.
Integrating Antivirus Scanner
In MOSIP, virus scanners can be implemented at different levels. ClamAV is used as an antivirus scanner by default. If you want your anti-virus (AV) to be incorporated, the same can be achieved as follows:
Registration Client
Running your AV on the registration client machine is sufficient. Not required for integration with MOSIP.
Server
This is implemented as part of the Kernel ClamAV project. MOSIP uses this project to scan registration packets. You may integrate your anti-virus (AV) in the following ways:
Option 1
The registration packets are stored in Minio. Several AVs provide inline analysis of network traffic to defend against threats. This network-based form of implementation can be carried out without any alteration of the MOSIP code, but careful network configuration is required to ensure that the network traffic passes through your AV.
Option 2
To support your AV at the code level, the following Java code has to be altered. In VirusScannerImpl.java
, the scanFile/scanFolder/scanDocument
API must be implemented with your AV SDK.
BioSDK Integration
In reg client
, reg proc
, and ida
, the biosdk library is included. The guide offers steps for these integrations to be enabled here.
Integration with IDA
It is expected that the BioSDK will be available as an HTTP service for IDA. The ID Authentication module then calls this service. To build such a service, refer to the reference implementation. /service
contains the service code, while /client
contains the client code that is bundled with IDA and connects to the service. This service can run as a pod within the Kubernetes cluster or be hosted outside the cluster.
It is important to compile the client code into biosdk.zip and copy it to Artifactory. It is currently available at the following address:/artifactory/libs-release-local/biosdk/mock/0.9/biosdk.zip
. This zip is downloaded by IDA dockers and installed during docker startup.
Integration with Reg Proc
The above service works for regproc
as well.
Integration of External Postgres DB
Sandbox Parameters
****
Make sure Postgres is configured with 'UTC' as the time zone. This is set in postgresql.conf
when you install Postgres.
Integration with External Print Service
Introduction
MOSIP provides a reference implementation of print service that interfaces with the MOSIP system.
Integration Steps
Ensure the Following:
Websub runs as https://{sandbox domain name}/websub on MOSIP and is accessible externally via Nginx. Websub runs in the DMZ, and nginx in the sandbox is configured for this access
Your service is able to register a topic with Websub with a callback url
The callback url is accessible from MOSIP websub
The print policy was established (be careful about enabled/disabled encryption)
Print partner is created and certs are uploaded. The private key and certificate of the print partner are converted to p12 keystore format. You may use the following command:
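One way to do this with openssl (file names, alias and password below are placeholders):

```bash
openssl pkcs12 -export \
  -in print-partner-cert.pem \
  -inkey print-partner-key.pem \
  -out print-partner.p12 -name printpartner -passout pass:changeit
```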
This p12 key and password is used in your print service
Your print service reads the relevant (expected) fields from received credentials
Your print service is able to update MOSIP data share service after successfully reading the credentials
This guide includes numerous tips for using various dashboards made available as part of the default installation of the sandbox. The links to various dashboards are available at:
A default dashboard to display the logs of all MOSIP services is installed as part of the sandbox installation. To view the Dashboard:
Go to Kibana Home
On the drop down on the top left select Kibana->Dashboard
In the list of dashboards search for "MOSIP Service Logs"
Select the dashboard
Dashboard links:
MZ: https://{sandbox
domain name}/mz-dashboard
DMZ: https://{sandbox
domain name}/dmz-dashboard
On the console machine, the tokens for the above dashboards are available at /home/mosipuser/mosip-infra/deployment/sandbox-v2/tmp. For each dashboard, two tokens are created: admin and view-only. View-only privileges are restricted.
Link:
https://{sandbox
domain name}/grafana
Recommended charts:
11074 (for node level stats)
4784 (for container level stats)
Open the MOSIP Admin portal from the home page of the sandbox. Login with superAdmin username, MOSIP password.
The sanity check procedures are the steps to verify that an installation is ready to be tested by a system administrator. In quality audits, a sanity check is considered a major activity. It performs a quick test to check the main functionality of the software.
After deployment, all pods should be 'green' on the Kubernetes dashboard; on the command line, both of these commands should display pods in the 1/1 or 2/2 state.
Some pods that show status 0/1 Complete are Kubernetes jobs - they will not turn 1/1.
Note the following namespaces
| Component | Namespace |
| --- | --- |
| MOSIP modules | Default |
| Kubernetes dashboard | Kubernetes-dashboard |
| Grafana | Monitoring |
| Prometheus | Monitoring |
| Filebeat | Monitoring |
| Ingress Controller | Ingress-Nginx |
To check pods in a particular namespace. Example:
If any pod is 0/1 then the Helm install command times out after 20 minutes
Following are some useful commands:
Some pods have logs available in logger-sidecar as well. These are application logs.
To re-run a module, delete the module with helm and then start it with the playbook. Example:
Quick Sanity Check of Pre-Registration
Open Pre-Reg home page:
https://{sandbox domain name}/pre-registration-ui/
Enter your email or phone no to create an account
Enter the OTP that you received via email/sms in the selected box, or enter 111111 for Proxy OTP mode
Accept the Terms and Condition and CONTINUE after filling the demographic data
Enter your DOB or age
Select any of the Region, Province, City, Zone from the dropdown
Select any pin code from the dropdown
Phone number should be 10 digits and must not start with 0
CONTINUE after uploading required document of given size and type or skip the document upload process. (Recommended: upload any one document for testing purposes.)
Verify the demographic data and document uploaded previously and CONTINUE. You may edit with BACK if required
Choose any of the Recommended Registration Centre registration and CONTINUE
Select date and time-slot for Registration and add it to Available Applicants by clicking on + and CONTINUE
Now your first Appointment booking is done. You may view or modify your application in Your Application section
Registration Processor Test Packet Uploader
Prerequisites
Auth Partner Onboarding
Packet Creation
Refer to notes in config.py
and data/packet*/ptkconf.py
for various parameters of a packet. Parameters here must match records in Master DB.
Following example packets are provided. All these are for new registration:
Packet1: Individual 1 biometrics, no operator biometrics
Packet2 : Individual 2 biometrics different from above, no operator biometrics
Packet3: Individual 2 biometrics with operator biometrics of Individual 1
Clearing the DB
This is optional. To see your packet clearly, you may want to clear all records of previous packets in mosip_regprc
tables:
Provide your postgres
password.
Caution: Ensure that you want to clear the DB. Delete this script if you are in production setup.
Upload Registration Packet
Use the following command:
Verify
Verify the transactions as below:
Provide postgres
password. Note that it may take several seconds for packet to go through all the stages. You must see a SUCCESS for all stages.
UIN should have got generated
The latest transaction must be seen in credential_transaction
table of mosip_credential
DB
Further, identity_cache
table of mosip_ida
db should have fresh entries corresponding to the timestamp of UIN generated
Before we look at how to reset installation, you should ensure you have a recent backup of the clusters and persistence data.
Performing a reset will wipe out all your clusters and delete all persistence data. To reset your machine back to fresh install, run the following script:
If a message prompts you to confirm the reset of the machine, select the appropriate option to proceed.
Persistent data is available over Network File System (NFS), hosted on the console machine at /srv/nfs/mosip.
For any persistent data, all pods write to this location. If required, you can backup this folder.
Note:
Postgres is initialized only once and populated. Postgres is not initialized if persistent data is present in /srv/nfs/mosip/postgres
. In order to force an init, execute the following:
Postgres includes data from Keycloak. Keycloak-init
does not overwrite any data, but just updates and adds information. If you want to clean up data from Keycloak, you need to manually clean it up or reset all postgres.
Several handy tools are installed with preinstall.sh: shortcut commands to troubleshoot and diagnose technical issues, and little hacks that make tasks a little quicker.
The second part after adding above:
Tmux lets you start a session and then open multiple windows inside that session. To enable it, copy the config file as follows:
This is a tool to compare text and property files, to find the difference between two text files (*.properties):
Scalability of complex systems is non-trivial especially when there are multiple running components like microservices, databases, storage clusters etc. with complex interactions. End-to-end performance modelling of such a system poses significant challenges as the performance of the 'whole' does not have a straight-forward linear relationship to its 'parts'.
MOSIP recommends a cell architecture where hardware and software within a cell is fixed (canned), and the cell is benchmarked for input/output capacity. Such cells may then be replicated to scale up capacity in a production deployment, with traffic diverted to them via a load balancer. Ideally, each cell must be isolated from the others without any cross-dependencies. Practically, however, they may share certain resources; the scalability of such common resources needs to be addressed separately.
This document presents cell architecture for all major MOSIP modules for production deployment.
The following resources are shared across cells:
ABIS Queue
Registration Process DB
ID Repository HDFS/CEPH cluster
ID Repository DB
The communication between the Demilitarized Zone (DMZ) and the Militarized Zone (MZ) is strictly via a firewall.
The encrypted packets from registration client first land into Packet Landing Zone in the DMZ. Some of the Registration Processor stages run in the DMZ for initial packet handling.
ClamAV is a free, cross-platform and open-source antivirus software toolkit able to detect many types of malicious software, including viruses.
Steps to install ClamAV in RHEL-7.5
To install ClamAV, first we need to install the EPEL repository:
After that we need to install ClamAV and its related tools.
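On RHEL 7 this typically looks like the following (a sketch; the package names come from EPEL):

```bash
sudo yum install -y epel-release
sudo yum install -y clamav-server clamav-data clamav-update clamav-filesystem \
                    clamav clamav-scanner-systemd clamav-devel clamav-lib clamav-server-systemd
```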
After completing the above steps, we need to configure the installed ClamAV. This is done by editing /etc/clamd.d/scan.conf. In this file we have to remove the Example lines so that ClamAV can use the file's configuration. We can easily do this by running the following command -
Another thing we need to do in this file is to define our TCP server type. Open the file using -
Here we need to uncomment the line #LocalSocket /var/run/clamd.scan/clamd.sock. Just remove the # symbol from the beginning of the line.
Now we need to configure FreshClam so that it can update the ClamAV DB automatically. To do that, follow the steps below -
First create a backup of the original FreshClam configuration file -
In this freshclam.conf file we also need to remove the Example lines. Run the following command to delete all Example lines -
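For instance (the same sed pattern also works for scan.conf above):

```bash
sudo cp /etc/freshclam.conf /etc/freshclam.conf.bak   # back up the original
sudo sed -i '/^Example/d' /etc/freshclam.conf          # drop the Example lines
```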
Test freshclam by running -
After running the above command you should see output similar to this -
We will create a freshclam service so that freshclam runs in daemon mode and periodically checks for updates throughout the day. To do that, we will create a service file for freshclam -
And add the below content -
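A minimal sketch of such a unit file (e.g. /usr/lib/systemd/system/freshclam.service; paths follow the default RHEL layout):

```ini
[Unit]
Description = ClamAV virus database updater (freshclam daemon)
After = network.target

[Service]
Type = forking
# -d runs freshclam as a daemon; -c 4 checks for updates four times a day
ExecStart = /usr/bin/freshclam -d -c 4
Restart = on-failure
PrivateTmp = true

[Install]
WantedBy = multi-user.target
```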
Now save and quit. Also reload the systemd daemon to refresh the changes -
Next start and enable the freshclam service -
Now freshclam setup is complete and our ClamAV DB is up to date. We can continue setting up ClamAV. Now we will copy the ClamAV service file to the system service folder.
Since we have changed the service name, we need to change it in the file that uses this service as well -
Remove the @ symbol from the .include /lib/systemd/system/clamd@.service line and save the file.
We will edit the clamd service file now -
Add the following lines at the end of the clamd.service file.
Also remove the %i symbol from various locations (e.g. the Description and ExecStart options). At the end of the editing, the service file should look something like this -
Now finally start the ClamAV service.
If it works fine, enable this service and check the status of the ClamAV service -
In MOSIP we require ClamAV to be available on port 3310. To expose the ClamAV service on port 3310, edit scan.conf and uncomment #TCPSocket 3310 by removing the #. After that, restart the clamd@scan service -
Since we are exposing ClamAV on port 3310, we need to allow incoming traffic on this port. On RHEL 7, run the below command to add a firewall rule -
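For example, with firewalld:

```bash
sudo firewall-cmd --zone=public --add-port=3310/tcp --permanent
sudo firewall-cmd --reload
```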