API Test Rig is a comprehensive tool designed to rigorously test all APIs provided by MOSIP Identity. It ensures thorough testing by covering various corner cases. The Test Rig is module-specific, allowing users to efficiently execute the specific APIs they need. With its optimized design, API Test Rig can complete test runs in a few minutes, improving the speed and effectiveness of the testing process.
The API Test Rig consists of several key components and follows a specific execution flow.
modules: Indicates the name of the module being tested. Examples include auth, prereg, idrepo, resident, esignet, partner, mimoto, etc. Users can specify the module they wish to test, enabling targeted testing based on specific functionalities.
env.user: This parameter specifies the user of the environment where the Test Rig will be executed.
env.endpoint: Indicates the environment where the application under test is deployed. Users should replace <base_env> with the hostname of the desired environment.
env.testlevel: Determines the level of testing to be conducted. It should be set to ‘smoke’ to run only smoke test cases or ‘smokeandRegression’ to execute all tests of all modules.
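As an illustration, these parameters are supplied as JVM -D arguments when the Test Rig is launched. The command below is a hedged sketch only: the JAR name and argument values are placeholders, not exact artifact names.

```
java -Dmodules=auth \
     -Denv.user=<env_user> \
     -Denv.endpoint=https://api-internal.<base_env> \
     -Denv.testlevel=smoke \
     -jar <apitestrig-executable>.jar
```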
Ensure that the following software and configurations are set up on your local machine before executing the automation tests:
Java 11 and Maven (3.6.0): Install Java 11 JDK and Maven 3.6.0 on your machine. These are required to compile and execute the automation tests.
Lombok Configuration: Configure Lombok on your machine. Click here to refer to the official Lombok website for installation instructions and setup details.
Git Bash (for Windows): Install Git Bash version 2.18.0.windows.1 on your Windows machine. Git Bash provides a Unix-like command-line environment on Windows, enabling seamless execution of commands.
For Windows: Place the settings.xml file in the .m2 directory.
For Linux: Place the settings.xml file in the regular Maven configuration folder. Additionally, copy the same settings.xml file to the /usr/local/maven/conf directory.
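For reference, the standard locations referred to above are as follows (assuming a default Maven installation; adjust if your Maven home differs):

```
Windows: C:\Users\<username>\.m2\settings.xml
Linux:   ~/.m2/settings.xml   (also copied to /usr/local/maven/conf/settings.xml)
```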
Clone Repository
From a terminal (Linux) or command prompt (Windows), navigate to your preferred directory
Using Git Bash: Copy the Git repository link: https://github.com/mosip/mosip-functional-tests
Open Git Bash at any location
Run the command: git clone https://github.com/mosip/mosip-functional-tests
Configure Properties
Navigate to the cloned repository directory
Update the properties file as follows:
Uncomment the lines beginning with authDemoServiceBaseURL and authDemoServicePort to run locally
Comment out the lines beginning with authDemoServiceBaseURL and authDemoServicePort intended for Docker
Ensure that the push-reports-to-s3 property is set to no for local execution (an illustrative snippet follows this list)
Update the kernel.properties file with the appropriate client ID and secret key values
Ensure that the client ID and secret key are correctly configured for the intended environment.
Make sure the globaladmin user is available in the database.
Pass all the VM arguments mentioned above correctly and execute.
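As a hedged illustration of the properties-file changes described above, the local-run entries might look like the snippet below; the host and port values are examples only, so keep the values already present in your cloned properties file.

```
# Local execution: uncomment the local AuthDemo entries (values shown are examples)
authDemoServiceBaseURL=http://localhost
authDemoServicePort=8082
# Comment out the Docker-specific authDemoServiceBaseURL/authDemoServicePort lines
# Keep report upload disabled for local runs
push-reports-to-s3=no
```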
Setup
The Authdemo pod should be up
All Cronjob-apitestrig jobs should be up
The config maps shown below should be configured correctly
Following are tested modules:
ID Repository (idrepo): ID Repository serves as a centralized database containing individual identity records. It offers API-based functionalities for storing, retrieving, and updating identity details, facilitating seamless integration with other MOSIP modules.
Purpose: Testing the functionality and reliability of identity data management operations within the ID Repository module.
Authentication (auth): Authentication module provides independent services for verifying individual identities. It supports seeding with authentication data and includes services such as Authentication Services, OTP Service, and Internal Services.
Purpose: Evaluating the effectiveness and accuracy of identity authentication processes, including authentication services and OTP generation, within the Authentication module.
Pre-registration (preregistration): Pre-registration module enables residents to enter demographic data, upload supporting documents, and book registration appointments. It supports appointment management, notifications, rescheduling, and cancellation functionalities, enhancing the registration process efficiency.
Purpose: Testing the functionality and usability of pre-registration features, including data entry, appointment booking, and appointment management, to ensure a seamless registration experience for residents.
Mobile ID (MobileId): Mobile ID module facilitates the generation and management of mobile-based identity credentials. It offers secure and convenient identity verification solutions leveraging mobile devices.
Purpose: Assessing the reliability and security of mobile-based identity credential generation and management processes within the Mobile ID module.
Esignet: Esignet module provides electronic signature services, enabling digital signing of documents and transactions. It ensures the integrity and authenticity of electronic documents and transactions.
Purpose: Verifying the functionality and effectiveness of electronic signature services, including document signing and verification processes, within the Esignet module.
Partner Management Services (Partner): Partner Management Services module offers services for partner onboarding and management. It includes Partner Management Service and Policy Management Service functionalities.
Purpose: Testing the partner onboarding process and management services provided by the Partner Management Services module, ensuring smooth collaboration and integration with partner entities.
Identify Report Name
Locate the report file in the 'Object Store' automation bucket.
The report name typically follows the format: mosip-api-internal.<env_name>-<module_name>-<timestamp>_report_T-<Total>_P-<Passed>_S-<Skipped>_F-<Failed>.html
<env_name>: Name of the environment where the tests were conducted
<module_name>: Name of the module for which tests were executed
<timestamp>: Timestamp indicating when the report was generated
T, P, S, F: Total, Passed, Skipped, and Failed counts of test cases, respectively
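For example (hypothetical values), a report named mosip-api-internal.dev.mosip.net-auth-20240101T120000_report_T-100_P-96_S-1_F-3.html would indicate 100 auth test cases in total, of which 96 passed, 1 was skipped, and 3 failed.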
Open the report file and review the summary section
Check the total number of test cases executed (T), along with the counts of passed (P), skipped (S), and failed (F) test cases
At the beginning of the report, you'll find a section listing all API names along with their corresponding test case counts
Click on the specific API you want to inspect further
After selecting an API, you'll be directed to a page displaying all test cases associated with that API
Click on the individual test case to view detailed information, including request and response data
Analyze the request payload, expected response, and actual response received during testing
If a test case failed, analyze the details to identify the root cause
Review the request parameters, headers, and body to ensure correctness
Check the response received against the expected response, focusing on status codes, data format, and content
Create folder structure
Create a folder with the API name in the specific module directory
This folder will contain the necessary files for automating the API
Create Input and Output Templates
Inside the API folder, create one Handlebars (hbs) file for input and one for output
The input hbs file should contain placeholders for parameters that will be used in the test cases
The output hbs file should contain placeholders for expected response data
Parameterize Data in Input Template
Parametrize the data in the input hbs file using Handlebars syntax
Define variables or placeholders for the input data that will be used in the test cases
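As a minimal sketch of the two templates (the file names and fields are illustrative; real templates should mirror the request and response bodies of the API being automated):

```
{{!-- input template, e.g. createVid.hbs (illustrative) --}}
{
  "vidType": "{{vidType}}",
  "UIN": "{{UIN}}"
}

{{!-- output template, e.g. createVidResult.hbs (illustrative) --}}
{
  "vidStatus": "{{vidStatus}}"
}
```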
Create YAML File
Create a YAML file to define the test cases for the API
Follow the predefined structure for the YAML file, including API name, test case names, endpoint, authentication method, HTTP method, input/output templates, and input/output data
Define each test case under the corresponding API with relevant details such as endpoint, authentication method, HTTP method, input/output templates, and input/output data
Input and Output Data in YAML
In the YAML file, provide the input data for each test case under the input block, using the parameters defined in the input template
Similarly, define the expected output data under the output block, using placeholders defined in the output template
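A hedged sketch of the YAML layout described above is shown below; the API, test case, and key names are illustrative, so mirror the structure of an existing module YAML rather than copying this verbatim.

```yaml
CreateVid:                                # API name (illustrative)
  CreateVid_Smoke_Valid_01:               # test case name (illustrative)
    endPoint: /v1/resident/vid            # endpoint under test (example path)
    role: resident                        # authentication method / role
    restMethod: post                      # HTTP method
    inputTemplate: resident/CreateVid/createVid          # input hbs template
    outputTemplate: resident/CreateVid/createVidResult   # output hbs template
    input: '{ "vidType": "Temporary", "UIN": "$UIN$" }'  # values resolved from test data
    output: '{ "vidStatus": "ACTIVE" }'                  # expected response placeholders
```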
Once the folder structure and YAML file are created, execute the test cases using the automation testing framework
The framework will substitute the input data from the YAML file into the input template and compare the actual output with the expected output defined in the YAML file
This documentation provides comprehensive guidance for utilizing and expanding the API Test Rig functionalities effectively.
The UI test rig is a comprehensive automation framework designed to validate the functionality and user interface of various modules within web-based applications. This robust framework encompasses modules tailored for specific functionalities, ensuring thorough testing and validation of essential components. Below are the modules covered:
Admin UI: Admin Service module facilitates the creation of machines and performs user, center, and device mapping. Additionally, it supports master data bulk upload and retrieval of lost RIDs.
PMP UI (Partner Management Services): PMPUI is a web-based UI application offering services for partner onboarding and management. It includes functionalities for Partner Management Service and Policy Management Service.
Resident UI: Resident service is a web-based UI application that enables users to perform various operations related to their UINs / VIDs. Users can utilize these services to view, update, manage, and share their data. Additionally, they can report issues in case of grievances.
Test Script
Description: The test script is the Java code that defines the steps and actions to be executed during the UI automation.
Implementation: Written in Java, it utilizes Selenium WebDriver commands to interact with the web elements of the application.
Selenium WebDriver
Description: The core component of Selenium that provides a programming interface for interacting with web browsers.
Implementation: Instantiate WebDriver (for example, ChromeDriver, FirefoxDriver) to control the browser and navigate through the application.
Test Data
Description: Input data required for the test, such as login credentials, file paths for document upload, or any other data necessary for the test scenario.
Implementation: Defined in the test script or loaded from external sources like kernel files.
Logging and Reporting
Description: Capture and log relevant information during the execution for debugging and reporting purposes.
Implementation: Utilize logging frameworks (for example, Log4j) and generate test reports (for example, TestNG).
Setup Phase
Initialize WebDriver: Set up the Selenium WebDriver by instantiating the appropriate browser driver (for example, ChromeDriver).
Navigate to URL: Open the browser and navigate to the URL of the document-based application.
Test Execution Phase
User Authentication (if required): If the application requires login, provide the necessary credentials using Selenium commands.
Interact with UI Elements: Use WebDriver commands to locate and interact with UI elements (for example, buttons, input fields, dropdowns).
Perform Actions: Execute actions such as uploading and downloading documents, entering data, or triggering events.
Data Validation: Implement assertions to validate that the application behaves as expected.
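A minimal, self-contained sketch of the setup and execution phases above, assuming Selenium WebDriver and TestNG are on the classpath; the URL and element locators are hypothetical placeholders, not the actual identifiers used by the MOSIP UIs.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginFlowTest {

    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        // Setup phase: initialize the browser driver and open the application URL
        driver = new ChromeDriver();
        driver.get("https://admin.example-env.mosip.net"); // hypothetical URL
    }

    @Test
    public void loginAndVerifyDashboard() {
        // Test execution phase: authenticate and interact with UI elements
        driver.findElement(By.id("username")).sendKeys("test-user");      // hypothetical locator
        driver.findElement(By.id("password")).sendKeys("test-password");  // hypothetical locator
        driver.findElement(By.id("loginButton")).click();                 // hypothetical locator

        // Data validation: assert that the expected page was reached
        Assert.assertTrue(driver.getTitle().contains("Dashboard"),
                "Dashboard page did not load after login");
    }

    @AfterClass
    public void tearDown() {
        // Clean up the browser session
        if (driver != null) {
            driver.quit();
        }
    }
}
```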
Clone Repository
Admin UI: Clone the repository from Admin UI.
Resident UI: Clone the repository from Resident UI.
PMP UI: Clone the repository from Partner Management Portal.
Navigate to the cloned repository directory.
Update the properties file with the following changes:
Verify that the client ID and secret key are accurately configured for the target environment.
Confirm the existence of the 'globaladmin' user in the Database.
Ensure all VM arguments mentioned above are correctly passed and executed.
Update the kernel.properties file with the correct values for the client ID and secret key.
Configure environment-specific secret keys in the Kernel.properties file located at: Kernel.properties
Set environment variables for each unique URL corresponding to different environments (for example, development, test, production).
Setup:
Once the uitestrig configuration maps are accurately configured, you can proceed to execute the UI test rig pod within the Docker environment.
Following execution, the comprehensive execution report will be made available in the MinIO S3 bucket, accessible for retrieval and further analysis. For the Docker setup of uitestrig, deploy uitestrig as a Docker pod; it can then be run directly using the RUN button. After execution, the report will be available in the uitestrig folder in MinIO.
Application and URL Information
Application URL: Ensure you have the correct URL of the application under test. This is crucial for navigating to the relevant web page during test execution. The application URL is sourced from kernel.properties.
Login Credentials: If the application necessitates authentication, ensure you possess valid login credentials for testing purposes. During test execution, user accounts are created to facilitate login credential management.
Test Data
Positive Test Data: Positive test data is generated dynamically during the execution of uitestrig via APIs.
Document Files (if applicable): If your automation includes document uploads or downloads, ensure you have sample document files (e.g., PDFs, Word documents) prepared for testing purposes. These files will be utilized during the testing process.
Configuration Files: Ensure all necessary configuration files for your project are prepared. These files may include browser configurations, test environment settings, or other parameters essential for the execution of your tests.
Environment Variables: Consider utilizing environment variables if your application URL or other settings vary across different environments. This approach provides flexibility and allows for easier management of environment-specific configurations during testing.
Generate TestNG Reports: Ensure your Selenium and Java project is configured to generate TestNG reports. Utilize build automation tools like Maven or Gradle to execute tests and automatically generate TestNG reports.
Locate the TestNG Reports: After test execution, locate the TestNG reports directory. Typically, TestNG generates HTML reports in the testng-report or uitestrig directory within your project's directory structure.
Reviewing TestNG HTML Reports:
Overview Page: Open the index.html or emailable-report.html file to access the test suite overview. Look for summary information such as total tests run, passed, failed, and skipped.
Suite Information: Navigate to the "Suite" section to examine details about individual test suites, including start and end times.
Test Information: Check the "Tests" section for detailed information about each test, including start and end times, test duration, and a summary of passed, failed, and skipped methods.
Methods Information: Explore the "Methods" section to obtain detailed information about each test method. This includes the class name, method name, description, status (pass/fail/skip), and time taken.
Logs and Output: TestNG reports often include logs and output links. These contain additional information about test execution, such as console logs, error messages, or stack traces.
Analyzing Failed Tests:
Failed Tests Section: Focus on the "Failed tests" section within the TestNG report. Here, you'll find detailed information about the failed tests, including the class name, method name, and the reason for the failure. This section provides a concise overview of which tests encountered issues during execution.
Screenshots: The uitestrig framework captures screenshots on test failure. Analyzing these screenshots can help identify UI rendering problems, layout issues, or unexpected behaviour that may not be evident from the test logs alone.
To automate a new UI flow for the three modules (AdminUI, PMPUI, and ResidentUI), you can follow these steps:
Understand the New Flow:
Requirement Analysis: Thoroughly review the documentation, user stories, or specifications related to the new functionality to gain a clear understanding of the flow.
Test Scenarios: Identify specific test scenarios for each module's UI flow. Break down the flow into individual steps that can be automated.
Design Test Cases:
Test Case Definition: Define test cases for each identified scenario, outlining the expected behaviour, input data, and verification points.
Use Page Object Model (POM): Implement the Page Object Model to structure your automation framework. Create separate classes representing each page or component within the modules.
Write Automation Scripts:
Script Structure: Write automation scripts using Java and Selenium WebDriver, following a modular and maintainable structure.
Implement Test Cases: Translate the defined test cases into executable scripts. Utilize Selenium WebDriver commands to interact with web elements, perform actions, and validate the expected behavior.
For each module (AdminUI, PMPUI, and ResidentUI), create separate sets of automation scripts that cover the identified test scenarios. Each script should focus on a specific test case, interacting with the respective UI elements and validating the expected outcomes.
Utilize the Page Object Model to encapsulate the interaction with web elements within separate page classes. This approach enhances maintainability and reusability of your automation scripts by separating the UI logic from the test scripts.
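For illustration, a page class under the Page Object Model might look like the sketch below; the locators and method names are hypothetical and should be replaced with the actual element identifiers of the module under test.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object encapsulating a login page; test scripts call login() instead of
// touching locators directly, so UI changes stay isolated to this class.
public class LoginPage {

    private final WebDriver driver;

    // Hypothetical locators for illustration only
    private final By usernameField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton   = By.id("loginButton");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void login(String username, String password) {
        driver.findElement(usernameField).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
    }
}
```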
Lastly, ensure thorough testing and validation of the implemented automation scripts to verify the accuracy and reliability of the automated UI flows for each module.
This documentation provides comprehensive guidance for utilizing and expanding the UI Test Rig functionalities effectively.
We implemented three specialized test rigs for automation testing in MOSIP Identity. The UI test rig ensures thorough validation of web-based application modules, including Admin UI, PMP UI, and Resident UI.
The Domain-Specific Language (DSL) test rig facilitates end-to-end testing of MOSIP functionalities like Registration and Authentication. Additionally, the API test rig rigorously tests all MOSIP APIs, covering various corner cases for enhanced reliability.
Click the cards below to know more.
A Domain-Specific Language (DSL) test framework is crafted specifically to facilitate thorough end-to-end testing of MOSIP functionalities. This framework empowers the specification and execution of test cases that accurately reflect real-world scenarios. These functionalities include:
Registration
Pre-registration + registration
Authentication
Execution Components and Flow
The PacketCreator service, constructed with Spring Boot, is responsible for generating dummy data packets used in registrations. It employs Mockmds to fabricate realistic, simulated biometric information in the cbeff format, tailored to various use cases.
During deployment, the PacketCreator service can be configured to operate on a specific port or URL, providing flexibility in its deployment settings. In Rancher environments, this configuration is managed through ConfigMaps, allowing for seamless adjustment of deployment parameters.
Moreover, PacketCreator is equipped with the ability to concurrently execute multiple test scenarios, thereby optimising its operational efficiency.
Click here to know more about code and test data to run automation tests.
During deployment, the service can be configured to operate on a specific port or URL, a setup that is facilitated through the use of Config Maps in Rancher environments. Additionally, it includes functionality to generate partner P12 files, which are crucial for establishing secure communication channels.
Click here to know more about MOSIP repository for Functional Tests.
The DSL test rig, also referred to as the DSL orchestrator, plays an important role in managing test data and executing the use cases outlined in scenario sheets. It harnesses the capabilities of PacketCreator and AuthDemo certificates to seamlessly orchestrate the complete scenario execution. Furthermore, the test rig is adept at concurrently running multiple scenarios, thereby maximizing efficiency.
Upon completion of execution, comprehensive reports are stored in an 'Object Store' S3 bucket, with the folder name specified in the configuration maps.
Click here to know more about test data to run automation tests.
PacketCreator establishes a connection with Mockmds through the Secure Biometric Interface (SBI). The SBI acts as a mediator and provides the port number (for example, 4501) used by PacketCreator to call Mockmds and access its cbeff generation utility for creating simulated biometric data.
The MountVolume component is responsible for downloading and mounting specific folders from an NFS (Network File System) shared storage (/srv/nfs/mosip/testrig), which is accessible by both DSLtestrig and PacketCreator. These folders are:
dsl-scenarios
packetcreator-data
packetcreator-authdemo-authcerts
Device certificates generated by PacketCreator are stored in the packetcreator-authdemo-authcerts folder.
Generated packets are stored in the following location for future retrieval.
A profile resource is utilized for each generated packet. This profile acts as a container for test data, including both demographic information and biometric details specific to the packet.
For local execution of DSL scenarios, follow these steps:
Configure environment-specific secret keys in the Kernel.properties file located at: Kernel.properties.
Set environment variables for each unique URL corresponding to different environments (e.g., development, test, production).
The centralized folder for local setup can be downloaded from the following location: Centralized Folder.
Place the latest build JAR file in the packetcreator folder. Execute run.bat from the packetcreator folder to start the PacketCreator service.
Place the latest build JAR file for the Auth Demo service. Ensure the service runs on the same port specified in the Kernel file.
Identify the specific scenario numbers you want to execute by defining a configuration parameter like scenariosToRun. Assign a comma-separated list of scenario numbers. For a full suite execution, leave this parameter empty.
Define another configuration parameter like scenariosToSkip to list the scenario numbers corresponding to known issues you want to exclude from execution. This allows focusing on new or relevant scenarios without re-running known problematic ones.
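For example, the two parameters described above could be set as shown below; the parameter names follow this guide and the scenario numbers are placeholders only.

```
# Run only scenarios 2, 5, and 9; leave empty to run the full suite
scenariosToRun=2,5,9
# Skip scenarios with known issues
scenariosToSkip=7
```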
To ensure successful execution from Docker, please verify the following prerequisites:
Ensure that the Authdemo pod is running and operational within the Docker environment.
Confirm that the Packetcreator pod is deployed and operational within the Docker environment.
Verify that the configuration maps for the DSL test rig are correctly set up and configured to facilitate communication and orchestration between components.
Once the DSL test rig configuration maps are accurately configured, you can proceed to execute the DSL test rig pod within the Docker environment.
Following execution, the comprehensive execution report will be made available in the 'Object Store' S3 bucket, accessible for retrieval and further analysis.
Before executing the DSL test rig, ensure the following prerequisites are met:
Globaladmin User Configuration: The globaladmin user should be configured to point to either Mycountry or the root hierarchy within the user management system.
Keycloak Deployment: Ensure a running Keycloak deployment is available to manage user authentication and authorization processes during test execution.
Network File System (NFS) Setup: NFS must be properly configured and accessible to share data between relevant components, such as DSL test rig and PacketCreator.
PacketCreator and AuthDemo Paths: Verify that the paths for both PacketCreator and AuthDemo in the configuration are accurately set, referencing their respective locations within the environment.
Scenario Sheet: The scenario sheet containing test cases must have the correct scenario number(s) specified for execution.
'Object Store' Bucket Access: Confirm that the configuration maps hold valid information to access the 'Object Store' S3 bucket. This access is necessary for retrieving reports after the completion of the test run.
Fetch reports from 'Object Store' as displayed below.
Here is the explanation of the different reports generated by the DSL test rig:
ExtentReport:
Provides a summarized overview of executed scenarios and their outcomes (pass/fail).
Each entry may include a brief description of failure if a scenario fails.
Ideal for quickly identifying failing scenarios without diving into details.
Detailed Testing Report:
Offers a comprehensive overview of each executed scenario.
Allows searching for specific scenarios by their number.
Displays the complete execution flow of a chosen scenario, starting from the beginning.
Provides detailed logs and information about scenario execution and any encountered failures.
Suitable for in-depth analysis of specific scenarios and understanding the root cause of failures.
Pod Logs:
Contain detailed logging information generated by individual pods involved in the test execution (for example, DSL test rig, PacketCreator
).
Provide low-level details about system events, errors, and communication between components.
Recommended for advanced troubleshooting and identifying the root cause of complex failures.
To write a new scenario using existing DSL steps for the DSL test rig, focusing on scenario number 2 and adapting it for a different flow, follow these steps:
Analyze Existing Scenario 2:
Thoroughly analyze the existing scenario number 2 to understand its basic UIN generation flow.
Identify the specific changes needed in the flow to adapt it for your new scenario.
Leverage Existing Steps:
Review the existing DSL step definitions available in your test framework.
Identify existing steps that align with your new scenario and can be reused.
Re-using existing steps will save time and effort compared to building new steps from scratch.
Create New Steps (if Necessary)
If your desired flow involves actions not covered by existing steps, create new DSL steps as needed.
Ensure proper documentation and adhere to coding best practices when creating new steps.
Organize the steps in the desired order of execution for your new scenario.
Use existing steps where applicable and incorporate new steps as needed.
Test and Refine
Testing and refining the newly created scenario is crucial to ensure it functions correctly and produces the desired outcome. To test and refine, complete the following steps:
Execute the newly created scenario within the DSL test rig environment.
Verify that the scenario executes as expected and produces the desired outcome. Check for any errors or unexpected behaviour during execution.
If necessary, debug and refine the scenario to address any issues or discrepancies encountered during execution.
Additional Tips:
Start with a simple scenario to build confidence in your understanding of the DSL and scenario writing process.
Consult colleagues who are familiar with the existing DSL and scenario definitions for guidance and support if needed.
Document your new scenario clearly and concisely, explaining its purpose and any changes made compared to the original scenario.
The snapshot below displays many more methods for your reference when writing scenarios.
To analyze scenario names effectively, consider the following strategies:
Look for keywords or phrases within scenario step names that provide hints about their purpose or the information they handle. These keywords can offer insights into the actions or data manipulations performed by each step.
Most DSLs come with documentation detailing their syntax, available steps, and structure. Refer to this documentation to understand the expected format and parameters for each step. It can provide clarity on the purpose and usage of individual steps.
Test Scenarios Incrementally: Start by testing a simple scenario that involves only a few steps you understand. Gradually add complexity as you gain confidence in your understanding. This practice can help you identify potential gaps in your knowledge and where further investigation might be needed.
This documentation provides comprehensive guidance for utilizing and expanding the DSL Test Rig functionalities effectively.