Developing a Cloud-Based Point-of-Care System: QA Lessons Learned

Categories: Testing and QA, Project Management, Agile, Healthcare

Every project has its challenges and triumphs. In this particular example, GlobalLogic partnered with a multinational manufacturer and provider of animal care services to find an alternative to an existing application whose limitations in client-system deployment and in scaling to more users and hospitals called for a robust, cloud-based Point-of-Care technology solution.

In this post, you can see how we tackled this complex project and overcame critical engagement challenges. We’ll share the lessons learned in QA; for example, how we gave the customer’s QA manager dynamic insight into the daily project objectives. You’ll also discover how each release and iteration drove improvements.

A few data points of note for this project:

  • Lines of Code: 967,883 (FE) + 49,494 (BE) = 1,017,377 LoC 
  • Project Members: 274
  • Headcount of QA Members: 64
  • Independent Scrum Teams: 16 
  • Delivered Application Modules or Features: 248 
  • Delivered User Stories, Enablers & Change Requests: 3,931 
  • Valid Defects Raised Through Release 1: 16,805

Our Technology Stack

  1. Backend Development: C# .NET Core 3.1
  2. Front-End Development: Angular, Angular Workspace, Next.js, Puppeteer, Microsoft, Angular Material, Syncfusion, Jest, SonarQube, TypeScript, HTML, SCSS, Node.js
  3. Database: Cosmos DB, Managed SQL Instance (Cloud DB, Search Index)
  4. DevOps & Infra: Azure Cloud, Azure DevOps (Planning, Pipelines & Artifacts), Event Hub, App Config, Function App, App Insights, Azure Key Vault, SignalR, Statsig, Redis Cache, Docker, Cloudflare (CDN), Palo Alto Networks, Azure Kubernetes (for orchestrating containers)
  5. Requirement Management: Microsoft Azure DevOps - Epic, Feature, User Story, Enabler, Change Request, Observation
  6. Defect & Test Management: Microsoft Azure DevOps - Test Plans & Defects
  7. Test Automation, Security & Performance: Protractor, JavaScript, Axios, Jasmine, Azure Key Vault, npm libraries, ReportPortal, log4js, Page Object Model, Veracode, JMeter, BlazeMeter

Discovery, Proposal & Kickoff

June 2019 marked the beginning of our discovery phase. We learned that an animal hospital brand recently acquired by our client needed to replace its outdated system with one that could support 1,000+ hospitals and 1,000+ staff per hospital. By contrast, the existing application could only support 40 hospitals.

The client sought a robust, scalable cloud-based web application equipped with the latest features for the pet care industry. It also needed the newest technology stack to replace the existing desktop application. 

After taking time to understand the business requirements, we submitted a request to gauge the existing team’s capability to deliver Point-of-Care technology.

The Proposal

In October 2019, five team members were hand-picked to deliver a proof of concept (POC) application. The main expectation was a front-end-heavy application with cloud support. The team completed the POC application in December 2019.

The client was satisfied with the POC application since the design met user interface (UI) expectations. 

The customized agile model met the customer’s needs so well that the team won an award for its work in December 2019.

Recommended reading: POC vs MVP: What's The Difference? Which One To Choose?

The Kickoff

When beginning a project, it’s crucial to establish a team with diverse expertise. Because hiring technical experts can be challenging, we implemented a hiring plan to thoroughly vet applicants, which enabled us to quickly establish the Scrum teams required to begin the project.

In January 2020, the teams met in the India office to discuss GlobalLogic’s standards and practices, meet new team members, and review the POC project schedule.

Project Increments

PI0 - Planning & Estimation

Initially, we only had visual designs to help depict the customer’s expectations. Creating a list of initial requirements was challenging. 

After several technical brainstorming sessions, the teams deciphered the visual designs and created a plan for the project. This included an estimate of the resources and work hours needed to complete it, as well as the test strategies.

Recommended reading: 6 Key Advantages of Quarterly Agile Planning [Blog]

PI1 - Execution

Once the project was approved, we refined the requirements, evaluated potential gaps in knowledge, and formulated user stories.

PI1 began with domains such as [User And Staff], [Schedule And Appointment], and [Client And Patient Management]. After a few iterations, we added Admin Domains.

To create the graphical user interface (GUI) and application programming interface (API) automation, we established test automation for the POC and created a framework structure.
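
To give a concrete sense of the API layer of this framework, here is a minimal TypeScript sketch using Axios and Jasmine from our stack. The base URL, route, and payload are illustrative placeholders rather than the actual service contract.

  import axios from 'axios';

  // Base URL and endpoint are hypothetical placeholders, not the real service.
  const api = axios.create({ baseURL: 'https://poc-api.example.com' });

  describe('Appointment API (illustrative)', () => {
    it('creates an appointment and returns its id', async () => {
      // Hypothetical payload for a scheduling request.
      const response = await api.post('/appointments', {
        patientId: 'P-1001',
        hospitalId: 'H-42',
        start: '2020-03-01T09:00:00Z',
      });

      expect(response.status).toBe(201);
      expect(response.data.id).toBeDefined();
    });
  });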

PI2 - Continuation

The development and testing of the POC application were on schedule. However, several problems arose with the [Ontology] domain, which had no frontend and was exclusively data-driven backend (BE) work.

In response, quality assurance (QA) raised large volumes of defects, flooding the system with findings.

With the completion of API and GUI automation, development started to reduce the regression effort in future test cycles. We also set up a User Acceptance Testing (UAT) environment and a QA environment for testing and assessing user stories.

Recommended reading: Zero-Touch Test Automation Enabling Continuous Testing

PI3 - The First Cut

As corner cases increased, we raised more defects and ran heavy regression cycles against the application. We completed multiple test cycles and fixed the defects.

Then, architects started their code standardization processes and helped to fix defects. After many evaluation cycles, we were ready to deliver the project to the customer.

PI4 - Project Scales

Given the customer’s satisfaction with the application, our team was asked to take on additional needs, including plans for the Electronic Medical Records (EMR) domain. A new tower (tower three) and team were also set up at a new location to build the EMR domain.

At tower two (Bangalore), there were two new domains, [Orders] and [Code Catalog]. The team quickly discovered that both domains had technical challenges.

Tower one also took on a new domain, [Visit], an Azure Event-based domain with its own set of problem statements.

QA Reforms & Process Enrichment

One challenge the customer’s QA manager encountered was the need for dynamic insight into the daily project objectives. The solution came from the Azure DevOps (ADO) dashboard, whose dynamic queries made it easier to track progress in the project.
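
As an illustration of the kind of dynamic query such a dashboard runs, the sketch below uses the azure-devops-node-api client from TypeScript to pull open defects via WIQL. The organization URL, project name, and token variable are assumptions for the example, not the project’s actual configuration.

  import * as azdev from 'azure-devops-node-api';

  // Placeholder organization URL and personal access token.
  const orgUrl = 'https://dev.azure.com/your-org';
  const token = process.env.AZURE_DEVOPS_PAT ?? '';

  async function listOpenDefects(): Promise<void> {
    const connection = new azdev.WebApi(orgUrl, azdev.getPersonalAccessTokenHandler(token));
    const witApi = await connection.getWorkItemTrackingApi();

    // WIQL query for bugs that are not yet closed, newest changes first.
    const result = await witApi.queryByWiql(
      {
        query: `SELECT [System.Id], [System.Title], [System.AssignedTo]
                FROM WorkItems
                WHERE [System.WorkItemType] = 'Bug' AND [System.State] <> 'Closed'
                ORDER BY [System.ChangedDate] DESC`,
      },
      { project: 'PointOfCare' } // hypothetical project name
    );

    console.log(`Open defects: ${result.workItems?.length ?? 0}`);
  }

  listOpenDefects().catch(console.error);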

The team then identified, discussed, and documented the test automation framework for the POC, intending to use automation to reduce the time and effort of each testing cycle. With consistent focus, time, and effort, the team implemented automation successfully. Another goal for the team was to reach 100% API automation and 65% GUI automation.
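
For the GUI layer, the framework followed the Page Object Model listed in our stack. The sketch below shows what a page object might look like in Protractor with TypeScript; the page, route, and locators are hypothetical placeholders, not the application’s real selectors.

  import { browser, by, element, ElementFinder } from 'protractor';

  // Hypothetical page object for a sign-in screen; locators are placeholders.
  export class SignInPage {
    private userName: ElementFinder = element(by.css('input[name="username"]'));
    private password: ElementFinder = element(by.css('input[name="password"]'));
    private signInButton: ElementFinder = element(by.buttonText('Sign in'));

    async open(): Promise<void> {
      await browser.get('/sign-in');
    }

    async signIn(user: string, pass: string): Promise<void> {
      await this.userName.sendKeys(user);
      await this.password.sendKeys(pass);
      await this.signInButton.click();
    }
  }

Tests then interact with the page through these methods rather than raw locators, which keeps GUI scripts readable and limits rework when the UI changes.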

The team also worked on identifying tools for non-functional testing, such as Security Testing, Performance Testing, Resolution Testing, Cross-Browser Testing, Globalization Testing, Keyboard Testing, and Scalability Testing. Non-functional requirement (NFR) testing was a primary deliverable.

The following processes were laid down formally and revised:

  • User Story Life Cycle 
  • ADO Defects Life Cycle 
  • ADO Tasks Creation & time logging 
  • Test Cases Design Guidelines 
  • Dev Environment Testing by QA

Tracking of QA work and regression testing became effective, and the Scrum and Scrum-of-Scrums (SoS) trackers were upgraded with several additional ways to track the project.

Releases & Iterations

Release Part 1 (First 10 Iterations)

After the PI phase, the project delivery model changed and we started working with a new feature-based approach. This created a solid foundation for Release 1.

We took many steps to make the project transparent, manageable, and well-documented. We also tracked the solution design, high-level design (HLD), and low-level design (LLD) for each feature. For tech-debt activities, we implemented code sanitization iterations. Integration of user stories then began so we could capture the regression effort, and end-to-end feature testing followed the completion of each feature.

After implementing CI/CD, we began hourly deployments to the QA1 environment. We ran sanity tests in the pipelines and began building promotion controls. We then designated the QA2 environment for manual testing, and certification of the user stories for the Scrum teams began.
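
One common way to wire such pipeline runs is to split the Protractor specs into suites, so the hourly QA1 deployment triggers only a fast sanity suite while the QA2 promotion triggers the full regression. The sketch below is illustrative; the suite names, spec paths, and base URL are assumptions rather than our actual configuration.

  import { Config } from 'protractor';

  export const config: Config = {
    framework: 'jasmine',
    directConnect: true,
    baseUrl: 'https://qa1.example.com', // placeholder environment URL
    capabilities: { browserName: 'chrome' },
    suites: {
      // Fast checks run in the pipeline after every hourly QA1 deployment.
      sanity: ['./specs/sanity/**/*.spec.js'],
      // Full suite run in the pipeline on QA2 after each code promotion.
      regression: ['./specs/**/*.spec.js'],
    },
  };

The pipeline can then invoke, for example, "protractor conf.js --suite sanity" for the hourly run and "--suite regression" for the promotion cycle.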

Release Part 2 (Second 10 Iterations)

We conducted workshops with the customer to estimate new domains and kicked off grooming for the domains newly added to Release 1, namely [Pharmacy], [Communication], and [Document Template].

Release Part 3 (Last 10 Iterations)

After the domains were stabilized, we conducted a regular bug bash and completed the final features for a few older domains. A few domains went into maintenance mode, while others had more features to deliver.

QA Challenges

We encountered many challenges throughout this project’s journey and would like to share a few, along with the steps taken to overcome them.

A. Increasing Functionality & Features - Automation 

As functionality and features grew, so did the number of test cases in the system, and regression testing required significant effort across iterations.

Solution: We took several initiatives to gear up API & GUI automation:

  1. Framework enhancements in libraries and functions 
  2. Redesigning several aspects 
  3. Code sanitization and standardization 
  4. Prioritizing automation test cases
  5. Smart automation by clustering the functional flows

B. Continuous Implementation & Deployments

The number of Scrum teams involved in the implementation and deployment process introduced several constraints.

Solution: Several steps were taken to improve the customer experience: 

  1. Automated build deployments.
  2. Hourly deployment from the master branch to QA1. 
  3. Sanity test execution in the pipeline on the QA1 environment. 
  4. Code promotion to the QA2 environment every 4 hours. 
  5. Regression test execution in the pipeline on the QA2 environment.

Recommended reading: Experience Sequencing: Why We Analyze CX like DNA

C. Testing Layers

Various QA testing stages in multiple environments – including Dev, QA1, QA2, UAT, Staging, Train1, and Train2 – added to this project’s complexity.

Solution: A detailed work-item lifecycle with distinct states tracked each defect from New through Closed.

D. Reports & Statistics

We needed to generate reports, statistics, and representations of work items, since ADO is not a dedicated defect management tool and people were less familiar with it.

Solution: We worked in multiple directions, breaking the problem down and solving it one piece at a time.

  1. Extensive usage of tags: 
    1. Identifying the environment when logging a defect. 
    2. Retesting a defect in different environments. 
    3. Categorizing user stories, enablers, change requests, and defects for release notes.
    4. Categorizing blocker defects. 
  2. Extensive usage of queries: 
    1. Tracking defects raised by various teams for different features. 
    2. Tracking defects fixed and ready for QA. 
    3. Assigning defects for testing in multiple environments. 
    4. Scrum of Scrums defect dashboards. 
    5. Preparing release notes. 
    6. Submitting data for metrics.
E. Finding Defects

It was crucial to locate remaining defects in order to ship a quality product.

Solution: We designated specialized defect hunters to identify defects, and saw significant results across different domains with this approach.

F. Defect Hunters 

Quality requires discipline, the right environment, and a culture of shipping quality products.

Solution: We identified and groomed specialized defect hunters, encouraging and supporting them in aggressively raising defects. In a few domains this was carried out as a regular practice and achieved fantastic results, even in the consumer domains.

G. Flexibility

The team often worked around 15 hours daily to meet the client’s deliverables. 

Solution: Many managerial and individual initiatives were taken to achieve the milestones.

  1. Teams showcased commitment.
  2. The teams conducted numerous brainstorming sessions to be able to diagnose and solve problems.
  3. Extensive usage of chat tools. 
  4. Limited emails.
  5. Thorough communication. 
  6. A proactive approach and agility.

H. Conflict Management - Dev vs. QA conflicts

It’s often said that “developers and testers are like oil and water,” and indeed, there was friction when the teams collaborated. 

Solution: With patience, mentoring, and guidance from leadership, the teams were able to work together cohesively. For major problems, we implemented a two-way QA engagement model in which each QA team member worked closely with the Scrum teams.

Lessons Learned from Challenges and Bottlenecks

A. Requirement Dependency Management

Given the project’s magnitude and the multiple scrum teams involved, there were still areas where improvements could be made in the future. 

  1. There was too little coordination among domain product owners (POs) on required dependencies, which caused problems for consumer domains as producer domains introduced delays and defects at each level of the project life cycle.

Solution: Having both onshore and offshore domain POs enforces better communication practices.

  2. Some defects were not introduced by developers in their own code, but arose from integration with various other functionalities and domains.

Solution: In the absence of formal product requirement documentation, POs and developers deviated from the requirements or missed defects at the integration points. Teams can reduce this risk by adding additional reviews of user story acceptance criteria (ACs).

  3. Due to frequent requirement changes and gaps in communication, we encountered delays and defects. The project’s functionality and features were cohesive, with a high degree of interdependence.

Solution: Because the features are tightly coupled from the end user’s perspective, the customer could not isolate functionalities. Daily defect triage was conducted with the POs to reduce the gaps and finalize requirements; however, we were still unable to fully control the delays.

B. Locking Master

By locking the master branch during end-of-sprint regression, we lost time for other work items and next-sprint deliverables.

Solution: For a few sprints, master was not locked, and code promotion was controlled through QA approval of each work item. This solved the problem somewhat, but only temporarily. Greater developer discipline improved it further and resulted in a regular cadence.

C. Sanity Failures at QA1 

When one domain’s sanity tests failed at QA1, other domains had to wait until those failures were resolved.

Solution: We assigned other productive tasks to the team during this time.

D. Unplanned Medical Leaves

Unplanned medical leave occurred due to COVID and other medical emergencies.

Solution: With COVID restrictions, more teams could work from home, which helped to balance any progress lost due to unplanned medical leave. 

Recommended reading: 3 Tips for Leading Projects Remotely with Flexible Structure

E. Adhoc Work 

A high volume of ad hoc work and activities was assigned that had not been planned but still had to be delivered.

Solution: At a later stage in the project, this work and the related tech debt were handled alongside regular development. As a result, ad hoc work was reduced and more planned work could be allocated.

F. Multiple Environments

Having multiple test environments presented challenges for QA in producing a high-quality product.

Solution: We defined the scope of testing for each environment: in the development environment, only positive scenarios were checked; on QA2, in-depth certification of the build was performed; and on UAT, only defect verification was ensured. This approach eliminated a significant amount of work, although it came late in the project.

Project Highlights 

Some of the highlights from the project include: 

  1. Having Automation QA focus on scripting and Manual QA focus on defect hunting. 
  2. Not pushing the dev team to participate in functional testing.
  3. Cross-domain cohesiveness in the QA track to understand the overall product requirements for shipping.

We met the display requirements, and the developers’ input helped improve the overall application. QA also provided various suggestions and observations, which helped enrich the user experience. With guidance from the project’s architects, we achieved stability throughout this complex engagement.

Every problem should be taken as a challenge to solve. For example, in Agile an Epic is broken down into simple, achievable User Stories, and each acceptance criterion is then worked through to achieve the goal.

As you can see, the team was effective in our mission and learned valuable skills along the way. If you’re presented with a complex problem, as we were, it helps to plan out the processes step-by-step. The more the problem is broken down, the more realistic its potential solutions become. 

Author

Jag Parvesh Chauhan, Manager
