Archives

Thinking of becoming a certified Google Cloud Professional Data Engineer?

After becoming certified as a Google Cloud Professional Architect, I wanted to continue the momentum and earn the Google Cloud Professional Data Engineer certification. It took a month and a half to prepare for the certification while working my full-time job. For those familiar with Google’s Cloud Architect exam, the Data Engineer exam questions are slightly more complex, but the scope is much smaller. 

The Data Engineer certification covers many subjects including Google Cloud Platform data storage, analytics, machine learning, and data processing products. In this article, you’ll learn more about each term and find helpful tips and resources to help you prepare to get this certification yourself.

Cloud Storage and Cloud Datastore

As you may know, cloud storage refers to storing data in the cloud rather than locally on a computer. This allows users to access their files from any device connected to the internet. In Google Cloud Storage, users store objects in containers called buckets.
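As a rough illustration of the bucket-and-object model, here is a minimal sketch using the google-cloud-storage Python client; the bucket name and object path are hypothetical.

```python
from google.cloud import storage

client = storage.Client()                       # uses application default credentials
bucket = client.bucket("my-example-bucket")     # hypothetical bucket name
blob = bucket.blob("reports/2023/summary.csv")  # object path inside the bucket
blob.upload_from_string("id,total\n1,42\n")     # store a small CSV as an object
```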

Cloud Datastore is a database service provided by Google for storing large amounts of structured data for web and mobile applications.

Surprisingly, these products aren’t covered as much in the exam, possibly because they are covered more extensively in the Cloud Architect exam. Just know the basic concepts of each product and when it’s appropriate to use each product.

Cloud SQL

Google’s Cloud SQL is a fully managed database service for MySQL, PostgreSQL, and SQL Server. The service runs on Google’s cloud infrastructure, so users don't need to worry about managing servers, storage, backups, or other technology.

There were a few questions on this product in the exam. If you have practical experience using the product, you should be able to answer any questions.

As with questions about the other data storage products, be sure to know which scenarios are appropriate for Cloud SQL and when it would be more appropriate to use Datastore, BigQuery, Bigtable, or another product.
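For hands-on practice, a minimal sketch of connecting to a Cloud SQL for MySQL instance with the Cloud SQL Python Connector might look like the following; the instance connection name, credentials, database, and table are all hypothetical, and the cloud-sql-python-connector and pymysql packages are assumed to be installed.

```python
from google.cloud.sql.connector import Connector

connector = Connector()
conn = connector.connect(
    "my-project:us-central1:orders-db",  # hypothetical instance connection name
    "pymysql",
    user="app_user",
    password="change-me",
    db="orders",
)
with conn.cursor() as cursor:
    cursor.execute("SELECT COUNT(*) FROM orders")  # hypothetical table
    print(cursor.fetchone())
conn.close()
```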

Recommended Reading: Cloud-Driven Innovations: What Comes Next?

Bigtable

Bigtable is Google’s distributed database system for managing large amounts of data across multiple machines. It’s a scalable NoSQL database service.

The idea behind Bigtable is to present your data as one large, sorted table, even though it is automatically distributed across many machines behind the scenes. This allows you to scale throughput and storage simply by adding nodes.

This product is covered quite extensively in the exam, so you should know its basic concepts.
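As a rough sketch of how Bigtable's row-oriented model looks in practice, here is a single write using the google-cloud-bigtable Python client; the project, instance, table, and column family names are hypothetical, and the row key simply illustrates the kind of key design the exam expects you to reason about.

```python
from google.cloud import bigtable

# Hypothetical project, instance, and table; assumes a "stats" column family exists.
client = bigtable.Client(project="my-project", admin=True)
table = client.instance("my-instance").table("sensor-data")

row = table.direct_row(b"device#1234#20230101")  # row key design drives performance
row.set_cell("stats", "temperature", b"21.5")
row.commit()
```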

BigQuery

Google’s BigQuery is a cloud data warehouse service for processing large amounts of structured and semi-structured data. The service provides fast SQL query access to petabytes of data held in BigQuery’s managed storage or in external sources such as Google Cloud Storage.

The exam covers BigQuery thoroughly, and knowing it well will let you answer many of the questions.
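To get a feel for how BigQuery is queried in practice, here is a minimal sketch with the google-cloud-bigquery Python client, run against one of Google's public datasets; the aggregation itself is just an example.

```python
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_current`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
# Run the query and iterate over the result rows.
for row in client.query(query).result():
    print(row.name, row.total)
```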

Pub/Sub

Pub/Sub is a simple messaging service for modern microservices and a common ingestion layer for streaming analytics.

The exam contains many questions about this product, but they are all reasonably high-level. So, it’s essential to know the basic concepts (topics, subscriptions, push and pull delivery flows, etc.).

Most importantly, you should know when to introduce Pub/Sub as a messaging layer in architecture for a given set of requirements.
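As a minimal sketch of the publish side of the topic/subscription model, the following uses the google-cloud-pubsub Python client; the project and topic names, and the message attribute, are hypothetical.

```python
from google.cloud import pubsub_v1

project_id, topic_id = "my-project", "order-events"  # hypothetical names

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)

# The payload must be bytes; attributes are optional string key/value pairs.
future = publisher.publish(topic_path, b"order received", source="checkout")
print("Published message ID:", future.result())
```

A subscriber would then receive this message through a pull or push subscription attached to the topic.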

Apache Hadoop

Apache Hadoop is a software framework for storing and processing large amounts of data across clusters of commodity servers. While it’s technically not part of the Google Cloud Platform, there are questions about this technology in the exam since it’s the underlying technology for Dataproc.

Expect some questions on what HDFS, Hive, Pig, Oozie, or Sqoop are, but basic knowledge of what each technology is and when to use it should be sufficient.

Cloud Dataflow

Cloud Dataflow is a managed platform for building applications that process large amounts of data in both batch and streaming modes. The platform provides a set of APIs and SDKs that enable developers to build pipelines using Apache Beam, the programming model Google open-sourced from its Dataflow SDKs.

There are numerous questions about this product. Since it’s a crucial focus for Google regarding data processing on the Google Cloud Platform, it’s not surprising that many questions focus on this topic.

In addition to knowing the basic capabilities of the product, you will also need to understand its core pipeline concepts.
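To make the Beam programming model concrete, here is a minimal pipeline sketch; it runs locally with the DirectRunner, and the same code could target Cloud Dataflow by supplying DataflowRunner pipeline options. The sample data and transforms are illustrative only.

```python
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["error user=1", "ok user=2", "error user=3"])
        | "KeepErrors" >> beam.Filter(lambda line: line.startswith("error"))
        | "CountErrors" >> beam.combiners.Count.Globally()
        | "Print" >> beam.Map(print)
    )
```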

Cloud Dataproc

Cloud Dataproc is Google’s managed service for running Apache Hadoop and Spark workloads in the cloud. It provisions clusters of virtual machines on demand, and users can submit Hadoop and Spark jobs from the web console, the gcloud CLI, or the API.

There are only a few questions on this product besides the Hadoop questions mentioned above. Just be sure to understand the differences between Dataproc and Dataflow and when to use one or the other.

Dataflow is typically preferred for new development, whereas Dataproc would be required if you migrate existing on-premises Hadoop or Spark infrastructure to Google Cloud Platform without redevelopment efforts.
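To see the contrast in practice, here is a minimal PySpark word-count sketch of the kind you might run on a Dataproc cluster; the Cloud Storage paths are hypothetical.

```python
from pyspark.sql import SparkSession

# word_count.py: a minimal PySpark job; bucket paths are hypothetical.
spark = SparkSession.builder.appName("word-count").getOrCreate()

lines = spark.read.text("gs://my-bucket/input/*.txt").rdd.map(lambda r: r[0])
counts = (
    lines.flatMap(lambda line: line.split())
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)
)
counts.saveAsTextFile("gs://my-bucket/output/word-counts")
```

Assuming a cluster already exists, a job like this is typically submitted with something like gcloud dataproc jobs submit pyspark word_count.py --cluster=<name> --region=<region>.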

TensorFlow, Machine Learning, Cloud DataLab

TensorFlow is an open-source software library for machine learning. TensorFlow aims to provide tools for researchers, developers, and users interested in applying machine learning techniques such as deep neural networks and word embeddings.

The exam contains a significant number of questions on this product. You should understand all the basic concepts of designing and developing a machine learning solution on TensorFlow, including tasks such as data correlation analysis in Datalab and recognizing overfitting and how to correct it.

Detailed TensorFlow or Cloud machine learning programming knowledge is not required, but a concrete understanding of machine learning design and implementation is essential.
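As a rough sketch of the level of familiarity that helps, here is a minimal Keras model in TensorFlow; the layer sizes, input shape, and dropout rate are illustrative, with dropout included as one common remedy for overfitting.

```python
import tensorflow as tf

# A minimal binary classifier; the architecture is illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.2),   # dropout is one common way to reduce overfitting
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training call, assuming hypothetical train_features and train_labels arrays:
# model.fit(train_features, train_labels, epochs=5, validation_split=0.2)
```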

Recommended Reading: Maximise your investment in Machine Learning

Stackdriver

Stackdriver provides visibility into how your applications behave at scale across all cloud platforms. With Stackdriver, you can monitor application performance, identify bottlenecks, troubleshoot issues, and gain insights into how users interact with your app.

There are many questions about Stackdriver; however, they focus more on “ops” than on data engineering. Be sure to know the sub-products of Stackdriver, such as Debugger, Error Reporting, Alerting, Trace, and Logging, what they do, and when they should be used.
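As a small sketch of the Logging sub-product, here is how an application might write entries with the google-cloud-logging Python client; the log name and payloads are hypothetical.

```python
import google.cloud.logging

client = google.cloud.logging.Client()
logger = client.logger("pipeline-events")          # hypothetical log name
logger.log_text("Nightly load job finished", severity="INFO")

# Structured entries are also supported:
logger.log_struct({"job": "nightly-load", "rows_loaded": 12345}, severity="INFO")
```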

Data Studio

Google’s Data Studio allows marketers to create dashboards and reports using real-time data from Google Analytics, Facebook Ads Manager, Salesforce, and other sources. Data Studio also offers advanced segmentation, forecasting, and predictive analytics features.

There were a few questions on this topic, including caching concepts and setting up metrics, dimensions, and filters in a report.

How Do I Prepare?

Here are several courses and resources I recommend:

The Data Engineering, Big Data, and Machine Learning course on Coursera provides students with a comprehensive introduction to data engineering, big data analytics, machine learning, cloud computing, and other related topics. 

This specialization covers all significant data science concepts, such as databases, SQL, NoSQL, Hadoop, Spark, MapReduce, R, Python, Java, C++, and others. Students learn how to use Google Cloud Platform for scalable solutions using these technologies.

This course is divided into five modules with increasing complexity. Modules are initially shaped with slides and discussion, followed by labs that run through Google’s Codelabs, a free-to-use training platform for hands-on labs in the Google Cloud Platform.

This course has 20 thorough chapters to prepare you for the exam, including deep dives into machine learning, data analytics with BigQuery, and NoSQL data with Cloud Bigtable.

It’s designed for those who want to learn how to build scalable cloud solutions using Google’s BigQuery database service. This course covers all aspects of building a big data solution, from designing the architecture to deploying the application.

It also includes labs on Cloud Run, on data in GCS and Firestore, and on running a PySpark job on Cloud Dataproc using Google Cloud Storage.

  • Cloud Academy: Google Professional Data Engineer Exam Preparation. This exam preparation course covers all topics, from basic data structures to advanced algorithms, and includes real-world projects that help students understand how to apply concepts learned in class. Students who complete this course can pass the Google Professional Data Engineer certification exam.

Exam Guide & Sample Questions

Google has an official exam guide and sample questions for the Professional Data Engineer certification.

More Resources:

Design Thinking & What It Means for Businesses Undergoing Transformation

Design thinking has become a buzzword in recent years. Businesses across industries have adopted design thinking as they transform or face new challenges. But while it has recently gained popularity, the origins of this mindset date back to the 1950s.

Design thinking involves a systematic approach to problem-solving that emphasizes empathy, creativity, and experimentation that can dramatically improve business outcomes. In this article, you’ll learn about the benefits, limitations, and nuances of design thinking, and how you can incorporate it into your organization’s digital transformation.

What Exactly is Design?

To understand the value of design thinking, we need to understand what it’s like to “think like a designer.” To do that, we need to be clear about the term design.

The points below illustrate some vital concepts relevant to design:

Designing is about problem-solving: finding a solution to a problem and defining the form and function of that solution.

Design involves navigating steps to find a solution to a problem (a process) and providing specifications for the form and function of the solution (an outcome).

While successfully navigating steps to a solution, decisions made by a designer (or presented by a designer for consideration) should support a plan for a viable solution. This means that any potential requirements and constraints of the project's production, implementation, and use must be addressed during the design process.

Because of the project’s various requirements, different outcomes are possible. For example, designing a marketing website differs from developing a mobile application, although they both have a digital interface.

One common aspect of the design process is iterations or successive revision cycles. In some cases, iteration cycles may be used to expand the number of options that might be viable solutions. In other instances, iterations drive out the details and ensure that all requirements and constraints have been considered for a chosen route.

The outcomes of the design process, or what we often refer to as “the design,” can be tangible (toasters, furniture, cars, airplanes, or clothes) or intangible (policies, communication protocols, programs, or software). Historically, tangible meant that you couldn’t easily modify characteristics after production, so the design had to be thorough.

There’s a unique opportunity when creating software. On the one hand, design is never complete: there’s a constant cycle of identifying, solving, and specifying fixes for issues in the software. On the other hand, digital technology is blurring the boundary between tangible and intangible outcomes. This change makes it more critical than ever to be very specific about the meaning of design thinking for a project.

Recommended reading: Next-Generation Architecture Best Practices

How Do “Designers” Think?

Do designers think or process information differently? There is unlikely to be a significant difference between the way designers think and the way people in other fields do.

But two things are worth noting. The first is the natural thinking patterns common to people who design for a living, even when not fully immersed in their work. The second is how a designer thinks from the project's initiation to its outcome.

Designers generally think like the rest of us when playing the role of problem solvers. Two key processing patterns, however, are especially relevant to their role as designers: curiosity and context-seeking.

Curiosity

The best designers are often curious. They want to know how things work, why they work that way, what happens when you change the design, and what a customer would say about the design.

Curiosity is beneficial as designers need to know enough about the outcome to ensure they take the proper route. They need to have information about the requirements and constraints of the project.

Curiosity is also a tool, as designers know that they can often generate innovative and differentiated solutions if the right inspiration can be found. And curiosity is essential to the first rule of problem-solving: define the problem.

Context-Seeking

Design is all about context: what’s being considered, what’s relevant, where are unanticipated influences going to come from, and what are the existing mechanisms at play?

Is there evidence suggesting we don’t understand the broader context of people buying and using our products and services?

Defining the context for a solution to a problem is essential. When the context is too narrowly defined, the answer is incomplete. When the context is too broad, complexity gets out of hand.

To follow the first rule of problem-solving (define the problem), you must specify the context, which is why curiosity and context-seeking are standard methods of designers.

When a designer is working—engaging in a process to reach an outcome—they have a natural starting point: understanding the problem’s requirements and goals.

They will also need to know the stakeholders in the project and have a point of contact when they have questions.

How Design Thinking Solves Problems

Now let’s discuss how designers problem-solve.

Divergent Thinking

People often have divergent thinking in mind when they imagine design creativity. It’s explorative and generative, focusing on breadth over depth. They use iteration to go broader through lateral thinking until they have a clear definition of ready.

Convergent Thinking

People think of convergent thinking when they imagine the design craft. It’s reductive refining, with specific iterations focused on finding a solution.

Understanding whether you need a divergent or convergent approach is essential to the project’s success. A design rationale that helps people understand past decisions and implications for choices of current options helps ensure that everyone involved in the process is on the same page.

Recommended resource: An Engineer's Essential Tool in Agile is Design Thinking [Webinar]

In choosing the steps of a design process, a designer’s end goal is to provide a solution to the problem based on a clear design rationale.

This design rationale should help people understand why the answer is what it is and what options might exist to change or evolve the solution. In addition, the end goal should include adequate specifications to deliver the solution.

Limitations of Design Thinking

One appealing aspect of design thinking is the priority it places on designing for the human experience. This is why design thinking can be helpful.

But that’s just one aspect of what design thinking is all about. Businesses must consider the entire design process before looking to design thinking to solve their problems. 

There are two critical capabilities that an experienced designer relies upon. The first is experience with applying divergent and convergent thinking in each problem space.  Knowing how to use information is key to developing a design rationale.

The second is that they know—through experience—the requirements, constraints, and specifications needed for a given outcome.

This means they don’t have to spend time collecting information or working out the implications of each solution option from scratch.

The all-inclusive nature of the definitions of design thinking should raise important questions like: who ensures that the design process is based on a proper understanding of the problem? And who ensures that the outcome has appropriate specificity? 

These questions often arise when people use design thinking as a methodology or process, but there may be better methods. Examples of this include:

  • Hoping to improve plans for a product or service by applying divergent thinking during convergent iterations (where they use design thinking during the UX/UI design stage).
  • MVP/CICD-based product strategies, with limited investment in design backed by honest user feedback. Efficiency is gained through the acquisition of design systems and the ability to iterate the products. In some ways, this is turning the traditional design process inside out, where you’ll repeat the design of something based on how people use it.
  • Complex ecosystems require a more structured approach to managing information, including multiple business needs, customer needs, stages of value creation delivery, numerous touch points, and multi-stage interactions.

Note: Service design is a framework and is design thinking-friendly, but design thinking is not the same as service design.

Final Takeaways

Design thinking is now widely accepted as a powerful tool for innovation. It's a process that involves identifying the problem, exploring possible solutions, and testing those solutions through multiple iterations.

This method can help companies build better products, services, and experiences faster than traditional methods.

Companies need to stay nimble and agile in today's rapidly changing environment to compete effectively. They also need to continue to innovate to remain relevant and sustainable. Utilizing methods like design thinking can be a helpful approach to accomplish this.

More Resources:


Metaverse: It’s a term that has earned a great deal of attention in the past year. In 2021, the hype cycle surrounding the Metaverse concept accelerated as two corporate events upended the space. First, Facebook emerged as a key early adopter, pivoting entirely to the Metaverse and rebranding their company as Meta. Second, the successful IPO of Roblox – creators of a highly engaging virtual gaming world – catapulted its valuation from a privately held firm of $4B to a publicly-traded firm valued today at $40B. 

Because we believe Metaverse has potential we can’t even predict yet, GlobalLogic is investing significant energy in our understanding of the impact of the Metaverse across several industries, including retail, finance, healthcare, media, and automotive.   

One particular area of interest is fintech, where the immersive Metaverse environment is poised to erupt with new opportunities.

Given fintech’s incorporation of blockchain and non-fungible token (NFT) technologies, we believe the Metaverse environment will prove a hotbed of innovation. The level of VR penetration in our daily lives is still an open question and perhaps generationally dependent. Even so, the opportunity for fintech in the Metaverse is more than mere speculation. The virtual goods market is already estimated at close to $100 billion annually, and about $50 billion in real-world and cyber assets are traded using cryptocurrencies every day. 

Metaverse: Hype or Future Business Potential?

Combining immersive online experiences with ownership and exchange models that persist worldwide will yield unprecedented opportunities for fintech in this immersive new virtual world. Virtual real estate transactions, convertible investments, markets for exchanging goods and services – the possibilities are endless.

To validate our hypothesis and further strengthen our insights on future impacts in fintech, Dr. Jim Walsh, CTO at GlobalLogic, hosted a dinner in New York at Columbia University, in partnership with the Global CIO Institute, to discuss Metaverse benefits for businesses. The audience of CTOs, CIOs, Chief Product Officers, and even a head of Financial Crimes Compliance from across the BFS and fintech industries joined in the boardroom-style discussion and lent a fresh perspective. 

Our hypothesis was validated, but one question persisted: What is the right strategy for our business?

Unless you’re a hard-core gamer, there are precious few reasons for a mainstream, non-technical person to visit the Metaverse. However, like emerging technologies before it, this space and the potential it offers has only just begun to reveal itself.

Is the Metaverse the Next-Gen Internet?


To contextualize the Metaverse and its potential, we can look back at the evolution of the World Wide Web. When it was first introduced to the public in the early 1990s, there wasn’t much to see, maybe a few hundred websites. At the time, detractors said the Web was a gimmick; that all it did was provide a better user experience (UX). Much of the information and content it contained was already available online via bulletin boards, FTP sites, FAX-back systems, and other platforms. Thankfully, time has proven that UX plays a more critical role than those early detractors imagined.

Recommended reading: 5 UX Connected Products Management Tips [Blog]

Let’s consider Apple’s belief in UX. Apple’s ‘WebObjects’ was one of the first frameworks to support content authored through the web (what’s now called Web 2.0). At the time, content authored through the web was seen as only a minor improvement over static websites. However, companies such as Twitter, Facebook, and many others have proven that this ‘minor’ improvement in the experience has a significant impact on e-commerce, social interactions, and other areas.

Apple further cemented its legend as a UX leader when it first shipped the iPhone in 2007, then the iPad in 2010. While the potential extensibility of each platform through apps was exciting, each product was initially seen mostly as improving the experience of already-existing smart devices. It took time for the world to recognize the game-changing nature of this combination of ease of use and extensibility. 

We are seeing something quite similar today in the Metaverse adoption cycle. 

The immersive experience, while remarkable, is immature. Having to wear large, goggle-like devices on your head means few of us (even gamers) visit the Metaverse from outside of our own homes. Even at home, we probably don’t wear our VR goggles around the house, using them only in the rooms where we play games or watch entertainment.

VR headsets are not “turn back” devices like smartphones. We wouldn’t turn the car around to go home and retrieve the headset if we forgot it. But you bet you would if you left home without that smartphone!

Because an improved customer experience drives innovation, and innovation creates business, the Metaverse has the potential to be a game-changer – just like the Web and smartphones before it. The potential and opportunities will differ, of course. But the impact they’ll have on consumers and the companies who do business with them will be immense.

Yes, Metaverse is an Enhanced UX… And Much, Much More

In the early phases of the Web, Web 2.0, and smart devices, when critics said, “It’s just a better user experience,” they missed the point. When you make a system easier to use, more people will use it. As it attracts new users, people will also find creative things to do with this new technology – things you hadn’t considered. This innovation and evolution drive new revenue streams and even greater adoption.

The key power of VR is that it lets you create new worlds and interact in rich ways with others in a shared, virtual context. 


The Metaverse builds on the notion of this immersive virtual world, adding concepts of ownership and the persistence of objects. 

These ideals are coming to fruition in the contexts of non-fungible tokens (NFTs) and blockchain technology. Their evolution – alongside that of VR itself – will continue to transform the space in the future.  

Recommended reading: Tokenomics with Blockchain [Whitepaper]

The bottom line is that the Metaverse’s virtual reality has important features of physical reality – particularly the ability to create and exchange items of value between users. And as those items persist over time, a foundation for commerce will evolve, creating opportunities for new financial technologies and the fintech space as a whole. 

With the ability to create, sell, and develop virtual real estate, for example, we open the door to concepts such as mortgages and leases, rentals, interest payments, and more, all based on virtual assets. Investment and speculation in persistent virtual items of value, including artwork and other intellectual property, are already happening. Through individual sites and apps, we already have access to e-commerce for designing, clothing, and equipping our avatars, and decorating our virtual houses or office cubicles, for example.

The expansion and maturation of the Metaverse will see these virtual possessions become portable and reusable across future sites and apps. The exchange of currencies across virtual and physical reality already happens; in fact, the purchase of virtual goods approaches $100B USD annually in physical money.

Key FinTech Leadership Talking Points on the Metaverse Opportunity

Currently, there are no obvious killer applications for the Metaverse, and no one knows when or how they will evolve. One thing everyone agrees on is this: they do not want to be caught flat-footed. 

During the boardroom dinner discussion, attending fintech company leaders shared their thoughts on the emergence of the Metaverse and key areas they must consider as they formulate their own game plans. Here are their top priorities:

The time to lay the foundation for Metaverse culture is now.

Most early Metaverse users will probably be Generations Z and Alpha (in their 20s or younger). To attract these users, companies must create engaging experiences that grab and hold their attention. Persuading this group to ‘invest’ in virtual assets, and to insure and trade them, requires the development of a foundational Metaverse culture. Trying to impose physical-world constructs on the Metaverse may or may not work. For example, a Metaverse ATM might be a good idea by offering a familiar metaphor for deposits, withdrawals, and currency exchanges. However, a virtual insurance broker might not go over as well (or vice versa).

Plan for scarce talent and resources. 

Skilled resources will be in short supply and hot demand, so companies must be thoughtful about acquiring and developing the required skill sets or identify partners who can help. Predictably, as the Metaverse takes off, designers and engineers skilled in the Metaverse technologies and culture will be in high demand and short supply. This happened for Web 1.0, Web 2.0, and the initial phases of the smartphone revolution, as well.

Examine your new products and services with a Metaverse lens.

Innovators must think outside the box about the new products and services the Metaverse will demand and enable. While it was evident from the outset of Web 2.0 that even non-technical end users could now remotely author web content, few of us thought there would be 600 million blogs worldwide and 31 million active bloggers in the US alone. Picture-, video-, and voice-oriented remotely authored, interactive sites such as Instagram and TikTok have over a billion users each!

Prepare to work through being an outlier in your organization.

Anticipate that your new sources of revenue using emerging technologies will start out small and may come under pressure in your company. Be ready. ‘Simple’ things like accepting micropayments or handling virtual transactions may cause a backlash. Because no one knows if and when this technology will catch on, you must either be comfortable taking the risk ahead of the curve, have a plan to catch up with your own technology, or wait until the Metaverse is established and acquire a successful player – probably at a very high valuation.

How will virtual experiences coexist alongside – and even complement – traditional ones?

If you are a current bank or insurance provider, be ready to support a dual business model, and pay attention to the emerging competition. Don’t underestimate the impact on banks of launching virtual “banks of the future” in the Metaverse. Insurance companies could open virtual brokerages to help people choose the right policies, for example. Ensure your strategy incorporates how you will support digital currencies. Speculation in virtual 'equities,' art, and 'items of value,' along with currency arbitrage, will certainly see cycles of boom and bust – just like we see in the real world. Have a plan for cryptocurrencies (and even some of the more esoteric currencies of the Metaverse, such as Robux) that could be transacted through the banks or Metaverse ATMs. Be ready to embrace this experience while preserving the traditional models, as well.

Stay agile for first-mover advantages as opportunities arise.

Pay attention to emerging business models as they are unveiled. Paper-driven processes such as KYC (know your customer) or FNOL (first notice of loss) could be transformed through the Metaverse. With 2D technologies, these processes at best capture some elements of the real world; with 3D technologies, they could leapfrog a financial institution’s ability to interact – especially with its younger customers.

Dip a Toe in the Fintech Metaverse Today to Step Confidently Forward Tomorrow

There will be new ways for financial institutions to make money in the Metaverse. This could involve the elements we have outlined already, such as taking out credit to finance the purchase of art, mortgages for virtual real estate, or insurance on virtual goods. Or, it could be something we haven’t even imagined yet.  

If history repeats itself, the winners in the Metaverse will be those who emerge with user experiences and products that seemed surreal just a few short years ago.

This is the time to familiarize yourself with the space, the current players, and the technologies enabling exceptional virtual experiences today. How the Metaverse will evolve and precisely what that means for fintech companies remains to be seen. What we know for certain is this: consumers are moving into a new space, and those companies that transition alongside them will be best positioned to meet their needs in the future. 

Will yours be one of them? 

Learn More:




Many retailers that started their “digital first” initiatives before the pandemic had a clear advantage going into lockdowns over those who did not. There is much we can learn from the key characteristics of programs and initiatives that kept pace with changing customer needs and prioritized customer experience throughout the pandemic. 

Retailers with a culture of innovation and agile thinking implemented improvements faster, communicated better, and ultimately outperformed competitors who were slower to respond to changing market conditions. In this post, we look at six aspects of the retail customer experience that emerged during COVID-19 lockdowns and may just stick around, to varying degrees.

Why is Retail Customer Experience Important During Downturns?

Before we jump into each of the traits represented by the leading firms, why is customer experience such a factor during market downturns? We don’t have to look further back than the 2008 financial crisis. During that downturn, customer experience leaders saw less negative impact, rebounded faster, and achieved three times the shareholder returns in the long run compared with market averages.

McKinsey identified three characteristics of the most resilient companies post-2008 financial crisis:

  • “Resilients” created a safety buffer by reducing debt, divesting underperforming segments, and building reserves that enabled them to shift to M&A at the first sign of economic recovery.
  • They cut costs ahead of the curve, moving faster and cutting deeper at the early signs of an impending recession.
  • They focused on growth even where it meant incurring costs, helping them to overdeliver on revenue.

Recommended reading – The Store of the Future: 5 Key Areas of Opportunity for Retailers [E-book]

In addition to these “resilients,” within 24 months of that previous crisis, several new experience platforms rose up with innovative new models for growth. Who would have thought that this was when companies like WhatsApp, Credit Karma, Venmo, Groupon, Instagram, Uber, Pinterest, Slack, Google Ventures, Cloudera, Airbnb, Warby Parker, and the Apple Store would get their start?

6 Aspects of Customer Service That Drive Experience

In the future, I believe we will look back at this pandemic and recall several primary characteristics of customer experience that shaped many new startups. But what’s even more critical is how existing companies reinvented themselves, created new relationships with their customers, and implemented these changes faster than ever before. So, what are these characteristics?

1. Keeping the Distance


While we saw many examples of social distancing throughout the pandemic, one thing is for sure — there wasn’t any consistency. What one retailer viewed as a priority (i.e., wearing masks and gloves along with social distancing), others paid no attention to whatsoever. Walking into some retail stores left you feeling there wasn’t a pandemic taking place at all.

Retailers focused on social distancing, like Walmart, enacted volume limitations within their stores. For example, during the initial stages of the pandemic, they managed no more than five customers for every 1,000 square feet at any given time, accounting for roughly 20% of the store’s capacity. Clear directions and signage communicated to customers where to stand in queue lines and walk down the aisles.

I saw the most creative approach at Disney theme parks, where they combined their brand with social distancing. Stormtroopers barked out polite reminders for guests to keep their distance, and many guests enjoyed the reminders as part of the overall experience!

2. Staying Resilient & Efficient


While online sales grew 49% year-over-year compared to the March baseline, grocery stores alone increased 110% between March and April. How did retailers and grocers support this massive and unexpected growth? Was it because they already had “doubled down” on digital before the pandemic?

Perhaps, but many others who had forecasted innovative projects to be implemented over the next 2 to 3 years quickly figured out how to roll out improvements within weeks. Innovation and transformation using agile approaches (i.e., small teams, quick sprints, employee empowerment with test and learn) enabled them to make these changes quickly. In addition, decision-making processes became more efficient as silos between business divisions and IT were knocked down overnight by necessity.

Over time, history will show which changes were effective. For example, Lowe’s rolled out video engagement for their Independent Service PROviders during the pandemic so that an in-store experience could be provided to contractors in the field. We have seen retailers like DSW Shoes and Hy-Vee Grocery, which are in completely different segments, partner together to leverage one another’s supply chain and location assets.

From a wholesaler perspective, if you live in a major city like New York, you have probably experienced food wholesalers selling directly to the consumer since COVID.

These are all examples where enterprises have shifted on-the-fly and created an entirely new market for themselves via direct-to-consumer experiences.

3. Do We Need to Touch?


Never before has there been such an impact on the customer experience in such a short time. The contactless experience has continued to improve since the pandemic started. For example, it wasn’t until restaurants reopened that we began to see QR codes used to access menus, and the ordering process is now being included in QR codes, too. While mobile contactless payments have been around for a while, overnight we saw their adoption across a higher percentage of retailers.

Even gas stations got into the act with the availability of gloves at the gas pump. Some retailers won’t let customers handle the merchandise; instead, they have store clerks show the products to the customers. Touch can be an essential part of the customer experience and factor into whether customers return to the store or not. It will be interesting to see which solutions last long-term and which will slowly disappear.

4. Stakeholders vs Shareholders

On one side, the shareholder reigns supreme. This position gives rise to “short-termism” — i.e., operating a corporation to maximize today’s profits. Conversely, a retailer’s responsibility is to a broader set of stakeholders: not just shareholders, but also the community around that retailer, its employees, and even those affected by the corporation’s impact on the environment.

By ignoring stakeholders, an organization primarily focused on profits will likely suffer negative results. According to S&P Global Ratings, for example, some retailers committed to avoiding layoffs and guaranteed 100% of salaries from April to June, early in the pandemic. Employee safety concerns also prompted some businesses to ask their staff to work from home ahead of compulsory lockdowns, or even to temporarily close their operations for safety reasons. Others, perceived to have exposed their employees to safety risks by keeping their operations open, faced internal and public controversy.

Recommended Reading: Reducing Threat in Retail [Ebook] 

Besides helping to address the pandemic, stakeholder-focused corporations are creating new ties with stakeholders in the community and society. They might therefore avoid severe reputational and financial repercussions related to the experience they have with their customers.

5. Do We Want the Government’s Intervention?


Will the government’s involvement continue to rise or decline post-pandemic? If you’re a small-to-medium-sized business, chances are you received benefits from one of the government stimulus programs. While that may not have directly impacted the experience businesses provided to customers, the restrictions imposed by the government on whether you could open your doors certainly did. 

What will the new relationship between the consumer, retailer, and government represent? Is there a trickle-down effect? How might government intervention in different forms impact the customer experience long term? All of this remains to be seen and is changing daily.

6. The Evolution Underfoot

Upon store re-openings, researchers found that a whopping 75% of U.S. consumers had tried new shopping behaviors due to economic pressures, store closures, and shifting priorities. The customer is open to change. How will the customer experience be a factor? How will retailers reinvent themselves to acquire more market share in difficult economic times? 

Some retailers saw their “new customer” volume increase by 300% in 2020.  Customer experience is a significant factor in how many of those become long-lasting, loyal customers. Retailers that do not evolve may find their customer relationships short-lived.

Will Retailers Survive the Pandemic?

In the end, will retailers survive the pandemic? How will retailers emerge stronger?  What will be the deciding factors, in the end? How will customer experience affect a retailer’s sustainability in a post-pandemic world?

Retailers have rewired themselves faster than ever before, with practical approaches representing a significant and improved focus on customer experience. While some retailers were forced to close their doors or file for bankruptcy, those that survived must evaluate how they’ve implemented and evolved the above six factors, and how they will continue to improve customer experience going forward. 

When the next period of uncertainty strikes, retailers will again be challenged to reinvent themselves with new customer experience models to keep customers engaged across the multi-channel shopping experience.

More resources:

With the help of AI-powered tools, project managers can streamline their workflow and increase efficiency. This paper shows you how.

What’s inside:

  • The various roles AI can play in assisting project managers
  • AI automation for routine tasks
  • AI tools for risk management
  • How to properly utilize AI tools in your project management processes
  • Variables you should consider before AI implementation

Click to read The AI-Powered Project Manager

Being a program owner brings many team-related challenges. If you’ve struggled with unclear goals, running targets, low team collaboration or morale, skewed team velocity, or variation in story pointing in your projects, you’re not alone. Program owners may resort to micromanaging to regain control, but this can breed trust issues and hinder project execution.

Agile Scrum is an excellent way to address these problems, mainly when teams work independently on smaller projects. In large engagements, where teams strive to collaborate on global integration platforms, there is a higher need for accountability and ownership.

Agile Product-Oriented Delivery (POD) is one such working model to help you through this journey. In this article, you’ll learn more about this execution model and see it in action via a step-by-step illustration of how it was applied in a recent customer engagement.

Getting Started

A customer enlisted GlobalLogic to partner with their internal team in building a real-time monitoring portal that tracks the migration of 40+ million consumers to their new billing systems.

We started with the most established model for design & development: Scrum. The model worked well – that is, until the start of the integration phase, which brought together over 30 customer and GlobalLogic scrum team members working on a single release. Cracks in accountability and ownership began to surface, and the lack of One Team spirit was causing adverse impacts on timelines and quality.

Recommended reading: Agile Transformation: Are You Ready?

Identifying Key Issues in Team Structure

What had begun as a relatively small team of 15 grew quickly into a mid-size team of 60. Project deliverables and agile ceremonies – the daily scrum, planning, review & retrospection – involved the entire team.

The team acknowledged that there were problems and, on retrospection, identified the following key issues and challenges:

  • No clear communication within the team, which had multiple leaders to answer to in a model of distributed accountability.
  • No clearly called-out ownership.
  • “One Team” spirit was missing.
  • Lengthy team meetings were resulting in inconclusive outcomes.
  • There was little visibility into the outcome of each team member’s efforts.
  • There was a greater focus on budgets/forecasts than the value delivered.
  • The size of the team meant members and leaders could not truly know all team members and their capabilities.

Planning for Team Transformation

As a result, we identified and worked on 5 capabilities that would help our team's transformation journey.

Fig (1). Five Capabilities to Develop

Recommended reading: 6 Key Advantages of Quarterly Agile Planning

Reframing our Goal in a Product-Oriented Delivery Model

Our team deliberated on the identified problems and where we wanted to be. On consensus, we implemented the Product-Oriented Delivery (POD) working method. Our objective was clear: we needed to improve accountability and collaboration across teams, building in agility and discipline, alongside consistency and scalability.

What is the POD Model?

The POD Model is an increasingly popular software development strategy that builds small cross-functional Groups/PODs that own specific tasks or requirements for a product. Teams are structured into Groups/PODs with 8 (+/- 2) members, each working as an independent unit toward its set goals.

By definition, a POD is a small group of autonomous and self-governed team members with complementary skills working for a common purpose. The POD method focuses on teamwork and getting the job done, allowing each team autonomy. It also builds trust, as the team self-organizes and self-executes projects.

A POD's team members will collectively have the skills to design, develop, test, and operate a product, ensuring self-sufficiency. Using this model shifts responsibility for decision-making and task completion completely to the PODs.

Establish the POD Way of Working

Plan the POD Teams

Fig (2). Typical Composition of a POD Team

The first step is to define the operating model for POD-based delivery. We defined each POD as having 8-10 professionals; above is the composition that we put in place for our team.

POD Project Execution

Fig (3). Addressing the problems

In planning, we looked at the number of epics/features required and split each team to focus on separate, unrelated prioritized features. Each POD team/member had clearly called-out responsibilities.

We then established a cross-cutting Guide Team to support all assigned work. This team comprised the Program Manager, Architects, Product Owners, and DevOps/IT engineers.

The project execution model for a POD focuses on:

  • Features planned for independent execution by POD teams.
  • Team capabilities required for teams to deliver current/future product backlogs.
  • Team distribution, based on geographic spread, that assigns features according to team skills and customer requirements.

Measuring POD Functioning

Based on the size of the project, POD teams will be defined and created to sustain continuous growth and improvement. It is therefore important to define the KPIs and success criteria for these teams.

Identifying the measurement period and acceptance criteria for each metric is essential. Here are some of the more critical metrics to track and have customers agree upon:

Fig (4). Common KPIs/ Metrics to track

Below are the metrics we defined and reported on, either weekly or by release. The measurement ranges defined the success criteria of the project.

Fig (5). Virtual Operating Control Center KPIs

Key Takeaways

The POD team structure and way of working together can improve efficiency, collaboration, communication, and of course, empowerment. Here’s a sample of the results we experienced by transforming from classic scrum to product-oriented delivery teams.

Fig (6). Before-After Comparison of Key Metrics

And what did we learn? Here are some key benefits we identified as results of implementing the POD model:

  • Enriched Customer Experience. Connecting with the customer via POD teams enables partner team members to interact frequently, maintain visibility on priorities, and improvise accordingly.
  • Improved Quality of Deliverables. Focused goals for each POD team accelerate the establishment of higher-quality milestones.
  • Increased Team Effectiveness. Each POD focuses on customer goals and ensures they align with those expectations.
  • Faster Release Deliveries. With improved Team collaboration, flexibility, and ownership, POD teams work more effectively on releases.
  • Triggers Team Innovation. Teams are continuously looking to improve and find ways to achieve faster results.
  • Cross Functional Adoption. POD team members acknowledge the various roles that make up the Guide and POD teams, and better understand the parts each role has to play.
  • Transparent Collaboration. The POD team can define the success of the project quickly with inter/intra-team collaboration.
  • Team Confidence & Morale. A positive spike was observed, sprint on sprint.

Want to learn more?

In essence, indigenization makes something more native – transforming services or ideas to suit local culture, for example. The defense and sociology domains commonly use indigenization in their reports and studies. The ideology also fits in the cloud age within the information technology sector, where specific cloud platform vendor capabilities symbolize a local culture.

Cloud adoption is a common practice for those who have decided to embark on a digitization journey. Those running enterprise-grade software in physical data centers can opt for specific types of transformation or seek recommendations from cloud enablement experts on the appropriate kinds of cloud indigenization.

For example, some enterprise customers may need a lift and shift cloud migration based on legacy architectures. Others may want to leverage cloud-agnostic capabilities and platform rearchitecting and potentially consider retiring the applications built with age-old technology stacks. Finally, some companies (particularly those ahead of the game) may state that their application uses cloud-native technologies and now require guidance to optimize the enriched cloud capabilities.

Often, when companies seek cloud indigenization expertise, the information they provide is brief and unclear. There may be limited time for enterprise team stakeholders to understand all variables in optimizing the cloud. Therefore, the appointed team’s main task becomes explaining the overall costs so the stakeholders can approve them.

This article is for cloud technical leaders, practitioners, and architects, providing the tools they need to explain the different variables involved in optimizing the cloud, such as time constraints and access to the ecosystem. The scenario discussed here assumes the application is already ‘cloud-ready’ and serves customers from an on-premises data center, and that it will migrate to a specific cloud environment.

How far is cloud-ready from cloud-native?

Terms related to the cloud tend to lack standard definitions and are often used inconsistently between technical stakeholders and business professionals. Let’s first define the cloud terms so that we are aligned on the same page for the intent of this article.

“Cloud-ready” is used by business and technology experts. On the business front, its meaning describes a big-picture plan of an enterprise aspiring to modernize its application portfolio in all aspects. In addition, it means the readiness to adopt cloud principles, embrace the agile culture, and develop people skills to transform the applications based on cloud environments in all their true capabilities.

On the technology side, cloud-ready describes the state of on-premises hosted legacy applications rearchitected just enough to run on cloud-based infrastructure in the future. They can either be transferred into the cloud as-is or modified into microservices in a containerized architecture that continues to run on on-premises infrastructure.

To align them with cloud vendor services and capabilities, creators can then revamp the applications using cloud-native principles, technologies, and practices to lead their deployment and operations. Applications tagged only as cloud-ready, however, cannot take advantage of the full benefits of being in the cloud, such as elastic scaling, running parallel instances, and increased resilience, as they still meet user demands with traditional capacity planning exercises. The creators can add these benefits later.

From a cloud maturity perspective, organizations can start modernizing their legacy applications by getting cloud-ready through containerization. The next step is to graduate cloud-ready applications to become cloud compatible and then optimize architecturally to be hosted on the cloud platforms to leverage the native features gradually.

“Cloud-native” is a term used to describe applications designed for specific cloud-based platforms. The official definition from CNCF states that creators can develop cloud-native applications as decoupled microservices running in containers managed by self-hosted platforms (including on-premises) or cloud platforms. Then the creators can extend it to a serverless architecture wherein the microservices run as serverless functions mostly behind an API gateway service.

Therefore, selecting the correct type of cloud service among various products for a specific workload or workflow is a detailed process. This includes leveraging cloud service providers’ proprietary technologies and APIs to access cloud security, cloud storage, backup solutions, disaster recovery, cloud testing, and more. Cloud service vendors can also provide inherent cloud optimization to all their clients. Being cloud-native can overlap with meanings of cloud-enabled or cloud-first applications simultaneously.

Recommended reading: Cloud-Driven Innovations - What Comes Next?

“Cloud-Compatible” describes the state of the application in-between cloud-ready and cloud native. In addition to being cloud-ready, these applications are typically stateless and have externalized configurations, secret management, and good observability. The apps can also scale horizontally.

Application modernization maturity path

Two core aspects for taking the cloud-ready applications to a cloud-native state include:

  1. Deploy and run the containerized microservices onto the cloud infrastructure.
  2. Analyze whether the cloud-ready application leverages DevOps and Agile principles.

It’s evident that the more clarity we have, the better the results. However, as discussed in the introduction, there may not always be an opportunity to get answers. If the opportunity does arise, we should ask detailed questions like the ones below.

Containerized Applications

  • Which container orchestration engine are you using?
  • Is it Kubernetes or compliant with Kubernetes?
  • Is it a variant of Kubernetes?
  • Is it a heterogeneous container orchestration engine? If so, which one? e.g., Apache Mesos, Docker Swarm.
  • If it is a Kubernetes application, is it self-managed using tools like kops, kubeadm, or similar?
  • What is the source cluster’s actual usage of computing, storage, and network resources?
  • What is the technology stack used by the platform where the source cluster is located?
  • Is the application stateful or stateless?
  • What are the dependencies among applications?
  • Is service mesh used? If so, which is the existing service mesh platform?

Configurations

  • Is the pattern of externalized configuration implemented for all sources? (A minimal sketch of the pattern follows this list.)
  • Are the secrets protected and managed well?
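For reference, here is a minimal sketch of the externalized-configuration pattern in Python; the variable names are hypothetical, and the point is that values and secrets are injected by the environment (for example, from a secret manager) rather than hard-coded.

```python
import os

# Nothing is hard-coded: each environment injects its own values at runtime.
DATABASE_URL = os.environ["DATABASE_URL"]             # required, per environment
FEATURE_FLAGS = os.environ.get("FEATURE_FLAGS", "")   # optional, with a safe default
API_TOKEN = os.environ["API_TOKEN"]                   # secret, never committed to source control

print("Connecting to:", DATABASE_URL.split("@")[-1])  # avoid logging credentials
```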

DevSecOps

  • Is there an existing CI/CD subsystem?
  • How are configurations and secrets managed?
  • What are the security regulations and compliance terms?
  • Which deployment strategy is used?
  • What is the design of the container cluster for cybersecurity?
  • What are on-premise operational practices for tools, people, and processes?

Observability

  • Which monitoring, alerting and auditing subsystem is used?
  • What are the log collection and analyzer subsystems?

Mobilization

  • What is the accepted downtime window for migration?
  • How will running the migration impact normal operations?
  • Is the cloud landing zone already set up? By cloud landing zone, we mean at least a baseline cloud setup to get started with a multi-account architecture, including identity and access management, governance, data security, network design, and logging configurations.

However, if there isn’t an opportunity to get clarification on all questions, assume an existing landscape, recommend a migration approach with fair and transparent assumptions, and provide estimates accordingly. At the least, this demonstrates strong migration experience and creates the opportunity to tailor the effort later.

Taking a cloud-native approach

There are different approaches for evolving cloud-ready applications and making them cloud-native. You can create a suitable strategy based on information from the previous section. Below is an overview of migration complexity and approach for containerized applications, by container orchestration type.

  • Compatible Kubernetes (Simple to Moderate): If a CI/CD process exists, update the delivery/deployment pipeline and release it to the target cluster. Migrate data through cloud-native tools or third-party partner software.
  • Variant, e.g., OpenShift (Moderate to Complex): Use a new CI/CD pipeline for packaging, delivery, and deployment. Migrate data with appropriate tools based on an analysis of the source cluster. Consider the migration of network plugins and Ingress.
  • Heterogeneous, e.g., Mesos or Docker Swarm (Most Complex): Same approach as the variant case: use a new CI/CD pipeline for packaging, delivery, and deployment; migrate data with appropriate tools based on an analysis of the source cluster; and consider the migration of network plugins and Ingress.

Keep the development process as simple as possible when lacking detailed information or clarification. For estimation purposes, assume that:

  • The application cluster on-premises is a compatible Kubernetes cluster and can be hosted on a cloud-managed Kubernetes service like AWS EKS.
  • Complete DevOps with mature operation capabilities already exist.
  • The application landscape consists of both stateless and stateful applications.

With the above assumptions, you can evaluate the migration as simple-to-moderate in complexity. Hence the migration methodology, at a high level, can include the following steps:

  1. Deploy the stateless applications using the CI/CD system, updating the existing pipeline so it deploys correctly into the required cloud environment.
  2. Migrate the data from stateful applications using cloud-native tools (see the sketch after this list). For example, if the target cloud is AWS, use AWS migration tools like DMS for databases or MGN for lifting whole servers. Similarly, migrate data from content servers and other storage to Amazon S3 or EFS using native offerings like AWS DataSync or the AWS Transfer Family, depending on the type of data and the use case. Finally, if needed, use AWS Snowball for bulk offline transfer.
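As an illustration of step 2, the sketch below kicks off a pre-created AWS DMS replication task with boto3. The task ARN is a placeholder, and the surrounding setup (source and target endpoints, replication instance) is assumed to already exist.

    import boto3

    # Assumes AWS credentials are configured and a DMS replication task exists.
    dms = boto3.client("dms", region_name="us-east-1")

    response = dms.start_replication_task(
        ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",  # placeholder
        StartReplicationTaskType="start-replication",
    )
    print(response["ReplicationTask"]["Status"])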

List all other minor assumptions to support this strategy accordingly.

Early Estimation Areas

As mentioned earlier, a clear-cut estimate for the migration strategy depends on the assessment and clarifications. Therefore, an early end-to-end estimate requires a balanced approach of knowns, unknowns, and assumptions. However, most migrations that transform legacy or cloud-native applications into cloud-optimized ones involve the areas below.

Landing Zone

A cloud platform landing zone is an environment for hosting workloads that enables multiple accounts for scale, security governance, networking, and identity. Cloud providers offer various options, archetypes, and accelerators aligned to specific scenarios, which you should leverage when estimating.

 

Cloud Native Resources

The cloud infrastructure’s minimum resources include the managed container cluster, storage services for sessions or caching, content files, databases, integration services, and logging and monitoring services, according to the architecture. It also includes configuring and labeling the network and worker nodes.

The required infrastructure on the target cloud should preferably be pre-provisioned through code, using cloud-native tools like AWS CloudFormation or cloud-agnostic tools like Terraform.
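As a rough illustration of pre-provisioning through code, the sketch below uses the AWS CDK for Python (which synthesizes CloudFormation) to declare a couple of the baseline resources mentioned above. The construct names are illustrative only, and raw CloudFormation or Terraform would serve the same purpose.

    from aws_cdk import App, Stack, aws_ec2 as ec2, aws_s3 as s3
    from constructs import Construct

    class CloudNativeResources(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # Network for the managed container cluster's worker nodes.
            ec2.Vpc(self, "WorkloadVpc", max_azs=2)
            # Object storage for content files referenced by the applications.
            s3.Bucket(self, "ContentBucket", versioned=True)

    app = App()
    CloudNativeResources(app, "cloud-native-resources")
    app.synth()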

Recommended reading: Cloud Sandboxes - How to Train Your Engineers To Go Cloud-Native (whitepaper)

Observability

The monitoring and logging systems can operate the same as in the existing environment, or they may need to be complemented by the native services provided by the cloud.

Security, Risk, and Compliance

Security in the cloud is a shared responsibility, and the vulnerability spectrum is more extensive in the cloud. Therefore, the teams involved need to estimate for container security considerations, including but not limited to image security, container privileges, host isolation, and application-layer sharing.

Other layers to consider for cloud deployments are the network, orchestrator, data, runtime, and host. An existing private data center environment may not have applied all of these security considerations, so prepare an exhaustive list even for early estimations.

Cloud Governance

A holistic cloud governance model enables enterprises to drive the cloud indigenization culture. The core functions include:

  • Implement security and safety measures to minimize data vulnerabilities (discussed above).
  • Define best practices and drive policies in cloud adoption that align with industry standards.
  • Create a continuous improvement plan along with reusable and preconfigured resources.
  • Optimize costs and enhance visibility through advanced analytics and reporting.
  • Automate and scale processes and infrastructure as and when required. Also, automate metering, monitoring, and chargeback policies and processes.

Teams must design the governance functions described above as either centralized or decentralized, based on the business goals. The cloud’s SLIs (Service Level Indicators) and SLOs (Service Level Objectives) differ from those on-premises. It’s necessary to consider the security, risk, and compliance aspects along with availability, latency, and throughput. If we have limited visibility into the business requirements and product owners, adopt a reference cloud-native governance model for estimation purposes.
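For estimation purposes, even a back-of-the-envelope translation of an availability SLO into an error budget helps frame the governance discussion. The 99.9% target and 30-day window below are illustrative assumptions, not recommendations.

    # Translate an availability SLO into an allowed-downtime error budget.
    SLO = 0.999                       # illustrative availability target
    WINDOW_MINUTES = 30 * 24 * 60     # 30-day rolling window

    error_budget_minutes = (1 - SLO) * WINDOW_MINUTES
    print(f"Allowed downtime per 30 days: {error_budget_minutes:.1f} minutes")  # ~43.2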

Delivery and Deployment

Teams must consider the plan, design, and effort to deploy to lower and higher environments, including at least Dev, QA, and stage as pre-production environments, and at least one production environment. If a CI/CD pipeline exists, teams can update it or create a new one native to the cloud.

Cutover Plan and Dry Run

All stakeholders should create and prepare a concrete plan and checklist for the possible impact and mitigation. This exercise requires many iterations and is crucial to the integration process. Unfortunately, this work item often gets ignored or underestimated, so it deserves a specific call-out.

Key Takeaways

Being cloud-ready is not the same as being cloud-native.

The technology world consists of language and vocabulary that can be interpreted differently by different stakeholders. As digital transformational leaders, we should translate the technical terms and the marketing claims back into plain language to make informed decisions for ourselves and our customer organizations.

The initial estimates are critical to any transformation.

An early estimate helps to formulate indigenization strategies, provides a basis to plan and execute engineering and delivery, and serves as a baseline for changes. In addition, estimates presented through a sound, well-thought-through work-item structure, with the approximate person-effort required for each item, give a competitive edge when seeking overall cost approvals from financial services executives and board members.

GlobalLogic can help.

Whatever stage you are at within your cloud-ready and cloud-native journey, GlobalLogic has the experience in technologies and services to partner with you and enable you to accelerate your journey.

We have also developed blueprints and frameworks meshing the reusable cloud patterns with industry best practices and sound architecture principles:

  • Templates to provide the initial estimates
  • Frameworks to assess cloud-ready states and enable them to become cloud-native. See Cloudwave.
  • Accelerator to bring legacy applications to a cloud-ready state through microservices and containerization. See MSA.
  • Accelerator to modernize legacy data platforms and make them cloud-optimized. See DPA.
  • Accelerator through cloud-native infrastructure setup. See OpeNgine.
  • Cloud-native Quality Assurance as a platform service. See ScaleQA.


Online shopping and digital apps have changed consumer spending patterns, and today, shopping is no longer limited to in-person transactions during regular business hours. Retailers face new challenges with fund transfers as merchants and app partners require faster, more reliable money transfer systems to meet consumers' evolving demands.

Traditional electronic payments and bank transfers are not in line with user expectations. Instant payments are expected to become the standard mechanism for electronic fund transfers, merchant payments, and digital transactions.

Instant payments (also called real-time payments) are a method of exchanging money and processing payments in which funds are transferred across bank accounts in real time rather than over a couple of business days. Several countries have implemented instant payment systems and platforms due to the increased need for faster and more reliable transactions. Notable examples worldwide include the Unified Payments Interface (UPI) from India, the New Payments Platform (NPP) from Australia, and Pix from Brazil.

These services have become ubiquitous in their respective areas of operation and have cornered a large market share in the digital transactions space. They also provide many advantages such as 24x7 availability, transaction speed, ease of use, low-cost functionality, convenience, versatility, open environment, and safety. 

But this ease of use also comes with its share of security concerns.

This post describes these security concerns and different approaches which can be used to develop a secure system that prevents these services from being misused by criminals.

The Need

Since the advent of COVID-19, the use of real-time payments has risen exponentially. India has led the way with over 25 billion real-time transactions, as UPI payments and UPI-specific payment apps have become pervasive across the country. Digital payment options can be found in every nook and corner of India, from luxury shopping malls to street-side vendors.

But as real-time payments have increased, the chances of fraud have also grown. The net effect is that the more ways we pay and the more places we interact with payment processors, the greater the opportunities for cybercriminals. The ease and convenience these instant payment systems offer users also benefits criminals, who have discovered the comfort and speed of using them to their advantage. This has led to so-called lightning kidnappings, whereby consumers are forced to make instant transfers to criminals while being held for ransom.

In India, UPI payment apps usually come with multiple levels of security – a code to open the app and another PIN to perform the transaction. However, these are not sufficient when both the person and the device (which in most cases is a mobile phone) are held hostage together by criminals.

Possible Approaches to Security

Traditional approaches to cybersecurity react to situations and include rule-based responses such as scanning for a set of ‘known’ indicators that signal an attack, then remediating it. However, this often comes too late.

Given the magnitude of risk exposure, the time for traditional reactive solutions has passed. Machine learning and AI, when combined with behavioral analytics that scan for patterns and inconsistencies, can help financial institutions bolster real-time protection. For example, helpful patterns include the geolocations where transactions occur, the time of day, the amount of money transferred, the types of accounts transferred to, and so on.
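As a rough illustration of this idea, the sketch below scores a transaction against historical behavior using scikit-learn's IsolationForest. The feature set (amount, hour of day, distance from the usual location) and the sample values are illustrative assumptions, not a production fraud model.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Historical "normal" transactions: [amount, hour_of_day, distance_km]
    history = np.array([
        [250, 10, 1.2],
        [400, 13, 0.8],
        [120, 19, 2.5],
        [900, 11, 1.0],
    ])

    model = IsolationForest(contamination=0.01, random_state=42).fit(history)

    # A large transfer at 3 a.m., far from the usual location.
    suspicious = np.array([[50000, 3, 450.0]])
    print(model.predict(suspicious))  # -1 flags an outlier, 1 an inlier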

Another option for financial institutions is to require Multi-Factor Authentication when the amount transferred or the number of transactions exceeds a certain threshold. However, an important point to note here is that most MFA implementations require a mobile phone and a PIN to complete the transaction. This is ineffective when a hacker or criminal has access to both the person and the mobile phone.

As a result, solutions must factor in that both measures may not be available simultaneously. This needs to be thought through, but one option could be the mobile phone of the spouse or an emergency contact person.
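A minimal sketch of such a threshold rule follows; the limits are illustrative assumptions, and the choice of step-up factor (an emergency contact's device, a hardware token, and so on) is left to the institution.

    # Decide whether a transfer needs step-up (multi-factor) verification.
    AMOUNT_THRESHOLD = 10_000        # illustrative per-transaction limit
    DAILY_COUNT_THRESHOLD = 5        # illustrative transactions-per-day limit

    def needs_step_up(amount: float, transactions_today: int) -> bool:
        # Large or unusually frequent transfers trigger additional verification.
        return amount > AMOUNT_THRESHOLD or transactions_today >= DAILY_COUNT_THRESHOLD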

In the physical world, home burglar alarms give the homeowner a couple of minutes to enter a code after opening the door. Someone other than the owner will not know the code, and an alarm is sent to the nearest security office or police station indicating an unexpected entry. We can take this idea online by allowing users to set an alarm code: when an unexpected transaction occurs, the user’s account is blocked and an alert is raised unless the secret code is entered.
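A minimal sketch of this alarm-code idea is below; the alerting and account-blocking hooks are hypothetical placeholders, not part of any real payment API.

    import hmac

    def handle_flagged_transaction(entered_code: str, alarm_code: str) -> str:
        # A flagged transaction is held; only the account's secret alarm code
        # releases it. Anything else blocks the account and raises a silent alert.
        if entered_code and hmac.compare_digest(entered_code, alarm_code):
            return "release_transaction"
        # notify_bank_security()   # hypothetical hook
        # block_account()          # hypothetical hook
        return "block_account_and_alert"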

Sometimes, fraudsters request money through QR codes, duping gullible sellers of physical items on online marketplaces. Users must be careful and double-check the purpose and amount whenever a PIN or MFA step is needed for an instant payment made through a QR code.

End users can take precautions against fraud as well. The following are not elegant solutions, but they provide a way to better secure the bulk of one’s funds.

One approach is to use a separate bank account with limited funds for online transactions. This way, crooks would have access to only a part of the total amount, resulting in lower losses. Another approach is to use different devices for communication and online transactions: a separate mobile phone, stored securely against unauthorized use, can be linked to bank accounts and instant payment systems. A third approach is to have a separate security device for transactions, one that is not carried all the time during travel and is required for high-value transactions beyond a certain limit.

Conclusion

Instant payment systems have proved to be a boon for both consumers and businesses that use them to efficiently transfer money across users. However, this also introduces new ways these technologies could potentially be misused by criminals. Financial institutions and users themselves have work to do to better secure the payment system, as well as individual payments.
