Archives

We as humans have a tendency to adjust rapidly to our environment and to begin to consider it “normal” in a very short time. This has probably been key to the survival of our species: we can’t afford, biologically, to be constantly triggered by recurring events. Instead, we set a new baseline and then are “aroused” only by changes to that baseline.

We’re all familiar with entering a new environment and at first noticing a distinctive odor or sound—baking bread, a person’s perfume or cologne, or the whine of an aircraft engine, for example. Then, within a few minutes, we are no longer consciously aware of it. Although we see this same “habituation” effect happen at a macro level in the technology world, every once in a while the novelty of the situation shines through, and we have a little epiphany—or in the technology space, what we might call a “science fiction moment.”

When I worked for a previous company and was spending a lot of time in India, my company assigned me a car with a driver. Having a local driver was a necessary safety factor given the driving conditions in India in those days, and it was a common practice both for visiting foreigners like me and for many locals. Coming from the US and being used to driving myself, I found having a driver to be very awkward at first. It seemed absolutely unfathomable to have someone drive me to work and then sit and wait many hours until I was ready to leave. I felt guilty about it.

Even though having a driver in India was both common and reasonably inexpensive (by US standards), the idea that I was keeping an actual person waiting on me—literally—all day long was hard to get used to. But I did get used to it. In just a few weeks, I not only enjoyed having a driver, but I began to appreciate the advantages. For example, I was able to ask him to run errands for me while I was working, enjoy his conversation on long drives, and appreciate my favorite coffee “to-go,” which he’d get for me before he picked me up in the morning. In short order, I was thoroughly “spoiled.” While I still very much appreciated—and over time became friends with—my driver, I no longer felt guilty about his waiting for me when I was busy (unless I was going to be very late). In other words, I became thoroughly habituated to this new experience.

I see the same effect when I travel. I travel a lot on business, and generally I am so focused on my work that I am not too aware of the novelty of my surroundings. Every once in a while, though, something will happen, and I’ll notice what a fantastic place I’m in. We ran an architecture workshop for a client in Paris, for example, in a conference room that had an amazing close-up view of one of the major Paris landmarks, the Arc de Triomphe. As we conducted the workshop or took a break, I’d glance out the window and think “I love my job!”

There are other magical moments—a dinner with my colleagues in an outdoor public square in Liechtenstein, eating roasted chestnuts from a street vendor in Zurich on a cool fall day—that punctuate the habituation of frequent business travel. These and other such moments remind me of something I’ve become habituated to and so often take for granted: what amazing places I’m privileged to visit, and how lucky I am to do such interesting work with such great people.

Something similar happens with technology. When we get a new technology or device, we often feel a sense of fascination or delight. This quickly fades, and while we still enjoy the benefits we get from that device, we start to take them for granted. Then something happens that reminds us of what an amazing era we live in.

This happened to me the other day. I drive a Tesla that has a “navigate on autopilot” feature. This feature was introduced about a year ago (as of this writing), and I’m fairly used to it by now. However, I always enjoy how the car automatically navigates the freeway exit nearest my house. The freeway exit ramp makes a fairly sharp right turn and then a complete U-turn before it joins the major intersection that I take to get home. If you take your hands momentarily off the wheel, it’s pretty obvious that the car is following the road and steering all by itself. The other day I happened to be using Siri voice commands to send some notes to myself at the same time that my car was automatically driving itself around this exit and toward my home. I didn’t consciously plan these things happening at the same time, but it struck me very forcefully that I was having a science fiction moment.

The situation of having a spoken conversation with my “pocket computer” while being automatically driven home by my artificially intelligent car was literally science fiction just a decade ago. We’re not all the way there with either technology, of course. But every once in a while, something like this will happen to remind me that we’re living in a future that people only dreamed about just a short time ago.

I don’t think we can avoid becoming habituated, technically or otherwise; it’s hardwired into us as humans. I think we can, however, stay alert to situations that remind us of what exceptional times we live in, and what exceptional opportunities we have.

All the best for a joyous and prosperous New Year and the upcoming 2020s.

In the 1960s, sociologist Everett Rogers produced a roadmap showing how innovations are adopted and, eventually, become obsolete. Later, author Geoffrey Moore wrote a book called “Crossing the Chasm” that detailed how companies and technologies succeed or fail to progress from “early adopter” to “early majority” status. Moore’s work further popularized Rogers’ categories, and words like “innovator” and “early adopter” have become firm fixtures of the Silicon Valley and worldwide technology vocabulary.

Fig 1: Diagram based on E. Rogers’ “Diffusion of Innovations,” 1962. Courtesy of Wikimedia Commons.

For many companies who depend on technology, the pragmatic “sweet spot” on the technology adoption curve lies somewhere between the early majority and late majority. By the time a technology begins to be adopted by the early majority, many of its initial challenges have been overcome by the innovators and early adopters. The benefits of that technology can now be realized without the pain those pioneers had to go through. Also, a substantial community of companies and developers is in the same position, so resources, training, tools, and support start to become widely available. At the same time, the technology is new enough that the best engineers and architects will be excited to learn and work with it—it’s a motivator to attract talent.

This assumes, of course, that the new technology delivers benefits. But, generally, if it “crosses the chasm” and gets to the early majority phase, that has already been soundly proven. For example, digital natives like Amazon, Google, and Facebook were early adopters of a variety of then-new technologies. Their risk—and success—subsequently paved the way for the vast majority of companies that now follow in their footsteps.

Most technology-enabled businesses can survive and thrive with technology that is one generation—or even two—behind the technology being used by the early adopters. Once a technology becomes older than that, though, lots of problems come up:

  • It becomes harder to attract and retain good talent.
  • System uptime, stability, and scalability become less competitive relative to more modern systems.
  • The user experience and overall system quality suffer; security threats cannot be readily countered.
  • Good technology options from other companies and the open source community become less abundant.

Companies whose technologies fall into Professor Rogers’ “laggard” category will generally experience these issues first-hand, whether or not they recognize that their technology is the cause.

By nature, the specific technologies that fall into each category are moving targets, and meaningful market adoption statistics are hard to come by. Forbes reported in 2018 that 77% of enterprises have a portion of their infrastructure on the cloud, or have at least one cloud-deployed application[1]. This figure resonates with our own experience, but it still does not tell us what percentage of new revenue-generating applications are created using cloud-native / mobile-first architectures, or how aggressively businesses are migrating to the cloud. Our experience suggests “nearly all” and “it varies,” respectively.

Classifying Technology from a Practitioner’s Perspective

To provide a practitioner’s perspective on technology adoption, we decided to create a classification based on our own experience with clients, partners and prospects. Collectively, because of our business model, this set of companies cuts a wide swath through the software product development community, including startups, ISVs, SaaS companies, and enterprises. Because GlobalLogic’s business focus is on the development of revenue-producing software products, the enterprises we work with generally either:

  • Already use software to produce direct / indirect revenue
  • Recognize that software has become a key component of their other revenue-generating products and services
  • Are more conventional but aspire to be more software-centric
  • Are “laggards” (usually recently acquired technologies and companies whose technologies are in need of a refresh)

So what trends are we seeing in the technologies used by this diverse group of software-producing companies? Before we start, please note that the classifications we make here are about the technologies, not the companies. Even the most innovative company probably has some “laggard” technologies deployed. Similarly, even very conservative companies may be early adopters in some areas. For example, some otherwise ultra-conservative banking software companies are incorporating cryptocurrency support.

As of late 2019, we categorize the various technologies currently in use among our partners, prospects, and client base into five categories: innovators, early adopters, early majority, late majority, and laggards. (If you find the term “laggard” offensive, please note that we use it because it is sociologist Everett Rogers’ term, not ours. It should be read as only applying to the technology, not the company or the people.)

Fig 2: Technology Adoption Picture in Late 2019

Innovators

Relatively few of the technologies currently under investigation or in use by innovators ever reach the early adopter stage in their current form, but the work done by these innovators informs the early adopters and the entire ecosystem.

Late 2019 innovator technologies include:

  • Quantum computing
  • Strong AI (that is, systems that “think” like people)
  • Network-distributed serverless

GlobalLogic has the same curiosity about these types of technologies as other engineering-focused companies tend to have. However, these technologies are primarily in the research phase and not the revenue-generating phase, so our commercial engagements here tend to be limited.

Early Adopters

Technology in this bucket has widespread availability, but it is either in a relatively rough state that has not been fully productized, or else is an obviously good technology that is still looking for the right opportunity to go mainstream. While a substantial number of these technologies will eventually be taken up by the early and late majority in some form, neither the timing nor the specific winners and losers have yet become clear. For example, at this writing, it seems clear that cryptographically secure distributed ledgers will become mainstream at some point. However, will blockchain specifically be the big winner? Out of such bets, fortunes are made or lost.

Early adopters need to invest time and energy to find and then fill the gaps and missing pieces of an incompletely productized technology. However, the rewards of using early adopter technology can be very large if it addresses a genuine need that can’t be readily addressed using other approaches. For example, Bitcoin, an early application of cryptographically secure distributed ledger technology, has paid off big for many people, though arguably this technology has yet to become mainstream for business applications.

Within our client community, current early adopter technologies include:

  • Serverless functions or “Function as a Service” (FaaS)
  • Serverless containers
  • Blockchain
  • Deep learning (different than strong AI)
  • Computer vision
  • AR/VR (outside of gaming and entertainment)
  • Gestural interfaces

We actually have clients using all of these technologies commercially today. For example, we work on public safety systems and autonomous vehicles that use computer vision and deep learning. However, these applications fall into the early adopter / risk taker / first mover / niche application category, rather than what could be considered mainstream business applications. We, along with many others, firmly believe that many of these technologies are rapidly maturing and that they will indeed enter the mainstream in the next few years. But as of right now, we could not claim that they have become part of the mainstream today.

Early Majority

Technology entering the “early majority” bucket is initially a little scary to the mainstream, but it has been thoroughly worked over by the early adopters (and the even earlier entrants into the early majority) and tested in real-life deployments. The blank spaces, boundaries, and rough spots have been largely filled in, and the technology has now been “productized.” Tools and support are available, together with experienced personnel. For many businesses, this is the sweet spot for new product development: early enough to give you a meaningful competitive edge and to be attractive to talented engineers, but not so early that you need to invest the time and energy required to be a pioneer. Early majority technologies also have the longest useful life, since they are just entering the mainstream adoption phase.

Right now among our customer, prospect, and partner base, we see the early majority adopting containerized, cloud-native, event-driven microservices architectures, along with fully automated CI/CD deployment and “Infrastructure as Code.” We saw this trend starting back in 2015 among our mainstream-focused clients.

Enterprises who are developing new systems or extending older ones are widely adopting:

  • Modern NoSQL databases (as opposed to ancient versions of NoSQL)
  • Event-driven architectures (microservices-based and otherwise)
  • Near real-time stream processing
  • DevOps / CI/CD / “Infrastructure as Code” / Site Reliability Engineering
  • Containerized cloud-native microservices architectures

On the user experience front, we are beginning to see a significant uptick in mainstream clients who are interested in dynamically extensible “micro front-end” architectures.

Late Majority

Revenue-generating software systems generally age into the late majority. They tend to be created using technologies that were early majority when the system was built, but time has gone by, and those same technologies now fall into the late majority category. This applies to any company that expects to make money from the software—either by selling it (in the case of an ISV or SaaS company), or as part of a product or service (e.g., a car or medical device).

For enterprise-developed non-revenue generating applications (e.g., internal back office or employee-facing systems), the situation is somewhat different. Because cost control and low-cost resource availability are primary drivers, internal-use applications are often developed using lower-cost technologies that are now in the late majority stage. Late majority technologies also enable the use of less expensive resources who may not be skilled in early majority technologies.

As an aside, this attitude toward late majority technologies is one reason for the dichotomy between IT-focused organizations and product-focused organizations, both within a given enterprise and in the services businesses that support them. Product-driven organizations and services businesses tend to be skilled in developing systems using early adopter and early majority technologies. IT-focused organizations focus on sustaining systems that use late majority and sometimes laggard technologies. This is obviously an oversimplification, as both product- and IT-focused organizations can certainly be skilled in the full range of technology options. However, product- and IT-focused companies tend to have different attitudes, approaches, and “DNA” with respect to the different stages of technology maturity.

For revenue-generating applications, while development cost is always a factor, time-to-market, competitive advantage, and the overall useful life of the resulting product are generally more important than cost alone. This desire to maximize the upside potential generally drives new revenue-producing app development toward early majority technologies, while non-customer-facing / non-revenue producing / internal-facing applications tend to use late majority technologies to save money.

As of late 2019, the predominant late majority architectural approaches are:

  • A true N-tier cloud-deployed layered architecture, supporting stateless REST APIs and JavaScript Web / mobile native clients
  • RDBMS-centric systems using object-relational mappings (ORMs)

Good implementations of these architectures have strong boundaries between layers exhibiting good separation of concerns, are well componentized internally, and may be cloud-deployed. This is a good, familiar paradigm, and we expect elements of it to persist for some years (e.g., the strong separation between client and “server” through a well-defined stateless interface). However, even the best implementations of the N-tier architecture lack the fine-grained scalability that you can get with early majority microservices technology. Many implementations of this paradigm also tend to be built around a large central database, which itself limits the degree to which the system can scale in a distributed, cloud-native environment.

If history is any indication (and it usually is), we believe the majority will—perhaps reluctantly—leave the N-tier paradigm behind in favor of a cloud-native microservices approach within the next several years.

Laggards

Given its negative connotations, we would really prefer not to use this term. Let’s keep in mind, however, that the term was introduced by Professor Rogers to refer to specific technologies, not the company or the people who work with them.

Laggard technologies are those that are not used to any significant degree for development of new software systems today, either for revenue-producing products or for internal-use systems. Time has passed these technologies by, and they have been superseded by other technologies that the vast majority recognize as superior (at least 84%, according to the curve).

People use laggard technologies only because they have to. Systems based on laggard technologies are still in production, and these systems must be actively enhanced and maintained. Within a given organization, these activities require creating a pool of resources who have knowledge of the laggard technologies. The proliferation of these niche skillsets within the company can drive the creation of new systems using the same laggard technologies, even when better options have become widely available.

For technologies in the laggard category, multiple generations of improved technologies and architectural approaches have, by definition, now become available. In general, these improvements make development easier and faster, scalability and reliability higher, user experience better, and operations cheaper. Nonetheless, companies can find themselves locked into laggard technologies because that is the skillset of their workers. Getting out of this bind is disruptive, and “digital disruption” has become a frequent refrain in the industry.

Current technologies that fall in the laggard category include:

  • Microsoft Access-style “two-tier” (real or effective) client / server architectures with tightly coupled UIs and logic in the database (SPROCs, etc.)
  • Stateful / session-centric web applications
  • Conventional “SOA” / SOAP architectures
  • “Rich client” systems (Silverlight, Flash / Flex, and many but not all desktop systems)
  • Legacy mainframe-centric systems

In general, any technology that was used by early adopters 20 years ago or longer is a candidate for the laggard bucket.

Conclusion 

Technology stays current for a surprisingly long time. Specifically, some major technologies have stayed in the “majority” category (early and late) for about 16 years and, in a few rare cases, even longer. That’s enough time to raise a child from birth to high school. But, as those of us who have raised children know, while time may seem to stand still day-to-day, looking back it passes by in the blink of an eye.

On the technology front, tiered architectures with REST APIs may still seem modern and current—but in fact, the early adopters were using them in 2002, and they became mainstream in 2006. If history is any indication, N-tier architectures will enter the “laggard” category by 2022.

Not all technology ages at the same rate; some technologies, like Shakespeare’s Cleopatra, whom “age cannot wither,” stay remarkably vital. Technologies that have reached, or nearly reached, the 20+ year mark while remaining relevant include:

  • The stateless REST interface paradigm
  • HTML/CSS/JavaScript web applications
  • Modern NoSQL
  • Wi-Fi
  • Texting (SMS)
  • Apple’s OS X operating system (originally NeXTSTEP)

However, where technologies are concerned, remaining relevant in old age is the exception, not the rule. The technologies and paradigms that have stayed current have not remained static; they have evolved continuously since their early beginning. Good systems do the same—generally by steadily incorporating “early majority” and “early adopter” technologies to keep themselves fresh.

[1] https://www.forbes.com/sites/louiscolumbus/2018/08/30/state-of-enterprise-cloud-computing-2018/#24d16798265e

The role of mobile in retail has grown significantly bigger and broader in the last few years. Mobile is not just a platform for consumers to browse and purchase products; it has grown to provide immersive experiences using Augmented Reality (AR). Mobile empowers store associates to enhance productivity and provide a personal experience to their customers. Mobile is not limited to the smartphone, either; it is an ecosystem of devices that provides a connected experience. These devices include voice assistants, smart speakers, microwaves, doorbell cameras, home security systems, kitchen appliances, dash cams, etc.

Below are 7 trends that demonstrate how mobile is playing a key role in empowering consumers and retailers.

1. Voice Commerce

More than 100 million Alexa devices have been sold so far. From TVs to speakers, cars to refrigerators, we interact with Amazon Alexa or Google Assistant through all sorts of devices. This trend is not slowing down; we are going to see these voice assistants become integrated in many more devices. These assistants provide a new medium through which consumers can connect to brands and purchase products. Voice assistants are good for reordering consumables that users are already familiar with (i.e., where customers don’t need to see pictures or read reviews).

Voice commerce is a lot different from desktop or mobile app commerce, where users can see product descriptions, view promotions, read reviews, and view product images. Voice commerce is complex, and its UX must be designed from scratch. It can also complement a user’s online shopping experience by letting them ask about an order status or available offers, or check their reward points. Adding a screen to these assistants can take the shopping experience to the next level.

2. Augmented Reality (AR)

For the past several years, Augmented Reality (AR) has been a buzzword with not much success in retail. The launch of Apple’s ARKit and Google’s ARCore made it possible to provide immersive AR experiences on smartphones. Gartner predicts that, by 2020, 100 million consumers will shop via AR, both online and in-store. Ikea was one of the first retailers to adopt AR, and Wayfair soon followed. With the retailers’ apps, customers can measure a room in their house and “place” furniture in it. AR can also be used in these other retail use-cases:

  • Virtual Fitting Rooms: Try on products like shoes, jewellery, make-up etc. (Nike, Puma, and Sephora are already doing this). A Smart Mirror can also help customers virtually try on clothes, change colour options, and then order products right from the mirror.
  • Product Demos & Information: Point your phone’s camera at a product to see details and price information, view a demo, and get further recommendations.
  • Product installation: Point your phone’s camera at a product’s QR code to view a step-by-step installation guide.

This is just the beginning; more and more retailers will come up with great immersive AR experiences.

3. Personalization

Consumers are demanding more personalised experiences across all retail touch points, from product discovery, to product purchase, to post-purchase services. They are willing to share meaningful data that retailers can use to provide better product recommendations and contextual offers.

Enormous progress has been made in the field of Artificial Intelligence (AI) and Machine Learning (ML) over the past few years to help create accurate models and provide personalised shopping experiences. By applying AI and ML to enormous amounts of data, retailers will be able to predict what their customers want before the customers themselves know. Retailers can then provide all this meaningful data to store associates via a mobile app, thereby empowering associates to help customers choose the right product and be a part of their shopping journey.

4. Experiential Retailing

Most physical stores have not been able to meet customers’ changing expectations, and they have been dying one after another. Customers love to shop in physical stores, but they demand a better experience. Retailers that are not able to understand these expectations and adapt to them will collapse. Meanwhile, e-commerce players like Amazon, Warby Parker, and Casper have opened their own physical stores to provide an unparalleled retail experience. These stores don’t just sell products; they provide opportunities to connect with consumers and tell their brand story.

Experiential retailing is the new trend where consumers come to a store to interact with a product and hang out with friends, and then can make purchases through a variety of channels. Products are equipped with smart displays or tablets to show product information and videos. Customers can also make a purchase through these tablets and have the product shipped to their home. These tablets also capture analytics about how customers interact with a product and its information. By intelligently using technology, stores empower their associates to assist customers personally and create “aha!” moments for them. Before talking to a customer, an associate will already know his/her preferences in order to provide personal assistance.

A great example of experiential retailing is Toys R Us. Back in 2017, Toys R Us filed for bankruptcy and closed nearly all 800 of its stores. Now the retailer has partnered with a startup called b8ta to provide a new type of toy shopping experience for families. In these smaller flagship stores, kids can check out new toys, watch movies, participate in STEAM workshops, and more. After families get hands-on experience with the products, Toys R Us will offer them an opportunity to make purchases both in-store and online.

5. Same-Day Delivery

Consumers are no longer satisfied with 2-day delivery. They demand more — they demand products now. Retailers are trying hard to speed up delivery times to same-day or even just a few hours. For example, Amazon will start delivering products by drone within a few hours after an order is placed. Soon this will expand to other retailers and even food delivery. By 2020, same-day delivery will be the new normal. Customers will go to stores to try on a product, choose home delivery, and the product will be shipped to their home the same day.

6. BOPUS & Stores as Fulfilment Centres

“Buy online & pick-up from store” (BOPUS) has already seen enormous success. This trend will continue to spread across most retailers. This is a very important aspect of providing an omnichannel experience, and retailers must make sure their BOPUS customers can pick up their products as quickly as possible. Measures they can take include providing reserved parking / store entries, moving their pickup point closer to the store entry, or providing pickup lockers outside the stores.

Retailers have also started fulfilling online orders from their physical stores to speed up delivery time. Same-day delivery from Amazon has pressured brick-and-mortar retailers to use their physical stores at full capacity. Instead of building new warehouses near every city, retailers will utilize their stores as fulfilment centres. To make associates more effective, retailers are also revamping their in-store technology by implementing smart apps, RFID inventory counting, mobile checkouts, etc.

7. Seamless Multi-Channel Experience

Retailers need to provide a seamless experience across their online and offline channels. Gone are the days when offline and online channels worked in silos. Customers often start their journey on mobile and end up making a purchase on desktop or in physical stores. Customers expect a unified experience from retailers; they want knowledgeable store associates who can help them find the product they saw on their mobile app.

Summary

Consumers have fundamentally changed. Their engagement with new technologies and digital services has driven their expectations higher and higher. They’re now demanding useful, engaging, and assistive experiences from all the brands they interact with. Retailers are going through a major digital transformation to meet the expectations of these demanding customers, and they should seriously consider these trends and align their digital strategy to keep mobile as a key driver.


It’s 2077. A 95-year-old man, Martin, begins his day with a wholesome breakfast followed by a healthy walk prescribed by his caregiver. He is feeling fine. He has long been looked after by specialists who monitor his health and know everything about his illnesses and ailments. He still has dozens of years ahead of him before he turns 122, the current average life expectancy of humans living on Earth.

This vision, which may sound like it was taken from a science fiction movie, is already being worked on today in GlobalLogic laboratories — and will soon become reality. All thanks to internables.

We are surrounded by Internet of Things (IoT) devices at all times, and we interact with them every step of the way. Reports confirm that this relatively new technology has gained recognition the world over. Today, there are already 26 billion active devices, and by 2025 this number will be three times higher.

Smartwatches and smartbands fit this mobility trend perfectly, with the widely promoted “healthy lifestyle” reinforcing the extensive user base for these devices. We all want to improve our health and stay in good shape, and the inconspicuous yet robust smartbands and smartwatches assist us greatly in doing so. Hardly anybody who has tried an IoT device for their workout goes back to training without it. With the popularity of dedicated apps; sensors that monitor mileage, heart rate, and burned calories; and virtual trainers that create personalized workout plans, it is hardly surprising that 70% of the IoT devices currently trending are focused on health and physical activity.

This IoT treasure trove of health tech — which helps consumers feel safer and save time and money — will soon be extended with internables. So what is this technology all about? In a nutshell, internables (also known as implantables) are sensors implanted inside the human body to naturally enhance the capabilities of health equipment.

Only in the Movies?

Most of us associate these solutions with books, comics, games, and movies. Nanomachines and cyberimplants are staples of the virtual worlds created by game developers (e.g., the Deus Ex series or the upcoming Polish hit Cyberpunk 2077). We can also see some practical applications of user-implanted devices in several episodes of the TV series Black Mirror. However, we no longer need to delve into the realm of pop culture and science fiction to identify what internables can do. As it turns out, this technology has already been applied in the world we live in today.

The medical sector has always been among the first to implement the latest technology solutions on a wide scale, prioritizing those capable of extending patient care while reducing costs. This has been exactly the case with IoT devices. Forecasts indicate that by 2020, 40% of all active IoT devices worldwide will be used in this sector. Consequently, internables shouldn’t be seen as merely a sensation; they are the next step in a series of groundbreaking biotechnology-based medical projects.

Engineers and scientists have joined forces to better monitor patients’ health and advance the telehealth sector. They have also harnessed existing technologies to fight well-known illnesses and ailments. The range of activities underway is extensive, with various milestones already reached — from insomnia-alleviating sleep bands that use the human body’s natural ability to transfer sound through bones, to designs for miniature robots (“nanomachines”) that will move inside the human body to deliver medicine to a targeted point in the system. For example, nanomachines that look like a cross between a whale and an airplane will be used to — among other things — effectively combat cancer.

Internables offer particularly high hopes for neurosurgery. The bold designs presented in recent months include devices that enable paralyzed patients to control their limb movements, and microdevices that stimulate individual neurons to help treat Alzheimer’s.

Internables at Your Service

Internables are regarded as the key driver to advancing telehealth because they will enable a smoother exchange of information between specialists and users, resulting in an unprecedented scale of care. In the future, individual vital parameters of the human body may be regularly relayed to — and recorded on — users’ digital health cards for faster disease diagnosis and more detailed disease monitoring. These cards could facilitate better communication — not only with medical caregivers, but also with trainers — so that an adequate diet and fine-tuned workout can be prescribed based on the user’s current health status.

Internables can also be implemented in other sectors, like automotive. The swift development of smart cities and smart cars makes traditional cockpits and driver–vehicle communication methods obsolete. For example, GlobalLogic is currently working on a project called GLOko that explores services related to image processing, such as real-time head and eye tracking solutions. Internables could easily be integrated into these solutions to enhance driver-to-vehicle communication.

A New Dimension of Privacy

 The vision of the future where we live happily ever after assisted by technology is very appealing. Who wouldn’t like to be able to record their chosen memories and come back to them at any time? How many people would be able to overcome a disability or illness? However, internables present just as many challenges as they do opportunities.

The idea of privacy acquires a whole new meaning with internables. We are not talking about stolen cars or hacked PCs, but about potentially life-threatening risks. Cybercriminals will definitely not pass up the opportunity to hack and blackmail internable users — such as hacking cardiac pacemaker setting apps. Consequently, it is crucial to establish adequate procedures and protections to prevent any fatal consequences, and to work out mechanisms that will dispel any concerns over compromised privacy and unauthorized surveillance. This will require some effort, but it will certainly pay off, as we all want to enjoy long lives in good health and peace.

Conclusion

Technology has opened up incredible opportunities and new paths for civilization many times over. Internables are undoubtedly another chance for us to live longer, better, and safer. Their success, however, depends on the actions taken by companies all over the world.

Only by anticipating the possible negative outcomes of misusing technology at an early stage can we properly protect our users from the potential unpleasant consequences.

At GlobalLogic we face such challenges on a daily basis. We accept this as we strive to harness the potential of internables, which in a few years, perhaps, will change the world as we know it.

Many roles in software development tend to be mislabeled as “architects.” Although these roles are just as vital, using incorrect definitions can lead to miscommunication and unrealistic expectations.

As I work with companies on their digital transformation initiatives, I engage with many software architects, both in those companies and within GlobalLogic. I see many people with the title “architect” who are not what I would call an architect—they actually perform other, distinct, functions. Many of those functions are vital, and saying these people are “not architects” is in no way meant to disparage them or their role. But if everyone is called an “architect,” it certainly makes things confusing.

This confusion is widespread. If you search for “software architect definition,” you will see many alternatives that I believe are useless or, at the least, very confusing. Many of these definitions involve creating technical standards, planning projects, and doing other activities that are, in my view, not at all central to architecture itself. It’s not that architects can’t do these things, but you can still be an architect and not do them at all. Let’s take a look at a pragmatic definition of an architect.

In my view, a software architect is a person who figures out how to solve a business or technical problem by creatively using technology. That’s it. Under this definition, many people perform architectural activities, including individual software engineers. In my opinion, engineers are indeed doing architecture when they sit down and think about how they will solve a particular problem, before they start work on actually solving it. It may be “low level” (i.e., tactical) architecture as opposed to “big picture / high-level” architecture, but it’s still architecture.

The difference between an engineer and an architect is their focus: an architect spends the bulk of their time thinking about “how” to solve problems, while the engineer spends most of their time implementing solutions. Being a software architect is not necessarily a question of capability; it’s a question of focus and role.

Traits of a Software Architect

Solves Problems

The most important characteristic of an architect is the ability to solve problems. The wider and deeper the range of these problems, the more senior the architect (in terms of skill—not necessarily years). Some architects focus on network issues, physical deployments, business domain decomposition and “big picture” architecture, integration with existing systems, or even all of the above. But regardless of their focus, the principal task of an architect is to determine a good solution to a problem. It’s not to supply information, coordinate other people, or do research—it’s to describe the solution. The output of an architect is a description or roadmap saying how to solve a problem.

Focuses on “How”

Smart people often have an aversion to, or even disdain for, spending much time thinking about “how” to solve a problem. Instead, they want to jump immediately into solving it. This is either because the solution seems obvious to them or because they don’t realize there is value in focusing first on “how.” I remember having this attitude myself in grad school when I was asked for a “plan of work” to solve a particular physics or math problem. I would generally solve the problem first, and then afterwards explain how I did it, presenting my reverse-engineered activity list as the “plan.”

Either because my brain has slowed down, or I’m more experienced, or I’m dealing with more complex problems now—or maybe some combination of all three—I’ve come to value the “how.” In software, there are always many ways to solve a given problem. Of those possible solutions, more than one will generally be a “good” solution. The reason that we separate the “how” from the implementation itself is to give us space to craft and choose among these good solutions, and to examine and reject the bad ones.

Thinks Holistically

To deliver a good solution, an architect must first holistically understand the problem. Problems always have business impact, although frequently they are positioned as purely technical problems. An architect needs to understand the context of the problem they are solving before they can provide a good solution. This requires drawing people out, often for information they don’t necessarily realize they even have or need.

A good architect needs to be a good listener and relentless in tracking down not just what the problems are, but also “why” something is a problem or an opportunity. Since the non-technical side of the company may have little insight into the business impact of a technical decision, it falls to the architect to assess and communicate these impacts in order to choose a good solution.

Uses Technology Creatively

Not every good architecture is novel. In fact, a solid, tried-and-true solution to a standard, recurring technical problem is nearly always better overall (in terms of development and maintenance costs) than a “creative” approach that is different for its own sake.

That being said, after working with literally hundreds of system architectures over my career, I can’t think of a single one that does not have at least some novel features. This is because the combination of the current situation and constraints, the business requirements, and the technology options available to us at any given moment in time form a large and evolving set. In fact, the number of variables is large enough that their values are rarely the same twice. This gives ample room—and need—for creativity, even when you are not setting out with a goal to be novel.

Architects within established companies have the additional challenge of being thoroughly familiar with their existing system(s). This can naturally incline them toward an evolutionary approach. In their case, the need for creativity often involves the ability to see their current business with fresh eyes; in particular, applying novel techniques to current problems and opportunities, in cases where these approaches provide genuine business or technical value.

Makes Decisions

A primary hallmark of a software architect is their ability to make a decision about which solution is the best fit for a specific business or technical problem (even if that recommendation is ultimately not accepted). While sometimes the ultimate decision-maker does have the title “Chief Architect,” they can often hold the title of “VP/SVP/EVP of Engineering,” “Chief Product Officer,” or some other executive nomenclature. There is nothing wrong with this as long as the person making the decision realizes that they are now acting in an architectural role, not in a purely management / political role. Considering the cost, feasibility, and team preferences and skillsets of a given choice is indeed an architectural function — and can be a good way of deciding between alternatives when they are comparable technically.

Where executives get into trouble as architectural decision-makers is when they choose or introduce a technology that is not technically suitable to the solution of the problem, or that is not nearly as good as the architect-recommended options. For example, I once witnessed an executive override the recommendations of his architects and choose a totally inappropriate technology because he had already paid a lot of money for it. This executive did not appreciate the fact that the success of his project required him to play an architectural role, not a political or managerial one, when making this technology decision. The implementation of his program suffered accordingly as the team tried to work around the limits of an unsuitable technology.

Roles Mislabeled as “Architect”

While architects play a key and often pivotal role, there are many other functions essential to software development. However, I would assert that calling those other essential functions “architects” leads to a lot of confusion and mis-set expectations. Here are some of the roles that are often labeled “architect” but that, in my opinion, are frequently non-architect roles.

Researcher

This person surveys the available technologies and approaches within a given area and becomes very knowledgeable about the alternatives through online and offline research, conferences, and vendor presentations. While architects definitely spend time doing research, the fundamental difference between a researcher and an architect is that the architect decides. Researchers provide an essential function, but unless they apply the outcome of their research to a specific situation and ultimately make a specific recommendation as a result, they are not acting in an architectural role.

Evaluator or Analyst

An evaluator / analyst takes the results of research and compares the leading candidates to each other. He or she produces a list of pros and cons for the various alternatives, in the context of the current business or technical problem. Evaluation is also an activity that architects sometimes perform and are even more frequently called on to organize. Again, however, the key differentiator between an evaluator / analyst and an architect is that the architect ultimately makes a choice or single recommendation as a result of these evaluations.

Technical Expert

This person may be a researcher, an evaluator / analyst, or a combination of the two. They are extremely knowledgeable about the range of options available in a particular domain or technology, as well as the pros and cons of each. This particular skillset is often termed a “solution architect,” although again I would assert the word “architect” for this skillset is a misnomer. Knowledge of a given range of solutions or technologies does not in itself make someone an architect. It is the ability to apply such knowledge to a specific situation and to craft a good solution that characterizes an architect. Even with full knowledge of the available options, the ability to make good choices from them is a different skillset (and is rarer than at first it might appear).

Technical experts are extremely valuable, and they may indeed also be architects. However, there are many cases where technical experts are not architects, even if they have that title. This can be quite confusing, and can show itself as “churn” with no clear outcomes and decisions being made despite a high degree of knowledge in the room.

Knowledge Orchestrator

An important function in a complex project is “knowing who knows what” — that is, identifying the right people and getting them plugged into the right places. These people might be architects, researchers, analysts, technical experts, or any of the myriad technical roles that make a software initiative successful.

It’s sometimes hard to distinguish between a “knowledge orchestrator” and an architect because both have decision-making roles. The key distinction is that the knowledge orchestrator is not the originator of the technical ideas (i.e., they are not the person proposing the solutions to the various technical problems). Rather, they are a clearinghouse for information, and they may also select and synthesize the information provided. In other words, they are the “critic” rather than the “author” of the work.

Performing knowledge orchestration successfully requires a high degree of technical skill, the ability to make clear and sometimes tough choices, and the ability to explain and defend those choices. However, I would argue that this role is distinct from an architecture role. As we discussed above, an architect is the person who originates a proposed solution; the knowledge orchestrator role serves as an editor and critic of the proposed ideas.

Architects often play a knowledge orchestration role over more junior or more specialized architects. The distinction here is whether they also initiate novel solutions. There is a Scrum parable about a pig and a chicken who together decide to make a breakfast of ham and eggs. The pig turns to the chicken and says, “If we do this, I am committed. You, on the other hand, are merely involved.” In terms of architecture, the knowledge orchestrator is “involved,” while the architects are “committed.”

Conclusion

There are many non-architect roles in software development (e.g., project managers, developers, testers, product owners), but these do not tend to be mislabeled as “architects.” There is, of course, nothing wrong with not being an architect. I myself often play the role of a “knowledge orchestrator” instead of—or in combination with—acting as an architect. I also act as a knowledge provider from time to time. I do not feel any more “inferior” in these roles than I do when working as a hands-on architect. The roles are simply different. In this essay, I am simply challenging the labels, not the value.

Note also that architecture itself is a “team sport.” It’s very rare in business that a single architect owns every decision unchallenged. Almost invariably an architect works with others and must persuade them—as well as management—of the correctness of their choices. This dynamic is generally healthy, and it often results in a better outcome than any single individual could accomplish unaided. The need to “sell” their choices in no way diminishes the imperative for an architect to make a choice. In fact, a strongly reasoned position that is defended vigorously (but impersonally) often leads to the best outcome. Without these opinionated selections, a person is acting as an information resource, not as an architect.

Architects tend to be exceptional people, but so can people cast in other roles. The best architects are smart, good listeners, not afraid to take risks, not afraid to be “wrong,” and always seeking to learn. Whether you are an architect or play any other part in the software development process, these are traits that all of us can seek to emulate.

Although a well-trained machine learning model can be used to process complex data, such as predicting a company’s employee attrition rate, it is crucial that the learning model be properly optimized in order to minimize errors and produce accurate results. In this white paper, we will explore the various optimization algorithms that can maximize a model’s learning process and output.

Profiling a mobile application enables developers to identify whether or not an app is fully optimized (i.e., effectively using resources like memory, graphics, CPU, etc.). If the app is not optimized, it will experience performance issues like memory crashes and slow response times. However, profiling a mobile app is easier said than done. Every mobile platform offers tools that are well evolved — and still evolving — to provide profiling data that can be analyzed to identify problem areas. In this blog, we’ll look at which parameters to profile and where to profile them. If you are interested in learning more about how to profile these parameters, I suggest you review the platform-specific documentation below:

Android: https://developer.android.com/studio/profile
iOS: https://help.apple.com/instruments/mac/current/#/dev7b09c84f5

Parameters to Profile

The parameters to be profiled depend on the specific problem. For example, if the problem is slow UI rendering, potential areas to look at are CPU usage and GPU rendering. On the other hand, if the application becomes unresponsive over time, that points to potential memory leaks. If the problem is unknown, then you can profile the application for the following parameters (a small Android sketch for surfacing some of these issues follows the list):

  • CPU Usage: to identify threads and methods that may be taking a greater CPU time-slice than expected
  • Memory Utilization: to identify classes and variables that may be holding on to memory
  • UI Profiling: to identify overdrawing and redrawing of UI components, unoptimized usage of widgets, and deep view hierarchies
  • Battery Usage: to identify unnecessary running processes that are drawing current from the battery
  • Network Usage: to identify unnecessary network calls, or network calls that take too much time or download heavy data and impact the user experience
  • I/O Operations: to identify unoptimized or unnecessary file or database operations
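For several of the parameters above (main-thread disk and network I/O, leaked objects), Android’s StrictMode can surface violations in the log during development, even before a full profiling session. A minimal sketch, assuming a debug build and an illustrative Application subclass name:

```kotlin
import android.app.Application
import android.os.StrictMode

// Illustrative Application subclass; in a real app, guard this with a debug-build check.
class ProfiledApp : Application() {
    override fun onCreate() {
        super.onCreate()
        StrictMode.setThreadPolicy(
            StrictMode.ThreadPolicy.Builder()
                .detectDiskReads()   // file reads on the main thread
                .detectDiskWrites()  // file writes on the main thread
                .detectNetwork()     // network calls on the main thread
                .penaltyLog()        // report violations to logcat
                .build()
        )
        StrictMode.setVmPolicy(
            StrictMode.VmPolicy.Builder()
                .detectLeakedClosableObjects() // streams/cursors that are never closed
                .detectActivityLeaks()         // Activity instances held past destruction
                .penaltyLog()
                .build()
        )
    }
}
```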

Where to Profile

Identifying the areas to be profiled (e.g., screens, features) is the most critical step, as it varies from application to application. If the problem is known, then the area to be profiled can be narrowed down to a particular screen or feature. But if the problem is unknown, then the only option is to profile the complete application. Since most modern applications have many screens and features, you should target specific areas of the application to profile first.

Start of the Application

The start of an application is a critical phase where much initialization and resource allocation takes place. One area to watch is CPU consumption during initialization: some work can be done in parallel, or deferred until the screen or feature that actually requires it. In a modern application that uses dependency injection tools like Dagger 2 (Android) or Typhoon (iOS), there is a good chance that memory has been allocated unnecessarily for injected classes.
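As a sketch of the kind of fix this profiling often suggests, a heavyweight dependency can be constructed lazily on first use rather than eagerly at startup. The AnalyticsClient and AppServices names below are hypothetical:

```kotlin
import android.content.Context

// Hypothetical heavyweight dependency that is expensive to construct.
class AnalyticsClient(context: Context) {
    // ... imagine costly setup work here ...
}

// Hypothetical holder wired up at application start.
class AppServices(private val appContext: Context) {
    // `by lazy` defers construction until the property is first read,
    // keeping this allocation out of the application start-up path.
    val analytics: AnalyticsClient by lazy { AnalyticsClient(appContext) }
}
```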

Loading of the Screen

Similar to the start of the application, individual screens may allocate additional resources that are not required. The time required to load the screen should also be watched, as unnecessary initializations may block UI rendering. Depending on what needs to be initialized, check whether it can be done at a later stage, after the UI has rendered.
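One way to keep such work off the screen-load path is to post it to run after the initial layout and draw rather than doing it inline in onCreate. A minimal Android sketch; the layout resource and helper methods are illustrative:

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class DashboardActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_dashboard) // assumed layout resource

        // Work posted to the decor view runs after the initial layout pass,
        // so it no longer competes with rendering the first frame.
        window.decorView.post {
            warmUpImageCache()   // hypothetical non-critical setup
            prefetchNextScreen() // hypothetical
        }
    }

    private fun warmUpImageCache() { /* ... */ }
    private fun prefetchNextScreen() { /* ... */ }
}
```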

Loading of Scrollable Views

In the mobile form factor, it is common for applications to have screens with scrollable items. If the standard guidelines for creating scrollable views have not been followed, the result can be heavy memory consumption, which needs to be identified. Slowness in loading items also needs to be looked into, as patterns like lazy loading may not have been followed.
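On Android, the standard guideline is RecyclerView’s view-holder pattern, which inflates only the handful of item views that fit on screen and rebinds them as the user scrolls. A minimal sketch:

```kotlin
import android.view.ViewGroup
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

class ItemsAdapter(private val items: List<String>) :
    RecyclerView.Adapter<ItemsAdapter.Holder>() {

    class Holder(val text: TextView) : RecyclerView.ViewHolder(text)

    // Called only for the small number of item views that fit on screen.
    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): Holder =
        Holder(TextView(parent.context))

    // Called as rows scroll into view; only the data is rebound, nothing is re-inflated.
    override fun onBindViewHolder(holder: Holder, position: Int) {
        holder.text.text = items[position]
    }

    override fun getItemCount(): Int = items.size
}
```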

UI-Heavy Screen

A UI-heavy screen needs particular focus, as it may have unoptimized layouts or a deep view hierarchy. Responsiveness should also be checked, as a UI-heavy screen may be backed by equally heavy handling code that takes longer to respond.

Navigation Between Screens

The most common operation in a mobile application is navigating between screens. As such, make sure that resources are properly allocated and deallocated when navigating between screens. Navigation is also a common point for leaking references, which leads to memory leaks.
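One frequent culprit is a long-lived object holding a reference to an Activity after the user has navigated away. A sketch of the pattern and the usual fix; the object and method names are illustrative:

```kotlin
import android.app.Activity
import android.content.Context

// A process-wide singleton: anything it references lives as long as the app does.
object SessionTracker {
    // BAD: would pin the Activity (and its entire view tree) in memory after navigation.
    // var currentScreen: Activity? = null

    // Better: hold only the application context, which already lives for the whole process.
    private var appContext: Context? = null

    fun attach(activity: Activity) {
        appContext = activity.applicationContext
    }
}
```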

Network Operations

Peaks in network operations should be reviewed, as heavy network operations can impact the user experience and also lead to heavy CPU and memory usage. Heavy network operations can often be broken into smaller logical operations, and unnecessary network calls should be watched for.

Repetitive Operations

On many occasions, repetitive operations lead to significant memory leaks. These repetitive operations can include scrolling through list items, fetching data over the network, loading a UI-heavy screen, or navigating between screens.

Keeping the Application Idle for a Long Duration

Ideally, when an application is kept idle for a long duration, its memory consumption should not increase over time. However, background operations may not pause properly, or resource allocation may continue, which can lead to a memory leak.
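A sketch of the usual remedy: tie periodic work to the visible lifecycle so that an idle or backgrounded screen stops scheduling it. The activity and method names are illustrative:

```kotlin
import android.os.Handler
import android.os.Looper
import androidx.appcompat.app.AppCompatActivity

class FeedActivity : AppCompatActivity() {
    private val handler = Handler(Looper.getMainLooper())
    private val pollTask = object : Runnable {
        override fun run() {
            refreshFeed()                       // hypothetical periodic work
            handler.postDelayed(this, 30_000L)  // reschedule while the screen is visible
        }
    }

    override fun onStart() {
        super.onStart()
        handler.post(pollTask)
    }

    override fun onStop() {
        super.onStop()
        // Without this, the task keeps running (and allocating) while the app sits idle.
        handler.removeCallbacks(pollTask)
    }

    private fun refreshFeed() { /* ... */ }
}
```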

File Logging

File logging in release builds should be monitored, as an application may be doing additional or unnecessary file logging, which is an I/O operation. You should also look into the log file rotation policy, as over time log files can consume storage space in the file system.
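
A simple guard, sketched below in Kotlin for Android, is to gate file logging behind the debug build flag and cap the log file size; FileLogger and MAX_LOG_BYTES are hypothetical.

```kotlin
import android.content.Context
import java.io.File

// Minimal sketch: write log files only in debug builds and cap the file size so it
// cannot grow without bound. FileLogger and MAX_LOG_BYTES are hypothetical.
object FileLogger {
    private const val MAX_LOG_BYTES = 1_000_000L   // ~1 MB cap before rotation

    fun log(context: Context, message: String) {
        if (!BuildConfig.DEBUG) return             // no file I/O in release builds
        val file = File(context.filesDir, "app.log")
        if (file.exists() && file.length() > MAX_LOG_BYTES) {
            file.delete()                          // crude rotation: start a fresh file
        }
        file.appendText("${System.currentTimeMillis()} $message\n")
    }
}
```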

Conclusion

Some of the above profiling activities can be achieved using the tools provided by the mobile platform (i.e., Android, iOS), while others require manual effort or code analysis. The ultimate objective of mobile app profiling is to consider the various parameters that could potentially lead to performance problems within your mobile app.

Aphorism: [A] concise statement of a principle [Oxford English Dictionary]

 

Former American baseball player Yogi Berra was famous for aphorisms that at first glance seem reasonable, but on second thought make no sense at all. Some of my favorite sayings of his include one about a favorite restaurant, “No one goes there anymore—it’s too crowded,” and the philosophy, “When you see a fork in the road, take it.” The joke, of course, is that in order for the restaurant to be crowded, lots of people must be going there. Also, by definition, when a road forks, you have at least two options. So, the advice to “take it” doesn’t make any sense at all. They seem sensible—even wise—at first glance, but don’t stand up to scrutiny.

There is another set of aphorisms that are the opposite of Yogi Berra’s. At first glance these sayings seem nonsensical, but on reflection they point to a deeper truth.

Late management guru Stephen Covey liked to say, “The main thing is to keep the main thing the main thing.” At first glance, this makes no sense at all because whatever IS the main thing—to implement that business transformation, to take my startup public, to meet my personal financial or career goals—that objective is of course the main thing, isn’t it? Also, the statement itself is self-contradictory because if I keep the main thing the main thing, then I don’t really have a “main thing” at all, do I?

Stephen Covey quote

The wisdom of Covey’s statement becomes clear when you actually try to accomplish any large goal. The biggest challenge you will inevitably encounter is other demands that take you off course. The bigger and more important your overall goal is, the more opportunities there are to become distracted along the way by things that are not as important, but that are — or seem — more urgent and immediate.

The only way to accomplish your big-picture goal is to keep it as your main objective, in spite of all the distractions that come along. Staying on course is so essential to meeting your goal that unless you put it first, you will fail. In other words, “the main thing is to keep the main thing the main thing.”

Another saying I like comes from software management guru Gerald Weinberg: “Things are the way they are because they got that way.” At first glance this is so blindingly obvious as to seem nonsensical. However, when you are facing a complex situation, it is profound. No matter how chaotic or dysfunctional the situation may look at the moment, there was a cause behind it. When you can figure out why things got to be the way they are, you have already come a long way toward a solution.

Gerald Weinberg quote

In engineering, most people tend to be rational actors most of the time. Most people, even the ones we don’t like or agree with, also tend to be at least relatively smart. This means there was probably a reason why a decision that now seems horribly wrong appeared to be the right idea at the time. I myself have made a few such bone-headed decisions — fortunately not too many, but some. And I’m pretty sure you have, too.

The exact wrong thing to do in such cases is to double down and dig the hole still deeper, or to have a knee-jerk reaction and just do the opposite. The right thing, whether it was your own bad decision or someone else’s, is to take a deep breath, understand what drove the original decision, show some mercy to yourself or the past decision-maker, and then fix it. Until you understand the drivers behind the wrong decision, though, you will never know if your new decision is any better.

Realizing that there were causes behind a current dysfunctional situation is the first step toward looking at it dispassionately enough to make better choices this time. Simply disparaging the previous decision-maker or decision while moving in a new direction can sometimes lead to success. However, it is usually less productive than first figuring out what was behind the old direction in the first place. There may indeed have been a good reason behind what is now clearly a bad choice. You may find that those reasons no longer apply, or that choices were indeed made out of ignorance or other wrong motives. In this case, by all means shed the past and start fresh. However, if there are underlying reasons that still do apply, then you will do better by considering them first before you choose a new direction.

We can all profit from the wisdom of those who came before us; we should never give up on learning. Because, as Yogi Berra once said, “It ain’t over until it’s over.”

Part 2: Making a strategy more “digital”

Part 1 presented symptoms that arise in the process of implementing product strategies that aren’t “digital” enough—meaning that they don’t effectively anticipate or address the emerging needs of product managers and teams. If any of these symptoms are familiar to you and your team, it’s worth discussing how the product strategy might be strengthened to alleviate them.

Did We Go Wrong, or Just Not Far Enough?

A product strategy may have issues simply because it is unintentionally biased by the focus and experience of those tasked with formulating it. A CDO may bring a high degree of “digital” influence to the strategy but lack pragmatic experience building and delivering a variety of product types at scale. A CPO, on the other hand, may produce a very thorough focus on the “What” and “Why” but insufficiently factor in the changes to the “How” that will affect the performance of the product organization as it implements the strategy.

A strategy is often made less effective simply because its approach has been too cookie-cutter. Strategy, in a business context, is a tool for planning, consensus-building, and direction-setting among the executive management. The classic approach to strategy is that stakeholders (often with consultants) articulate a plan for the future — based largely on quantifying the known in order to project the expected — and then make decisions about the future. This strategy is given to the next tier of management to define the details, align budgets, and plan for resources and operational details. After the planning and budgeting cycle is done, the plan gets executed, with the outcome not fully measurable until completion.


Another characteristic of strategies is the urgency surrounding them, often expressed as a reluctance to get feedback or revisit decisions and assumptions (“we already have buy-in, just get going”). This can reflect the challenge of building consensus and getting buy-in, especially where there’s a high degree of uncertainty or disagreement about the future. It may also be a tacit signal of a gut-level awareness that the strategy is incomplete or glosses over potential show-stopping issues, but that it’s politically or culturally unacceptable to dissent.

Just because a product strategy has been produced and agreed to doesn’t mean it is correct or will be effective. And just because a product strategy isn’t all it should be doesn’t mean it can’t be improved.

Killing Zombies: Bringing a Product Strategy to Life

Static strategies can make a product organization, or even a business, look like a zombie: making mistake after mistake, oblivious to anything except what the strategy dictates.

Strategies can become static if they are only used as an initial stage gate, if they provide little to no ongoing value, or if they lose relevance as time goes by because too many things have changed. Strategies are inherently static if they don’t acknowledge, allow for, and facilitate the adoption of changes — especially at levels that are clearly changing or where there is a high degree of uncertainty. Strategies need to be at least as dynamic and evolving as the context to which they are applied.


The goal is to be sure that the strategy doesn’t get stuck in the initial gate, but instead becomes a tool that remains useful. If there is a growing gap between what your product strategy said and what is actually happening, people should be asking: Was the product strategy missing some key points? Are we still actually following a strategic path or are we just bushwhacking?

A static approach can be made more dynamic (brought to life) in three ways, and ideally all three would be integrated as part of an ongoing product strategy function:

  • A systematic effort to identify, understand, and then categorize issues, risks, and areas of uncertainty
    The point here isn’t to overload the strategy with details or to hope to address everyone’s concerns. Since every strategy has to be operationalized and tactically implemented, understanding the categories (and size) of issues, risks, and uncertainties along that path, in advance, gives a better picture of which things are likely to have a broader effect and should be explicitly addressed in the product strategy. This doesn’t mean there will always be a clear answer or decision. However, it allows everyone affected to be aware of the issues, options, and current thinking behind how to address them.
  • A systematic effort to identify how success/failure of product(s) could influence overall success of the product strategy (i.e., develop a portfolio approach)
    By understanding the business value of platform capabilities and products, and potential risks to delivery and market adoption (and impact on expected ROI), potential pivot points and shifts in priority can be identified as options earlier in the planning cycle and better managed. A portfolio approach starts with the idea that not all products are equal because:
    • How you measure value varies across capabilities, products, or features. Some may be of low value for near-term revenue, but instrumental for building competitive advantage. Some products may require market-building in order to scale, or may only be viable as part of a broader solution.
    • Different approaches to product development have different requirements. De-risking through an MVP approach assumes a product that will evolve through a feedback cycle, revealing dynamics that can’t be predicted today.

The idea behind a portfolio approach is that there is no sure bet on future outcomes. Knowing what you will measure and how you will interpret findings in advance means you are more likely to actually do it and be able to make smart decisions.

  • De-risking by looking for the important “unknown knowns” before planning and budgeting initial implementation programs
    There is often useful information available that gets ignored because it doesn’t seem relevant enough to the situation at hand. Sometimes this bias occurs without anyone really being aware of it, because it’s built into the approach to strategy development.

    For instance, a common tool used in growth planning is a framework called the Ansoff matrix. It’s a 2 x 2 with the horizontal axis representing existing and future products, and the vertical axis representing existing and future markets.


Right out of the gate, the focus shifts away from the other quadrants. They haven’t suddenly become less relevant, yet what they might imply is often assumed to be irrelevant to a forward-looking product strategy. Those in charge of product strategy can, however, pull a clever trick: simply decide to make those other quadrants relevant by treating them as inputs and exploring how the quadrants relate to one another over time as the product strategy is put in play. This will surface questions that the strategy should confirm as relevant or irrelevant and, when relevant, explain why, and based on what decisions and implications.

The goal is two-fold: (1) make sure there aren’t unstated assumptions being made by other areas of the business of which the product strategy is unaware, and (2) make sure the product strategy doesn’t inadvertently miss key interdependencies. The following questions are examples of taking this kind of approach when using the Ansoff matrix:

    • What changes in how we currently do business will be needed when we are delivering future products for existing and future customers and to what degree do our product bets assume this will occur?
    • What’s the risk to the product strategy if these changes (in how we do business) don’t happen as planned, or in lock-step with the product strategy implementation?
    • Can we de-risk this in how we define, plan, and launch products, and if so, where and how?
    • Are we expecting existing customers to eventually migrate to new products? If so why, and can they continue to use existing products as they adopt new ones? What does this imply for our product strategy in terms of product adoption requirements/legacy support requirements?
    • Will existing products be relevant to new customers?
    • How much do we know about additional needs of our current customers?
    • How much do we know about needs of our future customers?
    • What changes in requirements for new products, and in the requirements of new customers, would likely make our current approach to product management (definition, development, and deployment) inefficient or unsuccessful?

Conclusion

There may be some reluctance to revisit a product strategy that has already been approved or is well underway. Every business needs to consider the kinds of issues it is facing and weigh the costs and benefits of a bespoke solution (i.e., fixing issues in a product-by-product context) versus a holistic solution (i.e., addressing them at the product strategy level). In either case, at least understand which issues can be addressed earlier and how you can improve subsequent strategy exercises. Taking a more considered approach to measuring the effectiveness of the strategy as it is implemented can also begin to shift the overall mindset away from static strategies, and enable more flexible strategies that provide more value to the teams who implement them.

Symptoms

There is no single “right” approach to developing a product strategy. What makes an effective product strategy varies by industry, business model, stage of business, etc. It’s also a given that a business’s digital strategies will likely need to address areas beyond product strategy.

Many business leaders could confidently answer “yes” to the question posed in the title and justify it by listing out the products and services they think qualify as digital. All this really tells us is that they have digital products as part of their offering. It doesn’t tell us if and how digital has influenced their approach to a product strategy.

The challenge is that “digital” implicitly means that there will be things that need to be accounted for in a product strategy, things that weren’t necessarily considered relevant before. Digital, as a concept applied to business, means that value creation and customer engagement increasingly relies on software and/or software-enabled hardware. Software, as a way to create value, also has different economics than analog value creation.

Digital value creation

And like most things associated with technology today, change is constant. The way software is approached, what it can do, and the process and resources needed continue to evolve. A product strategy is sufficiently “digital” if it understands these points and makes clear the key assumptions being made on these new kinds of issues.

Some might push back, arguing that it’s not product strategy’s role to define tactics and that such issues are better ironed out during planning and implementation. However, this outlook probably won’t serve you well if it enables a culture and a dynamic in which product managers are forced to address broader issues based solely on what makes sense for their own narrow context. It also probably won’t be as successful an approach if it increases the odds that you will need to implement fixes and address issues very late in the game, where the costs and risks are higher (or the budget is gone along with unmet milestone deadlines). If an issue is relevant to all products but is not to be addressed in the product strategy, then where and when will it get addressed?

Often there is too big a gap between what is covered in the product strategy and what is required to enable quality and efficiency throughout the product life-cycle. The good news is that just because a strategy is lacking doesn’t mean there’s nothing to be done about it. To address the situation, business leaders and those tasked with developing and implementing the strategy need to be on the same page, as far as understanding what the symptoms of an incomplete product strategy are, and how and why to deal with them.

Symptoms that a Product Strategy is not "Digital" Enough

One of the qualities often associated with digital is speed. As a result, product organizations are often struggling to increase productivity, quality, and efficiency all at the same time, while inheriting strategic directions that might not be fully actionable. There will always be issues and fires to put out, but if the root causes are misunderstood, attempts to fix and accelerate forward may result in the increase of issues and the spreading of fires.

When symptoms resulting from a static strategy are not recognized as endemic to the strategy, they are treated as if they were confined to a specific product context, and as if solving for that context would fully eradicate the problem. (Smart money bets that it won’t.)

Symptom #1: The product strategy doesn’t anticipate/address the relationship between products and platforms

Just as there are many ways to frame a product strategy, there are many ways to define “platform.” That said, a common approach to developing today’s digital products and services is to create a broad set of capabilities that can be combined/orchestrated as products or services that can (1) be taken to market, (2) be made available to partners/customers through APIs, and (3) be used by the business itself to increase productivity. This is the basic premise of many approaches to digital business, and it is a core plank of a platform business strategy.

When a platform approach is not well articulated and accounted for in a product strategy, the strategy is subject to interpretation and evaluation in terms of products and features. This can lead to confusion and mistakes in how capabilities of the platform and individual products and features get prioritized.

Simply put, if a product or feature relies on a capability, then not having that capability means you can’t deliver the product or feature until you do have it. The prioritization of capabilities directly influences the products and features they support. As a result, the approach to the platform affects all products supported by that platform. It’s not uncommon to find businesses making substantial investments in “platforming” their products, yet product managers and platform architects don’t consider their domains as having significant overlap or interdependence. As such, the business measures progress based solely on product progress.

This lack of common conceptual ground isn’t just inefficient for product managers and teams tasked with delivering products. It can also affect platform development in how microservice architectures are approached. If service decomposition is too tightly coupled to today’s products, it can compromise the future flexibility of the platform.


Misunderstanding the platform-product relationship can also affect the approach to data implied by the product strategy. Without any guidance otherwise, different products may take different approaches, often without a common data model. Data for operational needs may be approached as yet another silo. Products that serve global markets also need to comply with multiple variations of local regulations. With data being a key component to the value of a platform business strategy, not having data addressed or reflected in a product strategy can be a costly oversight. It’s also not a good sign if a separate data strategy exists and it is not reflected in the product strategy.

Symptom #2: The product strategy doesn’t anticipate/address the relationship between changes in products and changes in process

While a product strategy doesn’t need to define the process to be taken on a per product basis, it should reflect the range of processes that the product organization will need to support. And it should anticipate if new products will require new processes.

If there is logical alignment between product, requirements, and process, using the right process increases the odds of success. If new processes are to be used, the biggest risk is people being unaware of the implications for their tasks. If they provide an input to the process and they don’t provide the right input, the effectiveness of the process is compromised. If they consume the output and don’t get what they expected, they may disregard the output and do it themselves, and then the process is considered a waste of time and resources.

There are many ways to approach product development (user centered design, design thinking, lean UX, MVP-based product evolution, agile, scrum, Kanban, automation, etc.). Product managers may opt for a process that deals with risk and uncertainty by not overinvesting in features or use cases that haven’t been validated as providing real value to real customers. The idea is to get something out early and validate assumptions and next steps based on feedback. What is often missing is how this round trip will be done — what data will be collected, how it will be collected, how it will be analyzed, how the outcome will be used to inform ongoing efforts, etc. In the case of platform-based products, it is simply inefficient for each product manager to develop their own approach to data.

As processes evolve, so do the tools, artifacts, and ways of working. Often this evolution is driven by the opportunity to provide benefits and efficiencies across product development efforts. Examples of this evolution include design systems, UI/UX pattern libraries, and approaches to templating and componentization. To be most effective, these tools and artifacts need to be developed and evolved with input and evaluation across functions and stages of development (versus simply rising from the detailed development of a single product). In fact, if a tool isn’t a component of the product strategy, it may never fully come about or serve all the products.

Localization of products often happens after the product has achieved a degree of completion or maturity. Localization may simply mean a version of the product where certain aspects of the UI are different in order to account for differences in language. Many products will require deeper degrees of localization based on the differences in cultures and cognitive approaches of users. In some cases business rules will require substantial changes (e.g., which features might be available, how a product is purchased, how it is supported, etc.). A product strategy should at least indicate which products will likely require localization and how deep this localization might be in certain markets that are key to the success of the strategy.

Symptom #3: The product strategy over-emphasizes innovation but doesn’t adequately define what is meant by the term

While a product strategy needs to be forward-looking, simply calling everything it covers or implies an “innovation” can create problems. This is especially true if business stakeholders have one model of what innovation should mean and the product organization is faced with real constraints that produce a very different model that is not aligned with stakeholder expectations.

The product strategy should provide the context for defining innovation and setting expectations around it. This allows those tasked with implementing the strategy to make the right choice of process and set expectations for outcomes without being blindsided by stakeholders applying very different expectations when evaluating the outcome. The point is not to drive innovation out of the products or the product strategy, but to make explicit what is meant by the term and to create a common understanding of it.

If your goal is to deliver innovation through improvements to existing products for existing customers, but your approach is geared towards disruptive or ground-breaking innovation, it’s highly possible that the outcome might be a version of the product that doesn’t fully cover the existing needs of the customers. Or it might require changes on the customer’s end that they aren’t willing to make.

The product strategy doesn’t need to define the ideal fit between product innovation and market. Experienced product managers know these issues. But they are not always in the driver’s seat of the product strategy. A product strategy that clarifies innovation helps product managers better frame assumptions and requirements around product adoption that should be factored into product definitions and product development. 


Symptom #4: The product strategy refers to design only in the context of a role, or a specific stage of product life-cycle development

Design is both a process and the output of that process. Simply put, a business that sees design as a role confined to a particular stage is really saying design is only about the output. This is an indicator of low design maturity. One of the risks here is the belief (or expectation) that design will increase the overall effectiveness of the product simply by “finishing”, massaging, and filling in around decisions that designers inherit. While it is certainly more efficient to have design function this way within a fast-paced product organization, it’s unrealistic to think that design can fix a bad or poorly informed decision.

Design maturity means acknowledging and understanding how the quality of design output is tied to how well design processes are integrated in other processes (e.g., strategy development, product definition, innovation). It also recognizes that there’s usually a benefit for businesses and customers in having products share common characteristics in terms of quality and character of the user experience (e.g., single sign-on across all products).

Design maturity also recognizes that different design problems often intersect, and that a designer in one area of focus may not have the tools and experience to be effective in another area of focus. A product design team may own the customer experience for the product, but it’s very rare that the same team can address the overall experience the customer has across all products and all stages of doing business with the company.

Conclusion

Every business has the unique challenge of defining what digital means to them, and what it means to the products they bring to market. One of the most important things a business should understand is this: as the business becomes more “digital,” the product strategy has to reflect that shift. It must provide more explicit direction or guidance on issues that will affect all products but can’t be fully solved as part of the development of any individual product.

Part 2 will look at some ways to improve a product strategy that is in play but wasn’t born “digital.”
