Archives

Early in my career, I had an administrative assistant who did not work out, and I had to let her go. She did not have the organizational skills I was looking for, and she seemed bored with the day-to-day work of expense reports, travel planning, etc. While she was serving her two-week notice period, I happened to walk by her desk one day at lunchtime, where she was sketching a beautiful portrait. I asked if she was an artist and she said yes, that was her passion and she worked on it every spare minute. She showed me her work, and it was absolutely incredible.

While I didn’t say anything then, I felt ashamed because I realized that mentally I had been questioning her competency—and maybe even her worth as a human being—because of how well or poorly she did at the job I assigned her. In fact, she was enormously skilled and hardworking—just not at the work I had assigned her to do. She was an amazing individual; she was simply in the wrong job. She showed me that everyone is exceptional at something.

I’ve tried to keep this principle in mind ever since. I've found that if I can discover a person's passion and align it with their work objectives, they will deliver extraordinary results. They will also take huge enjoyment in their work and regularly go above and beyond. In fact, the major challenge once you find someone’s “sweet spot” is to keep them from burning themselves out.

Discovering an Employee's Passion

Shortly after I joined a different company, it underwent a major re-organization. Unknown to me at the time, my hiring was part of the new organization plan. Part of my job was to build a new engineering group within the company, and my boss told me to focus my recruiting efforts on a specific division within the same company. When I was given a list of high-performers to go after, I asked my boss, “What if they’re not interested?” He responded, “Don’t worry, they will be.”

As he predicted, I was successful in recruiting my top candidates—in part because I made the work sound interesting, and in part because rumors were flying that the division was being shut down (which turned out to be true). One of my top recruits was an immigrant to the US who would be eligible for permanent residency provided he stayed employed by my new company for another 9 months. I was aware when he accepted my offer that he was almost certainly just grabbing a lifeline to ensure his immigration status, and that he probably planned to leave as soon as he got it. He was an extremely talented engineer / architect, so I decided to take the gamble that I would be able to find work that he loved and keep him on-board.

In addition to giving this new recruit a broad and challenging role, my approach was to dangle special projects in front of him and see which ones he latched onto. In each case, I’d try to “sell” the project’s interesting aspects and benefits, but at the same time I made no effort to force him to take any of them on. After two or three attempts, I found one that he got really excited about. I could tell he was sold because he started talking about the project as his own idea, even though I had talked him into doing it in the first place.

I knew he was not trying to take credit for “my idea”—on the contrary, he had started thinking of it as his because he’d made the idea his own. In fact, he not only delivered on the original idea, but took it way past the point I had ever envisioned. He grew it until it turned into a completely new and very successful line of business for the company. All I can claim is that, as his boss, I was smart enough to ride the tornado and keep problems out of his way. It was a win for him, for me, and for our company. And, of course, he ended up staying way past the point where he got his green card.

Mapping Passion to Purpose

As a manager, it’s a bit of an art to figure out what people are exceptional at. I’ve learned that the secret is to keep trying until you find an area where they start driving you instead of vice-versa. It’s also a challenge to figure out what people will be exceptional at before you bring them onto your team—though I’ve gotten better and better at that over the years. The key is to not just listen to what a candidate says during an interview, but to observe what they really get excited about. Excitement can of course be faked, but by looking for congruence between their apparent emotions and past behavior, you can at least start to form a picture.

It’s also an art to stretch the lines of a job description so that the job fits the person, but the work still gets done. There’s an age-old dilemma in business as to whether you fit the job to the person, or the person to the job. My own view is that, like the pieces of a jigsaw puzzle, people are simply the “shape” they are. If you gather the right puzzle pieces together on your team, you can fit them together to cover your objectives. Your puzzle may not be a bunch of neat little rectangles within an organizational box, but your team will perform in a way that outstrips everyone’s expectations, including their own.

Planning Your Own Path

Perhaps the final frontier is to figure out what you yourself are exceptional at. This comes naturally to some people, but it took me years to work out for myself. I realized over the years that—regardless of my official job role—I’ve always ended up playing a key role in launching new products to market. I think people could just tell that I was excited about it and good at it, and my bosses either got out of the way or actively pushed me forward.

Unfortunately for me, most product companies spend the vast majority of their time in routine day-to-day implementation. Launching new products—or even bailing out problem ones—is relatively rare, even in large and very innovative companies. That’s why it was good fortune that I landed at a product development services company. Given that we work with a huge number of initial or major product releases, I found that I can now focus on the areas of software development I enjoy the most—the architecture, design, planning, and launch aspects.

It certainly does happen that some people are great at jobs they hate, while others are terrible doing things they love (singing on YouTube comes to mind). But I have found that people will naturally perform better when they do things that bring them energy and excitement—and as managers, it serves us best to help our teams find that sweet spot.

I sometimes wonder whether that admin I had to let go found a place where her artistic talent could flourish and make her a living. I truly hope so. I owe her a debt for teaching me that we all are exceptional at something. We just need to keep looking until we find it—or it finds us.

Machine learning (ML) is quickly becoming a fundamental building block of business operations, resulting in improved processes, increased efficiency, and accelerated innovation. It is a powerful tool that can be used to build complex prediction systems quickly and affordably; however, it is naive to believe these quick wins won’t have repercussions further down the line.

ML has matured over the last decade and become much more accessible thanks to high-performance compute, inexpensive storage, and elastic compute services in the cloud. However, the development and operations processes for applying, enforcing, managing, and maintaining a standard process for ML systems are still an emerging capability for most organisations. Some embark on the journey with confidence, secure in the knowledge that their mature DevOps process will ensure success, only to find that the ML development process has nuances that traditional DevOps does not address. This realisation often becomes apparent only after a significant investment has been made in ML projects, and it frequently results in a failure to deliver.

One of the most effective ways of avoiding many of these pitfalls is containerisation. Containers provide a standardised environment for ML development that can be provisioned rapidly on virtually any device or platform.

What are Containers?

Containers provide an abstraction layer between the application and the underlying infrastructure. This abstraction allows software to run reliably when moved between environments: for example, from a developer’s laptop to a test environment, from staging into production, or from a physical machine in a data centre to a virtual machine in a private or public cloud.

Put simply, a container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerising the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
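
To make this concrete, here is a minimal sketch using the Docker SDK for Python to build and run such a bundle. The directory, image tag, and training command are hypothetical placeholders; the sketch assumes a local Docker daemon and a "./ml-app" folder containing a Dockerfile that pins the ML frameworks and their dependencies.

```python
# Minimal sketch (hypothetical paths and tags): build an image that bundles an
# ML application with its dependencies, then run it as a container.
import docker

client = docker.from_env()  # requires a running Docker daemon

# Build the image from the application directory; the Dockerfile pins the OS,
# framework versions, and configuration so the environment is reproducible.
image, build_logs = client.images.build(path="./ml-app", tag="ml-app:0.1")

# The same image can now run anywhere a container runtime exists: a laptop,
# a test environment, or a cloud instance.
container = client.containers.run("ml-app:0.1", "python train.py", detach=True)
container.wait()                  # block until the job finishes
print(container.logs().decode())  # inspect the training output
```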

Why use Containers for ML?

Containers are particularly effective for MLOps as they ensure the consistency and repeatability of ML environments. This simplifies the deployment process for ML models by removing the complexity involved in building and optimising the ML development and test environments while addressing the risk of inconsistencies introduced by manual environment provisioning.

Some of the immediate benefits of containerising MLOps pipelines include:

  1. Rapid deployment. Using pre-packaged Docker images to deploy ML environments saves time and ensures standardisation and consistency across development and testing.
  2. Performance. Containerised builds of powerful ML frameworks, including TensorFlow, PyTorch, and Apache MXNet, enable the best possible performance and provide flexibility, speed, and consistency in ML development.
  3. Ease of use. Orchestrate ML applications using Kubernetes (K8s), an open-source container-orchestration system for automating application deployment, scaling, and management on cloud instances. For example, with an application deployed on K8s with Amazon EC2, you can quickly add machine learning as a microservice to applications using AWS Deep Learning (DL) Containers.
  4. Reduced management overhead of ML workflows. Using containers tightly integrated with cloud ML tools gives you the choice and flexibility to build custom ML workflows for training, validation, and deployment.

Here are examples of how containers can be applied to resolve key challenges that keep ML projects from running efficiently and cost-effectively:

1. Complex model building and selection of the most suitable models

While in theory it makes sense to experiment with models to get the desired predictions from your data, this process is very time- and resource-intensive. You want the best model while minimising complexity and retaining control over a never-ending influx of data.

Resolution: ML models can be built using pre-packaged machine images which enable developers to test multiple models quickly. These images (e.g. Amazon Machine Images) can contain pre-tested ML framework libraries (e.g. TensorFlow, PyTorch) to reduce the time and effort required. This lets you tweak and adjust the ML models for different sets of data without adding complexity to the final models and gives you more control over monitoring, compliance and data processing.
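
As a rough sketch of this approach, the snippet below runs the same training entry point inside two different pre-built framework images so that candidate models can be compared without hand-building either environment. The image tags, host paths, and script names are placeholders rather than real AWS Deep Learning Container URIs.

```python
# Minimal sketch (placeholder images, paths, and scripts): compare candidate
# models by running each one in a pre-packaged framework image.
import docker

client = docker.from_env()

# One pre-built image per candidate framework (hypothetical tags).
candidates = {
    "tensorflow-model": "example/tensorflow-training:2.4",
    "pytorch-model": "example/pytorch-training:1.7",
}

for name, image in candidates.items():
    # Mount the same data and code into each container so only the framework
    # and model definition differ between experiments.
    container = client.containers.run(
        image,
        command=f"python /workspace/train.py --model {name}",
        volumes={"/home/user/experiments": {"bind": "/workspace", "mode": "rw"}},
        detach=True,
    )
    result = container.wait()               # wait for training to finish
    print(name, "exit code:", result["StatusCode"])
    print(container.logs(tail=5).decode())  # last few lines of training output
```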

2. Rapid configuration changes and the integration of tools and frameworks

The earlier in a project you design, deploy, and train ML models, the easier the process is. The catch is to control configuration changes while making sure that any data used for training doesn’t become stale in the process. Stale data (an artefact of caching, in which an object in the cache is not the most recent version committed to the data source) is one of the reasons many ML models never leave the training stage to see the light of day.

Resolution: Using containers enables the orchestration and management of ML application clusters. One example of this approach uses AWS EC2 instances with K8s. A major benefit of this approach is that pre-packaged ML AMIs are pre-tested with resource levels ranging from small CPU-only instances to powerful multi-GPU instances. These AMIs are kept up to date with the latest releases of popular DL frameworks, solving the issue of configuration changes needed for training ML models. Using cloud-based storage such as AWS S3 addresses the storage requirement for ever-changing and growing data sets. Using K8s, you can then orchestrate application deployment and add ML as a microservice for those applications.
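
As an illustration of the orchestration step, the sketch below uses the official Kubernetes Python client to deploy a containerised model server as a microservice. The image name, labels, and port are hypothetical, and the sketch assumes a kubeconfig is already set up for the target cluster (for example, an EKS or self-managed cluster on EC2).

```python
# Minimal sketch (hypothetical image and names): deploy a containerised ML
# model server as a Kubernetes microservice using the official Python client.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="ml-inference"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # scale out by raising the replica count
        selector=client.V1LabelSelector(match_labels={"app": "ml-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "ml-inference"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="model-server",
                    image="registry.example.com/ml-inference:0.1",  # placeholder
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```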

3. Creating self-learning models and managing data sets

The best way to achieve self-learning capabilities in ML is by using a wide range of parameters to test, train and deploy models. You need to be able to handle rapid configuration changes; have a monitoring platform for ML models; and set up an autonomous error handling process. You also need enough storage to integrate ML clusters with the inevitable expanding data sets and the continuous influx of new data.

Resolution: An increasingly popular and proven approach is to use Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (ECS), and Amazon SageMaker. EKS enables you to monitor, scale, and load-balance your applications, and provides a Kubernetes-native experience to consume service mesh features and bring rich observability, traffic controls, and security features to applications. Additionally, EKS provides a scalable and highly available control plane that runs across multiple availability zones to eliminate a single point of failure. Amazon Elastic Container Service is a fully managed container orchestration service trusted with mission-critical applications because of its security, reliability, and scalability. Amazon SageMaker is a fully managed service that gives every developer the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models.
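
To show what that removed heavy lifting looks like in practice, here is a minimal sketch using the SageMaker Python SDK to launch a containerised training job. The image URI, IAM role ARN, and S3 paths are placeholders and would need to be replaced with real resources in your own account.

```python
# Minimal sketch (placeholder ARNs, URIs, and buckets): launch a containerised
# training job with the SageMaker Python SDK.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-training:latest",  # placeholder
    role="arn:aws:iam::123456789012:role/MySageMakerRole",                        # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-artifacts/",                                # placeholder
    sagemaker_session=session,
)

# SageMaker provisions the instance, pulls the container, mounts the S3 data,
# runs training, and uploads the resulting model artefacts to output_path.
estimator.fit({"training": "s3://my-bucket/training-data/"})
```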

How can we help?

Organisations can overcome their ML worries by partnering with GlobalLogic to deploy MLOps using containers. No matter where organisations are on their ML journey, GlobalLogic can guide them to take the next step to ML success.

Our expert team has a track record of deploying and managing complex ML environments for large enterprises, including highly regulated financial services (FS) institutions. GlobalLogic’s ML engineering team uses AWS DL Containers, which provide Docker images pre-installed with DL frameworks. This enables a highly efficient, consistent, and repeatable MLOps process by removing complexity and reducing the risk associated with building, optimising, and maintaining ML environments.

We as humans have a tendency to adjust rapidly to our environment and to begin to consider it “normal” in a very short time. This has probably been key to the survival of our species: we can’t afford, biologically, to be constantly triggered by recurring events. Instead, we set a new baseline and then are “aroused” only by changes to that baseline.

We’re all familiar with entering a new environment and at first noticing a distinctive odor or sound—baking bread, a person’s perfume or cologne, or the whine of an aircraft engine, for example. Then, within a few minutes, we are no longer consciously aware of it. Although we see this same “habituation” effect happen at a macro level in the technology world, every once in a while the novelty of the situation shines through, and we have a little epiphany—or in the technology space, what we might call a “science fiction moment.”

When I worked for a previous company and was spending a lot of time in India, my company assigned me a car with a driver. Having a local driver was a necessary safety factor given the driving conditions in India in those days, and it was a common practice both for visiting foreigners like myself, as well as for many locals. Coming from the US and being used to driving myself, I found having a driver to be very awkward at first. It seemed absolutely unfathomable to have someone drive me to work and then sit and wait many hours until I was ready to leave. I felt guilty about it.

Even though having a driver in India was both common and reasonably inexpensive (by US standards), the idea that I was keeping an actual person waiting on me—literally—all day long was hard to get used to. But I did get used to it. In just a few weeks, I not only enjoyed having a driver, but I began to appreciate the advantages. For example, I was able to ask him to run errands for me while I was working, enjoy his conversation on long drives, and appreciate my favorite coffee “to-go,” which he’d get for me before he picked me up in the morning. In short order, I was thoroughly “spoiled.” While I still very much appreciated—and over time became friends with—my driver, I no longer felt guilty about his waiting for me when I was busy (unless I was going to be very late). In other words, I became thoroughly habituated to this new experience.

I see the same effect when I travel. I travel a lot on business, and generally I am so focused on my work that I am not too aware of the novelty of my surroundings. Every once in a while, though, something will happen, and I’ll notice what a fantastic place I’m in. We ran an architecture workshop for a client in Paris, for example, in a conference room that had an amazing close-up view of one of the major Paris landmarks, the Arc de Triomphe. As we conducted the workshop or took a break, I’d glance out the window and think “I love my job!”

There are other magical moments—a dinner with my colleagues in an outdoor public square in Liechtenstein, eating roasted chestnuts from a street vendor in Zurich on a cool fall day—that punctuate the habituation of frequent business travel. These and other such moments remind me of something I’ve become habituated to and so often take for granted: what amazing places I’m privileged to visit, and how lucky I am to do such interesting work with such great people.

Something similar happens with technology. When we get a new technology or device, we often feel a sense of fascination or delight. This quickly fades, and while we still enjoy the benefits we get from that device, we start to take them for granted. Then something happens that reminds us of what an amazing era we live in.

This happened to me the other day. I drive a Tesla that has a “navigate on autopilot” feature. This feature was introduced about a year ago (as of this writing), and I’m fairly used to it by now. However, I always enjoy how the car automatically navigates the freeway exit nearest my house. The freeway exit ramp makes a fairly sharp right turn and then a complete U-turn before it joins the major intersection that I take to get home. If you take your hands momentarily off the wheel, it’s pretty obvious that the car is following the road and steering all by itself. The other day I happened to be using Siri voice commands to send some notes to myself at the same time that my car was automatically driving itself around this exit and toward my home. I didn’t consciously plan these things happening at the same time, but it struck me very forcefully that I was having a science fiction moment.

The situation of having a spoken conversation with my “pocket computer” while being automatically driven home by my artificially intelligent car was literally science fiction just a decade ago. We’re not all the way there with either technology, of course. But every once in a while, something like this will happen to remind me that we’re living in a future that people only dreamed about just a short time ago.

I don’t think we can avoid becoming habituated, technically or otherwise; it’s hardwired into us as humans. I think we can, however, stay alert to situations that remind us of what exceptional times we live in, and what exceptional opportunities we have.

All the best for a joyous and prosperous New Year and the upcoming 2020s.

In the 1960s, sociologist Everett Rogers produced a roadmap showing how innovations are adopted and, eventually, become obsolete. Later, author Geoffrey Moore wrote a book called “Crossing the Chasm” that detailed how companies and technologies succeed or fail to progress from “early adopter” to “early majority” status. Moore’s work further popularized Rogers’ categories, and words like “innovator” and “early adopter” have become a firm fixture of the Silicon Valley and worldwide technology vocabulary.

Fig 1: Diagram based on E. Rogers’ “Diffusion of Innovations,” 1962. Courtesy of Wikimedia Commons.

For many companies who depend on technology, the pragmatic “sweet spot” on the technology adoption curve lies somewhere between the early majority and late majority. By the time a technology begins to be adopted by the early majority, many of its initial challenges have been overcome by the innovators and early adopters. The benefits of that technology can now be realized without the pain those pioneers had to go through. Also, a substantial community of companies and developers are in the same position, so resources, training, tools, and support start to become widely available. At the same time, the technology is new enough that the best engineers and architects will be excited to learn and work with it—it’s a motivator to attract talent.

This assumes, of course, that the new technology delivers benefits. But, generally, if it “crosses the chasm” and gets to the early majority phase, that has already been soundly proven. For example, digital natives like Amazon, Google, and Facebook were early adopters of a variety of then-new technologies. Their risk—and success—subsequently paved the way for the vast majority of companies that now follow in their footsteps.

Most technology-enabled businesses can survive and thrive with technology that is one generation—or even two—behind the technology being used by the early adopters. Once a technology becomes older than that, though, lots of problems come up:

  • It becomes harder to attract and retain good talent.
  • System uptime, stability, and scalability become less competitive relative to more modern systems.
  • The user experience and overall system quality suffer; security threats cannot be readily countered.
  • Good technology options produced by other companies and the open source community become less abundant.

Companies whose technologies fall into Professor Rogers’ “laggard” category will generally experience these issues first-hand, whether or not they recognize that their technology is the cause.

By nature, the specific technologies that fall into each category are moving targets, and meaningful market adoption statistics are hard to come by. Forbes reported in 2018 that 77% of enterprises have a portion of their infrastructure on the cloud, or have at least one cloud-deployed application[1]. This figure resonates with our own experience, but it still does not tell us what percentage of new revenue-generating applications are created using cloud-native / mobile-first architectures, or how aggressively businesses are migrating to the cloud. Our experience suggests “nearly all” and “it varies,” respectively.

Classifying Technology from a Practitioner’s Perspective

To provide a practitioner’s perspective on technology adoption, we decided to create a classification based on our own experience with clients, partners and prospects. Collectively, because of our business model, this set of companies cuts a wide swath through the software product development community, including startups, ISVs, SaaS companies, and enterprises. Because GlobalLogic’s business focus is on the development of revenue-producing software products, the enterprises we work with generally either:

  • Already use software to produce direct / indirect revenue
  • Recognize that software has become a key component of their other revenue-generating products and services
  • Are more conventional but aspire to be more software-centric
  • Are “laggards” (usually recently acquired technologies and companies whose technologies are in need of a refresh)

So what trends are we seeing in the technologies used by this diverse group of software-producing companies? Before we start, please note that the classifications we make here are about the technologies, not the companies. Even the most innovative company probably has some “laggard” technologies deployed. Similarly, even very conservative companies may be early adopters in some areas. For example, some otherwise ultra-conservative banking software companies are incorporating cryptocurrency support.

As of late 2019, we categorize the various technologies currently in use among our partners, prospects and client-base into five categories: innovators, early adopters, early majority, late majority, laggards. (If you find the term “laggard” offensive, please note that we use it because it is sociologist Everett Rogers’ term, not ours. It should be read as only applying to the technology, not the company or the people.)

Fig 2: Technology Adoption Picture in Late 2019

Innovators

Relatively few of the technologies currently under investigation or in use by innovators ever reach the early adopter stage in their current form, but the work done by innovators informs the early adopters and the entire ecosystem.

Late 2019 innovator technologies include:

  • Quantum computing
  • Strong AI (that is, systems that “think” like people)
  • Network-distributed serverless

GlobalLogic has the same curiosity about these types of technologies as other engineering-focused companies tend to have. However, these technologies are primarily in the research phase and not the revenue-generating phase, so our commercial engagements here tend to be limited.

Early Adopters

Technology in this bucket has widespread availability, but it is either in a relatively rough state that has not been fully productized, or else is an obviously good technology that is still looking for the right opportunity to go mainstream. While a substantial number of these technologies will eventually be taken up by the early and late majority in some form, the timing of this event—as well as the specific winners and losers—has not yet become clear. For example, at this writing, cryptographically secure distributed ledgers will clearly become mainstream at some point. However, will blockchain specifically be the big winner? Out of such bets, fortunes are made or lost.

Early adopters need to invest time and energy to find and then fill the gaps and missing pieces of an incompletely productized technology. However, the rewards of using early adopter technology can be very large if it addresses a genuine need that can’t be readily solved using other approaches. For example, Bitcoin, an early application of cryptographically secure distributed ledger technology, has paid off big for many people, though arguably this technology has yet to become mainstream for business applications.

Within our client community, current early adopter technologies include:

  • Serverless functions or “Function as a Service” (FaaS)
  • Serverless containers
  • Blockchain
  • Deep learning (different than strong AI)
  • Computer vision
  • AR/VR (outside of gaming and entertainment)
  • Gestural interfaces

We actually have clients using all of these technologies commercially today. For example, we work on public safety systems and autonomous vehicles that use computer vision and deep learning. However, these applications fall into the early adopter / risk taker / first mover / niche application category, rather than what could be considered mainstream business applications. We, along with many others, firmly believe that many of these technologies are rapidly maturing and that they will indeed enter the mainstream in the next few years. But as of right now, we could not claim that they have become part of the mainstream today.

Early Majority

Technology entering the “early majority” bucket is initially a little scary to the mainstream, but it has been thoroughly worked over by the early adopters (and the even earlier entrants into the early majority) and tested in real-life deployments. The blank spaces, boundaries, and rough spots have been largely filled in, and the technology has now been “productized.” Tools and support are available, together with experienced personnel. For many businesses, this is the sweet spot for new product development: early enough to give you a meaningful competitive edge and to be attractive to talented engineers, but not so early that you need to invest the time and energy required to be a pioneer. Early majority technologies also have the longest useful life, since they are just entering the mainstream adoption phase.

Right now among our customer, prospect, and partner base, we see the early majority adopting containerized, cloud-native, event-driven microservices architectures, fully automated CI/CD deployment, and “Infrastructure as Code.” We saw this trend starting back in 2015 among our mainstream-focused clients.

Enterprises who are developing new systems or extending older ones are widely adopting:

  • Modern NoSQL databases (as opposed to ancient versions of NoSQL)
  • Event-driven architectures (microservices-based and otherwise)
  • Near real-time stream processing
  • DevOps / CI/CD / “Infrastructure as Code” / Site Reliability Engineering
  • Containerized cloud-native microservices architectures

On the user experience front, we are beginning to see a significant uptick in mainstream clients who are interested in dynamically extensible “micro front-end” architectures.

Late Majority

Revenue-generating software systems generally age into the late majority. They tend to be created using technologies that were early majority when the system was built, but time has gone by, and those same technologies now fall into the late majority category. This applies to any company that expects to make money from the software—either by selling it (in the case of an ISV or SaaS company), or as part of a product or service (e.g., a car or medical device).

For enterprise-developed non-revenue generating applications (e.g., internal back office or employee-facing systems), the situation is somewhat different. Because cost control and low-cost resource availability are primary drivers, internal-use applications are often developed using lower-cost technologies that are now in the late majority stage. Late majority technologies also enable the use of less expensive resources who may not be skilled in early majority technologies.

As an aside, this attitude toward late majority technologies is one reason for the dichotomy between IT-focused organizations and product-focused organizations, both within a given enterprise and in the services businesses that support them. Product-driven organizations and services businesses tend to be skilled in developing systems using early adopter and early majority technologies. IT-focused organizations focus on sustaining systems that use late majority and sometimes laggard technologies. This is obviously an oversimplification, as both product- and IT-focused organizations can certainly be skilled in the full range of technology options. However, product- and IT-focused companies tend to have different attitudes, approaches, and “DNA” with respect to the different stages of technology maturity.

For revenue-generating applications, while development cost is always a factor, time-to-market, competitive advantage, and the overall useful life of the resulting product are generally more important than cost alone. This desire to maximize the upside potential generally drives new revenue-producing app development toward early majority technologies, while non-customer-facing / non-revenue producing / internal-facing applications tend to use late majority technologies to save money.

As of late 2019, the predominant late majority architectural approach is:

  • A true N-tier cloud-deployed layered architecture, supporting stateless REST APIs and JavaScript Web / mobile native clients
  • RDBMS-centric systems using object-relational mappings (ORMs)

Good implementations of these architectures have strong boundaries between layers exhibiting good separation of concerns, are well componentized internally, and may be cloud-deployed. This is a good, familiar paradigm, and we expect elements of it to persist for some years (e.g., the strong separation between client and “server” through a well-defined stateless interface). However, even the best implementations of the N-tier architecture lack the fine-grained scalability that you can get with early majority microservices technology. Many implementations of this paradigm also tend to be built around a large central database, which itself limits the degree to which the system can scale in a distributed, cloud-native environment.

If history is any indication (and it usually is), we believe the majority will—perhaps reluctantly—leave the N-tier paradigm behind in favor of a cloud-native microservices approach within the next several years.

Laggards

Given its negative connotations, we would really prefer not to use this term. Let’s keep in mind, however, that the term was introduced by Professor Rogers to refer to specific technologies, not the company or the people who work with them.

Laggard technologies are those that are not used to any significant degree for development of new software systems today, either for revenue-producing products or for internal-use systems. Time has passed these technologies by, and they have been superseded by other technologies that the vast majority recognize as being superior (at least 84%, according to the curve).

People use laggard technologies only because they have to. Systems based on laggard technologies are still in production, and these systems must be actively enhanced and maintained. Within a given organization, these activities require creating a pool of resources who have knowledge of the laggard technologies. The proliferation of these niche skillsets within the company can drive the creation of new systems using the same laggard technologies, even when better options have become widely available.

For technologies in the laggard category, multiple generations of improved technologies and architectural approaches have, by definition, now become available. In general, these improvements make development easier and faster, scalability and reliability higher, user experience better, and operations cheaper. Nonetheless, companies can find themselves locked into laggard technologies because that is the skillset of their workers. Getting out of this bind is disruptive, and “digital disruption” has become a frequent refrain in the industry.

Current technologies that fall in the laggard category include:

  • Microsoft Access style “two-tier” (real or effective) client / server architectures with tightly coupled UIs and logic in the database (SPROCs, etc.)
  • Stateful / session-centric web applications
  • Conventional “SOA” / SOAP architectures
  • “Rich client” systems (Silverlight, Flash / Flex, and many but not all desktop systems)
  • Legacy mainframe-centric systems

In general, any technology that was used by early adopters 20 years ago or longer is a candidate for the laggard bucket.

Conclusion 

Technology stays current for a surprisingly long time. Specifically, some major technologies have stayed in the “majority” category (early and late) for about 16 years and, in a few rare cases, even longer. That’s enough time to raise a child from birth to high school. But, as those of us who have raised children know, while time may seem to stand still day-to-day, looking back it passes by in the blink of an eye.

On the technology front, tiered architectures with REST APIs may still seem modern and current—but in fact, the early adopters were using them in 2002, and they became mainstream in 2006. If past history is any indication, N-tier architectures will enter the “laggard” category by 2022.

Like Cleopatra—of whom Shakespeare’s character said “age cannot wither”—not all technology ages at the same rate. Some technologies that have reached, or nearly reached, the 20+ year mark while remaining vital include:

  • The stateless REST interface paradigm
  • HTML/CSS/JavaScript web applications
  • Modern NoSQL
  • Wi-Fi
  • Texting (SMS)
  • Apple’s OS X operating system (originally NeXTSTEP)

However, where technologies are concerned, remaining relevant in old age is the exception, not the rule. The technologies and paradigms that have stayed current have not remained static; they have evolved continuously since their early beginning. Good systems do the same—generally by steadily incorporating “early majority” and “early adopter” technologies to keep themselves fresh.

[1] https://www.forbes.com/sites/louiscolumbus/2018/08/30/state-of-enterprise-cloud-computing-2018/#24d16798265e

The role of mobile in retail has expanded dramatically in the last few years. Mobile is not just a platform for consumers to browse and purchase products; it has grown to provide immersive experiences using Augmented Reality (AR). Mobile empowers store associates to enhance productivity and provide a personal experience to their customers. Mobile is not limited to the smartphone, either; it is an ecosystem of devices that provides a connected experience. These devices include voice assistants, smart speakers, microwaves, video doorbells, home security systems, kitchen appliances, dash cams, and more.

Below are 7 trends that demonstrate how mobile is playing a key role in empowering consumers and retailers.

1. Voice Commerce

More than 100 million Alexa devices have been sold so far. From TVs to speakers, cars to refrigerators, we interact with Amazon Alexa or Google Assistant through all sorts of devices. This trend is not slowing down; we are going to see these voice assistants become integrated in many more devices. These assistants provide a new medium through which consumers can connect to brands and purchase products. Voice assistants are good for reordering consumables that users are already familiar with (i.e., where customers don’t need to see pictures or read reviews).

Voice commerce is a lot different from desktop or mobile app commerce, where users can see product descriptions, view promotions, read reviews, and view product images. Voice commerce is complex, and its UX must be designed from scratch. It can also complement a user’s online shopping experience by allowing them to ask about an order status or available offers, or to check their reward points. Adding a screen to these assistants can take the shopping experience to the next level.

2. Augmented Reality (AR)

For the past several years, Augmented Reality (AR) has been a buzzword with not much success in retail. The launch of Apple’s ARKit and Google’s ARCore made it possible to provide immersive AR experiences on smartphones. Gartner predicts that, by 2020, 100 million consumers will shop via AR, both online and in-store. Ikea was one of the first retailers to adopt AR, and Wayfair soon followed. With the retailers’ apps, customers can measure a room in their house and “place” furniture in it. AR can also be used in these other retail use cases:

  • Virtual Fitting Rooms: Try on products like shoes, jewellery, make-up etc. (Nike, Puma, and Sephora are already doing this). A Smart Mirror can also help customers virtually try on clothes, change colour options, and then order products right from the mirror.
  • Product Demos & Information: Point your phone’s camera at a product to see details and price information, view a demo, and get further recommendations.
  • Product installation: Point your phone’s camera at a product’s QR code to view a step-by-step installation guide.

This is just the beginning; more and more retailers will come up with great immersive AR experiences.

3. Personalization

Consumers are demanding more personalised experiences across all retail touch points, from product discovery, to product purchase, to post-purchase services. They are willing to share meaningful data that retailers can use to provide better product recommendations and contextual offers.

Enormous progress has been made in the field of Artificial Intelligence (AI) and Machine Learning (ML) over the past few years to help create accurate models and provide personalised shopping experiences. By applying AI and ML to enormous amounts of data, retailers will be able to predict what their customers want before the customers themselves know. Retailers can then provide all this meaningful data to store associates via a mobile app and thereby empower associates to help customers choose the right product and be a part of their shopping journey.

4. Experiential Retailing

Most physical stores have not been able to meet customers’ changing expectations, and they have been dying one after another. Customers love to shop in physical stores, but they demand a better experience. Retailers that are not able to understand these expectations and adapt to them will collapse. Meanwhile, e-commerce players like Amazon, Warby Parker, and Casper have opened their own physical stores to provide an unparalleled retail experience. These stores don’t just sell products; they provide opportunities to connect with consumers and tell their brand story.

Experiential Retailing is the new trend where consumers come to a store to interact with a product, hang out with friends, and then make purchases through a variety of channels. Products are equipped with smart displays or tablets to show product information and videos. Customers can also make a purchase through these tablets and have a product shipped to their home. These tablets also capture analytics about how customers interact with a product and its information. By intelligently using technology, stores empower their associates to assist customers personally and create “aha!” moments for them. Before talking to a customer, an associate will already know the customer’s preferences and can provide personal assistance.

A great example of Experiential Retailing is Toys R Us. Back in 2017, Toys R Us filed for bankruptcy and closed nearly all 800 of its stores. Now the retailer has partnered with a startup called b8ta to provide a new type of toy shopping experience for families. In these smaller flagship stores, kids can check out new toys, watch movies, participate in STEAM workshops, and more. After families get hands-on experience with the products, Toys R Us offers them the opportunity to make purchases both in-store and online.

5. Same-Day Delivery

Consumers are no longer satisfied with 2-day delivery. They demand more — they demand products now. Retailers are trying hard to speed up delivery time to same-day or even just a few hours. For example, Amazon will start delivering products by drone within a few hours after an order is placed. Soon this will expand to other retailers and even food delivery. By 2020, same-day delivery will be the new normal. Customers will go to stores to try on a product, choose home delivery, and the product will be shipped to their home the same day.

6. BOPUS & Stores as Fulfilment Centres

“Buy online & pick-up from store” (BOPUS) has already seen enormous success. This trend will continue to spread across most retailers. This is a very important aspect of providing an omnichannel experience, and retailers must make sure their BOPUS customers can pick up their products as quickly as possible. Measures they can take include providing reserved parking / store entries, moving their pickup point closer to the store entry, or providing pickup lockers outside the stores.

Retailers have also started fulfilling online orders from their physical stores to speed up delivery time. Same-day delivery from Amazon has pressured brick-and-mortar retailers to use their physical stores at full capacity. Instead of building new warehouses near every city, retailers will utilize their stores as fulfilment centres. To make associates more effective, retailers are also revamping their in-store technology by implementing smart apps, RFID inventory counting, mobile checkouts, etc.

7. Seamless Multi-Channel Experience

Retailers need to provide a seamless experience across their online and offline channels. Gone are the days when offline and online channels worked in silos. Customers often start their journey on mobile and end up making a purchase on desktop or in physical stores. Customers expect a unified experience from retailers; they want knowledgeable store associates who can help them find the product they saw on their mobile app.

Summary

Consumers have fundamentally changed. Their engagement with new technologies and digital services has driven their expectations higher and higher. They’re now demanding useful, engaging, and assistive experiences from all the brands they interact with. Retailers are going through a major digital transformation to meet the expectations of demanding customers, and they should seriously consider these trends and align their digital strategy to keep mobile as a key driver.


It’s 2077. A 95-year-old man, Martin, begins his day with a wholesome breakfast followed by a healthy walk prescribed by his caregiver. He is feeling fine. He has long been looked after by specialists who monitor his health and know everything about his illnesses and ailments. He still has dozens of years ahead of him before he turns 122, the current average life expectancy of humans living on Earth.

This vision, which may sound like it was taken from a science fiction movie, is already being worked on today in GlobalLogic laboratories — and will soon become a reality. All thanks to internables.

We are surrounded by Internet of Things (IoT) devices at all times, and we interact with them every step of the way. Reports confirm that this relatively new technology has gained recognition the world over. Today, there are already 26 billion active devices, and by 2025 this number will be three times higher.

Smartwatches and smartbands fit this mobility trend perfectly, with the widely promoted “healthy lifestyle” reinforcing the extensive user base for these devices. We all want to improve our health and stay in good shape, and the inconspicuous yet robust smartbands and smartwatches assist us greatly in doing so. Hardly anybody who has tried an IoT device for their workout goes back to training without it. With the popularity of dedicated apps; sensors that monitor mileage, heart rate, and burned calories; and virtual trainers that create personalized workout plans, it is hardly surprising that 70% of the IoT devices currently trending are focused on health and physical activity.

This IoT treasure trove of health tech — which helps consumers feel safer and save time and money — will soon be extended with internables. So what is this technology all about? In a nutshell, internables (also known as implantables) are sensors implanted inside the human body to naturally enhance the capabilities of health equipment.

Only in the Movies?

Most of us associate these solutions with books, comics, games, and movies. Nanomachines and cyberimplants are the staples of the virtual worlds created by game developers (e.g., Deus Ex series or the upcoming Polish hit Cyberpunk 2077). We can also see some practical applications of user-implanted devices in several episodes of the TV series, Black Mirror. However, we no longer need to delve into the realm of pop culture and science fiction to identify what internables can do. As it turns out, this technology has already been applied in the world we live in today.

The medical sector has always been among the first to implement the latest technology solutions on a wide scale, prioritizing those capable of extending patient care while reducing costs. This has been exactly the case with IoT devices. Forecasts indicate that by 2020, 40% of all active IoT devices worldwide will be used in this sector. Consequently, internables shouldn’t be seen as merely a sensation; they are the next step in a series of groundbreaking biotechnology-based medical projects.

Engineers and scientists have joined forces to better monitor patients’ health and advance the telehealth sector. They have also harnessed existing technologies to fight well-known illnesses and ailments. The range of activities underway is extensive, with various milestones already reached — from insomnia-alleviating sleep bands that use the human body’s natural ability to transfer sound through bones, to designs for miniature robots (“nanomachines”) that will move inside the human body to deliver medicine to a targeted point in the system. For example, nanomachines that look like a cross between a whale and an airplane will be used to — among other things — effectively combat cancer.

Internables offer particularly high hopes for neurosurgery. The bold designs presented in recent months include devices that enable paralyzed patients to control their limb movements, and microdevices that stimulate individual neurons to help treat Alzheimer’s.

Internables at Your Service

Internables are regarded as the key driver to advancing telehealth because they will enable a smoother exchange of information between specialists and users, resulting in an unprecedented scale of care. In the future, individual vital parameters of the human body may be regularly relayed to — and recorded on — users’ digital health cards for faster disease diagnosis and more detailed disease monitoring. These cards could facilitate better communication — not only with medical caregivers, but also with trainers — so that an adequate diet and fine-tuned workout can be prescribed based on the user’s current health status.

Internables can also be implemented in other sectors, like automotive. The swift development of smart cities and smart cars makes traditional cockpits and driver–vehicle communication methods obsolete. For example, GlobalLogic is currently working on a project called GLOko that explores services related to image processing, such as real-time head and eye tracking solutions. Internables could easily be integrated into these solutions to enhance driver-to-vehicle communication.

A New Dimension of Privacy

The vision of the future where we live happily ever after assisted by technology is very appealing. Who wouldn’t like to be able to record their chosen memories and come back to them at any time? How many people would be able to overcome a disability or illness? However, internables present just as many challenges as they do opportunities.

The idea of privacy acquires a whole new meaning with internables. We are not talking about stolen cars or hacked PCs, but about potentially life-threatening risks. Cybercriminals will definitely not pass up the opportunity to hack and blackmail internable users — such as hacking cardiac pacemaker setting apps. Consequently, it is crucial to establish adequate procedures and protections to prevent any fatal consequences, and to work out mechanisms that will dispel any concerns over compromised privacy and unauthorized surveillance. This will require some effort, but it will certainly pay off, as we all want to enjoy long lives in good health and peace.

Conclusion

Time and again, technology has opened up incredible opportunities and new paths for civilization to develop. Internables are undoubtedly another chance for us to live longer, better, and safer lives. Their success, however, depends on the actions taken by companies all over the world.

Only by anticipating the possible negative outcomes of misusing technology at an early stage can we properly protect our users from potentially unpleasant consequences.

At GlobalLogic we face such challenges on a daily basis. We accept this as we strive to harness the potential of internables, which in a few years, perhaps, will change the world as we know it.

Many roles in software development tend to be mislabeled as “architects.” Although these roles are just as vital, using incorrect definitions can lead to miscommunication and unrealistic expectations.

As I work with companies on their digital transformation initiatives, I engage with many software architects, both in those companies and within GlobalLogic. I see many people with the title “architect” who are not what I would call an architect—they actually perform other, distinct, functions. Many of those functions are vital, and saying these people are “not architects” is in no way meant to disparage them or their role. But if everyone is called an “architect,” it certainly makes things confusing.

This confusion is widespread. If you search for “software architect definition,” you will see many alternatives that I believe are useless or, at the least, very confusing. Many of these definitions involve creating technical standards, planning projects, and doing other activities that are, in my view, not at all central to architecture itself. It’s not that architects can’t do these things, but you can still be an architect and not do them at all. Let’s take a look at a pragmatic definition of an architect.

In my view, a software architect is a person who figures out how to solve a business or technical problem by creatively using technology. That’s it. Under this definition, many people perform architectural activities, including individual software engineers. In my opinion, engineers are indeed doing architecture when they sit down and think about how they will solve a particular problem, before they start work on actually solving it. It may be “low level” (i.e., tactical) architecture as opposed to “big picture / high-level” architecture, but it’s still architecture.

The difference between an engineer and an architect is their focus: an architect spends the bulk of their time thinking about “how” to solve problems, while the engineer spends most of their time implementing solutions. Being a software architect is not necessarily a question of capability; it’s a question of focus and role.

Traits of a Software Architect

Solves Problems

The most important characteristic of an architect is the ability to solve problems. The wider and deeper the range of these problems, the more senior the architect (in terms of skill—not necessarily years). Some architects focus on network issues, physical deployments, business domain decomposition and “big picture” architecture, integration with existing systems, or even all of the above. But regardless of their focus, the principal task of an architect is to determine a good solution to a problem. It’s not to supply information, coordinate other people, or do research—it’s to describe the solution. The output of an architect is a description or roadmap saying how to solve a problem.

Focuses on “How”

Smart people often have an aversion or even disdain for spending very much time on thinking about “how” to solve a problem. Instead, they want to jump immediately into solving it. This is either because the solution seems obvious to them or because they don’t realize there is value in focusing first on “how.” I remember having this attitude myself in grad school when I was asked for a “plan of work” to solve a particular physics or math problem. I would generally solve the problem first, and then afterwards explain how I did it, presenting my reverse-engineered activity list as the “plan.”

Either because my brain has slowed down, or I’m more experienced, or I’m dealing with more complex problems now—or maybe some combination of all three—I’ve come to value the “how.” In software, there are always many ways to solve a given problem. Of those possible solutions, more than one will generally be a “good” solution. The reason that we separate the “how” from the implementation itself is to give us space to craft and choose among these good solutions, and to examine and reject the bad ones.

Thinks Holistically

To deliver a good solution, an architect must first holistically understand the problem. Problems always have business impact, although frequently they are positioned as purely technical problems. An architect needs to understand the context of the problem they are solving before they can provide a good solution. This requires drawing people out, often for information they don’t necessarily realize they even have or need.

A good architect needs to be a good listener and relentless in tracking down not just what the problems are, but also “why” something is a problem or an opportunity. Since the non-technical side of the company may have little insight into the business impact of a technical decision, it falls to the architect to assess and communicate these impacts in order to choose a good solution.

Uses Technology Creatively

Not every good architecture is novel. In fact, a solid, tried-and-true solution to a standard, recurring technical problem is nearly always better overall (in terms of development and maintenance costs) than a “creative” approach that is different for its own sake.

That being said, after working with literally hundreds of system architectures over my career, I can’t think of a single one that does not have at least some novel features. This is because the combination of the current situation and constraints, the business requirements, and the technology options available to us at any given moment in time form a large and evolving set. In fact, the number of variables is large enough that their values are rarely the same twice. This gives ample room—and need—for creativity, even when you are not setting out with a goal to be novel.

Architects within established companies have the additional challenge of being thoroughly familiar with their existing system(s). This can naturally incline them toward an evolutionary approach. In their case, the need for creativity often involves the ability to see their current business with fresh eyes; in particular, applying novel techniques to current problems and opportunities, in cases where these approaches provide genuine business or technical value.

Makes Decisions

A primary hallmark of a software architect is their ability to make a decision about which solution is the best fit for a specific business or technical problem (even if that recommendation is ultimately not accepted). While sometimes the ultimate decision-maker does have the title “Chief Architect,” they can often hold the title of “VP/SVP/EVP of Engineering,” “Chief Product Officer,” or some other executive nomenclature. There is nothing wrong with this as long as the person making the decision realizes that they are now acting in an architectural role, not in a purely management / political role. Considering the cost, feasibility, and team preferences and skillsets of a given choice is indeed an architectural function — and can be a good way of deciding between alternatives when they are comparable technically.

Where executives get into trouble as architectural decision-makers is when they choose or introduce a technology that is not technically suitable to the solution of the problem, or that is not nearly as good as the architect-recommended options. For example, I once witnessed an executive override the recommendations of his architects and choose a totally inappropriate technology because he had already paid a lot of money for it. This executive did not appreciate the fact that the success of his project required him to play an architectural role, not a political or managerial one, when making this technology decision. The implementation of his program suffered accordingly as the team tried to work around the limits of an unsuitable technology.

Roles Mislabeled as “Architect”

While architects play a key and often pivotal role in software development, there are many other essential roles as well. However, I would assert that calling those other essential functions “architects” leads to a lot of confusion and mis-set expectations. Here are some of the roles that are often labeled “architect” but that, in my opinion, frequently perform non-architect functions.

Researcher

This person surveys the available technologies and approaches within a given area and becomes very knowledgeable about the alternatives through online and offline research, conferences, and vendor presentations. While architects definitely spend time doing research, the fundamental difference between a researcher and an architect is that the architect decides. Researchers provide an essential function, but unless they apply the outcome of their research to a specific situation and ultimately make a specific recommendation as a result, they are not acting in an architectural role.

Evaluator or Analyst

An evaluator / analyst takes the results of research and compares the leading candidates to each other. He or she produces a list of pros and cons for the various alternatives, in the context of the current business or technical problem. Evaluation is also an activity that architects sometimes perform and are even more frequently called on to organize. Again, however, the key differentiator between an evaluator / analyst and an architect is that the architect ultimately makes a choice or single recommendation as a result of these evaluations.

Technical Expert

This person may be a researcher, an evaluator / analyst, or a combination of the two. They are extremely knowledgeable about the range of options available in a particular domain or technology, as well as the pros and cons of each. A person with this skillset is often given the title “solution architect,” although again I would assert that the word “architect” is a misnomer here. Knowledge of a given range of solutions or technologies does not in itself make someone an architect. It is the ability to apply such knowledge to a specific situation and to craft a good solution that characterizes an architect. Even with full knowledge of the available options, the ability to make good choices from them is a different skillset (and a rarer one than it might at first appear).

Technical experts are extremely valuable, and they may indeed also be architects. However, there are many cases where technical experts are not architects, even if they have that title. This can be quite confusing, and can show itself as “churn” with no clear outcomes and decisions being made despite a high degree of knowledge in the room.

Knowledge Orchestrator

An important function in a complex project is “knowing who knows what” — that is, identifying the right people and getting them plugged into the right places. These people might be architects, researchers, analysts, technical experts, or any of the myriad technical roles that make a software initiative successful.

It’s sometimes hard to distinguish between a “knowledge orchestrator” and an architect because both have decision-making roles. The key distinction is that the knowledge orchestrator is not the originator of the technical ideas (i.e., they are not the person proposing the solutions to the various technical problems). Rather, they are a clearinghouse for information, and perhaps also select and synthesize the information provided. In other words, they are the “critic” rather than the “author” of the work.

Performing knowledge orchestration successfully requires a high degree of technical skill, the ability to make clear and sometimes tough choices, and the ability to explain and defend those choices. However, I would argue that this role is distinct from an architecture role. As we discussed above, an architect is the person who originates a proposed solution; the knowledge orchestrator role serves as an editor and critic of the proposed ideas.

Architects often play a knowledge orchestration role over more junior or more specialized architects. The distinction here is whether they also initiate novel solutions. There is a Scrum parable that talks about a pig and a chicken who together decide to make a breakfast of ham and eggs. The pig turns to the chicken and says, “If we do this, I am committed. You, on the other hand, are merely involved.” In terms of architecture, the knowledge orchestrator is “involved,” while the architects are “committed”.

Conclusion

There are many non-architect roles in software development (e.g., project managers, developers, testers, product owners), but these do not tend to be mislabeled as “architects.” There is, of course, nothing wrong with not being an architect. I myself often play the role of a “knowledge orchestrator” instead of—or in combination with—acting as an architect. I also act as a knowledge provider from time to time. In no way do I feel “inferior” in these roles compared to when I am working as a hands-on architect. The roles are simply different. In this essay, I am simply challenging the labels, not the value.

Note also that architecture itself is a “team sport.” It’s very rare in business that a single architect owns every decision unchallenged. Almost invariably an architect works with others and must persuade them—as well as management—of the correctness of their choices. This dynamic is generally healthy, and it often results in a better outcome than any single individual could accomplish unaided. The need to “sell” their choices in no way diminishes the imperative for an architect to make a choice. In fact, a strongly reasoned position that is defended vigorously (but impersonally) often leads to the best outcome. Without these opinionated selections, a person is acting as an information resource, not as an architect.

Architects tend to be exceptional people, but so can people cast in other roles. The best architects are smart, good listeners, not afraid to take risks, not afraid to be “wrong,” and always seeking to learn. Whether you are an architect or play any other part in the software development process, these are traits that all of us can seek to emulate.


Profiling a mobile application enables developers to identify whether or not an app is fully optimized (i.e., effectively using resources like memory, graphics, CPU, etc.). If the app is not optimized, it will experience performance issues like memory crashes and slow response times. However, profiling a mobile app is often easier said than done. Fortunately, every mobile platform offers tools that are well evolved — and still evolving — to provide profiling data that can be analyzed to identify problem areas. In this blog, we’ll look at which parameters to profile, and where to profile these parameters. If you are interested in learning more about how to profile these parameters, I suggest you review the platform-specific documentation below:

Android: https://developer.android.com/studio/profile
iOS: https://help.apple.com/instruments/mac/current/#/dev7b09c84f5

Parameters to Profile

The parameters to be profiled depend on the specific problem. For example, if the problem is slow UI rendering, a potential area to look at is CPU usage and GPU rendering. On the other hand, if the application becomes unresponsive over a period of time, that points to potential memory leaks. If the problem is unknown, then you can profile the application for the following parameters:

  • CPU Usage: to identify threads and methods that are taking a greater CPU time-slice than expected (a tracing sketch follows this list)
  • Memory Utilization: to identify classes and variables that are holding on to memory
  • UI Profiling: to identify overdrawing and redrawing of UI components, unoptimized usage of widgets, and deep view hierarchies
  • Battery Usage: to identify unnecessary running processes that are drawing current from the battery
  • Network Usage: to identify unnecessary network calls, or network calls that take too much time or download heavy data and impact the user experience
  • I/O Operations: to identify unoptimized or unnecessary file or database operations
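
For example, on Android, one low-effort way to make CPU profiling more actionable is to wrap suspect code in named trace sections, so the profiler attributes time to specific blocks. Below is a minimal Kotlin sketch; the feed-parsing step and the FeedItem type are hypothetical stand-ins for whatever code you suspect.

    import android.os.Trace
    import org.json.JSONArray

    data class FeedItem(val title: String)

    // Hypothetical parsing step standing in for any suspect code path.
    fun parseFeedJson(rawJson: String): List<FeedItem> {
        val array = JSONArray(rawJson)
        return (0 until array.length())
            .map { FeedItem(array.getJSONObject(it).getString("title")) }
    }

    fun loadFeed(rawJson: String): List<FeedItem> {
        // The named section shows up in system trace captures, so CPU time
        // spent here is attributed to this block instead of being lumped in
        // with everything else on the thread.
        Trace.beginSection("FeedScreen.parseJson")
        try {
            return parseFeedJson(rawJson)
        } finally {
            Trace.endSection()
        }
    }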

Where to Profile

Identifying the areas to be profiled (i.e., screens, features) is the most critical step, as it varies from application to application. If the problem is known, then the area to be profiled can be narrowed down to a particular screen or feature. But if the problem is unknown, then the only option remaining is to profile the complete application. Since most modern applications have many screens and features, you should target specific areas of the application to profile first.

Start of the Application

The start of an application is a critical phase, where a lot of initialization and resource allocation takes place. One area to watch is the CPU consumed by initialization work, much of which can be done in parallel or deferred until the screen or feature that actually needs it. In a modern application where dependency injection tools like Dagger 2 (Android) or Typhoon (iOS) are used, there is every chance that memory has been allocated unnecessarily for injected classes that are not needed at startup.
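
As an illustration of deferring startup work, here is a minimal Kotlin sketch: heavy dependencies are constructed lazily, and non-critical warm-up runs off the main thread rather than inside Application.onCreate(). The AnalyticsClient and the warm-up call are hypothetical names used only for the example.

    import android.app.Application
    import kotlin.concurrent.thread

    // Hypothetical dependency used to illustrate lazy construction.
    class AnalyticsClient private constructor() {
        companion object {
            fun create(app: Application) = AnalyticsClient()
        }
    }

    class MyApp : Application() {

        // Built only on first use, not during application startup.
        val analytics: AnalyticsClient by lazy { AnalyticsClient.create(this) }

        override fun onCreate() {
            super.onCreate()
            // Keep onCreate() lean: move non-critical warm-up off the main thread.
            thread(name = "warm-up") {
                prefetchRemoteConfig()   // hypothetical warm-up work
            }
        }

        private fun prefetchRemoteConfig() { /* ... */ }
    }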

Loading of the Screen

Similar to the start of the application, individual screens may allocate additional resources that are not required. The time required to load the screen should also be watched, as unnecessary initializations may be blocking the UI rendering. Depending on what needs to be initialized, check whether it can be done at a later stage, after the UI has rendered.
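
One common way to do this on Android is to post non-UI setup work so it runs after the view is attached instead of inside onCreate(). A small Kotlin sketch, with the cache warm-up standing in for whatever initialization can wait:

    import android.app.Activity
    import android.os.Bundle
    import android.view.View

    class DetailActivity : Activity() {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            val root = View(this)   // placeholder for an inflated layout
            setContentView(root)

            // Posted work runs on the main thread after the view is attached,
            // keeping it out of the critical path of the first render.
            root.post {
                warmUpScreenCaches()   // hypothetical non-UI initialization
            }
        }

        private fun warmUpScreenCaches() { /* ... */ }
    }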

Loading of Scrollable Views

In the mobile form factor, it is common for applications to have screens with scrollable items. There is every chance that standard guidelines for creating scrollable views (such as view recycling) have not been followed, resulting in heavy memory consumption that needs to be identified. Slowness in loading items should also be investigated, as patterns like lazy loading may not be in place.
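
On Android, the standard guideline is to recycle row views rather than inflate one per item. A minimal Kotlin sketch, assuming the AndroidX RecyclerView library and a simple list of strings:

    import android.view.LayoutInflater
    import android.view.View
    import android.view.ViewGroup
    import android.widget.TextView
    import androidx.recyclerview.widget.RecyclerView

    class ItemAdapter(private val items: List<String>) :
        RecyclerView.Adapter<ItemAdapter.ItemViewHolder>() {

        class ItemViewHolder(view: View) : RecyclerView.ViewHolder(view) {
            val title: TextView = view.findViewById(android.R.id.text1)
        }

        // Called only for the handful of row views that are actually recycled.
        override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ItemViewHolder {
            val view = LayoutInflater.from(parent.context)
                .inflate(android.R.layout.simple_list_item_1, parent, false)
            return ItemViewHolder(view)
        }

        // Called on every scroll, so bind data only: no allocations, no I/O.
        override fun onBindViewHolder(holder: ItemViewHolder, position: Int) {
            holder.title.text = items[position]
        }

        override fun getItemCount() = items.size
    }

The same idea applies on iOS, where UITableView and UICollectionView provide cell reuse for exactly this reason.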

UI-Heavy Screen

A UI-heavy screen deserves particular attention, as it may have unoptimized layouts or a deep view hierarchy. Responsiveness should also be checked, as a UI-heavy screen may have equally heavy backend handling code that takes longer to respond.

Navigation Between Screens

The most common operation performed in a mobile application is navigating between screens. As such, you should make sure that resources are properly allocated and deallocated when navigating between screens. Navigation is also a common source of leaked references, which in turn lead to memory leaks.
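
A typical safeguard is to pair every allocation with a matching release in the screen’s lifecycle, so nothing outlives navigation away from the screen. A Kotlin sketch using a broadcast receiver as the resource in question (the on-screen clock refresh is hypothetical):

    import android.app.Activity
    import android.content.BroadcastReceiver
    import android.content.Context
    import android.content.Intent
    import android.content.IntentFilter

    class ClockActivity : Activity() {

        // Allocated for this screen; it must be released symmetrically.
        private val timeTickReceiver = object : BroadcastReceiver() {
            override fun onReceive(context: Context, intent: Intent) {
                // hypothetical: refresh an on-screen clock every minute
            }
        }

        override fun onStart() {
            super.onStart()
            registerReceiver(timeTickReceiver, IntentFilter(Intent.ACTION_TIME_TICK))
        }

        override fun onStop() {
            // Without this, navigating away leaks the receiver -- and the
            // Activity instance it references.
            unregisterReceiver(timeTickReceiver)
            super.onStop()
        }
    }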

Network Operations

Peaks in network activity should be reviewed, as heavy network operations can impact the user experience and also lead to heavy CPU and memory usage. Heavy network operations can often be broken into smaller logical operations, and unnecessary network calls should be watched for and eliminated.
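
For example, rather than downloading an entire data set in one response, the client can request it page by page. A Kotlin sketch assuming a Retrofit-style client; the “catalog” endpoint, its parameters, and CatalogItem are hypothetical:

    import retrofit2.http.GET
    import retrofit2.http.Query

    data class CatalogItem(val id: String, val name: String)

    // Hypothetical endpoint: the server returns at most pageSize items per call.
    interface CatalogApi {
        @GET("catalog")
        suspend fun getCatalogPage(
            @Query("page") page: Int,
            @Query("pageSize") pageSize: Int
        ): List<CatalogItem>
    }

    // Fetch and render page by page so that no single response is heavy.
    suspend fun loadCatalog(api: CatalogApi, render: (List<CatalogItem>) -> Unit) {
        var page = 0
        while (true) {
            val batch = api.getCatalogPage(page = page, pageSize = 50)
            if (batch.isEmpty()) break
            render(batch)
            page++
        }
    }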

Repetitive Operations

On many occasions, repetitive operations lead to heavy memory leaks. These repetitive operations can include scrolling through list items, fetching data over the network, loading a UI-heavy screen, or navigating between screens.

Keeping the Application Idle for a Long Duration

Ideally, when an application is kept idle for a long duration, its memory consumption should not increase over time. However, background operations may not be pausing properly, or resource allocation may still be in progress — either of which can lead to a memory leak.
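
For instance, periodic refresh work should be tied to the visible lifecycle so it actually stops while the app sits idle. A Kotlin sketch; the dashboard refresh is a hypothetical stand-in for any recurring work:

    import android.app.Activity
    import android.os.Handler
    import android.os.Looper

    class DashboardActivity : Activity() {

        private val handler = Handler(Looper.getMainLooper())

        private val refreshTask = object : Runnable {
            override fun run() {
                refreshDashboard()                  // hypothetical periodic work
                handler.postDelayed(this, 30_000L)  // reschedule every 30 seconds
            }
        }

        override fun onResume() {
            super.onResume()
            handler.post(refreshTask)
        }

        override fun onPause() {
            // Stop the periodic work while the screen is not in the foreground;
            // otherwise it keeps running (and allocating) while the app is idle.
            handler.removeCallbacks(refreshTask)
            super.onPause()
        }

        private fun refreshDashboard() { /* ... */ }
    }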

File Logging

File logging in release builds should be monitored, as an application may be doing additional or unnecessary file logging, which is an I/O operation. You should also look into the log file rotation policy, as over time log files can consume significant space in the file system.
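
A simple guard is to skip file logging outside debug builds and to cap the log file’s size. A Kotlin sketch, assuming the BuildConfig class generated for your application module (the 1 MB cap is an arbitrary example):

    import android.content.Context
    import java.io.File

    object FileLogger {

        fun log(context: Context, message: String) {
            // BuildConfig.DEBUG is the flag generated by the Android Gradle
            // build; the check keeps file I/O out of release builds entirely.
            if (!BuildConfig.DEBUG) return

            val logFile = File(context.filesDir, "app.log")
            // Crude rotation: cap the file at roughly 1 MB so it cannot grow unbounded.
            if (logFile.length() > 1_000_000L) logFile.delete()
            logFile.appendText(message + "\n")
        }
    }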

Conclusion

Some of the above profiling activities can be achieved using the tools provided by the mobile platform (i.e., Android, iOS), while others require manual effort or code analysis. The ultimate objective of mobile app profiling is to examine the various parameters that could potentially lead to performance problems within your mobile app.

Aphorism: [A] concise statement of a principle [Oxford English Dictionary]

 

Former American baseball player Yogi Berra was famous for aphorisms that at first glance seem reasonable, but on second thought make no sense at all. Some of my favorite sayings of his include one about a favorite restaurant, “No one goes there anymore—it’s too crowded,” and the philosophy, “When you see a fork in the road, take it.” The joke, of course, is that in order for the restaurant to be crowded, lots of people must be going there. Also, by definition, when a road forks, you have at least two options. So, the advice to “take it” doesn’t make any sense at all. They seem sensible—even wise—at first glance, but don’t stand up to scrutiny.

There is another set of aphorisms that are the opposite of Yogi Berra’s. At first glance these sayings seem nonsensical, but on reflection they point to a deeper truth.

The late management guru Stephen Covey liked to say, “The main thing is to keep the main thing the main thing.” At first glance, this makes no sense at all because whatever IS the main thing—to implement that business transformation, to take my startup public, to meet my personal financial or career goals—that objective is of course the main thing, isn’t it? Also, the statement itself is self-contradictory because if I keep the main thing the main thing, then I don’t really have a “main thing” at all, do I?


The wisdom of Covey’s statement becomes clear when you actually try to accomplish any large goal. The biggest challenge you will inevitably encounter is other demands that take you off course. The bigger and more important your overall goal is, the more opportunities there are to become distracted along the way by things that are not as important, but that are — or seem — more urgent and immediate.

The only way to accomplish your big-picture goal is to keep it as your main objective, in spite of all the distractions that come along. Unless you can stay on course, no matter what comes along, you will never accomplish your big-picture goal. Staying undistracted is so essential to meeting your goal that unless you put that first, you will fail. In other words, “the main thing is to keep the main thing the main thing”.

Another saying I like comes from software management guru Gerald Weinberg: “Things are the way they are because they got that way.” At first glance this is so blindingly obvious as to seem nonsensical. However, when you are facing a complex situation, it is profound. No matter how chaotic or dysfunctional the situation may look at the moment, there was a cause behind it. When you can figure out why things got to be the way they are, you have already come a long way toward a solution.


In engineering, most people tend to be rational actors most of the time. Most people, even the ones we don’t like or agree with, also tend to be at least relatively smart. This means there was probably a reason why a decision that now seems horribly wrong appeared to be the right idea at the time. I myself have made a few such bone-headed decisions — fortunately not too many, but some. And I’m pretty sure you have, too.

The exact wrong thing to do in such cases is to double down and dig the hole still deeper — or to have a knee-jerk reaction and just do the opposite. The right thing, whether it was your own bad decision or someone else’s, is to take a deep breath, understand what drove the original decision, show some mercy to yourself or the past decision-maker — and then fix it. Until you understand the drivers behind the wrong decision, though, you will never know if your new decision is any better.

Realizing that there were causes behind a current dysfunctional situation is the first step toward looking at it dispassionately enough to make better choices this time. Simply disparaging the previous decision-maker or decision while moving in a new direction can sometimes lead to success. However, it is usually less productive than first figuring out what was behind the old direction. There may indeed have been a good reason behind what is now clearly a bad choice. You may find that those reasons no longer apply, or that the choices were indeed made out of ignorance or other wrong motives. In this case, by all means shed the past and start fresh. However, if there are underlying reasons that still do apply, then you will do better by considering them first before you choose a new direction.

We can all profit from the wisdom of those who came before us; we should never give up on learning. Because, as Yogi Berra once said, “It ain’t over until it’s over.”
