Archives

Work “user down,” not “implementation up”

We technologists often have a tendency to think “implementation up.” That is, when we see a business or technical problem we almost automatically jump directly to the technical solution. Even though it can be uncomfortable to wait, it pays off significantly to spend time dwelling on the problem first. In particular, take time to put yourself in the shoes of the “users” of the software you are developing.

Users exist at all levels of the software stack. At the very top we have the “end users”—the people exercising the software through its user interface—for example, the users of a web or mobile client, or the developers in your external ecosystem using your productized APIs. In a software product, the end users are generally the people from whom your company derives direct or indirect monetary value. But at every level of the software, we have users or “customers”—even though they may be fellow members of your own engineering organization.

For example, the developers who write the client web or mobile applications are “customers” of the experience APIs and maybe an SDK. The people who write the experience APIs are “customers” of the composition API and platform services or microservices that embody the functionality. The people who write the platform services are “customers” of the core systems APIs, shared services and other systems. And so on, down to the microcode.

If we invert the normal tendency to think implementation up, and instead think “user down,” we often end up with insights we wouldn’t otherwise have—and a better system with more robust abstractions. For example, I talked to a developer recently who was responsible for writing a “portfolio” view of assets in a particular legacy system. The APIs the developer had to work with were pretty much pass-throughs of the database table structure—which was different for each asset type. The developer had to first figure out what types of assets the system supports (which required basically reverse-engineering the API set), make individual calls to each asset type to determine which assets a particular user actually had, and then fetch that user’s assets individually, type-by-type.

The developer also had to create a “mashup” of the information provided, reverse-engineering the various naming conventions and data structures to figure out which attributes were actually common across asset types. This was difficult because the names and data types of these common attributes differed from asset type to asset type, having been added by various people at various points in time. Creating what should have been a simple report was a lot of work, and error-prone because of all the reverse engineering of asset types, naming conventions, and data structures it required.

Suppose instead that the original API developer had thought first about how a developer calling that API would likely use it (user-down), rather than thinking first about how the data was stored (implementation-up). I think that if they were thinking user-down, the API developer would probably have implemented a “createAssetType” API, a “getAssetTypeList” API, a “CRUD Asset” API—along with a handful of others. Think about how this would have made life much easier and less error prone for our “portfolio” developer.
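To make this concrete, here is a minimal sketch, in Python and with hypothetical names, of what such a user-down asset API could look like. Everything beyond the create-asset-type, get-asset-type-list, and CRUD-asset operations named above is an assumption added for illustration; the essential point is that callers work with asset and asset-type abstractions and never see the per-type database tables.

```python
# Minimal sketch of a "user-down" asset API (hypothetical names throughout).
# The storage schema is hidden behind the Asset / AssetType abstractions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AssetType:
    type_id: str
    display_name: str


@dataclass
class Asset:
    asset_id: str
    type_id: str
    owner_id: str
    # Common attributes carry one name and one data type here,
    # regardless of how each asset type happens to be stored.
    attributes: Dict[str, str] = field(default_factory=dict)


class AssetService:
    """The caller's view: asset types and assets, not database tables."""

    def create_asset_type(self, display_name: str) -> AssetType: ...
    def get_asset_type_list(self) -> List[AssetType]: ...

    # CRUD on assets, uniform across all asset types.
    def create_asset(self, asset: Asset) -> Asset: ...
    def get_asset(self, asset_id: str) -> Asset: ...
    def update_asset(self, asset: Asset) -> Asset: ...
    def delete_asset(self, asset_id: str) -> None: ...

    # The single call our "portfolio" developer actually needed: no
    # reverse engineering of asset types, names, or data structures.
    def get_assets_for_user(self, owner_id: str) -> List[Asset]: ...
```

With an interface shaped like this, the portfolio view becomes one call to `get_assets_for_user` plus some presentation logic, rather than a reverse-engineering exercise.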

The “cost” of making life easier for the user is that it generally makes life harder for the implementer. In our API example, the API developer has the responsibility of creating an abstraction of an “Asset” and an asset type, and for hiding the details of the database schema used to store these from the user of the API. I would argue that the design principle that “each interface should hide something” is a good one, and that it’s worth the time it takes. Any successful interface, be it a technical interface or a “UI,” will have many callers, often over a long period of time. By investing extra effort in taking your end user’s point-of-view, you greatly simplify life and reduce errors for all of those users, all the way up the stack.

In the industry, the jargon for this approach is “user-centered development” or “user-centric design.” This is not a new concept, and many developers already think this way naturally. But adopting a systematic user-centric or “design-centric” approach to every interface in your product can make a huge difference to extensibility, productivity, and many other areas.


Secret #3: Be Flexible About the End Goal and the Means; Look for the “Unexpected Upside”

Having just made the argument that you should begin by visualizing where you want to end up, I’ll also stress the contradictory point that you should not be too rigid about achieving it in detail. If you do transformation right, you will learn at every step of the way—and each step should inform the next.

In particular, having seen a lot of initiatives, I’ve come to believe that it’s a myth that brilliant business strategies are invented out of whole cloth and then executed mechanically. Successful strategies emerge in the course of execution, as one phase of work unveils additional possibilities and opportunities. There is no law against claiming that the end state was your secret idea all along—and many careers have been based on this kind of retrospective brilliance.  But the truth is that by-and-large the best strategies are discovered, not conceived of from the outset. The key is to get moving in generally the right direction, and then to stay alert and adapt.

In the course of your transformation work, entire new business models may present themselves—and you need to be alert to that kind of “unexpected upside” if you’re going to take advantage of it. For example, exposing your APIs or data to a 3rd party developer ecosystem can—and likely will—result in your systems being used in ways you would never have imagined. This can uncover new revenue streams, if you stay alert for them. Also, you and your team may develop additional insights as each step forward potentially reveals more opportunities. This is not wishful thinking. In the course of executing a transformation project, additional business opportunities are almost always exposed. This happens whether you are consciously trying to enable a “pivot” to a new business model or whether you’re simply being alert to possibilities as they come along.

On the technical side, companies emerging from a classic IT mindset often have a great fear of failure overall—and sometimes even a great fear of being wrong step-by-step. We have one client who wanted to take months to decide between two competing containerization technologies, both of them leaders: one the established leader, and the other emerging. While it’s good to give these technology choices critical and intelligent thought, in a transformation project the choice between leading open source technologies should take, in general, at most one or two calendar weeks of effort. If you are working with an already-experienced person, it can often be done effectively in days or even hours—again, assuming you are choosing between the leaders.

In almost every case today there are at least two or three good options for each technology choice, each of them a “market” leader (I say “market” advisedly, because many of these technology leaders are actually open source and free). You are almost always better off thoughtfully picking one of the leaders and moving forward than you are going through a traditional rigorous months-long procurement-oriented selection process. The key goal of your research should be to uncover who the leaders are in a particular technology area—those driving innovation in their space, with wide or rapidly growing adoption, and with a vigorous community of support or “momentum” behind them. All the leading systems tend to be highly “competitive” and features that one lacks will often be added in very short order—often weeks or months. The key is innovation leadership, momentum, community interest and support—as well as a fit with your architecture, and the “philosophy” or direction of your development approach.

The optimum technology selection process has changed radically in part because the technologies themselves evolve so rapidly and in part because there are so many good and widely used options available. Much of this new software is free, all of it is easily available to developers, and much of it has a broad community of talent and support—including proof points in very large-scale projects, some of it most likely in businesses similar to your own. In this environment, what is “best” today is unlikely to be “best” eighteen months from now. This is not because the current best suddenly becomes “bad”, but because something even better will emerge. This constant stream of technology improvements is unlikely to let up for many years, if ever. It is therefore far better to plan for change than it is to spend critical months of your project looking for the ideal solution.

Similarly, your end-state architecture will also evolve as you move ahead. This should not be because you compromise it, but rather because you’ve learned there’s a better and simpler way to do it than you originally thought. You might also learn that the original business goals have themselves shifted—again, because you have learned more. If you accomplish only one thing in a transformation project, it should be to throw away the notion once and for all that “change is failure.” Change is not failure if it captures a business opportunity, simplifies or otherwise improves your system, and makes it more extensible and robust. Failure is being too rigid to meet the opportunities that present themselves.


Secret #2: Define and Become the Competitor You Fear Most

When we work with companies to design their next-generation product architectures, there’s an exercise we like to do. We get the smartest technical people we can on the company side together with the smartest technical people we can on the GlobalLogic side. We sit them down face-to-face in a big room with lots of whiteboards. We then ask a simple question:

“Suppose all of us in this room left your company today and did a startup together. We take everything we know with us—all the domain knowledge you’ve acquired, all your expertise about what works and what doesn’t, GlobalLogic’s expertise on the technology side, everything. HOWEVER, we are not allowed to use any existing code at all, because we’re no longer in your company—we’re now a startup. What system can we design and implement that would start to put your current company out of business in, say, 6 months?”

Removing the constraints that exist in your current system unblocks a lot of thinking. In every case I can think of, we ended up with a next-generation system architecture that’s simpler, faster and cheaper to develop, as well as far more flexible and powerful than the existing system. It’s often disappointing to the team when they have to come back to reality and face the “real world” of the system as it currently exists.

The software you have today gives your company huge benefits. It’s probably very feature rich, your end users are familiar with it, it generates revenue and gives you market presence and, most important, it exists in the real world, not just on a whiteboard.

However, it also places a lot of burdens and costs on you that your competitors may not have—and which startups in your space certainly don’t have.

The competition that established companies in transition generally fear the most is, and probably should be, the “technology upstart” who comes from left field and disrupts the entire market. Their traditional competitors are, by comparison, known quantities and therefore manageable. In retrospect, Tower Books and Blockbuster Video had nothing to fear from their competitor Borders; their disrupters were Apple Computer, Amazon Kindle, and Netflix streaming. Similarly, competing pharmaceutical companies today don’t really worry so much about each other—they worry about Google, with its massive DNA repository and data science capabilities.

It is nearly impossible to evolve incrementally into a great system without first visualizing where you want to end up. Even if your end goal is fuzzy and changes from time to time, starting with some sort of end in mind helps organize your efforts and get people aligned.

It seems like open-ended incremental evolution should work, but in practice it just doesn’t happen unless your activities and thinking are focused toward some type of end goal, however imperfect. Biological evolution works because it uses a massive number of trials. Millions of mistakes and dead-ends over long periods of time, together with ruthless pruning, are required to achieve a relative handful of successes. In business, we generally don’t have the option of using a pure biological process. We will, of course, evolve as we make mistakes and learn from them, but in general we can’t afford to make enough mistakes to drive a truly and exclusively evolutionary process. Instead, we need to start with a goal in mind to create enough alignment to focus our evolution.

Thinking about how you could become the competitor you would fear the most helps you envision the software you really need to have. There’s a lot of work after that to figure out how to get from where you are to where you want to be—and even more work actually doing it. This thinking needs to take into account not only your long-term business goals, but also the intermediate goals and constraints you need to accommodate along the way. Start out by visualizing where you’d like to end up; it’s a major step in getting there.

Introduction

Does your company buy software, or make software? For the overwhelming majority of businesses across the full spectrum of industries and occupations, the answer is “both.” Almost without exception, every company buys, subscribes to, or indirectly pays for software or software-based services to run its business. Nearly as many companies develop software. Companies may develop software at a nearly unconscious level—for example, putting together complex spreadsheet formulas for payroll calculations or budgeting. Or they may have hundreds or even thousands of professional software developers and IT staff.

What differentiates a software product company from other businesses is not whether they make software; rather, it’s the realization by product companies that the software IS the business. Software is not a supporting function, it’s the product: the software is what drives the company’s income and delivers value to its clients.

For some companies, this is blindingly obvious. There would be little debate that Microsoft, Oracle, Apple, Amazon, Salesforce, Uber, Airbnb and many others are software product companies first and foremost. But increasingly, companies of every type find it necessary to become software-centric. This is because the people who are your customers and employees have, since the late 2000s, all moved online. It’s like everyone in the world moved to a single city—let’s say, Cleveland, Ohio. If that were the case, you would need to learn to do business in Cleveland, Ohio—because that’s where all your customers and employees now live. Instead of a physical geographic relocation, though, the “demographic” shift we’ve seen was brought about by powerful smart devices in the late 2000s. The net effect is the same: your customer base and workers all now live in a new virtual location. They are all online, all the time. So, you need to learn to do business there.

In the same time period, powerful open-source and open-source derived technologies have demolished many barriers to entry for business, while raising certain others. Barriers like capital investment in infrastructure and connectivity have largely gone away, eliminated by public cloud-based platforms; cheap, fast and ubiquitous network connectivity; and the universal and global availability of powerful mobile computing devices. At the same time, the technological stakes have been raised: people’s expectations of software are very high since they see the best-of-the-best on a daily basis. And while the widespread availability of powerful open-source software means it’s available to you, it also means it’s available to all your current and potential competitors—and some of them are, or are trying to get, really good.

For some companies, “digital” or “technology” transformation is no longer optional—it’s now a matter of survival. This may have happened because a competitor or upstart has moved first and begun to capture the “digital high ground.” Or it may be caused by a fundamental disruption to your entire business model or industry. In any event, instead of a laser focus on capturing new opportunities, you may now find yourself playing catch-up.

The software-centric aspect of our age means that the very nature of many businesses—and of the value they produce—has become unclear. If you are a provider of personalized online content, is your core value still the creation and curation of that content, or is it really the platform that determines your consumers’ personal needs and delivers the most relevant material to them? If you are a package delivery company, is your core value still the timely and safe delivery of a physical package? Or is it providing the infrastructure to track and direct the efficient movement of an item through all the elements of a multi-party delivery chain? If it’s the latter, the software-centric focus, you have a number of options available to you that would never have been available before. For example, the logistics company’s software platform could potentially leverage distributed “gig” labor in developing countries to do package delivery by bicycle, coordinated by the platform, opening a new market. The content company could expand its set of consumers by broadening the content it provides and focusing on the best possible personalized experience. If their platform was good enough, either “platform-focused” company could pivot and become a SaaS platform provider to their current competitors.

For many large companies the answer to “software vs. other businesses” is “both-and” rather than “either-or.” You can indeed be a company that both produces content and also provides a platform for delivering it. You can be a logistics company that both owns a platform to optimize the movement of a package through a complex distribution network and that also owns and operates key elements of that network. However, the difference between a “digitally transformed” company and a pre-transformation “software-enabled business” is that these two aspects of your business have equal stature. The software is not a supporting function or cost center; it is a revenue generator and as much a part of your top-line “offering” as the rest of your business. It’s the extent to which that is true that measures the degree of your digital / technology transformation.

Technology transformation is hard work. Taking a company which is not software product centered and making it software-centric is a profound shift. The benefits are huge, not only for disruptive businesses like Uber and Airbnb, but for established companies in every industry. If you have decided to be a leader in your company’s digital transformation process—congratulations! You have an amazing journey ahead of you. It’s hard work, but it can pay huge dividends for your company—and for you personally as a champion and change agent. 

At GlobalLogic we’ve worked with companies in all sorts of transformation situations: the first-moving disrupters; the aggressive early-adopter digital champions; and the late adopters who must transform to survive and stay relevant. Most of the “secrets” of digital transformation we have learned apply to all three categories. We offer these principles as a distillation of some of the things we have learned, and hope they help you on your journey.

Secret #1: Stop Digging the Hole Deeper

While this principle is probably the most obvious, it’s almost always the hardest to do. Companies who are still on the wrong side of the digital divide generally find themselves adding to their technology debt with everything they do. Given the challenges of the legacy architecture, it’s hard to meet schedules and satisfy users, so shortcut after shortcut is taken to meet demand or just stay afloat. These shortcuts further complicate or compromise the system, making future changes even more challenging. There’s never enough time to do things right—let alone address accumulated technical debt. It’s a vicious self-reinforcing cycle: one that many companies find difficult or impossible to break out of.

But, as a champion of digital transformation, that’s the job.

There’s a great quote from software management guru Gerald Weinberg: “Things are the way they are because they got that way.” While this may seem blindingly obvious, it’s a reminder that everything you see today has or had a cause. That includes the way things are currently implemented in the software, as well as the way people behave. Without understanding and addressing those causes, you doom yourself to follow along the same path.

For example, demands from sales or product that engineering complete software changes in unreasonably short periods are so common that in many organizations they have almost become a joke. But why does it happen? Unreasonable time demands are often driven by product management and sales concerns that no matter what date they ask for, engineering will be late. This fear is often founded on real, actual past experience. The sales and product management groups therefore ask to get things earlier than they really need them, in the hope that even if engineering is late, the software will still be ready when really required. Similarly, engineering often learns to over-estimate their dates externally so that even if they are late relative to their secret internal plan date, they still will not disappoint their internal and external “customers.”

The combination of overly aggressive “ask” dates, and overly conservative “commit” dates means that engineers are constantly working against an artificially compressed schedule, with padding on both sides. This dynamic drives a lot of dysfunction in a lot of organizations. However, given the history, both sides are doing what seems both reasonable and necessary to them. In other words, both sides are trying to “do the right thing” for the company and the customers, from their own perspective. “Things are the way they are because they got that way.”

The only way out of this particular situation that I’ve seen work is for engineering to start giving and meeting realistic dates—however bad they may look—and independent of the date being requested. In other words, engineering has to start making accurate, non-padded forecasts and standing by them regardless of any pressure on them to say they can deliver sooner than it takes to do a good job. This is scary and, in some organizations, job-risking for all concerned. But once you understand the cause—which in this example is the fear that engineering will be late—it’s clear what the cure has to be: Engineering needs to start estimating accurately, and delivering to their estimates.

In this scenario, until the cycle of padding dates in both directions gets broken, system quality will continue to decrease because of the shortcuts demanded by unmeetable schedules. Perversely, continuing the dysfunctional scheduling behavior will also further increase the time needed to meet the next request, because the shortcuts have made the system harder to manage: reinforcing a negative cycle. This is despite the fact that there really is no “bad guy” in this scenario. Everyone is trying to do the right thing to set expectations correctly and meet customer needs.

The deal-driven downward quality spiral is a fact of life in many companies, regardless of size. One client, a multi-billion-dollar (USD) giant in their domain, suffered from this very issue. No matter how big you are, there is someone bigger who, as your customer, can drive organizations to “commit” to unrealistic and literally unmeetable schedules.

There are, of course, other reasons besides “padding” for marketing and sales to request a challenging delivery date. Some dates really are immovable—a trade show launch, for example. In those cases, the best solution I’ve found is, where possible, to compromise on scope, but not on system or architectural quality. Short-term architecture compromises made to hit a date inevitably come around and bite you later. Sometimes you have to do it—there is no viable option. But in those cases, you should budget the time to fix it in a pre-planned, rapid follow-on release—for example, a “2.01” release immediately after your big “2.0” launch. This release should undo the short-term compromises you made to hit the artificial date. Otherwise, you’ll pay a long-term penalty for this short-term advantage. Again—don’t dig the hole deeper.

There are other sources of digital dysfunction besides schedule pressure. None of them are particularly easy to fix, but you can at least halt the downward spiral starting today. You not only can, but you must at least arrest the decline if you are going to transform into a healthy software-product-centric company. If not today, when? For example, you may have a huge amount of technology debt because engineers never seem to have time to do unit testing. This is a false economy in my mind—the time taken to fix missed bugs downstream far outweighs the time it would have taken to create the tests. You can’t fix the past overnight, but you can resolve to at least stop making things worse. For example, you can mandate that, starting today, no NEW code may be checked in unless tests giving a minimum of (for example) 85% statement coverage are checked into the unit test repository along with it. You haven’t fixed the past by doing this—but you have stopped making things worse, at least in this particular area.
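As a concrete illustration of such a gate, here is a minimal sketch, assuming a Python codebase with pytest and pytest-cov available; the module name and the 85% threshold are placeholders, and in practice you would likely pair this with a diff-coverage tool so the check applies only to newly changed code.

```python
# Minimal sketch of a check-in / CI gate enforcing a statement-coverage floor.
# Assumes pytest and pytest-cov are installed; names and threshold are illustrative.
import subprocess
import sys

MIN_STATEMENT_COVERAGE = 85  # percent, per the example mandate above

result = subprocess.run([
    "pytest",
    "--cov=new_feature_module",                    # hypothetical module under test
    f"--cov-fail-under={MIN_STATEMENT_COVERAGE}",  # fail the run below the floor
])
sys.exit(result.returncode)  # a non-zero exit blocks the check-in or fails the CI job
```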

Other issues can include earlier technology direction missteps (Flash-based or server-side template-driven UIs for example), lock-in to a ten- or 15-year-old technology, organic unplanned growth of systems, partially or non-integrated acquisitions—and many others. The goal here is not to fix these overnight but to begin, now, to stop making things worse. This may involve a technological “fix” such as wrapping a last-generation subsystem in an API so that you can build on top of it “properly”. Or it may involve a negotiation with other stakeholders about behavior changes, as in the deal-driven downward quality spiral.
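Where the fix is technological, the wrapper can be as simple as a thin facade. The sketch below is illustrative only; `LegacyBillingSystem` and its schema-shaped method are hypothetical stand-ins for whatever last-generation subsystem you are containing, and the idea is simply that new code depends on the small, clean API rather than on the legacy internals.

```python
# Minimal sketch of "stop digging": hide a legacy subsystem behind a small API
# so new code builds on the wrapper, never on the legacy internals.
from dataclasses import dataclass


class LegacyBillingSystem:
    """Hypothetical stand-in for an old subsystem with a schema-shaped interface."""

    def fetch_row(self, table: str, key: str) -> dict:
        # In reality this would hit the legacy database or service.
        return {"CUST_NM": "Acme Corp", "BAL_AMT": "1250.00"}


@dataclass
class AccountBalance:
    customer_name: str
    balance: float


class BillingFacade:
    """New code calls this; the legacy naming and storage details stay hidden."""

    def __init__(self, legacy: LegacyBillingSystem) -> None:
        self._legacy = legacy

    def get_account_balance(self, customer_id: str) -> AccountBalance:
        row = self._legacy.fetch_row("CUST_BAL", customer_id)
        return AccountBalance(
            customer_name=row["CUST_NM"],
            balance=float(row["BAL_AMT"]),
        )
```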

Generally, once you’ve identified and are honest about the causes, there will be at least a small step you can take today to stop making things worse—or at least to begin slowing the rate of decline. Improvement might be microscopic at first, but the key thing is to get started. Eradicating the underlying causes of deep-seated digital dysfunction does not happen all at once; it’s accomplished through a series of improvements to process, technology and other areas whose net effect is transformative. The first step along that path is to identify your real problems, do an honest assessment of what has caused or is causing them, and then figuring out concrete steps you can take today to stop making things worse.

When I was at NeXT Software in the mid-1990s, Steve Jobs used to say, to the point we all got sick of hearing it: "A players attract A players. B players attract C players." This was in the context of making each new hire count, from the most junior to the most senior, regardless of role.

A couple of years ago, I ran into one of our former IT desktop support people who had worked with me at NeXT. After Apple, this guy subsequently became the founder and CMO of a successful startup, cashed out, and is now wealthy and working as a high-powered marketing consultant in Silicon Valley. This is the guy who, fresh out of school, used to fix the network connections on our desktops.

Every new hire in every department at NeXT was required to be an "A" player. Given the impact the NeXT technology and team subsequently had on Apple, I think we've all seen what happens when that's true.

An "A" player is one who excels at his or her current job and is always hungry to learn and do more. They are highly intelligent self-starters, never make excuses, and always find a way to get the job done. They never quit. When they make mistakes, they might kick themselves briefly, but mostly they learn and don't make the same mistake ever again. As I like to tell my teams: "It's OK to make mistakes; if you don't make mistakes, you aren't learning. Just don't make the same mistake repeatedly. I want you to go out and make some new mistakes."

You can be an "A" player right out of school, or as a veteran engineer or VP. It's not a question of years of experience or current knowledge — it's a question of attitude and focus.

As a general rule, I've found that you are better off hiring "A" players who can learn than "B" or "C" people who already know. Sometimes you have no choice but to hire for knowledge. However, in those cases, when building a team, you are better off using these people as "consultants" or educators rather than as team members. Your go-forward team should consist of "A" players.

I'm not talking about firing people for failure. In my experience, 9 times out of 10, this is a mistake. Being an "A" player is not about constant success — it's about how you respond to setbacks and even failure. An "A" player learns, adjusts, and tries again until they do succeed. That being said, you can be an "A" player in the wrong role. That doesn't make you any less of an "A" player. It means that you or your manager (who you've hopefully picked to be an "A" player yourself) has some work to do to get you in the right place.

A surprising fact: "A" players don't necessarily cost any more to hire than "B" players. As Steve Jobs implied, an "A" player will be attracted to working with other "A" players. Learning and growing by doing interesting work and being on the best possible team is their principal motivation. They certainly don't want to be unfairly paid, but once the money is taken care of, the quality of the work and their ability to learn from the other "A" players around them is what matters.

What it does cost you to assemble an "A" team is time, energy, and the willingness (you could call it ruthlessness) to do so. Being so selective is hard work. However, as the world has seen from the NeXT people and technology that contributed to the success of Apple — including Steve Jobs himself — the payoff can be almost beyond belief.

I recently had a conversation with the founders of a 17-person startup. They had just received double-digit millions (USD) of venture funding and were looking to scale their engineering organization.

While GlobalLogic can certainly help them scale more rapidly on-shore in their current California location, the cost advantage they would see from this is not as great as what they would gain by scaling a portion of their engineering team in Eastern Europe, India, or South America. Scaling 100% on-shore, in the very competitive and expensive California labor market, severely limits how fast they can move and capture the market opportunity ahead of them.

While they saw the benefits, they were concerned about the distraction and headache of building and managing a team in a remote geography. This is a legitimate concern. While GlobalLogic and companies like us are true experts in distributed software development and can greatly ease the pain of working with global engineering staff, there is no question that it introduces a new set of challenges. Travel, for example.

I've done multiple startups; overall I have worked in seven pre-public companies to date. I completely understand the need to pick your battles and fight the right one at the right time. The key question I put to their founder and CTO was: When is the right time to eliminate the risk of not being able to scale engineering with high-quality resources on demand? I don't pretend to know the answer for them — in a startup everything seems urgent — but I think that is what he needs to decide.

A key benefit to working with a company like GlobalLogic from a CTO / VP Engineering perspective is that it can essentially eliminate the risk of staffing and delivery. In my previous startups, it was always a challenge to get the right team, set up the right process and tools, and then deliver. I never failed in this, thank goodness, but it was an ongoing and somewhat repetitive challenge, a bit like Groundhog Day.

Working with a company like GlobalLogic, the struggle of staffing and delivery largely goes away. Instead, you can focus on building the right product for the market. I have very much come to agree with the saying that goes: "building the right product is more important to success than building the product right." It bugs me as a quality-focused person, but I've seen time-and-time again that quality alone is not enough to win.

I've been fortunate in that three of the seven pre-public companies I've worked at have become successful. If you're not in the software industry, that may not sound like a lot. However for software startups, a 43% hit rate is considered very good. There are various figures, but it's generally accepted that only 1 in 10 or 20 (that is, 5% to 10%) of venture-funded startups ever even make back their invested capital.

I'm sure it sounds arrogant, but I can assure you that within the sphere of my influence at the time, every one of the products I've been involved with has been "built right." So why did almost 60% of those companies still fail in the market?

In my view — and I say this as "one of the guys in the back room making the product happen" — it's because as a management team we put TOO MUCH focus on development, and NOT ENOUGH on building the right product. In other words, the mechanics of development consumed too much of the company's energies.

Don't get me wrong — development is hard work, and someone needs to invest the skill, care, attention and energy to do it well. What I've come to believe is that this energy, skill, care and attention should largely come from specialized companies like GlobalLogic, rather than from the startup's founding team. Founders should, in my view, focus with laser-like precision on their customers and on their market — including guiding and directing the development effort technically and feature-wise. They should not take on the distraction of re-inventing the wheel on the mechanics of software delivery any longer than they have to, unless "DIY" presents some compelling advantage to them. And it rarely does.

I meant what I said when I told the startup CTO and other founder that it may or may not be the right time to start working with a company like GlobalLogic. But if it's not now, I think it should be very soon — or their risk of being another company with a great idea that misses the market gets a lot higher than it needs to be.

Over the last ten years the concept of what we consider television has been evolving at an increasingly rapid pace. The increased processing power and flexible frameworks of TV-connected devices, tablets and mobile phones have driven a new era in how we navigate and consume video content. It’s no longer enough for a channel to produce great content—the perception of their brand is also dependent on the user experience and performance of their channel applications. Every person wants to dive into their favorite video quickly and smoothly without complicated channel flows, unclear logic and delays. In addition, channel applications introduce the opportunity for new discovery experiences that increase engagement and viewing time.

In today’s extremely competitive business environment, customers are more demanding than ever and will abandon a business that is too slow to respond. This has put an onus on IT to deliver solutions that provide a holistic and uniform experience to the customer, across all business channels. Microservice architecture has the potential to address this business challenge; it is all about achieving speed and safety at scale and it provides the flexibility to pick and choose technology for implementing a solution. This approach positions IT as a business partner rather than in a traditional support role.

People are becoming more and more dependent on communications networks for both business and personal use. Only 10 years after the launch of the first iPhone, as of October 2016, Internet usage from mobile and tablet devices exceeded desktop usage worldwide. This year, the global smartphone installed base reached 2.8 billion (it was less than half a billion in 2009). Nowadays, the smartphone is a more capable and sophisticated platform than any previous-generation PC and is equivalent to a pocket supercomputer.

This blog was originally posted by GlobalLogic’s experience design arm, Method.

Technology in healthcare is rapidly progressing. Tests and procedures previously only available in labs are now becoming accessible to the general public. Not only are these technologies cheaper and faster, they also generate large swathes of data that can be used to improve diagnosis, treatments, and outcomes.

As part of a project run in collaboration with EVRY Strategic Design Lab, we looked into emerging trends in healthcare, particularly at the intersection of health, technology, and people. Our research pointed to three areas where technology is forging a new paradigm of healthcare.

Genetic Information

The first of these areas, genetic information based on testing, was previously cost-prohibitive. Major advances in computing, however, have now made testing widely available to patients. The main benefit of genetic information becoming more available is that as more people use it, more broadly and more frequently, the data becomes more accurate.

Clinical Records

Clinical facilities are collecting ever-increasing quantities of patient data. In combination with machine learning, this data is being harnessed to provide more accurate and efficient diagnoses and prognoses. Decision-making becomes faster, allowing clinicians to examine more patients, and outcomes become more precise because they are supported by richer reasoning.

Monitoring Behavior

Widespread adoption of mobile devices and wearables is enabling people to effortlessly and constantly capture detailed information about their activity, behaviors, and physiology. This data provides a new contextual layer that augments information collected through clinical settings.

The aggregation of data from these three sources of patient information is drawing more attention to a critical stage in healthcare: prevention, rather than a pure focus on prescription. With this new pre-emptive approach, identifying and responding to health issues before the initial signs of illness appear has the potential to greatly reduce human suffering and save countless lives.

Historically, prevention in healthcare has targeted generic risks by providing vaccinations or screening tests for early detection of common diseases. The prevention plan has been about promoting generally healthy lifestyles, with physical fitness and diet at the core. However, technology and data-enabled services now allow for more targeted preventative healthcare journeys that identify specific types of disease and illness and, in turn, guide prevention plans. With this data, plans can be highly personalized around specific patient needs. While this approach relies on accurately and efficiently extracting relevant information from patient data, it has the potential to significantly lower healthcare costs and deliver improved outcomes.

Healthcare providers are already starting to adopt a data-oriented approach to treat diabetes and cardiovascular disease. These programs, however, deal with the treatment of chronic diseases. Some providers offer predictive genetic tests to patients whose family history suggests an inherited gene mutation that increases the risk of developing certain types of cancer. These data-oriented care and prevention tactics are frequently initiated by clinicians, and the new journeys are not significantly dissimilar from existing pathways.

LEFT: AliveCor, a device that measures heart rate using a sensor and an app. RIGHT: A Verily project to monitor glucose levels.

This new paradigm raises a number of important challenges. For preventive care to truly take hold and fulfill its promise, critical considerations need to be addressed:

  • How can the first signs of diseases and conditions be detected to trigger prevention at the right time?
  • What enablers will help design personalized prevention plans?
  • What is the best way to continuously track and evaluate prevention results?

We look forward to exploring this evolving space and working on how design can address these challenges.
