Rapidly advancing technology and an increased rate of global convergence have transformed how the world produces and consumes products and services — agriculture is no exception. In addition to contending with new economies of scale, today the agriculture industry faces a slew of variable pressures stemming from radical changes in climate and population density.
As companies in the agriculture space turn to innovation to contend with these changes, design can play a fundamental role in helping transform their ideas into reality.
Role of Design in Agriculture Innovation
As an industry rooted in traditional practices and perceptions reaches a point of inflection, agriculture must contend with major shifts in who it serves, what it offers and how it operates.
Who it serves
With increases in health awareness and the growth of sustainable consumption practices, consumers have taken a more active role in shaping food standards, shifting the role of consumer preference from passive to active.
Not only has an increase in visibility and awareness transformed the types of food consumers want, it has also created new opportunities for organizations to design visible and democratized processes around how these foods are grown, sourced and purchased.
What it offers
Agriculture’s value chain traditionally begins with farming inputs and ends with consumer consumption, yet companies at the start of that chain have begun to repurpose and re-introduce what was once considered waste.
As organizations continue to imagine new purposes and applications of waste, they will also need to imagine new internal processes and skills to support and advance these new streams of revenue. Through organizational design, companies can ensure their internal value chains meet all the demands of new external ones.
How it operates
As the operations of agriculture companies continue to scale in order to meet the pace of increased demand and technological innovation, the industry’s leading players will need to establish partnerships with organizations and services that were once considered outside the industry’s purview.
As large agriculture companies look to partnerships and acquisitions to expand their existing capabilities, they will need to not only consider the present demands of their industry, but also those of the future. Through design thinking, organizations can push their explorations further, enabling them to uncover opportunities in even the most unlikely of places.
I’m not being cynical when I caution you to expect misalignment in the course of your transformation initiative. Misalignment is inevitable. Your company already has established products, services, processes and internal and external constituencies that support them. Transformation, by definition, affects all of these.
In most organizations, waiting to achieve full consensus is a prescription for failure. In many organizations, there is no consensus to be found among your peer group and other stakeholders about either the near-term or the long-term direction of the company. In many cases that is actually a good thing—because you will not be right all the time. However, if you are paralyzed into inaction by the lack of consensus on direction, you will never get anything done. Disruption is, by nature, disruptive. Not everyone will like it, and not everyone will buy in. The best you can hope for in a company with diverse points of view is the lack of organized opposition, and an overall sense that things do, or will, need to change.
The most effective strategy I’ve seen to combat misalignment is transparency. You want to give people who oppose you no ammunition to claim they were left out of the loop and didn’t know what was going on. Over-communicate. It does not guarantee alignment by any means, but it at least denies your critics the opportunity to paint you as operating too independently. This is the deadliest criticism in many organizations, and the one to watch out for since by definition you are working outside the mainstream. Listening to your critics is also important. You don’t need to agree with them, but understanding their point of view and needs is critical. They may even be right—and you may be wrong. Ideally you should understand your critic’s point of view well enough that you can present it as persuasively as they can. In any event, by listening you will learn, and your critics will at least know you’ve heard them.
In many organizations, you need alignment or at least tacit buy-in from your boss and your direct management hierarchy. In other, flatter, organizations this might not be required to get started. For example, sometimes the best approach is to run a limited “bootleg” project to show success, and only then take it to your boss. But ultimately you will need to have your boss and your boss’s boss (which may be the board) on your side. At a minimum, they should at least be tolerant and broadly supportive of your goals, and willing to give you a chance to demonstrate real progress.
Some organizations in transformation can exhibit what seems like a “Digital Death Wish”. Despite a clear imperative and compelling need to change, obstacle after obstacle is surfaced. These might be around “who owns what”; they might be about a suddenly compelling need to re-organize (repeatedly); they might be about process or technology standards. In fact, the obstacles raised can be about nearly anything—except a focus on getting the job done.
Organization, technology, process and other areas are all important to an effective transformation. However, an excessive focus on them can be, and often is, used as an excuse to avoid actually transforming. The key to a successful transformation is to keep the focus on the end goal, and to the greatest degree possible, avoid putting energy into anything that is not needed to accomplish it. As management guru Stephen Covey observed: “The main thing is to keep the main thing the main thing.” As a leader of your company’s digital transformation, keeping a maniacal focus on the “main thing” is your primary job.
Why do individuals within organizations initiate this “Digital Death Wish” behavior? Some common patterns we see:
Fear that this change will eliminate my job, reduce my status, or make my skillset / team less valuable
Fear of the unknown—is the status quo really that bad? Are the changes being addressed really inevitable and here to stay?
A desire to stay relevant by making my job function or team integral to the transformation initiative—even if that actually gets in the way of results
A desire to “protect my team” by bolting their work product onto the current initiative, whether it is relevant or not
The knowledge that I will not be held accountable for the company’s future performance because I’ll be gone by then. Instead I’m better off milking my current role for all it’s worth even if that compromises the company’s long-term best interest.
There are many other motivations to oppose change, of course—some of them based on honest disagreement about direction, and by no means all of them fear-based or self-serving. In general, however, as transformation champion you will need to overcome opposition—even if it is structured to appear “helpful”—from teams trying to slow or derail your initiative. To do this, you need to get buy-in from at least the first level of management common to the teams obstructing your efforts. Even then, you will have to help that person understand the negative impacts of these obstacles on the overall initiative. This is harder than it might look because it’s difficult to argue that process improvements, for example, are a bad thing—and why in this instance they are hurting rather than helping. You also need to show how this individual and their organization will benefit from the changes you are driving—or at least help them plan for and mitigate the downside.
An emerging trend we are seeing is for large companies to put an experienced technology executive on their board of directors to head a “technology transformation committee”. In the cases where we’ve seen this done, the person in this role has previously led a technology transformation at similar scale in a related or somewhat related industry. In theory, a board-level degree of empowerment should be sufficient to crush any non-principled opposition to change within the company. In practice, to be effective in this role, the person needs to be far more hands-on than the usual board-level director tends to be. Making sound judgements about who is helping and who is hurting an initiative can be subtle and requires a lot of insight—especially when some are actively masking their motivations. The jury is out on the effectiveness of the “technology transformation committee” approach, but overall—given the high caliber of the individuals we’ve seen in these roles—it is promising.
In addition to managing “up”, you will also need to get at least some degree of alignment from your own team. As the boss, you have the authority to tell people how to spend their time. However, any smart and creative professional only does their best work when they are excited about the goal. Spend as much time as you need “selling” your best people on your transformation vision. They should ideally take your ideas further than you could yourself. You may need to shuffle people around to find the right team, but do what it takes to get people you respect and trust excited about taking the next steps with you. When you’re doing something really new, even really good people may not understand what you mean and why it matters. Be patient with this. Get them to take one step at a time with you and, if you’ve got the right people and you’re on the right track, they will soon start racing ahead.
The most effective change agents I’ve worked with tend to be excellent networkers not only in their own hierarchy, but also laterally within their own organizations. They might not get actual agreement on the detailed means, but at least they can aim for shared big-picture goals across silos. Sticking in your own silo—be it engineering, product management, or some other—will not get you the broad transformation that you want. You may need to accept amused toleration from some, while you get active support from others—but try to get all the key stakeholders at least in the neutral to positive range in terms of their broad support for your end-game goals. Then keep them informed on your progress.
Current software trends, such as Microservices, point in a very interesting direction: that of collaborating autonomous systems.
This paradigm is very familiar to us from the natural world. Swarms of bees, colonies of ants, schools of fish, even planets in motion “collaborate” through the exchange of information (or forces) while remaining “autonomous” (capable of independent action) in isolation.
One manifestation of this pattern in software is the “containerized Microservices” paradigm that is transforming many large-scale systems. In this paradigm, the individual Microservices are designed to be as autonomous as possible. In particular, each microservice instance should have all the data and other information it requires to continue to work locally in isolation. Even if every other microservice in its system were to crash, an individual microservice instance should be able to continue functioning on its own (though perhaps with data that gets more “stale” over time as it stops receiving updates, for example).
A key concept in coordinating these autonomously operating microservices into a coherent system is that of “choreography”. The word choreography is borrowed, oddly enough for a software concept, from the world of dance. When dancers perform, there is no person, no director, who tells them what to do. Instead, each dancer learns his or her part in advance. The individual dancers then perform their part as learned, coordinating their actions through non-verbal cues exchanged with other dancers and taken from the music. Microservices systems are similar, but in this case the “cues” and information sharing are propagated through the system as broadcast, published events.
This type of choreography has a lot in common with physical systems like a swarm of bees, or an ant colony. While each insect is capable of independent actions, they co-ordinate among themselves through an exchange of signals—chemical, behavioral and even dance. Our information systems are headed in this direction—because it scales, and it works. What we are beginning to see is this same programming paradigm emerging at a “macro” scale in the physical world, due to connected devices.
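To make the idea concrete, here is a minimal, self-contained Python sketch of event-based choreography. It is not tied to any particular messaging technology, and the service names, event topics, and payloads are invented for the illustration. Each service subscribes to broadcast events and keeps its own local state, so it can continue answering questions even if the other services disappear.

```python
from collections import defaultdict

class EventBus:
    """A toy stand-in for a message broker (e.g., a pub/sub system)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Broadcast: every interested service reacts on its own.
        for handler in self.subscribers[topic]:
            handler(event)

class InventoryService:
    """Keeps a local stock count so it can answer queries even in isolation."""
    def __init__(self, bus):
        self.stock = {"sku-1": 10}
        bus.subscribe("order.placed", self.on_order_placed)

    def on_order_placed(self, event):
        self.stock[event["sku"]] -= event["qty"]

class BillingService:
    """Maintains its own running total; no central coordinator tells it what to do."""
    def __init__(self, bus):
        self.revenue = 0.0
        bus.subscribe("order.placed", self.on_order_placed)

    def on_order_placed(self, event):
        self.revenue += event["qty"] * event["unit_price"]

bus = EventBus()
inventory, billing = InventoryService(bus), BillingService(bus)
bus.publish("order.placed", {"sku": "sku-1", "qty": 2, "unit_price": 4.99})
print(inventory.stock, billing.revenue)  # {'sku-1': 8} 9.98
```

In a real system the in-memory bus would be replaced by a durable broker and each service would persist its local state, but the choreography principle is the same: coordination happens through published events rather than through a central director.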
More and more information handling capability is available at the “edge” of our connected systems. This trend will continue as the cost of compute power and related resources continues to decline. We can see this effect in our personal mobile devices already: The aggregate processing power of Apple’s iPhone X was comparable to that of Apple’s “Pro” line of laptops when it was launched in 2017[1] (though used for different purposes). This same “smart” phenomenon is clearly happening to all types of connected devices, including cars, smoke alarms, thermostats, and even light bulbs[2]. There is no doubt, at least in my mind, that the trend of smart edge devices will continue, even as network connectivity grows increasingly capable (as it will).
As the “edges” get smarter, collaboration between them gets more powerful. The more an edge node computes, or senses about its environment, the more it can share. Whether this sharing takes place in the form of peer-to-peer interaction, or through a central mediator such as a network, remains to be seen. But certainly, our smart edges will collaborate with each other one way or another.
Self-driving or “autonomous” cars will certainly be a dramatic manifestation of the phenomenon of collaborating autonomous systems, but there are many others such as warehouse robots, home automation systems, and systems of smart devices generally that we are seeing now or will in the near future. By combining autonomous operation with the notion of signaling and “choreography”, we will see the emergence of robust, scalable and powerful systems whose behavior is astonishingly nuanced.
Why am I so confident this will happen? Just take a look at the bees…
In Part 3 of our blog series, we defined a comprehensive cost management framework for enterprise clouds and took a deep dive into initial planning processes and operational visibility. In this final part of our series, we explore how to optimize costs, infrastructures, and billing by effectively combining two approaches: manual action by app teams (with in-depth operational visibility) and automatic resource cleanup (when no manual action is taken).
Cost Optimization
Stakeholders should be given multiple opportunities to take action or register exceptions for their apps. If no stakeholder action is taken on dev/test environments, then recommendations can be automatically actioned. These two approaches, used in combination, will lead to better awareness and accountability.
Infrastructure Optimization
This section discusses various opportunities to optimize the cloud infrastructure landscape to fit the given utilization. This follows the cloud’s tenet of provisioning only what you need and paying for only what you use.
Instance Rightsizing
Upsize or downsize the instances based on actual utilization trends, so that the peak average utilization hovers around the optimal (70%-80%) range.
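As a rough illustration of how such a recommendation might be derived on AWS with boto3 (the two-week window, hourly granularity, and the 70%-80% target band are assumptions for the sketch, not a prescription):

```python
import boto3
from datetime import datetime, timedelta, timezone

TARGET_LOW, TARGET_HIGH = 70.0, 80.0  # illustrative "optimal" band

def avg_cpu(instance_id, days=14):
    cw = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=3600,               # hourly data points
        Statistics=["Average"],
    )
    points = [p["Average"] for p in stats["Datapoints"]]
    return sum(points) / len(points) if points else None

def rightsizing_recommendation(instance_id):
    cpu = avg_cpu(instance_id)
    if cpu is None:
        return "no data"
    if cpu < TARGET_LOW:
        return f"downsize candidate (avg CPU {cpu:.1f}%)"
    if cpu > TARGET_HIGH:
        return f"upsize candidate (avg CPU {cpu:.1f}%)"
    return f"correctly sized (avg CPU {cpu:.1f}%)"
```

In practice you would also look at memory and disk metrics (which typically require an agent) and at peak behavior, not just the CPU average, before changing an instance type.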
Cleanup of Unused Resources
Remove any orphaned resources that are no longer being used. Some of these include:
Unattached disks (delete)
Orphaned snapshots (delete)
Unallocated IPs (release)
Unused Storage (recommend moving to Glacier/ColdLine)
Cleanup of Underutilized Resources
Identify and recommend clean up of resources that have been provisioned but are not being actively used. A common example is dev environments that were not deleted after testing. Metrics that can be used to identify these types of resources are:
Minimal or no CPU utilization
Minimal or no disk activity
Minimal or no IO activity
Instance Scheduling
Turn resources on and off based on when they are needed, rather than running them all the time. Considerations include:
Based on spikes in usage patterns
Instance scheduling for dev/test servers that don't need to be run 24/7
Instance Modernization
Cloud providers regularly release new versions of their instance families. These are based on the latest hardware and are often faster and cheaper than the older instance families. Modernizing instance families to the latest versions can optimize both performance and costs.
Cleanup of Other Cloud Services
For managed services provided by the cloud provider, use metrics to identify whether the services are actually being used, and release those that are not needed.
Billing Optimization
To optimize billing processes, (1) leverage reserved/committed use discounts in the production environment and (2) enable committed use and spot/pre-emptible instances in the dev/test environment. This allows users to fully utilize the usage discounts provided by cloud platforms. Some of these discount categories make sense for specific application environments. Details are below:
Production Environment
Start with reservations for 30%-40% of servers to achieve immediate cost savings before the app stabilizes in the cloud.
Extend reservations to 100% of servers after the app stabilizes in the cloud.
Dev/Test Environment
Start with the roughly 10% of servers that need to run 24x7 (e.g., build servers).
Extend to 100% of servers after the app stabilizes in the cloud.
Use spot/pre-emptible instances for environments that can be torn down and recreated.
Integrate the use of spot/pre-emptible instances with DevOps build processes.
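To get a feel for the combined effect, here is a back-of-the-envelope calculation in Python. All numbers are hypothetical: an on-demand rate of $0.10/hour, a 30% committed-use discount, a 70% spot discount, and dev/test servers scheduled to roughly 50 of 168 hours per week. Actual discounts vary by provider, region, and commitment term.

```python
on_demand_hourly = 0.10       # hypothetical $/hour for one server
hours_per_month = 730

# Production: all servers on committed-use pricing (assumed 30% discount).
prod_servers = 20
prod_cost = prod_servers * hours_per_month * on_demand_hourly * (1 - 0.30)

# Dev/test: a few 24x7 servers on committed use, the rest scheduled
# to business hours (~50 of 168 hours/week) on spot (assumed 70% discount).
devtest_24x7 = 2
devtest_scheduled = 10
scheduled_hours = hours_per_month * (50 / 168)
devtest_cost = (
    devtest_24x7 * hours_per_month * on_demand_hourly * (1 - 0.30)
    + devtest_scheduled * scheduled_hours * on_demand_hourly * (1 - 0.70)
)

baseline = (prod_servers + devtest_24x7 + devtest_scheduled) \
    * hours_per_month * on_demand_hourly
print(f"optimized: ${prod_cost + devtest_cost:,.0f}  baseline: ${baseline:,.0f}")
```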
Automation Approach & Opportunities
If automation tools are not available, then you should build them in-house. Start small and grow the automation catalogue. Remember, no single tool will solve all cost management problems — build and integrate tools as services. Below are areas in which you can apply automation, along with some tips on how to do it.
Tagging and Labeling
Report on tag non-conformance
Automatically add certain missing tags such as “created-by” (use to track creators of orphaned resources)
Create and maintain virtual tags for cloud services that don't yet support tags in the inventory management system
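A minimal sketch of the first two items, assuming AWS and boto3; the required-tag set follows the reference tag list discussed later in this series, and the “created-by” value would in practice be resolved from an audit source such as CloudTrail:

```python
import boto3

REQUIRED_TAGS = {"BU/Cost Center", "Application", "Owner-email", "Environment"}

def report_tag_nonconformance():
    """List instances that are missing any of the required tags."""
    ec2 = boto3.client("ec2")
    offenders = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                offenders.append((instance["InstanceId"], sorted(missing)))
    return offenders

def backfill_created_by(instance_id, user):
    """Add a 'created-by' tag so orphaned resources can be traced to their creators."""
    ec2 = boto3.client("ec2")
    ec2.create_tags(Resources=[instance_id],
                    Tags=[{"Key": "created-by", "Value": user}])
```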
Reporting
Send daily reports directly to stakeholders on costs, projections, violations, and non-conformance
Resource Scheduling
Detect usage patterns and suggest server start/shutdown schedules (so that servers run only during their usage periods)
Inform stakeholders and automatically implement scheduling for dev/test environments
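A simplified sketch of tag-driven scheduling for dev/test instances, again assuming AWS and boto3. The “schedule-office-hours” tag and the 08:00-20:00 window are assumptions; a real implementation would run periodically (for example from a scheduled function) and honor per-team exceptions.

```python
import boto3
from datetime import datetime

OFFICE_HOURS = range(8, 20)  # assumed dev/test usage window, local time

def enforce_schedules():
    ec2 = boto3.client("ec2")
    in_hours = datetime.now().hour in OFFICE_HOURS
    # Only touch instances explicitly opted in via a schedule tag.
    tagged = ec2.describe_instances(
        Filters=[{"Name": "tag:schedule-office-hours", "Values": ["true"]}]
    )
    for reservation in tagged["Reservations"]:
        for instance in reservation["Instances"]:
            iid, state = instance["InstanceId"], instance["State"]["Name"]
            if in_hours and state == "stopped":
                ec2.start_instances(InstanceIds=[iid])
            elif not in_hours and state == "running":
                ec2.stop_instances(InstanceIds=[iid])
```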
Resource Cleanup
Automatically shut down instances/resources that don't have the required tags
Automatically shut down instances/resources that are not being used
Recommend and implement auto instance scheduling based on usage patterns
Remove unattached volumes and old snapshots (unless tagged)
Clean up other resources
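The unattached-volume and old-snapshot cleanup might be sketched as follows, assuming AWS and boto3. The 14-day grace period, the dry-run behavior, and the use of a “can-delete” tag as a protective opt-out are assumptions for the example.

```python
import boto3
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=14)  # assumed grace period before cleanup

def tag_value(tags, key):
    return next((t["Value"] for t in tags or [] if t["Key"] == key), None)

def cleanup_unused_storage(dry_run=True):
    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - MAX_AGE

    # Unattached volumes: status "available" means no instance is using them.
    for vol in ec2.describe_volumes(
            Filters=[{"Name": "status", "Values": ["available"]}])["Volumes"]:
        if tag_value(vol.get("Tags"), "can-delete") == "false":
            continue  # explicitly protected by the app team
        if vol["CreateTime"] < cutoff:
            if dry_run:
                print("would delete volume", vol["VolumeId"])
            else:
                ec2.delete_volume(VolumeId=vol["VolumeId"])

    # Old snapshots owned by this account.
    for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        if tag_value(snap.get("Tags"), "can-delete") == "false":
            continue
        if snap["StartTime"] < cutoff:
            if dry_run:
                print("would delete snapshot", snap["SnapshotId"])
            else:
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```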
Reservation Planning (committed use)
Track usage patterns and recommend instances for committed use
Track usage commitments and renew automatically (inform stakeholders of reservation expiry)
Track total savings and ROI for committed use discounts
Instance Modernization
Recommend instances that can be modernized to new instance types (i.e., cheaper and more efficient)
Spot/Pre-Emptible Instances
Track CPU load patterns for dev/test environments and recommend spot/pre-emptible instances
Tools Reference
The following table shows a representative list of tools that can be used for cost management at the various stages of cloud adoption. This is not an exhaustive list, as there are other tools in the market that fulfill niche requirements.
We hope that this blog series has helped you start thinking about cost management holistically. The information given in this blog is not limited to any one cloud, either — these principles can be applied to all public clouds. With private clouds, some of these principles can be used to optimize resource densification, rather than the direct cost itself. If you would like more information about how GlobalLogic can help your business with cloud adoption, please email us at practice-cloud@globallogic.com.
When focusing on accomplishing hard things like the transformation of your company into a product company, it’s all too simple to lose sight of the easy, quick wins. I’ve seen a lot of situations where the big picture items are working great, but the “little things” like status reporting, defect and progress tracking, build and deployment automation, test automation frameworks, performance and system monitoring, and so on—these get neglected.
These things are “easy” in the sense that while they take intelligence and work—often hard work—the solutions are generally well understood. Good people exist who can do them and do them well, using well-understood principles and tools, and without a whole lot of day-to-day attention from you once things get started. You do, however, need to get good people in place, monitor and tune their output, and turn them loose.
If you don’t take care of the “easy” stuff, it can turn around and bite you, derailing your big-picture initiative. You don’t want to find yourself in a situation where you cannot answer simple-sounding questions like “When will you be done?”, “How is it working?” or “Does it scale?” You should get these basic items attended to first—so you can focus on the “big stuff.”
Secret #7: Be realistic about your company, yourself, and your peers
Digital transformation needs moral courage and sometimes job-risking determination to make it happen. The “powers that be” and the “status quo” may give lip service to the need for transformation, but in many organizations, opponents will come out of the woodwork to oppose or undermine any concrete proposal that has transformation as its goal. This is because by nature, transformation upsets the established order and therefore affects the people who benefit from the established order. They may see themselves as losing out or as needing to think and act differently.
Understand what motivates you. You may believe your motives to be pure and selfless, but they will be tested. Also study your peers. Are you in a company where the people at your level and above are genuinely working toward the best interests of the company? Does your company have a mission that will be served better by this transformation? In these companies, while the people involved may have real disagreements on how to get there, their focus and commitment is to the company’s success. Where this is the case, an appeal to that shared value will help you get buy-in. In many more companies, at least some of the people will be more concerned with their individual careers than with the success of the company’s mission. Where that’s the case, you need to make a different type of appeal—one based on how transformation will help the careers of the individuals who support it.
In addition, there’s the matter of how long you intend to stay at your current company, how long your boss intends to stay, and how long before the investors want to take their profit. It may be satisfying to begin a transformation project, but unless you and your supporters are there to see it through, you may be making the system worse by taking it in a direction that will not be sustained. In many cases, the honest answer is “it depends on how this transformation goes.” I think that’s a fair basis to begin a transformation—provided you are indeed committed to seeing it through—or at least leaving it in good hands—once it proves its value.
Your company’s realistic ability to make a sustained commitment is a key factor in which transformation route you should take. The level of commitment often, but not always, depends on ownership. Private Equity (PE) owned companies, and often public companies as well, generally have an explicitly stated short-term focus. Family- or management-owned companies, as well as some venture-backed and some PE owned companies, may have a longer time horizon. Short or long, it’s not a “killer” either way—there are fast transformation strategies and slow ones, both with different risk profiles. The “killer” is a mis-match: adopting a long-term transformation strategy in a company with a short-term focus, or sometimes vice-versa.
In one large public company we work with, one of the senior architects described their legacy 15-year-old production software stack as looking like an “archeological dig” of technology trends. By examining the code, he said, you could go back in history layer-by-layer and see all the waves of technology in the partially-completed architectural updates that started but never finished over the years. This is not uncommon. When companies adopt a long-term gradual evolution software strategy, they need to be able to sustain a long-term focus or they will end up with layers of half-completed work—and more complexity than they started out with.
On the other hand, there are some short-term strategies for technology transformation that can often be deployed in quarters rather than years. However, these tend to be “high risk” in the sense that they have little impact on the currently deployed systems until operational. Some companies with a long-term evolutionary focus are unable to tolerate an up-front short-term risk. There is no right or wrong here, but you need to pick a strategy that suits your company as it really is, or your transformation initiative will be short-lived.
People tend to think of UX for connected products in three ways:
· The industrial design of embedded devices (like the iconic Nest thermostat)
· The UI design of companion smartphone or web apps
· Exploring novel, alternative UI paradigms. That could mean voice, gesture, or responsive environments which minimize explicit interaction.
These are all important. But they miss one major issue. Connected products are systems, which require a holistic approach to design. It’s possible to do a good job of industrial and UI design as individual parts, yet still end up with a poor overall experience. If the parts are designed in isolation, they often won’t hang together as a coherent whole.
Creating a good UX for a connected product also goes much deeper than the visible design touch points of devices and apps. Poor commercial and technical architecture decisions can undermine the perceived value of the product. They can also bubble up to shape physical and app UI design. Aligning commercial, technical and experience design factors is critical to any type of product or service. But there are particular challenges that are different about working with connected embedded hardware.
Here are 5 fundamental questions connected product developers should consider to ensure great user experience.
1. How does your product work… and how can it fail?
Introducing connectivity into hardware products introduces new possibilities. But it also creates new ways for them to fail. What were once simple everyday objects now have dependencies on power, connectivity, other devices and cloud services. All of these can (at least occasionally) fail.
To be fit for purpose, the perceived benefit of your product must outweigh the impact of any failures. Any product that users rely on should maintain some basic function in the temporary absence of connectivity. Will automated rules, e.g. for connected lighting, still run? Can you allow for basic local authentication of known users so that they can still control local devices in the absence of internet connectivity? For example, a connected power tool which may be used in a deep basement should not require constant cloud access to operate.
Decisions as to which code runs where — in the edge devices, in a local gateway, or in the cloud — can be critical to product and UX. They determine what will still work if parts of the system are unavailable. Explaining this to users in simple terms can be a challenge.
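As a toy illustration of that split, here is a Python sketch of an edge controller for connected lighting that caches its automation rules locally, so basic behavior keeps working when the cloud is unreachable. The endpoint, cache path, and rule format are all invented for the example.

```python
import json, time, urllib.request

RULES_URL = "https://example.com/api/rules"   # hypothetical cloud endpoint
CACHE_PATH = "/var/lib/lamp/rules.json"       # survives connectivity loss

def load_rules():
    """Prefer fresh rules from the cloud; fall back to the local cache."""
    try:
        with urllib.request.urlopen(RULES_URL, timeout=2) as resp:
            rules = json.load(resp)
        with open(CACHE_PATH, "w") as f:
            json.dump(rules, f)                # refresh the cache
        return rules, "cloud"
    except OSError:
        with open(CACHE_PATH) as f:
            return json.load(f), "cache"       # stale but usable

def should_light_be_on(rules, hour):
    # e.g., rules = {"on_hours": [18, 19, 20, 21, 22]}
    return hour in rules.get("on_hours", [])

rules, source = load_rules()
print(f"rules loaded from {source}; light on: "
      f"{should_light_be_on(rules, time.localtime().tm_hour)}")
```

The same pattern applies to local authentication: a cached allow-list of known users lets the device keep responding to them while the internet connection is down.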
2. Is your business model a good fit for user expectations?
Connected products carry a different cost structure to conventional hardware products. On top of the cost of manufacturing and distributing the hardware, there are ongoing costs involved in maintaining the service for the expected lifespan of the product. Underestimating these is a major risk to the viability of a business.
In October 2017, Canary slashed the level of service provided to free customers because it was proving too expensive to maintain. Removing night time recording, and cutting video clips of motion detection instances to 10 seconds, understandably made customers angry. Canary are now reinstating some of this functionality.
Increasingly, companies seek to cover ongoing costs through service subscriptions. Service offerings are also seen as a valuable opportunity to develop direct, sustained relationships with customers. But it can be tricky to persuade customers that a product they think of as a one-off hardware purchase should justify an ongoing monthly fee. A subscription to a door lock — paying a monthly fee just to open your own front door — is a hard sell.
A connected product business model needs to be a good fit for where users perceive the value to be. If you can frame your product as a service enabled by a (lower profile) device, consumers will perceive the value to rest with the service. They may then be more prepared to pay recurring charges.
If they perceive the value principally to rest in a device (which benefits from a service), ongoing payments are a harder sell. They will expect to get access to any data from their devices for free. To make a service subscription palatable, the service may need to be bundled with other benefits which are attractive in their own right. A thermostat might come with a heating maintenance contract. A door lock might be an enabler for a security monitoring service, or AirBnB management services.
3. How often do devices connect? How responsive are they?
Product developers aspire to create connected products that feel seamless. But this is often unrealistic. Delays and glitches are part of normal use for most connected products, and creating a good UX for IoT is often about handling this gracefully.
The need to conserve power means that many battery-powered devices connect only intermittently. So it can take time to get new data from them, or for them to wake up and check in for new instructions.
And when they do, the nature of internet networking means that they may take a little time to respond (latency) or occasionally messages may go missing (reliability).
People have mental models of the conventional internet that allow them to understand that sometimes downloads will run slowly, or Skype calls will fail. But we also have millions of years of evolutionary experience which tells us that physical objects respond to us immediately and don’t ‘lose’ our instructions. Internet-like glitches, experienced through the real world, can feel very disconcerting.
This can be serious enough to undermine a value proposition. The first generation of a popular connected doorbell took over 30 seconds to connect to the cloud service before even alerting the home’s occupier: enough time for most couriers to give up and leave on the assumption that no-one was responding.
Even if the product is working correctly, delays introduce doubts about reliability. If you turn a light on and it takes even a couple of seconds to respond, that’s long enough to wonder whether it’s actually going to work or not. Through a combination of latency and intermittent connectivity, some devices, such as battery-powered heating controllers, could take up to a couple of minutes to respond to an instruction. During this time, a user standing in front of them might have two displays giving conflicting information about the state of the system: which breaks some fundamental usability guidelines.
Product developers need to establish how quickly — at a minimum — a product needs to respond to users in order to fulfill its basic purpose, and ensure the technical design can meet that need. Where delays are unavoidable, ensure users know that something is happening: confirm to the user that their request is in progress, even if you can’t yet confirm whether it has been completed. If a device’s status may be out of date because it hasn’t checked in for a while, timestamp its data so it’s clear what’s happening.
Connectivity patterns also shape the data that connected devices gather, and thus the value and nature of the insights they can support. Smart meter electricity sensors are mains powered, and can send updated readings about 7 seconds apart. This is close enough to a reading of live power consumption to determine with some accuracy which appliances are likely to be on right now. A smart meter’s natural gas sensor is likely to be battery powered, because that is safer than running mains electricity next to a gas pipe. As a result it may only send readings of cumulative energy usage every 15 minutes. This can show patterns of use throughout the day but cannot reliably indicate whether an appliance has been left on by mistake.
And data from connected devices may often be recent, but isn’t always guaranteed to be 100% up to date. It’s important to communicate how old data is: for example, a fire alarm service that says ‘Everything’s OK’ in the absence of fire should offer a timestamp. If the data is 5 minutes old, that means there was no fire 5 minutes ago — not that everything is guaranteed to be OK right now.
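A small sketch of that timestamping idea: the reading carries the time it was taken, and the message shown to the user is derived from its age (the 30-minute “possibly offline” threshold is an arbitrary example).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Reading:
    status: str          # e.g., "No fire detected"
    taken_at: datetime   # when the device actually reported it

def describe(reading, now=None):
    now = now or datetime.now(timezone.utc)
    age = now - reading.taken_at
    minutes = int(age.total_seconds() // 60)
    if age > timedelta(minutes=30):
        return f"{reading.status} as of {minutes} min ago (device may be offline)"
    return f"{reading.status} as of {minutes} min ago"

r = Reading("No fire detected", datetime.now(timezone.utc) - timedelta(minutes=5))
print(describe(r))  # "No fire detected as of 5 min ago"
```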
4. Design not just for individual UIs but for interusability
Connected product UX is often thought of as the industrial design of devices, and the UI design of companion web or mobile apps. The two are treated as separate. But this often creates incoherence. Users experience them together as part of a system, and design needs to consider them at the same time.
A key design decision is determining which user-facing controls need to be on the hardware, vs in web/mobile apps, or both. Products of very similar types may take very different design decisions. For example, the Tado heating controller is a simple box with very few hardware controls. Most of the user interaction is handled by a smartphone app. By contrast, the Ecobee mirrors UI controls between a full-featured device and an app. Keeping hardware simple keeps the bill of materials down, because physical UI components add cost. It also makes the product easier to update over time, as features aren’t baked into a hardware UI. But if the user can’t access the app for some reason, they lose control of the device. Devices with full onboard UIs can support all features locally. But they will be more expensive to manufacture, and provide less flexibility for future updates.
Other options include putting a subset of key interactions on a hardware UI, with a full-featured app (e.g. the Willow breast pump, the GoPro camera). Or a device UI may support core tasks, with supplementary features offloaded to the app. For example, parental controls on the Nintendo Switch are available in a companion app.
Putting connectivity in a device can also change the nature of the physical UI it needs. Many physical controls, such as dials and switches, are used both to control the device but also to communicate its status. A washing machine dial shows that the machine is set to the delicates program. If the program can be changed from a remote app, then the machine UI needs to be updated. This could mean motorizing the dial so it can reflect that change. Or it could mean replacing it with a different type of interface which can change more easily, such as a touchscreen.
A second issue is figuring out how different UIs across what may be very different types of device can feel like a family. Appropriate consistency across terminology, visual language, and interaction patterns is important. But although this is an easy issue to grasp, it is a challenging one to implement. What’s more important? Consistency with platform conventions (e.g. iOS/Android)? Consistency between your own service/device UIs? Or consistency with industry standards or hardware conventions? The result is often a juggling act with many compromises. At the most basic level, it’s important to give the same features the same name across all interfaces. But even this can be a challenge when the same app needs to support multiple versions of legacy hardware. Perhaps device designers had different ideas about whether ‘auto’ or ‘timer’ was the ‘best’ name for the heating schedule function. Subtle touches can help to tie the experience together, such as the Nest app, which emits the same click as the thermostat bezel when the temperature is changed.
5. How can we prototype experiences in parallel with technical feasibility?
In software, iteration is easy. The current trend for lean product development encourages companies to rush out an MVP, learn from how it is used and evolve and improve it over time. Even if the value proposition turns out to be wrong, there is the option to pivot.
This rarely works with hardware. Devices need to be designed, specified, built and tested to fulfill specific requirements. Changing your mind about those requirements half way through the hardware development process can be prohibitively expensive in terms of time and rework. Pivoting can be nigh on impossible.
So it’s important to test the value proposition, and uncover core user requirements, early on. Doing this in parallel with hardware development means that changes can be made. But it means finding ways to prototype and simulate the product experience that don’t rely on functioning hardware.
Experience prototyping techniques for connected products do just this: simulating the experience of using the product for both design exploration and testing with users.
Video mockups can be compelling ways to explore the proposition. Bike navigation startup Beeline released a video of an imagined product on YouTube to test the proposition and win early investment. But even something as simple as writing the press release for an imagined product can be enough to gauge initial customer interest. And videos for customer testing interactions can be very low fidelity (cf. Method alumnus Martin Charlier’s technique using cardboard mockups and Instagram video).
Prototyping techniques for connected products also need to consider the whole experience. Conventional digital prototyping software focuses on interactions with a single app UI. But for the connected product, this is designing only half the conversation. IoT designers need to consider how users will interact with apps and devices at the same time. Again, simple changes to techniques make a big difference. Designers can map out process flows as swimlane diagrams spanning multiple UIs. Sketching storyboards, instead of screens alone, is a great way to map out interactions with the whole system and consider the context of use. Service design, with its focus on orchestrating interactions around an ecosystem of parts, offers some of the best techniques for IoT concept design.
In summary… always design for the ecosystem
IoT technology is evolving, and new business models and prototyping methods may emerge. But we think one thing about connected product design won’t change. If you want your product or service to offer users a coherent and compelling experience, you can’t design the components in isolation. You need to understand, and design, how they work together as networked parts of a system.
The cost management lifecycle for an enterprise landscape closely follows the path from migration to operations. The diagram below highlights the various parts of the framework.
Figure 1: Cloud Cost Management Framework
Cost control should be considered across the application lifecycle — from the initial planning, to day-to-day operations, to periodic architecture optimization. Many enterprises do this through a well-defined underlying governance framework that is optimized using various automation techniques. In this blog, we provide our experience-based recommendations on how to execute cloud governance, initial planning and sizing, and operational visibility and forecasting for enterprise cloud infrastructures.
I. Cloud Governance
Cloud Resource Ownership
Transform provisioning practices for the cloud through existing cloud management platforms or cloud-enabled CMDBs like ServiceNow.
IT teams can provide the enterprise governance and management infrastructure and practices, while individual LOBs are responsible for managing their application infrastructure as per enterprise best practices.
App teams should be responsible for the cost ownership of resources within projects (i.e., you provision it, you pay for it).
App teams should be responsible for tagging/labelling all created resources.
App teams should be responsible for the clean-up of unused resources.
Cloud Resource Provisioning
Define clear access control policies (i.e., who can provision what resources).
Build standard enterprise reference architectures and templates for provisioning resources (this should be the Enterprise Architecture's responsibility).
Automate the provisioning of reference architectures and templates.
Use a cloud-based configuration management tool where appropriate (check if existing configuration management databases provide cloud support).
Tagging and Labeling
Use tags for resource management and labels for resource identification, grouping, searches, and billing.
Define a list of labels and tags to be applied.
GlobalLogic recommends the following tags (as reference):
Identification/Classification Tags
BU/Cost Center
Application
Owner-email – application owner/group
Environment – Prod/Dev/Test/QA/Perf
Environment-Name – Prod1a, Dev4 etc.
Chargeback/Showback ID
created-by - User who created resource.
role = <db, appserver, proxy, etc.> - Classify by application role within a project
Operations/Automation Tags
schedule-* - Used to drive instance scheduling
can-delete = <true/false>
Can be added by app teams once resources are ready to be removed.
Can also be added by automation scripts, after untagged resources have been reported and no action taken.
Subsequently, a delete script will read this label and clean up this resource.
image-type - App type for baseline images, e.g. Apache, Cassandra etc.
image-version - Adds version ID of all the images of a certain app.
Reservation-expiry – Used to alert and renew reservations
Other tags can be added as per the business need.
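Put together, a single dev/test database server might carry a tag set like the following; the values are purely illustrative:

```python
# Example tag/label set for one resource, following the reference list above.
example_tags = {
    "BU/Cost Center": "retail-emea-4711",
    "Application": "order-service",
    "Owner-email": "order-team@example.com",
    "Environment": "Dev",
    "Environment-Name": "Dev4",
    "Chargeback/Showback ID": "CB-2291",
    "created-by": "jdoe",
    "role": "db",
    "schedule-office-hours": "true",   # drives instance scheduling
    "can-delete": "false",             # not yet approved for automated cleanup
    "image-type": "Cassandra",
    "image-version": "3.11-base-07",
}
```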
Inventory Management
Build or use a lightweight inventory management system to:
Track current cloud sprawl
Report data on current inventory, new resources, projected cost for new resources, etc.
Find gaps between what was planned and what exists in the cloud
II. Initial Planning (Sizing and Provisioning)
TCO and Budgeting
Use the max CPU/RAM for budgeting, but execute the initial sizing based on CPU utilization, etc. (especially for dev/test).
For dev/test, be sure to consider the uptime hours (i.e., 9x5 as opposed to 24x7) for TCO calculations.
Execute instance right-sizing based on performance characteristics.
Use on-premise monitoring data to arrive at a more accurate initial cloud sizing.
For new migrations, enforce budgets from Day 1.
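As a quick worked example of the uptime point (the hourly rate and server count are hypothetical), budgeting a dev/test environment at 9x5 rather than 24x7 changes the compute line of the TCO substantially:

```python
hourly_rate = 0.20          # hypothetical blended $/hour for a dev/test server
servers = 15

hours_24x7 = 24 * 7 * 52    # ~8,736 hours per year
hours_9x5 = 9 * 5 * 52      # ~2,340 hours per year

annual_24x7 = servers * hours_24x7 * hourly_rate
annual_9x5 = servers * hours_9x5 * hourly_rate

print(f"24x7: ${annual_24x7:,.0f}/yr   9x5: ${annual_9x5:,.0f}/yr "
      f"({1 - hours_9x5 / hours_24x7:.0%} lower)")
```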
Service Catalogs and Provisioning
Create IAM policies so that teams only create services that are needed by the app in that project.
Build IT-certified base images and templates for reference architectures.
Publish and enable self-provisioning through tools like ServiceNow.
Integrate with approval processes.
Complement provisioning policies with proactive reporting and automated resource clean-up to build awareness and discipline while controlling costs.
III. Operational Visibility and Forecasting
Reporting Approach
Daily Reporting with cost, utilization, non-conformant resources:
Automatically send daily reports directly to stakeholders with key data points.
Obtain intelligence by analyzing individual resource level data points and environment-level correlations.
Recommendations should be generated based on analytics; data points include:
No or low CPU, memory, or disk utilization, or utilization only during limited times (e.g., office hours for dev/test)
No or low network traffic
No login on VM
VM uptime (but no activity)
For cloud services, use cloud-provided metrics
Reporting and Automation Architecture
The following diagram describes the reporting and automation architecture for a cloud landscape:
Figure 2: Reporting and Automation Architecture
Data Points to Report
Cost (filtered by app/environment)
Daily, MTD, and projected monthly spend
Budgeted vs actual, and overrun projection
Alerts on any change in usage pattern and/or budget overruns
Utilization
Show unused resources + age + wasted cost:
Unattached disks
Orphaned snapshots
Unallocated IPs
Unused/unaccessed storage (recommend moving to archive: Glacier/ColdLine)
Show underutilized resources
Show individual instances
Show environments that have predominantly no utilization (e.g., dev27 is not being used)
Inventory
Current inventory
New resources created + corresponding cost
New projected monthly spend based on new resources
Conformance
List of resources without tags and labels
List of resources not conforming to naming conventions
List of instances based on older versions of baseline images
Recommendations
Rightsizing + corresponding cost savings
Reservation planning/committed use recommendations + corresponding cost savings
Instance/environment cleanup candidates (based on consistent low/no usage)
Instance/environment cleanup candidates (based on non-conformance)
Reserved/committed instance renewal alerts (for instances with approaching expiry dates)
Conclusion
Using the above best practices, enterprises can create an effective governance framework that proactively manages costs across the entire cloud infrastructure lifecycle. In the final installment of this blog series, we will provide recommendations for cost optimization and automation, including some popular tools currently in the market.
Coinciding with the release of their Future of Retail report, PSFK held their Future of Retail conference last week. The speakers and panelists were from a variety of backgrounds, but three key themes stood out to me from their talks.
Community amplifies brand message
Creating a community around your products, or leveraging an existing community and giving them the tools they need to feed their own fire is an area ripe with opportunity. Ron Faris heads up the SNKR team at Nike, and talked about how in the sneaker community, 15% are constantly seeking out the newest and most interesting products, and then hyping them to the other 85%. In some cases, they can even become a single point of sale, like when Nike partnered with David Chang to turn the Momofuku menu into an AR powered buyable moment. Instagram users could snap a photo and then share with their followers, resulting in a quiet spread of a “secret” shoe.
“Shopping malls should see themselves as evolution of community center,” says Melissa Gonzalez
Rachel Shechtman from Story talked about creating spaces that allow communities to come together, as Story did when it powered a shop for Mr Robot. “Store as community center is something I’m obsessed with,” she said, before asking how we can create community around spaces, and how shopping malls can reinvent themselves as the evolution of the community center.
Physical supporting digital
Piers Fawkes and Scott Lachut started off the conference talking about the end of the digital-offline divide. They talked about the increase in conversion for retailers offering “experiences”, and that Target is investing $7 billion to update their stores, digital and supply chain to “work together as a smart network”. Amazon acquiring Whole Foods starts to create that integrated relationship, as does the Walmart acquisition of Jet in 2016.
“Popups are retail with commitment issues — appear quickly, disappear before people get bored” says Ross Bailey
PSFK had recently published a report called “Why Retailers Should Program Stores Like Galleries”, so this felt like a logical step toward turning physical spaces into an enabler for digital purchase. Melissa Gonzalez, author of “The Pop-Up Paradigm”, talked about creating a stickier relationship with customers, and how physical spaces can help capture mindshare. Thinking of a store as a gallery or experience certainly starts to create that opportunity to be captured and intrigued in person.
Experience as a driver for differentiation
Experience really played out as the key point for much of the day. Rachel Shechtman said “Places used to sell things, now experiences sell things” — and her company has been built to capitalize on this.
Places used to sell things, now experiences sell things, says Rachel Shechtman
I posted Rachel’s quote on Twitter, and Missy Kelley (missy_kelley) from Hello, Alfred (and also my wife) responded with a spin on that sentiment: “I’d argue experiences sell experience and people are becoming less interested in ‘things’”. This is a complicated thought — how do retailers continue selling their goods if people aren’t interested in the goods themselves? Will we find retailers creating massive marketing experiences to convince people, temporarily, that they need something? Will a relationship remain with the brand?
Lee Anne Grant from Brandless touched on this with the idea of false narrative — a photo of an Italian chef on a bottle of tomato sauce. Maybe those labels were an initial, small, foray into creating experience.
What really piqued my interest was this thought from Marcela Sapone, CEO and Co-Founder at Hello, Alfred:
“Think about stores as places for experiences — Fifth Ave should be like Disneyland where people go to experience new things.”
This seems to strike the right balance: an interesting space for people to see, touch, and use products, so that we come to believe we want to spend our money on that thing.
That thought tied up the thinking of the day — a look to the future where physical is a playground, a place to discover and explore, but maybe not the place you go to buy something. We still have a good distance to go to figure out how to bring this vision to life, and get the stores of today moving towards a customer-centered mindset. Design certainly will play a large part in this, particularly where we can influence experience with human-centered understanding, keeping those experiences from becoming yet another gimmick.
Secret #5: Products are not projects — they are a way of life
One of the major differences between a post-transformation “product” company and a pre-transformation technology-aided business is that the conventional company thinks in terms of projects. The product company thinks in terms of versions, enhancements and maintenance of the product.
A “project” starts and stops. Work on a “product” never stops. A product is enhanced and evolved continuously, in a never-ending work stream. In fact, the more continuous that work stream is, the better: supporting an uninterrupted cycle of work is an explicit goal of many Agile methodologies.
The investment horizon for a product has a timeframe of years, not weeks or months. The team who works on a product also stays pretty much the same over a long period of time—the team is not formed then re-allocated, as in a project. This is because, simply, a “product” is software used to drive the company’s business and revenues. Unless the company goes out of business—or changes businesses—the need for a product is constant. A project, on the other hand, addresses a one-time need with a largely one-time solution. Since the defining characteristic of a product is that it drives revenue over a long period of time, it makes sense to nurture, sustain and enhance it on an on-going basis. If engineering efforts start and stop, the product will predictably suffer from incompatible waves of technologies, and it will lose its coherence.
A product is driven by a roadmap that is largely independent of external events, even major ones such as acquiring a new large customer. Many software businesses, even very large ones, do indeed need to respond when they win a large customer or deal. These customers often demand new features, new integrations, and other changes on their own timetable—which may or may not coincide with the product roadmap. In the healthiest product companies, these customer demands are accommodated by adding or changing priorities on the roadmap, not by dispensing with the roadmap altogether. Those changes that are not useful for the “base product” are done by professional services using supplied customization, integration and configuration “hooks” in the base product. In other words, acquiring a new customer becomes an exercise in how the core product can be improved and configured, rather than how it can be customized or “one-off” features added.
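To illustrate what such a configuration “hook” can look like in code, here is a deliberately tiny Python sketch (the invoice-formatting example is invented): the base product exposes an extension point, and customer-specific behaviour is registered as a plug-in rather than patched into the core.

```python
# The base product exposes an extension point; customer-specific behaviour is
# registered by professional services as a plug-in, without forking core code.
INVOICE_FORMATTERS = {}

def register_invoice_formatter(customer_id, formatter):
    INVOICE_FORMATTERS[customer_id] = formatter

def default_invoice_formatter(invoice):
    return f"Invoice {invoice['id']}: {invoice['total']:.2f}"

def render_invoice(customer_id, invoice):
    # Fall back to the base product's default behaviour.
    formatter = INVOICE_FORMATTERS.get(customer_id, default_invoice_formatter)
    return formatter(invoice)

# A customer-specific plug-in supplied by the PS team, on the customer's timetable:
register_invoice_formatter("acme", lambda inv: f"ACME-{inv['id']} / {inv['total']:.2f} EUR")

print(render_invoice("acme", {"id": 17, "total": 99.0}))
```

The PS team ships and maintains the plug-in; the core product team only maintains the extension point, keeping the roadmap intact.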
Budgeting improvements to a product as “projects” is generally not a good idea because of the sustained effort required to keep the product maintainable and supportable. Also, because products tend to be complex, you will see major productivity and quality improvements by having a continuously staffed team who knows how things work and who understand best practices for your system. Budgeting product work as a sustained effort with potential peaks and valleys for enhancements generally proves most effective. Also, if you need it, create a separate “professional services” (PS) organization who configures and customizes (via plug-ins and integration points) the product for a given customer, on the customer’s timetable. This PS team is generally project-based.
It’s hard for many companies moving through digital transformation to move from a “project” to a “product” mindset. The way work is budgeted, communicated and planned needs to change. However, this mindset shift is core to your digital transformation into a product company. By thinking and treating the software product as core to your business—rather than peripheral supporting infrastructure—you’ve taken a large step in your transformation journey.
This blog is the first of a 3-part blog series that identifies challenges that enterprises face in the cost management of their cloud infrastructures. This blog covers the major challenges and makes some key recommendations. Subsequent parts propose a comprehensive cost management framework and do a deep-dive into some of these recommendations.
Cost Management Key Questions
Cloud adoption is no longer an "if" but rather a "what, when and how." More and more enterprises are asking the questions, "What (to move to the cloud)?" "When (to move it)?" and "How (to choose the right architecture and services)?"
As enterprises move more and more workloads to the cloud, the first pain our customers feel is the sting of cost overruns. So what has happened? The budgets were planned. Some initial sizing was done. But almost immediately after a migration, costs are the first factor that starts causing headaches for IT managers. In this blog, we talk about some of the pitfalls and lay out a comprehensive framework for managing costs of cloud workloads.
From a cost perspective, there are four phases of a typical lifecycle that a workload goes through:
Planning
Migration
Operations
Optimization
Let’s start with the key questions that should be asked during each of these phases:
Cost Management Challenges
When starting their cloud adoption journey, enterprises sometimes do not consider the above questions, and they miss putting a cost management framework in place. This usually results in a situation commonly called the “cloud sprawl.” It means that the enterprise has lost visibility and control of its cloud landscape and costs. These situations lead to (often substantial) cost overruns. Some of the common reasons are listed below.
Cost Ownership
This is a key challenge. To utilize the full benefits of the speed and agility that cloud provides, modern IT usually provides a common services framework, wherein the business teams are allowed to manage the cloud resources for their applications themselves. While this is the recommended practice, cost ownership often falls through the cracks. We’ve seen customer situations where IT creates accounts and projects for business teams to use, and then hands them over to the business teams (but still owns the costing and billing).
What results from this arrangement is that the business teams get free rein to create resources, which they do — and often well outside of their allocated budgets. They are often neither aware of nor bothered by the mounting spend, since they are not the ones footing the bill.
This is usually made worse with the fact that IT does not have strong cost reporting mechanisms to bring visibility into the who and what of the budget overruns.
Budgets and TCO
Doing an initial cloud TCO is absolutely essential to arrive at a budget for your cloud landscape. When this is not done, stakeholders have no visibility into what their infrastructure is going to cost. Cost savings is often one of the biggest reasons for cloud adoption, but not doing this exercise results in a bill shock to the enterprise and often takes the steam out of the momentum.
Even when enterprises do a TCO exercise, they often do the TCO for the final production landscape. They sometimes miss taking into account the migration plan, DevOps processes, and Go-Live dates (and also do not sufficiently size for them). This causes situations where costs skyrocket even before the application is fully migrated. Dev/Test environments tend to severely bloat up and eat into the overall budget.
Visibility
Even when enterprises have done initial sizing and defined cost ownership, having day-to-day visibility into the costs is important. Because it’s very easy to create resources in the cloud (within minutes), waste becomes a concern. Resources may be created for temporary use but never shut down. We have also seen situations where hackers have obtained access to customers’ cloud accounts and created hundreds of servers. The problems with a lack of visibility can be summarized below:
Stakeholders do not have granular visibility and actionable insights into their cloud landscape
Continuous monitoring is not in place, resulting in month-end bill shocks
No available projections on cloud utilization trends
Governance
Building the correct cost governance is a key pillar of the overall cloud governance framework. Problems occur when some of the following governance structures are not put into place:
Tagging and labeling strategy is required for both automation and chargeback/showback. When a comprehensive tagging strategy is missing, it gets very difficult to dive into billing data and identify which application and group each resource belongs to and who created it.
Enterprise-level access control and provisioning policies, when not clearly defined and enforced for cloud, result in unauthorized actors creating resources. Controlling who can create which resources is essential to managing cloud sprawl.
Cloud governance requires behavioral change across organizations. Enterprises that try to retrofit existing processes that work on-premises to the cloud will lose the advantages that come with it. On the other hand, moving to cloud without training the various stakeholders on the new governance models will also result in lapses and corresponding loss in visibility and tracking.
Automation
Even when governance models are defined, for large landscapes, enforcing governance manually comes close to not enforcing it at all (for example, imagine tagging 1,000 VMs manually). When tools and automation strategies are not used and applied across the entire cloud landscape, IT teams always play catch-up and endure a lot of manual work to keep the landscape in shape.
Similarly, when cost management and remediation tools are not used, manual compliance, cost reporting, and optimization become simply untenable and are often abandoned.
Optimization
Public clouds are evolving fast. They already provide innovative features like autoscaling that are not available within on-premise environments. In addition, they provide innovative costing models and multiple discount options.
Lastly, they come up with new managed services that not only allow the customer to pay for just what they use, but also lift the management overhead for these services. Enterprises miss out on these benefits when:
Apps don’t utilize cloud features to optimize cost (e.g., autoscaling)
Enterprises don’t use cloud platform discounts
Enterprises don’t do periodic reviews for validating evolving application architectures
Approach and Key Recommendations
Based on our experiences with customer landscapes and cloud best practices, we have come up with an approach that can help enterprises control and optimize costs effectively.
Define and implement clear cloud governance model
Provide deeper visibility and actionable insights for cost management
Enforce governance via automation
Enable behavioral change and discipline using a combination of the above
While the cost management framework covers a lot of ground in the following sections, here are some of the key recommendations that enterprises can get started with immediately:
Define and implement governance and cloud provisioning methodology
Define and implement access control and ownership of cloud resources
Enforce resource tagging and labeling
Build lightweight inventory management for cloud resources
Build reporting and recommendations on:
Cost and projections
Utilization
Inventory
Non-conformance
Clean up non-conformant resources automatically (3 stage process)
Use reservations for production Instances
Use spot or pre-emptible instances for Dev/Test combined with instance scheduling