Archives

This blog post on software development and AI was originally published in 2009.

“Collector” is probably too strong a word, but I am definitely an aficionado of handcrafted items. While true hand craftsmanship has become rare and generally prohibitively expensive in the West, I’m fortunate to travel to places where handmade items are still relatively affordable. 

Over the years, I’ve managed to accumulate artwork, metalcraft, furniture, embroidered tablecloths, handwoven shawls, silks and other items that my family and I really love. I’ve been luckier in finding great women’s clothing items than men’s, but at least that has made me a popular gift-giver in my family! 

Having beautiful, handmade things in my life is a source of real satisfaction. I like the thought that an actual human being made something I have or use, and, I like to think, perhaps they cared about what they made and took pride that it was really good. Maybe they even hoped that someone like me would come along who would appreciate what they were making. 

A handcrafted item puts me in touch with a different way of life — that of the craftsperson. And the best of these items have an elegance and, well, “soul” to them that machine-made items just don’t seem to have.

When I’m looking at these items, it strikes me that my work is not so different.

Software is the last handmade thing in common use in the developed world.

For those of us in the software industry, that software is “handcrafted” is no great revelation. To us, it’s clear that beyond the technology (which, though sophisticated, is at least largely amenable to human control), the true difficulty in producing a software product lies in the human factors and the imponderables that human beings and human interactions introduce. 

Humans misunderstand directions, give unclear requirements, make mistakes and wrong assumptions, have prejudices and divergent career goals, are good at some things and bad at others, can act lazy or be overly aggressive, and generally are very human. 

Recommended reading: ChatGPT and what makes us human 

Though the technology is compelling, to me the people aspect of the business is the bigger challenge. How do you take a collection of imperfect human beings and get them to work together to quickly produce a quality product that does what you and your customers really want it to do? That’s the challenge, and why software product development is so much a human activity.

A few years ago, I acquired a small carpet I really love. It is a rather old-looking, tribal-pattern carpet, and the seller made extravagant claims about its origins and history. After a series of visits and hard-bargaining sessions, I finally bought it. But I still had doubts about whether I had really purchased a handmade masterpiece or an artificially aged, cheap, machine-made knock-off. 

After living with the carpet a while, I began to notice a lot of small asymmetries in the intricate patterns. A series of “gulls” or “C” shapes make up part of the border, for example. I began to notice that some of the “C” shapes opened to the right, and others to the left, and that while the two sides of the carpet were close to being mirror images of each other, they actually were not. 

There were a number of other such small asymmetries here and there throughout the carpet. After a few weeks of observing these imperfections, I became completely convinced that this was indeed a laboriously handmade carpet. It would be prohibitively expensive or even impossible to achieve this degree of asymmetry and imperfection by machine; the only way it could be done was by hand, knot-by-knot. 

I’ve also read that in more than one culture, such asymmetries are intentional flaws, inserted deliberately to avoid tempting fate by aspiring to make something perfect. 

While I think this philosophy is probably a good example of making a virtue out of necessity, I nonetheless appreciate the sentiment. It is, unfortunately, not for us humans to produce perfect work — at least, I’ve never seen it. Machines, maybe; but a human endeavor, no.

Learning to appreciate the rough edges.

As another case in point, I once tried to get a business suit made for myself in India, figuring the labor cost would be low — which it was. I had read that one of the hallmarks of a good tailor-made suit was buttonholes made by hand. I had also read that you can tell whether a buttonhole is handmade by looking at its side on the inside of the suit, facing the wearer. If the buttonhole is imperfect on the inside, it has been hand-embroidered; if it’s perfect on both sides, it’s machine-made. In the West, you will pay a lot more for a suit with “imperfect” buttonholes than for one with “perfect” buttonholes, because hand embroidery is a sign of extra, skilled effort.

However, when I asked the Bangalore tailor, “Do you put handmade buttonholes on your suits?” he looked embarrassed and responded, “Yes. We’ve been trying to save up for a machine but so far, we can’t afford one.” 

The tailor’s perspective regarding handwork and my own were quite different in this situation. To me, the value of the suit was increased by skilled handwork, even if the result was in some sense imperfect. To the tailor, the imperfections (and the extra time that came from the required handwork) were a negative. 

It is imperfection that is the hallmark of a handmade item, not perfection. Both the Bangalore tailor and I agreed on that point. But where I valued the “imperfection” in this case, he was embarrassed about it.

As consumers, our standard for software products really is perfection. 

We like our buttonholes perfect on both sides. Like that tailor in India, though, as software developers we are in a situation where no tools available to us will rapidly produce a perfect product. In large part, such tools are not even theoretically possible, because the goals or requirements for a software product are invariably “fuzzy” to a greater or lesser degree when we begin a project. 

Years ago, in the early 1990s, I was on the team that developed Rational Rose 1.0. I believed — as did a number of my colleagues at the time — that we were helping to create a new way of developing software. We felt that in the future, people would generate code directly from a graphical version of their software architecture, and would then be able to use the same tool to reverse engineer hand-modified code back into an updated graphical representation. 

Alternatively, you could start with an implementation, semi-automatically extract the architecture from it, and proceed from there. The round-trip nature of this process would overcome what we then saw as one of the major obstacles to good software architecture, which was keeping the architecture diagrams up to date with the actual implementation. 

Recommended reading: How to Hire a Software Architect

The reverse engineering piece, once fully in place, would allow people to effortlessly toggle between a high-level architectural view of their code and the detailed implementation itself, we thought.

Now, we’re fifteen years down the road. 

Why isn’t a rigorous architectural approach to software development widely used? 

Why don’t people architect their product and then just mechanically generate the implementation code directly from the detailed architecture? 

Surely enough time has passed since Rose 1.0 that its descendants and competitors could have made this approach completely practical if that’s what the market wanted. There are probably many reasons why this is not the approach companies actually tend to take today, but I would argue that a key factor is that people generally do not know exactly what their product is supposed to do when they start work on it. 

I would also argue that in most cases, they can’t know. Even in the case of a relatively mechanical port, the urge to add features, fix problems, act on mid-course learning, and exploit the possibilities of a new technology generally proves irresistible. And in real life, the resulting product will invariably be different from the original concept.

There is no tool yet devised that will mechanically turn a “fuzzy” or currently unknown requirement into one that is clear and unambiguous. There are definitely techniques that can help the product owner refine his or her vision into something less ambiguous: Innovation Games, Agile kick-off meetings, requirements as test cases, rapid prototyping and the “fail fast” approach, for example.  

And I can still imagine a world where completely unambiguous requirements are fed into a machine and something perfect pops out of the other end — like code from a Rose diagram or, to return to our tailoring analogy, a buttonhole from an automatic sewing machine. What I cannot imagine, though, is human beings producing specifications so perfect that what comes out is actually what is desired, perhaps even for a buttonhole.

Until humans are out of the requirements and specification loop (which I can’t imagine if the software is to be used by human beings) I think we need to live with imperfection in this sense. 

To be sure, our ability to implement requirements mechanically has steadily increased and will continue to do so. Programming paradigms like convention over configuration (Ruby on Rails) and aspect-oriented programming (Spring) are reducing the amount of code required to implement a new feature, shortening development times and eliminating systematic sources of error. Tools, languages, and reusable components and frameworks have vastly increased programmer productivity over the last few decades, and I am sure this trend will continue. But to date, people are very much part of the process of creating software as well as specifying it.
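To make the convention-over-configuration idea concrete, here is a toy sketch in TypeScript. It is a hypothetical mini-framework, not Rails or Spring: routes are derived from controller and method names by convention, so no explicit route configuration is written at all.

```typescript
// A toy convention-over-configuration sketch: the "framework" derives HTTP
// routes from class and method names instead of an explicit config file.
class ArticlesController {
  // By convention, "index" maps to GET /articles and "show" to GET /articles/:id.
  index(): string { return 'list all articles'; }
  show(id: string): string { return `show article ${id}`; }
}

// Derive routes from the controller's class and method names.
function routesFor(controller: object): string[] {
  const resource = controller.constructor.name.replace('Controller', '').toLowerCase();
  return Object.getOwnPropertyNames(Object.getPrototypeOf(controller))
    .filter((name) => name !== 'constructor')
    .map((action) => (action === 'index' ? `GET /${resource}` : `GET /${resource}/:id (${action})`));
}

console.log(routesFor(new ArticlesController()));
// -> [ 'GET /articles', 'GET /articles/:id (show)' ]
```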

Should we hope that human beings are eventually eliminated from the software development process? 

Right now, a large part of the work of a professional software engineer could arguably be characterized as identifying and eliminating ambiguities in the specifications, to the point where a machine can mechanically carry them out. A software engineer often makes dozens of decisions each day about what the software should do in a given situation not specified or anticipated by the product owner or requirements, often because the situations are considered “edge cases” or “too low level.” 

A theoretical device that completely eliminated these ambiguities would have to first identify them, and then surface them for resolution by the product owner at requirements-generation time. But would the product owner be able to cope with a huge volume of decisions about, say, out-of-memory conditions in a very specific situation many levels deep?

My guess is that while in the future “programming” will be done at a higher level, with better tools, more reusable frameworks and even perhaps artificially intelligent assistance, it will remain a human activity at core for years to come. 

As long as humans are better at understanding the intent of other humans than machines are, I think this must be the case.  

At some point, machines will probably become so smart, and the collection of reusable frameworks so deep, that AI systems can assemble better software from vague requirements than people can. Until then, however, I think we will have to learn to appreciate our buttonholes with one rough side, and use approaches like Agile that acknowledge that software development is a human activity.


These days, a modern home will have an array of smart gadgets, from automatic garage doors that close at a specific time every night to internet-connected devices that are programmed to set the temperature or adjust the lighting in homes. These advanced products give connected users remote access to control and respond to real-time events through a phone app or various forms of voice control, making it easier to keep homes organized.

What has contributed to the increasing popularity of these innovative technologies is how accessible, affordable and mainstream they are becoming, giving households a choice of options that reflect their budgets. However, one only needs to look back a few years to understand the extent of growth in American smart home trends. 

In 2018, smart home statistics show, approximately 29.5 million American households were using smart home devices. Between 2021 and 2022, that number grew by 6.7% to about 57.4 million homes actively using smart home devices, and by 2025 the figure is expected to reach 64.1 million households.

These figures indicate that smart home trends are here to stay, and as we progress into the future, they will develop an even stronger presence. The problem now for consumers is navigating the wide variety of products and selecting the best products for their homes. Before diving into trending smart products, let’s first look at why smart technology is so popular. 

Why are Smart Products Popular?

Smart products are internet-connected devices, often part of a home automation (or “domotics”) system, that allow users to control and monitor their connected devices remotely to suit their lifestyles. 

The key to why smart products are so popular is the automation technology that allows connected devices to perform everyday tasks, like turning off lights when a room is empty (a minimal code sketch of such a rule follows the list below). Here are some of the key benefits:

  • Added convenience and comfort: With a simple swipe, tap, or voice command, users can have smart devices manage everyday tasks according to their preferences. 
  • Increased safety and security: Smart home security systems, such as sensors that detect human activity, provide protection by automatically notifying local law enforcement when unwanted intruders are detected. Many users arm their smart home security before going on vacation, giving them peace of mind while away. 
  • Improved accessibility: Intuitive smart products also improve accessibility. Smart features like voice commands allow hands-free experiences, which is practical for the elderly or those with limited mobility. Connected devices improve their quality of life, adding convenience and ease. 
  • More efficiency and savings: Smart devices, like smart heating and cooling systems, are considerably more energy efficient and can save a household up to 50% on energy bills. When connected to smartphone apps, users can monitor and track their energy usage, regulating their home’s temperature to their preferences while saving on energy bills. 
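As promised above, here is a minimal sketch of such an automation rule in TypeScript. The Light and OccupancySensor interfaces are hypothetical stand-ins; a real system would talk to a vendor SDK or hub protocol such as the platforms discussed below.

```typescript
// Hypothetical device interfaces; a real system would use a vendor SDK.
interface Light {
  turnOff(): void;
}

interface OccupancySensor {
  // Invokes the callback whenever the room's occupancy state changes.
  onChange(callback: (occupied: boolean) => void): void;
}

// The automation rule: when the room reports empty, switch the light off.
function automateRoom(sensor: OccupancySensor, light: Light): void {
  sensor.onChange((occupied) => {
    if (!occupied) {
      light.turnOff();
    }
  });
}
```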

Recommended reading: The Economics of Digital Transformation [Whitepaper]

Current Smart Home Trends

Now that we understand why smart devices are popular, let’s look at current smart home trends. The smart home device market is growing at an impressive pace. In 2021, the global market was valued at US$84.52 billion, with an annual growth rate of 10.4%. By 2022, the market had grown to US$115.70 billion, a 13.97% CAGR, and by 2026 it is expected to surpass US$195.20 billion. 

While there is already a wide array of established smart home brands, new companies are also entering the market. With so many choices, knowing which products are reliable can be difficult. Therefore, below we have highlighted some leading companies in the smart home automation market:

  • Amazon Inc (Echo Dot with clock): This smart device integrates with various Amazon products and third-party apps like Spotify, Apple Music, and YouTube. Besides the time, the device also displays the weather and song titles. Additionally, it has a motion sensor, temperature sensors, a timer, and improved audio, making it a popular product in connected homes. 
  • Apple Inc (HomeKit): The Apple HomeKit is another popular product range because it allows users to control their smart home accessories quickly and conveniently. Users can control lights, security, and room temperature from their connected devices with color-coordinated icons, making it easier to view and manage different rooms in the house. 
  • Google LLC (Nest Hub): The Nest Hub has now been integrated into the Google Store, making it easier for users to manage their smart home products. There is a wide array of smart products, including audio, home security, and video entertainment, so users can mix and match the products to create the perfect balance of connected devices. 
  • Samsung (SmartThings Hub): Connecting devices to Samsung SmartThings allows users to perform an array of activities, such as controlling the lights, preheating the oven, planning family meals, and checking the front door, all with a single tap. The product is compatible with Android and iOS devices, making it convenient for users to program their preferred automation. 
  • LG Electronics (LG ThinQ): Similar to Samsung SmartThings, LG ThinQ is compatible with Android and iOS devices. It can also be integrated with various services, including Google Assistant and Amazon Alexa, making it convenient for users to combine different products. Additionally, it can be programmed to send helpful reminders, such as prompting users to clean their washer drains or change the refrigerator’s filters. All users need is an internet connection to monitor and manage their devices remotely.
  • ABB Ltd (Smart Home Solutions): Smart Home Solutions are controlled via connected devices so users can personalize their automation for blinds, lighting, heating, air-conditioning, and door communication. ABB is known for its energy-efficient products, making it a popular choice for those who want to purchase sustainable products. 
  • General Electric (Smart Appliances): Connecting to General Electric’s smart products via the SmartHQ app allows users to program various automations, such as laundry and meal planning. Users can also set helpful alerts and push notifications, and use voice control for smart appliances through General Electric’s partners, including Bose, Sonos, Google Assistant, and Amazon Alexa.
  • Siemens AG (LOGO!): LOGO is a compact controller with a cloud interface that is easy to program. Known for its versatility, it gives users access to the Internet of Things (IoT) to manage and maintain different automation devices.

All the companies above are well-established, with innovative home automation products that allow users to access and manage their homes remotely via a mobile device. Additionally, some semi-vertical software platforms enable users to mix and match different smart devices, offering more flexibility while staying connected and in complete control. Some of these companies and smart products include:

  • Meta (Portal): Meta Portal is a video calling device that lets users connect via Zoom and other apps. It has a built-in smart camera that automatically adjusts as a user moves, widening the view to keep everyone in sight. Portal also has Alexa built in, so users can ask questions, set a timer, and add items to their shopping lists. 
  • Xiaomi (Mijia Smart Steam Oven): This steam oven is popular with users because there are a variety of cooking modes, including steam and fry options. It also heats and cooks food fast and has five cleaning modes.  
  • Baidu (DuerOS): DuerOS products are embedded with AI speech and image recognition technology, making them popular devices to command and converse with. The technology has also been integrated into various appliances, such as Bluetooth speakers and home devices like television sets and home telephones, so users can control and operate them by voice. 
  • Tencent-Midea Smart Home Appliances: Tencent and Midea have joined forces to produce more interactive devices, improving the user experience with better coverage across smart home appliances. Midea devices are compatible with Amazon Alexa and Google Assistant, making it easier for users to control and monitor their homes. 
  • AT&T (Digital Life): This is a digital home security and automation tool that users can connect via their PC, smartphone, or tablet. It is a popular product because users can also set up the service, even when moving to a new home. 
  • Comcast (XFINITY Home): XFINITY Home provides 24/7 home security with video recording. The XFINITY smart camera devices have built-in sensors to detect motion, which are programmed to trigger the Kwikset Smart Lock, giving users peace of mind when they are away from home. 

Smart home devices have already penetrated our daily lives, with the connected home being a foundation for collecting and sharing home data. On one end of the spectrum are the sensors and devices that collect data and provide user interfaces, and on the other are the local and remote cloud services and analytics that add intelligence and value. The internet gateway is at the core of this connection of devices and services, while a layer of security spans the entire ecosystem.

[Figure: End-to-end components of the smart home ecosystem]

Recommended reading: Security for the Internet of Things [Whitepaper]

Voice is the Natural UI for the Smart Home

As a design-led engineering company, GlobalLogic has accumulated tremendous experience driving smart home automation for various global customers, from startups to technology leaders. From this experience, we’ve found that while mobile application interfaces are a great tool, it is easier and more natural to communicate and receive feedback by voice while at home.

Speaking to a device with AI that understands you and can execute your commands elevates the experience to another level. According to Juniper Research, global use of chatbots was valued at 586 million in 2019 and is expected to reach 7 billion by 2030. At the same time, voice assistants like Amazon Alexa have gained traction, with Alexa holding 62% of the global market.

This calls for product leaders at technology and service providers to invest in and improve voice interfaces, focusing more on personal home assistants and, ultimately, voice-controlled smart products that can be integrated into the home.

Amazon Alexa is a fantastic example: it drives voice-enabled control of smart home devices (among other skills). After years of development, Siri, Google Now, and Cortana are other sophisticated personal assistant technologies inspiring the smart home ecosystem.

We see many iterations around smart home voice-enabled functions. For example, Mark Zuckerberg has even created a home automation system called Jarvis, voiced by Morgan Freeman. The Jarvis system has an array of features allowing Zuckerberg to control everything in the house, including lights, music, and temperature, all via voice control. It also has a textbot connected to registered devices, allowing users to control devices via text. 

A fun fact about Jarvis: it is named after Tony Stark’s artificially intelligent computer J.A.R.V.I.S. in Iron Man, which stands for Just A Rather Very Intelligent System.

[Figure: Amazon Echo with Alexa voice assistant]

While there is momentum, hurdles remain

AI is an essential part of this human-machine interaction. Consider that even simple functions, such as switching on and off the lights, might be challenging considering all the ways the command could be given—“Shut off the light in the back bedroom” vs. “Turn off the lamp in the kids’ room”.

Teaching your smart home assistant to understand different linguistic nuances is essential for a smooth consumer experience. That’s where voice biometrics and intelligence come into play. You should be able to say, “Play my favorite song,” and have your smart home assistant recognize your voice and your preferences.

Consider also the possibility of combining a shopping service like Amazon Prime Now, an Amazon Echo, and a smart stove, and you can see how the complexity of commands can quickly escalate: “I want to order the ingredients for fettuccine alfredo for four people. I already have the butter and garlic. Deliver it today before 6 p.m. Start to boil the water 5 minutes before it’s delivered.” 

Rather than developing their own proprietary AI to support a smart home ecosystem, developers can work with the Alexa Skills Kit to add voice-understanding capabilities.
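As a hedged sketch of that programming model, here is what a minimal custom-skill handler looks like with the ask-sdk-core Node.js package. The intent name is hypothetical (a real skill defines it in its interaction model), and production device control would typically go through Alexa’s Smart Home Skill API rather than a custom skill like this one.

```typescript
import * as Alexa from 'ask-sdk-core';

// Handler for a hypothetical "TurnOffLightIntent" defined in the skill's
// interaction model.
const TurnOffLightHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'TurnOffLightIntent';
  },
  handle(handlerInput) {
    // A real skill would call the smart home backend here to switch the
    // light off before confirming to the user.
    return handlerInput.responseBuilder
      .speak('Okay, turning off the light in the back bedroom.')
      .getResponse();
  },
};

// The skill entry point, deployable as an AWS Lambda handler.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(TurnOffLightHandler)
  .lambda();
```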

To realize the opportunities of the smart home, the leading players will need to collaborate and form partnerships to ensure disparate devices and services connect in the background to meet that all-important desire for simplicity. Currently, there is a focus on the “big picture” benefit of the smart home, while steps are being taken to convert the smart home from a “good to have” to an “essential to have.”

Ultimately, security and safety are the primary concerns that must be addressed as smart home technologies evolve. For example, smart home voice commands may activate security-related functions. Still, they will require sophisticated recognition technologies to ensure a recorded voice cannot be used to activate these commands. 

Soon we’ll all have a chance to be like Iron Man with our own Jarvis or “Star Trek computer” on board to manage home, car, and day-to-day activities.


Across education, healthcare, banking, and more, connected device solutions have revolutionized how companies communicate. Direct communication with end users is essential across all channels including voice calls, video calls, SMS, web notifications, and social media. When done right, these consistent communication channels improve the user experience and drive revenue. Organizations are looking for modern software that provides a fully integrated solution.

Enter the Communication Platform as a Service (CPaaS), which has practical and impactful applications across every industry. In this article, you’ll learn what CPaaS is, how various platforms provide off-the-shelf CPaaS solutions, and how CPaaS is used in various sectors. You’ll also find evaluation parameters and guidance on selecting a suitable CPaaS provider to help inform your own search. Let’s get started.

What is CPaaS?

CPaaS is a cloud-based delivery model that lets businesses improve their communication channels end to end through seamless application integrations, without requiring in-house expertise in the underlying complexities of real-time communication.

Consumers expect great service across various communication channels such as instant messaging and chat, video calls, email, social media, and SMS notifications. CPaaS facilitates these communication capabilities with minimal spending on deployment and maintenance. CPaaS provides APIs (Application Programming Interfaces), SDKs (Software Development Kits), libraries, and unique components which help developers build and embed communication strategies in existing solutions. 
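For a sense of what these APIs look like in practice, here is a minimal sketch of sending an SMS with Twilio’s Node.js helper library (Twilio is one of the providers compared later in this article). The credentials and phone numbers are placeholders.

```typescript
// A minimal sketch of sending an SMS through Twilio's Node.js helper library.
// TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN and both phone numbers are
// placeholders; a real integration would use provisioned values.
import twilio from 'twilio';

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

async function sendAppointmentReminder(to: string): Promise<void> {
  const message = await client.messages.create({
    body: 'Reminder: your appointment is tomorrow at 10 a.m.',
    from: '+15005550006', // a Twilio-provisioned sender number
    to,                   // the end user's number
  });
  console.log(`Reminder sent, message SID: ${message.sid}`);
}

sendAppointmentReminder('+15005550001').catch(console.error);
```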

Recommended reading: Cloud-Driven Innovations: What Comes Next?

CPaaS offers small and medium-sized companies an affordable way to add communication streams and digitally transform their products. In addition, CPaaS enables unique solutions and use cases for delivering services. 

CPaaS vs UCaaS: What’s the Difference?

Like CPaaS, UCaaS (Unified Communications as a Service) facilitates communication between employees and their customers, enhancing communication without owning and maintaining the infrastructure. And like CPaaS, it also provides communication tools through the cloud, enabling teams to use standard messaging, video, and phone capabilities.

But while CPaaS provides unique APIs, SDKs, and libraries for integrated and customized application solutions, UCaaS offers unique integration capabilities with CRM tools such as Salesforce. There used to be a clear difference between UCaaS and CPaaS in their customization and API support options, but the lines are beginning to blur as many UCaaS providers have started providing APIs for customization. 

Here are a few key differences between CPaaS and UCaaS:

 

| CPaaS | UCaaS |
| --- | --- |
| Requires integration using an API/SDK | Ready to go without any developer intervention |
| Focused on customization of solutions | Focused on communication (employee-employee or employee-customer) |
| Can be initiated by the application | Mostly initiated by the user |
| Pay-as-you-go pricing model | Per-seat pricing model |

Emerging Use Cases for CPaaS

Healthcare

During times of peak COVID infection, healthcare systems were put to the test like never before, and many healthcare providers were forced to begin building or enhancing their applications and solutions. One innovation was providing secure telehealth video calls for remote assistance and patient consultation. Additionally, hospitals opted for CPaaS platforms to build messaging and voice solutions for communication between hospital staff.

Education

Online education has gained mass adoption in recent years and is now largely powered by CPaaS solutions that provide video calling and presentation. Education platforms can enhance learning services by adding interactive solutions like digital blackboards.

Banking, Financial Services & Insurance (BFSI)

Over the last decade, BFSI companies and organizations have increasingly digitized to keep pace with evolving customer expectations and security, privacy, and operational challenges. By using CPaaS, banks can enhance their applications. For example, many banks provide a dedicated relationship manager or customer service provider to customers through online chat or phone through their banking applications. Similarly, insurance companies now often use video calls to meet with customers.

Recommended reading: Cloud – A Great Refactor for the Financial Services Industry

Tips for Choosing a CPaaS Solution

Your choice of CPaaS can have both business and technological implications. The following are the major factors to weigh while evaluating your options and selecting a CPaaS provider.

Feature Coverage

The CPaaS solutions space is crowded, with some covering every aspect and others providing niche functionalities. Choosing the correct solution is important as it impacts the short-term vision of early market release and the long-term vision of future product expansion and maintenance.

API and SDK

One of the major differences between UCaaS and CPaaS is the customization options of APIs and SDKs. Ideally, you’re looking for a comprehensive solution for both API and SDK. For example, if the platform claims to provide a notification service using API for Android devices but lacks notification capabilities for iOS and web browsers, it’s not a comprehensive solution. In addition, there should be coverage for development platforms and languages for SDK, such as iOS, Java, JavaScript, and C#.

Community Support

Platform providers should have the infrastructure to support end-to-end environments for application development. Even when they do, however, developers may face challenges, and resolving these issues alone by trial and error can be time-consuming and resource-intensive. An active community gives developers access to support and a pool of expertise to help resolve issues.

Security and Compliance

Security and compliance are essential not only in regulated industries such as healthcare or BFSI but in general, given the inherent vulnerabilities of customer-facing communications and data. Look for security policies and a history of updates that safeguard usage and personal data.

Pricing

Consider licensing costs and the support and usage structure of each CPaaS candidate. In general, CPaaS providers claim to price per interaction rather than per seat. Once an application is launched, there is no turning back, which is why it’s important to consider the cost of a CPaaS platform thoroughly from the beginning.

Prominent CPaaS Providers Compared

Based on the latest Gartner CPaaS Review and Ratings report, three CPaaS providers have top ratings: Twilio, MessageBird, and Bandwidth.

CPaaS Features Comparison (as of Dec 2022)

| Features | Twilio | MessageBird | Bandwidth |
| --- | --- | --- | --- |
| SIP Trunking | Yes | Yes | Yes |
| SMS | Yes | Yes | Yes |
| Bulk SMS | Yes | Yes | Yes |
| Email | Yes | Yes | No |
| Bulk Email | Yes | No | No |
| Chat | Yes | Yes | Yes |
| Notification | Yes | Yes | Yes (limited) |
| Audio Call | Yes | Yes | Yes |
| Video Call | Yes | Yes | No |
| PSTN Calling | Yes | Yes | Yes |
| Conferencing | Yes | No | Yes |
| Voice Recording | Yes | No | Yes |
| Video Recording | Yes | Yes | No |
| Screen Sharing | Yes (limited) | No | No |
| Social Media | WhatsApp API | WhatsApp API, plus other social media | No |

CPaaS Parameters Comparison (as of Dec 2022)

 

| Parameters | Twilio | MessageBird | Bandwidth |
| --- | --- | --- | --- |
| API/SDK Coverage | Good support for both server and client SDKs (https://github.com/twilio, https://www.twilio.com/docs/libraries) | Good support for server SDKs; no client SDK (https://github.com/messagebird, https://developers.messagebird.com/libraries/) | Good support for both server and client SDKs (https://github.com/Bandwidth, https://dev.bandwidth.com/sdks/about.html) |
| Community and Support | Support and an active Twilio community (https://community.twilio.com/, https://support.twilio.com/) | Support (https://support.messagebird.com/) | Support and a less active developer community (https://bandwidthdashboard.discussion.community/, https://support.bandwidth.com/hc/en-us) |
| Security and Compliance | Certified ISO/IEC 27001; major compliance: HIPAA, GDPR (https://www.twilio.com/security) | Certified ISO/IEC 27001:2013; major compliance: GDPR (https://www.messagebird.com/security/) | Certified ISO 27001:2013 (https://www.bandwidth.com/security/) |
| Pricing | Pay-as-you-go plans; no cost for support (https://www.twilio.com/pricing) | Monthly and pay-as-you-go plans; additional support plans (https://messagebird.com/en/pricing/) | Pay-as-you-go plans; no cost for support (https://www.bandwidth.com/pricing/) |

Conclusion

Each application has a similar goal: to provide users with the best information or communication features inside a seamless experience. With a consistent need for application digitalization, CPaaS will continue to play an important role in improving customer communications in a wide spectrum of industries. 

With the rise of AI in the last few years throughout every domain, application-initiated communication is more prominent. We should expect to see CPaaS remain a significant partner in delivering quality communication options to end users for years to come.

Looking to modernize and personalize your company’s contact center? We help clients craft proactive, predictive customer experiences across channels and adapt quickly to your customers’ needs. Explore GlobalLogic’s data-driven customer experience services here.


Every project has its challenges and triumphs. In this particular example, GlobalLogic partnered with a multinational manufacturer and provider of animal care services to find an alternative to an existing application. Its limitations in client system deployment and application scalability for users and hospitals called for a robust, cloud-based Point-of-Care technology solution.

In this post, you can see how we tackled this complex project and overcame critical engagement challenges. We’ll share the lessons learned in QA; for example, how the customer QA manager gained dynamic insight into daily project objectives. You’ll also discover how each release and iteration drove improvements.

A few data points of note for this project:

  • Lines of code: 967,883 (front end) + 49,494 (back end) = 1,017,377 total
  • Project members: 274
  • QA team members: 64
  • Independent Scrum teams: 16
  • Delivered application modules or features: 248
  • Delivered user stories, enablers & change requests: 3,931
  • Valid defects raised through Release 1: 16,805

Our Technology Stack

| # | Area | Tools, Languages, Libraries |
| --- | --- | --- |
| 1 | Backend Development | C#, .NET Core 3.1 |
| 2 | Front-End Development | Angular, Angular Workspace, Next.js, Puppeteer, Angular Material, Syncfusion, Jest, SonarQube, TypeScript, HTML, SCSS, Node.js |
| 3 | Database | Cosmos DB, Managed SQL Instance (cloud DB, search index) |
| 4 | DevOps & Infra | Azure Cloud, Azure DevOps (Planning, Pipelines & Artifacts), Event Hub, App Config, Function App, App Insights, Azure Key Vault, SignalR, Statsig, Redis Cache, Docker, Cloudflare (CDN), Palo Alto (networks), Azure Kubernetes (container orchestration) |
| 5 | Requirement Management | Microsoft Azure DevOps: Epics, Features, User Stories, Enablers, Change Requests, Observations |
| 6 | Defect & Test Management | Microsoft Azure DevOps: Test Plans & Defects |
| 7 | Test Automation, Security & Performance | Protractor, JavaScript, Axios, Jasmine, Azure Key Vault, npm libraries, ReportPortal, log4js, Page Object Model, Veracode, JMeter, BlazeMeter |

Discovery, Proposal & Kickoff

June 2019 marked the beginning of our discovery phase. We learned that an animal hospital brand acquired by our client needed to replace its outdated system with one that could support 1,000+ hospitals and 1,000+ staff per hospital. By contrast, the existing application could only support 40 hospitals. 

The client sought a robust, scalable cloud-based web application equipped with the latest features for the pet care industry. It also needed the newest technology stack to replace the existing desktop application. 

After taking time to understand the business requirements, we sent a request to gauge the existing team’s capability to deliver Point of Care technology.

The Proposal

In October, five team members were hand-picked to deliver a proof of concept (POC) application. The main expectation for the application was to make it front-end heavy with cloud support. The team completed the POC application in December 2019. 

The client was satisfied with the POC application since the design met user interface (UI) expectations. 

The customized agile model was so well-designed to meet customers’ needs that the team won an award for their work in December 2019.

Recommended reading: POC vs MVP: What's The Difference? Which One To Choose?

The Kickoff

When beginning a project, it’s crucial to establish a team with diverse expertise. As it can be challenging to hire technical experts, we implemented a hiring plan to thoroughly vet applicants, which enabled us to quickly establish the Scrum teams required to begin the project.

In January 2020, the teams met in the India office to discuss GlobalLogic’s standards and practices, meet new team members, and review the POC project schedule.

Project Increments

PI0 - Planning & Estimation

Initially, we only had visual designs to help depict the customer’s expectations. Creating a list of initial requirements was challenging. 

After several technical brainstorming sessions, the teams deciphered the visual designs and created a plan for the project. This included an estimate of the resources and work hours needed to complete it, as well as test strategies. 

Recommended reading: 6 Key Advantages of Quarterly Agile Planning [Blog]

PI1 - Execution

Once the project was approved, we refined the requirements, evaluated potential gaps in knowledge, and formulated user stories.

PI1 began with domains such as [User And Staff], [Schedule And Appointment], and [Client And Patient Management]. After a few iterations, we added Admin Domains.

To create the graphical user interface (GUI) and application programming interface (API) automation, we established test automation for the POC and created a framework structure.

PI2 - Continuation

The development and testing of the POC application were on schedule. However, several problems arose with the [Ontology] domain, which had no frontend and was exclusively data-driven on the backend.

In response, quality assurance (QA) raised stacks of defects, flooding the system until the problems were addressed.

With the completion of API and GUI automation, development started to reduce the regression effort in future test cycles. We also set up a User Acceptance Testing (UAT) environment and a QA environment for testing and assessing user stories.

Recommended reading: Zero-Touch Test Automation Enabling Continuous Testing

PI3 - The First Cut

As corner cases increased, we bombarded the application with more defect hunting and heavier regression testing. We completed multiple test cycles and fixed the defects. 

Then, architects started their code standardization processes and helped to fix defects. After many evaluation cycles, we were ready to deliver the project to the customer.

PI4 - Project Scales

Given the customer’s satisfaction with the application, our team was asked to take on additional needs, including plans for the Electronic Medical Records (EMR) domain. A new tower (tower three) and team were established at a new location to create the EMR domain.

At tower two (Bangalore), there were two domains, [Orders] and [Code Catalog]. The team quickly discovered that both domains had technical challenges. 

In tower one, there was also a new domain, [Visit], an Azure Event-based domain with more problem statements.

QA Reforms & Process Enrichment

One challenge the customer QA manager encountered was the need for dynamic insights into daily project objectives. The solution came from the Azure DevOps (ADO) dashboard, whose dynamic queries made it easier to track project progress.

The team then identified, discussed, and documented the Test Automation Framework for the POC, intending to use automation to reduce the time and effort of the testing cycle. With consistent focus, time, and effort, the team implemented automation successfully, targeting 100% API automation and 65% GUI automation. 
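For a flavor of what such tests look like, here is a minimal Page Object Model sketch in TypeScript on the Protractor/Jasmine stack listed in the technology table above. The URL, selectors, and credentials are hypothetical placeholders, not the project’s real ones.

```typescript
// A minimal Page Object Model sketch on the Protractor/Jasmine stack.
// The URL, selectors, and credentials below are placeholders.
import { browser, by, element } from 'protractor';

class LoginPage {
  private username = element(by.css('input[name="username"]'));
  private password = element(by.css('input[name="password"]'));
  private submit = element(by.css('button[type="submit"]'));

  async open(): Promise<void> {
    await browser.get('/login');
  }

  // The page object hides locators behind intention-revealing methods, so
  // tests stay readable and a selector change is fixed in exactly one place.
  async loginAs(user: string, pass: string): Promise<void> {
    await this.username.sendKeys(user);
    await this.password.sendKeys(pass);
    await this.submit.click();
  }
}

describe('staff login', () => {
  it('lands on the dashboard after a valid login', async () => {
    const page = new LoginPage();
    await page.open();
    await page.loginAs('qa.user', 'not-a-real-password');
    expect(await browser.getCurrentUrl()).toContain('/dashboard');
  });
});
```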

The team also identified tools for non-functional testing, such as security testing, performance testing, resolution testing, cross-browser testing, globalization testing, keyboard testing, and scalability testing. Non-functional requirement (NFR) testing was a primary deliverable. 

The various processes were formally laid down and revised:

  • User Story Life Cycle 
  • ADO Defects Life Cycle 
  • ADO Tasks Creation & time logging 
  • Test Cases Design Guidelines 
  • Dev Environment Testing by QA

Tracking of QA work and regression testing became effective, and the Scrum and Scrum-of-Scrums (SoS) trackers were upgraded with several new ways to track the project.

Releases & Iterations

Release Part 1 (First 10 Iterations)

After the PI phase, the project delivery model changed, and we started working with a new feature-based approach. This created a solid foundation for Release 1.

We took many steps to make the project transparent, manageable, and well-documented. We tracked the solution design, high-level design (HLD), and low-level design (LLD) for each feature. For tech-debt activities, we implemented code sanitization iterations. Integration of user stories then began to capture the regression effort, and end-to-end feature testing followed each feature.

After implementing CI/CD, we began hourly deployments to the QA1 environment, ran sanity tests in the pipelines, and began building promotion controls. We then designated the QA2 environment for manual testing, and certification of user stories for the Scrum teams began.

Release Part 2 (Second 10 Iterations)

We conducted workshops with customers to estimate new domains and kicked off grooming for the domains newly added in Release 1, namely [Pharmacy], [Communication], and [Document Template].

Release Part 3 (Last 10 Iterations)

After the domains were stabilized, we conducted a regular bug bash and completed the final features for a few older domains. A few domains went into maintenance mode, while others had more features to deliver.

QA Challenges

We encountered many challenges throughout this project’s journey and would like to share a few, along with the steps taken to overcome them.

A. Increasing Functionality & Features - Automation 

As functionality and features grew, so did the number of test cases in the system, demanding significant effort across regression-testing iterations. 

Solution: We took several initiatives to gear up API & GUI automation:

  1. Framework Enhancements in libs and functions 
  2. Redesigning several aspects 
  3. Code sanitization and standardization 
  4. Prioritizing automation test cases
  5. Smart Automation by clustering the functional flows

B. Continuous Implementation & Deployments

The numerous Scrum teams involved in the implementation and deployment process introduced several constraints.

Solution: Several steps were taken to improve the customer experience: 

  1. Automated build deployments
  2. Hourly deployment from the master branch to QA1
  3. Sanity test execution in the pipeline on the QA1 environment
  4. Code promotion to the QA2 environment every 4 hours
  5. Regression test execution in the pipeline on the QA2 environment

Recommended reading: Experience Sequencing: Why We Analyze CX like DNA

C. Testing Layers

Various QA testing stages in multiple environments – including Dev, QA1, QA2, UAT, Stag, Train1, and Train2 – added to this project’s complexity.

Solution: A lengthy work item cycle with different states tracked the defects from new state to closed.

D. Reports & Statistics

We had to generate reports, statistics, and representations of work items, as ADO is not a great defect management tool and people were less familiar with it. 

Solution: We worked in multiple directions, breaking the problem down and solving it piece by piece (see the query sketch after this list):

  1. Extensive usage of tags. 
    1. While defect logging for environment identification. 
    2. For retesting of a defect in different environments. 
    3. Categorizing User Stories, Enablers, Change Requests, and Defects using tags for release notes.
    4. Categorization of blocker defects. 
  2. Extensive usage of Queries 
    1. Tracking defects raised by various teams for different features. 
    2. Tracking defects fixed and ready for QA. 
    3. Assignment for testing of defects on multiple environments. 
    4. Scrum Of Scrum - Defect Dashboards. 
    5. Preparing Release Notes. 
    6. Data submission for Metrics.
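As a hedged illustration of the tag-and-query approach above, here is a sketch of pulling a “defects ready for QA” list through Azure DevOps’ WIQL REST endpoint. The organization, project, state, and tag values are placeholders; the project’s real queries lived in ADO dashboards.

```typescript
// A sketch of querying Azure DevOps work items via its WIQL REST endpoint
// (Node 18+, global fetch). Organization, project, state, and tag are
// placeholder values.
const org = 'my-org';                   // placeholder organization
const project = 'my-project';           // placeholder project
const pat = process.env.ADO_PAT ?? '';  // personal access token

async function defectsReadyForQA(): Promise<number[]> {
  const wiql = {
    query: `SELECT [System.Id] FROM WorkItems
            WHERE [System.WorkItemType] = 'Bug'
              AND [System.State] = 'Resolved'
              AND [System.Tags] CONTAINS 'QA2'`,
  };
  const response = await fetch(
    `https://dev.azure.com/${org}/${project}/_apis/wit/wiql?api-version=7.0`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Basic ${Buffer.from(`:${pat}`).toString('base64')}`,
      },
      body: JSON.stringify(wiql),
    },
  );
  const data = (await response.json()) as { workItems: { id: number }[] };
  return data.workItems.map((item) => item.id);
}

defectsReadyForQA().then((ids) => console.log(`Defects ready for QA: ${ids.join(', ')}`));
```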

E. Finding Defects

It was crucial to locate any remaining defects in order to ship a quality product. 

Solution: We created specialized defect hunters to identify defects. We saw significant results in different domains with this approach. 

F. Defect Hunters 

Quality requires discipline, the right environment, and a culture of shipping quality products.

Solution: We identified and groomed specialized defect hunters, with encouragement and support for bombarding the application with defect reports. In a few domains this was carried out as a standing practice and achieved fantastic results, even in consumer domains. 

G. Flexibility

The team often worked around 15 hours daily to meet the client’s deliverables. 

Solution: Many managerial and individual initiatives were taken to achieve the milestones.

  1. Teams showcased commitment.
  2. The teams conducted numerous brainstorming sessions to be able to diagnose and solve problems.
  3. Extensive usage of chat tools. 
  4. Limited emails.
  5. Thorough communication. 
  6. A proactive approach and agility.

H. Conflict Management - Dev vs. QA conflicts

It’s often said that “developers and testers are like oil and water,” and indeed, there was friction when the teams collaborated. 

Solution: With patience, mentoring, and guidance from leadership, they were able to work together cohesively. For major problems, we implemented a two-way QA arrangement in which each QA team member worked closely with the Scrum teams.

Lessons Learned from Challenges and Bottlenecks

A. Requirement Dependency Management

Given the project’s magnitude and the multiple scrum teams involved, there were still areas where improvements could be made in the future. 

  1. There was too little coordination among domain product owners (POs) on required dependencies, causing problems for consumer domains as producer domains introduced delays and defects at each level of the project life cycle.

Solution: Having both onshore and offshore domain POs can enforce better communication practices.

  2. Many defects did not originate in a developer’s own code but arose from integration with various other functionalities and domains.

Solution: Lacking formal product requirement documentation, POs and developers deviated from the requirements or missed defects at integration points. Teams can reduce this risk with additional reviews of user story acceptance criteria (ACs). 

  3. Due to frequent requirement changes and gaps in communication, we encountered delays and defects. The project’s functionality and features were cohesive, with a high degree of interdependence. 

Solution: Because the features are tightly coupled for the end user, the customer could not isolate functionalities. Daily defect triage was conducted with the POs to reduce the gaps and finalize requirements; however, we were still unable to fully control the delays.

B. Locking Master

By locking the master branch during end-of-sprint regression, we lost time for other work items and next-sprint deliverables. 

Solution: For a few sprints, master was left unlocked, and code promotion was controlled through QA approvals of each work item. This solved the problem somewhat, but only temporarily; greater developer discipline improved it further and resulted in a regular cadence.

C. Sanity Failures at QA1 

Domains had to wait until another domain’s sanity failures at QA1 were resolved. 

Solution: We assigned other productive tasks to the team during this time.

D. Unplanned Medical Leaves

Unplanned medical leave due to COVID and medical emergencies.

Solution: With COVID restrictions, more teams could work from home, which helped to balance any progress lost due to unplanned medical leave. 

Recommended reading: 3 Tips for Leading Projects Remotely with Flexible Structure

E. Ad Hoc Work 

A high level of ad hoc work and activities was assigned that had not been planned yet was expected to be delivered. 

Solution: Later in the project, this work and the related tech debt were handled alongside regular development, reducing ad hoc work and allowing more planned work to be allocated.

F. Multiple Environments

Having multiple testing environments presented challenges for QA in producing high-quality products. 

Solution: We set a testing scope per environment: on the development environment, only positive scenarios were checked; on QA2, in-depth certification was done over the build; on UAT, only defect verification was ensured. This approach reduced a significant amount of work, though it came late in the project.

Project Highlights 

Some of the highlights from the project include: 

  1. Having Automation QA focus on scripting and Manual QA focus on defect hunting. 
  2. Not pushing the dev team to participate in functional testing.
  3. Cross-domain cohesiveness in the QA track to understand the overall product requirements for shipping.

We met the display requirements, and developers’ input helped improve the overall application. QA also provided various suggestions and observations, which helped enrich the user experience. With guidance from the project’s architects, we achieved stability throughout a complex engagement.

Every problem should be treated as a challenge to solve. In Agile, for example, an Epic is broken into user stories, which are broken down further into simple, achievable increments, and each acceptance criterion is then worked until the goals are achieved. 

As you can see, the team was effective in our mission and learned valuable skills along the way. If you’re presented with a complex problem, as we were, it helps to plan out the processes step-by-step. The more the problem is broken down, the more realistic its potential solutions become. 


We have entered a new era in how television content is created, delivered, and even defined. The digitization of print media, the evolution of streaming media, an explosion of mass content creation, and on-demand access to content are among the factors driving this transformation. 

What’s next in the future of television? 

And for businesses in media and entertainment, a more pressing question looms: will this evolution drive growth, or will the television market become stagnant?

The TV landscape has changed dramatically over the last decade. From DVRs to streaming services, the way we watch TV has significantly changed. As new technologies emerge, they often disrupt existing industries. We can see this demonstrated in the rise of streaming services such as Netflix, Hulu, Amazon Prime Video, and HBO Max, which meant that consumers no longer needed cable to watch their favorite shows. 

This shift has led to increased competition between companies that produce original programming. As a result, networks are looking to adapt to meet viewer demands. Let's examine how television has evolved over time and what companies need to focus on next to meet consumer demands.

Television’s Past

In 1926, Japan produced the first working example of a fully electronic television receiver, a system that employed a cathode ray tube (CRT) display with a resolution of just 40 scan lines. Now compare those 40 lines with the 4320 pixels (separate dots) of vertical resolution (total image dimensions of 7680×4320) of the current highest “ultra-high definition” television (UHDTV) standard used in digital television and digital cinematography, 8K UHD. The comparison between the first TV and today’s state of the art is rough but vivid.

[Figure: Kenjiro Takayanagi transmitted the picture of a Japanese katakana character comprised of 40 scan lines.]

A lack of available content is why UHD TVs have mostly been interesting tech rather than everyday devices in our living rooms. But with Netflix, Amazon, Hulu, and many other services now offering 4K streaming, and Comcast, Verizon, and Virgin all ramping up 4K sports and movies for their platforms, that excuse is firmly vanishing.

Still, let’s be honest. We’re reaching a point when it’s hard or even impossible for the human eye to see the difference in resolutions, which is why manufacturers will shift their focus toward image quality (e.g., color scheme and black levels). For example, I am using the HDR feature on my phone to edit pictures. It’s a method of obtaining more significant variance in contrast and color. This high dynamic range technology is becoming essential for modern TVs.

Television’s Present

Recent studies show that more people across age groups are moving away from traditional cable TV. On average, families can save money by selecting a couple of popular streaming services over standard cable, and they avoid contracts and can enjoy ad-free viewing. It’s no wonder people are moving away from cable TV.

The media industry is progressing and transforming significantly. The growing number of “cord cutters” and emerging group of “cord nevers” just confirms this trend.

The future of television is unlikely to result in a shift back to cable TV, as video streaming technology and content improves.

The latest trends also show that TV is slowly but steadily merging with social media. I’m not talking about Facebook pages for TV channels or comments on live shows, but social channels partnering with major media industry incumbents to host video content on their platforms. We see more news streams on Twitter, Facebook, and other platforms, and Facebook even invests and pays for creating unique live video content. At the same time, Google is launching a streaming bundle of channels under the YouTube umbrella.

These blurred boundaries aren’t just on social media. While the traditional media industry still seems to be robust, the disruption created by new online digital video services is massive. Cable networks, telecom operators, and traditional content producers are all trying to rethink their current business models and find solutions to capitalize on modern technology and retain a large user base. 

Even though traditional media industry players provide slightly different ways of consuming entertainment from newer online video services, at the end of the day, all of them are competing for viewers and utilizing the same revenue models (e.g., advertising or subscription).

The widespread deployment of broadband internet access, combined with many connected devices (e.g., tablets, phones, STBs) and their respective software solutions, have given viewers access to high-quality video content anytime and anywhere. This effectively made distribution almost free to the end user. 

According to Statista, US viewers spent an average of 8 hours and 5 minutes on digital media each day in 2021, and digital media’s share of how users spend their day continues to grow each year.

There’s no need to stick to broadcaster scheduling anymore. Even many traditional broadcasters and providers are distributing their content through OTT software video platforms. Of course, each company differs in its approach to mitigating the current market situation. Some offer smaller channel bundles delivered via their online streaming services, while others try to integrate content production with distribution.

Recommended reading: Best Practices for Managing Video Streaming Platforms

Television’s Future

In light of all these new trends and changes, where should media companies focus their attention for innovation and R&D?

1. User Experience

The first and most important aspect is user experience. Users usually don’t care about different delivery and consumption technologies. They just look for the best content with an intuitive platform and high-quality resolution.

Many of my friends get frustrated because of all the different devices and remote controls in their living rooms (e.g., STB, Smart TV, Xbox, and Google Chromecast). Similar feelings arise when you jump between dozens of different apps to get your desired content. 

Ideally, there should be a universal search to manage the flood of content and for situations when you know exactly what you want to watch. Users should also be able to channel surf when they want to relax and explore — like the traditional TV experience.

A successful company or service will always put the customer first, but it’s also important to go beyond just a momentary user experience. Companies must work on long-term product strategies (i.e., employing new technology and business models) rather than simply working on current products and trying to get the most out of existing revenue models. This will allow them to better personalize their offerings, deliver differentiated value, and ultimately gain new users and retain existing customers.

2. Data

The second important aspect to focus on is data in content distribution and advertisement. Businesses shouldn’t underestimate user data. Relevant content distribution and targeted advertisement are based on user data and machine learning capabilities. 

Companies that utilize this wisely can provide a better user experience and boost their business, thus gaining a tremendous competitive advantage in the market. I think big data and analytics offer a good opportunity for OTT providers. Growing a user base from both “cord cutters” and “cord nevers” will lead to increased customer data, which can positively impact revenue through improved analytics and targeting.

3. Content

The final important element is content. It has always been and will remain a key part of the media industry. TV services that provide as much original content as possible will succeed (although this does not always imply producing their own movies). 

I should note that there is a high probability that the role of super aggregator will be occupied not by established TV and video service providers but by companies like Google or Facebook. One interesting factor that can contribute here is the growing amount of amateur content. Many children and young adults subscribe to at least one amateur YouTube channel, Instagram creator, or vlog. Even though most creators make this content on their smartphones or GoPro cameras, it still attracts millions of viewers.

Top social media and video content creators are creating content right on their smartphones, a trend that will have implications for the future of television.

On the other hand, content is something that can impede the media industry’s progress. Even when all necessary technology solutions are in place, media companies can struggle with commercial deals to get content into their systems. Rights-holders often restrict various aspects of content delivery to a particular channel or service, country, or date interval. This significantly affects the user experience, forcing us to jump from app to app, although I think such restrictions are part of an old-school approach to media and won’t change anytime soon.

Recommended reading: Digital Rights Management in the OTT Ecosystem

The Future of Television is Still Bright 

In this environment where digital technologies are rapidly changing the media landscape, it is crucial to understand how consumer behavior is trending in order to develop effective strategies for reaching audiences.

Our goal is to provide insights into what drives consumer decisions and behaviors, and to help provide solutions that meet consumer demands. We help media and entertainment companies including OTT brands, broadcasters, studios, and ad tech providers design and develop innovative, next-gen solutions and platforms that captivate audiences and generate revenue. Check out our Media Software Development Solutions & Services to learn more.

Keep Reading:

Since the AI-driven chatbot “ChatGPT” was introduced to the public in November 2022, it has been a hot topic for discussion. The ability of AI-based technology to perform characteristically ‘human’ tasks such as telling stories, writing code, authoring poetry, cracking jokes, and composing essays on virtually any topic has shocked and astonished many.

These activities are among those that we think of as particularly human. If a software package can do these very human tasks, what does it mean to be human?

I’m pretty sure that humanity has asked a variant of this question every time a new technology has appeared. Probably the invention of the wheel was greeted with dismay by some because lifting and carrying items—or transporting those items on a horse or donkey—was thought of at the time as a human or animal task. Not something to be done by an inanimate object such as this new 'wheel' gadget.

Does ChatGPT spell the end of human creativity?

The AI-driven chatbot invention strikes particularly close to home for me, however, because what I’ve always seen myself as being good at is making associations between concepts that might seem very different. This can be as simple as answering a question from a colleague such as, “Where has GlobalLogic done something similar to this project before?” My experience, memory, and ability to make associations have served me well in answering this type of question.

Such questions are a bit harder to answer, and require more creativity, than might appear at first glance, because GlobalLogic has hundreds of clients and does literally thousands of projects per year. Associations can also happen across many dimensions: similar technology, similar business problem, similar situation, and so on. GlobalLogic of course has search and other electronic means of answering most such questions. Nonetheless, for the hard or critical ones, those requiring ‘lateral’ thinking, I’ve been a good resource and am frequently called on to answer this type of question.

Recommended reading: The AI-Powered Project Manager

Likewise, I enjoy writing: essays, stories, and even the occasional bit of poetry. I think writing is good when people can relate to the author’s experiences or narrative, and when it makes sometimes unexpected associations that people might find interesting, funny, or engaging. When I write, I certainly aspire to do this. What is surprising, and a bit disconcerting to me and I think to others, is that the AI-driven ChatGPT does a pretty good job at both! I’ve seen ChatGPT make some fairly surprising, but valid, associations, and it can even describe situations in a way that is emotionally moving. Its grammar is also fluid and readable.

Relatable story-telling and surprising associations were generally thought of as uniquely human activities requiring creativity. The fact that a mechanical process can do both, and do them fairly well—even in its relative infancy—is disconcerting to many of us. 

Advances in AI are helping us redefine what it means to be human.

However, much of creativity—human or otherwise—has always been about forging associations between items previously thought to be different. For example, between green-colored rocks and copper; between “my love” and “a summer’s day”; between space and time. Indeed, it would be more surprising if a software process that can read, process, and classify all of the literature; all of the scientific knowledge; and literally everything written, did NOT make some surprising connections. The programming challenge would be more around pruning the possible associations for relevancy, rather than generating the possible associations in the first place.

In prehistory, the inventions of storytelling and drawing, and many thousands of years later, of writing, were considered milestones in the human journey. All of these enabled one person to leverage the experiences gained by others, expanding what a single person could know and do. 

The introduction of the printing press 500 years ago multiplied this capability by making the writings, drawings, and stories of others available to a wider audience. More recently—arguably starting in the 1990s—the large-scale digitization of printed content, along with the generation of digital-native content such as blogs and websites, eliminated the need for the physical production of printed media before information could be consumed. This had the effect of drastically lowering the cost of distribution and making more content available to a wider audience than had ever before been possible.

Many of us did not appreciate the full implications at the time: digitized content is also machine-readable. Therefore, not only people but also software can use it to ‘learn’. We knew this in a limited sense, with powerful search engines and knowledge digests such as those provided by Google, Microsoft, and others being part of our lives for decades. However, a general-purpose, interactive AI that itself digests and synthesizes this information in ‘creative’ ways is new to many of us and has become a fact we must all come to terms with.

Recommended reading: Cloud-Driven Innovations: What Comes Next?

Throughout the history of technology, many inventions and discoveries have forced people to rethink and redefine who they are, and what it means to be ‘human.’ One such instance was the Copernican revolution in the 1500s, where it became widely accepted that the Earth goes around the Sun rather than vice-versa. This required a major shift in humanity’s thinking about our central role—or lack thereof—in the universe. But many smaller inventions and discoveries have had deep consequences on our individual identities when our identity has become tied up with a particular capability.

One example from American Folklore is told in “The Ballad of John Henry.” 

John Henry is a man who works building the railroads of pre-Civil War America (before 1861), using a hammer to manually pound in the spikes that held the tracks. His pride is his speed and physical strength. When the technical innovation of the steam drill is introduced, John Henry is defiant and refuses to admit that this new mechanical device could perform his job as well as he could. He says to his boss (the “captain”), who has introduced this new machine:

John Henry said to his captain, 

"A man is nothing but a man, 

But before I let your steam drill beat me down, 

I'd die with a hammer in my hand, Lord, Lord,

I'd die with a hammer in my hand."

In the ballad, in a contest between man and machine, John Henry does indeed outperform the first version of the steam drill. However, he works so hard to do so, he dies from a heart attack in the process. And we can only suppose that future versions of the steam drill would outperform any human’s best efforts.

We’re being challenged now by the “creative” AI.

John Henry clearly identified with his physical strength and speed—to him, that was what he was good at, and what had become his identity. He believed his physical strength and speed made him worthwhile, both as a person, and—rightly or wrongly—also as an employee. 

With this emerging technology, those of us in the professions, as well as all “creative” types (doctors, lawyers, poets, writers, artists, researchers, inventors, engineers, and yes, CTOs) now also face a new form of ‘competition’ for what we believe we have to offer the world, and for what we do best. This might be seen as poetic justice or karma, since innovations with the potential to change how we see our value, including this one, have originated from this very group.

Those of us who identify with our creativity are now challenged by a new technology: what we might call the “creative AI.” As in John Henry’s situation, it’s clear that a machine, an AI, can now start to do things we thought of as uniquely human and, to some extent, uniquely “us.” It’s also clear that, like the steam drill, even if the creations generated by this new technology are somewhat primitive today, they will become increasingly better over time.

We can either face this fact, even if it means re-assessing what it is that truly makes us ‘human’ and valuable to others, or we can fight it and deny that machines have any role to play in the creative/generative process. The latter course, I fear, will result in an outcome along the lines of John Henry’s contest with the steam drill.

As creative people, we can let the fact that there are now creative machines (AIs) detract from our feelings of self-worth, and make us fear for our future and our jobs. On the other hand, we can accept these AIs as a fact, and embrace the possibility that human creativity coupled with AI creativity might produce results that are truly awesome. If John Henry had leveraged his skills and experience and learned to co-exist with or even to operate that steam drill, we would have missed out on a great American folk ballad. However, I think that John Henry—and his employer and humanity—would have been better off.

More helpful resources:

If you missed the previous posts on Deploying a Landing Zone with AWS Control Tower or you’ve not had much experience with the service, we’d recommend reading Parts 1 to 3 before continuing.

  • Part 1 - Deploying AWS Control Tower
  • Part 2 - AWS Control Tower Post Configuration Tasks focusing on Organisational Structure and Guardrails
  • Part 3 - AWS Control Tower Post Configuration Tasks focusing on IAM Identity Center and Provisioning New AWS Accounts

In this post, we’re going to walk through how you can start customising Control Tower using the Security Reference Architecture (SRA). The SRA utilises Customisations for Control Tower (CfCT), which deploys a DevOps pipeline that works with CloudFormation templates and Control Tower lifecycle events.

By no means is this the only way of customising the Landing Zone that Control Tower deploys, but it is the approach the previous AWS Landing Zone solution was based upon, and therefore the one most users will be familiar with in terms of setup and configuration. It does have some drawbacks, though: it is single-threaded and therefore slow in large environments.

There are alternatives if CfCT doesn’t fit your needs, such as AWS’s Account Factory for Terraform (AFT) or the Landing Zone Accelerator on AWS.

Why would I want to customise Control Tower?

The easiest way to answer this question is simply that whilst Control Tower provides the foundations for a Well-Architected Multi-Account Landing Zone, it’s not perfect.

In terms of AWS Services, Control Tower is still in its infancy, and whilst AWS is constantly adding new functionality and guardrails, some basic best practices still aren’t there natively. For example, in Part 2 we mentioned that AWS Config doesn’t get configured in the Management Account, even though it is in every other Member AWS Account.

The reality is, there is no one-size-fits-all; the majority of organisations will need to tailor the Landing Zone to meet their specific security and governance requirements.

Enable Trusted Access for CloudFormation StackSets in AWS Organisations

If you already have Control Tower set up, trusted access may already have been enabled for you, and this next section might not be relevant. However, it’s always worth double-checking just to be safe.

  • Login to the AWS Management Console using an Account with administrative permissions and navigate to the AWS Organisations Console. This should be done within the Management Account.
  • Click Services.
  • Scroll down to CloudFormation StackSets and check that its Trusted Access is set to Access enabled. If not, then Click CloudFormation StackSets and then Click Enable trusted access.

Configure an AWS CLI Profile to the Management Account

  • Establish an AWS CLI Profile to the Management Account with administrative credentials via the AWS CLI using either a Command Prompt or from Powershell:

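With AWS CLI v2, this is done with the interactive SSO wizard; the values it prompts for are described below:

    aws configure sso
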
  • In the SSO start URL, type the URL of the SSO login page, for example, https://d-1234567890.awsapps.com/start. This can be found by logging into the IAM Identity Center Console and looking for the AWS access portal URL in the Settings.
  • In the SSO Region, type the AWS Region that was used as the Home Region when deploying Control Tower, for example, eu-west-2.

A web browser will then open prompting for login credentials if you’re not already logged in.

  • Login with your Username and Password.
  • Click Allow.
  • Select the AWS Management Account using the cursor keys.
  • Press Return for the default client Region and the default output format.
  • For the Profile name, use something memorable as this can be anything, for example, ct-mgmt.

Deploying the SRA Common Pre-Requisites

There are a few things that need to be installed on our local device as a precursor for this part, including Git, Bash Shell, the AWS CLI v2 and 7-Zip. The following instructions assume a Windows device.

  • Clone the SRA Source Files from GitHub via either a Command Prompt or from Powershell:

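Assuming the standard aws-samples repository location, the clone looks like this:

    git clone https://github.com/aws-samples/aws-security-reference-architecture-examples.git
    cd aws-security-reference-architecture-examples
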
Now that we have the SRA source files locally, we need to start creating some CloudFormation Stacks in our Management Account using the YAML templates within the source. These templates set up the functionality for SRA to work before we even install the Customisations for Control Tower solution.

  • Launch the sra-common-prerequisites-staging-s3-bucket.yaml via the AWS CLI using either a Command Prompt or from Powershell:

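A minimal sketch of the command, assuming the template sits in the common pre-requisites folder of the cloned repository and using the ct-mgmt profile created earlier (bash-style line continuations shown):

    aws cloudformation deploy \
      --template-file aws_sra_examples/solutions/common/common_prerequisites/templates/sra-common-prerequisites-staging-s3-bucket.yaml \
      --stack-name sra-common-prerequisites-staging-s3-bucket \
      --capabilities CAPABILITY_NAMED_IAM \
      --profile ct-mgmt
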
  • Package and upload all the SRA Solutions to the Staging S3 Bucket via GitBash:

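The SRA source includes a packaging script for this step. The script path and argument below are assumptions, so check the repository documentation for the current usage:

    # Packages each solution's Lambda code and uploads it, with the templates, to the staging bucket
    ./aws_sra_examples/utils/packaging_scripts/stage_solution.sh --staging_bucket_name <your-sra-staging-bucket>
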
  • Launch the sra-common-prerequisites-management-account-parameters.yaml via the AWS CLI using either a Command Prompt or from Powershell:

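As before, a hedged sketch assuming the same folder within the cloned repository:

    aws cloudformation deploy \
      --template-file aws_sra_examples/solutions/common/common_prerequisites/templates/sra-common-prerequisites-management-account-parameters.yaml \
      --stack-name sra-common-prerequisites-management-account-parameters \
      --capabilities CAPABILITY_NAMED_IAM \
      --profile ct-mgmt
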
  • Launch the sra-common-prerequisites-main-ssm.yaml via the AWS CLI using either a Command Prompt or from Powershell:

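And again for the main pre-requisites template:

    aws cloudformation deploy \
      --template-file aws_sra_examples/solutions/common/common_prerequisites/templates/sra-common-prerequisites-main-ssm.yaml \
      --stack-name sra-common-prerequisites-main-ssm \
      --capabilities CAPABILITY_NAMED_IAM \
      --profile ct-mgmt
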
Deploy the Customisations for Control Tower Solution

The SRA utilises Customisations for Control Tower (CfCT), developed by the team at AWS, as the delivery mechanism for its customisations. But since the SRA team doesn’t maintain that solution itself, it’s strongly recommended to check the current version of CfCT here prior to launching the CloudFormation template.

You may find that you wish to edit sra-common-cfct-setup-main.yaml to reflect the following change instead:

The architecture that is deployed by CfCT is shown below.

  • Launch the sra-common-cfct-setup-main.yaml via the AWS CLI using either a Command Prompt or from Powershell:

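A sketch along the same lines, assuming the CfCT setup template lives in the common solutions folder of the cloned repository:

    aws cloudformation deploy \
      --template-file aws_sra_examples/solutions/common/common_cfct_setup/templates/sra-common-cfct-setup-main.yaml \
      --stack-name sra-common-cfct-setup-main \
      --capabilities CAPABILITY_NAMED_IAM \
      --profile ct-mgmt
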
What Customisations should I make?

This is always very subjective and there are many things that may factor into the answer. That being said, here are a few suggestions, in no particular order! Best of all, they are all included within the SRA source files, with the exception of the Service Control Policies (SCPs). There are also other CloudFormation templates available within the SRA source files that could be used, or alternatively, you may wish to create your own.

CloudFormation

  • Enable Config in the Management Account
  • Enable CloudTrail Organisational Trail for Data Events
  • Enable EC2 Default EBS Encryption
  • Configure a Hardened IAM Account Password Policy
  • Enable S3 Block Public Access at the Account Level
  • Configure AWS Account Alternate Contacts
  • Enable IAM Access Analyzer and Configure for Delegated Administration
  • Enable GuardDuty and Configure for Delegated Administration
  • Enable Macie and Configure for Delegated Administration
  • Enable Security Hub and Configure for Delegated Administration

Service Control Policies

  • Prevent Accounts from Leaving the Organisation
  • Prevent the Disabling of any Security Tooling
  • Prevent IAM User Creation

Time to Customise our Control Tower Setup

This section will go through customising Control Tower based on the author’s personal recommendations.

  • Install the git-remote-codecommit module via either a Command Prompt or Powershell.

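git-remote-codecommit is a Python package, so it installs via pip:

    pip install git-remote-codecommit
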
  • Clone the CodeCommit repository that is deployed by CfCT via either a Command Prompt or Powershell.

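Assuming the default repository name that CfCT deploys, and the ct-mgmt AWS CLI profile created earlier:

    git clone codecommit://ct-mgmt@custom-control-tower-configuration
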
Note: You’ll need to ensure that you use the name of your AWS CLI profile prior to the @ as shown in the example above.

  • Within your IDE of choice, under the custom-control-tower-configuration folder, delete the example-configuration folder.
  • Under the custom-control-tower-configuration folder, create three new folders named parameters, policies and templates.

  • Copy the following files from the SRA source files to custom-control-tower-configuration\templates.
    • sra-account-alternate-contacts-main-ssm.yaml
    • sra-cloudtrail-org-main-ssm.yaml
    • sra-config-management-account-main-ssm.yaml
    • sra-ec2-default-ebs-encryption-main-ssm.yaml
    • sra-guardduty-org-main-ssm.yaml
    • sra-iam-access-analyzer-main-ssm.yaml
    • sra-iam-password-policy-main-ssm.yaml
    • sra-macie-org-main-ssm.yaml
    • sra-s3-block-account-public-access-main-ssm.yaml
    • sra-securityhub-org-main-ssm.yaml
  • Copy the following files from the SRA source files to custom-control-tower-configuration\parameters.
    • sra-account-alternate-contacts-main-ssm.json
    • sra-cloudtrail-org-main-ssm.json
    • sra-config-management-account-main-ssm.json
    • sra-ec2-default-ebs-encryption-main-ssm.json
    • sra-guardduty-org-main-ssm.json
    • sra-iam-access-analyzer-main-ssm.json
    • sra-iam-password-policy-main-ssm.json
    • sra-macie-org-main-ssm.json
    • sra-s3-block-account-public-access-main-ssm.json
    • sra-securityhub-org-main-ssm.json
  • Amend the values as required in each of the JSON files above to customise the configuration of each of the different templates. For example, the IAM Password Policy configuration is defined in sra-iam-password-policy-main-ssm.json.
  • Create scp-prevent-accounts-leaving-org.json in custom-control-tower-configuration\policies and paste in the below contents.
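
A minimal sketch of such an SCP; the statement ID is illustrative:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PreventLeavingOrganisation",
          "Effect": "Deny",
          "Action": "organizations:LeaveOrganization",
          "Resource": "*"
        }
      ]
    }
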
  • Create scp-prevent-disabling-security-tooling.json in custom-control-tower-configuration\policies and paste in the below contents.
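
A hedged sketch; tailor the action list to the security tooling you actually run:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PreventDisablingSecurityTooling",
          "Effect": "Deny",
          "Action": [
            "guardduty:DeleteDetector",
            "guardduty:DisassociateFromMasterAccount",
            "securityhub:DisableSecurityHub",
            "macie2:DisableMacie",
            "config:DeleteConfigurationRecorder",
            "config:StopConfigurationRecorder"
          ],
          "Resource": "*"
        }
      ]
    }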

  • Create scp-prevent-iam-users-creation.json in custom-control-tower-configuration\policies and paste in the below contents.
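
A minimal sketch; denying access key creation alongside user creation is a common pattern:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PreventIAMUserCreation",
          "Effect": "Deny",
          "Action": [
            "iam:CreateUser",
            "iam:CreateAccessKey"
          ],
          "Resource": "*"
        }
      ]
    }
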
  • Modify the contents of manifest.yaml as per below.
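
A trimmed sketch of what the manifest entries might look like for the copied templates and SCPs. The region, OU names and schema keys here are illustrative, so check them against the CfCT developer guide for your version:

    region: eu-west-2
    version: 2021-03-15
    resources:
      - name: sra-iam-password-policy
        resource_file: templates/sra-iam-password-policy-main-ssm.yaml
        parameter_file: parameters/sra-iam-password-policy-main-ssm.json
        deploy_method: stack_set
        deployment_targets:
          organizational_units:
            - Security
      - name: scp-prevent-accounts-leaving-org
        description: Prevent Accounts from Leaving the Organisation
        resource_file: policies/scp-prevent-accounts-leaving-org.json
        deploy_method: scp
        deployment_targets:
          organizational_units:
            - Sandbox
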
  • Commit the files that you’ve previously copied, modified and deleted to CodeCommit via either a Command Prompt or Powershell.

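The usual Git sequence applies; the branch name may be main or master depending on your CfCT version:

    git add .
    git commit -m "Customise Control Tower with SRA solutions and SCPs"
    git push
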
This will now trigger the DevOps Pipeline and, assuming that no issues have occurred, will show as Succeeded.


This is the end of part four of our AWS Control Tower blog series. We hope it proved useful and enables you to customise your own Control Tower environments.

Should you have any additional questions around cloud security governance, or comments in general, we’d love to hear from you. Drop us a message and the team will be in touch to arrange a follow-up call.

About the author:

Adam Divall is a Solutions Architect at GlobalLogic with over 20 years’ demonstrable experience in the design, implementation, migration and support of large, complex solutions that support a customer’s long-term business strategy. Divall holds all 12 available certifications for Amazon Web Services, with specialisations including Networking, Security, Database, Data Analytics and Machine Learning.


Previously in Part 2, we looked at how to create an organisational structure and enable guardrails within Control Tower.

In this post, we’re going to walk through some of the remaining post-configuration tasks, including configuring IAM Identity Center and provisioning a new AWS Account through Account Factory.

Configuring IAM Identity Center for Single Sign-On

AWS IAM Identity Center (formerly known as AWS SSO) is a service that enables you to have a single point of entry for managing resources within all of your AWS Accounts in an organisation.

As part of the Control Tower deployment, this gets enabled using the native Identity Center directory. This allows you to create Users, Groups and Permission Sets that, when assigned to an AWS Account, allow you to authenticate and receive authorisation to different resources based on the policies defined in the Permission Set. Whilst the Identity Center directory is the default configuration, a typical post-deployment activity is to change this to a 3rd-party identity provider such as Azure Active Directory (AAD) or perhaps an on-premises Active Directory domain.

For those without access to an Azure Active Directory Domain, please refer to the instructions below:

When IAM Identity Center is integrated with a 3rd-party solution such as AAD, you add your AAD Groups to the Azure Enterprise Application. As part of System for Cross-domain Identity Management (SCIM) provisioning, Groups and the Users that are members of those Groups are replicated and created within IAM Identity Center. This gives users the ability to log in through the AWS access portal URL and authenticate using their standard login details – those used for other business workloads such as email.

Since all the identity management is now connected to the corporate AAD, things such as password policies are handled by AAD. However, Multi-Factor Authentication (MFA) could be handled either by AAD or, alternatively, within IAM Identity Center.

Enabling MFA in IAM Identity Center

  • Login to the AWS Management Console and Navigate to IAM Identity Center.


  • Click Settings.

  • Click the Network & security tab.

  • Click Configure under Multi-factor authentication section.

  • Select Every time they sign in (always-on) under the “Prompt users for MFA” section.
  • Select Security keys and built-in authenticators and Authenticator apps under the “Users can authenticate with these MFA types” section.
  • Select Require them to register an MFA device at sign in under the “If a user does not yet have a registered MFA device” section.
  • Click Save changes.

Creating a Permission Set

As a best practice, permissions should follow the principle of least-privilege access. One enabler of this is the use of Permission Sets within IAM Identity Center. There are several default Permission Sets created by Control Tower, although these don’t always meet all requirements.

Behind the scenes, once you’ve created a Permission Set and assigned it to the AWS Account(s) you want it applied to and the Groups you want to associate, an IAM Role is created in each of those Accounts. That Role has a Trust Policy configured to only allow the role to be assumed using SAML, and only via the IAM Identity Provider within that Account, which was also created by IAM Identity Center.
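
As an illustration, the generated Trust Policy looks something like the following; the account ID and provider name are placeholders:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::123456789012:saml-provider/AWSSSO_abcd1234_DO_NOT_DELETE"
          },
          "Action": [
            "sts:AssumeRoleWithSAML",
            "sts:TagSession"
          ],
          "Condition": {
            "StringEquals": {
              "SAML:aud": "https://signin.aws.amazon.com/saml"
            }
          }
        }
      ]
    }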

  • Login to the AWS Management Console and Navigate to IAM Identity Center.

  • Click Permission sets.

  • Click Create permission set.

  • Select Custom permission set and then Click Next.

Depending on what you’re trying to achieve from a permissions-allocation perspective, you might attach different types of policies or a combination of them all. This could include AWS Managed Policies, Customer Managed Policies, Inline Policies and/or Permissions Boundaries. In this example, we’re going to use just an AWS Managed Policy, as we only want to give S3 full access to people via SSO.

  • Expand AWS Managed Policies.

  • Filter by AmazonS3, Select AmazonS3FullAccess and then Click Next.

  • Give the Permission Set a name and then Click Next.

  • On the Review and create page, Click Create.
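
If you prefer the CLI, a rough equivalent looks like this; the instance and permission set ARNs are placeholders you would retrieve from your own environment:

    # Find the Identity Center instance ARN
    aws sso-admin list-instances --profile ct-mgmt

    # Create the permission set
    aws sso-admin create-permission-set \
      --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
      --name S3FullAccess \
      --profile ct-mgmt

    # Attach the AWS managed policy
    aws sso-admin attach-managed-policy-to-permission-set \
      --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
      --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
      --managed-policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
      --profile ct-mgmt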

Assigning a Permission Set to a Group

  • Login to the AWS Management Console and Navigate to IAM Identity Center.

  • Click AWS Accounts.

  • Select the AWS Account you wish to allow Groups access to and click Assign users or groups.
  • Click the Groups tab.

  • Select the Group(s) that you wish to assign the Permission Set to and Click Next.

  • Select the Permission Sets that you wish to assign and Click Next.

  • Click Submit.

The next time the user authenticates through Single Sign-On they’ll be able to leverage the new permissions as they’ll see another role available to them.

Working with the Account Factory

One of the capabilities that Control Tower provides is the Account Factory. Account Factory is used for provisioning new AWS Accounts that will in turn be governed by Control Tower and configured with all the baselines that Control Tower provides, such as CloudTrail, Config and CloudWatch, as well as guardrails.

The Account Factory provides the ability to create a VPC as part of Account provisioning. A key challenge of this functionality is that the network configuration is controlled within the Control Tower console. You must choose whether you have Public and/or Private Subnets, you can have a maximum of two Private Subnets per Availability Zone, and the Subnets are deployed based on a fixed Well-Architected design. You also select the CIDR range for the entire VPC, but you have no option as to how this is then utilised for the Subnets; it’s simply split evenly across them all. Likewise, the same VPC configuration is implemented in every region governed by Control Tower. In situations where you have multiple regions that require VPCs and the Account is provisioned via the Account Factory, this goes against best practice: you end up with overlapping CIDR ranges, which would cause network routing issues should these VPCs ever need to communicate with each other.

With this in mind, we would recommend disabling this functionality in the Account Factory by unchecking any regions in the Account Factory Network Configuration. This can be done as follows:

  • Login to the AWS Management Console and Navigate to Control Tower.

  • Click Account factory.

  • Click Edit.

  • Uncheck all Regions to disable the VPC provisioning element of the Account Factory and then Click Save.

Creating a New AWS Account

  • Login to the AWS Management Console and Navigate to Control Tower.

  • Click Account factory.

  • Click Create account.

  • Under the Account email section, enter the email address that you want to associate with the root user of the new AWS Account.
  • Under the Display name section, enter the Name that you want to assign to the new AWS Account.
  • Under the Identity Center user email section, enter the email address, first name, and surname of the IAM Identity Center user. This user will then be granted Administrator Access permissions to the new AWS Account.
  • Under the Organisation unit section, select the OU that you want the new AWS Account to be provisioned in. This will then determine both the Preventative and Detective Guardrails that will be applied to it as part of the Account Baseline.

Once the AWS Account has been fully provisioned the Account will show as Governed within the Control Tower console.

That’s all for the basic configuration of AWS Control Tower. In an upcoming post, we’ll walk through how you can customise Control Tower.


Previously in Part 1, we delivered a brief background on what a Landing Zone is, before going through how to launch AWS Control Tower as the foundation of a Multi-Account Architecture.

Part 2 of the blog series will walk through some of the initial post-configuration activities with Control Tower, including setting up the organisational structure and enabling guardrails.

What has Control Tower deployed?

As part of the setup, Control Tower has utilised a number of other AWS Services including:

  • AWS CloudFormation: This has been utilised for provisioning resources through Infrastructure as Code (IaC) across the multiple AWS Accounts, using a combination of both Stacks and StackSets.
  • AWS CloudTrail: An Organisational Trail has been configured in the Management AWS Account. This Trail monitors all AWS Regions, sends its logs to an S3 Bucket in the Log Archive account, is encrypted using the KMS CMK that was created during the Control Tower setup, has Log File Validation enabled, is integrated with CloudWatch Logs, and sends notifications to an SNS Topic in the Audit Account when new log files are delivered to the S3 Bucket.
  • Amazon CloudWatch: CloudWatch Log Groups are created as part of the integration with the CloudTrail Trail, as well as any execution of the Lambda Functions deployed by the Control Tower setup.
  • AWS Config: An Organisation Config Aggregator is created within the Management Account and Config Recorders are created in all AWS Accounts within the AWS Organisation, except for the Management Account. In addition, several Config Rules will be created as part of the Mandatory Guardrails configured during the setup process.

Quick note: Control Tower doesn’t create a Config Recorder in the Management Account, even though AWS Config is something that should be enabled in all AWS Accounts. We will explain how you can do this using Customisations for Control Tower later in the blog series.

  • Amazon EventBridge: An EventBridge Rule is created within all AWS Accounts except for the Management Account to trigger a Lambda Function on any Config Rule Compliance Change.
  • AWS IAM: Several IAM Roles are created including IAM Service Roles. The IAM Roles have IAM Permissions Policies added to them to grant the relevant level of permissions, and the Trust Policies are configured to allow the Role Assumption by only specific Source AWS Accounts, AWS Services or via the SSO Identity Provider using SAML.
  • AWS IAM Identity Center: This service gets enabled within the Home Region of Control Tower to provide Single Sign-On. Several Groups and Permission Sets are created, and those Groups then have Permission Sets assigned to them against the provisioned AWS Accounts within the AWS Organisation (the Management, Audit and Log Archive Accounts). A User is also created that maps to the email address of the root user of the Management AWS Account.
  • AWS KMS: AWS Managed KMS keys are utilised for the encryption of data at rest in conjunction with the creation of the S3 Buckets. In addition, as part of our setup configuration we also created a Customer Managed Key (CMK) to encrypt the CloudTrail Trail.
  • AWS Lambda: A Lambda Function is created within all AWS Accounts except for the Management Account. This Function is used as part of the mechanism for forwarding notifications of Config Rule compliance changes.
  • AWS Organisations: This has been used to create the Organisation, which is crucial to a multi-account setup. As part of the Organisation, it then set up two Organisational Units (OUs) that were defined during the Shared Accounts page of the Control Tower setup. Typically, these OUs will be named Security (in previous versions of the Control Tower service it was named Core) and Sandbox. In addition, several Service Control Policies (SCPs) will have been created as part of the Mandatory Guardrails configured during the setup process.
  • Amazon S3: Two S3 Buckets are deployed within the Log Archive account.
    • One Bucket is used for the storage of CloudTrail and Config logs as part of a centralised logging solution. It is configured with Default Encryption using KMS (AWS Managed Keys), has Versioning enabled, has a Bucket Policy to restrict access, is configured to Block Public Access, has Access Logging enabled, and is configured with a Lifecycle Policy.
    • The second Bucket is used for the storage of the S3 Access Logs. It is configured in the same way, with Default Encryption using KMS (AWS Managed Keys), Versioning enabled, a Bucket Policy to restrict access, Block Public Access, Access Logging and a Lifecycle Policy.
  • AWS Service Catalog: A Portfolio is created with a Product added that provides Control Tower with the Account Factory component.
  • Amazon SNS: An SNS Topic is created within all AWS Accounts except for the Management Account. That Topic has a destination of a Lambda Function that forwards the message to another SNS Topic in the Audit Account, which in turn sends an e-mail to the e-mail address assigned to the root user of the Audit Account.
  • AWS Step Functions: Whilst the Control Tower setup implements State Machines that are used as part of the wider orchestration and the Account Factory element, these are not visible within any of the AWS Accounts that exist in the AWS Organisation. They are under the control and management of AWS as part of the service offering.

Organisational Structure

When considering organisational structure, there is a really good blog post from AWS on the Best Practices for Organisational Units (OU) that describes each of the OUs and their purpose. Please note, these are just guidelines and should be tailored to meet the needs of your particular business.

The diagram below is based on what can typically be seen when working with Clients. We’ve also outlined steps to creating the OU structure.

Creating the Organisational Units Structure

  • Login to the AWS Management Console and Navigate to Control Tower.

  • Click Organisation.

  • Click Create resources and then select Create organisational unit.

  • On the Add an OU page, enter the OU Name, click Parent OU and then select the OU Name to replicate the high-level organisation layout.

Once configured it will look something like the below screenshot.

Please note: Only Organisation Units that have been created through the Control Tower Console will show a state of “Registered” on the Organisation page in Control Tower. If the Organisation Unit was created either via the AWS CLI or within AWS Organisations, it will show a state of “Unregistered” and will therefore need to be registered by selecting the OU in question on the Organisation page in the Control Tower console, selecting “Actions” and then clicking “Register organisational unit”.

Once you’ve created your OU Structure, you’re ready to configure guardrails.

Configuring Guardrails in Control Tower

Guardrails are rules that enable you to provide ongoing governance and oversight across your environment. In terms of guardrails within Control Tower there are two different types – preventative and detective.

Preventative guardrails are implemented through Service Control Policies (SCPs) and stop you from going outside of a specific set of boundaries, as defined within the SCP. Since SCPs are implemented at the Organisation level, they provide a layer of control over all AWS Accounts within the organisation without needing to implement something directly in every single AWS Account.

Detective guardrails are implemented through Config Rules and will send notifications if a resource within the individual AWS Account doesn’t adhere to the settings within the rule. For example, if the rule says that all EBS Volumes must be encrypted and there is an EBS Volume within the Account that isn’t, it will notify you.

Control Tower guardrails can only be implemented on Organisational Units, not directly on AWS Accounts. That’s not to say you couldn’t build something custom at the Account level, but you would need to write your own automation to do so, and any non-compliance wouldn’t be shown within the Control Tower console.

Enabling a guardrail in Control Tower creates a CloudFormation StackSet in the Management Account. Leveraging the integration with AWS Organisations, a CloudFormation Stack Instance is then added to the StackSet for each AWS Account residing within the hierarchy of the OU that the guardrail was enabled on. This in turn creates a CloudFormation Stack within the corresponding AWS Account. Similarly, disabling a guardrail deletes the Stack Instance from the StackSet and then deletes the Stack from the corresponding AWS Account.

How to enable and disable a guardrail

We’ve covered what happens when you enable a guardrail. Now let’s walk through how you go about enabling and disabling a guardrail from scratch. Both processes are similar, but obviously you will need to have enabled a guardrail before being able to disable it!

  • Login to the AWS Management Console and Navigate to Control Tower.

  • Click ‘Guardrails’.

  • Find the guardrail that you want to implement by either scrolling through the pages until you locate it or by using the filter mechanism.

  • Click the Name of the guardrail. For example, “Detect whether public read access to Amazon S3 buckets is allowed”.

If the guardrail is already enabled and you’re looking to disable, click the ‘Disable guardrail’ option – like so:

If you’re looking to enable a guardrail for the first time, click on the ‘Enable guardrail on OU’ button. The ‘Disable guardrail’ option should be greyed out.

  • A new page will load – click the ‘Enable guardrail on OU’ button again to enable.

From here:

  • Select the OU that you want to enable/disable the guardrail on and then click Enable guardrail on OU or Disable guardrail.
  • Repeat the process again for each OU that you want to enable/disable the guardrail on. Unfortunately, at this moment in time, it can’t be added or removed from multiple OUs at the same time.

Repeat the process for all guardrails that you wish to enable.

And that’s it. You’re good to go.

Part 3 of this blog series will continue with the remaining post-deployment activities within Control Tower – including configuring IAM Identity Center and provisioning a new AWS Account through Account Factory.



Welcome to part one of a four-part blog series on AWS Control Tower. This blog provides a step-by-step guide to setting up Control Tower within an AWS Account that is not part of an existing AWS Organisation – starting with a short introduction on Landing Zones.

Landing Zones

One starting point for many organisations using Public Cloud is the establishment of a Landing Zone. A Landing Zone is a well-architected, multi-account environment that’s based on security and compliance best practices.

There are several reasons why organisations leverage a multi-account strategy including but not limited to:

  • Service Quotas: Each AWS Service typically has different quotas; some of these are soft limits that can be increased by requesting an increase in the limit through a support ticket, whilst others have hard limits that cannot be increased.
  • Limiting the Blast Radius: As an AWS Account is a boundary of isolation, potential risks and threats can be contained within an account without affecting others.
  • Security Controls: Workloads may have different compliance needs based on the industry or geographical location. Whilst there are synergies between the different compliance frameworks, the security controls implemented to help achieve compliance may need to be implemented in a slightly different manner, or may not be required at all.
  • Billing Separation: AWS Accounts are the only real way to separate items at a billing level e.g., Data Transfer costs.

When we first started using AWS in 2016, there was no pre-packaged solution for a Landing Zone; there were several recommendations provided by AWS but in essence it was something that organisations had to build themselves.

The Landing Zone Implementation was then developed by several different teams at AWS to help Clients expedite the setup and creation of their Landing Zones through the use of automation. This solution accelerator provided extensible capabilities to manage the most complex and advanced environments. However, one of the downsides was the fact it was not officially supported by AWS Support, meaning that any issues typically required costly engagements with Professional Services or Partners to remediate.

AWS Control Tower came about as the successor to the AWS Landing Zone solution, which is currently in long-term support and will not receive any additional features (and, technically, was never officially supported by AWS Support). Control Tower is still a relatively new service in AWS terms, having only been made generally available in June 2019, although since then it has been enhanced considerably with new features and functionality, as well as being made available in more regions. A key differentiator of Control Tower is that it is an AWS Managed Service, whilst providing parity with the functionality of the Landing Zone implementation.

Prior to setting up Control Tower, there is a dependency on having two unique e-mail addresses that aren’t already associated with an AWS Account. These will be used for creation of the Audit and Log Archive Accounts that Control Tower will provision during the setup. The following section will walk you through the setup of Control Tower within an AWS Account that is not part of an existing AWS Organisation.

Setting up Control Tower

  • Login to the AWS Management Console using an Account with administrative permissions and switch to the AWS Region that you’re going to use as the Home Region e.g., eu-west-2 (London).
  • Navigate to the Control Tower Service.

  • Click Set up landing zone

  • On the Review pricing and select Regions page, ensure that the Home Region is set to the region that you want.
  • Under the Region deny settings section, click Not enabled. If you wish to change this setting later, it can be easily modified.
  • Under the Additional AWS Regions for governance section, leave it as it is for the time being. If you wish to add additional regions to be governed later, it can be easily modified.
  • Click Next

  • On the Configure organisational units (OUs) page, click Next.

  • On the Configure shared accounts page, under the Log archive account and Audit account sections, enter the corresponding e-mail addresses that you created as a pre-requisite for the deployment, and then Click Next.

  • On the Configure CloudTrail and encryption page, under the AWS CloudTrail configuration section, ensure that it’s set to Enabled.
  • Under the Log configuration for Amazon S3 section, configure the retention policy as per your requirements.
  • Under the KMS Encryption section, select Enable and customise encryption settings and then click Create a KMS Key.

This will now open a new browser tab and start the process of creating a Customer Managed Key.

  • On the Configure key page, click Next.

  • On the Add labels page, under the Alias section, enter an Alias for the CMK. In this case, ControlTowerEncryptionKey has been used.
  • Under the Description section, enter a description. In this case, Control Tower Encryption Key for CloudTrail has been used.
  • Click Next.

  • On the Define key administrative permissions page, click Next.

  • On the Define key usage permissions page, click Next.
  • On the Review page, click Finish.

Switch back to the browser tab with the Control Tower Setup.

  • Under the KMS Encryption section, select the KMS CMK that was just created and then click Next.
  • On the Review and set up landing zone page, review the configuration settings and click Set up landing zone.

Control Tower will then start the process of setting up the Landing Zone and will take approximately 30-45 minutes.

Coming up…

Part 2 of this AWS Control Tower walk-through will continue with the initial post-deployment activities within Control Tower including Organisations and Guardrails.

