Docker: What, Why and When?

It’s a very common problem: you develop an application that works perfectly on your laptop but not in other environments. You use the stack you like and the language you like, with the version of the libraries and tools you like. But when you push the app into a new environment, it doesn’t work because it’s not the same environment.

For instance, maybe you used a new version of a library. The Ops guy tells you that you can’t use this library because all the other applications running on the server would break. So there’s a lot of back and forth between Ops and the developers.

Docker supports a level of portability that allows a developer to write an application in any language. He or she can then easily move it from a laptop to a test or production server, regardless of the underlying Linux distribution. It’s this portability that has attracted the interest of developers and systems administrators alike.

When you develop with Docker, you package everything inside a container, or inside several containers that can talk to each other. You then simply push this container to another environment. The Ops guy doesn’t have to care about what’s inside the container or how it was developed. This speeds up the development cycle and lets you move containers around easily.
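As a minimal sketch of that workflow (the Node.js application and its server.js entry point here are hypothetical; any stack would work the same way), a Dockerfile describes the entire runtime environment, so the resulting image runs identically on a laptop and on a server:

    # Pin the OS and runtime version, independent of the host's Linux distribution
    FROM node:18-alpine

    WORKDIR /app

    # Install dependencies inside the container, not on the host
    COPY package.json package-lock.json ./
    RUN npm ci --omit=dev

    # Copy in the application code itself
    COPY . .

    EXPOSE 3000
    CMD ["node", "server.js"]

The developer then packages and ships the container with a few commands (the registry URL below is a placeholder):

    docker build -t myapp:1.0 .
    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0
    docker run -d -p 3000:3000 registry.example.com/myapp:1.0

Ops runs the same image in test or production without ever looking inside it.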

The History of Containers

Although Docker helped draw the developer community’s attention to containers, the containerized approach itself is not new. The idea of containers has been around since the early days of Unix with the chroot command. FreeBSD jails, for instance, address concerns similar to those Docker addresses today.

Since applications rely on a common OS kernel while using chroot, this approach works only for applications that share the same OS version. Docker addresses the difficulty of working with these earlier mechanisms through an integrated user interface that provides a greater level of simplicity. With Docker, you don’t have to be a Linux kernel expert to use Linux container-based technology.
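As an illustration of that simplicity, a single command (shown here with the publicly available ubuntu image) drops you into an isolated container shell; Docker sets up the underlying kernel namespaces and cgroups for you:

    # Start an interactive, throwaway Ubuntu container; no kernel expertise required
    docker run -it --rm ubuntu:22.04 /bin/bash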

Containers vs. Hypervisor-Based Virtualization

Both hypervisor-based virtualization and containers enable isolation. Hypervisor-based virtualization abstracts the underlying physical hardware of a server through a software layer (i.e., the hypervisor). This configuration allows you to create virtual machines on which an operating system and then applications can be installed.

Unlike hypervisor-based virtual machines, containers do not aim to emulate physical servers. Instead, all containerized applications share a common operating system kernel on the host. This eliminates the resources needed to run a separate operating system for each application. An application can be deployed in seconds and uses fewer resources than it would under hypervisor-based virtualization. Containers are also leaner than VMs: where a VM is measured in gigabytes and boots in one or two minutes, a container is measured in megabytes and boots in milliseconds.
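You can see this difference yourself; here is a rough illustration using the small, publicly available alpine image:

    docker pull alpine
    docker images alpine                      # image size is on the order of megabytes
    time docker run --rm alpine echo hello    # container starts in well under a second

A VM image for an equivalent workload would typically be gigabytes in size and take minutes to boot.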

Docker Limitations

Although Docker is a simpler technology to use, it has the following limitations:

  • There is a risk of workload disruption if the hardware fails (a risk also inherent in hypervisor-based virtualization).

  • A single kernel exploit could affect all containers on a host.

  • As of now, orchestration tools and advanced management features are missing for containers (they are available for VMs).

This last limitation means that orchestration must be handled in the application software itself. In other words, Docker is an intrusive technology: for existing applications, introducing Docker requires significant changes to the application architecture. In a greenfield project, however, this is workable, because you can design the architecture with Docker in mind.

Since orchestration currently has to be handled programmatically, you end up coding against non-standard, Docker-specific interfaces. If you later want to move from Docker to another container-based approach, the migration will not be straightforward and will require code changes, as the short script below illustrates.
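As a sketch of what that coupling looks like in practice (the image and container names here are hypothetical), even a simple two-container deployment script is written entirely against Docker’s own CLI:

    #!/bin/sh
    # Orchestrate a database and an app container by hand.
    # Every line is Docker-specific: moving to another container
    # runtime means rewriting this logic.
    docker network create appnet 2>/dev/null || true
    docker run -d --name db  --network appnet postgres:15
    docker run -d --name web --network appnet -p 8080:8080 myapp:1.0
    docker ps --filter network=appnet    # confirm both containers are up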

Conclusion

Docker is a revolutionary technology that simplifies isolation and provides environment independence. However, in its current shape, you should only use it in development and testing environments. I would not recommend using Docker in production applications yet, as it requires a bit more maturity.

Author

Shrikant Vashishtha
