It's a very common problem: you develop an application that works perfectly on your laptop but not in other environments. You use the stack you like and the language you like, with the versions of the libraries and tools you like. But when you push the app into a new environment, it doesn't work, because it's not the same environment.
For instance, maybe you used a new version of a library. The Ops guy tells you that you can't use this library because all the other applications running on the server would break. So there's a lot of back and forth between Ops and the developers.
Docker supports a level of portability that allows developers to write an application in any language and then easily move it from a laptop to a test or production server, regardless of the underlying Linux distribution. It's this portability that has attracted the interest of developers and systems administrators alike.
When you develop with Docker, you package everything inside a container, or inside several containers that can talk to each other, and you simply push those containers to another environment. The Ops guy doesn't have to care about what's inside the container or how it was developed. This speeds up the development cycle and lets you move containers around easily.
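To make that workflow concrete, here is a minimal sketch assuming a simple Python app; the image name myapp, the port, and the registry host registry.example.com are illustrative placeholders, not part of any real setup:

```sh
# Describe the application's entire environment once, in a Dockerfile.
cat > Dockerfile <<'EOF'
# Base image: the runtime you actually developed against
FROM python:3
# Copy the application source into the image
COPY app.py /app/app.py
WORKDIR /app
# How the container starts
CMD ["python", "app.py"]
EOF

# Build the image on your laptop...
docker build -t myapp .

# ...run it locally to verify it works (assuming the app listens on 8080)...
docker run -d -p 8080:8080 myapp

# ...then push it to a registry so any other host (test, production)
# can pull and run the identical environment:
docker tag myapp registry.example.com/myapp
docker push registry.example.com/myapp
```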
The History of Containers
Although Docker helped draw the developer community's attention to containers, the containerized approach itself is not new. The idea of containers has been around since the early days of Unix and the chroot command, and FreeBSD jails, for instance, address similar concerns.
Because applications in a chroot environment still rely on the host's common OS kernel, that approach only works for applications that share the exact same OS version. Docker found a way to address this limitation through an integrated user interface, which provides a greater level of simplicity: with Docker, you don't have to be a Linux kernel expert to use Linux container-based technology.
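As a quick illustration of that simplicity (the ubuntu:14.04 image here is just an example choice), getting an isolated Linux environment is a single command, with the namespace and cgroup plumbing hidden behind the CLI:

```sh
# Start an interactive shell inside an isolated container and
# remove the container when the shell exits.
docker run -it --rm ubuntu:14.04 /bin/bash
```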
Containers vs. Hypervisor-Based Virtualization
Both hypervisor-based virtualization and containers enable isolation. Hypervisor-based virtualization abstracts the underlying physical hardware of a server through a software layer (i.e., the hypervisor). This configuration allows you to create virtual machines on which an operating system and then applications can be installed.
Unlike hypervisor-based virtual machines, containers do not aim to emulate physical servers. Instead, all containerized applications on a host share a common operating system kernel, which eliminates the resources needed to run a separate operating system for each application. An application can be deployed in a matter of seconds and uses fewer resources than under hypervisor-based virtualization. Containers are also leaner: where a VM is measured in gigabytes and boots in one or two minutes, a container is measured in megabytes and boots in milliseconds.
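You can check the startup-time claim yourself; this sketch uses the small alpine image purely as an illustration:

```sh
# Time a full container lifecycle: create, run a command, tear down.
# This typically completes in well under a second, versus the minutes
# a VM needs to boot a full guest OS.
time docker run --rm alpine echo "hello from a container"

# Image sizes are likewise in the megabyte range:
docker images alpine
```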
Docker Limitations
Although Docker is a simpler technology to use, it has the following limitations:
- There is a risk of workload disruption if the hardware fails (a risk also inherent in hypervisor-based virtualization).
- A single kernel exploit could affect all containers on a host.
- As of now, orchestration tools and advanced management features are missing for containers (they are available for VMs).
This last limitation means that orchestration must be handled in the software application itself. In other words, Docker is an intrusive technology, so introducing it into an existing application requires a lot of changes to the application's architecture. In a greenfield project, however, it is workable, since you can design the architecture with Docker in mind.
Since orchestration currently has to be handled programmatically, you end up coding against Docker's own, non-standard interface. If you later want to move from Docker to another container-based approach, the migration will not be straightforward and will require code changes.
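As a rough sketch of what that looks like in practice (the container and image names here are hypothetical), even basic start-up ordering and health checking becomes your own glue code written against Docker's CLI:

```sh
# Hand-rolled orchestration: start the database, then the web tier,
# then poll until the web container reports it is running.
docker run -d --name app-db postgres
docker run -d --name app-web --link app-db:db -p 80:8080 myapp

# Even the "is it up?" logic is custom code against Docker's interface.
until docker inspect --format '{{.State.Running}}' app-web | grep -q true; do
  sleep 1
done

# Every line above is Docker-specific; moving to another container
# runtime would mean rewriting all of this glue.
```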
Conclusion
Docker is a revolutionary technology that simplifies isolation and makes applications independent of their environment. However, in its current state, you should only use it in development and testing environments. I would not recommend using Docker for production applications yet, as it needs a bit more maturity.