“Kubernetes vs. Docker” is a phrase that you hear more and more these days as Kubernetes becomes ever more popular as a container orchestration solution.
However, “Kubernetes vs. Docker” is also a somewhat misleading phrase. When you break it down, these words don’t mean what many people intend them to mean, because Docker and Kubernetes aren’t direct competitors. Docker is a containerization platform, and Kubernetes is a container orchestrator for container platforms like Docker.
This post aims to clear up some common confusion surrounding Kubernetes and Docker, and explain what people really mean when they talk about “Docker vs. Kubernetes.”
The Rise of Containerization and Docker
It is impossible to talk about Docker without first exploring containers. Containers solve a critical issue in the life of application development. Developers write code in their own local development environment, and problems arise when they are ready to move that code to production: the code that worked perfectly on their machine doesn’t work in production. The reasons for this are varied: a different operating system, different dependencies, different libraries.
Containers solved this critical issue of portability by allowing you to separate code from the underlying infrastructure it runs on. Developers could package up their application, including all of the bins and libraries it needs to run correctly, into a small container image. In production, that container can be run on any computer that has a containerization platform.
Advantages of Containers
In addition to solving the major challenge of portability, containers and container platforms provide many advantages over traditional virtualization.
Containers have an extremely small footprint. A container just needs its application and a definition of all of the bins and libraries it requires to run. Unlike VMs, which each have a complete copy of a guest operating system, container isolation is done at the kernel level without the need for a guest operating system. In addition, libraries can be shared across containers, which eliminates the need to have 10 copies of the same library on a server, further saving space. If I have three apps all running Node and Express, I don’t have to have three instances of Node and Express; those apps can share those bins and libraries. Encapsulating applications in self-contained environments allows for quicker deployments, closer parity between development environments, and far easier scaling.
What is Docker?
Docker is currently the most popular container platform. Docker appeared on the market at the right time and was open source from the beginning, which likely led to its current market dominance. 30% of enterprises currently use Docker in their AWS environment, and that number continues to grow.
When most people talk about Docker, they are talking about Docker Engine, the runtime that allows you to build and run containers. But before you can run a Docker container, it must be built, starting with a Dockerfile. The Dockerfile defines everything needed to run the image, including the OS, network specifications, and file locations. With a Dockerfile in hand, you can build a Docker image, which is the portable, static component that gets run on Docker Engine. And if you don’t want to start from scratch, Docker even has a service called Docker Hub, where you can store and share images.
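As an illustration, a minimal Dockerfile for a hypothetical Node.js application might look something like the sketch below; the application name, files, and port are assumptions for the example, not anything prescribed by Docker.

```dockerfile
# Start from an official Node.js base image pulled from Docker Hub
FROM node:18-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy dependency manifests and install production dependencies
COPY package*.json ./
RUN npm install --production

# Copy the application source code into the image
COPY . .

# Document the port the app listens on and define the startup command
EXPOSE 3000
CMD ["node", "server.js"]
```

Running `docker build -t my-app:1.0 .` turns this Dockerfile into an image, and `docker run -p 3000:3000 my-app:1.0` starts a container from it on any machine with Docker Engine installed.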
The Need for Orchestration Systems
While Docker provided an open standard for packaging and distributing containerized applications, a new problem arose. How would all of these containers be coordinated and scheduled? How do you seamlessly upgrade an application without any interruption of service? How do you monitor the health of an application, know when something goes wrong, and restart it seamlessly?
Solutions for orchestrating containers soon emerged. Kubernetes, Mesos, and Docker Swarm are some of the more popular options for providing an abstraction to make a cluster of machines behave like one big machine, which is vital in a large-scale environment.
When most people talk about “Kubernetes vs. Docker,” what they really mean is “Kubernetes vs. Docker Swarm.” The latter is Docker’s own native clustering solution for Docker containers, which has the advantage of being tightly integrated into the ecosystem of Docker, and uses its own API. Like most schedulers, Docker Swarm provides a way to administer a large number of containers spread across clusters of servers. Its filtering and scheduling system enables the selection of optimal nodes in a cluster to deploy containers.
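For a sense of what that looks like in practice, once a cluster is in swarm mode (via `docker swarm init`), a stack file like the hypothetical sketch below asks Swarm to keep three replicas of a service running and constrains where they may be scheduled; the service and image names are assumptions carried over from the earlier Dockerfile example.

```yaml
# docker-stack.yml – deployed with: docker stack deploy -c docker-stack.yml web
version: "3.8"
services:
  web:
    image: my-app:1.0            # hypothetical image built earlier
    ports:
      - "3000:3000"
    deploy:
      replicas: 3                # Swarm keeps three copies running across the cluster
      placement:
        constraints:
          - node.role == worker  # only schedule onto worker nodes
```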
Kubernetes is a container orchestrator that was developed at Google, has been donated to the CNCF, and is now open source. It has the advantage of leveraging Google’s years of expertise in container management. It is a comprehensive system for automating deployment, scheduling, and scaling of containerized applications, and it supports many containerization tools such as Docker.
For now, Kubernetes is the market leader and the standardized means of orchestrating containers and deploying distributed applications. Kubernetes can be run on a public cloud service or on-premises, is highly modular, open source, and has a vibrant community. Companies of all sizes are investing in it, and many cloud providers offer Kubernetes as a service. Sumo Logic provides support for all orchestration technologies, including Kubernetes-powered applications.
How does Kubernetes work?
It is easy to get lost in the details of Kubernetes, but at the end of the day, what Kubernetes does is pretty simple. Cheryl Hung of the CNCF describes Kubernetes as a control loop: declare how you want your system to look (say, three copies of container image A and two copies of container image B), and Kubernetes makes that happen. It compares the desired state to the actual state, and if they aren’t the same, it takes steps to correct it.
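In practice, that desired state is written down declaratively. A minimal sketch of a Deployment asking for three copies of a hypothetical image is shown below; the names and registry URL are assumptions for the example. Applying it with `kubectl apply -f deployment.yaml` tells Kubernetes to keep three replicas running and to replace any that fail.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-a                # hypothetical name standing in for "container image a"
spec:
  replicas: 3                  # desired state: three copies
  selector:
    matchLabels:
      app: image-a
  template:
    metadata:
      labels:
        app: image-a
    spec:
      containers:
        - name: image-a
          image: registry.example.com/image-a:1.0   # hypothetical image reference
```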
Kubernetes architecture and components
Kubernetes is made up of many components that do not know or care about each other. The components all talk to each other through the API server. Each of these components performs its own function and then exposes metrics that we can collect for monitoring later on. We can break the components down into three main parts.
- The Control Plane – the master node.
- Nodes – where pods get scheduled.
- Pods – which hold containers.
The Control Plane – The Master Node
The control plane is the orchestrator: Kubernetes is an orchestration platform, and the control plane facilitates that orchestration. It is made up of several components: etcd for storage, the API server for communication between components, the scheduler, which decides which nodes pods should run on, and the controller manager, which is responsible for reconciling the current state with the desired state.
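On a cluster provisioned with kubeadm, for example, these control plane components show up as pods in the kube-system namespace; the node name and ages below are purely illustrative and the output is trimmed.

```
$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
etcd-control-plane-1                      1/1     Running   0          4d
kube-apiserver-control-plane-1            1/1     Running   0          4d
kube-controller-manager-control-plane-1   1/1     Running   0          4d
kube-scheduler-control-plane-1            1/1     Running   0          4d
```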
Nodes
Nodes make up the collective compute power of the Kubernetes cluster. This is where containers actually get deployed to run. Nodes are the physical infrastructure that your application runs on, the servers or VMs in your environment.
Pods
Pods are the lowest-level resource in the Kubernetes cluster. A pod is made up of one or more containers, but most commonly just a single container. When defining your workloads, you set resource requests and limits for pods that specify how much CPU and memory they need to run. The scheduler uses this definition to decide on which nodes to place the pods. If there is more than one container in a pod, it becomes harder to estimate the required resources, which can make it harder for the scheduler to place the pod appropriately.
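Here is a sketch of a single-container pod with resource requests and limits; the pod name, image, and numbers are illustrative assumptions. The scheduler uses the requests to pick a node with enough free CPU and memory, while the limits cap what the container may consume at runtime.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                # hypothetical pod name
spec:
  containers:
    - name: web
      image: my-app:1.0        # hypothetical image
      resources:
        requests:              # what the scheduler uses for placement
          cpu: "250m"
          memory: "128Mi"
        limits:                # hard cap enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```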
How Does Kubernetes Relate to Docker?
Kubernetes and Docker are both comprehensive de facto solutions for intelligently managing containerized applications, and both provide powerful capabilities, which has created some confusion. “Kubernetes” is now sometimes used as shorthand for an entire container environment based on Kubernetes. In reality, the two are not directly comparable, have different roots, and solve different problems.
Docker is a platform and tool for building, distributing, and running Docker containers. It offers its own native clustering tool that can be used to orchestrate and schedule containers on machine clusters. Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner. It works around the concept of pods (the scheduling units in the Kubernetes ecosystem, each of which can contain one or more containers), which are distributed among nodes to provide high availability. You can easily run a Docker-built image on a Kubernetes cluster, but Kubernetes itself is not a complete solution out of the box and is designed to be extended with custom plugins.
Kubernetes and Docker are fundamentally different technologies, but they work very well together, and both facilitate the management and deployment of containers in a distributed architecture.
Can you use Docker without Kubernetes?
Docker is commonly used without Kubernetes; in fact, this is the norm. While Kubernetes offers many benefits, it is notoriously complex, and there are many scenarios where the overhead of spinning up Kubernetes is unnecessary or unwanted.
In development environments, it is common to use Docker without a container orchestrator like Kubernetes; a typical setup is sketched below. In production environments, the benefits of a container orchestrator often do not outweigh the cost of the added complexity. Additionally, many public cloud services like AWS, GCP, and Azure provide their own orchestration capabilities, making the added complexity unnecessary.
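For example, a small development environment is often described in a Compose file and started with `docker compose up`, with no orchestrator involved. The services, image tags, and credentials in this sketch are assumptions for illustration only.

```yaml
# docker-compose.yml – a hypothetical two-service development setup
services:
  web:
    build: .                   # build the app image from the local Dockerfile
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=postgres   # throwaway dev-only credential
```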
Can you use Kubernetes without Docker?
As Kubernetes is a container orchestrator, it needs a container runtime to orchestrate. Kubernetes is most commonly used with Docker, but it can also be used with other container runtimes: containerd and CRI-O are runtimes you can deploy with Kubernetes, both of which use the low-level runtime runc under the hood. The Cloud Native Computing Foundation (CNCF) maintains a listing of endorsed container runtimes on its ecosystem landscape page, and the Kubernetes documentation provides specific instructions for getting set up with containerd and CRI-O.
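You can check which runtime each node is using from the CONTAINER-RUNTIME column of `kubectl get nodes -o wide`. On a cluster running containerd, the output looks something like the trimmed, illustrative listing below (node names and versions are made up for the example).

```
$ kubectl get nodes -o wide
NAME     STATUS   ROLES           VERSION   CONTAINER-RUNTIME
node-1   Ready    control-plane   v1.28.2   containerd://1.7.2
node-2   Ready    <none>          v1.28.2   containerd://1.7.2
```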
Source: sumologic