5 Reasons to Use Kubernetes for Container Management

By SADA Says | Cloud Computing Blog

Containers have formalized the contract that was previously implicit between servers, VMs, configuration management, and application code. By bundling the application and all of its dependencies, we now have one tool to define what an application (written in any language) actually needs in order to build and run. However, it’s not enough to simply run containers; they have to be orchestrated and managed to ensure that applications run properly. On a small scale, manual container orchestration isn’t a big deal, but a typical enterprise production environment holds hundreds, perhaps thousands, of containers.

Kubernetes, an open-source platform originally developed by Google and now maintained by the Cloud Native Computing Foundation, was designed to alleviate the problems caused by container sprawl. It quickly became the de facto standard for automating container orchestration at scale; a recent survey by StackRox found that Kubernetes adoption grew by nearly 50% in the first half of 2019 alone.

Here are five of the biggest reasons why so many enterprises love Kubernetes:

1. Portability & multicloud support

Portability and avoidance of vendor lock-in are important as the world shifts to multicloud; a typical enterprise leverages 4 to 5 clouds. Kubernetes is completely portable and vendor-agnostic. It can run in public or private clouds and in hybrid, on-prem, or multicloud environments. Kubernetes clusters can be moved between cloud vendors easily, with only minimal changes to their deployment and management processes. Google Cloud Platform launched Google Kubernetes Engine (GKE), a managed Kubernetes service, in 2015; four years later, Kubernetes is the only platform to be offered as a managed service by the top 5 public cloud vendors.

2. Scalability & self-healing

Kubernetes ensures system reliability by automatically scaling containers in response to workload requirements and by self-healing containers when things go wrong. Containers that fail or that don’t respond to user-defined health checks are restarted, and they aren’t advertised to clients until they are ready to serve. The Kubernetes Horizontal Pod Autoscaler (HPA) automatically increases or decreases the number of application pods based on observed CPU utilization or custom metrics.
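As a hedged sketch of both behaviors (the names, image, probe paths, and the 70% CPU target below are illustrative placeholders, not values from this post), a Deployment can declare liveness and readiness probes on its container, and an HPA can scale that Deployment based on observed CPU utilization:

```yaml
# Illustrative sketch only: a Deployment with health checks, plus an HPA
# that scales it between 2 and 10 replicas on CPU utilization.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25          # placeholder image
        ports:
        - containerPort: 80
        livenessProbe:             # a failing check triggers a container restart
          httpGet:
            path: /                # placeholder path
            port: 80
        readinessProbe:            # traffic is routed here only while this passes
          httpGet:
            path: /
            port: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:                  # the workload the HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # assumed target; tune for your workload
```

If a container crashes or its liveness probe fails, Kubernetes restarts it; if average CPU across the pods climbs past the target, the HPA adds replicas and scales them back down when load subsides.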

3. Declarative management of applications

The traditional imperative method of managing software applications can be tricky, involving a number of steps. To solve the challenges of imperative management, Kubernetes allows developers to declaratively manage applications, containers, clusters, and other objects by using a YAML or JSON text file to declare the object’s target state. Kubernetes ensures that each object always matches this desired state, not just at a single point in time but continuously. In addition to ensuring consistency and predictability, this declarative model allows for automation of rollouts and rollbacks. Since target states are defined using text files that both machines and humans can understand, it’s also easier for administrators to document desired system states.
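As a minimal sketch of the declarative model (the name, replica count, and sample image are placeholders chosen for illustration), the entire desired state of a small service can live in one file:

```yaml
# Illustrative only: "I want three replicas of this image running."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-api                  # placeholder name
spec:
  replicas: 3                      # the target state Kubernetes keeps reconciling toward
  selector:
    matchLabels:
      app: hello-api
  template:
    metadata:
      labels:
        app: hello-api
    spec:
      containers:
      - name: hello-api
        image: gcr.io/google-samples/hello-app:1.0   # placeholder sample image
        ports:
        - containerPort: 8080
```

Applying the file with `kubectl apply -f hello-api.yaml` records the desired state; if a pod dies or someone deletes one by hand, the controller recreates it, and changing the image tag and re-applying the file produces an automated rollout that can be reversed with `kubectl rollout undo`.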

4. System optimization & efficiency

In addition to automating container management and orchestration, Kubernetes’ built-in load balancing, scheduling, and rescheduling features optimize the availability of containers and the utilization of clusters. Kubernetes organizes containers into sets of one or more, called pods. Each pod gets its own IP address, a set of pods can share a single DNS name, and Kubernetes load-balances traffic across them. Because service discovery uses DNS, developers don’t need to modify their applications to use an unfamiliar service discovery mechanism.
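For example (again a hedged sketch, with placeholder names and ports), a Service gives a labeled set of pods a single stable DNS name and spreads traffic across them:

```yaml
# Illustrative only: expose the pods labeled app=web behind one DNS name.
apiVersion: v1
kind: Service
metadata:
  name: web               # reachable in-cluster as "web" (or web.<namespace>.svc.cluster.local)
spec:
  selector:
    app: web              # matches pods carrying this label, e.g. the Deployment sketched under reason 2
  ports:
  - port: 80              # port clients connect to
    targetPort: 80        # port the containers listen on
```

An application in the same namespace can simply call http://web; Kubernetes DNS and kube-proxy handle discovery and load balancing, so the application code needs no special client library.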

5. Vast and growing open source community

Kubernetes is open source, and its vast and rapidly growing user and developer community ensures a rich ecosystem of support tools and add-ons. It ranks second on GitHub for authors and issues, behind only Linux, and the Cloud Native Computing Foundation manages more than 30 open source projects related to Kubernetes. Users who need to extend Kubernetes to fit a particular use case can search hundreds of existing add-ons or create their own via the Custom Resource Definition (CRD) pattern. The extensive user community is also helpful for technical support or advice on how best to implement a particular feature.
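To give a flavor of the CRD pattern mentioned above, here is a hedged, minimal sketch; the group, kind, and field are invented purely for illustration:

```yaml
# Illustrative only: defines a new "Backup" resource type that a custom
# controller (not shown) could watch and act on.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string       # e.g. a cron expression, purely illustrative
```

Once applied, `kubectl get backups` works like any built-in resource type; the actual behavior comes from a controller you write or install.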

Like all technologies, Kubernetes may be overkill for your application; it may be that all of your needs can be met by a single 2U server in your utility closet. There’s nothing wrong with that.

However, if you face challenges with scalability, repeatability, or reliability, or if you just want access to cloud-specific features, Kubernetes is a great way to achieve those goals: it’s easy to pick up and understand, and a joy to master.

