These days, many tech enthusiasts are engaged in exciting discussions about Kubernetes and its surrounding tools. But does Kubernetes truly deserve all this attention? Is it worth considering for your next project?
Let’s take a high-level look at this technology to answer these questions. We’ll begin by examining the official definition of Kubernetes as stated on its website.
What is Kubernetes actually?
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
Cloud Native Computing Foundation (CNCF)
So, first of all, it’s about containers: that’s the starting point.
Modern software architecture assumes that your application is divided into interconnected logical parts (microservices), each of which can be encapsulated in a container for improved operability and isolation. However, as an application grows, the number of containers increases, necessitating a specific toolset to orchestrate this complex ecosystem. You need a way to deploy all the containers to various environments, monitor their liveness, and ensure high application responsiveness even during peak load periods through scaling up and down. Ideally, all of this should work automatically, without requiring constant attention from the operating team (Ops) and the developers (Devs).
This is where Kubernetes comes in. It is a collection of various open-source components, predominantly supported by the Cloud Native Computing Foundation, working together towards a common goal: enabling your application in the cluster to operate reliably.
Fault tolerance of Kubernetes
Kubernetes is built upon promise theory, ensuring the Operator (the person responsible for application delivery, DevOps) that the defined configuration of the application will continue to work despite failures of containers or even when parts of the cluster (virtual machines) experience crashes.
In essence, the entire system must be fault-tolerant, able to absorb errors and failures in any of its parts. If something goes wrong, the system must strive to restore the application to a working state based on the defined configuration. For example, when a container stops responding to health checks, Kubernetes automatically restarts that container. Similarly, if a virtual machine in the cluster fails due to hardware or networking issues, a power supply outage, or operating system errors, Kubernetes takes appropriate action: it relocates containers from the failed machine to another cluster node and then recycles or restarts the problematic machine to restore its functionality.
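To make this concrete, here is a minimal sketch of a Pod manifest with a liveness probe. The name, image, and health endpoint are hypothetical placeholders; the point is that when the probe fails repeatedly, Kubernetes restarts the container on its own:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app                # hypothetical application name
spec:
  containers:
    - name: web
      image: registry.example.com/web-app:1.2.3  # hypothetical image
      ports:
        - containerPort: 8080
      livenessProbe:           # Kubernetes restarts the container when this fails
        httpGet:
          path: /healthz       # assumed health-check endpoint of the app
          port: 8080
        initialDelaySeconds: 10   # give the app time to start before checking
        periodSeconds: 15         # probe every 15 seconds
        failureThreshold: 3       # restart after 3 consecutive failures
```

The same probe mechanism also works with TCP sockets or arbitrary commands executed inside the container, so almost any application can expose a health signal this way.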
However, there is one crucial task remaining: properly defining the application’s configuration in the appropriate format for Kubernetes, so that Kubernetes can effectively manage the infrastructure. This configuration is transmitted to Kubernetes’ control plane, and the rest is handled by Kubernetes itself.
This setup offers another advantage: the ability to share the composed infrastructure/application configuration via Git, making it accessible to all team members at any time. This facilitates better collaboration on the infrastructure’s state. Instead of manually adjusting the infrastructure, the continuous delivery system in each environment fetches the infrastructure’s configuration from Git and applies any changes. As a result, the actual state of the infrastructure is always documented with the code, centrally managed through Git, and transparent to the entire team. These valuable approaches are commonly referred to as GitOps and Infrastructure as Code (IaC).
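As a sketch of what GitOps can look like in practice, here is a hypothetical Application manifest for Argo CD, one popular GitOps tool (the repository URL, paths, and namespaces below are placeholders). It tells the cluster to continuously sync its state from a Git repository:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infrastructure.git  # placeholder repo
    targetRevision: main                 # Git branch to track
    path: environments/production        # folder with the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual changes back to the Git state
```

With `selfHeal` enabled, Git truly becomes the single source of truth: even a manual `kubectl` change in the cluster is automatically reverted to what the repository declares.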
New mindset with Kubernetes
In the past, each application was tended to individually, like a delicate plant in a greenhouse. However, with the rise of cloud services and containerization, the focus has shifted towards managing a large group of similar applications, akin to herding sheep in a field. This approach enables quick replacement and scaling, but it requires a different skill set and tools to effectively manage the herd.
“Herding sheep instead of nurturing individual plants”
The popularity of Kubernetes can be attributed, in part, to the availability of these new and useful tools for managing your application ecosystem and providing observability into its processes. With Kubernetes, you define the desired state of container groups called pods, establish procedures to check the health of each container, and provide these details to Kubernetes. From there, Kubernetes takes on the role of a shepherd’s dog, tending to your application ecosystem. It downloads container images from an image registry, deploys containers to cluster nodes with sufficient resources, facilitates service discovery for seamless communication, and monitors container health. Kubernetes tracks the load on containers and virtual machines in the cluster, automatically scales the number of containers horizontally based on their load, and can even extend the cluster if there are insufficient resources available for your containers. Isn’t that cool?
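The desired state and horizontal scaling described above can be sketched with a Deployment plus a HorizontalPodAutoscaler. All names, image references, and thresholds here are hypothetical assumptions, not a recipe:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # desired baseline number of pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: registry.example.com/web-app:1.2.3  # hypothetical image
          resources:
            requests:
              cpu: 250m        # CPU request the autoscaler measures against
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:              # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Kubernetes then keeps the pod count between 3 and 10, adding replicas under load and removing them again when traffic subsides.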
Open source, in the cloud or on-premise
Apart from Kubernetes, there are several valuable container orchestration frameworks available. Almost every major cloud provider has attempted to create and popularize their own container orchestration projects. However, Google’s approach stood out as the most theoretically well-grounded and robust among the existing tools. After internally testing and deploying the initial releases of Kubernetes, Google sought to attract prominent players to validate Kubernetes in real-life scenarios.
One notable success story was the hosting and management of the immensely popular online game “Pokémon Go” using Kubernetes. The game withstood sudden surges of players, showcasing Kubernetes’ undeniable success. Subsequently, the next step was to propel Kubernetes into the open-source community to accelerate its development and adoption.
Within a few years, Kubernetes gained a stellar reputation, prompting all major cloud providers to acknowledge its significance and offer Kubernetes as a managed service, as Kubernetes had become the de facto global standard for container orchestration.
Kubernetes is an open-source project that can be utilized as a managed service from any major cloud provider, or it can be deployed in an on-premise cluster. This affords you the freedom of choice in how and where you leverage Kubernetes for your applications.
Kubernetes - Do we really need it?
We’ve just discovered the main features of Kubernetes, and it looks really powerful, providing many advanced tools out of the box. However, it’s important to note that it’s not a panacea for all your use cases. Here are some cases where you may not need Kubernetes for your applications:
- Firstly, Kubernetes is primarily about container orchestration. If your application is not containerized, then Kubernetes doesn’t suit your needs for deployments. Migrating a monolithic application to a microservices architecture and encapsulating microservices into containers can be a labor-intensive task for developers and DevOps.
- Secondly, using Kubernetes effectively requires a high level of expertise. In addition to understanding Kubernetes’ own concepts, developers need to master Docker containerization and the main principles of designing cloud-friendly applications (you should start with the twelve-factor app concepts). In some cases, it may be more sensible to try simpler orchestration solutions like Docker Swarm or Mesos.
- Thirdly, Kubernetes is not designed to run on a single small server. Even a minimal cluster, whether installed on-premise or consumed as a managed service, requires a significant budget compared to plain virtual hosting. Therefore, for smaller projects or applications, the benefits may not outweigh the costs, and adopting Kubernetes may lead to increased expenditure in terms of both time and resources. Unless you have a sufficiently large microservices environment, Kubernetes is unlikely to provide significant added value.
One of the main goals when using Kubernetes is high availability. Redundancy is part of Kubernetes’ design: even the minimal recommended configuration is built for high availability. So, if your SLA doesn’t demand particularly high uptime, Kubernetes may simply be overkill.
Conclusion about Kubernetes
Based on the arguments presented, it is recommended that your team carefully analyze and evaluate the feasibility and cost-effectiveness of using Kubernetes in your next project. Consider factors such as whether your application is containerized, the level of expertise required, the size of the project, and the need for high availability.
At freshcells, we began adopting Kubernetes several years ago when we transitioned our products to a microservices architecture. This journey was filled with challenges but allowed us to gain the expertise we have today. We can confidently say that the state of our customers’ environments managed with Kubernetes has significantly improved, becoming more advanced and professional.
Ultimately, the decision to adopt Kubernetes should be based on a thorough assessment of your project’s requirements and resources, considering both the potential benefits and the associated challenges.