Kubernetes And Its Use Cases

Akshay Gupta
7 min read · Mar 10, 2021

In this report, we will see what Kubernetes is, some of its use cases, and how its popularity is changing the development industry.

We will cover how Kubernetes works; what containers are, their uses, and why we need them; the key areas of the Kubernetes architecture; what orchestration is; how Kubernetes is used in the industry; and some of its major advantages.

Kubernetes has emerged as one of the most exciting technologies in the world of DevOps and has gained a lot of attention from DevOps professionals. Kubernetes, commonly known as ‘k8s’, is an open-source, vendor-agnostic cluster and container management tool: a portable, extensible platform for managing containerized workloads and services. This container orchestration system automates the deployment, scaling, and management of applications.

Kubernetes provides a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. This significantly lowers cloud computing costs and simplifies operations and architecture. So before diving deep into Kubernetes, let’s understand what containers are.

Containers

A container can be thought of as a mini virtual machine. It is small because it does not carry device drivers and all the other components of a regular virtual machine. Docker is by far the most popular container platform, and it was built around Linux; Microsoft has also added container support to Windows because containers have become so popular.

Suppose, for example, we need to install a web server. We could install it directly on the physical server’s operating system. Most people use virtual machines now, so we would probably install it there instead. But setting up a virtual machine requires administrative effort and cost, and the machine will be underutilized if it is dedicated to just one task, which is how people typically use VMs. A container gives us a much lighter-weight way to run that same web server.
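As a concrete illustration, here is a minimal sketch of how that web server could be described as a single-container Pod in Kubernetes; the name, label, and image tag are illustrative:

```yaml
# Minimal sketch: one container (an nginx web server) wrapped in a Pod.
apiVersion: v1
kind: Pod
metadata:
  name: web            # illustrative name
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
```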

Need for Orchestration

Orchestration is the effective management and execution of multiple workloads cohabiting an IT platform. In Kubernetes’ case, certain workloads may arrive on the platform having already been subdivided into microservices. Now, there is an inherent problem with containers, just as there is with virtual machines: we need to keep track of them. When public cloud companies bill you for CPU time or storage, you need to make sure you do not have orphaned machines spinning out there doing nothing. There is also the need to automatically spin up more instances when a workload needs more memory, CPU, or storage, and to shut them down when the load lightens. Orchestration tackles such problems. This is where Kubernetes comes in.

So, with the rise of containers comes the challenge of managing hundreds or even thousands of containers running complex enterprise applications. This is a significant task that requires an orchestration platform. Kubernetes has become the go-to orchestration platform since it was launched in 2014. Originally developed at Google, Kubernetes was created to work in any environment, whether on-premises or in the public cloud.

Google built Kubernetes on roughly a decade of experience running containers at massive scale on its internal systems, and released it as open source in 2014; that pedigree is one of its key selling points. Kubernetes is a cluster and container management tool. It lets you deploy containers to clusters, meaning a network of virtual or physical machines. It works with various containerization technologies, not just Docker.

The basic idea of Kubernetes is to abstract machines, storage, and networks further away from their physical implementation, so it becomes a single interface for deploying containers to all kinds of clouds, virtual machines, and physical machines.

Kubernetes Architecture –

Kubernetes has a primary/replica architecture. The architecture consists of many components, which can be divided into those that manage an individual node and those that form the control plane. It is essential to understand this architecture if you wish to learn Kubernetes.

In the Kubernetes architecture there are one or more master (control plane) nodes and multiple worker nodes; running more than one master provides high availability. The master communicates with the worker nodes through the kube-apiserver, which talks to the kubelet on each worker. A worker node runs one or more pods, and a pod can contain one or more containers. Containers are created from images, which can be pulled from a registry or built and supplied by the user.
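To make the pod/container relationship concrete, here is an illustrative sketch of a Pod with two containers sharing the same network namespace; the sidecar container and its command are hypothetical and only show the structure:

```yaml
# Illustrative sketch: one Pod, two containers (main app plus a sidecar).
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar     # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25
    - name: log-forwarder    # hypothetical sidecar; real ones ship logs, proxy traffic, etc.
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]   # placeholder command to keep it running
```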

In a self-hosted Kubernetes setup, both the master node and the worker nodes are managed by the user. In a managed Kubernetes service, a third-party provider manages the master node while the user manages the worker nodes; managed offerings also come with dedicated support and hosting with pre-configured environments. Managed solutions take care of much of this configuration for you.

Benefits of Using Kubernetes –

So why would you use Kubernetes on, for example, Amazon EC2, when Amazon has its own orchestration tooling (CloudFormation)? Because with Kubernetes you can use the same orchestration tool and command-line interface for all your different systems, whereas CloudFormation only works within AWS. With Kubernetes you can push containers to the Amazon cloud, to your in-house virtual and physical machines, and to other clouds.
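For example, a single kubeconfig file can hold contexts for several clusters, and the same kubectl CLI switches between them. The sketch below is heavily stripped down (credentials and certificate details omitted), and the cluster names and server URLs are placeholders:

```yaml
# Stripped-down kubeconfig sketch: one cloud cluster, one on-premises cluster.
apiVersion: v1
kind: Config
clusters:
  - name: aws-cluster
    cluster:
      server: https://aws.example.com:6443       # placeholder endpoint
  - name: onprem-cluster
    cluster:
      server: https://onprem.example.com:6443    # placeholder endpoint
contexts:
  - name: aws
    context:
      cluster: aws-cluster
      user: admin
  - name: onprem
    context:
      cluster: onprem-cluster
      user: admin
users:
  - name: admin
    user: {}          # credentials omitted in this sketch
current-context: aws
```

Switching targets is then just a matter of something like kubectl config use-context onprem; the manifests you apply stay the same.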

Reliability is one of the major benefits of Kubernetes; Google has over 10 years of experience when it comes to infrastructure operations with Borg, their internal container orchestration solution, and they’ve built Kubernetes based on this experience. Kubernetes can be used to prevent failure from impacting the availability or performance of your application, and that’s a great benefit.

Automated rollouts and rollbacks: Want to roll out a new version of your app or update its configuration? Kubernetes will handle it for you without downtime, monitoring the containers’ health during the roll-out. In case of failure, it automatically rolls back.
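A rolling update is declared on the Deployment itself. The sketch below is illustrative (names and image tags are placeholders): changing the image in spec.template triggers a gradual roll-out that respects the strategy settings.

```yaml
# Sketch: a Deployment with an explicit rolling-update strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down at a time
      maxSurge: 1         # at most one extra Pod during the roll-out
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # bump this tag to start a roll-out
```

For instance, kubectl set image deployment/web nginx=nginx:1.26 starts a roll-out, and kubectl rollout undo deployment/web reverts to the previous revision.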

Scalability is handled by Kubernetes on different levels. You can add cluster capacity by adding more worker nodes, which can even be automated in many public clouds with auto-scaling functionality based on CPU and memory triggers.
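At the pod level, scaling can be declared with a HorizontalPodAutoscaler. The sketch below targets the hypothetical web Deployment from the earlier example and scales it between 2 and 10 replicas based on average CPU utilization:

```yaml
# Sketch: CPU-based pod autoscaling for the "web" Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```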

Kubernetes also comes with built-in load balancing to distribute your load across multiple pods, enabling you to (re)balance resources quickly in order to respond to outages, peak or incidental traffic, and batch processing. It is also possible to use external load balancers.
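A Service provides that load balancing: it spreads traffic across every Pod matching its selector. In the illustrative sketch below, type LoadBalancer asks the cloud provider for an external load balancer, while ClusterIP would keep the Service internal:

```yaml
# Sketch: a Service load-balancing across all Pods labelled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 80    # port the containers listen on
```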

Batch execution: in addition to long-running services, Kubernetes can manage batch and CI workloads, replacing failed containers automatically.
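A batch workload is expressed as a Job; the sketch below runs a one-off task to completion and retries failed Pods a limited number of times (the image and command are placeholders):

```yaml
# Sketch: a Job that runs a one-off task to completion.
apiVersion: batch/v1
kind: Job
metadata:
  name: report-generator    # hypothetical name
spec:
  completions: 1
  backoffLimit: 3           # retry failed Pods up to three times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: busybox:1.36
          command: ["sh", "-c", "echo generating report && sleep 5"]
```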

Health checks and self-healing: Kubernetes guards your containerized application against failures by constantly checking the health of nodes and containers. Kubernetes also offers self-healing and auto-replacement: if a container or pod crashes due to an error, Kubernetes has got you covered.
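Health checks are declared as probes on each container. In the illustrative sketch below, a failing liveness probe causes the kubelet to restart the container, while a failing readiness probe removes the Pod from Service endpoints until it recovers (the path and port are placeholders):

```yaml
# Sketch: liveness and readiness probes on a web container.
apiVersion: v1
kind: Pod
metadata:
  name: web-probed
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5   # give the server time to start
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
        periodSeconds: 5
```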

Kubernetes Use Cases -

Since its inception, Kubernetes has enjoyed great recognition and has had a lot of impact. As companies look to modernize how they develop applications, containers and open source are becoming very important, and many realize that Kubernetes is the first step towards creating modern, scalable applications.

Kubernetes is a system that can be used to deploy applications efficiently, so it can help companies save money by spending less labor on managing their IT infrastructure. It effectively automates container management: because containers package code into smaller, easier-to-transport parts, and larger applications are built from many such containers, Kubernetes organizes multiple containers into units. Containerized applications can then be scaled automatically, so fewer resources are needed to manage large numbers of containers. One such use case is described below.

BlaBlaCar

Location: Paris, France

Industry: Ridesharing Company

Challenge

The world’s largest long-distance carpooling community, BlaBlaCar, connects 40 million members across 22 countries. The company has been experiencing exponential growth since 2012 and needed its infrastructure to keep up. “When you’re thinking about doubling the number of servers, you start thinking, ‘What should I do to be more efficient?’” says Simon Lallemand, Infrastructure Engineer at BlaBlaCar. “The answer is not to hire more and more people just to deal with the servers and installation.” The team knew they had to scale the platform, but wanted to stay on their own bare metal servers.

Solution

Opting not to shift to cloud virtualization or use a private cloud on their own servers, the BlaBlaCar team became early adopters of containerization, using the CoreOS runtime rkt, initially deployed with the fleet cluster manager. Later, the company switched to Kubernetes orchestration, and now also uses Prometheus for monitoring.

Impact

“Before using containers, it would take sometimes a day, sometimes two, just to create a new service,” says Lallemand. “With all the tooling that we made around the containers, copying a new service now is a matter of minutes. It’s really a huge gain. We are better at capacity planning in our data center because we have fewer constraints due to this abstraction between the services and the hardware we run on. For the developers, it also means they can focus only on the features that they’re developing, and not on the infrastructure.”
