The word Kubernetes comes from Greek and means helmsman or pilot. In most sources you will see it written as K8s, because there are eight letters between the K and the s.
Kubernetes is a container cluster tool that lets you manage your existing containerized applications and perform operations such as automatically deploying them and scaling their number up and down. It was developed by Google in the Go language and is now supported by the Cloud Native Computing Foundation.
Kubernetes is now the market leader and the industry-standard tool for managing containers and distributed applications. It is also open source.
Kubernetes makes it easy to deploy and run applications in a microservices architecture. It does this by creating an abstraction layer on top of a group of hosts so that development teams can deploy their applications. As cloud technologies became widespread, containers emerged as a concept and began to grow in importance.
By using this technology, we can manage our applications more easily through a microservice architecture, find solutions to our problems more easily, and, more importantly, move our applications with very little effort because it provides a portable environment. In practice, Kubernetes delivers these features by managing Docker or similar tools.
Kubernetes supports several container runtimes, and Docker is just one of them. The two technologies work well together: Docker containers are an effective way to distribute packaged applications, and Kubernetes is designed to coordinate and schedule those applications.
Kubernetes keeps track of your container applications deployed in the cloud. It restarts containers, shuts them down when they are not in use, and automatically provisions resources such as memory, storage, and CPU when they are needed.
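The resource provisioning described above is declared in the pod specification. A minimal sketch of how requests and limits look in a manifest (the name and image here are illustrative, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo      # hypothetical example name
spec:
  restartPolicy: Always    # Kubernetes restarts the container if it exits
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:          # minimum resources the scheduler reserves for the container
          memory: "128Mi"
          cpu: "250m"
        limits:            # maximums the container is allowed to consume
          memory: "256Mi"
          cpu: "500m"
```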
How does Kubernetes work?
A container orchestrator is essentially a manager responsible for operating a fleet of container applications. If a container needs to be restarted or given more resources, the orchestrator takes care of it.
This is a pretty broad summary of how most container orchestrators work. Let's take a more detailed look at the specific components of Kubernetes that do this.
The master node is the main entry point for administrators and users to manage the various nodes. Operations are performed through HTTP calls or by connecting to the machine and running command-line scripts.
A Kubernetes node manages and runs pods; it is the machine (virtual or physical) that does the assigned work. Just as a pod gathers individual containers that work together, a node gathers the pods that work together.
The Kubernetes pod is a group of containers and is the smallest unit Kubernetes manages. A pod has a single IP address that applies to every container in it. The containers in a pod share the same resources, such as memory and storage. This allows the individual Linux containers in a pod to be treated collectively as a single application, as if the container processes were all running together on the same host, the way more traditional workloads do. It is quite common to have a pod with only one container when the application or service is a single process. But when things get more complicated and multiple processes need to work together using the same shared data volumes, a pod with multiple containers is used.
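A multi-container pod like the one described above can be sketched as follows. This is a minimal illustration with hypothetical names and images: two containers share the pod's network identity and a common volume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical example name
spec:
  volumes:
    - name: shared-data    # shared volume both containers mount
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-sync   # sidecar writing into the shared volume
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date > /data/index.html; sleep 10; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```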
A Kubernetes Deployment defines the scale at which you want your application to run, letting you set the details of how containers should be replicated across your Kubernetes nodes. Deployments define the number of identical pod replicas you want to run and the preferred update strategy to use when updating pods. Kubernetes monitors pod health and removes or adds pods to bring your application deployments to the state you want.
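The replica count and update strategy mentioned above map directly onto fields in a Deployment manifest. A minimal sketch, with illustrative names and images:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # hypothetical example name
spec:
  replicas: 3                # number of identical pod replicas to keep running
  strategy:
    type: RollingUpdate      # preferred update strategy when pods are updated
    rollingUpdate:
      maxUnavailable: 1      # at most one replica down during an update
      maxSurge: 1            # at most one extra replica created during an update
  selector:
    matchLabels:
      app: web
  template:                  # pod template replicated across the nodes
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

If a pod in this Deployment becomes unhealthy, Kubernetes replaces it to restore the declared replica count of three.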
A Service is an abstraction over containers and is the only application interface that consumers actually interact with. As pods are replaced, their internal names and IPs may change. A Service exposes a single stable machine name or IP address that maps onto those unstable underlying pods, so everything appears unchanged to the external network.
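In a manifest, a Service achieves this stability by selecting pods by label rather than by name or IP. A minimal sketch, with a hypothetical service name and label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service   # hypothetical example name; the stable name consumers use
spec:
  selector:
    app: web          # routes to any pod carrying this label, whatever its current IP
  ports:
    - port: 80        # port the Service exposes
      targetPort: 80  # port the pod's container listens on
```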
Kubelet is the agent that runs on every node in the cluster. This agent ensures that the containers inside a pod are running. Kubelet cannot manage containers that were not created by Kubernetes.
Kube-Proxy provides and manages network communication on each node. It also lets you create and manage the sets of rules that govern the nodes' network communication.
When a new pod is requested through the API Server, the Scheduler is the unit that decides which node the pod will run on. It triggers the Kubelet, and the corresponding pod and its containers are created.
The Controller Manager is the unit that manages the controllers on the master node, such as the Node Controller, Replication Controller, and so on. Each controller is logically a separate process, but to reduce complexity they are all compiled into a single binary and run as a single process.
etcd is the component where all cluster data is stored. It is designed as a distributed system with high availability and consistency, and data is stored in a key-value database structure.
In fact, everything goes through the Kubernetes API Server: all requests from the master or worker nodes are handled here. The API Server is responsible for handling all REST requests that reach the master, and the objects it manages are exchanged in JSON format.