How Kubernetes works


Kubernetes components

Kubelet -> the master's spy on each node.

A kubelet tracks the state of the pods on its node to ensure that all of their containers are running. It sends a heartbeat message to the control plane every few seconds. If the control plane's node controller stops receiving those messages, the node is marked as unhealthy.
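The heartbeat-and-timeout idea can be sketched in a few lines. This is a toy model, not real Kubernetes code; the class names, method names, and the 40-second timeout are all illustrative assumptions.

```python
import time

HEARTBEAT_TIMEOUT = 40  # seconds; an illustrative value, not the real default

class ControlPlane:
    """Toy control plane that remembers the last heartbeat seen from each node."""
    def __init__(self):
        self.last_heartbeat = {}

    def receive_heartbeat(self, node_name):
        # Called whenever a kubelet phones home.
        self.last_heartbeat[node_name] = time.monotonic()

    def node_status(self, node_name):
        # A node with no recent heartbeat is marked unhealthy.
        last = self.last_heartbeat.get(node_name)
        if last is None or time.monotonic() - last > HEARTBEAT_TIMEOUT:
            return "NotReady"
        return "Ready"

cp = ControlPlane()
cp.receive_heartbeat("node-1")   # kubelet on node-1 reports in
print(cp.node_status("node-1"))  # Ready
print(cp.node_status("node-2"))  # NotReady (never sent a heartbeat)
```

The point is simply that health is inferred from the *absence* of messages: the kubelet pushes, and the control plane times out nodes that go quiet.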

Scheduler -> like Human Resources, assigning pods to nodes.

The scheduler is responsible for assigning work to the various nodes. It keeps watch over the resource capacity and ensures that a worker node’s performance is within an appropriate threshold.

The scheduler is responsible for identifying the node that pods will run on. The details of how this is determined vary based on the characteristics of the pods and the existing state of the available nodes. The strategy for how the scheduler approaches this decision making can be tuned all the way up to the ability to write custom schedulers. The scheduler interacts with the API server in performing its work.
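A common way to picture that decision is a filter-then-score pass: discard nodes that can't fit the pod, then rank the rest. The sketch below is a hypothetical toy scheduler; the node capacities and the "most free CPU wins" scoring rule are made up for illustration, and real scheduling weighs many more factors.

```python
# Toy filter-and-score scheduler (illustrative only).
nodes = {
    "node-a": {"cpu_free": 2.0, "mem_free": 4.0},
    "node-b": {"cpu_free": 0.5, "mem_free": 8.0},
    "node-c": {"cpu_free": 3.0, "mem_free": 2.0},
}

def schedule(pod_request, nodes):
    # Filter: keep only nodes with enough free CPU and memory for the pod.
    feasible = {
        name: res for name, res in nodes.items()
        if res["cpu_free"] >= pod_request["cpu"]
        and res["mem_free"] >= pod_request["mem"]
    }
    if not feasible:
        return None  # no node fits; the pod would stay Pending
    # Score: prefer the node with the most free CPU (one of many possible strategies).
    return max(feasible, key=lambda name: feasible[name]["cpu_free"])

print(schedule({"cpu": 1.0, "mem": 1.0}, nodes))  # node-c (fits, most free CPU)
```

Swapping out the scoring function is exactly the kind of tuning the paragraph above refers to, up to replacing the whole thing with a custom scheduler.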

etcd -> like a database.

The etcd database is accessible only through the API server. Any component of the cluster which needs to read or write to etcd does it through the API server.
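That "only through the API server" rule is a facade pattern: one component holds the reference to the store, and everyone else goes through it. A minimal sketch, with invented class and method names standing in for etcd and the API server:

```python
class Etcd:
    """Toy key-value store standing in for etcd."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

class APIServer:
    """Only the API server holds a reference to the store; every other
    component reads and writes cluster state through these methods."""
    def __init__(self, store):
        self._store = store
    def get_object(self, key):
        return self._store.get(key)
    def apply_object(self, key, obj):
        # The real API server would also authenticate, authorize,
        # and validate the object before persisting it.
        self._store.put(key, obj)

api = APIServer(Etcd())
api.apply_object("pods/web-1", {"node": "node-a", "phase": "Running"})
print(api.get_object("pods/web-1")["phase"])  # Running
```

Funneling every read and write through one place is what lets the API server enforce validation and access control on all cluster state.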

API Server -> like a phone for communication.


The API server is the central communication mechanism for the cluster. It brokers the interaction between the control plane, the worker nodes, and the administrators as they apply configuration changes via the Kubernetes command-line tools (like kubectl) or other UIs.

Kube proxy -> like a security guard.

The kube-proxy is responsible for enforcing network rules on the node and allowing traffic to and from the node.

It routes Service traffic coming into a node, forwarding each request to one of the correct backing containers.
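Conceptually, a Service name maps to a set of pod endpoints, and incoming requests get spread across them. The sketch below uses simple round-robin as one possible strategy; the Service name and IPs are made up, and the real kube-proxy implements this with iptables or IPVS rules rather than application code.

```python
import itertools

# Hypothetical Service -> backing pod IPs mapping (illustrative values).
endpoints = {"web-service": ["10.0.0.4", "10.0.0.7", "10.0.0.9"]}
# One round-robin cursor per Service.
_cyclers = {svc: itertools.cycle(pods) for svc, pods in endpoints.items()}

def route(service):
    """Pick the next backend pod IP for a request to `service`."""
    return next(_cyclers[service])

print([route("web-service") for _ in range(4)])
# ['10.0.0.4', '10.0.0.7', '10.0.0.9', '10.0.0.4']
```

The key idea is the indirection: clients talk to the stable Service, and the proxy layer decides which short-lived pod actually handles each request.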

Controller -> the boss; monitors the cluster and takes action, e.g. creating/removing pods.

The controller component is responsible for keeping the cluster in the desired state as configured, and moving it towards that state when it drifts away from it.

More accurately, the controller manager oversees various controllers which respond to events (e.g., if a node goes down).

In Kubernetes, controllers are control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state, then reports the current state back to the cluster's API server. A controller does not act on the cluster directly: it tells the API server to create or remove pods, controlling the cluster via the API server.
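The control-loop idea above can be sketched as a tiny reconcile function. This is a toy model under invented names, not the real controller-manager code: a stand-in API server tracks pods, and the loop converges the current pod count toward a desired replica count.

```python
class FakeAPIServer:
    """Stand-in for the API server; the controller only talks to this."""
    def __init__(self):
        self.pods = []
    def create_pod(self, name):
        self.pods.append(name)
    def delete_pod(self):
        self.pods.pop()

def reconcile(api, desired_replicas):
    """One pass of the control loop: move current state toward desired state."""
    while len(api.pods) < desired_replicas:
        api.create_pod(f"web-{len(api.pods)}")   # drift below desired: scale up
    while len(api.pods) > desired_replicas:
        api.delete_pod()                          # drift above desired: scale down

api = FakeAPIServer()
reconcile(api, 3)   # scale up from zero
print(api.pods)     # ['web-0', 'web-1', 'web-2']
reconcile(api, 1)   # desired state changed: scale down
print(api.pods)     # ['web-0']
```

Note that `reconcile` never touches pods itself; it only issues create/delete requests to the API server, which mirrors how real controllers operate.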