Kubernetes

[box]Kubernetes—also known as ‘k8s’ or ‘kube’—is a container orchestration platform for scheduling and automating the deployment, management, and scaling of containerized applications.[/box]

As containers proliferated—today, an organization might have hundreds or thousands of them—operations teams needed to schedule and automate container deployment, networking, scalability, and availability. And so, the container orchestration market was born.

Kubernetes schedules and automates these and other container-related tasks:

  • Deployment: Deploy a specified number of containers to a specified host and keep them running in a desired state (see the manifest sketch after this list).
  • Rollouts: A rollout is a change to a deployment. Kubernetes lets you initiate, pause, resume, or roll back rollouts.
  • Service discovery: Kubernetes can automatically expose a container to the internet or to other containers using a DNS name or IP address.
  • Storage provisioning: Set Kubernetes to mount persistent local or cloud storage for your containers as needed.
  • Load balancing and scaling: When traffic to a container spikes, Kubernetes can employ load balancing and scaling to distribute it across the network to maintain stability.
  • Self-healing for high availability: When a container fails, Kubernetes can restart or replace it automatically; it can also take down containers that don’t meet your health-check requirements.
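
To make “desired state” concrete, here is a minimal sketch of a Deployment manifest; the name my-app and the image my-app:1.0 are placeholders. Given this file, Kubernetes keeps three replicas of the container running at all times:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3                  # the desired state: three copies, always
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-app:1.0    # placeholder image
              ports:
                - containerPort: 8080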

[box]Kubernetes vs. Docker[/box]

If you’ve read this far, you already understand that while Kubernetes is an alternative to Docker Swarm, it is not (contrary to persistent popular misconception) an alternative or competitor to Docker itself.

In fact, if you’ve enthusiastically adopted Docker and are creating large-scale Docker-based container deployments, Kubernetes orchestration is a logical next step for managing these workloads.

[box]Kubernetes architecture[/box]

 

Cluster

A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster. At a minimum, a cluster contains a control plane and one or more compute machines, or nodes. The control plane is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. Nodes actually run the applications and workloads.
The cluster is the heart of Kubernetes’ key advantage: the ability to schedule and run containers across a group of machines, be they physical or virtual, on premises or in the cloud. Kubernetes containers aren’t tied to individual machines. Rather, they’re abstracted across the cluster.

A Kubernetes cluster has a desired state, which defines which applications or other workloads should be running, along with which images they use, which resources should be made available for them, and other such configuration details.

A desired state is defined by configuration files made up of manifests, which are JSON or YAML files that declare the type of application to run and how many replicas are required to run a healthy system.

The cluster’s desired state is defined with the Kubernetes API. This can be done from the command line (using kubectl) or programmatically, by calling the API directly to set or modify the desired state.

Kubernetes will automatically manage your cluster to match the desired state. As a simple example, suppose you deploy an application with a desired state of “3,” meaning 3 replicas of the application should be running. If 1 of those containers crashes, Kubernetes will see that only 2 replicas are running, so it will add 1 more to satisfy the desired state.
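
You can watch this reconciliation with kubectl. Assuming the hypothetical my-app Deployment sketched earlier has been saved to a file, deleting one of its Pods prompts Kubernetes to start a replacement:

    # apply the desired state, then list the running replicas
    kubectl apply -f deployment.yaml
    kubectl get pods -l app=my-app

    # simulate a crash by deleting one Pod; a replacement soon appears
    kubectl delete pod <one-of-the-pod-names>
    kubectl get pods -l app=my-app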

Control plane

Control plane components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all control plane components on the same machine, and do not run user containers on this machine.

The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
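
Because the API server is an ordinary REST endpoint, you can talk to it directly. As a quick sketch (assuming kubectl is already configured for your cluster), kubectl proxy opens an authenticated tunnel you can query with plain HTTP:

    # open an authenticated local proxy to the API server
    kubectl proxy --port=8001

    # in another shell: list the Pods in the default namespace via REST
    curl http://localhost:8001/api/v1/namespaces/default/pods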

Etcd is a consistent and highly available key-value store used as Kubernetes’ backing store for all cluster data. If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.
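
Etcd ships with a snapshot facility you can build that backup plan on. A sketch, noting that the endpoint and certificate paths below are assumptions that depend on how your cluster was installed:

    # save a snapshot of the etcd keyspace (paths are illustrative)
    ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key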

Kube-scheduler is a control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on. Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
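
Several of these factors come straight from the Pod spec. The fragment below is a sketch showing two of them: resource requests, which the scheduler uses to find a node with enough free capacity, and a node affinity rule (the disktype=ssd node label is an assumption):

    # fragment of a Pod spec that influences scheduling decisions
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: disktype
                    operator: In
                    values: ["ssd"]   # only schedule onto SSD-labeled nodes
      containers:
        - name: app
          image: my-app:1.0           # placeholder image
          resources:
            requests:
              cpu: "500m"             # reserve half a CPU core
              memory: "256Mi"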

Kube-controller-manager is a control plane component that runs controller processes. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
These controllers include:

  • Node controller: Responsible for noticing and responding when nodes go down.
  • Replication controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
  • Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
  • Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.

Cloud-controller-manager is a Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider’s API, and separates out the components that interact with that cloud platform from components that just interact with your cluster.

The cloud-controller-manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.

As with the kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale horizontally (run more than one copy) to improve performance or to help tolerate failures.

The following controllers can have cloud provider dependencies:

  • Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
  • Route controller: For setting up routes in the underlying cloud infrastructure
  • Service controller: For creating, updating and deleting cloud provider load balancers

Kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.
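
The API server is the usual source of those PodSpecs, but the kubelet can also watch a directory for static Pod manifests. As a sketch (the path is set by the kubelet’s staticPodPath option; /etc/kubernetes/manifests is typical of kubeadm installs), dropping a file like this into that directory makes the kubelet run the Pod directly, bypassing the scheduler:

    # /etc/kubernetes/manifests/hello.yaml (path depends on staticPodPath)
    apiVersion: v1
    kind: Pod
    metadata:
      name: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25    # illustrative image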

Kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster. kube-proxy uses the operating system packet filtering layer (for example, iptables) if there is one and it’s available. Otherwise, kube-proxy forwards the traffic itself.
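
When kube-proxy is in its iptables mode, those rules are visible on the node itself; for example, the KUBE-SERVICES chain in the nat table holds an entry per Service (a sketch; requires root on a node):

    # inspect the Service-routing rules kube-proxy maintains (iptables mode)
    sudo iptables -t nat -L KUBE-SERVICES -n | head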

The container runtime is the software that is responsible for running containers. Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).

Pod

In Kubernetes, pods are the smallest deployable units in a cluster; they group containers that must be treated as a single unit. Kubernetes creates pods to host application instances. A pod’s specification describes the container configuration and the resources needed to run the app, such as persistent storage, services, and so on. Pods hold one or more app containers and share resources, such as storage and networking.

What makes these containers a pod is that they all run as if they were on a single host in the pre-container world. They share a set of Linux namespaces rather than running isolated from each other, so they share an IP address and port space and can find each other over localhost or communicate through the IPC namespace. Furthermore, all containers in a pod have access to shared volumes; that is, they can mount and work on the same volumes if needed.

To provide all of this functionality, a pod is a single deployable unit: each instance of the pod (with all its containers) is always scheduled together onto the same node.
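
A sketch of a two-container pod makes the shared environment concrete: both containers mount the same emptyDir volume, and because they share a network namespace the sidecar could also reach the app over localhost. Names and images here are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar
    spec:
      volumes:
        - name: shared-logs
          emptyDir: {}               # scratch volume shared by both containers
      containers:
        - name: app
          image: my-app:1.0          # writes logs under /var/log/app
          volumeMounts:
            - name: shared-logs
              mountPath: /var/log/app
        - name: log-shipper
          image: my-log-shipper:1.0  # reads the very same files
          volumeMounts:
            - name: shared-logs
              mountPath: /logs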

Service

Kubernetes Pods are created and destroyed to match the state of your cluster. Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically.

Each Pod gets its own IP address; however, in a Deployment, the set of Pods running at one moment in time could be different from the set of Pods running that application a moment later.

This leads to a problem: if some set of Pods (call them “backends”) provides functionality to other Pods (call them “frontends”) inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?

With Kubernetes you don’t need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.

In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector.

A Service in Kubernetes is a REST object, similar to a Pod. Like all of the REST objects, you can POST a Service definition to the API server to create a new instance. The name of a Service object must be a valid DNS label name.

For example, suppose you have a set of Pods that each listen on TCP port 9376 and carry a label app=MyApp. A manifest along these lines selects them (the Service’s own port, 80 here, is illustrative):
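
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80           # the Service’s own port (illustrative)
          targetPort: 9376   # the port the Pods listen on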

This specification creates a new Service object named “my-service”, which targets TCP port 9376 on any Pod with the app=MyApp label. Kubernetes assigns this Service an IP address (sometimes called the “cluster IP”), which is used by the Service proxies. The controller for the Service selector continuously scans for Pods that match its selector, and then POSTs any updates to an Endpoints object also named “my-service”.

For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, one that is outside of your cluster.

Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.

Type values and their behaviors are:

  • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
  • NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort> (see the sketch after this list).
  • LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
  • ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
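
As a sketch, exposing the earlier my-service on every node is mostly a matter of changing its type; the explicit nodePort value below is optional (if omitted, Kubernetes picks one from the default 30000–32767 range):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: NodePort
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
          nodePort: 30080    # reachable as <NodeIP>:30080 from outside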

You can also use Ingress to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address.
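
A minimal Ingress might look like the sketch below; the hostname foo.example.com is a placeholder, and a running ingress controller is assumed:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
        - host: foo.example.com      # placeholder hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-service # route this host to the earlier Service
                    port:
                      number: 80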

We saw that a Service can match a frontend to backend pods automatically by using labels and a selector: if a new pod carries the matching label, the Service knows to send traffic to it. The way the Service keeps track of this is by adding the mapping to an endpoint. Endpoints track the IP addresses of the objects the Service sends traffic to. When a Service selector matches a pod label, that pod’s IP address is added to your endpoints, and if this is all you’re doing, you don’t really need to know much about endpoints. However, you can have Services where the endpoint is a server outside of your cluster or in a different namespace.

In short, every Service has a list of addresses it will send traffic to, and that list is managed through endpoints. Those endpoints can be updated automatically through labels and selectors, or you can configure your endpoints manually, depending on your use case.
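
As a sketch of the manual case, a Service defined without a selector gets no automatic endpoints, so you supply them yourself with an Endpoints object of the same name (the name external-db and the address 192.0.2.42 are placeholders for, say, a database outside the cluster):

    apiVersion: v1
    kind: Service
    metadata:
      name: external-db      # no selector: endpoints are managed by hand
    spec:
      ports:
        - protocol: TCP
          port: 5432
          targetPort: 5432
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: external-db      # must match the Service name
    subsets:
      - addresses:
          - ip: 192.0.2.42   # the server outside the cluster
        ports:
          - port: 5432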

 
